Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1605–1617 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 1605 End-to-End Neural Word Alignment Outperforms GIZA++ Thomas Zenkel, Joern Wuebker, John DeNero Lilt, Inc. first [email protected] Abstract Word alignment was once a core unsupervised learning task in natural language processing because of its essential role in training statistical machine translation (MT) models. Although unnecessary for training neural MT models, word alignment still plays an important role in interactive applications of neural machine translation, such as annotation transfer and lexicon injection. While statistical MT methods have been replaced by neural approaches with superior performance, the twenty-year-old GIZA++ toolkit remains a key component of state-of-the-art word alignment systems. Prior work on neural word alignment has only been able to outperform GIZA++ by using its output during training. We present the first end-to-end neural word alignment method that consistently outperforms GIZA++ on three data sets. Our approach repurposes a Transformer model trained for supervised translation to also serve as an unsupervised word alignment model in a manner that is tightly integrated and does not affect translation quality. 1 Introduction Although word alignments are no longer necessary to train machine translation (MT) systems, they still play an important role in applications of neural MT. For example, they enable injection of an external lexicon into the inference process to enforce the use of domain-specific terminology or improve the translations of low-frequency content words (Arthur et al., 2016). The most important application today for word alignments is to transfer text annotations from source to target (M¨uller, 2017; Tezcan and Vandeghinste, 2011; Joanis et al., 2013; Escartın and Arcedillo, 2015). For example, if part of a source sentence is underlined, the corresponding part of its translation should be underlined as well. HTML tags and other markup must be transferred for published documents. Although annotations could in principle be generated directly as part of the output sequence, they are instead typically transferred via word alignments because example annotations typically do not exist in MT training data. The Transformer architecture provides state-ofthe-art performance for neural machine translation (Vaswani et al., 2017). The decoder has multiple layers, each with several attention heads, which makes it difficult to interpret attention activations as word alignments. As a result, the most widely used tools to infer word alignments, namely GIZA++ (Och and Ney, 2003) and FastAlign (Dyer et al., 2013), are still based on the statistical IBM word alignment models developed nearly thirty years ago (Brown et al., 1993). No previous unsupervised neural approach has matched their performance. Recent work on alignment components that are integrated into neural translation models either underperform the IBM models or must use the output of IBM models during training to outperform them (Zenkel et al., 2019; Garg et al., 2019). This work combines key components from Zenkel et al. (2019) and Garg et al. (2019) and presents two novel extensions. Statistical alignment methods contain an explicit bias towards contiguous word alignments in which adjacent source words are aligned to adjacent target words. 
This bias is expressed in statistical systems using a hidden Markov model (HMM) (Vogel et al., 1996), as well as symmetrization heuristics such as the growdiag-final algorithm (Och and Ney, 2000b; Koehn et al., 2005). We design an auxiliary loss function that can be added to any attention-based network to encourage contiguous attention matrices. The second extension replaces heuristic symmetrization of word alignments with an activation optimization technique. After training two alignment models that translate in opposite direc1606 Figure 1: Word alignment generated by a human annotator. tions, we infer a symmetrized attention matrix that jointly optimizes the likelihood of the correct output words under both models in both languages. Ablation experiments highlight the effectiveness of this novel extension, which is reminiscent of agreement-based methods for statistical models (Liang et al., 2006; Grac¸a et al., 2008; DeNero and Macherey, 2011). End-to-end experiments show that our system is the first to consistently yield higher alignment quality than GIZA++ using a fully unsupervised neural model that does not use the output of a statistical alignment model in any way. 2 Related Work 2.1 Statistical Models Statistical alignment models directly build on the lexical translation models of Brown et al. (1993), known as the IBM models. The most popular statistical alignment tool is GIZA++ (Och and Ney, 2000b, 2003; Gao and Vogel, 2008). For optimal performance, the training pipeline of GIZA++ relies on multiple iterations of IBM Model 1, Model 3, Model 4 and the HMM alignment model (Vogel et al., 1996). Initialized with parameters from previous models, each subsequent model adds more assumptions about word alignments. Model 2 introduces non-uniform distortion, and Model 3 introduces fertility. Model 4 and the HMM alignment model introduce relative distortion, where the likelihood of the position of each alignment link is conditioned on the position of the previous alignment link. While simpler and faster tools exist such as FastAlign (Dyer et al., 2013), which is based on a reparametrization of IBM Model 2, the GIZA++ implementation of Model 4 is still used today in applications where alignment quality is important. In contrast to GIZA++, our neural approach is easy to integrate on top of an attention-based translation network, has a training pipeline with fewer steps, and leads to superior alignment quality. Moreover, our fully neural approach that shares most parameters with a neural translation model can potentially take advantage of improvements to the underlying translation model, for example from domain adaptation via fine-tuning. 2.2 Neural Models Most neural alignment approaches in the literature, such as Tamura et al. (2014) and Alkhouli et al. (2018), rely on alignments generated by statistical systems that are used as supervision for training the neural systems. These approaches tend to learn to copy the alignment errors from the supervising statistical models. Zenkel et al. (2019) use attention to extract alignments from a dedicated alignment layer of a neural model without using any output from a statistical aligner, but fail to match the quality of GIZA++. Garg et al. (2019) represents the current state of the art in word alignment, outperforming GIZA++ by training a single model that is able to both translate and align. This model is supervised with a guided alignment loss, and existing word alignments must be provided to the model during training. Garg et al. 
(2019) can produce alignments using an end-to-end neural training pipeline guided by attention activations, but this approach underperforms GIZA++. The performance of GIZA++ is only surpassed by training the guided alignment loss using GIZA++ output. Our method also uses guided alignment training, but our work is the first to surpass the alignment quality of GIZA++ without relying on GIZA++ output for supervision. Stengel-Eskin et al. (2019) introduce a discriminative neural alignment model that uses a dotproduct-based distance measure between learned source and target representation to predict if a given source-target pair should be aligned. Alignment decisions condition on the neighboring decisions using convolution. The model is trained using gold alignments. In contrast, our approach is fully unsupervised; it does not require gold alignments generated by human annotators during training. Instead, our system implicitly learns reasonable alignments by predicting future target words as part of the translation task, but selects attention activations using an auxiliary loss function to find contiguous alignment links that explain the data. 1607 3 Background 3.1 The Alignment Task Given a source-language sentence x = x1, . . . , xn of length n and its target-language translation y = y1, . . . , ym of length m, an alignment A is a set of pairs of source and target positions: A ⊆{(s, t) : s ∈{1, . . . , n}, t ∈{1, . . . , m}} Aligned words are assumed to correspond to each other, i.e. the source and the target word are translations of each other within the context of the sentence. Gold alignments are commonly generated by multiple annotators based on the Blinker guidelines (Melamed, 1998). The most commonly used metric to compare automatically generated alignments to gold alignments is alignment error rate (AER) (Och and Ney, 2000b). 3.2 Attention-Based Translation Models Bahdanau et al. (2015) introduced attention-based neural networks for machine translation. These models typically consist of an encoder for the source sentence and a decoder that has access to the previously generated target tokens and generates the target sequence from left to right. Before predicting a token, the decoder “attends” to the position-wise source representations generated by the encoder, and it produces a context vector that is a weighted sum of the contextualized source embeddings. The Transformer (Vaswani et al., 2017) attention mechanism uses a query Q and a set of k key-value pairs K, V with Q ∈Rd and V, K ∈Rk×d. Attention logits AL computed by a scaled dot product are converted into a probability distribution A using the softmax function. The attention A serves as mixture weights for the values V to form a context vector c: AL = calcAttLogits(Q, K) = Q · KT √ d A = calcAtt(Q, K) = softmax(AL) c = applyAtt(A, V ) = A · V A state-of-the-art Transformer includes multiple attention heads whose context vectors are stacked to form the context activation for a layer, and the encoder and decoder have multiple layers. For all experiments, we use a downscaled Transformer model trained for translation with a 6-layer encoder, a 3-layer decoder, and 256-dimensional hidden states and embedding vectors. For the purpose of word alignment, this translation Transformer is used as-is to extract representations of the source and the target sequences, and our alignment technique does not change the parameters of the Transformer. 
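To make the notation above concrete, here is a minimal NumPy sketch of the three operations calcAttLogits, calcAtt and applyAtt for a single query. The toy dimensions and random inputs are illustrative only and are not taken from the paper's setup.

```python
import numpy as np

def calc_att_logits(Q, K):
    """A_L = Q K^T / sqrt(d): scaled dot-product attention logits."""
    d = K.shape[-1]
    return Q @ K.T / np.sqrt(d)

def calc_att(Q, K):
    """A = softmax(A_L): attention distribution over the k key positions."""
    logits = calc_att_logits(Q, K)
    exp = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return exp / exp.sum(axis=-1, keepdims=True)

def apply_att(A, V):
    """c = A V: context vector as a weighted sum of the values."""
    return A @ V

# Toy usage: a single query attending over k = 4 source positions with d = 8.
rng = np.random.default_rng(0)
Q = rng.normal(size=(8,))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
context = apply_att(calc_att(Q, K), V)   # context vector of dimension 8
```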
Therefore, improvements to the translation system can be expected to directly carry over to alignment quality, and the alignment component does not affect translation output in any way. 3.3 Alignment Layer To improve the alignment quality achieved by interpreting attention activations, Zenkel et al. (2019) designed an additional alignment layer on top of the Transformer architecture. In the alignment layer, the context vector is computed as applyAtt(A, V ), just as in other decoder layers, but this context vector is the only input to predicting the target word via a linear layer and a softmax that gives a probability distribution over the target vocabulary. This design forces attention onto the source positions that are most useful in predicting the target word. Figure 2 depicts its architecture. This alignment layer uses the learned representations of the underlying translation model. Alignments can be extracted from the activations of this model by running a forward pass to obtain the attention weights A from the alignment layer and subsequently selecting the maximum probability source position for each target position as an alignment link: {(argmaxi (Ai,j) , j) : j ∈[1, m]}. The alignment layer predicts the next target token yi based on the source representations x extracted from the encoder of the Transformer and all past target representations y<i extracted from the decoder. Thus the probability is conditioned as p(yi|x, y<i). The encoder representation used as key and value for the attention component is the sum of the input embeddings and the encoder output. This ensures that lexical and context information are both salient in the input to the attention component. 3.4 Attention Optimization Extracting alignments with attention-based models works well when used in combination with greedy translation inference (Li et al., 2019). However, the alignment task involves predicting an alignment between a sentence and an observed translation, which requires forced decoding. When a token in the target sentence is unexpected given the preceding target prefix, attention activations computed 1608 Encoder Output Emb. Linear Softmax Word Softmax Linear Linear Linear Linear Word CalcAttLogits ApplyAtt K Q V Alignment Layer Attention Optimization Input Emb. A Softmax AL E Decoder Figure 2: Architecture of the alignment layer. During inference the attention logits AL of the sub-network Attention Optimization are optimized towards predicting the next word correctly. during forced decoding are not reliable because they do not explicitly condition on the target word being aligned. Zenkel et al. (2019) introduce a method called attention optimization, which searches for attention activations that maximize the probability of the output sequence by directly optimizing the attention activations A in the alignment layer using gradient descent for the given sentence pair (x, y) to maximize the probability of each observed target token yi while keeping all other parameters of the neural network M fixed: argmaxA p(yi|y<i, x, A; M) Attention optimization yields superior alignments when used during forced decoding when gradient descent is initialized with the activations from a forward pass through the alignment layer. 3.5 Full Context Model with Guided Alignment Loss The models described so far are based on autoregressive translation models, so they are limited to only attend to the left context of the target sequence. 
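As a rough illustration of attention optimization (Section 3.4), the sketch below fixes all network parameters and updates only the per-sentence attention of the alignment layer by gradient descent. The callable alignment_layer, the step count and the learning rate are placeholder assumptions of this sketch, not the paper's implementation; the optimization is parameterized through logits here so that the attention remains a proper distribution after every step.

```python
import torch

def attention_optimization(init_logits, values, target_ids, alignment_layer,
                           steps=3, lr=1.0):
    """Tune the alignment-layer attention for one sentence pair, keeping the
    network parameters fixed; only the per-sentence attention is updated."""
    att_logits = init_logits.clone().detach().requires_grad_(True)
    opt = torch.optim.SGD([att_logits], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        att = torch.softmax(att_logits, dim=-1)        # (T_tgt, T_src) attention
        log_probs = alignment_layer(att, values)       # (T_tgt, vocab) log-probabilities
        nll = -log_probs.gather(-1, target_ids.unsqueeze(-1)).sum()
        nll.backward()                                 # gradient w.r.t. att_logits only
        opt.step()
    return torch.softmax(att_logits, dim=-1).detach()
```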
However, for the word alignment task the current and future target context is also available and should be considered at inference time. Garg et al. (2019) train a single model to both predict the target sentence and the alignments using guided alignment training. When the model is trained to Encoder Output Emb. Linear Linear CalcAttLogits K Q Input Emb. A Softmax AL Decoder Guided Loss Self Attention Alignment Layer Figure 3: Alignment layer with additional unmasked self attention sublayer to use the full decoder context. predict alignments, the full target context can be used to obtain improved alignment quality. The alignment loss requires supervision by a set of alignment links for each sentence pair in the training data. These alignments can be generated by the current model or can be provided by an external alignment system or human annotators. Assuming one alignment link per target token, we denote the alignment source position for the target token at position t as at.1 The guided alignment loss La, given attention probabilities Aat,t for each source position at and target position t for a target sequence of length m, is defined as: La(A) = −1 m m X i=1 log(Aat,t) As depicted in Figure 3, we insert an additional selfattention component into the original alignment layer, and leave the encoder and decoder of the Transformer unchanged. In contrast to Garg et al. (2019), this design does not require updating any translation model parameters; we only optimize the alignment layer parameters with the guided alignment loss. Adding an alignment layer for guided alignment training has a small parameter overhead as it only adds a single decoder layer, resulting in an increase in parameters of less than 5%.2 Unlike the standard decoder-side self-attention layers in the Transformer architecture, the current and future target context are not masked in the 1For the purpose of the guided alignment loss we assume target tokens that do not have an alignment link to be aligned to the end-of-sentence (EOS) token of the source sequence. 2The translation model contains 15 million parameters, while the additional alignment layer has 700 thousand parameters. 1609 alignment layer self-attention component in order to provide the full target sentence as context. Alignment layer parameters are trained using the guided alignment loss. 4 Contiguity Loss Contiguous alignment connections are very common in word alignments, especially for pairs of Indo-European languages. That is, if a target word at position t is aligned to a source word at position s, the next target word at position t + 1 is often aligned to s −1, s or s + 1 (Vogel et al., 1996). Our goal is to design a loss function that encourages alignments with contiguous clusters of links. The attention activations form a 2-dimensional matrix A ∈Rn×m, where n is the number of source tokens and m the number of target tokens: each entry represents a probability that specifies how much attention weight the network puts on each source word to predict the next target word. By using a convolution with a static kernel K over these attention scores, we can measure how much attention is focused on each rectangle within the two dimensional attention matrix: ¯A = conv(A, K) LC = − m X t=1 log( max s∈{1,...,n}( ¯As,t)) We use a 2 × 2 kernel K ∈R2×2 with each element set to 0.5. Therefore, ¯A ∈Rn×m will contain the normalized attention mass of each 2×2 square of the attention matrix A. The resulting values after the convolution will be in the interval [0.0, 1.0]. 
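Both losses above admit compact implementations. The following PyTorch sketch assumes an n x m (source-by-target) attention matrix A whose columns each sum to one; the small epsilon inside the logarithms and the use of a "valid" convolution (yielding an (n-1) x (m-1) map, whereas the text states an n x m one, which presumes padding) are details of this sketch rather than of the paper.

```python
import torch
import torch.nn.functional as F

def guided_alignment_loss(A, parent_positions):
    """L_a(A) = -(1/m) * sum_t log A_{a_t, t}.
    A: (n_src, m_tgt) attention probabilities; parent_positions: length-m LongTensor
    giving the supervising source position a_t for each target position t
    (unaligned target tokens point to the source EOS position)."""
    m = A.size(1)
    picked = A[parent_positions, torch.arange(m)]   # A_{a_t, t} for every target t
    return -torch.log(picked + 1e-9).mean()

def contiguity_loss(A):
    """L_C = -sum_t log max_s Abar_{s,t}, where Abar sums each 2x2 square of A
    and scales it by 0.5 (the kernel described in Section 4)."""
    kernel = torch.full((1, 1, 2, 2), 0.5)
    A_bar = F.conv2d(A.unsqueeze(0).unsqueeze(0), kernel).squeeze(0).squeeze(0)
    return -torch.log(A_bar.max(dim=0).values + 1e-9).sum()
```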
For each target word we select the square with the highest attention mass, encouraging a sparse distribution over source positions in ¯A and thus effectively training the model towards strong attention values on neighboring positions. We mask the contiguity loss such that the end of sentence symbol is not considered during this procedure. We apply a position-wise dropout of 0.1 on the attention logits before using the softmax function to obtain A, which turned out to be important to avoid getting stuck in trivial solutions during training.3 Optimizing the alignment loss especially encour3A trivial solution the network converged to when adding the contiguity loss without dropout was to align each target token to the same source token. Figure 4: Example of alignment patterns that lead to a minimal contiguity loss. ages diagonal and horizontal patterns4 as visualized in Figure 4. These correspond well to a large portion of patterns appearing in human alignment annotations as shown in Figure 1. 5 Bidirectional Attention Optimization A common way to extract word alignments is to train two models, one for the forward direction (source to target) and one for the backward direction (target to source). For each model, one can extract separate word alignments and symmetrize these using heuristics like grow-diagonal (Och and Ney, 2000b; Koehn et al., 2005). However, this approach uses the hard word alignments of both directions as an input, and does not consider any other information of the forward and backward model. For attention-based neural networks it is possible to adapt attention optimization as described in Section 3.4 to consider two models at the same time. The goal of attention optimization is to find attention activations that lead to the correct prediction of the target sequence for a single neural network. We extend this procedure to optimize the likelihood of the sentence pair jointly under both the forward and the backward model, with the additional bias to favor contiguous alignments. Figure 5 depicts this procedure. 5.1 Initialization Since attention optimization uses gradient descent to find good attention activations, it is important to start with a reasonable initialization. We extract the attention logits (attention before applying the softmax) from the forward (AL)F and the backward model (AL)B and average these to get a starting point for gradient descent: (AL)init = 1 2((AL)F + (AL)T B). 5.2 Optimization Our goal is to find attention logits AL that lead to the correct prediction for both the forward MF 4Vertical patterns are not encouraged, as it is not possible to have an attention probability above 0.5 for two source words and the same target word, because we use the softmax function over the source dimension. 1610 Softmax Linear Word ApplyAtt VF A Softmax Softmax Linear Word ApplyAtt VB A Softmax Attention Logits Transpose Forward Model Backward Model Loss Loss Contiguity Loss Figure 5: Bidirectional Attention Optimization. We optimize the attention logits towards the correct prediction of the next token when used for both the forward and backward model. The attention values VF and VB extracted from the forward and backward model remain static. Additionally, the attention logits are biased towards producing contiguous alignments. and the backward model MB, while also representing contiguous alignments. 
We will use the cross entropy loss CE for a whole target sequence y of length m to define the loss, given probabilities for each target token p(yt|At; M) under model parameters M and a given attention activation vector At: CE(p(y|A; M)) = m X t=1 −log(p(yt|At; M)) Let x, y be the source and target sequence, so that we can define a loss function for each component with the interpolation parameter λ for the contiguity loss LC as follows: LF = CE(p(y|softmax(AL); MF )) LB = CE(p(x|softmax(AT L); MB)) L = LF + LB + λLC We apply gradient descent to optimize all losses simultaneously, thus approximating a solution of argminALL(x, y|AL, MF , MB). 5.3 Alignment Extraction After optimizing the attention logits, we still have to decide which alignment links to extract, i.e. how to convert the soft attentions into hard alignments. For neural models using a single direction a common method is to extract the alignment with the highest attention score for each target token. For our bidirectional method we use the following approach: We merge the attention probabilities extracted from both directions using element-wise multiplication, where ⊗denotes a Hadamard product: AF = softmax(AL) AB = softmax(AT L)T AM = AF ⊗AM This favors alignments that effectively predict observed words in both the source and target sentences. Given the number of source tokens n and target tokens m in the sentence, we select min(n, m) alignments that have the highest values in the merged attention scores AM. In contrast to selecting one alignment per target token, this allows unaligned tokens, one-to-many, many-to-one and many-to-many alignment patterns. 6 Experiments 6.1 Data We use the same experimental setup5 as described by Zenkel et al. (2019) and used by Garg et al. (2019). It contains three language pairs: German→English, Romanian→English and English→French (Och and Ney, 2000a; Mihalcea and Pedersen, 2003). We learn a joint byte pair encoding (BPE) for the source and the target language with 40k merge operation (Sennrich et al., 2016). To convert from alignments between word pieces to alignments between words, we align a source word to a target word if an alignment link exists between any of its word pieces. Using BPE units instead of words also improved results for GIZA++ (e.g., 20.9% vs. 18.9% for German→English in a single direction). Therefore, we use the exact same input data for GIZA++ and all our neural approaches. For training GIZA++ we use five iterations each for Model 1, the HMM model, Model 3 and Model 4. 6.2 Training Most of the language pairs do not contain an adequately sized development set for word alignment experiments. Therefore, rather than early stopping, we used a fixed number of updates for each training stage across all languages pairs: 90k for training the translation model, 10k for the alignment layer and 10k for guided alignment training (batch-size: 5https://github.com/lilt/ alignment-scripts 1611 36k words). Training longer did not improve or degrade test-set AER on German→English; the AER only fluctuated by less than 1% when training the alignment layer for up to 20k updates while evaluating it every 2k updates. We also trained a base transformer with an alignment layer for German→English, but achieved similar results in terms of AER, so we used the smaller model described in sub-section 3.2 for other language pairs. We adopted most hyperparameters from Zenkel et al. (2019), see the Supplemental Material for a summary. 
We tuned the interpolation factor for the contiguity loss on German→English. 6.3 Contiguity Loss Results of ablation experiments for the contiguity loss can be found in Table 1. Our first experiment uses the contiguity loss during training and we extract the alignments from the forward pass using a single direction without application of attention optimization. We observe an absolute improvement of 6.4% AER (34.2% to 27.8%) after adding the contiguity loss during training. Afterwards, we use the model trained with contiguity loss and use attention optimization to extract alignments. Adding the contiguity loss during attention optimization further improves the AER scores by 1.2%. Both during training and attention optimization we used an interpolation coefficient of λ = 1.0 for the contiguity loss. By visualizing the attention activations in Figure 7 we see that the contiguity loss leads to sparse activations. Additionally, by favoring contiguous alignments it disambiguates correctly the alignment between the words “we” and “wir”, which appear twice in the sentence pair. In the remaining experiments we use the contiguous loss for both training and attention optimization. While we used a kernel of size 2x2 in our experiments, we also looked at different sizes. Using a 1x1 kernel6 during attention optimization leads to an AER of 22.8%, while a 3x3 kernel achieves the best result with an AER of 21.2%, compared to 21.5% of the 2x2 kernel. Larger kernel sizes lead to slightly worse results: 21.4% for a 4x4 kernel and 21.5% for a 5x5 kernel. 6.4 Bidirectional Attention Optimization The most commonly used methods to merge alignments from models trained in opposite direc6A 1x1 only encourages sparse alignments, and does not encourage contiguous alignments. Method No Contiguity Contiguity Forward 34.2% 27.8% Att. Opt 22.7% 21.5% Table 1: AER results with and without using the contiguity loss when extracting alignments from the forward pass or when using attention optimization for the language pair German→English. AER DeEn 21.5% EnDe 25.6% Grow-diag 19.6% Grow-diag-final 19.7% Bidir. Att. Opt 17.9% Table 2: Comparison of AER scores between bidirectional attention optimization and methods to merge hard alignments. tions are variants of grow-diagonal. We extract hard alignments for both German→English and English→German with (monolingual) attention optimization, which leads to an AER of 21.5% and 25.6%, respectively. Merging these alignments with grow-diagonal leads to an AER of 19.6%, while grow-diagonal-final yields an AER of 19.7%. We tuned the interpolation factor λ for the contiguity loss during bidirectional optimization. A parameter of 1.0 leads to an AER of 18.2%, 2.0 leads to 18.0% while 5.0 leads to 17.9%. Compared to unidirectional attention optimization it makes sense to pick a higher interpolation factor for the contiguity loss, as it is applied with the loss of the forward and backward model. For the remaining experiments we use 5.0 as the interpolation factor. Bidirectional attention optimization improves the resulting alignment error rate compared to the grow-diagonal heuristic by up to 1.8% for German→English. These results are summarized in Table 2. Variants of grow-diagonal have to rely on the hard alignments generated by the forward and the backward model. They only choose from these alignment links and therefore do not have the ability to generate new alignment links. 
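For reference, the merge-and-extract step of bidirectional attention optimization (Section 5.3) can be sketched as follows; the axis conventions (source rows, target columns, matching the contiguity-loss layout) and the tie-free top-k selection are assumptions of this sketch, not the paper's reference implementation.

```python
import torch

def extract_bidirectional_links(att_logits):
    """att_logits: (n_src, m_tgt) jointly optimized attention logits A_L.
    Returns min(n, m) (source, target) links with the highest merged scores."""
    n, m = att_logits.shape
    A_f = torch.softmax(att_logits, dim=0)          # forward: per target, distribution over source
    A_b = torch.softmax(att_logits.t(), dim=0).t()  # backward direction, transposed back
    A_m = A_f * A_b                                  # element-wise (Hadamard) merge
    k = min(n, m)
    idx = torch.topk(A_m.flatten(), k).indices       # strongest merged scores
    return [(int(i) // m, int(i) % m) for i in idx]  # (source_pos, target_pos) pairs
```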
In contrast, bidirectional attention optimization takes the parameters of the underlying models into account and optimizes the underlying attention logits simultaneously for both models to fit the sentence pair. In the example in Figure 8 bidirectional attention optimization is able to correctly predict 1612 0 2 4 6 8 10 15 20 25 30 Gradient Descent Steps Unidir Unidir+CL Bidir Bidir+CL Figure 6: AER with respect to gradient descent steps during attention optimization for German→English. Both unidirectional (Unidir) and bidirectional (Bidir) optimization benefit from the contiguity loss (CL). Without the contiguity loss AER slightly degrades after more than three optimization steps. an alignment link between “¨ubereinstimmend” and “proven” that did not appear at all in the individual alignments of the forward and backward model. We plot the behavior of attention optimization with a varying number of gradient descent steps in Figure 6. For both unidirectional and bidirectional models attention optimization leads to steadily improving results. Without using the additional contiguity loss, the lowest AER appears after three gradient descent steps and slightly increases afterwards. When using the contiguity loss AER results continue to decrease with additional steps. The contiguity loss seems to stabilize optimization and avoids overfitting of the optimized attention activations when tuning them for a single sentence pair. 6.5 Guided Alignment Training We now use the alignment layer with the full decoder context by adding an additional self-attention layer that does not mask out the future target context. We extract alignments from the previous models with bidirectional attention optimization and use those alignments for guided alignment training. This works surprisingly well. While the alignments used for training yielded an AER of 17.9% after bidirectional attention optimization (Table 4), the full context model trained with these alignments further improved the AER to 16.0% while using a Method DeEn EnFr RoEn Att. Opt. 21.5% 15.0% 29.2% +Guided 16.0% 6.6% 23.4% Zenkel et al. (2019) 26.6% 23.8% 32.3% GIZA++ 18.9% 7.9% 27.3% Table 3: Comparison of unidirectional models with GIZA++. Method DeEn EnFr RoEn Bidir. Att. Opt. 17.9% 8.4% 24.1% +Guided 16.3% 5.0% 23.4% Zenkel et al. (2019) 21.2% 10.0% 27.6% Garg et al. (2019) 20.2% 7.7% 26.0% GIZA++ 18.7% 5.5% 26.5% Table 4: Comparison of neural alignment approaches with GIZA++ after using symmetrization of the forward and backward model. single model for German→English (Table 3). After guided alignment training is complete, we do not apply attention optimization, since that would require a distribution over target words, which is not available in this model. 6.6 End-to-End Results We now report AER results across all three language pairs. Precision and recall scores are included in the Supplemental Material. We first extract alignments from a unidirectional model, a common use case where translations and alignments need to be extracted simultaneously. Table 3 compares our results to GIZA++ and Zenkel et al. (2019).7 We observe that guided alignment training leads to gains across all language pairs. In a single direction our approach consistently outperforms GIZA++ by an absolute AER difference between 1.3% (EnFr) and 3.9% (RoEn). Table 4 compares bidirectional results after symmetrization. We compare to purely neural and purely statistical systems.8 For symmetrizing alignments of the guided model and GIZA++, we use grow-diagonal. 
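For comparison, a simplified sketch of the grow-diagonal heuristic itself (following the standard description associated with Och and Ney (2000b) and Koehn et al. (2005)) is given below; it illustrates why the heuristic can only choose among existing forward and backward links, as discussed above. The exact neighbor set and stopping condition vary across implementations, so this is an approximation rather than the toolkit's code.

```python
def grow_diag(forward_links, backward_links):
    """Simplified grow-diagonal: start from the intersection and repeatedly add
    union links that neighbor an existing link and touch a still-unaligned word."""
    union = set(forward_links) | set(backward_links)
    alignment = set(forward_links) & set(backward_links)
    neighbors = [(-1, 0), (0, -1), (1, 0), (0, 1),
                 (-1, -1), (-1, 1), (1, -1), (1, 1)]
    added = True
    while added:
        added = False
        for (s, t) in sorted(alignment):
            for ds, dt in neighbors:
                cand = (s + ds, t + dt)
                if cand in union and cand not in alignment:
                    src_unaligned = all(cs != cand[0] for cs, _ in alignment)
                    tgt_unaligned = all(ct != cand[1] for _, ct in alignment)
                    if src_unaligned or tgt_unaligned:
                        alignment.add(cand)
                        added = True
    return alignment
```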
Bidirectional attention optimization is already able to outperform GIZA++ and Garg et al. (2019) on all language pairs except English→French. Using guided alignment training further improves results across all language pairs 7Garg et al. (2019) only report bidirectional results after symmetrization. 8For additional comparisons including neural models bootstrapped with GIZA++ alignments, see the Supplemental Material. 1613 (a) Without Contiguity Loss (b) With Contiguity Loss Figure 7: Attention activations of the alignment layer after attention optimization. Using the contiguity loss during training leads to sparse activations, the correct alignment of the two occurrences of “we”-“wir” and to correct alignment of the period. (a) Intersection/Union (b) Bidir. Optimization (c) Gold Alignments Figure 8: Example of symmetrization with bidirectional attention optimization. We show all alignments extracted from the forward and backward direction with unidirectional attention optimization in Subfigure 8a (alignments that are only present in one direction are grey). Bidirectional attention optimization is able to extract the correct alignment between “¨ubereinstimmend“ and “proven” which did neither appear as an alignment link in the forward nor in the backward direction. and leads to a consistent AER improvement compared to GIZA++ and neural results reported by Garg et al. (2019). These results show that it is possible to outperform GIZA++ both in a single direction and after symmetrization without using any alignments generated from statistical alignment systems to bootstrap training. 7 Conclusion This work presents the first end-to-end neural approach to the word alignment task which consistently outperforms GIZA++ in terms of alignment error rate. Our approach extends a pre-trained stateof-the-art neural translation model with an additional alignment layer, which is trained in isolation without changing the parameters used for the translation task. We introduce a novel auxiliary loss function to encourage contiguity in the alignment matrix and a symmetrization algorithm that jointly optimizes the alignment matrix within two models which are trained in opposite directions. In a final step the model is re-trained to leverage full target context with a guided alignment loss. Our results on three language pairs are consistently superior to both GIZA++ and prior work on end-to-end neural alignment. As the resulting model repurposes a pre-trained translation model without changing its parameters, it can directly benefit from improvements in translation quality, e.g. by adaptation via fine-tuning. 1614 References Tamer Alkhouli, Gabriel Bretschner, and Hermann Ney. 2018. On the alignment problem in multi-head attention-based neural machine translation. Proceedings of the Third Conference on Machine Translation. Philip Arthur, Graham Neubig, and Satoshi Nakamura. 2016. Incorporating discrete translation lexicons into neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. International Conference on Learning Representations. Peter F Brown, Vincent J Della Pietra, Stephen A Della Pietra, and Robert L Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational linguistics, 19(2):263–311. John DeNero and Klaus Macherey. 2011. 
Model-based aligner combination using dual decomposition. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pages 420–429. Association for Computational Linguistics. Chris Dyer, Victor Chahuneau, and Noah A Smith. 2013. A simple, fast, and effective reparameterization of ibm model 2. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 644–648. Carla Parra Escartın and Manuel Arcedillo. 2015. Machine translation evaluation made fuzzier: A study on post-editing productivity and evaluation metrics in commercial settings. Proceedings of MT Summit XV, page 131. Qin Gao and Stephan Vogel. 2008. Parallel implementations of word alignment tool. In Software engineering, testing, and quality assurance for natural language processing, pages 49–57. Association for Computational Linguistics. Sarthak Garg, Stephan Peitz, Udhyakumar Nallasamy, and Matthias Paulik. 2019. Jointly learning to align and translate with transformer models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4452–4461, Hong Kong, China. Association for Computational Linguistics. Jo˜ao Grac¸a, Kuzman Ganchev, and Ben Taskar. 2008. Expectation maximization and posterior constraints. In Advances in neural information processing systems. Eric Joanis, Darlene Stewart, Samuel Larkin, and Roland Kuhn. 2013. Transferring markup tags in statistical machine translation: A two-stream approach. Machine Translation Summit XIV, page 73. Philipp Koehn, Amittai Axelrod, Alexandra Birch Mayne, Chris Callison-Burch, Miles Osborne, and David Talbot. 2005. Edinburgh system description for the 2005 iwslt speech translation evaluation. In International Workshop on Spoken Language Translation (IWSLT) 2005. Xintong Li, Guanlin Li, Lemao Liu, Max Meng, and Shuming Shi. 2019. On the word alignment from neural machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1293–1303, Florence, Italy. Association for Computational Linguistics. Percy Liang, Ben Taskar, and Dan Klein. 2006. Alignment by agreement. In Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics. Association for Computational Linguistics. I Dan Melamed. 1998. Annotation style guide for the blinker project. arXiv preprint cmp-lg/9805004. Rada Mihalcea and Ted Pedersen. 2003. An evaluation exercise for word alignment. In Proceedings of the HLT-NAACL 2003 Workshop on Building and using parallel texts data driven machine translation and beyond. Association for Computational Linguistics. Mathias M¨uller. 2017. Treatment of markup in statistical machine translation. In Proceedings of the Third Workshop on Discourse in Machine Translation, pages 36–46. Franz Josef Och and Hermann Ney. 2000a. A comparison of alignment models for statistical machine translation. In COLING 2000 Volume 2: The 18th International Conference on Computational Linguistics, volume 2. Franz Josef Och and Hermann Ney. 2000b. Improved statistical alignment models. In Proceedings of the 38th Annual Meeting on Association for Computational Linguistics. Association for Computational Linguistics. Franz Josef Och and Hermann Ney. 2003. 
A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19–51. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715– 1725, Berlin, Germany. Association for Computational Linguistics. 1615 Elias Stengel-Eskin, Tzu-ray Su, Matt Post, and Benjamin Van Durme. 2019. A discriminative neural model for cross-lingual word alignment. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 910–920, Hong Kong, China. Association for Computational Linguistics. Akihiro Tamura, Taro Watanabe, and Eiichiro Sumita. 2014. Recurrent neural networks for word alignment model. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics. Arda Tezcan and Vincent Vandeghinste. 2011. Smtcat integration in a technical domain. handling xml mark-up using pre and post-editing processing methods. In Proceedings of the 15th International Conference of the European Association for Machine Translation (EAMT-2011), page 8. Centre for Computational Linguistics; Leuven. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Stephan Vogel, Hermann Ney, and Christoph Tillmann. 1996. HMM-based word alignment in statistical translation. In COLING 1996 Volume 2: The 16th International Conference on Computational Linguistics. Thomas Zenkel, Joern Wuebker, and John DeNero. 2019. Adding interpretable attention to neural translation models improves word alignment. arXiv preprint arXiv:1901.11359. 1616 A Supplemental Material Table 5 and Table 6 summarize the hyperparameters used for the translation model and the additional alignment layer. In Table 7 we report both AER results and precision and recall for all language pairs. Hyperparameter Value Dropout Rate 0.1 Embedding Size 256 Hidden Units 512 Encoder Layers 6 Decoder Layers 3 Attention Heads Per Layer 8 Table 5: Hyperparameters of the translation model. Hyperparameter Value Dropout Rate 0.1 Embedding Size 256 Hidden Units 256 Attention Heads 1 Table 6: Hyperparameters of the alignment layer. 1617 Method DeEn EnDe Bidir EnFr FrEn Bidir RoEn EnRo Bidir Att. Opt. 21.5% 25.6% 17.9% 15.0% 14.3% 8.4% 29.2% 28.8% 24.1% 76/81 73/76 85/79 81/92 82/93 90/95 74/68 74/69 85/69 Guided 16.0% 16.6% 16.3% 6.6% 6.3% 5.0% 23.4% 23.1% 23.4% 88/80 89/78 93/76 92/95 93/95 96/94 88/68 90/67 93/65 GIZA++ word 20.9% 23.1% 21.4% 8.0% 9.8% 5.9% 28.7% 32.2% 27.9% 86/72 87/69 94/67 91/93 92/88 98/90 83/63 80/59 94/59 GIZA++ subword 18.9% 20.4% 18.7% 7.9% 8.5% 5.5% 27.3% 29.4% 26.5% 89/74 88/72 95/71 92/93 93/89 98/91 85/64 83/62 93/61 Zenkel et al. (2019) 26.6% 30.4% 21.2% 23.8% 20.5% 10.0% 32.3% 34.8% 27.6% Garg et al. (2019) n/a n/a 20.2% n/a n/a 7.7% n/a n/a 26.0% + GIZA++ n/a n/a 16.0% n/a n/a 4.6% n/a n/a 23.1% Table 7: AER and—when available—precision/recall scores in percentage in the following row. The Bidir column reports results for the DeEn, EnFr and RoEn translation direction, respectively, and uses grow-diagonal for all columns except when attention optimization is used. 
For attention optimization, we instead merge alignments with bidirectional attention optimization.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1618–1627 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 1618 Enhancing Machine Translation with Dependency-Aware Self-Attention Emanuele Bugliarello∗ University of Copenhagen [email protected] Naoaki Okazaki Tokyo Institute of Technology [email protected] Abstract Most neural machine translation models only rely on pairs of parallel sentences, assuming syntactic information is automatically learned by an attention mechanism. In this work, we investigate different approaches to incorporate syntactic knowledge in the Transformer model and also propose a novel, parameter-free, dependency-aware self-attention mechanism that improves its translation quality, especially for long sentences and in low-resource scenarios. We show the efficacy of each approach on WMT English↔German and English→Turkish, and WAT English→Japanese translation tasks. 1 Introduction Research in neural machine translation (NMT) has mostly exploited corpora consisting of pairs of parallel sentences, with the assumption that a model can automatically learn prior linguistic knowledge via an attention mechanism (Luong et al., 2015). However, Shi et al. (2006) found that these models still fail to capture deep structural details, and several studies (Sennrich and Haddow, 2016; Eriguchi et al., 2017; Chen et al., 2017, 2018) have shown that syntactic information has the potential to improve these models. Nevertheless, the majority of syntax-aware NMT models are based on recurrent neural networks (RNNs; Elman 1990), with only a few recent studies that have investigated methods for the Transformer model (Vaswani et al., 2017). Wu et al. (2018) evaluated an approach to incorporate syntax in NMT with a Transformer model, which not only required three encoders and two decoders, but also target-side dependency relations (precluding its use to low-resource target languages). Zhang et al. (2019) integrate source-side syntax by concatenating the intermediate representations of a dependency parser to word embeddings. ∗Work done while at Tokyo Institute of Technology. In contrast to ours, this approach does not allow to learn sub-word units at the source side, requiring a larger vocabulary to minimize out-of-vocabulary words. Saunders et al. (2018) interleave words with syntax representations which results in longer sequences – requiring gradient accumulation for effective training – while only leading to +0.5 BLEU on WAT Ja-En when using ensembles of Transformers. Finally, Currey and Heafield (2019) propose two simple data augmentation techniques to incorporate source-side syntax: one that works well on low-resource data, and one that achieves a high score on a large-scale task. Our approach, on the other hand, performs equally well in both settings. While these studies improve the translation quality of the Transformer, they do not exploit its properties. In response, we propose to explicitly enhance the its self-attention mechanism (a core component of this architecture) to include syntactic information without compromising its flexibility. Recent studies have, in fact, shown that self-attention networks benefit from modeling local contexts by reducing the dispersion of the attention distribution (Shaw et al., 2018; Yang et al., 2018, 2019), and that they might not capture the inherent syntactic structure of languages as well as recurrent models, especially in low-resource settings (Tran et al., 2018; Tang et al., 2018). 
Here, we present parentscaled self-attention (PASCAL): a novel, parameterfree local attention mechanism that lets the model focus on the dependency parent of each token when encoding the source sentence. Our method is simple yet effective, improving translation quality with no additional parameter or computational overhead. Our main contributions are: • introducing PASCAL: an effective parameterfree local self-attention mechanism to incorporate source-side syntax into Transformers; • adapting LISA (Strubell et al., 2018) to subword representations and applying it to NMT; 1619 Dᵖ dmodel WVʰ WQʰ WKʰ d dmodel T X d T Vʰ Qʰ Kʰ d T T Sʰ softmax() p 2 3 3 5 3 The monkey eats a banana Input: The monkey eats a banana ⨯ * ⨀ ⨯ ⨯ -1/2 T T T T Nʰ T d Mʰ T dist() Figure 1: Parent-Scaled Self-Attention (PASCAL) head for the input sequence “The monkey eats a banana”. • similar to concurrent work (Pham et al., 2019), we find that modeling linguistic knowledge into the self-attention mechanism leads to better translations than other approaches. Our extensive experiments on standard En↔De, En→Tr and En→Ja translation tasks also show that (a) approaches to embed syntax in RNNs do not always transfer to the Transformer, and (b) PASCAL consistently exhibits significant improvements in translation quality, especially for long sentences. 2 Model In order to design a neural network that is efficient to train and that exploits syntactic information while producing high-quality translations, we base our model on the Transformer architecture (Vaswani et al., 2017) and upgrade its encoder with parent-scaled self-attention (PASCAL) heads at layer ls. PASCAL heads enforce contextualization from the syntactic dependencies of each source token, and, in practice, we replace standard selfattention heads with PASCAL ones in the first layer as its inputs are word embeddings that lack any contextual information. Our PASCAL sub-layer has the same number H of attention heads as other layers. Source syntax Similar to previous work, instead of just providing sequences of tokens, we supply the encoder with dependency relations given by an external parser. Our approach explicitly exploits sub-word units, which enable open-vocabulary translation: after generating sub-word units, we compute the middle position of each word in terms of number of tokens. For instance, if a word in position 4 is split into three tokens, now in positions 6, 7 and 8, its middle position is 7. We then map each sub-word of a given word to the middle position of its parent. For the root word, we define its parent to be itself, resulting in a parse that is a directed graph. The input to our encoder is a sequence of T tokens and the absolute positions of their parents. 2.1 Parent-Scaled Self-Attention Figure 1 shows our parent-scaled self-attention sublayer. Here, for a sequence of length T, the input to each head is a matrix X ∈RT×dmodel of token embeddings and a vector p ∈RT whose t-th entry pt is the middle position of the t-th token’s dependency parent. Following Vaswani et al. (2017), in each attention head h, we compute three vectors (called query, key and value) for each token, resulting in the three matrices Kh ∈RT×d, Qh ∈RT×d, and Vh ∈RT×d for the whole sequence, where d = dmodel/H. We then compute dot products between each query and all the keys, giving scores of how much focus to place on other parts of the input when encoding a token at a given position. 
The scores are divided by √ d to alleviate the vanishing gradient problem arising if dot products are large: Sh = Qh Kh⊤/ √ d. (1) Our main contribution is in weighing the scores of the token at position t, st, by the distance of each token from the position of t’s dependency parent: nh tj = sh tj dp tj, for j = 1, ..., T, (2) where nh t is the t-th row of the matrix Nh ∈RT×T representing scores normalized by the proximity to t’s parent. dp tj = dist(pt, j) is the (t, j)th entry of the matrix Dp ∈RT×T containing, for each row dt, the distances of every token j from the middle position of token t’s dependency parent pt. In this paper, we compute this distance as the value of the probability density of a normal distribution centered at pt and with variance σ2, N pt, σ2 : dist(pt, j) = fN j pt, σ2 = 1 √ 2πσ2 e−(j−pt)2 2σ2 . (3) 1620 Finally, we apply a softmax function to yield a distribution of weights for each token over all the tokens in the sentence, and multiply the resulting matrix with the value matrix Vh, obtaining the final representations Mh for PASCAL head h. One of the major strengths of our proposal is being parameter-free: no additional parameter is required to train our PASCAL sub-layer as Dp is obtained by computing a distance function that only depends on the vector of tokens’ parent positions and can be evaluated using fast matrix operations. Parent ignoring Due to the lack of parallel corpora with gold-standard parses, we rely on noisy annotations from an external parser. However, the performance of syntactic parsers drops abruptly when evaluated on out-of-domain data (Dredze et al., 2007). To prevent our model from overfitting to noisy dependencies, we introduce a regularization technique for the PASCAL sub-layer: parent ignoring. In a similar vein as dropout (Srivastava et al., 2014), we disregard information during the training phase. Here, we ignore the position of the parent of a given token by randomly setting each row of Dp to 1 ∈RT with some probability q. Gaussian weighing function The choice of weighing each score by a Gaussian probability density is motivated by two of its properties. First, its bell-shaped curve: It allows us to focus most of the probability density at the mean of the distribution, which we set to the middle position of the sub-word units of the dependency parent of each token. In our experiments, we find that most words in the vocabularies are not split into sub-words, hence allowing PASCAL to mostly focus on the actual parent. In addition, non-negligible weights are placed on the neighbors of the parent token, allowing the attention mechanism to also attend to them. This could be useful, for instance, to learn idiomatic expressions such as prepositional verbs in English. The second property of Gaussian-like distributions that we exploit is their support: While most of the weight is placed in a small window of tokens around the mean of the distribution, all the values in the sequence are actually multiplied by non-zero factors; allowing a token j farther away from the parent of token t, pt, to still play a role in the representation of t if its score sh tj is high. PASCAL can be seen as an extension of the local attention mechanism of Luong et al. (2015), with the alignment now guided by syntactic information. Yang et al. (2018) proposed a method to learn a Gaussian bias that is added to, instead of multiplied by, the original attention distribution. As we will see next, our model significantly outperforms this. 
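Putting Equations (1)-(3) together with the sub-word parent mapping from Section 2 (and the softmax and value multiplication described just below), one PASCAL head can be sketched in NumPy as follows. The helper names, the integer middle-position convention for evenly split words and the shapes of the projection matrices are assumptions of this sketch, not the reference implementation.

```python
import numpy as np

def parent_middle_positions(word_to_token_spans, heads):
    """Map every sub-word token to the middle token position of its word's
    dependency parent (Section 2, 'Source syntax').
    word_to_token_spans: list of inclusive (start, end) token indices per word;
    heads: 0-based dependency head per word, with the root pointing to itself."""
    middles = [(start + end) // 2 for start, end in word_to_token_spans]
    parents = []
    for w, (start, end) in enumerate(word_to_token_spans):
        parents.extend([middles[heads[w]]] * (end - start + 1))
    return np.array(parents, dtype=float)            # p_t for every token t

def pascal_head(X, Wq, Wk, Wv, p, sigma2=1.0):
    """One parent-scaled self-attention head: Equations (1)-(3), then softmax and values."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv                 # (T, d) each
    d = Q.shape[-1]
    S = Q @ K.T / np.sqrt(d)                         # Eq. (1): scaled dot-product scores
    T = X.shape[0]
    positions = np.arange(T)
    # Eq. (3): Gaussian density of every position j around each token's parent p_t.
    D = np.exp(-(positions[None, :] - p[:, None]) ** 2 / (2 * sigma2)) \
        / np.sqrt(2 * np.pi * sigma2)                # D[t, j] = dist(p_t, j)
    N = S * D                                        # Eq. (2): parent-scaled scores
    A = np.exp(N - N.max(axis=-1, keepdims=True))
    A = A / A.sum(axis=-1, keepdims=True)            # softmax over source positions
    return A @ V                                     # head output M^h
```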
3 Experiments 3.1 Experimental Setup Data We evaluate the efficacy of our approach on standard, large-scale benchmarks and on lowresource scenarios, where the Transformer was shown to induce poorer syntax. Following Bastings et al. (2017), we use News Commentary v11 (NC11) with En-De and De-En tasks to simulate low resources and test multiple source languages. To compare with previous work, we train our models on WMT16 En-De and WAT En-Ja tasks, removing sentences in incorrect languages from WMT16 data sets. For a thorough comparison with concurrent work, we also evaluate on the largescale WMT17 En-De and low-resource WMT18 En-Tr tasks. We rely on Stanford CoreNLP (Manning et al., 2014) to parse source sentences.1 Training We implement our models in PyTorch on top of the Fairseq toolkit.2 Hyperparameters, including the number of PASCAL heads, that achieved the highest validation BLEU (Papineni et al., 2002) score were selected via a small grid search. We report previous results in syntax-aware NMT for completeness, and train a Transformer model as a strong, standard baseline. We also investigate the following syntax-aware Transformer approaches:1 • +PASCAL: The model presented in §2. The variance of the normal distribution was set to 1 (i.e., an effective window size of 3) as 99.99% of the source words in our training sets are at most split into 7 sub-words units. • +LISA: We adapt LISA (Strubell et al., 2018) to NMT and sub-word units by defining the parent of a given token as its first sub-word (which represents the root of the parent word). • +MULTI-TASK: Our implementation of the multi-task approach by Currey and Heafield (2019) where a standard Transformer learns to both parse and translate source sentences. • +S&H: Following Sennrich and Haddow (2016), we introduce syntactic information in the form of dependency labels in the embedding matrix of the Transformer encoder. 1For a detailed description, see Appendix A. 2https://github.com/e-bug/pascal. 1621 (0, 10] (10, 20] (20, 30] (30, 40] (40, 50] (50, 100] Source sentence length 0.0 0.5 1.0 1.5 2.0 2.5 3.0 3.5 BLEU NC11 En-De NC11 De-En WMT18 En-Tr WMT16 En-De WMT17 En-De WAT En-Ja (0, 10] (10, 20] (20, 30] (30, 40] (40, 50] (50, 100] Source sentence length 0 10 20 30 40 # Sentences [%] 22.3 40.0 24.2 9.6 2.9 0.9 26.0 40.6 21.8 8.3 2.4 0.8 21.8 37.7 24.3 10.7 3.7 1.8 22.3 40.0 24.2 9.6 2.9 0.9 20.4 41.8 26.6 8.4 2.3 0.4 9.3 42.1 31.3 12.0 4.1 1.2 Figure 2: Analysis by sentence length: ∆BLEU with the Transformer (above) and percentage of data (below). Method NC11 NC11 WMT18 WMT16 WMT17 WAT En-De De-En En-Tr En-De En-De En-Ja [B] En-Ja [R] Eriguchi et al. (2016) 34.9 81.58 Bastings et al. (2017) 16.1 Hashimoto and Tsuruoka (2017) 39.4 82.83 Bisk and Tran (2018) 30.3 24.3 SE+SD-NMT† (Wu et al., 2018) 24.7 36.4 81.83 SE+SD-Transformer† (Wu et al., 2018) 26.2 Mixed Enc. (Currey and Heafield, 2019) 9.6 31.9 26.0 Multi-Task (Currey and Heafield, 2019) 10.6 29.6 23.4 Transformer 25.0 26.6 13.1 33.0 25.5 43.1 83.46 + PASCAL 25.9⇑ 27.4⇑ 14.0⇑ 33.9⇑ 26.1⇑ 44.0⇑ 85.21⇑ + LISA 25.3 27.1 13.6 33.6 25.7 43.2 83.51 + MULTI-TASK 24.8 26.7 14.0 32.4 24.6 42.7 84.18 + S&H 25.5 26.8 13.0 31.9 25.1 42.8 83.88 Table 1: Test BLEU (and RIBES for En-Ja) scores on small-scale (left) and large-scale (right) data sets. Models that also require target-side syntax information are marked with †, while ⇑indicates statistical significance (p < 0.01) against the Transformer baseline via bootstrap re-sampling (Koehn, 2004). 
3.2 Results Table 1 presents the main results of our experiments. Clearly, the base Transformer outperforms previous syntax-aware RNN-based approaches, proving it to be a strong baseline in our experiments. The table shows that the simple approach of Sennrich and Haddow (2016) does not lead to notable advantages when applied to the embeddings of the Transformer model. We also see that the multi-task approach benefits from better parameterization, but it only attains comparable performance with the baseline on most tasks. On the other hand, LISA, which embeds syntax in a self-attention head, leads to modest but consistent gains across all tasks, proving that it is also useful for NMT. Finally, PASCAL outperforms all other methods, with consistent gains over the Transformer baseline independently of the source language and corpus size: It gains up to +0.9 BLEU points on most tasks and a substantial +1.75 in RIBES (Isozaki et al., 2010), a metric with stronger correlation with human judgments than BLEU in En↔Ja translations. On WMT17, our slim model compares favorably to other methods, achieving the highest BLEU score across all source-side syntax-aware approaches.3 Overall, our model achieves substantial gains given the grammatically rigorous structure of English and German. Not only do we expect performance gains to further increase on less rigorous sources and with better parses (Zhang et al., 2019), but also higher robustness to noisier syntax trees obtained from back-translated with parent ignoring. Performance by sentence length As shown in Figure 2, our model is particularly useful when translating long sentences, obtaining more than +2 BLEU points when translating long sentences in all low-resource experiments, and +3.5 BLEU points on the distant En-Ja pair. However, only a few sentences (1%) in the evaluation datasets are long. 3Note that modest improvements in this task should not be surprising as Transformers learn better syntactic relationships from larger data sets (Raganato and Tiedemann, 2018). 1622 SRC In a cooling experiment , only a tendency agreed BASE 冷却実験では,わ わ わず ず ずか か かな な な傾向が一致した OURS 冷却実験では傾向の の のみ み み一致した SRC Of course I don’t hate you BASE Nat¨urlich hasste ich dich nicht OURS Nat¨urlich hasse ich dich nicht SRC What are those people fighting for? BASE Was sind die Menschen, f¨ur die k¨ampfen? OURS Wof¨ur k¨ampfen diese Menschen? Table 2: Example of correct translation by PASCAL. Qualitative performance Table 2 presents examples where our model correctly translated the source sentence while the Transformer baseline made a syntactic error. For instance, in the first example, the Transformer misinterprets the adverb “only” as an adjective of “tendency:” the word “only” is an adverb modifying the verb “agreed.” In the second example, “don’t” is incorrectly translated to the past tense instead of present. PASCAL layer When we introduced our model, we motivated our design choice of placing PASCAL heads in the first layer in order to enrich the representations of words from their isolated embeddings by introducing contextualization from their parents. We ran an ablation study on the NC11 data in order to verify our hypothesis. As shown in Table 3a, the performance of our model on the validation sets is lower when placing Pascal heads in upper layers; a trend that we also observed with the LISA mechanism. 
These results corroborate the findings of Raganato and Tiedemann (2018), who noticed that, in the first layer, more attention heads solely focus on the word to be translated itself rather than on its context. We can then deduce that enforcing syntactic dependencies in the first layer effectively leads to better word representations, which further enhance the translation accuracy of the Transformer model. Investigating the performance of multiple syntax-aware layers is left as future work.

Gaussian variance Another design choice we made was the variance of the Gaussian weighting function. We set it to 1 in our experiments, motivated by the statistics of our datasets, where the vast majority of words are split into at most a few tokens after applying BPE. Table 3b corroborates our choice, showing higher BLEU scores on the NC11 validation sets when the variance equals 1. Here, "parent-only" is the case where all the weight is placed on the middle token (i.e., the parent).

Table 3a (Layer: En-De, De-En):
1: 23.2, 24.6
2: 22.5, 20.1
3: 22.5, 23.8
4: 22.6, 23.8
5: 22.9, 23.8
6: 22.4, 23.9
Table 3b (Variance: En-De, De-En):
Parent-only: 22.5, 22.4
1: 23.2, 24.6
4: 22.7, 24.3
9: 22.8, 24.3
16: 22.7, 24.4
25: 22.8, 24.1
Table 3: Validation BLEU as a function of PASCAL layer (a) and Gaussian variance (b) on NC11 data.

Sensitivity to hyperparameters Due to the large computational cost required to train Transformer models, we only searched hyperparameters in a small grid. In order to estimate the sensitivity of the proposed approach to hyperparameters, we trained the NC11 De-En model with the hyperparameters of the En-De one. Despite being trained on the same data set, these two directions differ: we find that more PASCAL heads help when German (which has a higher syntactic complexity than English) is used as the source language. In this test, we only lose 0.2 BLEU points with respect to the score listed in Table 1, showing that our general approach is effective even without extensive hyperparameter tuning. Additional analyses are reported in Appendix B.

4 Conclusion

This study provides a thorough investigation of approaches to induce syntactic knowledge into self-attention networks. Through extensive evaluations on various translation tasks, we find that approaches effective for RNNs do not necessarily transfer to Transformers (e.g., +S&H). Conversely, dependency-aware self-attention mechanisms (LISA and PASCAL) best embed syntax, for all corpus sizes, with PASCAL consistently outperforming all other approaches. Our results show that exploiting core components of the Transformer to embed linguistic knowledge leads to higher and more consistent gains than previous approaches.

Acknowledgments

We are grateful to the anonymous reviewers, Desmond Elliott and the CoAStaL NLP group for their constructive feedback. The research results have been achieved by "Research and Development of Deep Learning Technology for Advanced Multilingual Speech Translation," the Commissioned Research of National Institute of Information and Communications Technology (NICT), Japan.

References

Joost Bastings, Ivan Titov, Wilker Aziz, Diego Marcheggiani, and Khalil Simaan. 2017. Graph Convolutional Encoders for Syntax-aware Neural Machine Translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1957–1967. Association for Computational Linguistics.

Yonatan Bisk and Ke Tran. 2018. Inducing Grammars with and for Neural Machine Translation. In Proceedings of the 2nd Workshop on Neural Machine Translation and Generation, pages 25–35, Melbourne, Australia.
Association for Computational Linguistics. Huadong Chen, Shujian Huang, David Chiang, and Jiajun Chen. 2017. Improved neural machine translation with a syntax-aware encoder and decoder. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1936–1945. Association for Computational Linguistics. Kehai Chen, Rui Wang, Masao Utiyama, Eiichiro Sumita, and Tiejun Zhao. 2018. Syntax-directed attention for neural machine translation. In AAAI Conference on Artificial Intelligence. Anna Currey and Kenneth Heafield. 2019. Incorporating source syntax into transformer-based neural machine translation. In Proceedings of the Fourth Conference on Machine Translation (Volume 1: Research Papers), pages 24–33, Florence, Italy. Association for Computational Linguistics. Mark Dredze, John Blitzer, Partha Pratim Talukdar, Kuzman Ganchev, Jo˜ao Graca, and Fernando Pereira. 2007. Frustratingly hard domain adaptation for dependency parsing. In Proceedings of the CoNLL Shared Task Session of EMNLP-CoNLL 2007, pages 1051–1055, Prague, Czech Republic. Association for Computational Linguistics. Jeffrey L. Elman. 1990. Finding Structure in Time. Cognitive Science, 14(2):179–211. Akiko Eriguchi, Kazuma Hashimoto, and Yoshimasa Tsuruoka. 2016. Tree-to-Sequence Attentional Neural Machine Translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 823–833, Berlin, Germany. Association for Computational Linguistics. Akiko Eriguchi, Yoshimasa Tsuruoka, and Kyunghyun Cho. 2017. Learning to Parse and Translate Improves Neural Machine Translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 72–78. Association for Computational Linguistics. Kazuma Hashimoto and Yoshimasa Tsuruoka. 2017. Neural Machine Translation with Source-Side Latent Graph Parsing. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 125–135. Association for Computational Linguistics. Hideki Isozaki, Tsutomu Hirao, Kevin Duh, Katsuhito Sudoh, and Hajime Tsukada. 2010. Automatic evaluation of translation quality for distant language pairs. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 944–952, Cambridge, MA. Association for Computational Linguistics. Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 388– 395, Barcelona, Spain. Association for Computational Linguistics. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondˇrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, ACL ’07, pages 177–180, Stroudsburg, PA, USA. Association for Computational Linguistics. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective Approaches to Attentionbased Neural Machine Translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412– 1421, Lisbon, Portugal. Association for Computational Linguistics. Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. 
Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Association for Computational Linguistics (ACL) System Demonstrations, pages 55–60. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL ’02, pages 311–318, Stroudsburg, PA, USA. Association for Computational Linguistics. Thuong Hai Pham, Dominik Mach´aˇcek, and Ondˇrej Bojar. 2019. Promoting the knowledge of source syntax in transformer nmt is not needed. Computaci´on y Sistemas, 23(3). Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186– 191, Belgium, Brussels. Association for Computational Linguistics. 1624 Alessandro Raganato and J¨org Tiedemann. 2018. An analysis of encoder representations in transformerbased machine translation. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 287–297, Brussels, Belgium. Association for Computational Linguistics. Danielle Saunders, Felix Stahlberg, Adri`a de Gispert, and Bill Byrne. 2018. Multi-representation ensembles and delayed SGD updates improve syntaxbased NMT. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 319– 325, Melbourne, Australia. Association for Computational Linguistics. Rico Sennrich and Barry Haddow. 2016. Linguistic Input Features Improve Neural Machine Translation. In Proceedings of the First Conference on Machine Translation: Volume 1, Research Papers, pages 83– 91, Berlin, Germany. Association for Computational Linguistics. Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-attention with relative position representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 464–468, New Orleans, Louisiana. Association for Computational Linguistics. Lei Shi, Cheng Niu, Ming Zhou, and Jianfeng Gao. 2006. A DOM tree alignment model for mining parallel data from the web. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 489–496, Sydney, Australia. Association for Computational Linguistics. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929–1958. Emma Strubell, Patrick Verga, Daniel Andor, David Weiss, and Andrew McCallum. 2018. Linguistically-Informed Self-Attention for Semantic Role Labeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5027–5038. Association for Computational Linguistics. Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2818–2826. Gongbo Tang, Mathias M¨uller, Annette Rios, and Rico Sennrich. 2018. Why self-attention? a targeted evaluation of neural machine translation architectures. 
In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4263–4272, Brussels, Belgium. Association for Computational Linguistics. Ke Tran, Arianna Bisazza, and Christof Monz. 2018. The importance of being recurrent for modeling hierarchical structure. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4731–4736, Brussels, Belgium. Association for Computational Linguistics. Ashish Vaswani, Samy Bengio, Eugene Brevdo, Francois Chollet, Aidan N. Gomez, Stephan Gouws, Llion Jones, Łukasz Kaiser, Nal Kalchbrenner, Niki Parmar, Ryan Sepassi, Noam Shazeer, and Jakob Uszkoreit. 2018. Tensor2Tensor for Neural Machine Translation. CoRR, abs/1803.07416. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is All you Need. In Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc. Shuangzhi Wu, Dongdong Zhang, Zhirui Zhang, Nan Yang, Mu Li, and Ming Zhou. 2018. Dependency-todependency neural machine translation. IEEE/ACM Transactions on Audio, Speech, and Language Processing, pages 2132–2141. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Gregory S. Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144. Baosong Yang, Jian Li, Derek F. Wong, Lidia S. Chao, Xing Wang, and Zhaopeng Tu. 2019. Context-aware self-attention networks. In AAAI Conference on Artificial Intelligence. Baosong Yang, Zhaopeng Tu, Derek F. Wong, Fandong Meng, Lidia S. Chao, and Tong Zhang. 2018. Modeling localness for self-attention networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4449– 4458, Brussels, Belgium. Association for Computational Linguistics. Meishan Zhang, Zhenghua Li, Guohong Fu, and Min Zhang. 2019. Syntax-enhanced neural machine translation with syntax-aware word representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1151–1161, Minneapolis, Minnesota. Association for Computational Linguistics. 1625 Corpus Train Filtered Train Valid Test NC11 En-De 238,843 233,483 2,169 2,999 WMT18 En-Tr 207,373 3,000 3,007 WMT16 En-De 4,500,962 4,281,379 2,169 2,999 WMT17 En-De 5,852,458 2,999 3,004 WAT En-Ja 3,008,500 1,790 1,812 Table 4: Number of sentences in each data set. A Experiment details Data preparation We follow the same preprocessing steps as Vaswani et al. (2017). Unless otherwise specified, we first tokenize the data with Moses (Koehn et al., 2007) and remove sentences longer than 80 tokens in either source or target side. Following Bastings et al. (2017), we train on News Commentary v11 (NC11) data set with English→German (En-De) and German→English (De-En) tasks so as to simulate low-resource cases and to evaluate the performance of our models for different source languages. 
We also train on the full WMT16 data set for En-De, using newstest2015 and newstest2016 as validation and test sets, respectively, in each of these experiments. Moreover, we notice that these data sets contain sentences in different languages and use langdetect4 to remove sentences in incorrect languages. We also train our models on WMT18 English→Turkish (En-Tr) as a standard low-resource scenario. Models are validated on newstest2016 and tested on newstest2017.

Previous studies on syntax-aware NMT have commonly been conducted on the WMT16 En-De and WAT English→Japanese (En-Ja) tasks, while concurrent approaches are evaluated on the WMT17 En-De task. In order to provide a generic and comprehensive evaluation of our proposed approach on large-scale data, we also train our models on the latter tasks. We follow the WAT18 preprocessing steps5 for experiments on En-Ja but use Cabocha6 to tokenize target sentences. On WMT17, we use newstest2016 and newstest2017 as validation and test sets, respectively. Table 4 lists the final sizes of each data set.

4 https://pypi.org/project/langdetect.
5 http://lotus.kuee.kyoto-u.ac.jp/WAT/WAT2018/baseline/dataPreparationJE.html.
6 https://taku910.github.io/cabocha/.

[Figure 3 (two line plots of weight against token position) appears here.]
Figure 3: Weights of normal probability density with σ2 = 1 and the means at positions 5 (left) or 4.5 (right).

Baselines We evaluate the impact of syntactic information with the following approaches:
• Transformer: We train a base Transformer model as a strong, standard baseline using the hyperparameters in the latest Tensor2Tensor (Vaswani et al., 2018) version (3).
• +S&H: Following Sennrich and Haddow (2016), we introduce syntactic information in the form of dependency labels in the embedding matrix of the Transformer encoder. More specifically, each token is associated with its dependency label, which is first embedded into a vector representation of size 10 and then used to replace the last 10 embedding dimensions of the token embedding, ensuring a final size that matches the original one.
• +MULTI-TASK: Our implementation of the multi-task approach by Currey and Heafield (2019), where a standard Transformer learns to both parse and translate source sentences. Each source sentence is first duplicated and associated with its linearized parse as the target sequence. To distinguish between the two tasks, a special tag indicating the desired task is prepended and appended to each source sentence. Finally, the parsing and translation training data are shuffled together.
• +LISA: We adapt Linguistically-Informed Self-Attention (LISA; Strubell et al. 2018) to NMT. In one attention head h, Qh and Kh are computed through a feed-forward layer, and the key-query dot product used to obtain attention weights is replaced by a bi-affine operator U. These attention weights are further supervised to attend to each token's parent by interpreting each row t as the distribution over possible parents for token t. Here, we extend the authors' approach to BPE by defining the parent of a given token as its first sub-word unit (which represents the root of the parent word). The model is trained to maximize the joint probability of translations and parent positions (a minimal sketch follows below).
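The bi-affine parent attention used by the +LISA baseline can be sketched as follows (a simplified illustration under our own assumptions, not the original LISA or Fairseq code; the value projection and multi-head plumbing are omitted):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BiaffineParentHead(nn.Module):
    """A single LISA-style syntactic attention head: Q·K^T is replaced by the
    bi-affine form Q U K^T, and each row is supervised to point at the
    token's dependency parent (its first sub-word)."""

    def __init__(self, d_model, d_head):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_head)
        self.k_proj = nn.Linear(d_model, d_head)
        self.U = nn.Parameter(torch.empty(d_head, d_head))
        nn.init.xavier_uniform_(self.U)

    def forward(self, x, parent_idx=None):
        # x: [batch, seq, d_model]; parent_idx: [batch, seq] gold parent positions (long).
        q = self.q_proj(x)                              # [B, S, d_head]
        k = self.k_proj(x)                              # [B, S, d_head]
        scores = q @ self.U @ k.transpose(1, 2)         # [B, S, S] bi-affine scores
        attn = F.softmax(scores, dim=-1)                # row t = distribution over parents of token t
        parse_loss = None
        if parent_idx is not None:
            # Auxiliary objective: maximize the probability of the gold parent position.
            parse_loss = F.cross_entropy(
                scores.reshape(-1, scores.size(-1)), parent_idx.reshape(-1)
            )
        return attn, parse_loss
```

During training, the auxiliary parse loss would simply be added to the translation cross-entropy, so that the model maximizes the joint probability of translations and parent positions.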
Table 5 (Validation BLEU; columns: NC11 En-De, NC11 De-En, WMT18 En-Tr, WMT16 En-De, WMT17 En-De, WAT En-Ja):
Transformer: 22.6, 23.8, 12.6, 29.0, 31.5, 42.2
+ data filtering: 22.8 (+0.2), 24.0 (+0.2), –, 28.7 (-0.3), –, –
+ PASCAL: 23.0 (+0.2), 24.6 (+0.6), 13.6 (+1.0), 29.2 (+0.5), 31.6 (+0.1), 43.5 (+1.3)
+ parent ignoring: 23.2 (+0.2), –, 13.7 (+0.1), –, 32.1 (+0.6), –
Table 5: Validation BLEU when incrementally adding each component used by our best-performing models.

Table 6 (Corpus: Transformer, +PASCAL):
NC11 En-De: 4,134.1, 4,188.8
NC11 De-En: 4,276.6, 4,177.4
WMT18 En-Tr: 3,559.7, 3,621.1
WMT16 En-De: 23,186.3, 23,358.8
WMT17 En-De: 23,604.1, 24,083.6
WAT En-Ja: 23,005.8, 23,073.0
Table 6: Training times (in seconds) for the Transformer baseline and Transformer+PASCAL on each data set. PASCAL adds negligible overhead.

Table 7 (Corpus: lr, (β1, β2), hP, q):
NC11 En-De: 0.0007, (0.9, 0.997), 2, 0.4
NC11 De-En: 0.0007, (0.9, 0.997), 8, 0.0
WMT18 En-Tr: 0.0007, (0.9, 0.980), 7, 0.3
WMT16 En-De: 0.0007, (0.9, 0.980), 5, 0.0
WMT17 En-De: 0.0007, (0.9, 0.997), 7, 0.3
WAT En-Ja: 0.0007, (0.9, 0.997), 7, 0.0
Table 7: Hyperparameters for the reported models. lr denotes the maximum learning rate, (β1, β2) are Adam's decay rates, hP is the number of PASCAL heads, and q is the parent ignoring probability.

Training details All experiments are based on the base Transformer architecture and optimized following the learning schedule of Vaswani et al. (2017) with 8,000 warm-up steps. Similarly, we use label smoothing ϵls = 0.1 (Szegedy et al., 2016) during training and employ beam search with a beam size of 4 and length penalty α = 0.6 (Wu et al., 2016) at inference time. We use a batch size of 32K tokens and run experiments on a cluster of 4 machines, each having 4 Nvidia P100 GPUs. See Table 6 for the training times of each experiment. For each model, we run a small grid search over the hyperparameters and select the ones giving the highest BLEU scores on the validation sets (Table 7). We use the SACREBLEU (Post, 2018) tool to compute case-sensitive BLEU scores.7 When evaluating En-Ja translations, we follow the procedure employed at WAT by computing BLEU scores after tokenizing target sentences using KyTea.8

7 Signature: BLEU+c.mixed+#.1+s.exp+tok.13a+v.1.2.12.
8 http://www.phontron.com/kytea/.

Following Vaswani et al. (2017), we train Transformer-based models for 100K steps on large-scale data. On small-scale data, we train for 20K steps and use a dropout probability Pdrop = 0.3, as these settings let the Transformer baseline achieve higher performance on this data size. For instance, in WMT18 En-Tr, our baseline outperforms the one in Currey and Heafield (2019) by +3.5 BLEU.

B Analysis

Multiplication vs. addition In Equation (2), we calculated the weighted scores by multiplying the self-attention scores by the weights derived from the distance to the parent token. Multiplication is, in fact, the standard way to weight values (e.g., the gating mechanism of LSTMs and GRUs). In our case, it introduces sparseness in the attention scores for non-parent tokens. Moreover, it weights gradients in backpropagation: let x and y be the attention score and dependency weight, respectively, and consider a loss l = f(z). If z = xy, then dl/dx = (df(z)/dz) · y, so the attention score receives larger gradients for dependent pairs (larger y) than for non-dependent ones (smaller y), which is sound for injecting dependency information. In contrast, addition cannot obtain such an effect because it does not affect gradients: dl/dx = df(z)/dz when z = x + y. For completeness, we trained our best NC11 models replacing multiplication by addition.
We find that BLEU scores still improve upon the baseline, meaning that our approach is robust, but they are slightly lower (−0.2 BLEU) than with multiplication.

Ablation We introduced different techniques to improve neural machine translation with syntax information. Table 5 lists the contribution of each technique, in an incremental fashion, whenever it was used by the models reported in Table 1. While removing sentences whose languages do not match the translation task can lead to better performance (NC11), the precision of the detection tool plays a major role at large scale. In WMT16, langdetect removes more than 200K sentences and leads to performance losses. It would also drop 19K pairs on the clean WAT En-Ja data. The proposed PASCAL mechanism is the component that most improves the performance of the models, achieving up to +1.0 and +1.3 BLEU on the distant En-Tr and En-Ja pairs, respectively. With the exception of NC11 En-De, we find parent ignoring useful on the noisier WMT18 En-Tr and WMT17 En-De datasets. In the former, low-resource case, the benefits of parent ignoring are minimal, but it proves fundamental on the large-scale WMT17 data, where it leads to significant gains when paired with the PASCAL mechanism.9 Finally, looking at the number of PASCAL heads in Table 7, we notice that most models rely on a large number of syntax-aware heads. Raganato and Tiedemann (2018) found that only a few attention heads per layer encoded a significant amount of syntactic dependencies. Our study shows that the Transformer model can be improved by having more attention heads learn syntactic dependencies.

9 Note that this ablation is obtained by stripping away each component from the best-performing models; hence, only seeing +0.1 for PASCAL on WMT17 En-De does not mean that PASCAL is not helpful in this task, but rather that combining it with parent ignoring gives better performance (our second-best model achieved +0.5 by using PASCAL only).
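As a small numerical illustration of the multiplication-vs-addition gradient argument from Appendix B (a toy check with a stand-in loss f(z) = z², not part of the original experiments):

```python
import torch

# With z = x * y the attention score x receives a gradient scaled by the
# dependency weight y; with z = x + y the gradient ignores y entirely.
x = torch.tensor(0.7, requires_grad=True)   # attention score
y = torch.tensor(0.9)                       # dependency weight (e.g. a parent pair)
f = lambda z: z ** 2                        # stand-in for the downstream loss

f(x * y).backward()
print(x.grad)        # f'(x*y) * y = 2 * 0.63 * 0.9 ≈ 1.134  -> scaled by y

x.grad = None
f(x + y).backward()
print(x.grad)        # f'(x+y) = 2 * 1.6 = 3.2               -> independent of y
```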
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1628–1639 July 5 - 10, 2020. ©2020 Association for Computational Linguistics

Improving Massively Multilingual Neural Machine Translation and Zero-Shot Translation

Biao Zhang1 Philip Williams1 Ivan Titov1,2 Rico Sennrich3,1
1School of Informatics, University of Edinburgh 2ILLC, University of Amsterdam 3Department of Computational Linguistics, University of Zurich
[email protected], {pwillia4,ititov}@inf.ed.ac.uk, [email protected]

Abstract

Massively multilingual models for neural machine translation (NMT) are theoretically attractive, but often underperform bilingual models and deliver poor zero-shot translations. In this paper, we explore ways to improve them. We argue that multilingual NMT requires stronger modeling capacity to support language pairs with varying typological characteristics, and overcome this bottleneck via language-specific components and deepening NMT architectures. We identify the off-target translation issue (i.e. translating into a wrong target language) as the major source of the inferior zero-shot performance, and propose random online backtranslation to enforce the translation of unseen training language pairs. Experiments on OPUS-100 (a novel multilingual dataset with 100 languages) show that our approach substantially narrows the performance gap with bilingual models in both one-to-many and many-to-many settings, and improves zero-shot performance by ∼10 BLEU, approaching conventional pivot-based methods.1

1 Introduction

With the great success of neural machine translation (NMT) on bilingual datasets (Bahdanau et al., 2015; Vaswani et al., 2017; Barrault et al., 2019), there is renewed interest in multilingual translation where a single NMT model is optimized for the translation of multiple language pairs (Firat et al., 2016a; Johnson et al., 2017; Lu et al., 2018; Aharoni et al., 2019). Multilingual NMT eases model deployment and can encourage knowledge transfer among related language pairs (Lakew et al., 2018; Tan et al., 2019), improve low-resource translation (Ha et al., 2016; Arivazhagan et al., 2019b), and enable zero-shot translation (i.e. direct translation between a language pair never seen in training) (Firat et al., 2016b; Johnson et al., 2017; Al-Shedivat and Parikh, 2019; Gu et al., 2019).

1 We release our code at https://github.com/bzhangGo/zero. We release the OPUS-100 dataset at https://github.com/EdinburghNLP/opus-100-corpus.

Source: Jusqu'à ce qu'on trouve le moment clé, celui où tu as su que tu l'aimais.
Reference: Bis wir den unverkennbaren Moment gefunden haben, den Moment, wo du wusstest, du liebst ihn.
Zero-Shot: Jusqu'à ce qu'on trouve le moment clé, celui où tu as su que tu l'aimais.
Source: Les États membres ont été consultés et ont approuvé cette proposition.
Reference: Die Mitgliedstaaten wurden konsultiert und sprachen sich für diesen Vorschlag aus.
Zero-Shot: Les Member States have been consultedés and have approved this proposal.
Table 1: Illustration of the off-target translation issue with French→German zero-shot translations with a multilingual NMT model. Our baseline multilingual NMT model often translates into the wrong language for zero-shot language pairs, such as copying the source sentence or translating into English rather than German.
Despite these potential benefits, multilingual NMT tends to underperform its bilingual counterparts (Johnson et al., 2017; Arivazhagan et al., 2019b) and results in considerably worse translation performance when many languages are accommodated (Aharoni et al., 2019). Since multilingual NMT must distribute its modeling capacity between different translation directions, we ascribe this deteriorated performance to the deficient capacity of single NMT models and seek solutions that are capable of overcoming this capacity bottleneck. We propose language-aware layer normalization and linear transformation to relax the representation constraint in multilingual NMT models. The linear transformation is inserted in-between the encoder and the decoder so as to facilitate the induction of language-specific translation correspondences. We also investigate deep NMT architectures (Wang et al., 2019a; Zhang et al., 2019), aiming to further reduce the performance gap with bilingual methods.

Another pitfall of massively multilingual NMT is its poor zero-shot performance, particularly compared to pivot-based models. Without access to parallel training data for zero-shot language pairs, multilingual models easily fall into the trap of off-target translation, where a model ignores the given target information and translates into a wrong language, as shown in Table 1. To avoid such a trap, we propose the random online backtranslation (ROBT) algorithm. ROBT finetunes a pretrained multilingual NMT model for unseen training language pairs with pseudo parallel batches generated by back-translating the target-side training data.2 We perform backtranslation (Sennrich et al., 2016a) into randomly picked intermediate languages to ensure good coverage of ∼10,000 zero-shot directions. Although backtranslation has been successfully applied to zero-shot translation (Firat et al., 2016b; Gu et al., 2019; Lakew et al., 2019), whether it works in the massively multilingual set-up remained an open question, and we investigate it in our work.

For experiments, we collect OPUS-100, a massively multilingual dataset sampled from OPUS (Tiedemann, 2012). OPUS-100 consists of 55M English-centric sentence pairs covering 100 languages. As far as we know, no similar dataset is publicly available.3 We have released OPUS-100 to facilitate future research.4 We adopt the Transformer model (Vaswani et al., 2017) and evaluate our approach under one-to-many and many-to-many translation settings. Our main findings are summarized as follows:
• Increasing the capacity of multilingual NMT yields large improvements and narrows the performance gap with bilingual models. Low-resource translation benefits more from the increased capacity.
• Language-specific modeling and deep NMT architectures can slightly improve zero-shot translation, but fail to alleviate the off-target translation issue.
• Finetuning multilingual NMT with ROBT substantially reduces the proportion of off-target translations (by ∼50%) and delivers an improvement of ∼10 BLEU in zero-shot settings, approaching the conventional pivot-based method. We show that finetuning with ROBT converges within a few thousand steps.

2 Note that backtranslation actually converts the zero-shot problem into a zero-resource problem. We follow previous work and continue referring to zero-shot translation, even when using synthetic training data.
3 Previous studies (Aharoni et al., 2019; Arivazhagan et al., 2019b) adopt in-house data which was not released.
4 https://github.com/EdinburghNLP/opus-100-corpus
2 Related Work

Pioneering work on multilingual NMT began with multitask learning, which shared the encoder for one-to-many translation (Dong et al., 2015) or the attention mechanism for many-to-many translation (Firat et al., 2016a). These methods required a dedicated encoder or decoder for each language, limiting their scalability. By contrast, Lee et al. (2017) exploited character-level inputs and adopted a shared encoder for many-to-one translation. Ha et al. (2016) and Johnson et al. (2017) further successfully trained a single NMT model for multilingual translation with a target language symbol guiding the translation direction. This approach serves as our baseline. Still, this paradigm forces different languages into one joint representation space, neglecting their linguistic diversity. Several subsequent studies have explored different strategies to mitigate this representation bottleneck, ranging from reorganizing parameter sharing (Blackwood et al., 2018; Sachan and Neubig, 2018; Lu et al., 2018; Wang et al., 2019c; Vázquez et al., 2019), designing language-specific parameter generators (Platanios et al., 2018), and decoupling multilingual word encodings (Wang et al., 2019b) to language clustering (Tan et al., 2019). Our language-specific modeling continues in this direction, but with a special focus on broadening normalization layers and encoder outputs.

Multilingual NMT allows us to perform zero-shot translation, although the quality is not guaranteed (Firat et al., 2016b; Johnson et al., 2017). We observe that multilingual NMT often translates into the wrong target language on zero-shot directions (Table 1), resonating with the 'missing ingredient problem' (Arivazhagan et al., 2019a) and the spurious correlation issue (Gu et al., 2019). Approaches to improve zero-shot performance fall into two categories: 1) developing novel cross-lingual regularizers, such as the alignment regularizer (Arivazhagan et al., 2019a) and the consistency regularizer (Al-Shedivat and Parikh, 2019); and 2) generating artificial parallel data with backtranslation (Firat et al., 2016b; Gu et al., 2019; Lakew et al., 2019) or pivot-based translation (Currey and Heafield, 2019). The proposed ROBT algorithm belongs to the second category. In contrast to Gu et al. (2019) and Lakew et al. (2019), however, we perform online backtranslation for each training step with randomly selected intermediate languages. ROBT avoids decoding the whole training set for each zero-shot language pair and can therefore scale to massively multilingual settings.

Our work belongs to a line of research on massively multilingual translation (Aharoni et al., 2019; Arivazhagan et al., 2019b). Aharoni et al. (2019) demonstrated the feasibility of massively multilingual NMT and reported encouraging results. We continue in this direction by developing approaches that improve both multilingual and zero-shot performance. Independently from our work, Arivazhagan et al. (2019b) also find that increasing model capacity with deep architectures (Wang et al., 2019a; Zhang et al., 2019) substantially improves multilingual performance. Concurrent work by Bapna and Firat (2019) introduces task-specific and lightweight adaptors for fast and scalable model adaptation. Compared to these adaptors, our language-aware layers are jointly trained with the whole NMT model from scratch without relying on any pretraining.
3 Multilingual NMT

We briefly review the multilingual approach (Ha et al., 2016; Johnson et al., 2017) and the Transformer model (Vaswani et al., 2017), which are used as our baseline. Johnson et al. (2017) rely on prepending tokens specifying the target language to each source sentence. In that way, a single NMT model can be trained on the modified multilingual dataset and used to perform multilingual translation. Given a source sentence x = (x1, x2, . . . , x|x|), its target reference y = (y1, y2, . . . , y|y|) and the target language token t,5 multilingual NMT translates under the encoder-decoder framework (Bahdanau et al., 2015):

H = Encoder([t, x]),   (1)
S = Decoder(y, H),   (2)

where H ∈ R^(|x|×d) and S ∈ R^(|y|×d) denote the encoder and decoder outputs, respectively, and d is the model dimension.

5 t is in the form of "<2X>", where X is a language name, such as <2EN> meaning translating into English.

We employ the Transformer (Vaswani et al., 2017) as the backbone NMT model due to its superior multilingual performance (Lakew et al., 2018). The encoder is a stack of L = 6 identical layers, each containing a self-attention sublayer and a point-wise feedforward sublayer. The decoder follows a similar structure, except for an extra cross-attention sublayer used to condition the decoder on the source sentence. Each sublayer is equipped with a residual connection (He et al., 2015), followed by layer normalization (Ba et al., 2016, LN(·)):

ā = LN(a | g, b) = ((a − µ) / σ) ⊙ g + b,   (3)

where ⊙ denotes element-wise multiplication, and µ and σ are the mean and standard deviation of the input vector a ∈ R^d, respectively. g ∈ R^d and b ∈ R^d are model parameters. They control the sharpness and location of the regularized layer output ā. Layer normalization has proven effective in accelerating model convergence (Ba et al., 2016).

4 Approach

Despite its success, multilingual NMT still suffers from 1) insufficient modeling capacity, where including more languages results in a reduction in translation quality (Aharoni et al., 2019); and 2) off-target translation, where models translate into a wrong target language on zero-shot directions (Arivazhagan et al., 2019a). These drawbacks become severe in massively multilingual settings, and we explore approaches to alleviate them. We hypothesize that the vanilla Transformer has insufficient capacity and search for model-level strategies, such as deepening the Transformer and devising language-specific components. By contrast, we regard the lack of parallel data as the reason behind the off-target issue. We resort to a data-level strategy by creating, in online fashion, artificial parallel training data for each zero-shot language pair in order to encourage its translation.

Deep Transformer One natural way to improve the capacity is to increase model depth. Deeper neural models are often capable of inducing more generalizable ('abstract') representations and capturing more complex dependencies, and have shown encouraging performance on bilingual translation (Bapna et al., 2018; Zhang et al., 2019; Wang et al., 2019a). We adopt the depth-scaled initialization method (Zhang et al., 2019) to train a deep Transformer for multilingual translation.

Language-aware Layer Normalization Regardless of linguistic differences, layer normalization in multilingual NMT simply constrains all languages into one joint Gaussian space, which makes learning more difficult. We propose to relax this restriction by conditioning the normalization on the given target language token t (LALN for short) as follows:

ā = LN(a | g_t, b_t).   (4)
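A minimal PyTorch-style sketch of such language-aware layer normalization, with one gain/bias pair per target language (our own illustration of Eq. (4) under stated assumptions, not the released implementation), might look as follows:

```python
import torch
import torch.nn as nn

class LanguageAwareLayerNorm(nn.Module):
    """Layer normalization with per-target-language gain g_t and bias b_t."""

    def __init__(self, num_langs, d_model, eps=1e-6):
        super().__init__()
        self.gain = nn.Parameter(torch.ones(num_langs, d_model))   # g_t
        self.bias = nn.Parameter(torch.zeros(num_langs, d_model))  # b_t
        self.eps = eps

    def forward(self, a, lang_id):
        # a: [batch, seq, d_model]; lang_id: [batch] target-language indices.
        mu = a.mean(dim=-1, keepdim=True)
        sigma = a.std(dim=-1, keepdim=True)
        normed = (a - mu) / (sigma + self.eps)
        g = self.gain[lang_id].unsqueeze(1)   # [batch, 1, d_model]
        b = self.bias[lang_id].unsqueeze(1)
        return normed * g + b
```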
We apply this formula to all normalization layers, and leave the study of conditioning on source language information for the future.

Language-aware Linear Transformation Different language pairs have different translation correspondences or word alignments (Koehn, 2010). In addition to LALN, we introduce a target-language-aware linear transformation (LALT for short) between the encoder and the decoder to enhance the freedom of multilingual NMT in expressing flexible translation relationships. We adapt Eq. (2) as follows:

S = Decoder(y, H W_t),   (5)

where W_t ∈ R^(d×d) denotes model parameters. Note that adding one more target language in LALT brings in only one weight matrix.6 Compared to existing work (Firat et al., 2016b; Sachan and Neubig, 2018), LALT reaches a better trade-off between expressivity and scalability.

6 We also attempted to factorize W_t into smaller matrices/vectors to reduce the number of parameters. Unfortunately, the final performance was rather disappointing.

Random Online Backtranslation Prior studies on backtranslation for zero-shot translation decode the whole training set for each zero-shot language pair (Gu et al., 2019; Lakew et al., 2019), and scalability to massively multilingual translation is questionable – in our setting, the number of zero-shot translation directions is 9702. We address scalability by performing online backtranslation paired with randomly sampled intermediate languages. Algorithm 1 shows the details of ROBT: for each training instance (x_k, y_k, t_k), we uniformly sample an intermediate language t′_k (t′_k ≠ t_k), back-translate y_k into t′_k to obtain x′_k, and train on the new instance (x′_k, y_k, t_k). Although x′_k may be poor initially (translations are produced on-line by the model being trained), ROBT still benefits from the translation signal of t′_k → t_k. To reduce the computational cost, we implement batch-based greedy decoding for line 7.

Algorithm 1: Random Online Backtranslation
Input: Multilingual training data, D; Pretrained multilingual model, M; Maximum finetuning step, N; Finetuning batch size, B; Target language set, T
Output: Zero-shot enabled model, M
1  i ← 0
2  while i ≤ N ∧ not converged do
3      B ← sample batch from D
4      for k ← 1 to B do
5          (x_k, y_k, t_k) ← B_k
6          t′_k ∼ Uniform(T) such that t′_k ≠ t_k
7          x′_k ← M([t′_k, y_k])   // back-translate t_k → t′_k to produce a training example for t′_k → t_k
8          B ← B ∪ (x′_k, y_k, t_k)
9      Optimize M using B
10     i ← i + 1
11 return M
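Algorithm 1 can be paraphrased in Python roughly as follows (a simplified sketch, not the released code; greedy_translate and training_loss are hypothetical stand-ins for batch-based greedy decoding and the standard NMT training loss):

```python
import random

def robt_finetune(model, data_iter, target_langs, optimizer, max_steps):
    """Random online backtranslation finetuning loop (sketch of Algorithm 1)."""
    for step, batch in enumerate(data_iter):
        if step >= max_steps:
            break
        augmented = list(batch)                     # original (x, y, t) triples
        for (x, y, t) in batch:
            # Sample an intermediate language different from the true target.
            t_prime = random.choice([l for l in target_langs if l != t])
            # Back-translate y into t_prime with the current model (online),
            # yielding a synthetic source for the zero-shot pair t_prime -> t.
            x_prime = model.greedy_translate(y, target_lang=t_prime)
            augmented.append((x_prime, y, t))
        loss = model.training_loss(augmented)       # standard NMT loss on the
        optimizer.zero_grad()                       # original + synthetic pairs
        loss.backward()
        optimizer.step()
    return model
```

In practice the back-translation step is batched on GPU; the point of the sketch is that synthetic pairs for zero-shot directions are generated online, one randomly sampled intermediate language per instance, rather than by decoding the whole training set for each language pair.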
5 OPUS-100

Recent work has scaled up multilingual NMT from a handful of languages to tens or hundreds, with many-to-many systems being capable of translation in thousands of directions. Following Aharoni et al. (2019), we created an English-centric dataset, meaning that all training pairs include English on either the source or target side. Translation for any language pair that does not include English is zero-shot or must be pivoted through English. We created OPUS-100 by sampling data from the OPUS collection (Tiedemann, 2012). OPUS-100 is at a similar scale to Aharoni et al. (2019)'s, with 100 languages (including English) on both sides and up to 1M training pairs for each language pair. We selected the languages based on the volume of parallel data available in OPUS.

The OPUS collection is comprised of multiple corpora, ranging from movie subtitles to GNOME documentation to the Bible. We did not curate the data or attempt to balance the representation of different domains, instead opting for the simplest approach of downloading all corpora for each language pair and concatenating them. We randomly sampled up to 1M sentence pairs per language pair for training, as well as 2000 for validation and 2000 for testing.7 To ensure that there was no overlap (at the monolingual sentence level) between the training and validation/test data, we applied a filter during sampling to exclude sentences that had already been sampled. Note that this was done cross-lingually, so an English sentence in the Portuguese-English portion of the training data could not occur in the Hindi-English test set, for instance. OPUS-100 contains approximately 55M sentence pairs. Of the 99 language pairs, 44 have 1M sentence pairs of training data, 73 have at least 100k, and 95 have at least 10k. To evaluate zero-shot translation, we also sampled 2000 sentence pairs of test data for each of the 15 pairings of Arabic, Chinese, Dutch, French, German, and Russian. Filtering was used to exclude sentences already in OPUS-100.

7 For efficiency, we only use 200 sentences per language pair for validation in our multilingual experiments.

Table 2 (columns: ID, Model Architecture, L, #Param, BLEU94, WR, BLEU4):
1  Transformer, Bilingual  6  106M  –  –  20.90
2  Transformer, Bilingual  12  150M  –  –  22.75
3  Transformer  6  106M  24.64  ref  18.95
4  3 + MATT  6  99M  23.81  20.2  17.95
5  4 + LALN  6  102M  24.22  28.7  18.50
6  4 + LALT  6  126M  27.11  72.3  20.28
7  4 + LALN + LALT  6  129M  27.18  75.5  20.08
8  4  12  137M  25.69  81.9  19.13
9  7  12  169M  28.04  91.5  19.93
10  7  24  249M  29.60  92.6  21.23
Table 2: Test BLEU for one-to-many translation on OPUS-100 (100 languages). "Bilingual": bilingual NMT, "L": model depth (for both encoder and decoder), "#Param": parameter number, "WR": win ratio (%) compared to ref (3), MATT: the merged attention (Zhang et al., 2019). LALN and LALT denote the proposed language-aware layer normalization and linear transformation, respectively. "BLEU94/BLEU4": average BLEU over all 94 translation directions in the test set and over En→De/Zh/Br/Te, respectively. Higher BLEU and WR indicate better results. Best scores are highlighted in bold.

6 Experiments

6.1 Setup

We perform one-to-many (English-X) and many-to-many (English-X ∪ X-English) translation on OPUS-100 (|T| is 100). We apply byte pair encoding (BPE) (Sennrich et al., 2016b; Kudo and Richardson, 2018) to handle multilingual words with a joint vocabulary size of 64k. We randomly shuffle the training set to mix instances of different language pairs. We adopt BLEU (Papineni et al., 2002) for translation evaluation with the toolkit SacreBLEU (Post, 2018).8 We employ the langdetect library9 to detect the language of translations, and measure the translation-language accuracy for zero-shot cases. Rather than providing numbers for each language pair, we report average BLEU over all 94 language pairs with test sets (BLEU94). We also show the win ratio (WR), counting the proportion of tasks where our approach outperforms its baseline. Apart from multilingual NMT, our baselines also involve bilingual NMT and pivot-based translation (only for zero-shot comparison). We select four typologically different target languages (German/De, Chinese/Zh, Breton/Br, Telugu/Te) with varied training data sizes for comparison to bilingual models, as applying bilingual NMT to each language pair is resource-consuming.
We report average BLEU over these four languages as BLEU4. We reuse the multilingual BPE vocabulary for bilingual NMT. We train all NMT models with the Transformer base settings (512/2048, 8 heads) (Vaswani et al., 2017). We pair our approaches with the merged attention (MATT) (Zhang et al., 2019) to reduce training time. Other details about model settings are in the Appendix.

8 Signature: BLEU+case.mixed+numrefs.1+smooth.exp+tok.13a+version.1.4.1
9 https://github.com/Mimino666/langdetect

Table 3 (columns: ID, Model Architecture, L, #Param; then BLEU94, WR, BLEU4 without ROBT; then BLEU94, WR, BLEU4 with ROBT):
1  Transformer, Bilingual  6  110M  –  –  20.28  –  –  –
2  Transformer  6  110M  19.50  ref  15.35  18.75  4.3  14.73
3  2 + MATT  6  103M  18.49  5.3  14.90  17.85  6.4  14.38
4  3 + LALN + LALT  6  133M  21.39  78.7  18.13  20.81  69.1  17.45
5  3  12  141M  20.77  94.7  16.08  20.24  84.0  15.80
6  4  12  173M  22.86  97.9  19.25  22.39  97.9  18.23
7  4  24  254M  23.96  100.0  19.83  23.36  97.9  19.45
Table 3: English→X test BLEU for many-to-many translation on OPUS-100 (100 languages). "WR": win ratio (%) compared to ref ((2) w/o ROBT). ROBT denotes the proposed random online backtranslation method.

Table 4 (columns: ID, Model Architecture, L, #Param; then BLEU94, WR, BLEU4 without ROBT; then BLEU94, WR, BLEU4 with ROBT):
1  Transformer, Bilingual  6  110M  –  –  21.23  –  –  –
2  Transformer  6  110M  27.60  ref  23.35  27.02  14.9  22.50
3  2 + MATT  6  103M  26.90  2.1  22.78  26.28  4.3  21.53
4  3 + LALN + LALT  6  133M  27.50  37.2  23.05  27.22  23.4  23.30
5  3  12  141M  29.15  98.9  24.15  28.80  91.5  24.03
6  4  12  173M  29.49  97.9  24.53  29.54  96.8  25.43
7  4  24  254M  31.36  98.9  26.03  30.98  95.7  26.78
Table 4: X→English test BLEU for many-to-many translation on OPUS-100 (100 languages). "WR": win ratio (%) compared to ref ((2) w/o ROBT).

6.2 Results on One-to-Many Translation

Table 2 summarizes the results. The inferior performance of multilingual NMT ((3)) against its bilingual counterpart ((1)) reflects the capacity issue (-1.95 BLEU4). Replacing the self-attention with MATT slightly deteriorates performance (-0.83 BLEU94, (3)→(4)); we still use MATT to train deep models more efficiently. Our ablation study ((4)-(7)) shows that enriching the language awareness in multilingual NMT substantially alleviates this capacity problem. Relaxing the normalization constraints with LALN gains 0.41 BLEU94 with 8.5% WR ((4)→(5)). Decoupling different translation relationships with LALT delivers an improvement of 3.30 BLEU94 and 52.1% WR ((4)→(6)). Combining LALT and LALN demonstrates their complementarity (+3.37 BLEU94 and +55.3% WR, (4)→(7)), significantly outperforming the multilingual baseline (+2.54 BLEU94, (3)→(7)), albeit still behind the bilingual models (-0.82 BLEU4, (1)→(7)). Deepening the Transformer also improves the modeling capacity (+1.88 BLEU94, (4)→(8)). Although the deep Transformer performs worse than LALN+LALT under a similar number of model parameters in terms of BLEU (-1.49 BLEU94, (7)→(8)), it shows more consistent improvements across different language pairs (+6.4% WR). We obtain better performance when integrating all approaches ((9)). By increasing the model depth to 24 ((10)), the Transformer with our approach yields a score of 29.60 BLEU94 and 21.23 BLEU4, beating the baseline ((3)) on 92.6% of the tasks and outperforming the base bilingual model ((1)) by 0.33 BLEU4. Our approach significantly narrows the performance gap between multilingual NMT and bilingual NMT (20.90 BLEU4 → 21.23 BLEU4, (1)→(10)), although similarly deepening bilingual models surpasses our approach by 1.52 BLEU4 ((10)→(2)).
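For reference, the translation-language accuracy used for the zero-shot evaluation in Section 6.5 relies on langdetect (Section 6.1); a rough sketch of such a metric (our own illustration, assuming the target-language labels match langdetect's ISO 639-1 codes) is:

```python
from langdetect import detect, DetectorFactory

DetectorFactory.seed = 0  # make langdetect deterministic across runs

def translation_language_accuracy(hypotheses, target_lang):
    """Fraction of system outputs that langdetect identifies as being in the
    intended target language (approximately the ACCzero metric)."""
    hits = 0
    for hyp in hypotheses:
        try:
            hits += int(detect(hyp) == target_lang)
        except Exception:   # langdetect raises on empty or undetectable input
            pass
    return hits / max(len(hypotheses), 1)

# e.g. translation_language_accuracy(german_outputs, "de")
```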
6.3 Results on Many-to-Many Translation

We train many-to-many NMT models on the concatenation of the one-to-many dataset (English→X) and its reversed version (X→English), and evaluate the zero-shot performance on X→X language pairs. Table 3 and Table 4 show the translation results for English→X and X→English, respectively.10 We focus on the translation performance w/o ROBT in this subsection.

10 Note that the one-to-many training and test sets were not yet aggressively filtered for sentence overlap as described in Section 5, so results in Table 2 and Table 3 are not directly comparable.

Table 5 (columns: ID, Model Architecture, L, #Param; then English→X High, Med, Low; then X→English High, Med, Low):
1  Transformer  6  110M  20.69  20.82  15.18  26.99  28.60  27.49
2  1 + MATT  6  103M  19.70  19.77  14.17  26.32  27.81  26.84
3  2 + LALN + LALT  6  133M  21.07  22.88  19.99  27.03  28.60  26.97
4  2  12  141M  21.67  22.17  16.95  28.39  30.24  29.26
5  3  12  173M  22.48  24.38  21.58  28.66  30.73  29.50
6  3  24  254M  23.69  25.61  22.24  30.29  32.58  31.90
Table 5: Test BLEU for High/Medium/Low (High/Med/Low) resource language pairs in the many-to-many setting on OPUS-100 (100 languages). We report average BLEU for each category.

Table 6 (columns: ID, Model Architecture, L, #Param; then BLEUzero, ACCzero without ROBT; then BLEUzero, ACCzero with ROBT):
1  Transformer, Pivot & Bilingual  6  110M  12.98  84.87  –  –
2  Transformer  6  110M  3.97  36.04  10.11  86.08
3  2 + MATT  6  103M  3.49  31.62  9.67  85.87
4  3 + LALN + LALT  6  133M  4.02  45.43  11.23  87.40
5  3  12  141M  4.71  39.40  11.87  87.44
6  4  12  173M  5.41  51.40  12.62  87.99
7  4  24  254M  5.24  47.91  14.08  87.68
8  7 + Pivot  24  254M  14.71  84.81  14.78  85.09
Table 6: Test BLEU and translation-language accuracy for zero-shot translation in the many-to-many setting on OPUS-100 (100 languages). "BLEUzero/ACCzero": average BLEU/accuracy over all zero-shot translation directions in the test set, "Pivot": the pivot-based translation that first translates one source sentence into English (X→English NMT), and then into the target language (English→X NMT). Lower accuracy indicates more severe off-target translation. The average Pearson correlation coefficient between language accuracy and the corresponding BLEU is 0.93 (significant at p < 0.01).

Compared to the one-to-many translation, the many-to-many translation must accommodate twice as many translation directions. We observe that many-to-many NMT models suffer more serious capacity issues on English→X tasks (-4.93 BLEU4, (1)→(2) in Table 3 versus -1.95 BLEU4 in Table 2), where the deep Transformer with LALN + LALT effectively reduces this gap to -0.45 BLEU4 ((1)→(7), Table 3), resonating with our findings from Table 2. By contrast, multilingual NMT benefits X→English tasks considerably from the multitask learning alone, outperforming bilingual NMT by 2.13 BLEU4 ((1)→(2), Table 4). Enhancing model capacity further enlarges this margin to +4.80 BLEU4 ((1)→(7), Table 4).

We find that the overall quality of English→X translation (19.50/23.96 BLEU94, (2)/(7), Table 3) lags far behind that of its X→English counterpart (27.60/31.36 BLEU94, (2)/(7), Table 4), regardless of the modeling capacity. We ascribe this to the highly skewed training data distribution, where half of the training set uses English as the target. This strengthens the ability of the decoder to translate into English, and also encourages knowledge transfer for X→English language pairs. LALN and LALT show the largest benefit for English→X (+2.9 BLEU94, (3)→(4), Table 3), and only a small benefit for X→English (+0.6 BLEU94, (3)→(4), Table 4).
This makes sense considering that LALN and LALT are specific to the target language, so capacity is mainly increased for English→X. Deepening the Transformer yields benefits in both directions (+2.57 BLEU94 for English→X, +3.86 BLEU94 for X→English; (4)→(7), Tables 3 and 4).

6.4 Effect of Training Corpus Size

Our multilingual training data is distributed unevenly across different language pairs, which could affect the knowledge transfer delivered by language-aware modeling and the deep Transformer in multilingual translation. We investigate this effect by grouping different language pairs in OPUS-100 into three categories according to their training data size: High (≥0.9M, 45), Low (<0.1M, 18) and Medium (others, 31). Table 5 shows the results. Language-aware modeling benefits low-resource language pairs the most on English→X translation (+5.82 BLEU, Low versus +1.37/+3.11 BLEU, High/Med, (2)→(3)), but has marginal impact on X→English translation, as analyzed in Section 6.3. By contrast, deep Transformers yield similar benefits across different data scales (+2.38 average BLEU, English→X and +2.31 average BLEU, X→English, (2)→(4)). We obtain the best performance by integrating both ((1)→(6)), with a clear positive transfer to low-resource language pairs.

6.5 Results on Zero-Shot Translation

Previous work shows that a well-trained multilingual model can do zero-shot X→Y translation directly (Firat et al., 2016b; Johnson et al., 2017). Our results in Table 6 reveal that the translation quality is rather poor (3.97 BLEUzero, (2) w/o ROBT) compared to the pivot-based bilingual baseline (12.98 BLEUzero, (1)) under the massively multilingual setting (Aharoni et al., 2019), although translations into different target languages show varied performance. The marginal gain by the deep Transformer with LALN + LALT (+1.44 BLEUzero, (2)→(6), w/o ROBT) suggests that weak model capacity is not the major cause of this inferior performance. In a manual analysis of the zero-shot NMT outputs, we found many instances of off-target translation (Table 1). We use translation-language accuracy to measure the proportion of translations that are in the correct target language. Results in Table 6 show that there is a huge accuracy gap between the multilingual and the pivot-based method (-48.83% ACCzero, (1)→(2), w/o ROBT), from which we conclude that the off-target translation issue is one source of the poor zero-shot performance.

We apply ROBT to multilingual models by finetuning them for an extra 100k steps with the same batch size as for training. Table 6 shows that ROBT substantially improves ACCzero by 35%∼50%, reaching 85%∼87% under different model settings. The multilingual Transformer with ROBT achieves a translation improvement of up to 10.11 BLEUzero ((2) w/o ROBT → (7) w/ ROBT), outperforming the bilingual baseline by 1.1 BLEUzero ((1) w/o ROBT → (7) w/ ROBT) and approaching the pivot-based multilingual baseline (-0.63 BLEUzero, (8) w/o ROBT → (7) w/ ROBT).11 The strong Pearson correlation between the accuracy and BLEU (0.92 on average, significant at p < 0.01) suggests that the improvement on the off-target translation issue explains the increased translation performance to a large extent. Results in Table 3 and 4 show that ROBT's success on zero-shot translation comes at the cost of sacrificing ∼0.50 BLEU94 and ∼4% WR on English→X and X→English translation.

11 Note that ROBT improves all zero-shot directions due to its randomness in sampling the intermediate languages. We do not bias ROBT to the given zero-shot test set.
[Figure 1 (line plot of zero-shot average test BLEU against training steps for TF (6 Layers), TF + All (6 Layers), TF + MATT (12 Layers), and TF + All (12 Layers)) appears here.]
Figure 1: Zero-shot average test BLEU for multilingual NMT models finetuned by ROBT. ALL = MATT + LALN + LALT. Multilingual models with ROBT quickly converge on zero-shot directions.

Table 7 (Setting: BLEUzero):
6-to-6: 11.98
100-to-100: 11.23
Table 7: Zero-shot translation quality for ROBT under different settings. "100-to-100": the setting used in the above experiments; we set T to all target languages. "6-to-6": T only includes the zero-shot languages in the test set. We employ a 6-layer Transformer with LALN and LALT for these experiments.

We also note that models with more capacity yield higher language accuracy (+7.78%/+13.81% ACCzero, (3)→(5) / (3)→(4), w/o ROBT) and deliver better zero-shot performance both before (+1.22/+0.53 BLEUzero, (3)→(5) / (3)→(4), w/o ROBT) and after ROBT (+2.20/+1.56 BLEUzero, (3)→(5) / (3)→(4), w/ ROBT). In other words, increasing the modeling capacity benefits zero-shot translation and improves robustness.

Convergence of ROBT. Unlike prior studies (Gu et al., 2019; Lakew et al., 2019), we resort to an online method for backtranslation. The curve in Figure 1 shows that ROBT is very effective, and takes only a few thousand steps to converge, suggesting that it is unnecessary to decode the whole training set for each zero-shot language pair. We leave it to future work to explore whether different back-translation strategies (other than greedy decoding) will deliver larger and continued benefits with ROBT.

Impact of T on ROBT. ROBT heavily relies on T, the set of target languages considered, to distribute the modeling capacity on zero-shot directions. To study its impact, we provide a comparison by constraining T to the 6 languages in the zero-shot test set. Results in Table 7 show that the biased ROBT outperforms the baseline by 0.75 BLEUzero. By narrowing T, more capacity is scheduled to the focused languages, which results in performance improvements. But the small scale of this improvement suggests that the number of zero-shot directions is not ROBT's biggest bottleneck.

7 Conclusion and Future Work

This paper explores approaches to improve massively multilingual NMT, especially on zero-shot translation. We show that multilingual NMT suffers from weak capacity, and propose to enhance it by deepening the Transformer and devising language-aware neural models. We find that multilingual NMT often generates off-target translations on zero-shot directions, and propose to correct it with a random online backtranslation algorithm. We empirically demonstrate the feasibility of backtranslation in massively multilingual settings to allow for massively zero-shot translation for the first time. We release OPUS-100, a multilingual dataset from OPUS including 100 languages with around 55M sentence pairs for future study. Our experiments on this dataset show that the proposed approaches substantially increase translation performance, narrowing the performance gap with bilingual NMT models and pivot-based methods.

In the future, we will develop lightweight alternatives to LALT to reduce the number of model parameters. We will also exploit novel strategies to break the upper bound of ROBT and obtain larger zero-shot improvements, such as generative modeling (Zhang et al., 2016; Su et al., 2018; García et al., 2020; Zheng et al., 2020).
Acknowledgments This project has received funding from the European Union’s Horizon 2020 Research and Innovation Programme under Grant Agreements 825460 (ELITR) and 825299 (GoURMET). This project has received support from Samsung Electronics Polska sp. z o.o. - Samsung R&D Institute Poland. Rico Sennrich acknowledges support of the Swiss National Science Foundation (MUTAMUR; no. 176727). References Roee Aharoni, Melvin Johnson, and Orhan Firat. 2019. Massively multilingual neural machine translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3874–3884, Minneapolis, Minnesota. Association for Computational Linguistics. Maruan Al-Shedivat and Ankur Parikh. 2019. Consistency by agreement in zero-shot neural machine translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1184–1197, Minneapolis, Minnesota. Association for Computational Linguistics. Naveen Arivazhagan, Ankur Bapna, Orhan Firat, Roee Aharoni, Melvin Johnson, and Wolfgang Macherey. 2019a. The missing ingredient in zero-shot neural machine translation. CoRR, abs/1903.07091. Naveen Arivazhagan, Ankur Bapna, Orhan Firat, Dmitry Lepikhin, Melvin Johnson, Maxim Krikun, Mia Xu Chen, Yuan Cao, George Foster, Colin Cherry, Wolfgang Macherey, Zhifeng Chen, and Yonghui Wu. 2019b. Massively multilingual neural machine translation in the wild: Findings and challenges. CoRR, abs/1907.05019. Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer normalization. arXiv preprint arXiv:1607.06450. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Ankur Bapna, Mia Chen, Orhan Firat, Yuan Cao, and Yonghui Wu. 2018. Training deeper neural machine translation models with transparent attention. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3028–3033, Brussels, Belgium. Association for Computational Linguistics. Ankur Bapna and Orhan Firat. 2019. Simple, scalable adaptation for neural machine translation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1538– 1548, Hong Kong, China. Association for Computational Linguistics. Loïc Barrault, Ondˇrej Bojar, Marta R. Costa-jussà, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Matthias Huck, Philipp Koehn, Shervin Malmasi, Christof Monz, Mathias Müller, Santanu Pal, Matt Post, and Marcos Zampieri. 2019. Findings of the 2019 conference on machine translation (WMT19). In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 1–61, Florence, Italy. Association for Computational Linguistics. Graeme Blackwood, Miguel Ballesteros, and Todd Ward. 2018. Multilingual neural machine translation with task-specific attention. In Proceedings of the 27th International Conference on Computational 1637 Linguistics, pages 3112–3122, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Anna Currey and Kenneth Heafield. 2019. 
Zeroresource neural machine translation with monolingual pivot data. In Proceedings of the 3rd Workshop on Neural Generation and Translation, pages 99–107, Hong Kong. Association for Computational Linguistics. Daxiang Dong, Hua Wu, Wei He, Dianhai Yu, and Haifeng Wang. 2015. Multi-task learning for multiple language translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1723–1732, Beijing, China. Association for Computational Linguistics. Orhan Firat, Kyunghyun Cho, and Yoshua Bengio. 2016a. Multi-way, multilingual neural machine translation with a shared attention mechanism. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 866–875, San Diego, California. Association for Computational Linguistics. Orhan Firat, Baskaran Sankaran, Yaser Al-onaizan, Fatos T. Yarman Vural, and Kyunghyun Cho. 2016b. Zero-resource translation with multi-lingual neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 268–277, Austin, Texas. Association for Computational Linguistics. Xavier García, Pierre Forêt, Thibault Sellam, and Ankur P. Parikh. 2020. A multilingual view of unsupervised machine translation. ArXiv, abs/2002.02955. Jiatao Gu, Yong Wang, Kyunghyun Cho, and Victor O.K. Li. 2019. Improved zero-shot neural machine translation via ignoring spurious correlations. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1258–1268, Florence, Italy. Association for Computational Linguistics. Thanh-Le Ha, Jan Niehues, and Alexander Waibel. 2016. Toward multilingual neural machine translation with universal encoder and decoder. In Proceedings of the 13th International Workshop on Spoken Language Translation (IWSLT), Seattle, USA. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2015. Deep residual learning for image recognition. CoRR, abs/1512.03385. Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Viégas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2017. Google’s multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics, 5:339–351. Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In International Conference on Learning Representations. Philipp Koehn. 2010. Statistical Machine Translation, 1st edition. Cambridge University Press, New York, NY, USA. Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66–71, Brussels, Belgium. Association for Computational Linguistics. Surafel M. Lakew, Marcello Federico, Matteo Negri, and Marco Turchi. 2019. Multilingual Neural Machine Translation for Zero-Resource Languages. arXiv e-prints, page arXiv:1909.07342. Surafel Melaku Lakew, Mauro Cettolo, and Marcello Federico. 2018. A comparison of transformer and recurrent neural networks on multilingual neural machine translation. In Proceedings of the 27th International Conference on Computational Linguistics, pages 641–652, Santa Fe, New Mexico, USA. 
Association for Computational Linguistics. Jason Lee, Kyunghyun Cho, and Thomas Hofmann. 2017. Fully character-level neural machine translation without explicit segmentation. Transactions of the Association for Computational Linguistics, 5:365–378. Yichao Lu, Phillip Keung, Faisal Ladhak, Vikas Bhardwaj, Shaonan Zhang, and Jason Sun. 2018. A neural interlingua for multilingual machine translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 84–92, Brussels, Belgium. Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Emmanouil Antonios Platanios, Mrinmaya Sachan, Graham Neubig, and Tom Mitchell. 2018. Contextual parameter generation for universal neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 425–435, Brussels, Belgium. Association for Computational Linguistics. Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on 1638 Machine Translation: Research Papers, pages 186– 191, Belgium, Brussels. Association for Computational Linguistics. Devendra Sachan and Graham Neubig. 2018. Parameter sharing methods for multilingual self-attentional translation models. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 261–271, Brussels, Belgium. Association for Computational Linguistics. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86–96, Berlin, Germany. Association for Computational Linguistics. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715– 1725, Berlin, Germany. Association for Computational Linguistics. Jinsong Su, Shan Wu, Deyi Xiong, Yaojie Lu, Xianpei Han, and Biao Zhang. 2018. Variational recurrent neural machine translation. In Thirty-Second AAAI Conference on Artificial Intelligence. Xu Tan, Jiale Chen, Di He, Yingce Xia, Tao QIN, and Tie-Yan Liu. 2019. Multilingual neural machine translation with language clustering. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 963–973, Hong Kong, China. Association for Computational Linguistics. Jörg Tiedemann. 2012. Parallel data, tools and interfaces in opus. In Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC’12), Istanbul, Turkey. European Language Resources Association (ELRA). Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc. 
Raúl Vázquez, Alessandro Raganato, Jörg Tiedemann, and Mathias Creutz. 2019. Multilingual NMT with a language-independent attention bridge. In Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019), pages 33–39, Florence, Italy. Association for Computational Linguistics. Qiang Wang, Bei Li, Tong Xiao, Jingbo Zhu, Changliang Li, Derek F. Wong, and Lidia S. Chao. 2019a. Learning deep transformer models for machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1810–1822, Florence, Italy. Association for Computational Linguistics. Xinyi Wang, Hieu Pham, Philip Arthur, and Graham Neubig. 2019b. Multilingual neural machine translation with soft decoupled encoding. In International Conference on Learning Representations. Yining Wang, Long Zhou, Jiajun Zhang, Feifei Zhai, Jingfang Xu, and Chengqing Zong. 2019c. A compact and language-sensitive multilingual translation method. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1213–1223, Florence, Italy. Association for Computational Linguistics. Biao Zhang, Ivan Titov, and Rico Sennrich. 2019. Improving deep transformer with depth-scaled initialization and merged attention. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 898–909, Hong Kong, China. Association for Computational Linguistics. Biao Zhang, Deyi Xiong, Jinsong Su, Hong Duan, and Min Zhang. 2016. Variational neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 521–530, Austin, Texas. Association for Computational Linguistics. Zaixiang Zheng, Hao Zhou, Shujian Huang, Lei Li, Xin-Yu Dai, and Jiajun Chen. 2020. Mirrorgenerative neural machine translation. In International Conference on Learning Representations. A OPUS-100: The OPUS Multilingual Dataset Table 8 lists the languages (other than English) and numbers of sentence pairs in the English-centric multilingual dataset. B Model Settings We optimize model parameters using Adam (β1 = 0.9, β2 = 0.98) (Kingma and Ba, 2015) with label smoothing of 0.1 and scheduled learning rate (warmup step 4k). We set the initial learning rate to 1.0 for bilingual models, but use 0.5 for multilingual models in order to stabilize training. We apply dropout to residual layers and attention weights, with a rate of 0.1/0.1 for 6-layer Transformer models and 0.3/0.2 for deeper ones. We group sentence 1639 Table 8: Numbers of training, validation, and test sentence pairs in the English-centric multilingual dataset. 
Language Train Valid Test Language Train Valid Test af Afrikaans 275512 2000 2000 lv Latvian 1000000 2000 2000 am Amharic 89027 2000 2000 mg Malagasy 590771 2000 2000 an Aragonese 6961 0 0 mk Macedonian 1000000 2000 2000 ar Arabic 1000000 2000 2000 ml Malayalam 822746 2000 2000 as Assamese 138479 2000 2000 mn Mongolian 4294 0 0 az Azerbaijani 262089 2000 2000 mr Marathi 27007 2000 2000 be Belarusian 67312 2000 2000 ms Malay 1000000 2000 2000 bg Bulgarian 1000000 2000 2000 mt Maltese 1000000 2000 2000 bn Bengali 1000000 2000 2000 my Burmese 24594 2000 2000 br Breton 153447 2000 2000 nb Norwegian Bokmål 142906 2000 2000 bs Bosnian 1000000 2000 2000 ne Nepali 406381 2000 2000 ca Catalan 1000000 2000 2000 nl Dutch 1000000 2000 2000 cs Czech 1000000 2000 2000 nn Norwegian Nynorsk 486055 2000 2000 cy Welsh 289521 2000 2000 no Norwegian 1000000 2000 2000 da Danish 1000000 2000 2000 oc Occitan 35791 2000 2000 de German 1000000 2000 2000 or Oriya 14273 1317 1318 dz Dzongkha 624 0 0 pa Panjabi 107296 2000 2000 el Greek 1000000 2000 2000 pl Polish 1000000 2000 2000 eo Esperanto 337106 2000 2000 ps Pashto 79127 2000 2000 es Spanish 1000000 2000 2000 pt Portuguese 1000000 2000 2000 et Estonian 1000000 2000 2000 ro Romanian 1000000 2000 2000 eu Basque 1000000 2000 2000 ru Russian 1000000 2000 2000 fa Persian 1000000 2000 2000 rw Kinyarwanda 173823 2000 2000 fi Finnish 1000000 2000 2000 se Northern Sami 35907 2000 2000 fr French 1000000 2000 2000 sh Serbo-Croatian 267211 2000 2000 fy Western Frisian 54342 2000 2000 si Sinhala 979109 2000 2000 ga Irish 289524 2000 2000 sk Slovak 1000000 2000 2000 gd Gaelic 16316 1605 1606 sl Slovenian 1000000 2000 2000 gl Galician 515344 2000 2000 sq Albanian 1000000 2000 2000 gu Gujarati 318306 2000 2000 sr Serbian 1000000 2000 2000 ha Hausa 97983 2000 2000 sv Swedish 1000000 2000 2000 he Hebrew 1000000 2000 2000 ta Tamil 227014 2000 2000 hi Hindi 534319 2000 2000 te Telugu 64352 2000 2000 hr Croatian 1000000 2000 2000 tg Tajik 193882 2000 2000 hu Hungarian 1000000 2000 2000 th Thai 1000000 2000 2000 hy Armenian 7059 0 0 tk Turkmen 13110 1852 1852 id Indonesian 1000000 2000 2000 tr Turkish 1000000 2000 2000 ig Igbo 18415 1843 1843 tt Tatar 100843 2000 2000 is Icelandic 1000000 2000 2000 ug Uighur 72170 2000 2000 it Italian 1000000 2000 2000 uk Ukrainian 1000000 2000 2000 ja Japanese 1000000 2000 2000 ur Urdu 753913 2000 2000 ka Georgian 377306 2000 2000 uz Uzbek 173157 2000 2000 kk Kazakh 79927 2000 2000 vi Vietnamese 1000000 2000 2000 km Central Khmer 111483 2000 2000 wa Walloon 104496 2000 2000 kn Kannada 14537 917 918 xh Xhosa 439671 2000 2000 ko Korean 1000000 2000 2000 yi Yiddish 15010 2000 2000 ku Kurdish 144844 2000 2000 yo Yoruba 10375 0 0 ky Kyrgyz 27215 2000 2000 zh Chinese 1000000 2000 2000 li Limburgan 25535 2000 2000 zu Zulu 38616 2000 2000 lt Lithuanian 1000000 2000 2000 pairs of roughly 50k target tokens into one training/finetuning batch, except for bilingual models where 25k target tokens are used. We train multilingual and bilingual models for 500k and 100k steps, respectively. We average the last 5 checkpoints for evaluation, and employ beam search for decoding with a beam size of 4 and length penalty of 0.6.
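The optimizer description in Appendix B is compact; the sketch below makes it concrete under the assumption that "scheduled learning rate (warmup step 4k)" refers to the standard Transformer inverse-square-root schedule, with the quoted "initial learning rate" (1.0 bilingual, 0.5 multilingual) acting as a multiplier and d_model = 512 assumed; the second helper mirrors the "average the last 5 checkpoints" step. Both are illustrations, not the authors' code.

```python
def transformer_lr(step, d_model=512, warmup=4000, scale=1.0):
    """Linear warmup followed by inverse-square-root decay (Vaswani et al., 2017).
    `scale` stands in for the 'initial learning rate' quoted above; the exact
    scaling the authors used is an assumption, not spelled out in the appendix."""
    step = max(step, 1)
    return scale * d_model ** -0.5 * min(step ** -0.5, step * warmup ** -1.5)

def average_checkpoints(state_dicts):
    """Parameter-wise average of the last k saved checkpoints (k = 5 above)."""
    return {key: sum(sd[key] for sd in state_dicts) / len(state_dicts)
            for key in state_dicts[0]}
```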
2020
148
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1640–1649 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 1640 It’s Easier to Translate out of English than into it: Measuring Neural Translation Difficulty by Cross-Mutual Information Emanuele BugliarelloC Sabrina J. MielkeH Antonios Anastasopoulos@ Ryan CotterellD,Q Naoaki OkazakiN CUniversity of Copenhagen HJohns Hopkins University @Carnegie Mellon University DUniversity of Cambridge QETH Z¨urich NTokyo Institute of Technology [email protected], [email protected], [email protected], [email protected], [email protected] Abstract The performance of neural machine translation systems is commonly evaluated in terms of BLEU. However, due to its reliance on target language properties and generation, the BLEU metric does not allow an assessment of which translation directions are more difficult to model. In this paper, we propose cross-mutual information (XMI): an asymmetric information-theoretic metric of machine translation difficulty that exploits the probabilistic nature of most neural machine translation models. XMI allows us to better evaluate the difficulty of translating text into the target language while controlling for the difficulty of the target-side generation component independent of the translation task. We then present the first systematic and controlled study of cross-lingual translation difficulties using modern neural translation systems. Code for replicating our experiments is available online at https://github.com/ e-bug/nmt-difficulty. 1 Introduction Machine translation (MT) is one of the core research areas in natural language processing. Current state-of-the-art MT systems are based on neural networks (Sutskever et al., 2014; Bahdanau et al., 2015), which generally surpass phrase-based systems (Koehn, 2009) in a variety of domains and languages (Bentivogli et al., 2016; Toral and S´anchez-Cartagena, 2017; Castilho et al., 2017; Bojar et al., 2018; Barrault et al., 2019). Using phrase-based MT systems, various controlled studies to understand where the translation difficulties lie for different language pairs were conducted (Birch et al., 2008; Koehn et al., 2009). However, comparable studies have yet to be performed for neural machine translation (NMT). As a result, it is still unclear whether all translation directions are equally easy (or hard) to model for NMT. This paper hence aims at filling this gap: Ceteris paribus, MI: characterize language H(S) H(S | T) MI(S ; T) ⇒ intrinsic source/target language variation shared information H(T) H(T | S) MI(S ; T) ⇐ XMI: characterize models HqLM(S) HqMT(S | T) XMI(T →S) ⇒ intrinsic source/target modeling difficulty transfer difficulty HqLM(T) HqMT(T | S) XMI(S →T) ⇐ Figure 1: Left: Decomposing the uncertainty of a sentence as mutual information plus language-inherent uncertainty: mutual information (MI) corresponds to just how much easier it becomes to predict T when you are given S. MI is symmetric but the relation between H(S) and H(T) can be arbitrary. Right: estimating cross-entropies using models qMT and qLM invalidates relations between bars, except that Hq·(·) ≥H(·). XMI, our proposed metric, is no longer purely a symmetric measure of language, but now an asymmetric measure that mostly highlights models’ shortcomings. is it easier to translate from English into Finnish or into Hungarian? And how much easier is it? Conversely, is it equally hard to translate Finnish and Hungarian into another language? 
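Figure 1 may be easier to parse once its two panels are written out as identities; the restatement below uses the notation defined in Sections 2 and 3 and adds nothing beyond the paper's own definitions.

```latex
% True distributions: target-side uncertainty splits into shared and residual parts
H(T) \;=\; \mathrm{MI}(S;T) \;+\; H(T \mid S)

% Model-based analogue evaluated in this paper (cf. Eq. 3):
H_{q_{\mathrm{LM}}}(T) \;=\; \mathrm{XMI}(S \to T) \;+\; H_{q_{\mathrm{MT}}}(T \mid S)
```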
Based on BLEU (Papineni et al., 2002) scores, previous work (Belinkov et al., 2017) suggests that translating into morphologically rich languages, such as Hungarian or Finnish, is harder than translating into morphologically poor ones, such as English. However, a major obstacle in the crosslingual comparison of MT systems is that many automatic evaluation metrics, including BLEU and METEOR (Banerjee and Lavie, 2005), are not cross-lingually comparable. In fact, being a function of n-gram overlap between candidate and reference translations, they only allow for a fair comparison of the performance between models when translating into the same test set in the same target language. Indeed, one cannot and should not draw conclusions about the difficulty of translating a source language into different target languages purely based on BLEU (or METEOR) scores. 1641 In response, we propose cross-mutual information (XMI), a new metric towards cross-linguistic comparability in NMT. In contrast to BLEU, this information-theoretic quantity no longer explicitly depends on language, model, and tokenization choices. It does, however, require that the models under consideration are probabilistic. As an initial starting point, we perform a case study with a controlled experiment on 21 European languages. Our analysis showcases XMI’s potential for shedding light on the difficulties of translation as an effect of the properties of the source or target language. We also perform a correlation analysis in an attempt to further explain our findings. Here, in contrast to the general wisdom, we find no significant evidence that translating into a morphologically rich language is harder than translating into a morphologically impoverished one. In fact, the only significant correlate of MT difficulty we find is source-side type–token ratio. 2 Cross-Linguistic Comparability through Likelihoods, not BLEU Human evaluation will always be the gold standard of MT evaluation. However, it is both timeconsuming and expensive to perform. To help researchers and practitioners quickly deploy and evaluate new systems, automatic metrics that correlate fairly well with human evaluations have been proposed over the years (Banerjee and Lavie, 2005; Snover et al., 2006; Isozaki et al., 2010; Lo, 2019). BLEU (Papineni et al., 2002), however, has remained the most common metric to report the performance of MT systems. BLEU is a precisionbased metric: a BLEU score is proportional to the geometric average of the number of n-grams in the candidate translation that also appear in the reference translation for 1 ≤n ≤4.1 In the context of our study, we take issue with two shortcomings of BLEU scores that prevent a cross-linguistically comparable study. First, it is not possible to directly compare BLEU scores across languages because different languages might express the same meaning with a very different number of words. For instance, agglutinative languages like Turkish often use a single word to express what other languages have periphrastic constructions for. To be concrete, the expression “I will have been programming” is five words in En1BLEU also corrects for reference coverage and includes a length penalty, but we focus on the high-level picture. glish, but could easily have been one word in a language with sufficient morphological markings; this unfairly boosts BLEU scores when translating into English. 
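Since the argument turns on what BLEU actually measures, a toy sketch of clipped n-gram precision may help; the brevity penalty and corpus-level aggregation are omitted, and this is an illustration rather than the SacreBLEU implementation the paper reports with.

```python
from collections import Counter
from math import exp, log

def ngram_precisions(candidate, reference, max_n=4):
    """Clipped n-gram precisions for one tokenized sentence pair."""
    precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(tuple(candidate[i:i + n]) for i in range(len(candidate) - n + 1))
        ref = Counter(tuple(reference[i:i + n]) for i in range(len(reference) - n + 1))
        overlap = sum(min(count, ref[gram]) for gram, count in cand.items())
        precisions.append(overlap / max(sum(cand.values()), 1))
    return precisions

def geometric_mean(precisions):
    """Geometric average of the n-gram precisions (zero if any precision is zero)."""
    if any(p == 0 for p in precisions):
        return 0.0
    return exp(sum(log(p) for p in precisions) / len(precisions))
```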
The problem is further exacerbated by tokenization techniques as finer granularities result in more partial credit and higher n for the n-gram matches (Post, 2018). In summary, BLEU only allows us to compare models for a fixed target language and tokenization scheme, i.e. it only allows us to draw conclusions about the difficulty of translating different source languages into a specific target one (with downstream performance as a proxy for difficulty). Thus, BLEU scores cannot provide an answer to which translation direction is easier between any two source–target pairs. In this work, we address this particular shortcoming by considering an information-theoretic evaluation.

Formally, let $\mathcal{V}_S$ and $\mathcal{V}_T$ be source- and target-language vocabularies, respectively. Let S and T be source- and target-sentence-valued random variables for languages S and T, respectively; then S and T respectively range over $\mathcal{V}_S^*$ and $\mathcal{V}_T^*$. These random variables S and T are distributed according to some true, unknown probability distribution p. The cross-entropy between the true distribution p and a probabilistic neural translation model $q_{\mathrm{MT}}(t \mid s)$ is defined as:

\[
H_{q_{\mathrm{MT}}}(T \mid S) = -\sum_{t \in \mathcal{V}_T^*} \sum_{s \in \mathcal{V}_S^*} p(t, s) \log_2 q_{\mathrm{MT}}(t \mid s) \tag{1}
\]

Since we do not know p, we cannot compute eq. (1). However, given a held-out data set of sentence pairs $\{(s^{(i)}, t^{(i)})\}_{i=1}^{N}$ assumed to be drawn from p, we can approximate the true cross-entropy as follows:

\[
H_{q_{\mathrm{MT}}}(T \mid S) \approx -\frac{1}{N} \sum_{i=1}^{N} \log_2 q_{\mathrm{MT}}\big(t^{(i)} \mid s^{(i)}\big) \tag{2}
\]

In the limit as $N \to \infty$, eq. (2) converges to eq. (1). We emphasize that this evaluation does not rely on language tokenization provided that the model $q_{\mathrm{MT}}$ does not (Mielke, 2019). While common in the evaluation of language models, cross-entropy evaluation has been eschewed in machine translation research since (i) not all MT models are probabilistic and (ii) we are often interested in measuring the quality of the candidate translation our model actually produces, e.g. under approximate decoding. However, an information-theoretic evaluation is much more suitable for measuring the more abstract notion of which language pairs are hardest to translate to and from, which is our purpose here.

3 Disentangling Translation Difficulty and Monolingual Complexity

We contend that simply reporting cross-entropies is not enough. A second issue in performing a controlled, cross-lingual MT comparison is that the language generation component (without translation) is not equally difficult across languages (Cotterell et al., 2018). We claim that the difficulty of translation corresponds more closely to the mutual information MI(S; T) between the source and target language, which tells us how much easier it becomes to predict T when S is given (see Figure 1). But what is the appropriate analogue of mutual information for cross-entropy? One such natural generalization is a novel quantity that we term cross-mutual information, defined as:

\[
\mathrm{XMI}(S \to T) = H_{q_{\mathrm{LM}}}(T) - H_{q_{\mathrm{MT}}}(T \mid S) \tag{3}
\]

where $H_{q_{\mathrm{LM}}}(T)$ denotes the cross-entropy of the target sentence T under the model $q_{\mathrm{LM}}$. As in §2, this can, analogously, be approximated by the cross-entropy of a separate target-side language model $q_{\mathrm{LM}}$ over our held-out data set:

\[
\mathrm{XMI}(S \to T) \approx -\frac{1}{N} \sum_{i=1}^{N} \log_2 \frac{q_{\mathrm{LM}}\big(t^{(i)}\big)}{q_{\mathrm{MT}}\big(t^{(i)} \mid s^{(i)}\big)} \tag{4}
\]

which, again, becomes exact as $N \to \infty$. In practice, we note that we mix different distributions $q_{\mathrm{LM}}(t)$ and $q_{\mathrm{MT}}(t \mid s)$ and, thus, $q_{\mathrm{LM}}(t)$ is not necessarily representable as a marginal: there need not be any distribution $\tilde{q}(s)$ such that $q_{\mathrm{LM}}(t) = \sum_{s \in \mathcal{V}_S^*} q_{\mathrm{MT}}(t \mid s)\,\tilde{q}(s)$.
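Given sentence-level log-probabilities from the translation model and from a target-side language model, the estimates in eqs. (2) and (4) reduce to simple averages; a minimal sketch, assuming each input is the natural-log probability of a whole held-out sentence (summed over its sub-word units) under the respective model. The function and argument names are ours.

```python
import math

LOG2 = math.log(2)

def cross_entropy_bits(sent_logprobs):
    """Eq. (2)-style estimate: -(1/N) * sum_i log2 q(.), in bits per sentence."""
    return -sum(lp / LOG2 for lp in sent_logprobs) / len(sent_logprobs)

def xmi_bits(lm_logprobs, mt_logprobs):
    """Eq. (4): XMI(S -> T) ~= H_qLM(T) - H_qMT(T | S) on the same held-out set."""
    assert len(lm_logprobs) == len(mt_logprobs)
    return cross_entropy_bits(lm_logprobs) - cross_entropy_bits(mt_logprobs)
```

Summing sub-word log-probabilities within each sentence before averaging matches the per-sentence normalization described later in Section 4.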
While qMT and qLM can, in general, be any two models, we exploit the characteristics of NMT models to provide a more meaningful, model-specific estimate of XMI. NMT architectures typically consist of two components: an encoder that embeds the input text sequence, and a decoder that generates translated output text. The latter acts as a conditional language model, where the source-language sentence embedded by the encoder drives the target-language generation. Hence, we use the decoder of qMT as our qLM to accurately estimate the difficulty of translation for a given architecture in a controlled way. In summary, by looking at XMI, we can effectively decouple the language generation component, whose difficulties have been investigated by Cotterell et al. 2018 and Mielke et al. 2019, from the translation component. This gives us a measure of how rich and useful the information extracted from the source language is for the target-language generation component. 4 Experiments In order to measure which pairs of languages are harder to translate to and from, we make use of the latest release v7 of Europarl (Koehn, 2005): a corpus of the proceedings of the European Parliament containing parallel sentences between English (en) and 20 other European languages: Bulgarian (bg), Czech (cs), Danish (da), German (de), Greek (el), Spanish (es), Estonian (et), Finnish (fi), French (fr), Hungarian (hu), Italian (it), Lithuanian (lt), Latvian (lv), Dutch (nl), Polish (pl), Portuguese (pt), Romanian (ro), Slovak (sk), Slovene (sl) and Swedish (sv). Pre-processing steps In order to precisely effect a fully controlled experiment, we enforce a fair comparison by selecting the set of parallel sentences available across all 21 languages in Europarl. This fully controls for the semantic content of the sentences; however, we cannot adequately control for translationese (Stymne, 2017; Zhang and Toral, 2019). Our subset of Europarl contains 190,733 sentences for training, 1,000 unique, random sentences for validation and 2,000 unique, random sentences for testing. For each parallel corpus, we jointly learn byte-pair encodings (BPE; Sennrich et al., 2016) for the source and target languages, using 16,000 merge operations. We use the same vocabularies for the language models.2 Setup In our experiments, we train Transformer models (Vaswani et al., 2017), which often achieve state-of-the-art performance on MT for various language pairs. In particular, we rely on the PyTorch (Paszke et al., 2019) re-implementation of the Transformer model in the fairseq toolkit (Ott et al., 2019). For language modeling, we use the decoder from the same architecture, training it at the sentence level, as opposed to commonly used fixedlength chunks. We train our systems using label smoothing (LS; Szegedy et al., 2016; Meister et al., 2For English, we arbitrarily chose the English portion of the en-bg vocabulary. 
1643 →en bg cs da de el es et fi fr hu it lt lv nl pl pt ro sk sl sv avg BLEU 47.4 42.4 46.3 44.0 50.0 50.6 39.3 38.2 44.9 38.4 40.8 37.6 40.3 38.3 39.8 48.3 50.5 44.2 45.3 43.7 43.5 XMI( →en) 102.3 97.0 99.7 96.5 105.3 103.8 92.8 92.1 97.0 92.5 92.1 89.2 94.2 86.5 91.9 102.5 106.1 99.8 100.1 96.9 96.9 HqLM(en) 154.2 154.2 HqMT(en | ) 51.8 57.2 54.5 57.7 48.9 50.4 61.4 62.0 57.2 61.6 62.1 65.0 60.0 67.7 62.3 51.7 48.1 54.4 54.1 57.3 57.3 en → bg cs da de el es et fi fr hu it lt lv nl pl pt ro sk sl sv avg BLEU 46.3 34.7 45.0 36.3 45.5 50.2 27.7 30.5 45.7 30.3 37.9 31.0 34.6 34.9 30.5 46.7 44.2 39.8 41.5 41.3 38.73 XMI(en to ) 106.2 102.8 103.3 104.0 111.0 108.1 100.2 98.0 99.7 99.1 95.3 96.0 99.3 90.4 98.3 105.2 112.4 105.8 107.9 100.1 102.1 HqLM( ) 156.5 164.0 152.7 167.6 163.7 159.3 162.5 158.6 154.9 166.6 158.6 159.2 156.4 159.7 163.4 159.3 160.5 157.7 158.2 153.1 159.6 HqMT( | en) 50.3 61.2 49.4 63.6 52.7 51.3 62.4 60.6 55.1 67.5 63.3 63.1 57.0 69.3 65.1 54.1 48.1 51.9 50.3 53.0 57.5 Table 1: Test scores, from and into English, Europarl, visualized in Figure 2 and Figure 3. 80 90 100 110 30 40 50 qMT captures more shared content → translations match references better → bg cs da de el es et fi fr hu it lt lv nl pl pt ro sksl sv bg cs da de el es et fi fr hu it lt lv nl pl pt ro sk sl sv XMI BLEU 50 60 70 150 155 160 165 target given source harder to predict for qMT → target string is harder to predict for qLM → bg cs da de eles etfi fr huit lt lv nl pl pt ro sk sl sv bg cs da de el es et fi fr hu itlt lv nl pl pt ro sk sl sv HqMT(T | S) HqLM(T) 90 100 110 40 50 60 70 qMT captures more shared content → target given source harder to predict for qMT → bg cs da de el es et fi fr hu it lt lv nl pl pt ro sksl sv bg cs da de el es et fi fr hu itlt lv nl pl pt ro sk sl sv XMI HqMT(T | S) Figure 2: Some correlations between metrics in Table 1, into and from English. More correlations in Figure 4. 2020) as it has been shown to prevent models from over-confident predictions, which helps to regularize the models. We report cross-entropies (HqMT, HqLM), XMI, and BLEU scores obtained using SACREBLEU (Post, 2018).3 Finally, in a similar vein to Cotterell et al. (2018), we multiply crossentropy values by the number of sub-word units generated by each model to make our quantities independent of sentence lengths (and divide them by the total number of sentences to match our approximations of the true distributions). See App. A for experimental details. 5 Results and Analysis We train 40 systems, translating each language into and from English.4 The models’ performance in terms of BLEU scores, and the cross-mutual information (XMI) and cross-entropy values over the test sets are reported in Table 1 with significant values marked in App. B. 3Signature: BLEU+c.mixed+#.1+s.exp+tok.13a+v.1.2.12. 4Due to resource limitations, we chose these tasks because most of the information available in the web is in English (https://w3techs.com/technologies/ overview/content_language) and effectively translating it into any other language would reduce the digital language divide (http://labs.theguardian.com/ digital-language-divide/). Besides, translating into English gives most people access to any local information. Translating into English When translating into the same target language (in this case, English), BLEU scores are, in fact, comparable, and can be used as a proxy for difficulty. 
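Section 4 notes that cross-entropies are multiplied by the number of sub-word units each model generates and divided by the number of test sentences; a one-line helper makes the reported unit (bits per sentence) explicit. The argument names are ours, not the authors'.

```python
def bits_per_sentence(mean_bits_per_token, num_subword_units, num_sentences):
    """Convert a mean per-token cross-entropy (in bits) into the per-sentence
    quantity reported in Table 1: scale by the number of sub-word units the
    model generated, then divide by the number of test sentences."""
    return mean_bits_per_token * num_subword_units / num_sentences
```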
We can then conclude, for instance, that Lithuanian (lt) is the hardest language to translate from, while Spanish (es) is the easiest. In this scenario, given the good correlation of BLEU scores with human evaluations, it is desirable that XMI correlates well with BLEU. This behavior is indeed apparent in the blue points in the left part of Figure 2, confirming the efficacy of XMI in evaluating the difficulty of translation while still being independent of the target language generation component. Translating from English Despite the large gaps between BLEU scores in Table 1, one should not be tempted to claim that it is easier to translate into English than from English for these languages as often hinted at in previous work (e.g., Belinkov et al., 2017). As we described above, different target languages are not directly comparable, and we actually find that XMI is slightly higher, on average, when translating from English, indicating that it is actually easier, on average, to transfer information correctly in this direction. For instance, translation from English to Finnish is shown to be easier than from Finnish to English, despite the large gap 1644 da sv fr lv bg sk sl it fi lt pt es nl ro et pl el cs hu de 0 25 50 75 100 125 150 175 HqLM(T) 99.7 54.5 96.9 57.3 97.0 57.2 94.2 60.0 102.3 51.8 99.8 54.4 100.1 54.1 92.1 62.1 92.1 62.0 89.2 65.0 102.5 51.7 103.8 50.4 86.5 67.7 106.1 48.1 92.8 61.4 91.9 62.3 105.3 48.9 97.0 57.2 92.5 61.6 96.5 57.7 103.3 49.4 153 100.1 53.0 153 99.7 55.1 155 99.3 57.0 156 106.2 50.3 156 105.8 51.9 158 107.9 50.3 158 95.3 63.3 159 98.0 60.6 159 96.0 63.2 159 105.2 54.1 159 108.1 51.3 159 90.4 69.3 160 112.4 48.1 161 100.2 62.4 163 98.3 65.1 163 111.0 52.7 164 102.8 61.2 164 99.1 67.5 167 104.0 63.6 168 154 HqLM(en) XMI(◦→en) HqMT(en | ◦) XMI(en →◦) HqMT(◦| en) Figure 3: HqLM(T), decomposed into XMI(S →T), the information that the system successfully transfers, and HqMT(T | S), the uncertainty that remains in the target language, all measured in bits. Note that in XMI(S →T) the translation is from the left to the right argument. Metric Pearson Spearman word number ratio 0.2988 (0.0611) 0.3570 (0.0237) TTRsrc -0.5196 (0.0006) -0.5136 (0.0007) TTRtgt 0.1651 (0.3086) 0.3355 (0.0343) dTTR -0.4427 (0.0042) -0.4660 (0.0024) word overlap ratio 0.1383 (0.3949) 0.1731 (0.2853) Table 2: Correlation coefficients (and p-values) between XMI and data-related features. in BLEU scores. This suggests that the former model is heavily penalized by the target-side language model; this is likely because Finnish has a large number of inflections for nouns and verbs. Another interesting example is given by Greek (el) and Spanish (es) in Table 1, where, again, the two tasks achieve very different BLEU scores but similar XMI. In light of the correlation with BLEU when translating into English, this shows us that Greek is just harder to language-model, corroborating the findings of Mielke et al. (2019). Moreover, Figure 2 clearly shows that, as expected, XMI is not as well correlated with BLEU when translating from English, given that BLEU scores are not cross-lingually comparable. Correlations with linguistic and data features Last, we conduct a correlation study between the translation difficulties as measured by XMI and the linguistic and data-dependent properties of each translation task, following the approaches of Lin et al. (2019) and Mielke et al. (2019). 
Table 2 lists Pearson’s and Spearman’s correlation coefficients for data-dependent metrics, where bold values indicate statistically significant results (p < 0.05) after Bonferroni correction (p < 0.0029). Interestingly, the only features that significantly correlate with our metric are related to the type-to-token ratio (TTR) for the source language and the distance between source and target TTRs. This implies that a potential explanation for the differences in translation difficulty lies in lexical variation. For full correlation results, refer to App. D. 6 Conclusion In this work, we propose a novel informationtheoretic approach, XMI, to measure the translation difficulty of probabilistic MT models. Differently from BLEU and other metrics, ours is language- and tokenization-agnostic, enabling the first systematic and controlled study of crosslingual translation difficulties. Our results show that XMI correlates well with BLEU scores when translating into the same language (where they are comparable), and that higher BLEU scores in different languages do not necessarily imply easier translations. In future work, we plan to extend this analysis across more translation pairs, more diverse languages and multiple domains, as well as investigating the effect of translationese or source-side grammatical errors (Anastasopoulos, 2019). Acknowledgments The authors are thankful to the anonymous reviewers for their valuable feedback. The second-to-last author acknowledges a Facebook Fellowship and discussions with Tiago Pimentel. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 801199, the National Science Foundation under grant 1761548, and by “Research and Development of Deep Learning Technology for Advanced Multilingual Speech Translation,” the Commissioned Research of National Institute of Information and Communications Technology (NICT), Japan. 1645 References Antonios Anastasopoulos. 2019. An analysis of sourceside grammatical errors in NMT. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 213–223, Florence, Italy. Association for Computational Linguistics. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the 3rd International Conference on Learning Representations. Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65–72, Ann Arbor, Michigan. Association for Computational Linguistics. Lo¨ıc Barrault, Ondˇrej Bojar, Marta R. Costa-juss`a, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Matthias Huck, Philipp Koehn, Shervin Malmasi, Christof Monz, Mathias M¨uller, Santanu Pal, Matt Post, and Marcos Zampieri. 2019. Findings of the 2019 conference on machine translation (WMT19). In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 1–61, Florence, Italy. Association for Computational Linguistics. Yonatan Belinkov, Nadir Durrani, Fahim Dalvi, Hassan Sajjad, and James Glass. 2017. What do neural machine translation models learn about morphology? 
In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 861–872, Vancouver, Canada. Association for Computational Linguistics. Luisa Bentivogli, Arianna Bisazza, Mauro Cettolo, and Marcello Federico. 2016. Neural versus phrasebased machine translation quality: a case study. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 257–267, Austin, Texas. Association for Computational Linguistics. Alexandra Birch, Miles Osborne, and Philipp Koehn. 2008. Predicting success in machine translation. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 745–754, Honolulu, Hawaii. Association for Computational Linguistics. Ondˇrej Bojar, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Philipp Koehn, and Christof Monz. 2018. Findings of the 2018 conference on machine translation (WMT18). In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 272–303, Belgium, Brussels. Association for Computational Linguistics. Sheila Castilho, Joss Moorkens, Federico Gaspari, Iacer Calixto, John Tinsley, and Andy Way. 2017. Is neural machine translation the new state of the art? The Prague Bulletin of Mathematical Linguistics, 108(1):109–120. Ryan Cotterell, Sabrina J. Mielke, Jason Eisner, and Brian Roark. 2018. Are all languages equally hard to language-model? In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 536–541, New Orleans, Louisiana. Association for Computational Linguistics. Mathieu Dehouck and Pascal Denis. 2018. A framework for understanding the role of morphology in universal dependency parsing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2864–2870, Brussels, Belgium. Association for Computational Linguistics. Richard Futrell, Kyle Mahowald, and Edward Gibson. 2015. Large-scale evidence of dependency length minimization in 37 languages. Proceedings of the National Academy of Sciences, 112(33):10336– 10341. Hideki Isozaki, Tsutomu Hirao, Kevin Duh, Katsuhito Sudoh, and Hajime Tsukada. 2010. Automatic evaluation of translation quality for distant language pairs. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 944–952, Cambridge, MA. Association for Computational Linguistics. Diederick P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations. Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 388– 395, Barcelona, Spain. Association for Computational Linguistics. Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In MT Summit, volume 5, pages 79–96. Philipp Koehn. 2009. Statistical Machine Translation. Cambridge University Press. Philipp Koehn, Alexandra Birch, and Ralf Steinberger. 2009. 462 machine translation systems for Europe. In Proceedings of the Twelfth Machine Translation Summit, pages 65–72. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondˇrej Bojar, Alexandra Constantin, and Evan Herbst. 
2007. Moses: Open 1646 source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions, pages 177–180, Prague, Czech Republic. Association for Computational Linguistics. Yu-Hsiang Lin, Chian-Yu Chen, Jean Lee, Zirui Li, Yuyan Zhang, Mengzhou Xia, Shruti Rijhwani, Junxian He, Zhisong Zhang, Xuezhe Ma, Antonios Anastasopoulos, Patrick Littell, and Graham Neubig. 2019. Choosing transfer languages for cross-lingual learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3125–3135, Florence, Italy. Association for Computational Linguistics. Patrick Littell, David R. Mortensen, Ke Lin, Katherine Kairis, Carlisle Turner, and Lori Levin. 2017. URIEL and lang2vec: Representing languages as typological, geographical, and phylogenetic vectors. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 8–14, Valencia, Spain. Association for Computational Linguistics. Chi-kiu Lo. 2019. YiSi - a unified semantic MT quality evaluation and estimation metric for languages with different levels of available resources. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 507–513, Florence, Italy. Association for Computational Linguistics. Clara Meister, Elizabeth Salesky, and Ryan Cotterell. 2020. Generalized entropy regularization or: There’s nothing special about label smoothing. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Seattle, USA. Association for Computational Linguistics. Sabrina J. Mielke. 2019. Can you compare perplexity across different segmentations? Sabrina J. Mielke, Ryan Cotterell, Kyle Gorman, Brian Roark, and Jason Eisner. 2019. What kind of language is hard to language-model? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4975–4989, Florence, Italy. Association for Computational Linguistics. Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 48–53, Minneapolis, Minnesota. Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. PyTorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alch´e-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 8024–8035. Curran Associates, Inc. Matt Post. 2018. A call for clarity in reporting BLEU scores. 
In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186– 191, Brussels, Belgium. Association for Computational Linguistics. Benoˆıt Sagot. 2013. Comparing complexity measures. In Computational Approaches to Morphological Complexity. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715– 1725, Berlin, Germany. Association for Computational Linguistics. Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of Association for Machine Translation in the Americas. Sara Stymne. 2017. The effect of translationese on tuning for statistical machine translation. In Proceedings of the 21st Nordic Conference on Computational Linguistics, pages 241–246, Gothenburg, Sweden. Association for Computational Linguistics. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 3104–3112. Curran Associates, Inc. Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2818–2826. Antonio Toral and V´ıctor M. S´anchez-Cartagena. 2017. A multifaceted evaluation of neural versus phrasebased machine translation for 9 language directions. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational 1647 Linguistics: Volume 1, Long Papers, pages 1063– 1073, Valencia, Spain. Association for Computational Linguistics. Ashish Vaswani, Samy Bengio, Eugene Brevdo, Francois Chollet, Aidan N. Gomez, Stephan Gouws, Llion Jones, Łukasz Kaiser, Nal Kalchbrenner, Niki Parmar, Ryan Sepassi, Noam Shazeer, and Jakob Uszkoreit. 2018. Tensor2Tensor for neural machine translation. CoRR, abs/1803.07416. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144. Mike Zhang and Antonio Toral. 2019. The effect of translationese in machine translation test sets. In Proceedings of the Fourth Conference on Machine Translation (Volume 1: Research Papers), pages 73– 81, Florence, Italy. Association for Computational Linguistics. 
1648 A Experimental Details Pre-processing steps To precisely determine the effect of the different properties of each language in translation difficulty, we enforce a fair comparison by selecting the same set of parallel sentences across all the languages evaluated in our data set. The number of parallel sentences available in Europarl varies considerably, ranging from 387K sentences for Polish-English to 2.3M sentences for Dutch-English. Therefore, we proceed by taking the set of English sentences that are shared by all the language pairs. This leaves us with 197,919 sentences for each language pair, from which we then extract 1,000 and 2,000 unique, random sentences for validation and test, respectively. We follow the same pre-processing steps used by Vaswani et al. (2017) to train the Transformer model on WMT data: Data sets are first tokenized using the Moses toolkit (Koehn et al., 2007) and then filtered by removing sentences longer than 80 tokens in either source or target language. Due to this cleaning step that is specific to each training corpus, different sentences are dropped in each data set. We then only select the set of sentence pairs that are shared across all languages. This results in a final number of 190,733 training sentences. For each parallel corpus, we jointly learn byte-pair encodings (BPE; Sennrich et al., 2016) for source and target languages, using 16,000 merge operations. Training setup In our experiments, we train a Transformer model (Vaswani et al., 2017), which achieves state-of-the-art performance on a multitude of language pairs. In particular, we rely on the PyTorch re-implementation of the Transformer model in the Fairseq toolkit (Ott et al., 2019). All experiments are based on the Base Transformer architecture, which we trained for 20,000 steps and evaluated using the checkpoint corresponding to the lowest validation loss. We trained our models on a cluster of 4 machines, each equipped with 4 Nvidia P100 GPUs, resulting in training times of almost 70 minutes for each system. Sentence pairs with similar sequence length were batched together, with each batch containing a total of approximately 32K source tokens and 32K target tokens. We used the hyper-parameters specified in latest version (3) of Google’s Tensor2Tensor (Vaswani et al., 2018) implementation, with the exception of the dropout rate, as we found 0.3 to be more robust across all the models trained on Europarl. Model Train bootstrap Test bootstrap en-es 47.6 (0.233) 50.2 (0.026) en-et 25.6 (0.167) 27.7 (0.026) lt-en 34.5 (0.150) 37.6 (0.027) ro-en 47.5 (0.232) 50.5 (0.027) Table 3: Mean test BLEU scores when bootstrapping train and test sets. Numbers in brackets denote standard deviation over 5 runs (train bootstrap) and 95% confidence interval over 1, 000 samples (test bootstrap). Models are optimized using Adam (Kingma and Ba, 2015) and following the learning schedule specified by Vaswani et al. (2017) with 8,000 warm-up steps. We employed label smoothing ϵls = 0.1 (Szegedy et al., 2016) during training and we used beam search with a beam size of 4 and length penalty α = 0.6 (Wu et al., 2016). For language models, we use a Transformer decoder with the same hyperparameters used in the translation task to effectively measure the contribution given by a translation. These models were trained, using label smoothing ϵls = 0.1, for 10,000 steps on sequences consisting of separate sentences in our corpus. 
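The pre-processing described in Appendix A (drop pairs with a side longer than 80 tokens after tokenization, then keep only the sentences shared across all language pairs) is easy to mirror. The sketch below uses data structures that are our assumption, not the authors' pipeline, and leaves tokenization to an external tool.

```python
def clean_corpus(pairs, max_len=80):
    """Drop tokenized sentence pairs where either side exceeds max_len tokens."""
    return [(src, tgt) for src, tgt in pairs
            if len(src) <= max_len and len(tgt) <= max_len]

def shared_sentences(corpora):
    """Keep only the English sentences present in every language pair's corpus.
    `corpora` maps a language pair to {english_sentence: foreign_sentence};
    indexing by the English side is our assumption."""
    common = set.intersection(*(set(corpus) for corpus in corpora.values()))
    return {pair: {en: corpus[en] for en in common}
            for pair, corpus in corpora.items()}
```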
Analogously to translation models, the checkpoints corresponding to the lowest validation losses were used for evaluation. B Statistical Significance Tests Table 3 presents the results when applying bootstrap re-sampling (Koehn, 2004) on either training or test sets to the systems achieving the highest and the lowest BLEU scores in the validation set for each direction. In our experiments, we observe a general trend where the performance of different models varies similarly. For instance, when we bootstrap test sets, we see that the average BLEU scores are equal to the ones seen in Table 1, and that all the models have similar confidence intervals.5 When bootstrapping the training data, we observe a consistent drop in mean performance of 2 −3 BLEU points across the translation tasks. The drop in performance is not surprising as the resulting training sets are more redundant, having fewer unique sentences than the original sets, but it is interesting to see that all models are similarly affected. The standard deviation over 5 runs is also similar across all models but slightly larger on the high-performing ones. 5The same results were observed in all of the 40 models. 1649 80 90 100 110 150 155 160 165 qMT captures more shared content → target string is harder to predict for qLM → bg cs da de el es et fi fr hu it lt lv nl pl pt ro sksl sv bg cs da de el es et fi fr hu itlt lv nl pl pt ro sk sl sv XMI HqLM(T) 150 155 160 165 30 40 50 target string is harder to predict for qLM → translations match references better → bg cs da de el es etfi fr hu it lt lv nl pl pt ro sk sl sv bg cs da de el es et fi fr hu it lt lv nl pl pt ro sk sl sv HqLM(T) BLEU 50 60 70 30 40 50 target given source harder to predict for qMT → translations match references better → bg cs da de eles etfi fr hu it lt lv nl pl pt ro sk sl sv bg cs da de el es et fi fr hu it lt lv nl pl pt ro sk sl sv HqMT(T | S) BLEU 90 100 110 90 100 110 bg cs da de el es et fi fr hu it lt lv nl pl pt ro sk sl sv XMI into English XMI from English Figure 4: More correlations between metrics in Table 1, into and from English. 
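The test-set bootstrap of Appendix B (Koehn, 2004) can be sketched as follows; `score_fn` stands for any corpus-level metric such as BLEU, and the interval indexing assumes n_samples = 1,000 as in Table 3. This is an illustration, not the authors' exact protocol.

```python
import random

def bootstrap_ci(hypotheses, references, score_fn, n_samples=1000, seed=0):
    """Resample the test set with replacement and recompute the corpus-level
    metric; return an approximate 95% confidence interval."""
    rng = random.Random(seed)
    indices = list(range(len(hypotheses)))
    scores = []
    for _ in range(n_samples):
        sample = [rng.choice(indices) for _ in indices]
        scores.append(score_fn([hypotheses[i] for i in sample],
                               [references[i] for i in sample]))
    scores.sort()
    return scores[int(0.025 * n_samples)], scores[int(0.975 * n_samples)]
```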
Metric Pearson Spearman →en en → both →en en → both MCCsrc -0.2579 (0.2723) – -0.4302 (0.0056) -0.2135 (0.3660) – -0.4444 (0.0041) MCCtgt – -0.1260 (0.5965) 0.2619 (0.1025) – -0.1263 (0.5957) 0.3778 (0.0162) ADLsrc -0.2972 (0.2032) – -0.1166 (0.4737) -0.2887 (0.2170) – 0.0166 (0.9188) ADLtgt – -0.2254 (0.3393) -0.2110 (0.1912) – -0.1820 (0.4426) -0.3798 (0.0156) HPE-meansrc 0.2012 (0.3950) – 0.4567 (0.0031) 0.2000 (0.3979) – 0.4508 (0.0035) HPE-meantgt – 0.0142 (0.9525) -0.4115 (0.0083) – 0.0120 (0.9599) -0.4103 (0.0085) genetic 0.0433 (0.8563) 0.0777 (0.7446) 0.0544 (0.7387) -0.1526 (0.5207) -0.1741 (0.4630) -0.1360 (0.4028) syntactic -0.3643 (0.1143) -0.2056 (0.3845) -0.2556 (0.1114) -0.3560 (0.1234) -0.2695 (0.2506) -0.2688 (0.0935) featural -0.0561 (0.8142) -0.0577 (0.8090) -0.0511 (0.7540) 0.0121 (0.9597) -0.0093 (0.9690) -0.0109 (0.9467) phonological -0.1442 (0.5441) -0.2222 (0.3465) -0.1647 (0.3097) -0.0435 (0.8556) -0.0948 (0.6909) -0.0906 (0.5782) inventory 0.1125 (0.6369) 0.1048 (0.6601) 0.0976 (0.5492) 0.1231 (0.6052) 0.1472 (0.5356) 0.1128 (0.4884) geographic 0.1983 (0.4019) 0.3388 (0.1440) 0.2416 (0.1332) 0.1336 (0.5745) 0.2550 (0.2779) 0.2062 (0.2017) word number ratio 0.4559 (0.0434) -0.2953 (0.2063) 0.2988 (0.0611) 0.4602 (0.0412) -0.3278 (0.1582) 0.3570 (0.0237) TTRsrc -0.4746 (0.0345) – -0.5196 (0.0006) -0.4857 (0.0299) – -0.5136 (0.0007) TTRtgt – -0.2931 (0.2099) 0.1651 (0.3086) – -0.3128 (0.1794) 0.3355 (0.0343) dTTR -0.4434 (0.0502) -0.2404 (0.3072) -0.4427 (0.0042) -0.4857 (0.0299) -0.3128 (0.1794) -0.4660 (0.0024) word overlap ratio 0.2563 (0.2754) 0.0526 (0.8258) 0.1383 (0.3949) 0.1474 (0.5352) 0.1474 (0.5352) 0.1731 (0.2853) Table 4: All Pearson’s and Spearman’s correlation coefficients and corresponding p-values (in brackets) between XMI and various metrics. Values in black are statistically significant at p < 0.05, and bold values are also statistically significant after Bonferroni correction (p < 0.0029). C More Correlations between Metrics Figure 4 shows more correlations between the metrics we reported in our experiments (see Table 1). D Correlation Analysis Table 4 shows Pearson’s and Spearman’s correlations between XMI and all investigated predictors, including per-direction results. Following Lin et al. (2019) and Mielke et al. (2019), we evaluated: • MCC: Morphological counting complexity (Sagot, 2013), using the values for Europarl reported by Cotterell et al. (2018). • ADL: Average dependency length (Futrell et al., 2015), using the values reported for Europarl by Mielke et al. (2019). • HPE-mean: mean over all Europarl tokens of Head-POS Entropy (Dehouck and Denis, 2018), as reported by Mielke et al. (2019). • Six different linguistic distances (genetic, syntactic, featural, phonological, inventory, geographic) from the URIEL Typological Database (Littell et al., 2017). We refer the reader to Lin et al. (2019) for more details. • Word number ratio: number of source tokens over number of target tokens used for training. • TTRsrc and TTRtgt: type-to-token ratio evaluated on the source and target language training data, respectively, to measure lexical diversity. • dTTR: distance between the TTRs of the source and target language corpora, as a rough indication of their morphological similarity: dTTR =  1 −TTRsrc TTRtgt 2 . • Word overlap ratio: we measure the similarity between the vocabularies of source and target languages as the ratio between the number of shared types and the size of their union.
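The data-dependent predictors in the list above are simple corpus statistics; a sketch of the three TTR-based ones, reading the dTTR definition as (1 − TTRsrc/TTRtgt)². Variable names are ours.

```python
def type_token_ratio(tokens):
    """TTR: number of distinct types over number of tokens in the training corpus."""
    return len(set(tokens)) / len(tokens)

def dttr(ttr_src, ttr_tgt):
    """Distance between source and target TTRs: (1 - TTR_src / TTR_tgt) ** 2."""
    return (1.0 - ttr_src / ttr_tgt) ** 2

def word_overlap_ratio(src_tokens, tgt_tokens):
    """|shared types| / |union of types| between source and target training data."""
    src_types, tgt_types = set(src_tokens), set(tgt_tokens)
    return len(src_types & tgt_types) / len(src_types | tgt_types)
```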
2020
149
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 149–159 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 149 A Study of Non-autoregressive Model for Sequence Generation Yi Ren ∗ Zhejiang University [email protected] Jinglin Liu ∗ Zhejiang University [email protected] Xu Tan Microsoft Research Asia [email protected] Zhou Zhao† Zhejiang University [email protected] Sheng Zhao Microsoft STC Asia [email protected] Tie-Yan Liu Microsoft Research Asia [email protected] Abstract Non-autoregressive (NAR) models generate all the tokens of a sequence in parallel, resulting in faster generation speed compared to their autoregressive (AR) counterparts but at the cost of lower accuracy. Different techniques including knowledge distillation and source-target alignment have been proposed to bridge the gap between AR and NAR models in various tasks such as neural machine translation (NMT), automatic speech recognition (ASR), and text to speech (TTS). With the help of those techniques, NAR models can catch up with the accuracy of AR models in some tasks but not in some others. In this work, we conduct a study to understand the difficulty of NAR sequence generation and try to answer: (1) Why NAR models can catch up with AR models in some tasks but not all? (2) Why techniques like knowledge distillation and source-target alignment can help NAR models. Since the main difference between AR and NAR models is that NAR models do not use dependency among target tokens while AR models do, intuitively the difficulty of NAR sequence generation heavily depends on the strongness of dependency among target tokens. To quantify such dependency, we propose an analysis model called CoMMA to characterize the difficulty of different NAR sequence generation tasks. We have several interesting findings: 1) Among the NMT, ASR and TTS tasks, ASR has the most target-token dependency while TTS has the least. 2) Knowledge distillation reduces the target-token dependency in target sequence and thus improves the accuracy of NAR models. 3) Source-target alignment constraint encourages dependency ∗Equal contribution. † Corresponding author of a target token on source tokens and thus eases the training of NAR models. 1 Introduction Non-autoregressive (NAR) models (Oord et al., 2017; Gu et al., 2017; Chen et al., 2019; Ren et al., 2019), which generate all the tokens in a target sequence in parallel and can speed up inference, are widely explored in natural language and speech processing tasks such as neural machine translation (NMT) (Gu et al., 2017; Lee et al., 2018; Guo et al., 2019a; Wang et al., 2019; Li et al., 2019b; Guo et al., 2019b), automatic speech recognition (ASR) (Chen et al., 2019) and text to speech (TTS) synthesis (Oord et al., 2017; Ren et al., 2019). However, NAR models usually lead to lower accuracy than their autoregressive (AR) counterparts since the inner dependencies among the target tokens are explicitly removed. Several techniques have been proposed to alleviate the accuracy degradation, including 1) knowledge distillation (Oord et al., 2017; Gu et al., 2017; Guo et al., 2019a,b; Ren et al., 2019), 2) imposing source-target alignment constraint with fertility (Gu et al., 2017), word mapping (Guo et al., 2019a), attention distillation (Li et al., 2019b) and duration prediction (Ren et al., 2019). 
With the help of those techniques, it is observed that NAR models can match the accuracy of AR models for some tasks (Ren et al., 2019), but the gap still exists for some other tasks (Gu et al., 2017; Chen et al., 2019). Therefore, several questions come out naturally: (1) Why the gap still exists for some tasks? Are some tasks more difficult for NAR generation than others? (2) Why the techniques like knowledge distillation and source-target alignment can help NAR generation? 150 The main difference between AR and NAR models is that NAR models do not consider the dependency among target tokens, which is also the root cause of accuracy drop of NAR models. Thus, to better understand NAR sequence generation and answer the above questions, we need to characterize and quantify the target-token dependency, which turns out to be non-trivial since the sequences could be of different modalities (i.e., speech or text). For this purpose, we design a novel model called COnditional Masked prediction model with MixAttention (CoMMA), inspired by the mix-attention in He et al. (2018) and the masked language modeling in Devlin et al. (2018): in CoMMA, (1) the prediction of one target token can attend to all the source and target tokens with mix-attention, and 2) target tokens are randomly masked with varying probabilities. CoMMA can help us to measure target-token dependency using the ratio of the attention weights on target context over that on full (both source and target) context when predicting a target token: bigger ratio, larger dependency among target tokens. We conduct a comprehensive study in this work and obtain several interesting discoveries that can answer previous questions. First, we find that the rank of the target-token dependency among the three tasks is ASR>NMT>TTS: ASR has the largest dependency while TTS has the smallest. This finding is consistent with the accuracy gap between AR and NAR models and demonstrates the difficulty of NAR generation across tasks. Second, we replace the target sequence of original training data with the sequence generated by an AR model (i.e., through knowledge distillation) and use the new data to train CoMMA; we find that the targettoken dependency is reduced. Smaller target-token dependency makes NAR training easier and thus improves the accuracy. Third, source-target alignment constraint such as explicit duration prediction (Ren et al., 2019) or implicit attention distillation (Li et al., 2019b) also reduces the target-token dependency, thus helping the training of NAR models. The main contributions of this work are as follows: • We design a novel model, conditional masked prediction model with mix-attention (CoMMA), to measure the token dependency for sequence generation. • With CoMMA, we find that: 1) Among the three tasks, ASR is the most difficult and TTS is the least for NAR generation; 2) both knowledge distillation and imposing source-target alignment constraint reduce the target-token dependency, and thus reduce the difficulty of training NAR models. 2 CoMMA In this section, we analyze the token dependency in the target sequence with a novel conditional masked prediction model with mix-attention (CoMMA). We first introduce the design and structure of CoMMA, and then describe how to measure the target token dependency based on CoMMA. 2.1 The Design of CoMMA It is non-trivial to directly measure and compare the target token dependency in different modalities (i.e., speech or text) and different conditional source modalities (i.e., speech or text). 
Therefore, we have several considerations in the design of CoMMA: 1) We use masked language modeling in BERT (Devlin et al., 2018) with source condition to train CoMMA, which can help measure the dependency on target context when predicting the current masked token. 2) In order to ensure the dependency on source and target tokens can be comparable, we use mix-attention (He et al., 2018) to calculate the attention weights on both source and target tokens in a single softmax function.

The model architecture of CoMMA is shown in Figure 1. Specifically, CoMMA differs from standard Transformer (Vaswani et al., 2017) as follows: 1) Some tokens are randomly replaced by a special mask token ⟨M⟩ with probability p, and the model is trained to predict original unmasked tokens. 2) We employ mix-attention mechanism (He et al., 2018) where layer i in the decoder can attend to itself and the layer i in the encoder at the same time and compute the attention weights in a single softmax function. We share the parameters of attention and feed-forward layer between the encoder and decoder. 3) Following He et al. (2018), we add source/target embedding to tell the model whether a token is from the source or target sequence, and also add position embedding with the positions of source and target tokens both starting from zero. 4) The encoder and decoder pre-net (Shen et al., 2018) vary in different tasks: For TTS, encoder pre-net consists of only embedding lookup table, and decoder pre-net consists of 2-layer dense network with ReLU activation. For ASR, encoder pre-net consists of 3-layer 2D convolutional network, and decoder pre-net consists of only embedding lookup table. For NMT, both encoder and decoder pre-net consist of only embedding lookup table.

[Figure 1: The architecture of conditional masked prediction model with mix-attention (CoMMA). (a) The main structure of CoMMA. (b) The input module of CoMMA.]

CoMMA is designed to measure the target token dependency in a variety of sequence generations, including AR (unidirectional) generation, NAR generation, bidirectional generation or even identity copy. To this end, we vary the mask probability p (the ratio of the masked tokens in the whole target tokens1) in a uniform distribution p ∼ U(0.0, 1.0) when training CoMMA. In this way, p = 1 covers NAR generation, p = 0 covers identity copy, and in some cases, p can also cover AR generation.

2.2 How to Measure Target Token Dependency based on CoMMA

To measure the target token dependency, we define a metric called attention density ratio R, which represents the ratio of the attention density (the normalized attention weights) on target context in mix-attention when predicting the target token with a well-trained CoMMA.
We describe the calculation of R in the following steps. First, we define the attention density ratio α for a single target token i as

\alpha_i = \frac{\frac{1}{N}\sum_{j=1}^{N} A_{i,j}}{\frac{1}{N}\sum_{j=1}^{N} A_{i,j} + \frac{1}{M}\sum_{j=N+1}^{N+M} A_{i,j}},  (1)

where A_{i,j} denotes the attention weight from token i to token j in mix-attention, i ∈ [1, N] indexes the target tokens while j ∈ [N+1, N+M] indexes the source tokens, N and M are the lengths of the target and source sequence respectively, and \sum_{j=1}^{N+M} A_{i,j} = 1. α_i represents the ratio of attention density on target context when predicting target token i.

Second, we average the attention density ratio α_i over all the predicted tokens (with mask probability p) in a sentence and get

\frac{1}{|M_p|} \sum_{i \in M_p} \alpha_i,  (2)

where M_p represents the set of masked target tokens under mask probability p and |M_p| denotes the number of tokens in the set.

Third, for a given p, we calculate this ratio over all test data and average the results to get the final attention density ratio

R(p) = \mathrm{Avg}\Big(\frac{1}{|M_p|} \sum_{i \in M_p} \alpha_i\Big).  (3)

We vary p and calculate R(p) to measure the density ratio under different conditions, where a small p represents more target context that can be leveraged and a large p represents less context. In the extreme cases, p = 1 corresponds to NAR generation while p = 0 corresponds to identity copy. Given the proposed attention density ratio R(p) based on CoMMA, we can measure the target token dependency of the NAR model in different tasks, which can help understand a series of important research questions, as we introduce in the following three sections.

1 Considering the continuity of the mel-spectrogram frames in speech sequence, we mask the frames by chunk, each chunk with frame size 10.

Task | NMT | ASR | TTS
AR | Transformer (Vaswani et al., 2017) | Transformer ASR (Karita et al., 2019) | Transformer TTS (Li et al., 2019a)
NAR | NAT (Gu et al., 2017) w/ AC | NAR-ASR (Chen et al., 2019) w/ AC | FastSpeech (Ren et al., 2019)
Table 1: The AR and NAR model we consider in each task. "AC" means attention constraint we mentioned in Section 5.

3 Study on the Difficulty of NAR Generation

In this section, we aim to find out why the gap still exists for ASR and NMT tasks, while in TTS, NAR can catch up with the accuracy of the AR model. We also analyze the causes of the different difficulties across tasks. We start from evaluating the accuracy gap between AR and NAR models for NMT, ASR and TTS, and then measure the token dependency based on our proposed CoMMA.

3.1 The Accuracy Gap

We first train the AR and NAR models in each task and check the accuracy gap between AR and NAR models to measure the difficulty of NAR generation in each task.

Configuration of AR and NAR Model The AR and NAR models we considered are shown in Table 1, where we use Transformer as the AR model and a representative NAR model in each task. For a fair comparison, we make some modifications on the NAR models: 1) For ASR, we train a Transformer ASR first as teacher model and then constrain the attention distributions of NAR-ASR with the alignments converted from the teacher attention weights, which will be introduced and discussed in Section 5. 2) For NMT, we constrain the KL-divergence of the encoder-to-decoder attention distributions between the AR and NAR models following Li et al. (2019b). We also list the hyperparameters of AR and NAR models for each task in Section A.
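Before turning to the datasets, a minimal sketch of how the attention density ratio defined in Eqs. (1)-(3) could be computed from one sentence's mix-attention matrix; the array layout (first N columns over target tokens, last M over source tokens) and all names are assumptions, not the authors' code.

```python
import numpy as np

def attention_density_ratio(A, masked_positions):
    """Average alpha_i (Eqs. 1-2) over the masked target positions of one sentence.

    A: np.ndarray of shape [N, N + M]; row i sums to 1 and holds the mix-attention
       weights of target token i over N target tokens followed by M source tokens.
    masked_positions: indices of the target tokens masked under probability p.
    """
    N = A.shape[0]
    alphas = []
    for i in masked_positions:
        tgt_density = A[i, :N].mean()   # (1/N) * sum over target columns
        src_density = A[i, N:].mean()   # (1/M) * sum over source columns
        alphas.append(tgt_density / (tgt_density + src_density))
    return float(np.mean(alphas))

def R(per_sentence_ratios):
    """Eq. (3): average the per-sentence ratios over the whole test set for a given p."""
    return float(np.mean(per_sentence_ratios))
```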
Datasets and Evaluations for NMT, ASR and TTS We conduct experiments on IWSLT 2014 German-English (De-En) translation dataset2 for NMT, LibriTTS dataset (Zen et al., 2019) for ASR and LJSpeech dataset (Ito) for TTS. For 2https://wit3.fbk.eu/mt.php?release=2014-01 speech data, we transform the raw audio into melspectrograms following Shen et al. (2018) with 50 ms frame size and 12.5 ms hop size. For text data, we tokenize sentences with moses tokenizer3 and then segment into subword symbols using Byte Pair Encoding (BPE) (Sennrich et al., 2015) for subword-level analysis, and convert the text sequence into phoneme sequence with grapheme-to-phoneme conversion (Sun et al., 2019) for phoneme-level analysis. We use BPE for NMT and ASR, while phoneme for TTS by default unless otherwise stated. We train all models on 2 NVIDIA 2080Ti GPUs using Adam optimizer with β1 = 0.9, β2 = 0.98, ε = 10−9 and following the same learning rate schedule in (Vaswani et al., 2017). For ASR, we evaluate word error rate (WER) on test-clean set in LibriTTS dataset. For NMT, we evaluate the BLEU score on IWSLT 2014 De-En test set. For TTS, we randomly split the LJSpeech dataset into 3 sets: 12500 samples for training, 300 samples for validation and 300 samples for testing, and then evaluate the mean opinion score (MOS) on the test set to measure the audio quality. The output mel-spectrograms of TTS model are transformed into audio samples using the pretrained WaveGlow (Prenger et al., 2019). Each audio is listened by at least 20 testers, who are all native English speakers. Task Model Accuracy NMT (BLEU/WER) Transformer 33.90/47.18 NAT 27.12/54.90 ASR (BLEU/WER) Transformer ASR 66.60/20.10 NAR-ASR 39.23/36.20 TTS (MOS) Transformer TTS 3.82 ± 0.08 FastSpeech 3.79 ± 0.12 Table 2: The accuracy gap between NAR and AR models. Results of Accuracy Gap The accuracies of the AR and NAR models in each task are shown in 3https://github.com/moses-smt/mosesdecoder/blob/mast er/scripts/tokenizer/tokenizer.perl 153 Table 2. It can be seen that NAR model can match the accuracy of AR model gap in TTS, while the gap still exists in ASR and NMT. We calculate both the WER and BLEU metrics in ASR and NMT for better comparison. It can be seen that ASR has a larger gap than NMT. Larger accuracy gap may indicate more difficult for NAR generation in this task. Next, we try to understand what factors influence difficulties among different tasks. 3.2 The Token Dependency In the last subsection, we analyze the difficulty of NAR models from the perspective of the accuracy gap. In this subsection, we try to find evidence from the target token dependency, which is supposed to be consistent with the accuracy gap to measure the task difficulty. Configuration of CoMMA We train CoMMA with the same configuration on NMT, ASR and TTS: the hidden size and the feed-forward hidden size and the number of layers are set to 512, 1024 and 6 respectively. We list other hyperparameters of CoMMA in Section B. We also use the same datasets for each task as described in Section 3.1 to train CoMMA. Results of Token Dependency We use the attention density ratio calculated from CoMMA (as described in Section 2.2) to measure the target token dependency and show the results in Figure 2. It can be seen that the rank of attention density ratio R(p) is ASR>NMT>TTS for all p. 
Considering that R(p) measures how much context information from target side is needed to generate a target token, we can see that ASR has more dependency on the target context and less on the source context, while TTS is the opposite, which is consistent with the accuracy gap between AR and NAR models as we described in Section 3.1. As we vary p from 0.1 to 0.5, R(p) decreases for all tasks since more tokens in the target side are masked. We also find that R(p) in NMT decreases quicker than the other two tasks, which indicates that NMT is good at learning from source context when less context information can be leveraged from the target side while R(p) in ASR decreases little. This can also explain why NAR in NMT achieves less gap than ASR. Figure 2: Attention density ratio R(p) under different p in different tasks for performance gap analysis. 4 Study on Knowledge Distillation In the current and next sections, we investigate why some techniques can help NAR generation from the aspect of target token dependency. We only analyze knowledge distillation and attention alignment techniques which are widely used in NAR, but we believe our analysis method can be applied to other NAR techniques, such as iterative refinement (Lee et al., 2018), fine-tuning from an AR model (Guo et al., 2019b) and so on. Most existing NAR models (Oord et al., 2017; Gu et al., 2017; Wang et al., 2019; Guo et al., 2019a,b; Ren et al., 2019) rely on the technique of knowledge distillation, which generates the new target sequence given original source sequence from a pre-trained AR model and trains the NAR model for better accuracy. In this section, we first conduct experiments to verify the accuracy improvements of knowledge distillation. Next, based on our proposed CoMMA, we analyze why knowledge distillation could help NAR models. 4.1 The Effectiveness of Knowledge Distillation Knowledge Distillation for NAR Models Given a well-trained AR model θT and source sequence x ∈X from the original training data, a new target sequence can be generated through y′ ∼P(y|x; θT ). (4) We can use beam search for NMT and ASR and greedy search for TTS to generate y′. Given the set of generated sequence pairs (X, Y′), we train the NAR models with negative log-likelihood loss L((X, Y′); θ) = − X (x,y′)∈(X,Y′) log P(y′|x; θ), (5) 154 where θ is the parameters set of the NAR model. Task Model Accuracy NMT (BLEU) Transformer 33.90 NAT 27.12 NAT w/o KD 21.79 TTS (MOS) Transformer TTS 3.82 ± 0.08 FastSpeech 3.79 ± 0.12 FastSpeech w/o KD 3.58 ± 0.13 Table 3: The comparison between NAR models with and without knowledge distillation. Experimental Results We only conducted knowledge distillation on NMT and TTS since there is no previous works on ASR yet. We train the NAR models in NMT and TTS with raw target token sequence instead of teacher outputs and compare the results with that in Table 2. The accuracy improvements of knowledge distillation are shown in Table 3. It can be seen that knowledge distillation can boost the accuracy of NAR in NMT and TTS, which is consistent with the previous works. 4.2 Why Knowledge Distillation Works Recently, Zhou et al. (2019) find that knowledge distillation can reduce the complexity of data sets and help NAT to better model the variations in the output data. However, this explanation is reasonable on its own, but mainly from the perspective of data level and is not easy to understand. 
In this subsection, we analyze knowledge distillation from a more understandable and intuitive perspective, by observing the change of the token dependency based on our proposed CoMMA. We measure the target token dependency by training CoMMA with the original training data and new data generated through knowledge distillation, respectively. The results are shown in Figure 3. It can be seen that knowledge distillation can decrease the attention density ratio R(p) on both tasks, indicating that knowledge distillation can reduce the dependency on the target-side context when predicting a target token, which can be helpful for NAT model training. 5 Study on Alignment Constraint Without the help of target context, NAR models usually suffer from ambiguous attention to the source context, which affects the accuracy. ReFigure 3: Attention density ratio R(p) for NMT and TTS tasks under different p with and without knowledge distillation, where “KD” means knowledge distillation. cently, many works have proposed a variety of approaches to help with the source-target alignment of NAR models, which can improve the estimation of the soft alignment in attention mechanism model. For example, Li et al. (2019b) constrain the KL-divergence of the encoder-to-decoder attention distributions between the AR and NAR models. Gu et al. (2017) predict the fertility of the source tokens to approximate the alignments between target sequence and source sequence. Guo et al. (2019a) convert the source token to target token with phrase table or embedding mapping for alignments. Ren et al. (2019) predict the duration (the number of mel-spectrograms) of each phoneme. In this section, we first study the effectiveness of alignment constraint for NAR models, and then analyze why alignment constraint can help the NAR models by observing the changes of token dependency based on our proposed CoMMA. 5.1 The Effectiveness of Alignment Constraint Alignment Constraint for NAR Models We choose the attention constraint mechanism which is commonly used based on previous works for each task. For NMT, we follow Li et al. (2019b) to minimize the KL-divergence between the attention distributions of AR and NAR model as follow: Lac = 1 N N X i=1 DKL(A′ i||Ai), (6) where A′ i and Ai denote the source-target attention weights from the AR teacher model and NAR student model respectively. A′, A ⊂RN×M where N 155 and M are the number of tokens in the target and source sequence. For TTS, we follow Ren et al. (2019) to extract the encoder-to-decoder attention alignments from the well-trained AR teacher model and convert them to phoneme duration sequence, and then train the duration predictor to expand the hidden of the source sequence to match the length of target sequence. For ASR, since there is no previous work proposing alignment constraint for NAR, we design a new alignment constraint method and explore its effectiveness. We first calculate the expectation position of teacher’s attention distributions for i-th target token: Ei = PM j=1 j∗A′ i,j and cast it to the nearest integer. Then we constrain the attention weights of i-th target token for NAR model so that it can only attend to the source position between Ei−1 and Ei+1. Specially, the first target token can only attend to the source position between 1 and E2 while the last target token can only attend to the position between EN−1 and M. We apply this alignment constraint for ASR only in the training stage. 
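The expectation-based constraint for NAR-ASR just described can be made concrete with a short sketch: the expected source position of each target token is taken from the teacher's attention, and the student is then restricted to a window around it. This is a simplified rendering under stated assumptions (dense numpy attention matrices, 1-based source positions), not the authors' implementation.

```python
import numpy as np

def attention_window_mask(teacher_attn):
    """Boolean mask restricting target token i to source positions between
    E_{i-1} and E_{i+1}, where E_i is the rounded expected source position
    under the teacher's attention distribution.

    teacher_attn: np.ndarray of shape [N, M]; each row is a normalized distribution.
    Returns: boolean mask of shape [N, M]; True marks allowed source positions.
    """
    N, M = teacher_attn.shape
    positions = np.arange(1, M + 1)
    E = np.rint(teacher_attn @ positions).astype(int)   # E_i = round(sum_j j * A'_{i,j})
    mask = np.zeros((N, M), dtype=bool)
    for i in range(N):
        left = 1 if i == 0 else E[i - 1]                 # first token: from position 1
        right = M if i == N - 1 else E[i + 1]            # last token: up to position M
        lo, hi = min(left, right), max(left, right)
        mask[i, lo - 1:hi] = True                        # 1-based inclusive span -> 0-based slice
    return mask
```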
Task Model Accuracy NMT (BLEU) Transformer 33.90 NAT 27.12 NAT w/o AC 25.03 ASR (WER) Transformer ASR 20.1 NAR-ASR 33.1 NAR-ASR w/o AC 39.23 TTS (MOS) Transformer TTS 3.82 ± 0.08 FastSpeech 3.79 ± 0.12 FastSpeech w/o AC 1.97 ± 0.16 Table 4: The comparison between NAR models with and without alignment constraint. Experimental Results We follow the model configuration and datasets as described in Section 3.1, and explore the accuracy improvements when adding attention constraint to NAR models. The results are shown in Table 4. It can be seen that attention constraint can not only improve the performance of NMT and TTS as previous works (Li et al., 2019b; Ren et al., 2019) demonstrated, but also help the NAR-ASR model achieve better scores. 5.2 Why Alignment Constraint Works We further analyze how alignment constraint could help on NAR models by measuring the changes Figure 4: Attention density ratio R(p) for NMT, ASR and TTS tasks under different p with and without alignment constraint (AC). of token dependency when adding alignment constraint on CoMMA. For simplicity, we use the method described in Equation 6 to help the training of CoMMA, where the teacher model is the AR model and student model is CoMMA. We minimize KL-divergence between the per-head encoder-to-decoder attention distributions of the AR model and CoMMA. First, we normalize the encoder-to-decoder attention weights in each head of mix-attention to convert each row of the attention weights to a distribution: ˆAi,j = Ai,N+j PM k=1 Ai,N+k for each i ∈[1, N], j ∈[1, M], (7) where A ⊂RN×(N+M) is the weights of mixattention described in Section 2.2, ˆA ⊂RN×M is the normalized encoder-to-decoder attention weights, M and N is the length of source and target sequence. Then, we compute the KL-divergence loss for each head as follows: Lac = 1 N N X i=1 DKL(A′ i|| ˆAi), (8) where A′ ⊂RN×M is the encoder-to-decoder attention of AR teacher model. We average Lac over all heads and layers and get the final attention constraint loss for CoMMA. We measure the token dependency by calculating the attention density ratio R(p) based on CoMMA, 156 and show the results in Figure 4. It can be seen that alignment constraint can help reduce ratio R(p) on each task and thus reduce the dependency on target context when predicting target tokens. In the meanwhile, alignment constraint can help the model extract more information from the source context, which can help the learning of NAR models. Another interesting finding is that NAR model in TTS benefits from attention constraint most as shown in Table 4, and in the meanwhile, TTS has the least attention density ratio as shown in Figure 4. These observations suggest that NAR models with small target token dependency could benefit largely from alignment constraint. 6 Related Works Several works try to analyze and understand NAR models on different tasks. We discuss these analyses from the two aspects: knowledge distillation and source-target alignment constraint. Knowledge Distillation Knowledge distillation has long been used to compress the model size (Hinton et al., 2015; Furlanello et al., 2018; Yang et al., 2018; Anil et al., 2018; Li et al., 2017) or transfer the knowledge of teacher model to student model (Tan et al., 2019; Liu et al., 2019a,b), and soon been applied to NAR models (Gu et al., 2017; Oord et al., 2017; Guo et al., 2019a; Wang et al., 2019; Li et al., 2019b; Guo et al., 2019b; Ren et al., 2019) to boost the accuracy. 
Some works focus on studying why knowledge distillation works: Phuong and Lampert (2019) provide some insights into the mechanisms of knowledge distillation by studying the special case of linear and deep linear classifiers and find that data geometry, optimization bias and strong monotonicity determine the success of distillation; Yuan et al. (2019) argue that the success of KD is also due to the regularization of soft targets, which might be as important as the similarity information between categories. However, few works have studied the cause of why knowledge distillation benefits NAR training. Recently, Zhou et al. (2019) investigate why knowledge distillation is important for the training of NAR model in NMT task and find that knowledge distillation can reduce the complexity of data sets and help NAR model to learn the variations in the output data. Li et al. (2019b) explore the causes of the poor performance of the NAR model by observing the attention distributions and hidden states of NAR model. Lee et al. (2018) presents some experiments and analysis to prove the necessity for multiple iterations generation for NAT. They also investigate the effectiveness of knowledge distillation in different task and make the assumption that teacher model can essentially clean the training data so that the distilled NAR model substantially outperforms NAR model trained with raw data. Attention Alignment Constraint Previous work pointed out that adding additional alignment knowledge can improve the estimation of the soft alignment in attention mechanism model. For example, Chen et al. (2016) uses the Viterbi alignments of the IBM model 4 as an additional knowledge during NMT training by calculating the divergence between the attention weights and the statistical alignment information. Compared with AR model, the attention distributions of NAR model are more ambiguous, which leads to the poor performance of the NAR model. Recent works employ attention alignment constraint between the well-trained AR and NAR model to train a better NAR model. Li et al. (2019b) leverages intermediate hidden information from a well-trained AR-NMT teacher model to improve the NAR-NMT model by minimizing KLdivergence between the per-head encoder-decoder attention of the teacher and the student. Ren et al. (2019) choose the encoder-decoder attention head from the AR-TTS teacher as the attention alignments to improve the performance of the NAR model in TTS. 7 Conclusion In this paper, we conducted a comprehensive study on NAR models in NMT, ASR and TTS tasks to analyze several research questions, including the difficulty of NAR generation and why knowledge distillation and alignment constraint can help NAR models. We design a novel CoMMA and a metric called attention density ratio to measure the dependency on target context when predicting a target token, which can analyze these questions in a unified method. Through a series of empirical studies, we demonstrate that the difficulty of NAR generation correlates on the target token dependency, and knowledge distillation as well as alignment constraint reduces the dependency of target tokens and encourages the model to rely more on source context for target token prediction, which improves the 157 accuracy of NAR models. We believe our analyses can shed light on the understandings and further improvements on NAR models. 
Acknowledgments This work was supported in part by the National Key R&D Program of China (Grant No.2018AAA0100603), Zhejiang Natural Science Foundation (LR19F020006), National Natural Science Foundation of China (Grant No.61836002), National Natural Science Foundation of China (Grant No.U1611461), and National Natural Science Foundation of China (Grant No.61751209). This work was also partially funded by Microsoft Research Asia. Thanks Tao Qin for the valuable suggestions, comments and guidance on this paper. References Rohan Anil, Gabriel Pereyra, Alexandre Passos, Robert Ormandi, George E Dahl, and Geoffrey E Hinton. 2018. Large scale distributed neural network training through online distillation. arXiv preprint arXiv:1804.03235. Nanxin Chen, Shinji Watanabe, Jes´us Villalba, and Najim Dehak. 2019. Non-autoregressive transformer automatic speech recognition. arXiv preprint arXiv:1911.04908. Wenhu Chen, Evgeny Matusov, Shahram Khadivi, and Jan-Thorsten Peter. 2016. Guided alignment training for topic-aware neural machine translation. CoRR, abs/1607.01628. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Tommaso Furlanello, Zachary C Lipton, Michael Tschannen, Laurent Itti, and Anima Anandkumar. 2018. Born again neural networks. arXiv preprint arXiv:1805.04770. Jiatao Gu, James Bradbury, Caiming Xiong, Victor OK Li, and Richard Socher. 2017. Nonautoregressive neural machine translation. arXiv preprint arXiv:1711.02281. Junliang Guo, Xu Tan, Di He, Tao Qin, Linli Xu, and Tie-Yan Liu. 2019a. Non-autoregressive neural machine translation with enhanced decoder input. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 3723–3730. Junliang Guo, Xu Tan, Linli Xu, Tao Qin, Enhong Chen, and Tie-Yan Liu. 2019b. Fine-tuning by curriculum learning for non-autoregressive neural machine translation. arXiv preprint arXiv:1911.08717. Tianyu He, Xu Tan, Yingce Xia, Di He, Tao Qin, Zhibo Chen, and Tie-Yan Liu. 2018. Layer-wise coordination between encoder and decoder for neural machine translation. In Advances in Neural Information Processing Systems, pages 7944–7954. Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531. Keith Ito. The lj speech dataset, 2017a. url ttps. keithito. com/LJ-Speech-Dataset. Shigeki Karita, Nanxin Chen, Tomoki Hayashi, Takaaki Hori, Hirofumi Inaguma, Ziyan Jiang, Masao Someki, Nelson Enrique Yalta Soplin, Ryuichi Yamamoto, Xiaofei Wang, et al. 2019. A comparative study on transformer vs rnn in speech applications. arXiv preprint arXiv:1909.06317. Jason Lee, Elman Mansimov, and Kyunghyun Cho. 2018. Deterministic non-autoregressive neural sequence modeling by iterative refinement. arXiv preprint arXiv:1802.06901. Naihan Li, Shujie Liu, Yanqing Liu, Sheng Zhao, Ming Liu, and M Zhou. 2019a. Neural speech synthesis with transformer network. AAAI. Yuncheng Li, Jianchao Yang, Yale Song, Liangliang Cao, Jiebo Luo, and Li-Jia Li. 2017. Learning from noisy labels with distillation. In ICCV, pages 1928– 1936. Zhuohan Li, Zi Lin, Di He, Fei Tian, Tao Qin, Liwei Wang, and Tie-Yan Liu. 2019b. Hint-based training for non-autoregressive machine translation. arXiv preprint arXiv:1909.06708. Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. 2019a. 
Improving multi-task deep neural networks via knowledge distillation for natural language understanding. arXiv preprint arXiv:1904.09482. Yuchen Liu, Hao Xiong, Zhongjun He, Jiajun Zhang, Hua Wu, Haifeng Wang, and Chengqing Zong. 2019b. End-to-end speech translation with knowledge distillation. arXiv preprint arXiv:1904.08075. Aaron van den Oord, Yazhe Li, Igor Babuschkin, Karen Simonyan, Oriol Vinyals, Koray Kavukcuoglu, George van den Driessche, Edward Lockhart, Luis C Cobo, Florian Stimberg, et al. 2017. Parallel wavenet: Fast high-fidelity speech synthesis. arXiv preprint arXiv:1711.10433. Mary Phuong and Christoph Lampert. 2019. Towards understanding knowledge distillation. In International Conference on Machine Learning, pages 5142–5151. Ryan Prenger, Rafael Valle, and Bryan Catanzaro. 2019. Waveglow: A flow-based generative network for speech synthesis. In ICASSP 20192019 IEEE International Conference on Acoustics, 158 Speech and Signal Processing (ICASSP), pages 3617–3621. IEEE. Yi Ren, Yangjun Ruan, Xu Tan, Tao Qin, Sheng Zhao, Zhou Zhao, and Tie-Yan Liu. 2019. Fastspeech: Fast, robust and controllable text to speech. arXiv preprint arXiv:1905.09263. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909. Jonathan Shen, Ruoming Pang, Ron J Weiss, Mike Schuster, Navdeep Jaitly, Zongheng Yang, Zhifeng Chen, Yu Zhang, Yuxuan Wang, Rj Skerrv-Ryan, et al. 2018. Natural tts synthesis by conditioning wavenet on mel spectrogram predictions. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4779–4783. IEEE. Hao Sun, Xu Tan, Jun-Wei Gan, Hongzhi Liu, Sheng Zhao, Tao Qin, and Tie-Yan Liu. 2019. Token-level ensemble distillation for grapheme-to-phoneme conversion. In INTERSPEECH. Xu Tan, Yi Ren, Di He, Tao Qin, Zhou Zhao, and TieYan Liu. 2019. Multilingual neural machine translation with knowledge distillation. arXiv preprint arXiv:1902.10461. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Yiren Wang, Fei Tian, Di He, Tao Qin, ChengXiang Zhai, and Tie-Yan Liu. 2019. Non-autoregressive machine translation with auxiliary regularization. In AAAI. Chenglin Yang, Lingxi Xie, Siyuan Qiao, and Alan Yuille. 2018. Knowledge distillation in generations: More tolerant teachers educate better students. arXiv preprint arXiv:1805.05551. Li Yuan, Francis EH Tay, Guilin Li, Tao Wang, and Jiashi Feng. 2019. Revisit knowledge distillation: a teacher-free framework. arXiv preprint arXiv:1909.11723. Heiga Zen, Viet Dang, Rob Clark, Yu Zhang, Ron J Weiss, Ye Jia, Zhifeng Chen, and Yonghui Wu. 2019. Libritts: A corpus derived from librispeech for textto-speech. arXiv preprint arXiv:1904.02882. Chunting Zhou, Graham Neubig, and Jiatao Gu. 2019. Understanding knowledge distillation in nonautoregressive machine translation. arXiv preprint arXiv:1911.02727. 159 A Model Settings of NAR and AR We show the model settings of NAR and AR in Table 5. The hyperpameters in pre-net follow the methods in each task listed in Table 1 in the main part of the paper. 
Transformer Hyperparameter | NMT / NAT | ASR / NAR-ASR | TTS / FastSpeech
Embedding Dimension | 512 | 512 | 512
Encoder Layers | 6 | 6 | 6
Encoder Hidden | 512 | 512 | 512
Encoder Filter Size | 1024 | 1024 | 1024
Encoder Heads | 4 | 4 | 4
Decoder Layers | 6 | 6 | 6
Decoder Hidden Size | 512 | 512 | 512
Decoder Filter Size | 1024 | 1024 | 1024
Decoder Heads | 4 | 4 | 4
Dropout | 0.2 | 0.1 | 0.2
Batch Size | 64 | 32 | 32
Base Learning Rate | 1e-3 | 1e-3 | 1e-3
Table 5: Hyperparameters of transformer-based AR and NAR models.

B Model Settings of CoMMA

We show the model settings of CoMMA in Table 6.

Name | Hyperparameter
Embedding Dimension | 512
Encoder Layers | 6
Encoder Hidden | 512
Encoder Filter Size | 1024
Encoder Heads | 4
Decoder Layers | 6
Decoder Hidden Size | 512
Decoder Filter Size | 1024
Decoder Heads | 4
Dropout | 0.1
Batch Size | 64
Base Learning Rate | 1e-3
Table 6: Hyperparameters of CoMMA.
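For reference, the Table 6 settings could be bundled into a single configuration object; a minimal sketch with illustrative names:

```python
from dataclasses import dataclass

@dataclass
class CoMMAConfig:
    """Hyperparameters from Table 6 (field names are illustrative, not the authors' code)."""
    embedding_dim: int = 512
    encoder_layers: int = 6
    encoder_hidden: int = 512
    encoder_filter_size: int = 1024
    encoder_heads: int = 4
    decoder_layers: int = 6
    decoder_hidden: int = 512
    decoder_filter_size: int = 1024
    decoder_heads: int = 4
    dropout: float = 0.1
    batch_size: int = 64
    base_learning_rate: float = 1e-3
```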
2020
15
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1650–1655 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 1650 Language-aware Interlingua for Multilingual Neural Machine Translation Changfeng Zhu, Heng Yu, Shanbo Cheng, Weihua Luo Machine Intelligence Technology Lab, Alibaba Group {changfeng.zcf,yuheng.yh,shanbo.csb,weihua.luowh} @alibaba-inc.com Abstract Multilingual neural machine translation (NMT) has led to impressive accuracy improvements in low-resource scenarios by sharing common linguistic information across languages. However, the traditional multilingual model fails to capture the diversity and specificity of different languages, resulting in inferior performance compared with individual models that are sufficiently trained. In this paper, we incorporate a language-aware interlingua into the Encoder-Decoder architecture. The interlingual network enables the model to learn a language-independent representation from the semantic spaces of different languages, while still allowing for language-specific specialization of a particular language-pair. Experiments show that our proposed method achieves remarkable improvements over state-of-the-art multilingual NMT baselines and produces comparable performance with strong individual models. 1 Introduction Neural Machine Translation (NMT) (Sutskever et al., 2014; Vaswani et al., 2017) has significantly improved the translation quality due to its end-to-end modeling and continuous representation. While conventional NMT performs single pair translation well, training a separate model for each language pair is resource consuming, considering there are thousands of languages in the world. Therefore multilingual NMT is introduced to handle multiple language pairs in one model, reducing the online serving and offline training cost. Furthermore, the multilingual NMT framework facilitates the cross-lingual knowledge transfer to improve translation performance on low resource language pairs (Wang et al., 2019). Despite all the mentioned advantages, multilingual NMT remains a challenging task since the language diversity and model capacity limitations lead to inferior performance against individual models that are sufficiently trained. So recent efforts in multilingual NMT mainly focus on enlarging the model capacity, either by introducing multiple Encoders and Decoders to handle different languages (Firat et al., 2016; Zoph and Knight, 2016), or enhancing the attention mechanism with language-specific signals (Blackwood et al., 2018). On the other hand, there have been some efforts to model the specificity of different languages. Johnson et al. (2017) and Ha et al. (2016) tackle this by simply adding some pre-designed tokens at the beginning of the source/target sequence, but we argue that such signals are not strong enough to learn enough language-specific information to transform the continuous representation of each language into the shared semantic space based on our observations. In this paper, we incorporate a language-aware Interlingua module into the Encoder-Decoder architecture. It explicitly models the shared semantic space for all languages and acts as a bridge between the Encoder and Decoder network. Specifically, we first introduce a language embedding to represent unique characteristics of each language and an interlingua embedding to capture the common semantics across languages. 
Then we use the two embeddings to augment the self-attention mechanism which transforms the Encoder representation into the shared semantic space. To minimize the information loss and keep the semantic consistency during transformation, we also introduce reconstruction loss and semantic consistency loss into the training objective. Besides, to further enhance the language-specific signal we incorporate language-aware positional embedding for both Encoder and Decoder, and take the language embedding as the initial state of the target side. 1651 Figure 1: Our Encoder-Interlingua-Decoder architecture with a language-aware interlingua neural network. We conduct experiments on both standard WMT data sets and large scale in-house data sets. And our proposed model achieves remarkable improvements over state-of-the-art multilingual NMT baselines and produces comparable performance with sufficiently trained individual models. 2 Model Architecture As shown in Figure 1, we propose a universal Encoder-Interlingua-Decoder architecture for multilingual NMT. The Encoder and Decoder are identical to the generic self-attention TRANSFORMER (Vaswani et al., 2017), except some modifications in the positional embedding. The Interlingua is shared across languages, but with language-specific embedding as input, so we call it language-aware Interlingua. The Interlingua module is composed of a stack of N identical layers. Each layer has a multi-head attention sub-layer and a feed-forward sub-layer. 2.1 Interlingua The Interlingua module uses multi-head attention mechanism, mapping the Encoder output Henc of different languages to a language-independent representation I. I = FFN(ATT(Q, K, V )) (1) Q = FFN(Lemb, Iemb) ∈Rd×r (2) K, V = Henc ∈Rd×n (3) The Henc denotes the hidden states out of the Encoder, while the d is the hidden size, and the n denotes the length of the source sentence. ATT(.) is the multi-head attention mechanism (Vaswani et al., 2017). The (K, V ) here are computed from the hidden states of the Encoder output Henc. The Q is composed of two parts in simple linear combination. One part is from the language-specific part Lemb, and the other part is a shared matrix Iemb, which we called interlingua embedding. Note that, the interlingua embedding Iemb has a fixed size of [d×r]. the i-th column of Iemb represents a initial semantic subspace that guides what semantic information of the Henc should be attended to at the corresponding position i of the Interlingua output. The r means every Encoder Henc will be mapped into a fixed size representation of r hidden states, and it is set to 10 during all of our experiments, similar to the work of (V´azquez et al., 2018). By incorporating a shared interlingua embedding, we expect that it can exploit the semantics of various subspaces from encoded representation, and the same semantic components of different sentences from both same and different languages should be mapped into the same position i ∈[1, r]. Language embedding Lemb is used as an indicator for the Interlingua that which language it is attending to, as different languages have their own characteristics. So we call the module language-aware Interlingua. FFN(.) is a simple position-wise feed-forward network. By introducing Interlingua module into the Encoder-Decoder structure, we explicitly model the intermediate semantic. In this framework, the language-sensitive Enc is to model the characteristics of each language, and the language-independent Interlingua to enhance cross-language knowledge transfer. 
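A minimal PyTorch-style sketch of one interlingua layer as described by Eq. (1)-(3): r fixed query slots, built from the shared interlingua embedding combined with a language embedding, attend over the encoder states. The concatenate-then-project query construction and all module names are assumptions; the paper only specifies a simple linear combination of L_emb and I_emb.

```python
import torch
import torch.nn as nn

class LanguageAwareInterlingua(nn.Module):
    """Illustrative sketch of one interlingua layer; not the authors' implementation."""

    def __init__(self, d_model=512, n_heads=8, r=10, n_languages=4):
        super().__init__()
        self.interlingua_emb = nn.Parameter(torch.randn(r, d_model))  # shared I_emb, [r, d]
        self.lang_emb = nn.Embedding(n_languages, d_model)            # language-specific L_emb
        self.query_proj = nn.Linear(2 * d_model, d_model)             # combine L_emb and I_emb
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.ReLU(),
                                 nn.Linear(4 * d_model, d_model))

    def forward(self, h_enc, lang_id):
        # h_enc: [batch, n, d] encoder states; lang_id: [batch] language indices.
        batch, r = h_enc.size(0), self.interlingua_emb.size(0)
        lang = self.lang_emb(lang_id).unsqueeze(1).expand(batch, r, -1)   # [batch, r, d]
        shared = self.interlingua_emb.unsqueeze(0).expand(batch, r, -1)   # [batch, r, d]
        q = self.query_proj(torch.cat([lang, shared], dim=-1))            # Q = FFN(L_emb, I_emb)
        attended, _ = self.attn(q, h_enc, h_enc)                          # ATT(Q, K=V=H_enc)
        return self.ffn(attended)                                         # I: [batch, r, d]
```

The fixed number of query slots (r = 10 in the paper) is what makes the output a language-independent, fixed-size representation regardless of source sentence length.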
1652 2.2 Language Embedding as Initial State The universal Encoder-Decoder model (Johnson et al., 2017) use a special token (e.g. <2en>) at the beginning of the source sentence, which gives a signal to the Decoder to translate sentences into the right target language. But it is a weak signal as the language information must go through N = 6 Encoder self-attention, and then N = 6 EncoderDecoder attention before the Decoder attends to it. Inspired by Wang et al. (2018), we build a language embedding explicitly, and directly use it as the initial state of the Decoder. 2.3 Language-aware Positional Embedding Considering the structural differences between languages, each language should have a specific positional embedding. Wang et al. (2018) use trigonometric functions with different orders or offsets in the Decoder for different language. Inspired by this, we provide language-aware positional embedding for both Encoder and Decoder by giving language-specific offsets to the original sine(x), cosine(x) functions in TRANSFORMER. The offset is calculated from WLLemb, where WL is a weight matrix and Lemb is the language embedding. 2.4 Training Objective We introduce three types of training objectives in our model, similar to (Escolano et al., 2019). (i) Translation objective: Generally, a bilingual NMT model adopts the cross-entropy loss as the training objective, which we denote as Ls2t, meanwhile, we incorporate another loss Lt2s for translation from the target to the source. (ii) Reconstruction objective: The Interlingua transforms the Encoder output into an intermediate representation I. During translation, the Decoder only uses the I instead of any Encoder information. Inspired by Lample et al. (2017), Tu et al. (2017) and Lample et al. (2018), we incorporate an reconstruction loss for the purpose of minimizing information loss. We denote the X′ = Decoder(Interlingua(Encoder(X))) as the reconstruction of X. So we employ crossentropy between X′ and X as our reconstruction loss, and denote Ls2s for the source, Lt2t for the target. (iii) Semantic consistency objective: Obviously, sentences from different languages with the same semantics should have the same intermediate representation. So we leverage a simple but effective method, cosine similarity to measure the consistency. Similar objectives were incorporated in zero-shot translation (Al-Shedivat and Parikh, 2019; Arivazhagan et al., 2019) sim(Is, It) = 1 r r X i=1 Is i · It i ∥Is i ∥∥It i∥ (4) Where, Is and It denote the Interlingua representation of the source and target sides respectively. Ii is the i-th column of matrix I. Ldist = 1−sim(Is, It) is used as distance loss in our training objective. Finally, the objective function of our learning algorithm is thus: L = Ls2t + Lt2s + Ls2s + Lt2t + Ldist (5) 3 Experiments 3.1 Experimental Settings We conduct our experiments on both WMT data and in-house data. For WMT data, we use the WMT13 English-French (En-Fr) and EnglishSpanish (En-Es) data. The En-Fr and En-Es data consist of 18M and 15M sentence pairs respectively. We use newstest2012 and newstest2013 as our validation set and test set. Our in-house data contains about 130M parallel sentences for each language pair in En-Fr, En-Es, En-Pt (Portuguese), and 80M for En-Tr (Turkish). During all our experiments, we follow the settings of TRANSFORMER-base (Vaswani et al., 2017) with hidden/embedding size 512, 6 hidden layers and 8 attention heads. We set 3 layers for Interlingua, and r = 10 similar to the work of (V´azquez et al., 2018). 
We apply sub-word NMT (Sennrich et al., 2015), where a joint BPE model is trained for all languages with 50,000 operations. We used a joint vocabulary of 50,000 sub-words for all language pairs. 3.2 Experimental Results 3.2.1 Multilingual NMT vs Bilingual NMT We take the UNIV model introduced by Johnson et al. (2017) as our multilingual NMT baseline, and individual models trained for each language pair as our bilingual NMT baseline. The experimental results on WMT data are shown in Table 1. Compared with the UNIV 1653 one-to-many many-to-one zero-shot En-Fr En-Es AVG Fr-En Es-En AVG Fr-Es Es-Fr AVG INDIV/Pivot 35.09 34.54 34.82 32.91 33.48 33.20 30.36 31.64 31.00 UNIV 33.72 32.78 33.25 32.11 32.38 32.25 15.20 16.18 15.69 INTL 34.15 33.67 33.91 33.68 33.97 33.83 22.48 23.92 23.20 INTL+REC 34.97 34.28 34.63 33.72 34.10 33.91 23.69 25.16 24.43 INTL+SIM 34.09 33.56 33.83 33.54 33.95 33.75 25.93 26.81 26.37 INTL+REC+SIM 34.83 34.15 34.49 33.63 34.06 33.85 26.87 27.24 27.01 Table 1: BLEU scores on newstest2013. INDIV denotes direct model. Pivot is bridge translation system; UNIV denotes the universal framework introduced by Google (Johnson et al., 2017), but with a 9-layer Encoder. INTL refers to Interlingua model with only translation objective, and REC, SIM represent the reconstruction objective and the semantic consistency objective respectively. one-to-many many-to-one En-Fr En-Es En-Pt En-Tr AVG Fr-En Es-En Pt-En Tr-En AVG INDIV 53.96 34.53 52.97 40.14 45.40 59.01 36.92 53.87 38.63 47.11 UNIV 53.12 34.03 52.98 39.43 44.89 59.25 37.36 54.62 38.32 47.39 Ours 53.91 34.71 53.95 40.13 45.68 60.15 38.27 55.57 38.77 48.19 Table 2: BLEU scores on the 470M in-house data of four language pairs. Ours denotes Interlingua model with all training objectives model (Johnson et al., 2017), our model get statistically significant improvements in both manyto-one and one-to-many translation directions on WMT data. Note that we set the Encoder of the UNIV model to 9 layers, which makes it comparable to this work in the term of model size. Compared with the individual models, our model is slightly better for Fr/Es-En in many-to-one scenario. In the one-to-many scenario, the individual models get the best BLEU score, while our model outperforms the universal model in all language pairs. Similarly, the experimental results on in-house large-scale data are shown in Table 2. In one-to-many settings, our model acquires comparable BLEU scores with the bilingual NMT baselines (Individual model), and around 1 BLEU point improvement in En-Pt translation. Our model gets the best BLEU score in many-toone directions for all language pairs. Besides, the proposed model significantly exceeds the multilingual baseline (Universal model) in all directions. The results show that multilingual NMT models perform better in big data scenarios. This might the reason that intermediate representation can be trained more fully and stronger in a large-scale setting. 3.2.2 Zero-shot Translation To examine whether our language-aware Interlingua can help cross-lingual knowledge transfer, we perform zero-shot translation on WMT data. The Fr-Es and Es-Fr translation directions are the zeroshot translations. As shown in Table 1, our method yields more than 10 BLEU points improvement compared with the universal Encoder-Decoder approach and significantly shortens the gap with sufficiently trained individual models. 3.2.3 Ablation study on training objectives We further verify the impact of different training objectives in Table 1. 
Compared with the INTL baseline, the REC training objective can further improve the translation quality of both supervised and zero-shot language pairs. However, the SIM objective contributes to zero-shot translation quality significantly, with a slight decrease in supervised language pairs. The integration of both REC and SIM in INTL ultimately achieves balance increments between supervised and zero-shot language pairs. This suggests that constraints on Interlingua can lead to better intermediate semantic representations and translation quality. 1654 4 Related Work Multilingual NMT is first proposed by Dong et al. (2015) in a one-to-many scenario and generalized by Firat et al. (2016) to many-to-many scenario. Multilingual NMT suffered from the language diversity and model capacity problem. So one direction is to enlarge the model capacity, such as introducing multiple Encoders and Decoders to handle different languages (Luong et al., 2015; Dong et al., 2015; Firat et al., 2016; Zoph and Knight, 2016), or enhancing the attention mechanism with language-specific signals (Blackwood et al., 2018). The other direction is aimed at a unified framework to handle all language pairs (Ha et al., 2016; Johnson et al., 2017). They try to handle diversity by enhancing language-specific signals, by adding designed language tokens (Ha et al., 2016) or language-dependent positional encoding (Wang et al., 2018). Our work follows the second line by explicitly building a languageaware Interlingua network which provides a much stronger language signal than the previous works. In regards to generating language-independent representation, Lu et al. (2018) and V´azquez et al. (2018) both attempted to build a similar language-independent representation. However, their work is all based on multiple languagedependent LSTM Encoder-Decoders, which significantly increase the model complexity. And they don’t have the specially designed training objective to minimize the information loss and keep the semantic consistency. Whereas our work is more simple and effective in these regards and testified on a much stronger TRANSFORMER based system. 5 Conclusion We have introduced a language-aware Interlingua module to tackle the language diversity problem for multilingual NMT. Experiments show that our method achieves remarkable improvements over state-of-the-art multilingual NMT baselines and produces comparable performance with strong individual models. References Maruan Al-Shedivat and Ankur P Parikh. 2019. Consistency by agreement in zero-shot neural machine translation. arXiv preprint arXiv:1904.02338. Naveen Arivazhagan, Ankur Bapna, Orhan Firat, Roee Aharoni, Melvin Johnson, and Wolfgang Macherey. 2019. The missing ingredient in zeroshot neural machine translation. arXiv preprint arXiv:1903.07091. Graeme Blackwood, Miguel Ballesteros, and Todd Ward. 2018. Multilingual neural machine translation with task-specific attention. arXiv preprint arXiv:1806.03280. Daxiang Dong, Hua Wu, Wei He, Dianhai Yu, and Haifeng Wang. 2015. Multi-task learning for multiple language translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), volume 1, pages 1723–1732. Carlos Escolano, Marta R Costa-juss`a, and Jos´e AR Fonollosa. 2019. From bilingual to multilingual neural machine translation by incremental training. 
In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop, pages 236–242. Orhan Firat, Kyunghyun Cho, and Yoshua Bengio. 2016. Multi-way, multilingual neural machine translation with a shared attention mechanism. arXiv preprint arXiv:1601.01073. Thanh-Le Ha, Jan Niehues, and Alexander Waibel. 2016. Toward multilingual neural machine translation with universal encoder and decoder. arXiv preprint arXiv:1611.04798. Melvin Johnson, Mike Schuster, Quoc V Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Vi´egas, Martin Wattenberg, Greg Corrado, et al. 2017. Google’s multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics, 5:339–351. Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2017. Unsupervised machine translation using monolingual corpora only. arXiv preprint arXiv:1711.00043. Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018. Phrase-based & neural unsupervised machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5039–5049. Yichao Lu, Phillip Keung, Faisal Ladhak, Vikas Bhardwaj, Shaonan Zhang, and Jason Sun. 2018. A neural interlingua for multilingual machine translation. arXiv preprint arXiv:1804.08198. Minh-Thang Luong, Quoc V Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. 2015. Multi-task sequence to sequence learning. arXiv preprint arXiv:1511.06114. 1655 Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104–3112. Zhaopeng Tu, Yang Liu, Lifeng Shang, Xiaohua Liu, and Hang Li. 2017. Neural machine translation with reconstruction. In Thirty-First AAAI Conference on Artificial Intelligence. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Ra´ul V´azquez, Alessandro Raganato, J¨org Tiedemann, and Mathias Creutz. 2018. Multilingual nmt with a language-independent attention bridge. arXiv preprint arXiv:1811.00498. Xinyi Wang, Hieu Pham, Philip Arthur, and Graham Neubig. 2019. Multilingual neural machine translation with soft decoupled encoding. arXiv preprint arXiv:1902.03499. Yining Wang, Jiajun Zhang, Feifei Zhai, Jingfang Xu, and Chengqing Zong. 2018. Three strategies to improve one-to-many multilingual translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2955– 2960. Barret Zoph and Kevin Knight. 2016. Multisource neural translation. arXiv preprint arXiv:1601.00710.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1656–1671, July 5 - 10, 2020. ©2020 Association for Computational Linguistics

On the Limitations of Cross-lingual Encoders as Exposed by Reference-Free Machine Translation Evaluation

Wei Zhao†, Goran Glavaš‡, Maxime PeyrardΦ, Yang Gao⋆, Robert WestΦ, Steffen Eger†
† Technische Universität Darmstadt  ‡ University of Mannheim, Germany  Φ EPFL, Switzerland  ⋆ Royal Holloway University of London, UK
{zhao,eger}@aiphes.tu-darmstadt.de, [email protected], [email protected], {maxime.peyrard,robert.west}@epfl.ch

Abstract

Evaluation of cross-lingual encoders is usually performed either via zero-shot cross-lingual transfer in supervised downstream tasks or via unsupervised cross-lingual textual similarity. In this paper, we concern ourselves with reference-free machine translation (MT) evaluation, where we directly compare source texts to (sometimes low-quality) system translations, which represents a natural adversarial setup for multilingual encoders. Reference-free evaluation holds the promise of web-scale comparison of MT systems. We systematically investigate a range of metrics based on state-of-the-art cross-lingual semantic representations obtained with pretrained M-BERT and LASER. We find that they perform poorly as semantic encoders for reference-free MT evaluation and identify their two key limitations, namely, (a) a semantic mismatch between representations of mutual translations and, more prominently, (b) the inability to punish "translationese", i.e., low-quality literal translations. We propose two partial remedies: (1) post-hoc re-alignment of the vector spaces and (2) coupling of semantic-similarity based metrics with target-side language modeling. In segment-level MT evaluation, our best metric surpasses reference-based BLEU by 5.7 correlation points. We make our MT evaluation code available at https://github.com/AIPHES/ACL20-Reference-Free-MT-Evaluation.

1 Introduction

A standard evaluation setup for supervised machine learning (ML) tasks assumes an evaluation metric which compares a gold label to a classifier prediction. This setup assumes that the task has clearly defined and unambiguous labels and, in most cases, that an instance can be assigned few labels. These assumptions, however, do not hold for natural language generation (NLG) tasks like machine translation (MT) (Bahdanau et al., 2015; Johnson et al., 2017) and text summarization (Rush et al., 2015; Tan et al., 2017), where we do not predict a single discrete label but generate natural language text. Thus, the set of labels for NLG is neither clearly defined nor finite. Yet, the standard evaluation protocols for NLG still predominantly follow the described default paradigm: (1) evaluation datasets come with human-created reference texts and (2) evaluation metrics, e.g., BLEU (Papineni et al., 2002) or METEOR (Lavie and Agarwal, 2007) for MT and ROUGE (Lin and Hovy, 2003) for summarization, count the exact "label" (i.e., n-gram) matches between reference and system-generated text. In other words, established NLG evaluation compares semantically ambiguous labels from an unbounded set (i.e., natural language texts) via hard symbolic matching (i.e., string overlap). The first remedy is to replace the hard symbolic comparison of natural language "labels" with a soft comparison of texts' meaning, using semantic vector space representations.
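To make the contrast between hard symbolic matching and soft semantic comparison concrete, the following toy sketch (not from the paper; the sentences and the two-dimensional "embeddings" are invented, standing in for a trained encoder) shows how a paraphrase receives little credit under exact unigram overlap while still scoring high under an embedding-based cosine comparison.

```python
# Toy illustration: hard unigram matching vs. soft embedding-based comparison.
import numpy as np

def unigram_overlap(hyp, ref):
    """Fraction of hypothesis tokens that literally appear in the reference."""
    hyp_tokens, ref_tokens = hyp.split(), set(ref.split())
    return sum(t in ref_tokens for t in hyp_tokens) / len(hyp_tokens)

# Hand-made 2-D vectors, purely for illustration.
TOY_VECTORS = {
    "the": [0.1, 0.1], "a": [0.1, 0.1],
    "cat": [0.9, 0.2], "feline": [0.85, 0.25],
    "sat": [0.3, 0.8], "rested": [0.35, 0.75],
    "on": [0.0, 0.1], "mat": [0.6, 0.6], "rug": [0.55, 0.65],
}

def embed(sentence):
    """Average of toy word vectors -- a stand-in for a real sentence encoder."""
    vecs = [TOY_VECTORS[t] for t in sentence.split() if t in TOY_VECTORS]
    return np.mean(vecs, axis=0)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

ref = "the cat sat on the mat"
hyp = "a feline rested on the rug"
print("hard unigram overlap:", unigram_overlap(hyp, ref))        # low: few exact matches
print("soft cosine similarity:", cosine(embed(hyp), embed(ref)))  # high: similar meaning
```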
Recently, a number of MT evaluation methods appeared focusing on semantic comparison of reference and system translations (Shimanaka et al., 2018; Clark et al., 2019; Zhao et al., 2019). While these correlate better than n-gram overlap metrics with human assessments, they do not address inherent limitations stemming from the need for reference translations, namely: (1) references are expensive to obtain; (2) they assume a single correct solution and bias the evaluation, both automatic and human (Dreyer and Marcu, 2012; Fomicheva and Specia, 2016); and (3) they limit MT evaluation to language pairs with available parallel data. Reliable reference-free evaluation metrics, directly measuring the (semantic) correspondence between the source language text and system translation, would remove the need for human references and allow for unlimited MT evaluations: any monolingual corpus could be used for evaluating MT systems. However, the proposals of reference-free MT evaluation metrics have been few and far between, and they have required either non-negligible supervision (i.e., human translation quality labels) (Specia et al., 2010) or language-specific preprocessing like semantic parsing (Lo et al., 2014; Lo, 2019), both hindering the wide applicability of the proposed metrics. Moreover, they have also typically exhibited performance levels well below those of standard reference-based metrics (Ma et al., 2019). In this work, we comparatively evaluate a number of reference-free MT evaluation metrics that build on the most recent developments in multilingual representation learning, namely cross-lingual contextualized embeddings (Devlin et al., 2019) and cross-lingual sentence encoders (Artetxe and Schwenk, 2019). We investigate two types of cross-lingual reference-free metrics: (1) Soft token-level alignment metrics find the optimal soft alignment between source sentence and system translation using Word Mover's Distance (WMD) (Kusner et al., 2015). Zhao et al. (2019) recently demonstrated that WMD operating on BERT representations (Devlin et al., 2019) substantially outperforms baseline MT evaluation metrics in the reference-based setting. In this work, we investigate whether WMD can yield comparable success in the reference-free (i.e., cross-lingual) setup. (2) Sentence-level similarity metrics measure the similarity between sentence representations of the source sentence and system translation using cosine similarity. Our analysis yields several interesting findings. (i) We show that, unlike in the monolingual reference-based setup, metrics that operate on contextualized representations generally do not outperform the symbolic matching metrics like BLEU that operate in the reference-based setting. (ii) We identify two reasons for this failure: (a) first, cross-lingual semantic mismatch, especially for multilingual BERT (M-BERT), which constructs a shared multilingual space in an unsupervised fashion, without any direct bilingual signal; (b) second, the inability of the state-of-the-art cross-lingual metrics based on multilingual encoders to adequately capture and punish "translationese", i.e., literal word-by-word translations of the source sentence—as translationese is an especially persistent property of MT systems, this problem is particularly troubling in our context of reference-free MT evaluation. (iii) We show that by executing an additional weakly-supervised cross-lingual re-mapping step, we can to some extent alleviate both previous issues.
(iv) Finally, we show that the combination of cross-lingual reference-free metrics and language modeling on the target side (which is able to detect “translationese”), surpasses the performance of reference-based baselines. Beyond designating a viable prospect of webscale domain-agnostic MT evaluation, our findings indicate that the challenging task of reference-free MT evaluation is able to expose an important limitation of current state-of-the-art multilingual encoders, i.e., the failure to properly represent corrupt input, that may go unnoticed in simpler evaluation setups such as zero-shot cross-lingual text classification or measuring cross-lingual text similarity not involving “adversarial” conditions. We believe this is a promising direction for nuanced, fine-grained evaluation of cross-lingual representations, extending the recent benchmarks which focus on zeroshot transfer scenarios (Hu et al., 2020). 2 Related Work Manual human evaluations of MT systems undoubtedly yield the most reliable results, but are expensive, tedious, and generally do not scale to a multitude of domains. A significant body of research is thus dedicated to the study of automatic evaluation metrics for machine translation. Here, we provide an overview of both reference-based MT evaluation metrics and recent research efforts towards reference-free MT evaluation, which leverage cross-lingual semantic representations and unsupervised MT techniques. Reference-based MT evaluation. Most of the commonly used evaluation metrics in MT compare system and reference translations. They are often based on surface forms such as n-gram overlaps like BLEU (Papineni et al., 2002), SentBLEU, NIST (Doddington, 2002), chrF++ (Popovi´c, 2017) or METEOR++(Guo and Hu, 2019). They have been extensively tested and compared in recent WMT metrics shared tasks (Bojar et al., 2017a; Ma et al., 2018a, 2019). These metrics, however, operate at the surface level, and by design fail to recognize semantic equivalence lacking lexical overlap. To overcome these limitations, some research efforts exploited static word embeddings (Mikolov et al., 2013b) and trained embedding-based supervised metrics on sufficiently large datasets with available human judgments of translation quality (Shimanaka 1658 et al., 2018). With the development of contextual word embeddings (Peters et al., 2018; Devlin et al., 2019), we have witnessed proposals of semantic metrics that account for word order. For example, Clark et al. (2019) introduce a semantic metric relying on sentence mover’s similarity and the contextualized ELMo embeddings (Peters et al., 2018). Similarly, Zhang et al. (2019) describe a reference-based semantic similarity metric based on contextualized BERT representations (Devlin et al., 2019). Zhao et al. (2019) generalize this line of work with their MoverScore metric, which computes the mover’s distance, i.e., the optimal soft alignment between tokens of the two sentences, based on the similarities between their contextualized embeddings. Mathur et al. (2019) train a supervised BERT-based regressor for reference-based MT evaluation. Reference-free MT evaluation. Recently, there has been a growing interest in reference-free MT evaluation (Ma et al., 2019), also referred to as “quality estimation” (QE) in the MT community. In this setup, evaluation metrics semantically compare system translations directly to the source sentences. 
The attractiveness of automatic reference-free MT evaluation is obvious: it does not require any human effort or parallel data. To approach this task, Popović et al. (2011) exploit a bag-of-words translation model to estimate translation quality, which sums over the likelihoods of aligned word pairs between source and translation texts. Specia et al. (2013) estimate translation quality using language-agnostic linguistic features extracted from source language texts and system translations. Lo et al. (2014) introduce XMEANT as a cross-lingual reference-free variant of MEANT, a metric based on semantic frames. Lo (2019) extends this idea by leveraging M-BERT embeddings. The resulting metric, YiSi-2, evaluates system translations by summing similarity scores over word pairs that are best-aligned mutual translations. YiSi-2-SRL optionally combines an additional similarity score based on the alignment over semantic structures (e.g., semantic roles and frames). Both metrics are reference-free, but YiSi-2-SRL is not resource-lean, as it requires a semantic parser for both languages. Moreover, in contrast to our proposed metrics, they do not mitigate the misalignment of cross-lingual embedding spaces and do not integrate a target-side language model, which we identify to be crucial components.

Recent progress in cross-lingual semantic similarity (Agirre et al., 2016; Cer et al., 2017) and unsupervised MT (Artetxe and Schwenk, 2019) has also led to novel reference-free metrics. For instance, Yankovskaya et al. (2019) propose to train a metric combining multilingual embeddings extracted from M-BERT and LASER (Artetxe and Schwenk, 2019) together with the log-probability scores from neural machine translation. Our work differs from that of Yankovskaya et al. (2019) in one crucial aspect: the cross-lingual reference-free metrics that we investigate and benchmark do not require any human supervision.

Cross-lingual Representations. Cross-lingual text representations offer a prospect of modeling meaning across languages and support cross-lingual transfer for downstream tasks (Klementiev et al., 2012; Rücklé et al., 2018; Glavaš et al., 2019; Josifoski et al., 2019; Conneau et al., 2020). Most recently, (massively) multilingual encoders, such as multilingual BERT (M-BERT) (Devlin et al., 2019), XLM-RoBERTa (Conneau et al., 2020), and (sentence-based) LASER, have profiled themselves as state-of-the-art solutions for (massively) multilingual semantic encoding of text. While LASER has been jointly trained on parallel data of 93 languages, M-BERT has been trained on the concatenation of monolingual data in more than 100 languages, without any cross-lingual mapping signal. There has been a recent lively discussion on the cross-lingual abilities of M-BERT (Pires et al., 2019; K et al., 2020; Cao et al., 2020). In particular, Cao et al. (2020) show that M-BERT often yields disparate vector space representations for mutual translations and propose a multilingual re-mapping based on parallel corpora to remedy this issue. In this work, we introduce re-mapping solutions that are resource-leaner and require easy-to-obtain, limited-size word translation dictionaries rather than large parallel corpora.

3 Reference-Free MT Evaluation Metrics

In the following, we use x to denote a source sentence (i.e., a sequence of tokens in the source language), y to denote a system translation of x in the target language, and y⋆ to denote the human reference translation for x.
3.1 Soft Token-Level Alignment

We start from MoverScore (Zhao et al., 2019), a recently proposed reference-based MT evaluation metric designed to measure the semantic similarity between system outputs (y) and human references (y⋆). It finds an optimal soft semantic alignment between tokens from y and y⋆ by minimizing the Word Mover's Distance (Kusner et al., 2015). In this work, we extend the MoverScore metric to operate in the cross-lingual setup, i.e., to measure the semantic similarity between n-grams (unigrams or bigrams) of the source text x and the system translation y, represented with embeddings originating from a cross-lingual semantic space. First, we decompose the source text x into a sequence of n-grams, denoted by $x^n = (x^n_1, \ldots, x^n_m)$, and then do the same for the system translation y, denoting the resulting sequence of n-grams by $y^n$. Given $x^n$ and $y^n$, we can then define a distance matrix C such that $C_{ij} = \lVert E(x^n_i) - E(y^n_j) \rVert_2$ is the distance between the i-th n-gram of x and the j-th n-gram of y, where E is a cross-lingual embedding function that maps text in different languages to a shared embedding space. With respect to the function E, we experimented with cross-lingual representations induced (a) from static word embeddings with RCSLS (Joulin et al., 2018) and (b) with M-BERT (Devlin et al., 2019) as the multilingual encoder, with a focus on the latter. For M-BERT, we take the representations of the last transformer layer as the text representations. WMD between the two sequences of n-grams $x^n$ and $y^n$ with associated n-gram weights $f_{x^n} \in \mathbb{R}^{|x^n|}$ and $f_{y^n} \in \mathbb{R}^{|y^n|}$ (following Zhao et al. (2019), n-gram embeddings and their associated weights are obtained based on IDF) is defined as:

$$m(x, y) := \mathrm{WMD}(x^n, y^n) = \min_{F} \sum_{ij} C_{ij} \cdot F_{ij}, \quad \text{s.t. } F\mathbf{1} = f_{x^n},\; F^\top \mathbf{1} = f_{y^n},$$

where $F \in \mathbb{R}^{|x^n| \times |y^n|}$ is a transportation matrix with $F_{ij}$ denoting the amount of flow traveling from $x^n_i$ to $y^n_j$.

3.2 Sentence-Level Semantic Similarity

In addition to measuring the semantic distance between x and y at the word level, one can also encode them into sentence representations with multilingual sentence encoders like LASER (Artetxe and Schwenk, 2019) and then measure their cosine distance:

$$m(x, y) = 1 - \frac{E(x)^\top E(y)}{\lVert E(x)\rVert \cdot \lVert E(y)\rVert}.$$
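For concreteness, the following minimal Python sketch computes the cross-lingual WMD of §3.1 and the cosine distance of §3.2. It is not the authors' released implementation: the transport problem is solved with SciPy's generic linear-programming solver, and the random embeddings and uniform weights are placeholders for M-BERT/LASER representations and IDF weights.

```python
# Minimal sketch of the word-level WMD metric and the sentence-level cosine metric.
import numpy as np
from scipy.optimize import linprog

def wmd(src_vecs, hyp_vecs, src_weights, hyp_weights):
    """Word Mover's Distance between two weighted sets of n-gram embeddings.

    src_vecs: (m, d) array; hyp_vecs: (n, d) array;
    src_weights / hyp_weights: non-negative weights that each sum to 1.
    """
    m, n = len(src_vecs), len(hyp_vecs)
    # Cost matrix C_ij = Euclidean distance between i-th source and j-th hypothesis n-gram.
    C = np.linalg.norm(src_vecs[:, None, :] - hyp_vecs[None, :, :], axis=-1)
    # Linear program over the flattened transport matrix F (row-major).
    A_eq, b_eq = [], []
    for i in range(m):                       # row marginals: sum_j F_ij = src_weights[i]
        row = np.zeros(m * n); row[i * n:(i + 1) * n] = 1.0
        A_eq.append(row); b_eq.append(src_weights[i])
    for j in range(n):                       # column marginals: sum_i F_ij = hyp_weights[j]
        col = np.zeros(m * n); col[j::n] = 1.0
        A_eq.append(col); b_eq.append(hyp_weights[j])
    res = linprog(C.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=(0, None), method="highs")
    return res.fun

def cosine_distance(src_vec, hyp_vec):
    """Sentence-level metric of Section 3.2: 1 - cosine similarity."""
    return 1.0 - float(src_vec @ hyp_vec /
                       (np.linalg.norm(src_vec) * np.linalg.norm(hyp_vec)))

# Toy usage with random stand-in embeddings and uniform n-gram weights:
rng = np.random.default_rng(0)
x_emb, y_emb = rng.normal(size=(4, 8)), rng.normal(size=(5, 8))
print(wmd(x_emb, y_emb, np.full(4, 1 / 4), np.full(5, 1 / 5)))
print(cosine_distance(x_emb.mean(0), y_emb.mean(0)))
```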
3.3 Improving Cross-Lingual Alignments

Initial analysis indicated that, despite the multilingual pretraining of M-BERT (Devlin et al., 2019) and LASER (Artetxe and Schwenk, 2019), the monolingual subspaces of the multilingual spaces they induce are far from being semantically well-aligned, i.e., we obtain fairly distant vectors for mutual word or sentence translations. (LASER is jointly trained on parallel corpora of different languages, but for resource-lean language pairs the induced embeddings of mutual translations may still be far apart.) To this end, we apply two simple, weakly-supervised linear projection methods for post-hoc improvement of the cross-lingual alignments in these multilingual representation spaces.

Notation. Let $D = \{(w^1_\ell, w^1_k), \ldots, (w^n_\ell, w^n_k)\}$ be a set of matched word or sentence pairs from two different languages $\ell$ and $k$. We define a re-mapping function f such that any $f(E(w_\ell))$ and $E(w_k)$ are better aligned in the resulting shared vector space. We investigate two resource-lean choices for the re-mapping function f.

Linear Cross-lingual Projection (CLP). Following related work (Schuster et al., 2019), we re-map contextualized embedding spaces using linear projection. Given $\ell$ and $k$, we stack all vectors of the source language words and target language words for the pairs in D, respectively, to form matrices $X_\ell$ and $X_k \in \mathbb{R}^{n \times d}$, with d the embedding dimension and n the number of word or sentence alignments. The word pairs we use to calibrate M-BERT are extracted from EuroParl (Koehn, 2005) using FastAlign (Dyer et al., 2013), and the sentence pairs to calibrate LASER are sampled directly from EuroParl. (While LASER requires large parallel corpora in pretraining, we believe that fine-tuning/calibrating the embeddings post-hoc requires fewer data points.) Mikolov et al. (2013a) propose to learn a projection matrix $W \in \mathbb{R}^{d \times d}$ by minimizing the Euclidean distance between the projected source language vectors and their corresponding target language vectors:

$$\min_{W} \lVert W X_\ell - X_k \rVert_2.$$

Xing et al. (2015) achieve further improvement on the task of bilingual lexicon induction (BLI) by constraining W to an orthogonal matrix, i.e., such that $W^\top W = I$. This turns the optimization into the well-known Procrustes problem (Schönemann, 1966) with the closed-form solution

$$\hat{W} = UV^\top, \qquad U\Sigma V^\top = \mathrm{SVD}(X_\ell X_k^\top).$$

We note that the above CLP re-mapping is known to have deficits, i.e., it requires the embedding spaces of the involved languages to be approximately isomorphic (Søgaard et al., 2018; Vulić et al., 2019). Recently, re-mapping methods that reportedly remedy this issue have been suggested (Glavaš and Vulić, 2020; Mohiuddin and Joty, 2020). We leave the investigation of these novel techniques for future work.

Universal Language Mismatch-Direction (UMD). Our second post-hoc linear alignment method is inspired by recent work on removing biases in distributional word vectors (Dev and Phillips, 2019; Lauscher et al., 2019). We adopt the same approaches in order to quantify and remedy the "language bias", i.e., representation mismatches between mutual translations in the initial multilingual space. Formally, given $\ell$ and $k$, we create individual misalignment vectors $E(w^i_\ell) - E(w^i_k)$ for each bilingual pair in D. We then stack these individual vectors to form a matrix $Q \in \mathbb{R}^{n \times d}$ and obtain the global misalignment vector $v_B$ as the top left singular vector of Q. The global misalignment vector presumably captures the direction of the representational misalignment between the languages better than the individual (noisy) misalignment vectors $E(w^i_\ell) - E(w^i_k)$. Finally, we modify all vectors $E(w_\ell)$ and $E(w_k)$ by subtracting their projections onto the global misalignment direction vector $v_B$:

$$f(E(w_\ell)) = E(w_\ell) - \cos(E(w_\ell), v_B)\, v_B.$$

Language Model. BLEU scores often fail to reflect the fluency level of translated texts (Edunov et al., 2019). Hence, we use a language model (LM) of the target language to regularize the cross-lingual semantic similarity metrics, by coupling our cross-lingual similarity scores with a GPT language model of the target language (Radford et al., 2018). We expect the language model to penalize translationese, i.e., unnatural word-by-word translations, and boost the performance of our metrics. (We linearly combine the cross-lingual metrics with the LM scores using a coefficient of 0.1 for all setups; this value was chosen based on initial experiments on one language pair.)
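A minimal numpy sketch of the two re-mapping methods and the LM coupling described above follows. It is not the authors' released XMoverScore code: the row-stacked matrix convention, the use of the leading singular direction of the difference matrix as v_B, and the exact form of the linear LM combination are assumptions made for illustration.

```python
# Sketch of CLP (orthogonal Procrustes), UMD (misalignment-direction removal),
# and the linear coupling with a target-side LM score.
import numpy as np

def clp_projection(X_src, X_tgt):
    """CLP: orthogonal mapping of the source space onto the target space.
    Standard Procrustes closed form for row-stacked matrices: W = U V^T with
    U S V^T = SVD(X_src^T X_tgt); apply as X_src @ W."""
    U, _, Vt = np.linalg.svd(X_src.T @ X_tgt)
    return U @ Vt

def umd_direction(X_src, X_tgt):
    """UMD: global misalignment vector v_B, taken here as the leading singular
    direction (in R^d) of the stacked difference vectors E(w_l) - E(w_k)."""
    Q = X_src - X_tgt
    _, _, Vt = np.linalg.svd(Q, full_matrices=False)
    return Vt[0]

def umd_remove(E, v_b):
    """Apply f(E(w)) = E(w) - cos(E(w), v_B) * v_B to row-stacked embeddings E."""
    v = v_b / np.linalg.norm(v_b)
    cos = (E @ v) / np.linalg.norm(E, axis=-1)
    return E - np.outer(cos, v)

def couple_with_lm(similarity_score, lm_score, weight=0.1):
    """One plausible reading of the linear combination with the LM (coefficient 0.1)."""
    return similarity_score + weight * lm_score

# Toy usage with random stand-in embeddings for a 1k-pair calibration dictionary D:
rng = np.random.default_rng(0)
X_src, X_tgt = rng.normal(size=(1000, 768)), rng.normal(size=(1000, 768))
W = clp_projection(X_src, X_tgt)      # re-map source-side vectors via X_src @ W
v_b = umd_direction(X_src, X_tgt)     # or debias both spaces with umd_remove
```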
4 Experiments

In this section, we evaluate the quality of our MT reference-free metrics by correlating them with human judgments of translation quality. These quality judgments are based on comparing human references and system predictions; we will discuss this discrepancy in §5.3.

Word-level metrics. We denote our word-level alignment metrics based on WMD as MOVERSCORE-NGRAM + ALIGN(EMBEDDING), where ALIGN is one of our two post-hoc cross-lingual alignment methods (CLP or UMD). For example, MOVER-2 + UMD(M-BERT) denotes the metric combining MoverScore based on bigram alignments with M-BERT embeddings and UMD as the post-hoc alignment method.

Sentence-level metrics. We denote our sentence-level metrics as COSINE + ALIGN(EMBEDDING). For example, COSINE + CLP(LASER) measures the cosine distance between the sentence embeddings obtained with LASER, post-hoc aligned with CLP.

4.1 Datasets

We collect the source language sentences and their system and reference translations from the WMT17-19 news translation shared tasks (Bojar et al., 2017b; Ma et al., 2018b, 2019), which contain predictions of 166 translation systems across 16 language pairs in WMT17, 149 translation systems across 14 language pairs in WMT18, and 233 translation systems across 18 language pairs in WMT19. We evaluate X-en language pairs, selecting X from a set of diverse languages: German (de), Chinese (zh), Czech (cs), Latvian (lv), Finnish (fi), Russian (ru), Turkish (tr), Gujarati (gu), Kazakh (kk), Lithuanian (lt), and Estonian (et). Each language pair in WMT17-19 has approximately 3,000 source sentences, each associated with one reference translation and with the automatic translations generated by the participating systems.

4.2 Baselines

We compare with a range of reference-free metrics: ibm1-morpheme and ibm1-pos4gram (Popović, 2012), LASIM (Yankovskaya et al., 2019), LP (Yankovskaya et al., 2019), YiSi-2 and YiSi-2-srl (Lo, 2019), as well as with the reference-based baselines BLEU (Papineni et al., 2002), SentBLEU (Koehn et al., 2007), and ChrF++ (Popović, 2017) (see §2). (The code of these unsupervised metrics is not released, thus we compare to their official results on WMT19 only.) The main results are reported on WMT17; we report the results obtained on WMT18 and WMT19 in the Appendix.

Setting    Metrics                            cs-en  de-en  fi-en  lv-en  ru-en  tr-en  zh-en  Average
m(y*, y)   SENTBLEU                            43.5   43.2   57.1   39.3   48.4   53.8   51.2   48.1
           CHRF++                              52.3   53.4   67.8   52.0   58.8   61.4   59.3   57.9
m(x, y)    Baseline with Original Embeddings
           MOVER-1 + M-BERT                    22.7   37.1   34.8   26.0   26.7   42.5   48.2   34.0
           COSINE + LASER                      32.6   40.2   41.4   48.3   36.3   42.3   46.7   41.1
           Cross-lingual Alignment for Sentence Embedding
           COSINE + CLP(LASER)                 33.4   40.5   42.0   48.6   36.0   44.7   42.2   41.1
           COSINE + UMD(LASER)                 36.6   28.1   45.5   48.5   31.3   46.2   49.4   40.8
           Cross-lingual Alignment for Word Embedding
           MOVER-1 + RCSLS                     18.9   26.4   31.9   33.1   25.7   31.1   34.3   28.8
           MOVER-1 + CLP(M-BERT)               33.4   38.6   50.8   48.0   33.9   51.6   53.2   44.2
           MOVER-2 + CLP(M-BERT)               33.7   38.8   52.2   50.3   35.4   51.0   53.3   45.0
           MOVER-1 + UMD(M-BERT)               22.3   38.1   34.5   30.5   31.2   43.5   48.6   35.5
           MOVER-2 + UMD(M-BERT)               23.1   38.9   37.1   34.7   33.0   44.8   48.9   37.2
           Combining Language Model
           COSINE + CLP(LASER) ⊕ LM            48.8   46.7   63.2   66.2   51.0   54.6   48.6   54.2
           COSINE + UMD(LASER) ⊕ LM            49.4   46.2   64.7   66.4   51.1   56.0   52.8   55.2
           MOVER-2 + CLP(M-BERT) ⊕ LM          46.5   46.4   63.3   63.8   47.6   55.5   53.5   53.8
           MOVER-2 + UMD(M-BERT) ⊕ LM          41.8   46.8   60.4   59.8   46.1   53.8   52.4   51.6

Table 1: Pearson correlations with segment-level human judgments on the WMT17 dataset.

[Figure 1: Average results of our best-performing metric, together with reference-based BLEU, on WMT17 (segment-level Pearson correlation: BLEU 48.1 vs. this work 53.8; system-level: BLEU 93.3 vs. this work 90.1).]
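As a sketch of the evaluation protocol used throughout Section 4 (hypothetical data handling; the WMT loading code is omitted and the exact aggregation used by the shared tasks may differ), segment- and system-level correlations of metric scores with DA judgments can be computed as follows.

```python
# Correlating metric scores with human direct-assessment (DA) judgments.
import numpy as np
from scipy.stats import pearsonr, kendalltau

def segment_level_correlation(metric_scores, da_scores):
    """Segment level: one (metric score, DA score) pair per translated segment."""
    r, _ = pearsonr(metric_scores, da_scores)
    tau, _ = kendalltau(metric_scores, da_scores)
    return r, tau

def system_level_correlation(per_system_metric, per_system_da):
    """System level: average the metric per MT system, then correlate the averages
    with the systems' averaged DA scores."""
    systems = sorted(per_system_metric)
    m = [np.mean(per_system_metric[s]) for s in systems]
    h = [np.mean(per_system_da[s]) for s in systems]
    return pearsonr(m, h)[0]

# Hypothetical toy usage:
print(segment_level_correlation([0.2, 0.5, 0.9], [30.0, 55.0, 80.0]))
```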
4.3 Results

Figure 1 shows that our metric MOVER-2 + CLP(M-BERT) ⊕ LM, operating on modified M-BERT with the post-hoc re-mapping and combining a target-side LM, outperforms BLEU by 5.7 points in segment-level evaluation and achieves comparable performance in the system-level evaluation. Figure 2 shows that the same metric obtains a gain of 15.3 points (73.1 vs. 57.8), averaged over 7 languages, on WMT19 (system-level) compared to the state-of-the-art reference-free metric YiSi-2. Except for one language pair, gu-en, our metric performs on a par with reference-based BLEU at the system level (see Table 8 in the Appendix).

[Figure 2: Average results of our best-performing metric, together with the official results of reference-free metrics (ibm1-pos4gram: 33.9, ibm1-morpheme: 52.4, LASIM: 56.2, LP: 48.1, YiSi-2: 57.8, this work: 73.1) and reference-based BLEU (91.2), on system-level WMT19.]

In Table 1, we exhaustively compare results for several of our metric variants, based either on M-BERT or LASER. We note that re-mapping has a considerable effect for M-BERT (up to 10 points improvement), but much less so for LASER. We believe that this is because the underlying embedding space of LASER is less 'misaligned', since it has been (pre-)trained on parallel data. (However, in the appendix, we find that re-mapping LASER using 2k parallel sentences achieves considerable improvements on low-resource languages, e.g., kk-en (from -61.1 to 49.8) and lt-en (from 68.3 to 75.9); see Table 8.) While the re-mapping is thus effective for metrics based on M-BERT, we still require the target-side LM to outperform BLEU. We assume the LM can address challenges that the re-mapping apparently is not able to handle properly; see our discussion in §5.1.

Overall, we remark that none of our metric combinations performs consistently best. The reason may be that LASER and M-BERT are pretrained over hundreds of languages with substantial differences in corpora sizes, in addition to the different effects of the re-mapping. However, we observe that MOVER-2 + CLP(M-BERT) performs best on average over all language pairs when the LM is not added. When the LM is added, MOVER-2 + CLP(M-BERT) ⊕ LM and COSINE + UMD(LASER) ⊕ LM perform comparably. This indicates that there may be a saturation effect when it comes to the LM, or that the LM coefficients should be tuned individually for each semantic similarity metric based on cross-lingual representations.

5 Analysis

We first analyze preferences of our metrics based on M-BERT and LASER (§5.1) and then examine how much parallel data we need for re-mapping our vector spaces (§5.2). Finally, we discuss whether it is legitimate to correlate our metric scores, which evaluate the similarity of system predictions and source texts, with human judgments based on system predictions and references (§5.3).

5.1 Metric Preferences

To analyze why our metrics based on M-BERT and LASER perform so badly for the task of reference-free MT evaluation, we query them for their preferences. In particular, for a fixed source sentence x, we consider two target sentences ỹ and ŷ and evaluate the following score difference:

$$d(\tilde{y}, \hat{y}; x) := m(x, \tilde{y}) - m(x, \hat{y}) \qquad (1)$$

When d > 0, the metric m prefers ỹ over ŷ given x, and when d < 0, this relationship is reversed. In the following, we compare the preferences of our metrics for specifically modified target sentences ỹ over the human references y⋆.
We choose ỹ to be (i) a random reordering of y⋆, to ensure that our metrics do not have the BOW (bag-of-words) property, and (ii) a word-order preserving translation of x, i.e., (ii-a) an expert reordering of the human reference y⋆ to have the same word order as x, as well as (ii-b) a word-by-word translation, obtained either using experts or automatically. Especially condition (ii-b) tests for preferences for literal translations, a common MT-system property.

Expert word-by-word translations. We had an expert (one of the co-authors) translate 50 German sentences word-by-word into English. Table 2 illustrates this scenario. We note how bad the word-by-word translations sometimes are, even for closely related language pairs such as German-English. For example, the word-by-word translations in English retain the original German verb-final positions, leading to quite ungrammatical English translations.

x             Dieser von Langsamkeit geprägte Lebensstil scheint aber ein Patentrezept für ein hohes Alter zu sein.
y⋆            However, this slow pace of life seems to be the key to a long life.
y⋆-random     To pace slow seems be the this life. life to a key however, of long
y⋆-reordered  This slow pace of life seems however the key to a long life to be.
x′-GT         This from slowness embossed lifestyle seems but on nostrum for on high older to his.
x′-expert     This of slow pace characterized life style seems however a patent recipe for a high age to be.

x             Putin teilte aus und beschuldigte Ankara, Russland in den Rücken gefallen zu sein.
y⋆            Mr Putin lashed out, accusing Ankara of stabbing Moscow in the back.
y⋆-random     Moscow accusing lashed Putin the in Ankara out, Mr of back. stabbing
y⋆-reordered  Mr Putin lashed out, accusing Ankara of Moscow in the back stabbing.
x′-GT         Putin divided out and accused Ankara Russia in the move like to his.
x′-expert     Putin lashed out and accused Ankara, Russia in the back fallen to be.

Table 2: Original German input sentence x, together with the human reference y⋆ in English, a randomly (y⋆-random) and an expertly reordered (y⋆-reordered) English sentence, as well as word-by-word translations (x′) of the German source sentence, obtained either from the human expert or from Google Translate (GT).

Figure 3 shows histograms of the d statistic for the 50 selected sentences. We first check condition (i) for the 50 sentences. We observe that both MOVER + M-BERT and COSINE + LASER prefer the original human references over random reorderings, indicating that they are not BOW models, a reassuring finding. Concerning (ii-a), they are largely indifferent between correct English word order and the situation where the word order of the human reference is the same as in German. Finally, they strongly prefer the expert word-by-word translations over the human references (ii-b).

[Figure 3: Histograms of d scores defined in Eq. (1). Left: metrics based on LASER and M-BERT favor gold over randomly-shuffled human references. Middle: metrics are roughly indifferent between gold and reordered human references. Right: metrics favor expert word-by-word translations over gold human references.]

Condition (ii-a) in part explains why our metrics prefer expert word-by-word translations the most: for a given source text, these have higher lexical overlap than human references and, by (ii-a), they have a favorable target language syntax, viz., one where the source and target language word order are equal. Preference for translationese, (ii-b), in turn is apparently a main reason why our metrics do not perform well, by themselves and without a language model, as reference-free MT evaluation metrics. More worryingly, it indicates that cross-lingual M-BERT and LASER are not robust to the 'adversarial inputs' produced by MT systems.

Automatic word-by-word translations. For a large-scale analysis of condition (ii-b) across different language pairs, we resort to automatic word-by-word translations obtained from Google Translate (GT). To do so, we go over each word in the source sentence x from left to right, look up its translation in GT independently of context, and replace the word by the obtained translation. When a word has several translations, we keep the first one offered by GT. Due to context independence, the GT word-by-word translations are of much lower quality than the expert word-by-word translations, since they often pick the wrong word senses—e.g., the German word sein may either be a personal pronoun (his) or the infinitive to be, which would be selected correctly only by chance; cf. Table 2.
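The construction of the automatic word-by-word translations can be sketched as follows; a tiny hand-made lookup table stands in for the Google Translate queries used in the paper, and the dictionary entries are illustrative only.

```python
# Sketch of the left-to-right, context-independent word-by-word translation x'.
W2W_DICT = {
    "Putin": ["Putin"],
    "teilte": ["divided", "shared"],   # first entry picks the wrong sense here
    "aus": ["out"],
    "sein": ["his", "to be"],          # pronoun reading chosen instead of the infinitive
}

def word_by_word(source_sentence, dictionary):
    """Replace each source token by the first translation found in the lookup table."""
    tokens = source_sentence.split()
    return " ".join(dictionary.get(tok, [tok])[0] for tok in tokens)

print(word_by_word("Putin teilte aus", W2W_DICT))   # -> "Putin divided out"
```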
Instead of reporting histograms of d, we define a "W2W" statistic that counts the relative number of times that d(x′, y⋆) is positive, where x′ denotes the described literal translation of x into the target language:

$$\mathrm{W2W} := \frac{1}{N} \sum_{(x', y^\star)} \mathbb{I}\big( d(x', y^\star) > 0 \big) \qquad (2)$$

Here N normalizes W2W to lie in [0, 1], and a high W2W score indicates that the metric prefers translationese over human-written references. Table 3 shows that reference-free metrics with original embeddings (LASER and M-BERT) either still prefer literal over human translations (e.g., a W2W score of 70.2% for cs-en) or struggle to distinguish them. Re-mapping helps to a small degree. Only when combined with the LM scores do we get adequate scores for the W2W statistic. Indeed, the LM is expected to capture unnatural word order in the target language and penalize word-by-word translations by recognizing them as much less likely to appear in the language. Note that for expert word-by-word translations, we would expect the metrics to perform even worse.

Metrics                        cs-en  de-en  fi-en
COSINE + LASER                  70.2   65.7   53.9
COSINE + CLP(LASER)             70.7   64.8   53.7
COSINE + UMD(LASER)             67.5   59.5   52.9
COSINE + UMD(LASER) ⊕ LM         7.0    7.1    6.4
MOVER-2 + M-BERT                61.8   50.2   45.9
MOVER-2 + CLP(M-BERT)           44.6   44.5   32.0
MOVER-2 + UMD(M-BERT)           54.5   44.3   39.6
MOVER-2 + CLP(M-BERT) ⊕ LM       7.3   10.2    6.4

Table 3: W2W statistics for selected language pairs. Numbers are in percent.
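A short sketch of the preference statistic d of Eq. (1) and the W2W statistic of Eq. (2) follows; the metric m is treated as a similarity (higher means more similar), matching the stated interpretation of d > 0, and the toy lexical-overlap metric and data are hypothetical stand-ins.

```python
# Preference statistic d (Eq. 1) and W2W statistic (Eq. 2) for a metric m(x, y).
def d(metric, x, y_tilde, y_hat):
    """d > 0 iff the metric prefers y_tilde over y_hat for source x (Eq. 1)."""
    return metric(x, y_tilde) - metric(x, y_hat)

def w2w_statistic(metric, triples):
    """Share of (x, x_prime, y_star) triples for which the metric prefers the
    literal word-by-word translation x_prime over the human reference y_star (Eq. 2)."""
    wins = sum(d(metric, x, x_prime, y_star) > 0 for x, x_prime, y_star in triples)
    return wins / len(triples)

# Hypothetical toy metric (lexical overlap) and a single example triple:
toy_metric = lambda x, y: len(set(x.split()) & set(y.split())) / max(len(y.split()), 1)
triples = [("Putin teilte aus", "Putin divided out", "Mr Putin lashed out")]
print(w2w_statistic(toy_metric, triples))   # 1.0: overlap favors the literal translation
```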
5.2 Size of Parallel Corpora

Figure 4 compares sentence- and word-level re-mapping trained with a varying number of parallel sentences. Metrics based on M-BERT result in the highest correlations after re-mapping, even with a small amount of training data (1k). We observe that COSINE + CLP(LASER) and MOVER-2 + CLP(M-BERT) show very similar trends, with a sharp increase with increasing amounts of parallel data, after which they level off quickly. However, the M-BERT-based MOVER-2 reaches its peak and outperforms the original baseline with only 1k data points, while LASER needs 2k before beating the corresponding original baseline.

[Figure 4: Average results of our metrics based on sentence- and word-based re-mappings of the vector spaces as a function of the size of the parallel corpus (x-axis: 2,000–10,000 pairs; curves: MOVER-2 + CLP(M-BERT), MOVER-2 + UMD(M-BERT), MOVER-2 + M-BERT, COSINE + CLP(LASER), COSINE + UMD(LASER), COSINE + LASER).]

5.3 Human Judgments

The WMT datasets contain segment- and system-level human judgments that we use for evaluating the quality of our reference-free metrics. The segment-level judgments assign one direct assessment (DA) score to each pair of system and human translation, while system-level judgments associate each system with a single DA score averaged across all pairs in the dataset. We initially suspected the DA scores to be biased for our setup—which compares x with y—as they are based on comparing y⋆ and y. Indeed, it is known that (especially) human professional translators "improve" y⋆, e.g., by making it more readable, relative to the original x (Rabinovich et al., 2017). We investigated the validity of DA scores by collecting human assessments in a cross-lingual setting (CLDA), where annotators directly compare source and translation pairs (x, y) from the WMT17 dataset. This small-scale manual analysis hints that DA scores are a valid proxy for CLDA. Therefore, we decided to treat them as reliable scores for our setup and to evaluate our proposed metrics by comparing their correlation with DA scores.

6 Conclusion

Existing semantically-motivated metrics for reference-free evaluation of MT systems have so far displayed rather poor correlation with human estimates of translation quality. In this work, we investigate a range of reference-free metrics based on cutting-edge models for inducing cross-lingual semantic representations: cross-lingual (contextualized) word embeddings and cross-lingual sentence embeddings. We have identified some scenarios in which these metrics fail, prominently their inability to punish literal word-by-word translations (the so-called "translationese"). We have investigated two different mechanisms for mitigating this undesired phenomenon: (1) an additional (weakly-supervised) cross-lingual alignment step, reducing the mismatch between representations of mutual translations, and (2) language modeling (LM) on the target side, which is inherently equipped to punish "unnatural" sentences in the target language. We show that the reference-free coupling of cross-lingual similarity scores with the target-side language model surpasses reference-based BLEU in segment-level MT evaluation. We believe our results have two relevant implications. First, they portray the viability of reference-free MT evaluation and warrant wider research efforts in this direction. Second, they indicate that reference-free MT evaluation may be the most challenging ("adversarial") evaluation task for multilingual text encoders, as it uncovers some of their shortcomings—prominently, the inability to capture semantically non-sensical word-by-word translations or paraphrases—which remain hidden in their common evaluation scenarios. We release our metrics under the name XMoverScore publicly: https://github.com/AIPHES/ACL20-Reference-Free-MT-Evaluation.
Acknowledgments We thank the anonymous reviewers for their insightful comments and suggestions, which greatly improved the final version of the paper. This work has been supported by the German Research Foundation as part of the Research Training Group Adaptive Preparation of Information from Heterogeneous Sources (AIPHES) at the Technische Universit¨at Darmstadt under grant No. GRK 1994/1. The contribution of Goran Glavaˇs is supported by the Eliteprogramm of the Baden-W¨urttembergStiftung, within the scope of the grant AGREE. References Eneko Agirre, Carmen Banea, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Rada Mihalcea, German Rigau, and Janyce Wiebe. 2016. SemEval-2016 Task 1: Semantic Textual Similarity, Monolingual and Cross-Lingual Evaluation. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 497–511, San Diego, 1665 California. Association for Computational Linguistics. Mikel Artetxe and Holger Schwenk. 2019. Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual Transfer and Beyond. Transactions of the Association for Computational Linguistics, 7:597–610. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations. Ondrej Bojar, Yvette Graham, and Amir Kamran. 2017a. Results of the WMT17 metrics shared task. In Proceedings of the Conference on Machine Translation (WMT). Ondˇrej Bojar, Yvette Graham, and Amir Kamran. 2017b. Results of the WMT17 metrics shared task. In Proceedings of the Second Conference on Machine Translation, pages 489–513, Copenhagen, Denmark. Association for Computational Linguistics. Steven Cao, Nikita Kitaev, and Dan Klein. 2020. Multilingual alignment of contextual word representations. In International Conference on Learning Representations. Daniel Cer, Mona Diab, Eneko Agirre, I˜nigo LopezGazpio, and Lucia Specia. 2017. SemEval-2017 Task 1: Semantic Textual Similarity Multilingual and Crosslingual Focused Evaluation. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 1–14, Vancouver, Canada. Association for Computational Linguistics. Elizabeth Clark, Asli Celikyilmaz, and Noah A Smith. 2019. Sentence Mover’s Similarity: Automatic Evaluation for Multi-Sentence Texts. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2748–2760, Florence, Italy. Association for Computational Linguistics. Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm´an, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of ACL. Sunipa Dev and Jeff M. Phillips. 2019. Attenuating bias in word vectors. In The 22nd International Conference on Artificial Intelligence and Statistics, AISTATS 2019, 16-18 April 2019, Naha, Okinawa, Japan, pages 879–887. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. George Doddington. 2002. 
Automatic Evaluation of Machine Translation Quality Using N-gram Cooccurrence Statistics. In Proceedings of the Second International Conference on Human Language Technology Research, HLT ’02, pages 138–145, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc. Markus Dreyer and Daniel Marcu. 2012. Hyter: Meaning-equivalent semantics for translation evaluation. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 162–171. Association for Computational Linguistics. Chris Dyer, Victor Chahuneau, and Noah A. Smith. 2013. A simple, fast, and effective reparameterization of IBM model 2. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 644–648, Atlanta, Georgia. Association for Computational Linguistics. Sergey Edunov, Myle Ott, Marc’Aurelio Ranzato, and Michael Auli. 2019. On the evaluation of machine translation systems trained with back-translation. CoRR, abs/1908.05204. Marina Fomicheva and Lucia Specia. 2016. Reference bias in monolingual machine translation evaluation. In 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016-Short Papers, pages 77–82. ACL Home Association for Computational Linguistics. Goran Glavaˇs, Robert Litschko, Sebastian Ruder, and Ivan Vuli´c. 2019. How to (properly) evaluate crosslingual word embeddings: On strong baselines, comparative analyses, and some misconceptions. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 710–721. Goran Glavaˇs and Ivan Vuli´c. 2020. Non-linear instance-based cross-lingual mapping for non-isomorphic embedding spaces. In Proceedings of ACL. Yinuo Guo and Junfeng Hu. 2019. Meteor++ 2.0: Adopt Syntactic Level Paraphrase Knowledge into Machine Translation Evaluation. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 501–506, Florence, Italy. Association for Computational Linguistics. Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, and Melvin Johnson. 2020. XTREME: A massively multilingual multitask benchmark for evaluating cross-lingual generalization. CoRR, abs/2003.11080. 1666 Melvin Johnson, Mike Schuster, Quoc Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernand a Vi ˜A c⃝gas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2017. Google’s multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics, 5:339–351. Martin Josifoski, Ivan S Paskov, Hristo S Paskov, Martin Jaggi, and Robert West. 2019. Crosslingual document embedding as reduced-rank ridge regression. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, pages 744–752. Armand Joulin, Piotr Bojanowski, Tomas Mikolov, Herv´e J´egou, and Edouard Grave. 2018. Loss in translation: Learning bilingual word mapping with a retrieval criterion. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2979–2984, Brussels, Belgium. Association for Computational Linguistics. Karthikeyan K, Zihan Wang, Stephen Mayhew, and Dan Roth. 2020. Cross-lingual ability of multilingual bert: An empirical study. In International Conference on Learning Representations. Alexandre Klementiev, Ivan Titov, and Binod Bhattarai. 2012. 
Inducing crosslingual distributed representations of words. In Proceedings of COLING 2012, pages 1459–1474, Mumbai, India. The COLING 2012 Organizing Committee. Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In MT summit, volume 5, pages 79–86. Citeseer. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondˇrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions, pages 177–180, Prague, Czech Republic. Association for Computational Linguistics. Matt Kusner, Yu Sun, Nicholas Kolkin, and Kilian Weinberger. 2015. From word embeddings to document distances. In International conference on machine learning, pages 957–966. Anne Lauscher, Goran Glavaˇs, Simone Paolo Ponzetto, and Ivan Vuli´c. 2019. A general framework for implicit and explicit debiasing of distributional word vector spaces. arXiv preprint arXiv:1909.06092. Alon Lavie and Abhaya Agarwal. 2007. Meteor: An automatic metric for mt evaluation with high levels of correlation with human judgments. In Proceedings of the Second Workshop on Statistical Machine Translation, pages 228–231. Association for Computational Linguistics. Chin-Yew Lin and Eduard Hovy. 2003. Automatic evaluation of summaries using n-gram cooccurrence statistics. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pages 150–157. Chi-kiu Lo. 2019. YiSi - a Unified Semantic MT Quality Evaluation and Estimation Metric for Languages with Different Levels of Available Resources. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 507–513, Florence, Italy. Association for Computational Linguistics. Chi-kiu Lo, Meriem Beloucif, Markus Saers, and Dekai Wu. 2014. XMEANT: Better semantic MT evaluation without reference translations. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 765–771, Baltimore, Maryland. Association for Computational Linguistics. Qingsong Ma, Ondrej Bojar, and Yvette Graham. 2018a. Results of the WMT18 metrics shared task. In Proceedings of the Third Conference on Machine Translation (WMT). Qingsong Ma, Ondˇrej Bojar, and Yvette Graham. 2018b. Results of the WMT18 metrics shared task: Both characters and embeddings achieve good performance. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 671–688, Belgium, Brussels. Association for Computational Linguistics. Qingsong Ma, Johnny Wei, Ondˇrej Bojar, and Yvette Graham. 2019. Results of the WMT19 Metrics Shared Task: Segment-Level and Strong MT Systems Pose Big Challenges. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 62–90, Florence, Italy”. Association for Computational Linguistics. Nitika Mathur, Timothy Baldwin, and Trevor Cohn. 2019. Putting Evaluation in Context: Contextual Embeddings Improve Machine Translation Evaluation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2799–2808, Florence, Italy. Association for Computational Linguistics. Tomas Mikolov, Quoc V. 
Le, and Ilya Sutskever. 2013a. Exploiting similarities among languages for machine translation. CoRR, abs/1309.4168. Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems 26: 27th Annual Conference on 1667 Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States., pages 3111– 3119. Bari Saiful M Mohiuddin, Tasnim and Shafiq Joty. 2020. Lnmap: Departures from isomorphic assumption in bilingual lexicon induction through non-linear mapping in latent space. CoRR, abs/1309.4168. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: A Method for Automatic Evaluation of Machine Translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL ’02, pages 311–318, Stroudsburg, PA, USA. Association for Computational Linguistics. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the North American Chapter of the Association for Computational Linguistics (NAACL). Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual BERT? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4996– 5001, Florence, Italy. Association for Computational Linguistics. Maja Popovi´c. 2012. Morpheme- and POS-based IBM1 and language model scores for translation quality estimation. In Proceedings of the Seventh Workshop on Statistical Machine Translation, pages 133–137, Montr´eal, Canada. Association for Computational Linguistics. Maja Popovi´c. 2017. chrF++: Words Helping Character N-grams. In Proceedings of the Second Conference on Machine Translation, pages 612–618, Copenhagen, Denmark. Maja Popovi´c, David Vilar, Eleftherios Avramidis, and Aljoscha Burchardt. 2011. Evaluation without references: IBM1 scores as evaluation metrics. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 99–103, Edinburgh, Scotland. Association for Computational Linguistics. Ella Rabinovich, Noam Ordan, and Shuly Wintner. 2017. Found in translation: Reconstructing phylogenetic language trees from translations. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 530–540, Vancouver, Canada. Association for Computational Linguistics. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. URL https://s3-us-west-2. amazonaws. com/openaiassets/researchcovers/languageunsupervised/language understanding paper. pdf. Andreas R¨uckl´e, Steffen Eger, Maxime Peyrard, and Iryna Gurevych. 2018. Concatenated power mean word embeddings as universal cross-lingual sentence representations. arXiv. Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 379–389. Association for Computational Linguistics. Peter H Sch¨onemann. 1966. A generalized solution of the orthogonal procrustes problem. Psychometrika, 31(1):1–10. Tal Schuster, Ori Ram, Regina Barzilay, and Amir Globerson. 2019. 
Cross-lingual alignment of contextual word embeddings, with applications to zeroshot dependency parsing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1599–1613, Minneapolis, Minnesota. Association for Computational Linguistics. Hiroki Shimanaka, Tomoyuki Kajiwara, and Mamoru Komachi. 2018. RUSE: Regressor using sentence embeddings for automatic machine translation evaluation. In Proceedings of the Third Conference on Machine Translation (WMT). Anders Søgaard, Sebastian Ruder, and Ivan Vuli´c. 2018. On the limitations of unsupervised bilingual dictionary induction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 778– 788, Melbourne, Australia. Association for Computational Linguistics. Lucia Specia, Dhwaj Raj, and Marco Turchi. 2010. Machine translation evaluation versus quality estimation. Machine translation, 24(1):39–50. Lucia Specia, Kashif Shah, Jose G.C. de Souza, and Trevor Cohn. 2013. QuEst - a translation quality estimation framework. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 79–84, Sofia, Bulgaria. Association for Computational Linguistics. Jiwei Tan, Xiaojun Wan, and Jianguo Xiao. 2017. Abstractive document summarization with a graphbased attentional neural model. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1171–1181. Association for Computational Linguistics. Ivan Vuli´c, Goran Glavaˇs, Roi Reichart, and Anna Korhonen. 2019. Do we really need fully unsupervised cross-lingual embeddings? In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International 1668 Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4398–4409. Chao Xing, Dong Wang, Chao Liu, and Yiye Lin. 2015. Normalized word embedding and orthogonal transform for bilingual word translation. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1006–1011, Denver, Colorado. Association for Computational Linguistics. Yinfei Yang, Daniel Cer, Amin Ahmad, Mandy Guo, Jax Law, Noah Constant, Gustavo Hernandez Abrego, Steve Yuan, Chris Tar, Yun-Hsuan Sung, et al. 2019. Multilingual universal sentence encoder for semantic retrieval. arXiv preprint arXiv:1907.04307. Elizaveta Yankovskaya, Andre T¨attar, and Mark Fishel. 2019. Quality Estimation and Translation Metrics via Pre-trained Word and Sentence Embeddings. In Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2), pages 101–105, Florence, Italy. Association for Computational Linguistics. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2019. Bertscore: Evaluating text generation with BERT. CoRR, abs/1904.09675. Wei Zhao, Maxime Peyrard, Fei Liu, Yang Gao, Christian M. Meyer, and Steffen Eger. 2019. Moverscore: Text generation evaluating with contextualized embeddings and earth mover distance. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing, Hong Kong, China. Association for Computational Linguistics. 
A Appendix

A.1 Zero-shot Transfer to Resource-lean Languages

Our metric allows for estimating translation quality on new domains. However, the evaluation is limited to those languages covered by the multilingual embeddings. This is a major drawback for low-resource languages—e.g., Gujarati is not included in LASER. To this end, we take multilingual USE (Yang et al., 2019) as an illustrative example, which covers only 16 languages (in our sample, Czech, Latvian, and Finnish are not included in USE). We re-align the corresponding embedding spaces with our re-mapping functions to induce evaluation metrics even for these languages, using only 2k translation pairs. Table 4 shows that our metric with a composition of re-mapping functions can raise the correlation from zero to 0.10 for cs-en and to 0.18 for lv-en. However, for one language pair, fi-en, the correlation only goes from negative to zero, indicating that this approach does not always work. This observation warrants further investigation.

Metrics                      cs-en    fi-en    lv-en
BLEU                         0.849    0.834    0.946
COSINE + LAS                -0.001   -0.149    0.019
COSINE + CLP(USE)            0.072   -0.068    0.109
COSINE + UMD(USE)            0.056   -0.061    0.113
COSINE + CLP ◦ UMD(USE)      0.089   -0.030    0.162
COSINE + UMD ◦ CLP(USE)      0.102   -0.007    0.180

Table 4: Pearson correlation of metrics on segment-level WMT17. '◦' marks the composition of two re-mapping functions.
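The composition of re-mapping functions used in Table 4 can be sketched as below. It is an illustrative, self-contained reading (not the authors' code) in which "UMD ◦ CLP" means CLP is applied first and the UMD direction is then estimated on the CLP-projected pairs; the 512-dimensional random matrices stand in for USE embeddings of the 2k calibration pairs.

```python
# Sketch of composing the two re-mapping functions (UMD after CLP).
import numpy as np

def compose_umd_clp(X_src, X_tgt):
    """Return f(E) = UMD(CLP(E)): orthogonal Procrustes projection first, then
    removal of the global misalignment direction estimated on the projected pairs."""
    U, _, Vt = np.linalg.svd(X_src.T @ X_tgt)          # CLP: Procrustes solution
    W = U @ Vt
    _, _, Qvt = np.linalg.svd(X_src @ W - X_tgt, full_matrices=False)
    v = Qvt[0] / np.linalg.norm(Qvt[0])                # UMD: misalignment direction

    def remap(E):
        E = np.atleast_2d(E) @ W
        cos = (E @ v) / np.linalg.norm(E, axis=-1)
        return E - np.outer(cos, v)
    return remap

# Toy usage with random stand-in embeddings for 2k calibration pairs:
rng = np.random.default_rng(0)
remap = compose_umd_clp(rng.normal(size=(2000, 512)), rng.normal(size=(2000, 512)))
```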
Setting Metrics cs-en de-en et-en fi-en ru-en tr-en zh-en Average m(y∗, y) SENTBLEU 0.233 0.415 0.285 0.154 0.228 0.145 0.178 0.234 YISI-1 0.319 0.488 0.351 0.231 0.300 0.234 0.211 0.305 m(x, y) Baseline with Original Embeddings MOVER-1 + M-BERT 0.005 0.229 0.179 0.115 0.100 0.039 0.082 0.107 COSINE + LASER 0.072 0.317 0.254 0.155 0.102 0.086 0.064 0.150 Cross-lingual Alignment for Word Embedding COSINE + CLP(LASER) 0.093 0.323 0.254 0.151 0.112 0.086 0.074 0.156 COSINE + UMD(LASER) 0.077 0.317 0.252 0.145 0.136 0.083 0.053 0.152 COSINE + UMD ◦CLP(LASER) 0.090 0.337 0.255 0.139 0.145 0.090 0.088 0.163 COSINE + CLP ◦UMD(LASER) 0.096 0.331 0.254 0.153 0.122 0.084 0.076 0.159 Cross-lingual Alignment for Sentence Embedding MOVER-1 + CLP(M-BERT) 0.084 0.279 0.207 0.147 0.145 0.089 0.122 0.153 MOVER-2 + CLP(M-BERT) 0.063 0.283 0.193 0.149 0.136 0.069 0.115 0.144 MOVER-1 + UMD(M-BERT) 0.043 0.264 0.193 0.136 0.138 0.051 0.113 0.134 MOVER-2 + UMD(M-BERT) 0.040 0.268 0.188 0.143 0.141 0.055 0.111 0.135 MOVER-1 + UMD ◦CLP(M-BERT) 0.024 0.282 0.192 0.144 0.133 0.085 0.089 0.136 MOVER-1 + CLP ◦UMD(M-BERT) 0.073 0.277 0.208 0.148 0.142 0.086 0.121 0.151 MOVER-2 + CLP ◦UMD(M-BERT) 0.057 0.283 0.194 0.149 0.137 0.069 0.114 0.143 Combining Language Model COSINE + UMD ◦CLP(LASER) ⊕LM 0.288 0.455 0.226 0.321 0.263 0.159 0.192 0.272 COSINE + CLP ◦UMD(LASER) ⊕LM 0.283 0.457 0.228 0.321 0.265 0.150 0.198 0.272 MOVER-1 + CLP ◦UMD(M-BERT) ⊕LM 0.268 0.428 0.292 0.213 0.261 0.152 0.192 0.258 MOVER-2 + CLP ◦UMD(M-BERT) ⊕LM 0.254 0.426 0.285 0.203 0.251 0.146 0.193 0.251 Table 6: Kendall correlations with segment-level human judgments on the WMT18 dataset. 1671 Setting Metrics cs-en de-en et-en fi-en ru-en tr-en zh-en Average m(y∗, y) BLEU 0.970 0.971 0.986 0.973 0.979 0.657 0.978 0.931 METEOR++ 0.945 0.991 0.978 0.971 0.995 0.864 0.962 0.958 m(x, y) Baseline with Original Embeddings MOVER-1 + M-BERT -0.629 0.915 0.880 0.804 0.847 0.731 0.677 0.604 COSINE + LASER -0.348 0.932 0.930 0.906 0.902 0.832 0.471 0.661 Cross-lingual Alignment for Sentence Embedding COSINE + CLP(LASER) -0.305 0.934 0.937 0.908 0.904 0.801 0.634 0.688 COSINE + UMD(LASER) -0.241 0.944 0.933 0.906 0.902 0.842 0.359 0.664 COSINE + UMD ◦CLP(LASER) 0.195 0.955 0.958 0.913 0.896 0.899 0.784 0.800 COSINE + CLP ◦UMD(LASER) -0.252 0.942 0.941 0.908 0.919 0.811 0.642 0.702 Cross-lingual Alignment for Word Embedding MOVER-1 + CLP(M-BERT) -0.163 0.943 0.918 0.941 0.915 0.628 0.875 0.722 MOVER-2 + CLP(M-BERT) -0.517 0.944 0.909 0.938 0.913 0.526 0.868 0.654 MOVER-1 + UMD(M-BERT) -0.380 0.927 0.897 0.886 0.919 0.679 0.855 0.683 MOVER-2 + UMD(M-BERT) -0.679 0.929 0.891 0.896 0.920 0.616 0.858 0.633 MOVER-1 + UMD ◦CLP(M-BERT) -0.348 0.949 0.905 0.890 0.905 0.636 0.776 0.673 MOVER-1 + CLP ◦UMD(M-BERT) -0.205 0.943 0.916 0.938 0.913 0.641 0.871 0.717 MOVER-2 + CLP ◦UMD(M-BERT) -0.555 0.944 0.908 0.935 0.911 0.551 0.863 0.651 Combining Language Model COSINE + UMD ◦CLP(LASER) ⊕LM 0.979 0.967 0.979 0.947 0.942 0.673 0.954 0.919 COSINE + CLP ◦UMD(LASER) ⊕LM 0.974 0.966 0.983 0.951 0.951 0.255 0.961 0.863 MOVER-1 + CLP ◦UMD(M-BERT) ⊕LM 0.956 0.960 0.949 0.973 0.951 0.097 0.954 0.834 MOVER-2 + CLP ◦UMD(M-BERT) ⊕LM 0.959 0.961 0.947 0.979 0.951 -0.036 0.952 0.815 Table 7: Pearson correlations with system-level human judgments on the WMT18 dataset. 
Direct Assessment Setting Metrics de-en fi-en gu-en kk-en lt-en ru-en zh-en Average m(y∗, y) BLEU 0.849 0.982 0.834 0.946 0.961 0.879 0.899 0.907 m(x, y) Existing Reference-free Metrics IBM1-MORPHEME(Popovi´c, 2012) 0.345 0.740 0.487 IBM1-POS4GRAM(Popovi´c, 2012) 0.339 LASIM(Yankovskaya et al., 2019) 0.247 0.310 LP(Yankovskaya et al., 2019) 0.474 0.488 YISI-2(Lo, 2019) 0.796 0.642 0.566 0.324 0.442 0.339 0.940 0.578 YISI-2-SRL(Lo, 2019) 0.804 0.947 Baseline with Original Embeddings MOVER-1 + M-BERT 0.358 0.611 -0.396 0.335 0.559 0.261 0.880 0.373 COSINE + LASER 0.217 0.891 -0.745 -0.611 0.683 -0.303 0.842 0.139 Our Cross-lingual based Metrics MOVER-2 + CLP(M-BERT) 0.625 0.890 -0.060 0.993 0.851 0.928 0.968 0.742 COSINE + CLP(LASER) 0.225 0.894 0.041 0.150 0.696 -0.184 0.845 0.381 COSINE + UMD ◦CLP(LASER) 0.074 0.835 -0.633 0.498 0.759 -0.201 0.610 0.277 Our Cross-lingual based Metrics ⊕LM COSINE + CLP(LASER) ⊕LM 0.813 0.910 -0.070 -0.735 0.931 0.630 0.711 0.456 COSINE + UMD(LASER) ⊕LM 0.817 0.908 -0.383 -0.902 0.929 0.573 0.781 0.389 MOVER-2 + CLP(M-BERT) ⊕LM 0.848 0.907 -0.068 0.775 0.963 0.866 0.827 0.731 MOVER-2 + UMD(M-BERT) ⊕LM 0.859 0.914 -0.181 -0.391 0.970 0.702 0.874 0.535 Table 8: Pearson correlations with system-level human judgments on the WMT19 dataset. ’-’ marks the numbers not officially reported in (Ma et al., 2019).
2020
151
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1672–1678 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 1672 Parallel Sentence Mining by Constrained Decoding Pinzhen Chen∗ Nikolay Bogoychev∗ Kenneth Heafield Faheem Kirefu School of Informatics, University of Edinburgh 10 Crichton Street, Edinburgh EH8 9AB {pinzhen.chen, n.bogoych}@ed.ac.uk, {kheafiel, fkirefu}@inf.ed.ac.uk Abstract We present a novel method to extract parallel sentences from two monolingual corpora, using neural machine translation. Our method relies on translating sentences in one corpus, but constraining the decoding by a prefix tree built on the other corpus. We argue that a neural machine translation system by itself can be a sentence similarity scorer and it efficiently approximates pairwise comparison with a modified beam search. When benchmarked on the BUCC shared task, our method achieves results comparable to other submissions. 1 Introduction Having large and high-quality parallel corpora is critical for neural machine translation (NMT). One way to create such a resource is to mine the web (Resnik and Smith, 2003). Once texts are crawled from the web, they form large collections of data in different languages. To find parallel sentences, a natural way is to score sentence similarity between all possible sentence pairs and extract the topscoring ones. This poses two major challenges: 1. Accurately determining the semantic similarity of a sentence pair in two languages. 2. Efficiently scoring sentence similarity for all possible pairs across two languages. Scoring each source sentence against each target sentence results in unaffordable quadratic time complexity. A typical workflow reduces the search complexity in a coarse-to-fine manner by aligning documents then aligning sentences within documents (Uszkoreit et al., 2010). However, translated websites may not have matching document structures. More recent methods focus on direct sentence alignment. The results from Building and Using ∗Equal contribution. Comparable Corpora (BUCC) shared task show that direct sentence alignment can be done by sentence-level lexical comparison, neural comparison or a combination of the two (Zweigenbaum et al., 2017, 2018). A state-of-the-art method maps all sentences to multilingual sentence embeddings and compares them using vector similarity (Artetxe and Schwenk, 2019). Such sentence embeddings are produced by neural encoders, but the rise of the attention mechanism demonstrates that sentence embeddings alone are insufficient to obtain full translation quality (Bahdanau et al., 2015). To exploit quality gains from the attention mechanism, we propose to use a full NMT system with attention to score potentially parallel sentences. The way we avoid pairwise scoring is inspired by constrained decoding in NMT, where the choice of output tokens is constrained to a predefined list (Hokamp and Liu, 2017). Our method works as follows: We designate one language as source and one language as target, and build a trie over all target sentences. Then we translate each source sentence to the target language, but constrain left-to-right beam search to follow the trie. In other words, every translation hypothesis is a prefix of some sentence in the target language. Rather than freely choosing which token to extend by, a hypothesis is limited to extensions that exist in the target language corpus. 
In effect, we are using beam search to limit target language candidates for each source sentence. Our work makes two contributions to parallel sentence mining. First, instead of comparing translated text or neural similarity, we use an NMT model to directly score and retrieve sentences onthe-fly during decoding. Second, we approximate pairwise comparison with beam search, so only the top-scoring hypotheses need to be considered at each decoding step. 1673 2 Methodology NMT systems can assign a conditional translation probability to an arbitrary sentence pair. Filtering based on this (Junczys-Dowmunt, 2018) won the WMT 2018 shared task on parallel corpus filtering (Koehn et al., 2018). Intuitively, we could score every pair of source and target sentences using a translation system in quadratic time, then return pairs that score highly for further filtering. We approximate this with beam search. 2.1 Trie-constrained decoding We build a prefix tree (trie) containing all sentences in the target language corpus (Figure 1). Then we translate each sentence in the source language corpus using the trie as a constraint on output in the target language. NMT naturally generates translations one token at a time from left to right, so it can follow the trie of target language sentences as it translates. <s> Cakes are the best I like cakes strudels Figure 1: A monolingual trie storing three sentences. Formally, translation typically uses beam search to approximately maximise the probability of a target language sentence given a source language sentence. We modify beam search to restrict partial translations to be a prefix of at least one sentence in the target language. The trie is merely an efficient data structure with which to evaluate this prefix constraint; partial translations are augmented to remember their position in the trie. We consider two places to apply our constraint. In post-expansion pruning, beam search creates hypotheses for the next word, prunes hypotheses to fit in the beam size, and then requires they be prefixes of a target language sentences. In practice, most sentences are do not have translations in the corpus and search terminates early if all hypotheses are pruned. In pre-expansion pruning, a hypothesis in the beam generates a probability distribution over all tokens, but only the tokens corresponding to children of the trie node can be expanded by the hypothesis. The search process is guaranteed to find at least one target sentence for each source sentence. Downstream filtering removes false positives. Algorithm 1 Trie-constrained beam search with maximum output length L, beam size B, vocabulary V and a pre-built trie trie beam0 ←{<s>} match ←{} for time step t in 1 to L do beamt ←{} for hypothesis h in beamt−1 do Vt ←V if pre-expansion then v2 Vt ←Vt ∩Children(trie, h) v2 beamt ←beamt ∪Continue(h, Vt, B) beamt ←NBest(beamt, B −|match|) if post-expansion then v1 beamt ←beamt ∩trie v1 Move full sentences from beamt to match. if beamt is empty then return match return match Algorithm 1 presents both variants of our modified beam search algorithm. Besides canonical beam search, “ v1” indicates post-expansion pruning while “ v2” indicates pre-expansion pruning. Figure 2 visualises trie-constrained beam search with post-expansion pruning. <s> it × I like cakes 0.97 √ strudels 0.03 love × - Source: Me gustan los pasteles (I like cakes) - Target trie: as shown in Figure 1 Figure 2: Trie-constrained decoding with postexpansion pruning, using beam size 2. × denotes pruned hypotheses. 
√denotes the retrieved sentence. Numbers denote translation probabilities. The modified beam search algorithm allows us to efficiently approximate the comparison between a source sentence and M target sentences. We let B denote beam size and L denote maximum output length. Given each source sentence, our NMT decoder only expands the top B hypotheses intersecting with the trie, for at most L times, regardless of M. With N source sentences, our proposed method will reduce the comparison complexity from O(MN) to O(BLN), where BL ≪M. 1674 2.2 Filtering Pre-expansion pruning leaves each source sentence with an output, which needs to be filtered out if not parallel. We propose to use two methods. When NMT generates an output, a sentence level cross-entropy score is computed too. One way to perform filtering is to only keep sentences with a better per-word cross-entropy than a certain threshold. Another way is to use Bicleaner, an off-the-shelf tool which scores sentence similarity at sentence pair level (S´anchez-Cartagena et al., 2018). Filtering is optional for post-expansion pruning. 2.3 Trie implementation The trie used in our NMT decoding should be fast to query and small enough to fit in memory. We use an array of nodes as the basic data structure. Each node contains a key corresponding to a vocabulary item, as well as a pointer to another array containing all possible continuations in the next level. Binary search is used to find the correct continuations to the next level. With byte pair encoding (BPE) (Sennrich et al., 2016), we can always keep the maximum vocabulary size below 65535, which allows us to use 2-byte integers as keys, minimising memory usage. To integrate the trie into the decoder, we maintain external pointers to possible children nodes in the trie for each active hypothesis. When the hypotheses are expanded at each time step, the pointers are advanced to the next trie depth level. This ensures that cross-referencing the trie has a negligible effect on decoding speed. 3 Experiments 3.1 BUCC shared task We evaluate our method on the BUCC shared task, which requires participants to extract parallel sentences from large monolingual data of English and other languages (Zweigenbaum et al., 2017, 2018). Monolingual and parallel sentences come from Wikipedia and News Commentary respectively. Data are divided into sample, train and test sets at a ratio of 1:10:10. The gold alignments for the test set are not public. Evaluation metrics adopted are precision, recall and F1 score. When inspecting the BUCC shared task data, we discovered overlapping parallel sentences in the sample, train and test sets. For example, more than 60% of the German-English gold pairs in the test set appear in the train set too.1 3.2 Experiment details We apply our methods on English (En) paired with German (De), French (Fr) and Russian (Ru) on BUCC sample data initially. We train separate translation models for each language into English. All models are Transformer-Base (Vaswani et al., 2017), trained using Marian (Junczys-Dowmunt et al., 2018) with BPE applied. We use parallel data from WMT news translation task (Bojar et al., 2015), excluding News Commentary to prevent our systems from memorising the gold parallel sentences given the overlap issue. We choose beam size 90 by performing a grid search on De-En pair and keep it unchanged. 
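To make Sections 2.1–2.3 concrete, the following Python sketch shows a minimal target-side trie and the post-expansion pruning variant of Algorithm 1, together with the per-word cross-entropy filter of Section 2.2. It is an illustrative sketch only: next_token_scores stands in for the NMT decoder's next-token log-probabilities (the paper uses Marian, whose actual interface differs), the trie uses a simple dictionary rather than the compact array layout of Section 2.3, end-of-sentence handling is simplified, and the threshold is a placeholder.

# Sketch of the target-side trie and post-expansion pruning ("v1" in Algorithm 1).
class TrieNode:
    __slots__ = ("children", "is_end")
    def __init__(self):
        self.children = {}   # BPE token id -> TrieNode
        self.is_end = False  # a complete target sentence ends at this node

def build_trie(target_corpus):
    root = TrieNode()
    for sent in target_corpus:            # sent: list of BPE token ids
        node = root
        for tok in sent:
            node = node.children.setdefault(tok, TrieNode())
        node.is_end = True
    return root

def constrained_search(source, trie, next_token_scores, beam_size, max_len):
    beam = [(0.0, [], trie)]              # (log prob, tokens, trie node of the prefix)
    matches = []                          # completed target-corpus sentences
    for _ in range(max_len):
        expansions = []
        for logp, toks, node in beam:
            for tok, score in next_token_scores(source, toks):
                expansions.append((logp + score, toks + [tok], node))
        expansions.sort(key=lambda h: h[0], reverse=True)
        beam = []
        for logp, toks, node in expansions[: beam_size - len(matches)]:
            child = node.children.get(toks[-1])
            if child is None:
                continue                  # hypothesis left the trie: prune it
            if child.is_end:
                matches.append((logp, toks))
            else:
                beam.append((logp, toks, child))
        if not beam:                      # search terminates early (Section 2.1)
            break
    return matches

def ce_filter(matches, threshold):
    # Section 2.2: keep pairs whose per-word cross-entropy beats a tuned threshold
    return [(lp, toks) for lp, toks in matches if -lp / max(len(toks), 1) < threshold]

Pre-expansion pruning ("v2") would instead restrict next_token_scores to the children of the current trie node before ranking, so every source sentence retrieves some candidate, which is why downstream filtering is mandatory for that variant.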
Regarding the filtering for pre-expansion pruning, per-word conditional cross-entropy thresholds are tuned separately for each pair, because languages inherently have different (cross-)entropies. For Bicleaner, we stick to its default settings, except that we disable the language model filter. All our models translate into English, but our method is actually language-agnostic. Hence, we train a separate En→De model, which will allow us to compare our method in inverse translation directions. Table 1 reports the performance of our systems on the sample data. Our method exhibits a much higher precision than recall. We hypothesise that if the systems in inverse directions retrieve different sentence pairs, then taking a union will sacrifice some precision for recall, consequently a higher F1. Thus, we present in the same table the results of taking the union of outputs from En→De and De→En systems, labelled as “(3) ∪(4)”. Likewise, we also take the union of the results from cross-entropy and Bicleaner filtering and report scores in the same table. It turns out that pre-expansion works better than post-expansion. In order to directly compare with previous work, we tune parameters of its filtering thresholds on train data for De-En pair, and apply the pre-expansion variant on the test data. Our results, evaluated by the BUCC organisers, are reported in Table 2 together with other submissions. Finally, we conduct an add-on experiment to see how our system would perform with in-domain 1The shared task organisers confirmed the issue after we pointed it out. They re-evaluated previous submissions without overlapping parallel sentences. On average, recall drops by 2% with the largest being 4%. 1675 (1) Fr→En (2) Ru→En (3) De→En (4) En→De (3) ∪(4) P R F1 P R F1 P R F1 P R F1 P R F1 (v1) post-expansion 92 62 74 99 61 75 88 61 72 96 59 73 81 75 81 (v2) pre-expansion + cross-entropy (CE) 97 72 83 98 84 90 96 73 83 98 79 88 96 87 91 + Bicleaner (BC) 86 77 81 n/a* 93 81 86 91 82 86 86 87 87 + CE ∪BC 93 81 86 n/a 91 84 87 90 86 88 91 91 91 * Bicleaner does not have a published classifier model for Ru-En. Table 1: Precision, recall and F1 of our methods on BUCC sample set. data. We fine-tune our De→En and En→De systems on News Commentary, excluding the sentence pairs which appear in BUCC train or test sets. As BUCC submissions are asked not to use News Commentary, this is only used to contrast with our own results on the train set. Train Test Azpeitia et al. (2018) 84.3 85.5 Wieting et al. (2019) 77.5 n/a* Artetxe and Schwenk (2019) 91.9 95.6 (v2) pre-expansion + CE ∪BC 83.0 83.9 + fine-tuning 85.5 n/a * Wieting et al. directly evaluated on the public train set. Table 2: F1 scores of our method and other methods on BUCC De-En train and test sets. 4 Results and Analysis Experiments on the sample data in Table 1 show that pre-expansion pruning outperforms postexpansion by about 10 F1 points. This can be explained by the fact that the decoder has a better chance to generate the correct target sentence if the available vocabulary is constrained. For both variants, the high precision reflects the effectiveness of using NMT as a sentence similarity scorer. Regarding filtering methods, we notice that Bicleaner achieves a more balanced precision and recall, while filtering by per-word cross-entropy leads to very high precision but lower recall. Generally, the latter does better in terms of F1. 
Taking a union of the output from the two filtering methods results in a even more balanced precision and recall, without damaging F1. This implies that the two filtering techniques keep different sentence pairs. Table 2 shows that our method achieves comparable performance to other methods. Moreover, our models are trained using a vanilla Transformer-Base architecture on WMT data. Without data or model wise techniques (e.g. indomain fine-tuning), they are nowhere close to state-of-the-art NMT systems (Barrault et al., 2019). Contrasting Table 1 and Table 2 reveals a discrepancy between our method’s F1 scores on the sample and train sets. We suspect that when there are more possible target sentences, our model will have more choices, leading to a lower performance. The same behaviour is also observed in other BUCC 2018 submissions which report their scores on the sample data (Azpeitia et al., 2018; Leong et al., 2018). Overall our method does not outperform stateof-the-art which leverages neural embeddings. We identify several weaknesses: beam search can only find local optima, and a genuine parallel sentence cannot be recovered once it is pruned. Thus the method is vulnerable when parallel sentences have different word ordering. For example, “Por el momento, estoy bebiendo un caf´e” (English: “At the moment, I am drinking a coffee”) can hardly match “I am drinking a coffee at the moment”, because an NMT system will have very low probability of generating a reordered translation, unless using an undesirably large beam size. Moreover, compared to methods that consider textual overlap, NMT is sensitive to domain mismatch and rare words (Koehn and Knowles, 2017). When a system is confused by rare words in the source, we observe that the overly zealous language model in the decoder generates a fluent sentence in the trie rather than a translation. This problem is alleviated when our systems are fine-tuned on indomain data, as shown in Table 2 that there is a gain in F1. Finally we discuss the limitation of evaluating our method on the BUCC task. First, our method based on NMT can be liable to favour 1676 machine-translated texts, whereas the BUCC data is unlikely to contain those. Next, we notice that some parallel sentences in BUCC data are not included in the gold alignments. For instance, in De-En train set, “de-000081259” and “de-000081260” are the same German sentence, and so are “en-000036940” and “en-000036941” on the English side. Gold alignments only include (de-000081259, en-000036940) and (de000081260, en-000036941), but not the other two. Lastly, it still remains unknown if a system optimised for F1 will produce the sentences that can truly improve NMT performance. 5 Related Work A typical parallel corpus mining workflow first aligns parallel documents to limit the search space for sentence alignment. Early methods rely on webpage structure (Resnik and Smith, 2003; Shi et al., 2006). Later, Uszkoreit et al. (2010) translate all documents into a single language, and shortlist candidate document pairs based on TFIDF-weighted n-grams. Recently, Guo et al. (2019) suggest a neural method to compare document embeddings obtained from sentence embeddings . With the assumption that matched documents are parallel (no cross-alignment), sentence alignment can be done by comparing sentence length in words (Brown et al., 1991) or characters (Gale and Church, 1993), which is then improved by adding lexical features (Varga et al., 2005). 
After translating texts into the same language, BLEU can also be used to determine parallel texts, by anchoring the most reliable alignments first (Sennrich and Volk, 2011). Most recently, Thompson and Koehn (2019) propose to compare bilingual sentence embeddings with dynamic programming in linear runtime. There are also research efforts on parallel sentence extraction without the reliance on document alignment. Munteanu and Marcu (2002) acquire parallel phrases from comparable corpora using bilingual tries and seed dictionaries. Azpeitia et al. (2018) computes Jaccard similarity of lexical translation overlap. Leong et al. (2018) use an autoencoder and a maximum entropy classifier. Bouamor and Sajjad (2018) consider cosine similarity between averaged multilingual word embeddings. Guo et al. (2018) design a dual encoder model to learn multilingual sentence embeddings directly with added negative examples. Wieting et al. (2019) obtain sentence embeddings from sub-word embeddings and train a simpler model to distinguish positive and negative examples. Artetxe and Schwenk (2019) refine Guo et al. (2018)’s work and achieve state-of-the-art by looking at the margins of cosine similarities between pairs of nearest neighbours. In our work, using NMT as a similarity scorer relies on constrained decoding (Hokamp and Liu, 2017), which has been applied on image captioning (Anderson et al., 2017) and keyword generation (Lian et al., 2019). 6 Conclusion and Future Work We bring a new insight into using NMT as a similarity scorer for sentences in different languages. By constraining on a target side trie during decoding, beam search can approximate pairwise comparison between source and target sentences. Thus, overall we present an interesting way of finding parallel sentences through trie-constrained decoding. Our method achieves a comparable F1 score to existing systems with a vanilla architecture and data. Maximising machine translation scores is biased towards finding machine translated text produced by a similar model. More research is needed on this problem given the prevalent usage of NMT. We hypothesise that part of the success of dual conditional cross-entropy filtering (JunczysDowmunt, 2018) is checking that scores in both directions are approximately equal, whereas a machine translation would be characterised by a high score in one direction. Finally, scalability is a key issue in large-scale mining of parallel corpora, where both quantity and quality are of concern. The scalability of direct sentence alignment without a document aligner has not been thoroughly investigated in our work, as well as other related work. Acknowledgments This work has received funding from the European Union under grant agreement INEA/CEF/ICT/A2017/1565602 through the Connecting Europe Facility. This paper reflects the authors’ views; INEA is not responsible for any use that may be made of the information contained in this paper. 1677 References Peter Anderson, Basura Fernando, Mark Johnson, and Stephen Gould. 2017. Guided open vocabulary image captioning with constrained beam search. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 936–945, Copenhagen, Denmark. Association for Computational Linguistics. Mikel Artetxe and Holger Schwenk. 2019. Marginbased parallel corpus mining with multilingual sentence embeddings. 
In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 3197–3203. Association for Computational Linguistics. Andoni Azpeitia, Thierry Etchegoyhen, and Eva Mart´ınez Garcia. 2018. Extracting parallel sentences from comparable corpora with STACC variants. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Paris, France. European Language Resources Association (ELRA). Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Lo¨ıc Barrault, Ondˇrej Bojar, Marta R. Costa-juss`a, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Matthias Huck, Philipp Koehn, Shervin Malmasi, Christof Monz, Mathias M¨uller, Santanu Pal, Matt Post, and Marcos Zampieri. 2019. Findings of the 2019 conference on machine translation (WMT19). In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 1–61, Florence, Italy. Association for Computational Linguistics. Ondˇrej Bojar, Rajen Chatterjee, Christian Federmann, Barry Haddow, Matthias Huck, Chris Hokamp, Philipp Koehn, Varvara Logacheva, Christof Monz, Matteo Negri, Matt Post, Carolina Scarton, Lucia Specia, and Marco Turchi. 2015. Findings of the 2015 Workshop on Statistical Machine Translation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 1–46, Lisbon, Portugal. Association for Computational Linguistics. Houda Bouamor and Hassan Sajjad. 2018. H2@BUCC18: Parallel sentence extraction from comparable corpora using multilingual sentence embeddings. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Paris, France. European Language Resources Association (ELRA). Peter F. Brown, Jennifer C. Lai, and Robert L. Mercer. 1991. Aligning sentences in parallel corpora. In Proceedings of the 29th Annual Meeting on Association for Computational Linguistics, ACL ’91, pages 169–176, Stroudsburg, PA, USA. Association for Computational Linguistics. William A. Gale and Kenneth W. Church. 1993. A program for aligning sentences in bilingual corpora. Computational Linguistics, 19(1):75–102. Mandy Guo, Qinlan Shen, Yinfei Yang, Heming Ge, Daniel Cer, Gustavo Hernandez Abrego, Keith Stevens, Noah Constant, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil. 2018. Effective parallel corpus mining using bilingual sentence embeddings. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 165–176, Belgium, Brussels. Association for Computational Linguistics. Mandy Guo, Yinfei Yang, Keith Stevens, Daniel Cer, Heming Ge, Yun-hsuan Sung, Brian Strope, and Ray Kurzweil. 2019. Hierarchical document encoder for parallel corpus mining. In Proceedings of the Fourth Conference on Machine Translation (Volume 1: Research Papers), pages 64–72, Florence, Italy. Association for Computational Linguistics. Chris Hokamp and Qun Liu. 2017. Lexically constrained decoding for sequence generation using grid beam search. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1535– 1546, Vancouver, Canada. Association for Computational Linguistics. Marcin Junczys-Dowmunt. 2018. 
Dual conditional cross-entropy filtering of noisy parallel corpora. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 888–895, Belgium, Brussels. Association for Computational Linguistics. Marcin Junczys-Dowmunt, Roman Grundkiewicz, Tomasz Dwojak, Hieu Hoang, Kenneth Heafield, Tom Neckermann, Frank Seide, Ulrich Germann, Alham Fikri Aji, Nikolay Bogoychev, Andr´e F. T. Martins, and Alexandra Birch. 2018. Marian: Fast neural machine translation in C++. In Proceedings of ACL 2018, System Demonstrations, pages 116– 121, Melbourne, Australia. Association for Computational Linguistics. Philipp Koehn, Huda Khayrallah, Kenneth Heafield, and Mikel L. Forcada. 2018. Findings of the WMT 2018 shared task on parallel corpus filtering. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 726–739, Belgium, Brussels. Association for Computational Linguistics. Philipp Koehn and Rebecca Knowles. 2017. Six challenges for neural machine translation. In Proceedings of the First Workshop on Neural Machine Translation, pages 28–39, Vancouver. Association for Computational Linguistics. 1678 Chongman Leong, Derek F. Wong, and Lidia S. Chao. 2018. UM-pAligner: Neural network-based parallel sentence identification model. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Paris, France. European Language Resources Association (ELRA). Yijiang Lian, Zhijie Chen, Jinlong Hu, Kefeng Zhang, Chunwei Yan, Muchenxuan Tong, Wenying Han, Hanju Guan, Ying Li, Ying Cao, Yang Yu, Zhigang Li, Xiaochun Liu, and Yue Wang. 2019. An end-toend generative retrieval method for sponsored search engine–decoding efficiently into a closed target domain. arXiv preprint arXiv:1902.00592. Dragos Stefan Munteanu and Daniel Marcu. 2002. Processing comparable corpora with bilingual suffix trees. In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing (EMNLP 2002), pages 289–295. Association for Computational Linguistics. Philip Resnik and Noah A. Smith. 2003. The web as a parallel corpus. Computational Linguistics, 29(3):349–380. V´ıctor M. S´anchez-Cartagena, Marta Ba˜n´on, Sergio Ortiz-Rojas, and Gema Ram´ırez. 2018. Prompsit’s submission to WMT 2018 parallel corpus filtering shared task. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 955–962, Belgium, Brussels. Association for Computational Linguistics. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715– 1725, Berlin, Germany. Association for Computational Linguistics. Rico Sennrich and Martin Volk. 2011. Iterative, MTbased sentence alignment of parallel texts. In Proceedings of the 18th Nordic Conference of Computational Linguistics (NODALIDA 2011), pages 175– 182, Riga, Latvia. Northern European Association for Language Technology (NEALT). Lei Shi, Cheng Niu, Ming Zhou, and Jianfeng Gao. 2006. A DOM tree alignment model for mining parallel data from the web. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics, ACL-44, pages 489–496, Stroudsburg, PA, USA. Association for Computational Linguistics. Brian Thompson and Philipp Koehn. 2019. 
Vecalign: Improved sentence alignment in linear time and space. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1342–1348, Hong Kong, China. Association for Computational Linguistics. Jakob Uszkoreit, Jay Ponte, Ashok Popat, and Moshe Dubiner. 2010. Large scale parallel document mining for machine translation. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010), pages 1101–1109, Beijing, China. Coling 2010 Organizing Committee. D´aniel Varga, P´eter Hal´acsy, Andr´as Kornai, Viktor Nagy, L´aszl´o N´emeth, and Viktor Tr´on. 2005. Parallel corpora for medium density languages. Proceedings of the RANLP 2005 Conference, pages 590– 596. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc. John Wieting, Kevin Gimpel, Graham Neubig, and Taylor Berg-Kirkpatrick. 2019. Simple and effective paraphrastic similarity from parallel translations. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4602–4608, Florence, Italy. Association for Computational Linguistics. Pierre Zweigenbaum, Serge Sharoff, and Reinhard Rapp. 2017. Overview of the second BUCC shared task: Spotting parallel sentences in comparable corpora. In Proceedings of the 10th Workshop on Building and Using Comparable Corpora, pages 60–67, Vancouver, Canada. Association for Computational Linguistics. Pierre Zweigenbaum, Serge Sharoff, and Reinhard Rapp. 2018. Overview of the third BUCC shared task: Spotting parallel sentences in comparable corpora. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Paris, France. European Language Resources Association (ELRA).
2020
152
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1679–1685 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 1679 Self-Attention with Cross-Lingual Position Representation Liang Ding† Longyue Wang‡ Dacheng Tao† †UBTECH Sydney AI Centre, School of Computer Science, Faculty of Engineering, The University of Sydney {ldin3097,dacheng.tao}@sydney.edu.au ‡Tencent AI Lab [email protected] Abstract Position encoding (PE), an essential part of self-attention networks (SANs), is used to preserve the word order information for natural language processing tasks, generating fixed position indices for input sequences. However, in cross-lingual scenarios, e.g., machine translation, the PEs of source and target sentences are modeled independently. Due to word order divergences in different languages, modeling the cross-lingual positional relationships might help SANs tackle this problem. In this paper, we augment SANs with crosslingual position representations to model the bilingually aware latent structure for the input sentence. Specifically, we utilize bracketing transduction grammar (BTG)-based reordering information to encourage SANs to learn bilingual diagonal alignments. Experimental results on WMT’14 English⇒German, WAT’17 Japanese⇒English, and WMT’17 Chinese⇔English translation tasks demonstrate that our approach significantly and consistently improves translation quality over strong baselines. Extensive analyses confirm that the performance gains come from the cross-lingual information. 1 Introduction Although self-attention networks (SANs) (Lin et al., 2017) have achieved the state-of-the-art performance on several natural language processing (NLP) tasks (Vaswani et al., 2017; Devlin et al., 2019; Radford et al., 2018), they possess the innate disadvantage of sequential modeling due to the lack of positional information. Therefore, absolute position encoding (APE) (Vaswani et al., 2017) and relative position encoding (RPE) (Shaw et al., 2018) were introduced to better capture the sequential dependencies. However, either absolute or relative PE is language-independent and its embedding Bush with Sharon held a talk Bush held a talk with Sharon [source] [re-ordered] 0 1 2 3 4 5 [abs POS] 0 3 4 5 1 2 [XL POS] 布什 与 沙龙 举行 了 会谈 [target] 布什 与 沙龙 举行 了 会谈 Bush held a talk with Sharon (a) BTG tree based cross-lingual structure for En-Zh Inverted Straight (b) Absolute(abs) Position vs. Cross-Lingual(XL) Position Figure 1: Illustration of cross-lingual position for English⇒Chinese translation task. (a) BTG tree shows the cross-lingual preordering. The top-left corner is the transduction grammar. (b) the difference between absolute position encoding (APE) and our proposed crosslingual position encoding (XL PE) . remains fixed. This inhibits the capacity of SANs when modelling multiple languages, which have diverse word orders and structures (Gell-Mann and Ruhlen, 2011). Recent work have shown that modeling cross-lingual information (e.g., alignment or reordering) at encoder or attention level improves translation performance for different language pairs (Cohn et al., 2016; Du and Way, 2017; Zhao et al., 2018; Kawara et al., 2018). Inspired by their work, we propose to augment SANs with cross-lingual representations, by encoding reordering indices at embedding level. Taking English⇒Chinese translation task for example, we first reorder the English sentence by deriving a latent bracketing transduction grammar 1680 (BTG) tree (Wu, 1997) (Fig. 1a). 
Similar to absolute position, the reordering information can be represented as cross-lingual position (Fig. 1b). In addition, we propose two strategies to incorporate cross-lingual position encoding into SANs. We conducted experiments on three commonly-cited machine translation datasets. Results show that exploiting cross-lingual PE consistently improves translation quality. Further analysis reveals that our method improves alignment quality (§Sec. 4.3) and benefits the context-free Transformer (Tang et al., 2019) (§Sec. 4.4). Furthermore, a contrastive evaluation demonstrates that NMT models benefit from the cross-lingual information rather than from a denoising effect (§Sec. 4.5).

2 Background

Position Encoding To tackle the position-unaware problem, absolute position information is injected into the SANs:

PE_abs = f(pos_abs / 10000^(2i/d_model))   (1)

where pos_abs denotes the numerical position indices, i is the dimension of the position indices and d_model is the hidden size. f(·) alternately employs sin(·) and cos(·) for even and odd dimensions. Accordingly, the position matrix PE can be obtained given the input X = {x_1, ..., x_T} ∈ R^(T×d_model). Then, the position-aware output Z is calculated by:

Z = X + PE_abs ∈ R^(T×d_model)   (2)

Self-Attention The SANs compute the attention of each pair of elements in parallel. They first convert the input into three matrices Q, K, V, representing queries, keys, and values, respectively:

{Q, K, V} = {Z W_Q, Z W_K, Z W_V}   (3)

where W_Q, W_K, W_V ∈ R^(d_model×d_model) are parameter matrices. The output is then computed as a weighted sum of values by ATT(Q, K, V). SANs can be implemented with a multi-head attention mechanism, which requires extra splitting and concatenation operations. Specifically, W_Q, W_K, W_V and Q, K, V in Eq. (3) are split into H sub-matrices, yielding H heads. For the h-th head, the output is computed by:

O_h = ATT(Q_h, K_h, V_h) ∈ R^(T×d_v)   (4)

where the subspace parameters are W_Q^h, W_K^h ∈ R^(d_model×d_k) and W_V^h ∈ R^(d_model×d_v), with d_k and d_v referring to the dimensions of keys and values in the subspace, and normally d_k = d_v = d_model/H. Finally, these subspaces are combined with a concatenation operation:

O = CONCAT(O_1, ..., O_H) W_O   (5)

where W_O ∈ R^(H d_v×d_model) and O ∈ R^(T×d_model) are the parameter matrix and output, respectively.

Figure 2: The proposed integration strategies: (a) InXL SANs and (b) HeadXL SANs.

3 Approach

3.1 Cross-Lingual Position Representation First, we built a BTG-based reordering model (Neubig et al., 2012) to generate a reordered source sentence according to the word order of its corresponding target sentence. Second, we obtained the reordered word indices pos_XL that correspond to the input sentence X. To output the cross-lingual position matrix PE_XL, we inherit the sinusoidal function in Eq. (1). Formally, the process is:

PE_XL = f(BTG(X))   (6)

3.2 Integration Strategy As shown in Fig. 2, we propose two strategies to integrate the cross-lingual position encoding (XL PE) into SANs: inputting-level XL (InXL) SANs and head-level XL (HeadXL) SANs.

Inputting-level XL SANs As illustrated in Fig. 2a, we employ a non-linear function TANH(·) to fuse PE_abs and PE_XL:

PE_IN-XL = TANH(PE_abs U + PE_XL V)   (7)

where U, V are trainable parameters. In our preliminary experiments, the non-linear function performs better than element-wise addition.
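To make Eqs. (1), (6) and (7) concrete, the following PyTorch-style sketch computes sinusoidal encodings from both the absolute and the reordered (cross-lingual) position indices and fuses them with the tanh combination of the InXL strategy. The BTG reordering indices are assumed to be supplied by the external preordering model, d_model is assumed even, and the class and parameter names are illustrative rather than taken from the paper's implementation.

# Sketch of Eqs. (1), (6)-(7): sinusoidal encodings over absolute and reordered
# position indices, fused by a non-linear (tanh) combination with trainable U, V.
import torch

def sinusoidal(pos, d_model):
    # pos: LongTensor of position indices, shape (T,); d_model assumed even
    i = torch.arange(d_model // 2, dtype=torch.float32)
    angles = pos.float().unsqueeze(1) / torch.pow(torch.tensor(10000.0), 2 * i / d_model)
    pe = torch.zeros(pos.size(0), d_model)
    pe[:, 0::2] = torch.sin(angles)   # even dimensions
    pe[:, 1::2] = torch.cos(angles)   # odd dimensions
    return pe

class InXLFusion(torch.nn.Module):
    def __init__(self, d_model):
        super().__init__()
        self.U = torch.nn.Linear(d_model, d_model, bias=False)  # applied to PE_abs
        self.V = torch.nn.Linear(d_model, d_model, bias=False)  # applied to PE_XL

    def forward(self, x, abs_pos, xl_pos):
        # x: (T, d_model) token embeddings; abs_pos / xl_pos: (T,) index tensors
        pe_abs = sinusoidal(abs_pos, x.size(-1))
        pe_xl = sinusoidal(xl_pos, x.size(-1))
        pe_in_xl = torch.tanh(self.U(pe_abs) + self.V(pe_xl))   # Eq. (7)
        return x + pe_in_xl                                     # added as in Eq. (2)

# For the example in Fig. 1b: abs_pos = tensor([0, 1, 2, 3, 4, 5]) and
# xl_pos = tensor([0, 3, 4, 5, 1, 2]) from the BTG-reordered source.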
This is likely because the more expressive non-linear fusion has a stronger fitting capability, which helps the model tolerate occasional reordering errors. Next, we perform Eq. (2) to obtain the output representations:

Z_IN-XL = X + PE_IN-XL   (8)

Similarly, we use Eq. (3)–(5) to calculate the multiple heads of the SANs.

Head-level XL SANs Instead of projecting XL PE to all attention heads, we feed it to only a subset of them, such that some heads contain XL PE and the others contain APE; we call this HeadXL. As shown in Fig. 2b, we first add APE and XL PE to X, respectively:

Z_abs = X + PE_abs,  Z_XL = X + PE_XL   (9)

We denote the number of XL PE-equipped heads as τ ∈ {0, ..., H}. To perform the attention calculation, W_i is divided into [W_i^XL ∈ R^(d_model×τd_v); W_i^abs ∈ R^(d_model×(H−τ)d_v)] for each i ∈ {Q, K, V}, correspondingly generating two types of {Q, K, V} for the XL PE heads and the APE heads. According to Eq. (4), the output of each XL PE head is:

O_h^XL = ATT(Q_h^XL, K_h^XL, V_h^XL) ∈ R^(T×d_v)   (10)

As a result, the final output of HeadXL is:

HEADSAN(X) = CONCAT(O_1^XL, ..., O_τ^XL, O_(τ+1)^abs, ..., O_H^abs) W_O   (11)

In particular, τ = 0 refers to the original Transformer (Vaswani et al., 2017) and τ = H means that XL PE propagates over all attention heads.

4 Experiments

We conduct experiments on word order-diverse language pairs: WMT'14 English⇒German (En-De), WAT'17 Japanese⇒English (Ja-En), and WMT'17 Chinese⇔English (Zh-En & En-Zh). For English⇒German, the training set consists of 4.5 million sentence pairs and newstest2013 & 2014 are used as the dev. and test sets, respectively. BPE with 32K merge operations is used to handle low-frequency words. For Japanese⇒English, we follow Morishita et al. (2017) to use the first two sections as training data, which consist of 2.0 million sentence pairs. The dev. and test sets contain 1790 and 1812 sentences. For Chinese⇔English, we follow Hassan et al. (2018) to get 20 million sentence pairs. We develop on devtest2017 and test on newstest2017. We use SacreBLEU (Post, 2018) as the evaluation metric with a statistical significance test (Collins et al., 2005).

Figure 3: BLEU score on newstest2014 for different τ (Transformer BIG vs. HeadXL SANs).

We evaluate the proposed XL PE strategies on the Transformer. The baseline systems include Relative PE (Shaw et al., 2018) and directional SAN (DiSAN, Shen et al. 2018). We implement them on top of OpenNMT (Klein et al., 2017). In addition, we report the results of previous studies (Hao et al., 2019; Wang et al., 2019; Chen et al., 2019b,a; Du and Way, 2017; Hassan et al., 2018). The reordered source sentences are generated by a BTG-based preordering model (Neubig et al., 2012) trained on the above sub-word-level1 parallel corpora. At the training phase, we first obtain word alignments from the parallel data using GIZA++ or FastAlign, and the training process then finds the optimal BTG tree for each source sentence consistent with the order of the target sentence, based on the word alignments and parallel data. At the decoding phase, we only provide source sentences as input and the model outputs reordering indices, which are fed into the NMT model. Thus, bilingual alignment information is only used to preprocess the training data, and is not necessary at decoding time. For a fair comparison, we keep the Transformer decoder unchanged and validate the different position representation strategies on the encoder. We conduct all experiments on the TRANSFORMER-BIG with four V100 GPUs.

4.1 Effect of τ in HeadXL SANs Fig.
3 reports the results of different τ for Head XL SANs. With increasing of XL PE-informed heads, the best BLEU is achieved when #heads = 4, which is therefore left as the default setting for HeadXL. Then, the BLEU score gradually decreases as the 1Garg et al. (2019) show that sub-word units are beneficial for statistical model. 1682 # System Architecture BLEU #Param. 1 Vaswani et al. (2017) Transformer BIG 28.4 213M 2 Hao et al. (2019) Transformer BIG w/ BiARN 28.98 323.5M 3 Wang et al. (2019) Transformer BIG w/ Structure PE 28.88 – 4 Chen et al. (2019b) Transformer BIG w/ MPRHead 29.11 289.1M 5 Chen et al. (2019a) Transformer BIG w/ Reorder Emb 29.11 308.2M 6 This work Transformer BIG 28.36 282.55M 7 + Relative PE 28.71 +0.06M 8 + DiSAN 28.76 +0.04M 9 + InXL PE 28.66 +0.01M 10 + HeadXL PE 28.72 +0.00M 11 + Combination 29.05↑ +0.01M Table 1: Experiments on WMT’14 En-De. “↑”indicates significant difference (p < 0.01) from Transformer BIG. “#Param” denotes the number of parameters. “+ Combination” represents combining #9 and #10 methods. System JaEn ZhEn EnZh Du and Way (2017) 25.65 – – Hassan et al. (2018) – 24.20 – Transformer BIG 29.22 23.94 33.79 + Relative PE 29.62 24.36 34.21 + DiSAN 29.73 24.44 34.31 + InXL PE 29.52 24.44 34.23 + HeadXL PE 29.62 24.39 34.20 + Combination↑ 29.85 24.71 34.51 Table 2: Experiments on Ja-En, Zh-En and En-Zh. number of APE-informed heads decrease (τ ↑), indicating that sequential position embedding is still essential for SANs. 4.2 Main Results Tab. 1 shows the results on En-De, inputting-level cross-lingual PE (+InXL PE) and head-level crosslingual PE (+HeadXL PE) outperform Transformer BIG by 0.30 and 0.36 BLEU points, and combining these two strategies2 achieves a 0.69 BLEU point increase. For Ja-En, Zh-En, and En-Zh (Tab. 2), we observe a similar phenomenon, demonstrating that XL PE on SANs do improve the translation performance for several language pairs. It is worth noting that our approach introduces nearly no additional parameters (+0.01M over 282.55M). 4.3 Alignment Quality Our proposed XL PE intuitively encourages SANs to learn bilingual diagonal alignment, so has the 2Replace PEXL in Eq. (9) with PEIN-XL in Eq. (8). Model AER P R Transformer BIG 29.7% 69.9% 72.7% + InXL 27.5% 72.2% 74.1% + HeadXL 26.9% 75.4% 73.9% + Combination 24.7% 75.0% 77.6% Table 3: The AER scores of alignments on En-De. potential to induce better attention matrices. We explore this hypothesis on the widely used Gold Alignment dataset3 and follow Tang et al. (2019) to perform the alignment. The only difference being that we average the attention matrices across all heads from the penultimate layer (Garg et al., 2019). The alignment error rate (AER, Och and Ney 2003), precision (P) and recall (R) are reported as the evaluation metrics. Tab. 3 summarizes the results. We can see: 1) XL PE allows SANs to learn better attention matrices, thereby improving alignment performance (27.4 / 26.9 vs. 29.7); and 2) combining the two strategies delivers consistent improvements (24.7 vs. 29.7). 4.4 Gain for Context-Free Model Tang et al. (2019) showed that context-free Transformer (directly propagating the source word embeddings with PE to the decoder) achieved comparable results to the best RNN-based model. We argue that XL PE could further enhance the contextfree Transformer. On English⇒German dataset, 3http://www-i6.informatik.rwth-aachen. de/goldAlignment, the original dataset is GermanEnglish, we reverse it to English-German. 1683 System BLEU #Param. 
LSTM (6 layers) 24.12 178.90M BIG-noEnc-noPos 9.97 171.58M + Absolute PE 24.11 +0.00M + Relative PE 24.47 +0.01M + InXL PE 24.68 +0.01M Table 4: Gains over Encoder-Free Transformer. we compare LSTM-based model, Transformer BIGnoenc-nopos, +APE, +RPE and +InXL PE. For fair comparison, we set the LSTM hidden size to 1024. In Tab. 4, we can see: 1) position information is the most important component for the context-free model, bringing +14.45 average improvement; 2) InXL PE equipped context-free Transformer significantly outperforms the LSTM model while consuming less parameters; and 3) compared to the increment on standard Transformer (+0.30 over 28.36), InXL PE improves more for context-free Transformer (+0.57 over 24.11), where the improvements are +2.3% vs. +1.1%. 4.5 Effects of Noisy Reordering Information To demonstrate that our improvements come from cross-lingual position information rather than noisy position signals, we attack our model by adding noises4 into reordered indices of training sentences. As shown in Fig. 4, our method can tolerate partial reordering noises and maintain performance to some extent. However, as noise increases, translation quality deteriorates, indicating that noises in reordering information do not work as regularization. This contrastive evaluation also confirms that the model does not benefit from the noise as much as it benefits from the reordering information. 5 Related Work Augmenting SANs with position representation SANs ignore the position of each token due to its position-unaware “bag-of-words” assumption. The most straightforward strategy is adding the position representations as part of the token representations (Vaswani et al., 2017; Shaw et al., 2018). Besides above sequential PE approaches, Wang et al. (2019) enhanced SANs with structural positions extracted from the syntax dependencies. However, none of them considered modeling the cross4We randomly swap two reordered positional indexes with different ratios. 25 26 28 0% 5% 10% 15% 20% BLEU Ratio of noisy reordered indices Ours + Noises Transformer Big Figure 4: Experiments with noise attacks. Ratio of noisy reordered indices ranges from 0% to 20%. lingual position information between languages. Modeling cross-lingual divergence There has been many works modeling cross-lingual divergence (e.g., reordering) in statistical machine translation (Nagata et al., 2006; Durrani et al., 2011, 2013). However, it is difficult to migrant them to neural machine translation. Kawara et al. (2018) pre-reordered the source sentences with a recursive neural network model. Chen et al. (2019a) learned the reordering embedding by considering the relationship between the position embedding of a word and SANS-calculated sentence representation. Yang et al. (2019) showed that SANs in machine translation could learn word order mainly due to the PE, indicating that modeling cross-lingual information at position representation level may be informative. Thus, we propose a novel cross-lingual PE method to improve SANs. 6 Conclusions and Future Work In this paper, we presented a novel cross-lingual position encoding to augment SANs by considering cross-lingual information (i.e., reordering indices) for the input sentence. We designed two strategies to integrate it into SANs. Experiments indicated that the proposed strategies consistently improve the translation performance. 
In the future, we plan to extend the cross-lingual position encoding to non-autoregressive MT (Gu et al., 2018) and unsupervised NMT (Lample et al., 2018). Acknowledgments This work was supported by Australian Research Council Projects FL-170100117. We are grateful to the anonymous reviewers and the area chair for their insightful comments and suggestions. 1684 References Kehai Chen, Rui Wang, Masao Utiyama, and Eiichiro Sumita. 2019a. Neural machine translation with reordering embeddings. In ACL. Kehai Chen, Rui Wang, Masao Utiyama, and Eiichiro Sumita. 2019b. Recurrent positional embedding for neural machine translation. In EMNLP. Trevor Cohn, Cong Duy Vu Hoang, Ekaterina Vymolova, Kaisheng Yao, Chris Dyer, and Gholamreza Haffari. 2016. Incorporating structural alignment biases into an attentional neural translation model. In NAACL. Michael Collins, Philipp Koehn, and Ivona Kuˇcerov´a. 2005. Clause restructuring for statistical machine translation. In ACL. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In NAACL. Jinhua Du and Andy Way. 2017. Pre-reordering for neural machine translation: Helpful or harmful? The Prague Bulletin of Mathematical Linguistics, 108(1). Nadir Durrani, Alexander Fraser, Helmut Schmid, Hieu Hoang, and Philipp Koehn. 2013. Can Markov models over minimal translation units help phrasebased SMT? In ACL. Nadir Durrani, Helmut Schmid, and Alexander Fraser. 2011. A joint sequence translation model with integrated reordering. In ACL. Sarthak Garg, Stephan Peitz, Udhyakumar Nallasamy, and Matthias Paulik. 2019. Jointly learning to align and translate with transformer models. In EMNLP. Murray Gell-Mann and Merritt Ruhlen. 2011. The origin and evolution of word order. Proceedings of the National Academy of Sciences, 108(42). Jiatao Gu, James Bradbury, Caiming Xiong, Victor O.K. Li, and Richard Socher. 2018. Nonautoregressive neural machine translation. In ICLR. Jie Hao, Xing Wang, Baosong Yang, Longyue Wang, Jinfeng Zhang, and Zhaopeng Tu. 2019. Modeling recurrence for transformer. In NAACL. Hany Hassan, Anthony Aue, Chang Chen, Vishal Chowdhary, Jonathan Clark, Christian Federmann, Xuedong Huang, Marcin Junczys-Dowmunt, William Lewis, Mu Li, et al. 2018. Achieving human parity on automatic chinese to english news translation. arXiv. Yuki Kawara, Chenhui Chu, and Yuki Arase. 2018. Recursive neural network based preordering for english-to-japanese machine translation. In ACL, Student Research Workshop. Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander Rush. 2017. Opennmt: Opensource toolkit for neural machine translation. In ACL. Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018. Phrase-based & neural unsupervised machine translation. In EMNLP. Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. 2017. A structured self-attentive sentence embedding. In ICLR. Makoto Morishita, Jun Suzuki, and Masaaki Nagata. 2017. Ntt neural machine translation systems at wat 2017. IJCNLP. Masaaki Nagata, Kuniko Saito, Kazuhide Yamamoto, and Kazuteru Ohashi. 2006. A clustered global phrase reordering model for statistical machine translation. In COLING. Graham Neubig, Taro Watanabe, and Shinsuke Mori. 2012. Inducing a discriminative parser to optimize machine translation reordering. In EMNLP. Franz Josef Och and Hermann Ney. 2003. 
A systematic comparison of various statistical alignment models. Computational linguistics, 29(1). Matt Post. 2018. A call for clarity in reporting bleu scores. In WMT. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-attention with relative position representations. In NAACL. Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Shirui Pan, and Chengqi Zhang. 2018. Disan: Directional self-attention network for rnn/cnn-free language understanding. In AAAI. Gongbo Tang, Rico Sennrich, and Joakim Nivre. 2019. Understanding neural machine translation by simplification: The case of encoder-free models. In RANLP. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NeurIPS. Xing Wang, Zhaopeng Tu, Longyue Wang, and Shuming Shi. 2019. Self-attention with structural position representations. In EMNLP. Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational linguistics, 23(3). 1685 Baosong Yang, Longyue Wang, Derek F. Wong, Lidia S. Chao, and Zhaopeng Tu. 2019. Assessing the ability of self-attention networks to learn word order. In ACL. Yang Zhao, Jiajun Zhang, and Chengqing Zong. 2018. Exploiting pre-ordering for neural machine translation. In LREC.
2020
153
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1686–1690 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 1686 “You Sound Just Like Your Father” Commercial Machine Translation Systems Include Stylistic Biases Dirk Hovy Federico Bianchi Bocconi University Via Sarfatti 25, 20136 Milan, Italy {dirk.hovy, f.bianchi, fornaciari.tommaso}@unibocconi.it Tommaso Fornaciari Abstract The main goal of machine translation has been to convey the correct content. Stylistic considerations have been at best secondary. We show that as a consequence, the output of three commercial machine translation systems (Bing, DeepL, Google) make demographically diverse samples from five languages “sound” older and more male than the original. Our findings suggest that translation models reflect demographic bias in the training data. These results open up interesting new research avenues in machine translation to take stylistic considerations into account. 1 Introduction Translating what is being said is arguably the most important aspect of machine translation, and has been the main focus of all its efforts so far. However, how something is said also has an impact on how the final translation is perceived. Mirkin et al. (2015) have pointed out that demographic aspects of language do play a role in translation, and could help in personalization. As Vanmassenhove et al. (2018) have shown, gendered inflections like “Sono stanco/a” (Italian I am tired) are an important aspect of correct translations. In many cases, capturing the style of a document is equally important as its content: translating a lover’s greeting as “I am entirely pleased to see you” might be semantically correct, but seems out of place. Demographic factors (age, gender, etc.) all manifest in language, and therefore influence style: we do not expect a 6-year old to sound like an adult, and would not translate a person to seem differently gendered. However, in this paper, we show such a change is essentially what happens in machine translation: authors sound on average older and more male. Prior work (Rabinovich et al., 2017) has shown that translation weakens the signal for gender prediction. We substantially extend this analysis in terms of languages, demographic factors, and types of models, controlling for demographically representative samples. We show the direction in which the predicted demographic factors differ in the translations, and find that there are consistent biases towards older and more male profiles. Our findings suggest a severe case of overexposure to writings from these demographics (Hovy and Spruit, 2016), which creates a self-reinforcing loop. In this paper, we use demographicallyrepresentative author samples from five languages (Dutch, English, French, German, Italian), and translate them with three commercially available machine translation systems (Google, Bing, and DeepL). We compare the true demographics with the predicted demographics of each translation (as well as a control predictor trained on the same language). Without making any judgment on the translation of the content, we find a) that there are substantial discrepancies in the perceived demographics, and b) that translations tend to make the writers appear older and considerably more male than they are. Contributions We empirically show how translations affect the demographic profile of a text. We release our data set at https://github.com/ MilaNLProc/translation_bias. 
Our findings contribute to a growing literature on biases in NLP (see Shah et al. (2020) for a recent overview).

2 Data

We use the Trustpilot data set from Hovy et al. (2015), which provides reviews in different languages, and includes information about age and gender. We use only English, German, Italian, French, and Dutch reviews, based on two criteria: 1) availability of the language in translation models, and 2) sufficient data for representative samples (see below) in the corpus. For the English data, we use US reviews, rather than UK reviews, based on a general prevalence of this variety in translation engines.

2.1 Translation Data

For each language, we restrict ourselves to reviews written in the respective language (according to langid.py (Lui and Baldwin, 2012; https://github.com/saffsd/langid.py)) that have both age and gender information. We use the age-pyramid data from the CIA World Factbook (https://www.cia.gov/library/publications/the-world-factbook/) to sample 200 reviews each from male and female authors. We use the age groups given in the factbook, i.e., 15–24, 25–54, 55–64, and 65+. Based on data sparsity in the Trustpilot data, we do not include the under-15 age group. This sampling procedure results in five test sets of about 400 instances each (the exact numbers vary slightly according to rounding and the proportions in the CIA factbook data), balanced for binary gender. The exception is Italian, where the original data is so heavily skewed towards male reviews that even with downsampling, we only achieve a 48:52 gender ratio. We then translate all non-English test sets into English, and the English test set into all other languages, using three commercially available machine translation tools: Bing, DeepL, and Google Translate.

2.2 Profile Prediction Data

We use all instances that are not part of any test set to create training data for the respective age and gender classifiers (see next section). Since we want to compare across languages fairly, the training data sets need to be of comparable size. We are therefore bounded by the size of the smallest available subset (Italian). We sample about 2,500 instances per gender, according to the respective age distributions. This sampling results in about 5,000 instances per language (again, the exact number varies slightly based on the availability of samples for each group and rounding). We again subsample to approximate the actual age and gender distribution, since, according to Hovy et al. (2015), the data skews strongly male, while otherwise closely matching the official age distributions.
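For concreteness, the sketch below shows one way such a demographically representative test set could be drawn, assuming the reviews live in a pandas DataFrame with text, gender, and age_group columns. The age-pyramid proportions and column names are placeholders of ours, not the actual factbook values or the authors' code.

```python
import pandas as pd

# Illustrative age-pyramid proportions per gender (placeholders, not real factbook numbers).
AGE_PYRAMID = {"15-24": 0.16, "25-54": 0.52, "55-64": 0.16, "65+": 0.16}

def sample_representative(reviews: pd.DataFrame, n_per_gender: int = 200,
                          seed: int = 0) -> pd.DataFrame:
    """Draw ~n_per_gender reviews per (binary) gender, following the age pyramid."""
    samples = []
    for gender in ["F", "M"]:
        for age_group, prop in AGE_PYRAMID.items():
            pool = reviews[(reviews.gender == gender) &
                           (reviews.age_group == age_group)]
            n = round(n_per_gender * prop)
            # If a cell is too sparse (as for Italian), take what is available.
            samples.append(pool.sample(min(n, len(pool)), random_state=seed))
    return pd.concat(samples).reset_index(drop=True)

# Tiny synthetic corpus just to show the call; the real data come from Trustpilot.
toy = pd.DataFrame({
    "text": [f"review {i}" for i in range(10000)],
    "gender": ["F", "M"] * 5000,
    "age_group": ["15-24", "25-54", "25-54", "55-64", "65+"] * 2000,
})
test_set = sample_representative(toy)
print(test_set.groupby(["gender", "age_group"]).size())
```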
3 Methods

To assess the demographic profile of a text, we train separate age and gender classifiers for each language. These classifiers allow us to compare the predicted profiles in the original language with the predicted profiles of the translation, and compare both to the actual demographics of the test data. We use simple logistic regression models with L2 regularization over 2–6 character n-grams, with the regularization strength optimized via 3-fold cross-validation. (We also experimented with a convolutional neural network with attention, as well as with BERT-based input representations, but did not see significantly better results, presumably due to the higher number of parameters in each case.) The numbers in Table 1 indicate that both age and gender can be inferred reasonably well across all of the languages. We use these classifiers in the following analyses.

          de     en     fr     it     nl
gender    0.65   0.62   0.64   0.62   0.66
age       0.52   0.53   0.45   0.52   0.49

Table 1: Macro-F1 for age and gender classifiers on each language.

For each non-English sample, we predict the age and gender of the author in both the original language and in each of the three English translations (Google, Bing, and DeepL). That is, we use the respective language's classifier described above (e.g., a classifier trained on German to predict German test data), and the English classifier for the translations. For example, we use the age and gender classifier trained on English data to predict the translations of the German test set. For the English data, we first translate the texts into each of the other languages, using each of the three translation systems. Then we again predict the author demographics in the original English test set (using the classifier trained on English), as well as in each of the translated versions (using the classifier trained on the respective language). For example, we create a German, French, Italian, and Dutch translation with each of Google, Bing, and DeepL, and classify both the original English and the translation.

We can then compare the distribution of age groups and genders in the predictions with the actual distributions. If there is classifier bias, both the predictions based on the original language and the predictions based on the translations should be skewed in the same direction. We can measure this difference by computing the Kullback-Leibler (KL) divergence of the predicted distribution from the true sample distribution. In order to see whether the predictions differ statistically significantly from the original, we use a χ2 contingency test and report significance at p <= 0.05 and p <= 0.01. If instead there is a translation bias, then the translated predictions should exhibit a stronger skew than the predictions based on the original language. By using both translations from and into English, we can further tease apart the direction of this effect.
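A compact sketch of this pipeline with scikit-learn and scipy is given below. The paper only specifies logistic regression with L2 regularization over 2–6 character n-grams tuned by 3-fold cross-validation; the tf-idf weighting, the min_df cut-off, and the exact form of the χ2 test on the two count vectors are our assumptions, so this is an illustration rather than the authors' code.

```python
import numpy as np
from scipy.special import rel_entr
from scipy.stats import chi2_contingency
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegressionCV
from sklearn.pipeline import make_pipeline

def make_profile_classifier():
    """Character 2-6-gram features with an L2 logistic regression whose
    regularization strength is tuned by 3-fold cross-validation."""
    return make_pipeline(
        TfidfVectorizer(analyzer="char", ngram_range=(2, 6), min_df=2),
        LogisticRegressionCV(cv=3, penalty="l2", max_iter=2000),
    )

def distribution_shift(gold_labels, pred_labels, classes=("F", "M")):
    """KL divergence of the predicted label distribution from the gold one
    (in nats) and a chi-squared test on the two count vectors."""
    gold_labels, pred_labels = np.asarray(gold_labels), np.asarray(pred_labels)
    gold = np.array([(gold_labels == c).sum() for c in classes], dtype=float)
    pred = np.array([(pred_labels == c).sum() for c in classes], dtype=float)
    kl = rel_entr(pred / pred.sum(), gold / gold.sum()).sum()
    _, p_value, _, _ = chi2_contingency(np.vstack([gold, pred]))
    return kl, p_value

def evaluate_language(train_texts, train_labels, test_texts, test_labels,
                      translated_texts, english_classifier):
    """Compare same-language predictions and translated-text predictions
    against the gold demographics of one test set."""
    clf = make_profile_classifier().fit(train_texts, train_labels)
    same_lang_preds = clf.predict(test_texts)
    translated_preds = english_classifier.predict(translated_texts)
    return (distribution_shift(test_labels, same_lang_preds),
            distribution_shift(test_labels, translated_preds))
```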
4 Results

4.1 Gender

Translating into English. Table 2 shows the results when translating into English. It shows for each language the test gender ratio, the predicted ratio from classifiers trained in the same language, as well as their KL divergence from the ratio in the test set, and the predicted ratios and KL divergence of an English classifier on the translations from the three MT systems. For most languages, there exists a male bias in predictions of the original language. The translated English versions create an even stronger skew. The notable exception is French, which most translation engines render in a demographically faithful manner. Dutch is slightly worse, followed by Italian (note, though, that the Italian data was so heavily imbalanced that we could not sample an even distribution for the test data). Somewhat surprisingly, the gender skew is strongest for German, swinging by as much as 15 percentage points.

from   gold     org. lang         Google             Bing               DeepL
       F:M      F:M       KL      F:M        KL      F:M        KL      F:M        KL
de     50:50    48:52     0.001   37:63∗∗    0.034   35:65∗∗    0.045   35:65∗∗    0.045
fr     50:50    47:53     0.002   49:51      0.000   48:52      0.001   49:51      0.000
it     48:52    47:53     0.000   37:63∗∗    0.026   43:57      0.006   36:64∗∗    0.033
nl     50:50    49:51     0.000   47:53      0.001   47:53      0.002   44:56      0.007
avg                       0.000              0.015              0.013              0.021

Table 2: Gender split (%) and KL divergence from gold for each language when translated into English. ∗∗ = split differs significantly from the gold split at p <= 0.01.

Translating from English. Table 3 shows the results when translating from English into the various languages. The format is the same as for Table 2. Again we see large swings, normally exacerbating the balance towards men. However, translating into German with all systems produces estimates that are a lot more female than the original data. This result could be the inverse effect of what we observed above. Again, there is little change for French, though we also see some female bias in two MT systems.

gold (English) F:M = 50:50; prediction on the original English texts = 49:51 (KL 0.000)

to     Google             Bing               DeepL
       F:M        KL      F:M        KL      F:M        KL
de     59:41∗     0.015   58:42∗     0.013   58:42∗     0.011
fr     49:51      0.000   52:48      0.001   54:46      0.003
it     45:55      0.004   44:56      0.007   41:59∗     0.016
nl     40:60∗∗    0.020   43:57∗     0.010   40:60∗∗    0.019
avg               0.010              0.008              0.012

Table 3: Gender split (%) and KL divergence from gold for each language when translated from English. ∗ = split differs significantly from the gold split at p <= 0.05. ∗∗ = significant difference at p <= 0.01.

4.2 Age

Figure 1: Density distribution and KL for age prediction in various languages and different systems, in the original language and when translated into English. Solid yellow line = true distribution. ∗ = predicted distribution differs significantly from the gold distribution at p <= 0.05. ∗∗ = significant difference at p <= 0.01.

Figure 1 shows the kernel density plots for the four age groups in each language (rows) in the same-language prediction, and in the English translation. In all cases, the distributions are reasonably close, but in all cases, the predictions overestimate the most prevalent class. To delve a bit deeper into this age mismatch, we also split up the sample by decade (i.e., seven classes: 10s, 20s, etc., up to 70s+). Figure 2 shows the results. The caveat here is that the overall performance is lower, due to the higher number of classes. We also cannot guarantee that the distribution still follows the true demographics, since we are subsampling within the larger classes given by the CIA factbook. However, the results still strongly suggest that the observed mismatch is driven predominantly by overprediction of the 50s decade. Because this decade often contributed strongly to the most frequent age category (25–54), predictions did not differ as much from gold in the previous test. It also explains the situation of the Italian predictor. In essence, English translations of all these languages, irrespective of the MT system, sound much older than they are.

4.3 Discrepancies between MT Systems

All three tested commercial MT systems are close together in terms of performance. However, they also seem to show the same systematic translation biases. The most likely reason is the use of biased training data. The fact that translations into English are perceived as older and more male than translations into other languages could indicate that there is a larger collection of unevenly selected data in English than for other languages.

5 Related Work

The work by Rabinovich et al. (2017) is most similar to ours, in that they investigated the effect of translation on gender. However, it differs in a few key points: they show that translation weakens the predictive power, but do not investigate the direction of false predictions. We show that there is a definitive bias. In addition, we extend the analysis to include age. We also use various commercially available MT tools, rather than research systems.
Recent research has suggested that machine translation systems reflect cultural and societal biases (Stanovsky et al., 2019; Escud´e Font and Costa-juss`a, 2019), though mostly focusing on data selection and embeddings as sources. Work by Mirkin et al. (2015); Mirkin and Meunier (2015) has set the stage for considering the impact of demographic variation (Hovy et al., 2015) and its integration in MT more general. There is a growing literature on various types of bias in NLP. For a recent overview, see Shah et al. (2020). 6 Conclusion We test what demographic profiles author attribute tools predict for the translations from various commercially available machine translation tools. We find that independent of the MT system and the translation quality, the predicted demographics differ systematically when translating into English. On average, translations make the author seem substantially older and more male. Translating from English into any of the other languages shows more mixed results, but similar tendencies. Acknowledgments The authors would like to thank Pietro Lesci, Serena Pugliese, and Debora Nozza, as well as the anonymous reviewers, for their kind suggestions. The authors are members of the Bocconi Institute 1690 Figure 2: Density distribution and KL for decade prediction in various languages and different systems in original and when translated into English. Solid yellow line = true distribution. ∗= predicted distribution differs significantly from gold distribution at p <= 0.05. ∗∗= significant difference at p <= 0.01. for Data Science and Analytics (BIDSA) and the Data and Marketing Insights (DMI) unit. References Joel Escud´e Font and Marta R. Costa-juss`a. 2019. Equalizing gender bias in neural machine translation with word embeddings techniques. In Proceedings of the First Workshop on Gender Bias in Natural Language Processing, pages 147–154, Florence, Italy. Association for Computational Linguistics. Dirk Hovy, Anders Johannsen, and Anders Søgaard. 2015. User review sites as a resource for largescale sociolinguistic studies. In Proceedings of the 24th international conference on World Wide Web, pages 452–461. International World Wide Web Conferences Steering Committee. Dirk Hovy and Shannon L Spruit. 2016. The social impact of natural language processing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 591–598. Marco Lui and Timothy Baldwin. 2012. langid.py: An off-the-shelf language identification tool. In Proceedings of the ACL 2012 System Demonstrations, pages 25–30, Jeju Island, Korea. Association for Computational Linguistics. Shachar Mirkin and Jean-Luc Meunier. 2015. Personalized machine translation: Predicting translational preferences. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2019–2025, Lisbon, Portugal. Association for Computational Linguistics. Shachar Mirkin, Scott Nowson, Caroline Brun, and Julien Perez. 2015. Motivating personality-aware machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1102–1108. Ella Rabinovich, Raj Nath Patel, Shachar Mirkin, Lucia Specia, and Shuly Wintner. 2017. Personalized machine translation: Preserving original author traits. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 1074–1084. Deven Shah, H. Andrew Schwartz, and Dirk Hovy. 2020. 
Predictive biases in natural language processing models: A conceptual framework and overview. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Seattle, WA, USA. Association for Computational Linguistics. Gabriel Stanovsky, Noah A. Smith, and Luke Zettlemoyer. 2019. Evaluating gender bias in machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1679–1684, Florence, Italy. Association for Computational Linguistics. Eva Vanmassenhove, Christian Hardmeier, and Andy Way. 2018. Getting gender right in neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3003–3008.
2020
154
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1691–1702 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 1691 MMPE: A Multi-Modal Interface for Post-Editing Machine Translation Nico Herbig1, Tim D¨uwel1, Santanu Pal1,2, Kalliopi Meladaki1, Mahsa Monshizadeh2, Antonio Kr¨uger1, Josef van Genabith1,2 1German Research Center for Artificial Intelligence (DFKI), Saarland Informatics Campus, Germany 2Department of Language Science and Technology, Saarland University, Germany {firstname.lastname}@dfki.de {firstname.lastname}@uni-saarland.de Abstract Current advances in machine translation (MT) increase the need for translators to switch from traditional translation to post-editing (PE) of machine-translated text, a process that saves time and reduces errors. This affects the design of translation interfaces, as the task changes from mainly generating text to correcting errors within otherwise helpful translation proposals. Since this paradigm shift offers potential for modalities other than mouse and keyboard, we present MMPE, the first prototype to combine traditional input modes with pen, touch, and speech modalities for PE of MT. The results of an evaluation with professional translators suggest that pen and touch interaction are suitable for deletion and reordering tasks, while they are of limited use for longer insertions. On the other hand, speech and multi-modal combinations of select & speech are considered suitable for replacements and insertions but offer less potential for deletion and reordering. Overall, participants were enthusiastic about the new modalities and saw them as good extensions to mouse & keyboard, but not as a complete substitute. 1 Introduction As machine translation (MT) has been making substantial improvements in recent years1, more and more professional translators are integrating this technology into their translation workflows (Zaretskaya et al., 2016; Zaretskaya and Seghiri, 2018). The process of using a pre-translated text as a basis and improving it to create the final translation is called post-editing (PE). Older research showed a strong dislike of translators towards PE (Lagoudaki, 2009; Wallis, 2006), and more recent studies agree that translators are still cautious about PE and question its benefits (Gaspari et al., 2014; Koponen, 1WMT 2019 translation task: http://matrix.statmt.org/, accessed 16/04/2020 2012), partly because they see it as a threat to their profession (Moorkens, 2018). Experienced translators in particular exhibit rather negative attitudes (Moorkens and O’Brien, 2015). Conversely, novice translators have been shown to have more positive views on PE (Yamada, 2015). Green et al. (2013) demonstrated that some translators actually strongly prefer PE and argue that “users might have dated perceptions of MT quality”. Apart from translators’ preference, productivity gains of 36% when using modern neural MT for PE (Toral et al., 2018) already result in substantial changes in translation workflows (Zaretskaya and Seghiri, 2018) and will probably continue to do so the better MT becomes. Thus, PE requires thorough investigation in terms of interface design, since the task changes from mostly text production to comparing and adapting MT and translation memory (TM) proposals, or put differently, from control to supervision. 
Previous elicitation-based research (Herbig et al., 2019a) investigated how translation environments could better support the PE process and found that translators envision PE interfaces relying on touch, pen, and speech input combined with mouse and keyboard as particularly useful. A small number of prototypes exploring some of these modalities also showed promising results (Teixeira et al., 2019). This paper presents MMPE, the first translation environment combining standard mouse & keyboard input with touch, pen, and speech interactions for PE of MT. The results of a study with 11 professional translators show that participants are enthusiastic about having these alternatives, even though time measurements and subjective ratings do not always agree. Overall, pen and touch modalities are well suited for deletion and reordering operations, while speech and multi-modal interaction are suitable for insertions and replacements. 1692 2 Related Work In this section, we present related research on translation environments and particularly focus on existing multi-modal approaches to PE. 2.1 CAT and Post-Editing Most professional translators nowadays use so-called CAT (computer-aided translation) tools (van den Bergh et al., 2015). These provide features like MT and TM together with quality estimation and concordance functionality (Federico et al., 2014), alignments between source and MT (Schwartz et al., 2015), interactive MT offering assistance like auto-completion (Green et al., 2014b,a), or intelligibility assessments (Coppers et al., 2018; Vandeghinste et al., 2016, 2019). While TM is still often valued higher than MT (Moorkens and O’Brien, 2017), a recent study by Vela et al. (2019) shows that professional translators who were given a choice between translation from scratch, TM, and MT, chose MT in 80% of the cases, highlighting the importance of PE of MT. Regarding the time savings achieved through PE, Zampieri and Vela (2014) find that PE was on average 28% faster for technical translations, Aranberri et al. (2014) show that PE increases translation throughput for both professionals and lay users, and L¨aubli et al. (2013) find that PE also increases productivity in realistic environments. Furthermore, it has been shown that PE not only leads to reduced time but also reduces errors (Green et al., 2013). Furthermore, PE changes the interaction pattern (Carl et al., 2010), leading to a significantly reduced amount of mouse and keyboard events (Green et al., 2013). Therefore, we believe that other modalities or combinations thereof might be more useful for PE. 2.2 Multi-Modal Approaches Dictating translations dates back to the time when secretaries transcribed dictaphone content on a typewriter (Theologitis, 1998); however, the use of automatic speech recognition also has a long history for translation (Dymetman et al., 1994; Brousseau et al., 1995). A more recent approach, called SEECAT (Martinez et al., 2014), investigates the use of automatic speech recognition (ASR) in PE and argues that its combination with typing could boost productivity. A survey regarding speech usage with PE trainees (Mesa-Lao, 2014) finds that they have a positive attitude towards speech input and would consider adopting it, but only as a complement to other modalities. In a small-scale study, Zapata et al. (2017) found that ASR for PE was faster than ASR for translation from scratch. Due to these benefits, commercial CAT tools like memoQ and MateCat are also beginning to integrate ASR. 
The CASMACAT tool (Alabau et al., 2013) allows the user to input text by writing with e-pens in a special area. A vision paper (Alabau and Casacuberta, 2012) proposes to instead use e-pens for PE sentences with few errors in place and showcases symbols that could be used for this. Studies on mobile PE via touch and speech (O’Brien et al., 2014; Torres-Hostench et al., 2017) show that participants especially liked reordering words through touch drag and drop, and preferred voice when translating from scratch, but used the iPhone keyboard for small changes. Zapata (2016) also explores the use of voice- and touch-enabled devices; however, the study did not focus on PE, and used Microsoft Word instead of a proper CAT environment. Teixeira et al. (2019) explore a combination of touch and speech for translation from scratch, translation using TM, and translation using MT. In their studies, touch input received poor feedback since (a) their tile view (where each word is a tile that can be dragged around) made reading more complicated, and (b) touch insertions were rather complex to achieve within their implementation. In contrast, integrating dictation functionality using speech was shown to be quite useful and even preferred to mouse and keyboard by half of the participants. The results of an elicitation study by Herbig et al. (2019a) indicate that pen, touch, and speech interaction should be combined with mouse and keyboard to improve PE of MT. In contrast, other modalities like eye tracking or gestures were seen as less promising. In summary, previous research suggests that professional translators should switch to PE to increase productivity and reduce errors; however, translators themselves are not always eager to do so. It has been argued that the PE process might be better supported by using different modalities in addition to the common mouse and keyboard approaches, and an elicitation study suggests concrete modalities that should be well suited for various editing tasks. A few of these modalities have already been explored in practice, showing promising results. However, the elicited combination of pen, touch, 1693 and speech, together with mouse and keyboard, has not yet been implemented and evaluated. 3 The MMPE Prototype We present the MMPE prototype (see Figure 1) which combines these modalities for PE of MT. A more detailed description of the prototype can be found in Herbig et al. (2020), and a video demonstration is available at https://youtu.be/ H2YM2R8Wfd8. 3.1 Apparatus & Overall Layout On the software side, we decided to use Angular for the frontend, and node.js for the backend. As requested in Herbig et al. (2019a), we use a large tiltable touch & pen screen for the study (see Figure 1b): the Wacom Cintiq Pro 32 inch display with the Flex Arm that allows the screen to be tilted and moved flat on the table, or to be moved up to work in a standing position. We further use the Sennheiser PC 8 Headset for speech input. The goal of this hardware setup was to limit induced bias as much as possible, in order to get results on the modalities and not on a flawed apparatus. We implemented a horizontal source-target layout (see Figure 1a), where each segment’s status (unedited, edited, confirmed) is visualized between source and target. On the far right, support tools are offered as requested in Herbig et al. (2019a): (1) the unedited MT output, to which the users can revert their editing using a button, and (2) a corpus combined with a dictionary. 
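As an aside, the per-segment state such an interface has to track (source, the untouched MT proposal kept for the revert button, the current target, and the unedited/edited/confirmed status) can be pictured with a small sketch. The actual prototype is an Angular/node.js application, so the Python below and its field names are purely illustrative, not the MMPE data model.

```python
from dataclasses import dataclass
from enum import Enum

class SegmentStatus(Enum):
    UNEDITED = "unedited"
    EDITED = "edited"
    CONFIRMED = "confirmed"

@dataclass
class Segment:
    source: str
    mt_output: str                      # unedited MT proposal, kept for "revert"
    target: str = ""                    # current post-edited text
    status: SegmentStatus = SegmentStatus.UNEDITED

    def __post_init__(self):
        if not self.target:
            self.target = self.mt_output

    def edit(self, new_target: str) -> None:
        self.target = new_target
        self.status = SegmentStatus.EDITED

    def revert_to_mt(self) -> None:
        self.target = self.mt_output
        self.status = SegmentStatus.UNEDITED

    def confirm(self) -> None:
        self.status = SegmentStatus.CONFIRMED

seg = Segment(source="Das ist ein Test.", mt_output="This is a test.")
seg.edit("This is a test sentence.")
seg.revert_to_mt()
print(seg.status)   # SegmentStatus.UNEDITED
```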
The current segment is enlarged, thereby offering space for handwritten input and allowing the user to view a lot of context while still seeing the current segment in a comfortable manner (Herbig et al. (2019a); see Figure 1a). The view for the current segment is further divided into the source segment (left) and two editing planes for the target, one for handwriting and drawing gestures (middle), and one for touch deletion & reordering, as well as standard mouse and keyboard input (right). Both initially show the MT proposal and synchronize on changes to either one. The reason for having two editing fields instead of only one is that some interactions are overloaded, e.g., a touch drag can be interpreted as both handwriting (middle) and reordering (right). Undo and redo functionality, as well as confirming segments, are also implemented through buttons between the source and target texts, and can further be triggered through hotkeys. The target text is spell-checked, as a lack of this feature was criticized in Teixeira et al. (2019). 3.2 Left Target View: Handwriting For handwriting recognition (see Figure 1c), we use the MyScript Interactive Ink SDK. Apart from merely recognizing the written input, it offers gestures2 like strike-through or scribble for deletions. For inserting words, one can directly write into an empty space, or create such a space first by breaking the line (draw a long line from top to bottom), and then handwriting the word. All changes are immediately interpreted, i.e., striking through a word deletes it immediately, instead of showing it in a struck-through visualization. The editor further shows the recognized text immediately at the very top of the drawing view in a small gray font, where alternatives for the current recognition are offered. Apart from using the pen, the user can also use his/her finger or the mouse on the left-hand editing view for handwriting. 3.3 Right Target View: Touch Reordering, Mouse & Keyboard On the right-hand editing view, the user can delete words by simply double-tapping them with pen/finger touch, or reorder them through a simple drag and drop procedure (see Figure 1d), which visualizes the picked-up word as well as the current drop position, and automatically fixes spaces between words and punctuation marks. This reordering functionality is strongly related to Teixeira et al. (2019); however, only the currently dragged word is temporarily visualized as a tile to offer better readability. Naturally, the user can also edit using mouse and keyboard, where all common navigation inputs work as expected from other software. 3.4 Speech Input For speech recognition, we stream the audio recorded by the headset to IBM Watson servers to receive a transcription, which is then analyzed in a command-based fashion. Thus, our speech module not only handles dictations as in Teixeira et al. (2019), but can correct mistakes in place. As commands, the user has the option to “insert”, “delete”, “replace”, and “reorder” words or subphrases. To specify the position, if it is ambiguous, one can define anchors as in “after”/“before”/“between”, or define the occurrence 2see https://developer.myscript.com/docs/concepts/editinggestures/, accessed 16/04/2020 1694 (a) Screenshot of the interface. (b) Apparatus. (c) Handwriting on left target view. (d) Touch reordering on right target view. Figure 1: Overview of the MMPE prototype. of the entity (“first”/“second”/“last”). A full example is “insert A after second B”, where A and B can be words or subphrases. 
Character-level commands are not supported, so instead of e.g. deleting a suffix, one should replace the word. 3.5 Multi-Modal Combinations Last, the user can use a multi-modal combination, i.e., pen/touch/mouse combined with speech. For this, the cursor first needs to be positioned on or next to a word, or the word needs to be long-pressed with pen/touch, resulting in a pickup visualization. Afterwards, the user can then use a simplified voice command like “delete”, “insert A”, “move after/before A/ between A and B”, or “replace by A” without needing to specify the position/word. 3.6 Logging In a log file, we store all concrete keypresses, touched pixel coordinates, etc. Much more importantly, we directly log all UI interactions (like segmentChange), as well as all text manipulations (like replaceWord) together with the concrete changes (e.g. with the oldWord, newWord, and complete segmentText). 4 Evaluation Method The prototype was evaluated by professional translators3. We used EN–DE text, as our participants were German natives and we wanted to avoid ASR recognition errors as reported in Dragsted et al. (2011). In the following, “modalities” refers to Touch (T), Pen (P), Speech (S), Mouse & Keyboard (MK), and Multi-Modal combinations (MM, see Section 3.5), while “operations” refers to Insertions, Deletions, Replacements, and Reorderings. The experiment consisted of the following phases and took approximately 2 hours per participant: 4.1 Introduction & Independent PE First, participants filled in a questionnaire capturing demographics as well as information on CAT usage. Then the experimenter introduced all of the prototype’s features in a prepared order to ensure a similar presentation for all participants. After that, participants were given 10–15 minutes to explore the prototype on their own. We 3The study has been approved by the university’s ethical review board. Freelance participants were paid their usual fee, while in-house translators participated during working hours. The data and analysis scripts can be found at https: //mmpe.dfki.de/data/ACL2020/ 1695 specifically told them that we are more interested in them exploring the presented features than in receiving high-quality translations. This phase had two main purposes: (1) to let the participants become familiar with the interface (e.g., how best to hold the pen) and to resolve questions early on; (2) to see how participants intuitively work with the prototype. Two experimenters carefully observed the participants and took notes on interesting behavior and questions asked. 4.2 Feature-Wise & General Feedback The central part of the study was a structured test of each modality for each of our four operations. For this, we used text from the WMT news test set 2018. Instead of actually running an MT system, we manually introduced errors into the reference set to ensure that there was only a single error per segment. Overall, four sentences had to be corrected per operation using each modality, which results in 4 × 4 × 5 = 80 segments per participant. Within the four sentences per operation, we tried to capture slightly different cases, like deleting single words or a group of words. 
For this, we adapted the prototype, such that a pop-up occurs when changing the segment, which shows (1) the operation to perform and which modality to use, (2) the source and the “MT”, which is the reference with the introduced error, as well as (3) the correction to apply, which uses color, bold font, and strike-through to easily show the required change to perform. The reason why we provided the correction to apply was to ensure a consistent editing behavior across all participants, thereby making subjective ratings and feedback as well as time measurements comparable. The logging functionality was extended, such that times between clicking “Start” and confirming the segment were also logged. To avoid ordering effects, the participants went through the operations in counter-balanced order, and through the modalities in random order. After every operation (i.e., after 4 × 5 = 20 segments) and similar to Herbig et al. (2019a), participants rated each modality for that operation on three 7point Likert scales ranging from “strongly disagree” to “strongly agree”, namely as to whether the interaction “is a good match for its intended purpose”, whether it “is easy to perform”, and whether it “is a good alternative to the current mouse and keyboard approach”. Furthermore, we asked the translators to give us their thoughts on advantages and disadvantages of the modalities, and how they could be improved. Afterward, participants further had to order the 5 modalities from best to worst. In the end, after completing all 80 segments, we performed a final unstructured interview to capture high-level feedback on the interface as well as things we missed in our implementation. 4.3 Remarks Regarding Methodology While a direct comparison to state-of-the-art CAT tools would be interesting, the results would be highly questionable as the participants would be expert users of their day-to-day tool and novice users of our tool. Furthermore, the focus of our prototype was on the implemented modalities, while widely used features like a TM or consistency checker are currently missing. Since our main question was whether the newly implemented features have potential for PE of MT or not, we focus on qualitative feedback, ratings, and timing information, which is more relevant to this research question. 5 Evaluation Results and Discussion In this section, we present and discuss the study’s main findings. 5.1 Participants Overall, 11 (f=10, m=1, 2 left-handed) professional EN–DE translators participated in the experiment, 3 freelance and 8 in-house translators. Their ages ranged from 30 to 64 (avg=41.6, σ=9.3)4, with 3 to 30 years of professional experience (avg=13.3, σ=7.4) and a total of 27 language pairs (avg=2.6). All translators translate from EN to DE, and all describe their German Language skills as native and their English skills as C1 to native level. For most participants, the self-rated CAT knowledge was good (6 times) or very good (4 times, 1 neutral). However, participants were less confident about their PE skills (4 neutral, 4 good, 3 very good), thereby matching well with the CAT usage surveys. Years of experience with CAT tools ranged from 3 to 20 (avg=11.5, σ=5.1), where participants had used between 1 and 10 distinct CAT tools (avg=4.9, σ=2.7). 
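Before turning to the results, it may help to picture how the command-based speech input from Section 3.4, which the study exercises for every operation, can be interpreted. The minimal regex-based parser below handles transcripts such as "insert A after second B" or "replace A by B"; it is our own illustration and deliberately narrower than the MMPE implementation, which streams audio to IBM Watson and supports a richer grammar (e.g., "between A and B" anchors).

```python
import re

# Minimal, illustrative grammar for transcribed voice commands such as
# "insert X after second Y", "replace X by Y", "delete X", or "move X before Y".
COMMAND = re.compile(
    r"^(?P<op>insert|delete|replace|move)\s+(?P<arg>.+?)"
    r"(?:\s+by\s+(?P<replacement>.+?))?"
    r"(?:\s+(?P<anchor>after|before)\s+(?:(?P<occ>first|second|last)\s+)?(?P<ref>.+))?$",
    re.IGNORECASE,
)

def parse_command(transcript: str) -> dict:
    """Turn an ASR transcript into a structured edit operation (or raise)."""
    match = COMMAND.match(transcript.strip())
    if not match:
        raise ValueError(f"unrecognized command: {transcript!r}")
    return {k: v for k, v in match.groupdict().items() if v is not None}

print(parse_command("insert machine translation after second the"))
# {'op': 'insert', 'arg': 'machine translation', 'anchor': 'after',
#  'occ': 'second', 'ref': 'the'}
print(parse_command("replace fast by quick"))
# {'op': 'replace', 'arg': 'fast', 'replacement': 'quick'}
```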
5.2 Subjective Ratings Figure 2 shows the subjective ratings provided for each modality and operation on the three scales 4The small number of participants and their age distribution (with 10 participants of age 30 to 48, and only one aged 64) did not us allow to analyze the effect of age on the results. 1696 “Goodness”, “Ease of use”, and “Good alternative to mouse & keyboard” after having tested each feature (see Section 4.2). As can be seen, participants tended to give similar ratings on all three scales. For insertions and replacements, which required the most text input, the classical mouse & keyboard approach was rated highest; however, the multi-modal combination and speech were also perceived as good, while pen and especially touch received lower scores. For deletions and reorderings, pen, touch, and mouse & keyboard were all perceived as very good, where P and T were ranked even slightly higher than MK for reorderings. Speech and multi-modal were considered worse here. 5.3 Orderings After each operation, participants ordered the modalities from best to worst, with ties being allowed. As an example, for “MM & S best, then P, then MK, and last T” we assigned 0.5 times the 1st and 0.5 times the 2nd position to both MM and S, while P got 3rd, MK 4th, and T the 5th position. To get an overall ordering across participants, we then multiplied the total amount of times a modality was rated 1st/2nd/3rd/4th/5th by 1/2/3/4/5 (similar to Zenner and Kr¨uger (2017)). Consequently, a lower score indicates that this modality is better suited for the operation. The scores for each modality and operation are: • Insertions: 1st: MK(20.5), 2nd: MM(26.5), 3rd: S(31.5), 4th: P(38.5), 5th: T(48) • Deletions: 1st: P(21.5), 2nd: MK(29), 3rd: T(31.5), 4th: MM(41), 5th: S(42) • Replacements: 1st: MK(21), 2nd: MM(29), 3rd: S(30), 4th: P(35), 5th: T(50) • Reorderings: 1st: P(21.5), 2nd: T(31), 3rd: S(35.5), 4th: MK(36), 5th: MM(41) 5.4 Timings We analyzed the logged duration of each modalityoperation pair. Note that this is the time from clicking “Start” until confirming the segment; thus, it includes recognition times (for speech and handwriting) and really measures how long it takes until a participant is satisfied with the edit. Even though participants were instructed to provide feedback or ask questions only while the popup is shown, i.e., while the time is not measured, participants infrequently did so during editing. We filtered out such outliers and averaged the 4 sentences of each modality-operation pair per participant to get a single value, thereby making the samples independent for the remaining analyses. Figure 3 shows boxplots of the dataset for the 20 modality-operation pairs. For statistical analysis, we first conducted Friedman tests per operation, showing us that significant differences exist for each operation (all p < 0.001). Afterward, posthoc analyses using Wilcoxon tests with BonferroniHolm correction showed which pairs of modalities are significant and how large the effect r is. For insertions, MK was by far the fastest modality, followed by MM and S. All differences except for MM vs. S and T vs. P are statistically significant with large effect sizes (all p < 0.01, all r > 0.83). As expected, deletions were faster than insertions. Here, MK, T, and P were the fastest, followed by S; MM was slowest by far. Regarding significance, all modalities were significantly faster than MM, and MK was significantly faster than S (all p < 0.01, all r > 0.88). 
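Returning briefly to the ordering scores of Section 5.3 before continuing with the timings: the aggregation described there (position weights 1–5, lower is better, ties sharing the positions they span) can be written down compactly. Generalizing the tie handling to groups of any size is our reading of the description, not something the paper spells out.

```python
from collections import defaultdict

def ordering_scores(rankings):
    """Aggregate per-participant orderings (best to worst, ties as sets).

    A group occupying positions i..j contributes the average of those position
    weights to each member, so a two-way tie for 1st gives 1.5 each.
    Lower total scores indicate better-suited modalities."""
    scores = defaultdict(float)
    for ranking in rankings:
        position = 1
        for group in ranking:
            weight = sum(range(position, position + len(group))) / len(group)
            for modality in group:
                scores[modality] += weight
            position += len(group)
    return sorted(scores.items(), key=lambda kv: (kv[1], kv[0]))

# The example from the text: "MM & S best, then P, then MK, and last T".
print(ordering_scores([[{"MM", "S"}, {"P"}, {"MK"}, {"T"}]]))
# [('MM', 1.5), ('S', 1.5), ('P', 3.0), ('MK', 4.0), ('T', 5.0)]
```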
For reordering, P and T were the fastest, followed by MK and S. The statistical analysis revealed that T is significantly faster than all modalities except P, both P and MK are significantly faster than S, and S is significantly faster than MM (all p < 0.05, all r > 0.83). Replacements with MK were the fastest, followed by P, T, S, and MM. MK was significantly faster than all other modalities, and P significantly faster than S and MM (all p < 0.05, all r > 0.83), while no significant differences exist between the other three.

Figure 2: Subjective ratings. (Panels (a) insertions, (b) deletions, (c) replacements, (d) reorderings; each shows the 1–7 ratings for Goodness, Ease, and Alternative for P, T, S, MK, and MM.)

5.5 Qualitative Analysis

Apart from the ratings and timings, we present the main qualitative feedback from the interviews.

5.5.1 Pen & Touch

Especially for short insertions and replacements, handwriting was seen as a suitable input mode; for more extended changes, one should instead fall back on typing or dictation. Both touch/pen deletion mechanisms (strike-through and double-tap) and touch/pen reordering were highlighted as very useful or even "perfect" as they "nicely resemble a standard correction task". Most participants seemed to prefer the pen to finger handwriting for insertions and replacements due to its precision, although it was considered less direct.

A major concern was thinking about and creating sufficient space to handwrite into. A suggested improvement was to make the available space configurable to one's own handwriting. Furthermore, placing the palm of the hand on the screen should not be interpreted as input. Six participants also noted that the text jumps around when reordering a word from the end of a line, as the picked-up word is removed from the text, resulting in all remaining words being moved to the front, which could be prevented by adapting the text only on drop.

5.5.2 Speech & Multi-Modal Combinations

Perceptions regarding speech recognition were somewhat mixed, with some thinking it worked "super" while two participants found it exhausting to formulate commands while mentally working with text. Furthermore, speech was considered impractical for translators working in shared offices. Both insertions and replacements using speech received lots of positive feedback (from 8 and 7 participants, respectively), interesting findings being that "the longer the insertion, the more interesting speech becomes". Speech deletion was considered to "work fine" and to be simpler than insertion as there is usually no need to specify the position. However, it would be unsatisfactory to have to read 10 words to delete them. The main advantage of the multi-modal approach was that "one has to speak/think less". However, it was also argued that "when you talk, you can also just say everything", meaning that the simplified MM command was not seen as an advantage for this participant. An interesting statement was that "if there are no ambiguities, speech is better, but if there are, multi-modal is cool".
Ideas on how to improve speech ranged from better highlighting the changes in the target view, to adding the possibility to restate the whole segment. While the ASR tool used (IBM Watson) is one of the state-of-the-art APIs, it might still have negatively impacted the results for S and MM, as a few times a word was wrongly recognized (e.g., when replacing an ending, the ASR did not always correctly recognize the word form). To improve this aspect, participants discussed the idea of passing the text to the speech recognition (Dymetman et al., 1994) or training the ASR towards the user.

5.5.3 Mouse & Keyboard

Due to daily usage, participants stated they were strongly biased regarding mouse and keyboard, where "the muscle memory" helps. However, many actually considered MK as very unintuitive if they imagined never having used it before, especially compared to pen and touch, or as one participant stated for reordering: "why do I have to do all of this, why is it not as simple as the pen".

Figure 3: Editing durations (in ms) per operation and modality.

5.5.4 General Feedback

In general, we received lots of positive feedback in the final discussion about the prototype, where participants made statements such as "I am going to buy this once you are ready" or expressed "respect for the prototype". Multiple participants reported that it would be nice to have multiple options to vary between the modalities. It was frequently suggested to combine the two editing views, e.g. by having a switch to enable/disable the drawing mode. Participants also commented positively on the large typeface for the current segment ("you really see what you are working on"). Suggestions for further improvements included adaptation possibilities for the size of the editing fields and a switch between vertical and horizontal source-target layout.

5.6 Discussion

This section discusses the main takeaways regarding each modality.

5.6.1 Pen

According to ordering scores, subjective ratings, and comments, we see that the pen is among the best modalities for deletions and reordering. However, other modalities are superior for insertions and replacements, where it was seen as suitable for short modifications, but to be avoided for more extended changes. In terms of timings, P was also among the fastest for deletions and reorderings, and among the slowest for insertions. What is interesting, however, is that P was significantly faster than S and MM for replacements, even though it was rated lower. The main concern for handwriting was the need to think about space and to create space before actually writing.

5.6.2 Touch

Results for touch were similar, but it was considered worse for insertions and replacements. Furthermore, and as we expected due to its precision, pen was preferred to finger touch by most participants. However, in terms of timings, the two did not differ significantly apart from replace operations, and even for replacements, where it was clearly rated as the worst modality, it actually turned out to be (non-significantly) faster than S and MM.

5.6.3 Speech & Multi-modal Combinations

Speech and multi-modal PE were considered the worst and were also the slowest modalities for reordering and deletions.
For insertions and replacements, however, these two modalities were rated and ordered 2nd (after MK) and in particular much better than P and T. Timing analysis agrees for insertions, being 2nd after MK. For replacements, however, S and MM were the slowest even though the ratings put them before P and T. An explanation of why MM was slower than S for deletion is that our implementation did not support MM deletions of multiple words in a single command. Still, we would have expected a comparable speed of MM and S for reordering. Insertions are the only oper1699 ation where the multi-modal approach was (nonsignificantly) faster than S since the position did not have to be verbally specified. Furthermore, the participants’ comments highlighted their concern regarding formulating commands while already mentally processing text. Still, S and MM received a lot of positive feedback for insertions and replacements, where they would be more interesting the more text was to be added. The main advantage of the MM approach, as argued by the participants, was that one has to speak less, albeit at the cost of doing two things at once. 5.6.4 Mouse & Keyboard Mouse & keyboard received the best scores for insertions and replacements, where it was the fastest modality. Furthermore, it got good ratings for deletions and reorderings, where it was also fast (but not the fastest) for reordering. However, some participants commented negatively, stating that it only works well because of “years of expertise”. 5.6.5 General Interestingly, our findings are not entirely in line with translators’ intuitions reported in our previous elicitation study (Herbig et al., 2019a): while touch worked much better than expected, handwriting of whole subphrases did not work as well as they thought. Additionally, it is interesting to note that some newly introduced modalities could compete with mouse & keyboard even though participants are biased by years of training with the latter. Overall, many participants provided very positive feedback on this first prototype combining pen, touch, speech, and multi-modal combinations for PE MT, encouraging us to continue. Furthermore, several promising ideas for improving and extending the prototype have been proposed. The focus of our study was to explore the implemented interactions in detail, i.e., each modality for each operation irrespective of frequency. The chosen methodology guaranteed that we receive comparable feedback on all interactions from professional translators by having them correct the same mistakes using different modalities. Nevertheless, a more realistic “natural” workflow follow-up study should be conducted in the future, which will also show if participants swap modalities within sentences depending on the error type, or if they stick to single modalities to avoid frequent modality switches. 6 Conclusion While more and more professional translators are switching to the use of PE to increase productivity and reduce errors, current CAT interfaces still heavily focus on traditional mouse and keyboard input, even though the literature suggests that other modalities could support PE operations well. This paper therefore presents MMPE, a CAT prototype combining pen, touch, speech, and multi-modal interaction together with common mouse and keyboard input possibilities, and explores the use of these modalities by professional translators. The study shows a high level of interest and enthusiasm for using these new modalities. 
For deletions and reorderings, pen and touch both received high subjective ratings, with pen being even better than mouse & keyboard. In terms of timings, they were also among the fastest for these two operations. For insertions and replacements, speech and multimodal interaction were seen as suitable interaction modes; however, mouse & keyboard were still favored and faster here. As a next step, we will integrate the participants’ valuable feedback to improve the prototype. While the presented study provided interesting first insights regarding participants’ use of and preferences for the implemented modalities, it did not allow us to see how they would use the modalities over a longer time period in day-to-day work, which we also want to investigate in the future. Furthermore, participants in Herbig et al. (2019a) were positive regarding the idea of a user interface that adapts to measured cognitive load, especially if it automatically provides additional resources like TM matches or MT proposals. An exploration of multi-modal measuring approaches (Herbig et al., 2019b) shows the feasibility of this, so we will try to combine explicit multi-modal input, as done in this work, with implicit multi-modal sensor input to better model and support the user during PE. Acknowledgments This research was funded in part by the German Research Foundation (DFG) under grant number GE 2819/2-1 (project MMPE). We thank AMPLEXOR (https://www.amplexor.com) for their excellent support in providing access to professional human translators for our experiments. 1700 References Vicent Alabau, Ragnar Bonk, Christian Buck, Michael Carl, Francisco Casacuberta, Mercedes Garc´ıaMart´ınez, Jes´us Gonz´alez, Philipp Koehn, Luis Leiva, Bartolom´e Mesa-Lao, et al. 2013. CASMACAT: An open source workbench for advanced computer aided translation. The Prague Bulletin of Mathematical Linguistics, 100:101–112. Vicent Alabau and Francisco Casacuberta. 2012. Study of electronic pen commands for interactivepredictive machine translation. In Proceedings of the International Workshop on Expertise in Translation and Post-Editing – Research and Application, pages 17–18. Nora Aranberri, Gorka Labaka, A Diaz de Ilarraza, and Kepa Sarasola. 2014. Comparison of post-editing productivity between professional translators and lay users. In Proceeding of AMTA Third Workshop on Post-editing Technology and Practice, pages 20– 33. Jan van den Bergh, Eva Geurts, Donald Degraen, Mieke Haesen, Iulianna van der Lek-Ciudin, Karin Coninx, et al. 2015. Recommendations for translation environments to improve translators’ workflows. In Proceedings of the 37th Conference Translating and the Computer, pages 106–119. Tradulex. Julie Brousseau, Caroline Drouin, George Foster, Pierre Isabelle, Roland Kuhn, Yves Normandin, and Pierre Plamondon. 1995. French speech recognition in an automatic dictation system for translators: The TransTalk project. In Proceedings of Eurospeech Fourth European Conference on Speech Communication and Technology, pages 193–196. Michael Carl, Martin Jensen, and Kay Kristian. 2010. Long distance revisions in drafting and post-editing. CICLing Special Issue on Natural Language Processing and its Applications, pages 193–204. Sven Coppers, Jan van den Bergh, Kris Luyten, Karin Coninx, Iulianna van der Lek-Ciudin, Tom Vanallemeersch, and Vincent Vandeghinste. 2018. Intellingo: An intelligible translation environment. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 1–13. ACM. 
Barbara Dragsted, Inger Margrethe Mees, and Inge Gorm Hansen. 2011. Speaking your translation: Students’ first encounter with speech recognition technology. Translation & Interpreting, 3(1):10–43. Marc Dymetman, Julie Brousseau, George Foster, Pierre Isabelle, Yves Normandin, and Pierre Plamondon. 1994. Towards an automatic dictation system for translators: The TransTalk project. In Proceedings of the ICSLP International Conference on Spoken Language Processing. Marcello Federico, Nicola Bertoldi, Mauro Cettolo, Matteo Negri, Marco Turchi, Marco Trombetti, Alessandro Cattelan, Antonio Farina, Domenico Lupinetti, Andrea Martines, et al. 2014. The MateCat tool. In Proceedings of the 25th International Conference on Computational Linguistics: System Demonstrations, pages 129–132. Federico Gaspari, Antonio Toral, Sudip Kumar Naskar, Declan Groves, and Andy Way. 2014. Perception vs reality: Measuring machine translation post-editing productivity. In Third Workshop on Post-Editing Technology and Practice, page 60. Spence Green, Jason Chuang, Jeffrey Heer, and Christopher D Manning. 2014a. Predictive translation memory: A mixed-initiative system for human language translation. In Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology, pages 177–187. ACM. Spence Green, Jeffrey Heer, and Christopher D Manning. 2013. The efficacy of human post-editing for language translation. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 439–448. ACM. Spence Green, Sida I Wang, Jason Chuang, Jeffrey Heer, Sebastian Schuster, and Christopher D Manning. 2014b. Human effort and machine learnability in computer aided translation. In Proceedings of the EMNLP Conference on Empirical Methods in Natural Language Processing, pages 1225–1236. Nico Herbig, Santanu Pal, Tim D¨uwel, Kalliopi Meladaki, Mahsa Monshizadeh, Vladislav Hnatovskiy, Antonio Kr¨uger, and Josef van Genabith. 2020. MMPE: A multi-modal interface using handwriting, touch reordering, and speech commands for post-editing machine translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations. Association for Computational Linguistics. Nico Herbig, Santanu Pal, Josef van Genabith, and Antonio Kr¨uger. 2019a. Multi-modal approaches for post-editing machine translation. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, page 231. ACM. Nico Herbig, Santanu Pal, Mihaela Vela, Antonio Kr¨uger, and Josef Genabith. 2019b. Multi-modal indicators for estimating perceived cognitive load in post-editing of machine translation. Machine Translation, 33(1-2):91–115. Maarit Koponen. 2012. Comparing human perceptions of post-editing effort with post-editing operations. In Proceedings of the Seventh Workshop on Statistical Machine Translation, pages 181–190. Association for Computational Linguistics. Elina Lagoudaki. 2009. Translation editing environments. In MT Summit XII: Workshop on Beyond Translation Memories. 1701 Samuel L¨aubli, Mark Fishel, Gary Massey, Maureen Ehrensberger-Dow, and Martin Volk. 2013. Assessing post-editing efficiency in a realistic translation environment. In Proceedings of MT Summit XIV Workshop on Post-editing Technology and Practice, pages 83–91. Mercedes Garcia Martinez, Karan Singla, Aniruddha Tammewar, Bartolom´e Mesa-Lao, Ankita Thakur, MA Anusuya, Banglore Srinivas, and Michael Carl. 2014. SEECAT: ASR & eye-tracking enabled computer assisted translation. 
In The 17th Annual Conference of the European Association for Machine Translation, pages 81–88. European Association for Machine Translation. Bartolom´e Mesa-Lao. 2014. Speech-enabled computer-aided translation: A satisfaction survey with post-editor trainees. In Proceedings of the EACL 2014 Workshop on Humans and Computer-assisted Translation, pages 99–103. Joss Moorkens. 2018. What to expect from neural machine translation: A practical in-class translation evaluation exercise. The Interpreter and Translator Trainer, 12(4):375–387. Joss Moorkens and Sharon O’Brien. 2015. Postediting evaluations: Trade-offs between novice and professional participants. In Proceedings of the 18th Annual Conference of the European Association for Machine Translation, pages 75–81. Joss Moorkens and Sharon O’Brien. 2017. Assessing user interface needs of post-editors of machine translation. In Human Issues in Translation Technology, pages 127–148. Routledge. Sharon O’Brien, Joss Moorkens, and Joris Vreeke. 2014. Kanjingo – a mobile app for post-editing. In Proceedings of the 17th Annual Conference of the European Association for Machine Translation. Lane Schwartz, Isabel Lacruz, and Tatyana Bystrova. 2015. Effects of word alignment visualization on post-editing quality & speed. Proceedings of MT Summit XV, 1:186–199. Carlos S.C. Teixeira, Joss Moorkens, Daniel Turner, Joris Vreeke, and Andy Way. 2019. Creating a multimodal translation tool and testing machine translation integration using touch and voice. Informatics, 6. Dimitri Theologitis. 1998. Language tools at the EC translation service: The theory and the practice. In Proceedings of the 20th Conference Translating and the Computer, pages 12–13. Antonio Toral, Martijn Wieling, and Andy Way. 2018. Post-editing effort of a novel with statistical and neural machine translation. Frontiers in Digital Humanities, 5:9. Olga Torres-Hostench, Joss Moorkens, Sharon O’Brien, Joris Vreeke, et al. 2017. Testing interaction with a mobile MT post-editing app. Translation & Interpreting, 9(2):138. Vincent Vandeghinste, Tom Vanallemeersch, Liesbeth Augustinus, Bram Bult´e, Frank Van Eynde, Joris Pelemans, Lyan Verwimp, Patrick Wambacq, Geert Heyman, Marie-Francine Moens, et al. 2019. Improving the translation environment for professional translators. Informatics, 6(2):24. Vincent Vandeghinste, Tom Vanallemeersch, Liesbeth Augustinus, Joris Pelemans, Geert Heyman, Iulianna van der Lek-Ciudin, Arda Tezcan, Donald Degraen, Jan van den Bergh, Lieve Macken, et al. 2016. Scate – Smart Computer-Aided Translation Environment. Baltic Journal of Modern Computing, 4(2):382–382. Mihaela Vela, Santanu Pal, Marcos Zampieri, Sudip Kumar Naskar, and Josef van Genabith. 2019. Improving CAT tools in the translation workflow: New approaches and evaluation. In Proceedings of Machine Translation Summit XVII Volume 2: Translator, Project and User Tracks, pages 8–15. Julian Wallis. 2006. Interactive Translation vs Pretranslation in the Context of Translation Memory Systems: Investigating the Effects of Translation Method on Productivity, Quality and Translator Satisfaction. Ph.D. thesis, University of Ottawa. Masaru Yamada. 2015. Can college students be posteditors? An investigation into employing language learners in machine translation plus post-editing settings. Machine Translation, 29(1):49–67. Marcos Zampieri and Mihaela Vela. 2014. Quantifying the influence of MT output in the translators’ performance: A case study in technical translation. 
In Proceedings of the EACL 2014 Workshop on Humans and Computer-assisted Translation, pages 93–98. Juli´an Zapata. 2016. Translating on the go? Investigating the potential of multimodal mobile devices for interactive translation dictation. Tradum`atica: Traducci´o i Tecnologies de la Informaci´o i la Comunicaci´o, 1(14):66–74. Juli´an Zapata, Sheila Castilho, and Joss Moorkens. 2017. Translation dictation vs. post-editing with cloud-based voice recognition: A pilot experiment. Proceedings of MT Summit XVI, 2. Anna Zaretskaya and M´ıriam Seghiri. 2018. User Perspective on Translation Tools: Findings of a User Survey. Ph.D. thesis, University of Malaga. Anna Zaretskaya, Mihaela Vela, Gloria Corpas Pastor, and Miriam Seghiri. 2016. Comparing postediting difficulty of different machine translation errors in Spanish and German translations from English. International Journal of Language and Linguistics, 3(3):91–100. 1702 Andre Zenner and Antonio Kr¨uger. 2017. Shifty: A weight-shifting dynamic passive haptic proxy to enhance object perception in virtual reality. IEEE Transactions on Visualization and Computer Graphics, 23(4):1285–1294.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1703–1714 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 1703 A Monolingual Approach to Contextualized Word Embeddings for Mid-Resource Languages Pedro Javier Ortiz Suárez1,2 Laurent Romary1 Benoît Sagot1 1Inria, Paris, France 2Sorbonne Université, Paris, France {pedro.ortiz, benoit.sagot, laurent.romary}@inria.fr Abstract We use the multilingual OSCAR corpus, extracted from Common Crawl via language classification, filtering and cleaning, to train monolingual contextualized word embeddings (ELMo) for five mid-resource languages. We then compare the performance of OSCARbased and Wikipedia-based ELMo embeddings for these languages on the part-ofspeech tagging and parsing tasks. We show that, despite the noise in the Common-Crawlbased OSCAR data, embeddings trained on OSCAR perform much better than monolingual embeddings trained on Wikipedia. They actually equal or improve the current state of the art in tagging and parsing for all five languages. In particular, they also improve over multilingual Wikipedia-based contextual embeddings (multilingual BERT), which almost always constitutes the previous state of the art, thereby showing that the benefit of a larger, more diverse corpus surpasses the crosslingual benefit of multilingual embedding architectures. 1 Introduction One of the key elements that has pushed the state of the art considerably in neural NLP in recent years has been the introduction and spread of transfer learning methods to the field. These methods can normally be classified in two categories according to how they are used: • Feature-based methods, which involve pretraining real-valued vectors (“embeddings”) at the word, sentence, or paragraph level; and using them in conjunction with a specific architecture for each individual downstream task. • Fine-tuning methods, which introduce a minimal number of task-specific parameters, and instead copy the weights from a pre-trained network and then tune them to a particular downstream task. Embeddings or language models can be divided into fixed, meaning that they generate a single representation for each word in the vocabulary; and contextualized, meaning that a representation is generated based on both the word and its surrounding context, so that a single word can have multiple representations, each one depending on how it is used. In practice, most fixed embeddings are used as feature-based models. The most notable examples are word2vec (Mikolov et al., 2013), GloVe (Pennington et al., 2014) and fastText (Mikolov et al., 2018). All of them are extensively used in a variety of applications nowadays. On the other hand, contextualized word representations and language models have been developed using both featurebased architectures, the most notable examples being ELMo and Flair (Peters et al., 2018; Akbik et al., 2018), and transformer based architectures, that are commonly used in a fine-tune setting, as is the case of GPT-1, GPT-2 (Radford et al., 2018, 2019), BERT and its derivatives (Devlin et al., 2018; Liu et al., 2019; Lan et al., 2019) and more recently T5 (Raffel et al., 2019). All of them have repeatedly improved the state-of-the art in many downstream NLP tasks over the last year. In general, the main advantage of using language models is that they are mostly built in an unsupervised manner and they can be trained with raw, unannotated plain text. 
Their main drawback is that enormous quantities of data seem to be required to properly train them especially in the case of contextualized models, for which larger corpora are thought to be needed to properly address polysemy and cover the wide range of uses that commonly exist within languages. For gathering data in a wide range of languages, 1704 Wikipedia is a commonly used option. It has been used to train fixed embeddings (Al-Rfou et al., 2013; Bojanowski et al., 2017) and more recently the multilingual BERT (Devlin et al., 2018), hereafter mBERT. However, for some languages, Wikipedia might not be large enough to train good quality contextualized word embeddings. Moreover, Wikipedia data all belong to the same specific genre and style. To address this problem, one can resort to crawled text from the internet; the largest and most widespread dataset of crawled text being Common Crawl.1 Such an approach generally solves the quantity and genre/style coverage problems but might introduce noise in the data, an issue which has earned the corpus some criticism, most notably by Trinh and Le (2018) and Radford et al. (2019). Using Common Crawl also leads to data management challenges as the corpus is distributed in the form of a large set of plain text each containing a large quantity of unclassified multilingual documents from different websites. In this paper we study the trade-off between quantity and quality of data for training contextualized representations. To this end, we use the OSCAR corpus (Ortiz Suárez et al., 2019), a freely available2 multilingual dataset obtained by performing language classification, filtering and cleaning of the whole Common Crawl corpus.3 OSCAR was created following the approach of Grave et al. (2018) but proposing a simple improvement on their filtering method. We then train OSCARbased and Wikipedia-based ELMo contextualized word embeddings (Peters et al., 2018) for 5 languages: Bulgarian, Catalan, Danish, Finnish and Indonesian. We evaluate the models by attaching them to the to UDPipe 2.0 architecture (Straka, 2018; Straka et al., 2019) for dependency parsing and part-of-speech (POS) tagging. We show that the models using the OSCAR-based ELMo embeddings consistently outperform the Wikipediabased ones, suggesting that big high-coverage noisy corpora might be better than small high-quality narrow-coverage corpora for training contextualized language representations4. We also establish a new state of the art for both POS tagging and dependency parsing in 6 different treebanks covering 1https://commoncrawl.org 2https://oscar-corpus.com 3Snapshot from November 2018 4Both the Wikipedia- and the OSCAR-based embeddings for these 5 languages are available at: https://oscarcorpus.com/#models. all 5 languages. The structure of the paper is as follows. In Section 2 we describe the recent related work. In Section 3 we present, compare and analyze the corpora used to train our contextualized embeddings, and the treebanks used to train our POS tagging and parsing models. In Section 4 we examine and describe in detail the model used for our contextualized word representations, as well as the parser and the tagger we chose to evaluate the impact of corpora in the embeddings’ performance in downstream tasks. Finally we provide an analysis of our results in Section 5 and in Section 6 we present our conclusions. 
2 Related work Since the introduction of word2vec (Mikolov et al., 2013), many attempts have been made to create multilingual language representations; for fixed word embeddings the most remarkable works are those of (Al-Rfou et al., 2013) and (Bojanowski et al., 2017) who created word embeddings for a large quantity of languages using Wikipedia, and later (Grave et al., 2018) who trained the fastText word embeddings for 157 languages using Common Crawl and who in fact showed that using crawled data significantly increased the performance of the embeddings especially for mid- to low-resource languages. Regarding contextualized models, the most notable non-English contribution has been that of the mBERT (Devlin et al., 2018), which is distributed as (i) a single multilingual model for 100 different languages trained on Wikipedia data, and as (ii) a single multilingual model for both Simplified and Traditional Chinese. Four monolingual fully trained ELMo models have been distributed for Japanese, Portuguese, German and Basque5; 44 monolingual ELMo models6 where also released by the HIT-SCIR team (Che et al., 2018) during the CoNLL 2018 Shared Task (Zeman et al., 2018), but their training sets where capped at 20 million words. A German BERT (Chan et al., 2019) as well as a French BERT model (called CamemBERT) (Martin et al., 2019) have also been released. In general no particular effort in creating a set of highquality monolingual contextualized representations has been shown yet, or at least not on a scale that 5https://allennlp.org/elmo 6https://github.com/HIT-SCIR/ ELMoForManyLangs 1705 is comparable with what was done for fixed word embeddings. For dependency parsing and POS tagging the most notable non-English specific contribution is that of the CoNLL 2018 Shared Task (Zeman et al., 2018), where the 1st place (LAS Ranking) was awarded to the HIT-SCIR team (Che et al., 2018) who used Dozat and Manning (2017)’s Deep Biaffine parser and its extension described in (Dozat et al., 2017), coupled with deep contextualized ELMo embeddings (Peters et al., 2018) (capping the training set at 20 million words). The 1st place in universal POS tagging was awarded to Smith et al. (2018) who used two separate instances of Bohnet et al. (2018)’s tagger. More recent developments in POS tagging and parsing include those of Straka et al. (2019) which couples another CoNLL 2018 shared task participant, UDPipe 2.0 (Straka, 2018), with mBERT greatly improving the scores of the original model, and UDify (Kondratyuk and Straka, 2019), which adds an extra attention layer on top of mBERT plus a Deep Bi-affine attention layer for dependency parsing and a Softmax layer for POS tagging. UDify is actually trained by concatenating the training sets of 124 different UD treebanks, creating a single POS tagging and dependency parsing model that works across 75 different languages. 3 Corpora We train ELMo contextualized word embeddings for 5 languages: Bulgarian, Catalan, Danish, Finnish and Indonesian. We train one set of embeddings using only Wikipedia data, and another set using only Common-Crawl-based OSCAR data. We chose these languages primarily because they are morphologically and typologically different from one another, but also because all of the OSCAR datasets for these languages were of a sufficiently manageable size such that the ELMo pre-training was doable in less than one month. 
Contrary to the HIT-SCIR team (Che et al., 2018), we do not impose any cap on the amount of data, and instead use the entirety of Wikipedia or OSCAR for each of our 5 chosen languages.

3.1 Wikipedia

Wikipedia is the biggest online multilingual open encyclopedia, comprising more than 40 million articles in 301 different languages. Because articles are curated by language and written in an open collaboration model, its text tends to be of very high quality in comparison to other free online resources. This is why Wikipedia has been extensively used in various NLP applications (Wu and Weld, 2010; Mihalcea, 2007; Al-Rfou et al., 2013; Bojanowski et al., 2017).

We downloaded the XML Wikipedia dumps7 and extracted the plaintext from them using the wikiextractor.py script8 from Giuseppe Attardi. We present the number of words and tokens available for each of our 5 languages in Table 1. We decided against deduplicating the Wikipedia data as the corpora are already quite small. We tokenize the 5 corpora using UDPipe (Straka and Straková, 2017).

Language     Size   #Ktokens   #Kwords   #Ksentences
Bulgarian    609M     64,190    54,748         3,685
Catalan      1.1G    211,627   179,108         8,293
Danish       338M     60,644    52,538         3,226
Finnish      669M     89,580    76,035         6,847
Indonesian   488M     80,809    68,955         4,298

Table 1: Size of Wikipedia corpora, measured in bytes, thousands of tokens, words and sentences.

3.2 OSCAR

Common Crawl is a non-profit organization that produces and maintains an open, freely available repository of crawled data from the web. Common Crawl's complete archive consists of petabytes of monthly snapshots collected since 2011. Common Crawl snapshots are not classified by language, and contain a certain level of noise (e.g. one-word "sentences" such as "OK" and "Cancel" are unsurprisingly very frequent). This is what motivated the creation of the freely available multilingual OSCAR corpus (Ortiz Suárez et al., 2019), extracted from the November 2018 snapshot, which amounts to more than 20 terabytes of plain-text.

In order to create OSCAR from this Common Crawl snapshot, Ortiz Suárez et al. (2019) reproduced the pipeline proposed by Grave et al. (2018) to process, filter and classify Common Crawl. More precisely, language classification was performed using the fastText linear classifier (Joulin et al., 2016, 2017), which was trained by Grave et al. (2018) to recognize 176 languages and was shown to have an extremely good accuracy to processing time trade-off. The filtering step as performed by Grave et al. (2018) consisted in only keeping the lines exceeding 100 bytes in length.9 However, considering that Common Crawl is a multilingual UTF-8 encoded corpus, this 100-byte threshold creates a huge disparity between ASCII and non-ASCII encoded languages. The filtering step used to create OSCAR therefore consisted in only keeping the lines containing at least 100 UTF-8-encoded characters. Finally, as in Grave et al. (2018), the OSCAR corpus is deduplicated, i.e. for each language, only one occurrence of a given line is included. As we did for Wikipedia, we tokenize OSCAR corpora for the 5 languages we chose for our study using UDPipe.

Language     Size   #Ktokens    #Kwords    #Ksentences
Bulgarian     14G   1,466,051  1,268,115        82,532
Catalan      4.3G     831,039    729,333        31,732
Danish       9.7G   1,828,881  1,620,091        99,766
Finnish       14G   1,854,440  1,597,856       142,215
Indonesian    16G   2,701,627  2,394,958       140,138

Table 2: Size of OSCAR subcorpora, measured in bytes, thousands of tokens, words and sentences.

7 XML dumps from April 4, 2019.
8 Available here.
9 Script available here.
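The length-filtering and deduplication choices just described are easy to illustrate. The following minimal Python sketch (our own, not the actual OSCAR pipeline) contrasts the 100-byte filter of Grave et al. (2018) with OSCAR's 100-character filter and applies per-language line deduplication; the fastText Python bindings and the lid.176.bin model path are assumptions about the reader's setup.

```python
import fasttext  # assumes the fasttext Python bindings are installed

# Public fastText language-identification model of Grave et al. (2018);
# the file path is a placeholder.
lid_model = fasttext.load_model("lid.176.bin")

def keep_line_grave(line: str) -> bool:
    """Byte-based filter: a Cyrillic line passes with roughly half as many
    characters as an ASCII line, hence the disparity noted above."""
    return len(line.encode("utf-8")) >= 100

def keep_line_oscar(line: str) -> bool:
    """Character-based filter used for OSCAR: uniform across scripts."""
    return len(line) >= 100

def classify_filter_dedup(lines):
    """Yield (language, line) pairs, keeping one occurrence of a line per language."""
    seen = {}  # language -> set of lines already emitted
    for line in lines:
        line = line.strip()
        if not keep_line_oscar(line):
            continue
        labels, _ = lid_model.predict(line.replace("\n", " "))
        lang = labels[0].replace("__label__", "")
        if line not in seen.setdefault(lang, set()):
            seen[lang].add(line)
            yield lang, line
```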
Table 2 provides quantitative information about the 5 resulting tokenized corpora. We note that the original Common-Crawl-based corpus created by Grave et al. (2018) to train fastText is not freely available. Since running the experiments described in this paper, a new architecture for creating a Common-Crawl-based corpus, named CCNet (Wenzek et al., 2019), has been published; although it includes specialized filtering which might result in a cleaner corpus than OSCAR, the resulting CCNet corpus itself was not published. Thus we chose to keep OSCAR, as it remains the only very large scale, Common-Crawl-based corpus currently available and easily downloadable.

3.3 Noisiness

We wanted to address Trinh and Le (2018) and Radford et al. (2019)'s criticisms of Common Crawl, so we devised a simple method to measure how noisy the OSCAR corpora were for our 5 languages. We randomly extract a number of lines from each corpus, such that the resulting random sample contains one million words.10 We test if the words are in the corresponding GNU Aspell11 dictionary. We repeat this task for each of the 5 languages, for both the OSCAR and the Wikipedia corpora. We compile in Table 3 the number of out-of-vocabulary tokens for each corpus.

Language     OOV Wikipedia   OOV OSCAR
Bulgarian           60,879      66,558
Catalan             34,919      79,678
Danish             134,677     123,299
Finnish            266,450     267,525
Indonesian         116,714     124,607

Table 3: Number of out-of-vocabulary words in random samples of 1M words for OSCAR and Wikipedia.

As expected, this simple metric shows that in general the OSCAR samples contain more out-of-vocabulary words than the Wikipedia ones. However, the difference in magnitude between the two is strikingly lower than one would have expected in view of the criticisms by Trinh and Le (2018) and Radford et al. (2019), thereby validating the usability of Common Crawl data when it is properly filtered, as was achieved by the OSCAR creators. We even observe that, for Danish, the number of out-of-vocabulary words in OSCAR is lower than that in Wikipedia.

10 We remove tokens that are capitalized or contain less than 4 UTF-8 encoded characters, allowing us to remove bias against Wikipedia, which traditionally contains a large quantity of proper nouns and acronyms.
11 http://aspell.net/

4 Experimental Setting

The main goal of this paper is to show the impact of training data on contextualized word representations when applied to particular downstream tasks. To this end, we train different versions of the Embeddings from Language Models (ELMo) (Peters et al., 2018) for both the Wikipedia and OSCAR corpora, for each of our selected 5 languages. We save the models' weights at different numbers of epochs for each language, in order to test how corpus size affects the embeddings and to see whether and when overfitting happens when training ELMo on smaller corpora. We take each of the trained ELMo models and use them in conjunction with the UDPipe 2.0 (Straka, 2018; Straka et al., 2019) architecture for dependency parsing and POS tagging to test our models. We train UDPipe 2.0 using gold tokenization and segmentation for each of our ELMo models; the only thing that changes from training to training is the ELMo model, as hyperparameters always remain at the default values (except for the number of training tokens) (Peters et al., 2018).

4.1 Contextualized word embeddings

Embeddings from Language Models (ELMo) (Peters et al., 2018) is an LSTM-based language model.
1707 More precisely, it uses a bidirectional language model, which combines a forward and a backward LSTM-based language model. ELMo also computes a context-independent token representation via a CNN over characters. We train ELMo models for Bulgarian, Catalan, Danish, Finnish and Indonesian using the OSCAR corpora on the one hand and the Wikipedia corpora on the other. We train each model for 10 epochs, as was done for the original English ELMo (Peters et al., 2018). We save checkpoints at 1st, 3rd and 5th epoch in order to investigate some concerns about possible overfitting for smaller corpora (Wikipedia in this case) raised by the original ELMo authors.12 4.2 UDPipe 2.0 For our POS tagging and dependency parsing evaluation, we use UDPipe 2.0, which has a freely available and ready to use implementation.13 This architecture was submitted as a participant to the 2018 CoNLL Shared Task (Zeman et al., 2018), obtaining the 3rd place in LAS ranking. UDPipe 2.0 is a multi-task model that predicts POS tags, lemmas and dependency trees jointly. The original UDPipe 2.0 implementation calculates 3 different embeddings, namely: • Pre-trained word embeddings: In the original implementation, the Wikipedia version of fastText embeddings is used (Bojanowski et al., 2017); we replace them in favor of the newer Common-Crawl-based fastText embeddings trained by Grave et al. (2018). • Trained word embeddings: Randomly initialized word representations that are trained with the rest of the network. • Character-level word embeddings: Computed using bi-directional GRUs of dimension 256. They represent every UTF-8 encoded character with two 256 dimensional vectors, one for the forward and one for the backward layer. This two vector representations are concatenated and are trained along the whole network. After the CoNLL 2018 Shared Task, the UDPipe 2.0 authors added the option to concatenate contextualized representations to the embedding 12https://github.com/allenai/bilm-tf/ issues/135 13https://github.com/CoNLL-UD-2018/ UDPipe-Future Treebank #Ktokens #Ksentences Bulgarian-BTB 156 11 Catalan-AnCora 530 17 Danish-DDT 100 6 Finnish-FTB 159 19 Finnish-TDT 202 15 Indonesian-GSD 121 6 Table 4: Size of treebanks, measured in thousands of tokens and sentences. section of the network (Straka et al., 2019), we use this new implementation and we concatenate our pretrained deep contextualized ELMo embeddings to the three embeddings mentioned above. Once the embedding step is completed, the concatenation of all vector representations for a word are fed to two shared bidirectional LSTM (Hochreiter and Schmidhuber, 1997) layers. The output of these two BiLSTMS is then fed to two separate specific LSTMs: • The tagger- and lemmatizer-specific bidirectional LSTMs, with Softmax classifiers on top, which process its output and generate UPOS, XPOS, UFeats and Lemmas. The lemma classifier also takes the character-level word embeddings as input. • The parser-specific bidirectional LSTM layer, whose output is then passed to a bi-affine attention layer (Dozat and Manning, 2017) producing labeled dependency trees. 4.3 Treebanks To train the selected parser and tagger (cf. Section 4.2) and evaluate the pre-trained language models in our 5 languages, we run our experiments using the Universal Dependencies (UD)14 paradigm and its corresponding UD POS tag set (Petrov et al., 2012). 
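As a rough illustration of the architecture described in Section 4.2, the following PyTorch sketch shows how the concatenated word representations are fed to shared BiLSTMs and then to the two task-specific heads. It is only a sketch under simplifying assumptions: dimensions are placeholders, the character-level GRU embeddings and the lemma/XPOS/UFeats outputs are omitted, the ELMo vectors are assumed to be precomputed, and the arc scorer is a simplified bilinear form rather than the full biaffine attention of Dozat and Manning (2017); it is not the authors' implementation.

```python
import torch
import torch.nn as nn

class UDPipe2Sketch(nn.Module):
    """Concatenated embeddings -> shared BiLSTMs -> tagger and parser heads."""
    def __init__(self, vocab_size, d_pretrained=300, d_trained=128, d_elmo=1024,
                 d_lstm=256, d_arc=128, n_upos=17):
        super().__init__()
        self.trained_emb = nn.Embedding(vocab_size, d_trained)
        d_in = d_pretrained + d_trained + d_elmo
        self.shared = nn.LSTM(d_in, d_lstm, num_layers=2,
                              bidirectional=True, batch_first=True)
        self.tagger_lstm = nn.LSTM(2 * d_lstm, d_lstm,
                                   bidirectional=True, batch_first=True)
        self.parser_lstm = nn.LSTM(2 * d_lstm, d_lstm,
                                   bidirectional=True, batch_first=True)
        self.upos_out = nn.Linear(2 * d_lstm, n_upos)
        # Simplified bilinear arc scorer in the spirit of Dozat & Manning (2017).
        self.head_mlp = nn.Linear(2 * d_lstm, d_arc)
        self.dep_mlp = nn.Linear(2 * d_lstm, d_arc)
        self.arc_weight = nn.Parameter(torch.empty(d_arc, d_arc))
        nn.init.xavier_uniform_(self.arc_weight)

    def forward(self, word_ids, pretrained_vecs, elmo_vecs):
        # Concatenate the different word representations for each token.
        x = torch.cat([pretrained_vecs, self.trained_emb(word_ids), elmo_vecs], dim=-1)
        shared, _ = self.shared(x)              # (batch, seq, 2 * d_lstm)
        tag_h, _ = self.tagger_lstm(shared)
        upos_logits = self.upos_out(tag_h)      # (batch, seq, n_upos)
        parse_h, _ = self.parser_lstm(shared)
        heads = self.head_mlp(parse_h)          # (batch, seq, d_arc)
        deps = self.dep_mlp(parse_h)
        # arc_scores[b, i, j]: score of token j being the head of token i.
        arc_scores = torch.einsum("bid,de,bje->bij", deps, self.arc_weight, heads)
        return upos_logits, arc_scores
```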
We use all the treebanks available for our five languages in the UD treebank collection version 2.2 (Nivre et al., 2018), which was used for the CoNLL 2018 shared task, thus we perform our evaluation tasks in 6 different treebanks (see Table 4 for treebank size information). • Bulgarian BTB: Created at the Institute of Information and Communication Technologies, Bulgarian Academy of Sciences, it consists of legal documents, news articles and fiction pieces. 14https://universaldependencies.org 1708 • Catalan-AnCora: Built on top of the SpanishCatalan AnCora corpus (Taulé et al., 2008), it contains mainly news articles. • Danish-DDT: Converted from the Danish Dependency Treebank (Buch-Kromann, 2003). It includes news articles, fiction and non fiction texts and oral transcriptions. • Finnish-FTB: Consists of manually annotated grammatical examples from VISK15 (The Web Version of the Large Grammar of Finnish). • Finnish-TDT: Based on the Turku Dependency Treebank (TDT). Contains texts from Wikipedia, Wikinews, news articles, blog entries, magazine articles, grammar examples, Europarl speeches, legal texts and fiction. • Indonesian-GSD: Includes mainly blog entries and news articles. 5 Results & Discussion 5.1 Parsing and POS tagging results We use UDPipe 2.0 without contextualized embeddings as our baseline for POS tagging and dependency parsing. However, we did not train the model without contextualized word embedding ourselves. We instead take the scores as they are reported in (Kondratyuk and Straka, 2019). We also compare our UDPipe 2.0 + ELMo models against the state-of-the-art results (assuming gold tokenization) for these languages, which are either UDify (Kondratyuk and Straka, 2019) or UDPipe 2.0 + mBERT (Straka et al., 2019). Results for UPOS, UAS and LAS are shown in Table 5. We obtain the state of the art for the three metrics in each of the languages with the UDPipe 2.0 + ELMoOSCAR models. We also see that in every single case the UDPipe 2.0 + ELMoOSCAR result surpasses the UDPipe 2.0 + ELMoWikipedia one, suggesting that the size of the pre-training data plays an important role in downstream task results. This is also supports our hypothesis that the OSCAR corpora, being multi-domain, exhibits a better coverage of the different styles, genres and uses present at least in these 5 languages. 
Taking a closer look at the results for Danish, we see that ELMoWikipedia, which was trained with a mere 300MB corpus, does not show any sign 15http://scripta.kotus.fi/visk Treebank Model UPOS UAS LAS UDify 98.89 95.54 92.40 UDPipe 2.0 98.98 93.38 90.35 Bulgarian BTB +mBERT 99.20 95.34 92.62 +ELMoWikipedia 99.17 94.93 92.05 +ELMoOSCAR 99.40 96.01 93.56 UDify 98.89 94.25 92.33 UDPipe 2.0 98.88 93.22 91.06 Catalan-AnCora +mBERT 99.06 94.49 92.74 +ELMoWikipedia 99.05 93.99 92.24 +ELMoOSCAR 99.06 94.49 92.88 UDify 97.50 87.76 84.50 UDPipe 2.0 97.78 86.88 84.31 Danish-DDT +mBERT 98.21 89.32 87.24 +ELMoWikipedia 98.45 89.05 86.92 +ELMoOSCAR 98.62 89.84 87.95 UDify 93.80 86.37 81.40 UDPipe 2.0 96.65 90.68 87.89 Finnish-FTB +mBERT 96.97 91.68 89.02 +ELMoWikipedia 97.27 92.05 89.62 +ELMoOSCAR 98.13 93.81 92.02 UDify 94.43 86.42 82.03 UDPipe 2.0 97.45 89.88 87.46 Finnish-TDT +mBERT 97.57 91.66 89.49 +ELMoWikipedia 97.65 91.60 89.34 +ELMoOSCAR 98.36 93.54 91.77 UDify 93.36 86.45 80.10 UDPipe 2.0 93.69 85.31 78.99 Indonesian-GSD +mBERT 94.09 86.47 80.40 +ELMoWikipedia 93.94 86.16 80.10 +ELMoOSCAR 94.12 86.49 80.59 Table 5: Scores from UDPipe 2.0 (from Kondratyuk and Straka, 2019), the previous state-of-the-art models UDPipe 2.0+mBERT (Straka et al., 2019) and UDify (Kondratyuk and Straka, 2019), and our ELMoenhanced UDPipe 2.0 models. Test scores are given for UPOS, UAS and LAS in all five languages. Best scores are shown in bold, second best scores are underlined. of overfitting, as the UDPipe 2.0 + ELMoWikipedia results considerably improve the UDPipe 2.0 baseline. This is the case for all of our ELMoWikipedia models as we never see any evidence of a negative impact when we add them to the baseline model. In fact, the results of UDPipe 2.0 + ELMoWikipedia give better than previous state-of-the-art results in all metrics for the Finnish-FTB and in UPOS for the Finnish-TDT. The results for Finnish are actually quite interesting, as mBERT was pre-trained on Wikipedia and here we see that the multilingual setting in which UDify was fine-tuned exhibits subbaseline results for all metrics, and that the UDPipe + mBERT scores are often lower than those of our UDPipe 2.0 + ELMoWikipedia. 
This actually suggests that even though the multilingual approach of mBERT (in pre-training) or UDify (in pre-training and fine-tuning) leads to better performance for high-resource languages or languages 1709 Treebank Model UPOS UAS LAS UDPipe 2.0 98.98 93.38 90.35 +ELMoWikipedia(1) 98.81 93.60 90.21 +ELMoWikipedia(3) 99.01 94.32 91.36 +ELMoWikipedia(5) 99.03 94.32 91.38 Bulgarian BTB +ELMoWikipedia(10) 99.17 94.93 92.05 +ELMoOSCAR(1) 99.28 95.45 92.98 +ELMoOSCAR(3) 99.34 95.58 93.12 +ELMoOSCAR(5) 99.34 95.63 93.25 +ELMoOSCAR(10) 99.40 96.01 93.56 UDPipe 2.0 98.88 93.22 91.06 +ELMoWikipedia(1) 98.93 93.24 91.21 +ELMoWikipedia(3) 99.02 93.75 91.93 +ELMoWikipedia(5) 99.04 93.86 92.05 Catalan-AnCora +ELMoWikipedia(10) 99.05 93.99 92.24 +ELMoOSCAR(1) 99.07 93.92 92.29 +ELMoOSCAR(3) 99.10 94.29 92.69 +ELMoOSCAR(5) 99.07 94.38 92.75 +ELMoOSCAR(10) 99.06 94.49 92.88 UDPipe 2.0 97.78 86.88 84.31 +ELMoWikipedia(1) 97.47 86.98 84.15 +ELMoWikipedia(3) 98.03 88.16 85.81 +ELMoWikipedia(5) 98.15 88.24 85.96 Danish-DDT +ELMoWikipedia(10) 98.45 89.05 86.92 +ELMoOSCAR(1) 98.50 89.47 87.43 +ELMoOSCAR(3) 98.59 89.68 87.77 +ELMoOSCAR(5) 98.59 89.46 87.64 +ELMoOSCAR(10) 98.62 89.84 87.95 Treebank Model UPOS UAS LAS UDPipe 2.0 96.65 90.68 87.89 +ELMoWikipedia(1) 95.86 89.63 86.39 +ELMoWikipedia(3) 96.76 91.02 88.27 +ELMoWikipedia(5) 96.97 91.66 89.04 Finnish-FTB +ELMoWikipedia(10) 97.27 92.05 89.62 +ELMoOSCAR(1) 97.91 93.41 91.43 +ELMoOSCAR(3) 98.00 93.99 91.98 +ELMoOSCAR(5) 98.15 93.98 92.24 +ELMoOSCAR(10) 98.13 93.81 92.02 UDPipe 2.0 97.45 89.88 87.46 +ELMoWikipedia(1) 96.73 89.11 86.33 +ELMoWikipedia(3) 97.55 90.84 88.50 +ELMoWikipedia(5) 97.55 91.11 88.88 Finnish-TDT +ELMoWikipedia(10) 97.65 91.60 89.34 +ELMoOSCAR(1) 98.27 93.03 91.29 +ELMoOSCAR(3) 98.38 93.60 91.83 +ELMoOSCAR(5) 98.39 93.57 91.80 +ELMoOSCAR(10) 98.36 93.54 91.77 UDPipe 2.0 93.69 85.31 78.99 +ELMoWikipedia(1) 93.70 85.81 79.46 +ELMoWikipedia(3) 93.90 86.04 79.72 +ELMoWikipedia(5) 94.04 85.93 79.97 Indonesian-GSD +ELMoWikipedia(10) 93.94 86.16 80.10 +ELMoOSCAR(1) 93.95 86.25 80.23 +ELMoOSCAR(3) 94.00 86.21 80.14 +ELMoOSCAR(5) 94.23 86.37 80.40 +ELMoOSCAR(10) 94.12 86.49 80.59 Table 6: UPOS, UAS and LAS scores for the UDPipe 2.0 baseline reported by (Kondratyuk and Straka, 2019), plus the scores for checkpoints at 1, 3, 5 and 10 epochs for all the ELMoOSCAR and ELMoWikipedia. All scores are test scores. Best ELMoOSCAR scores are shown in bold while best ELMoWikipedia scores are underlined. that are closely related to high-resource languages, it might also significantly degrade the representations for more isolated or even simply more morphologically rich languages like Finnish. In contrast, our monolingual approach with UDPipe 2.0 + ELMoOSCAR improves the previous SOTA considerably, by more than 2 points for some metrics. Note however that Indonesian, which might also be seen as a relatively isolated language, does not behave in the same way as Finnish. 5.2 Impact of the number of training epochs An important topic we wanted to address with our experiments was that of overfitting and the number of epochs one should train the contextualized embeddings for. 
The ELMo authors have expressed that increasing the number of training epochs is generally better, as they argue that training the ELMo model for longer reduces held-out perplexity and further improves downstream task performance.16 This is why we intentionally fully pre-trained the ELMoWikipedia to the 10 epochs of the original ELMo paper, as its authors also expressed concern over the possibility of overfitting for smaller corpora. We thus save checkpoints for 16Their comments on the matter can be found here. each of our ELMo model at the 1, 3, 5 and 10 epoch marks so that we can properly probe for overfitting. The scores of all checkpoints are reported in Table 6. Here again we do not train the UDPipe 2.0 baselines without embedding, we just report the scores published in Kondratyuk and Straka (2019). The first striking finding is that even though all our Wikipedia data sets are smaller than 1GB in size (except for Catalan), none of the ELMoWikipedia models show any sign of overfitting, as the results continue to improve for all metrics the more we train the ELMo models, with the best results consistently being those of the fully trained 10 epoch ELMos. For all of our Wikipedia models, but those of Catalan and Indonesian, we see sub-baseline results at 1 epoch; training the model for longer is better, even if the corpora are small in size. ELMoOSCAR models exhibit exactly the same behavior as ELMoWikipedia models where the scores continue to improve the longer they are pre-trained, except for the case of Finnish. Here we actually see an unexpected behavior where the model performance caps around the 3rd to 5th epoch. This is surprising because the Finnish OSCAR corpus is more than 20 times bigger than our smallest Wikipedia corpus, the Danish Wikipedia, that did not exhibit 1710 this behavior. As previously mentioned Finnish is morphologically richer than the other languages in which we trained ELMo, we hypothesize that the representation space given by the ELMo embeddings might not be sufficiently big to extract more features from the Finnish OSCAR corpus beyond the 5th epoch mark, however in order to test this we would need to train a larger language model like BERT which is sadly beyond our computing infrastructure limits (cf. Subsection 5.3). However we do note that pre-training our current language model architectures in a morphologically rich language like Finnish might actually better expose the limits of our existing approaches to language modeling. One last thing that it is important to note with respect to the number of training epochs is that even though we fully pre-trained our ELMoWikipedia’s and ELMoOSCAR’s to the recommended 10 epoch mark, and then compared them against one another, the number of training steps between both pre-trained models differs drastically due to the big difference in corpus size (for Indonesian, for instance, 10 epochs correspond to 78K steps for ELMoWikipedia and to 2.6M steps for OSCAR; the complete picture is provided in the Appendix, in Table 8). In fact, we can see in Table 6 that all the UDPipe 2.0 + ELMoOSCAR(1) perform better than the UDPipe 2.0 + ELMoWikipedia(1) models across all metrics. Thus we believe that talking in terms of training steps as opposed to training epochs might be a more transparent way of comparing two pretrained models. 
5.3 Computational cost and carbon footprint

Considering the discussion above, we believe an interesting follow-up to our experiments would be training the ELMo models for more of the languages included in the OSCAR corpus. However, training ELMo is computationally costly, and one way to estimate this cost, as pointed out by Strubell et al. (2019), is by using the training times of each model to compute both power consumption and CO2 emissions. In our set-up we used two different machines, each one having 4 NVIDIA GeForce GTX 1080 Ti graphic cards and 128GB of RAM, the difference between the machines being that one uses a single Intel Xeon Gold 5118 processor, while the other uses two Intel Xeon E5-2630 v4 processors. One GeForce GTX 1080 Ti card is rated at around 250 W,17 the Xeon Gold 5118 processor is rated at 105 W,18 while one Xeon E5-2630 v4 is rated at 85 W.19 For the DRAM we can use the work of Desrochers et al. (2016) to estimate the total power draw of 128GB of RAM at around 13 W. Having this information, we can now use the formula proposed by Strubell et al. (2019) in order to compute the total power required to train one ELMo model:

$$p_t = \frac{1.58\, t\, (c\, p_c + p_r + g\, p_g)}{1000}$$

where $c$ and $g$ are the number of CPUs and GPUs respectively, $p_c$ is the average power draw (in Watts) from all CPU sockets, $p_r$ the average power draw from all DRAM sockets, and $p_g$ the average power draw of a single GPU. We estimate the total power consumption by adding GPU, CPU and DRAM consumptions, and then multiplying by the Power Usage Effectiveness (PUE), which accounts for the additional energy required to support the compute infrastructure. We use a PUE coefficient of 1.58, the 2018 global average for data centers (Strubell et al., 2019).

In Table 7 we report the training times in both hours and days, as well as the total power draw (in Watts) of the system used to train each individual ELMo model. We use this information to compute the total power consumption of each ELMo, also reported in Table 7. We can further estimate the CO2 emissions in kilograms of each single model by multiplying the total power consumption by the average CO2 emissions per kWh in France (where the models were trained). According to the RTE (Réseau de transport d'électricité / Electricity Transmission Network), the average emission per kWh was around 51 g/kWh in November 2019,20 when the models were trained. Thus the total CO2 emissions in kg for one single model can be computed as:

$$\mathrm{CO_2e} = 0.051\, p_t$$

All emissions for the ELMo models are also reported in Table 7.

Language     Power    Hours    Days   KWh·PUE    CO2e
OSCAR-Based ELMos
Bulgarian     1183   515.00   21.45    962.61   49.09
Catalan       1118   199.98    8.33    353.25   18.02
Danish        1183   200.89    8.58    375.49   19.15
Finnish       1118   591.25   24.63   1044.40   53.26
Indonesian    1183   694.26   28.93   1297.67   66.18
Wikipedia-Based ELMos
Bulgarian     1118    15.45    0.64     27.29    1.39
Catalan       1118    51.08    2.13     90.22    4.60
Danish        1118    14.56    0.61     25.72    1.31
Finnish       1118    21.79    0.91     38.49    1.96
Indonesian    1118    20.28    0.84     35.82    1.82
TOTAL EMISSIONS                                216.78

Table 7: Average power draw (Watts), training times (in both hours and days), mean power consumption (KWh) and CO2 emissions (kg) for each ELMo model trained.

17 https://www.geforce.com/hardware/desktop-gpus/geforce-gtx-1080-ti/specifications
18 https://ark.intel.com/content/www/us/en/ark/products/120473/intel-xeongold-5118-processor-16-5m-cache-2-30ghz.html
19 https://ark.intel.com/content/www/us/en/ark/products/92981/intel-xeonprocessor-e5-2630-v4-25m-cache-2-20-ghz.html
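To make the computation concrete, the following snippet (the function names are ours) reproduces the Bulgarian OSCAR-based ELMo row of Table 7 from the rated component wattages and the reported training time, using the formula above.

```python
PUE = 1.58           # 2018 global average PUE (Strubell et al., 2019)
CO2_PER_KWH = 0.051  # kg CO2 per kWh in France, November 2019 (RTE)

def total_power_draw(n_gpus, gpu_w, cpu_w_total, dram_w=13):
    """System power draw in Watts: GPUs + CPU sockets + DRAM."""
    return n_gpus * gpu_w + cpu_w_total + dram_w

def training_footprint(hours, power_w):
    """Return (kWh multiplied by PUE, kg CO2e) for one training run."""
    kwh = PUE * hours * power_w / 1000.0
    return kwh, kwh * CO2_PER_KWH

# Bulgarian OSCAR ELMo: 515.00 h on the machine with two Xeon E5-2630 v4 CPUs.
power = total_power_draw(n_gpus=4, gpu_w=250, cpu_w_total=2 * 85)  # -> 1183 W
kwh, co2 = training_footprint(hours=515.00, power_w=power)
print(f"{power} W, {kwh:.2f} kWh, {co2:.2f} kg CO2e")
# -> 1183 W, 962.61 kWh, 49.09 kg CO2e, matching Table 7
```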
We do not report the power consumption or the carbon footprint of training the UDPipe 2.0 architecture, as each model took less than 4 hours to train on a machine using a single NVIDIA Tesla V100 card. Also, this machine was shared during training time, so it would be extremely difficult to accurately estimate the power consumption of these models. Even though it would have been interesting to replicate all our experiments and computational cost estimations with state-of-the-art fine-tuning models such as BERT, XLNet, RoBERTa or ALBERT, we recall that these transformer-based architectures are extremely costly to train, as noted by the BERT authors on the official BERT GitHub repository,21 and are currently beyond the scope of our computational infrastructure. However we believe that ELMo contextualized word embeddings remain a useful model that still provide an extremely good trade-off between performance to training cost, even setting new state-of-the-art scores in parsing and POS tagging for our five chosen languages, performing even better than the multilingual mBERT model. 6 Conclusions In this paper, we have explored the use of the Common-Crawl-based OSCAR corpora to train ELMo contextualized embeddings for five typologically diverse mid-resource languages. We have compared them with Wikipedia-based ELMo embeddings on two classical NLP tasks, POS tagging 20https://www.rte-france.com/fr/ eco2mix/eco2mix-co2 21https://github.com/google-research/ bert and parsing, using state-of-the-art neural architectures. Our goal was to explore whether the noisiness level of Common Crawl data, often invoked to criticize the use of such data, could be compensated by its larger size; for some languages, the OSCAR corpus is several orders of magnitude larger than the corresponding Wikipedia. Firstly, we found that when properly filtered, Common Crawl data is not massively noisier than Wikipedia. Secondly, we show that embeddings trained using OSCAR data consistently outperform Wikipedia-based embeddings, to the extent that they allow us to improve the state of the art in POS tagging and dependency parsing for all the 6 chosen treebanks. Thirdly, we observe that more training epochs generally results in better embeddings even when the training data is relatively small, as is the case for Wikipedia. Our experiments show that Common-Crawlbased data such as the OSCAR corpus can be used to train high-quality contextualized embeddings, even for languages for which more standard textual resources lack volume or genre variety. This could result in better performances in a number of NLP tasks for many non highly resourced languages. Acknowledgments We want to thank Ganesh Jawahar for his insightful comments and suggestions during the early stages of this project. This work was partly funded by the French national ANR grant BASNUM (ANR-18-CE38-0003), as well as by the last author’s chair in the PRAIRIE institute,22 funded by the French national ANR as part of the “Investissements d’avenir” programme under the reference ANR-19-P3IA-0001. The authors are grateful to Inria Sophia Antipolis - Méditerranée “Nef”23 computation cluster for providing resources and support. References Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual string embeddings for sequence labeling. In Proceedings of the 27th International Conference on Computational Linguistics, COLING 2018, Santa Fe, New Mexico, USA, August 20-26, 2018, pages 1638–1649. Association for Computational Linguistics. 
Rami Al-Rfou, Bryan Perozzi, and Steven Skiena. 2013. Polyglot: Distributed word representations 22http://prairie-institute.fr/ 23https://wiki.inria.fr/wikis/ ClustersSophia 1712 for multilingual NLP. In Proceedings of the Seventeenth Conference on Computational Natural Language Learning, pages 183–192, Sofia, Bulgaria. Association for Computational Linguistics. Bernd Bohnet, Ryan McDonald, Gonçalo Simões, Daniel Andor, Emily Pitler, and Joshua Maynez. 2018. Morphosyntactic tagging with a metaBiLSTM model over context sensitive token encodings. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2642–2652, Melbourne, Australia. Association for Computational Linguistics. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146. Matthias Buch-Kromann. 2003. The danish dependency treebank and the dtag treebank tool. In 2nd Workshop on Treebanks and Linguistic Theories (TLT), Sweden, pages 217–220. Branden Chan, Timo Möller, Malte Pietsch, Tanay Soni, and Chin Man Yeung. 2019. German BERT. https://deepset.ai/german-bert. Wanxiang Che, Yijia Liu, Yuxuan Wang, Bo Zheng, and Ting Liu. 2018. Towards better UD parsing: Deep contextualized word embeddings, ensemble, and treebank concatenation. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 55–64, Brussels, Belgium. Association for Computational Linguistics. Spencer Desrochers, Chad Paradis, and Vincent M. Weaver. 2016. A validation of dram rapl power measurements. In Proceedings of the Second International Symposium on Memory Systems, MEMSYS ’16, page 455–470, New York, NY, USA. Association for Computing Machinery. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv e-prints, page arXiv:1810.04805. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Multilingual BERT. https://github.com/google-research/ bert/blob/master/multilingual.md. Timothy Dozat and Christopher D. Manning. 2017. Deep biaffine attention for neural dependency parsing. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. Timothy Dozat, Peng Qi, and Christopher D. Manning. 2017. Stanford’s graph-based neural dependency parser at the CoNLL 2017 shared task. In Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 20–30, Vancouver, Canada. Association for Computational Linguistics. Edouard Grave, Piotr Bojanowski, Prakhar Gupta, Armand Joulin, and Tomas Mikolov. 2018. Learning word vectors for 157 languages. In Proceedings of the 11th Language Resources and Evaluation Conference, Miyazaki, Japan. European Language Resource Association. Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780. Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Hervé Jégou, and Tomas Mikolov. 2016. Fasttext.zip: Compressing text classification models. CoRR, abs/1612.03651. Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2017. Bag of tricks for efficient text classification. 
In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 427–431, Valencia, Spain. Association for Computational Linguistics. Dan Kondratyuk and Milan Straka. 2019. 75 Languages, 1 Model: Parsing Universal Dependencies Universally. arXiv e-prints, page arXiv:1904.02099. Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv eprints, page arXiv:1909.11942. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692. Louis Martin, Benjamin Muller, Pedro Javier Ortiz Suárez, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah, and Benoît Sagot. 2019. CamemBERT: a Tasty French Language Model. arXiv e-prints, page arXiv:1911.03894. Rada Mihalcea. 2007. Using Wikipedia for automatic word sense disambiguation. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference, pages 196–203, Rochester, New York. Association for Computational Linguistics. Tomas Mikolov, Edouard Grave, Piotr Bojanowski, Christian Puhrsch, and Armand Joulin. 2018. Advances in pre-training distributed word representations. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation, LREC 2018, Miyazaki, Japan, May 7-12, 2018. 1713 Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. In Proceedings of the 26th International Conference on Neural Information Processing Systems Volume 2, NIPS’13, pages 3111–3119, USA. Curran Associates Inc. Joakim Nivre, Mitchell Abrams, Željko Agi´c, Lars Ahrenberg, Lene Antonsen, Maria Jesus Aranzabe, Gashaw Arutie, Masayuki Asahara, Luma Ateyah, Mohammed Attia, Aitziber Atutxa, Liesbeth Augustinus, Elena Badmaeva, Miguel Ballesteros, Esha Banerjee, Sebastian Bank, Verginica Barbu Mititelu, John Bauer, Sandra Bellato, Kepa Bengoetxea, Riyaz Ahmad Bhat, Erica Biagetti, Eckhard Bick, Rogier Blokland, Victoria Bobicev, Carl Börstell, Cristina Bosco, Gosse Bouma, Sam Bowman, Adriane Boyd, Aljoscha Burchardt, Marie Candito, Bernard Caron, Gauthier Caron, Gül¸sen Cebiro˘glu Eryi˘git, Giuseppe G. A. 
Celano, Savas Cetin, Fabricio Chalub, Jinho Choi, Yongseok Cho, Jayeol Chun, Silvie Cinková, Aurélie Collomb, Ça˘grı Çöltekin, Miriam Connor, Marine Courtin, Elizabeth Davidson, Marie-Catherine de Marneffe, Valeria de Paiva, Arantza Diaz de Ilarraza, Carly Dickerson, Peter Dirix, Kaja Dobrovoljc, Timothy Dozat, Kira Droganova, Puneet Dwivedi, Marhaba Eli, Ali Elkahky, Binyam Ephrem, Tomaž Erjavec, Aline Etienne, Richárd Farkas, Hector Fernandez Alcalde, Jennifer Foster, Cláudia Freitas, Katarína Gajdošová, Daniel Galbraith, Marcos Garcia, Moa Gärdenfors, Kim Gerdes, Filip Ginter, Iakes Goenaga, Koldo Gojenola, Memduh Gökırmak, Yoav Goldberg, Xavier Gómez Guinovart, Berta Gonzáles Saavedra, Matias Grioni, Normunds Gr¯uz¯ıtis, Bruno Guillaume, Céline Guillot-Barbance, Nizar Habash, Jan Hajiˇc, Jan Hajiˇc jr., Linh Hà M˜y, Na-Rae Han, Kim Harris, Dag Haug, Barbora Hladká, Jaroslava Hlaváˇcová, Florinel Hociung, Petter Hohle, Jena Hwang, Radu Ion, Elena Irimia, Tomáš Jelínek, Anders Johannsen, Fredrik Jørgensen, Hüner Ka¸sıkara, Sylvain Kahane, Hiroshi Kanayama, Jenna Kanerva, Tolga Kayadelen, Václava Kettnerová, Jesse Kirchner, Natalia Kotsyba, Simon Krek, Sookyoung Kwak, Veronika Laippala, Lorenzo Lambertino, Tatiana Lando, Septina Dian Larasati, Alexei Lavrentiev, John Lee, Phương Lê H`ông, Alessandro Lenci, Saran Lertpradit, Herman Leung, Cheuk Ying Li, Josie Li, Keying Li, KyungTae Lim, Nikola Ljubeši´c, Olga Loginova, Olga Lyashevskaya, Teresa Lynn, Vivien Macketanz, Aibek Makazhanov, Michael Mandl, Christopher Manning, Ruli Manurung, C˘at˘alina M˘ar˘anduc, David Mareˇcek, Katrin Marheinecke, Héctor Martínez Alonso, André Martins, Jan Mašek, Yuji Matsumoto, Ryan McDonald, Gustavo Mendonça, Niko Miekka, Anna Missilä, C˘at˘alin Mititelu, Yusuke Miyao, Simonetta Montemagni, Amir More, Laura Moreno Romero, Shinsuke Mori, Bjartur Mortensen, Bohdan Moskalevskyi, Kadri Muischnek, Yugo Murawaki, Kaili Müürisep, Pinkey Nainwani, Juan Ignacio Navarro Horñiacek, Anna Nedoluzhko, Gunta Nešpore-B¯erzkalne, Lương Nguy˜ên Thi., Huy`ên Nguy˜ên Thi. Minh, Vitaly Nikolaev, Rattima Nitisaroj, Hanna Nurmi, Stina Ojala, Adédayò. 
Olúòkun, Mai Omura, Petya Osenova, Robert Östling, Lilja Øvrelid, Niko Partanen, Elena Pascual, Marco Passarotti, Agnieszka Patejuk, Siyao Peng, Cenel-Augusto Perez, Guy Perrier, Slav Petrov, Jussi Piitulainen, Emily Pitler, Barbara Plank, Thierry Poibeau, Martin Popel, Lauma Pretkalnin, a, Sophie Prévost, Prokopis Prokopidis, Adam Przepiórkowski, Tiina Puolakainen, Sampo Pyysalo, Andriela Rääbis, Alexandre Rademaker, Loganathan Ramasamy, Taraka Rama, Carlos Ramisch, Vinit Ravishankar, Livy Real, Siva Reddy, Georg Rehm, Michael Rießler, Larissa Rinaldi, Laura Rituma, Luisa Rocha, Mykhailo Romanenko, Rudolf Rosa, Davide Rovati, Valentin Ros,ca, Olga Rudina, Shoval Sadde, Shadi Saleh, Tanja Samardži´c, Stephanie Samson, Manuela Sanguinetti, Baiba Saul¯ıte, Yanin Sawanakunanon, Nathan Schneider, Sebastian Schuster, Djamé Seddah, Wolfgang Seeker, Mojgan Seraji, Mo Shen, Atsuko Shimada, Muh Shohibussirri, Dmitry Sichinava, Natalia Silveira, Maria Simi, Radu Simionescu, Katalin Simkó, Mária Šimková, Kiril Simov, Aaron Smith, Isabela Soares-Bastos, Antonio Stella, Milan Straka, Jana Strnadová, Alane Suhr, Umut Sulubacak, Zsolt Szántó, Dima Taji, Yuta Takahashi, Takaaki Tanaka, Isabelle Tellier, Trond Trosterud, Anna Trukhina, Reut Tsarfaty, Francis Tyers, Sumire Uematsu, Zdeˇnka Urešová, Larraitz Uria, Hans Uszkoreit, Sowmya Vajjala, Daniel van Niekerk, Gertjan van Noord, Viktor Varga, Veronika Vincze, Lars Wallin, Jonathan North Washington, Seyi Williams, Mats Wirén, Tsegay Woldemariam, Tak-sum Wong, Chunxiao Yan, Marat M. Yavrumyan, Zhuoran Yu, Zdenˇek Žabokrtský, Amir Zeldes, Daniel Zeman, Manying Zhang, and Hanzhi Zhu. 2018. Universal dependencies 2.2. LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics (ÚFAL), Faculty of Mathematics and Physics, Charles University. Pedro Javier Ortiz Suárez, Benoît Sagot, and Laurent Romary. 2019. Asynchronous pipeline for processing huge corpora on medium to low resource infrastructures. Challenges in the Management of Large Corpora (CMLC-7) 2019, page 9. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, Doha, Qatar. Association for Computational Linguistics. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237, New Orleans, Louisiana. Association for Computational Linguistics. 1714 Slav Petrov, Dipanjan Das, and Ryan T. McDonald. 2012. A universal part-of-speech tagset. In Proceedings of the Eighth International Conference on Language Resources and Evaluation, LREC 2012, Istanbul, Turkey, May 23-25, 2012, pages 2089–2096. European Language Resources Association (ELRA). Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. OpenAI Blog. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1:8. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. 
Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. arXiv e-prints, page arXiv:1910.10683. Aaron Smith, Bernd Bohnet, Miryam de Lhoneux, Joakim Nivre, Yan Shao, and Sara Stymne. 2018. 82 treebanks, 34 models: Universal dependency parsing with multi-treebank models. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 113–123, Brussels, Belgium. Association for Computational Linguistics. Milan Straka. 2018. UDPipe 2.0 prototype at CoNLL 2018 UD shared task. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 197–207, Brussels, Belgium. Association for Computational Linguistics. Milan Straka and Jana Straková. 2017. Tokenizing, POS tagging, lemmatizing and parsing UD 2.0 with UDPipe. In Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 88–99, Vancouver, Canada. Association for Computational Linguistics. Milan Straka, Jana Straková, and Jan Hajic. 2019. Evaluating contextualized embeddings on 54 languages in POS tagging, lemmatization and dependency parsing. CoRR, abs/1908.07448. Emma Strubell, Ananya Ganesh, and Andrew McCallum. 2019. Energy and policy considerations for deep learning in NLP. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3645–3650, Florence, Italy. Association for Computational Linguistics. Mariona Taulé, Maria Antònia Martí, and Marta Recasens. 2008. Ancora: Multilevel annotated corpora for catalan and spanish. In Proceedings of the International Conference on Language Resources and Evaluation, LREC 2008, 26 May - 1 June 2008, Marrakech, Morocco. European Language Resources Association. Trieu H. Trinh and Quoc V. Le. 2018. A simple method for commonsense reasoning. CoRR, abs/1806.02847. Guillaume Wenzek, Marie-Anne Lachaux, Alexis Conneau, Vishrav Chaudhary, Francisco Guzmán, Armand Joulin, and Edouard Grave. 2019. CCNet: Extracting High Quality Monolingual Datasets from Web Crawl Data. arXiv e-prints, page arXiv:1911.00359. Fei Wu and Daniel S. Weld. 2010. Open information extraction using Wikipedia. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 118–127, Uppsala, Sweden. Association for Computational Linguistics. Daniel Zeman, Jan Hajiˇc, Martin Popel, Martin Potthast, Milan Straka, Filip Ginter, Joakim Nivre, and Slav Petrov. 2018. CoNLL 2018 shared task: Multilingual parsing from raw text to universal dependencies. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 1–21, Brussels, Belgium. Association for Computational Linguistics. A Appendix A.1 Number of training steps for each checkpoint and each corpus Language 1 Epoch 3 Epochs 5 Epochs 10 Epochs Wikipedia-Based ELMos Bulgarian 6,268 18,804 31,340 62,680 Catalan 20,666 61,998 103,330 206,660 Danish 5,922 17,766 29,610 59,220 Finnish 8,763 26,289 43,815 87,630 Indonesian 7,891 23,673 39,455 78,910 OSCAR-Based ELMos Bulgarian 143,169 429,507 715,845 1,431,690 Catalan 81,156 243,468 405,780 811,560 Danish 81,156 243,468 405,780 811,560 Finnish 181,230 543,690 906,150 1,812,300 Indonesian 263,830 791,490 1,319,150 2,638,300 Table 8: Number of training steps for each checkpoint, for the ELMoWikipedia and ELMoOSCAR of each language.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1715–1724 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 1715 Will-They-Won’t-They: A Very Large Dataset for Stance Detection on Twitter Costanza Conforti1, Jakob Berndt2, Mohammad Taher Pilehvar1,3, Chryssi Giannitsarou2, Flavio Toxvaerd2, Nigel Collier1 1 Language Technology Lab, University of Cambridge 2 Faculty of Economics, University of Cambridge 3 Tehran Institute for Advanced Studies, Iran {cc918,jb2088,mp792,cg349,fmot2,nhc30}@cam.ac.uk Abstract We present a new challenging stance detection dataset, called Will-They-Won’t-They1 (WT– WT), which contains 51,284 tweets in English, making it by far the largest available dataset of the type. All the annotations are carried out by experts; therefore, the dataset constitutes a high-quality and reliable benchmark for future research in stance detection. Our experiments with a wide range of recent state-of-the-art stance detection systems show that the dataset poses a strong challenge to existing models in this domain. The entire dataset is released for future research2. 1 Introduction Apart from constituting an interesting task on its own, stance detection has been identified as a crucial sub-step towards many other NLP tasks (Mohammad et al., 2017). In fact, stance detection is the core component of fake news detection (Pomerleau and Rao, 2017), fact-checking (Vlachos and Riedel, 2014; Baly et al., 2018), and rumor verification (Zubiaga et al., 2018b). Despite its importance, stance detection suffers from the lack of a large dataset which would allow for reliable comparison between models. We aim at filling this gap by presenting Will-They-Won’tThey (WT–WT), a large dataset of English tweets targeted at stance detection for the rumor verification task. We constructed the dataset based on tweets, since Twitter is a highly relevant platform for rumour verification, which is popular with the public as well as politicians and enterprises (Gorrell et al., 2019). To make the dataset representative of a realistic scenario, we opted for a real-world application 1https://en.wiktionary.org/wiki/will-they-won%27t-they 2https://github.com/cambridge-wtwt/ acl2020-wtwt-tweets of the rumor verification task in finance. Specifically, we constructed the dataset based on tweets that discuss mergers and acquisition (M&A) operations between companies. M&A is a general term that refers to various types of financial transactions in which the ownership of companies are transferred. An M&A process has many stages that range from informal talks to the closing of the deal. The discussions between companies are usually not publicly disclosed during the early stages of the process (Bruner and Perella, 2004; Piesse et al., 2013). In this sense, the analysis of the evolution of opinions and concerns expressed by users about a possible M&A deal, from its early stage to its closing (or its rejection) stage, is a process similar to rumor verification (Zubiaga et al., 2018a). Moreover, despite the wide interest, most research in the intersection of NLP and finance has so far focused on sentiment analysis, text mining and thesauri/taxonomy generation (Fisher et al., 2016; Hahn et al., 2018; El-Haj et al., 2018). 
While sentiment (Chan and Chong, 2017) and targetedsentiment analysis (Chen et al., 2017) have an undisputed importance for analyzing financial markets, research in stance detection takes on a crucial role: in fact, being able to model the market’s perception of the merger might ultimately contribute to explaining stock price re-valuation. We make the following three contributions. Firstly, we construct and release WT–WT, a large, expert-annotated Twitter stance detection dataset. With its 51,284 tweets, the dataset is an order of magnitude larger than any other stance detection dataset of user-generated data, and could be used to train and robustly compare neural models. To our knowledge, this is the first resource for stance in the financial domain. Secondly, we demonstrate the utility of the WT–WT dataset by evaluating 11 competitive and state-of-the-art stance detection models on our benchmark. Thirdly, we annotate a further 1716 M&A Buyer Target Outcome CVS_AET CVS Health Aetna Succeeded CI_ESRX Cigna Express Scripts Succeeded ANTM_CI Anthem Cigna Blocked AET_HUM Aetna Humana Blocked DIS_FOXA Disney 21st Century Fox Succeeded Table 1: Considered M&A operations. Note that AET and CI appear both as buyers and as targets. M&A operation in the entertainment domain; we investigate the robustness of best-performing models on this operation, and show that such systems struggle even over small domain shifts. The entire dataset is released to enable research in stance detection and domain adaptation. 2 Building the WT–WT Dataset We consider five recent operations, 4 in the healthcare and 1 in the entertainment industry (Table 1). 2.1 Data Retrieval For each operation, we used Selenium3 to retrieve IDs of tweets with one of the following sets of keywords: mentions of both companies’ names or acronyms, and mentions of one of the two companies with a set of merger-specific terms (refer to Appendix A.1 for further details). Based on historically available information about M&As, we sampled messages from one year before the proposed merger’s date up to six months after the merger took place. Finally, we obtain the text of a tweet by crawling for its ID using Tweepy4. 2.2 Task Definition and Annotation Guidelines The annotation process was preceded by a pilot annotation, after which the final annotation guidelines were written in close collaboration with three domain experts. We followed the convention in Twitter stance detection (Mohammad et al., 2017) and considered three stance labels: support, refute and comment. We also added an unrelated tag, obtaining the following label set: 1. Support: the tweet is stating that the two companies will merge. [CI_ESRX] Cigna to acquire Express Scripts for $52B in health care shakeup via usatoday 3www.seleniumhq.org 4www.tweepy.org/ 2. Refute: the tweet is voicing doubts that the two companies will merge. [AET_HUM] Federal judge rejects Aetna’s bid to buy Louisville-based Humana for $34 billion 3. Comment: the tweet is commenting on merger, neither directly supporting, nor refuting it. [CI_ESRX] Cigna-Express Scripts deal unlikely to benefit consumers 4. Unrelated: the tweet is unrelated to merger. [CVS_AET] Aetna Announces Accountable Care Agreement with Weill Cornell Physicians The obtained four-class annotation schema is similar to those in other corpora for news stance detection (Hanselowski et al., 2018; Baly et al., 2018). 
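As a concrete illustration of the retrieval step in Section 2.1, the sketch below builds the two families of keyword queries described there: mentions of both companies' names or acronyms, and mentions of one company together with a merger-specific term (the term list follows Appendix A.1). The alias lists, the helper name, and the plain-string query format are illustrative assumptions rather than the crawling code actually used; the date filters (one year before the proposed merger up to six months after it took place) would be applied on top of these queries.

# Sketch of query construction for one M&A operation (hypothetical helper).
# The merger-specific terms follow Appendix A.1; aliases and query format are assumptions.
MERGER_TERMS = ["merge", "acquisition", "agreement", "acquire",
                "takeover", "buyout", "integration"]

def build_queries(buyer_aliases, target_aliases):
    queries = []
    # (a) mentions of both companies' names or acronyms
    for b in buyer_aliases:
        for t in target_aliases:
            queries.append(f"{b} {t}")
    # (b) mentions of one company plus a merger-specific term
    for alias in buyer_aliases + target_aliases:
        for term in MERGER_TERMS:
            queries.append(f"{alias} {term}")
    return queries

# Example for the CVS_AET operation (aliases are hypothetical):
queries = build_queries(["CVS", "CVS Health"], ["Aetna", "AET"])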
Note that, depending on the given target, the same sample can receive a different stance label: • Merger hopes for Aetna-Humana remain, Anthem-Cigna not so much. [AET_HUM] →support [ANTM_CI] →refute As observed in Mohammad et al. (2017), stance detection is different but closely related to targeted sentiment analysis, which considers the emotions conveyed in a text (Alhothali and Hoey, 2015). To highlight this subtle difference, consider the following sample: • [CVS_AET] #Cancer patients will suffer if @CVSHealth buys @Aetna CVS #PBM has resulted in delfays in therapy, switches, etc – all documented. Terrible! While its sentiment towards the target operation is negative (the user believes that the merger will be harmful for patients), following the guidelines, its stance should be labeled as comment: the user is talking about the implications of the operation, without expressing the orientation that the merger will happen (or not). Refer to Appendix A.2 for a detailed description of the four considered labels. 2.3 Data Annotation During the annotation process, each tweet was independently labeled by 2 to 6 annotators. Ten experts in the financial domain were employed as annotators5. Annotators received tweets in batches of 2,000 samples at a time, and were asked to annotate no more than one batch per week. The entire annotation process lasted 4 months. In case of disagreement, the gold label was obtained through 5Two MPhil, six PhD students and two lecturers at the Faculty of Economics of the University of Cambridge 1717 Label Healthcare Entertainment CVS_AET CI_ESRX ANTM_CI AET_HUM DIS_FOXA # samples % # samples % # samples % # samples % # samples % support 2,469 21.24 773 30.58 0970 08.78 1,038 13.14 01,413 07.76 refute 518 04.45 253 10.01 1,969 17.82 1,106 14.00 0 378 02.07 comment 5,520 47.49 947 37.47 3,098 28.05 2,804 35.50 0 8,495 46.69 unrelated 3,115 26.80 554 21.92 5,007 45.33 2,949 37.34 0 7,908 43.46 total 11,622 02,527 11,622 07,897 18,194 Table 2: Label distribution across different M&A operations (Table 1): four mergers in the healthcare domain (33,090 tweets) and one merger in the entertainment domain. The total number of tweets is: 51,284. total twt avg twt/target Mohammad et al. (2016b) 4,870 811 Inkpen et al. (2017) 4,455 1,485 Aker et al. (2017) 401 401 Derczynski et al. (2017) 5,568 696 Gorrell et al. (2019) (only Twitter) 6,634 829 WT–WT 51,284 10,256 Table 3: Statistics of Twitter stance detection datasets. majority vote, discarding samples where this was not possible (0.2% of the total). 2.4 Quality Assessment The average Cohen’s κ between the annotator pairs6 0.67, which is substantial (Cohen, 1960). To estimate the quality of the obtained corpus, a further domain-expert labeled a random sample of 3,000 tweets, which were used as human upperbound for evaluation (Table 4). Cohen’s κ between those labels and the gold is 0.88. This is well above the agreement obtained in previously released datasets where crowd-sourcing was used (the agreement scores reported, in terms of percentage, range from 63.7% (Derczynski et al., 2017) to 79.7% (Inkpen et al., 2017)). Support-comment samples constitute the most common source of disagreement between annotators: this might indicate that such samples are the most subjective to discriminate, and might also contribute to explain the high number of misclassifications between those classes which have been observed in other research efforts on stance detection (Hanselowski et al., 2018). Moreover, w.r.t. 
stance datasets where unrelated samples were randomly generated (Pomerleau and Rao, 2017; Hanselowski et al., 2018), we report a slightly 6The average κ was weighted by the number of samples annotated by each pair. The standard deviation of the κ scores between single annotator pairs is 0.074. higher disagreement between unrelated and comment samples, indicating that our task setting is more challenging. 2.5 Label Distribution The distribution of obtained labels for each operation is reported in Table 2. Differences in label distribution between events are usual, and have been observed in other stance corpora (Mohammad et al., 2016a; Kochkina et al., 2018). For most operations, there is a clear correlation between the relative proportion of refuting and supporting samples and the merger being approved or blocked by the US Department of Justice. Commenting tweets are more frequent than supporting over all operations: this is in line with previous findings in financial microblogging (Žnidaršiˇc et al., 2018). 2.6 Comparison with Existing Corpora The first dataset for Twitter stance detection collected 4,870 tweets on 6 political events (Mohammad et al., 2016a) and was later used in SemEval2016 (Mohammad et al., 2016b). Using the same annotation schema, Inkpen et al. (2017) released a corpus on the 2016 US election annotated for multi-target stance. In the scope of PHEME, a large project on rumor resolution (Derczynski and Bontcheva, 2014), Zubiaga et al. (2015) stanceannotated 325 conversational trees discussing 9 breaking news events. The dataset was used in RumourEval 2017 (Derczynski et al., 2017) and was later extended with 1,066 tweets for RumourEval 2019 (Gorrell et al., 2019). Following the same procedure, Aker et al. (2017) annotated 401 tweets on mental disorders (Table 3). This makes the proposed dataset by far the largest publicly available dataset for stance detection on user-generated data. In contrast with Mohammad et al. (2016a), Inkpen et al. (2017) and 1718 Macro F1 across healthcare opertations Average per-class accuracy Encoder CVS_AET CI_ESRX ANTM_CI AET_HUM avgF1 avgwF1 sup ref com unr SVM 51.0 51.0 65.7 65.0 58.1 58.5 54.5 43.9 41.2 88.4 MLP 46.5 46.6 57.6 59.7 52.6 52.7 55.7 40.3 48.6 68.1 EmbAvg 50.4 51.9 50.4 58.9 52.9 52.3 55.2 50.5 52.7 67.4 CharCNN 49.6 48.3 65.6 60.9 56.1 56.8 55.5 44.2 41.6 82.1 WordCNN 46.3 39.5 56.8 59.4 50.5 51.7 62.9 37.0 31.0 71.7 BiCE 56.5 52.5 64.9 63.0 59.2 60.1 61.0 48.7 45.1 79.9 CrossNet 59.1 54.5 65.1 62.3 60.2 61.1 63.8 48.9 50.5 75.8 SiamNet 58.3 54.4 68.7 67.7 62.2 63.1 67.0 48.0 52.5 78.3 CoMatchAtt 54.7 43.8 50.8 50.6 49.9 51.6 71.9 24.4 33.7 65.9 TAN 56.0 55.9 66.2 66.7 61.2 61.3 66.1 49.0 51.7 74.1 HAN 56.4 57.3 66.0 67.3 61.7 61.7 67.6 52.0 55.2 69.1 mean 53.1 50.5 61.6 62.0 − − 61.9 44.2 45.8 74.6 upperbound 75.3 71.2 74.4 73.7 74.7 75.2 80.5 89.6 71.8 84.0 Table 4: Results on the healthcare operations in the WT–WT dataset. Macro F1 scores are obtained by testing on the target operation while training on the other three. avgF1 and avgwF1 are, respectively, the unweighted and weighted (by operations size) average of all operations. PHEME, where crowd-sourcing was used, only highly skilled domain experts were involved in the annotation process of our dataset. Moreover, previous work on stance detection focused on a relatively narrow range of mainly political topics: in this work, we widen the spectrum of considered domains in the stance detection research with a new financial dataset. 
For these reasons, the WT–WT dataset constitutes a high quality and robust benchmark for the research community to train and compare performance of models and their scalability, as well as for research on domain adaptation. Its large size also allows for pre-trainining of models, before moving to domain with data-scarcity. 3 Experiments and Results We re-implement 11 architectures recently proposed for stance detection. Each system takes as input a tweet and the related target, represented as a string with the two considered companies. A detailed description of the models, with references to the original papers, can be found in Appendix B.1. Each architecture produces a single vector representation h for each input sample. Given h, we predict ˆy with a softmax operation over the 4 considered labels. 3.1 Experimental Setup We perform common preprocessing steps, such as URL and username normalization (see Appendix B.2). All hyper-parameters are listed in Appendix B.1 for replication. In order to allow for a fair comparison between models, they are all initialized with Glove embeddings pretrained on Twitter7 (Pennington et al., 2014), which are shared between tweets and targets and kept fixed during training. 3.2 Results and Discussion Results of experiments are reported in Table 4. Despite its simple architecture, SiamNet obtains the best performance in terms of both averaged and weighted averaged F1 scores. In line with previous findings (Mohammad et al., 2017), the SVM model constitutes a very strong and robust baseline. The relative gains in performance of CrossNet w.r.t. BiCE, and of HAN w.r.t. TAN, consistently reflect results obtained by such models on the SemEval 2016-Task 6 corpus (Xu et al., 2018; Sun et al., 2018). Moving to single labels classification, analysis of the confusion matrices shows a relevant number of misclassifications between the support and comment classes. Those classes have been found difficult to discriminate in other datasets as well (Hanselowski et al., 2018). The presence of linguistic features, as in the HAN model, may help in spotting the nuances in the tweet’s argumentative structure which allow for its correct classification. This may hold true also for the refute class, the least common and most difficult to discriminate. Unrelated samples in WT–WT could be about the involved companies, but not about their merger: this makes classification more challenging than in datasets containing randomly generated unre7https://nlp.stanford.edu/projects/ 1719 lated samples (Pomerleau and Rao, 2017). SVM and CharCNN obtain the best performance on unrelated samples: this suggests the importance of character-level information, which could be better integrated into future architectures. Concerning single operations, CVS_AET and CI_ESRX have the lowest average performance across models. This is consistent with higher disagreement among annotators for the two mergers. 3.3 Robustness over Domain Shifts We investigate the robustness of SiamNet, the best model in our first set of experiments, and BiCE, which constitutes a simpler neural baseline (Section 3.2), over domain shifts with a cross-domain experiment on an M&A event in the entertainment business. Data. We collected data for the Disney-Fox (DIS_FOXA) merger and annotated them with the same procedure as in Section 2, resulting in a total of 18,428 tweets. The obtained distribution is highly skewed towards the unrelated and comment class (Table 2). 
This could be due to the fact that users are more prone to digress and joke when talking about the companies behind their favorite shows than when considering their health insurance providers (see Appendix A.2). train →test BiCE SiamNet acc F1 acc F1 health →health 77.69 76.08 78.51 77.38 health →ent 57.32 37.77 59.85 40.18 ent →ent 84.28 74.82 85.01 75.42 ent →health 46.45 33.62 48.99 35.25 Table 5: Domain generalization experiments across entertainment (ent) and healthcare datasets. Note that the data partitions used are different than in Table 4. Results. We train on all healthcare operations and test on DIS_FOXA (and the contrary), considering a 70-15-15 split between train, development and test sets for both sub-domains. Results show SiamNet consistently outperforming BiCE. The consistent drop in performance according to both accuracy and macro-avg F1 score, which is observed in all classes but particularly evident for commenting samples, indicates strong domain dependency and room for future research. 4 Conclusions We presented WT–WT, a large expert-annotated dataset for stance detection with over 50K labeled tweets. Our experiments with 11 strong models indicated a consistent (>10%) performance gap between the state-of-the-art and human upperbound, which proves that WT–WT constitutes a strong challenge for current models. Future research directions might explore the usage of transformer-based models, as well as of models which exploit not only linguistic but also network features, which have been proven to work well for existing stance detection datasets (Aldayel and Magdy, 2019). Also, the multi-domain nature of the dataset enables future research in cross-target and crossdomain adaptation, a clear weak point of current models according to our evaluations. Acknowledgments We thank the anonymous reviewers of this paper for their efforts and for the constructive comments and suggestions. We gratefully acknowledge funding from the Keynes Fund, University of Cambridge (grant no. JHOQ). CC is grateful to NERC DREAM CDT (grant no. 1945246) for partially funding this work. CG and FT are thankful to the Cambridge Endowment for Research in Finance (CERF). References Ahmet Aker, Arkaitz Zubiaga, Kalina Bontcheva, Anna Kolliakou, Rob Procter, and Maria Liakata. 2017. Stance classification in out-of-domain rumours: A case study around mental health disorders. In Social Informatics - 9th International Conference, SocInfo 2017, Oxford, UK, September 13-15, 2017, Proceedings, Part II, volume 10540 of Lecture Notes in Computer Science, pages 53–64. Springer. Abeer Aldayel and Walid Magdy. 2019. Your stance is exposed! analysing possible factors for stance detection on social media. PACMHCI, 3(CSCW):205:1– 205:20. Areej Alhothali and Jesse Hoey. 2015. Good news or bad news: Using affect control theory to analyze readers’ reaction towards news articles. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1548–1558, Denver, Colorado. Association for Computational Linguistics. Isabelle Augenstein, Tim Rocktäschel, Andreas Vlachos, and Kalina Bontcheva. 2016. Stance detec1720 tion with bidirectional conditional encoding. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 876– 885. The Association for Computational Linguistics. Ramy Baly, Mitra Mohtarami, James R. 
Glass, Lluís Màrquez, Alessandro Moschitti, and Preslav Nakov. 2018. Integrating stance detection and fact checking in a unified corpus. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 2 (Short Papers), pages 21–27. Association for Computational Linguistics. Emily M. Bender, Leon Derczynski, and Pierre Isabelle, editors. 2018. Proceedings of the 27th International Conference on Computational Linguistics, COLING 2018, Santa Fe, New Mexico, USA, August 20-26, 2018. Association for Computational Linguistics. Steven Bethard, Daniel M. Cer, Marine Carpuat, David Jurgens, Preslav Nakov, and Torsten Zesch, editors. 2016. Proceedings of the 10th International Workshop on Semantic Evaluation, SemEval@NAACLHLT 2016, San Diego, CA, USA, June 16-17, 2016. The Association for Computer Linguistics. Robert F Bruner and Joseph R Perella. 2004. Applied mergers and acquisitions, volume 173. John Wiley & Sons. Samuel WK Chan and Mickey WC Chong. 2017. Sentiment analysis in financial texts. Decision Support Systems, 94:53–64. Chung-Chi Chen, Hen-Hsen Huang, and Hsin-Hsi Chen. 2017. NLG301 at semeval-2017 task 5: Finegrained sentiment analysis on financial microblogs and news. In Proceedings of the 11th International Workshop on Semantic Evaluation, SemEval@ACL 2017, Vancouver, Canada, August 3-4, 2017, pages 847–851. Association for Computational Linguistics. Jacob Cohen. 1960. A coefficient of agreement for nominal scales. Educational and psychological measurement, 20(1):37–46. Leon Derczynski and Kalina Bontcheva. 2014. Pheme: Veracity in digital social networks. In Posters, Demos, Late-breaking Results and Workshop Proceedings of the 22nd Conference on User Modeling, Adaptation, and Personalization co-located with the 22nd Conference on User Modeling, Adaptation, and Personalization (UMAP2014), Aalborg, Denmark, July 7-11, 2014., volume 1181 of CEUR Workshop Proceedings. CEUR-WS.org. Leon Derczynski, Kalina Bontcheva, Maria Liakata, Rob Procter, Geraldine Wong Sak Hoi, and Arkaitz Zubiaga. 2017. Semeval-2017 task 8: Rumoureval: Determining rumour veracity and support for rumours. In Proceedings of the 11th International Workshop on Semantic Evaluation, SemEval@ACL 2017, Vancouver, Canada, August 3-4, 2017, pages 69–76. Association for Computational Linguistics. Kuntal Dey, Ritvik Shrivastava, and Saroj Kaushik. 2018. Topical stance detection for twitter: A twophase LSTM model using attention. In Advances in Information Retrieval - 40th European Conference on IR Research, ECIR 2018, Grenoble, France, March 26-29, 2018, Proceedings, volume 10772 of Lecture Notes in Computer Science, pages 529–536. Springer. Jiachen Du, Ruifeng Xu, Yulan He, and Lin Gui. 2017. Stance classification with target-specific neural attention. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI-17, pages 3988–3994. Mahmoud El-Haj, Paul Rayson, and Andrew Moore. 2018. Proceedings of the first financial narrative processing workshop. In Proceedings of the 11th Language Resources and Evaluation Conference, Miyazaki, Japan. Ingrid E. Fisher, Margaret R. Garnsey, and Mark E. Hughes. 2016. Natural language processing in accounting, auditing and finance: A synthesis of the literature with a roadmap for future research. Int. Syst. in Accounting, Finance and Management, 23(3):157–214. 
Genevieve Gorrell, Elena Kochkina, Maria Liakata, Ahmet Aker, Arkaitz Zubiaga, Kalina Bontcheva, and Leon Derczynski. 2019. SemEval-2019 task 7: RumourEval, determining rumour veracity and support for rumours. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 845–854, Minneapolis, Minnesota, USA. Association for Computational Linguistics. Iryna Gurevych and Yusuke Miyao, editors. 2018. Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 2: Short Papers. Association for Computational Linguistics. Udo Hahn, Véronique Hoste, and Ming-Feng Tsai. 2018. Proceedings of the first workshop on economics and natural language processing. In The 56th Annual Meeting of the Association for Computational Linguistics, Melbourne, Australia. Andreas Hanselowski, Avinesh P. V. S., Benjamin Schiller, Felix Caspelherr, Debanjan Chaudhuri, Christian M. Meyer, and Iryna Gurevych. 2018. A retrospective analysis of the fake news challenge stance-detection task. In (Bender et al., 2018), pages 1859–1874. Kazi Saidul Hasan and Vincent Ng. 2013. Stance classification of ideological debates: Data, models, features, and constraints. In Sixth International Joint 1721 Conference on Natural Language Processing, IJCNLP 2013, Nagoya, Japan, October 14-18, 2013, pages 1348–1356. Asian Federation of Natural Language Processing / ACL. Diana Inkpen, Xiaodan Zhu, and Parinaz Sobhani. 2017. A dataset for multi-target stance detection. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017, Valencia, Spain, April 3-7, 2017, Volume 2: Short Papers, pages 551–557. Association for Computational Linguistics. Elena Kochkina, Maria Liakata, and Isabelle Augenstein. 2017. Turing at semeval-2017 task 8: Sequential approach to rumour stance classification with branch-lstm. CoRR, abs/1704.07221. Elena Kochkina, Maria Liakata, and Arkaitz Zubiaga. 2018. All-in-one: Multi-task learning for rumour verification. In (Bender et al., 2018), pages 3402– 3413. Saif Mohammad, Svetlana Kiritchenko, Parinaz Sobhani, Xiao-Dan Zhu, and Colin Cherry. 2016a. A dataset for detecting stance in tweets. In Proceedings of the Tenth International Conference on Language Resources and Evaluation LREC 2016, Portorož, Slovenia, May 23-28, 2016. European Language Resources Association (ELRA). Saif Mohammad, Svetlana Kiritchenko, Parinaz Sobhani, Xiao-Dan Zhu, and Colin Cherry. 2016b. Semeval-2016 task 6: Detecting stance in tweets. In (Bethard et al., 2016), pages 31–41. Saif M. Mohammad, Parinaz Sobhani, and Svetlana Kiritchenko. 2017. Stance and sentiment in tweets. ACM Trans. Internet Techn., 17(3):26:1–26:23. Jonas Mueller and Aditya Thyagarajan. 2016. Siamese recurrent architectures for learning sentence similarity. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, February 12-17, 2016, Phoenix, Arizona, USA., pages 2786–2792. AAAI Press. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543. Jenifer Piesse, Cheng-Few Lee, Lin Lin, and HsienChang Kuo. 2013. Merger and acquisition: Definitions, motives, and market responses. Encyclopedia of Finance, pages 411–420. Dean Pomerleau and Delip Rao. 2017. Fake news challenge. Benjamin Riedel, Isabelle Augenstein, Georgios P. Spithourakis, and Sebastian Riedel. 2017. 
A simple but tough-to-beat baseline for the fake news challenge stance detection task. CoRR, abs/1707.03264. T. Y. S. S. Santosh, Srijan Bansal, and Avirup Saha. 2019. Can siamese networks help in stance detection? In Proceedings of the ACM India Joint International Conference on Data Science and Management of Data, COMAD/CODS 2019, Kolkata, India, January 3-5, 2019, pages 306–309. ACM. Qingying Sun, Zhongqing Wang, Qiaoming Zhu, and Guodong Zhou. 2018. Stance detection with hierarchical attention network. In (Bender et al., 2018), pages 2399–2409. Prashanth Vijayaraghavan, Ivan Sysoev, Soroush Vosoughi, and Deb Roy. 2016. Deepstance at semeval-2016 task 6: Detecting stance in tweets using character and word-level cnns. In (Bethard et al., 2016), pages 413–419. Andreas Vlachos and Sebastian Riedel. 2014. Fact checking: Task definition and dataset construction. In Proceedings of the Workshop on Language Technologies and Computational Social Science@ACL 2014, Baltimore, MD, USA, June 26, 2014, pages 18–22. Association for Computational Linguistics. Shuohang Wang, Mo Yu, Jing Jiang, and Shiyu Chang. 2018. A co-matching model for multi-choice reading comprehension. In (Gurevych and Miyao, 2018), pages 746–751. Theresa Wilson, Janyce Wiebe, and Paul Hoffmann. 2005. Recognizing contextual polarity in phraselevel sentiment analysis. In HLT/EMNLP 2005, Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference, 6-8 October 2005, Vancouver, British Columbia, Canada, pages 347–354. The Association for Computational Linguistics. Chang Xu, Cécile Paris, Surya Nepal, and Ross Sparks. 2018. Cross-target stance classification with selfattention networks. In (Gurevych and Miyao, 2018), pages 778–783. Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alexander J. Smola, and Eduard H. Hovy. 2016. Hierarchical attention networks for document classification. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego California, USA, June 1217, 2016, pages 1480–1489. The Association for Computational Linguistics. Martin Žnidaršiˇc, Jasmina Smailovi´c, Jan Gorše, Miha Grˇcar, Igor Mozetiˇc, and Senja Pollak. 2018. Trust and doubt terms in financial tweets and periodic reports. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Paris, France. European Language Resources Association (ELRA). Arkaitz Zubiaga, Ahmet Aker, Kalina Bontcheva, Maria Liakata, and Rob Procter. 2018a. Detection 1722 and resolution of rumours in social media: A survey. ACM Comput. Surv., 51(2):32:1–32:36. Arkaitz Zubiaga, Geraldine Wong Sak Hoi, Maria Liakata, Rob Procter, and Peter Tolmie. 2015. Analysing how people orient to and spread rumours in social media by looking at conversational threads. CoRR, abs/1511.07487. Arkaitz Zubiaga, Elena Kochkina, Maria Liakata, Rob Procter, and Michal Lukasik. 2016. Stance classification in rumours as a sequential task exploiting the tree structure of social media conversations. In COLING 2016, 26th International Conference on Computational Linguistics, December 11-16, 2016, Osaka, Japan, pages 2438–2448. ACL. Arkaitz Zubiaga, Elena Kochkina, Maria Liakata, Rob Procter, Michal Lukasik, Kalina Bontcheva, Trevor Cohn, and Isabelle Augenstein. 2018b. Discourseaware rumour stance classification in social media using sequential classifiers. Inf. Process. Manage., 54(2):273–290. 
Appendix A: Dataset-related Specifications A.1 Crawling Specifications • M&A-specific terms used for crawling: one of merge, acquisition, agreement, acquire, takeover, buyout, integration + mention of a given company/acronym. • Crawl start and end dates: CVS_AET 15/02/2017 →17/12/2018 CI_ESRX 27/05/2017 →17/09/2018 ANTM_CI 01/04/2014 →28/04/2017 AET_HUM 01/09/2014 →23/01/2017 DIS_FOXA 09/07/2017 →18/04/2018 A.2 Description and Examples of the Considered Labels This is an extract from the annotation guidelines sent to the annotators. The annotation process consists of choosing one of four possible labels, given a tweet and an M&A operation. The four labels to choose from are Support, Comment, Refute, and Unrelated. Label 1: Support – If the tweet is supporting the theory that the merger is happening. Supporting tweets can be, for example, one of the following: 1. Explicitly stating that the deal is happening: →[CI_ESRX] Cigna to acquire Express Scripts for $52B in health care shakeup via usatoday 2. Stating that the deal is likely to happen: →[CVS_AET] CVS near deal to buy Aetna (Via Boston Herald) <URL> 3. Stating that the deal has been cleared: →[CVS_AET] #Breaking DOJ clears #CVS $69Billion deal for #Aetna. Label 2: Comment – If the tweet is commenting on the merger. The tweet should neither directly state that the deal is happening, nor refute this. Tweets that state the merger as a fact and then talk about, e.g. implications or consequences of the merger, should also be labelled as commenting. Commenting tweets can be, for example, one of the following: 1. Talking about implications of the deal: →[CI_ESRX] Cigna-Express Scripts deal unlikely to benefit consumers 2. Stating merger as fact and commenting on something related to the deal: →[CVS_AET] #biotechnology Looking at the CVSAetna Deal One Academic Sees Major Disruptive Potential 3. Talking about changes in one or both of the companies involved: →[CVS_AET] Great article about the impact of Epic within the CVS and Aetna Merge <URL> Label 3: Refute – This label should be chosen if the tweet is refuting that the merger is happening. Any tweet that voices doubts or mentions potential roadblocks should be labelled as refuting. Refuting tweets can be, for example, one of the following: 1. Explicitly voicing doubts about the merger: →[ANTM_CI] business: JUST IN: Cigna terminates merger agreement with Anthem 2. Questioning that the companies want to move forward: →[CI_ESRX] Why would $ESRX want a deal with $CI? 3. Talking about potential roadblocks for the merger: →[CI_ESRX] Why DOJ must block the CignaExpress Scripts merger <URL> Label 4: Unrelated – If the tweet is unrelated to the given merger. Unrelated tweets can be, for example, one of the following: 1. Talking about something unrelated to the companies involved in the merger: →[DIS_FOXA] I’m watching the Disney version of Robin Hood someone tell me how I have a crush on a cartoon fox 1723 2. Talking about the companies involved in the merger, however not about the merger: →[CVS_AET] CVS and Aetna’s combined revenue in 2016 was larger than every U.S. company’s other than Wal Mart <URL> 3. Talking about a different merger: →[CVS_AET] What are the odds and which one do you think it will be? Cigna or Humana? Aetna acquisition rumor Appendix B: Models-related Parameters B.1 Encoder’s Architectures • SVMs: linear-kernel SVM leveraging bag of ngrams (over words and characters) features. A similar simple system outperformed all 19 teams in the SemEval-Task 6 (Mohammad et al., 2017). 
• MLP: a multi-layer perceptron (MLP) with one dense layer, taking as input the concatenation of tweet’s and target’s TF-IDF representations and their cosine similarity score (similar to the model in Riedel et al. (2017)). • EmbAvg: a MLP with two dense layers, taking as input the average of the tweet’s and the target’s word embeddings. Averaging embeddings was proven to work well for Twitter data in previous papers by Zubiaga et al. (2016); Kochkina et al. (2017), who - differently than in this paper classified stream of tweets in a conversation tree. • CharCNN and WordCNN: two CNN models, one over character and one over words, following the work by Vijayaraghavan et al. (2016). • BiCE: a similar Bidirectional Conditional Encoding model to that of Augenstein et al. (2016): the tweet is processed by a BiLSTM whose forward and backward initial states are initialized with the last states of a further BiLSTM which processed the target. • CrossNet: a BiCE model augmented with selfattention and two dense layers, as in the crosstarget stance detection model (Xu et al., 2018). • SiamNet: siamese networks have been recently used for fake news stance detection (Santosh et al., 2019). Here we implement a siamese network based on a BiLSTM followed by a selfattention layer (Yang et al., 2016). The obtained tweet and target vector representations are concatenated with their similarity score (following Mueller and Thyagarajan (2016), we used the inverse exponential of the Manhattan distance). • Co-MatchAtt: we use a similar co-matching attention mechanism as in Wang et al. (2018) to connect the tweet and the target, encoded with two separated BiLSTM layers, followed by a self-attention layer (Yang et al., 2016). • TAN: a model combining a BiLSTM and a target-specific attention extractor over targetaugmented embeddings (Du et al., 2017; Dey et al., 2018), similarly as in Du et al. (2017). • HAN: we follow Sun et al. (2018) to implement a Hierarchical Attention Network, which uses two levels of attention to leverage the tweet representation along with linguistic information (sentiment, dependency and argument). SVM model Word NGrams 1, 2, 3 Char NGrams 2, 3, 4 Common to all neural models max tweet len 25 batch size 32 max epochs 70 optimizer Adam Adam learning rate 0.001 word embedding size 200 embedding dropout 0.2 TFIDF–MLP model BOW vocabulary size 3000 dense hidden layer size 100 EmbAvg model dense hidden layers size 128 WordCNN model window size 2, 3, 4 no filters 200 dropout 0.5 CharCNN model no of stacked layers 5 window size 7, 7, 3, 3, 3 no filters 256 dropout 0.2 BiCE, CrossNet, SiamNet and TAN model BiLSTM hidden size 265*2 BiLSTM recurrent dropout 0.2 HAN model max sentiment input len 10 max dependency input len 30 max argument input len 25 BiLSTM hidden size 128 Table 6: Hyperparameters used for training. Whenever reported, we used the same as in the original papers. B.2 Preprocessing Details After some preliminary experiments, we found the following preprocessing steps to perform the best: 1. Lowercasing and tokenizing using NLTK’s TwitterTokenizer8. 2. Digits and URL normalization. 8https://www.nltk.org/api/nltk.tokenize.html 1724 3. Low-frequency users have been normalized; high frequency users have been kept, stripping the ”@“ from the token. Such users included the official Twitter accounts of the companies involved in the mergers (like @askanthem), media (@wsj), official accounts of US politicians (@potus, @thejusticedept, ...) 4. The # signs have been removed from hashtags. 
We keep in the vocabulary only tokens occurring at least 3 times, resulting in 19,561 entries considering both the healthcare and entertainment industries. We use gensim to extract the TF–IDF vectors from the data9, which are used in the TFIDF–MLP model. For the HAN model, following Sun et al. (2018), we use the MPQA subjective lexicon (Wilson et al., 2005) to extract the sentiment word sequences and the Stanford Parser10 to extract the dependency sequences. We train an SVM model to predict argument labels on Hasan and Ng (2013)'s training data, and we predict the argument sentences for the WT–WT dataset, as discussed in Sun et al. (2018). 9https://radimrehurek.com/gensim/models/tfidfmodel.html 10https://nlp.stanford.edu/software/lex-parser.html
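To make the shared prediction layer of Section 3 and the SiamNet encoder of B.1 concrete, the sketch below shows one way the pieces could be wired together in PyTorch. Embedding and hidden sizes follow Table 6; the module structure, the mean pooling used in place of the self-attention layer, and the way the similarity score is concatenated are illustrative assumptions, not the implementation used in the paper.

import torch
import torch.nn as nn

class SiamNetSketch(nn.Module):
    # A minimal sketch: a shared BiLSTM encodes tweet and target, the two
    # representations are compared with exp(-L1 distance), and a softmax
    # head predicts one of the four stance labels.
    def __init__(self, vocab_size, emb_dim=200, hidden=265, n_labels=4):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)   # GloVe-initialized and frozen in the paper
        self.encoder = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(4 * hidden + 1, n_labels)

    def encode(self, ids):
        h, _ = self.encoder(self.emb(ids))
        return h.mean(dim=1)          # stand-in for the self-attention pooling of B.1

    def forward(self, tweet_ids, target_ids):
        t, g = self.encode(tweet_ids), self.encode(target_ids)
        sim = torch.exp(-torch.abs(t - g).sum(dim=1, keepdim=True))  # inverse exp. Manhattan distance
        return torch.log_softmax(self.out(torch.cat([t, g, sim], dim=1)), dim=1)

In the paper, the tweet and target encoders are followed by self-attention pooling (Yang et al., 2016), the embeddings are Twitter GloVe vectors kept fixed during training, and the similarity is the inverse exponential of the Manhattan distance following Mueller and Thyagarajan (2016).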
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1725–1744 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 1725 A Systematic Assessment of Syntactic Generalization in Neural Language Models Jennifer Hu1, Jon Gauthier1, Peng Qian1, Ethan Wilcox2, and Roger P. Levy1 1Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology 2Department of Linguistics, Harvard University {jennhu,pqian,rplevy}@mit.edu [email protected], [email protected] Abstract While state-of-the-art neural network models continue to achieve lower perplexity scores on language modeling benchmarks, it remains unknown whether optimizing for broad-coverage predictive performance leads to human-like syntactic knowledge. Furthermore, existing work has not provided a clear picture about the model properties required to produce proper syntactic generalizations. We present a systematic evaluation of the syntactic knowledge of neural language models, testing 20 combinations of model types and data sizes on a set of 34 English-language syntactic test suites. We find substantial differences in syntactic generalization performance by model architecture, with sequential models underperforming other architectures. Factorially manipulating model architecture and training dataset size (1M–40M words), we find that variability in syntactic generalization performance is substantially greater by architecture than by dataset size for the corpora tested in our experiments. Our results also reveal a dissociation between perplexity and syntactic generalization performance. 1 Introduction A growing body of work advocates that assessment of neural language models should include both information-theoretic metrics, such as perplexity, as well as targeted linguistic evaluation. Benchmarks such as GLUE (Wang et al., 2019a,b) have demonstrated that neural language models trained on naturalistic corpora for next-word prediction learn representations that can yield remarkable performance on many semantic tasks. Targeted syntactic evaluations have shown that these models also implicitly capture many syntactic generalizations, ranging from subject–verb agreement Materials and code can be found at https://github. com/cpllab/syntactic-generalization. to long-distance filler–gap dependencies (Linzen et al., 2016; Marvin and Linzen, 2018; Futrell et al., 2018; Wilcox et al., 2019b). This paper aims to bring targeted evaluations of syntactic performance to scale, complementing similar developments in semantic evaluation (McCoy et al., 2019). Because the most widespread currency of evaluation for language models is perplexity—how well, on average, a model predicts a word in its context— a primary focus of this paper is the relationship between a model’s perplexity and its performance on targeted syntactic evaluations. As perplexity improves, can we expect more human-like syntactic generalization? How do training dataset size and model architecture jointly affect syntactic generalization? And what picture of models’ syntactic generalization emerges when evaluation is brought to scale, across dozens of controlled syntactic tests? In this paper we offer initial answers to these questions, systematically assessing the syntactic generalization abilities of neural language models on 34 targeted test suites (33 adapted from previously published work, and 1 novel) covering a wide range of syntactic phenomena. 
Test suites are written using a standard format that allows for flexible predictions which more closely resemble those used in psycholinguistic studies, specifically allowing for predictions about interactions among multiple testing conditions. Performance on each test suite is reported as a Syntactic Generalization (SG) score. We group test suites into six syntactic circuits based on the linguistic representations needed to achieve high performance on each suite. We train four classes of neural models and one baseline n-gram model on four datasets derived from a newswire corpus, consisting of 1, 5, 14, and 42 million tokens. While previous work has compared model architectures for a fixed dataset size (e.g. Wilcox et al., 2019b) and network sizes for a fixed architecture (e.g. van Schijndel et al., 1726 2019), our controlled regime allows us to make an apples-to-apples comparison across model architectures on a range of sizes. In addition, we evaluate several off-the-shelf models which were trained on datasets ranging up to 2 billion tokens. Our results address the three questions posed above: First, for the range of model architectures and dataset sizes tested, we find a substantial dissociation between perplexity and SG score. Second, we find a larger effect of model inductive bias than training data size on SG score, a result that accords with van Schijndel et al. (2019). Models afforded explicit structural supervision during training outperform other models: One structurally supervised model is able to achieve the same SG scores as a purely sequence-based model trained on ∼100 times the number of tokens. Furthermore, several Transformer models achieve the same SG score as a Transformer trained on ∼200 times the amount of data. Third, we find that architectures have different relative advantages across types of syntactic tests, suggesting that the tested syntactic phenomena tap into different underlying processing capacities in the models. 2 Background 2.1 Perplexity Standard language models are trained to predict the next token given a context of previous tokens. Language models are typically assessed by their perplexity, the inverse geometric mean of the joint probability of words w1, . . . , wN in a held-out test corpus C: PPL(C) = p(w1, w2, . . . wN)−1 N (1) Models with improved perplexity have also been shown to better match various human behavioral measures, such as gaze duration during reading (Frank and Bod, 2011; Fossum and Levy, 2012; Goodkind and Bicknell, 2018; Wilcox et al., 2020). However, a broad-coverage metric such as perplexity may not be ideal for assessing human-like syntactic knowledge for a variety of reasons. In principle, a sentence can appear with vanishingly low probability but still be grammatically wellformed, such as Colorless green ideas sleep furiously (Chomsky, 1957). While perplexity remains an integral part of language model evaluation, fine-grained linguistic assessment can provide both more challenging and more interpretable tests to evaluate neural models. 2.2 Targeted tests for syntactic generalization Alternatively, a language model can be evaluated on its ability to make human-like generalizations for specific syntactic phenomena (Linzen et al., 2016; Lau et al., 2017; Gulordava et al., 2018). 
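Both styles of assessment build on the same two quantities; written out explicitly, the perplexity of Equation (1) and the surprisal used in the targeted evaluations below are

\mathrm{PPL}(C) = p(w_1, w_2, \ldots, w_N)^{-\frac{1}{N}}, \qquad S(w \mid C) = -\log_2 p(w \mid C),

where C denotes the held-out corpus in the first definition and the left context of w in the second.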
The targeted syntactic evaluation paradigm (Marvin and Linzen, 2018; Futrell et al., 2019) incorporates methods from psycholinguistic experiments, designing sentences which hold most lexical and syntactic features of each sentence constant while minimally varying features that determine grammaticality or surprise characteristics of the sentence. For example, given the two strings The keys to the cabinet are on the table and *The keys to the cabinet is on the table, a model that has learned the proper subject–verb number agreement rules for English should assign a higher probability to the grammatical plural verb in the first sentence than to the ungrammatical singular verb in the second (Linzen et al., 2016). Although some targeted syntactic evaluations, such as the example discussed above, involve simple comparisons of conditional probabilities of a word in its context, other evaluations are more complex. We can demonstrate this with an evaluation of models’ “garden-pathing” behavior (Futrell et al., 2019). For example, the sentence The child kicked in the chaos found her way back home yields processing disruption for humans at the word found. This is because, up to right before that word, the part-of-speech ambiguous kicked is preferentially interpreted as the main verb of the sentence, whereas it turns out to be a passive participle in a reduced relative clause modifying child. This garden-path disambiguation effect is ameliorated by replacing kicked with forgotten, which is not part-of-speech ambiguous (B below; Trueswell et al., 1994) or by using an unreduced relative clause (C below; Ferreira and Clifton, 1986). In probabilistic language models, these garden-path disambiguation effects are well captured by word negative log probabilities, or SURPRISALS (Hale, 2001): S(w|C) = −log2 p(w|C), which are independently well-established to predict human incremental processing difficulty over several orders of magnitude in word probability (Smith and Levy, 2013). A targeted syntactic evaluation for gardenpathing is provided by comparing surprisals at the disambiguating word found in the set of four examples below (Futrell et al., 2019): (A) The child kicked in the chaos found . . . 1727 (B) The child forgotten in the chaos found ... (C) The child who was kicked in the chaos found . . . (D) The child who was forgotten in the chaos found . . . Successful human-like generalization involves three criteria: (i) found should be less surprising (i.e., more probable) in B than A; (ii) found should be more probable in C than A; (iii) the C–D surprisal difference should be smaller than the A–B surprisal difference—a 2 × 2 interaction effect on surprisal—because the syntactic disambiguation effect of not reducing the relative clause was achieved by using a part-of-speech unambiguous verb. We will use these controlled tests to help us describe and test for human-like syntactic knowledge in language models. 2.3 Related work The testing paradigm presented here differs in several crucial ways from recent, related syntactic assessments and provides complementary insights. Unlike Warstadt et al. (2019a), our approach does not involve fine-tuning, but rather assesses what syntactic knowledge is induced from the language modeling objective alone. The most closely related work is the Benchmark of Linguistic Minimal Pairs (Warstadt et al., 2020), which is a challenge set of automatically-generated sentence pairs also designed to test language models on a large set of syntactic phenomena. 
Our approach differs in important ways: we compare critical sentence regions instead of full-sentence probabilities, and employ a 2 × 2 paradigm with a strict, multi-fold success criterion inspired by psycholinguistics methodology. This allows us to factor out as many confounds as possible, such as the lexical frequency of individual tokens and low-level n-gram statistics. 3 Methods We designed a controlled paradigm for systematically testing the relationship between two design choices — model class and dataset size — and two performance metrics — perplexity and syntactic generalization capacity. Section 3.1 describes the test suites collected for our evaluation, and Sections 3.2 and 3.3 describe the datasets and model classes investigated. 3.1 Test suites We assemble a large number of test suites inspired by the methodology of experimental sentenceprocessing and psycholinguistic research. Each test suite contains a number of ITEMS (typically between 20 and 30), and each item appears in several CONDITIONS: across conditions, a given item will differ only according to a controlled manipulation designed to target a particular feature of grammatical knowledge. Each test suite contains at least one PREDICTION, which specifies inequalities between surprisal values at pairs of regions/conditions that should hold if a model has learned the appropriate syntactic generalization. We expect language models which have learned the appropriate syntactic generalizations from their input to satisfy these inequalities without further fine-tuning. We compute accuracy on a test suite as the proportion of items for which the model’s behavior conforms to the prediction. Most of our test suites involve 2×2 designs and a success criterion consisting of a conjunction of inequalities across conditions, as in the garden-pathing example described in Section 2.2.1 Random baseline accuracy varies by test suite and is ∼25% overall. Most of these test suites and criteria are designed so that n-gram models cannot perform above chance for n = 5 (sometimes greater). Syntactic coverage In order to assess the coverage of our test suites, we manually inspected the phenomena covered in Carnie (2012), a standard introductory syntax textbook. Of the 47 empirical phenomena reviewed in the summary sections at the end of each chapter, our tests target 16 (∼34%). These are evenly distributed across the whole range of subject matter, with tests targeting phenomena in 11 of the 15 chapters (∼73%).2 Modifiers Five test suites include paired modifier versions, where extra syntactically irrelevant (but semantically plausible) content, such as a prepositional phrase or relative clause, is inserted before the critical region being measured. We use these paired test suites to evaluate models’ stability to intervening content within individual syntactic tests. Circuits The test suites are divided into 6 syntactic circuits, based on the type of algorithm required to successfully process each construction. We give a brief overview of each circuit below.3 • Agreement is a constraint on the feature values of two co-varying tokens. For example, 1The exception is Center Embedding, which features a 2condition design with a single-inequality criterion. 2For more details on this analysis, see Appendix A. 3A full overview of our test suites is given in Appendix B. 1728 the number feature of a verb must agree with the number feature of its upstream subject. We include 3 Subject-Verb Number Agreement suites from Marvin and Linzen (2018). 
• Licensing occurs when a particular token must exist within the scope of an upstream licensor token. Scope is determined by the tree-structural properties of the sentence. Test suites include Negative Polarity Item Licensing (NPI) (4 suites) and Reflexive Pronoun Licensing (6 suites), both from Marvin and Linzen (2018). • Garden-Path Effects are well-studied syntactic phenomena that result from treestructural ambiguities that give rise to locallycoherent but globally implausible syntactic parses. Garden-path test suites include Main Verb / Reduced Relative Clause (MVRR) (2 suites) and NP/Z Garden-paths (NPZ) (4 suites), both from Futrell et al. (2018). • Gross Syntactic Expectation is a processor’s expectation for large syntactic chunks such as verb phrases or sentences, and are often set up by subordinating conjunctions such as while, although and despite. Our tests for gross syntactic expectation include Subordination (4 suites) from Futrell et al. (2018). • Center Embedding sentences are sentences recursively nested within each other. Subject and verbs must match in a first-in-last-out order, meaning models must approximate a stack-like data-structure in order to successfully process them. Our 2 suites of Center Embedding sentences come from the items presented in Wilcox et al. (2019a). • Long-Distance Dependencies are covariations between two tokens that span long distances in tree depth. Test suites include Filler-Gap Dependencies (FGD) (6 suites) from Wilcox et al. (2018) and Wilcox et al. (2019b), and 2 novel Cleft suites, described in detail below. Novel test suite: Cleft We introduce one novel test suite that assesses models’ ability to process pseudo-cleft constructions, which are used to put a particular syntactic constituent into focus via passive transformation. Consider Example (1): BLLIP sizes: XS SM MD LG # sentences 40K 200K 600K 1.8M # tokens 1M 4.8M 14M 42M # non-UNK types 24K 57K 100K 170K # UNK types 68 70 71 74 Table 1: Statistics of training set for each corpus size. (1) a. What he did after coming in from the rain was eat a hot meal. [DO/VP] b.*What he devoured after coming in from the rain was eat a hot meal. [LEX/VP] c.*What he did after coming in from the rain was a hot meal. [DO/NP] d. What he devoured after coming in from the rain was a hot meal. [LEX/NP] When this constituent is a verb, it must be replaced in the wh-clause that heads the sentence with the DO verb, as in (1a), below. However, when it is a noun, the lexical verb for which it serves as an object must be preserved, as in (1d). If models have properly learned the pseudo-cleft construction, then DO verbs should set up expectations for VPs (the region in bold should have a lower surprisal in (1a) than in (1b)) and lexicalized verbs should set up expectations for NPs (the region in bold should have a lower surprisal in (1d) than in (1c)). 3.2 Model training data Corpora We train and evaluate models on English newswire corpora of four different sizes, obtained by randomly sampling sections from the Brown Laboratory for Linguistic Information Processing 1987-89 Corpus Release 1 (BLLIP; Charniak et al., 2000). The corpora are sampled such that the training set of each corpus is a proper subset of each larger corpus. We call these four corpora BLLIP-XS (40K sentences, 1M tokens); BLLIP-SM (200K sentences, 5M tokens); BLLIPMD (600K sentences, 14M tokens); and BLLIP-LG (2M sentences, 42M tokens). Table 1 summarizes statistics of the training set for each corpus. 
To ensure consistency in perplexity evaluation across datasets, we report perplexity scores achieved by the models on a shared held-out test set. We additionally use a shared held-out validation for tuning and early stopping. We use the NLTK implementation of the Penn Treebank tokenizer to process all datasets (Bird and Loper, 2004; Marcus et al., 1993). 1729 # layers # hidden units Embedding size LSTM 2 256 256 ON-LSTM 3 1150 400 RNNG 2 256 256 GPT-2 12 768 768 Table 2: Size of neural models in our controlled experiments. BLLIP sizes: XS SM MD LG LSTM 13.4M 30.5M 52.2M 88.1M ON-LSTM 30.8M 44.2M 61.2M 89.2M RNNG 22.8M 48.4M 81.1M 134.9M GPT-2 124.4M 124.4M 124.4M 124.4M Table 3: Parameter counts for neural models in our controlled experiments. Out-of-vocabulary tokens For each corpus, we designate a token as OOV if the token appears fewer than two times in the training set. Our larger training datasets thus contain larger vocabularies than our smaller training datasets. This allows larger-training-set models to learn richer wordspecific information, but may also harm perplexity evaluation because they have vocabulary items that are guaranteed to not appear in the BLLIP-XS test set. This means that perplexity scores across training dataset sizes will not be strictly comparable: if a larger-training-set model does better than a smaller-training-set model, we can be confident that it has meaningfully lower perplexity, but the reverse is not necessarily the case. The exception to the above is GPT-2, which uses sub-words from byte-pair encoding and has no OOVs (see also Footnote 6). Unkification We follow the convention used by the Berkeley parser (Petrov and Klein, 2007), which maps OOVs to UNK classes which preserve fine-grained information such as orthographic case distinctions and morphological suffixes (e.g. UNK-ed, UNK-ly). Before training, we verified that the UNK classes in the test and validation sets were all present in the training set. 3.3 Model classes In order to study the effects of model inductive bias and dataset size, we trained a fleet of models with varying inductive biases on each corpus. Because many of our test suites exploit ambiguities that arise from incremental processing, we restrict evaluation to left-to-right language models; future BLLIP sizes: XS SM MD LG LSTM 98.19 65.52 59.05 57.09 ON-LSTM 71.76 54.00 56.37 56.38 RNNG 122.46 86.72 71.12 69.57 GPT-2 529.90 183.10 37.04 32.14 n-gram 240.21 158.60 125.58 106.09 Table 4: Perplexity averages achieved by each controlled model on each corpus. Perplexity scores across training dataset sizes are not always strictly comparable (see Section 3.2). work could involve evaluation of bidirectional models (Devlin et al., 2018; Yang et al., 2019) on an appropriate subset of our test suites, and/or adaptation of our suites for use with bidirectional models (Goldberg, 2019). Training ran until convergence of perplexity on a held-out validation set. Wherever possible, we trained multiple seeds of each model class and corpus size. We use the model sizes and training hyperparameters reported in the papers introducing each model (Table 2).4 The full parameter counts and perplexity scores for each model × corpus combination are given in Tables 3 and 4, respectively. LSTM Our baseline neural model is a vanilla long short-term memory network (LSTM; Hochreiter and Schmidhuber, 1997) based on the boilerplate PyTorch implementation (Paszke et al., 2017). 
Ordered-Neurons We consider the OrderedNeurons LSTM architecture (ON-LSTM; Shen et al., 2019), which encodes an explicit bias towards modeling hierarchical structure. RNNG Recurrent neural network grammars (RNNG; Dyer et al., 2016) model the joint probability of a sequence of words and its syntactic structure. RNNG requires labeled trees that contain complete constituency parses, which we produce for BLLIP sentences with an off-the-shelf constituency parser (Kitaev and Klein, 2018).5 To compute surprisals from RNNG, we use wordsynchronous beam search (Stern et al., 2017) to approximate the conditional probability of the current word given the context. 4Due to computational constraints, we performed only minimal tuning past these recommended hyperparameters. 5While the BLLIP corpus already contains Treebank-style parses, we strip the terminals and re-parse in order to obtain more accurate, up-to-date syntactic parses. 1730 GPT­2­XL * GPT­2 *Transformer­XL * JRNN *GPT­2 GRNN *RNNG ON­LSTM LSTM n­gram Model 0.0 0.2 0.4 0.6 0.8 1.0 SG score Figure 1: Average SG score by model class. Asterisks denote off-the-shelf models. Error bars denote bootstrapped 95% confidence intervals of the mean. Transformer Transformer models (Vaswani et al., 2017) have recently gained popularity in language processing tasks. We use GPT-2 (Radford et al., 2019) as a representative Transformer model and train it from scratch on our BLLIP corpora.6 n-gram As a baseline, we consider a 5-gram model with modified Kneser-Ney smoothing. 3.4 Off-the-shelf models We also test five off-the-shelf models: GRNN, trained on 90M tokens from Wikipedia (Gulordava et al., 2018); JRNN, trained on 800M tokens from the 1 Billion Word Benchmark (Jozefowicz et al., 2016); Transformer-XL, trained on 103M tokens from WikiText-103 (Dai et al., 2019); and the pretrained GPT-2 and GPT-2-XL, trained on 40GB of web text (Radford et al., 2019). These models are orders of magnitude larger than our controlled ones in parameter count and/or training set size. 4 Results Figure 1 shows the average accuracy of all models on the complete set of SG test suites. Asterisks denote off-the-shelf models. All neural models achieve a SG score significantly greater than a random baseline (dashed line). However, the range within neural models is notable, with the bestperforming model (GPT-2-XL) scoring over twice as high as the worst-performing model (LSTM). Also notable are the controlled GPT-2 and RNNG models, which achieve comparable performance to Transformer-XL and JRNN, despite being trained on significantly smaller data sizes. 6Our GPT-2 code is based on nshepperd/gpt-2. The model vocabulary consists of byte-pair encoded sub-words extracted from the GPT-2 pre-trained model, not from the BLLIP training corpora. To calculate GPT-2 perplexities, we divide the sum of all sub-word conditional log-probabilities by the total number of words in the corpus. 0 50 100 150 200 250 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 SG score 520 540 Test perplexity GPT­2 * GPT­2­XL * GRNN * JRNN * Random Transformer­XL * GPT­2 * GPT­2­XL * GRNN * JRNN * Random Transformer­XL * BLLIP­LG LSTM BLLIP­MD ON­LSTM BLLIP­SM RNNG BLLIP­XS GPT­2 n­gram Figure 2: Relationship between SG score and perplexity on our held-out BLLIP test set for each model. We now return to the three major issues presented in Section 1. In 4.1 we present evidence that SG score is dissociated from perplexity. In 4.2 we argue that model architecture accounts for larger gains in SG score than amount of training data. 
And in 4.3 we show that this cross-architecture difference is due largely to variance on a handful of key test suites. 4.1 Syntactic generalization and perplexity Figure 2 shows the relationship between SG score and perplexity on the BLLIP test set across models and training set sizes. As expected, n-gram models never rise appreciably above chance in SG score. Among neural models, GPT-2 achieves both the worst (BLLIP-XS and BLLIP-SM) and best (BLLIP-MD and BLLIP-LG) performance; the impressive performance of these latter models comes with the caveat that the sub-words come from the pre-trained GPT-2 model, tacitly importing information from a larger training dataset (see further discussion in Section 4.5). For the remaining neural models, there is no simple relationship between perplexity and SG score, especially once training dataset size is controlled for (comparing points in Figure 2 of the same color). For example, there is a remarkable amount of variance in the SG score of models trained on BLLIP-LG not explained by perplexity. This suggests that targeted syntactic evaluation can reveal information that may be orthogonal to perplexity. 1731 LSTM ON­LSTM RNNG GPT­2 n­gram Model 0.75 0.50 0.25 0.00 0.25 0.50 0.75 SG score delta BLLIP­LG BLLIP­MD BLLIP­SM BLLIP­XS Corpus LSTM ON­LSTM RNNG GPT­2 n­gram Figure 3: Main results of our controlled evaluation of model class and dataset size. SG score varies more by model class (left) than by training dataset size (right). 4.2 Inductive bias and data scale In order to decouple the effects of model class and data scale from test suite difficulty, we represent a particular trained model’s performance on each test suite as a delta relative to the average performance of all models on this test suite. Unless noted otherwise, the remainder of the figures in this section plot a score delta, aggregating these deltas within model classes or corpus types. Figure 3 tracks the influence of model class and data scale across the model types tested in our experiments, with SG score deltas on the y-axis. The left-hand panel shows the difference in SG score by model class. We find that model class clearly influences SG score: for example, the error bars (bootstrapped 95% confidence intervals of the mean) for RNNG and LSTM do not overlap. The right-hand panel shows the difference in SG score delta by training dataset, and shows a much more minor increase in mean SG score as training data increases. We tested the influence of these factors quantitatively using a linear mixed-effects regression model, predicting suite-level performance as a feature of model architecture and training dataset size (represented as log-number of words). Both features made statistically significant contributions to SG score (both p < 0.001). However, predictor ablation indicates that architecture affects regression model fit more (AIC=–581 when dataset size is ablated; AIC=–574 when architecture is ablated).7 Beyond the above analysis, our GPT-2 results offer another striking example of the influence of 7n-grams and/or GPT-2 could arguably be expected to have qualitatively different sensitivity to training dataset size (the latter due to byte-pair encoding), so we repeated the analyses here and in Section 4.3 excluding both architectures individually as well as simultaneously. In all cases the same qualitative patterns described in the main text hold. model architecture relative to data scale. 
Figure 2 shows that our controlled BLLIP-MD and BLLIPLG GPT-2 models achieve roughly the same SG score as the pre-trained GPT-2 model, despite being trained on less than 1% of the data used by the pretrained model. This suggests diminishing returns to training data scale for syntactic generalization performance. 4.3 Circuit-level effects on SG score Figure 4 shows the breakdown at the circuit level by model architecture (left) and training dataset size (right). The right panel demonstrates little effect of dataset size on SG score delta within most circuits, except for Agreement, on which the models trained on our smallest dataset fare poorly. In the left panel we find substantial between-circuit differences across architectures. Linear mixed-effects analyses support this finding: interactions with circuit are significant for both training dataset size and model architecture, but stronger for the latter (AIC=–654 and AIC=–623 when size and architecture are respectively ablated). While model inductive biases separate clearly in performance on some circuits, they have little effect on performance on Licensing. This minimally suggests that Licensing taps into a distinct syntactic process within language models. One potential explanation for this is that the interactions tested by Licensing involve tracking two co-varying tokens where the downstream token is optional (see e.g. Hu et al., 2020). We show the circuit-level breakdown of absolute SG scores for all models (including off-the-shelf) in Figure 5. In general, the models that obtain high SG scores on average (as in Figure 1) also perform well across circuits: pre-trained GPT-2 and GPT1732 Agreement Center Embedding Garden­Path Effects Gross Syntactic State Licensing Long­Distance Dependencies Circuit 0.6 0.4 0.2 0.0 0.2 0.4 SG score delta LSTM ON­LSTM RNNG GPT­2 n­gram Agreement Center Embedding Garden­Path Effects Gross Syntactic State Licensing Long­Distance Dependencies Circuit BLLIP­LG BLLIP­MD BLLIP­SM BLLIP­XS Figure 4: Controlled evaluation results, split across test suite circuits. Circuit-level differences in SG score vary more by model class (left) than by training dataset size (right). Agreement Center Embedding Garden­Path Effects Gross Syntactic State Licensing Long­Distance Dependencies Circuit 0.0 0.5 1.0 SG score GPT­2­XL * GPT­2 * Transformer­XL * JRNN * GPT­2 GRNN * RNNG ON­LSTM LSTM n­gram Figure 5: Evaluation results on all models, split across test suite circuits. 2-XL outperform all other models on each circuit, including Licensing, on which JRNN, GRNN, and most of our custom-trained models perform particularly poorly. Again, we highlight the impressive performance of RNNG: it achieves comparable average performance to GRNN on all circuits, despite being trained on a fraction of the data size. 4.4 Stability to modifiers We separately investigate the degree to which models’ syntactic generalizations are robustly stored in memory. For five test suites (Center Embedding, Cleft, MVRR, NPZ-Ambiguous, NPZ-Object), we designed minimally edited versions where syntactically irrelevant intervening content was inserted before the critical region. An ideal model should robustly represent syntactic features of its input across these modifier insertions. In Figure 6 we plot models’ average scores on these five test suites (dark bars) and their minimally edited versions (light bars), evaluating how robust each model is to intervening content. 
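The aggregation behind this comparison is simple; a minimal sketch is given below, assuming item-level results are available in tabular form. The column names and input format are assumptions, and only the pairing of the five suites with their modified versions follows the text.

```python
# Illustrative aggregation for the modifier-robustness comparison: average
# accuracy per model on the five paired suites, with and without the
# inserted modifier. Column names and the input format are assumptions.
import pandas as pd

PAIRED_SUITES = ["Center Embedding", "Cleft", "MVRR", "NPZ-Ambiguous", "NPZ-Object"]

def modifier_robustness(results: pd.DataFrame) -> pd.DataFrame:
    """results: one row per (model, suite, item) with columns
    'model', 'suite', 'has_modifier' (bool) and 'correct' (0/1)."""
    paired = results[results["suite"].isin(PAIRED_SUITES)]
    scores = (paired.groupby(["model", "has_modifier"])["correct"]
                    .mean()
                    .unstack("has_modifier"))
    scores.columns = ["no_modifier", "with_modifier"]   # False, True
    scores["drop"] = scores["no_modifier"] - scores["with_modifier"]
    return scores
```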
Among models in our controlled experiments, we see that model class clearly influences the degree to which predictions are affected by intervening content (compare e.g. the stability of RNNG to that of ON-LSTM). Some off-the-shelf models, such as GPT-2-XL, perform near ceiling on the original five test suites and are not affected at all by intervening content. GPT­2­XL * GPT­2 *Transformer­XL * JRNN *GPT­2 GRNN *RNNG ON­LSTM LSTM n­gram Model 0.0 0.2 0.4 0.6 0.8 1.0 SG score No modifier With modifier Figure 6: SG score on the pairs of test suites with and without intervening modifiers: Center Embedding, Cleft, MVRR, NPZ-Ambiguous, and NPZ-Object. 1733 4.5 Effects of model pre-processing The GPT-2 models trained and evaluated in this paper use a sub-word vocabulary learned by byte-pair encoding (BPE; Sennrich et al., 2016) to represent their inputs, while all other models represent and compute over word-level inputs. This byte-pair encoding was taken from the pre-trained GPT-2 model trained on a much larger corpus. The results reported for these models thus conflate a choice of model class (a deep Transformer architecture) and preprocessing standard (sub-word tokenization computed on a larger corpus). Some preliminary work suggests that sub-word tokenization is indeed responsible for much of the larger GPT-2 models’ success: we find that GPT-2 models trained on word-level representations of BLLIP-LG and BLLIP-MD achieve good perplexity measures, but degrade sharply in SG score. Peculiarities of the GPT-2 training regime may be responsible for its particularly bad performance on the smaller corpora. Its sub-word vocabulary was held constant across training corpora, meaning that the model vocabulary size also remained constant across corpora, unlike the other models tested. The poor performance of GPT-2 models trained on smaller corpora may thus be due to overparameterization, and not due to fundamental problems with the model architecture at small data scales. We leave a thorough investigation of the role of sub-word tokenization to future work. 5 Discussion This work addresses multiple open questions about syntactic evaluations and their relationship to other language model assessments. Our results dissociate model perplexity and performance in syntactic generalization tests, suggesting that the two metrics capture complementary features of language model knowledge. In a controlled evaluation of different model classes and datasets, we find model architecture plays a more important role than training data scale in yielding correct syntactic generalizations. Our circuit-level analysis reveals consistent failure on Licensing but inconsistent behavior on other circuits, suggesting that different syntactic circuits make use of different underlying processing capacities. In addition to the insight these results provide about neural NLP systems, they also bear on questions central to cognitive science and linguistics, putting lower bounds on what syntactic knowledge can be acquired from string input alone. Targeted syntactic evaluation is just one in a series of complementary methods being developed to assess the learning outcomes of neural language processing models. Other methods include classifying sentences as grammatical or ungrammatical (Warstadt et al., 2019b), decoding syntactic features from a model’s internal state (Belinkov et al., 2017; Giulianelli et al., 2018), or transfer learning to a strictly syntactic task such as parsing or POS tagging (Hewitt and Manning, 2019). 
As each task brings an explicit set of assumptions, complementary assessment methods can collectively provide greater insight into models’ learning outcomes. Although this paper, together with Warstadt et al. (2020), report what is to our knowledge the largestscale targeted syntactic evaluations to date, we emphasize that they are only first steps toward a comprehensive understanding of the syntactic capabilities of contemporary language models. This understanding will be further advanced by new targeted-evaluation test suites covering a still wider variety of syntactic phenomena, additional trained models with more varied hyperparameters and randomization seeds, and new architectural innovations. Humans develop extraordinary grammatical capabilities through exposure to natural linguistic input. It remains to be seen to just what extent contemporary artificial systems do the same. Acknowledgments The authors would like to thank the anonymous reviewers and Samuel R. Bowman for their feedback, Miguel Ballesteros for advice and technical guidance, and Tristan Thrush for technical assistance. J.H. is supported by the NIH under award number T32NS105587 and an NSF Graduate Research Fellowship. J.G. is supported by an Open Philanthropy AI Fellowship. R.P.L. gratefully acknowledges support from the MIT-IBM Watson AI Lab, a Google Faculty Research Award, and a Newton Brain Science Award. References Yonatan Belinkov, Nadir Durrani, Fahim Dalvi, Hassan Sajjad, and James Glass. 2017. What do neural machine translation models learn about morphology? In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 861–872. Tom Bever. 1970. The cognitive basis for linguistic structures. In J.R. Hayes, editor, Cognition and 1734 the Development of Language, pages 279–362. New York: John Wiley & Sons. Steven Bird and Edward Loper. 2004. NLTK: The natural language toolkit. In Proceedings of the ACL Interactive Poster and Demonstration Sessions, pages 214–217, Barcelona, Spain. Association for Computational Linguistics. Kathryn Bock and Carol A. Miller. 1991. Broken agreement. Cognitive Psychology, 23:45–93. Andrew Carnie. 2012. Syntax: A generative introduction, volume 18. John Wiley & Sons. Eugene Charniak, Don Blaheta, Niyu Ge, Keith Hall, John Hale, and Mark Johnson. 2000. BLLIP 198789 WSJ Corpus Release 1 LDC2000T43. Linguistic Data Consortium. Rui P. Chaves. 2020. What don’t RNN language models learn about filler-gap dependencies? In Proceedings of the Society for Computation in Linguistics. Noam Chomsky. 1957. Syntactic structures. Walter de Gruyter. Shammur Absar Chowdhury and Roberto Zamparelli. 2018. RNN simulations of grammaticality judgments on long-distance dependencies. In Proceedings of the 27th International Conference on Computational Linguistics, pages 133–144, Santa Fe, New Mexico, USA. Stephen Crain and Janet Dean Fodor. 1985. How can grammars help parsers? In David Dowty, Lauri Kartunnen, and Arnold M. Zwicky, editors, Natural Language Parsing: Psycholinguistic, Computational, and Theoretical Perspectives, pages 940–128. Cambridge: Cambridge University Press. Zihang Dai, Zhilin Yang, Yiming Yang, Jaime G. Carbonell, Quoc V. Le, and Ruslan Salakhutdinov. 2019. Transformer-XL: Attentive language models beyond a fixed-length context. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2978–2988. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. 
BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 4171– 4186, Minneapolis, Minnesota. Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. 2016. Recurrent neural network grammars. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers). Fernanda Ferreira and Charles Clifton, Jr. 1986. The independence of syntactic processing. Journal of Memory and Language, 25:348–368. Victoria Fossum and Roger P. Levy. 2012. Sequential vs. hierarchical syntactic models of human incremental sentence processing. In Proceedings of the 3rd Workshop on Cognitive Modeling and Computational Linguistics, pages 61–69. Stefan L Frank and Rens Bod. 2011. Insensitivity of the human sentence-processing system to hierarchical structure. Psychological Science, 22(6):829– 834. Lyn Frazier and Keith Rayner. 1982. Making and correcting errors during sentence comprehension: Eye movements in the analysis of structurally ambiguous sentences. Cognitive Psychology, 14:178–210. Richard Futrell, Ethan Wilcox, Takashi Morita, and Roger Levy. 2018. RNNs as psycholinguistic subjects: Syntactic state and grammatical dependency. arXiv preprint arXiv:1809.01329. Richard Futrell, Ethan Wilcox, Takashi Morita, Peng Qian, Miguel Ballesteros, and Roger Levy. 2019. Neural language models as psycholinguistic subjects: Representations of syntactic state. In Proceedings of the 18th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 32–42. Anastasia Giannakidou. 2011. Negative and positive polarity items: Variation, licensing, and compositionality. In Semantics: An international handbook of natural language meaning, volume 3, pages 1660–1712. Berlin: Mouton de Gruyter. Mario Giulianelli, Jack Harding, Florian Mohnert, Dieuwke Hupkes, and Willem Zuidema. 2018. Under the hood: Using diagnostic classifiers to investigate and improve how language models track agreement information. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 240– 248. Yoav Goldberg. 2019. Assessing BERT’s syntactic abilities. arXiv preprint arXiv:1901.05287. Adam Goodkind and Klinton Bicknell. 2018. Predictive power of word surprisal for reading times is a linear function of language model quality. In Proceedings of the 8th Workshop on Cognitive Modeling and Computational Linguistics, pages 10–18. Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, and Marco Baroni. 2018. Colorless green recurrent networks dream hierarchically. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1195–1205, New Orleans, Louisiana. John Hale. 2001. A probabilistic Earley parser as a psycholinguistic model. In Proceedings of the second 1735 meeting of the North American Chapter of the Association for Computational Linguistics on Language technologies, pages 1–8. John Hewitt and Christopher D Manning. 2019. A structural probe for finding syntax in word representations. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4129–4138. Francis Roger Higgins. 1973. The Pseudo-Cleft Construction in English. Ph.D. thesis, MIT. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780. Jennifer Hu, Sherry Yong Chen, and Roger P. Levy. 2020. A closer look at the performance of neural language models on reflexive anaphor licensing. In Proceedings of the Meeting of the Society for Computation in Linguistics. Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. 2016. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410. Nikita Kitaev and Dan Klein. 2018. Constituency parsing with a self-attentive encoder. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2676–2686, Melbourne, Australia. Association for Computational Linguistics. William Ladusaw. 1979. Polarity Sensitivity as Inherent Scope Relations. Ph.D. thesis, University of Texas at Austin. Jey Han Lau, Alexander Clark, and Shalom Lappin. 2017. Grammaticality, acceptability, and probability: A probabilistic view of linguistic knowledge. Cognitive Science, 5:1202–1247. Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of LSTMs to learn syntax-sensitive dependencies. In Transactions of the Association for Computational Linguistics, volume 4, pages 521–535. Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19:313–330. Rebecca Marvin and Tal Linzen. 2018. Targeted syntactic evaluation of language models. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1192–1202, Brussels, Belgium. Tom McCoy, Ellie Pavlick, and Tal Linzen. 2019. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3428– 3448, Florence, Italy. George A. Miller and Noam Chomsky. 1963. Finitary models of language users. In R. Duncan Luce, Robert R. Bush, and Eugene Galanter, editors, Handbook of Mathematical Psychology, volume II, pages 419–491. New York: John Wiley & Sons, Inc. Don C. Mitchell. 1987. Lexical guidance in human parsing: Locus and processing characteristics. In Max Coltheart, editor, Attention and Performance XII: The psychology of reading. London: Erlbaum. Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in PyTorch. In Neural Information Processing Systems Autodiff Workshop. Slav Petrov and Dan Klein. 2007. Improved inference for unlexicalized parsing. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference, pages 404–411, Rochester, New York. Association for Computational Linguistics. Martin J. Pickering and Matthew J. Traxler. 1998. Plausibility and recovery from garden paths: An eyetracking study. Journal of Experimental Psychology: Learning, Memory, & Cognition, 24(4):940–961. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. 
Language models are unsupervised multitask learners. Technical report. Tanya Reinhart. 1981. Definite NP anaphora and ccommand domains. Linguistic Inquiry, 12(4):605– 635. John Robert Ross. 1967. Constraints on Variables in Syntax. Ph.D. thesis, MIT. Marten van Schijndel and Tal Linzen. 2018. Modeling garden path effects without explicit hierarchical syntax. In Proceedings of the 40th Annual Meeting of the Cognitive Science Society. Marten van Schijndel, Aaron Mueller, and Tal Linzen. 2019. Quantity doesn’t buy quality syntax with neural language models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5835–5841. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational 1736 Linguistics (Volume 1: Long Papers), pages 1715– 1725, Berlin, Germany. Association for Computational Linguistics. Yikang Shen, Shawn Tan, Alessandro Sordoni, and Aaron Courville. 2019. Ordered neurons: Integrating tree structures into recurrent neural networks. In International Conference on Learning Representations. Nathaniel J. Smith and Roger P. Levy. 2013. The effect of word predictability on reading time is logarithmic. Cognition, 128:302–319. Adrian Staub. 2007. The parser doesn’t ignore intransitivity, after all. Journal of Experimental Psychology: Learning, Memory, & Cognition, 33(3):550–569. Mitchell Stern, Daniel Fried, and Dan Klein. 2017. Effective inference for generative neural parsing. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1695–1700. Laurie A Stowe. 1986. Parsing wh-constructions: Evidence for on-line gap location. Language & Cognitive Processes, 1(3):227–245. Patrick Sturt, Martin J. Pickering, and Matthew W. Crocker. 1999. Structural change and reanalysis difficulty in language comprehension. Journal of Memory and Language, 40:136–150. John C. Trueswell, Michael K. Tanenhaus, and Susan M. Garnsey. 1994. Semantic influences on parsing: Use of thematic role information in syntactic ambiguity resolution. Journal of Memory and Language, 33:285–318. Shravan Vasishth, Sven Br¨ussow, Richard L Lewis, and Heiner Drenhaus. 2008. Processing polarity: How the ungrammatical intrudes on the grammatical. Cognitive Science, 32(4):685–712. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019a. SuperGLUE: A stickier benchmark for general-purpose language understanding systems. In Advances in Neural Information Processing Systems, pages 3266–3280. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019b. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In International Conference on Learning Representations. Alex Warstadt, Alicia Parrish, Haokun Liu, Anhad Mohananey, Wei Peng, Sheng-Fu Wang, and Samuel R Bowman. 2020. BLiMP: A Benchmark of Linguistic Minimal Pairs for English. In Proceedings of the Society for Computation in Linguistics. Alex Warstadt, Amanpreet Singh, and Samuel R Bowman. 2019a. 
CoLA: The Corpus of Linguistic Acceptability (with added annotations). Alex Warstadt, Amanpreet Singh, and Samuel R. Bowman. 2019b. Neural network acceptability judgments. Transactions of the Association for Computational Linguistics, 7:625–641. Ethan Wilcox, Jon Gauthier, Jennifer Hu, Peng Qian, and Roger P. Levy. 2020. Evaluating neural networks as models of human online language processing. In Proceedings of the 42nd Meeting of the Cognitive Science Society (CogSci 2020). To appear. Ethan Wilcox, Roger P. Levy, and Richard Futrell. 2019a. Hierarchical representation in neural language models: Suppression and recovery of expectations. In Proceedings of the 2019 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP. Ethan Wilcox, Roger P. Levy, Takashi Morita, and Richard Futrell. 2018. What do RNN language models learn about filler–gap dependencies? In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP. Ethan Wilcox, Peng Qian, Richard Futrell, Miguel Ballestros, and Roger P. Levy. 2019b. Structural supervision improves learning of non-local grammatical dependencies. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3302–3312, Minneapolis, Minnesota. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. In Advances in Neural Information Processing Systems. A Syntactic coverage of test suites In order to assess the coverage of our syntactic tests, we manually inspected the “Ideas, Rules and Constraints introduced in this Chapter” section for each chapter in Carnie (2012), a standard introductory syntax textbook. We included entries from these sections which are theory-neutral and refer to observable linguistic data. For example, we do not include affix lowering (Chapter 7) or theta criterion (Chapter 8) because these phenomena presuppose a commitment to one particular syntactic analysis. We found that our tests covered 16 of the 47 phenomena presented (∼34%). Of the 15 chapters surveyed, our tests assessed phenomena in 11 1737 CHAPTER 1: GENERATIVE GRAMMAR Lexical gender Number ✓ Person Case CHAPTER 2: PARTS OF SPEECH Parts of Speech ✓ Plurality ✓ Count vs. Mass Nouns Argument Structure of Verbs ✓ CHAPTER 3: CONSTITUENCY, TREES, RULES Constituency Tests Hierarchical Structure ✓ CHAPTER 4: STRUCTURAL RELATIONS c-command ✓ Government CHAPTER 5: BINDING THEORY R-expression vs. Pronominals Anaphoric expressions and their antecedents ✓ Co-reference and co-indexation Binding Principles (A, B, C) ✓ Locality Constraints ✓ CHAPTER 6: X-BAR THEORY One Replacement Do-so Replacement CHAPTER 7: EXTENDING X-BAR THEORY Fundamental Phrase Types of DP/CP/TP TO FUNCTIONAL CATEGORIES Genitives: of-genitives and ’s genitives Subjects and Predicates Clausal Embedding ✓ Clausal Tense/Finiteness and its restrictions Yes/No Questions Subject-Auxilliary Inversion CHAPTER 8: CONSTRAINING X-BAR THEORY: Thematic Relations ✓ THE LEXICON Internal Theta role vs. 
External Theta Roles Expletive Pronouns and Expletive Insertion Extended Projection Principle CHAPTER 9: HEAD-TO-HEAD MOVEMENT V →T Movement T →C movement ✓ Do-Support CHAPTER 10: DP MOVEMENT Passive Constructions ✓ DP-Raising CHAPTER 11: WH-MOVEMENT Wh-Movement ✓ Structural Constraints on Wh-Movement (Island Constraints) ✓ Wh in-Situ and Echo Questions CHAPTER 12: A UNIFIED THEORY Universal Quantifiers vs. Existential Quantifiers OF MOVEMENT Quantificational Scope and Quantifier Raising CHAPTER 13: EXTENDED VPS Light Verbs Object Shift (and end weight) Ellipsis Pseudogapping CHAPTER 14: RAISING CONTROL AND Control, Subject-to-Subject and Subject-to-Object Raising (ECM) EMPTY CATEGORIES CHAPTER 15: ADVANCED TOPICS IN Binding Principle A and B ✓ BINDING THEORY Table 5: Test suite coverage of syntactic phenomena presented in Carnie (2012). 1738 (∼73%). We did not assess coverage from the last two chapters of the book, which explore alternative syntactic formalisms. The outcome of our manual inspection is given in Table 5. A ✓indicates that some aspect of that phenomena was tested in one or more of our suites. ✓does not necessarily mean that the test suite was designed explicitly for the purpose of testing that phenomena, but merely that the phenomena was implicated in model success. For example, we place a ✓next to Parts of Speech because differentiation between verbs and nouns is necessary for models to succeed in the Cleft Structure tests. B Description of test suites In this work we have assembled a large number of test suites inspired by the methodology of experimental sentence-processing and psycholinguistic research. Each test suite contains a number of ITEMS, and each item appears in several CONDITIONS: across conditions, a given item will differ only according to a controlled manipulation designed to target a particular feature of grammatical knowledge. For each suite we define a SUCCESS CRITERION, which stipulates inequalities among conditional probabilities of sentence substrings. In the main paper, a model’s accuracy for a test suite is computed as the percentage of the test suite’s items for which it satisfies the criterion. In this appendix, we briefly describe each test suite and the criterion used to determine whether a given model succeeds on each item of the test suite. B.1 Notation B.1.1 Sentence status Following and building on linguistic traditions, we annotate examples as follows. Examples marked with a * violate a well-established grammatical constraint, and are ungrammatical. Examples marked with ? or ?? are not necessarily ungrammatical, but are marginal: for example, they may require an unusual interpretation of a word in order for the sentence to be grammatical. (More ?’s is roughly intended to indicate more severe marginality). Examples marked with ! are not ungrammatical, but induce severe processing difficulty that is measurable in real-time human sentence processing. For all test suites, we include references to established literature on the relevant grammatical and/or sentence-processing phenomena. B.1.2 Success criteria Criteria involve inequalities among conditional probabilities of sentence substrings given the complete sentence context preceding the substring. In describing criteria, we use P(·) for raw probabilities and S(·) for surprisals (negative logprobabilities), and leave the conditioning on preceding context implicit. For concision, we use subscripts on P and S to indicate the variant of the sentence within the test suite that we are referring to. 
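Concretely, a suite's accuracy is just the fraction of its items whose criterion holds. The sketch below is illustrative; the data layout and all names are assumptions rather than the evaluation code used here.

```python
# Illustrative scoring loop: suite accuracy is the fraction of items whose
# success criterion is satisfied. Each item maps condition name -> region
# name -> summed surprisal S(.); the layout is an assumption.
from typing import Callable, Dict, List

Item = Dict[str, Dict[str, float]]

def suite_accuracy(items: List[Item],
                   criterion: Callable[[Item], bool]) -> float:
    return sum(criterion(item) for item in items) / len(items)

# Example: a two-condition criterion requiring the critical region to be
# less surprising (i.e. more probable) in the acceptable variant.
def example_criterion(item: Item) -> bool:
    return item["acceptable"]["critical"] < item["unacceptable"]["critical"]
```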
In the first described test suite, CENTER EMBEDDING B.2, we show the criterion in both concise and fully spelled-out forms, to help clarify the conventions we are using in the concise form. All items within a given test suite share the same criterion for success. We provide chance accuracy on the assumption that the order of probabilities among conditions for a given item is random. In some cases, exactly determining chance accuracy may require further assumptions about the distribution of these probabilities; in this case we provide an upper bound on chance accuracy. B.2 Center embedding Center embedding, the ability to embed a phrase in the middle of another phrase of the same type, is a hallmark feature of natural language syntax. Center-embedding creates NESTED SYNTACTIC DEPENDENCIES, which could pose a challenge for some language models. To succeed in generating expectations about how sentences will continue in the context of multiple center embedding, a model must maintain a representation not only of what words appear in the preceding context but also of the order of those words, and must predict that upcoming words occur in the appropriate order. In this test suite we use verb transitivity and subject– verb plausibility to test model capabilities in this respect. For example, A below is a correct centerembedding, but B is not: (A) The paintingN1 that the artistN2 paintedV2 deterioratedV1. [correct] (B) ??The paintingN1 that the artistN2 deterioratedV1 paintedV2. [incorrect] Here, Ni and Vi correspond to matched subject– verb pairs. In the WITH-MODIFIER version of the test suite, we postmodify N2 with a relative clause to increase the linear distance over which the nested dependen1739 cies must be tracked, potentially leading to a harder test suite: (A) The paintingN1 that the artistN2 who lived long ago paintedV2 deterioratedV1. [correct] (B) #The paintingN1 that the artistN2 who lived long ago deterioratedV1 paintedV2. [incorrect] Criterion The probability of the verb sequence in the correct variant should be higher than the probability of the verb sequence in the incorrect variant: PA(V2V1) > PB(V1V2) In full form, this criterion for the example item in the no-modifier version of this test suite would be: P(painted deteriorated|The painting that the artist) > P(deteriorated painted|The painting that the artist) Chance performance on these center-embedding test suites would be 50%. References Miller and Chomsky (1963);Wilcox et al. (2019a) B.3 Pseudo-clefting The pseudo-cleft construction involves (i) an extraction of a TARGETED CONSTITUENT from a sentence and (ii) a constituent that provides the semantic contents of the targeted constituent and must match it in syntactic category, where (i) and (ii) are linked by the copula. The pseudo-cleft construction can target both NPs and VPs; in the latter case, the VP of the free relative becomes an inflected form of do. This means that a free relative subject plus the copula can set up a requirement for the syntactic category that comes next. If the free relative clause has a do VP without a direct object, then the main-clause postcopular predicate can be a VP (A below). Otherwise, the postcopular predicate must be an NP (C below): (A) What the worker did was VP z }| { board the plane. (B) ?What the worker did was NP z }| { the plane. (C) What the worker repaired was NP z }| { the plane. (D) *What the worker repaired was VP z }| { board the plane. 
Criterion The postcopular predicate should be more surprising when its syntactic category mismatches the cleft, averaging across VP and NP postcopular predicates: SD(VP) + SB(NP) > SC(NP) + SA(VP) Chance is 50%. A more stringent criterion would be to apply this requirement separately for each of NP and VP postcopular predicates: SD(VP) > SA(VP) ∧SB(NP) > SC(NP) However, it is often possible to use an NP postcopular predicate with a do cleft through semantic coercion (e.g., in B “did” can be interpreted as “fixed” or “was responsible for”), so we felt that this latter criterion might be too stringent. References Higgins (1973) B.4 Filler–gap dependencies Consider the following sentence, in which all arguments and adjuncts appear “in situ” (in the syntactic position at which they are normally interpreted semantically): I know that our uncle grabbed the food in front of the guests at the holiday party. A FILLER–GAP DEPENDENCY can be created by EXTRACTING any of a number of elements from the subordinate clause, including our uncle (subject extraction), the food (object extraction) or the guests (extraction from a prepositional phrase). These possibilities serve as the basis for several test suites on filler–gap dependencies. References Ross (1967); Crain and Fodor (1985); Stowe (1986); Wilcox et al. (2018); Chowdhury and Zamparelli (2018); Chaves (2020) B.4.1 Subject extractions (A) I know that α z }| { our uncle grabbed the food in front of the guests at the holiday party. [THAT, NO GAP] (B) *I know who α z }| { our uncle grabbed the food in front of the guests at the holiday party. [WH, NO GAP] (C) *I know that β z }| { grabbed the food in front of the guests at the holiday party. [THAT, GAP] 1740 (D) I know who β z }| { grabbed the food in front of the guests at the holiday party. [WH, GAP] Criterion We require that a model successfully pass a two-part criterion for each item: the whfiller should make the unextracted subject α more surprising in the NO-GAP conditions and should make the post-gap material β less surprising in the GAP conditions: SB(α) > SA(α) ∧SC(β) > SD(β) Chance is 25%. B.4.2 Object extractions The logic of this test suite is the same as that for subject extraction above. Note that we use obligatorily transitive embedded verbs, so that omitting a direct object should be highly surprising when there is no filler, as in C. (A) I know that our uncle grabbed α z }| { the food in front of the guests at the holiday party. [THAT, NO GAP] (B) *I know what our uncle grabbed α z }| { the food in front of the guests at the holiday party. [WH, NO GAP] (C) ??I know that our uncle grabbed β z }| { in front of the guests at the holiday party. [THAT, GAP] (D) I know what our uncle grabbed β z }| { in front of in front of the guests at the holiday party. [WH, GAP] Criterion SB(α) > SA(α) ∧SC(β) > SD(β) B.4.3 Extraction from prepositional phrases The logic of this test suite is the same as that for subject and object extractions above. (A) I know that our uncle grabbed the food in front of α z }| { the guests at the holiday party. [THAT, NO GAP] (B) *I know who our uncle grabbed the food in front of α z }| { the guests at the holiday party. [WH, NO GAP] (C) *I know that our uncle grabbed the food in front of β z }| { at the holiday party. [THAT, GAP] (D) I know who our uncle grabbed the food in front of β z }| { at the holiday party. 
[WH, GAP] Criterion SB(α) > SA(α) ∧SC(β) > SD(β) B.4.4 Tests for unboundedness Filler–gap dependencies are “unbounded” in the sense that there is no limit to how many clausal levels above the gap the filler can be extracted. This serves as the basis for harder versions of the object-extracted test suites, involving three or four levels of clausal embedding. Example [THAT, NO GAP] sentences are given below: I know that our mother said her friend remarked that the park attendant reported your friend threw the plastic into the trash can. [3 levels of embedding] I know that our mother said her friend remarked that the park attendant reported the cop thinks your friend threw the plastic into the trash can. [4 levels of embedding] These base sentences give rise to 4-condition test suites using the same manipulations as for the basic object-extraction test suite (Section B.4.2), and the criterion for success is the same. B.5 Main-verb/reduced-relative garden-path disambiguation This is one of the best-studied instances of syntactic garden-pathing in the psycholinguistics literature. An example 4-condition item is given below: (A) !The child kicked in the chaos V∗ z }| { found her way back home. [REDUCED, AMBIG] (B) The child who was kicked in the chaos V∗ z }| { found her way back home. (C) The child forgotten in the chaos V∗ z }| { found her way back home. (D) The child who was forgotten in the chaos V∗ z }| { found her way back home. 1741 Criterion Relative to the [REDUCED, AMBIG] condition, not reducing the relative clause should make V∗less surprising, as should changing the participial verb to one that is the same form as a simple past-tense verb. Additionally, the effect of not reducing the relative clause on V∗surprisal should be smaller for unambiguous participial verbs than for participial verbs: SA(V∗) > SB(V∗) ∧SA(V∗) > SC(V∗)∧ SA(V∗) −SB(V∗) > SC(V∗) −SD(V∗) Chance is somewhere below 25%. References Bever (1970); Ferreira and Clifton (1986); Trueswell et al. (1994); van Schijndel and Linzen (2018); Futrell et al. (2019) B.6 Negative Polarity Licensing The words any and ever, in their most common uses, are “negative polarity items” (NPIs): they can only be used in an appropriate syntactic-semantic environment—to a first approximation, in the scope of negation. For example, the determiner no can license NPIs, but its NP has to structurally command the NPI. Below, A and D are acceptable, because no is the determiner for the subject noun managers. There is no negation in C so the NPI is unlicensed and the sentence is unacceptable; crucially, however, B is unacceptable despite the presence of no earlier in the sentence, because no is embedded inside a modifier of the main-clause subject and thus does not command the NPI. (A) No managers that respected the guard have had NPI z}|{ any luck. [+NEG,–DISTRACTOR] (B) *The managers that respected no guard have had NPI z}|{ any luck. [–NEG,+DISTRACTOR] (C) *The managers that respected the guard have had NPI z}|{ any luck. [–NEG,–DISTRACTOR] (D) No managers that respected no guard have had NPI z}|{ any luck. [+NEG,+DISTRACTOR] In the above test suite, the “distractor” position for no is inside a subject-extracted relative clause modifying the main-clause subject. We also used a variant test suite in which these relative clauses are object-extracted: (A) No managers that the guard respected have had NPI z}|{ any luck. [+NEG,–DISTRACTOR] (B) *The managers that no guard respected have had NPI z}|{ any luck. 
[–NEG,+DISTRACTOR] (C) *The managers that the guard respected have had NPI z}|{ any luck. [–NEG,–DISTRACTOR] (D) No managers that no guard respected have had NPI z}|{ any luck. [+NEG,+DISTRACTOR] The above two test suites use any as the NPI; we also use test suites with ever as the NPI. Subjectextracted relative clause example: (A) No managers that respected the guard have NPI z}|{ ever gotten old. [+NEG,–DISTRACTOR] (B) *The managers that respected no guard have NPI z}|{ ever gotten old. [–NEG,+DISTRACTOR] (C) *The managers that respected the guard have NPI z}|{ ever gotten old. [–NEG,–DISTRACTOR] (D) No managers that respected no guard have NPI z}|{ ever gotten old. [+NEG,+DISTRACTOR] Object-extracted relative clause example: (A) No managers that the guard respected have NPI z}|{ ever gotten old. [+NEG,–DISTRACTOR] (B) *The managers that no guard respected have NPI z}|{ ever gotten old. [–NEG,+DISTRACTOR] (C) *The managers that the guard respected have NPI z}|{ ever gotten old. [–NEG,–DISTRACTOR] (D) No managers that no guard respected have NPI z}|{ ever gotten old. [+NEG,+DISTRACTOR] Criterion Changing the main-clause subject’s determiner from The to No should increase the probability of the NPI where it appears, regardless of whether there is a distractor no in the subjectmodifying relative clause. Furthermore, when there is exactly one no in the sentence, the NPI should be higher-probability when it is in a licensing position rather than in a distractor position: PA(NPI) > PC(NPI) ∧PD(NPI) > PB(NPI)∧ PA(NPI) > PB(NPI) Chance is 5 32. 1742 References Ladusaw (1979); Vasishth et al. (2008); Giannakidou (2011); Marvin and Linzen (2018); Futrell et al. (2018) B.7 NP/Z garden-path ambiguity This is another well-studied syntactic gardenpathing configuration. In A below, the NP the waters introduces a local syntactic ambiguity: it could be (1) the direct object of crossed, in which case the sentence-initial subordinate clause has not yet ended, or (2) the subject of the main clause, in which case crossed is used intransitively and is the last word of the sentence-initial subordinate clause. (This was dubbed “NP/Z” by Sturt et al. (1999) because the subordinate-clause verb might have either an NP object or a Z(ero), i.e. null, object.) The next word, remained, is only compatible with (2); the ruling out of (1) generally yields increased processing difficulty for human comprehenders. Marking the end of the subordinate clause with a comma, as in B, makes the sentence easier at V∗, as does an obligatorily intransitive subordinate-clause verb, as in C. (A) !As the ship crossed the waters V∗ z }| { remained blue and calm. [TRANS,NO COMMA] (B) As the ship crossed, the waters V∗ z }| { remained blue and calm. [TRANS,COMMA] (C) As the ship drifted the waters V∗ z }| { remained blue and calm. [INTRANS,NO COMMA] (D) As the ship drifted, the waters V∗ z }| { remained blue and calm. [INTRANS,COMMA] Criterion Similar to the main-verb/reducedrelative garden-pathing ambiguity, a model must pass a three-part criterion. Relative to A, either marking the subordinate-clause end with a comma or using an obligatorily intransitive verb in the subordinate clause should reduce the surprisal of V∗. 
Furthermore, the surprisal-reduction effect of the comma should be smaller when the subordinateclause verb is intransitive than when it is transitive: SA(V∗) > SB(V∗) ∧SA(V∗) > SC(V∗)∧ SA(V∗) −SB(V∗) > SC(V∗) −SD(V∗) We also use an NP/Z test suite where the second means of disambiguation is not changing the subordinate-clause verb to an intransitive, but rather giving the transitive subordinate-clause verb an overt direct object. For the above example item, the first two conditions are the same and the other two conditions would be: (C) As the ship crossed the sea the waters V∗ z }| { remained blue and calm. (D) As the ship crossed the sea, the waters V∗ z }| { remained blue and calm. The success criterion remains the same. Finally, we create harder versions of both the above test suites by adding a postmodifier to the main-clause subject (in the above example, the waters becomes the waters of the Atlantic Ocean). References Frazier and Rayner (1982); Mitchell (1987); Pickering and Traxler (1998); Sturt et al. (1999); Staub (2007) B.8 Subject–verb number agreement This task tests a language model for how well it predicts the number marking on English finite presenttense verbs (whether it should be the third-person singular form, or the non-third-person-singular form, generally referred to as the plural form for simplicity, although technically this is the form for first- and second-person singular as well). In controlled, targeted versions of this test, multiple NP precede the verb: the verb’s actual subject, as well as a DISTRACTOR NP with number that is different from that of the subject. A successful language model should place higher probability on the verbform matching that of the subject, not the distractor. We have three versions of this test suite: one where the distractor is in a prepositional phrase postmodifier of the subject: (A) The farmer near the clerks knowsVsg many people. (B) *The farmer near the clerks knowVpl many people. (C) The farmers near the clerk knowVpl many people. (D) *The farmers near the clerk knowsVsg many people. one in which the distractor is in a subject-extracted relative clause postmodifier of the subject: (A) The farmer that embarrassed the clerks knowsVsg many people. 1743 (B) *The farmer that embarrassed the clerks knowVpl many people. (C) The farmers that embarrassed the clerk knowVpl many people. (D) *The farmers that embarrassed the clerk knowsVsg many people. and one in which the distractor is in an objectextracted relative clause postmodifier of the subject: (A) The farmer that the clerks embarrassed knowsVsg many people. (B) *The farmer that the clerks embarrassed knowVpl many people. (C) The farmers that the clerk embarrassed knowVpl many people. (D) *The farmers that the clerk embarrassed knowsVsg many people. Criterion Following Linzen et al. (2016) and Marvin and Linzen (2018), we require successful discrimination of the preferred upcoming verbform of the given lemma (rather than, for example, successful discrimination of the better context given a particular verbform). For success we require that a model successfully predicts the preferred verbform for both the singular- and plural-subject versions of an item: PA(Vsg) > PB(Vpl) ∧PC(Vpl) > PD(Vsg) Chance performance is thus 25%, though a context-insensitive baseline that places different probabilities on Vsg and Vpl would score 50%. References Bock and Miller (1991); Linzen et al. 
(2016); Marvin and Linzen (2018) B.9 Reflexive pronoun licensing The noun phrase with which a reflexive pronoun (herself, himself, themselves) corefers must command it in a sense similar to that relevant for negative-polarity items (Section B.6). In the below example, the reflexive pronoun ending the sentence can only corefer to the subject of the sentence, author, with which it must agree in number: a singular subject requires a singular reflexive Rsg, and a plural subject requires a plural reflexive Rpl. (A) The author next to the senators hurt herselfRsg.fem. (B) *The authors next to the senator hurt herselfRsg.fem. (C) The authors next to the senator hurt themselvesRpl. (D) *The authors next to the senator hurt themselvesRpl. We generated a pair of test suites—one in which the singular reflexive is herself, and another where the singular reflexive is himself, on the template of the above example, where the distractor NP is in a prepositional-phrase postmodifier of the subject NP. We also generated a similar pair of test suites where the distractor NP is inside a subject-extracted relative clause modifying the subject: (A) The author that liked the senators hurt herselfRsg.fem. (B) *The authors that liked the senator hurt herselfRsg.fem. (C) The authors that liked the senator hurt themselvesRpl. (D) *The authors that liked the senator hurt themselvesRpl. and a pair of test suites where the distractor NP is inside an object-extracted relative clause modifying the subject: (A) The author that the senators liked hurt herselfRsg.fem. (B) *The authors that the senator liked hurt herselfRsg.fem. (C) The authors that the senator liked hurt themselvesRpl. (D) *The authors that the senator liked hurt themselvesRpl. Criterion For each item in each test suite, we require that for both the singular and the plural versions of the reflexive pronoun the model assign higher conditional probability in the correct licensing context than in the incorrect licensing context: PA(Rsg) > PB(Rsg) ∧PC(Rpl) > PD(Rpl) Chance is 25%. References Reinhart (1981); Marvin and Linzen (2018) B.10 Subordination Beginning a sentence with As, When, Before, After, or Because, implies that an immediately following clause is not the main clause of the sentence, as would have otherwise been the case, but instead is 1744 a SUBORDINATE CLAUSE that must be followed by the main clause. Ending the sentence without a main clause, as in B, is problematic. Conversely, following an initial clause with a second clause MC (without linking it to the initial clause with and, but, despite, or a similar coordinator or subordinator), as in C below, is unexpected and odd. (A) The minister praised the building END z}|{ . (B) *After the minister praised the building END z}|{ . (C) ??The minister praised the building MC z }| { , it started to rain. (D) After the minster praised the building MC z }| { , it started to rain. In addition to the base test suite exemplified by the item above, we include three versions with longer and more complex initial clauses, which may make the test suite more difficult. 
In the first of these versions, we postmodify both the subject and object of the initial clauses with prepositional phrases: the minister praised the building ↓ the minister in the dark suit and white tie praised the new building on the town’s main square In the second of these versions, the postmodifiers are subject-extracted relative clauses: the minister praised the building ↓ the minister who wore a black suit praised the new building that was built by the square In the third of these versions, the postmodifiers are object-extracted relative clauses: the minister praised the building ↓ the minister who the mayor had invited praised the new building that the businessman had built downtown Criterion Introducing a subordinator at the beginning of the sentence should make an ending without a second clause less probable, and should make a second clause more probable: PA(END) > PB(END) ∧PD(MC) < PC(MC) References Futrell et al. (2018)
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1745–1756 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 1745 Inflecting When There’s No Majority: Limitations of Encoder-Decoder Neural Networks as Cognitive Models for German Plurals Kate McCurdy Sharon Goldwater Adam Lopez Institute for Language, Cognition and Computation School of Informatics University of Edinburgh [email protected], {sgwater, alopez}@inf.ed.ac.uk Abstract Can artificial neural networks learn to represent inflectional morphology and generalize to new words as human speakers do? Kirov and Cotterell (2018) argue that the answer is yes: modern Encoder-Decoder (ED) architectures learn human-like behavior when inflecting English verbs, such as extending the regular past tense form /-(e)d/ to novel words. However, their work does not address the criticism raised by Marcus et al. (1995): that neural models may learn to extend not the regular, but the most frequent class — and thus fail on tasks like German number inflection, where infrequent suffixes like /-s/ can still be productively generalized. To investigate this question, we first collect a new dataset from German speakers (production and ratings of plural forms for novel nouns) that is designed to avoid sources of information unavailable to the ED model. The speaker data show high variability, and two suffixes evince ‘regular’ behavior, appearing more often with phonologically atypical inputs. Encoder-decoder models do generalize the most frequently produced plural class, but do not show human-like variability or ‘regular’ extension of these other plural markers. We conclude that modern neural models may still struggle with minority-class generalization. 1 Introduction Morphology has historically been the site of vigorous debate on the capacity of neural models to capture human speaker behavior, and hence ground claims about speaker cognition. In 1986, Rumelhart and McClelland described a neural network model which learned to map English present tense verbs to their past tense forms. Importantly, the network handled both regular verbs, whose past tense is formed systematically by adding the suffix /-(e)d/ (e.g. jumped), and irregular verbs where the present and past tenses bear no systematic relationship (e.g. ran). The authors suggested their model provided “an alternative [...] to the implicit knowledge of rules” (1986, 218), a claim which sparked considerable controversy. Pinker and Prince (1988) highlighted many empirical inadequacies of the Rumelhart and McClelland model, and argued that these failures stemmed from “central features of connectionist ideology” and would persist in any neural network model lacking a symbolic processing component. Recently, however, Kirov and Cotterell (2018, henceforth K&C) revisited the English past tense debate and showed that modern recurrent neural networks with encoder-decoder (ED) architectures overcome many of the empirical limitations of earlier neural models. Their ED model successfully learns to generalize the regular past tense suffix /-(e)d/, achieving near-ceiling accuracy on held-out test data. Moreover, its errors result from overapplication of the regular past tense (e.g. throw– throwed)—a type of error observed in human language learners as well—as opposed to the unattested forms produced by Rumelhart and McClelland’s model. 
K&C conclude that modern neural networks can learn human-like behavior for English past tense without recourse to explicit symbolic structure, and invite researchers to move beyond the ‘rules’ debate, asking instead whether the learner correctly generalizes to a range of novel inputs, and whether its errors (and other behavior) are human-like. This challenge was first taken up by Corkery et al. (2019), who showed that, on novel English-like words designed to elicit some irregular generalizations from humans, the ED model’s predictions do not closely match the human data. While these results suggest possible problems with the ED model, English may not be the best test case to fully understand these, since the sole regular inflectional class is also by far the most frequent. In contrast, many languages have multiple inflectional classes which can act ‘regular’ under various conditions (Seidenberg and Plaut, 2014; Clahsen, 2016). In this paper, we examine German number inflection, which has been identified as a crucial test case 1746 for connectionist modeling (K¨opcke, 1988; Bybee, 1995; Marcus et al., 1995; Clahsen, 1999b). The German plural system features eight plural markers (c.f. Table 1), none of which hold a numerical majority in type or token frequency. Different linguistic environments favor different plural markers (e.g. K¨opcke, 1988; Wiese, 1996; Yang, 2016), and even the famously rare suffix /-s/ is nonetheless productive, in the sense that speakers readily extend it to new words.1 In their analysis of the German plural system, Marcus et al. (1995, henceforth M95) argue that neural networks generalize the most frequent patterns to unfamiliar inputs, and thus struggle to represent productive but rare classes such as /-s/. We investigate that claim using the novel Germanlike nouns M95 developed. Because the design and results of previous human studies have been somewhat inconsistent, and because we want to compare to fine-grained results from individuals (not just published averages), we first collect a new dataset of plural productions and ratings from German speakers. Our speaker data show high variability: no class holds a majority overall, and two less frequent suffixes show a relative preference for phonologically atypical inputs (“Non-Rhymes”). We then compare our human data with the predictions of the encoderdecoder (ED) model proposed by K&C. While our human data paint a more complex picture of the German plural system than M95 claimed, nevertheless M95’s central idea is borne out: when given Non-Rhymes, the ED model prefers the most frequent plural class, but speakers behave differently. This finding reveals that while modern neural models are far more powerful than earlier ones, they still have limitations as models of cognition in contexts like German number inflection, where no class holds a majority. The model may correctly identify the most frequent class, but fails to learn the conditions under which minority classes are productive for speakers. 2 Study 1: Speaker plural inflection To evaluate whether neural models generalize correctly, we need to compare their behavior with that of humans on the same task. Unfortunately, no existing datasets were suitable, so our first study asks how German speakers inflect novel nouns. 1For example, the Institut f¨ur Deutsche Sprache (https: //www.owid.de/service/stichwortlisten/ neo_neuste) officially added multiple /-s/-inflecting nouns to the German language in 2019, including Verh¨utungsapp, Morphsuit and Onesie. 
Suffix Singular Plural Type Token /-(e)n/ Strasse Strassen 48% 45% /-e/ Hund Hunde 27% 21% Kuh K¨uhe /-∅/ Daumen Daumen 17% 29% Mutter M¨utter /-er/ Kind Kinder 4% 3% Wald W¨alder /-s/ Auto Autos 4% 2% Table 1: German plural system with examples, ordered by CELEX type frequency (Sonnenstuhl and Huth, 2002). 2.1 Background Wug testing and productivity If an English speaker needs to produce the plural form of an unknown word such as wug, that speaker must decide whether wug belongs to the same inflectional class as dog and cat (yielding plural wugs) or the same class as sheep and deer (yielding wug). Speakers’ overwhelming preference for wugs in this scenario indicates that the /-s/ plural class is productive in English: a productive morphological process can be generalized to new inputs. This task of inflecting novel (nonce) words is known as the wug test (Berko, 1958), and is the standard method to determine productivity in psycholinguistic research. While the concept of morphological ‘regularity’ is not well-defined (Herce, 2019), productivity is nonetheless an essential component: an inflectional class that is not productive cannot be regular. Productivity in German plurals The German plural system comprises five suffixes: /-e/, /-er/, /-∅/2, /-(e)n/, and /-s/. The first three can optionally combine with an umlaut over the root vowel.3 Umlaut varies semi-independently of plural class (Wiese, 1996), and is not fully predictable; for simplicity, this study will focus only on the five main suffix classes for analysis. Examples in all forms are shown in Table 1. Each plural suffix is also shown with its type frequency (counting each word type only once, how many types in the lexicon take this plural?) and token frequency (how often do words with this plural suffix appear in the corpus overall?). German nouns can have one of 2/-∅/ refers to the so-called “zero plural”, and is indicated as “zero” on all figures in this paper. 3Umlaut is a process which fronts a back vowel, so only roots with back vowels can take an umlaut (e.g. Dach → D¨acher, Fuss →F¨usse). 1747 three grammatical genders — masculine, feminine, or neuter — and this lexical feature is highly associated with plural class: most feminine nouns take /-(e)n/, while /-e/ and /-∅/ nouns are often masculine or neuter. The phonological shape of a noun also influences its plural class; for example, most nouns ending with schwa take /-(e)n/ (Elsen, 2002). Although there are statistical tendencies, there are no absolute rules, and no suffix holds a majority overall. Researchers continue to debate which plural markers are productive, and in which circumstances. The dispute has historically centered on the infrequent class /-s/, which, despite its rarity, occurs across a wide range of linguistic environments. Examples include proper names (e.g. der Bader → die Bader ‘the barber →the barbers’ but meine Freunden, die Baders ‘my friends, the Barbers’), acronyms, and truncated and quoted nouns (e.g. der Asi →die Asis, short for Asozialer ‘antisocial person’). In addition, /-s/ tends to be the plural class for recent borrowings from other languages, and children reportedly extend /-s/ to novel nouns (Clahsen et al., 1992). For these reasons, M95 argue that /-s/ is the default plural: it applies in a range of heterogeneous elsewhere conditions which do not define a cohesive similarity space, serving as the “emergency” plural form when other markers do not seem to fit. 
They further assert that, as the default form, /-s/ is also the only regular plural form, in the sense that it “applies not to particular sets of stored items or to their frequent patterns, but to any item whatsoever” (1995, 192). Under this minority-default analysis, other German plural classes may be productive, but in a limited sense — they can only extend to novel inputs which are similar in some respect to existing class members, while infrequent /-s/ can apply to any noun regardless of its form (Clahsen, 1999b). M95 claim that this behavior should be particularly difficult for connectionist, i.e. neural, models to learn: /-s/ cannot be generalized based on its frequency, as it is rare, and it cannot be generalized based on similar inputs, as it applies to heterogeneous, unfamiliar inputs. Other researchers have challenged the minoritydefault account with evidence of regular, productive behavior from the two more common suffixes /-e/ and /-(e)n/. /-(e)n/ is argued to be the default class for feminine nouns and nouns ending with the weak vowel schwa (Wiese, 1996; Dressler, 1999), and children have also been found to overgeneralize /-(e)n/ (K¨opcke, 1998). Indefrey (1999, 1025) argues that /-(e)n/ and /-e/ are “regular and productive allomorphs with gender-dependent application domains”, noting that /-e/ and /-(e)n/ are extended in elsewhere conditions where /-s/ is blocked for phonological reasons, such as letters (die “X”e) and acronyms (die MAZen, Magnetaufzeichnungen, ‘magnetic recordings’). Bybee (1995) argues that, while /-s/ does act as the default plural, it is still less productive than other plural classes due to its low type frequency. Wug testing for German plurals To assess whether German speakers treat /-s/ as a productive default for novel words, M95 developed a list of 24 monosyllabic nonce nouns for wug testing. The stimuli represented two phonological classes: ‘familiar’ or Rhyme words, which rhymed with one or more existing words in German (e.g. Bral, rhyming with Fall; Spert, rhyming with Wert), and ‘unfamiliar’ or Non-Rhyme words (e.g. Plaupf, Fn¨ohk), which were constructed using rare but phonotactically valid phone sequences. They hypothesized that Non-Rhymes, as phonologically atypical words, should be more likely to take the /-s/ plural. M95 conducted a rating study in which stimuli were presented across three different sentence contexts. If the word Bral was presented in the “root” condition, subjects would rate a set of sentences where the nonce word referred to some object: Die gr¨unen BRAL sind billiger (“The green brals are cheaper”), Die gr¨unen BRALE ..., Die gr¨unen BR ¨ALE ..., etc.; whereas in the “name” condition, the nonce word would refer to people: Die BRAL sind ein bißchen komisch (“The Brals [family name] are a bit weird”), Die BRALEN ..., Die BRALS . . . , etc. With data from 48 participants, /-s/ was the top-rated plural form for 2 out of 12 rhyme words, and 7 out of 12 non-rhyme words; while /-e/ was rated highest overall, /-s/ was the only marker favored more for non-rhymes. Clahsen (1999a) cites this asymmetry as crucial evidence for /-s/ as the only default plural form, at least with respect to these stimuli. These results, however, have been called into question. Zaretsky and Lange (2016, henceforth Z&L) conducted a large-scale follow-up study with 585 participants, using the same nonce words but a different task: instead of rating the plural forms within a sentence context, subjects were presented with the noun in isolation (e.g. 
Der Bral) and asked to produce its plural form.4 They found a much lower preference for /-s/ than expected based on M95's results, and a significant effect for feminine (die) versus non-feminine (der, das) grammatical gender, where M95 reported no effect of gender. The authors conclude from their data that /-(e)n/, /-e/, and /-s/ are all productive in German, and also speculate that task differences (production versus rating) could account for the discrepancy between the two studies.
4Z&L's data is unfortunately not freely available.
2.2 Data collection
Motivation Although M95 published average rating data for each word in the appendix to their paper, we felt it necessary to collect our own data. Z&L's findings suggest that the M95 /-s/ effect might reflect task artefacts: speaker behavior could differ for production and rating tasks, and with and without sentential context for the nonce words. We seek to evaluate K&C's performance claims for ED models, which were based on speaker production probabilities rather than ratings. To do so, we need speaker data which closely parallels the model task: given a noun in isolation, produce its plural inflected form. We collect production data, and also ratings, to see whether speaker behavior is consistent across tasks. Another issue raised by Z&L's findings is the role of grammatical gender. Although Z&L reported significant gender effects, M95 did not: their reported rating averages combine all gender presentations (e.g. Der Bral, Die Bral, Das Bral). Previous experiments have found neural models of German plurals to be sensitive to grammatical gender (Goebel and Indefrey, 2000); therefore, the stimuli presented to speakers should be consistent with model inputs to enable valid comparison. For simplicity, we opted to select one grammatical gender for presentation: neuter, or Das. Based on similar experimentation by Köpcke (1988), speakers do not have a strong majority class preference for neuter monosyllabic nouns, hence this environment may be the most challenging for a neural model to learn. For this reason, we present all stimuli as neuter to study participants.
Method The current study uses the same Rhyme and Non-Rhyme stimuli from M95's original experiment. We collected both production and rating data on plural inflection for the 24 M95 nonce nouns through an online survey with 150 native German-speaking participants. Survey respondents were first prompted to produce a plural-inflected form for each noun (i.e. filling in the blank: "Das Bral, Die ___").5 After producing plural forms for all nouns, they were prompted to rate the acceptability of each potential plural form for each noun on a 1-5 Likert scale, where 5 means most acceptable. For example, a participant would see Das Bral, and then give an acceptability rating for each of the following plural forms: Bral, Bräl, Brale, Bräle, Bralen, Braler, Bräler, Brals.
Table 2: Survey results. Production reported as percentages out of all Rhymes (R) and Non-Rhymes (NR); ratings are averages over a 1 (worst) – 5 (best) scale, with standard errors in parentheses.
Plural       Prod %   N      Rating (SE)
/-e/     R   45.3     815    3.53 (.021)
         NR  44.7     805    3.51 (.024)
/-(e)n/  R   25.0     450    3.73 (.026)
         NR  34.7     624    3.84 (.025)
/-er/    R   17.4     314    3.08 (.022)
         NR  6.7      120    3.06 (.024)
/-s/     R   4.2      75     2.39 (.027)
         NR  6.4      116    2.52 (.028)
/-∅/     R   2.7      48     2.24 (.020)
         NR  2.7      48     2.38 (.024)
other    R   5.4      98     -
         NR  4.8      87     -
overall  R   -        1800   2.99 (.011)
         NR  -        1800   3.04 (.012)
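The summary statistics in Table 2 can be derived from the raw survey responses with a few lines of code; the sketch below is illustrative only (it is not the authors' analysis script), and the record layout is an assumption.

```python
# Hypothetical aggregation of survey responses into per-suffix production
# percentages and mean ratings, split by Rhyme vs. Non-Rhyme (cf. Table 2).
from collections import Counter, defaultdict

def production_percentages(productions):
    """productions: list of (is_rhyme, suffix) pairs, one per produced plural form."""
    counts = Counter(productions)
    totals = Counter(is_rhyme for is_rhyme, _ in productions)
    return {(r, s): 100.0 * n / totals[r] for (r, s), n in counts.items()}

def mean_ratings(ratings):
    """ratings: list of (is_rhyme, suffix, score) triples on a 1-5 Likert scale."""
    buckets = defaultdict(list)
    for is_rhyme, suffix, score in ratings:
        buckets[(is_rhyme, suffix)].append(score)
    return {key: sum(scores) / len(scores) for key, scores in buckets.items()}
```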
For details of the survey design, please see Appendix A. 2.3 Results Our study results are shown in Table 2. The production data collected in our survey appears broadly consistent with the distribution observed by Z&L and K¨opcke: /-e/ is favored in production, followed by /-(e)n/. The rhyme vs non-rhyme comparison is also consistent with Z&L’s results. /-s/ is produced more for Non-Rhymes than for Rhymes, as emphasized by Clahsen (1999a); however, /-(e)n/ also shows the same directional preference, and at a much higher frequency. Our rating results diverge from production results in some ways — for example, /-(e)n/ is fa5The article das indicates singular number, neuter gender; as all nouns were presented in neuter gender (see preceding discussion), all nouns were preceded by das. Die here indicates plural number, so the following noun will be pluralized. 1749 vored instead of /-e/ — and are consistent in others: both /-s/ and /-(e)n/ are rated higher for NonRhymes compared to Rhymes. The low ratings for /-s/ conflict with M95’s findings, and suggest that presentation in sentence context is an important methodological difference from presentation in isolation. For example, family surnames obligatorily take /-s/ in German, so it’s possible that exposure to surnames in the “name” context primed subjects in the M95 rating study to find /-s/ more acceptable generally, across conditions.6 In any case, our results demonstrate task effects: although /-e/ is the most produced plural form, /-(e)n/ obtains the highest ratings from the same speakers.7 We compare these results with the modeling study in Section 4, focusing on production data. 3 Study 2: Encoder-Decoder inflection Our second study trains an encoder-decoder (ED) model on the task of German plural inflection, following the method of Kirov and Cotterell (K&C). We then compare its predictions on the M95 stimuli to the behavior of participants in Study 1. 3.1 Background Wug testing and computational models Wug tests have also been used to evaluate how computational models generalize, although the appropriate method of comparison to speakers is still under debate. Albright and Hayes (2003) collected spoken productions and acceptability ratings of past tense inflections for English nonce verbs, comparing the prevalence of regular inflection (e.g. rife → rifed)) to one or two pre-selected irregular forms for each nonce verb (e.g. rife →rofe, riff). They then evaluated two different computational models on their wug data, focusing on correlation between model scores and participant ratings to select a rulebased learner as the best-performing model. K&C also tested their ED model on Albright and Hayes’ nonce words and evaluated performance using correlation with model scores; however, instead of the rating data, they focused on production probabilities: the percentage of speakers who produced each pre-selected irregular form. Corkery et al. (2019) 6Hahn and Nakisa (2000) reanalyze the M95 ratings and find that /-s/ is rated much higher for family surnames than other kinds of names within the “name” condition (e.g. first names), reflecting the strong link between this category and the /-s/ plural class. 7Further analysis indicates that individual survey participants rated a plural form they did not produce as better than the form they did produce in fully one-third of cases. 
call this methodology into question, noting that different random initializations of the ED model lead to highly variable rankings of the output forms, and thus to unstable correlation metrics. Instead, they correlate the speaker production probabilities to the aggregated predictions of models with different random seeds, treating each model instance as simulating a unique “speaker”. Our study follows the latter approach: we aggregate production probabilities over several model initializations and compare these results to the speaker production data. Modeling German plurals The same M95 stimuli used in our Study 1 have also been applied to wug test computational models. To date, no computational studies have reproduced the high /-s/ preference reported for participants in the original rating study. Hahn and Nakisa (2000) framed the problem as a classification task, mapping noun inputs to their plural classes. They trained a “single-route” exemplar-based categorization model (Nosofsky, 1988) alongside a “dual-route” version of the same model, which had an additional symbolic rule component to handle the /-s/ class. Hahn and Nakisa also collected their own speaker productions of the M95 wug stimuli, and found that the singleroute model showed a higher overall correlation to speaker production probabilities, relative to the dual-route model. They did not explicitly compare model and speaker behavior on Rhymes versus Non-Rhymes, so we don’t know whether the model learned speaker-like generalizations for phonologically atypical stimuli, or whether the model could achieve similar performance on the more challenging task of sequence prediction. Goebel and Indefrey (2000) used a simple recurrent network (Elman, 1990) for sequence prediction on the M95 wug stimuli. The model did produce /-s/ more often for Non-Rhymes than Rhymes, but as the overall production of /-s/ was relatively low, the authors did not consider this evidence of default behavior. Instead, they find that the model learns to condition regular plural inflection on grammatical gender. For both Rhymes and Non-Rhymes, the model predicted /-(e)n/ when the input was preceded by the feminine article die, and /-e/ when the input began with masculine der; neuter das was not tested. Goebel and Indefrey reanalyze the original M95 rating data and argue that its results are hypothetically8 consistent with the model’s behavior; they conclude that /-s/, /-(e)n/, and /-e/ are all reg8”Hypothetically” because M95 did not report results split by grammatical gender. 1750 Plural % All Neut M95 R 1 Syll /-(e)n/ 37.3 3.2 13.9 14.0 /-e/ 34.4 51.9 72.6 66.5 /-∅/ 19.2 21.5 0.5 1.4 /-er/ 2.9 10.6 7.3 4.7 /-s/ 4.0 7.7 3.1 12.5 other 2.1 5.1 2.6 .9 N 11,243 2,606 642 570 Table 3: Distribution (percentages) of plural class for 1) nouns overall, 2) only neuter nouns, 3) nouns rhyming with M95 stimuli, 4) one-syllable nouns from Unimorph German dataset (Kirov et al., 2016). ular plural classes in German, with the latter two conditioned on grammatical gender. These findings show the importance of controlling for grammatical gender in comparing speaker and model results. 3.2 Method Overview We model German number inflection using the sequence-to-sequence Encoder-Decoder architecture (Sutskever et al., 2014). This comprises a recurrent neural network (RNN) which reads in an input sequence and encodes it into a fixed-length vector representation, and another RNN which incrementally decodes that representation into an output sequence. 
Following Kann and Schütze (2016), our decoder uses neural attention (Bahdanau et al., 2015). For our task of morphological transduction, the ED model takes character-level representations of German nouns in their singular form as inputs (e.g. ⟨m⟩ H U N D ⟨eos⟩), and learns to produce the noun's inflected plural form (e.g. H U N D E ⟨eos⟩). Each character sequence starts with ⟨m⟩, ⟨f⟩, or ⟨n⟩, to indicate grammatical gender. Unlike English, the phonological-orthographic mapping is straightforward in German, so we can use a written corpus for model training. We keep a held-out dev set for hyperparameter selection, and a held-out test set to assess the model's accuracy in generalizing to unseen German nouns. In addition, the 24 M95 nouns were used for comparison with speaker behavior. They were presented to the model as neuter gender, consistent with Study 1.
Corpus We trained all models on the UniMorph German data set9 (Kirov et al., 2016; Sylak-Glassman et al., 2015), which provides the singular and plural forms of 11,243 nouns. Only nominative case forms were used. Grammatical gender was obtained by merging the UniMorph dataset with a more recent Wiktionary scrape containing this feature.10 Table 3 gives the distribution of plural suffixes for the UniMorph corpus overall, and for three relevant subsets: nouns with neuter gender, monosyllabic nouns (like the M95 stimuli), and nouns which were phonologically similar to the M95 stimuli, i.e. shared a rhyme. The number of items in the train, dev, and test splits is shown (in parentheses) in Table 4.
9https://github.com/unimorph/deu
10https://github.com/gambolputty/german-nouns/ To ensure our results were not limited by the small size of the UniMorph dataset, we also trained the model on this larger dataset, including about 65,000 nouns. As the outcome was consistent with our findings here, we report results from the smaller model.
Table 4: Model accuracy (N) by UniMorph corpus split, averaged over 25 random initializations.
Split   Accuracy (N)
Train   99.9% (8694)
Dev     92.1% (1229)
Test    88.8% (1320)
Implementation Following K&C and Corkery et al. (2019), our model is implemented using OpenNMT (Klein et al., 2018) with their reported hyperparameters (after Kann and Schütze, 2016): 2 LSTM encoder layers and 2 LSTM decoder layers, 300-dimensional character embeddings in the encoder, and 100-dimensional hidden layers in both encoder and decoder; Adadelta optimization for training with a batch size of 20 and inter-layer dropout rate of 0.3; and a beam size of 12 for decoding during evaluation. Since Corkery et al. (2019) found the ED model to be highly sensitive to initialization, we trained multiple simulations with the same architecture, varying only the random seed. Reported results combine predictions from 25 separate random initializations. The one hyperparameter we tuned was early stopping. Best performance on the validation set was achieved at 10 epochs, which was sufficient to memorize the training data.
Results The model achieves 88.8% accuracy on the held-out test set (Table 4). It performs best on /-(e)n/, the most frequent class (Table 5). Unsurprisingly, the worst performance appears on the 'other' category, which comprises the long tail of idiosyncratic forms which must be memorized (e.g. Latinate plurals Abstraktum → Abstrakta or other borrowings Zaddik → Zaddikim). In keeping with the findings of Hahn and Nakisa (2000), /-s/ is the plural suffix with the worst generalization performance; this cannot be attributed to low frequency alone (c.f. Table 3), as the model does much better on the similarly rare suffix /-er/.
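To make the training setup concrete, the following is a minimal sketch (not the released implementation) of how (gender, singular, plural) triples might be serialized into the space-separated character format described above for a sequence-to-sequence toolkit such as OpenNMT; the gender codes, uppercasing, and file layout are assumptions, and the end-of-sequence token is left to the toolkit.

```python
# Hypothetical preprocessing sketch: serialize noun paradigms into the
# character-level format, e.g. "<n> B R A L" -> "B R A L E".
GENDER_TOKENS = {"masc": "<m>", "fem": "<f>", "neut": "<n>"}

def to_chars(word):
    # Space-separated characters; uppercasing is for illustration only.
    return " ".join(word.upper())

def write_split(pairs, src_path, tgt_path):
    """pairs: iterable of (gender, singular, plural) triples."""
    with open(src_path, "w", encoding="utf-8") as src, \
         open(tgt_path, "w", encoding="utf-8") as tgt:
        for gender, singular, plural in pairs:
            src.write(f"{GENDER_TOKENS[gender]} {to_chars(singular)}\n")
            tgt.write(f"{to_chars(plural)}\n")

# Example: the M95 wug items are presented as neuter, consistent with Study 1.
write_split([("neut", "Bral", "Brale"), ("fem", "Strasse", "Strassen")],
            "train.src", "train.tgt")
```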
Figure 1: Plural class productions by item (panels compare Study 1 speakers and the Study 2 ED model, for Rhymes and Non-Rhymes).
Table 5: Model results by plural suffix for: (left) test set performance (averaged over random seeds); (right) production percentages for rhyme (R) and non-rhyme (NR) M95 stimuli, and correlation (Spearman's ρ) to speaker productions.
Suffix    Prec.  Rec.  F1   |  %R    %NR   ρ
/-(e)n/   .95    .95   .95  |  6.3   3.3   .28
/-e/      .86    .89   .87  |  68.3  91.7  .13
/-∅/      .96    .91   .92  |  0     0     -
/-er/     .83    .85   .84  |  21.7  2.7   .05
/-s/      .64    .56   .60  |  3.7   2.3   .33
other     .37    .48   .42  |  0     0     -
We use the M95 stimuli to compare model predictions to speaker data from Study 1. The model shows an overwhelming preference for /-e/ on these words (Table 5); roughly 80% of its productions are /-e/, relative to 45% of speaker productions (Figure 1). In contrast, the model rarely predicts /-(e)n/, which speakers use 30% of the time. The model's treatment of Rhymes and Non-Rhymes is even farther off the mark: where speakers use /-(e)n/ and /-s/ more for Non-Rhymes relative to Rhymes, the ED model uses them less, producing /-e/ for over 90% of Non-Rhymes. Following K&C and Corkery et al. (2019), we calculate the Spearman rank correlation coefficient (Spearman's ρ) between model and speaker production probabilities within inflectional categories rather than across categories.11 This means that, for each potential plural suffix, we compare speaker and model productions for that suffix on each individual M95 word. Table 5 reports the correlation for each suffix. None show a statistically significant difference from the null hypothesis of no correlation.
11For the English analyses in the prior works, this means calculating separate correlations for regular and irregular forms.
Figure 2 shows the distribution of plural classes in the top 5 most likely forms predicted by the model for each M95 word. While all of the model's top-ranked predictions are well-formed outputs in the sense that they conform to one of the main German plural classes, the lower-ranked predictions are rapidly dominated by "other" forms which do not cohere to standard plural production. An example from one model instance: the Rhyme input Spert had as its top five predictions Sperte, Spelte, Spente, Sperten, and Fspern; the Non-Rhyme input Bneik had Bneiken, Bneiks, Bneikke, Bneikz, and Bneikme. Corkery et al. (2019) observed instability in the ranking of irregular forms in ED models trained on the English past tense; however, English irregular forms are very diverse, which makes it difficult to draw broad conclusions about the plausibility of lower-ranked forms in the model's output. In contrast, the five main plural suffixes for German cover 98% of the nouns in the UniMorph dataset, and 95% of speaker productions on M95 stimuli in Study 1. The predominance of ill-formed plurals in lower-ranked predictions12 suggests ED model scores may not be cognitively plausible analogues to speaker behavior; if they were, we would expect forms with standard plural inflections to receive consistently high rankings.
Figure 2: Distribution of plural classes by rank in ED model output (ranks 1-5, Rhymes vs. Non-Rhymes).
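The per-suffix comparison described above can be illustrated with a short sketch: aggregate the top prediction of each random initialization into a model production percentage per wug word, then compute Spearman's ρ against the speaker percentages for that suffix. This is a hypothetical reconstruction, not the authors' code, and the data structures are assumptions.

```python
# Hypothetical per-suffix correlation analysis over multiple random seeds.
from collections import Counter
from scipy.stats import spearmanr

def model_production_pct(predictions_by_seed, suffix, words):
    """predictions_by_seed: {seed: {word: predicted_suffix}}."""
    pct = []
    for w in words:
        preds = [predictions_by_seed[s][w] for s in predictions_by_seed]
        pct.append(100.0 * Counter(preds)[suffix] / len(preds))
    return pct

def suffix_correlation(predictions_by_seed, speaker_pct, suffix):
    """speaker_pct: {word: % of speakers producing `suffix` for that word}."""
    words = sorted(speaker_pct)
    model_pct = model_production_pct(predictions_by_seed, suffix, words)
    rho, pval = spearmanr(model_pct, [speaker_pct[w] for w in words])
    return rho, pval
```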
4 Discussion The current study asks whether modern EncoderDecoder neural models learn the full set of correct generalizations — that is, human-like behavior — with respect to German number inflection, which requires the learner to generalize non-majority inflectional classes. The short answer is no: our model learns part of that set. In particular, it correctly identifies /-e/ as the ‘best’ plural class for this context. /-e/ is the most frequent class in the training data for similar inputs (neuter gender, monosyllabic, phonologically close to M95; c.f. Table 3), and it is also the plural suffix most frequently produced by speakers (Table 2). Like all plural classes, /-e/ does not characterize a majority of German nouns overall (Table 1), so the model has technically learned to generalize a minority class in its appropriate context. Nonetheless, it does not reproduce the behavior of survey participants in response to the same stimuli, which shows a more variable distribution over plural classes and different generalization patterns for Non-Rhymes relative to Rhymes. 12Interestingly, while less frequent classes such as /-s/ and /-∅/ appear more often in the model’s lower-ranked outputs, the class /-(e)n/ is almost never predicted — despite being the second most frequent class in speaker data productions. This outcome is not surprising when one considers that the model is trained to produce one correct form rather than a distribution over plausible forms; however, this is exactly the task faced by human language learners as well. All the models of morphology discussed here assume that exposure to correct forms alone should suffice for learning speaker-like behavior. Corkery et al. (2019, 3872, fn. 4) note that training on single target forms produces highly skewed ED model scores, with a great deal of probability mass on the top-ranked form and instability in lower rankings, but that training on a distribution would not be a cognitively plausible alternative. However, it could be the case that German speakers do regularly encounter variable realizations of plural forms. K¨opcke observes that German plural inflection shows regional variation, for example northern speakers using /-s/ (die M¨adels ‘girls’) where southern dialects prefer /-(e)n/ (die M¨adeln). Incorporating dialect-informed variability into training might be one way to encourage neural models toward speaker-like generalization.13 Parallel issues arise for model evaluation: how should we evaluate models of production when the target output is a distribution? On simplified versions of the task, such as classification (Hahn and Nakisa, 2000), the output distribution is constrained within a space of plausible forms, but sequence-to-sequence models deal with the open-ended domain of all possible strings. For 13Like previous studies on these stimuli, our Study 1 did not collect data on speakers’ dialect background; we are addressing this issue in follow-up research. We note that Study 1 began with an onboarding task prompting speakers to inflect existing nouns in Modern High German, which hopefully primed use of the standard variety for the following tasks. 1753 encoder-decoders, the likelihood scores produced during beam-search decoding offer an intuitive option, and K&C use these scores to evaluate their model with respect to Albright and Hayes’ wug data; however, Corkery et al. (2019) demonstrate that these model scores are not a suitable metric for that comparison. 
Other recent research has highlighted the limitations of both beam search and model scores globally in neural sequence-tosequence models (Stahlberg and Byrne, 2019). Our results provide further evidence that lower-ranked ED predictions do not reflect cognitively plausible distributions: they contain many ill-formed outputs, and omit inflectional classes such as /-(e)n/, which is prevalent in speaker productions. An alternative to model scores is to treat each randomly initialized instance of a model as an individual, and compare aggregate productions with speaker data (Goebel and Indefrey, 2000; Corkery et al., 2019). For our experiments, this did not produce the distribution observed in the speaker data. The discrepancy between speaker production and rating preferences poses another challenge, as it’s not clear how the ED model might represent these different task modalities. Beside variability, the other key discrepancy between speaker and ED behavior is the treatment of Non-Rhyme words. If German has a default plural class, it should be realized more often on these phonologically atypical stimuli than the more familiar Rhyme words. Speakers in Study 1 use /-s/ and /-(e)n/ more for Non-Rhymes than for Rhymes. These results are consistent with earlier studies: M95 found that /-s/ was the only plural form to receive higher average ratings for Non-Rhymes compared to Rhymes, and Z&L found that speakers produced both /-(e)n/ and /-s/ more often for Non-Rhymes. In contrast, the ED model appears to treat /-e/ as a default, producing /-e/ inflections for under 70% of Rhymes but over 90% of NonRhyme inputs. This asymmetry suggests that the model has not induced the full set of correct generalizations for German plural inflection — it has not recognized which plural classes are more productive for phonologically atypical nouns. In fact, the model’s preference for /-e/, the most frequent (if non-majority) suffix, is the behavior anticipated by M95: “frequency in the input to a pattern associator causes a greater tendency to generalize” (1995, 215). It seems that the productivity of less frequent inflectional classes continues to challenge neural models and limit their cognitive application. 5 Conclusions German number inflection has been claimed to have distributional properties which make it difficult for neural networks to model. Our experimental speaker data does not necessarily support all of these claims; in particular, /-s/ does not appear to be the only plural suffix which speakers treat as a ‘default’ for phonologically unfamiliar words, as the more frequent marker /-(e)n/ shows similar trends. Nonetheless, the German plural system continues to challenge ED architectures. Our neural model struggles to accurately predict the distribution of /-s/ for existing German nouns. On novel nouns, it generalizes the contextually most frequent plural marker /-e/; its predictions are less variable than speaker productions, and show different patterns of response to words which are phonologically typical (Rhymes) as opposed to atypical (Non-Rhymes). Regardless of the minority-default question, it seems that ED models do not necessarily function as good cognitive approximations for inflectional systems like German number, in which no class holds the majority. Acknowledgments The authors thank Yevgen Matusevych, Maria Corkery, Timothy O’Donnell, the Agora reading group at Edinburgh, and the ACL reviewers for helpful feedback. 
This work was supported in part by the EPSRC Centre for Doctoral Training in Data Science, funded by the UK Engineering and Physical Sciences Research Council (grant EP/L016427/1) and the University of Edinburgh. This work was also supported by a James S McDonnell Foundation Scholar Award (#220020374) to the second author. References Adam Albright and Bruce Hayes. 2003. Rules vs. analogy in English past tenses: A computational/experimental study. Cognition, 90(2):119– 161. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural Machine Translation by Jointly Learning to Align and Translate. In ICLR. Jean Berko. 1958. The Child’s Learning of English Morphology. WORD, 14(2-3):150–177. Joan Bybee. 1995. Regular morphology and the lexicon. Language and Cognitive Processes, 10(5):425– 455. 1754 Harald Clahsen. 1999a. The dual nature of the language faculty. Behavioral and Brain Sciences, 22(6):1046–1055. Harald Clahsen. 1999b. Lexical entries and rules of language: A multidisciplinary study of German inflection. Behavioral and Brain Sciences, 22(6):991– 1013. Harald Clahsen. 2016. Contributions of linguistic typology to psycholinguistics. Linguistic Typology, 20(3). Harald Clahsen, Monika Rothweiler, Andreas Woest, and Gary F. Marcus. 1992. Regular and irregular inflection in the acquisition of German noun plurals. Cognition, 45(3):225–255. Maria Corkery, Yevgen Matusevych, and Sharon Goldwater. 2019. Are we there yet? Encoder-decoder neural networks as cognitive models of English past tense inflection. In Proceedings of the 57th Conference of the Association for Computational Linguistics, pages 3868–3877, Florence, Italy. Association for Computational Linguistics. Wolfgang U Dressler. 1999. Why collapse morphological concepts? Behavioral and Brain Sciences, 22(6):1021–1021. Jeffrey L Elman. 1990. Finding structure in time. Cognitive science, 14(2):179–211. Hilke Elsen. 2002. The acquisition of German plurals. In Morphology 2000: Selected Papers from the 9th Morphology Meeting, Vienna, 24-28 February 2000, number v. 218 in Amsterdam Studies in the Theory and History of Linguistic Science, pages 117–127. J. Benjamins, Amsterdam ; Philadelphia. Rainer Goebel and Peter Indefrey. 2000. A recurrent network with short-term memory capacity learning the German-s plural. Models of language acquisition: Inductive and deductive approaches, pages 177–200. Ulrike Hahn and Ramin Charles Nakisa. 2000. German Inflection: Single Route or Dual Route? Cognitive Psychology, 41(4):313–360. Borja Herce. 2019. Deconstructing (ir)regularity. Studies in Language, 43(1):44–91. Peter Indefrey. 1999. Some problems with the lexical status of nondefault inflection. Behavioral and Brain Sciences, 22(6):1025–1025. Katharina Kann and Hinrich Sch¨utze. 2016. SingleModel Encoder-Decoder with Explicit Morphological Representation for Reinflection. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 555–560, Berlin, Germany. Association for Computational Linguistics. Christo Kirov and Ryan Cotterell. 2018. Recurrent Neural Networks in Linguistic Theory: Revisiting Pinker and Prince (1988) and the Past Tense Debate. Transactions of the Association for Computational Linguistics, 6:651–665. Christo Kirov, John Sylak-Glassman, Roger Que, and David Yarowsky. 2016. Very-large Scale Parsing and Normalization of Wiktionary Morphological Paradigms. 
In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), pages 3121–3126, Portoroˇz, Slovenia. European Language Resources Association (ELRA). Guillaume Klein, Yoon Kim, Yuntian Deng, Vincent Nguyen, Jean Senellart, and Alexander M. Rush. 2018. OpenNMT: Neural Machine Translation Toolkit. arXiv:1805.11462 [cs]. Klaus-Michael K¨opcke. 1988. Schemas in German plural formation. Lingua, 74(4):303–335. Klaus-Michael K¨opcke. 1998. The acquisition of plural marking in English and German revisited: Schemata versus rules. Journal of Child Language, 25(2):293–319. Gary F Marcus, Ursula Brinkmann, Harald Clahsen, Richard Wiese, and Steven Pinker. 1995. German inflection: The exception that proves the rule. Cognitive psychology, 29(3):189–256. Robert M. Nosofsky. 1988. Exemplar-based accounts of relations between classification, recognition, and typicality. Journal of Experimental Psychology: Learning, Memory, and Cognition, 14(4):700–708. Steven Pinker and Alan Prince. 1988. On language and connectionism: Analysis of a parallel distributed processing model of language acquisition. Cognition, 28(1-2):73–193. D E Rumelhart and J McClelland. 1986. On Learning the Past Tenses of English Verbs. In Parallel Distributed Processing: Explorations in the Microstructure of Cognition, pages 216–271. MIT Press, Cambridge, MA. Mark S. Seidenberg and David C. Plaut. 2014. Quasiregularity and Its Discontents: The Legacy of the Past Tense Debate. Cognitive Science, 38(6):1190–1228. Ingrid Sonnenstuhl and Axel Huth. 2002. Processing and Representation of German -n Plurals: A Dual Mechanism Approach. Brain and Language, 81(13):276–290. Felix Stahlberg and Bill Byrne. 2019. On NMT Search Errors and Model Errors: Cat Got Your Tongue? In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3354– 3360, Hong Kong, China. Association for Computational Linguistics. 1755 Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to Sequence Learning with Neural Networks. In Advances in Neural Information Processing Systems, pages 3104–3112. John Sylak-Glassman, Christo Kirov, David Yarowsky, and Roger Que. 2015. A Language-Independent Feature Schema for Inflectional Morphology. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 674– 680, Beijing, China. Association for Computational Linguistics. Richard Wiese. 1996. The Phonology of German. Oxford University Press on Demand. Charles D. Yang. 2016. The Price of Linguistic Productivity : How Children Learn to Break the Rules of Language. The MIT Press, Cambridge, Massachusetts. Eugen Zaretsky and Benjamin P Lange. 2016. No matter how hard we try: Still no default plural marker in nonce nouns in Modern High German. In A Blend of MaLT: Selected Contributions from the Methods and Linguistic Theories Symposium 2015, number Band 15 in Bamberger Beitr¨age Zur Linguistik, pages 153– 178. University of Bamberg Press, Bamberg. 1756 A Study design A.1 Stimuli Table 6 provides the complete list of nouns used in the experiment. 
Rhymes Non-rhymes Bral Bnaupf Kach Bneik Klot Bn¨ohk Mur Fnahf Nuhl Fneik Pind Fn¨ohk Pisch Plaupf Pund Pleik Raun Pl¨ak Spand Pn¨ahf Spert Pr¨ong Vag Snauk Table 6: Experimental stimuli (Marcus et al., 1995) A.2 Procedure We designed an online survey comprising three sections, in order of presentation: 1) an introductory production task with existing German words, 2) a nonce-word production task, and 3) a nonce-word rating task. For the introductory production task, eight existing German nouns were used, one from each of the eight plural classes under consideration. The goal of this section was to familiarize participants with the task of producing the plural, and avoid biasing them toward any particular plural marker by showing all eight options. We also hoped that inflecting nouns in Modern High German would encourage participants to approach the following tasks with the standard variety primed, thus reducing the possible effects of dialectal variation. For the second and third sections, the production and rating tasks, the twenty-four M95 nonce words were presented. All stimuli were presented with neuter grammatical gender in the nominative case. In all tasks, each noun was preceded by the article Das, indicating neuter gender and singular number, and each prompt for participant responses was preceded by Die..., to indicate plural number. The eight existing nouns presented in the introductory production task were selected for neuter gender, so they followed this pattern as well. We recruited 192 participants through the online survey platform Prolific14, using the site’s demo14http://www.prolific.com graphic filters to target native German speakers. Participants were additionally asked about their age and exposure to languages other than German within the survey. Participants were shown the three tasks, introduction, production, and rating, in order, meaning that participants had to produce a plural form for all 24 nonce words before performing the rating task. For the production task, participants saw the noun on its own, preceded by Das, e.g. Das Bral. Above the response box, the text Die... appeared, to indicate that a plural form of the noun should be typed into the response box below the text. For the rating task, participants were prompted to rate each potential plural on a Likert scale of Sehr gut (‘very good’; 5) to Sehr schlecht (‘very bad’; 1). After filtering out 42 respondents who failed a preliminary attention check, data from 150 participants was available for analysis. The cleaned, anonymized survey data will be published online along with this paper.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 160–170 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 160 Cross-modal Language Generation using Pivot Stabilization for Web-scale Language Coverage Ashish V. Thapliyal Google Research [email protected] Radu Soricut Google Research [email protected] Abstract Cross-modal language generation tasks such as image captioning are directly hurt in their ability to support non-English languages by the trend of data-hungry models combined with the lack of non-English annotations. We investigate potential solutions for combining existing language-generation annotations in English with translation capabilities in order to create solutions at web-scale in both domain and language coverage. We describe an approach called Pivot-Language Generation Stabilization (PLuGS), which leverages directly at training time both existing English annotations (gold data) as well as their machinetranslated versions (silver data); at run-time, it generates first an English caption and then a corresponding target-language caption. We show that PLuGS models outperform other candidate solutions in evaluations performed over 5 different target languages, under a largedomain testset using images from the Open Images dataset. Furthermore, we find an interesting effect where the English captions generated by the PLuGS models are better than the captions generated by the original, monolingual English model. 1 Introduction Data hungry state-of-the-art neural models for language generation have the undesired potential to widen the quality gap between English and non-English languages, given the scarcity of nonEnglish labeled data. One notable exception is machine translation, which benefits from large amounts of bilingually or multilingually annotated data. But cross-modal language generation tasks, such as automatic image captioning, tend to be directly hurt by this trend: existing datasets such as Flickr (Young et al., 2014a), MSCOCO (Lin et al., 2014), and Conceptual Captions (Sharma et al., 2018) have extensive labeled data for English, but labeled data is extremely scarce in other languages (Elliott et al., 2016) (at 2 orders of magnitude less for a couple of languages, and none for the rest). In this paper, we conduct a study aimed at answering the following question: given a large annotated web-scale dataset such as Conceptual Captions (Sharma et al., 2018) in one language, and a baseline machine translation system, what is the optimal way to scale a cross-modality language generation system to new languages at web-scale? We focus our study on the task of automatic image captioning, as a representative for cross-modal language generation where back-and-forth consistency cannot be leveraged in a straightforward manner 1. 
In this framework, we proceed to test several possible solutions, as follows: (a) leverage existing English (En) image captioning datasets to train a model that generates En captions, which are then translated into a target language X; we call this approach Train-Generate-Translate (TGT); (b) leverage existing En captioning datasets and translation capabilities to first translate the data into the target language X, and then train a model that generates X -language captions; we call this approach Translate-Train-Generate (TTG); (c) stabilize the TTG approach by directly using the En gold data along with the translated training data in the X language (silver data) to train a model that first generates En captions (conditioned on the image), and then generates X -language captions (conditioned on the image and the generated En caption); this approach has En acting as a pivot language between the input modality and the X language output text, stabilizing against and reduc1We chose to focus on the cross-modality version of this problem because for the text-only modality the problem is less severe (due to existing parallel data) and also more studied (Artetxe et al., 2018), as it is amenable to exploiting backand-forth consistency as a powerful learning signal. 161 Image TGT Train Generate Translate TTG Translate Train Generate PLuGS Pivot Language Generation Stabilization Das Logo ist auf dem Computer zu sehen. (the logo can be seen on the computer.) Bild mit dem Titel Live mit einem Schritt (Image titled Live with a step) the iphone is seen in this undated image . <de> Das iPhone ist in diesem undatierten Bild zu sehen . Autoverkehr an einem regnerischen Tag (car traffic on a rainy day) Polizeiauto auf der Straße (police car on the street) a car in the city <de> ein auto in der stadt Bronzestatue im Garten (bronze statue in the garden) eine Stadt im Garten (a city in the garden) the entrance to the gardens <de> der Eingang zu den Gärten Figure 1: Examples of captions produced in German by Train-Generate-Translate (TGT), Translate-Train-Generate (TTG), and Pivot Language Generation Stabilization (PLuGS) approaches. Captions are shown in bold font. For TGT and TTG outputs, we show the English translation in parenthesis beside the caption. For the PLuGS outputs we mark the Stabilizer in the output using a light gray background. We do not explicitly show a translation for PLuGS outputs since the Stabilizer is already a translation. ing potential translation noise. We call the latter the Pivot-Language Generation Stabilization (PLuGS) approach. Examples of outputs produced by these three solutions are shown in Fig. 1. We perform extensive evaluations across five different languages (French, Italian, German, Spanish, Hindi) to compare these three approaches. The results indicate that the bilingual PLuGS models consistently perform the best in terms of captioning accuracy. Since there is very little support in the literature regarding the ability of standard evaluation metrics like BLEU (Papineni et al., 2002), ROUGE (Lin, 2004), METEOR (Banerjee and Lavie, 2005), CIDEr (Vedantam et al., 2015), and SPICE (Anderson et al., 2016) to accurately measure captioning accuracy for non-English languages, our evaluations are done using fine-grained, side-by-side human evaluations using paid raters; we explain the evaluation protocol in detail in Sec. 5. 
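As a rough run-time illustration of how the three candidate solutions differ, consider the sketch below; the callables en_captioner, translate, x_captioner, and plugs_model are placeholders rather than an actual API.

```python
# Hypothetical run-time contrast of the three approaches described above.
def caption_tgt(image, en_captioner, translate, lang):
    # Train-Generate-Translate: generate an English caption, then translate it.
    return translate(en_captioner(image), target_lang=lang)

def caption_ttg(image, x_captioner):
    # Translate-Train-Generate: the captioner was trained on machine-translated
    # (silver) captions, so it generates directly in the target language.
    return x_captioner(image)

def caption_plugs(image, plugs_model, lang):
    # PLuGS: a single model emits "English stabilizer <lang> target caption",
    # with the target caption conditioned on the stabilizer prefix.
    return plugs_model(image, lang)
```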
Besides the evaluations on bilingual PLuGS models, we also train and evaluate a multilingual PLuGS model, in which all five non-English languages considered are supported through a single model capable of generating outputs in all 5 languages. The results indicate that similar languages are reinforcing each other in the common representation space, showing quantitative gains for the Romance languages involved in our experiments. A related but perhaps less expected result is that the English captions generated by PLuGS models (what we call the Stablizer outputs) are better, as measured using side-by-side human evaluations, than captions generated by the original, monolingual English model. There is a final additional advantage to having PLuGS models as a solution: in real-world applications of image captioning, quality estimation of the resulting captions is an important component that has recently received attention (Levinboim et al., 2019). Again, labeled data for quality-estimation (QE) is only available for English2, and generating it separately for other languages of interest is expensive, time-consuming, and scales poorly. The TGT approach could directly apply a QE model at run-time on the En caption, but the subsequent translation step would need to be perfect in order not to ruin the predicted quality score. The TTG ap2https://github.com/google-research-datasets/ImageCaption-Quality-Dataset 162 proach cannot make use at run-time of an En QE model without translating the caption back to English and thus again requiring perfect translation in order not to ruin the predicted quality score. In contrast, the PLuGS approach appears to be best suited for leveraging an existing En QE model, due to the availability of the generated bilingual output that tends to maintain consistency between the generated EN- & X-language outputs, with respect to accuracy; therefore, directly applying an English QE model appears to be the most appropriate scalable solution. 2 Related Work There is a large body of work in automatic image captioning for English, starting with early work (Hodosh et al., 2013; Donahue et al., 2014; Karpathy and Fei-Fei, 2015; Kiros et al., 2015; Xu et al., 2015) based on data offered by manually annotated datasets such as Flickr30K (Young et al., 2014b) and MS-COCO (Lin et al., 2014), and more recently with work using Transformer-based models (Sharma et al., 2018; Zhao et al., 2019; Changpinyo et al., 2019) based on the web-scale Conceptual Captions dataset (Sharma et al., 2018). Generating image captions in languages other than English has been explored in the context of the WMT 2017-2018 multimodal translation sub-task on multilingual caption generation (Elliott et al., 2017). The goal of the task is to generate image captions in German and French, using a small training corpus with images and captions available in English, German and French (based on Flickr30K). In the context of that work, we use the results reported in (Caglayan et al., 2019) to quantitatively compare it against our approach. Another relevant connection is with the work in (Jaffe, 2017), which explores several LSTM-based encoder-decoder models that generate captions in different languages. The model most similar to our work is their Dual Attention model, which first generates an English caption, then an LSTM with attention over the image and the generated English caption produces a German caption. Their quantitative evaluations do not find any additional benefits for this approach. 
Our work is related to this idea, but there are key technical differences. In the PLuGS approach, we train an end-to-end model based on a Transformer (Vaswani et al., 2017) decoder that exploits the generated English-prefix via the self-attention mechanism to learn to predict the non-English target caption, conditioned on the English tokens at multiple levels through the decoder stack. Moreover, we approach this study as the search for a solution for web-scale multi-language image captioning: we employ the web-sized Conceptual Captions dataset for training, and consider the effects of using captions across multiple languages, as well as multi-language/single-model setups. 3 Model Architecture We model the output caption using a sequencegeneration approach based on Transformer Networks (Vaswani et al., 2017). The output is the sequence of sub-tokens comprising the target caption. As shown in Fig. 2, the input sequence is obtained by concatenating the following features. Global Image Embedding: We use a global image representation using the Graph-RISE model (Juan et al., 2019), a ResNet-101 model (He et al., 2016) trained for image classification at ultrafine granularity levels. This model produces a compact image embedding i of dimension Di = 64. This embedding is projected to match Transformer dimensions (set to 512 in most of our experiments) by a 2 layer DNN with linear activation and fed as the first element in the sequence of inputs to the encoder. Object Labels Embeddings: Detecting the presence of certain objects in the image (e.g. “woman”, “flag”, “laptop”) can help generate more accurate captions, since a good caption should mention the more salient objects. The object labels are generated by an object detection model which is run over the entire image. The output labels are then converted to vectors using word embeddings to obtain what we call object-label embeddings. More precisely, we detect object labels over the entire image using a ResNet-101 object-detection classifier trained on the JFT dataset (Hinton et al., 2015). The classifier produces a list of detected object-label identifiers, sorted in decreasing order by the classifier’s confidence score; we use the first sixteen of these identifiers. The identifiers are then mapped to embeddings oj using an object-label embedding layer which is pre-trained to predict label co-occurrences in web documents, using a word2vec approach (Mikolov et al., 2013). The resulting sequence of embeddings is denoted O = (o1, . . . , o|O|), where each oj has dimension Do = 163 DNNobjects DNNimage Image Object Classifier Global Features Extractor Label Embeddings Trainable Pre-trained/fixed) Text Transformer Inputs LangId VocabLangid DNNLangId Vocabtext Embedding Transformer Encoder Decoder Outputs (Shifted) Splitter Stabilizer Caption Vocabtext Encoder Outputs Transformer Decoder Embedding Encoder-decoder Attention Linear SoftMax Probs Beam Search Decoder Outputs Figure 2: The Transformer based PLuGS model. The text on the input side is used for the translation and multimodal translation experiments with the Multi30K dataset. For image captioning, no text input is provided. 256. Each member of this sequence of embeddings is projected to match Transformer dimensions by a 2 layer DNN with linear activation. This sequence of projected object-label embeddings is fed to the encoder together with the global image embedding. LangId Embeddings: When training languageaware models, we add as input the language of the target sequence. 
We specify the language using a language identifier string such as en for English, de for German, etc. We call this the LangId of the target sequence or target LangId in short. Given the target LangId, we encode it using a LangId vocabulary, project it to match Transformer dimensions with a 2 layer DNN, then append it to the encoder input sequence. Text Embeddings: All text (input or output) is encoded using byte-pair encoding (Sennrich et al., 2016) with a shared source-target vocabulary of about 4000 tokens, then embedded as described in (Vaswani et al., 2017), resulting in a sequence of text embeddings. The embeddings dimensions are chosen to match the Transformer dimensions. When performing the translation (MT) and multimodal translation (MMT) experiments in Sec. 6.1, the sequence of source text embeddings are fed to the encoder after the LangId embedding. Additionally, we reserve a token-id in the text vocabulary for each language (e.g. ⟨de⟩for German) for use as a separator in the PLuGS model output and also have a separate start-of-sequence token for each language. Decoding: We decode with beam search with beam width 5. PLuGS: For PLuGS models, in addition to the target caption we require the model to generate a ... car parked in the city < de > Encoder Outputs Decoder Layer 1 Encoder-Decoder Attention Masked Self-Attention Trainable Fixed Previous tokens Add & Normalize Voc Emb Voc Emb Voc Emb Voc Emb Voc Emb Voc Emb FF FF FF FF FF FF Add & Normalize Add & Normalize Decoder Layer k ... parked in the city < de > Auto ... Figure 3: Caption’s dependence on the Stabilizer. The target-language caption is conditioned on the Stabilizer through the Masked Self-Attention in the decoder, and on the input image through the Encoder-Decoder attention that attends to the outputs of the last encoder layer. Note that in this figure, FF stands for the feed forward network, Voc stands for the (fixed) text vocab, and Emb stands for the (trainable) text embeddings. pivot-language (En) caption which we call the Stabilizer. Specifically, we train the model over target sequences of the form Stabilizer + ⟨separator⟩+ Caption. We use ⟨$LangId⟩as the separator (i.e., for German captions we use ⟨de⟩as the separator). This approach has the advantage that it can be applied to multilingual models as well. We subsequently split the model output based on the separator to obtain two strings: the Stabilizer and the Caption. 164 Note an important technical advantage here: as shown in Fig. 3, after initially generating the Stabilizer output, the Transformer decoder is capable of exploiting it directly via the self-attention mechanism, and learn to predict the non-English Caption tokens conditioned (via teacher-forcing) on the gold-data English tokens at multiple levels through the decoder stack, in addition to the cross-attention mechanism attending to the inputs. As our results indicate, the models are capable of maintaining this advantage at run-time as well, when auto-regressive decoding is performed. 4 Datasets We perform our experiments using two different benchmarks. We use the Multi30K (Elliott et al., 2016) dataset in order to compare the effect of the PLuGS model using a resource that has been widely used in the community. We focus on Task 1 for French from (Caglayan et al., 2019), generating a translation in French based on an image and an English caption as input. The training set consists of images from the Flickr30K train and validation splits, along with the corresponding French captions. 
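Stepping back to the output format defined above before continuing with the dataset splits: the sketch below illustrates how the PLuGS training target (Stabilizer + ⟨LangId⟩ separator + Caption) and the corresponding output splitting could be implemented. The helper names and exact separator strings are illustrative assumptions; only the overall format follows the paper.

```python
# Sketch of PLuGS target construction and output splitting (Sec. 3).
def build_plugs_target(stabilizer_en: str, caption_x: str, lang_id: str) -> str:
    """Training target: English Stabilizer + <langid> separator + target-language Caption."""
    sep = f"<{lang_id}>"              # e.g. "<de>" for German captions
    return f"{stabilizer_en} {sep} {caption_x}"

def split_plugs_output(decoded: str, lang_id: str):
    """Split a decoded PLuGS sequence back into (Stabilizer, Caption)."""
    sep = f"<{lang_id}>"
    stabilizer, _, caption = decoded.partition(sep)
    return stabilizer.strip(), caption.strip()

# Example:
# build_plugs_target("a car parked in the city", "ein Auto in der Stadt geparkt", "de")
#   -> "a car parked in the city <de> ein Auto in der Stadt geparkt"
```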
The validation split consists of test2016 images and captions, and the test split consists of the test2017 images and captions. For the core results in this paper, we use the Conceptual Captions dataset (Sharma et al., 2018) as our English-annotated generation labels, in order to capture web-scale phenomena related to image captioning. In addition, we use Google Translate as the translation engine (both for the run-time translations needed for the TGT approach and the training-time translations needed for the TTG and PLuGS approaches), targeting French, Italian, German, Spanish, and Hindi as target languages. We use the standard training and validation splits from Conceptual Captions for developing our models. We report the results using a set of 1,000 randomly samples images from the Open Images Dataset (Kuznetsova et al., 2018). We refer to this test set as OID1k when reporting our results. 5 Evaluation In the experiments done using the Multi30K dataset, we are reporting results using the METEOR (Banerjee and Lavie, 2005) metric, in line with previous work. For the experiments performed using the Conceptual Captions dataset, we have found that automated evaluation metrics for image captioning such as BLEU (Papineni et al., 2002), ROUGE (Lin, 2004), CIDEr (Vedantam et al., 2015), and SPICE (Anderson et al., 2016) cannot accurately measure captioning accuracy for non-English languages. However, we are reporting CIDEr numbers as a point of comparison, and contrast these numbers with human evaluation results. We describe the human evaluation framework we use next. 5.1 Human Side-by-Side Evaluation We perform side-by-side human evaluation for comparing model outputs. To compare two image captioning models A (baseline) vs B, we generate captions for these images with each model and ask human raters to compare them. As illustrated in Fig. 4, the raters are shown the image with the two captions randomly placed to the left vs. right, and are asked to compare the captions on a side-by-side rating scale. In addition, they are asked to also provide an absolute rating for each caption. The absolute rating provides a cross-check on the comparison. Each image and associated captions are rated by three raters in our experiments. We calculate the following statistics using the resulting side-by-side rating comparisons: Wins: Percent of images where majority of raters (i.e. 2 out of 3) marked Caption B as better (after derandomization). Losses: Percent of images where majority of raters marked Caption A as better. Gainsxs = Wins −Losses We also calculate the following statistics using the resulting absolute ratings: AAccept = Percent of images where majority of raters mark caption A as Acceptable, Good, or Excellent. BAccept = Percent of images where majority of raters mark caption B as Acceptable, Good, or Excellent. GainAccept = BAccept −AAccept The advantages of the Gainsxs and GainAccept metrics is that they are intuitive, i.e., they measure the absolute increase in accuracy between the two experimental conditions3 3Inter-rater agreement analysis shows that for each evaluation comparing two models, two of the three raters agree on Win/Loss/Same for 90% to 95% of the items. Further, for more than 98% of the items using the difference between the absolute ratings gives the same Win/Loss/Same values as obtained from the side-by-side ratings. Also, for 80% to 85% of the absolute ratings, two of the three raters agree on the rating. 
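As a concrete rendering of the side-by-side statistics defined in Sec. 5.1, the sketch below computes Wins, Losses, Gain_sxs, the Accept rates, and Gain_Accept from three ratings per image. The assumed record layout (one preference label and two absolute ratings per rater, already derandomized) is ours, not the authors' data format.

```python
# Sketch of the SxS statistics: Wins, Losses, Gain_sxs, AAccept, BAccept, Gain_Accept.
# items: list over images; each element is a list of three rater tuples
# (preference in {"A", "B", "same"}, absolute rating for A, absolute rating for B).
from collections import Counter

ACCEPTABLE = {"Excellent", "Good", "Acceptable"}

def majority(values, target):
    """True if at least 2 of the 3 raters chose `target`."""
    return Counter(values)[target] >= 2

def sxs_statistics(items):
    n = len(items)
    wins = sum(majority([r[0] for r in ratings], "B") for ratings in items) / n * 100
    losses = sum(majority([r[0] for r in ratings], "A") for ratings in items) / n * 100
    a_accept = sum(sum(r[1] in ACCEPTABLE for r in ratings) >= 2 for ratings in items) / n * 100
    b_accept = sum(sum(r[2] in ACCEPTABLE for r in ratings) >= 2 for ratings in items) / n * 100
    return {"Wins": wins, "Losses": losses, "Gain_sxs": wins - losses,
            "AAccept": a_accept, "BAccept": b_accept, "Gain_Accept": b_accept - a_accept}
```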
165 Caption A: tractor seed in the morning followed by seagulls Caption B: tractor plowing the field How well does Caption A above describe the image? Excellent Good Acceptable Bad Not enough information How well does Caption B above describe the image? Excellent Good Acceptable Bad Not enough information Much Better Better Slightly Better About the same Slightly Better Better Much Better Please compare Caption A to Caption B: Now select individual ratings for each caption: Figure 4: Side-by-side human evaluation of two image captions. The same template is used for evaluating English as well as the 5 languages targeted. 5.2 Training Details Multi30K: For the experiments using this dataset, we use a Transformer Network (Vaswani et al., 2017) with 3 encoder and 3 decoder layers, 8 heads, and model dimension 512. We use the Adam optimizer (Kingma and Ba, 2015), and do a hyperparameter search over learning rates {3e−4, e−4, 3e−5, e−5} with linear warmup over 16000 steps followed by exponential decay over {50k, 100k} steps. We use 5e−6 as the weight for L2 regularization. We train with a batch size of 1024, using a dropout of 0.3, on 8 TPU (You et al., 2019) cores. Conceptual Captions: For all except large multilingual models, we use a vanilla Transformer with 6 encoder and decoder layers, 8 heads, and model dimension 512. We use the SGD optimizer, and do a hyperparameter search over learning rates {0.12, 0.15, 0.18, 0.21, 0.24} with linear warmup over 16000 steps followed by exponential decay over {350k, 450k} steps. For multilingual models, we also use linear warmup over 80000 steps. We use 1e−5 as the weight for L2 regularization. We train with a batch size of 4096, using a dropout of 0.3 on 32 TPU (You et al., 2019) cores. For large multilingual models, we use a Transformer with 10 encoder and decoder layers, 12 heads, and model dimension 7684 We also use a smaller learning rate of 0.09. 4Dimension chosen so that we maintain 64 dimensions per head. 6 Experiments and Results 6.1 Multi30K In order to compare our work to related work we train our models on the Multi30K dataset and compared our results to the results in (Caglayan et al., 2019). We focus on Task 1: generate a French translation based on an image and English caption as input. Table 1 shows the results on the Multi30K dataset for Multimodal Translation. Note that since (Caglayan et al., 2019) does not show numbers for the pure (no caption input) image captioning task, we show numbers for the D4 condition, where only the first 4 tokens of the English caption are provided as input to the image captioning model. We see that the PLuGS model is able to produce numbers for MT and MMT that are close to the baseline, even thought it is just an image captioning model augmented to handle these tasks. For the D4 task, which is the closest to image captioning, the PLuGS model shows improvement over the baseline. Furthermore, the results contain preliminary indications that the PLuGS approach produces better results compared to the non-PLuGS approach Task Baseline non-PLuGS PLuGS MT 70.6 66.6 67.7 MMT 70.9 64.7 65.6 IC-D4 32.3 30.6 32.8 Table 1: Multi30K test set METEOR scores for Translation (MT), Multi Modal Translation (MMT), and Image Captioning (IC-D4). The baseline is from task 1 of (Caglayan et al., 2019). 
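The learning-rate schedules in Sec. 5.2 are described only as linear warmup followed by exponential decay; one plausible reading is sketched below. The decay floor and the exact parameterisation are our assumptions, not the authors' settings, and the default values are just those mentioned for the Conceptual Captions setup.

```python
# One plausible reading of the "linear warmup + exponential decay" schedule in
# Sec. 5.2; the final_lr_fraction and decay shape are assumptions.
def learning_rate(step: int,
                  peak_lr: float = 0.15,
                  warmup_steps: int = 16000,
                  decay_steps: int = 350000,
                  final_lr_fraction: float = 0.01) -> float:
    if step < warmup_steps:
        return peak_lr * step / warmup_steps              # linear warmup
    progress = min(1.0, (step - warmup_steps) / decay_steps)
    return peak_lr * (final_lr_fraction ** progress)      # exponential decay
```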
166 Lang Wins Losses Gainsxs PLuGSAccept TGTAccept GainAccept Fr 22.8 19.4 3.4 68.7 66.5 2.2 It 22.5 18.3 4.2 52.1 49.9 2.2 De 22.6 19.1 3.5 69.2 67.7 1.5 Es 27.0 22.1 4.9 58.8 56.9 1.9 Hi 26.8 23.8 3.0 78.6 75.9 2.7 Wins Losses Gainsxs PLuGSAccept TTGAccept GainAccept Fr 18.2 17.3 0.9 66.2 64.2 2.0 It 23.7 20.8 2.9 55.1 52.2 2.9 De 21.9 19.6 2.3 64.3 63.0 1.3 Es 24.9 23.8 1.1 57.7 56.8 0.9 Hi 27.4 25.5 1.9 71.3 69.6 1.7 Table 2: SxS performance of PLuGS vs. TGT models (upper half) and PLuGS vs. TTG models (lower half), across five target languages on OID1k. The PLuGS models perform better on both GainSxS and GainAccept metrics, for all five languages. Lang TGT TTG PLuGS PLuGS-TGT PLuGS-TTG Fr 0.7890 0.7932 0.7820 -0.0070 -0.0112 It 0.7729 0.7760 0.7813 0.0084 0.0053 De 0.6220 0.6079 0.6170 0.0050 0.0091 Es 0.8042 0.7907 0.7854 -0.0188 -0.0053 Hi 0.7026 0.7149 0.7155 0.0129 0.0006 Table 3: CIDEr scores on CC-1.1 validation set for PLuGS, TGT, and TTG models for five languages. (+2.2 METEOR). 6.2 Conceptual Captions In this section, we evaluate the performance of models trained using Conceptual Captions, as detailed in Sec. 4. Table 2 presents the results on the OID1k testset for the SxS human evaluations between the TGT and PLuGS models (upper half), and between the TTG and PLuGS models (lower half). The results show that, for all five languages, the PLuGS model captions are consistently superior to the TGT captions on both GainSxS and GainAccept metrics. The GainSxS are between 3% and 5% absolute percentages between TGT and PLuGS models, and 1% and 3% absolute percentages between TTG and PLuGS models, with similar trends for the GainAccept metric. Table 3 presents the CIDEr scores on the validation set of the Conceptual Captions v1.1 (CC-1.1). The CIDEr metric fails to capture any meaningful correlation between its scores and the results of the SxS human evaluations. 6.3 Multilingual Models We further explore the hypothesis that adding more languages inside one single model may perform even better, as a result of both translation noise canceling out and the languages reinforcing each other in a common representation space. In this vein, we rename the bilingual version as PLuGS-2L, and train several additional models: a TTG-5L model, which uses a LangId token as input and uses for training all translated captions for all five languages and English; a TTGlarge-5L model, for which we simply increased the capacity of the Transformer network (see Sec. 5.2); and a PLuGS-5L model, which is trained using groundtruth labels that are concatenations (using the LangId token as separator) between golden groundtruth En labels and their translated versions, for all five target languages. Results using CIDEr are shown in Table 4. Across all languages, the TTG-5L models show a large gap in the CIDEr scores as compared to the TTG monolingual models. Using more capacity in the TTGlarge-5L model closes the gap only slightly. However, the effect of using pivotlanguage stabilizers tends to be consistently larger, in terms of CIDEr improvements, than the ones obtained by increasing the model capacity. To accurately evaluate the impact of multilinguality, we also perform SxS evaluations between the PLuGS-2L (as the base condition) vs. 
167 Lang TTG PLuGS-2L TTG-5L TTGlarge-5L PLuGS-5L Fr 0.7932 0.7820 0.6834 0.7064 0.7264 It 0.7760 0.7813 0.6538 0.6885 0.6978 De 0.6079 0.6170 0.4992 0.5367 0.5503 Es 0.7907 0.7854 0.7093 0.7203 0.7284 Hi 0.7149 0.7155 0.5891 0.6201 0.6641 Table 4: CIDEr scores on CC-1.1 validation set for bilingual and multilingual models. Lang Wins Losses Gainsxs BAccept AAccept GainAccept Fr 21.3 18.3 3.0 69.8 68.7 1.1 It 22.2 18.2 4.0 56.4 55.5 0.9 Hi 26.8 27.0 -0.2 75.6 79.5 -3.9 Table 5: SxS performance of PLuGS-5L vs. PLuGS-2L models for three languages. PLuGS-5L (as the test condition) models, over three languages (French, German, and Hindi). As shown in Table 5, the PLuGS-5L model performs better on French and Italian (3% and 4% better on Gainsxs), while performing worse on Hindi compared to the bilingual PLuGS Hindi model (-0.2% on Gainsxs, -3.9% on GainAccept). The results are encouraging, and indeed support the hypothesis that similar languages are reinforcing each other in the common representation space, explaining the gain observed for the Romance languages and the detrimental impact on Hindi. We also note here that the human evaluation results, except for Hindi, come in direct contradiction to the CIDEr metric results, which indicate a large performance hit for PLuGS-5L vs. PLuGS2L, across all languages. This reflects again the extreme care needed when judging the outcome of such experiments based on the existing automatic metrics. 6.4 Stabilizers Used as English Captions As already mentioned, the PLuGS models generate outputs of the form Stabilizer + ⟨LangId⟩+ Caption. We therefore ask the following question: how does the quality of the Stabilizer output compare to the quality of captions produced by the baseline English model (that is, the same model whose captions are translated to the target languages in the TGT approach)? We perform SxS human evaluations over Stabilizer captions (English) for three different PLuGS2L models (trained for French, German, and Spanish). As shown in Table 6, the somewhat unexpected answer is that these Stabilizer outputs are consistently better, as English captions, compared to the ones produced by the original monolingual English captioning model. The Gainsxs are between 5% and 6% absolute percentage improvements, while GainAccept also improves up to 3.4% absolute for the PLuGS-Fr model. We again note that the CIDEr metric is not able to correctly capture this trend, as shown by the results in Table 7, which indicate a flat/reverse trend. 6.5 Caption is Translation of Stabilizer So far, we have verified that both the targetlanguage Caption and the Stabilizer English outputs for the PLuGS-2L models are better compared to the alternative ways of producing them. Additionally, we want to check whether the Stabilizer and the target-language Caption are actually translations of each other, and not just independently good captions associated with the input image. In Table 9, we show the BLEU-4 score of the translation of the Stabilizer output for the PLuGS-2L models, compared to the corresponding PLuGS-2L Caption treated as a reference, using the images in the OID1k test set. The high BLEU scores are indeed confirming that the Caption outputs are close translations of the Stabilizer English outputs. This allows us to conclude that PLuGS models are indeed performing the double-duty of captioning and translation. 
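The Sec. 6.5 consistency check can be made concrete as follows: translate each Stabilizer into the target language with the external MT engine and score it against the generated Caption with BLEU-4. The sketch uses sacrebleu; the `translate` callable is a placeholder for the MT engine (Google Translate in the paper), not a real API binding.

```python
# Sketch of the Sec. 6.5 check: BLEU-4 of the translated Stabilizer against
# the generated Caption, which is treated as the reference.
import sacrebleu

def stabilizer_caption_bleu(stabilizers, captions, lang_id, translate):
    """stabilizers/captions: parallel lists of strings for one language."""
    hypotheses = [translate(s, target_lang=lang_id) for s in stabilizers]
    return sacrebleu.corpus_bleu(hypotheses, [captions]).score
```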
6.6 Stabilizers Used for Quality Estimation Finally, we perform an experiment to understand the extent to which the quality of the Stabilizer outputs is correlated with the quality of the targetlanguage Captions, so that a QE model (Levinboim et al., 2019) trained for English can be applied directly on PLuGS model outputs (more specifically, 168 Model Wins Losses Gainsxs BAccept AAccept GainAccept PLuGS-Fr 26.9 21.8 5.1 70.4 67.0 3.4 PLuGS-De 26.6 21.3 5.3 70.4 69.7 0.7 PLuGS-Es 28.0 21.8 6.2 69.7 67.8 1.9 Table 6: Performance of Stabilizers used as captions from PLuGS models for three languages vs the captions produced by the baseline English model. The PLuGS Stabilizer outputs are better captions across all three languages. Model PLuGS Baseline Diff PLuGS-Fr 0.8663 0.8772 -0.0139 PLuGS-De 0.8680 0.8772 -0.0092 PLuGS-Es 0.8590 0.8772 -0.0182 Table 7: CIDEr scores on CC-1.1 validation set for Baseline and PLuGS-Stabilizer outputs (English captions). Model Spearman ρ TGT TTG PLuGS PLuGS-Fr 0.3017 0.3318 0.5982 PLuGS-De 0.3246 0.2900 0.5862 PLuGS-Es 0.2928 0.3201 0.5566 Table 8: Spearman correlation of Stabilizer vs TGT, TTG and PLuGS Captions across three languages. on the Stabilizer outputs). To that end, we perform human evaluations of stand-alone captions. In this type of evaluation, the raters are shown an image along with a single caption, and are asked to provide an absolute rating for the caption on a 4point scale. As before, we define the metric Accept = Percent of images where majority of raters (2 of 3) marked Caption as Acceptable, Good or Excellent. Since these ratings are obtained individually for captions, we can use them to measure crosslingual quality correlations. 6.6.1 Quality Correlation between Stabilizer and Caption We use the stand-alone caption evaluation results to compute quality correlations. Table 8 shows the correlation between the median human rating for the Stabilizer (English caption) vs Caption (targetlanguage caption) for the PLuGS models considered. We see that the correlation is much higher compared to the baselines, calculated by computing the correlation of the median rating for the Stabilizer vs Caption (target-language) generated by the TGT and TTG approaches. These results confirm that the PLuGS approach appears to be best suited for leveraging an existing Fr It De Es Hi BLEU 93.3 92.9 88.2 93.9 88.2 Table 9: The BLEU-4 score of the translation of the stabilizer against the caption treated as the reference. En QE model, due to the availability of the generated Stabilizer output that tends to maintain consistency between the English and the target-language caption, with respect to content accuracy. 7 Conclusions We present a cross-modal language generation approach called PLuGS, which successfully combines the availability of an existing gold annotation (usually in English) with the availability of translation engines that automatically produce silver-data annotations. The result is a multilingual engine capable of generating high-quality outputs in the target languages, with no gold annotations needed for these languages. We show that, for image captioning, the PLuGS approach out-performs other alternatives, while also providing the ability to pack multiple languages in a single model for increased performance. Surprisingly, by considering the generated outputs in the original language of the annotation (Stabilizer outputs), we find that the quality of the Stabilizers is higher compared to the outputs of a model trained on the original annotated data. 
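As a concrete illustration of the Sec. 6.6.1 correlation analysis referenced above, the sketch below computes the Spearman correlation between the median stand-alone rating of the Stabilizer (English) and of the target-language Caption for each image. The numeric mapping of the 4-point rating scale is an assumption.

```python
# Sketch of the Sec. 6.6.1 correlation between Stabilizer and Caption ratings.
from statistics import median
from scipy.stats import spearmanr

SCALE = {"Bad": 0, "Acceptable": 1, "Good": 2, "Excellent": 3}  # assumed mapping

def stabilizer_caption_correlation(stabilizer_ratings, caption_ratings):
    """Each argument: list of per-image lists of three rater labels."""
    x = [median(SCALE[r] for r in image) for image in stabilizer_ratings]
    y = [median(SCALE[r] for r in image) for image in caption_ratings]
    rho, p_value = spearmanr(x, y)
    return rho, p_value
```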
Overall, our results can be understood as a successful instance of transfer learning from a unimodal task (text-to-text translation) to a crossmodal task (image-to-text generation), which allows us to indirectly leverage the abundance of text-only parallel data annotations across many languages to improve the quality of an annotation-poor cross-modal setup. References Peter Anderson, Basura Fernando, Mark Johnson, and Stephen Gould. 2016. SPICE: semantic propositional image caption evaluation. In ECCV. 169 Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018. Unsupervised statistical machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3632– 3642, Brussels, Belgium. Association for Computational Linguistics. Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization. Ozan Caglayan, Pranava Madhyastha, Lucia Specia, and Lo¨ıc Barrault. 2019. Probing the need for visual context in multimodal machine translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 27, 2019, Volume 1 (Long and Short Papers), pages 4159–4170. Soravit Changpinyo, Bo Pang, Piyush Sharma, and Radu Soricut. 2019. Decoupled box proposal and featurization with ultrafine-grained semantic labels improve image captioning and visual question answering. In EMNLP-IJCNLP. Jeff Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, and Trevor Darrell. 2014. Long-term recurrent convolutional networks for visual recognition and description. In Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Desmond Elliott, Stella Frank, Lo¨ıc Barrault, Fethi Bougares, and Lucia Specia. 2017. Findings of the second shared task on multimodal machine translation and multilingual image description. In Proceedings of the Second Conference on Machine Translation, WMT 2017, Copenhagen, Denmark, September 7-8, 2017, pages 215–233. Desmond Elliott, Stella Frank, Khalil Sima’an, and Lucia Specia. 2016. Multi30k: Multilingual englishgerman image descriptions. In Proceedings of the 5th Workshop on Vision and Language, hosted by the 54th Annual Meeting of the Association for Computational Linguistics, VL@ACL 2016, August 12, Berlin, Germany. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of CVPR. Geoffrey Hinton, Oriol Vinyals, and Jeffrey Dean. 2015. Distilling the knowledge in a neural network. In NIPS Deep Learning and Representation Learning Workshop. Micah Hodosh, Peter Young, and Julia Hockenmaier. 2013. Framing image description as a ranking task: Data, models and evaluation metrics. JAIR. Alan Jaffe. 2017. Generating image descriptions using multilingual data. In Proceedings of the Second Conference on Machine Translation, WMT 2017, Copenhagen, Denmark, September 7-8, 2017, pages 458–464. Da-Cheng Juan, Chun-Ta Lu, Zhen Li, Futang Peng, Aleksei Timofeev, Yi-Ting Chen, Yaxi Gao, Tom Duerig, Andrew Tomkins, and Sujith Ravi. 2019. Graph-rise: Graph-regularized image semantic embedding. CoRR, abs/1902.10814. Andrej Karpathy and Li Fei-Fei. 2015. 
Deep visualsemantic alignments for generating image descriptions. In Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. ICLR. Ryan Kiros, Ruslan Salakhutdinov, and Richard S Zemel. 2015. Unifying visual-semantic embeddings with multimodal neural language models. Transactions of the Association for Computational Linguistics. Alina Kuznetsova, Hassan Rom, Neil Alldrin, Jasper R. R. Uijlings, Ivan Krasin, Jordi Pont-Tuset, Shahab Kamali, Stefan Popov, Matteo Malloci, Tom Duerig, and Vittorio Ferrari. 2018. The open images dataset V4: unified image classification, object detection, and visual relationship detection at scale. CoRR, abs/1811.00982. T. Levinboim, A. Thapliyal, P. Sharma, and R. Soricut. 2019. Quality estimation for image captions based on large-scale human evaluations. arXiv preprint arXiv:1909.03396. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text Summarization Branches Out. Tsung-Yi Lin, Michael Maire, Serge J. Belongie, Lubomir D. Bourdev, Ross B. Girshick, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll´ar, and C. Lawrence Zitnick. 2014. Microsoft COCO: common objects in context. In Proceedings of ECCV. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Proceedings of NeurIPS. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: A method for automatic evaluation of machine translation. In Proceedings of ACL. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the ACL. Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. 2018. Conceptual Captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In Proceedings of ACL. 170 Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of NeurIPS. Ramakrishna Vedantam, C. Lawrence Zitnick, and Devi Parikh. 2015. CIDEr: Consensus-based image description evaluation. In Proceedings of CVPR. Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In Proceedings of ICML. Yang You, Zhao Zhang, Cho-Jui Hsieh, James Demmel, and Kurt Keutzer. 2019. Fast deep neural network training on distributed systems and cloud tpus. IEEE Trans. Parallel Distrib. Syst., 30(11):2449– 2462. Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. 2014a. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. TACL, 2:67–78. Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. 2014b. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. TACL, 2:67–78. Sanqiang Zhao, Piyush Sharma, Tomer Levinboim, and Radu Soricut. 2019. Informative image captioning with external sources of information. In ACL.
2020
16
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1757–1762 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 1757 Overestimation of Syntactic Representation in Neural Language Models Jordan Kodner University of Pennsylvania Dept. of Linguistics [email protected] Nitish Gupta University of Pennsylvania Dept. of Computer and Information Science [email protected] Abstract With the advent of powerful neural language models over the last few years, research attention has increasingly focused on what aspects of language they represent that make them so successful. Several testing methodologies have been developed to probe models’ syntactic representations. One popular method for determining a model’s ability to induce syntactic structure trains a model on strings generated according to a template then tests the model’s ability to distinguish such strings from superficially similar ones with different syntax. We illustrate a fundamental problem with this approach by reproducing positive results from a recent paper with two non-syntactic baseline language models: an n-gram model and an LSTM model trained on scrambled inputs. 1 Introduction In recent years, RNN-based systems have proven excellent at a wide range of NLP tasks, sometimes achieving or even surpassing human performance on popular benchmarks. Their success stems from the complex but hard to interpret, representations that they learn from data. Given that syntax plays a critical role in human language competence, it is natural to ask whether part of what makes these models successful on language tasks is an ability to encode something akin to syntax. This question pertains to syntax “in the meaningful sense,” that is, the latent, hierarchical, largely context-free phrase structure underpinning human language as opposed to superficial or shallow issues of word order (Chomsky, 1957; Marcus, 1984; Everaert et al., 2015; Linzen et al., 2016). Clearly, syntactic information can be explicitly incorporated into neural systems to great effect (e.g., Dyer et al., 2016; Swayamdipta et al., 2018). Less certain is whether such systems induce something akin to hierarchical structure (henceforth, “syntax”) on their own when not explicitly taught to do so. Uncovering what an RNN actually represents is notoriously difficult, and several methods for probing RNNs’ linguistic representations have been developed to approach the problem. Most directly, one can extract finite automata (e.g., Weiss et al., 2017) from the network or measure its state as it processes inputs to determine which neurons attend to what features (e.g., Shi et al., 2016; Linzen et al., 2016; Tenney et al., 2019). Alternatively, one can present a task which only a syntactic model should be able to solve, such as grammaticality discrimination or an agreement task, and then infer if a model has syntactic representations based on its behavior (Linzen et al., 2016; Ettinger et al., 2018; Gulordava et al., 2018; Warstadt et al., 2019). In practice, simple sentences far outnumber the ones that require syntax in any natural corpus, which may obscure evaluation (Linzen et al., 2016). One way around this, referred to here as templatebased probing, is to either automatically generate sentences with a particular structure or extract just the relevant ones from a much larger corpus. 
Templates have been used in a wide range of studies, including grammaticality prediction (e.g., Warstadt et al., 2019), long-distance dependency resolution, and agreement prediction tasks (e.g., Gulordava et al., 2018). By focusing on just relevant structures that match a given template rather than the gamut of naturally occurring sentence, templatebased probing offers a controlled setting for evaluating specific aspects of a model’s representation. The crux of behavioral evaluation is the assertion that the chosen task effectively distinguishes between a model that forms syntactic representations and one which does not. This must be demonstrated for each task – if a model that does not capture syntax can pass the evaluation, then there is no conclusion to be drawn. However, this step is often omitted (but not always, e.g., Gulordava et al., 1758 2018; Warstadt et al., 2019). Moreover, templatebased generation removes the natural sparse and diverse distribution of sentence types, increasing the chance that a system might pick up on nonsyntactic patterns in the data, further increasing the importance of a clear baseline. This problem is most clearly illustrated with an example. In the following sections, we introduce Prasad et al.’s (2019) novel psycholinguisticsinspired template-based probe of relative clause types, which was taken as evidence in support of syntactic representation in LSTMs. We then pass PvSL’s test with two non-syntactic baselines: an n-gram LM which can only capture short-distance word order of concrete types (Section 3), and an LSTM trained on scrambled inputs (Section 4). These baselines show that a combination of collocation and lexical representation can account for PvSL’s results, which highlights a critical flaw in that experimental design. Following that, we argue that it is unlikely that LSTMs induce syntactic representations given current evidence and suggest an alternative angle for the question (Section 5). 2 Prasad, van Schijndel, & Linzen 2019 Prasad et al. (PvSL; 2019) leverage an analogy from psycholinguistic syntactic priming to test whether an LSTM is able to distinguish between sentences with different syntactic structures. When human subjects are primed by receiving an example of some input, their expectation of receiving similar subsequent input will temporarily increase relative to their expectation of other inputs. This can be used to test questions about syntax because once one is primed with sentences with a specific structure, subsequent sentences with shared structure will tend to show decreased surprisal responses relative to those with different structures. PvSL observe that this procedure may be applied to neural networks as well. Since a model’s surprisal upon receiving some input decreases as it receives subsequent similar inputs, one could cumulatively “prime” a model by adapting it toward a certain class of input (van Schijndel and Linzen, 2018). As the reasoning goes, if the model can be primed for a particular syntactic structure, that implies that it is able to recognize that structure and therefore has learned a representation for it. This paradigm is used to assess an LSTM’s ability to distinguish between five superficially similar but structurally distinct sentences types: those containing an unreduced object relative clause (RC), reduced object RC, unreduced passive subject RC, unreduced passive subject RC, and active subject RC, as well as two types matched for lexical content: passive subj./obj. 
RC-matched coordination sentences and active subj. RC-matched coordination. (1-2) present an example object RC and subject RC sentence to illustrate the structures.1 These are distinguished syntactically by the origin of their subjects. In the first case, the subject of the sentence, ‘the cake,’ is also the object of the relative clause (position indicated by underscore), but in the second case, the sentence subject, ‘the baker,’ is also the subject of the relative clause. (1) unreduced obj. RC: The caket [that the baker baked t] impressed the customers. (2) unreduced subj. RC: The bakert [that t baked the cake] impressed the customers. As PvSL note, if a model were able to track the position of the implicit syntactic origin, it would be able to distinguish these sentence types, so one would expect the model to exhibit a greater adaptation effect (greater decrease in surprisal) when primed and tested on the same sentence type than if primed on one type and tested on the other. 2.1 Main Experiment PvSL populated templates to generate five sets of 20 adaptation and 50 test sentences for each sentence type with lexical items chosen to minimize lexical overlap between corresponding adaptation and test sets. Modifiers were optionally inserted in order to vary surface word order somewhat, and generated sentences were constrained to be felicitous, that is, they all made plausible semantic sense. They trained 75 LSTM language models (van Schijndel and Linzen, 2018) on five splits of the WikiText-103 corpus. Average surprisal was computed for each model for each test set, then each model was adapted to (“primed for”) each sentence type. They were then retested on the same test sets. The difference between pre- and post-adaptation surprisal (“adaptation effect”) for each adaptation sentence type/test type pair was recorded, and adaptation effects were averaged across all models for each sentence type. They establish a consistently and significantly stronger adaptation effect for same-type adaptation and test runs than different-type runs (PvSL 1More examples can be found in PvSL §4.1. 1759 Figure 1: Average same-type vs. different-type adaptation effects for n-gram models. All differences are statistically significant except for object coordination. §5.2), a stronger effect for RCs tested on models adapted for RCs rather than coordination sentences and vice-versa (PvSL §5.3), and for runs matched for passive voice over mismatched runs and for runs matched for reduction over mismatched runs (PvSL §5.4). Altogether, this is consistent with their hypothesis that the LSTM LMs are capturing abstract syntactic properties of their inputs. Although the results are impressive, there are potential issues with their suggested interpretation. Namely, there may still be sufficient superficial word order information to achieve the effect despite the addition of optional modifiers (e.g., if unreduced object RCs often contain the bigram “that the,” but unreduced subject RCs never do). Also, the felicity constraint means that the lexical items that appear in each sentence type should pattern together in the training data (i.e., verbs that are more likely to appear in object RCs are likely to pattern similarly in other constructions too). We test both possibilities in the following sections. 3 N-Gram Model We begin by training an n-gram language model (through 4-grams) with Knesser-Ney smoothing (Ney et al., 1994) with the NLTK toolkit to determine whether it could be primed to distinguish PvSL’s sentence types. 
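A minimal sketch of the n-gram baseline of Sec. 3 is given below: a 4-gram Kneser-Ney language model trained with NLTK, a per-sentence surprisal function, and an adaptation-effect computation (mean test-set surprisal before adaptation minus after). Treating adaptation as re-training with the adaptation sentences added is our simplification, and tokenization, corpus handling, and unknown-word treatment are likewise simplified.

```python
# Sketch of the n-gram baseline: 4-gram Kneser-Ney LM (NLTK), sentence
# surprisal, and a simplified adaptation-effect computation.
from nltk.lm import KneserNeyInterpolated
from nltk.lm.preprocessing import padded_everygram_pipeline

ORDER = 4

def train_lm(tokenized_sents):
    train_ngrams, vocab = padded_everygram_pipeline(ORDER, tokenized_sents)
    lm = KneserNeyInterpolated(ORDER)
    lm.fit(train_ngrams, vocab)
    return lm

def sentence_surprisal(lm, tokens):
    """Total negative log2-probability of a sentence under the LM."""
    padded = ["<s>"] * (ORDER - 1) + list(tokens) + ["</s>"]
    total = 0.0
    for i in range(ORDER - 1, len(padded)):
        context = tuple(padded[i - ORDER + 1:i])
        total += -lm.logscore(padded[i], context)
    return total

def adaptation_effect(train_sents, adapt_sents, test_sents):
    """Adaptation effect = mean test surprisal before adaptation minus after."""
    before = train_lm(train_sents)
    after = train_lm(train_sents + adapt_sents)  # our stand-in for "adaptation"
    pre = sum(sentence_surprisal(before, s) for s in test_sents) / len(test_sents)
    post = sum(sentence_surprisal(after, s) for s in test_sents) / len(test_sents)
    return pre - post
```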
An n-gram LM can only learn surface collocations and so cannot capture (hierarchical) syntax, so if it produces a significant differential adaptation effect, then the experiment is not able to discriminate between models which capture syntax from those which do not. Adaptation and testing were carried out with PvSL’s adaptation and test sets, and LM training was modified slightly to address n-gram models’ characteristics. They have no recency bias, unlike RNNs, which diminishes the impact of adaptation. As such, 20 smaller models were trained on disjoint Figure 2: Average RC vs. coordination adaptation effects for n-gram models. Adapt on coord. is significant subsets of WikiText-2 rather than the full-sized WikiText-103 subsets. Plotting and statistical analysis were carried out with PvSL’s code2. Figure 1 shows the average adaptation effect observed when the models are adapted and tested on the same sentence type or different sentence types. Importantly, the same-type adaptation effect is greater than the different-type effect for six of seven sentence types (unreduced passive RC is reversed). Although the adaptation effect is uniformly weaker than observed for PvSL’s LSTM LMs, there is a statistically significant difference between the same-type and different-type effects for six of seven sentence types. Figure 2 compares the adaptation effect over RCs compared to coordination sentences. The ngram models show a significantly greater sametype adaptation effect for coordination but not for RCs. A small but significant increase in voiceand reduction-matched adaptation over unmatched combinations was found (matched-passive matched reduction: 0.610, matched-passive mismatchedreduction: 0.594, mismatched-passive matchedreduction: 0.575, mismatched-passive mismatchedreduction: 0.572). 4 Scrambled-Input Model Next, the same van Schijndel and Linzen (2018) trained LSTM LMs which PvSL employed were adapted on altered versions of their adaptation sets in which the word order of each sentence was scrambled to destroy the sentence’s syntax while retaining its lexical content, then tested on the original non-scrambled test sets. Even though PvSL minimize the amount of lexical overlap in the adaptation and test sets, it may be the case that the models pick up on lexical similarities because of the felicity constraint which was imposed on them. 2https://github.com/grushaprasad/RNN-Priming, with minor aesthetic changes to plots 1760 Figure 3: Average same-type vs. different-type adaptation effects for scrambled LSTM models. All differences are significant. Scrambling was random on a sentence-bysentence basis. Results were averaged across all the adaptation sets and models (as they were in PvSL), so the effect of any individual accidentally grammatical scramble was diminished. Figure 3 shows the average differential adaptation effects on these scrambled annotation runs. The same-type adaptation effect is significantly greater than different-type for six of seven sentence types (except subject coord.), and the largest relative difference is seen for unreduced passive RCs, the only type for which the n-gram models produced a reverse effect. Overall, the adaptation effect is an order of magnitude larger than for the n-gram models’ but still smaller than PvSL’s. Figure 4 shows differential adaptation effects for RC and coordination sentences. A backward effect is observed for sentences adapted on coordination, but a large positive effect is found for those adapted on RC sentences. 
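The scrambling procedure used for the scrambled-input baseline (Sec. 4) can be implemented as in the sketch below: the word order of each adaptation sentence is shuffled at random while the test sentences are left intact. This is an illustration of the idea, not the authors' script; the seeding convention is our own.

```python
# Sketch of building scrambled adaptation sets: shuffle the word order of each
# adaptation sentence to destroy syntax while keeping lexical content.
import random

def scramble_sentence(tokens, rng=random):
    shuffled = list(tokens)        # copy so the original sentence is unchanged
    rng.shuffle(shuffled)
    return shuffled

def scramble_adaptation_set(tokenized_sents, seed=0):
    rng = random.Random(seed)
    return [scramble_sentence(s, rng) for s in tokenized_sents]
```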
This is the complement of what was found for n-gram models. A significant positive difference was found between sentence types matched and unmatched in passives and reduction (matched-passive matched reduction: 0.65, matched-passive mismatched-reduction: 0.53, mismatched-passive matched-reduction: 0.53, mismatched-passive mismatched-reduction: 0.43). 5 Discussion These results call into question the van Schijndel and Linzen (2018) and Prasad et al. (2019) syntactic priming paradigm’s ability to distinguish models which represent syntax from those which rely on shallow phenomena by achieving a positive result with two non-syntactic baseline models. First, success in the priming paradigm is measured by whether or not adaptation reduces surprisal, but not by how much, so even though both baseline models tested here reduce surprisal by less than PvSL’s Figure 4: Average RC vs. coord. adaptation effects for scrambled LSTM models. Differences are significant. models on average, they still pass the success criterion. To put it another way, PvSL report quantitative results but do not actually establish what would constitute a meaningful effect size. Even though the effect sizes of both our baseline replications were smaller, PvSL could have reported the results from our baseline models instead of their actual model and drawn the same conclusions. Second, the fact that our surface word order ngram model and lexical similarity-only scrambled LSTM LMs also show surprisal effects draws into question the basic claim that only a syntactic model would respond to adaptation: it is our hypothesis that the combined effect of word order and lexical similarity are what drive the LSTM models’ larger effect. This is upheld, especially when it is noted that the adaptation effects of both baselines complement each other. Both alternative sources of information are well known in the community and have been tested in the past (Bernardy and Lappin, 2017; Gulordava et al., 2018). This reiterates the need for proper baseline testing in computational linguistics and for informative evaluations. This highlights a more general problem with template-based probing, namely, that the unnatural lack of sentence diversity imposed by the templates imposes unintended regularity for models to latch onto. Given the well-known observation that neural models will “take the easy way out” given the presence of this unintended surface information (Jia and Liang, 2017; Naik et al., 2018; Sennhauser and Berwick, 2018), and other work suggesting that LSTMs do not necessarily induce syntactic structure (Gupta and Lewis, 2018; McCoy et al., 2018; Warstadt et al., 2019), one must take successes in template-based probing studies with a grain of salt. The evaluation of non-syntactic baselines is an easy-to-implement way to combat the tendency of these behavioral probes to overestimate language models’ abilities. 1761 To improve the priming paradigm in particular, one would need to establish a success metric that discriminates between baselines and alter the experimental setup to mitigate information side channels. One possibility would be to include infelicitous “colorless green ideas” sentences with grammatical syntax (cf. Gulordava et al., 2018), which might decrease the lexical similarity problem. 
Removing the issue altogether could require enforcing completely lexically disjoint training, adaptation, and test sets, but we cannot reasonably expect a model to function when it has no generalizations to work with, and demanding lexically distinct sets (including function words) greatly limits the set of phenomena that could be studied. 5.1 An Alternative Approach As a more radical alternative, we suggest extending behavioral analysis into “consequence-based” analysis. The two have similar reasoning: from an engineering perspective, a family of models that is capable of inducing syntax is useful because it may be expected to improve performance on downstream tasks. Marcus (1984) discusses in a theory-independent way which kinds of sentences a model capturing syntax should be able to parse but a “no-explicit-syntax” model (in the modern context, probably a baseline RNN) should not (cf. Chomsky, 1957; Rimell et al., 2009; Nivre et al., 2010; Bender et al., 2011; Everaert et al., 2015). It follows then that no-explicit- and explicit-syntax models should exhibit quantitatively different behavior on tasks that require parsing such sentences. A model that solves problems that only one capable of inducing syntactic structure can solve may as well have induced syntactic structure from a practical standpoint. Consequence-based analysis would be implemented over naturalistic data rather than templates by embedding it in higher level tasks like question answering to mitigate the unnaturalness problem and demonstrate a model’s practical utility. The possibility of side-channel information is already known in relation to these higher-level tasks (e.g., Poliak et al., 2018; Geva et al., 2019), and various challenge data sets have been constructed to mitigate it in different ways (Levesque et al., 2011; Chao et al., 2017; Dua et al., 2019; Lin et al., 2019; Dasigi et al., 2019). Uniting these with a collection of hard sentence types (e.g., Marvin and Linzen, 2018; Warstadt et al., 2019) in something like a syntax-focused QA challenge set would provide new insights into which families of models capture the practical benefits of true hierarchical syntactic representation. Acknowledgments We are particularly grateful to Marten van Schijndel for sharing the van Schijndel and Linzen (2018) model checkpoints with us. We also thank Mitch Marcus, Charles Yang, and Ryan Budnick for their comments and suggestions. This work was funded by an NDSEG fellowship awarded to the first author by the ARO, in addition to funding by the ONR under Contract No. N00014-19-1-2620, and by sponsorship from the LwLL DARPA program under Contract No. FA8750-19-2-0201. (The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government.) References Emily M Bender, Dan Flickinger, Stephan Oepen, and Yi Zhang. 2011. Parser evaluation over local and non-local deep dependencies in a large corpus. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 397– 408. Association for Computational Linguistics. Jean-Philippe Bernardy and Shalom Lappin. 2017. Using deep neural networks to learn syntactic agreement. LiLT (Linguistic Issues in Language Technology), 15. Wei-Lun Chao, Hexiang Hu, and Fei Sha. 2017. Being negative but constructively: Lessons learnt from creating better visual question answering datasets. arXiv preprint arXiv:1704.07121. Noam Chomsky. 1957. Syntactic Structures. Moulton & Co. Pradeep Dasigi, Nelson F. 
Liu, Ana Marasovi´c, Noah A. Smith, and Matt Gardner. 2019. Quoref: A reading comprehension dataset with questions requiring coreferential reasoning. In EMNLP/IJCNLP. Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. 2019. DROP: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In NAACLHLT. Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A Smith. 2016. Recurrent neural network grammars. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 199–209. 1762 Allyson Ettinger, Ahmed Elgohary, Colin Phillips, and Philip Resnik. 2018. Assessing composition in sentence vector representations. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1790–1801. M.B.H. Everaert, Marinus Huybregts, Noam Chomsky, Robert Berwick, and Johan Bolhuis. 2015. Structures, not strings: Linguistics as part of the cognitive sciences. Trends in Cognitive Sciences, xx. Mor Geva, Yoav Goldberg, and Jonathan Berant. 2019. Are we modeling the task or the annotator? an investigation of annotator bias in natural language understanding datasets. In EMNLP/IJCNLP. Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, and Marco Baroni. 2018. Colorless green recurrent networks dream hierarchically. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1195–1205. Nitish Gupta and Mike Lewis. 2018. Neural compositional denotational semantics for question answering. In EMNLP/IJCNLP. Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In EMNLP/IJCNLP. Hector J. Levesque, Ernest Davis, and Leora Morgenstern. 2011. The winograd schema challenge. In KR. Kevin Lin, Oyvind Tafjord, Peter Clark, and Matt Gardner. 2019. Reasoning over paragraph effects in situations. In MRQA. Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of lstms to learn syntaxsensitive dependencies. TACL. Mitchell P Marcus. 1984. Some inadequate theories of human language processing. In Carroll J. Bever, T. and L. Miller, editors, Talking Minds: The Study of Language in the Cognitive Sciences, chapter 9, pages 253–279. MIT Press, Cambridge, MA. Rebecca Marvin and Tal Linzen. 2018. Targeted syntactic evaluation of language models. arXiv preprint arXiv:1808.09031. R Thomas McCoy, Robert Frank, and Tal Linzen. 2018. Revisiting the poverty of the stimulus: hierarchical generalization without a hierarchical bias in recurrent neural networks. arXiv preprint arXiv:1802.09091. Aakanksha Naik, Abhilasha Ravichander, Norman M. Sadeh, Carolyn Penstein Ros´e, and Graham Neubig. 2018. Stress test evaluation for natural language inference. In COLING. Hermann Ney, Ute Essen, and Reinhard Kneser. 1994. On structuring probabilistic dependences in stochastic language modelling. Computer Speech & Language. Joakim Nivre, Laura Rimell, Ryan McDonald, and Carlos Gomez-Rodriguez. 2010. Evaluation of dependency parsers on unbounded dependencies. In Proceedings of the 23rd International Conference on Computational Linguistics, pages 833–841. Association for Computational Linguistics. Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme. 2018. Hypothesis only baselines in natural language inference. In *SEM@NAACL-HLT. 
Grusha Prasad, Marten van Schijndel, and Tal Linzen. 2019. Using priming to uncover the organization of syntactic representations in neural language models. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 66–76. Laura Rimell, Stephen Clark, and Mark Steedman. 2009. Unbounded dependency recovery for parser evaluation. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 2-Volume 2, pages 813–821. Association for Computational Linguistics. Marten van Schijndel and Tal Linzen. 2018. A neural model of adaptation in reading. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4704–4710. Luzi Sennhauser and Robert Berwick. 2018. Evaluating the ability of lstms to learn context-free grammars. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 115–124. Xing Shi, Inkit Padhi, and Kevin Knight. 2016. Does string-based neural mt learn source syntax? In EMNLP/IJCNLP, pages 1526–1534. Swabha Swayamdipta, Sam Thomson, Kenton Lee, Luke Zettlemoyer, Chris Dyer, and Noah A Smith. 2018. Syntactic scaffolds for semantic structures. In EMNLP/IJCNLP, pages 3772–3782. Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019. BERT rediscovers the classical NLP pipeline. In ACL. Alex Warstadt, Alicia Parrish, Haokun Liu, Anhad Mohananey, Wei Peng, Sheng-Fu Wang, and Samuel R. Bowman. 2019. BLiMP: A benchmark of linguistic minimal pairs for English. Gail Weiss, Yoav Goldberg, and Eran Yahav. 2017. Extracting automata from recurrent neural networks using queries and counterexamples. arXiv preprint arXiv:1711.09576.
2020
160
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1763–1788 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 1763 Modelling Suspense in Short Stories as Uncertainty Reduction over Neural Representation David Wilmot and Frank Keller Institute for Language, Cognition and Computation School of Informatics, University of Edinburgh 10 Crichton Street, Edinburgh EH8 9AB, UK [email protected], [email protected] Abstract Suspense is a crucial ingredient of narrative fiction, engaging readers and making stories compelling. While there is a vast theoretical literature on suspense, it is computationally not well understood. We compare two ways for modelling suspense: surprise, a backward-looking measure of how unexpected the current state is given the story so far; and uncertainty reduction, a forward-looking measure of how unexpected the continuation of the story is. Both can be computed either directly over story representations or over their probability distributions. We propose a hierarchical language model that encodes stories and computes surprise and uncertainty reduction. Evaluating against short stories annotated with human suspense judgements, we find that uncertainty reduction over representations is the best predictor, resulting in near human accuracy. We also show that uncertainty reduction can be used to predict suspenseful events in movie synopses. 1 Introduction As current NLP research expands to include longer, fictional texts, it becomes increasingly important to understand narrative structure. Previous work has analyzed narratives at the level of characters and plot events (e.g., Gorinski and Lapata, 2018; Martin et al., 2018). However, systems that process or generate narrative texts also have to take into account what makes stories compelling and enjoyable. We follow a literary tradition that makes And then? (Forster, 1985; Rabkin, 1973) the primary question and regards suspense as a crucial factor of storytelling. Studies show that suspense is important for keeping readers’ attention (Khrypko and Andreae, 2011), promotes readers’ immersion and suspension of disbelief (Hsu et al., 2014), and plays a big part in making stories enjoyable and interesting (Oliver, 1993; Schraw et al., 2001). Computationally less well understood, suspense has only sporadically been used in story generation systems (O’Neill and Riedl, 2014; Cheong and Young, 2014). Suspense, intuitively, is a feeling of anticipation that something risky or dangerous will occur; this includes the idea both of uncertainty and jeopardy. Take the play Romeo and Juliet: Dramatic suspense is created throughout — the initial duel, the meeting at the masquerade ball, the marriage, the fight in which Tybalt is killed, and the sleeping potions leading to the death of Romeo and Juliet. At each moment, the audience is invested in something being at stake and wonders how it will end. This paper aims to model suspense in computational terms, with the ultimate goal of making it deployable in NLP systems that analyze or generate narrative fiction. We start from the assumption that concepts developed in psycholinguistics to model human language processing at the word level (Hale, 2001, 2006) can be generalised to the story level to capture suspense, the Hale model. This assumption is similar concepts to model suspense in games (Ely et al., 2015; Li et al., 2018), the Ely model. 
Common to both approaches is the idea that suspense is a form of expectation: in games, we expect to win or lose, while in stories, we expect that the narrative will end a certain way. We will therefore compare two ways of modelling narrative suspense: surprise, a backward-looking measure of how unexpected the current state is given the story so far; and uncertainty reduction, a forward-looking measure of how unexpected the continuation of the story is. Both measures can be computed either directly over story representations, or indirectly over the probability distributions over such representations. We propose a hierarchical language model based on Generative Pre-Training (GPT, Radford et al., 2018) to encode story-level representations and develop an inference scheme that uses these representations to compute both surprise and uncertainty reduction. For evaluation, we use the WritingPrompts corpus of short stories (Fan et al., 2018), part of which we annotate with human sentence-by-sentence judgements of suspense. We find that surprise over representations and over probability distributions both predict suspense judgements. However, uncertainty reduction over representations is better, resulting in near human-level accuracy. We also show that our models can be used to predict turning points, i.e., major narrative events, in movie synopses (Papalampidi et al., 2019).

2 Related Work

In narratology, uncertainty over outcomes is traditionally seen as suspenseful (e.g., O'Neill, 2013; Zillmann, 1996; Abbott, 2008). Other authors claim that suspense can exist without uncertainty (e.g., Smuts, 2008; Hoeken and van Vliet, 2000; Gerrig, 1989) and that readers feel suspense even when they read a story for the second time (Delatorre et al., 2018), which is unexpected if suspense is uncertainty; this is referred to as the paradox of suspense (Prieto-Pablos, 1998; Yanal, 1996). Considering Romeo and Juliet again, in the first view suspense is motivated primarily by uncertainty over what will happen. Who will be hurt or killed in the fight? What will happen after the marriage? However, at the beginning of the play we are told "from forth the fatal loins of these two foes, a pair of star-crossed lovers take their life", and so the suspense is more about being invested in the plot than about not knowing the outcome, aligning more with the second view: suspense can exist without uncertainty. We do not address the paradox of suspense directly in this paper, but we are guided by the debate to operationalise methods that encompass both views. The Hale model is closer to the traditional model of suspense as being about uncertainty. In contrast, the Ely model is more in line with the second view, in which uncertainty matters less than consequentially different outcomes.

In NLP, suspense is studied most directly in natural language generation, with systems such as Dramatis (O'Neill and Riedl, 2014) and Suspenser (Cheong and Young, 2014), two planning-based story generators that use the theory of Gerrig and Bernardo (1994) that suspense is created when a protagonist faces obstacles that reduce the likelihood of a successful outcome. Our approach, in contrast, models suspense using general language models fine-tuned on stories, without planning and domain knowledge. The advantage is that the model can be trained on large volumes of available narrative text without requiring expensive annotations, making it more generalisable.
Other work emphasises the role of characters and their development in story understanding (Bamman et al., 2014, 2013; Chaturvedi et al., 2017; Iyyer et al., 2016) or summarisation (Gorinski and Lapata, 2018). A further important element of narrative structure is plot, i.e., the sequence of events in which characters interact. Neural models have explicitly modelled events (Martin et al., 2018; Harrison et al., 2017; Rashkin et al., 2018) or the results of actions (Roemmele and Gordon, 2018; Liu et al., 2018a,b). On the other hand, some neural generation models (Fan et al., 2018) just use a hierarchical model on top of a language model; our architecture follows this approach.

3 Models of Suspense

3.1 Definitions

In order to formalise measures of suspense, we assume that a story consists of a sequence of sentences. These sentences are processed one by one, and the sentence at the current timepoint t is represented by an embedding e_t (see Section 4 for how embeddings are computed). Each embedding is associated with a probability P(e_t). Continuations of the story are represented by a set of possible next sentences, whose embeddings are denoted by e^i_{t+1}.

The first measure of suspense we consider is surprise (Hale, 2001), which in the psycholinguistic literature has been successfully used to predict word-based processing effort (Demberg and Keller, 2008; Roark et al., 2009; Van Schijndel and Linzen, 2018a,b). Surprise is a backward-looking predictor: it measures how unexpected the current word is given the words that preceded it (i.e., the left context). Hale formalises surprise as the negative log of the conditional probability of the current word. For stories, we compute surprise over sentences. As our sentence embeddings e_t include information about the left context e_1, ..., e_{t-1}, we can write Hale surprise as:

S^{Hale}_t = -\log P(e_t)    (1)

An alternative measure for predicting word-by-word processing effort used in psycholinguistics is entropy reduction (Hale, 2006). This measure is forward-looking: it captures how much the current word changes our expectations about the words we will encounter next (i.e., the right context). Again, we compute entropy at the story level, i.e., over sentences instead of over words. Given a probability distribution over possible next sentences P(e^i_{t+1}), we calculate the entropy of that distribution. Entropy reduction is the change of that entropy from one sentence to the next:

H_t = -\sum_i P(e^i_{t+1}) \log P(e^i_{t+1})
U^{Hale}_t = H_{t-1} - H_t    (2)

Note that we follow Frank (2013) in computing entropy over surface strings, rather than over parse states as in Hale's original formulation.

In the economics literature, Ely et al. (2015) have proposed two measures that are closely related to Hale surprise and entropy reduction. At the heart of their theory of suspense is the notion of belief in an end state. Games are a good example: the state of a tennis game changes with each point being played, making a win more or less likely. Ely et al. define surprise as the amount of change from the previous time step to the current time step. Intuitively, large state changes (e.g., one player suddenly comes close to winning) are more surprising than small ones. Representing the state at time t as e_t, Ely surprise is defined as:

S^{Ely}_t = (e_t - e_{t-1})^2    (3)

Ely et al.'s approach can be adapted for modelling suspense in stories if we assume that each sentence in a story changes the state (the characters, places, events in a story, etc.).
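As a concrete illustration of the probability-based Hale measures in Equations (1) and (2), the following minimal Python sketch computes them from sentence probabilities. It is our illustration, not the released implementation, and the probability values stand in for the model outputs described in Section 4.

    import numpy as np

    def hale_surprise(p_current):
        # Equation (1): negative log probability of the current sentence.
        return -np.log(p_current)

    def entropy(p_next):
        # Entropy of the distribution over candidate next sentences.
        p = np.asarray(p_next, dtype=float)
        p = p / p.sum()  # normalise, in case the scores are unnormalised
        return float(-(p * np.log(p)).sum())

    def hale_uncertainty_reduction(p_next_prev, p_next_curr):
        # Equation (2): drop in entropy over continuations from sentence t-1 to sentence t.
        return entropy(p_next_prev) - entropy(p_next_curr)

    # Toy example: three candidate continuations before and after reading a sentence.
    print(hale_surprise(0.05))                                               # about 3.0
    print(hale_uncertainty_reduction([0.4, 0.35, 0.25], [0.85, 0.1, 0.05]))  # positive: uncertainty reduced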
States e_t then become sentence embeddings, rather than beliefs in end states, and Ely surprise is the distance between the current embedding e_t and the previous embedding e_{t-1}. In this paper, we will use L1 and L2 distances; other authors (Li et al., 2018) experiment with information gain and KL divergence, but found worse performance when modelling suspense in games. Just like Hale surprise, Ely surprise models backward-looking prediction, but over representations, rather than over probabilities.

Ely et al. also introduce a measure of forward-looking prediction, which they define as the expected difference between the current state e_t and the next state e_{t+1}:

U^{Ely}_t = E[(e_t - e^i_{t+1})^2] = \sum_i P(e^i_{t+1}) (e_t - e^i_{t+1})^2    (4)

This is closely related to Hale entropy reduction, but here the computation is over states (sentence embeddings in our case), rather than over probability distributions. Intuitively, this measure captures how much the uncertainty about the rest of the story is reduced by the current sentence. We refer to the forward-looking measures in Equations (2) and (4) as Hale and Ely uncertainty reduction, respectively.

Ely et al. also suggest versions of their measures in which each state is weighted by a value α_t, thus accounting for the fact that some states may be more inherently suspenseful than others:

S^{\alpha Ely}_t = \alpha_t (e_t - e_{t-1})^2
U^{\alpha Ely}_t = E[\alpha_{t+1} (e_t - e^i_{t+1})^2]    (5)

We stipulate that sentences with high emotional valence are more suspenseful, as emotional involvement heightens readers' experience of suspense. This can be captured in Ely et al.'s framework by assigning the αs the scores of a sentiment classifier.

3.2 Modelling Approach

We now need to show how to compute the surprise and uncertainty reduction measures introduced in the previous section. This involves building a model that processes stories sentence by sentence, and assigns each sentence an embedding that encodes the sentence and its preceding context, as well as a probability. These outputs can then be used to compute a surprise value for the sentence. Furthermore, the model needs to be able to generate a set of possible next sentences (story continuations), each with an embedding and a probability. Generating upcoming sentences is potentially very computationally expensive, since the number of continuations grows exponentially with the number of future time steps. As an alternative, we can therefore sample possible next sentences from a corpus and use the model to assign them embeddings and probabilities. Both of these approaches will produce sets of upcoming sentences, which we can then use to compute uncertainty reduction. While we have so far only talked about the next sentences, we will also experiment with uncertainty reduction computed using longer rollouts.
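To make Equations (3)-(5) concrete, here is a small NumPy sketch of the Ely measures over sentence embeddings. It is our illustration rather than the released code; the embeddings and continuation probabilities stand in for the model outputs of Section 4, and the optional alpha weights correspond to the sentiment-based weighting described above.

    import numpy as np

    def ely_surprise(e_curr, e_prev, alpha=1.0):
        # Equations (3) and (5): (optionally weighted) squared L2 distance between
        # the current sentence embedding and the previous one.
        return alpha * float(np.sum((np.asarray(e_curr) - np.asarray(e_prev)) ** 2))

    def ely_uncertainty_reduction(e_curr, e_next_candidates, p_next, alphas=None):
        # Equations (4) and (5): expected (optionally weighted) squared distance between
        # the current state and each candidate next state, under the continuation distribution.
        p = np.asarray(p_next, dtype=float)
        p = p / p.sum()
        alphas = np.ones_like(p) if alphas is None else np.asarray(alphas, dtype=float)
        dists = np.sum((np.asarray(e_next_candidates) - np.asarray(e_curr)) ** 2, axis=1)
        return float(np.sum(p * alphas * dists))

    # Toy example: 4-dimensional "sentence embeddings" and three candidate continuations,
    # which in the full model would be sampled from the corpus or generated.
    rng = np.random.default_rng(0)
    e_prev, e_curr = rng.standard_normal(4), rng.standard_normal(4)
    candidates = rng.standard_normal((3, 4))
    print(ely_surprise(e_curr, e_prev))
    print(ely_uncertainty_reduction(e_curr, candidates, p_next=[0.5, 0.3, 0.2]))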
4 Model

4.1 Architecture

Our overall approach leverages contextualised language models, which are a powerful tool in NLP when pretrained on large amounts of text and fine-tuned on a specific task (Peters et al., 2018; Devlin et al., 2019). Specifically, we use Generative Pre-Training (GPT, Radford et al., 2018), a model which has proved successful in generation tasks (Radford et al., 2019; See et al., 2019).

Hierarchical Model Previous work found that hierarchical models show strong performance in story generation (Fan et al., 2018) and understanding tasks (Cai et al., 2017). The language model and hierarchical encoders we use are unidirectional, which matches the incremental way in which human readers process stories when they experience suspense. Figure 1 depicts the architecture of our hierarchical model (model code and scripts for evaluation are available at https://github.com/dwlmt/Story-Untangling/tree/acl-2020-dec-submission). It builds a chain of representations that anticipates what will come next in a story, allowing us to infer measures of suspense.

Figure 1: Architecture of our hierarchical model (word enc (GPT), sent enc (RNN), story enc (RNN), affine fusion concatenating word and story vectors). See text for explanation of the components word enc, sent enc, and story enc.

For a given sentence, we use GPT as our word encoder (word enc in Figure 1), which turns each word in a sentence into a word embedding w_i. Then, we use an RNN (sent enc) to turn the word embeddings of the sentence into a sentence embedding γ_i. Each sentence is represented by the hidden state of its last word, which is then fed into a second RNN (story enc) that computes a story embedding. The overall story representation is the hidden state of its last sentence. Crucially, this model also gives us e_t, a contextualised representation of the current sentence at point t in the story, to compute surprise and uncertainty reduction.

Model training includes a generative loss ℓ_gen to improve the quality of the sentences generated by the model. We concatenate the word representations w_j for all words in the latest sentence with the latest story embedding e_{max(t)}. This is run through affine ELU layers to produce enriched word embedding representations, analogous to the Deep Fusion model (Gülçehre et al., 2015), with story state instead of a translation model. The related Cold Fusion approach (Sriram et al., 2018) proved inferior.

Loss Functions To obtain the discriminatory loss ℓ_disc for a particular sentence s in a batch, we compute the dot product of the current story embedding e_t with all candidate sentence embeddings in the batch, and then take the cross-entropy across the batch with the correct next sentence:

\ell_{disc}(e^{i=s}_{t+1}) = -\log \frac{\exp(e^{i=s}_{t+1} \cdot e_t)}{\sum_i \exp(e^i_{t+1} \cdot e_t)}    (6)

Modelled on Quick Thoughts (Logeswaran and Lee, 2018), this forces the model to maximise the dot product of the correct next sentence versus other sentences in the same story, and negative examples from other stories, and so encourages representations that anticipate what happens next. The generative loss in Equation (7) is a standard LM loss, where w_j are the GPT word embeddings from the sentence and e_{max(t)} is the story context that each word is concatenated with:

\ell_{gen} = -\sum_j \log P(w_j \mid w_{j-1}, w_{j-2}, \ldots; e_{\max(t)})    (7)

The overall loss is ℓ_disc + ℓ_gen. More advanced generation losses (e.g., Zellers et al., 2019) could be used, but are an order of magnitude slower.

4.2 Inference

We compute the measures of surprise and uncertainty reduction introduced in Section 3.1 using the output of the story encoder story enc. In addition to the contextualised sentence embeddings e_t, this requires their probabilities P(e_t), and a distribution over alternative continuations P(e^i_{t+1}). We implement a recursive beam search over a tree of future sentences in the story, looking between one and three sentences ahead (rollout). The probability is calculated using the same method as the discriminatory loss, but with the cosine similarity rather than the dot product of the embeddings e_t and e^i_{t+1} fed into a softmax function. We found that cosine outperformed dot product at inference time, as the resulting probability distribution over continuations is less concentrated.
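As a concrete illustration of this inference step, the sketch below reflects our reading of the description above (not the released code): it turns a set of candidate continuation embeddings into the distribution P(e^i_{t+1}) via a softmax over cosine similarities with the current story embedding. The temperature argument is our own addition for illustration; with its default value this is the plain softmax.

    import numpy as np

    def continuation_distribution(e_curr, e_next_candidates, temperature=1.0):
        # Cosine similarity between the current story embedding and each candidate
        # continuation, turned into a probability distribution with a softmax.
        e_c = np.asarray(e_curr, dtype=float)
        cands = np.asarray(e_next_candidates, dtype=float)
        e_c = e_c / np.linalg.norm(e_c)
        cands = cands / np.linalg.norm(cands, axis=1, keepdims=True)
        logits = (cands @ e_c) / temperature
        logits -= logits.max()  # numerical stability
        probs = np.exp(logits)
        return probs / probs.sum()

    # Toy example: five sampled or generated continuations in a 768-dimensional space.
    rng = np.random.default_rng(1)
    e_t = rng.standard_normal(768)
    candidates = rng.standard_normal((5, 768))
    p_next = continuation_distribution(e_t, candidates)
    print(p_next, p_next.sum())  # a valid distribution that can feed Equations (2) and (4)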
5 Methods

Dataset The overall goal of this work is to test whether the psycholinguistic and economic theories introduced in Section 3 are able to capture human intuitions of suspense. For this, it is important to use actual stories which were written by authors with the aim of being engaging and interesting. Some of the story datasets used in NLP do not meet this criterion; for example, ROC Cloze (Mostafazadeh et al., 2016) is not suitable because the stories are very short (five sentences), lack naturalness, and are written by crowdworkers to fulfill narrow objectives, rather than to elicit reader engagement and interest. A number of authors have also pointed out technical issues with such artificial corpora (Cai et al., 2017; Sharma et al., 2018). Instead, we use WritingPrompts (Fan et al., 2018), a corpus of circa 300k short stories from the /r/WritingPrompts subreddit. These stories were created as an exercise in creative writing, resulting in stories that are interesting, natural, and of suitable length. The original split of the data into 90% train, 5% development, and 5% test was used. Pre-processing steps are described in Appendix A.

Annotation To evaluate the predictions of our model, we selected 100 stories each from the development and test sets of the WritingPrompts corpus, such that each story was between 25 and 75 sentences in length. Each sentence of these stories was judged for narrative suspense; five master workers from Amazon Mechanical Turk annotated each story after reading instructions and completing a training phase. They read one sentence at a time and provided a suspense judgement using a five-point scale consisting of Big Decrease in suspense (1% of the cases), Decrease (11%), Same (50%), Increase (31%), and Big Increase (7%). In contrast to prior work (Delatorre et al., 2018), a relative rather than an absolute scale was used. Relative judgements are easier to make while reading, though in practice the suspense curves generated are very similar, with a long upward trajectory and a flattening or dip near the end. After finishing a story, annotators had to write a short summary of the story. In the instructions, suspense was framed as dramatic tension, as pilot annotations showed that the term suspense was too closely associated with murder mystery and related genres. Annotators were asked to take the character's perspective when reading, to achieve stronger inter-annotator agreement and to align closely with literary notions of suspense. During training, all workers had to annotate a test story and achieve 85% accuracy before they could continue. Full instructions and the training story are in Appendix B. The inter-annotator agreement α (Krippendorff, 2011) was 0.52 and 0.57 for the development and test sets, respectively. Given the inherently subjective nature of the task, this is substantial agreement. This was achieved after screening out and replacing annotators who had low agreement for the stories they annotated (mean α < 0.35), showed suspiciously low reading times (mean RT < 600 ms per sentence), or whose story summaries indicated low-quality annotation.
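For reference, inter-annotator agreement of this kind can be computed with the krippendorff package on PyPI; this is an assumption on our part, since the paper does not specify which implementation was used. The sketch below uses toy data only: rows are annotators, columns are sentences, values are the five ordinal labels coded 1 to 5, and NaN marks a missing judgement.

    import numpy as np
    import krippendorff  # pip install krippendorff (assumed available)

    # Toy reliability data: 5 annotators (rows) x 6 sentences (columns),
    # labels coded 1 = Big Decrease ... 5 = Big Increase, NaN = missing.
    ratings = np.array([
        [3, 4, 4, 2, 5, 3],
        [3, 4, 5, 2, 4, 3],
        [2, 4, 4, 3, 5, np.nan],
        [3, 3, 4, 2, 5, 3],
        [3, 4, 4, 2, 4, 3],
    ], dtype=float)

    alpha = krippendorff.alpha(reliability_data=ratings, level_of_measurement="ordinal")
    print(f"Krippendorff's alpha (ordinal): {alpha:.2f}")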
Training and Inference The training used SGD with Nesterov momentum (Sutskever et al., 2013), with a learning rate of 0.01 and a momentum of 0.9. Models were run with early stopping based on the mean of the accuracies of the training tasks. For each batch, 50-sentence blocks from two different stories were chosen to ensure that the negative examples in the discriminatory loss include easy (other stories) and difficult (same story) sentences. We used the pretrained GPT weights but fine-tuned the encoder and decoder weights on our task. For the RNN components of our hierarchical model, we experimented with both GRU (Chung et al., 2015) and LSTM (Hochreiter and Schmidhuber, 1997) variants. The GRU model had two layers in both sent enc and story enc; the LSTM model had four layers each in sent enc and story enc. Both had two fusion layers, and the size of the hidden layers for both model variants was 768. We give the results of both variants on the tasks of sentence generation and sentence discrimination in Table 1.

Table 1: Results from the best epoch of training on the WritingPrompts development set. For accuracy, the baseline probability is 1 in 99; k = 10 is the accuracy over the top 10 sentences of the batch.

                               GRU    LSTM
Loss                           5.84   5.90
Discriminatory Acc.            0.55   0.54
Discriminatory Acc. k = 10     0.68   0.68
Generative Acc.                0.37   0.46
Generative Acc. k = 10         0.85   0.85
Cosine Similarity              0.48   0.50
L2 Distance                    1.73   1.59
Number of Epochs               4      2

Both perform similarly, with slightly worse loss for the LSTM variant, but faster training and better generation accuracy. Overall, model performance is strong: the LSTM variant picks out the correct sentence 54% of the time and generates it 46% of the time. This indicates that our architecture successfully captures the structure of stories.

At inference time, we obtained a set of story continuations either by random sampling or by generation. Random sampling means that n sentences were selected from the corpus and used as continuations. For generation, sentences were generated using top-k sampling (with k = 50) with the GPT language model and the approach of Radford et al. (2019), which generates better output than beam search (Holtzman et al., 2018) and can outperform a decoder (See et al., 2019). For generation, we used up to 300 words as context, enriched with the story sentence embeddings from the corresponding points in the story. For rollouts of one sentence, we generated 100 possibilities at each step; for rollouts of two, 50 possibilities; and for rollouts of three, 25 possibilities. This keeps what is an expensive inference process manageable.

Importance We follow Ely et al. in evaluating weighted versions of their surprise and uncertainty reduction measures, S^{αEly}_t and U^{αEly}_t (see Equation (5)). We obtain the α_t values by taking the sentiment scores assigned by the VADER sentiment classifier (Hutto and Gilbert, 2014) to each sentence and multiplying them by 1.0 for positive sentiment and 2.0 for negative sentiment. The stronger negative weighting reflects the observation that negative consequences can be more important than positive ones (O'Neill, 2013; Kahneman and Tversky, 2013).

Baselines We test a number of baselines as alternatives to surprise and uncertainty reduction derived from our hierarchical model.
These baselines also reflect how much change occurs from one sentence to the next in a story: WordOverlap is the Jaccard similarity between the two sentences, GloveSim is the cosine similarity between the averaged Glove (Pennington et al., 2014) word embeddings of the two sentences, and GPTSim is the cosine similarity between the GPT embeddings of the two sentences. The α baseline is the weighted VADER sentiment score. 6 Results 6.1 Narrative Suspense Task The annotator judgements are relative (amount of decrease/increase in suspense from sentence to sentence), but the model predictions are absolute values. We could convert the model predictions into discrete categories, but this would fail to capture the overall arc of the story. Instead, we convert the relative judgements into absolute suspense values, where Jt = j1 +⋅⋅⋅+ jt is the absolute value for sentence t and j1,..., jt are the relative judgements for sentences 1 to t. We use −0.2 for Big Decrease, −0.1 for Decrease, 0 for Same, 0.1 for Increase, and 0.2 for Big Increase.2 Both the absolute suspense judgements and the model predictions are normalised by converting them to z-scores. To compare model predictions and absolute suspense values, we use Spearman’s ρ (Sen, 1968) and Kendall’s τ (Kendall, 1975). Rank correlation is preferred because we are interested in whether human annotators and models view the same part of the story as more or less suspenseful; also, rank correlation methods are good at detecting trends. We compute ρ and τ between the model predictions and the judgements of each of the annotators (i.e., five times for five annotators), and then take the average. We then average these values again over the 100 stories in the test or development sets. As the human upper bound, we compute the mean pairwise correlation of the five annotators. Results Figure 2 shows surprise and uncertainty reduction measures and human suspense judgements for an example story (text and further examples in Appendix C). We performed model selection using the correlations on the development set, which are given in Table 2. We experimented with all the measures introduced in Section 3.1, computing sets of alternative sentences either us2These values were fitted with predictions (or cross-worker annotation) using 5-fold cross validation and an L1 loss to optimise the mapping. A constraint is placed so that Same is 0, increases are positive and decreases are negative with a minimum 0.05 distance between. 1769 0 5 10 15 20 25 30 35 0 0.5 1 1.5 2 2.5 3 Sentence Suspense Figure 2: Story 27, Human, SHale, SEly, UEly, UαEly. Solid lines: generated alternative continuations, dashed lines: sampled alternative continuations. ing generated continuations (Gen) or continuations sampled from the corpus (Cor), except for SEly, which can be computed without alternatives. We compared the LSTM and GRU variants (see Section 4) and experimented with rollouts of up to three sentences. We tried L1 and L2 distance for the Ely measures, but only report L1, which always performed better. Discussion On the development set (see Table 2), we observe that all baselines perform poorly, indicating that distance between simple sentence representations or raw sentiment values do not model suspense. We find that Hale surprise SHale performs well, reaching a maximum ρ of .675 on the development set. Hale uncertainty reduction UHale, however, performs consistently poorly. Ely surprise SEly also performs well, reaching as similar value as Hale surprise. 
Overall, Ely uncertainty reduction UEly is the strongest performer, with ρ = .698, numerically outperforming the human upper bound. Some other trends are clear from the development set: using GRUs reduces performance in all cases but one; rollout of more than one never leads to an improvement; sentiment weighting (prefix α in the table) always reduces performance, as it introduces considerable noise (see Figure 2). We therefore eliminate the models that correspond to these settings when we evaluate on the test set. For the test set results in Table 3 we also report upper and lower confidence bounds computed using the Fisher Z-transformation (p < 0.05). On the test set, UEly again is the best measure, with a correlation statistically indistinguishable from human performance (based on CIs). We find that absolute correlations are higher on the test set, presumably Prediction Model Roll τ ↑ ρ ↑ Human .553 .614 Baselines WordOverlap 1 .017 .026 GloveSim 1 .017 .029 GPTSim 1 .021 .031 α 1 .024 .036 SHale-Gen GRU 1 .145 .182 LSTM 1 .434 .529 SHale-Cor GRU 1 .177 .214 LSTM 1 .580 .675 UHale-Gen GRU 1 .036 .055 LSTM 1 .009 .016 UHale-Cor GRU 1 .048 .050 LSTM 1 .066 .094 SEly GRU 1 .484 .607 LSTM 1 .427 .539 SαEly GRU 1 .089 .123 LSTM 1 .115 .156 UEly-Gen GRU 1 .241 .161 2 .304 .399 LSTM 1 .610 .698 2 .393 .494 UEly-Cor GRU 1 .229 .264 2 .512 .625 3 .515 .606 LSTM 1 .594 .678 2 .564 .651 3 .555 .645 UαEly-Gen GRU 1 .216 .124 2 .219 .216 LSTM 1 .474 .604 2 .316 .418 UαEly-Cor GRU 1 .205 .254 2 .365 .470 LSTM 1 .535 .642 2 .425 .534 Table 2: Development set results for WritingPrompts for generated (Gen) or corpus sampled (Cor) alternative continuations; α indicates sentiment weighting. Bold: best model in a given category; red: best model overall. reflecting the higher human upper bound. Overall, we conclude that our hierarchical architecture successfully models human suspense 1770 Prediction τ ↑ ρ ↑ Human .652 (.039) .711 (.033) SHale-Gen .407 (.089) .495 (.081) SHale-Cor .454 (.085) .523 (.079) UHale-Gen .036 (.102) .051 (.102) UHale-Cor .061 (.100) .088 (.101) SEly .391 (.092) .504 (.082) UEly-Gen .620 (.067) .710 (.053) UEly-Cor .605 (.069) .693 (.056) Table 3: Test set results for WritingPrompts for generated (Gen) or corpus sampled (Cor) continuations. LSTM with rollout one; brackets: confidence intervals. judgements on the WritingPrompts dataset. The overall best predictor is UEly, uncertainty reduction computed over story representations. This measure combines the probability of continuation (SHale) with distance between story embeddings (SEly), which are both good predictors in their own right. This finding supports the theoretical claim that suspense is an expectation over the change in future states of a game or a story, as advanced by Ely et al. (2015). 6.2 Movie Turning Points Task and Dataset An interesting question is whether the peaks in suspense in a story correspond to important narrative events. Such events are sometimes called turning points (TPs) and occur at certain positions in a movie according to screenwriting theory (Cutting, 2016). A corpus of movie synopses annotated with turning points is available in the form of the TRIPOD dataset (Papalampidi et al., 2019). We can therefore test if surprise or uncertainty reduction predict TPs in TRIPOD. As our model is trained on a corpus of short stories, this will also serve as an out-of-domain evaluation. Papalampidi et al. (2019) assume five TPs: 1. Opportunity, 2. Change of Plans, 3. Point of no Return, 4. Major Setback, and 5. 
Climax. They derive a prior distribution of TP positions from their test set, and use this to constrain predicted turning points to windows around these prior positions. We follow this approach and select as the predicted TP the sentence with the highest surprise or uncertainty reduction value within a given constrained window. We report the same baselines as in the previous experiment, as well as the Theory Baseline, Dev D ↓ Test D ↓ Human Not reported 4.30 (3.43) Theory Baseline 9.65 (0.94) 7.47 (3.42) TAM 7.11 (1.71) 6.80 (2.63) WordOverlap 13.9 (1.45) 12.7 (3.13) GloveSim 10.2 (0.74) 10.4 (2.54) GPTSim 16.8 (1.47) 18.1 (4.71) α 11.3 (1.24) 11.2 (2.67) SHale-Gen 8.27 (0.68) 8.72 (2.27) UHale-Gen 10.9 (1.02) 10.69 (3.66) SEly 9.54 (0.56) 9.01 (1.92) SαEly 9.95 (0.78) 9.54 (2.76) UEly-Gen 8.75 (0.76) 8.38 (1.53) UEly-Cor 8.74 (0.76) 8.50 (1.69) UαEly-Gen 8.80 (0.61) 7.84 (3.34) UαEly-Cor 8.61 (0.68) 7.78 (1.61) Table 4: TP prediction on the TRIPOD development and test sets. D is the normalised distance to the gold standard; CI in brackets. which uses screenwriting theory to predict where in a movie a given TP should occur (e.g., Point of No Return theoretically occurs 50% through the movie). This baseline is hard to beat (Papalampidi et al., 2019). Results and Discussion Figure 3 plots both gold standard and predicted TPs for a sample movie synopsis (text and further examples in Appendix D). The results on the TRIPOD development and test sets are reported in Table 4 (we report both due to the small number of synopses in TRIPOD). We use our best LSTM model with a of rollout of one; the distance measure for Ely surprise and uncertainty reduction is now L2 distance, as it outperformed L1 on TRIPOD. We report results in terms of D, the normalised distance between gold standard and predicted TP positions. On the test set, the best performing model with D = 7.78 is UαEly-Cor, with UαEly-Gen only slightly worse. It is outperformed by TAM, the best model of Papalampidi et al. (2019), which however requires TP annotation at training time. UαEly-Cor is close to the Theory Baseline on the test set, an impressive result given that our model has no TP supervision and is trained on a different domain. The fact that models with sentiment 1771 0 10 20 30 40 50 0 1 2 3 4 5 6 Sentence Suspense Figure 3: Movie 15 Minutes, SHale, SEly, UEly, UαEly, ◆theory baseline, ⭑TP annotations, triangles are predicted TPs. weighting (prefix α) perform well here indicates that turning points often have an emotional resonance as well as being suspenseful. 7 Conclusions Our overall findings suggest that by implementing concepts from psycholinguistic and economic theory, we can predict human judgements of suspense in storytelling. That uncertainty reduction (UEly) outperforms probability-only (SHale) and state-only (SEly) surprise suggests that, while consequential state change is of primary importance for suspense, the probability distribution over the states is also a necessary factor. Uncertainty reduction therefore captures the view of suspense as reducing paths to a desired outcome, with more consequential shifts as the story progresses (O’Neill and Riedl, 2014; Ely et al., 2015; Perreault, 2018). This is more in line with the Smuts (2008) Desire-Frustration view of suspense, where uncertainty is secondary. 
Strong psycholinguistic claims about suspense are difficult to make due to several weaknesses in our approach, which highlight directions for future research: the proposed model does not have a higher-level understanding of event structure; most likely it picks up the textual cues that accompany dramatic changes in the text. One strand of further work is therefore analysis: Text could be artificially manipulated using structural changes, for example by switching the order of sentences, mixing multiple stories, including a summary at the beginning that foreshadows the work, masking key suspenseful words, or paraphrasing. An analogue of this would be adversarial examples used in computer vision. Additional annotations, such as how certain readers are about the outcome of the story, may also be helpful in better understanding the relationship between suspense and uncertainty. Automated interpretability methods as proposed by Sundararajan et al. (2017), could shed further light on models’ predictions. The recent success of language models in wideranging NLP tasks (e.g., Radford et al., 2019) has shown that language models are capable of learning semantically rich information implicitly. However, generating plausible future continuations is an essential part of the model. In text generation, Fan et al. (2019) have found that explicitly incorporating coreference and structured event representations into generation produces more coherent generated text. A more sophisticated model would incorporate similar ideas. Autoregressive models that generate step by step alternatives for future continuations are computationally impractical for longer rollouts and are not cognitively plausible. They also differ from the Ely et al. (2015) conception of suspense, which is in terms of Bayesian beliefs over a longer-term future state, not step by step. There is much recent work (e.g., Ha and Schmidhuber (2018); Gregor et al. (2019)), on state-space approaches that model beliefs as latent states using variational methods. In principle, these would avoid the brute-force calculation of a rollout and conceptually, anticipating longer-term states aligns with theories of suspense. Related tasks such as inverting the understanding of suspense to utilise the models in generating more suspenseful stories may also prove fruitful. This paper is a baseline that demonstrates how modern neural network models can implicitly represent text meaning and be useful in a narrative context without recourse to supervision. It provides a springboard to further interesting applications and research on suspense in storytelling. Acknowledgments The authors would like to thank the anonymous reviewers, Pinelopi Papalampidi and David Hodges for reviews of the annotation task, the AMT annotators, and Mirella Lapata, Ida Szubert, and Elizabeth Nielson for comments on the paper. Wilmot’s work is funded by an EPSRC doctoral training award. 1772 References H Porter Abbott. 2008. The Cambridge introduction to narrative. Cambridge University Press. David Bamman, Brendan O’Connor, and Noah A Smith. 2013. Learning latent personas of film characters. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 352–361. David Bamman, Ted Underwood, and Noah A. Smith. 2014. A Bayesian mixed effects model of literary character. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 370–379, Baltimore, Maryland. 
Association for Computational Linguistics. Zheng Cai, Lifu Tu, and Kevin Gimpel. 2017. Pay attention to the ending: Strong neural baselines for the roc story cloze task. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 616– 622. Snigdha Chaturvedi, Mohit Iyyer, and Hal Daume III. 2017. Unsupervised learning of evolving relationships between literary characters. In Thirty-First AAAI Conference on Artificial Intelligence. Yun-Gyung Cheong and R Michael Young. 2014. Suspenser: A story generation system for suspense. IEEE Transactions on Computational Intelligence and AI in Games, 7(1):39–52. Junyoung Chung, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. 2015. Gated feedback recurrent neural networks. In International Conference on Machine Learning, pages 2067–2075. James E Cutting. 2016. Narrative theory and the dynamics of popular movies. Psychonomic Bulletin & Review, 23(6):1713–1743. Pablo Delatorre, Carlos Le´on, Alberto G Salguero, Manuel Palomo-Duarte, and Pablo Gerv´as. 2018. Confronting a paradox: a new perspective of the impact of uncertainty in suspense. Frontiers in Psychology, 9:1392. Vera Demberg and Frank Keller. 2008. Data from eyetracking corpora as evidence for theories of syntactic processing complexity. Cognition, 101(2):193–210. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Jeffrey Ely, Alexander Frankel, and Emir Kamenica. 2015. Suspense and surprise. Journal of Political Economy, 123(1):215–260. Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hierarchical neural story generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 889–898, Melbourne, Australia. Association for Computational Linguistics. Angela Fan, Mike Lewis, and Yann Dauphin. 2019. Strategies for structuring story generation. In ACL. Edward Morgan Forster. 1985. Aspects of the Novel, volume 19. Houghton Mifflin Harcourt. Stefan L Frank. 2013. Uncertainty reduction as a measure of cognitive load in sentence comprehension. Topics in Cognitive Science, 5(3):475–494. Richard J Gerrig. 1989. Suspense in the absence of uncertainty. Journal of Memory and Language, 28(6):633–648. Richard J Gerrig and Allan BI Bernardo. 1994. Readers as problem-solvers in the experience of suspense. Poetics, 22(6):459–472. Philip John Gorinski and Mirella Lapata. 2018. What’s this movie about? a joint neural network architecture for movie content analysis. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1770–1781, New Orleans, Louisiana. Association for Computational Linguistics. Karol Gregor, George Papamakarios, Frederic Besse, Lars Buesing, and Theophane Weber. 2019. Temporal difference variational auto-encoder. In International Conference on Learning Representations. C¸ aglar G¨ulc¸ehre, Orhan Firat, Kelvin Xu, Kyunghyun Cho, Lo¨ıc Barrault, Huei-Chi Lin, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2015. On using monolingual corpora in neural machine translation. 
CoRR, abs/1503.03535. David Ha and J¨urgen Schmidhuber. 2018. Recurrent world models facilitate policy evolution. In Advances in Neural Information Processing Systems 31, pages 2451–2463. Curran Associates, Inc. https://worldmodels.github.io. John Hale. 2001. A probabilistic Earley parser as a psycholinguistic model. In Proceedings of the 2nd Conference of the North American Chapter of the Association for Computational Linguistics, volume 2, pages 159–166, Pittsburgh, PA. Association for Computational Linguistics. John Hale. 2006. Uncertainty about the rest of the sentence. Cognitive science, 30(4):643–672. 1773 Brent Harrison, Christopher Purdy, and Mark O Riedl. 2017. Toward automated story generation with markov chain monte carlo methods and deep neural networks. In Thirteenth Artificial Intelligence and Interactive Digital Entertainment Conference. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780. Hans Hoeken and Mario van Vliet. 2000. Suspense, curiosity, and surprise: How discourse structure influences the affective and cognitive processing of a story. Poetics, 27(4):277–286. Ari Holtzman, Jan Buys, Maxwell Forbes, Antoine Bosselut, David Golub, and Yejin Choi. 2018. Learning to write with cooperative discriminators. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1638–1649, Melbourne, Australia. Association for Computational Linguistics. Matthew Honnibal and Ines Montani. 2017. spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing. To appear. C. T. Hsu, M. Conrad, and A. M. Jacobs. 2014. Fiction feelings in harry potter: haemodynamic response in the mid-cingulate cortex correlates with immersive reading experience. NeuroReport, 25:1356–1361. Clayton J Hutto and Eric Gilbert. 2014. Vader: A parsimonious rule-based model for sentiment analysis of social media text. In Eighth international AAAI conference on weblogs and social media. Mohit Iyyer, Anupam Guha, Snigdha Chaturvedi, Jordan Boyd-Graber, and Hal Daum´e III. 2016. Feuding families and former friends: Unsupervised learning for dynamic fictional relationships. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1534–1544. Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2016. Bag of tricks for efficient text classification. arXiv preprint arXiv:1607.01759. Daniel Kahneman and Amos Tversky. 2013. Prospect theory: An analysis of decision under risk. In Handbook of the fundamentals of financial decision making: Part I, pages 99–127. World Scientific. MG Kendall. 1975. Rank correlation measures. Charles Griffin, London, 202:15. Y. Khrypko and P. Andreae. 2011. Towards the problem of maintaining suspense in interactive narrative. In Proceedings of the 7th Australasian Conference on Interactive Entertainment, pages 5:1–5:3. Klaus Krippendorff. 2011. Computing krippendorff’s alpha-reliability. 2011. Annenberg School for Communication Departmental Papers: Philadelphia. Zhiwei Li, Neil Bramley, and Todd M. Gureckis. 2018. Modeling dynamics of suspense and surprise. In Proceedings of the 40th Annual Meeting of the Cognitive Science Society, CogSci 2018, Madison, WI, USA, July 25-28, 2018. Chunhua Liu, Haiou Zhang, Shan Jiang, and Dong Yu. 2018a. DEMN: Distilled-exposition enhanced matching network for story comprehension. 
In Proceedings of the 32nd Pacific Asia Conference on Language, Information and Computation, Hong Kong. Association for Computational Linguistics. Fei Liu, Trevor Cohn, and Timothy Baldwin. 2018b. Narrative modeling with memory chains and semantic supervision. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 278– 284, Melbourne, Australia. Association for Computational Linguistics. Lajanugen Logeswaran and Honglak Lee. 2018. An efficient framework for learning sentence representations. In International Conference on Learning Representations. Lara J Martin, Prithviraj Ammanabrolu, Xinyu Wang, William Hancock, Shruti Singh, Brent Harrison, and Mark O Riedl. 2018. Event representations for automated story generation with deep neural nets. In Thirty-Second AAAI Conference on Artificial Intelligence. Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, and James Allen. 2016. A corpus and cloze evaluation for deeper understanding of commonsense stories. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 839–849, San Diego, California. Association for Computational Linguistics. M. B. Oliver. 1993. Exploring the paradox of the enjoyment of sad films. Human Communication Research, 19:315–342. Brian O’Neill. 2013. A computational model of suspense for the augmentation of intelligent story generation. Ph.D. thesis, Georgia Institute of Technology. Brian O’Neill and Mark Riedl. 2014. Dramatis: A computational model of suspense. In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, July 27 -31, 2014, Qu´ebec City, Qu´ebec, Canada., pages 944–950. Pinelopi Papalampidi, Frank Keller, and Mirella Lapata. 2019. Movie plot analysis via turning point identification. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 1707–1717, Hong Kong, China. Association for Computational Linguistics. 1774 Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543. Joseph Perreault. 2018. The Universal Structure of Plot Content: Suspense, Magnetic Plot Elements, and the Evolution of an Interesting Story. Ph.D. thesis, University of Idaho. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237, New Orleans, Louisiana. Association for Computational Linguistics. Juan A Prieto-Pablos. 1998. The paradox of suspense. Poetics, 26(2):99–113. Eric S Rabkin. 1973. Narrative suspense.” When Slim turned sideways...”. Ann Arbor: University of Michigan Press. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. Https://openai.com/blog/language-unsupervised/. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. 
OpenAI Blog, 1(8). Hannah Rashkin, Maarten Sap, Emily Allaway, Noah A. Smith, and Yejin Choi. 2018. Event2Mind: Commonsense inference on events, intents, and reactions. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 463–473, Melbourne, Australia. Association for Computational Linguistics. Brian Roark, Asaf Bachrach, Carlos Cardenas, and Christophe Pallier. 2009. Deriving lexical and syntactic expectation-based measures for psycholinguistic modeling via incremental top-down parsing. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 324–333, Singapore. Association for Computational Linguistics. Melissa Roemmele and Andrew Gordon. 2018. An encoder-decoder approach to predicting causal relations in stories. In Proceedings of the First Workshop on Storytelling, pages 50–59, New Orleans, Louisiana. Association for Computational Linguistics. G. Schraw, Flowerday, T., and S. Lehman. 2001. Increasing situational interest in the classroom. Educational Psychology Review, 13:211–224. Abigail See, Aneesh Pappu, Rohun Saxena, Akhila Yerukola, and Christopher D Manning. 2019. Do massively pretrained language models make better storytellers? arXiv preprint arXiv:1909.10705. Pranab Kumar Sen. 1968. Estimates of the regression coefficient based on kendall’s tau. Journal of the American statistical association, 63(324):1379– 1389. Rishi Sharma, James Allen, Omid Bakhshandeh, and Nasrin Mostafazadeh. 2018. Tackling the story ending biases in the story cloze test. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 752–757. Aaron Smuts. 2008. The desire-frustration theory of suspense. The Journal of Aesthetics and Art Criticism, 66(3):281–290. Anuroop Sriram, Heewoo Jun, Sanjeev Satheesh, and Adam Coates. 2018. Cold fusion: Training seq2seq models together with language models. In Interspeech 2018, 19th Annual Conference of the International Speech Communication Association, Hyderabad, India, 2-6 September 2018., pages 387–391. Mukund Sundararajan, Ankur Taly, and Qiqi Yan. 2017. Axiomatic attribution for deep networks. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, ICML’17, page 3319–3328. JMLR.org. Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. 2013. On the importance of initialization and momentum in deep learning. In International conference on machine learning, pages 1139– 1147. Marten Van Schijndel and Tal Linzen. 2018a. Can entropy explain successor surprisal effects in reading? CoRR, abs/1810.11481. Marten Van Schijndel and Tal Linzen. 2018b. Modeling garden path effects without explicit hierarchical syntax. In Proceedings of the 40th Annual Meeting of the Cognitive Science Society, CogSci 2018, Madison, WI, USA, July 25-28, 2018. Robert J Yanal. 1996. The paradox of suspense. The British Journal of Aesthetics, 36(2):146–159. Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, and Yejin Choi. 2019. Defending against neural fake news. CoRR, abs/1905.12616. Dolf Zillmann. 1996. The psychology of suspense in dramatic exposition. Suspense: Conceptualizations, theoretical analyses, and empirical explorations, pages 199–231. 1775 A Pre-processing WritingPrompts comes from a public forum of short stories and so is naturally noisy. 
Story authors often use punctuation in unusual ways to mark out sentences or paragraph boundaries and there are lots of spelling mistakes. Some of these cause problems with the GPT model and in some circumstances can cause it to crash. To improve the quality, sentence demarcations are left as they are from the original WritingPrompts dataset but some sentences are cleaned up and others skipped over. Skipping over is also why there sometimes are gaps in the graph plots as the sentence was ignored during training and inference. The preprocessing steps are as follows. Where substitutions are made rather than ignoring the sentence, the token is replaced by the Spacy (Honnibal and Montani, 2017) POS tag. 1. English Language: Some phrases in sentences can be non-English, Whatthelang (Joulin et al., 2016) is used to filter out these sentences. 2. Nondictionary words: PyDictionary and PyEnchant and used to check if each word is a dictionary word. If not they are replaced. 3. Repeating Symbols: Some author mark out sections by using a string of characters such as *************** or !!!!!!!!!!!!. This can cause the Pytorch GPT implementation to break so repeating characters are replaced with a single one. 4. Ignoring sentences: If after all of these replacements there are not three or more GPT word pieces ignoring the POS replacements then the sentence is skipped. The same processing applies to generating sentences in the inference. Occasionally the generated sentences can be nonsense, so the same criteria are used to exclude them. B Mechanical Turk Written Instructions These are the actual instructions given to the Mechanical Turk Annotators, plus the example in Table 5: INSTRUCTIONS For the first HIT there will be an additional training step to pass. This will take about 5 minutes. After this you will receive a code which you can enter in the code box to bypass the training for subsequent HITS. Other stories are in separate HITS, please search for ”Story dramatic tension, reading sentence by sentence” to find them. The training completion code will work for all related HITS. You will read a short story and for each sentence be asked to assess how the dramatic tension increases, decreases or stays the same. Each story will take an estimated 8-10 minutes. Judge each sentence on how the dramatic tension has changed as felt by the main characters in the story, not what you as a reader feel. Dramatic tension is the excitement or anxiousness over what will happen to the characters next, it is anticipation. Increasing levels of each of the following increase the level of dramatic tension: • Uncertainty: How uncertain are the characters involved about what will happen next? Put yourself in the characters shoes; judge the change in the tension based on how the characters perceive the situation. • Significance: How significant are the consequences of what will happen to the central characters of the story? An Example: Take a dramatic moment in a story such as a character that needs to walk along a dangerous cliff path. When the character first realises they will encounter danger the tension will rise, then tension will increase further. Other details such as falling rocks or slips will increase the tension further to a peak. When the cliff edge has been navigated safely the tension will drop. The pattern will be the same with a dramatic event such as a fight, argument, accident, romantic moment, where the tension will rise to a peak and then fall away as the tension is resolved. 
You will be presented with one sentence at a time. Once you have read the sentence, you will press one of five keys to judge the increase or decrease in dramatic tension that this sentence caused. You will use five levels (with keyboard shortcuts in brackets): • Big Decrease (A): A sudden decrease in dramatic tension of the situation. In the cliff example the person reaching the other side safely. • Decrease (S): A slow decrease in the level of tension, a more gradual drop. For example the cliff walker sees an easier route out. 1776 Annotation Sentence NA Clancy Marguerian, 154, private first class of the 150 + army , sits in his foxhole. Increase Tired cold, wet and hungry, the only thing preventing him from laying down his rifle and walking towards the enemy lines in surrender is the knowledge that however bad he has it here, life as a 50 - 100 POW is surely much worse . Increase He’s fighting to keep his eyes open and his rifle ready when the mortar shells start landing near him. Same He hunkers lower. Increase After a few minutes under the barrage, Marguerian hears hurried footsteps, a grunt, and a thud as a soldier leaps into the foxhole. Same The man’s uniform is tan , he must be a 50 - 100 . Big Increase The two men snarl and grab at each other , grappling in the small foxhole . Same Abruptly, their faces come together. Decrease “Clancy?” Decrease “Rob?” Big Decrease Rob Hall, 97, Corporal in the 50 - 100 army grins, as the situation turns from life or death struggle, to a meeting of two college friends. Decrease He lets go of Marguerian’s collar. Same “ Holy shit Clancy , you’re the last person I expected to see here ” Same “ Yeah ” “ Shit man , I didn’t think I’d ever see Mr. volunteers every saturday morning at the food shelf’ , not after The Reorganization at least ” Same “Yeah Rob , it is something isn’t it ” Decrease “ Man , I’m sorry, I tried to kill you there”. Table 5: One of the training annotation examples given to Mechanical Turk workers. The annotation labels are the recommended labels. This is an extract from a validation set WritingPrompts story. • Same (Space): Stays at a similar level. In the cliff example an ongoing description of the event. • Increase (K): A gradual increase in the tension. Loose rocks fall nearby the cliff walker. • Big Increase (L): A more sudden dramatic increase such as an argument. The cliff walker suddenly slips and falls. POST ACTUAL INSTRUCTIONS In addition to the suspense annotation. The following review questions were asked: • Please write a summary of the story in one or two sentences. • Do you think the story is interesting or not? And why? One or two sentences. • How interesting is the story? 1–5 The main purpose of this was to test if the MTurk Annotators were comprehending the stories and not trying to cheat by skipping over. Some further work through can be done to tie these into the suspense measures and also the WritingPrompts prompts. C Writing Prompts Examples The numbers are from the full WritingPrompts test set. Since random sampling was done from these from for evaluation the numbers are not in a contiguous block. There are a couple of nonsense sentences or entirely punctuation sentences. In the model these are excluded in pre-processing but included here to match the sentence segmentation. Also there are some unusual break such as “should n’t”, this is because the word segmentation produced by the Spacy tokenizer. C.1 Story 27 This is Story 27 from the test set in Figure 4, it is the same as the example in the main text: 0. 
As I finished up my research on Alligator breeding habits for a story I was tasked with writing , a bell began to ring loudly throughout the office . 1777 0 5 10 15 20 25 30 35 0 0.5 1 1.5 2 2.5 3 Sentence Suspense Figure 4: Story 27, Human, SHale, SEly, UEly, UαEly 1. I could feel the sound vibrating off the cubicle walls . 2. I looked over my cubicle wall to ask a co worker what the bell was for . 3. I watched as he calmly opened his desk drawer , to reveal a small armory . 4. There were multiple handguns , knives and magazines and other assorted weapons neatly stashed away . 5. “ What the hell is that for ? ” 6. I questioned loudly , and nervously . 7. The man looked me in the eyes , and pointed his handgun at my face . 8. I saw my life flash before my eyes , and could n’t understand what circumstances had arisen to put me in this position . 9. I heard the gun fire , and the sound of the shot rang through my ears . 10. I heard something hit the ground loudly behind me . 11. I turned to see the woman who had hired me yesterday , lying in a pool of blood on the floor . 12. She was holding a rifle in her arms . 13. I looked back at the man who had apparently just saved my life . 14. He seemed to be about 40 or so , well built , muscular and had a scar down the right side of his face that went from his forehead down to his beard . 15. “ She liked to go after the new hires ” he explained in a deep voice . 16. “ She hires the ones she wants to kill ” 17. I was n’t sure what to make of this , but my thoughts were cut off by the sounds of screaming throughout the building . 18. “ What ’s happening ” 1778 19. I asked , barely able to look my savior in the eyes . 20. “ You survive today , and you ’ll receive a bonus of $5,000 and your salary will be raised 5 % ” 21. I cut the man off . 22. “ What does that ? ” 23. He continued to speak , while motioning me to stop taking . 24. “ I ’ll keep you alive , if you give me your bonus and half your raise 25. He finished . 26. I just nodded , still unable to understand the position I was in . 27. He grabbed my arm so hard I thought it would break , and pulled me over the cubicle wall , and under his desk . 28. Then , he placed a gun in my hand . 29. “ The safety is on , and it ’s fully loaded with one in the chamber ” 30. He said , pointing to the safety switch . 31. The weapon felt heavy in my hand , I flicked the safety off with my thumb and gripped the gun tightly . 32. The man looked down at his watch . 33. “ 45 minutes to go ” C.2 Story 2066 This is Story 2066 from the test set in Figure 5: 0. The life pods are designed so we ca n’t steer . 1. Meant for being stranded in space , it broadcasts an S.O.S . 2. to the entire human empire even as it leaves the mother ship . 3. Within minutes any occupant will be gassed so they wo n’t suffer the long months , and perhaps years before a rescue . 4. As soon as your vitals show you ’re in deep sleep , it puts the entire interior into a cryogenic freeze . 5. The technology is effective , efficient and brilliant . 6. But as I ’ m being launched out of our vessel I ca n’t help but slam the hatch with my fists . 7. My ears are still ringing with the endless boom of explosions and my eyes covered in blind spots from the flashes . 8. The battle had been swift , and we humans had lost . 9. Captain ’s orders : Abandon ship . 10. Which was why I was stuck here , counting the seconds before I got put into stasis . 11. This was no Titanic . 12. 
There were ample pods for the entire crew , by the time the call was made only half of us had access to the escape pods , and a quarter of those were injured , a condition that no matter how advanced our technology was , made the life pod a null option . 13. No use being cryogenically frozen if you bleed out before the temperature even drops . 14. Better men and women than I were stuck alive on the ship , and I had to abandon them to whatever their fate may be . 15. I sit back and harness myself into the chair . 16. No use getting worked up over survivor ’s guilt now . 17. I ’ll do that when I thaw . 18. * 19. * * 20. The first thing I notice is the cold . 21. I ’ m too cold . 22. I shiver , my uniform plastered to me . 23. I frown at its tattered appearance . 24. What had happened ? 25. The last thing I remember is ... 26. The life pod . 1779 0 10 20 30 40 0 1 2 3 4 Sentence Suspense Figure 5: Story 2066, Human, SHale, SEly, UEly, UαEly 27. I ’ m still in it . 28. But I ’ ve been picked up . 29. Someone on the outside has initiated the thaw cycle . 30. At once I ’ m struck by relief . 31. Then anxiety . 32. How long was I out ? 33. How many of the crew survived ? 34. Their screams are coming back to me now , and I squirm with the pain . 35. “ Please do n’t let me be the only one , ” I whisper to myself , half pleading with fate , half praying to a God . 36. The hatch swings open . 37. The lump in my throat drops to my toes with the weight of lead . 38. A gun greets me . 39. Slowly , I put my hands behind my head . 40. There ’s no mistaking the alien wielding it . 41. The brute features are familiar , too familiar . 42. I ’ ve been rescued by the wrong side . C.3 Story 3203 This is Story 3203 from the test set in Figure 6: 0. I swore never to kill . 1. I swore that I will never stoop down to their level . 2. That we , the guardians of justice , can and will achieve our goals through the peaceful way . 3. But as I stood there , at the edge of the cliff , staring at the hideous smile that has tormented me for far too long , I could feel my vow slowly breaking before me . 4. “ So what it ’s gon na be Batsy ? 1780 0 10 20 30 40 50 0 1 2 3 4 Sentence Suspense Figure 6: Story 3203, Human, SHale, SEly, UEly, UαEly 5. Will you choose to kill the evil crazy clown , or are you going to let poor Miss Lane fall to her death ? 6. Tick tock tick tock , time ’s ticking ! ” 7. I gritted my teeth . 8. Lois was suspended in mid - air , 12 stories high , her life hanging by the mere minutes . 9. Around me , the League lay incapacitated , having fallen to Joker ’s devious ambush . 10. I turned towards Clark , hoping that he would have woken up by now . 11. No luck . 12. The Kryptonite knock out gas had worked its miracle . 13. As fate would have it , only two of us are left . 14. Two bitter rivals to the very end . 15. “ Let her go , Joker ! 16. This fight is between you and me ! ” 17. I shouted . 18. My mind raced for possible solutions . 19. A well - aimed batarang could free Lois , but I have to rappel to her in time . 20. Too risky with Joker free . 21. I could try knocking him out , but that would not leave me enough time to- “ Tsk tsk tsk , my dear Bats . 22. Trying to stall for time , are n’t you ? 23. How many times must I tell you that it wo n’t work ! 24. I know you , Bats , better than you know yourself . 25. In fact ... ” 26. He took out a remote , and pushed one of the bright red buttons on it . 1781 27. The cable jerked downwards , closer to the barrel of Joker venom . 28. “ ... 
for every minute you spend thinking , Miss Lane will be closer to smiley face land . 29. How about that ! 30. Hahahaha ! ” 31. It was right then when I lost it . 32. I leaped from my spot , headed straight for the Prince of Clowns . 33. I thought about the last time we almost lost Lois . 34. Clark was so close to unleashing a destructive rampage across Metropolis . 35. Too close . 36. And it was on that day when every member of the League swore an oath to protect Lois no matter what it takes , no matter what the cost , even if it meant breaking our own sacred vows . 37. Superman was too great an asset to be lost . 38. Joker knew that . 39. From the very moment he saw the destruction Clark unleashed . 40. And he has been targeting Lois ever since . 41. The blade plunged through his chest and into his heart surprisingly quick . 42. I had expected the Joker to have a fail safe mechanism , but apparently he did not . 43. He wanted me to do it . 44. The blood splattered against my suit , as the sickening sound of flesh tearing apart filled my ears . 45. And as all these happened , the Joker kept laughing , his hysterical voice filling the air . 46. He laughed and laughed , until his voice gradually grew weaker , softer . 47. Before he drew his last breath , he raised his bloodied left hand and patted me on my cowl . 48. “ Hehehe ... I win , Batsy . ” D Turning Points Examples This section is the full text output with some example plots from Turning Points TRIPOD dataset. D.1 15 Minutes The full text for the synopsis of 15 Minutes in Figure 7, this is the same example as is given in the main text: 0. After getting out of prison , ex - convicts Emil Slovak ( Karel Roden ) and Oleg Razgul ( Oleg Taktarov ) travel to New York City to meet a contact in order to claim their part of a bank heist in 1. Russia ( or somewhere in the Czech Republic ) . 2. Within minutes of arriving , Oleg steals a video camera . 3. They go to the brownstone apartment of their old partner Milos Karlova ( Vladimir Mashkov ) and his wife Tamina , and demand their share . 4. When Milos admits that he spent it , an enraged Emil kills him with a kitchen knife , then breaks Tamina ’s neck as Oleg tapes it with his new camera . 5. The couple ’s neighbor , Daphne Handlova ( Vera Farmiga ) , witnesses everything , but she escapes before they can get to her . 6. To cover up the crime , they douse the bodies in acetone , carefully position them on the bed , and burn down the apartment , intending to pass it off as an accident . 7. Jordy Warsaw ( Edward Burns ) , an arson investigator , and NYPD detective Eddie Flemming ( Robert De Niro ) are called to the scene . 8. Flemming is a high profile detective who frequently appears on the local tabloid TV show Top Story . 9. Flemming and Warsaw decide to work the case together . 10. They eventually determine that Milos was stabbed so hard that the knife ’s tip broke off and lodged in his spine . 1782 0 10 20 30 40 50 0 1 2 3 4 5 6 Sentence Suspense Figure 7: The film 15 Minutes, SHale, SEly, UEly, UαEly,◆theory baseline, ⭑TP annotations, triangles are predicted TPs. 11. While checking out the crowd outside , Warsaw spots Daphne trying to get his attention . 12. When he finally gets to where she was , she is gone , but Warsaw manages to produce a sketch of the witness . 13. Emil , who got hold of Daphne ’s wallet when she fled the apartment earlier , realizes that Daphne is in the country illegally and will be deported if she calls the police . 14. 
He contacts an escort service from a business card he found in Daphne ’s wallet . 15. He asks for a Czech girl hoping she will arrive . 16. When Honey , a regular call girl , arrives instead , he stabs and kills her , but not before getting the address of the escort service from her . 17. Oleg tapes the entire murder . 18. In fact , he tapes everything he can ; a wannabe filmmaker , he aspires to be the next Frank Capra . 19. Flemming and Warsaw investigate her murder , determine the link to the fire , and also visit the escort service . 20. Rose Heam ( Charlize Theron ) runs the service and tells them that the girl they are looking for ( Daphne ) does not work for her but rather a local hairdresser , and she just told the same thing to 21. a couple other guys that were asking the same questions . 22. Flemming and Warsaw then rush to the hairdresser but get there just after Emil and Oleg warn the girl not to say anything to anyone . 23. As Flemming puts Daphne into his squad car , he notices Oleg taping them from across the street . 1783 24. A foot chase begins , culminating in Flemming ’s partner getting shot and his wallet stolen . 25. Emil finds a card with Flemming ’s name and address in it . 26. He gets very jealous of Flemming ’s celebrity status and is convinced that anyone in America can do whatever they want and get away with it . 27. On the night that Flemming is to propose to his girlfriend Nicolette Karas ( Melina Kanakaredes ) , Oleg and Emil sneak into his house and knocks him unconscious , later taping him to a chair . 28. While Oleg is recording , Emil explains his plan - he will kill Flemming , then he will sell the tape to Top Story , and when he is arrested , he will plead insanity . 29. After being committed to an insane asylum he will declare that he is actually sane . 30. Because of double jeopardy , he will get off , collecting the royalties from his books and movies . 31. Flemming starts attacking them with his chair ( while still taped to it ) and almost gets them but Emil stabs him in the abdomen , and putting a pillow on Flemming , killing him . 32. The entire city is in mourning and Emil calls Robert Hawkins ( Kelsey Grammer ) , the host of Top Story , to tell him he has a tape of the killing and is willing to sell it . 33. Robert pays him a million dollars for the tape . 34. Warsaw and the entire police force are furious with Robert and can not believe he would air it , especially since his main reporter is Nicolette . 35. At the same time , Emil and Oleg try to kill Warsaw and Daphne by booby - trapping Daphne ’s apartment . 36. The two narrowly escape the resulting fire . 37. On the night it is aired Emil and Oleg sit in a Planet Hollywood to watch it with the rest of the public . 38. As the clip progresses , the customers react with horror at the brutality of it , and a few begin to notice Emil and Oleg are right there with them , Oleg actually smiling at the results of his work , and panic takes place . 39. Emil explains his betrayal to Oleg and as he about to execute Emil with a gun , Oleg stabs him in the arm . 40. The police come in and arrest the wounded Emil , while Oleg escapes . 41. They put Emil in Warsaw ’s squad car but instead of taking him to the police station , Warsaw takes him to an abandoned warehouse where he is going to kill him . 42. The police arrive just in time and take Emil away . 43. Everything goes as planned as Emil is now a celebrity and is pleading insanity . 44. His lawyer agrees to work for 30 45. 
Meanwhile , Oleg is jealous of the notoriety that Emil is receiving . 46. While being led away with his lawyer and all the media , Warsaw gets into an argument with the lawyer while the Top Story crew is taping the whole thing . 47. Oleg gives Hawkins the part of the tape where Emil explains his plan to Flemming , proving he was sane the whole time ( Oleg presumably kept this part of the tape on hand as part of an ” insurance policy ”” ) .” 48. Hawkins shouts out to Emil and explains to him the evidence he now has . 49. Emil pushes a policeman down , takes his gun and shoots Oleg . 50. Emil grabs Flemming ’s fianc ˘AˇSe , who is covering the news story , and threatens to shoot her . 51. He is finally cornered by the police and Warsaw . 1784 52. Against orders , Warsaw shoots Emil a dozen times in the chest in order to avenge Eddie ’s death . 53. An officer shouts that Oleg is still alive , and Hawkins rushes to him to get footage just as Oleg says the final few words to his movie he is taping just before he dies ( with the Statue of Liberty in the background ) . 54. Shortly afterward , Hawkins approaches Warsaw and tries to cultivate the same sort of arrangement he had with Flemming , suggesting the power an arrangement would give him . 55. In response , Warsaw punches out Hawkins and leaves the scene as the police officers smile in approval . D.2 Pretty Woman The full text for the synopsis of the film Pretty Woman in Figure 8: 0. Edward Lewis (Gere), a successful businessman and ”corporate raider”, takes a detour on Hollywood Boulevard to ask for directions. Receiving little help, he encounters a prostitute named Vivian Ward (Roberts) who is willing to assist him in getting to his destination. 1. The morning after, Edward hires Vivian to stay with him for a week as an escort for social events. 2. Vivian advises him that it ”will cost him,” and Edward agrees to give her $3,000 and access to his credit cards. 3. Vivian then goes shopping on Rodeo Drive, only to be snubbed by saleswomen who disdain her because of her unsophisticated appearance. 4. Initially, hotel manager Barnard Thompson (Hector Elizondo) is also somewhat taken aback. 5. But he relents and decides to help her buy a dress, even coaching her on dinner etiquette. 6. Edward returns and is visibly amazed by Vivian’s transformation. The business dinner does not end well, however, with Edward making clear his intention to dismantle Morse’s corporation once it was bought, close down the shipyard which Morse spent 40 years building, and sell the land for real estate. 7. Morse and his grandson abandon their dinner in anger, while Edward remains preoccupied with the deal afterward. 8. Back at the hotel, Edward reveals to Vivian that he had not spoken to his recently deceased father for 14 and half years. 9. Later that night, the two make love on the grand piano in the hotel lounge. 10. The next morning, Vivian tells Edward about the snubbing that took place the day before. 11. Edward takes Vivian on a shopping spree. 12. Vivian then returns, carrying all the bags, to the shop that had snubbed her, telling the salesgirls they had made a big mistake. 13. The following day, Edward takes Vivian to a polo match where he is interested in networking for his business deal. 14. While Vivian chats with David Morse, the grandson of the man involved in Edward’s latest deal, Philip Stuckey (Edward’s attorney) wonders if she is a spy. 15. 
Edward re-assures him by telling him how they met, and Philip (Jason Alexander) then approaches Vivian and offers to hire her once she is finished with Edward, inadvertently insulting her. 16. When they return to the hotel, she is furious with Edward for telling Phillip about her. 17. She plans to leave, but he apologizes and persuades her to see out the week. 18. Edward leaves work early the next day and takes a breath-taking Vivian on a date to the opera in San Francisco in his private jet. She is clearly moved by the opera (which is La Traviata, whose plot deals with a rich man tragically falling in love with a courtesan). 19. While playing chess with Edward after returning, Vivian persuades him to take the next day off. 1785 0 10 20 30 40 0 1 2 3 4 5 6 Sentence Suspense Figure 8: The film Pretty Woman, SHale, SEly, UEly, UαEly, ◆theory baseline, ⭑TP annotations, triangles are predicted TPs. 20. They spend the entire day together, and then have sex, in a personal rather than professional way. 21. Just before she falls asleep, Vivian admits that she’s in love with Edward. 22. Over breakfast, Edward offers to put Vivian up in an apartment so he can continue seeing her. 23. She feels insulted and says this is not the ”fairy tale” she wants. 24. He then goes off to work without resolving the situation. 25. Vivian’s friend, Kit De Luca (Laura San Giacomo), comes to the hotel and realizes that Vivian is in love with Edward. 26. Edward meets with Mr. Morse, about to close the deal, and changes his mind at the last minute. 27. His time with Vivian has shown him another way of living and working, taking time off and enjoying activities for which he initially had little time. 28. As a result, his strong interest towards his business is put aside. 29. He decides that he would rather help Morse than take over his company. 30. Furious, Philip goes to the hotel to confront Edward, but only finds Vivian there. 31. He blames her for changing Edward and tries to rape her. 32. Edward arrives in time to stop Philip, chastising him for his greed and ordering him to leave the room. 33. Edward tends to Vivian and tries to persuade her to stay with him because she wants to, not because he’s paying her. 34. She refuses once again and returns to the apartment she shares with Kit, preparing to leave for San Francisco to earn a G.E.D. in the hopes of a better life. 1786 35. Edward gets into the car with the chauffeur that took her home. 36. Instead of going to the airport, he goes to her apartment arriving accompanied by music from La Traviata. 37. He climbs up the fire escape, despite his fear of heights, with a bouquet of roses clutched between his teeth, to woo her. 38. His leaping from the white limousine, and then climbing the outside ladder and steps, is a visual urban metaphor for the knight on white horse rescuing the ”princess” from the tower, a childhood fantasy Vivian told him about. 39. The film ends as the two of them kiss on the fire escape. D.3 Slumdog Millionaire The full text for the synopsis of the film Slumdog Millionaire, in Figure 9: 0. In Mumbai in 2006, eighteen-year-old Jamal Malik (Dev Patel), a former street child (child Ayush Mahesh Khedekar, adolescent Tanay Chheda) from the Juhu slum, is a contestant on the Indian version of Who Wants to Be a Millionaire?, and is one question away from the grand prize. 1. However, before the Rs. 2. 
20 million question, he is detained and interrogated by the police, who suspect him of cheating because of the impossibility of a simple ”slumdog” with very little education knowing all the answers. 3. Jamal recounts, through flashbacks, the incidents in his life which provided him with each answer. 4. These flashbacks tell the story of Jamal, his brother Salim (adult Madhur Mittal, adolescent Ashutosh Lobo Gajiwala, child Azharuddin Mohammed Ismail), and Latika (adult Freida Pinto, adolescent Tanvi Ganesh Lonkar, child Rubina Ali). 5. In each flashback Jamal has a point to remember one person, or song, or different things that lead to the right answer of one of the questions. 6. The row of questions does not correspond chronologically to Jamal’s life, so the story switches between different periods (childhood, adolescence) of Jamal. 7. Some questions do not refer to points of his life (cricket champion), but by witness he comes to the right answer. 8. Jamal’s flashbacks begin with his managing, at age five, to obtain the autograph of Bollywood star Amitabh Bachchan, which his brother then sells, followed immediately by the death of his mother during the Bombay Riots. 9. As they flee the riot, they run into a child version of the God Rama, Salim and Jamal then meet Latika, another child from their slum. 10. Salim is reluctant to take her in, but Jamal suggests that she could be the third musketeer, a character from the Alexandre Dumas novel (which they had been studying — albeit not very diligently — in school), whose name they do not know. 11. The three are found by Maman (Ankur Vikal), a gangster who tricks and then trains street children into becoming beggars. 12. When Jamal, Salim, and Latika learn Maman is blinding children in order to make them more effective as singing beggars, they flee by jumping onto a departing train. 13. Latika catches up and takes Salim’s hand, but Salim purposely lets go, and she is recaptured by the gangsters. 14. Over the next few years, Salim and Jamal make a living travelling on top of trains, selling goods, picking pockets, working as dish washers, and pretending to be tour guides at the Taj Mahal, where they steal people’s shoes. 15. At Jamal’s insistence, they return to Mumbai to find Latika, discovering from Arvind, one of the singing beggars, that she has been raised by Maman to become a prostitute and that her virginity is expected to fetch a high price. 16. The brothers rescue her, and Salim draws a gun and kills Maman. 1787 0 10 20 30 40 0 1 2 3 4 5 6 Sentence Suspense Figure 9: Slumdog Millionare, SHale, SEly, UEly, UαEly, ◆theory baseline, ⭑TP annotations, triangles are predicted TPs. 17. Salim then manages to get a job with Javed (Mahesh Manjrekar), Maman’s rival crime lord. 18. Arriving at their hotel room, Salim orders Jamal to leave him and Latika alone. 19. When Jamal refuses, Salim draws a gun on him, and Jamal leaves after Latika persuades him to go away (presumably so he wouldn’t get hurt by Salim). 20. Years later, while working as a tea server at an Indian call centre, Jamal searches the centre’s database for Salim and Latika. 21. He fails in finding Latika but succeeds in finding Salim, who is now a high-ranking lieutenant in Javed’s organization, and they reunite. 22. Salim is regretful for his past actions and only pleads for forgiveness when Jamal physically attacks him. 23. Jamal then bluffs his way into Javed’s residence and reunites with Latika. 24. While Jamal professes his love for her, Latika asks him to forget about her. 25. 
Jamal promises to wait for her every day at 5 o’clock at the VT station. 26. Latika attempts to rendezvous with him, but she is recaptured by Javed’s men, led by Salim. 27. Jamal loses contact with Latika when Javed moves to another house, outside of Mumbai. 28. Knowing that Latika watches it regularly, Jamal attempts to make contact with her again by becoming a contestant on the show Who Wants to Be a Millionaire? 29. He makes it to the final question, despite the hostile attitude of the show’s host, Prem Kumar (Anil Kapoor), and becomes a wonder across India. 1788 30. Kumar feeds Jamal the incorrect response to the penultimate question and, when Jamal still gets it right, turns him into the police on suspicion of cheating. 31. Back in the interrogation room, the police inspector (Irrfan Khan) calls Jamal’s explanation ”bizarrely plausible”, but thinks he is not a liar and, ripping up the arrest warrant, allows him to return to the show. 32. At Javed’s safehouse, Latika watches the news coverage of Jamal’s miraculous run on the show. 33. Salim, in an effort to make amends for his past behaviour, quietly gives Latika his mobile phone and car keys, and asks her to forgive him and to go to Jamal. 34. Latika, though initially reluctant out of fear of Javed, agrees and escapes. 35. Salim fills a bathtub with cash and sits in it, waiting for the death he knows will come when Javed discovers what he has done. 36. Jamal’s final question is, by coincidence, the name of the third musketeer in The Three Musketeers, a fact he never learned. 37. Jamal uses his Phone-A-Friend lifeline to call Salim’s cell, as it is the only phone number he knows. 38. Latika succeeds in answering the phone just in the nick of time, and, while she does not know the answer, tells Jamal that she is safe. 39. Relieved, Jamal randomly picks Aramis, the right answer, and wins the grand prize. 40. Simultaneously, Javed discovers that Salim has helped Latika escape after he hears Latika on the show. 41. He and his men break down the bathroom door, and Salim kills Javed, before being gunned down himself at the hands of Javed’s men. 42. With his dying breath, Salim gasps, ”God is great.” 43. Later that night, Jamal and Latika meet at the railway station and kiss. 44. The movie ends with a dance scene on the platform to ”Jai Ho”.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1789–1794 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 1789 You Don’t Have Time to Read This: An Exploration of Document Reading Time Prediction Orion Weller,1 Jordan Hildebrandt,1 Ilya Reznik,2 Chris Challis,2 E. Shannon Tass,1 Quinn O. Snell,1 Kevin Seppi1 1Brigham Young University 2Adobe [email protected] Abstract Predicting reading time has been a subject of much previous work, focusing on how different words affect human processing, measured by reading time. However, previous work has dealt with a limited number of participants as well as word level only predictions (i.e. predicting the time to read a single word). We seek to extend these works by examining whether or not document level predictions are effective, given additional information such as subject matter, font characteristics, and readability metrics. We perform a novel experiment to examine how different features of text contribute to the time it takes to read, distributing and collecting data from over a thousand participants. We then employ a large number of machine learning methods to predict a user’s reading time. We find that despite extensive research showing that word level reading time can be most effectively predicted by neural networks, larger scale text can be easily and most accurately predicted by one factor, the number of words. 1 Introduction Understanding how we read and process text has proven a large area of both cognitive science and natural language processing (NLP) research (Graesser et al., 1980; Liversedge et al., 1998; Frank et al., 2013a; Busjahn et al., 2014; Weller and Seppi, 2019, 2020). Online content providers and consumers are also interested in this research; in the increasingly busy world of today, consumers lack the time to read long articles, prompting content creators to aim for specific reading lengths. Many providers1 have even examined traffic patterns in order to determine the ideal content length, with the general consensus finding 3-7 minutes of Work done as part of a capstone course with Adobe 1Medium’s study can be found here. content optimal. Thus, having established the optimal content length, article writers now face the next hurdle: when has their post reached the ideal length? A news article about last night’s football game may be easier to read than a technical post about NLP. Perhaps the font type or size influences the consumer’s comprehension, slowing down the reading process. There are many factors, both textual and stylistic, that quickly come to mind when considering the potential reading time of an article. Although there has been an extensive body of work on reading time prediction applied to single words (Frank, 2017; Willems et al., 2015; Shain, 2019; van Schijndel and Linzen, 2018), to the best of our knowledge there has been no research into understanding these effects on document sized text. In this paper, we seek to address this area by building models to predict, understand, and interpret factors that could affect an article’s reading time. Our contributions to this area include a methodically designed statistical study, consisting of 1130 experimental trials and 32 different articles, experimental results for a broad collection of machine learning algorithms on this novel task, and discussion of potential reasons why more complex models fail. 
To the best of our knowledge, this is the largest experimental study for reading time research, in terms of participants and breadth of factors. All code and datasets are publicly available.2 2 Related Work Researchers have made significant progress in predicting the reading time of single words, illustrating the effect of different words on the human brain (Frank et al., 2013b; Shain, 2019; Goodkind and Bicknell, 2018) for many different texts (Futrell et al., 2018; Kennedy et al., 2003). Although this 2The code and datasets for our experiments can be found at http://github.com/orionw/DocumentReadingTime 1790 effort is focused more on the cognitive effects of words, these results show that scientists can accurately predict the reading time of individual words in context. With the rise in popularity of machine learning techniques, many scientists have found the most success through these methods, with the most recent research showing significant improvements from combining neural networks as language models with linear mixed models (LMMs) (Goodkind and Bicknell, 2018; de Vries et al., 2018; van Schijndel and Linzen, 2018). However, all previous research has been confined to the effect of a specific word in context, which naturally leads to the question of how this research generalizes. A separate but similar line of research, readability, measures the reading difficulty of a body of text. This research area has investigated effects of readability in a plethora of areas: online vs paper (Kurniawan and Zaphiris, 2001), color and contrast (Legge et al., 1990), and writing style (Bostian, 1983). The most famous readability metric for English, the Flesch–Kincaid (Kincaid et al., 1975), uses the number of syllables and words to determine readability. Other scientists have attempted to improve upon this simple metric, showing success in reading level classification with unigram language models (Si and Callan, 2001) or SVM models built on top of these basic textual characteristics (Pitler and Nenkova, 2008). As previous metrics seem to be sufficient, recent research has focused on evaluating and comparing the diverse metrics on different domains (Sugawara et al., 2017; Redmiles et al., 2019). We use these readability works to influence our choice of features, as readability seems inherently interwoven with reading time. We employ the py-readability-metrics package to include 7 state-of-the-art metrics that we add to our data for the modeling task (Section 4, Appendix B). 3 Experimental Design We collected our reading time data from a statistical survey performed on Amazon’s Mechanical Turk. Since we were not physically present to observe the respondents we took a number of precautions and controls to ensure data quality. We note however, that the inclinations of Mechanical Turk users align with our target audience: we would expect most readers of online content to be of a younger demographic, tech-savy, and prone to read as fast as possible. In this section we will discuss our survey design, validation, and results. 3.1 Survey Design In order to gather the maximum amount of information from a survey design, we implemented our survey following Fractional Factorial Design (FFD) (Box et al., 2005). This method of survey collection allows us to exploit the sparsity-of-effects principle, gleaning the most information while only using a fraction of the effort of a full factorial design, in terms of experimental runs and resources. 
This method works by defining two levels for each factor: for example, our factor font size had the levels 12 point and 16 point. We extracted 8 factors with 2 levels, consisting of 28 unique surveys (28−3 = 32 using FFD) to design. When choosing factors and levels, we focused on areas that would provide the most contrast in order to illustrate potential differences in reading time. Although there are an almost endless number of factors that could potentially influence article reading time, the number of surveys needed to explore those factors increases exponentially; thus, we chose eight crucial factors. Levels of the factor are indicated in parenthesis if applicable: font size (12 vs 16 point), font type (sans vs serif), subject matter (health vs. technology), genre (blog post vs news article), average syllables per word, number of words, average words per sentence, and average unigram frequency. We note that we further collected the original article’s text so that additional factors could be easily extracted for future analysis. Again, these factors are not exhaustive but instead were chosen to give a representative sample for a specific area of online articles, while still showing contrast between documents (e.g. news articles vs blog posts or small vs large font). To define the levels of our numeric features, such as unigram frequency or the average number of syllables, we collected 200 articles for the week of March 4th 2019, aggregating from different news and blog sources, but taking a maximum of three articles from each source (see a more comprehensive list on Github, as there are too many to list). We took these articles, extracted our feature characteristics, and found the median of the distribution. This number was then used as the cutoff between the two levels for that factor. Unigram frequencies were computed using the wordfreq library, aggregating frequencies from numerous sources.3 3Details on which text corpora were aggregated can be found at https://github.com/LuminosoInsight/wordfreq/ 1791 0 5 10 15 20 25 30 Article Number 100 200 300 400 500 Reading Time (s) 400 600 800 1000 1200 Article Length (words) 100 200 300 400 500 Reading Time (s) Figure 1: Left: boxplots for the results of each survey, with reading time in seconds. Right: a plot of the number of words vs. reading time. Note that lines in the x-axis are due to each of the 32 surveys having around 40 respondents each, for a total of 1130 respondents. 3.2 Survey Construction With the requirements for each survey defined by the FFD, we gathered additional articles and parsed their features. We then matched each one of the 32 combinations from the FFD to a unique article that contained those features. In order to gather a large audience with similar characteristics to online readership, we distributed our survey through Amazon’s Mechanical Turk using the Qualtrics platform. Our survey flow consisted of five short demographic questions including age, gender, education level, familiarity with the article subject matter (health or technology) and their perception of their reading speed on a five point Likert scale (slow to fast). They were then instructed to read the next page of the survey uninterrupted at their normal reading pace, after which they would be asked several basic comprehension questions for validation. Each comprehension question was created to be easily answered if the user had read the article but non-trivial for those that had not. See Appendix A for examples of comprehension questions. 
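The level assignment for the numeric factors described above (number of words, average words per sentence, average unigram frequency, and so on) can be reproduced with a simple median split. The following is a minimal sketch under stated assumptions, not the authors' code: the article list, the naive tokenisation, and the chosen feature subset are illustrative placeholders, and syllable counting is omitted; only the `word_frequency` call mirrors the wordfreq library mentioned above.

```python
import statistics
from wordfreq import word_frequency

def article_features(text):
    """Extract a few of the numeric factors used to define survey levels (illustrative)."""
    words = text.split()
    sentences = [s for s in text.split(".") if s.strip()]  # naive sentence split
    return {
        "num_words": len(words),
        "avg_words_per_sentence": len(words) / max(len(sentences), 1),
        "avg_unigram_freq": statistics.mean(
            [word_frequency(w.lower(), "en") for w in words] or [0.0]
        ),
    }

def median_cutoffs(articles):
    """Median of each numeric feature over the reference articles; this median is the
    boundary between the 'low' and 'high' level of the corresponding factor."""
    feats = [article_features(a) for a in articles]
    return {name: statistics.median(f[name] for f in feats) for name in feats[0]}

def assign_levels(article, cutoffs):
    """Map an article to the binary (low/high) level of each numeric factor."""
    feats = article_features(article)
    return {name: ("high" if feats[name] > cutoffs[name] else "low") for name in feats}
```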
If the user failed to answer any of the control questions correctly, the survey was terminated and the data was not used. 3.3 Survey Validation and Controls Due to the nature of Mechanical Turk, we employed various controls to ensure the quality of our data. Many Mechanical Turk workers are prone to take multiple surveys concurrently, leave the page of the survey open for long periods of time, or rush through surveys in order to maximize their earnings. However, the inclination to read through an article quickly is similar to that of online readers, thus, a crowdsourcer’s work is acceptable as long as they pass our validation. In order to control for these tendencies, we included many checks throughout each stage of the survey. If the answers to the demographic questions were unrealistic (such as age greater than 90 or less than 18), we rejected the survey. If the user failed to answer a validation question, such as asking the user to select a certain box before proceeding to the next page, they were disqualified. If the user spent an unrealistic amount of time on the reading page due to any reason (less than two minutes or greater than ten minutes4 for a long article, as an example) or failed to answer any of the comprehension questions, their data was not used. 3.4 Experimental Results The results from our surveys are plotted in Figure 1, consisting of 1130 respondents. Note that the results have significant variance, especially as the length of the article increases. More plots of the data can be found in our Github repository. 4 Modeling With the data gathered and readability metrics calculated (see Section 2), we explore the results from a variety of different models. We employ three categories of models: models that only use 4These times were found by initially performing this survey on a limited number of respondents with no limits and then extending the min/max by an additional two minutes. 1792 extracted features, models that only use the text, and models that stack textual-only models with model features. Basic extracted feature models include a vanilla Linear Regression (LR) with only the number of words variable (“word”), a Linear Regression model with all variables (“all”), Random Forests, K-Nearest Neighbors (KNN), and a Multi-Layered Perceptron (MLP). As using the entire article as input for the text only models is not computationally feasible, we use modern neural networks to embed the text as a document embedding, using a linear output layer for regression. We tried various state-of-the-art embedding models including roBERTa (Liu et al., 2019; Devlin et al., 2018), XLNet (Yang et al., 2019), and ELMo (Peters et al., 2018). The stacked models combine the document embedding with the extracted features, feeding them both into an MLP. Embeddings use the Flair (Akbik et al., 2018) and HuggingFace (Wolf et al., 2019) libraries. We use two baselines: a commonly used rule-ofthumb for online reading estimates, 240 words per minute (WPM), and the sum of the word-level predictions (Surprisal-Sum) from a surprisal model in order to compare with recent works (van Schijndel and Linzen, 2018; Shain, 2019). For the SurprsialSum baseline predictions, we employ the model used in (van Schijndel and Linzen, 2018), where predictions are made by training a Linear Mixed Model over surprisal data. 5 Results The results from our experiments are found in Table 1. 
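Before turning to the numbers, the two simplest systems in Table 1, the 240 WPM rule of thumb and the word-count-only linear regression, can be made concrete with a short sketch. This is illustrative only: the arrays stand in for the real survey data (article length in words and observed reading time in seconds, one entry per respondent), and the fold count is reduced because the stand-in data is tiny (the paper reports 10-fold cross-validation).

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

# Hypothetical stand-ins for the survey data.
word_counts = np.array([350, 520, 840, 1100, 430, 760, 980, 610, 290, 1200])
reading_times = np.array([95.0, 130.0, 210.0, 300.0, 110.0, 190.0, 250.0, 160.0, 80.0, 320.0])

# Baseline: 240 words per minute -> predicted seconds = words / 4.
wpm_pred = word_counts / (240 / 60)
wpm_rmse = np.sqrt(np.mean((wpm_pred - reading_times) ** 2))

# Word-count-only linear regression, evaluated with k-fold cross-validation.
X = word_counts.reshape(-1, 1)
rmses = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = LinearRegression().fit(X[train_idx], reading_times[train_idx])
    pred = model.predict(X[test_idx])
    rmses.append(np.sqrt(np.mean((pred - reading_times[test_idx]) ** 2)))

print(f"240 WPM RMSE: {wpm_rmse:.1f}s, LR(word) RMSE: {np.mean(rmses):.1f}s")
```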
We see that the most effective models were the simplest: the 240 WPM baseline, linear regression, k-nearest neighbors, and random forests. Using the word count only linear model, because of its easy interpretability, shows us an R2 value of 0.40, meaning that 40% of the variance of reading time can be explained by the number of words in the article. We also see that scaling a regression model to include demographic and textual information (the “all” linear regression model) does not seem to provide significant improvements in prediction. Given the amount of empirical evidence from word level reading time prediction, we were surprised to see a dearth of similar results for document level prediction. Models that provide strong results in word level prediction, such as varieties of neural networks, fail to be as effective as the simpler models. Perhaps this is due to the length of Features Only: RMSE (sd) MAE (sd) 240 WPM 66.0 10.7 52.1 8.3 Surprisal-Sum 141.5 42.8 118.4 35.8 MLP 84.8 10.5 67.2 7.0 Random Forest 64.3 7.7 50.2 5.6 LR (word) 65.5 10.7 51.1 7.9 LR (all) 65.7 9.8 51.6 8.0 KNN 70.1 9.6 54.3 7.1 Text-Only: RMSE (sd) MAE (sd) XLNet 81.0 8.6 62.8 6.6 ELMo 84.3 13.1 66.7 8.6 roBERTa 83.2 13.9 66.3 9.1 Stacked: RMSE (sd) MAE (sd) XLNet/MLP 80.3 10.4 62.9 8.0 ELMo/MLP 83.2 13.7 66.4 9.4 roBERTa/MLP 83.5 10.5 66.1 6.9 Table 1: Results on the reading time prediction task. RMSE and MAE are reported in seconds for the mean of a 10-fold cross validation. “sd” indicates one standard deviation for the previous metric. Best results in each column are in bold. the document - small changes in word level reading time simply get evened out at the document level (for example, see the Surprisal-Sum model). Alternatively, the level of surprisal in online articles may remain constant with the number of words. 6 Conclusion Given previous work in single word reading time prediction, we conducted a large novel study to test whether document level reading time could be predicted. We carefully designed an experiment containing a myriad of potential factors to measure reading time, distributed the survey to more than a thousand people, and collected the results into the first dataset of its kind. We then employed machine learning techniques to predict the time to read, finding that simpler models were the most competitive, with the number of words as the sole critical factor in predicting reading time. We hope this resource can benefit future research into developing techniques to model and understand human responses to document sized text. Acknowledgements We would like to thank Hayden Harris for his help and advice during the capstone project. 1793 References Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual string embeddings for sequence labeling. In COLING 2018, 27th International Conference on Computational Linguistics, pages 1638– 1649. Lloyd R Bostian. 1983. How active, passive and nominal styles affect readability of science writing. Journalism quarterly, 60(4):635–670. George EP Box, J Stuart Hunter, and William G Hunter. 2005. Statistics for experimenters. In Wiley Series in Probability and Statistics. Wiley Hoboken, NJ, USA. Teresa Busjahn, Roman Bednarik, and Carsten Schulte. 2014. What influences dwell time during source code reading?: analysis of element type and frequency as factors. In Proceedings of the Symposium on Eye Tracking Research and Applications, pages 335–338. ACM. Jeanne Sternlicht Chall and Edgar Dale. 1995. 
Readability revisited: The new Dale-Chall readability formula. Brookline Books. Meri Coleman and Ta Lin Liau. 1975. A computer readability formula designed for machine scoring. Journal of Applied Psychology, 60(2):283. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. North American Chapter of the Association for Computational Linguistics. Rudolph Flesch. 1948. A new readability yardstick. Journal of applied psychology, 32(3):221. Stefan Frank. 2017. Word embedding distance does not predict word reading time. In CogSci. Stefan L. Frank, Irene Fernandez Monsalve, Robin L. Thompson, and Gabriella Vigliocco. 2013a. Reading time data for evaluating broad-coverage models of english sentence processing. Behavior Research Methods, 45(4):1182–1190. Stefan L. Frank, Leun J. Otten, Giulia Galli, and Gabriella Vigliocco. 2013b. Word surprisal predicts n400 amplitude during reading. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 878–883, Sofia, Bulgaria. Association for Computational Linguistics. Richard Futrell, Edward Gibson, Harry J Tily, Idan Blank, Anastasia Vishnevetsky, Steven Piantadosi, and Evelina Fedorenko. 2018. The natural stories corpus. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018). Adam Goodkind and Klinton Bicknell. 2018. Predictive power of word surprisal for reading times is a linear function of language model quality. In Proceedings of the 8th Workshop on Cognitive Modeling and Computational Linguistics (CMCL 2018), pages 10–18, Salt Lake City, Utah. Association for Computational Linguistics. Arthur C Graesser, Nicholas L Hoffman, and Leslie F Clark. 1980. Structural components of reading time. Journal of Verbal Learning and Verbal Behavior, 19(2):135–151. Robert Gunning et al. 1952. Technique of clear writing. Alan Kennedy, Robin Hill, and Jo¨el Pynte. 2003. The dundee corpus. In Proceedings of the 12th European conference on eye movement. J Peter Kincaid, Robert P Fishburne Jr, Richard L Rogers, and Brad S Chissom. 1975. Derivation of new readability formulas (automated readability index, fog count and flesch reading ease formula) for navy enlisted personnel. George R Klare. 1974. Assessing readability. Reading research quarterly, pages 62–102. Sri Hastuti Kurniawan and Panayiotis Zaphiris. 2001. Reading online or on paper: Which is faster? Gordon E Legge, David H Parish, Andrew Luebker, and Lee H Wurm. 1990. Psychophysics of reading. xi. comparing color contrast and luminance contrast. JOSA A, 7(10):2002–2010. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv e-prints, page arXiv:1907.11692. Simon P Liversedge, Kevin B Paterson, and Martin J Pickering. 1998. Eye movements and measures of reading time. In Eye guidance in reading and scene perception, pages 55–75. Elsevier. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of NAACL-HLT, pages 2227–2237. Emily Pitler and Ani Nenkova. 2008. Revisiting readability: A unified framework for predicting text quality. In Proceedings of the conference on empirical methods in natural language processing, pages 186– 195. 
Association for Computational Linguistics. Elissa Redmiles, Lisa Maszkiewicz, Emily Hwang, Dhruv Kuchhal, Everest Liu, Miraida Morales, Denis Peskov, Sudha Rao, Rock Stevens, Kristina Gligori´c, et al. 2019. Comparing and developing tools to measure the readability of domain-specific texts. In 1794 Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4833– 4844. Marten van Schijndel and Tal Linzen. 2018. A neural model of adaptation in reading. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4704–4710. Cory Shain. 2019. A large-scale study of the effects of word frequency and predictability in naturalistic reading. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4086–4094. Luo Si and Jamie Callan. 2001. A statistical model for scientific readability. In CIKM, volume 1, pages 574–576. Edgar A Smith and RJ Senter. 1967. Automated readability index. AMRL-TR. Aerospace Medical Research Laboratories (US), pages 1–14. Saku Sugawara, Yusuke Kido, Hikaru Yokono, and Akiko Aizawa. 2017. Evaluation metrics for machine reading comprehension: Prerequisite skills and readability. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 806–817. Clarissa de Vries, W Gudrun Reijnierse, and Roel M Willems. 2018. Eye movements reveal readers’ sensitivity to deliberate metaphors during narrative reading. Scientific Study of Literature, 8(1):135–164. Orion Weller and Kevin Seppi. 2019. Humor detection: A transformer gets the last laugh. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3612–3616. Orion Weller and Kevin Seppi. 2020. The rjokes dataset: a large scale humor collection. In Proceedings of The 12th Language Resources and Evaluation Conference, pages 6136–6141, Marseille, France. European Language Resources Association. Roel M. Willems, Stefan L. Frank, Annabel D. Nijhof, Peter Hagoort, and Antal van den Bosch. 2015. Prediction During Natural Language Comprehension. Cerebral Cortex, 26(6):2506–2516. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R´emi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Transformers: State-ofthe-art natural language processing. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in neural information processing systems, pages 5754–5764. A Comprehension Questions We designed our comprehension questions such that the answer would not be trivially obvious to those who did not read the article. In this example, an article about Minecraft Mods, we ask two questions that would even require someone familiar with Minecraft to read the article: asking them what the author’s opinion was and what the term mod stood for in this specific context. We further put these questions on the page after the reading section of the survey and did not allow respondents to go back to re-read the text. 
Figure 2: Example comprehension questions for an article about Minecraft.

B Readability Metrics

We use the following metrics, calculated with the py-readability-metrics package (a sketch of the two Flesch formulas is given after the list):

• Flesch-Kincaid (Kincaid et al., 1975)
• Flesch (Flesch, 1948)
• Gunning-Fog (Gunning et al., 1952)
• Coleman-Liau (Coleman and Liau, 1975)
• Dale-Chall (Chall and Dale, 1995)
• ARI (Smith and Senter, 1967)
• Linsear Write (Klare, 1974)
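As a concrete reference, the two Flesch formulas can be computed directly from word, sentence, and syllable counts as in the sketch below. This is a minimal illustration with a deliberately crude vowel-group syllable counter and naive sentence splitting; it is not the py-readability-metrics implementation used in the experiments.

```python
import re

def count_syllables(word):
    """Crude syllable estimate: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_scores(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / max(len(sentences), 1)       # words per sentence
    spw = syllables / max(len(words), 1)            # syllables per word
    reading_ease = 206.835 - 1.015 * wps - 84.6 * spw
    grade_level = 0.39 * wps + 11.8 * spw - 15.59   # Flesch-Kincaid grade
    return reading_ease, grade_level

ease, grade = flesch_scores("The quick brown fox jumps over the lazy dog.")
print(f"Flesch reading ease: {ease:.1f}, Flesch-Kincaid grade: {grade:.1f}")
```

Higher reading-ease scores indicate easier text, while the grade-level score approximates the school grade needed to read it comfortably.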
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1795–1807 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 1795 A Generative Model for Joint Natural Language Understanding and Generation Bo-Hsiang Tseng1∗, Jianpeng Cheng2, Yimai Fang2 and David Vandyke2 1Engineering Department, University of Cambridge, UK 2Apple [email protected] {jianpeng.cheng, yimai_fang, dvandyke}@apple.com Abstract Natural language understanding (NLU) and natural language generation (NLG) are two fundamental and related tasks in building task-oriented dialogue systems with opposite objectives: NLU tackles the transformation from natural language to formal representations, whereas NLG does the reverse. A key to success in either task is parallel training data which is expensive to obtain at a large scale. In this work, we propose a generative model which couples NLU and NLG through a shared latent variable. This approach allows us to explore both spaces of natural language and formal representations, and facilitates information sharing through the latent space to eventually benefit NLU and NLG. Our model achieves state-of-the-art performance on two dialogue datasets with both flat and tree-structured formal representations. We also show that the model can be trained in a semi-supervised fashion by utilising unlabelled data to boost its performance. 1 Introduction Natural language understanding (NLU) and natural language generation (NLG) are two fundamental tasks in building task-oriented dialogue systems. In a modern dialogue system, an NLU module first converts a user utterance, provided by an automatic speech recognition model, into a formal representation. The representation is then consumed by a downstream dialogue state tracker to update a belief state which represents an aggregated user goal. Based on the current belief state, a policy network decides the formal representation of the system response. This is finally used by an NLG module to generate the system response(Young et al., 2010). It can be observed that NLU and NLG have opposite goals: NLU aims to map natural language ∗Work done while the author was an intern at Apple. Figure 1: Generation and inference process in our model, and how NLU and NLG are achieved. x and y denotes utterances and formal representations respectively; z represents the shared latent variable for x and y. to formal representations, while NLG generates utterances from their semantics. In research literature, NLU and NLG are well-studied as separate problems. State-of-the-art NLU systems tackle the task as classification (Zhang and Wang, 2016) or as structured prediction or generation (Damonte et al., 2019), depending on the formal representations which can be flat slot-value pairs (Henderson et al., 2014), first-order logical form (Zettlemoyer and Collins, 2012), or structured queries (Yu et al., 2018; Pasupat et al., 2019). On the other hand, approaches to NLG vary from pipelined approach subsuming content planning and surface realisation (Stent et al., 2004) to more recent end-to-end sequence generation (Wen et al., 2015; Dušek et al., 2020). However, the duality between NLU and NLG has been less explored. In fact, both tasks can be treated as a translation problem: NLU converts 1796 natural language to formal language while NLG does the reverse. Both tasks require a substantial amount of utterance and representation pairs to succeed, and such data is costly to collect due to the complexity of annotation involved. 
Although unannotated data for either natural language or formal representations can be easily obtained, it is less clear how they can be leveraged as the two languages stand in different space. In this paper, we propose a generative model for Joint natural language Understanding and Generation (JUG), which couples NLU and NLG with a latent variable representing the shared intent between natural language and formal representations. We aim to learn the association between two discrete spaces through a continuous latent variable which facilitates information sharing between two tasks. Moreover, JUG can be trained in a semi-supervised fashion, which enables us to explore each space of natural language and formal representations when unlabelled data is accessible. We examine our model on two dialogue datasets with different formal representations: the E2E dataset (Novikova et al., 2017) where the semantics are represented as a collection of slot-value pairs; and a more recent weather dataset (Balakrishnan et al., 2019) where the formal representations are tree-structured. Experimental results show that our model improves over standalone NLU/NLG models and existing methods on both tasks; and the performance can be further boosted by utilising unlabelled data. 2 Model Our key assumption is that there exists an abstract latent variable z underlying a pair of utterance x and formal representation y. In our generative model, this abstract intent guides the standard conditional generation of either NLG or NLU (Figure 1a). Meanwhile, z can be inferred from either utterance x, or formal representation y (Figure 1b). That means performing NLU requires us to infer the z from x, after which the formal representation y is generated conditioning on both z and x (Figure 1c), and vice-versa for NLG (Figure 1d). In the following, we will explain the model details, starting with NLG. 2.1 NLG As mentioned above, the task of NLG requires us to infer z from y, and then generate x using both z and y. We choose the posterior distribution q(z|y) to be Gaussian. The task of inferring z can then be recast to computing mean µ and standard deviation σ of the Gaussian distribution using an NLG encoder. To do this, we use a bi-directional LSTM (Hochreiter and Schmidhuber, 1997) to encode formal representation y. which is linearised and represented as a sequence of symbols. After encoding, we obtain a list of hidden vectors H, with each representing the concatenation of forward and backward LSTM states. These hidden vectors are then average-pooled and passed through two feedforward neural networks to compute mean µµµy,z and standard deviation σσσy,z vectors of the posterior q(z|y). H = Bi-LSTM(y) ¯h = Pooling(H) µµµy,z = Wµ¯h + bµ σσσy,z = Wσ¯h + bσ (1) where W and b represent neural network weights and bias. Then the latent vector z can be sampled from the approximated posterior using the re-parameterisation trick of Kingma and Welling (2013): ϵϵϵ ∼N(0, I) z = µµµy,z + σσσy,zϵϵϵ (2) The final step is to generate natural language x based on latent variable z and formal representation y. We use an LSTM decoder relying on both z and y via attention mechanism (Bahdanau et al., 2014). At each time step, the decoder computes: gx i = LSTM(gx i−1, xi−1) ci = attention(gx i , H) p(xi) = softmax(Wv[ci⊕gx i ⊕z] + bv) (3) where ⊕denotes concatenation. xi−1 is the word vector of input token; gx i is the corresponding decoder hidden state and p(xi) is the output token distribution at time step i. 
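As a concrete illustration of Equations 1-2, one possible PyTorch rendering of the posterior encoder and the reparameterisation step is sketched below. It is not the authors' implementation: layer sizes are arbitrary, the positivity of σ (left implicit in Equation 1) is handled here with a softplus, and the attention decoder of Equation 3 is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PosteriorEncoder(nn.Module):
    """q(z|y) (or symmetrically q(z|x)): bi-LSTM encode, mean-pool, predict mu and sigma (Eq. 1)."""
    def __init__(self, vocab_size, emb_dim=64, hid_dim=128, z_dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.bilstm = nn.LSTM(emb_dim, hid_dim, bidirectional=True, batch_first=True)
        self.to_mu = nn.Linear(2 * hid_dim, z_dim)
        self.to_sigma = nn.Linear(2 * hid_dim, z_dim)

    def forward(self, tokens):                    # tokens: (batch, seq_len) integer ids
        H, _ = self.bilstm(self.embed(tokens))    # (batch, seq_len, 2 * hid_dim)
        h_bar = H.mean(dim=1)                     # average pooling over time steps
        mu = self.to_mu(h_bar)
        # Positivity of sigma is not made explicit in Eq. 1; softplus is one common choice.
        sigma = F.softplus(self.to_sigma(h_bar))
        return H, mu, sigma

def reparameterise(mu, sigma):
    """Sample z = mu + sigma * eps with eps ~ N(0, I) (Eq. 2)."""
    return mu + sigma * torch.randn_like(sigma)

# Usage sketch: infer z from a linearised formal representation y; the attention
# decoder of Eq. 3 would then condition on both z and the hidden states H.
encoder = PosteriorEncoder(vocab_size=1000)
y_tokens = torch.randint(0, 1000, (4, 12))
H, mu, sigma = encoder(y_tokens)
z = reparameterise(mu, sigma)
```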
2.2 NLU NLU performs the reverse procedures of NLG. First, an NLU encoder infers the latent variable z from utterance x. The encoder uses a bi-directional LSTM to convert the utterance into a list of hidden states. These hidden states are pooled and passed through feed-forward neural networks to compute the mean µµµx,z and standard deviation σσσx,z of the posterior q(z|x). This procedure follows Equation 1 in NLG. 1797 However, note that a subtle difference between natural language and formal language is that the former is ambiguous while the later is precisely defined. This makes NLU a many-to-one mapping problem but NLG is one-to-many. To better reflect the fact that the NLU output requires less variance, when decoding we choose the latent vector z in NLU to be the mean vector µµµx,z, instead of sampling it from q(z|x) like Equation 2.1 After the latent vector is obtained, the formal representation y is predicted from both z and x using an NLU decoder. Since the space of y depends on the formal language construct, we consider two common scenarios in dialogue systems. In the first scenario, y is represented as a set of slot-value pairs, e.g., {food type=British, area=north} in restaurant search domain (Mrkši´c et al., 2017). The decoder here consists of several classifiers, one for each slot, to predict the corresponding values.2 Each classifier is modelled by a 1-layer feed-forward neural network that takes z as input: p(ys) = softmax(Wsz + bs) (4) where p(ys) is the predicted value distribution of slot s. In the second scenario, y is a tree-structured formal representation (Banarescu et al., 2013). We then generate y as a linearised token sequence using an LSTM decoder relying on both z and x via the standard attention mechanism (Bahdanau et al., 2014). The decoding procedure follows exactly Equation 3. 2.3 Model Summary One flexibility of the JUG model comes from the fact that it has two ways to infer the shared latent variable z through either x or y; and the inferred z can aid the generation of both x and y. In this next section, we show how this shared latent variable enables the JUG model to explore unlabelled x and y, while aligning the learned meanings inside the latent space. 3 Optimisation We now describe how JUG can be optimised with a pair of x and y (§3.1), and also unpaired x or 1Note that it is still necessary to compute the standard deviation σσσx,z in NLU, since the term is needed for optimisation. See more details in Section 3. 2Each slot has a set of corresponding values plus a special one not_mention. y (§3.2). We specifically discuss the prior choice of JUG objectives in §3.3. A combined objective can be thus derived for semi-supervised learning: a practical scenario when we have a small set of labelled data but abundant unlabelled ones (§3.4). 3.1 Optimising p(x, y) Given a pair of utterance x and formal representation y, our objective is to maximise the loglikelihood of the joint probability p(x, y): log p(x, y) = log Z z p(x, y, z) (5) The optimisation task is not directly tractable since it requires us to marginalise out the latent variable z. However, it can be solved by following the standard practice of neural variational inference (Kingma and Welling, 2013). 
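To keep the model components concrete before deriving the bound, the slot-value NLU decoder of Equation 4 can be sketched as follows. The slot names and value counts are illustrative placeholders loosely based on the E2E-style semantics; this is not the authors' code.

```python
import torch
import torch.nn as nn

class SlotValueDecoder(nn.Module):
    """NLU decoder for flat semantics (Eq. 4): one softmax classifier per slot,
    each taking the latent vector z as input."""
    def __init__(self, z_dim, slot_value_counts):
        super().__init__()
        # e.g. {"food": 8, ...}; each count includes the special value not_mention.
        self.classifiers = nn.ModuleDict({
            slot: nn.Linear(z_dim, n_values)
            for slot, n_values in slot_value_counts.items()
        })

    def forward(self, z):
        # Returns a distribution over values for every slot.
        return {slot: torch.softmax(layer(z), dim=-1)
                for slot, layer in self.classifiers.items()}

decoder = SlotValueDecoder(z_dim=32,
                           slot_value_counts={"food": 8, "price_range": 4, "family_friendly": 3})
p_y = decoder(torch.randn(4, 32))   # batch of 4 latent vectors
```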
An objective based on the variational lower bound can be derived as

\mathcal{L}_{x,y} = \mathbb{E}_{q(z|x)}[\log p(y|z, x)] + \mathbb{E}_{q(z|x)}[\log p(x|z, y)] - \mathrm{KL}[q(z|x) \,\|\, p(z)] \qquad (6)

where the first term on the right-hand side is the NLU model; the second term is the reconstruction of x; and the last term denotes the Kullback–Leibler (KL) divergence between the approximate posterior q(z|x) and the prior p(z). We defer the discussion of the prior to Section 3.3 and detailed derivations to the Appendix.

The symmetry between utterance and semantics offers an alternative way of inferring the posterior, through the approximation q(z|y). Analogously, we can derive a variational optimisation objective:

\mathcal{L}_{y,x} = \mathbb{E}_{q(z|y)}[\log p(x|z, y)] + \mathbb{E}_{q(z|y)}[\log p(y|z, x)] - \mathrm{KL}[q(z|y) \,\|\, p(z)] \qquad (7)

where the first term is the NLG model; the second term is the reconstruction of y; and the last term denotes the KL divergence. It can be observed that our model has two posterior inference paths, from either x or y, and also two generation paths. All paths can be optimised.

3.2 Optimising p(x) or p(y)

Additionally, when we have access to unlabelled utterances x (or formal representations y), the optimisation objective of JUG is the marginal likelihood p(x) (or p(y)):

\log p(x) = \log \int_y \int_z p(x, y, z) \qquad (8)

Note that both z and y are unobserved in this case. We can develop an objective based on the variational lower bound for the marginal:

\mathcal{L}_x = \mathbb{E}_{q(y|z,x)} \mathbb{E}_{q(z|x)}[\log p(x|z, y)] - \mathrm{KL}[q(z|x) \,\|\, p(z)] \qquad (9)

where the first term is the auto-encoder reconstruction of x with a cascaded NLU-NLG path, and the second term is the KL divergence which regularises the approximated posterior distribution. Detailed derivations can be found in the Appendix. When computing the reconstruction term of x, we first run through the NLU model to obtain the prediction of y, from which we run through NLG to reconstruct x. The full information flow is (x → z → y → z → x).³

Connections can be drawn with recent work which uses back-translation to augment training data for machine translation (Sennrich et al., 2016; He et al., 2016). Unlike back-translation, the presence of the latent variable in our model requires us to sample z along the NLU-NLG path. The introduced stochasticity allows the model to explore a larger area of the data manifold.

The above describes the objective when we have unlabelled x. We can derive a similar objective for leveraging unlabelled y:

\mathcal{L}_y = \mathbb{E}_{q(x|z,y)} \mathbb{E}_{q(z|y)}[\log p(y|z, x)] - \mathrm{KL}[q(z|y) \,\|\, p(z)] \qquad (10)

where the first term is the auto-encoder reconstruction of y with a cascaded NLG-NLU path. The full information flow here is (y → z → x → z → y).

³ This information flow requires us to sample both z and y in reconstructing x. Since y is a discrete sequence, we use REINFORCE (Williams, 1992) to pass the gradient from NLG to NLU in the cascaded NLU-NLG path.

3.3 Choice of Prior

The objectives described in §3.1 and §3.2 require us to match an approximated posterior (either q(z|x) or q(z|y)) to a prior p(z) that reflects our belief. A common choice of p(z) in the research literature is the Normal distribution (Kingma and Welling, 2013). However, it should be noted that even if we match both q(z|x) and q(z|y) to the same prior, it does not guarantee that the two inferred posteriors are close to each other; this is a desired property of the shared latent space. To better address this property, we propose a novel prior choice: when the posterior is inferred from x (i.e., q(z|x)), we choose the parameterised distribution q(z|y) as our prior belief of p(z).
Similarly, when the posterior is inferred from y (i.e., q(z|y)), we have the freedom of defining p(z) to be q(z|x). This approach directly pulls q(z|x) and q(z|y) closer to ensure a shared latent space. Finally, note that it is straightforward to compute both q(z|x) and q(z|y) when we have parallel x and y. However, when we have access to unlabelled data, as described in Section 3.2, we can only use the pseudo x-y pairs that are generated by our NLU or NLG model, such that we can match an inferred posterior to a pre-defined prior reflecting our belief about the shared latent space.

3.4 Training Summary

In general, JUG subsumes the following three training scenarios, which we will experiment with. When we have fully labelled x and y, JUG jointly optimises NLU and NLG in a supervised fashion with the following objective:

\mathcal{L}_{\text{basic}} = \sum_{(x,y) \sim (X,Y)} (\mathcal{L}_{x,y} + \mathcal{L}_{y,x}) \qquad (11)

where (X, Y) denotes the set of labelled examples. Additionally, in the fully supervised setting, JUG can be trained to optimise the NLU, NLG and auto-encoding paths together. This corresponds to the following objective:

\mathcal{L}_{\text{marginal}} = \mathcal{L}_{\text{basic}} + \sum_{(x,y) \sim (X,Y)} (\mathcal{L}_x + \mathcal{L}_y) \qquad (12)

Furthermore, when we have additional unlabelled x or y, we optimise a semi-supervised JUG objective as follows:

\mathcal{L}_{\text{semi}} = \mathcal{L}_{\text{basic}} + \sum_{x \sim X} \mathcal{L}_x + \sum_{y \sim Y} \mathcal{L}_y \qquad (13)

where X denotes the set of utterances and Y denotes the set of formal representations.

4 Experiments

We experiment on two dialogue datasets with different formal representations to test the generality of our model. The first dataset is E2E (Novikova et al., 2017), which contains utterances annotated with flat slot-value pairs as their semantic representations. The second dataset is the recent weather dataset (Balakrishnan et al., 2019), where both utterances and semantics are represented in tree structures. Examples of the two datasets are provided in Tables 1 and 2.

Natural Language: "sousa offers british food in the low price range. it is family friendly with a 3 out of 5 star rating. you can find it near the sunshine vegetarian cafe."
Semantic Representation: restaurant_name=sousa, food=english, price_range=cheap, customer_rating=average, family_friendly=yes, near=sunshine vegetarian cafe
Table 1: An example in the E2E dataset.

Natural Language (original): "[__DG_YES__ Yes ] , [__DG_INFORM__ [__ARG_DATE_TIME__ [__ARG_COLLOQUIAL__ today's ] ] forecast is [__ARG_CLOUD_COVERAGE__ mostly cloudy ] with [__ARG_CONDITION__ light rain showers ] ] ."
Natural Language (processed by removing tree annotations): "Yes, today's forecast is mostly cloudy with light rain showers."
Semantic Representation: [__DG_YES__ [__ARG_TASK__ get_weather_attribute ] ] [__DG_INFORM__ [__ARG_TASK__ get_forecast ] [__ARG_CONDITION__ light rain showers ] [__ARG_CLOUD_COVERAGE__ mostly cloudy ] [__ARG_DATE_TIME__ [__ARG_COLLOQUIAL__ today's ] ] ]
Table 2: An example in the weather dataset. The natural language in the original dataset (first row) is used for training to allow a fair comparison with existing methods. The processed utterance (second row) is used in our semi-supervised setting.

4.1 Training Scenarios

We primarily evaluated our models on the raw splits of the original datasets, which enables us to fairly compare fully-supervised JUG with existing work on both NLU and NLG.⁴ Statistics of the two datasets can be found in Table 3. In addition, we set up an experiment to evaluate semi-supervised JUG with a varying amount of labelled training data (5%, 10%, 25%, 50%, 100%, with the rest being unlabelled).
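To make these training scenarios concrete before turning to the results, the sketch below shows one way the supervised and auto-encoding terms of Section 3.4 could be combined into the semi-supervised objective of Equation 13. The four loss callables stand in for (negated) Equations 6, 7, 9 and 10; they and the function name are illustrative assumptions, not an interface defined by the paper.

```python
from typing import Any, Callable, Iterable, Tuple

def semi_supervised_loss(
    labelled: Iterable[Tuple[Any, Any]],       # paired (x, y) examples
    unlabelled_x: Iterable[Any],               # utterances without annotations
    unlabelled_y: Iterable[Any],               # formal representations without text
    loss_xy: Callable, loss_yx: Callable,      # negated bounds of Eqs. 6 and 7
    loss_x: Callable, loss_y: Callable,        # negated bounds of Eqs. 9 and 10
) -> float:
    """Combine the objectives of Section 3.4 into L_semi (Eq. 13)."""
    total = 0.0
    for x, y in labelled:                      # supervised part: L_basic (Eq. 11)
        total += loss_xy(x, y) + loss_yx(x, y)
    for x in unlabelled_x:                     # auto-encoding path x -> z -> y -> z -> x
        total += loss_x(x)
    for y in unlabelled_y:                     # auto-encoding path y -> z -> x -> z -> y
        total += loss_y(y)
    return total

# toy check with dummy loss functions
dummy = lambda *args: 1.0
print(semi_supervised_loss([("x1", "y1")], ["x2"], ["y2"], dummy, dummy, dummy, dummy))  # 4.0
```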
Note that the original E2E test set is designed on purpose with unseen slot-values in the test set to make it difficult (Dušek et al., 2018, 2020); we remove this distribution bias by randomly re-splitting the E2E dataset. In contrast, utterances in the weather dataset contain extra tree-structure annotations which make the NLU task a toy problem. We therefore remove these annotations to make NLU more realistic, as shown in the second row of Table 2.

As described in Section 3.4, we can optimise our proposed JUG model in various ways. We investigate the following approaches:

JUGbasic: this model jointly optimises NLU and NLG with the objective in Equation 11. This uses labelled data only.
JUGmarginal: jointly optimises NLU, NLG and auto-encoders with only labelled data, per Equation 12.
JUGsemi: jointly optimises NLU and NLG with labelled data and auto-encoders with unlabelled data, per Equation 13.

⁴ Following Balakrishnan et al. (2019), the evaluation code https://github.com/tuetschek/e2e-metrics provided by the E2E organizers is used here for calculating BLEU in NLG.

Dataset    Train   Valid   Test
E2E        42061   4672    4693
Weather    25390   3078    3121
Table 3: Number of examples in the two datasets.

E2E NLU                                        F1
Dual supervised learning (Su et al., 2019)     0.7232
JUGbasic                                       0.7337
E2E NLG                                        BLEU
TGEN (Dušek and Jurcicek, 2016)                0.6593
SLUG (Juraska et al., 2018)                    0.6619
Dual supervised learning (Su et al., 2019)     0.5716
JUGbasic                                       0.6855
Weather NLG                                    BLEU
S2S-CONSTR (Balakrishnan et al., 2019)         0.7660
JUGbasic                                       0.7768
Table 4: Comparison with previous systems on the two datasets. Note that there is no previous system trained for NLU on the weather dataset.

4.2 Baseline Systems

We compare our proposed model with existing methods, as shown in Table 4, and with two designed baselines as follows:

Decoupled: The NLU and NLG models are trained separately by supervised learning. Both of the individual models have the same encoder-decoder structure as JUG. However, the main difference is that there is no shared latent variable between the two individual NLU and NLG models.
Augmentation: We pre-train Decoupled models to generate pseudo labels from the unlabelled corpus (Lee, 2013) in a setup similar to back-translation (Sennrich et al., 2016). The pseudo data and labelled data are then used together to fine-tune the pre-trained models.

Among all systems in our experiments, the number of units in the LSTM encoder/decoder is set to {150, 300} and the dimension of the latent space is 150. The optimiser Adam (Kingma and Ba, 2014) is used with learning rate 1e-3. Batch size is set to {32, 64}. All the models are fully trained and the best model is picked by the average of NLU and NLG results on the validation set during training.

Model / Data    5%              10%             25%             50%             100%
Decoupled       52.77 (0.874)   62.32 (0.902)   69.37 (0.924)   73.68 (0.935)   76.12 (0.942)
Augmentation*   54.71 (0.878)   62.54 (0.902)   68.91 (0.922)   73.84 (0.935)   –
JUGbasic        60.30 (0.902)   67.08 (0.918)   72.49 (0.932)   74.74 (0.937)   78.05 (0.945)
JUGmarginal     62.96 (0.907)   68.43 (0.920)   73.35 (0.933)   75.74 (0.939)   78.93 (0.948)
JUGsemi*        68.09 (0.921)   70.33 (0.925)   73.79 (0.935)   75.46 (0.939)   –
Table 5: NLU results on the E2E dataset. Joint accuracy (%) and F1 score (in brackets) are both reported with a varying percentage of labelled training data. Models using unlabelled data are marked with *.
Model / Data    5%              10%             25%             50%             100%
Decoupled       0.693 (83.47)   0.723 (87.33)   0.784 (92.52)   0.793 (94.91)   0.813 (96.98)
Augmentation*   0.747 (84.79)   0.770 (90.13)   0.806 (94.06)   0.815 (96.04)   –
JUGbasic        0.685 (84.20)   0.734 (88.68)   0.769 (93.83)   0.788 (95.11)   0.810 (95.07)
JUGmarginal     0.724 (85.57)   0.775 (93.59)   0.803 (94.99)   0.817 (98.67)   0.830 (99.11)
JUGsemi*        0.814 (90.47)   0.792 (94.76)   0.819 (95.59)   0.827 (98.42)   –
Table 6: NLG results on the E2E dataset. BLEU and semantic accuracy (%) (in brackets) are both reported with a varying percentage of labelled training data. Models using unlabelled data are marked with *.

Model / Data    5%      10%     25%     50%     100%
Decoupled       73.46   80.85   86.00   88.45   90.68
Augmentation*   74.77   79.84   86.24   88.69   –
JUGbasic        73.62   80.13   86.15   87.94   90.55
JUGmarginal     74.61   81.14   86.83   89.06   91.28
JUGsemi*        79.19   83.22   87.46   89.17   –
Table 7: NLU results with exact match accuracy (%) on the weather dataset.

Model / Data    5%      10%     25%     50%     100%
Decoupled       0.632   0.667   0.703   0.719   0.725
Augmentation*   0.635   0.677   0.703   0.727   –
JUGbasic        0.634   0.673   0.701   0.720   0.726
JUGmarginal     0.627   0.671   0.711   0.721   0.722
JUGsemi*        0.670   0.701   0.725   0.733   –
Table 8: NLG results with BLEU on the weather dataset.

4.3 Main Results

We start by comparing the JUGbasic performance with existing work following the original split of the datasets. The results are shown in Table 4. On the E2E dataset, we follow previous work in using the F1 of slot-values as the measurement for NLU, and BLEU-4 for NLG. For the weather dataset, there are only published results for NLG. It can be observed that the JUGbasic model outperforms the previous state-of-the-art NLU and NLG systems on the E2E dataset, and also for NLG on the weather dataset. The results prove the effectiveness of introducing the shared latent variable z for jointly training NLU and NLG. We will further study the impact of the shared z in Section 4.4.2.

We also evaluated the three training scenarios of JUG in the semi-supervised setting, with different proportions of labelled and unlabelled data. The results for E2E are presented in Tables 5 and 6. We computed both the F1 score and the joint accuracy (Mrkšić et al., 2017) of slot-values as a more solid NLU measurement. Joint accuracy is defined as the proportion of test examples whose slot-value pairs are all correctly predicted. For NLG, both BLEU-4 and semantic accuracy are computed. Semantic accuracy measures the proportion of correctly generated slot values in the produced utterances.

From the results, we observed that Decoupled can be improved with techniques for generating pseudo data (Augmentation), which forms a stronger baseline. However, all our model variants perform better than the baselines on both NLU and NLG. When using only labelled data, our model JUGmarginal can surpass Decoupled across all four measurements. The gains mainly come from the fact that the model uses auto-encoding objectives to help learn a shared semantic space. Compared to Augmentation, JUGmarginal also has a 'built-in mechanism' to bootstrap pseudo data on the fly during training (see Section 3.4). When adding extra unlabelled data, our model JUGsemi gets further performance boosts and outperforms all baselines by a significant margin.

Figure 2: Visualisation of latent variable z. Given a pair of x and y, z can be sampled from the posterior q(z|x) or q(z|y), denoted by blue and orange dots respectively.

With the varying proportion of unlabelled data in
the training set, we see that unlabelled data is helpful in almost all cases. Moreover, the performance gain is more significant when less labelled data is available. This indicates that the proposed model is especially helpful for low-resource setups where there is a limited amount of labelled training examples but a larger amount of unlabelled ones.

The results for the weather dataset are presented in Tables 7 and 8. In this dataset, NLU is more like a semantic parsing task (Berant et al., 2013) and we use exact match accuracy as its measurement. Meanwhile, NLG is measured by BLEU. The results reveal a very similar trend to that in E2E. Generated examples can be found in the Appendix.

4.4 Analysis

In this section we further analyse the impact of the shared latent variable and also the impact of utilising unlabelled data.

4.4.1 Visualisation of Latent Space

As mentioned in Section 2.1, the latent variable z can be sampled from either posterior approximation q(z|x) or q(z|y). We inspect the latent space in Figure 2 to find out how well the model learns intent sharing. We plot z for the E2E dataset in a 2-dimensional space using the t-SNE projection (Maaten and Hinton, 2008). We observe two interesting properties. First, for each data point (x, y), the z values sampled from q(z|x) and q(z|y) are close to each other. This reveals that the meanings of x and y are tied in the latent space. Second, there exist distinct clusters in the space of z. By further inspecting the actual examples within each cluster, we found that a cluster represents a similar meaning composition. For instance, the cluster centered at (-20, -40) contains {name, foodtype, price, rating, area, near}, while the cluster centered at (45, 10) contains {name, eattype, foodtype, price}. This indicates that the shared latent variable serves as a conclusive global feature representation for NLU and NLG.

Model                        NLU     NLG
JUGbasic                     90.55   0.726
JUGbasic (feed random z)     38.13   0.482
Table 9: A comparative study to evaluate the contribution of the learned latent variable z in NLU/NLG decoding. Models are trained on the whole weather dataset.

Method       NLU: Mi   Re    Wr      NLG: Mi   Wr
Decoupled    714       256   2382    5714      2317
JUGbasic     594       169   1884    4871      2102
Table 10: Error analysis on the E2E dataset. Numbers of missing (Mi), redundant (Re) and wrong (Wr) predictions on slot-value pairs are reported for NLU; numbers of missing or wrongly generated slot values are listed for NLG. Lower numbers indicate better results. Both models are trained on 5% of the training data.

4.4.2 Impact of the Latent Variable

One novelty of our model is the introduction of the shared latent variable z for natural language x and formal representations y. A common problem in neural variational models is that, when coupled with a powerful autoregressive decoder, the decoder tends to learn to ignore z and rely solely on itself to generate the data (Bowman et al., 2016; Chen et al., 2017; Goyal et al., 2017). In order to examine to what extent our model actually relies on the shared variable in both NLU and NLG, we seek an empirical answer by comparing the JUGbasic model with a model variant which uses a random value of z sampled from a normal distribution N(0, 1) during testing. From Table 9, we can observe that there is a large performance drop if z is assigned random values. This suggests that JUG indeed relies greatly on the shared variable to produce good-quality x or y. We further analyse the various sources of errors to understand the cases that z helps to improve.
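As a concrete picture of the ablation in Table 9 above, the sketch below contrasts decoding with the inferred posterior mean against decoding with a random z drawn from N(0, I). The two callables stand in for the NLU encoder and decoder of Section 2.2; they are placeholders for illustration, not part of any released implementation.

```python
import torch

def predict_formal_representation(x, infer_mu, decode_y, use_inferred_z=True, latent_dim=150):
    """Decode y from x with either mu_{x,z} (normal operation) or a random z (ablation)."""
    z = infer_mu(x) if use_inferred_z else torch.randn(1, latent_dim)
    return decode_y(x, z)

# toy check with dummy components standing in for the NLU encoder/decoder
dummy_infer = lambda x: torch.zeros(1, 150)
dummy_decode = lambda x, z: {"z_norm": float(z.abs().sum())}
print(predict_formal_representation("toy utterance", dummy_infer, dummy_decode,
                                    use_inferred_z=False))
```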
On the E2E dataset, wrong predictions in NLU come from either predicting the not_mention label for certain slots in the ground-truth semantics; predicting arbitrary values for slots not present in the ground-truth semantics; or predicting values that are wrong compared to the ground truth. These three types of error are referred to as Missing (Mi), Redundant (Re) and Wrong (Wr) in Table 10. For NLG, semantic errors can be either missing or generating wrong slot values in the given semantics (Wen et al., 2015). Our model makes fewer mistakes in all these error sources compared to the baseline Decoupled. We believe this is because the clustering property learned in the latent space provides better feature representations at a global scale, eventually benefiting NLU and NLG.

4.4.3 Impact of Unlabelled Data Source

In Section 4.3, we found that the performance of our model can be further enhanced by leveraging unlabelled data. As we used both unlabelled utterances and unlabelled semantic representations together, it is unclear if both contributed to the performance gain. To answer this question, we start with the JUGbasic model, and experimented with adding unlabelled data from 1) only unlabelled utterances x; 2) only semantic representations y; 3) both x and y.

                        E2E             Weather
Method                  NLU     NLG     NLU     NLG
JUGbasic                60.30   0.685   73.62   0.634
+unlabelled x           62.89   0.765   74.97   0.654
+unlabelled y           59.55   0.815   76.98   0.621
+unlabelled x and y     68.09   0.814   79.19   0.670
Table 11: Comparison of sources of unlabelled data for semi-supervised learning using only utterances (x), only semantic representations (y) or both (x and y). The JUGbasic model is trained on 5% of the training data.

As shown in Table 11, when adding any uni-sourced unlabelled data (x or y), the model is able to improve to a certain extent. However, the performance is maximised when both data sources are utilised. This strengthens the argument that our model can leverage bi-sourced unlabelled data more effectively via latent space sharing to improve NLU and NLG at the same time.

5 Related Work

Natural Language Understanding (NLU) refers to the general task of mapping natural language to formal representations. One line of research in the dialogue community aims at detecting slot-value pairs expressed in user utterances as a classification problem (Henderson et al., 2012; Sun et al., 2014; Mrkšić et al., 2017; Vodolán et al., 2017). Another line of work focuses on converting single-turn user utterances to more structured meaning representations as a semantic parsing task (Zettlemoyer and Collins, 2005; Jia and Liang, 2016; Dong and Lapata, 2018; Damonte et al., 2019).

In comparison, Natural Language Generation (NLG) is scoped as the task of generating natural utterances from their formal representations. This is traditionally handled with a pipelined approach (Reiter and Dale, 1997) with content planning and surface realisation (Walker et al., 2001; Stent et al., 2004). More recently, NLG has been formulated as an end-to-end learning problem where text strings are generated with recurrent neural networks conditioning on the formal representation (Wen et al., 2015; Dušek and Jurcicek, 2016; Dušek et al., 2020; Balakrishnan et al., 2019; Tseng et al., 2019).

There has been very recent work which addresses NLU and NLG jointly. Both Ye et al. (2019) and Cao et al. (2019) explore the duality of semantic parsing and NLG. The former optimises two sequence-to-sequence models using dual information maximisation, while the latter introduces a dual learning framework for semantic parsing. Su et al.
(2019) proposes a learning framework for dual supervised learning (Xia et al., 2017) where both NLU and NLG models are optimised towards a joint objective. Their method brings benefits with annotated data in supervised learning, but does not allow semi-supervised learning with unlabelled data. In contrast to their work, we propose a generative model which couples NLU and NLG with a shared latent variable. We focus on exploring a coupled representation space between natural language and corresponding semantic annotations. As proved in experiments, the information sharing helps our model to leverage unlabelled data for semi-supervised learning, which eventually benefits both NLU and NLG. 6 Conclusion We proposed a generative model which couples natural language and formal representations via a shared latent variable. Since the two space is coupled, we gain the luxury of exploiting each unpaired data source and transfer the acquired knowledge to the shared meaning space. This eventually benefits both NLU and NLG, especially in a lowresource scenario. The proposed model is also suitable for other translation tasks between two modalities. As a final remark, natural language is richer and more informal. NLU needs to handle ambiguous 1803 or erroneous user inputs. However, formal representations utilised by an NLG system are more precisely-defined. In future, we aim to refine our generative model to better emphasise this difference of the two tasks. Acknowledgments Bo-Hsiang Tseng is supported by Cambridge Trust and the Ministry of Education, Taiwan. This work has been performed using resources provided by the Cambridge Tier-2 system operated by the University of Cambridge Research Computing Service (http://www.hpc.cam.ac.uk) funded by EPSRC Tier-2 capital grant EP/P020259/1.. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Anusha Balakrishnan, Jinfeng Rao, Kartikeya Upasani, Michael White, and Rajen Subba. 2019. Constrained decoding for neural nlg from compositional representations in task-oriented dialogue. arXiv preprint arXiv:1906.07220. Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, pages 178–186, Sofia, Bulgaria. Association for Computational Linguistics. Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1533–1544, Seattle, Washington, USA. Association for Computational Linguistics. Samuel R Bowman, Luke Vilnis, Oriol Vinyals, Andrew Dai, Rafal Jozefowicz, and Samy Bengio. 2016. Generating sentences from a continuous space. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 10–21. Ruisheng Cao, Su Zhu, Chen Liu, Jieyu Li, and Kai Yu. 2019. Semantic parsing with dual learning. ACL. Xi Chen, Diederik P. Kingma, Tim Salimans, Yan Duan, Prafulla Dhariwal, John Schulman, Ilya Sutskever, and Pieter Abbeel. 2017. Variational lossy autoencoder. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. 
Marco Damonte, Rahul Goel, and Tagyoung Chung. 2019. Practical semantic parsing for spoken language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Industry Papers), pages 16–23. Li Dong and Mirella Lapata. 2018. Coarse-to-fine decoding for neural semantic parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 731–742. Ondˇrej Dušek and Filip Jurcicek. 2016. Sequence-tosequence generation for spoken dialogue via deep syntax trees and strings. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 45–51. Ondˇrej Dušek, Jekaterina Novikova, and Verena Rieser. 2018. Findings of the e2e nlg challenge. In Proceedings of the 11th International Conference on Natural Language Generation, pages 322–328. Ondˇrej Dušek, Jekaterina Novikova, and Verena Rieser. 2020. Evaluating the state-of-the-art of end-to-end natural language generation: The e2e nlg challenge. Computer Speech & Language, 59:123–156. Anirudh Goyal Alias Parth Goyal, Alessandro Sordoni, Marc-Alexandre Côté, Nan Rosemary Ke, and Yoshua Bengio. 2017. Z-forcing: Training stochastic recurrent networks. In Advances in neural information processing systems, pages 6713–6723. Di He, Yingce Xia, Tao Qin, Liwei Wang, Nenghai Yu, Tie-Yan Liu, and Wei-Ying Ma. 2016. Dual learning for machine translation. In Proceedings of the 30th International Conference on Neural Information Processing Systems, pages 820–828. Curran Associates Inc. Matthew Henderson, Milica Gaši´c, Blaise Thomson, Pirros Tsiakoulis, Kai Yu, and Steve Young. 2012. Discriminative spoken language understanding using word confusion networks. In 2012 IEEE Spoken Language Technology Workshop (SLT), pages 176– 181. IEEE. Matthew Henderson, Blaise Thomson, and Steve Young. 2014. Word-based dialog state tracking with recurrent neural networks. In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), pages 292– 299. Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. 1804 Robin Jia and Percy Liang. 2016. Data recombination for neural semantic parsing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12–22. Juraj Juraska, Panagiotis Karagiannis, Kevin Bowden, and Marilyn Walker. 2018. A deep ensemble model with slot alignment for sequence-to-sequence natural language generation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 152–162. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Diederik P Kingma and Max Welling. 2013. Autoencoding variational bayes. arXiv preprint arXiv:1312.6114. Dong-Hyun Lee. 2013. Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. In Workshop on Challenges in Representation Learning, ICML, volume 3, page 2. Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. Journal of machine learning research, 9(Nov):2579–2605. Nikola Mrkši´c, Diarmuid Ó Séaghdha, Tsung-Hsien Wen, Blaise Thomson, and Steve Young. 2017. 
Neural belief tracker: Data-driven dialogue state tracking. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1777–1788. Jekaterina Novikova, Ondˇrej Dušek, and Verena Rieser. 2017. The E2E dataset: New challenges for endto-end generation. In Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue, pages 201–206, Saarbrücken, Germany. Association for Computational Linguistics. Panupong Pasupat, Sonal Gupta, Karishma Mandyam, Rushin Shah, Mike Lewis, and Luke Zettlemoyer. 2019. Span-based hierarchical semantic parsing for task-oriented dialog. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1520–1526, Hong Kong, China. Association for Computational Linguistics. Ehud Reiter and Robert Dale. 1997. Building applied natural language generation systems. Natural Language Engineering, 3(1):57–87. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86–96. Amanda Stent, Rashmi Prasad, and Marilyn Walker. 2004. Trainable sentence planning for complex information presentation in spoken dialog systems. In Proceedings of the 42nd annual meeting on association for computational linguistics, page 79. Association for Computational Linguistics. Shang-Yu Su, Chao-Wei Huang, and Yun-Nung Chen. 2019. Dual supervised learning for natural language understanding and generation. ACL. Kai Sun, Lu Chen, Su Zhu, and Kai Yu. 2014. The sjtu system for dialog state tracking challenge 2. In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), pages 318–326. Bo-Hsiang Tseng, Paweł Budzianowski, Yen-chen Wu, and Milica Gasic. 2019. Tree-structured semantic encoder with knowledge sharing for domain adaptation in natural language generation. In Proceedings of the 20th Annual SIGdial Meeting on Discourse and Dialogue, pages 155–164. Miroslav Vodolán, Rudolf Kadlec, Jan Kleindienst, and V Parku. 2017. Hybrid dialog state tracker with asr features. EACL 2017, page 205. Marilyn A Walker, Owen Rambow, and Monica Rogati. 2001. Spot: A trainable sentence planner. In Proceedings of the second meeting of the North American Chapter of the Association for Computational Linguistics on Language technologies, pages 1–8. Association for Computational Linguistics. Tsung-Hsien Wen, Milica Gasic, Nikola Mrkši´c, PeiHao Su, David Vandyke, and Steve Young. 2015. Semantically conditioned lstm-based natural language generation for spoken dialogue systems. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1711–1721. Ronald J Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229–256. Yingce Xia, Tao Qin, Wei Chen, Jiang Bian, Nenghai Yu, and Tie-Yan Liu. 2017. Dual supervised learning. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 3789– 3798. JMLR. org. Hai Ye, Wenjie Li, and Lu Wang. 2019. Jointly learning semantic parser and natural language generator via dual information maximization. arXiv preprint arXiv:1906.00575. 
Steve Young, Milica Gašić, Simon Keizer, François Mairesse, Jost Schatzmann, Blaise Thomson, and Kai Yu. 2010. The hidden information state model: A practical framework for pomdp-based spoken dialogue management. Computer Speech & Language, 24(2):150–174.

Tao Yu, Zifan Li, Zilin Zhang, Rui Zhang, and Dragomir Radev. 2018. Typesql: Knowledge-based type-aware neural text-to-sql generation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2, pages 588–594.

Luke S Zettlemoyer and Michael Collins. 2005. Learning to map sentences to logical form: structured classification with probabilistic categorial grammars. In Proceedings of the Twenty-First Conference on Uncertainty in Artificial Intelligence, pages 658–666. AUAI Press.

Luke S Zettlemoyer and Michael Collins. 2012. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. arXiv preprint arXiv:1207.1420.

Xiaodong Zhang and Houfeng Wang. 2016. A joint model of intent determination and slot filling for spoken language understanding. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, pages 2993–2999. AAAI Press.

A Appendices

A.1 Derivation of Lower Bounds

We derive the lower bound for log p(x, y) as follows:

\begin{aligned}
\log p(x, y) &= \log \int_z p(x, y, z) \\
&= \log \int_z p(x, y, z)\,\frac{q(z|x)}{q(z|x)} \\
&= \log \int_z p(x|z, y)\,p(y|z, x)\,p(z)\,\frac{q(z|x)}{q(z|x)} \\
&= \log \mathbb{E}_{q(z|x)} \frac{p(x|z, y)\,p(y|z, x)\,p(z)}{q(z|x)} \\
&\geq \mathbb{E}_{q(z|x)} \log \frac{p(x|z, y)\,p(y|z, x)\,p(z)}{q(z|x)} \\
&= \mathbb{E}_{q(z|x)}[\log p(x|z, y) + \log p(y|z, x)] - \mathrm{KL}[q(z|x) \,\|\, p(z)]
\end{aligned} \qquad (14)

where q(z|x) represents an approximated posterior. This derivation gives us Equation 6 in the paper. Similarly, we can derive the alternative lower bound in Equation 7 by introducing q(z|y) instead of q(z|x).

For the marginal log-likelihood log p(x) or log p(y), the lower bound is derived as follows:

\begin{aligned}
\log p(x) &= \log \int_y \int_z p(x, y, z) \\
&= \log \int_y \int_z p(x|z, y)\,p(y)\,p(z)\,\frac{q(z|x)\,q(y|z, x)}{q(z|x)\,q(y|z, x)} \\
&= \log \mathbb{E}_{q(y|z,x)} \mathbb{E}_{q(z|x)} \frac{p(x|z, y)\,p(y)\,p(z)}{q(z|x)\,q(y|z, x)} \\
&\geq \mathbb{E}_{q(y|z,x)} \mathbb{E}_{q(z|x)} \log \frac{p(x|z, y)\,p(y)\,p(z)}{q(z|x)\,q(y|z, x)} \\
&= \mathbb{E}_{q(y|z,x)} \mathbb{E}_{q(z|x)} \log p(x|z, y) - \mathrm{KL}[q(z|x) \,\|\, p(z)] - \mathrm{KL}[q(y|x, z) \,\|\, p(y)]
\end{aligned} \qquad (15)

Note that the resulting lower bound consists of three terms: a reconstruction of x, a KL divergence which regularises the space of z, and a KL divergence which regularises the space of y. We have dropped the last term in our optimisation objective in Equation 9, since we do not impose any prior assumption on the output space of the NLU model. Analogously, we can derive the lower bound for log p(y). We also do not impose any prior assumption on the output space of the NLG model, which leads us to Equation 10.

A.2 Generated Examples

Reference of example
x: "for those prepared to pay over £30 , giraffe is a restaurant located near the six bells ."
y: {name=giraffe, eat_type=restaurant, price_range=more than £30, near=the six bells}
Prediction by Decoupled model
x: "near the six bells , there is a restaurant called giraffe that is children friendly ." (miss price_range)
y: {name=travellers rest beefeater, price_range=more than £30, near=the six bells} (wrong name, miss eat_type)
Prediction by JUGsemi model
x: "giraffe is a restaurant near the six bells with a price range of more than £30 ."
(semantically correct) y: {name=giraffe, eat_type=restaurant, price_range=more than £30, near=the six bells} (exact match) Table 12: An example of E2E dataset and predictions generated by the baseline model Decoupled and the proposed model JUGsemi. x and y denotes natural language and the corresponding semantic representation. Errors are highlighted following predictions. Reference of example x: "it’s going to be __arg_temp__ and __arg_cloud_coverage__ __arg_colloquial__ between __arg_start_time__ and __arg_end_time__" y: [__dg_inform__ [__arg_task__ get_forecast ] [__arg_temp__ __arg_temp__ ] [__arg_cloud_coverage__ __arg_cloud_coverage__ ] [__arg_ date_time_range__ [__arg_start_time__ __arg_start_time__ ] [__arg_end_time__ __arg_end_time__ ] [__arg_colloquial__ __arg_colloquial__ ] ] ] Prediction by Decoupled model x: "it will be __arg_temp__ degrees and __arg_cloud_coverage__ from __arg_start_time__ to __arg_end_time__" y: [__dg_inform__ [__arg_task__ get_forecast ] [__arg_temp__ __arg_temp__ ] [__arg_cloud_coverage__ __arg_cloud_coverage__ ] [__arg_ date_time__ [__arg_colloquial__ __arg_colloquial__ ] ] ] [__dg_inform__ [__arg_task__ get_forecast ] [__arg_temp__ __arg_temp__ ] [__arg_cloud_coverage__ __arg_cloud_coverage__ ] [__arg_date_time_range__ [__arg_start_time__ __arg_start_time__ ] [__arg_end_time__ __arg_end_time__ ] ] ] (not match) Prediction by JUG_semi model x: "the temperature will be around __arg_temp__ degrees __arg_colloquial__ between __arg_start_time__ and __arg_end_time__" y: [__dg_inform__ [__arg_task__ get_forecast ] [__arg_temp__ __arg_temp__ ] [__arg_cloud_coverage__ __arg_cloud_coverage__ ] [__arg_ date_time_range__ [__arg_start_time__ __arg_start_time__ ] [__arg_end_time__ __arg_end_time__ ] [__arg_colloquial__ __arg_colloquial__ ] ] ] (exact match) Table 13: An example of weather dataset and predictions generated by the baseline model Decoupled and the proposed model JUGsemi. x and y denotes natural language and the corresponding semantic representation. NLU result are highlighted following predictions.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1808–1822 July 5 - 10, 2020. ©2020 Association for Computational Linguistics

Automatic Detection of Generated Text is Easiest when Humans are Fooled

Daphne Ippolito†‡∗ [email protected]
Daniel Duckworth‡* [email protected]
Chris Callison-Burch†‡ [email protected]
Douglas Eck‡ [email protected]
∗Equal contribution, ‡Google, †University of Pennsylvania

Abstract

Recent advancements in neural language modelling make it possible to rapidly generate vast amounts of human-sounding text. The capabilities of humans and automatic discriminators to detect machine-generated text have been a large source of research interest, but humans and machines rely on different cues to make their decisions. Here, we perform careful benchmarking and analysis of three popular sampling-based decoding strategies (top-k, nucleus sampling, and untruncated random sampling) and show that improvements in decoding methods have primarily optimized for fooling humans. This comes at the expense of introducing statistical abnormalities that make detection easy for automatic systems. We also show that though both human and automatic detector performance improve with longer excerpt length, even multi-sentence excerpts can fool expert human raters over 30% of the time. Our findings reveal the importance of using both human and automatic detectors to assess the humanness of text generation systems.

1 Introduction

State-of-the-art generative language models are now capable of producing multi-paragraph excerpts that at a surface level are virtually indistinguishable from human-written content (Zellers et al., 2019; Radford et al., 2019; Adelani et al., 2020). Often, only subtle logical fallacies or idiosyncrasies of language give away the text as machine-generated, errors that require a close reading and/or domain knowledge for humans to detect.

Deceptive text, whether human- or machine-generated, has entered the sphere of public concern (Cooke, 2018). It propagates quickly (Vosoughi et al., 2018), sets political agendas (Vargo et al., 2018), influences elections (Allcott and Gentzkow, 2017), and undermines user trust (Wang et al., 2012; Song et al., 2015). Recently, Adelani et al. (2020) have shown that automatically generated reviews are perceived to be as fluent as human-written ones. As generative technology matures, authors, well-meaning or otherwise, will increasingly employ it to augment and accelerate their own writing. It is more imperative now than ever for both humans and automated systems to be able to detect and identify machine-generated texts in the wild. However, there has thus far been little inquiry into the textual properties that cause humans to give generated text high human-like ratings compared to those that cause automatic systems to rate it highly.

To speak of texts produced by language models, we must first consider how these texts are generated. A neural language model encodes a probability distribution over the next word in a sequence given the previous words.¹ A decoding strategy is an algorithm that generates sequences from a language model by determining how words should get selected from this distribution. The field has largely moved toward probabilistic decoding strategies that randomly sample from the output distribution token-by-token.
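As a point of reference for the discussion that follows, the sketch below shows what one step of such sampling-based decoding looks like, including the top-k and nucleus (top-p) truncations and a temperature parameter examined later in the paper. The helper and its defaults are our own illustration, not GPT-2-specific code.

```python
import torch

def sample_next_token(logits, k=None, p=None, temperature=1.0):
    """Draw one token id from a language model's next-token distribution.

    k / p switch on top-k or nucleus (top-p) truncation; with both unset this is
    untruncated random sampling. A generic sketch, not the paper's exact code.
    """
    probs = torch.softmax(logits / temperature, dim=-1)
    if k is not None:                                        # top-k truncation
        top_probs, top_ids = probs.topk(k)
        top_probs = top_probs / top_probs.sum()
        return top_ids[torch.multinomial(top_probs, 1)]
    if p is not None:                                        # nucleus truncation
        sorted_probs, sorted_ids = probs.sort(descending=True)
        keep = sorted_probs.cumsum(-1) - sorted_probs < p    # smallest set with mass >= p
        sorted_probs = sorted_probs * keep
        sorted_probs = sorted_probs / sorted_probs.sum()
        return sorted_ids[torch.multinomial(sorted_probs, 1)]
    return torch.multinomial(probs, 1)                       # sample the full distribution

vocab_logits = torch.randn(50_000)
print(sample_next_token(vocab_logits, k=40))                 # top-k with k = 40
print(sample_next_token(vocab_logits, p=0.96))               # nucleus with p = 0.96
```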
However, when many low-likelihood words cumulatively contain quite a bit of probability mass, choosing one of these words can lead to odd or contradictory phrases and semantic errors. Humans are quick to notice these types of errors. For this reason, it has become common to modify the language model's output probability distribution to increase the chance of sampling tokens with high likelihood according to the language model. Top-k random sampling, where low-likelihood words are restricted from being generated, is one such method.

¹ Often these "words" are actually subword character sequences such as BPE tokens (Sennrich et al., 2016).

A language model that is only permitted to produce high-likelihood words is less likely to make a poor choice and create the type of mistakes that are easy for humans to detect. Since humans are not proficient at identifying when a model subtly favors some utterances more often than a human author would, they don't notice the over-representation of high-likelihood words in the generated text. In contrast, automatic systems excel at identifying statistical anomalies and struggle to build deeper semantic understanding. Top-k in particular creates text that is easy for machines to detect but very hard for humans. Thus, we observe the general trend: as the number of unlikely words available to be chosen is increased, humans get better at detecting fakes while automatic systems get worse.

In this work, we study three popular random decoding strategies (top-k, nucleus, and temperature sampling) applied to GPT-2 (Radford et al., 2019). We draw a large number of excerpts generated by each strategy and train a family of BERT-based (Devlin et al., 2019) binary classifiers to label text excerpts as human-written or machine-generated. We find large differences in human rater and classifier accuracy depending on the decoding strategy employed and the length of the generated sequences. Regardless of strategy, we find human raters achieve significantly lower accuracy than the automatic discriminators. We also show that when a decoding strategy severely modifies the unigram token distribution, as top-k does, humans have trouble detecting the resultant generated text, but automatic classifiers find it the easiest to discriminate. Worryingly, we further find that classifiers are brittle; they generalize poorly when trained to discriminate samples from one strategy and then evaluated on samples from another.

In summary, our contributions are:
• A comprehensive study of generated text detection systems' sensitivity to model structure, decoding strategy, and excerpt length.
• An analysis of human raters' ability to identify machine-generated content, and how human raters differ from automatic detectors.

2 Related Work

Generative Language Models With a sufficiently large training set and number of trainable parameters, neural language models based on the Transformer architecture (Vaswani et al., 2017) are capable of generating convincing, human-like excerpts up to several paragraphs in length. GPT-2 (Radford et al., 2019), GROVER (Zellers et al., 2019), and Transformer-DMCA (Liu et al., 2018) are a few examples of large, publicly available models with this ability. GROVER, in particular, has been shown to generate fake news that is more trustworthy than human-written fake news according to human raters.

Human Detection The task of trying to guess whether text is coming from a robot or a fellow human was made famous by the Turing Test (Turing, 1950).
It continues to be used in chatbot evaluation (Lowe et al., 2017). The related (but not identical) task of asking human raters to judge the quality of machine-generated excerpts remains the gold standard for evaluating open-domain generation systems (van der Lee et al., 2019). Kreps et al. (2020), Gehrmann et al. (2019), and others have stressed the importance of humans being able to identify fake content on the web.

Automatic Detection The rise of machine-generated content has led to the development of automated systems to identify it. GROVER was designed to not only generate convincing news excerpts but to also identify them using a fine-tuned version of the generative model itself (Zellers et al., 2019). GLTR, expecting attackers to use sampling methods that favor high-likelihood tokens, aims to make machine-generated text detectable by computing histograms over per-token log likelihoods (Gehrmann et al., 2019). Bakhtin et al. (2019) frame human-text detection as a ranking task and evaluate their models' cross-domain and cross-model generalization, finding significant loss in quality when training on one domain and evaluating on another. Schuster et al. (2019) argue that the language distributional features implicitly or explicitly employed by these detectors are insufficient; instead, one should look to explicit fact-verification models. Finally, discriminators for whether text is machine-generated are a promising research direction in adversarial training (Lin et al., 2017; Li et al., 2017) and in automatic evaluation of generative model quality (Novikova et al., 2017; Kannan and Vinyals, 2017; Lowe et al., 2017).

Natural Language Understanding Automatic detection of machine-generated text benefits from a semantic understanding of the text. Contradictions, falsehoods, and topic drift can all indicate that an excerpt was machine-generated.
We are also able to answer questions about how the length of the examples in the training set impacts our ability to automatically classify excerpts of that same length as either human-written or machine-generated. 4 Dataset Methodology All of our generated text samples are drawn from GPT-2, a state-of-the-art Transformer-based generative language model that was trained on text from popular web pages (Radford et al., 2019). While we use the GPT-2 LARGE model with 774M parameters, we found that similar trends to those reported here hold in experiments with smaller language models. Given an autoregressive language model that defines a probability distribution over the next token given the previous tokens in a sequence, a decoding strategy generates text by deciding how to output a token at each step based on the predicted distributions. Perhaps the most straightforward decoding strategy is to randomly choose a token with probability proportional to its likelihood. A challenge with the random sampling approach is that these probability distributions often contain a long tail of vocabulary items that are individually low-probability but cumulatively comprise a substantial amount of probability mass. Holtzman et al. (2020) observe that choosing tokens from this tail often leads to incoherent generations. Top-k sampling, nucleus sampling, and (in the extreme) beam search have all been proposed to heuristically promote samples with higher pertoken likelihoods. Top-k and nucleus sampling both do so by setting the likelihood of tokens in the tail of the distribution to zero. Top-k restricts the distribution to all but the k most likely tokens, where k is a constant (Fan et al., 2018). Nucleus sampling, also called top-p, truncates the distribution at each decoding step t to the kt-most-likely next tokens such that the cumulative likelihood of these tokens is no greater than a constant p (Holtzman et al., 2020). We thus consider three different decoding strategy settings: • Sample from the untruncated distribution • Top-k, choosing k=40 (Radford et al., 2019). • Nucleus sampling (aka top-p), choosing p=0.96 (Zellers et al., 2019). In addition, we form “negative” examples of human-written text by taking excerpts of web text that come from the same distribution as GPT-2’s training data.2 By picking text that resembles GPT-2’s train set, we ensure that our classifiers can’t simply take advantage of stylistic differences between the human-written text corpus and the kind of text GPT-2 was trained to generate. For each decoding method, we construct a training dataset by pairing 250,000 generated samples with 250,000 excerpts of web text. 5,000 additional paired samples are kept aside for validation and test datasets. Lastly, we filter out excerpts with fewer than 192 WordPiece tokens (Wu et al., 2https://github.com/openai/ gpt-2-output-dataset 1811 2016) (excerpts might be quite short if the model produces an end-of-text token early on). See Appendix 1 for final dataset sizes. A crucial question when generating text with a language model is whether or not to provide a priming sequence which the language model should continue. Unconditioned samples, where no priming text is provided, in conjunction with top-k sampling, lead to pathological behavior for discriminators as the first token of the generated text will always be one of k possible options. 
On the other hand, if long sequences of human text are used as priming, the space of possible generated sequences is larger, but the detection problem shifts from one of “how human-like is the generated text?” to “how well does the generated text follow the priming sequence?”. Since in this study we are interested in the former simpler question, we create two datasets, one with no priming, and one with the minimum amount of priming possible: a single token of web text. This means that for every excerpt of web text in the training set, there is an excerpt of machinegenerated text that starts with the same token. We find that even with limited priming, the ability of automatic detectors can be strongly impacted. To study the effect of excerpt length, we construct variations of the above datasets by truncating all excerpts to ten possible lengths ranging from 2 to 192 WordPiece tokens (Wu et al., 2016). In total, we obtain sixty dataset variations: one per sampling method, truncation length, and choice of priming or no priming. 5 Automatic Detection Method The primary discriminator we employ is a finetuned BERT classifier (Devlin et al., 2019). We fine-tune one instance of BERT per dataset variation described above. For the longest sequence length, n=192, we compare BERT’s performance with several simple baselines that have been proposed in other work. Fine-tuned BERT We fine-tune BERT-LARGE (cased) on the task of labeling a sentence as human- or machine- generated. The models are trained for 15 epochs, with checkpoints saved every 1000 steps, and a batch size of 256. All results are reported on the test set using the checkpoint for which validation accuracy was highest. Bag-of-Words For each sequence, we compute a bag-of-words embedding where each dimension corresponds to a token in GPT-2’s 50,000 token BPE vocabulary (Sennrich et al., 2016), and we count how many times that token appears in the text sequence. We then train a logistic regression binary classifier to predict human- or machinewritten given this 50,000-dimensional embedding. We experimented with truncating embedding size by removing entries for infrequent vocabulary words, but this did not improve performance. Histogram-of-Likelihood Ranks Following GLTR (Gehrmann et al., 2019), we compute the probability distribution of the next word given the previous words in a text sequence according to a trained language model (in our case the same GPT-2 model that was used for generation). At each sequence position, we rerank the vocabulary words by likelihood, and record the rank of the ground-truth next word within this list. These ranks are then binned. GLTR uses four bins, counting (1) the number of times the top 1 word is seen, (2) the number of times words ranked 2 through 5 are seen, (3) words ranked 6-100, and (4) words ranked >100. However, we observe higher accuracy when 50 bins are spread uniformly over the possible rankings. This means that since there are 50,000 vocabulary words, the first bin counts the number of times the actual next word was within the 1,000 mostly likely next words, the second bin counts the 1,001-2,000th, and so on. We then train logistic regression binary classifiers to predict human- or machine-written given either the 4-dimensional histograms or 50-dimensional histograms as input. Total Probability Solaiman et al. (2019) propose a very simple baseline consisting of a threshold on the total probability of the text sequence. 
An excerpt is predicted as machine-generated if its likelihood according to GPT-2 is closer to the mean likelihood over all machine-generated sequences than to the mean of human-written ones.

6 Human Detection Method

The human evaluation task is framed similarly to the automatic one. We ask the raters to decide whether a passage of text was written by a human or by a computer algorithm. (Full instructions are in the Appendix.) Raters are allowed to choose between four options: "definitely" or "possibly" machine-generated and "definitely" or "possibly" human-written. They are first shown an excerpt of length 16 WordPiece tokens. After they make
While Gehrmann et al. (2019) report an AUC of 0.87 on classifying text as real or generated using logistic regression on the four buckets of the GLTR system, we report AUC between 0.52 and 0.56 for this task. The discrepancy is likely due to the fact that the human-written text in our discriminator training set comes from the same distribution as the text used to train the language model, while in GLTR the human text comes from children’s books, scientific abstracts, and newspaper articles. The selection of training data for learned detection systems is crucial. In real-world applications, the choice ought to reflect the genres that builders of text-generation systems are trying to impersonate. Fine-tuned BERT In Figure 1a, we begin by observing discriminator accuracy as a function of excerpt length and sampling method. As can be intuitively expected, as sequence length increases, so too does accuracy. For unconditioned text decoded with nucleus (p0.96) and untruncated (p1.0) random sampling, we find discriminator accuracy increases from 55%, near random, to about 81% for the longest sequences tested. In contrast, discriminators trained and evaluated on top-k achieve over 80% accuracy even on 16-token excerpts. Why are top-k’s samples so easy to detect? In Figure 2b, we see the percentage of probability mass concentrated in the k most common token types for each sampling method. While random sampling and nucleus sampling are very similar to human-written texts, we see top-k concentrating up to 80% of its mass in the first 500 most common tokens. The other sampling methods as well as human-written texts require at least 1,100 token types for the same. It is clear that top-k’s distribu1813 50% 55% 60% 65% 70% 75% 80% 85% 90% 95% 100% 0 32 64 96 128 160 192 Accuracy Sequence length in tokens Accuracy of BERT Fine-tuned Discriminator k40-1wordcond k40-nowordcond p0.96-1wordcond p0.96-nowordcond p1.0-1wordcond p1.0-nowordcond (a) 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 2 4 8 16 32 64 96 128 160 192 Sequence length in tokens Fraction of BERT Discriminator Errors that are Machine-generated Labeled as Human-written k40-1wordcond p0.96-1wordcond p1.0-1wordcond (b) Figure 1: In (a), accuracy increases as the length of the sequences used to train the discriminator is increased. In (b), we see that the BERT fine-tuned discriminator predicts about the same number of false-positives as falsenegatives when trained with samples generated using top-p sampling. However, for top-k, it more often mistakes machine-generated text to be human-written, while for untruncated random sampling the opposite is the case. tion over unigrams strongly diverges from humanwritten texts–an easy feature for discriminators to exploit. In fact, See et al. (2019) note that it takes setting k to 1000 to achieve about the same amount of rare word usage and fraction of non-stopword text as as human writing.3 This makes it very easy for the model to pick out machine-generated text based on these distributional differences. One way to help resolve this problem is to add priming text. Doing so causes more rare words to be incorporated into the top-k of the unigram distribution. Adding even a single human word of priming significantly reduces the performance of detectors trained with top-k random sampling. Without priming, a discriminator trained on sequences of length 2 can classify with ∼90% accuracy the provenance of the text (Figure 1a). By adding one priming token, accuracy drops to ∼65%. 
Even on the longest 192-length sequences, top-k discriminator accuracy is 6% lower on the primed dataset than the unprimed one. When generating with nucleus or untruncated random sampling, adding a priming token is not as impactful, as these methods are already sampling from a large fraction (or all) of the probability distribution. This is seen in Figure 2a where at the very first step of unprimed generation, nucleus sampling selects from 3075 possible vocabulary words, and at later positions selects from on 3when decoding from the GPT-2 small model with 117M parameters. average more than 500. Untruncated random sampling always selects from the entire 50,000 word vocabulary, whereas top-k only selects from k. Transferability In Table 2, we show how discriminators trained with samples from one decoding strategy can transfer at test time to detecting samples generated using a different decoding strategy. Unsurprisingly a discriminator trained on top-k generalizes poorly to other sampling methods: accuracy drops to as low as 42.5%, worse than chance. Conversely, training the discriminator with sequences sampled from the untruncated distribution leads to little transferability to detecting top-k samples. Only the discriminator trained with nucleus sampling (a compromise between unmodified sampling and top-k) was able to detect sequences from the other sampling strategies without too much of a hit to accuracy. As expected, a discriminator trained on an equal portion of data from each decoding method does reasonably at detecting all three. Perhaps this lack of transferability is related to each discriminator’s calibration. Indeed, the degree to which a discriminator’s average prediction deviates from 50% is a direct indicator of its accuracy. In Table 3, we observe that of the three BERT discriminators, only that trained on top-p samples predicts ‘machine-generated’ on approximately 50% of in-domain examples as expected. This same discriminator’s behavior holds on datasets generated by other sampling strategies 1814 0 50 100 150 200 Position in sequence 500 1000 1500 2000 2500 3000 3500 4000 4500 k Mean k Chosen at each Position during Generation with Nucleus Sampling p0.96-nowordcond p0.96-1wordcond (a) 0 500 1000 1500 2000 2500 k most common unique tokens 0% 20% 40% 60% 80% 100% % all tokens p1.0-1wordcond k40-1wordcond p0.96-1wordcond webtext (b) Figure 2: In (a), the average (over sequences in the test set) k chosen at each step during generating with nucleus sampling is plotted. Adding a single word of priming strongly impacts the ks chosen for the first few positions, but this difference quickly dissipates. In (b), we consider the first token generated in each sequence by top-k, and plot what fraction of these are captured by the k most common unique tokens from the vocabulary. Overall, at its first step, top-k concentrates 80% of its probability mass in the 500 most common tokens from the vocabulary. ! !"# !"$ !"% !"& !"' !"( !") !"* !"+ # #( %$ (& #$* #+$ ,-./-01-23-04562702589-0: ;<=1578028>2?=5-<2@<<8<:256=52=<-2A=1670-B 4-0-<=5-C2D=E-3-C2=:2F/G=0BH<755-0 9&!B#H8<C180C I!"+(B#H8<C180C I#"!B#H8<C180C (a) (b) (c) Figure 3: (a) and (b) show human rater accuracy of correctly identifying an excerpt as human-written or machinewritten, shown with 80% confidence internals, in (a), broken up by decoding strategy and in (b), overall. Accuracy increases as raters observe more tokens. 
(c) shows that for short excerpts, most rater mistakes are them incorrectly thinking machine-generated text is human written. The two errors types become more balanced at longer lengths. Eval top-k nucleus random Train top-k 90.1 57.1 43.8 nucleus 79.1 81.3 78.4 random 47.8 63.7 81.7 mixed 88.7 74.2 72.2 Table 2: Accuracy of BERT fine-tuned discriminator when trained on samples from one strategy (rows) and evaluated on another (columns). Trained on samples with 192 tokens. The ‘mixed’ dataset is one containing an equal portion of samples from each strategy. as well. In contrast, we observe that discriminators trained on top-k and untruncated random samples severely underestimate the percentage of machine-generated excerpts in out-of-domain datasets. Even within domain (Figure 1b), we find both discriminators heavily favor a single class, inEval top-k nucleus random Train top-k 60.9 27.9 14.5 nucleus 49.2 51.7 48.9 random 7.3 22.6 38.3 Table 3: Average probability of ‘machine-generated’ according to each length-192 discriminator. The expected in-domain probability is 0.5. One token of conditioning. creasingly so as the number of tokens increases. Human Evaluation Overall human performance across all sampling methods is shown in Figure 3b. Even with the multi-paragraph 192-length excerpts, human performance is only at 71.4%, indicating that even trained humans struggle to correctly identify machine-generated text over a quar1815 Truth Raters p1.0 k40 p0.96 Truth Raters p1.0 k40 p0.96 H M H H M H H M M M EDIT:OKAY!, I guess that’ll work for now. > http://www.teamfortress.com/ and then go buy the game and experience some of the best online gaming I have ever played. ˆ ˆBoth girls had a really fun time and I had a GREAT time making both of these costumes. Everything was altered even a little bit(dying the pants a darker grey and painting the boots and shirts) But my piece de resistance would have to be my eldest’s Medi-Gun.If you have any questions about the costumes, I would be happy to assist you!Oh and here’s a video of my daughter before the costume was completed.Thanks! Image copyright Getty Images Image caption Women mourn over the coffin of one of the victim’s of Sunday’s bombing in Ankara ¶Who’d be in Turkey’s shoes right now? ¶Since July last year, hundreds of soldiers and civilians have been killed in terrorist attacks. Suicide bombs have torn into crowds of demonstrators and tourists. Military convoys have been targeted in the heart of the capital. ¶A long-running Kurdish insurgency, once thought to be close to resolution after years of painstaking efforts to build bridges, has erupted once more. ¶The country is awash with Syrian and other refugees. The government has been under pressure to stop them moving on into Europe and prevent would-be jihadis travelling the other way. ¶How dangerous is Turkey’s unrest? ¶Tears and destruction amid PKK crackdown ¶Turkey v Islamic State v the Kurds Truth Raters p1.0 k40 p0.96 Truth Raters p1.0 k40 p0.96 M M H M M H First off, this thread has done a pretty good job of describing in detail yet another broken touchscreen. That’s the difference between a smartphone and a PC with no prying eyes having to snap shots for the police to find. ¶What I would like to address is the mindset that generally surrounds Chrome OS users. 
To me this is analogous to saying that Apple does“hate their Windows”, or that HP does“hate their Macs” as if http://twitter.com/) (and that quote is from two years ago), that anyone who covers smartphones and tablets from a “PC” perspective is just jealous. ¶Chrome OS is for browsing the web, PC processors can do stronger things in that regard, Windows is a juggernaut on those fronts. This is how I see it. Yes, it can be slow. And yes, you need a fast CPU FOR ALABAMA, GOOD WEEKS ¶AND A TOUR OF CAIRO ¶THE ALABAMA COMMITTEE ON THE STUDY OF THE AMERICAN SECURITY AGENDA, ¶America’s future has been mapped out in carved stone. Metro Atlanta’s last US congressman, Bill Posey, was a inextricable integral element of the Citadel project as it became another metaphor for Atlanta’s transformation from an industry backwater into the finance and information hub of the nation’s capital. Meanwhile, Cobb County – Atlanta’s geode of change – is home to some of the largest industrial parks in the South, a regional cultural center, a 100year-old manufacturing town and a potent symbol of the former city’s cherished Georgian past. The gentry still live there, the defunct industrial landscapes carry the names of Truth Raters p1.0 k40 p0.96 Truth Raters p1.0 k40 p0.96 M H M M H M Exidentia at Eurnari, is an upcoming Cryptopia event which is currently still in development. Be a part of the first live stream of this year’s event on 15-16 January 2016! ¶Since the release of v1.22, Exidentia has received a fair amount of user feedback. This event takes place in the underwater Cryptopia they have built. During this event, you will learn about the ocean and areas around it, and be reached by a treasure hunter that helps you explore the different areas. ¶There will be six different levels in this event that you will become acquainted with: thought Polar Lava, Ocean Seared Cones and Celestine Floors, Sea Damaged Aerie Bricks, coast Puddle (congipit stopping at red water), Shaikh Swamp and Bugmite. At rotating points, you will learn how to access various types of creatures Ever since the opening of the North American College of Art Education in 1990, the demand for art education in America has grown steadily, and in recent years we have seen the rise of students that pursue art education not in the classroom but at art academies. This year saw another 50 percent increase in the number of art academies in the United States offering courses – with an additional 10 percent of students in 2017 taking art. ¶Some major changes have occurred in recent years with regard to the art curriculum and the way students learn, and we will explore each of these in coming months as we look at the various forms of art education. There is no one-size-fits-all approach for this or any other field of study, and students who begin a course in art education may change their plans based on what they see that course, including what lessons they have completed and the resources available, to create meaningful experiences of artistic creation. ¶One important area Table 4: Some 192-token examples where at least two expert raters agreed with each other, but were not in agreement with the automatic discriminators. The first row shows examples where the ground-truth was human-written, the second shows machine-generated examples where the corresponding discriminator guessed incorrectly, and the third shows machine-generated examples where the discriminator was correct, but raters got it wrong. ter a time. 
However, it is worth noting that our best raters achieved accuracy of 85% or higher, suggesting that it is possible for humans to do very well at this task. Further investigation is needed into how educational background, comfort with English, participation in more extensive training, and other factors can impact rater performance. To break up the accuracies by sampling method in a way that is comparable to the results shown for the automatic discriminators, we pair each machine-generated example with a randomly selected one of webtext to create a balanced dataset for each sampling strategy. Performance is shown in Figure 3a. Top-k produces the text that is hardest for raters to correctly distinguish, but as shown in Section 7, it is the easiest for our automatic detection systems. Samples from untruncated random sampling and nucleus sampling with p=0.96 are equivalently difficult for raters to classify as machine-generated. Our human evaluation results suggest that much lower p-values than the 0.92 to 0.98 range proposed in Zellers et al. (2019) might be necessary in order to generate text that is considered significantly more human-like to human raters than the text produced by using the untruncated distribution. Table 4 gives several examples where human raters and our BERT-based discriminators disagreed. When raters incorrectly labeled humanwritten text as machine-generated, often the excerpts contained formatting failures introduced when the HTML was stripped out. In the middle two examples, topic drift and falsehoods such as Atlanta being the “information hub of the nation’s capital” allowed humans to correctly detect the generated content. However, in the bottom two examples, the high level of fluency left human raters fooled. Overall we find that human raters—even “expert” trained ones—have consistently worse accuracy than automatic discriminators for all decoding methods and excerpt lengths. In our experiments, randomly-selected pairs of raters agree with each other on a mere 59% of excerpts on average. (In comparison, raters and discriminators agree on 61% to 70% of excerpts depending on the discriminator considered). We surmise that the gap between human and machine performance will only grow as researchers inevitably train bigger, better detection models on larger amounts of 1816 training data. While improved detection models are inevitible, it is unclear how to go about improving human performance. GLTR proposes providing visual aids to humans to improve their performance at detecting generated-text, but it is unlikely that their histogram-based color-coding will continue to be effective as generative methods get better at producing high-quality text that lacks statistical anomalies. 8 Conclusion In this work, we study the behavior of automated discriminators and their ability to identify machine-generated and human-written texts. We train these discriminators on balanced binary classification datasets where all machinegenerated excerpts are drawn from the same generative model but with different decoding strategies. We find that, in general, discriminators transfer poorly between decoding strategies, but that training on a mix of data from methods can help. We also show the rate at which discriminator accuracy increases as excerpts are lengthened. We further study the ability of expert human raters to perform the same task. We find that rater accuracy varies wildly, but has a median of 74%, which is less than the accuracy of our bestperforming discriminator. 
Most interestingly, we find that human raters and discriminators make decisions based on different qualities, with humans more easily noticing semantic errors and discriminators picking up on statistical artifacts. In our experiments, these artifacts are most prominent with top-k sampling. However, any strategy that oversamples high-likelihood words is susceptible. As the p in nucleus sampling is set increasingly lower to achieve more fluent text (some systems are already using p as low as 0.5 (Miculicich et al., 2019)), the distributional deviations that plague top-k text will surface in nucleus sampling as well. Holtzman et al. (2020) explain how a unique attribute of human language is that it dips in and out of low probability zones. This variance in likelihood is what makes human-written text interesting and exciting to read. Today’s generation systems have not yet solved the problem of mimicking the human cadence without introducing poor word choices that are easy for humans to detect. Generation systems often optimize for fooling humans without acknowledging the trade-off that exists between human perception of quality and ease of automatic detection. We therefore suggest three prongs for future research: 1. Identifying ways to improve the language models and decoding strategies we use in order to generate text that is both exciting (ie. unlikely) and semantically plausible. 2. Building better world understanding into automatic discriminators so that they are more capable of detecting the types of errors that humans notice. 3. Developing tools and educational materials to improve humans’ ability to detect machine-generated text. These may include automatic detectors with components that explain their predictions. Finally, we would like to note that all of our experiments were performed with English language models, and it remains an open question how the trade-off between ease of human detection and ease of automatic detection might differ for languages that are very different from English. Acknowledgements This research is based upon work supported in part by U.S. DARPA KAIROS Program No. FA875019-2-1004. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein. We also thank Noah Fiedel, Peter Liu, Sharan Narang, Joao Sedoc, Yun William Yu, and Hugh Zhang for their valuable feedback. References David Ifeoluwa Adelani, Haotian Mai, Fuming Fang, Huy H Nguyen, Junichi Yamagishi, and Isao Echizen. 2020. Generating sentiment-preserving fake online reviews using neural language models and their human-and machine-based detection. In International Conference on Advanced Information Networking and Applications, pages 1341–1354. Springer. Hunt Allcott and Matthew Gentzkow. 2017. Social media and fake news in the 2016 election. Journal of economic perspectives, 31(2):211–36. Anton Bakhtin, Sam Gross, Myle Ott, Yuntian Deng, Marc’Aurelio Ranzato, and Arthur Szlam. 2019. 1817 Real or fake? learning to discriminate machine from human generated text. arXiv preprint arXiv:1906.03351. Nicole A Cooke. 2018. Fake news and alternative facts: Information literacy in a post-truth era. American Library Association. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. 
BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hierarchical neural story generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 889–898. Sebastian Gehrmann, Hendrik Strobelt, and Alexander M Rush. 2019. Gltr: Statistical detection and visualization of generated text. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 111–116. Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text degeneration. In International Conference on Learning Representations. Anjuli Kannan and Oriol Vinyals. 2017. Adversarial evaluation of dialogue models. arXiv preprint arXiv:1701.08198. Sarah E Kreps, Miles McCain, and Miles Brundage. 2020. All the news that’s fit to fabricate: Aigenerated text as a tool of media misinformation. Social Science Research Network. Chris van der Lee, Albert Gatt, Emiel van Miltenburg, Sander Wubben, and Emiel Krahmer. 2019. Best practices for the human evaluation of automatically generated text. In Proceedings of the 12th International Conference on Natural Language Generation, pages 355–368. Jiwei Li, Will Monroe, Tianlin Shi, S´ebastien Jean, Alan Ritter, and Dan Jurafsky. 2017. Adversarial learning for neural dialogue generation. arXiv preprint arXiv:1701.06547. Kevin Lin, Dianqi Li, Xiaodong He, Zhengyou Zhang, and Ming-Ting Sun. 2017. Adversarial ranking for language generation. In Advances in Neural Information Processing Systems, pages 3155–3165. Peter J. Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, and Noam Shazeer. 2018. Generating wikipedia by summarizing long sequences. In International Conference on Learning Representations. Ryan Lowe, Michael Noseworthy, Iulian Vlad Serban, Nicolas Angelard-Gontier, Yoshua Bengio, and Joelle Pineau. 2017. Towards an automatic turing test: Learning to evaluate dialogue responses. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1116–1126. Lesly Miculicich, Marc Marone, and Hany Hassan. 2019. Selecting, planning, and rewriting: A modular approach for data-to-document generation and translation. EMNLP-IJCNLP 2019, page 289. Timothy Niven and Hung-Yu Kao. 2019. Probing neural network comprehension of natural language arguments. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4658–4664, Florence, Italy. Association for Computational Linguistics. Jekaterina Novikova, Ondˇrej Duˇsek, Amanda Cercas Curry, and Verena Rieser. 2017. Why we need new evaluation metrics for nlg. arXiv preprint arXiv:1707.06875. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8). Tal Schuster, Roei Schuster, Darsh J Shah, and Regina Barzilay. 2019. Are we safe yet? the limitations of distributional features for fake news detection. arXiv preprint arXiv:1908.09805. Abigail See, Aneesh Pappu, Rohun Saxena, Akhila Yerukola, and Christopher D Manning. 2019. 
Do massively pretrained language models make better storytellers? In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 843–861. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715– 1725. Irene Solaiman, Miles Brundage, Jack Clark, Amanda Askell, Ariel Herbert-Voss, Jeff Wu, Alec Radford, and Jasmine Wang. 2019. Release strategies and the social impacts of language models. arXiv preprint arXiv:1908.09203. Jonghyuk Song, Sangho Lee, and Jong Kim. 2015. Crowdtarget: Target-based detection of crowdturfing in online social networks. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, pages 793–804. ACM. Alan Turing. 1950. Computing machinery and intelligence-am turing. Mind, 59(236):433. 1818 Chris J Vargo, Lei Guo, and Michelle A Amazeen. 2018. The agenda-setting power of fake news: A big data analysis of the online media landscape from 2014 to 2016. New media & society, 20(5):2028– 2049. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998–6008. Soroush Vosoughi, Deb Roy, and Sinan Aral. 2018. The spread of true and false news online. Science, 359(6380):1146–1151. Gang Wang, Christo Wilson, Xiaohan Zhao, Yibo Zhu, Manish Mohanlal, Haitao Zheng, and Ben Y Zhao. 2012. Serf and turf: crowdturfing for fun and profit. In Proceedings of the 21st international conference on World Wide Web, pages 679–688. ACM. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144. Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, and Yejin Choi. 2019. Defending against neural fake news. CoRR, abs/1905.12616. Tianyi Zhang, Varsha Kishore, Felix Wu*, Kilian Q. Weinberger, and Yoav Artzi. 2020. Bertscore: Evaluating text generation with bert. In International Conference on Learning Representations. 1819 A Appendix A.1 Dataset Sizes Table 5 shows the number of sequences used for training and evaluating each of the automatic discriminators. Recall that each discriminator is trained for binary classification on an a dataset of machine-generated (positive) and human-written (negative) examples. Each dataset was constructed by pairing the human-written excerpts (last row of Table 5) with the machine-generated excerpts drawn via a particular decoding algorithm (‘k40’, ‘p0.96’, or ‘p1.0’) and priming strategy (‘nocond’ or ‘1wordcond’). Originally the humanwritten set and each machine-generated set contained 250,000 training examples, 5,000 validation examples, and 5,000 test examples. Table 5 shows the resulting counts after after all excerpts with sequence length shorter than 192 tokens were filtered out. Thus, the final training, validation, and test sets were almost, but not quite, balanced. A.2 Further Details on Human Evaluation The user interface for the human evaluation task is shown in Figure 6. At each step, the rater is shown additional text and asked to guess whether the excerpt is human-written or machine-generated. 
They are able to revise their guess at each subsequent step. The newly appended text at each step is bolded in the UI. At the end, workers are told whether or not they got the question correct. To gauge worker attention levels, 10% of questions shown to workers explicitly stated what answer ought to be specified. An example of one of these “honeypot” questions is shown in Figure 7. Amazon Mechanical Turk workers got 83% accuracy on these questions. Expert raters got 91.8% accuracy. Table 8 shows the accuracy of each expert rater along with the number of annotations they provided. Table 9 shows the example exerpts that were used to “train” the expert raters. For both the Amazon Mechanical Turk raters and the expert raters initial predictions were biased towards ‘possibly human,’ and only by observing more tokens did their predictions become more confident. Figure 4 shows that ‘possibly human’ is by far the most frequent answer upon observing 16 tokens, and as more tokens are observed raters gravitate towards ‘definitely human’ or ‘definitely machine.’ Even at 192 tokens, many raters are still uncertain. Figure 4 also shows how raters for the most part default to guessing short excerpts are Figure 4: Number of votes expert raters made for each label as a function of number of tokens observed. As raters observe more tokens, their predictions become more confident. human-written, and as the excerpts are extended, raters use the extra evidence available to revise their guess. By the longest sequence length, votes for “human-written” and “machine-generated” are about balanced. In Figure 5, we plot the frequency for each sequence length that raters converged on a single guess (either human or machine) at that point. The figure shows how it takes raters longer to converge on a decision of “machine” than to converge on a decision of “human.” A.3 Automatic Detection Method Reliability In order to quantify the variance of automatic discriminator accuracy, we finetuned five independent BERT discriminators on a ‘mixed’ dataset comprising of 50% human-written examples and 50% machine-generated examples, where machine-generated examples are equally split between top-k=40, top-p=0.96, and untruncated random sampling. All sequences were exactly 192 tokens. The best performing model checkpoint, according to an in-domain validation set, was then used to evaluate out-of-domain binary classification datasets as in Table 2 of the main paper. The results are shown in Table 7. We find outof-domain accuracy to be extremely reliable with a standard deviation of approximately 1% or less. 1820 Method # train # valid # test large-744M-k40-1wordcond 211148 4226 4191 large-744M-k40-nocond 218825 4362 4360 large-744M-p0.96-1wordcond 210587 4248 4208 large-744M-p0.96-nocond 209390 4174 4185 large-744M-p1.0-1wordcond 209334 4169 4173 large-744M-p1.0-nocond 208219 4187 4168 human-written 201344 4031 4030 Table 5: The number of excerpts used for training, validation, and testing. # Annotations Expert Raters AMT Workers webtext 239 450 k0-1wordcond 87 150 k40-1wordcond 75 150 p0.96-1wordcond 74 150 total machine 236 450 Table 6: The number of human annotations collected. In total, there were 50 examples from each sampling strategy and 150 examples of web text. Each example was shown to at most three raters. 
16 32 64 128 192 Length at which rater made up their mind 0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40 0.45 Fraction of all annotations Point of Convergence for Annotations of Human-Written Text 16 32 64 128 192 Length at which rater made up their mind 0.00 0.05 0.10 0.15 0.20 0.25 0.30 Fraction of all annotations Point of Convergence for Annotations of Machine-Generated Text Figure 5: On average, it takes much less text for raters to decide an excerpt is human-written than to decide an excerpt is machine-generated. Dataset µ σ random sampling 72.47 1.02 top-k = 40 88.06 0.59 top-p = 0.96 74.4 0.76 Table 7: Average (µ) and standard deviation (σ) of accuracy on out-of-domain datasets across five runs of automatic discriminator finetuning. Accuracy Count 61.3% 83 57.8% 51 66.7% 51 69.8% 51 79.5% 48 84.6% 40 82.4% 39 65.6% 36 78.1% 34 84.0% 26 58.8% 18 92.3% 14 90.0% 11 100.0% 9 50.0% 8 60.0% 5 100.0% 5 100.0% 2 0.0% 2 0.0% 1 100.0% 1 0.0% 1 Table 8: Our expert rater pool consisted of 22 raters. The average accuracy of each rater on the longest excerpt length (192 tokens) is shown here along with the total number of excerpts they annotated. 1821 Human I recently got the chance to try the new Oil Essentials line. With six potent blends to choose from–at $13 each–these cute little bottles offer a great, affordable way to partake in the skin and hair care oil craze. I tested each product in the line, massaging them onto my face every night before bed and running any leftover oil through my hair to tame frizziness. You could also add a few drops to your bath, favorite moisturizer, or even your shampoo and conditioner. Here’s a quick rundown of each oil. Revitalize: Omega 3, 6, 9 & Evening Primrose This was the first one I tried (I went in ROYGBIV order to keep things straight) and my first impression was that it smells lovely but a little strong. The fragrance smells genuinely like flowers. Machine Red Lanterns, the lead exposure to a movie starring the Batman solo movie alum Margot Robbie taken under Wonder Woman’s wing have reignited that rivalry with their whispery premiere. They played it as much as they possibly could, even though people who didn’t ever watch Justice League or might have missed it waiting in line for the theater were still talking about as I spilled coffee. The gist? An overextended (OK, a sore) Adam West films set up a Legion of Super-Heroes situation. How aggro? Super laws and paramilitary groups watch over the world’s superheroes, which is a mix of that schtick ending, Planet Of The Apes II bit, and the Batman/Venom bit of last appeared in The Seventh Seal when Chris O’Donnell infiltrated one of the teams at some point, also wearing Staff. Machine He is considered to be the most terrifying man on the planet and people stay away from him. A guy asks him to do something and he says, ”My girlfriend’s so important to me... I don’t need to fight her any more.” And then, boom, there’s some in a corner crying inappropriately. Men: It’s gone in five minutes. Why do I have to be so sad? It’s cute,” says female member, who asks to remain anonymous. ”It’s what grew up to drive me crazy when I was a kid, seeing these women become the nurturing, wealthy things they are in this professional world I truly love.” And it’s nothing to do with her success. These men still actively fear being around the idea of a woman who might win Oscars, make movies or be audacious drivers. Human Dropbox and Google Drive are very different services that appeal to different users. 
While Drive is connected to the entire Google Apps (now known as G Suite) ecosystem, Dropbox is a lightweight, simple alternative for file storage. While both are useful, users need to look beyond features, and make sure the service they choose can adequately protect their data. Here’s how Dropbox encryption and Google Drive encryption stack up. Dropbox and Google Drive Encryption To their credit, both Dropbox and Google Drive protect user files with encryption. Both also allow users to enable two-step verification, which requires an extra code texted to the user’s phone to access the account, making it harder for hackers to access a user’s data. Human EVE Isk Per Hour(Eveiph) is hands down the best tool I’ve ever used to make isk in New Eden. It is a market helper program that is able to do a great deal of the work that is typically done by a traders spreadsheet. I’ve used it to go from a 200m/month trading income to 3b/month on my main trading character. Above you can see the blueprint manufacturing page which is located on the first tab of Eveiph. Here you can see the components required to make an item, the settings for the blueprint, and a brief market analysis of what you can expect to make manufacturing the item and selling it at the market you’ve selected. You can enter the amount of runs you want to make, the ME and PE of your blueprint and click add to shopping list, and it will be added to a list of items to purchase when you are next at a trade hub. Machine So, not only was the speech a thoroughly mediocre diatribe about what he now thinks we should do for the next 45 minutes, but also how much credit we should give to Mumford and Sons for bringing Obama to the campaign trail. Behold: At the DNC, we drew strength from something even more powerful than the power of words. We drew strength from the power of families in this country. We drew strength from the power of family values. We drew strength from the power of a common purpose–We drew strength from our shared commitment to fighting against everything that undermines our potential in this country and our freedom. It is with that same conviction that we launch this campaign today and we urge every American in America to join us tonight. To allow the same attempt to succeed in this election. Machine The year is twenty-eight, and the boy is Harry, the sixth year at Hogwarts School of Witchcraft and Wizardry. He can’t walk without spells covering his feet (or in his case, his feet are so badly burned that he, for practical purposes, can’t even walk for that long without them) and he’s just starting to feel more secure about things. This is a pretty dull aspect of the book, I’d say. They probably spent way too much time on the fact that he can’t use the stick of silver from his wand, despite his friends bewitching all the knives they had. Harry had been having some difficulty getting to sleep until Hermione pulled him out of his state of near-death-conversation. Thanks to Hermione’s meddling, he’s gotten some sleep for the past two days. They also learnt a fair amount about getting used to his new surroundings. Machine Coincidentally, just a few days after the first tweet came out, a fellow named Kevin McReynolds sent out an interview with GQ to promote their upcoming issue. McReynolds describes himself as ”a conservative Catholic” who ”cannot fathom this guy being a real person and should be ashamed that he was able to be elected president.” It’s true. 
If you believe Hillary Clinton gave away 20 percent of the American Uranium to Russia, then you should be ashamed that you voted for Trump. No one should be able to give or receive anything that’s not supposed to, so long as they have a warrant. If you’ve been in a relationship for more than six months with a person who’s also convicted of being a felon (or convicted of stealing), that’s just stupid, especially as a married man. If you’re married to someone convicted of a crime, and they go on their honeymoon with you, that’s a felony, not a honeymoon. Human CHIP DESIGNER Texas Instruments unveiled a family of system on chip (SoC) processors aimed at automakers today, which are designed for use in self-driving cars. Named the TDA2x, the SoC family integrates safety features, such as aiding auto designers to create advanced driver assistance systems (ADAS), which in turn help ”reduce the number of collisions on the road and enable autonomous driving experiences”. ”TDA2x device family combines an optimal mix of high performance, vision analytics, video, graphics and general purpose processing cores in a low power envelope, enabling a broad range of ADAS applications including front camera, surround view and sensor fusion,” Texas Instruments said in its release. Machine Description This classic blend of coffee, cream, and sugar is the perfect drink! It is a smooth and creamy coffee with hints of cream and sweet sugar that can be enjoyed even after a full day of work or playing! The sugar provides a wonderful texture to the coffee beans, so that it can be scooped out into a cup. Available in four flavours: vanilla cream, caramel cream, coffee creme, and chocolate cream. Note: Coffee can be prepared in less than 120 minutes. Note: Serves one. Table 9: The 10 examples that “expert” raters were guided through before they were asked to perform the detection task. These are hand-selected to showcase the spectrum of generated text and human-written text. 1822 Figure 6: The interface of the task used for human evaluation. Each time the user presses next, the passage’s length is doubled. On the left, we show the first step of evaluation, on the right, the second to last. Figure 7: For some of the questions, the text ”Dear AMT Worker: to show you’re reading, please select definitely [X] for this one.” was inserted into the last text segment, and ”Did you read carefully?” was appended to the end.
2020
164
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1823–1834 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 1823 Multi-Domain Neural Machine Translation with Word-Level Adaptive Layer-wise Domain Mixing Haoming Jiang Georgia Tech [email protected] Chen Liang Georgia Tech [email protected] Chong Wang ByteDance [email protected] Tuo Zhao Georgia Tech [email protected] Abstract Many multi-domain neural machine translation (NMT) models achieve knowledge transfer by enforcing one encoder to learn shared embedding across domains. However, this design lacks adaptation to individual domains. To overcome this limitation, we propose a novel multi-domain NMT model using individual modules for each domain, on which we apply word-level, adaptive and layer-wise domain mixing. We first observe that words in a sentence are often related to multiple domains. Hence, we assume each word has a domain proportion, which indicates its domain preference. Then word representations are obtained by mixing their embedding in individual domains based on their domain proportions. We show this can be achieved by carefully designing multi-head dot-product attention modules for different domains, and eventually taking weighted averages of their parameters by word-level layer-wise domain proportions. Through this, we can achieve effective domain knowledge sharing, and capture fine-grained domain-specific knowledge as well. Our experiments show that our proposed model outperforms existing ones in several NMT tasks. 1 Introduction Neural Machine Translation (NMT) has made significant progress in various machine translation tasks (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Bahdanau et al., 2014; Luong et al., 2015; Wu et al., 2016). The success of NMT heavily relies on a huge amount of annotated parallel sentences as training data, which is often limited in certain domains, e.g., medical domain. One approach to address this is to explore unparalleled corpora, such as unsupervised machine translation (Lample et al., 2017, 2018). Another approach is to train a multi-domain NMT model and this is the focus of this paper. The simplest way is to build a unified model by directly pooling all training data from multiple domains together, as the languages from different domains often share some similar semantic traits, e.g., sentence structure, textual style and word usages. For domains with less training data, the unified model usually shows significant improvement. Researchers have proposed many methods for improving multi-domain NMT. Though certain semantic traits are shared across domains, there still exists significant heterogeneity among languages from different domains. For example, Haddow and Koehn (2012) show that for a domain with sufficient training data, a unified model may lead to weaker performance than the one trained solely over the domain; Farajian et al. (2017); Luong et al. (2015); Sennrich et al. (2015a); Servan et al. (2016) also show that to improve the translation performance over certain domains, fine-tuning the unified model is often needed, but at the expense of sacrificing the performance over other domains. This indicates that a unified model might not well exploit the domain-specific knowledge for each individual domain. 
To overcome this drawback, two lines of recent research focus on developing new methods by exploiting domain-shared and domain-specific knowledge to improve multi-domain NMT (Britz et al., 2017; Zeng et al., 2018; Tars and Fishel, 2018; Hashimoto et al., 2016; Wang et al., 2017; Chen et al., 2017; Wang et al., 2018; Gu et al., 2019; Chu and Wang, 2018; Dou et al., 2019; Pham et al., 2019; Chu and Dabre, 2019). One line of research focuses on instance weighting, which assigns domain related weights to different samples during training. For example, Wang et al. (2017) consider sentence weighting and domain weighting for NMT. The sentence weight is determined by the bilingual cross-entropy of each 1824 sentence pair based on the language model of each domain. The domain weight can be modified by changing the number of sentences from that domain in a mini-batch. Chen et al. (2017) propose a cost weighting method, where the weight of each pair of sentences is evaluated by the output probability of a domain classifier on the encoder embedding. Wang et al. (2018) propose a dynamic training method to adjust the sentence selection and weighting during training. We remark that many of these methods are complementary to our proposed model, and can be applied to improve the training of our model. Another line of research attempts to design specific encoder-decoder architectures for NMT models. For example, Britz et al. (2017) consider domain-aware embedding given by the encoder, and then jointly train a domain classifier, taking the embedding as input to incorporate the domain information. Zeng et al. (2018); Su et al. (2019) further extend their approach by separating the domainshared and domain-specific knowledge within the embedding. In addition, Zeng et al. (2018) and Shen et al. (2017) propose a maximum weighted likelihood estimation method, where the weight is obtained by word-level domain aware masking to encourage the model to pay more attention to the domain-specific words. The aforementioned methods, however, have a notable limitation: They enforce one single encoder to learn shared embedding across all domains, which often lacks adaptivity to each individual domain. To better capture domain-shared knowledge beyond shared embedding from a single encoder, we propose a novel multi-domain NMT model using individual modules for each domain, on which we apply word-level, adaptive and layer-wise domain mixing. Our proposed model is motivated by the observation that although every sentence of the training data has a domain label, the words in the sentence are not necessarily only related to that domain. For instance, the word “article” appears in the domains of laws and business. Therefore, we expect the knowledge for translating the word “article” to be shared between these two domains. Our proposed model assigns a context-dependent domain proportion1 to every word in the sentence. The domain proportions of the words can be naturally integrated into the Transformer model for capturing domain-shared/specific knowledge, as 1A word actually has multiple domain proportions at different layers of our model. See more details in Section 3 the multi-head dot-product attention mechanism is applied at the word-level. Specifically, we carefully design multi-head dot-product attention modules for different domains, and eventually mix these modules by taking weighted averages of their parameters by their layer-wise domain proportions. 
Compared with existing models, ours has the following two advantages: • Our proposed model is more powerful in capturing the domain-specific knowledge, as we design multiple dot-product attention modules for different domains. In contrast, existing models rely on one single shared encoder, and then one single unified translation model is applied, which often cannot adapt to each individual domain very well. • Our proposed model is more adaptive in the process of domain knowledge sharing. For common words across domains, their domain proportions tend to be uniform, and therefore can significantly encourage knowledge sharing. For some words specific to certain domains, their domain proportions tend to be skewed, and accordingly, the knowledge sharing is encouraged only within the relevant domains. For example, the word “article” appears less in the medical domain than the domains of laws and business. Therefore, the corresponding domain proportion tends to favor the domains of laws and business more than the medical domain. We evaluate our proposed model in several multidomain machine translation tasks, and the empirical results show that our proposed model outperforms existing ones and improves the translation performance for all domains. The rest of the paper is organized as follows: Section 2 introduces the background; Section 3 describes our proposed model in detail; Section 4 presents numerical experiments on EN-DE, ENFR and ZH-EN datasets; Section 5 discusses the connection to word disambiguation. 2 Background Neural Machine Translation (NMT) directly models the conditional distribution of the translated sentence y = (y1, ..., yℓ) given a source sentence x = (x1, ..., xℓ)2. The conditional probability density function p(y|x) is parameterized by an encoder-decoder neural network: The encoder 2Here we assume that we have applied padding to all sentences, and therefore, they are all of the same length. 1825 encodes the source sentence into a sequence of hidden representations H(x) = (h1, ..., hn), and the decoder generates target sentence one token at a time using these intermediate representations. More specifically, the decoder usually contains a recursive structure for computing p(yt|y<t, x) by p(yt|y<t, x) = F(Gt, H(x), yt−1), where Gt denotes the hidden representation of the decoder for the t-th position of the sequence, and F denotes a multi-layered network that outputs the probability of yt. Notice that Gt is generated by the Gt−1, H(x), and the previous word yt−1. Given N pairs of source/target sequences denoted by {xi, yi}n i=1, we train the NMT model by minimizing the cross-entropy loss as follows, minH,G,F Lgen = 1 n Pn i=1 −log p(yi|xi) where p(yi|xi) = Qm t=1 p(yi,t|yi,<t, xi). Transformer is one of the most popular NMT models (Vaswani et al., 2017; Tubay and Costa-juss`a, 2018; Devlin et al., 2018). The encoder and decoder in Transformer contain stacked self-attention and point-wise, fully connected layers without any explicit recurrent structure, which is different from existing RNN-based NMT models. Specifically, Vaswani et al. (2017) propose a new attention function using the scaled dot-product as the alignment score, which takes the form, Attention(Q, K, V ) = softmax QK⊤ √ d  V, (1) where Q, K, V ∈Rℓ×d are the vector representations of all the words in the sequences of queries, keys and values accordingly. 
For the self-attention modules in the encoder and decoder, Q = K = V ; For the attention module that takes into account the encoder and the decoder sequences, Q is different from the sequence represented by V and K. Based on the above attention function in (1), Vaswani et al. (2017) further develop a multi-head attention module, which allows the NMT model to jointly attend to information from different representations at different positions. In particular, we consider a multi-head attention module with m heads. For the i-th head Hi, three point-wise linear transformations Wi,Q, Wi,K , Wi,V ∈Rd×d/m are first applied to the input Q, K and V , respectively, and then the scaled dot-product attention Figure 1: Multi-head Scaled Dot-Product Attention. is applied: Let eQi = QWi,Q, eKi = KWi,K and eV = V Wi,V , Hi = Attention( eQi, eKi, eVi). (2) Eventually, the final output applies a point-wise linear transformation WO ∈Rd×d to the concatenation of the output from all heads: MultiHead(Q, K, V ) = Concat(H1, ..., Hm)WO. An illustrative example of the multihead attention architecture is provided in Figure 1. In addition to the above multi-head attention modules, each layer in the encoder and decoder in Transformer contains a point-wise two-layer fully connected feed-forward network. 3 Model We present our Transformer-based multi-domain neural machine translation model with word-level layer-wise domain mixing. 3.1 Domain Proportion Our proposed model is motivated by the observation that although every sentence in the training data has a domain label, a word in the sentence does not necessarily only belong to that single domain. Therefore, we assume that every word in the vocabulary has a domain proportion, which indicates its domain preference. Specifically, given the embedding x ∈Rd of a word, k domains and R ∈Rk×d, our model represents the domain proportion by a smoothed softmax layer as follows, D(x) = (1 −ϵ) · softmax(Rx) + ϵ/k, where ϵ ∈(0, 1) is a smoothing parameter to prevent the output of D(x) from collapsing towards 0 or 1. Specifically, setting ϵ as a large value encourages the word to be shared across domains. 1826 3.2 Word-Level Adaptive Domain Mixing In our proposed model, each domain has its own multi-head attention modules. Recall that the pointwise linear transformations in the multi-head attention module Wi,Q’s, Wi,K’s, Wi,V ’s and WO are applied to each word separately and identically, as shown in Figure 2. Therefore, we can naturally Figure 2: The Point-wise Linear Transformations are applied at the word-level. integrate the domain proportions of the words with these multi-head attention modules. Specifically, we take the weighted averaging of the linear transformation based on the domain proportion D(x). For example, we consider the point-wise linear transformations {Wi,Q,j}k j=1 on the t-th word of the input, Qt, of all domains. The mixed linear transformation can be written as Qi,t = Pk j=1 Q⊤ t Wi,Q,jDQ,j(Qt), where DQ,j(Qt) denotes the j-th entry of DQ(Qt), and DQ is the domain proportion layer related to Q. Then we only need to replace eQi in (2) with [Qi,1, ..., Qi,n]. An illustrative example is presented in Figure 3. For other linear transformations, we applied the domain mixing scheme in the same way. We reFigure 3: Word-level mixing with 3 domains. For simplicity, we omit the subscripts Q, i. 
mark that the Transformer model, though does not have any explicit recurrent structure, handles the sequence through adding additional positional embedding for each word (in conjunction with sequential masking). Therefore, if a word appears in different positions of a sentence, its corresponding embedding is different. This indicates that the domain proportions of the same word can also be different across positions. This feature makes our model more flexible, as the same word in different positions can carry different domain information. 3.3 Layer-wise Domain Mixing Recall that the Transformer model contains multiple multi-head attention modules/layers. Therefore, our proposed model inherits the same architecture and applies the word-level domain mixing to all these attention layers. Since the words have different representations at each layer, the corresponding domain proportions at each layer are also different, as shown in Figure 4. In addition to the multi-head attention layers, we also apply similar word-level domain mixing to the point-wise two-layer fully connected feed-forward network. The layer-wise domain mixing allows the domain proportions to be context dependent. This is because the domain proportions are determined by the word embedding, and the word embedding at top layers is essentially learnt from the representations of all words at bottom layers. As a result, when the embedding of a word at some attention layer is already learned well through previous layers (in the sense that it contains sufficient contextual information and domain knowledge), we no longer need to borrow knowledge from other domains to learn the embedding of the word at the current layer. Accordingly, the associated domain proportion is expected to be skewed and discourages knowledge sharing across domains. This makes the process of knowledge sharing of our model more adaptive. 3.4 Training Recall that H denotes the encoder, F denotes the decoder, and D denotes the domain proportion. Define Θ = {F, H, D}. The proposed model can be efficiently trained by minimizing a composite loss function defined as follows, L∗= Lgen(Θ) + Lmix(Θ), where Lgen(Θ) denotes the cross-entropy loss over the training data {xi, yi}n i=1, and Lmix(Θ) denotes the cross entropy loss over the words/domain (hard) labels. For Lmix(Θ), the domain labels are obtained from the training data. Specifically, for all words 1827 Figure 4: Illustration of Our Multi-domain NMT Model: Normalization and residual connection are omitted for simplicity. For all other detail, please refer to Vaswani et al. (2017). in a sentence belonging to the J-th domain, we specify their domain hard labels as J. Then given the embedding x of a word, we compute the cross entropy loss of its domain proportion D(x) as −log(DJ(x)). Accordingly, Lmix(Θ) is the sum of the cross entropy loss over all such pairs of word/domain label of the training data. 4 Experiment We conduct experiments on three different machine translation tasks: • English-to-German. We use a dataset from two domains: News and TED. We collect the News domain data from Europarl (Koehn, 2005) and the TED domain data from IWLST (Cettolo et al., 2014). • English-to-French We use a dataset containing two domains: TED and Medical domain. We collect TED domain data from IWLST (Cettolo et al., 2017) and medical domain data from Medline (Yepes et al., 2017). • Chinese-to-English We use a dataset containing four domains: News, Speech, Thesis and Laws. 
We collect the Laws, Speech, and Thesis data from UM-Corpus (Tian et al.), and the News data from LDC (Consortium, 1992). The translation from Chinese-to-English is inherently difficult. The fourdomains setting makes it even more challenging. This dataset is also used in Zeng et al. (2018). The sizes of training, validation, and testing sets for different language pairs are summarized in Table 1. We tokenize English, German and French sentences using MOSES script (Koehn et al., 2007) and perform word segmentation on Chinese sentences using Stanford Segmenter (Tseng et al., 2005). All sentences are then encoded using bytepair encoding (Sennrich et al., 2015b). We evaluate the performance using two metrics: BLEU (Papineni et al., 2002) and perplexity following the default setting in fairseq with beam search steps of 5. Language Domain Train Valid Test EN-DE News 184K 18K 19K TED 160K 7K 7K EN-FR TED 226K 10K 10K MEDICAL 516K 25K 25K ZH-EN Laws 219K 600 456 News 300K 800 650 Speech 219K 600 455 Thesis 299K 800 625 Table 1: The numbers of sentences in the datasets. 4.1 Baselines Our baselines include the Transformer models trained using data from single and all domains. We also include several domain aware embedding based methods, which train the embedding of the encoder along with domain information. • Multitask Learning (MTL) proposed in Britz et al. (2017) uses one sentence-level domain classifier to train the embedding. Note that their classifier is only used to predict the domain, while our model 1828 uses multiple word-level domain classifiers to obtain the domain proportions for different layers (further used for domain mixing). • Adversarial Learning (AdvL) proposed in Britz et al. (2017) is a variant of MTL, which flips the gradient before it is back-propagated into the embedding. This encourages the embedding from different domains to be similar. • Partial Adversarial Learning (PAdvL) To combine the advantages of the above two methods, we split the embedding into half of multitask part and half of adversarial part. • Word-Level Domain Context Discrimination (WDC) Zeng et al. (2018) integrates MTL and AdvL with word-level domain contexts. This method requires the dimension of the embedding to be doubled and, thus, is not directly applicable in Transformer. We use a point-wise linear transformation to reduce the dimension. Moreover, Zeng et al. (2018) consider the wordlevel domain aware weighted loss (WL). Specifically, they assign a domain-aware attention weight βj to the j-th position in the output sentence, and the corresponding weighted loss is: Lgen = −1 n Pn j=1(1 + βj) log p(yj|x, y<j). Here βj is obtained by an attention based domain classifier built upon the last hidden layer. 4.2 Details of Our Implementation All of our experiments are conducted under fairseq (Ott et al., 2019) environment. We follow the fairseq re-implementation of 12-layer Transformer designed for IWLST data. Specifically, the embedding dimension is 512 for both the encoder and decoder, the number of heads is 4, and the embedding dimension in the feed-forward layer is 1024. Such a model is actually larger than the base model in Vaswani et al. (2017) (76M vs. 65M parameters). Notice that, the number of parameters of the mixing model is k times larger (k is the number of domains). For a fair comparison, all baselines are tested using both the above model and an enlarged model, which has √ k times larger embedding dimension (so the weight matrices are k times larger). 
The enlarged model and the mixing model has the same number of parameters. The presented baseline results are the best of the two. In terms of the optimization, we follow the training recipe provided by fairseq. Specifically, we use Adam (Kingma and Ba, 2014) with β1 = 0.9, β2 = 0.98 with a weight decay parameter of 10−4. The learning rate follows the inverse square root schedule (Vaswani et al., 2017) with warm-up steps of 4000, initial warm-up learning rate of 10−7, and the highest learning rate of 5×10−4. For effective training, Lgen is replaced by a label-smoothing cross-entropy loss with a smoothing parameter of 0.1 (Szegedy et al., 2016). For our domain mixing methods, we set the smoothing parameter ϵ of the domain proportion as 0.05. Besides applying domain mixing to both the encoder and decoder (E/DC), we consider applying domain mixing to only the Encoder. The domain proportion layers D are only used for estimating the domain proportion and should not intervene in the training of the translation model. So the gradient propagation is cut off between the Transformer and the domain proportion as Figure 5 shows. More discussion about the training procedure can be found in Section 4.6. Figure 5: Computational graph for training the domain proportion layers. 4.3 Experimental Results Table 2 shows the BLEU scores of the baselines and our domain mixing methods for English-toGerman translation. As can be seen, our methods outperform the baselines on both domains. Notice that, our baseline method achieves 29.09 BLEU when training and testing on TED domain only, where Liu et al. (2019) only achieves 28.56 with the same training/testing data, the codebase (i.e., fairseq), and the network structure. This indicates that our reimplemented baseline is rather strong. We also compare the perplexity on the validation set in Figure 6. As can be seen, our domain mixing methods converge faster than the baselines and all methods converge after 50 epochs. We also observe that the baselines get stuck at plateaus at the early 1829 Method News TED Direct Training News 26.09 6.15 TED 4.90 29.09 News + TED 26.06 28.11 Embedding based Methods MTL 26.90 29.27 AdvL 25.68 27.46 PAdvL 27.06 29.49 WDC + WL 27.25 29.43 Our Domain Mixing Methods Encoder 27.78 30.30 Encoder + WL 27.67 30.11 E/DC 27.58 30.33 E/DC + WL 27.55 30.22 Table 2: English-to-German. 0 10 20 30 40 50 60 Epoches 0 20 40 60 80 100 Perplexity News+TED MTL AdvL PAdvL WDC w/ WL Mixing: Encoder Mixing: E/DC Figure 6: Perplexity v.s. Number of epochs for Englishto-German. stage of training. The possible reason is that their training enforces one unified model to fit data from two different domains simultaneously, which is computationally more difficult. Table 3 shows the BLEU scores of the baselines and our domain mixing methods for English-toFrench translation. Note that though the data from the Medical and TED domains are slightly imbalanced (about 1:2.5), our methods can still outperform the baselines on both domains. Method TED Medical Direct Training TED 28.22 7.32 Medical 7.03 53.73 Medical + TED 39.21 53.40 Embedding based Methods MTL 39.14 53.37 AdvL 39.54 53.46 PAdvL 39.56 53.23 WDC + WL 39.79 53.85 Our Domain Mixing Methods Encoder 40.30 54.05 Encoder + WL 40.43 54.14 E/DC 40.52 54.28 E/DC + WL 40.60 54.39 Table 3: English-to-French. 
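Before turning to the Chinese-to-English results, the core computation of Sections 3.1 and 3.2 can be made concrete. The following is a minimal PyTorch sketch (ours, not the released implementation) of the smoothed-softmax domain proportion D(x) = (1 − ε) · softmax(Rx) + ε/k and the word-level mixing of one point-wise linear transformation over k domains; the detach calls reflect one reading of the gradient cut-off described in Section 4.2 (Figure 5), and all module and variable names are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DomainProportion(nn.Module):
    """Smoothed softmax D(x) = (1 - eps) * softmax(Rx) + eps / k (Section 3.1)."""

    def __init__(self, d_model: int, k_domains: int, eps: float = 0.05):
        super().__init__()
        self.R = nn.Linear(d_model, k_domains, bias=False)
        self.k = k_domains
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, seq, d_model)
        p = F.softmax(self.R(x), dim=-1)                  # (batch, seq, k)
        return (1.0 - self.eps) * p + self.eps / self.k


class DomainMixedLinear(nn.Module):
    """Word-level mixture of k point-wise linear maps weighted by D(x) (Section 3.2)."""

    def __init__(self, d_in: int, d_out: int, k_domains: int, eps: float = 0.05):
        super().__init__()
        self.projections = nn.ModuleList(
            [nn.Linear(d_in, d_out) for _ in range(k_domains)]
        )
        self.domain = DomainProportion(d_in, k_domains, eps)

    def forward(self, x: torch.Tensor):
        # Cut gradients between the Transformer and the domain-proportion layer
        # in both directions (our reading of Figure 5 and Section 4.6): the
        # classifier sees a detached input, and the translation loss treats the
        # proportions as constants.
        d = self.domain(x.detach())                                  # (batch, seq, k)
        per_domain = torch.stack([p(x) for p in self.projections], dim=-1)
        mixed = (per_domain * d.detach().unsqueeze(-2)).sum(dim=-1)  # (batch, seq, d_out)
        return mixed, d  # d is supervised by the word-level domain loss L_mix
```

During training, the returned proportions d would be supervised with the word-level cross-entropy Lmix against the sentence-level domain label and added to the label-smoothed Lgen, as in Section 3.4.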
Table 4 shows the BLEU scores of the baselines and our domain mixing methods for Chinese-toMethod Laws News Speech Thesis Direct Training Laws 51.98 3.80 2.38 2.64 News 6.88 31.99 8.12 4.17 Speech 3.33 4.90 18.63 3.08 Thesis 5.90 5.55 4.77 11.06 Mixed 48.87 26.92 16.38 12.09 Embedding based Methods MTL 49.14 27.15 16.34 11.80 AdvL 48.93 26.51 16.18 12.08 PAdvL 48.72 27.07 15.93 12.23 WDC + WL 42.16 25.81 15.29 10.14 Our Domain Mixing Methods Encoder 50.21 27.94 16.85 12.03 Encoder + WL 50.11 27.48 16.79 11.93 E/DC 50.64 28.48 17.41 11.71 E/DC + WL 50.04 28.17 17.60 11.59 Table 4: Chinese-to-English. English translation. As can be seen, our methods outperform the baselines on all domains except Thesis. We remark that the translation for the Thesis domain is actually very difficult, and all methods obtain poor performance. Moreover, we find that for Chinese-to-English task, all our baselines are sensitive to the architecture of the Transformer. Their training will fail, if we place the layer normalization at the end of each encoder and decoder layer (as Vaswani et al. (2017) suggest). Therefore, we move the layer normalization to their beginnings. Surprisingly, our domain mixing methods are very stable regardless of the position of the layer normalization. More details can be found in Table 8 of Appendix A. 4.4 Ablation Study We further shows that the performance gains are from the domain mixing methods, instead of from the new model architecture design. Table 5 shows the BLEU scores with and without using domain labels under the same network structure and the same number of parameters as in the domain mixing methods. The only difference is that we remove domain label to guide the training of domain proportion, i.e., only Lgen is used in the training loss, and Lmix is removed. Training without domain labels shows a slight improvement over baseline, but is still significantly worse than our proposed method for most of the tasks. Therefore, we can conclude that our proposed domain mixing approach indeed improves performance. 4.5 Visualizing Domain Proportions To further investigate our domain mixing methods, we plot the domain proportions of the word em1830 Method Direct Training w/o DL with DL (Ours) English-to-Germany News 26.06 26.25 27.78 TED 28.11 28.27 30.30 English-to-French TED 39.21 39.39 40.30 Medical 53.40 53.33 54.05 Chinese-to-English Laws 48.87 48.96 50.21 News 26.92 27.02 27.94 Speech 16.38 16.15 16.85 Thesis 12.09 12.03 12.03 Table 5: BLEU Scores with and without domain labels (DL) under equal model capacity. bedding at different layers. A uniform proportion, e.g., (0.5, 0.5), is encouraging knowledge sharing across domains, while a skewed proportion, e.g., (0.1, 0.9), means there is little knowledge to share across domains. Figure 7 illustrates how the knowlFigure 7: Domain proportion of a sentence from the TED domain for English-to-French task. The domain proportion is extracted from all layers of the encoder. edge sharing is controlled via the domain proportion. The selected sentence is from the English-toFrench task, containing TED and Medical domains. Specifically, we observe : • The domain proportions of different words at different layers have various patterns. • At the bottom layers, the domain proportion of a word is closely related to its frequency of occurrence. 
• Some words with simple semantic meanings do not need to borrow much knowledge from other domains, e.g., and; Some other words need to borrow knowledge from other domains to better understand their own semantic meaning. For example, the word phenomenon keeps borrowing/sharing knowledge from/to the medical domain at every layer. • The ending of the sentence only conveys a stopping signal, and thus is shared across all domains. • The domain proportions at the bottom layers tend to be more diverse, while those at the top layers tend to be more skewed, as shown in Figure 8 for English-to-German task. • The domain proportions of the decoder tend to be more skewed than those of the encoder, which demonstrates little knowledge sharing. Figure 9 shows the histograms of word-level domain proportions at different layers in both the encoder and decoder. This might explain why the mixing decoder only contributes limited performance gain for the English-to-German task. Figure 8: Domain proportions of a sentence pair for English-to-German task. White represents the News domain and black represents the TED domain. The domain proportions of both the encoder (bottom) and the decoder (top) are presented. Layer-1 2 3 4 5 6 Encoder 0.0 1.0 Decoder Figure 9: Histograms of the domain proportions of each layer in our domain mixing model for English-toGerman Task. Within each histogram, 0 means pure News domain, and 1 means pure TED domain. 4.6 Combining Domain Mixing with Domain Aware Embedding The embedding based methods can be naturally combined with our domain mixing methods. As we mentioned in 4.2, the domain proportion is trained solely, meaning gradient does not propagate between the domain proportion layers D and 1831 Figure 10: Back-propagation for different embedding based methods. the Transformer. The computation of the gradient, on the other hand, is the key to combining two methods. Specifically, we encourage the embedding to be domain aware via MTL, AdvL and PAdvL, where we use the domain proportion layers to guide the training of the embedding. Figure 10 illustrates the back-propagation under different methods. Table 6 shows the performance for Chinese-to-English task under this setting. Here we consider applying domain mixing only to the encoder as the baseline. As can be seen, by applying appropriate domain aware embedding, the performance can be further improved. Method Laws News Speech Thesis Encoder 50.21 27.94 16.85 12.03 +MTL 49.15 26.82 15.72 11.93 +Adv 50.18 27.72 16.99 12.16 +PAdvL 49.01 26.63 16.06 12.15 +Multitask + WL 48.75 26.78 16.53 12.11 +Adv + WL 50.24 28.21 16.98 12.00 +PAdv + WL 48.87 26.86 16.14 11.89 Table 6: BLEU Scores of Domain Mixing + Domain Aware Embedding for Chinese-to-English Task 5 Discussions One major challenge in multi-domain machine translation is the word ambiguity in different domains. For example, the word “article” has different meanings in the domains of laws and media. When translating “article” into Chinese, the translated words are “条款” and “文章” , meaning a separate clause of a legal document and a piece of writing. Our proposed word-level layer-wise domain mixing approach tends to reduce the word ambiguity. As mentioned in Section 3.3, our model extracts different representations of each word from contexts at different layers. Accordingly, the domain proportion of each word evolves from bottom to top layers, and can eventually help identify the corresponding domains. Laws “Article 37 The freedom of marriage ...” “第三十七条 条 条:婚姻的自由...” Media “... 
working on an article about the poems ...” “... 正在写一篇诗的文 文 文章 章 章...” Table 7: The ambiguity of “articles”. Moreover, as mentioned in Section 3.2, the positional embedding also contributes to the word disambiguation in multi-domain translation. For example, in the law domain, we find that “article” often appears at the beginning of a sentence, while in the media domain, the word “article” may appear in other positions. Therefore, varying domain proportions for different positions can help with word disambiguation. We remark that word disambiguation across domains actually requires D(x) to be powerful for predicting the domain of the word. However, a powerful D(x) tends to yield skewed domain proportions and is not flexible enough for domain knowledge sharing. To trade off between strength and flexibility of D(x), the smoothing parameter ϵ of D(x) (see Section 3.1) needs to be properly set. 6 Conclusions We present a novel multi-domain NMT with wordlevel layer-wise domain mixing, which can adaptively exploit the domain knowledge. Unlike the existing work, we construct multi-head dot-product modules for each domain and then combine them by the layer-wise domain proportion of every word. The proposed method outperforms the existing embedding based methods. We also show mixing method can be combined with embedding based methods to make further improvement. Moreover, we remark that our approach can be extended to other multi-domain or multi-task NLP problems. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Denny Britz, Quoc Le, and Reid Pryzant. 2017. Effective domain mixing for neural machine translation. In Proceedings of the Second Conference on Machine Translation, pages 118–126. Mauro Cettolo, Marcello Federico, Luisa Bentivogli, Niehues Jan, St¨uker Sebastian, Sudoh Katsuitho, Yoshino Koichiro, and Federmann Christian. 2017. 1832 Overview of the iwslt 2017 evaluation campaign. In International Workshop on Spoken Language Translation, pages 2–14. Mauro Cettolo, Jan Niehues, Sebastian St¨uker, Luisa Bentivogli, and Marcello Federico. 2014. Report on the 11th iwslt evaluation campaign, iwslt 2014. In Proceedings of the International Workshop on Spoken Language Translation, Hanoi, Vietnam, page 57. Boxing Chen, Colin Cherry, George Foster, and Samuel Larkin. 2017. Cost weighting for neural machine translation domain adaptation. In Proceedings of the First Workshop on Neural Machine Translation, pages 40–46. Chenhui Chu and Raj Dabre. 2019. Multilingual multidomain adaptation approaches for neural machine translation. arXiv preprint arXiv:1906.07978. Chenhui Chu and Rui Wang. 2018. A survey of domain adaptation for neural machine translation. arXiv preprint arXiv:1806.00258. The Linguistic Data Consortium. 1992. The linguistic data consortium description. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Zi-Yi Dou, Junjie Hu, Antonios Anastasopoulos, and Graham Neubig. 2019. Unsupervised domain adaptation for neural machine translation with domain-aware feature embeddings. arXiv preprint arXiv:1908.10430. M Amin Farajian, Marco Turchi, Matteo Negri, Nicola Bertoldi, and Marcello Federico. 2017. Neural vs. phrase-based machine translation in a multi-domain scenario. 
In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 280–284. Shuhao Gu, Yang Feng, and Qun Liu. 2019. Improving domain adaptation translation with domain invariant and specific information. arXiv preprint arXiv:1904.03879. Barry Haddow and Philipp Koehn. 2012. Analysing the effect of out-of-domain data on smt systems. In Proceedings of the Seventh Workshop on Statistical Machine Translation, pages 422–432. Association for Computational Linguistics. Kazuma Hashimoto, Akiko Eriguchi, and Yoshimasa Tsuruoka. 2016. Domain adaptation and attentionbased unknown word replacement in chinese-tojapanese neural machine translation. In Proceedings of the 3rd Workshop on Asian Translation (WAT2016), pages 75–83. Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1700–1709. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In MT summit, volume 5, pages 79–86. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, et al. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th annual meeting of the association for computational linguistics companion volume proceedings of the demo and poster sessions, pages 177–180. Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2017. Unsupervised machine translation using monolingual corpora only. arXiv preprint arXiv:1711.00043. Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018. Phrase-based & neural unsupervised machine translation. arXiv preprint arXiv:1804.07755. Liyuan Liu, Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, and Jiawei Han. 2019. On the variance of the adaptive learning rate and beyond. arXiv preprint arXiv:1908.03265. Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attentionbased neural machine translation. arXiv preprint arXiv:1508.04025. Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT 2019: Demonstrations. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Association for Computational Linguistics. Minh Quang Pham, Josep-Maria Crego, Jean Senellart, and Franc¸ois Yvon. 2019. Generic and specialized word embeddings for multi-domain machine translation. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015a. Improving neural machine translation models with monolingual data. arXiv preprint arXiv:1511.06709. 1833 Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015b. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909. Christophe Servan, Josep Crego, and Jean Senellart. 2016. Domain specialization: a post-training domain adaptation for neural machine translation. arXiv preprint arXiv:1612.06141. 
Yan Shen, Dahlmann Leonard, Petrushkov Pavel, Hewavitharana Sanjika, and Khadivi Shahram. 2017. Word-based domain adaptation for neural machine translation. In Proceedings of the 15th International Workshop on Spoken Language Translation, pages 31–38. Jinsong Su, Jiali Zeng, Jun Xie, Huating Wen, Yongjing Yin, and Yang Liu. 2019. Exploring discriminative word-level domain contexts for multidomain neural machine translation. IEEE transactions on pattern analysis and machine intelligence. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104–3112. Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2818–2826. Sander Tars and Mark Fishel. 2018. Multidomain neural machine translation. arXiv preprint arXiv:1805.02282. Liang Tian, Derek F Wong, Lidia S Chao, Paulo Quaresma, and Francisco Oliveira. Um-corpus: A large english-chinese parallel corpus for statistical machine translation. Huihsin Tseng, Pichuan Chang, Galen Andrew, Daniel Jurafsky, and Christopher Manning. 2005. A conditional random field word segmenter for sighan bakeoff 2005. In Proceedings of the fourth SIGHAN workshop on Chinese language Processing. Brian Tubay and Marta R Costa-juss`a. 2018. Neural machine translation with the transformer and multisource romance languages for the biomedical wmt 2018 task. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 667–670. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998–6008. Rui Wang, Masao Utiyama, Andrew Finch, Lemao Liu, Kehai Chen, and Eiichiro Sumita. 2018. Sentence selection and weighting for neural machine translation domain adaptation. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 26(10):1727–1741. Rui Wang, Masao Utiyama, Lemao Liu, Kehai Chen, and Eiichiro Sumita. 2017. Instance weighting for neural machine translation domain adaptation. Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1482– 1488. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144. Antonio Jimeno Yepes, Aur´elie N´ev´eol, Mariana Neves, Karin Verspoor, Ondrej Bojar, Arthur Boyer, Cristian Grozea, Barry Haddow, Madeleine Kittner, Yvonne Lichtblau, et al. 2017. Findings of the wmt 2017 biomedical translation shared task. In Proceedings of the Second Conference on Machine Translation, pages 234–247. Jiali Zeng, Jinsong Su, Huating Wen, Yang Liu, Jun Xie, Yongjing Yin, and Jianqiang Zhao. 2018. Multidomain neural machine translation with word-level domain context discrimination. In Conference on Empirical Methods in Natural Language Processing, pages 447–457. 1834 A Complementary Experiments – Chinese to English Experiment results of the original Transformer, where layer normalization is at the end each layer. 
Method             Laws    News   Spoken  Thesis
Laws               10.37    0.45    0.27    0.27
News                0.39    5.12    0.91    0.57
Spoken              0.70    1.11    6.19    0.83
Thesis              0.63    0.25    0.16    1.24
Mixed               5.45    4.09    2.67    1.85
Multitask           6.16    3.83    1.91    1.53
Adversarial         5.93    3.38    1.85    1.37
PAdv                6.58    3.90    2.32    1.80
WDC w/ WL           7.13    3.87    2.45    1.88
Our Proposed Mixing Method
Encoder            50.16   27.61   16.92   11.85
+ Decoder          50.45   28.15   17.45   11.62
Table 8: Chinese to English.
Figure 11: Two variants of layer normalization.
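As a companion to Figure 11 and the discussion in Section 4.3, the two layer-normalization placements can be sketched as follows. This is a generic illustration of the post-LN wiring of Vaswani et al. (2017) versus the pre-LN variant adopted for the Chinese-to-English experiments, not code from this paper; `sublayer` stands for any (mixed) attention or feed-forward block.

```python
import torch.nn as nn


def post_ln_step(x, sublayer, norm: nn.LayerNorm, dropout: nn.Dropout):
    # Original Transformer wiring (Vaswani et al., 2017): LayerNorm is applied
    # after the residual sum. Table 8 reports baselines trained this way.
    return norm(x + dropout(sublayer(x)))


def pre_ln_step(x, sublayer, norm: nn.LayerNorm, dropout: nn.Dropout):
    # Variant used for the main Chinese-to-English results: LayerNorm is moved
    # to the beginning of each sub-layer (Section 4.3).
    return x + dropout(sublayer(norm(x)))
```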
2020
165
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1835–1845 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 1835 Conversational Graph Grounded Policy Learning for Open-Domain Conversation Generation Jun Xu1∗, Haifeng Wang2, Zheng-Yu Niu2, Hua Wu2, Wanxiang Che1†, Ting Liu1 1Research Center for Social Computing and Information Retrieval, Harbin Institute of Technology, Harbin, China 2Baidu Inc., Beijing, China {jxu, car, tliu}@ir.hit.edu.cn, {wanghaifeng, niuzhengyu, wu hua}@baidu.com Abstract To address the challenge of policy learning in open-domain multi-turn conversation, we propose to represent prior information about dialog transitions as a graph and learn a graph grounded dialog policy, aimed at fostering a more coherent and controllable dialog. To this end, we first construct a conversational graph (CG) from dialog corpora, in which there are vertices to represent “what to say” and “how to say”, and edges to represent natural transition between a message (the last utterance in a dialog context) and its response. We then present a novel CG grounded policy learning framework that conducts dialog flow planning by graph traversal, which learns to identify a what-vertex and a how-vertex from the CG at each turn to guide response generation. In this way, we effectively leverage the CG to facilitate policy learning as follows: (1) it enables more effective long-term reward design, (2) it provides high-quality candidate actions, and (3) it gives us more control over the policy. Results on two benchmark corpora demonstrate the effectiveness of this framework. 1 Introduction How to effectively learn dialog strategies is an enduring challenge for open-domain multi-turn conversation generation. To address this challenge, previous works investigate word-level policy models that simultaneously learn dialog policy and language generation from dialog corpora (Li et al., 2016b; Zhang et al., 2018b). But these word-level policy models often lead to a degeneration issue where the utterances become ungrammatical or repetitive (Lewis et al., 2017). To alleviate this issue, utterance-level policy models have been proposed to decouple policy learning from response generation, and they focus on how to incorporate ∗This work was done at Baidu. †Corresponding author: Wanxiang Che. 今天晚上要通宵加班 I have to work overnight tonight. 辛苦了,好辛苦,注意身体 Take care of yourself when doing a very hard work. 还不能打盹,领导也在 I can’t take a nap yet, as the leaders are also here. 这么晚了,不犯困啊? It's so late. Don’t you get sleepy? 哈哈,那也会犯困吧 Ha-ha, that will make you sleepy. 我以为你会犯困的,这么晚了 I thought you’d be sleepy, as it's late. Context Mechanisms Responses 犯困/sleepy + Message Response Figure 1: Our system (1) understands the user message by linking it to CG. We call the linked vertices as hit what-vertices (green color) ; (2) selects a what-vertex (“sleepy”) and a how-vertex (responding mechanism M3, a MLP network) from one-hop neighbors of hit vertices; (3) generates a coherent response with two sub-steps: firstly, obtains a response representation ¯r using both M3 and a message representation (from a message-encoder); Next, produces a response “It’s so ...” with “sleepy” and ¯r as input. Notice all the howvertices are from the same set rather than completely independent of each other. high-level utterance representations, e.g., latent variables or keywords, to facilitate policy learning (He et al., 2018; Yao et al., 2018; Zhao et al., 2019). 
However, these utterance-level methods tend to produce less coherent multi-turn dialogs since it is quite challenging to learn semantic transitions in a dialog flow merely from dialog data without the help of prior information. In this paper, we propose to represent prior information about dialog transition (between a message and its response) as a graph, and optimize dialog policy based on the graph, to foster a more coherent dialog. To this end, we propose a novel conversational graph (CG) grounded policy learning frame1836 work for open-domain multi-turn conversation generation (CG-Policy). It consists of two key components, (1) a CG that captures both localappropriateness and global-coherence information, (2) a reinforcement learning (RL) based policy model that learns to leverage the CG to foster a more coherent dialog. In Figure 1, given a user message, our system selects a what-vertex (“sleepy”) and a how-vertex(responding mechanism M3) to produce a coherent response.1 We first construct the CG based on dialog data. We use vertices to represent utterance content, and edges to represent dialog transitions between utterances. Specifically, there are two types of vertices: (1) a what-vertex that contains a keyword, and (2) a how-vertex that contains a responding mechanism (from a multi-mapping based generator in Section 3.1) to capture rich variability of expressions. We also use this multi-mapping based method to build edges between two what-vertices to capture the local-appropriateness between the two keywords as a message and a response respectively. It can be seen that the what-vertices from the same highly connected region are more likely to constitute coherent dialog. We then present a novel graph grounded policy model to plan a long-term success oriented vertex sequence to guide response generation. Specifically, as illustrated by the three pink lines in Figure 1, given a user message, CG-Policy first links its keywords to CG to obtain hit what-vertices. Next, the policy model learns to select a what-vertex from one-hop what-vertex neighbors of all hit whatvertices, and then select a how-vertex from howvertex neighbors of the chosen what-vertex. Finally, the two selected vertices are utilized to guide response generation. Thus we leverage the prior dialog-transition information (as graph edges) to narrow down candidate response content for more effective policy decision, instead of using the whole set of keywords as candidate actions. Moreover, to facilitate the modeling of long-term influence of policy decisions in an ongoing dialog, we first present novel CG based rewards to better measure the long-term influence of selected actions. We then employ a graph attention mechanism and graph embedding to encode global structure information of CG into dialog state representations, enabling global information aware decisions. 1Each mechanism is a MLP network to model how to express response content (Chen et al., 2019). This paper makes the following contributions: • This work is the first attempt that represents dialog transitions as a graph, and conducts graph grounded policy learning with RL. Supported by CG and this policy learning framework, CG-Policy can respond better in terms of local appropriateness and global coherence. 
• Our study shows that: (1) one-hop whatvertex neighbors of hit what-vertices provide locally-appropriate and diverse response content; (2) the CG based rewards can supervise the policy model to promote a globallycoherent dialog; (3) the use of how-vertices in CG can improve response diversity; (4) the CG can help our system succeed in the task of target-guided conversation, indicating that it gives us more control over the dialog policy. 2 Related Work Policy learning for chitchat generation To address the degeneration issue of word-level policy models (Li et al., 2016b; Zhang et al., 2018b), previous works decouple policy learning from response generation, and then use utterance-level latent variables (Zhao et al., 2019) or keywords (Yao et al., 2018) as RL actions to guide response generation. In this work, we investigate how to use prior dialog-transition information to facilitate dialog policy learning. Knowledge aware conversation generation There are growing interests in leveraging knowledge bases for generation of more informative responses (Dinan et al., 2019; Ghazvininejad et al., 2018; Moghe et al., 2018; Zhou et al., 2018; Liu et al., 2019; Bao et al., 2019; Xu et al., 2020). In this work, we employ a dialog-modeling oriented graph built from dialog corpora, instead of a external knowledge base, in order to facilitate multi-turn policy learning, instead of dialog informativeness improvement. Specifically, we are motivated by (Xu et al., 2020). The method in (Xu et al., 2020) has the issue of cross-domain transfer since it relies on labor-intensive knowledge graph grounded multiturn dialog datasets for model training. Compared with them, our conversational graph is automatically built from dialog datasets, which introduces very low cost for training data construction. Furthermore, we decouple conversation modeling into two parts: “what to say” modeling and “how to 1837 NLG CG-Policy Input Output Message !"#"$%&'"()* Subgraphs Message+,history keywords, '#*-(-#"$,#'"()*. Selected keyword #*-,responding mechanism Dialog corpus Response TransE based graph embeddings Policy Reward /(" what-vertices ,,0$1"('$.,.$2$'"$-, 34,5)2('4,#", '611$*","(7$,."$5 0$1"('$.,8("/, /(.")14,9$48)1-. Graph construction Conversational graph :;< … … … … Figure 2: The architecture of our CG-Policy that consists of NLU, state/action, policy, and NLG. We first construct conversational graph from dialog corpus. Then we train CG-Policy with RL. The upper-right part shows the details of input/output of each module. say” modeling. It is reasonable to only adjust the “what-” part when transfer to different domains which further reduces the domain transfer cost. 3 Our Approach The overview of CG-Policy is presented in Figure 2. Given a user message, to obtain candidate actions, the NLU module attempts to retrieve contextually relevant subgraphs from CG. The state/action module maintains candidate actions, history keywords that selected by policy at previous turns or mentioned by user, and the message. The policy module learns to select a response keyword and a responding mechanism from the above subgraphs. The NLG module first encodes the message into a representation using a message encoder and the selected mechanism, and then employs a Seq2BF model2 (Mou et al., 2016) to produce a response 2It decodes a response starting from the input keyword, and generates the remaining previous and future words subsequently. In this way, the keyword will appear in the response. 
x Message encoder MLP for responding mechanism selected by policy Message Response representation r Response Seq2BF based decoder The !"#$%&'( )"*"+,"'(-#( .%*/+# Figure 3: The Multi-mapping based generator for NLG in which we use a Seq2BF based model (Mou et al., 2016) as the decoder. with the above representation and the selected keyword as input. The models used in CG construction/policy/NLG/reward are trained separately. 3.1 Background: Multi-mapping Generator for NLG To address the “one-to-many” semantic mapping problem for conversation generation, Chen et al.(2019) proposed an end-to-end multi-mapping model in which each responding mechanism (a MLP network) models how to express response content (e.g. responding with a specific sentence function). In test procedure, they randomly select a mechanism for response generation. As shown in Figure 3, the generator consists of a RNN based message encoder, a set of responding mechanisms, and a decoder. First, given a dialog message, the message-encoder represents it as a vector x. Second, the generator uses a responding mechanism (selected by policy) to convert x into a response representation ¯r. Finally, ¯r and a keyword (selected by policy) are fed into the decoder for response generation. To ensure that the given keyword will appear in generated responses, we introduce another Seq2BF based decoder (Mou et al., 2016) to replace the original RNN decoder. Moreover, this generator is trained on a dataset with pairs of [the message, a keyword extracted from a response]-the response.3 3.2 CG Construction Given a dialog corpus D, we construct the CG with three steps: what-vertex construction, how-vertex construction, and edge construction. 3If multiple keywords are extracted from the response, we randomly choose one; and if no keyword exists in the response, we randomly sample a word from the response to serve as “keyword”. 1838 What-vertex construction To extract content words from D as what-vertices, we use a rule-based keyword extractor to obtain salient keywords from utterances in D.4 After removing stop words, we obtain all the keywords as what-vertices. How-vertex construction We obtain a set of Nr responding mechanisms from the generator described in Section 3.1. Then they are used as howvertices. Notice that all the how-vertices in CG share the same set of responding mechanisms. Edge construction There are two types of edges in CG. One is to join two what-vertices and the other is to join a what-vertex and a how-vertex. To build the first type of edges, we first construct another dataset that consists of keyword pairs, where each pair consists of any two keywords extracted from the message and the response respectively in D. To capture natural transitions between keywords, we train another multi-mapping based model on this new dataset.5 For each what-vertex vw, we find appropriate keywords as its responses by selecting top five keywords decoded (decoding length is 1) by each responding mechanism, and then connect vw to vertices of these keywords. To build the second type of edges, for the [message-keyword]-response pair in D (described in Section 3.1), we use the ground-truth response to select the most suitable mechanism for each keyword. Then, given a what-vertex vw, we select top five mechanisms that are frequently selected for vw’s keyword. Then we build edges to connect vw to each of the top ranked how-vertices. These edges lead to responding mechanisms that are suitable to generate vw. 
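The three construction steps above can be summarized with a small data-structure sketch. The code below is ours and only illustrative: `suggest_keywords` and `rank_mechanisms` stand in for the keyword-level multi-mapping model and the mechanism-frequency statistics described in Sections 3.1–3.2, and all names are assumptions.

```python
from collections import defaultdict


class ConversationalGraph:
    """What-vertices are keywords; how-vertices are ids of the Nr shared
    responding mechanisms; edges follow the construction in Section 3.2."""

    def __init__(self):
        self.what_neighbors = defaultdict(set)  # keyword -> response keywords
        self.how_neighbors = defaultdict(set)   # keyword -> mechanism ids

    def add_what_edge(self, src_kw: str, tgt_kw: str):
        self.what_neighbors[src_kw].add(tgt_kw)

    def add_how_edge(self, kw: str, mechanism_id: int):
        self.how_neighbors[kw].add(mechanism_id)


def build_cg(what_vertices, n_mechanisms, suggest_keywords, rank_mechanisms,
             top_keywords=5, top_mechanisms=5):
    """suggest_keywords(kw, mech_id) -> keywords decoded by that mechanism;
    rank_mechanisms(kw) -> mechanisms sorted by how often they were selected
    for kw on the [message-keyword]-response training pairs."""
    cg = ConversationalGraph()
    vocab = set(what_vertices)
    for kw in what_vertices:
        # What-what edges: top-5 response keywords from every responding mechanism.
        for mech_id in range(n_mechanisms):
            for resp_kw in suggest_keywords(kw, mech_id)[:top_keywords]:
                if resp_kw in vocab:
                    cg.add_what_edge(kw, resp_kw)
        # What-how edges: the five mechanisms most frequently chosen for kw.
        for mech_id in rank_mechanisms(kw)[:top_mechanisms]:
            cg.add_how_edge(kw, mech_id)
    return cg
```

At inference time, the NLU step of Section 3.3 then only needs set lookups: the hit what-vertices' entries in `what_neighbors` give the candidate response keywords, and `how_neighbors` of the selected what-vertex gives the admissible responding mechanisms (the mechanism mask λ in Equation 4).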
3.3 NLU To obtain subgraphs to provide high-quality candidate actions, we first extract keywords in the last utterance of the context (message) using the same tool in CG construction, and then link each keyword to the CG through exact string matching, to obtain multiple hit what-vertices. Then we retrieve a subgraph for each keyword, and use vertices (exclude hit what-vertices) in these subgraphs as candidate actions. Each subgraph consists of three parts: the hit what-vertex, its one-hop neighboring 4github.com/squareRoot3/Target-Guided-Conversation 5We ever tried other methods for edge construction, e.g., PMI (Yao et al., 2018). Finally we found that our method can provide more diverse response keyword candidates, while PMI tends to provide high-frequency keyword candidates. Here we use a RNN based decoder to replace the Seq2BF. 0. Prepare dataset D and pretrained embedding. 1. Construct the what-vertex set. (3.2) 2. Train a multi-mapping based generator for NLG. (3.1) Responding mechanisms constitute the how-vertex set. 3. Construct edges between two what-vertices or a what-vertex and a how-vertex. (3.2) 4. Train a scoring model for local relevance. (3.6) 5. Train TransE based embedding and PageRank scores for what-vertices. (3.6) 6. Calculate shortest path distances between any two what-vertices. (3.6) 7. Train a original multi-mapping based with a RNN decoder on D for user-simulator. (4.3) 8. Optimize policy with reinforcement learning, where parameters in other modules stay intact. (3.7) Table 1: The training procedure of CG-Policy. what-vertices, and how-vertices being connected to the above neighbors. If there are no keywords to be extracted from the message or to be linked to CG, we reuse the retrieved subgraphs at the last time.6 Thus we leverage the CG to provide high-quality candidate actions, instead of using the whole set of candidates as done in previous work (Yao et al., 2018). 3.4 State/Action This module maintains candidate actions, history keywords that selected by the policy or mentioned by user, and the message. Moreover, we use the message-encoder from Section 3.1 to represent the message as a vector x, and then we use all the responding mechanisms from Section 3.1 to convert x into Nr candidate response representations {rj}Nr j=1, which will be used in the policy. 3.5 Policy State representation The state representation st at the t-th time step is obtained by concatenating a message representation sM t and a history keywords representation sV t that are encoded by two RNN encoders respectively. Formally, st = [sM t ; sV t ]. (1) To enable global information aware policy decisions, we employ a graph attention mechanism and graph embedding to encode global structure information into state representation. Recall that we have a subgraph for each keyword in the message obtained by NLU. Here each subgraph gi consists of a hit what-vertex, 6If we encounter this case at the first time step, hit whatvertices are set as what-vertices that contain the top-5 highfrequency keywords in D. 1839 its what-vertex neighbors (here we remove howvertices) and edges between them. Formally, gi = {τk} Ngi k=1, where each τk is a triple with τk = (headk, relk, tailk), and Ngi is the number of triples in gi. For non keywords in the message, a NULL subgraph is used. Then we calculate a subgraph vector gi as a weighted sum of head vectors and tail vectors in the triples. gi = Ngi X k=1 αk[eheadk; etailk], αk = exp(βk) PNgi m=1 exp(βm) , βk = eT relk tanh(Wheheadk + Wtetailk). 
(2) Here e∗represents pretrained graph embedding (TransE (Bordes et al., 2013)) that are not updated during RL training. Wh and Wt are parameters. sM t is obtained by recursively feeding a concatenated vector ei = [wc i; gi] into a vanilla RNN unit, where wc i (as model parameters) is the embedding of the keyword wc i. Thus we encode the global graph structure information into RL state representations, enabling a global-information aware policy model. Moreover, we calculate sV t in a similar way. Policy decision Each decision consists of two sequential sub-decisions. First the what-policy selects a what-vertex from candidate what-vertices, and then the how-policy selects a how-vertex from how-vertex neighbors of the selected what-vertex. With st as the state representation, the whatpolicy µwhat is defined by: µwhat(st, vw j ) = exp(sT t vw j ) PNw act l=1 exp(sT t vw l ) , (3) where vw j (as model parameters, different from both wc i and e∗) is the embedding of the j-th candidate what-vertices, and Nw act is the number of candidate what-vertices. The how-policy µhow is defined by: µhow(st, ri) = λi exp(sT t ri) PNr j=1 λj exp(sT t rj) , (4) where ri is a candidate response representation in the state module, and λi is mechanism mask. λi is set as 1 if the i-th responding mechanism is one of neighbors of the selected what-vertex, otherwise 0. 3.6 Rewards Following previous works, we consider these utterance-level rewards: Local relevance We use a state-of-the-art multiturn response selection model, DualEncoder in (Lowe et al., 2015), to calculate local relevance. Repetition Repetition penalty is 1 if the generated response shares more than 60% words with any contextual utterances, otherwise 0. Target similarity For target-guided conversation, we calculate cosine similarity between the chosen keyword and the target word in pretrained word embedding space as target similarity.7 To leverage the global graph structure information of CG to facilitate policy learning, we propose the following rewards: Global coherence We calculate the average cosine distance between the chosen what-vertex and one of history what-vertices (selected or mentioned previously) in TransE based embedding space (also used in Equation 2) as coherence reward. Sustainability It is reasonable to promote whatvertices with a large number of neighbors to generate more sustainable, coherent, and diverse dialogs. For this reward, we calculate a PageRank score (calculated on the full CG) for the chosen whatvertex. Shortest path distance to the target For targetguided conversation, if the chosen what-vertex is closer to the target what-vertex in terms of shortest path distance when compared to the previously chosen what-vertex, then this reward is 1, or 0 if the distance does not change, otherwise -1. Moreover, we define the final reward as a weighted sum of the above-mentioned factors, where the weight of each factor is set as [0.5, -5, 0, 3, 8000, 0] by default.8 We see that our rewards can fully leverage dialog transition information in training data by using not only utterance based rewards (e.g., local relevance), but also graph based rewards (e.g., coherence, sustainability). 3.7 Policy Optimization To make training process more stable, we employ the A2C method (Sutton and Barto, 2018) for optimization. Moreover, we only update policy pa7If no keyword is chosen, as in baseline models, we calculate target similarity for each word in response and select the closest one. 8We optimize these values on Weibo dataset by grid search. 
The weights of the third/sixth factors are set as 0 by default because they are proposed for target-guided conversation. 1840 rameters, and the parameters of other modules stay intact during RL training. 3.8 NLG As described in Section 3.1, we use the mechanism selected by how-policy to convert x into a response representation ¯r. Then we feed the keyword in the selected what-vertex and ¯r into a Seq2BF decoder (Mou et al., 2016) for response generation. 4 Experiments and Results9 4.1 Datasets We conduct experiments on two widely used opendomain dialog corpora. Weibo corpus (Shang et al., 2015). This is a large micro-blogging corpora. After data cleaning, we obtain 2.6 million pairs for training, 10k pairs for validation and 10k pairs for testing. We use publicly-available lexical analysis tools10 to obtain POS tag features for this dataset and then we further use this feature to extract keywords from utterances. We use Tencent AI Lab Embedding11for embedding initialization in models. Persona dialog corpus (Zhang et al., 2018a). This ia a crowd-sourced dialog corpora where each participant plays the part of an assigned persona. To evaluate policy controllability brought by CGPolicy, we conduct an experiment for target-guided conversation on the Persona dataset as done in (Tang et al., 2019). The training set / validation set / testing set contain 101,935 / 5,602 / 5,371 utterances respectively. Embeddings are initialized with Glove (Pennington et al., 2014). Conversational Graph The constructed CG on Weibo corpus contains 4,000 what-vertices and 74,362 edges among what-vertices, where 64% edges are evaluated as suitable for chatting by three human annotators.12 The constructed CG on Persona corpus contains 1,500 what-vertices and 21,902 edges among what-vertices, where 67% edges are evaluated as suitable for chatting by three human annotators. 4.2 Methods We carefully select three SOTA methods that focus on dialog policy learning as baselines. 9Please see the supplemental material for more details. 10ai.baidu.com/ 11ai.tencent.com/ailab/nlp/embedding.html 12We randomly sample 500 edges for evaluation. LaRL It is a latent variable driven dialog policy model (Zhao et al., 2019). We use their released codes and choose the multivariate categorical latent variables as RL actions since it performs the best. For target-guided conversation, we implement another model LaRL-Target, where we add the “target similarity” factor into RL rewards, and its weight is set as 4 by grid search. ChatMore We implement the keyword driven policy model (Yao et al., 2018) by following their original design. For target-guided conversation, we implement ChatMore-Target, where we add the “target similarity” factor into RL rewards, and its weight is set as 4 by grid search. TGRM It is a retrieval based model for targetguided conversation, where the keyword chosen at each turn must move strictly closer (in embedding space) to a given target word (Tang et al., 2019). For target-guided conversation, we use the codes released by the original authors, denoted as TGRM-Target, and we use their kernel version since it performs the best.13 To suit the task of open-domain conversation on Weibo, we remove the unnecessary constraint on keyword’s similarity with the target word, denoted as TGRM. CG-Policy It is our system presented in Section 3. 
For target-guided conversation, we implement another system CG-Policy-Target, where we use an additional feature, the “shortest path distance to the target” factor, to augment the original whatvertex representation vw j in the what-policy µwhat. Formally, ¯vw j = W1 ∗[vw j ; edj], where ¯vw j is the augmented representation, W1 is a weighting matrix, edj is an embedding for the distance value dj, and ¯vw j has the same size with vw j . We also use this factor in reward estimation and its weight is set as 5 by grid search, and we don’t use the “target similarity” factor. Moreover, we use the same dialog corpora to construct CG, train user simulator, reward functions, and the NLG module for CG-Policy. 4.3 User Simulator We use the same user simulator for RL training of LaRL, ChatMore and CG-Policy. The user simulator is the original multi-mapping based generator with a RNN decoder, which is pretrained on dialog corpus and not updated during policy training. Please refer to (Chen et al., 2019) for more details. During testing, all the systems share this simulator. 13github.com/squareRoot3/Target-Guided-Conversation 1841 4.4 Evaluation Settings Conversation with user simulator Following previous work (Li et al., 2016b; Tang et al., 2019), we use a user simulator to play the role of human and let each of the models converse with it. Given a randomly selected model, we randomly select an utterance from all the utterances (at the starting position of sessions) in test set for the model to start a conversation. Moreover, we set a maximum allowed number of turns, which is 8 in our experiment. Finally, we collect 100 model-simulator dialogs for evaluation. For single-turn level evaluation, we randomly sample 100 message-response pairs from the dialogs for each model. Conversation with human Following previous work (Tang et al., 2019), we also perform human evaluation for a more reliable system comparison. Given a model to be evaluated, we randomly select a dialogue from test set and pick its first utterance for the model to start a conversation with a human. Then the conversation will continue till 8 turns are reached. Finally, we obtain 50 dialogs for evaluation. For single-turn level evaluation, we randomly sample 100 message-response pairs from the dialogs for each model. 4.5 Evaluation Metrics Metrics such as BLEU and perplexity have been widely used for dialog evaluation (Li et al., 2016a; Serban et al., 2016), but it is widely debated how well these automatic metrics are correlated with true response quality (Liu et al., 2016). Since the proposed system does not aim at predicting the highest-probability response at each turn, but rather the long-term success of a dialog (e.g., coherence), we do not employ BLEU or perplexity for evaluation, and we propose the following metrics. 4.5.1 Multi-turn Level Metrics Global coherence We define incoherence problems as follows: (1) Inconsistent dialogs where the model contradicts with itself, e.g., the model says he is a driver before and then says he is a doctor; (2) One-side dialogs in which the model ignores the user’s topics with two or more consecutive turns. A session will be rated “0” if it contains more than three incoherence cases, or “+1” if a session contains 2 or 3 cases, otherwise “+2”. Distinct The metric Dist-i calculates the ratio of distinct i-gram in generated responses (Li et al., 2016a). We use Dist-2 to measure the diversity of generated responses. Methods Cohe. Dist-2 Appr. Infor. 
LaRL 0.85 0.12 0.55 0.77 ChatMore 0.95 0.05 0.58 0.93 TGRM 0.79 0.42 0.68 1.00 CG-Policy 1.33 0.31 0.73 1.00 Table 2: Results for dialogs with simulator on Weibo. Dialog-target success rate For target-guided conversation, we measure the success rate of generating the target word within 8 turns. 4.5.2 Single-turn Level Metrics Local appropriateness14 A response will be rated “0” if it is inappropriate as an reply to the given message, otherwise “1”. Informativeness “0” if a response is a “safe” response, e.g. “I don’t know”, otherwise “1”. 4.6 Evaluation Results 4.6.1 Setting We ask three annotators to judge the quality of each dialog (at multi-turn level) or utterance pair (at single-turn level) for each model. Notice that model identifiers are masked during evaluation. 4.6.2 Conversation with simulator As shown in Table 2, CG-Policy significantly outperforms (sign test, p-value < 0.01) baselines in terms of global coherence and local appropriateness. It indicates that the CG can effectively facilitate policy learning (see the ablation study for further analysis). For LaRL, its single-turn response quality is worse than other models. It might be explained by that their latent variables are not finegrained enough to provide sufficient information to guide response generation. ChatMore tends to select high-frequency or generic keywords, resulting in its worst performance in terms of Dist-2. TGRM performs the best in terms of Dist-2 and informativeness, indicating that retrieval-based models can produce more diverse responses than generation based models. It is consistent with the conclusions in previous work (Chen et al., 2017; Zhang et al., 2018a). However, TGRM performs the worst in terms of coherence, since TGRM does not use RL framework. It indicates the importance of RL framework for multi-turn dialog modeling. Here the Kappa value for inter-annotater agreement is above 0.4, indicating moderate agreement. 14We do not consider if a response is appropriate or not for the selected responding mechanism. 1842 Methods Cohe. Dist-2 Appr. Infor. LaRL 0.82 0.22 0.52 0.74 ChatMore 0.88 0.15 0.54 0.94 TGRM 0.77 0.63 0.61 1.00 CG-Policy 1.26 0.47 0.67 1.00 Table 3: Results for dialogs with human on Weibo. 4.6.3 Conversation with human As shown in Table 3, CG-Policy outperforms baselines in terms of both global coherence and local appropriateness (sign test, p-value < 0.01) , which is consistent with the results in Table 2. The Kappa value is above 0.4, indicating moderate agreement. 4.6.4 Ablation study We conduct an ablation study for CG-Policy on Weibo corpus to investigate why CG-Policy performs better. First, to evaluate the contribution of CG, we remove the CG from CG-Policy, denoted as CG-Policy-noCG, where we do not use graph structure information for action space pruning and reward design. Moreover, we attempt to use the CG (without how-vertices) to augment the ChatMore model for action space pruning and reward design, denoted as Chatmore-CG. As shown in Table 4, the performance of CG-Policy-noCG drops dramatically in terms of coherence, Dist-2 and appropriateness when compared to the original model. Moreover, CG can boost the performance of ChatMore in terms of most of metrics. It indicates that the use of CG is crucial to the superior performance of CG-Policy, and it can also help other models, e.g., ChatMore. 
Second, to evaluate the contribution of CG for action space pruning or reward design respectively, we implement two system variants: (1) we use all the what-vertices in CG as action candidates at each turn, denoted as CGPolicy-noCGact; (2) we remove all the CG-based factors from RL rewards, denoted as CG-PolicynoCGrwd. As shown in Table 4, the performance of CG-Policy-noCGact drops significantly in terms of Dist-2 as it tends to select high-frequency keywords like ChatMore, indicating the importance of graph paths to provide both locally-appropriate and diverse response keywords. Moreover, the performance of CG-Policy-noCGrwd drops significantly in terms of coherence, indicating that CG based rewards can effectively guide CG-Policy to promote coherent dialogs. Third, we remove how-vertices from CG, denoted as CG-Policy-noCGhow. As shown in Table 4, how-vertex removal hurts its perMethods Cohe. Dist-2 Appr. Infor. CG-Policy 1.33 0.31 0.73 1.00 ChatMore 0.95 0.05 0.58 0.93 ChatMore-CG 1.15 0.14 0.65 0.91 CG-Policy-noCG 1.03 0.07 0.62 1.00 CG-Policy-noCGact 1.11 0.08 0.68 1.00 CG-Policy-noCGrwd 1.06 0.19 0.64 1.00 CG-Policy-noCGhow 1.21 0.13 0.65 1.00 Table 4: Ablation study for CG-Policy on Weibo. formance in Dist-2, indicating the importance of how-vertices for response diversity. 4.7 The Task of Target-guided Conversation Besides maintaining coherence, CG grounded policy learning can enable more control over dialog models, which is important to achieve certain goals for chatbot, e.g. proactive leading to certain chatting topics (keywords) or certain products. 4.7.1 Setting Following the setting in (Tang et al., 2019), where we randomly sample a keyword as the target word for each session in testing procedure. Here we use a multi-mapping based user simulator trained on the Persona dataset for evaluation. Methods Succ.(%) Cohe. Appr. Infor. LaRL-Target 1 0.91 0.62 0.91 ChatMore-Target 6 0.93 0.65 0.97 TGRM-Target 69 0.96 0.67 1.00 CG-Policy-Target 98 1.17 0.75 1.00 Table 5: Results for target-guided dialogs on Persona. 4.7.2 Results Table 5 presents the results on 100 dialogs for each model. We see that CG-Policy-Target can significantly outperform baselines in terms of dialogtarget success rate (sign test, p-value < 0.01). It can be seen that that CG-Policy can successfully lead the dialog to a given target word by learning to walk over the CG, indicating that this graph gives us more control over the policy. LaRL-Target and ChatMore-Target perform badly in terms of success rate. It may be explained by that they lack the ability of proactive dialog content planning. 4.8 Analysis of Responding Mechanisms Figure 4 provides representative words of each mechanism.15 For example, for Mech-1, its keywords are mainly subjective words (e.g. think) for 15We select words that occur frequently in responses guided by this mechanism but rarely occur with other mechanisms. 1843 generation of responses with respect to personal opinion or intention. For Mech-2, it tends to respond with a specific type of mood. Mech-1 Mech-2 Mech-3 Mech-4 Mech-5 以为 think 哈哈 haha 哪 where 漂亮 beautiful 别 no 想 want 哇 wow 什么 what 可爱 cute 还是 or else 信 believe 好吧 alright ? 萌 cuddly 没有 no Figure 4: Representative words of responding mechanisms. 5 Conclusion In this paper we present a novel graph grounded policy learning framework for open-domain multiturn conversation, which can effectively leverage prior information about dialog transitions to foster a more coherent and controllable dialog. 
Experimental results demonstrate the effectiveness of this framework in terms of local appropriateness, global coherence and dialog-target success rate. In the future, we will investigate how to extend the CG to support hierarchical topic management in conversational systems. Acknowledgments We are grateful for the support from Yan Zeng at the initial stage of this work. We also thank the anonymous reviewers for their helpful comments and suggestions. This work is supported by the National Key Research and Development Project of China (No.2018AAA0101900) and the National Natural Science Foundation of China (NSFC) via grant 61976072. References Siqi Bao, Huang He, Fan Wang, Rongzhong Lian, and Hua Wu. 2019. Know more about each other: Evolving dialogue strategy via compound assessment. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5382– 5391. Antoine Bordes, Nicolas Usunier, Alberto GarciaDuran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multirelational data. In Advances in neural information processing systems, pages 2787–2795. Chaotao Chen, Jinhua Peng, Fan Wang, Jun Xu, and Hua Wu. 2019. Generating multiple diverse responses with multi-mapping and posterior mapping selection. Proceedings of IJCAI. Hongshen Chen, Xiaorui Liu, Dawei Yin, and Jiliang Tang. 2017. A survey on dialogue systems: Recent advances and new frontiers. SIGKDD Explorations. Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2019. Wizard of wikipedia: Knowledge-powered conversational agents. In Proceedings of ICLR. Association for Computational Linguistics. Marjan Ghazvininejad, Chris Brockett, Ming-Wei Chang, Bill Dolan, Jianfeng Gao, Wen tau Yih, and Michel Galley. 2018. A knowledge-grounded neural conversation model. In Proceedings of AAAI 2018, pages 5110–5117. Association for the Advancement of Artificial Intelligence. He He, Derek Chen, Anusha Balakrishnan, and Percy Liang. 2018. Decoupling strategy and generation in negotiation dialogues. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2333–2343. Mike Lewis, Denis Yarats, Yann Dauphin, Devi Parikh, and Dhruv Batra. 2017. Deal or no deal? end-to-end learning of negotiation dialogues. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2443–2453. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016a. A diversity-promoting objective function for neural conversation models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 110–119. Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, and Dan Jurafsky. 2016b. Deep reinforcement learning for dialogue generation. In Proceedings of EMNLP, pages 1192—-1202. Chia-Wei Liu, Ryan Lowe, Iulian Serban, Mike Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2122–2132. Zhibin Liu, Zheng-Yu Niu, Hua Wu, and Haifeng Wang. 2019. Knowledge aware conversation generation with explainable reasoning over augmented graphs. In EMNLP-IJCNLP. Ryan Lowe, Nissan Pow, Iulian Serban, and Joelle Pineau. 2015. 
The ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems. In Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 285–294. 1844 Nikita Moghe, Siddhartha Arora, Suman Banerjee, and Mitesh M. Khapra. 2018. Towards exploiting background knowledge for building conversation systems. In Proceedings of EMNLP, pages 2322—2332. Association for Computational Linguistics. Lili Mou, Yiping Song, Rui Yan, Ge Li, Lu Zhang, and Zhi Jin. 2016. Sequence to backward and forward sequences: A content-introducing approach to generative short-text conversation. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 3349–3358. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543. Iulian Vlad Serban, Alessandro Sordoni, Yoshua Bengio, Aaron C. Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using generative hierarchical neural network models. In Proceedings of AAAI, pages 3776—-3784. Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural responding machine for short-text conversation. In Proceedings of ACL-IJCNLP, volume 1, pages 1577–1586. Richard S Sutton and Andrew G Barto. 2018. Reinforcement learning: An introduction. MIT Press. Jianheng Tang, Tiancheng Zhao, Chenyan Xiong, Xiaodan Liang, Eric P. Xing, and Zhiting Hu. 2019. Target-guided open-domain conversation. In Proceedings of ACL. Jun Xu, Haifeng Wang, Zhengyu Niu, Hua Wu, and Wanxiang Che. 2020. Knowledge graph grounded goal planning for open-domain conversation generation. In Thirty-Fourth AAAI Conference on Artificial Intelligence. Lili Yao, Ruijian Xu, Chao Li, Dongyan Zhao, and Rui Yan. 2018. Chat more if you like: Dynamic cue words planning to flow longer conversations. arXiv preprint arXiv:1811.07631. Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018a. Personalizing dialogue agents: I have a dog, do you have pets too? In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2204– 2213. Wei-Nan Zhang, Lingzhi Li, Dongyan Cao, and Ting Liu. 2018b. Exploring implicit feedback for open domain conversation generation. In Thirty-Second AAAI Conference on Artificial Intelligence. Tiancheng Zhao, Kaige Xie, and Maxine Eskenazi. 2019. Rethinking action spaces for reinforcement learning in end-to-end dialog agents with latent variable models. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1208–1218. Hao Zhou, Tom Young, Minlie Huang, Haizhou Zhao, Jingfang Xu, and Xiaoyan Zhu. 2018. Commonsense knowledge aware conversation generation with graph attention. In Proceedings of IJCAIECAI. A Appendices Training Details and Two Conversation Cases For fair comparison, all models share the same vocab (maximum size is 50000 for the Weibo corpus or 20000 for the Persona corpus), initialized word embedding (size is 200), and keyword set. Further, One layer bidirectional GRU-RNN (hidden size is 512) is utilized for all encoders. Dropout rate is 0.3, and optimizer is Adam(lr=2le-3) for all models. 
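To make these shared settings concrete, the following is a minimal PyTorch sketch of the shared utterance encoder and optimizer configuration described above (embedding size 200, one-layer bidirectional GRU with hidden size 512, dropout 0.3, Adam with a learning rate we read as 2e-3). The module and variable names are ours and purely illustrative; this is not the authors' released implementation.

import torch
import torch.nn as nn

class UtteranceEncoder(nn.Module):
    # Shared encoder configuration used by all compared models, as described above.
    def __init__(self, vocab_size=50000, emb_size=200, hidden_size=512, dropout=0.3):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_size)
        self.dropout = nn.Dropout(dropout)
        self.gru = nn.GRU(emb_size, hidden_size, num_layers=1,
                          bidirectional=True, batch_first=True)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) -> outputs: (batch, seq_len, 2 * hidden_size)
        embedded = self.dropout(self.embedding(token_ids))
        outputs, final_state = self.gru(embedded)
        return outputs, final_state

encoder = UtteranceEncoder()
optimizer = torch.optim.Adam(encoder.parameters(), lr=2e-3)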
We initialize each session with a starting utterance chosen from the training dataset randomly. Maximum turns is set as 8 and the discounting weight for rewards is set as 0.95. LaRL Categorical z M=10,K=20 β 0.01 CG-Policy TransE Embedding Size 100 Number of Responding Mechanisms Nr 10 Table 6: Training details for models. 1845 B: 晚饭吃了么[Did you have dinner?] U: 我想吃排骨了,晚上炖排骨吃吧[I want to have spareribs. Let's stew spareribs in the evening.] B: 我请你吃猪蹄吧,我也要吃[I'll treat you to pig hooves. I'll eat them, too.] U: 炸猪蹄很好吃啊[Fried pig hooves are delicious.] B: 这是什么类型啊,你吃过么?[What kind of food is it? Have you eaten it?] U: 我还是吃炸酱面吧,最近减肥[I'd better eat fried sauce noodles. I've lost weight recently.] B: 正宗了,我也想吃[It's authentic. I want to eat it, too.] U: 我想跟你一起跳舞[I want to dance with you.] B: 好萌啊,好可爱,好喜欢[how budding! You are lovely and I like it.] U: 我想起我家那条狗了,泰迪狗,给你看看照片[I think of my dog, Teddy. Show you the picture.] B: 我想养狗了,我也要养一只[I want to have a dog. I want to have one, too.] U: 我想回家吃狗肉[I want to go home and eat dog meat.] B: 可怜的泰迪啊,哈哈哈[Poor Teddy, haha] U: 我不是要回家吃我的泰迪![I'm not going home to eat my Teddy!] Figure 5: Case 1: One conversation between CG-Policy and human, where “B” is CG-Policy and “U” is human. The red words are keywords. We translate the original Chinese utterances into English. B: 昨天去菜市场买了一大袋土豆,累死我了 [Yesterday I went to the vegetable market and bought a big bag of potatoes. I am exhausted.] U: 话说楼下新开了个超市[I heard that there is a new supermarket downstairs.] B: 我去了,不打折了,哈哈[I've been there. No discount yet, haha.] U: 里面的水果新鲜么[Is the fruit fresh?] B: 我都没看出是有多难吃[It’s untasty, even more than I expected.] U: 刚开业就不打折,真抠门,我去淘宝买吧 [It’s really stingy that have no discount just after opening. I'll go to Taobao to buy fruit.] B: 在哪里买啊,低价代购?求地址[Which shop? Substitute purchase at low price? Give me the link address.] U: 很多店铺吧, 搜一下就好了[A lot of shops can be your choice. Just search them.] B: 万能的淘宝网店,天猫要火啊 [Taobao shops are really omnipotent, TianMao is becoming more and more famous] U: 双十一要到了,马云赚钱好开心 [The Double Eleventh Shopping Festival is coming. Ma Yun is happy to earn lots of money.] B: 我去年双十一购买了5件商品[I bought five items last year in the Double Eleventh Shopping Festival.] U: 美死了,哈哈,好可爱,哈哈[Haha, you are lovely, haha] B: 你也很可爱[You're lovely, too] U: 我想买一只萨摩耶,好可爱[I want to buy a Samoa. It's cute.] Figure 6: Case 2: One conversation between CG-Policy and human, where “B” is CG-Policy and “U” is human. The red words are keywords. We translate the original Chinese utterances into English.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1846–1852 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 1846 GPT-too: A Language-Model-First Approach for AMR-to-Text Generation Manuel Mager1∗Ram´on Fernandez Astudillo2 Tahira Naseem2 Md Arafat Sultan2 Young-Suk Lee2 Radu Florian2 Salim Roukos2 1 Institute for Natural Language Processing, University of Stuttgart, Germany 2 IBM Research AI, Yorktown Heights, NY 10598, USA [email protected] {ramon.astudillo,arafat.sultan}@ibm.com {tnaseem, ysuklee}@us.ibm.com Abstract Abstract Meaning Representations (AMRs) are broad-coverage sentence-level semantic graphs. Existing approaches to generating text from AMR have focused on training sequenceto-sequence or graph-to-sequence models on AMR annotated data only. In this paper, we propose an alternative approach that combines a strong pre-trained language model with cycle consistency-based re-scoring. Despite the simplicity of the approach, our experimental results show these models outperform all previous techniques on the English LDC2017T10 dataset, including the recent use of transformer architectures. In addition to the standard evaluation metrics, we provide human evaluation experiments that further substantiate the strength of our approach. 1 Introduction Abstract Meaning Representation (AMR) (Banarescu et al., 2013) is a rooted, directed, acyclic graph with labeled edges (relations) and nodes (concepts) expressing “who is doing what to whom”. AMR-to-text generates sentences representing the semantics underlying an AMR graph. Initial works in AMR-to-text used transducers (Flanigan et al., 2016), phrase-based machine translation (Pourdamghani et al., 2016) and neural sequence-to-sequence (seq2seq) models with linearized graphs (Konstas et al., 2017). Cao and Clark (2019) leverage constituency parsing for generation. Beck et al. (2018) improve upon prior RNN graph encoding (Song et al., 2018) with Levi Graph Transformations. Damonte and Cohen (2019) compare multiple representations and find graph encoders to be the best. Guo et al. (2019) use RNN graph encoders with dense graph convolutional encoding. Ribeiro et al. (2019) ∗This research was done during an internship at IBM Research AI. use RNN encoders with dual graph representations. Transformer-based seq2seq (Vaswani et al., 2017) was first applied to AMR-to-text in (Sinh and Le Minh, 2019). Zhu et al. (2019) greatly improve over the prior state-of-the-art by modifying self-attention to account for AMR graph structure. Using transformers has also been recently explored by Wang et al. (2020) who propose a mutli-head graph attention mechanism. Pre-trained transformer representations (Radford et al., 2018; Devlin et al., 2019; Radford et al., 2019) use transfer learning to yield powerful language models that considerably outperform the prior art. They have also shown great success when fine-tuned to particular text generation tasks (See et al., 2019; Zhang et al., 2019; Keskar et al., 2019). Given their success, it would be desirable to apply pre-trained transformer models to a graph-to-text task like AMR-to-text, but the need for graph encoding precludes in principle that option. Feeding the network with some sequential representation of the graph, such as a topological sorting, looses some of the graphs representational power. 
Complex graph annotations, such as AMR, also contain many special symbols and special constructs that depart from natural language and may not be interpretable by a pretrained language model. In this paper we explore the possibility of directly fine-tuning a pre-trained transformer language model on a sequential representation of AMR graphs, despite the expected difficulties listed above. For this we re-purpose a GPT-2 language model (Radford et al., 2019) to yield an AMR-to-text system. We show that it is surprisingly easy to fine-tune GPT-2 to learn an AMR-graph-to-text mapping that outperforms the previous state of the art on automatic evaluation metrics. Since a single AMR graph corresponds to multiple sentences with the same meaning, we also provide human evaluation and semantic similarity metric results (Zhang et al., 2020), which are less dependent on the reference text. Human evaluation and semantic similarity results highlight the positive impact of a strong language model strategy. Finally, we also introduce a simple re-scoring technique based on cycle consistency that further improves performance.

2 Fine-tuning GPT-2 for conditional language generation

In order to fine-tune a generative model (GPT-2; Radford et al. (2019)) for conditional text generation, prior works fine-tune the language model to predict the target text starting from the additional source text as context. In our experiments, we found it beneficial to fine-tune on the joint distribution of AMR and text instead, i.e., to also reconstruct the source. Given a tokenized sentence w_1 ... w_N and the sequential AMR representation a_1 ... a_M, we maximized the joint probability

p_{\text{GPT-2}}(w, a) = \prod_{j=1}^{N} p_{\text{GPT-2}}(w_j \mid w_{1:j-1}, a_{1:M}) \cdot \prod_{i=1}^{M} p_{\text{GPT-2}}(a_i \mid a_{1:i-1})

A special separator token is added to mark the end of the sequential AMR representation. Special AMR symbols that should not be interpreted literally are assigned tokens from the GPT-2 unused token list. In addition to this, we also observed that freezing the input embeddings during fine-tuning had a positive impact on performance. At test time, we provide the AMR as context, as in conventional conditional text generation:

\hat{w}_j = \arg\max_{w_j} \left\{ p_{\text{GPT-2}}(w_j \mid w_{1:j-1}, a_{1:M}) \right\}

3 Re-scoring via Cycle Consistency

The general idea of cycle consistency is to assess the quality of a system's output based on how well an external 'reverse' system can reconstruct the input from it. In previous work, cycle-consistency-based losses have been used as part of the training objective in machine translation (He et al., 2016) and speech recognition (Hori et al., 2019). Cycle consistency has also been used for filtering synthetic training data for question answering (Alberti et al., 2019). Here we propose the use of a cycle consistency measure to re-score the system outputs. In particular, we take the top k sentences generated by our system from each gold AMR graph and parse them using an off-the-shelf parser to obtain a second AMR graph. We then re-score each sentence using the standard AMR parsing metric Smatch (Cai and Knight, 2013) by comparing the gold and parsed AMRs.
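As a rough illustration of this re-scoring step, the Python sketch below re-ranks the top-k candidate sentences for one gold AMR by parsing each candidate back to AMR and scoring it with Smatch. The functions parse_to_amr and smatch_f1 are placeholders for an off-the-shelf AMR parser and a Smatch implementation; they are assumptions for illustration, not the exact interfaces used in this work.

def cycle_consistency_rescore(candidates, gold_amr, parse_to_amr, smatch_f1):
    # candidates: list of candidate sentences (the top-k beam outputs for one gold AMR).
    # parse_to_amr: callable mapping a sentence to a predicted AMR graph (placeholder parser).
    # smatch_f1: callable returning the Smatch F1 between a predicted AMR and the gold AMR.
    scored = []
    for sentence in candidates:
        predicted_amr = parse_to_amr(sentence)      # "reverse" system: text -> AMR
        score = smatch_f1(predicted_amr, gold_amr)  # cycle-consistency measure
        scored.append((score, sentence))
    # Order candidates by how well the parser can reconstruct the input AMR from them.
    return [sentence for score, sentence in sorted(scored, key=lambda x: x[0], reverse=True)]

The re-scored system output is then simply the first element of the returned list.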
4 Experimental setup

Following previous work on AMR-to-text, we use the standard LDC2017T10 AMR corpus for evaluation of the proposed model. This corpus contains 36,521 training instances of AMR graphs in PENMAN notation and the corresponding texts. It also includes 1368 and 1371 development and test instances, respectively. We tokenize each input text using the JAMR toolkit (Flanigan et al., 2014). The concatenation of an AMR graph and the corresponding text is split into words, special symbols and sub-word units using the GPT-2 tokenizer. We add all arc labels seen in the training set and the root node :root to the vocabulary of the GPT-2 model, but we freeze the embedding layer for training. We use the Hugging Face implementation (Wolf et al., 2019) for GPT-2 small (GPT-2S), medium (GPT-2M) and large (GPT-2L). Fine-tuning converges after 6 epochs, which takes just a few hours on a V100 GPU.1 For cycle-consistency re-scoring we use an implementation of Naseem et al. (2019) in PyTorch. For re-scoring experiments, we use a beam size of 15.
1 Code for this paper is available at: https://github.com/IBM/GPT-too-AMR2text

AMR input representation. We test three variants of AMR representation. First, a depth-first search (DFS) through the graph following Konstas et al. (2017), where the input sequence is the path followed in the graph. Second, to see if GPT-2 is in fact learning from the graph structure, we remove all the edges from the DFS, keeping only the concept nodes. This has the effect of removing the relation information between concepts, such as subject/object relations. As a third option, we use the PENMAN representation without any modification. The three input representations are illustrated below:

Nodes   recommend advocate-01 it vigorous
DFS     recommend :ARG1 advocate-01 :ARG1 it :manner vigorous
Penman  (r / recommend-01 :ARG1 (a / advocate-01 :ARG1 (i / it) :manner (v / vigorous)))

Decoding. For generation, we experiment with greedy decoding, beam search, and nucleus sampling (Holtzman et al., 2019). For beam search, we explore beam sizes of 5, 10 and 15. As the system, in some cases, produces repetitive output at the end of the text, we additionally perform a post-processing step to remove these occurrences.

Metrics. We consider the three automatic evaluation metrics commonly used in previous work. We compute BLEU (Papineni et al., 2002) using SacreBLEU (Ma et al., 2019). We compute chrF++ (Popović, 2017) using both SacreBLEU and the scripts used by the authors of the baseline systems. We compute METEOR (Banerjee and Lavie, 2005) with the default values for English of the CMU implementation.2 In addition to the standard automatic metrics, we also carry out human evaluation experiments and use the semantic similarity metric BERTScore (Zhang et al., 2020). Both metrics arguably have less dependency on the surface symbols of the reference text used for evaluation. This is particularly relevant for the AMR-to-text task, since one single AMR graph corresponds to multiple sentences with the same semantic meaning. Conventional metrics for AMR-to-text are strongly influenced by surface symbols and thus do not capture well the ability of the system to produce diverse sentences with the same underlying semantics.
2 https://www.cs.cmu.edu/~alavie/METEOR

Human evaluations are carried out by three professional annotators on 51 randomly selected sentences from the 1371 test sentences, on a 6-point scale ranging from 0 to 5.

• 0=Exceptionally poor (No useful information is conveyed at all.)
• 1=Poor (Fundamental errors in grammar and vocabulary make it difficult to understand the meaning.)
• 2=Not good enough (Errors in grammar, vocabulary and style make it difficult to understand the meaning.)
• 3=Good enough (There are errors in the text, but I am reasonably confident that I understand the meaning.)
• 4=Very good (There may be minor errors in the text, but I am very confident that I understand the meaning.)
• 5=Excellent (The information is presented clearly and with appropriate grammar, vocabulary and style.)

For each system, scores from all annotators are averaged to compute a single score. Inter-annotator agreement was 0.7 when measured by Pearson correlation coefficient. Our system produces de-tokenized cased output after BPE decoding, whereas previous systems produce traditional tokenized lower-cased output. Therefore, we lowercase and tokenize our system outputs to have fair comparisons with previous systems.

Model        Input               BLEU   chrF++
GPT-2S Rec.  Only nodes AMR      9.45   41.59
GPT-2S Rec.  Lin. AMR w/o edges  11.35  43.25
GPT-2S Rec.  Lin. AMR w/ edges   20.14  53.12
GPT-2S Rec.  Penman AMR          22.37  53.92
GPT-2M Rec.  Lin. AMR w/ edges   22.86  55.04
GPT-2M Rec.  Penman AMR          27.99  61.26
Table 1: Results on the LDC2017T10 development set using GPT-2 S(mall) and M(edium) with Rec(onstruction) loss (see §2) for different AMR representations (see §4).

Approach            Decoding  BLEU   chrF++
GPT-2M Conditional  Greedy    25.73  57.2
GPT-2M Rec.         Greedy    30.41  61.36
GPT-2M Rec.         BEAM      31.8   62.56
GPT-2M Rec.         BEAM 10   32.32  62.79
GPT-2M Rec.         Sampling  28.75  61.19
Table 2: Results on the LDC2017T10 development set. Rec(onstruction) uses the AMR reconstruction term (see §2) whereas Conditional does not.

System                    BLEU       Meteor      chrF++
Beck et al. (2018)        23.30                  50.40
Damonte and Cohen (2019)  24.54      24.07
Guo et al. (2019)         27.60                  57.30
Cao and Clark (2019)      26.80
Sinh and Le Minh (2019)   18.36
Ribeiro et al. (2019)     27.87      33.21
Cai and Lam (2020)        29.80      35.10       59.4
Zhu et al. (2019)         31.82      36.38       64.05
GPT-2M Rec.               32.10♦     35.86 (3)   61.81♦
GPT-2L Rec.               32.47♦     36.80 (3)   62.88♦
GPT-2M Rec. re-scoring    32.98♦     37.33 (3)   63.09♦
GPT-2L Rec. re-scoring    33.02♦     37.68 (3)   63.89 (2)
Table 3: Results on the LDC2017T10 test set for best performing models compared to other results reported in the literature. ♦ indicates statistical significance at (P < .01), (3) at (P < 0.05) and (2), not significant. All significance tests are with respect to (Zhu et al., 2019).

System                 Human Eval. Avg.  Human Eval. P45  SemSim F1
Guo et al. (2019)      2.48              15.69%           92.68
Ribeiro et al. (2019)  2.42              16.37%           92.63
Zhu et al. (2019)      2.61              20.26%           93.31
GPT-2M Rec.            3.03              37.91%           94.55
GPT-2L Rec.            3.04              41.83%           94.63
Table 4: Human evaluation and semantic similarity (SemSim) results on the LDC2017T10 test set. Human evaluations (Human Eval.) show the average (Avg.) of scores (0 to 5) and the ratio of sentences evaluated between 4 and 5 (P45). All results for human evaluation are on 51 randomly selected sentences and statistically significant at (P < 0.05). SemSim results are significant at (P < 0.01). All significance tests refer to a comparison with (Zhu et al., 2019).

4.1 Results

Regarding the type of AMR representation, as shown in Table 1, directly using the PENMAN notation for the AMR representation leads to the best results, outperforming DFS. Edge information, indicating relations between concepts, also seems to play a fundamental role, since its absence strongly decreases performance in both the DFS and PENMAN representations. PENMAN notation was chosen for the rest of the experiments.

The impact of the use of a reconstruction term explained in §2 is shown in Table 2. The model trained using this additional term achieves 30.41 BLEU and 61.36 chrF++, as opposed to 25.73 BLEU and 57.2 chrF++ without the term. We therefore use the reconstruction term for training in the rest of the experiments.
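To make the difference between the Conditional and Rec(onstruction) settings concrete, the sketch below shows how the two training losses can be set up with the Hugging Face GPT-2 implementation: the conditional variant masks the AMR prefix out of the loss (label -100 is ignored), while the reconstruction variant also predicts the AMR tokens. For simplicity the sketch reuses the end-of-text token as the separator, and both the example sentence and the linearization details are illustrative assumptions rather than the exact setup used here.

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

amr = "(r / recommend-01 :ARG1 (a / advocate-01 :ARG1 (i / it) :manner (v / vigorous)))"
text = "It is recommended to advocate it vigorously."

amr_ids = tokenizer.encode(amr)
sep_id = tokenizer.eos_token_id          # stand-in for the special separator token
text_ids = tokenizer.encode(text)

input_ids = torch.tensor([amr_ids + [sep_id] + text_ids])

# Reconstruction (joint) loss: model both the AMR prefix and the target text.
labels_rec = input_ids.clone()
loss_rec = model(input_ids, labels=labels_rec).loss

# Conditional loss: ignore the AMR prefix and separator when computing the loss.
labels_cond = input_ids.clone()
labels_cond[:, : len(amr_ids) + 1] = -100
loss_cond = model(input_ids, labels=labels_cond).loss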
Beam search improves system performance greatly over the greedy baseline with 1.91 BLEU points (see Table 2). With beam size 10, we obtain 32.32 BLEU and 62.79 chrF++. With nucleus sampling at a cumulative probability mass of 0.9, performance drops to 28.75 BLEU and 61.19 chrF++. Finally, cycle-consistency re-ranking of the beam search outputs improves performance (33.57 BLEU, 64.86 chrF++) over the one best output. Table 3 compares the best GPT-2M and GPT-2L results, fine-tuned using the reconstruction term and PENMAN notation. For all scores we test statistical significance with a standard two-tailed student t-test. Our model achieves a large improvement of 1.2 BLEU and 1.3 METEOR scores over the previous state-of-the-art model using GPT-2L and re-scoring. For chrF++, we get different scores from SacreBLEU and the scripts provided by the authors of our baseline systems, achieving comparable results with the former (63.89), and improving over the best score with the latter (65.01) (P < .01). Table 4 shows human Evaluation results and semantic similarity scores of GPT-2L and GPT-2M compared to (Zhu et al., 2019; Ribeiro et al., 2019; Guo et al., 2019). Our approach produces a large number of high-quality sentences with 41.8%, a significant gain over the previous best system (20.26%). Regarding semantic similarity, prior art methods show relatively close scores, a 0.9 points difference, while GPT-2L Rec. improves 1.6 points over the best of these models. It should be noted that differences with (Zhu et al., 2019) for GPT-2L Rec. are statistically significantly with P < .05, while differences for GPT-2M Rec are not significant due to the small sample size. In Table 5 we show three nontrivial examples, where we compare our system outputs with those of previous work. In the first example, the reference sentence contains a grammatical error. Our system not only generates the correct output, but also corrects the error in the reference. The proposed system can generate fluent long sentences as shown in example 2. The third example shows a sentence where all systems including ours fail to generate a correct text. 4.2 Discussion Due to the large amounts of data they are trained on, pre-trained transformer language models can be expected to generate fluent and diverse text (See et al., 2019). It should however be highlighted that fine-tuned GPT-2 learns to produce not only fluent but also adequate text, despite using a sequential representation of an AMR graph as input. As shown in the experimental setup, encoding of relations plays as well a fundamental role in AMRto-text performance, indicating that GPT-2 attains a fine-grained understanding of the underlying semantics to reach state of the art performance. While a sequence of PENMAN notation to1850 System Generated text (1) REF: the doctors gave her medication and it ’s made her much better . G2S: the doctor gives her medications and they make her much better . Transf: doctors give her medications and make her much better . Our: the doctor gave her the medication and made her feel much better. Our R.: the doctor gave her the medication and made her ” much better ” . (2) REF: at the state scientific center of applied microbiology there is every kind of deadly bacteria that was studied for use in the secret biological weapons program of the soviet union . G2S: there are every kind of killing <unk> in the state scientific center of applied microbiology to use themselves for soviet union ’s secret biological weapons programs . 
Transf: there is every kind of bacterium , which is studied in using bacterium for the soviet union secret biological weapons program . Our: every kind of bacterium that was studied was found at the state scientific center of applied microbiology and was used in soviet secret weapons programs for biological weapons of biology . Our R.: every kind of bacterium that has been studied and used in soviet secret programs for biological weapons has been in the state scientific center of applied microbiology . (3) REF: among the nations that have not signed the treaty only india and israel would qualify for admission to the nsg under the israeli proposal . G2S: only one of the nations who do not sign the treaty are qualified for their proposal to admit the nsg . Transf: india and israel are only qualified for the nations that do not sign the treaty , but they admitted to the nsg . Our: india and israel are the only countries eligible to admit to the nsg by proposing a treaty . Our R.: only india and israel are eligible to admit to the nsg by proposing a treaty . Table 5: Output examples from four systems of the LDC2017T10 dataset. REF stands for reference, G2S for (Guo et al., 2019) and Transf. for (Zhu et al., 2019). Our is the top beam output for GPT-2L and Our R. is with re-scoring. kens is far from an optimal encoding of a graph, it is noteworthy how far performance-wise current strong language models can go. Furthermore, It is likely that standard metrics (BLEU, Meteor, chrF++) that rely on a reference text do not properly reflect AMR-to-text quality. An AMR graph corresponds to multiple sentences with the same semantics and these measures are likely biased towards the single available reference. In metrics that are less influenced by the reference text such as human evaluation and semantic similarity, the proposed system shows a larger improvement over the previous systems with close to 50% of the generated sentences considered excellent or good. Finally it is worth considering that leveraging pre-trained transformers greatly expands the vocabulary available on AMR-to-text systems. A single AMR graph can correspond to multiple sentences with markedly different surface realizations, but manual annotation of AMR is a time consuming task. Approaches like the one proposed may be a simple solution for generation of diverse text data for AMR parser training or other applications were diversity play a role. 5 Conclusions In this work, we present a language model-based approach for the AMR-to-text generation task. We show that a strong pre-trained transformer language model (GPT-2) can be fine-tuned to generate text directly from the PENMAN notation of an AMR graph. Comparison with state-of-the-art models in BLUE, chrF++, METEOR as well as SemSim and human evaluation metrics show that while simple, this approach can outperform existing methods including methods training transformers from scratch. We also show that cycle consistency-based re-scoring using a conventional AMR parser and the Smatch metric can notably improve the results. Future work will focus on incorporating better encoding of the AMR graph into the current system and exploring data augmentation techniques leveraging the proposed approach. Acknowledgments We thank the reviewers for their valuable suggestions. We would also like to thank Chunchuan Lyu for his valuable feedback and help. 1851 References Chris Alberti, Daniel Andor, Emily Pitler, Jacob Devlin, and Michael Collins. 2019. 
Synthetic QA corpora generation with roundtrip consistency. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, pages 178–186. Satanjeev Banerjee and Alon Lavie. 2005. Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. In Proceedings of the acl workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization, pages 65–72. Daniel Beck, Gholamreza Haffari, and Trevor Cohn. 2018. Graph-to-sequence learning using gated graph neural networks. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 273–283. Deng Cai and Wai Lam. 2020. Graph transformer for graph-to-sequence learning. In 34th AAAI conference on artificial intelligence. Shu Cai and Kevin Knight. 2013. Smatch: an evaluation metric for semantic feature structures. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 748–752. Kris Cao and Stephen Clark. 2019. Factorising amr generation through syntax. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2157–2163. Marco Damonte and Shay B Cohen. 2019. Structural neural encoders for amr-to-text generation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3649–3658. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186. Jeffrey Flanigan, Chris Dyer, Noah A Smith, and Jaime Carbonell. 2016. Generation from abstract meaning representation using tree transducers. In Proceedings of the 2016 conference of the North American chapter of the association for computational linguistics: Human language technologies, pages 731–739. Jeffrey Flanigan, Sam Thomson, Jaime Carbonell, Chris Dyer, and Noah A Smith. 2014. A discriminative graph-based parser for the abstract meaning representation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1426– 1436. Zhijiang Guo, Yan Zhang, Zhiyang Teng, and Wei Lu. 2019. Densely connected graph convolutional networks for graph-to-sequence learning. Transactions of the Association for Computational Linguistics, 7:297–312. Di He, Yingce Xia, Tao Qin, Liwei Wang, Nenghai Yu, Tie-Yan Liu, and Wei-Ying Ma. 2016. Dual learning for machine translation. In Advances in Neural Information Processing Systems, pages 820–828. Ari Holtzman, Jan Buys, Maxwell Forbes, and Yejin Choi. 2019. The curious case of neural text degeneration. arXiv preprint arXiv:1904.09751. Takaaki Hori, Ramon Astudillo, Tomoki Hayashi, Yu Zhang, Shinji Watanabe, and Jonathan Le Roux. 2019. 
Cycle-consistency training for end-to-end speech recognition. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6271–6275. IEEE. Nitish Shirish Keskar, Bryan McCann, Lav R Varshney, Caiming Xiong, and Richard Socher. 2019. Ctrl: A conditional transformer language model for controllable generation. arXiv preprint arXiv:1909.05858. Ioannis Konstas, Srinivasan Iyer, Mark Yatskar, Yejin Choi, and Luke Zettlemoyer. 2017. Neural amr: Sequence-to-sequence models for parsing and generation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 146–157. Qingsong Ma, Johnny Wei, Ondˇrej Bojar, and Yvette Graham. 2019. Results of the wmt19 metrics shared task: Segment-level and strong mt systems pose big challenges. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 62–90. Tahira Naseem, Abhishek Shah, Hui Wan, Radu Florian, Salim Roukos, and Miguel Ballesteros. 2019. Rewarding Smatch: Transition-based AMR parsing with reinforcement learning. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318. 1852 Maja Popovi´c. 2017. chrf++: words helping character n-grams. In Proceedings of the second conference on machine translation, pages 612–618. Nima Pourdamghani, Kevin Knight, and Ulf Hermjakob. 2016. Generating english from abstract meaning representations. In Proceedings of the 9th international natural language generation conference, pages 21–25. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pretraining. 2018. URL https://s3-us-west-2. amazonaws. com/openai-assets/research-covers/languageunsupervised/language understanding paper. pdf. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8). Leonardo FR Ribeiro, Claire Gardent, and Iryna Gurevych. 2019. Enhancing amr-to-text generation with dual graph representations. arXiv preprint arXiv:1909.00352. Abigail See, Aneesh Pappu, Rohun Saxena, Akhila Yerukola, and Christopher D Manning. 2019. Do massively pretrained language models make better storytellers? arXiv preprint arXiv:1909.10705. Vu Trong Sinh and Nguyen Le Minh. 2019. A study on self-attention mechanism for amr-to-text generation. In International Conference on Applications of Natural Language to Information Systems, pages 321–328. Springer. Linfeng Song, Yue Zhang, Zhiguo Wang, and Daniel Gildea. 2018. A graph-to-sequence model for amrto-text generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1616– 1626. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998–6008. Tianming Wang, Xiaojun Wan, and Hanqi Jin. 2020. Amr-to-text generation with graph transformer. Transactions of the Association for Computational Linguistics, 8:19–33. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R´emi Louf, Morgan Funtowicz, et al. 2019. 
Transformers: State-of-theart natural language processing. arXiv preprint arXiv:1910.03771. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. Bertscore: Evaluating text generation with bert. In International Conference on Learning Representations. Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and Bill Dolan. 2019. Dialogpt: Large-scale generative pre-training for conversational response generation. arXiv preprint arXiv:1911.00536. Jie Zhu, Junhui Li, Muhua Zhu, Longhua Qian, Min Zhang, and Guodong Zhou. 2019. Modeling graph structure in transformer for better amr-to-text generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5462–5471.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1853–1868 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 1853 Learning to Update Natural Language Comments Based on Code Changes Sheena Panthaplackel1, Pengyu Nie2, Milos Gligoric2, Junyi Jessy Li3, Raymond J. Mooney1 1Department of Computer Science 2Department of Electrical and Computer Engineering 3Department of Linguistics The University of Texas at Austin [email protected], [email protected], [email protected], [email protected], [email protected] Abstract We formulate the novel task of automatically updating an existing natural language comment based on changes in the body of code it accompanies. We propose an approach that learns to correlate changes across two distinct language representations, to generate a sequence of edits that are applied to the existing comment to reflect the source code modifications. We train and evaluate our model using a dataset that we collected from commit histories of open-source software projects, with each example consisting of a concurrent update to a method and its corresponding comment. We compare our approach against multiple baselines using both automatic metrics and human evaluation. Results reflect the challenge of this task and that our model outperforms baselines with respect to making edits. 1 Introduction Software developers include natural language comments alongside source code as a way to document various aspects of the code such as functionality, use cases, pre-conditions, and post-conditions. With the growing popularity of open-source software that is widely used and jointly developed, the need for efficient communication among developers about code details has increased. Consequently, comments have assumed a vital role in the development cycle. With developers regularly refactoring and iteratively incorporating new functionality, source code is constantly evolving; however, the accompanying comments are not always updated to reflect the code changes (Tan et al., 2007; Ratol and Robillard, 2017). Inconsistency between code and comments can not only lead time-wasting confusion in tight project schedules (Hu et al., 2018) but can also result in bugs (Tan et al., 2007). To address this problem, we propose an approach that can automatically suggest comment updates when the associated methods are changed. /**@return double the roll euler angle.*/ public double getRotX() { return mOrientation.getRotationX(); } Previous Version /**@return double the roll euler angle in degrees.*/ public double getRotX() { return Math.toDegrees(mOrientation.getRotationX()); } Updated Version Figure 1: Changes in the getRotX method and its corresponding @return comment between two subsequent commits of the rajawali-rajawali project, available on GitHub. Prior work explored rule-based approaches for detecting inconsistencies for a limited set of cases; however, they do not present ways to automatically fix these inconsistencies (Tan et al., 2007; Ratol and Robillard, 2017). Recent work in automatic comment generation aims to generate a comment given a code representation (Liang and Zhu, 2018; Hu et al., 2018; Fernandes et al., 2019); although these techniques could be used to produce a completely new comment that corresponds to the most recent version of the code, this could potentially discard salient content from the existing comment that should be retained. 
To the best of our knowledge, we are the first to formulate the task of automatically updating an existing comment when the corresponding body of code is modified. This task is intended to align with how developers edit a comment when they introduce changes in the corresponding method. Rather than deleting it and starting from scratch, they would likely only modify the specific parts relevant to the code updates. For example, Figure 1 shows the getRotX method being modified to have the return value parsed into degrees. Within the same commit, the corresponding comment is revised to indicate this, without imposing changes on parts of the comment that pertain to other aspects of the return value. We replicate this process through a novel approach which is designed to correlate edits across two distinct language representations: source code and natural language comments. Namely, our model is trained to generate a sequence of edit actions, which are to be applied to the existing comment, by conditioning on learned representations of the code edits and existing comment. We additionally incorporate linguistic and lexical features to guide the model in determining where edits should be made in the existing comment. Furthermore, we develop an output reranking scheme that aims to produce edited comments that are fluent, preserve content that should not be changed, and maintain stylistic properties of the existing comment. We train and evaluate our system on a corpus constructed from open-source Java projects on GitHub, by mining their commit histories and extracting examples from consecutive commits in which there was a change to both the code within a method as well as the corresponding Javadoc comment, specifically, the @return Javadoc tag. These comments, which have been previously studied for learning associations between comment and code entities (Panthaplackel et al., 2020), follow a well-defined structure and describe characteristics of the output of a method. For this reason, as an initial step, we focus on @return comments in this work. Our evaluation consists of several automatic metrics that are used to evaluate language generation tasks as well as tasks that relate to editing natural language text. We also conduct human evaluation, and assess whether human judgments correlate with the automatic metrics. The main contributions of this work include (1) the task of automatically updating an existing comment based on source code changes and (2) a novel approach for learning to relate edits between source code and natural language that outperforms multiple baselines on several automatic metrics and human evaluation. Our implementation and data are publicly available.1
1 https://github.com/panthap2/LearningToUpdateNLComments

2 Task

Given a method, its corresponding comment, and an updated version of the method, the task is to update the comment so that it is consistent with the code in the new method. For the example in Figure 1, we want to generate "@return double the roll euler angle in degrees." based on the changes between the two versions of the method and the existing comment "@return double the roll euler angle." Concretely, given (Mold, Cold) and Mnew, where Mold and Mnew denote the old and new versions of the method, and Cold signifies the previous version of the comment, the task is to produce Cnew, the updated version of the comment.

Figure 2: High-level overview of our system.
3 Edit Model Overview We design a system that examines source code changes and how they relate to the existing comment in order to produce an updated comment that reflects the code modifications. Since Cold and Cnew are closely related, training a model to directly generate Cnew risks having it learn to just copy Cold. To explicitly inform the model of edits, we define the target output as a sequence of edit actions, Cedit, to indicate how the existing comment should be revised (e.g., for Cold=ABC, Cedit=<Delete>A<DeleteEnd> implies that A should be deleted to produce Cnew=BC). Furthermore, in order to better correlate these edits with changes in the code, we unify Mold and Mnew into a single diff sequence that explicitly identifies code edits, Medit. We discuss in more detail how Medit and the training Cedit are constructed in §4. Figure 2 shows a high-level overview of our system. We design an encoder-decoder architecture consisting of three components: a two-layer, bidirectional GRU (Cho et al., 2014) that encodes the code changes (Medit), another two-layer, bidirectional GRU that encodes the existing comment (Cold), and a GRU that is trained to decode a sequence of edit actions (Cedit).2 We concatenate the 2We refrain from using the self-attention model (Vaswani et al., 2017) because prior work (Fernandes et al., 2019) suggests that it yields lower performance for comment generation. 1855 final states of the two encoders to form a vector that summarizes the content in Medit and Cold, and use this vector as the initial state of the decoder. The decoder essentially has three subtasks: (1) identify edit locations in Cold; (2) determine parts of Medit that pertain to making these edits; and (3) apply updates in the given locations based on the relevant code changes. We rely on an attention mechanism (Luong et al., 2015) over the hidden states of the two encoders to accomplish the first two goals. At every decoding step, rather than aligning the current decoder state with all the encoder hidden states jointly, we align it with the hidden states of the two encoders separately. We concatenate the two resulting context vectors to form a unified context vector that is used in the final step of computing attention, ensuring that we incorporate pertinent content from both input sequences. Consequently, the resulting attention vector carries information relating to the current decoder state as well as knowledge aggregated from relevant portions of Cold and Medit. Using this information, the decoder performs the third subtask, which requires reasoning across language representations. Specifically, it must determine how the source code changes that are relevant to the current decoding step should manifest as natural language updates to the relevant portions of Cold. At each step, it decides whether it should begin a new edit action by generating an edit start keyword, continue the present action by generating a comment token, or terminate the present action by generating an end-edit keyword. Because actions relating to deletions will include tokens in Cold, and actions relating to insertions are likely to include tokens in Medit, we equip the decoder with a pointer network (Vinyals et al., 2015) to accommodate copying tokens from Cold and Medit. The decoder generates a sequence of edit actions, which will have to be parsed into a comment (§4.4). 
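As a simplified sketch of how the two encoders' information can be combined at each decoding step, the Python snippet below computes one attention context over each encoder's hidden states separately and concatenates them into a single context vector, as described above. The dimensions, names, and the use of plain dot-product attention are our own simplifications for illustration, not the exact implementation.

import torch
import torch.nn.functional as F

def dual_context(decoder_state, code_states, comment_states):
    # decoder_state: (batch, hidden); code_states / comment_states: (batch, length, hidden).
    def attend(states):
        scores = torch.bmm(states, decoder_state.unsqueeze(2)).squeeze(2)  # (batch, length)
        weights = F.softmax(scores, dim=1)
        return torch.bmm(weights.unsqueeze(1), states).squeeze(1)          # (batch, hidden)

    code_context = attend(code_states)        # attend over the code-edit encoder (Medit)
    comment_context = attend(comment_states)  # attend over the old-comment encoder (Cold)
    # Unified context vector used in the final attention/output computation of the decoder.
    return torch.cat([code_context, comment_context], dim=1)

# Example shapes only.
batch, hidden = 2, 8
context = dual_context(torch.randn(batch, hidden),
                       torch.randn(batch, 5, hidden),
                       torch.randn(batch, 7, hidden))
print(context.shape)  # torch.Size([2, 16])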
4 Representing Edits Here we define the edit lexicon that is used to construct the input code edit sequence, Medit, and the target comment edit sequence, Cedit. 4.1 Edit Lexicon We use difflib3 to extract code edits and target comment edits. Both the input code edit sequence and the target comment edit sequence consist of a se3https://docs.python.org/3/library/ difflib.html ries of edit actions; each edit action is structured as <Action> [span of tokens] <ActionEnd>.4 We define four types of edit actions: Insert, Delete, Replace, and Keep. Because the Replace action must simultaneously incorporate distinct content from two versions (i.e., tokens in the old version that will be replaced, and tokens in the new version that will take their place), it follows a slightly different structure: <ReplaceOld> [span of old tokens] <ReplaceNew> [span of new tokens] <ReplaceEnd> 4.2 Code Edits We extract the edits between Mold and Mnew using the edit lexicon to construct Medit, the code edit sequence used as input in one of the encoders. Figure 2 (top right) shows the Medit corresponding to code changes in Figure 1. In contrast to line-level code diffs that are commonly used for commit message generation (Loyola et al., 2017; Jiang et al., 2017; Xu et al., 2019), this representation allows us to explicitly capture more fine-grained edits. While we could exploit the abstract syntax tree (AST) structure of source code and represent the changes between the ASTs corresponding to the two versions of code, prior work suggests that such techniques do not always lead to improved performance (Yin et al., 2019). We leave it to future work to investigate how the AST structure can be leveraged for this task. 4.3 Comment Edits We identify the changes between Cold and Cnew to construct Cedit, the target comment edit sequence. During inference, the output comment is produced by parsing the predicted edit sequence (§4.4). We introduce a slightly modified set of specifications that disregards the Keep type when constructing the sequence of edit actions, referred to as the condensed edit sequence. The intuition for disregarding Keep and the span of tokens to which it applies is that we can simply copy the content that is retained between Cold and Cnew, instead of generating it anew. By doing posthoc copying, we simplify learning for the model since it has to only learn what to change rather than also having to learn what to keep. We design a method to deterministically place edits in their correct positions in the absence of 4Preliminary experiments showed that this performed better than structuring edits at the token-level as in other tasks (Shin et al., 2018; Li et al., 2018; Dong et al., 2019; Awasthi et al., 2019). 1856 Keep spans. For the example in Figure 1, the raw sequence <Insert>in degrees<InsertEnd> does not encode information as to where “in degrees” should be inserted. To address this, we bind an insert sequence with the minimum number of words (aka “anchors”) such that the place of insertion can be uniquely identified. This results in the structure that is shown for Cedit in Figure 2. Here “angle” serves as the anchor point, identifying the insert location. Following the structure of Replace, this sequence indicates that “angle” should be replaced with “angle in degrees,” effectively inserting “in degrees” and keeping “angle” from Cold, which appears immediately before the insert location. See Appendix A for details on this procedure. 
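As an illustration of how such edit sequences can be derived with difflib (the library named in §4.1), the sketch below maps SequenceMatcher opcodes onto the edit lexicon. It produces the full edit sequence including Keep spans; the condensed training sequences additionally drop Keep spans and attach anchor tokens as described above, which is not shown here.

import difflib

def edit_actions(old_tokens, new_tokens):
    # Map difflib opcodes onto the edit lexicon (Keep / Insert / Delete / Replace).
    actions = []
    matcher = difflib.SequenceMatcher(None, old_tokens, new_tokens)
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag == "equal":
            actions += ["<Keep>"] + old_tokens[i1:i2] + ["<KeepEnd>"]
        elif tag == "delete":
            actions += ["<Delete>"] + old_tokens[i1:i2] + ["<DeleteEnd>"]
        elif tag == "insert":
            actions += ["<Insert>"] + new_tokens[j1:j2] + ["<InsertEnd>"]
        elif tag == "replace":
            actions += (["<ReplaceOld>"] + old_tokens[i1:i2]
                        + ["<ReplaceNew>"] + new_tokens[j1:j2] + ["<ReplaceEnd>"])
    return actions

old = "@return double the roll euler angle .".split()
new = "@return double the roll euler angle in degrees .".split()
print(edit_actions(old, new))
# ['<Keep>', '@return', ..., 'angle', '<KeepEnd>', '<Insert>', 'in', 'degrees', '<InsertEnd>', '<Keep>', '.', '<KeepEnd>']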
4.4 Parsing Edit Sequences Since the decoder is trained to predict a sequence of edit actions, we must align it with Cold and copy unchanged tokens in order to produce the edited comment. We denote the predicted edit sequence as C’edit and the corresponding parsed output as C’new. This procedure entails simultaneously following pointers, left-to-right, on Cold and C’edit, which we refer to as Pold and Pedit respectively. Pold is advanced, copying the current token into C’new at each point, until an edit location is reached. The edit action corresponding to the current position of Pedit is then applied, and the tokens from its relevant span are copied into C’new if applicable. Finally, Pedit is advanced to the next action, and Pold is also advanced to the appropriate position in cases involving deletions and replacements. This process repeats until both pointers reach the end of their respective sequences. 5 Features We extract linguistic and lexical features for tokens in Medit and Cedit, many of which were shown to improve learning associations between @return comment and source code entities in our prior work (Panthaplackel et al., 2020). We incorporate these features into the network as one-hot vectors that are concatenated to Medit and Cedit embeddings and then passed through a linear layer. These vectors are provided as inputs to the two encoders. All sequences are subtokenized, e.g., camelCase → camel, case. Features specific to Medit: We aim to take advantage of common patterns among different types of code tokens by incorporating features that identify certain categories: edit keywords, Java keywords, and operators. If a token is not an edit keyword, we have indicator features for whether it is part of a Insert, Delete, ReplaceNew, ReplaceOld, or Keep span. We believe this will be particularly helpful for longer spans since edit keywords only appear at either the beginning or end of a span. Finally, we include a feature to indicate whether the token matches a token in Cold. This is intended to help the model identify locations in Medit that may be relevant to editing Cold. Features specific to Cold: We include whether a token matches a code token that is inserted, deleted, or replaced in Medit. These help align parts of Cold with code edits, assisting the model in determining where edits should be made. In order to exploit common patterns for different types of tokens, we incorporate features that identify whether the token appears more than once in Cold or is a stop word, and its part-of-speech. Shared features: We include whether the token is a subtoken that was originally part of a larger token and its index if so (e.g., split from camelCase, camel and case are subtokens with indices 0 and 1 respectively). These features aim to encode important relationships between adjacent tokens that are lost once the body of code and comment are transformed into a single, subtokenized sequences. Additionally, because we focus on @return comments, we introduce features intended to guide the model in identifying relevant tokens in Medit and Cold. Namely, we include whether a given token matches a token in a return statement that is unique to Mold, unique to Mnew, or present in both. Similarly, we indicate whether the token matches a token in the subtokenized return type that is unique to Mold, unique to Mnew, or present in both. 
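Since several of the features above are defined over subtokens, the following sketch shows one way to perform the camelCase/snake_case subtokenization and derive the subtoken-index feature. The exact splitting rules used in this work are not specified at this level of detail, so treat this regex as an assumption.

import re

def subtokenize(token):
    # Split a code token on camelCase and snake_case boundaries, lowercased.
    parts = []
    for piece in token.split("_"):
        parts += re.findall(r"[A-Z]+(?![a-z])|[A-Z]?[a-z]+|\d+", piece)
    return [p.lower() for p in parts]

def subtoken_features(tokens):
    # (subtoken, index-within-original-token) pairs, as in the shared features of Section 5.
    feats = []
    for tok in tokens:
        feats += [(s, i) for i, s in enumerate(subtokenize(tok))]
    return feats

print(subtoken_features(["getRotX", "camelCase", "snake_case"]))
# [('get', 0), ('rot', 1), ('x', 2), ('camel', 0), ('case', 1), ('snake', 0), ('case', 1)]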
6 Reranking Reranking allows the incorporation of additional priors that are difficult to back-propagate, by re-scoring candidate sequences during beam search (Neubig et al., 2015; Ko et al., 2019; Kriz et al., 2019). We incorporate two heuristics to rescore the candidates: 1) generation likelihood and 2) similarity to Cold. These heuristics are computed after parsing the candidate edit sequences (§4.4). Generation likelihood. Since the edit model is trained on edit actions only, it does not globally score the resulting comment in terms of aspects such as fluency and overall suitability for the updated method. To this end, we make use of a pretrained comment generation model (§8.2) that is 1857 Train Valid Test Examples 5,791 712 736 Projects 526 274 281 Edit Actions 8,350 1,038 1,046 Sim (Mold, Mnew) 0.773 0.778 0.759 Sim (Cold, Cnew) 0.623 0.645 0.635 Code Unique 7,271 2,473 2,690 Mean 86.4 87.4 97.4 Median 46 49 50 Comm. Unique 4,823 1,695 1,737 Mean 10.8 11.2 11.1 Median 8 9 9 Table 1: Number of examples, projects, and edit actions; average similarity between Mold and Mnew as the ratio of overlap; average similarity between Cold and Cnew as the ratio of overlap; number of unique code tokens and mean and median number of tokens in a method; and number of unique comment tokens and mean and median number of tokens in a comment. trained on a substantial amount of data for generating Cnew given only Mnew. We compute the length-normalized probability of this model generating the parsed candidate comment, C’new, (i.e., P(C′ new|M new)1/N where N is the number of tokens in C’new). This model gives preference to comments that are more likely for Mnew and are more consistent with the general style of comments.5 Similarity to Cold. So far, our model is mainly trained to produce accurate edits; however, we also follow intuitions that edits should be minimal (as an analogy, the use of Levenshtein distance in spelling correction). To give preference to predictions that accurately update the comment with minimal modifications, we use similarity to Cold as a heuristic for reranking. We measure similarity between the parsed candidate prediction and Cold using METEOR (Banerjee and Lavie, 2005). Reranking score. The reranking score for each candidate is a linear combination of the original beam score, the generation likelihood, and the similarity to Cold with coefficients 0.5, 0.3, and 0.2 respectively (tuned on validation data). 7 Data We extracted examples from popular, open-source Java projects using GitHub’s commit history. We extract pairs of the form (method, comment) for the same method across two consecutive commits where there is a simultaneous change to both the code and comment. This creates somewhat noisy data for the task of comment update; Appendix B describes filtering techniques to reduce this noise. 5We attempted to integrate this model into the training procedure of the edit model through joint training; however, this deteriorated performance. We first tokenize Mold and Mnew using the javalang6 library. We subtokenize based on camelCase and snake_case, as in previous work (Allamanis et al., 2016; Alon et al., 2019; Fernandes et al., 2019). We then form Medit from the subtokenized forms of Mold and Mnew. We tokenize Cold and Cnew by splitting by space and punctuation. We remove HTML tags and the “@return” that precedes all comments, and also subtokenize tokens since code tokens may appear in comments as well. 
The gold edit action sequence, Cedit, is computed from these processed forms of Cold and Cnew. To avoid having examples that closely resemble one another in training and test, the projects in the training, test, and validation sets are disjoint, similar to Movshovitz-Attias and Cohen (2013). Table 1 gives dataset statistics. Of the 7,239 examples in our final dataset, 833 of them were extracted from the diffs used in Panthaplackel et al. (2020). Including code and comment tokens that appear at least twice in the training data as well as the predefined edit keywords, the code and comment vocabulary sizes are 5,945 and 3,642 respectively. 8 Experimental Method We evaluate our approach against multiple rulebased baselines and comment generation models. 8.1 Baselines Copy: Since much of the content of Cold is typically retained in the update, we include a baseline that merely copies Cold as the prediction for Cnew. Return type substitution: The return type of a method often appears in its @return comment. If the return type of Mold appears in Cold and the return type is updated in the code, we substitute the new return type while copying all other parts of Cold. Otherwise, Cold is copied as the prediction. Return type substitution w/ null handling: As an addition to the previous method, we also check whether the token null is added to either a return statement or if statement in the code. If so, we copy Cold and append the string or null if null, otherwise, we simply copy Cold. This baseline addresses a pattern we observed in the data in which ways to handle null input or cases that could result in null output were added. 6https://pypi.org/project/javalang/ 1858 8.2 Generation Model One of our main hypotheses is that modeling edit sequences is better suited for this task than generating comments from scratch. However, a counter argument could be that a comment generation model could be trained from substantially more data, since it is much easier to obtain parallel data in the form (method, comment), without the constraints of simultaneous code/comment edits. Hence the power of large-scale training could out-weigh edit modeling. To this end, we compare with a generation model trained on 103,473 method/@return comment pairs collected from GitHub. We use the same underlying neural architecture as our edit model to make sure that the difference in results comes from the amount of training data and from using edit of representations only: a two-layer, bi-directional GRU that encodes the sequence of tokens in the method, and an attention-based GRU decoder with a copy mechanism that decodes a sequence of comment tokens. We expect the incorporation of more complicated architectures, e.g., tree-based (Alon et al., 2019) and graph-based (Fernandes et al., 2019) encoders which exploit AST structure, can be applied to both an edit model and a generation model, which we leave for future work. Evaluation is based on the 736 (Mnew, Cnew) pairs in the test set described in §7. We ensure that the projects from which training examples are extracted are disjoint from those in the test set. 8.3 Reranked Generation Model In order to allow the generation model to exploit the old comment, this system uses similarity to Cold (cf. §6) as a heuristic for reranking the top candidates from the previous model. The reranking score is a linear combination of the original beam score and the METEOR score between the candidate prediction and Cold, both with coefficient 0.5 (tuned on validation data). 
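A minimal sketch of the reranking used in §6 and §8.3 is given below. It assumes the three component scores have already been computed and put on comparable scales (the paper does not spell out the normalization), with the generation likelihood taken as the length-normalized probability P(C’new | Mnew)^(1/N); the coefficient values are those reported above, and all names are illustrative.

import math

def length_normalized_prob(token_log_probs):
    # P(C'_new | M_new)^(1/N) computed from per-token log-probabilities.
    return math.exp(sum(token_log_probs) / len(token_log_probs))

def rerank(candidates, w_beam=0.5, w_gen=0.3, w_sim=0.2):
    # candidates: list of dicts with keys "comment", "beam", "gen", "sim", where
    # "beam" is the beam-search score, "gen" the generation likelihood, and
    # "sim" the METEOR similarity between the parsed candidate and C_old.
    best = max(candidates,
               key=lambda c: w_beam * c["beam"] + w_gen * c["gen"] + w_sim * c["sim"])
    return best["comment"]

# The reranked generation model of §8.3 corresponds to the special case
# rerank(candidates, w_beam=0.5, w_gen=0.0, w_sim=0.5).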
8.4 Model Training Model parameters are identical across the edit model and generation model, tuned on validation data. Encoders have hidden dimension 64, the decoder has hidden dimension 128, and the dimension for code and comment embeddings is 64. The embeddings used in the edit model are initialized using the pre-trained embedding vectors from the generation model. We use a dropout rate of 0.6, a batch size of 100, an initial learning rate of 0.001, and Adam optimizer. Models are trained to minimize negative log likelihood, and we terminate training if the validation loss does not decrease for ten consecutive epochs. During inference, we use beam search with beam width=20. 9 Evaluation 9.1 Automatic Evaluation Metrics: We compute exact match, i.e., the percentage of examples for which the model prediction is identical to the reference comment Cnew. This is often used to evaluate tasks involving source code edits (Shin et al., 2018; Yin et al., 2019). We also report two prevailing language generation metrics: METEOR (Banerjee and Lavie, 2005), and average sentence-level BLEU-4 (Papineni et al., 2002) that is previously used in code-language tasks (Iyer et al., 2016; Loyola et al., 2017). Previous work suggests that BLEU-4 fails to accurately capture performance for tasks related to edits, such as text simplification (Xu et al., 2016), grammatical error correction (Napoles et al., 2015), and style transfer (Sudhakar et al., 2019), since a system that merely copies the input text often achieves a high score. Therefore, we also include two text-editing metrics to measure how well our system learns to edit: SARI (Xu et al., 2016), originally proposed to evaluate text simplification, is essentially the average of N-gram F1 scores corresponding to add, delete, and keep edit operations;7 GLEU (Napoles et al., 2015), used in grammatical error correction and style transfer, takes into account the source sentence and deviates from BLEU by giving more importance to n-grams that have been correctly changed. Results: We report automatic metrics averaged across three random initializations for all learned models, and use bootstrap tests (Berg-Kirkpatrick et al., 2012) for statistical significance. Table 2 presents the results. While reranking using Cold appears to help the generation model, it still substantially underperforms all other models, across all metrics. Although this model is trained on considerably more data, it does not have access to Cold during training and uses fewer inputs and consequently has less context than the edit model. Reranking slightly deteriorates the edit model’s 7Although the original formulation only used precision for the delete operation, more recent work computes F1 for this as well (Dong et al., 2019; Alva-Manchego et al., 2019). 1859 Model xMatch (%) METEOR BLEU-4 SARI GLEU Baselines Copy 0.000 34.611 46.218 19.282 35.400 Return type subt. 13.723§ 43.106¶ 50.796∥ 31.723 42.507∗ Return type subst. + null 13.723§ 43.359 51.160† 32.109 42.627∗ Models Generation 1.132 11.875 10.515 21.164 17.350 Edit 17.663 42.222¶ 48.217 46.376 45.060 Reranked models Generation 2.083 18.170 18.891 25.641 22.685 Edit 18.433 44.698 50.717∥† 45.486 46.118 Table 2: Exact match, METEOR, BLEU-4, SARI, and GLEU scores. Scores for which the difference in performance is not statistically significant (p < 0.05) are indicated with matching symbols. performance with respect to SARI; however, it provides statistically significant improvements on most other metrics. 
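The significance tests reported here and below can be approximated with a paired bootstrap in the style of Berg-Kirkpatrick et al. (2012); the sketch below is a generic implementation for metrics that are averages of per-example scores (exact match, sentence-level BLEU, and so on), not the authors' evaluation script.

import random

def paired_bootstrap_pvalue(scores_a, scores_b, n_resamples=10000, seed=0):
    # Approximate p-value for "system A is not really better than system B",
    # given per-example scores on the same test set. Assumes scores_a has the
    # higher observed mean.
    assert len(scores_a) == len(scores_b)
    n = len(scores_a)
    rng = random.Random(seed)
    observed = (sum(scores_a) - sum(scores_b)) / n
    exceed = 0
    for _ in range(n_resamples):
        idx = [rng.randrange(n) for _ in range(n)]
        delta = sum(scores_a[i] - scores_b[i] for i in idx) / n
        if delta > 2 * observed:  # shifted-null criterion from Berg-Kirkpatrick et al.
            exceed += 1
    return exceed / n_resamples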
Although two of the baselines achieve slightly higher BLEU-4 scores than our best model, these differences are not statistically significant, and our model is better at editing comments, as shown by the results on exact match, SARI, and GLEU. In particular, our edit models beat all other models with wide, statistically significant, margins on SARI, which explicitly measures performance on edit operations. Furthermore, merely copying Cold, yields a relatively high BLEU-4 score of 46.218. The return type substitution and return type substitution w/ null handling baselines produce predictions that are identical to Cold for 74.73% and 65.76% of the test examples, respectively, while it is only 9.33% for the reranked edit model. In other words, the baselines attain high scores on automatic metrics and even beat our model on BLEU-4, without actually performing edits on the majority of examples. This further underlines the shortcomings of some of these metrics and the importance of conducting human evaluation for this task. 9.2 Human Evaluation Automatic metrics often fail to incorporate semantic meaning and sentence structure in evaluation as well as accurately capture performance when there is only one gold-standard reference; indeed, these metrics do not align with human judgment in other generation tasks like grammatical error correction (Napoles et al., 2015) and dialogue generation (Liu et al., 2016). Since automatic metrics have not yet been explored in the context of the new task we are proposing, we find it necessary to conduct human evaluation and study whether these metrics are consistent with human judgment. User study design: Our study aims to reflect how a comment update system would be used in practice, such as in an Integrated Development EnBaseline Generation Edit None 18.4% 12.4% 30.2% 55.0% Table 3: Percentage of annotations for which users selected comment suggestions produced by each model. All differences are statistically significant (p < 0.05). vironment (IDE). When developers change code, they would be shown suggestions for updating the existing comment. If they think the comment needs to be updated to reflect the code changes, they could select the one that is most suitable for the new version of the code or edit the existing comment themselves if none of the options are appropriate. We simulated this setting by asking a user to select the most appropriate updated comment from a list of suggestions, given Cold as well as the diff between Mold and Mnew displayed using GitHub’s diff interface. The user can select multiple options if they are equally good or a separate None option if no update is needed or all suggestions are poor. The list of suggestions consists of up to three comments, predicted by the strongest benchmarks and our model : (1) return type substitution w/ null handling, (2) reranked generation model, and (3) reranked edit model, arranged in randomized order. We collapse identical predictions into a single suggestion and reward all associated models if the user selects that comment. Additionally, we remove any prediction that is identical to Cold to avoid confusion as the user should never select such a suggestion. We excluded 6 examples from the test set for which all three models predicted Cold for the updated comment. Nine students (8 graduate/1 undergraduate) and one full-time developer at a large software company, all with 2+ years of Java experience, participated in our study. 
To measure inter-annotator agreement, we ensured that every example was evaluated by two users. We conducted a total of 500 evaluations, across 250 distinct test examples. Results: Table 3 presents the percentage of annotations (out of 500) for which users selected 1860 /**@return item in given position*/ public Complex getComplex(final int i) { return get(i); } Previous Version /**@return item in first position*/ public Complex getComplex() { return get(); } Updated Version Figure 3: Changes in the getComplex method and its corresponding @return comment between two subsequent commits of the eclipse-january project, available on GitHub. comment suggestions that were produced by each model. Using Krippendorff’s α (Krippendorff, 2011) with MASI distance (Passonneau, 2006) (which accommodates our multi-label setting), inter-annotator agreement is 0.64, indicating satisfactory agreement. The reranked edit model beats the strongest baseline and reranked generation by wide statistically-significant margins. From rationales provided by two annotators, we observe that some options were not selected because they removed relevant information from the existing comment, and not surprisingly, these options often corresponded to the comment generation model. Users selected none of the suggested comments 55% of the time, indicating there are many cases for which either the existing comment did not need updating, or comments produced by all models were poor. Based on our inspection of a sample these, we observe that in a large portion of these cases, the comment did not warrant an update. This is consistent with prior work in sentence simplification which shows that, very often, there are sentences that do not need to be simplified (Li and Nenkova, 2015). Despite our efforts to minimize such cases in our dataset through rule-based filtering techniques, we found that many remain. This suggests that it would be beneficial to train a classifier that first determines whether a comment needs to be updated before proposing a revision. Furthermore, the cases for which the existing comment does need to be updated but none of the models produce reasonable predictions illustrate the scope for improvement for our proposed task. 10 Error Analysis We find that our model performs poorly in cases requiring external knowledge and more context than that provided by the given method. For instance, correctly updating the comment shown in Figure 3 requires knowing that get returns the item in the first position if no argument is provided. Our model does not have access to this information, and it fails to generate a reasonable update: “@return complex in given position." On the other hand, the reranked generation model produces “@return the complex value" which is arguably reasonable for the given context. This suggests that incorporating more code context could be beneficial for both models. Furthermore, we find that our model tends to make more mistakes when it must reason about a large amount of code change between Mold and Mnew, and we found that in many such cases, the output of the reranked generation model was better. This suggests that when there are substantial code changes, Mnew effectively becomes a new method, and generating a comment from scratch may be more appropriate. Ensembling generation with our system through a regression model that predicts the extent of editing that is needed may lead to a more generalizable approach that can accommodate such cases. Sample outputs are given in Appendix C. 
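The inter-annotator agreement figure in §9.2 can be reproduced with NLTK's implementation of Krippendorff's alpha, using MASI distance over the set of options each user selected for an example; the records below are illustrative, not the actual study data.

from nltk.metrics.agreement import AnnotationTask
from nltk.metrics.distance import masi_distance

# One record per (annotator, example): the label is the set of suggestions selected.
records = [
    ("u1", "ex1", frozenset({"edit"})),
    ("u2", "ex1", frozenset({"edit", "baseline"})),
    ("u1", "ex2", frozenset({"none"})),
    ("u2", "ex2", frozenset({"none"})),
]

task = AnnotationTask(data=records, distance=masi_distance)
print("Krippendorff's alpha (MASI):", task.alpha())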
11 Ablations We empirically study the effect of training the network to encode explicit code edits and decode explicit comment edits. As discussed in Section 3, the edit model consists of two encoders, one that encodes Cold and another that encodes the code representation, Medit. We conduct experiments in which the code representation instead consists of either (1) Mnew or (2) both Mold and Mnew (encoded separately and hidden states concatenated). Additionally, rather than having the decoder generate comment edits in the form Cedit, we introduce experiments in which it directly generates Cnew, with no intermediate edit sequence. For this, we use only the underlying architecture of the edit model (without features or reranking). The performance for various combinations of input code and target comment representations are shown in Table 4. By comparing performance across combinations consisting of the same input code representation and varying target comment representations, the importance of training the decoder to generate a sequence of edit actions rather than the full updated comment is very evident. Furthermore, comparing across varying code representations under the Cedit target comment representation, it is clear that explicitly encoding the code changes, as Medit, leads to significant improvements across most metrics. We further ablate the features introduced in §5. As shown in Table 5, these features improve performance by wide margins, across all metrics. 1861 Inputs Output xM (%) METEOR BLEU-4 SARI GLEU Cold, Mnew Cnew 5.707‡¶ 29.259† 33.534§ 28.024 30.000∗ Cedit 4.755‡∗ 33.796 43.315 35.516 37.970∥ Cold, Mold, Mnew Cnew 3.714∗ 18.729 20.060 23.914 21.956 Cedit 5.163‡¶ 34.895 44.006∗ 33.479 37.618∥ Cold, Medit Cnew 6.114¶ 29.968† 34.164§ 28.980 30.491∗ Cedit 8.922 36.229 44.283∗ 40.538 39.879 Table 4: Exact match, METEOR, BLEU-4, SARI, and GLEU for various combinations of code input and target comment output configurations. Features and reranking are disabled for all models. Scores for which the difference in performance is not statistically significant (p < 0.05) are indicated with matching symbols. Model xM (%) METEOR BLEU-4 SARI GLEU Models Edit 17.663 42.222 48.217 46.376 45.060 - feats. 8.922† 36.229 44.283 40.538 39.879∗ Reranked models Edit 18.433 44.698 50.717 45.486 46.118 - feats. 8.877† 38.446 46.665 36.924 40.317∗ Table 5: Exact match, METEOR, BLEU-4, SARI, and GLEU scores of ablated models. Scores for which the difference in performance is not statistically significant (p < 0.05) are indicated with matching symbols. 12 Related Work Learning from source code changes: Lee et al. (2019) use rule-based techniques to automatically detect and revise outdated API names in code documentation; however, their approach cannot be extended to full natural language comments that are the focus of this work. Zhai et al. (2020) propose a technique for updating incomplete and buggy comments by propagating comments from different code elements (e.g., variables, methods, classes) based on program analysis and several heuristics. Rather than simply copying a related comment, we aim to revise an outdated comment by reasoning about code changes. Yin et al. (2019) present an approach for learning structural and semantic properties of source code edits so that they can be generalized to new code inputs. Similar to their work, we learn vector representations from source code changes; however, unlike their setting, we apply these representations to natural language. 
Prior work in automatic commit message generation aims to learn from code changes in order to generate a natural language summary of these changes (Loyola et al., 2017; Jiang et al., 2017; Xu et al., 2019). Instead of generating natural language content from scratch as done in their work, we focus on applying edits to existing natural language text. We also show that generating a comment from scratch does not perform as well as our proposed edit model for the comment update setting. Editing natural language text: Approaches for editing natural language text have been studied extensively through tasks such as sentence simplification (Dong et al., 2019), style transfer (Li et al., 2018), grammatical error correction (Awasthi et al., 2019), and language modeling (Guu et al., 2018). The focus of this prior work is to revise sentences to conform to stylistic and grammatical conventions, and it does not generally consider broader contextual constraints. On the contrary, our goal is not to make cosmetic revisions to a given span of text, but rather amend its semantic meaning to be in sync with the content of a separate body of information on which it is dependent. More recently, Shah et al. (2020) proposed an approach for rewriting an outdated sentence based on a sentence stating a new factual claim, which is more closely aligned with our task. However, in our case, the separate body of information is not natural language and is generally much longer than a single sentence. 13 Conclusion We have addressed the novel task of automatically updating an existing programming comment based on changes to the related code. We designed a new approach for this task which aims to correlate cross-modal edits in order to generate a sequence of edit actions specifying how the comment should be updated. We find that our model outperforms multiple rule-based baselines and comment generation models, with respect to several automatic metrics and human evaluation. Acknowledgements We thank reviewers for their feedback on this work and the participants of our user study for their time. This work was partially supported by a Google Faculty Research Award and the US National Science Foundation under Grant Nos. CCF-1652517 and IIS-1850153. 1862 References Miltiadis Allamanis. 2019. The adverse effects of code duplication in machine learning models of code. In SPLASH, Onward!, pages 143–153. Miltiadis Allamanis, Hao Peng, and Charles Sutton. 2016. A convolutional attention network for extreme summarization of source code. In International Conference on Machine Learning, pages 2091–2100. Uri Alon, Shaked Brody, Omer Levy, and Eran Yahav. 2019. code2seq: Generating sequences from structured representations of code. In International Conference on Learning Representations. Fernando Alva-Manchego, Louis Martin, Carolina Scarton, and Lucia Specia. 2019. EASSE: Easier automatic sentence simplification evaluation. In Conference on Empirical Methods in Natural Language Processing and International Joint Conference on Natural Language Processing: System Demonstrations, pages 49–54. Abhijeet Awasthi, Sunita Sarawagi, Rasna Goyal, Sabyasachi Ghosh, and Vihari Piratla. 2019. Parallel iterative edit models for local sequence transduction. In Conference on Empirical Methods in Natural Language Processing and International Joint Conference on Natural Language Processing, pages 4251–4261. Satanjeev Banerjee and Alon Lavie. 2005. Meteor: An automatic metric for MT evaluation with improved correlation with human judgments. 
In Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65– 72. Taylor Berg-Kirkpatrick, David Burkett, and Dan Klein. 2012. An empirical investigation of statistical significance in NLP. In Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 995–1005. Kyunghyun Cho, Bart van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder–decoder for statistical machine translation. In Conference on Empirical Methods in Natural Language Processing, pages 1724–1734. Yue Dong, Zichao Li, Mehdi Rezagholizadeh, and Jackie Chi Kit Cheung. 2019. EditNTS: An neural programmer-interpreter model for sentence simplification through explicit editing. In Annual Meeting of the Association for Computational Linguistics, pages 3393–3402. Patrick Fernandes, Miltiadis Allamanis, and Marc Brockschmidt. 2019. Structured neural summarization. In International Conference on Learning Representations. Kelvin Guu, Tatsunori B Hashimoto, Yonatan Oren, and Percy Liang. 2018. Generating sentences by editing prototypes. Transactions of the Association for Computational Linguistics, 6:437–450. Xing Hu, Ge Li, Xin Xia, David Lo, and Zhi Jin. 2018. Deep code comment generation. In International Conference on Program Comprehension, pages 200– 210. Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, and Luke Zettlemoyer. 2016. Summarizing source code using a neural attention model. In Annual Meeting of the Association for Computational Linguistics, pages 2073–2083. Siyuan Jiang, Ameer Armaly, and Collin McMillan. 2017. Automatically generating commit messages from diffs using neural machine translation. In International Conference on Automated Software Engineering, pages 135–146. Wei-Jen Ko, Greg Durrett, and Junyi Jessy Li. 2019. Linguistically-informed specificity and semantic plausibility for dialogue generation. In North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3456–3466. Klaus Krippendorff. 2011. Computing Krippendorff’s alpha reliability. Technical report, University of Pennsylvania. Reno Kriz, João Sedoc, Marianna Apidianaki, Carolina Zheng, Gaurav Kumar, Eleni Miltsakaki, and Chris Callison-Burch. 2019. Complexity-weighted loss and diverse reranking for sentence simplification. In North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3137–3147. Seonah Lee, Rongxin Wu, S.C. Cheung, and Sungwon Kang. 2019. Automatic detection and update suggestion for outdated API names in documentation. Transactions on Software Engineering. Juncen Li, Robin Jia, He He, and Percy Liang. 2018. Delete, retrieve, generate: a simple approach to sentiment and style transfer. In North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1865–1874. Junyi Jessy Li and Ani Nenkova. 2015. Fast and accurate prediction of sentence specificity. In AAAI Conference on Artificial Intelligence, pages 2281–2287. Yuding Liang and Kenny Q. Zhu. 2018. Automatic generation of text descriptive comments for code blocks. In AAAI Conference on Artificial Intelligence, pages 5229–5236. Chia-Wei Liu, Ryan Lowe, Iulian Serban, Mike Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. 
How NOT to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. In Conference on 1863 Empirical Methods in Natural Language Processing, pages 2122–2132. Pablo Loyola, Edison Marrese-Taylor, and Yutaka Matsuo. 2017. A neural architecture for generating natural language descriptions from source code changes. In Annual Meeting of the Association for Computational Linguistics, pages 287–292. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Conference on Empirical Methods in Natural Language Processing, pages 1412–1421. Dana Movshovitz-Attias and William W. Cohen. 2013. Natural language models for predicting programming comments. In Annual Meeting of the Association for Computational Linguistics, pages 35–40. Courtney Napoles, Keisuke Sakaguchi, Matt Post, and Joel Tetreault. 2015. Ground truth for grammatical error correction metrics. In Annual Meeting of the Association for Computational Linguistics and the International Joint Conference on Natural Language Processing, pages 588–593. Graham Neubig, Makoto Morishita, and Satoshi Nakamura. 2015. Neural reranking improves subjective quality of machine translation: NAIST at WAT2015. In Workshop on Asian Translation, pages 35–41. Sheena Panthaplackel, Milos Gligoric, Raymond J. Mooney, and Junyi Jessy Li. 2020. Associating natural language comment and source code entities. In AAAI Conference on Artificial Intelligence. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Annual Meeting of the Association for Computational Linguistics, pages 311–318. Rebecca Passonneau. 2006. Measuring agreement on set-valued items (MASI) for semantic and pragmatic annotation. In International Conference on Language Resources and Evaluation. Inderjot Kaur Ratol and Martin P. Robillard. 2017. Detecting fragile comments. International Conference on Automated Software Engineering, pages 112– 122. Darsh J. Shah, Tal Schuster, and Regina Barzilay. 2020. Automatic fact-guided sentence modification. In AAAI Conference on Artificial Intelligence. Richard Shin, Illia Polosukhin, and Dawn Song. 2018. Towards specification-directed program repair. In International Conference on Learning Representations Workshop. Akhilesh Sudhakar, Bhargav Upadhyay, and Arjun Maheswaran. 2019. “Transforming” delete, retrieve, generate approach for controlled text style transfer. In Conference on Empirical Methods in Natural Language Processing and the International Joint Conference on Natural Language Processing, pages 3267–3277. Lin Tan, Ding Yuan, Gopal Krishna, and Yuanyuan Zhou. 2007. /*iComment: Bugs or bad comments?*/. In Symposium on Operating Systems Principles, pages 145–158. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Advances in Neural Information Processing Systems, pages 2692–2700. Shengbin Xu, Yuan Yao, Feng Xu, Tianxiao Gu, Hanghang Tong, and Jian Lu. 2019. Commit message generation for source code changes. In International Joint Conference on Artificial Intelligence, pages 3975–3981. Wei Xu, Courtney Napoles, Ellie Pavlick, Quanze Chen, and Chris Callison-Burch. 2016. 
Optimizing statistical machine translation for text simplification. Transactions of the Association for Computational Linguistics, 4:401–415. Ziyu Yao, Daniel S. Weld, Wei-Peng Chen, and Huan Sun. 2018. StaQC: A systematically mined question-code dataset from Stack Overflow. In International Conference on World Wide Web, pages 1693–1703. Pengcheng Yin, Bowen Deng, Edgar Chen, Bogdan Vasilescu, and Graham Neubig. 2018. Learning to mine aligned code and natural language pairs from Stack Overflow. In International Conference on Mining Software Repositories, pages 476–486. Pengcheng Yin, Graham Neubig, Miltiadis Allamanis, Marc Brockschmidt, and Alexander L. Gaunt. 2019. Learning to represent edits. In International Conference on Learning Representations. Juan Zhai, Xiangzhe Xu, Yu Shi, Guanhong Tao, Minxue Pan, Shiqing Ma, Lei Xu, Weifeng Zhang, Lin Tan, and Xiangyu Zhang. 2020. CPC: Automatically classifying and propagating natural language comments via program analysis. In International Conference on Software Engineering. 1864 Train Valid Test Total actions 8,350 1,038 1,046 Avg. # actions per example 1.44 1.46 1.42 Replace 51.9% 49.7% 50.1% ReplaceKeepBefore 2.9% 2.6% 3.5% ReplaceKeepAfter 0.7% 0.3% 0.4% InsertKeepBefore 21.5% 24.1% 23.2% InsertKeepAfter 4.2% 4.0% 3.3% Delete 17.4% 18.0% 17.8% DeleteKeepBefore 1.3% 0.7% 1.1% DeleteKeepAfter 0.2% 0.5% 0.6% Table 6: Total number of edit actions; average number of edit actions per example; percentage of total actions that is accounted by each edit action type. A Modified Comment Edit Lexicon We first transform insertions and ambiguous deletions into a structure that resembles Replace, characterized by InsertOld/InsertNew and DeleteOld/DeleteNew spans respectively. Next, we require the span of tokens attached to ReplaceOld, InsertOld, and DeleteOld to be unique across Cold so that we can uniquely identify the edit location. We enforce this by iteratively searching through unchanged tokens before and after the span, incorporating additional tokens into the span, until the span becomes unique. These added tokens are then included in both components of the action. For instance, if the last A is to be replaced with C in ABA, the ReplaceOld span would be BA and the ReplaceNew span would be BC. We also augment the edit types to differentiate between the various scenarios that may arise from this search procedure. Replace actions for which this procedure is performed deviate from the typical nature of Replace in which there is no overlap between the spans attached to ReplaceOld and ReplaceNew. This is because the tokens that are added to make the ReplaceOld span unique will appear in both spans. These tokens, which are effectively kept between Cold and Cnew, could appear before or after the edit location. We differentiate between these scenarios by augmenting the edit lexicon with new edit types. In addition to Replace, we have ReplaceKeepBefore and ReplaceKeepAfter to signify that the action entails retaining some content before or after, respectively. We include the same for the other types as well with InsertKeepBefore, InsertKeepAfter, DeleteKeepBefore, DeleteKeepAfter. Table 6 shows statistics on how often each of these edit actions are used. We present more details about the actions in the sections that follow. 
A.1 Replacements Replace This action is defined as shown below: <ReplaceOld>[old span] <ReplaceNew>[new span] <ReplaceEnd> It prescribes that the tokens attached to ReplaceOld are deleted and the tokens attached to ReplaceNew are inserted in their place. There is almost never overlap between the span of tokens attached to ReplaceOld and ReplaceNew. Example: if B is to be replaced with C in Cold=AB to produce Cnew=AC, the corresponding Cedit is: <ReplaceOld>B <ReplaceNew>C <ReplaceEnd> Note that the span attached to ReplaceOld must be unique across Cold for this edit type to be used. ReplaceKeepBefore This action is defined as shown below: <ReplaceOldKeepBefore>[old span] <ReplaceNewKeepBefore>[new span] <ReplaceEnd> Replace is transformed into this structure if the span attached to ReplaceOld is not unique. For example, suppose the first B is to be replaced with D in Cold=ABCB to produce Cnew=ADCB. If Cedit consists of a ReplaceOld span carrying just B, it is not obvious whether the first or last B should be replaced. To address this, we introduce a new edit type, ReplaceKeepBefore, which forms a unique span by searching before the edit location. It prescribes that the tokens attached to ReplaceOldKeepBefore are deleted and the tokens attached to ReplaceNewKeepBefore are inserted in their place. Unlike Replace, there will be some overlap at the beginning of the spans attached to ReplaceOldKeepBefore and ReplaceNewKeepBefore. To represent edits Cold=ABCB to produce Cnew=ADCB, Cedit is: <ReplaceOldKeepBefore> AB <ReplaceNewKeepBefore> AD <ReplaceEnd> The span attached to ReplaceOldKeepBefore is unique, making it clear that the first B is to be replaced with D. It also indicates that we are effectively keeping A, before the edit location. 1865 ReplaceKeepAfter This action is defined as shown below: <ReplaceOldKeepAfter>[old span] <ReplaceNewKeepAfter>[new span] <ReplaceEnd> Replace is transformed into this structure if the span attached to ReplaceOld is not unique and ReplaceKeepBefore cannot be used because we are unable to find a unique sequence of unchanged tokens before the edit location. For example, suppose the first B is to be replaced with D in Cold=ABCAB to produce Cnew=ADCAB. Searching before the edit location, we find only AB, which is not unique across Cold, and so it would still not be clear which B is to be edited. To address this, we introduce a new edit type, ReplaceKeepAfter, which forms a unique span by searching after the edit location. It prescribes that the tokens attached to ReplaceOldKeepAfter are deleted and the tokens attached to ReplaceNewKeepAfter are inserted in their place. Unlike Replace and ReplaceKeepBefore, there will be some overlap at the end of the spans attached to ReplaceOldKeepAfter and ReplaceNewKeepAfter. Therefore, to represent editing Cold=ABCAB to produce Cnew=ADCAB, Cedit is: <ReplaceOldKeepAfter> BC <ReplaceNewKeepAfter> DC <ReplaceEnd> The span attached to ReplaceOldKeepAfter is unique, making it clear that the first B is to be replaced with D. It also indicates that we are effectively keeping C, which appears after the edit location. A.2 Insertions We disregard basic Insert actions since it is always ambiguous where an insertion should occur without an anchor point. Following what is done for ambiguous Replace actions, we introduce InsertKeepBefore and InsertKeepAfter. 
InsertKeepBefore This action is defined as shown below: <InsertOldKeepBefore>[old span] <InsertNewKeepBefore>[new span] <InsertEnd> In this representation, the span of tokens attached to InsertOldKeepBefore must be unique and serve as the anchor point for where the new tokens should be inserted. We do this by searching before the edit location. The structure is identical to that of ReplaceKeepBefore in that the tokens attached to InsertOldKeepBefore are replaced with the tokens in InsertNewKeepBefore and that there is some overlap at the beginning of the two spans. As an example, suppose C is to be inserted at the end of Cold=AB to form Cnew=ABC. Then the corresponding Cedit is as follows: <InsertKeepBefore> B <InsertNewKeepBefore> BC <InserteEnd> This states that we are effectively inserting C and keeping B, which appears before the edit location. InsertKeepAfter This action is defined as shown below: <InsertOldKeepAfter>[old span] <InsertNewKeepAfter>[new span] <InsertEnd> We rely on this when we are unable to use InsertKeepBefore because we cannot find a unique span of tokens to identify the anchor point, by searching before the edit location. For instance, suppose C is to be inserted at the beginning of Cold=AB to form Cnew=CAB. There are no tokens that appear before the insert point, so we instead choose to search after. The structure is identical to that of ReplaceKeepAfter in that the tokens attached to InsertOldKeepAfter are replaced with the tokens in InsertNewKeepAfter and that there is some overlap at the end of the two spans. The corresponding Cedit from our example is as follows: <InsertKeepAfter> A <InsertNewKeepAfter> CA <InserteEnd> This states that we are effectively inserting C and keeping A, which appears after the edit location. A.3 Deletions Delete This action is defined as shown below: <Delete>[old span]<DeleteEnd> It prescribes that the tokens that appear in the Delete span are removed from Cold. Example: if B is to be deleted from Cold=AB to produce Cnew=A, the corresponding Cedit is: <Delete>B<DeleteEnd> Note that the Delete span must be unique across Cold for this edit type to be used. 1866 DeleteKeepBefore This action is defined as shown below: <DeleteOldKeepBefore>[old span] <DeleteNewKeepBefore>[new span] <DeleteEnd> Delete is transformed into this structure if the Delete span is not unique. For example, suppose the first B is to be deleted from Cold=ABCB to produce Cnew=ACB. From just Cedit=<Delete>B<DeleteEnd>, it is unclear which B is to be deleted. To address this, we introduce a new edit type, DeleteKeepBefore, which forms a unique span by searching before the edit location. The structure is identical to that of ReplaceKeepBefore in that the tokens attached to DeleteOldKeepBefore are replaced with the tokens in DeleteNewKeepBefore and that there is some overlap at the beginning of the two spans. For the example under consideration, the corresponding Cedit is given below: <DeleteOldKeepBefore> AB <DeleteNewKeepBefore> A <DeleteEnd> The span attached to DeleteOldKeepBefore is unique, making it clear that the first B is to be deleted. It also indicates that we are effectively keeping A, which appears before the edit location. DeleteKeepAfter This action is defined as shown below: <DeleteOldKeepAfter>[old span] <DeleteNewKeepAfter>[new span] <DeleteEnd> Delete is transformed into this structure if the Delete span is not unique and DeleteKeepBefore cannot be used because we are unable to find a unique sequence of unchanged tokens before the edit location. 
For example, suppose the first B is to be deleted from Cold=ABCAB to produce Cnew=ACAB. Searching before the edit location, we find only AB, which is not unique across Cold, and so it would still not be clear which B is to be deleted. To address this, we introduce a new edit type, DeleteKeepAfter, which forms a unique span by searching after the edit location. The structure is identical to that of ReplaceKeepAfter in that the tokens attached to DeleteOldKeepAfter are replaced with the tokens in DeleteNewKeepAfter and that there is some overlap at the end of the two spans. For the example under consideration, Cedit is as follows: <DeleteOldKeepAfter> BC <DeleteNewKeepAfter> C <DeleteEnd> The span attached to DeleteOldKeepAfter is unique, making it clear that the first B is to be deleted. It also indicates that we are effectively keeping C, which appears after the edit location. B Data Filtering As done in Panthaplackel et al. (2020), we apply heuristics to reduce the number of cases in which the code and comment changes are unrelated. First, because we focus on @return comments that pertain to the return values of a given method, we discard any example in which the code change does not entail either a change to the return type or at least one return statement. Then, to identify the correct mapping of two versions of a method among other changes in a commit, we focus on the code changes that preserve the method names. It may happen sometimes that developers change the method name as well as code and comment in one commit, but we leave it as future work to improve this filtering heuristic. Next, we attempt to remove examples in which the comment change appears to be purely stylistic (e.g. spelling corrections, reformatting, and rephrasing). Furthermore, prior work (Allamanis, 2019) has shown that duplication can adversely affect evaluation of machine learning models for code and language tasks. For this reason, we remove duplicates from our corpus. Despite having mined commit histories for thousands of projects, upon filtering, we are left with a total of 7,239 examples belonging to 1,081 different projects. This demonstrates the challenge of collecting large datasets with relatively low levels of noise in this domain. Although online code resources like GitHub and StackOverflow host large quantities of data that can be exploited for transduction tasks between source code and natural language, prior work has shown that much of this data is unusable without cleaning (Yin et al., 2018). Some have used rule-based techniques to do data cleaning (Allamanis et al., 2016; Hu et al., 2018; Fernandes et al., 2019), and others train classifiers on hand-labeled examples that can be applied to a much larger pool of examples in order to differentiate between clean and noisy examples (Iyer et al., 2016; Yao et al., 2018; Yin et al., 2018). Most of these approaches focus on code summarization or comment generation which only require single code-NL pairs for training and evaluation 1867 as the task entails generating a natural language summary of a given code snippet. On the contrary, our proposed task requires two code-NL pairs that are assumed to hold specific parallel relationships with one another. Namely, the relationship between Cnew and Mnew is expected to be similar to that of Cold and Mold. The relationship between Cnew and Cold is expected to correlate with the relationship between Mnew and Mold. 
Not only does having four moving parts in one example magnify noise, but the need to hold these relationships makes data cleaning particularly difficult. We leave building classifiers for aiding this process as future work. C Sample Output In Table 7, we show predictions for various examples in the test set. 1868 Examples Project: ariejan-slick2d public float getX() { return center[NUM]; } public float getX() { + if (left == null) { + calculateLeft(); + } + return left.floatValue(); } Old: @return the x location of the center of this circle Base: @return the x location of the center of this circle or null if null Gen: @return the x of the angle in this vector Edit: @return the x location of the left of this circle Gold: @return the x location of the left side of this shape . Project: jackyglony-objectiveeclipse private IProject getProject() { return managedTarget.getOwner().getProject(); } private IProject getProject() { + return (IProject) managedProject.getOwner(); } Old: @return the iproject associated with the target Base: @return the iproject associated with the target Gen: @return the iproject Edit: @return the iproject associated with the project Gold: @return the iproject associated with the managed project Project: rajawali-rajawali public double getRotX() { return mOrientation.getRotationX(); } public double getRotX() { + return Math.toDegrees(mOrientation.getRotationX()); } Old: @return double the roll euler angle . Base: @return double the roll euler angle . Gen: @return the rot x . Edit: @return parsed double the roll euler angle . Gold: @return double the roll euler angle in degrees . Project: Qihoo360-RePlugin -public static <T extends Collection<?>> T validIndex(final T collection, final int index) { return validIndex(collection, index, DEFAULT_VALID_INDEX_COLLECTION_EX_MESSAGE, Integer.valueOf(index)); } +public static <T extends CharSequence> T validIndex(final T chars, final int index) { + return validIndex(chars, index, + DEFAULT_VALID_INDEX_CHAR_SEQUENCE_EX_MESSAGE, Integer.valueOf(index)); } Old: @return the validated collection ( never null for method chaining ) Base: @return the validated collection ( never null for method chaining ) Gen: @return the index Edit: @return the validated char sequence ( never null for method chaining ) Gold: @return the validated character sequence ( never null for method chaining ) Project: orfjackal-hourparser public Date getStart() { if (records.size() == NUM) { return null; } else { Date first = records.get(NUM).getDate(); for (Entry e : records) { if (e.getDate().before(first)) { first = e.getDate(); } } return first; } } public Date getStart() { if (records.size() == NUM) { + return new Date(); } else { Date first = records.get(NUM).getDate(); for (Entry e : records) { if (e.getDate().before(first)) { first = e.getDate(); } } return first; } } Old: @return the time of the first record or null if there are no records Base: @return the time of the first record or null if there are no records Gen: @return the date , or null if not available Edit: @return the time of the first record or date if there are no records Gold: @return the time of the first record , or the current time if there are no records Table 7: Examples from open-source software projects. 
For each example, we show the diff between the two versions of the method (left: old version, right: new version, diff lines are highlighted), the existing @return comment prior to being updated (left), and predictions made by the return type substitution w/ null handling baseline, reranked generation model, and reranked edit model, and the gold updated comment (right, from top to bottom).
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1869–1881 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 1869 Politeness Transfer: A Tag and Generate Approach Aman Madaan ∗, Amrith Setlur ∗, Tanmay Parekh ∗, Barnabas Poczos, Graham Neubig, Yiming Yang, Ruslan Salakhutdinov, Alan W Black, Shrimai Prabhumoye School of Computer Science Carnegie Mellon University Pittsburgh, PA, USA {amadaan, asetlur, tparekh}@cs.cmu.edu Abstract This paper introduces a new task of politeness transfer which involves converting non-polite sentences to polite sentences while preserving the meaning. We also provide a dataset of more than 1.39 million instances automatically labeled for politeness to encourage benchmark evaluations on this new task. We design a tag and generate pipeline that identifies stylistic attributes and subsequently generates a sentence in the target style while preserving most of the source content. For politeness as well as five other transfer tasks, our model outperforms the state-of-the-art methods on automatic metrics for content preservation, with a comparable or better performance on style transfer accuracy. Additionally, our model surpasses existing methods on human evaluations for grammaticality, meaning preservation and transfer accuracy across all the six style transfer tasks. The data and code is located at https:// github.com/tag-and-generate/ 1 Introduction Politeness plays a crucial role in social interaction, and is closely tied with power dynamics, social distance between the participants of a conversation, and gender (Brown et al., 1987; DanescuNiculescu-Mizil et al., 2013). It is also imperative to use the appropriate level of politeness for smooth communication in conversations (Coppock, 2005), organizational settings like emails (Peterson et al., 2011), memos, official documents, and many other settings. Notably, politeness has also been identified as an interpersonal style which can be decoupled from content (Kang and Hovy, 2019). Motivated by its central importance, in this paper we study the task of converting non-polite sentences to polite sentences while preserving the meaning. Prior work on text style transfer (Shen et al., 2017; Li et al., 2018; Prabhumoye et al., 2018; ∗authors contributed equally to this work. Rao and Tetreault, 2018; Xu et al., 2012; Jhamtani et al., 2017) has not focused on politeness as a style transfer task, and we argue that defining it is cumbersome. While native speakers of a language and cohabitants of a region have a good working understanding of the phenomenon of politeness for everyday conversation, pinning it down as a definition is non-trivial (Meier, 1995). There are primarily two reasons for this complexity. First, as noted by (Brown et al., 1987), the phenomenon of politeness is rich and multifaceted. Second, politeness of a sentence depends on the culture, language, and social structure of both the speaker and the addressed person. For instance, while using “please” in requests made to the closest friends is common amongst the native speakers of North American English, such an act would be considered awkward, if not rude, in the Arab culture (K´ad´ar and Mills, 2011). We circumscribe the scope of politeness for the purpose of this study as follows: First, we adopt the data driven definition of politeness proposed by (Danescu-Niculescu-Mizil et al., 2013). 
Second, we base our experiments on a dataset derived from the Enron corpus (Klimt and Yang, 2004) which consists of email exchanges in an American corporation. Thus, we restrict our attention to the notion of politeness as widely accepted by the speakers of North American English in a formal setting. Even after framing politeness transfer as a task, there are additional challenges involved that differentiate politeness from other styles. Consider a common directive in formal communication, “send me the data”. While the sentence is not impolite, a rephrasing “could you please send me the data” would largely be accepted as a more polite way of phrasing the same statement (DanescuNiculescu-Mizil et al., 2013). This example brings out a distinct characteristic of politeness. It is easy to pinpoint the signals for politeness. However, 1870 cues that signal the absence of politeness, like direct questions, statements and factuality (DanescuNiculescu-Mizil et al., 2013), do not explicitly appear in a sentence, and are thus hard to objectify. Further, the other extreme of politeness, impolite sentences, are typically riddled with curse words and insulting phrases. While interesting, such cases can typically be neutralized using lexicons. For our study, we focus on the task of transferring the non-polite sentences to polite sentences, where we simply define non-politeness to be the absence of both politeness and impoliteness. Note that this is in stark contrast with the standard style transfer tasks, which involve transferring a sentence from a well-defined style polarity to the other (like positive to negative sentiment). We propose a tag and generate pipeline to overcome these challenges. The tagger identifies the words or phrases which belong to the original style and replaces them with a tag token. If the sentence has no style attributes, as in the case for politeness transfer, the tagger adds the tag token in positions where phrases in the target style can be inserted. The generator takes as input the output of the tagger and generates a sentence in the target style. Additionally, unlike previous systems, the outputs of the intermediate steps in our system are fully realized, making the whole pipeline interpretable. Finally, if the input sentence is already in the target style, our model won’t add any stylistic markers and thus would allow the input to flow as is. We evaluate our model on politeness transfer as well as 5 additional tasks described in prior work (Shen et al., 2017; Prabhumoye et al., 2018; Li et al., 2018) on content preservation, fluency and style transfer accuracy. Both automatic and human evaluations show that our model beats the stateof-the-art methods in content preservation, while either matching or improving the transfer accuracy across six different style transfer tasks(§5). The results show that our technique is effective across a broad spectrum of style transfer tasks. Our methodology is inspired by Li et al. (2018) and improves upon several of its limitations as described in (§2). Our main contribution is the design of politeness transfer task. To this end, we provide a large dataset of nearly 1.39 million sentences labeled for politeness (https://github.com/tag-and-generate/ politeness-dataset). Additionally, we hand curate a test set of 800 samples (from Enron emails) which are annotated as requests. To the best of our knowledge, we are the first to undertake politeness as a style transfer task. 
In the process, we highlight an important class of problems wherein the transfer involves going from a neutral style to the target style. Finally, we design a “tag and generate” pipeline that is particularly well suited for tasks like politeness, while being general enough to match or beat the performance of the existing systems on popular style transfer tasks. 2 Related Work Politeness and its close relation with power dynamics and social interactions has been well documented (Brown et al., 1987). Recent work (Danescu-Niculescu-Mizil et al., 2013) in computational linguistics has provided a corpus of requests annotated for politeness curated from Wikipedia and StackExchange. Niu and Bansal (2018) uses this corpus to generate polite dialogues. Their work focuses on contextual dialogue response generation as opposed to content preserving style transfer, while the latter is the central theme of our work. Prior work on Enron corpus (Yeh and Harnly, 2006) has been mostly from a socio-linguistic perspective to observe social power dynamics (Bramsen et al., 2011; McCallum et al., 2007), formality (Peterson et al., 2011) and politeness (Prabhakaran et al., 2014). We build upon this body of work by using this corpus as a source for the style transfer task. Prior work on style transfer has largely focused on tasks of sentiment modification (Hu et al., 2017; Shen et al., 2017; Li et al., 2018), caption transfer (Li et al., 2018), persona transfer (Chandu et al., 2019; Zhang et al., 2018), gender and political slant transfer (Reddy and Knight, 2016; Prabhumoye et al., 2018), and formality transfer (Rao and Tetreault, 2018; Xu et al., 2019). Note that formality and politeness are loosely connected but independent styles (Kang and Hovy, 2019). We focus our efforts on carving out a task for politeness transfer and creating a dataset for such a task. Current style transfer techniques (Shen et al., 2017; Hu et al., 2017; Fu et al., 2018; Yang et al., 2018; John et al., 2019) try to disentangle source style from content and then combine the content with the target style to generate the sentence in the target style. Compared to prior work, “Delete, Retrieve and Generate” (Li et al., 2018) (referred to as DRG henceforth) and its extension (Sudhakar et al., 2019) are effective methods to generate out1871 puts in the target style while having a relatively high rate of source content preservation. However, DRG has several limitations: (1) the delete module often marks content words as stylistic markers and deletes them, (2) the retrieve step relies on the presence of similar content in both the source and target styles, (3) the retrieve step is time consuming for large datasets, (4) the pipeline makes the assumption that style can be transferred by deleting stylistic markers and replacing them with target style phrases, (5) the method relies on a fixed corpus of style attribute markers, and is thus limited in its ability to generalize to unseen data during test time. Our methodology differs from these works as it does not require the retrieve stage and makes no assumptions on the existence of similar content phrases in both the styles. This also makes our pipeline faster in addition to being robust to noise. Wu et al. (2019) treats style transfer as a conditional language modelling task. It focuses only on sentiment modification, treating it as a cloze form task of filling in the appropriate words in the target sentiment. In contrast, we are capable of generating the entire sentence in the target style. 
Further, our work is more generalizable and we show results on five other style transfer tasks. 3 Tasks and Datasets 3.1 Politeness Transfer Task For the politeness transfer task, we focus on sentences in which the speaker communicates a requirement that the listener needs to fulfill. Common examples include imperatives “Let’s stay in touch” and questions that express a proposal “Can you call me when you get back?”. Following Jurafsky et al. (1997), we use the umbrella term “action-directives” for such sentences. The goal of this task is to convert action-directives to polite requests. While there can be more than one way of making a sentence polite, for the above examples, adding gratitude (“Thanks and let’s stay in touch”) or counterfactuals (“Could you please call me when you get back?”) would make them polite (Danescu-Niculescu-Mizil et al., 2013). Data Preparation The Enron corpus (Klimt and Yang, 2004) consists of a large set of email conversations exchanged by the employees of the Enron corporation. Emails serve as a medium for exchange of requests, serving as an ideal application for politeness transfer. We begin by pre-processing the raw Enron corpus following Shetty and Adibi (2004). The first set of pre-processing1 steps and de-duplication yielded a corpus of roughly 2.5 million sentences. Further pruning2 led to a cleaned corpus of over 1.39 million sentences. Finally, we use a politeness classifier (Niu and Bansal, 2018) to assign politeness scores to these sentences and filter them into ten buckets based on the score (P0P9; Fig. 1). All the buckets are further divided into train, test, and dev splits (in a 80:10:10 ratio). For our experiments, we assumed all the sentences with a politeness score of over 90% by the classifier to be polite, also referred as the P9 bucket (marked in green in Fig. 1). We use the train-split of the P9 bucket of over 270K polite sentences as the training data for the politeness transfer task. Since the goal of the task is making action directives more polite, we manually curate a test set comprising of such sentences from test splits across the buckets. We first train a classifier on the switchboard corpus (Jurafsky et al., 1997) to get dialog state tags and filter sentences that have been labeled as either action-directive or quotation.3 Further, we use human annotators to manually select the test sentences. The annotators had a Fleiss’s Kappa score (κ) of 0.774 and curated a final test set of 800 sentences. Figure 1: Distribution of Politeness Scores for the Enron Corpus In Fig. 2, we examine the two extreme buckets with politeness scores of < 10% (P0 bucket) and > 90% (P9 bucket) from our corpus by plotting 1Pre-processing also involved steps for tokenization (done using spacy (Honnibal and Montani, 2017)) and conversion to lower case. 2We prune the corpus by removing the sentences that 1) were less than 3 words long, 2) had more than 80% numerical tokens, 3) contained email addresses, or 4) had repeated occurrences of spurious characters. 3We used AWD-LSTM based classifier for classification of action-directive. 4The score was calculated for 3 annotators on a sample set of 50 sentences. 1872 10 of the top 30 words occurring in each bucket. We clearly notice that words in the P9 bucket are closely linked to polite style, while words in the P0 bucket are mostly content words. 
This substantiates our claim that the task of politeness transfer is fundamentally different from other attribute transfer tasks like sentiment where both the polarities are clearly defined. Figure 2: Probability of occurrence for 10 of the most common 30 words in the P0 and P9 data buckets 3.2 Other Tasks The Captions dataset (Gan et al., 2017) has image captions labeled as being factual, romantic or humorous. We use this dataset to perform transfer between these styles. This task parallels the task of politeness transfer because much like in the case of politeness transfer, the captions task also involves going from a style neutral (factual) to a style rich (humorous or romantic) parlance. For sentiment transfer, we use the Yelp restaurant review dataset (Shen et al., 2017) to train, and evaluate on a test set of 1000 sentences released by Li et al. (2018). We also use the Amazon dataset of product reviews (He and McAuley, 2016). We use the Yelp review dataset labelled for the Gender of the author, released by Prabhumoye et al. (2018) compiled from Reddy and Knight (2016). For the Political slant task (Prabhumoye et al., 2018), we use dataset released by Voigt et al. (2018). 4 Methodology We are given non-parallel samples of sentences X1 = {x(1) 1 . . . x(1) n } and X2 = {x(2) 1 . . . x(2) m } from styles S1 and S2 respectively. The objective of the task is to efficiently generate samples ˆX1 = {ˆx(2) 1 . . . ˆx(2) n } in the target style S2, conditioned on samples in X1. For a style Sv where v ∈{1, 2}, we begin by learning a set of phrases (Γv) which characterize the style Sv. The presence of phrases from Γv in a sentence xi would associate the sentence with the style Sv. For example, phrases like “pretty good” and “worth every penny” are characteristic of the “positive” style in the case of sentiment transfer task. We propose a two staged approach where we first infer a sentence z(xi) from x(1) i using a model, the tagger. The goal of the tagger is to ensure that the sentence z(xi) is agnostic to the original style (S1) of the input sentence. Conditioned on z(xi), we then generate the transferred sentence ˆx(2) i in the target style S2 using another model, the generator. The intermediate variable z(xi) is also seen in other style-transfer methods. Shen et al. (2017); Prabhumoye et al. (2018); Yang et al. (2018); Hu et al. (2017) transform the input x(v) i to a latent representation z(xi) which (ideally) encodes the content present in x(v) i while being agnostic to style Sv. In these cases z(xi) encodes the input sentence in a continuous latent space whereas for us z(xi) manifests in the surface form. The ability of our pipeline to generate observable intermediate outputs z(xi) makes it somewhat more interpretable than those other methods. We train two independent systems for the tagger & generator which have complimentary objectives. The former identifies the style attribute markers a(x(1) i ) from source style S1 and either replaces them with a positional token called [TAG] or merely adds these positional tokens without removing any phrase from the input x(1) i . This particular capability of the model enables us to generate these tags in an input that is devoid of any attribute marker (i.e. a(x(1) i ) = {}). This is one of the major differences from prior works which mainly focus on removing source style attributes and then replacing them with the target style attributes. 
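To make this capability concrete, the intended data flow for a marker-free (action-directive) input looks roughly as follows. The strings are illustrative stand-ins for the outputs of the trained tagger and generator described in Sections 4.2 and 4.3, not actual model predictions.

```python
# Stage 1 (tagger): the input carries no impolite attribute markers to delete,
# so positional [TAG]t tokens are added rather than substituted for existing phrases.
draft     = "send me the files by monday"
tagged    = "[TAG0] send me the files by monday [TAG7]"   # z(x): style-free, content preserved

# Stage 2 (generator): each [TAG]t slot is realized with target-style (polite) phrases.
generated = "could you please send me the files by monday , thanks"

for step in (draft, tagged, generated):
    print(step)
```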
It is especially critical for tasks like politeness transfer where the transfer takes place from a non-polite sentence. This is because in such cases we may need to add new phrases to the sentence rather than simply replace existing ones. The generator is trained to generate sentences ˆx(2) i in the target style by replacing these [TAG] tokens with stylistically relevant words inferred from target style S2. Even though we have non-parallel corpora, both systems are trained in a supervised fashion as sequence-to-sequence models with their own distinct pairs of inputs & outputs. To create parallel training data, we first estimate the style markers Γv for a given style Sv & then use these to curate style free sentences with [TAG] 1873 Figure 3: Our proposed approach: tag and generate. The tagger infers the interpretable style free sentence z(xi) for an input x(1) i in source style S1. The generator transforms x(1) i into ˆx(2) i which is in target style S2. tokens. Training data creation details are given in sections §4.2, §4.3. Fig. 3 shows the overall pipeline of the proposed approach. In the first example x(1) 1 , where there is no clear style attribute present, our model adds the [TAG] token in z(x1), indicating that a target style marker should be generated in this position. On the contrary, in the second example, the terms “ok” and “bland” are markers of negative sentiment and hence the tagger has replaced them with [TAG] tokens in z(x2). We can also see that the inferred sentence in both the cases is free of the original and target styles. The structural bias induced by this two staged approach is helpful in realizing an interpretable style free tagged sentence that explicitly encodes the content. In the following sections we discuss in detail the methodologies involved in (1) estimating the relevant attribute markers for a given style, (2) tagger, and (3) generator modules of our approach. 4.1 Estimating Style Phrases Drawing from Li et al. (2018), we propose a simple approach based on n-gram tf-idfs to estimate the set Γv, which represents the style markers for style v. For a given corpus pair X1, X2 in styles S1, S2 respectively we first compute a probability distribution p2 1(w) over the n-grams w present in both the corpora (Eq. 2). Intuitively, p2 1(w) is proportional to the probability of sampling an n-gram present in both X1, X2 but having a much higher tf-idf value in X2 relative to X1. This is how we define the impactful style markers for style S2. η2 1(w) = 1 m m P i=1 tf-idf(w,x(2) i ) 1 n n P j=1 tf-idf(w,x(1) j ) (1) p2 1(w) = η2 1(w)γ P w′ η2 1(w′)γ (2) where, η2 1(w) is the ratio of the mean tf-idfs for a given n-gram w present in both X1, X2 with |X1| = n and |X2| = m. Words with higher values for η2 1(w) have a higher mean tf-idf in X2 vs X1, and thus are more characteristic of S2. We further smooth and normalize η2 1(w) to get p2 1(w). Finally, we estimate Γ2 by Γ2 = {w : p2 1(w) ≥k} In other words, Γ2 consists of the set of phrases in X2 above a given style impact k. Γ1 is computed similarly where we use p1 2(w), η1 2(w). 4.2 Style Invariant Tagged Sentence The tagger model (with parameters θt) takes as input the sentences in X1 and outputs {z(xi) : x(1) i ∈X1}. Depending on the style transfer task, the tagger is trained to either (1) identify and replace style attributes a(x(1) i ) with the token tag [TAG] (replace-tagger) or (2) add the [TAG] token at specific locations in x(1) i (add-tagger). 
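Before turning to the two tagger variants in detail, the phrase-estimation step of Section 4.1 (Eqs. 1 and 2) can be sketched as follows. The use of scikit-learn's TfidfVectorizer and the helper name are assumptions made for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def style_phrase_probs(corpus_1, corpus_2, gamma=0.75, ngram_range=(1, 2)):
    """Estimate p^2_1(w) over n-grams present in both corpora (Eqs. 1-2)."""
    vec = TfidfVectorizer(ngram_range=ngram_range)
    vec.fit(corpus_1 + corpus_2)                          # shared n-gram vocabulary
    mean_1 = np.asarray(vec.transform(corpus_1).mean(axis=0)).ravel()
    mean_2 = np.asarray(vec.transform(corpus_2).mean(axis=0)).ravel()
    vocab = np.array(vec.get_feature_names_out())

    both = (mean_1 > 0) & (mean_2 > 0)                    # n-grams occurring in both X1 and X2
    eta = mean_2[both] / mean_1[both]                     # Eq. 1: ratio of mean tf-idfs
    p = eta ** gamma / np.sum(eta ** gamma)               # Eq. 2: smooth and normalize
    return dict(zip(vocab[both], p))

# Gamma_2 = {w : p(w) >= k}; the paper uses gamma = 0.75 and k = 0.9 (0.97 for Yelp).
neutral = ["send me the report", "call me when you get back"]
polite = ["please send me the report , thanks", "could you please call me when you get back"]
probs = style_phrase_probs(neutral, polite)
print(sorted(probs, key=probs.get, reverse=True)[:5])     # most style-bearing n-grams
```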
In both the cases, the [TAG] tokens indicate positions where the generator can insert phrases from the target style S2. Finally, we use the distribution p2 1(w)/p1 2(w) over Γ2/Γ1 (§4.1) to draw samples of attribute-markers that would be replaced with the [TAG] token during the creation of training data. The first variant, replace-tagger, is suited for a task like sentiment transfer where almost every sentence has some attribute markers a(x(1) i ) present in it. In this case the training data comprises of pairs where the input is X1 and the output is {z(xi) : x(1) i ∈X1}. The loss objective for replace-tagger is given by Lr(θt) in Eq. 3. Lr(θt) = − |X1| X i=1 log Pθt(z(xi)|x(1) i ; θt) (3) The second variant, add-tagger, is designed for cases where the transfer needs to happen from style neutral sentences to the target style. That is, X1 consists of style neutral sentences whereas X2 consists of sentences in the target style. Examples of 1874 such a task include the tasks of politeness transfer (introduced in this paper) and caption style transfer (used by Li et al. (2018)). In such cases, since the source sentences have no attribute markers to remove, the tagger learns to add [TAG] tokens at specific locations suitable for emanating style words in the target style. Figure 4: Creation of training data for add-tagger. The training data (Fig. 4) for the add-tagger is given by pairs where the input is {x(2) i \a(x(2) i ) : x(2) i ∈X2} and the output is {z(xi) : x(2) i ∈X2}. Essentially, for the input we take samples x(2) i in the target style S2 and explicitly remove style phrases a(x(2) i ) from it. For the output we replace the same phrases a(x(2) i ) with [TAG] tokens. As indicated in Fig. 4, we remove the style phrases “you would like to” and “please” and replace them with [TAG] in the output. Note that we only use samples from X2 for training the add-tagger; samples from the style neutral X1 are not involved in the training process at all. For example, in the case of politeness transfer, we only use the sentences labeled as “polite” for training. In effect, by training in this fashion, the tagger learns to add [TAG] tokens at appropriate locations in a style neutral sentence. The loss objective (La) given by Eq. 4 is crucial for tasks like politeness transfer where one of the styles is poorly defined. La(θt) = − |X1| X i=1 log Pθt(z(xi)|x(2) i \a(x(2) i ); θt) (4) 4.3 Style Targeted Generation The training for the generator model is complimentary to that of the tagger, in the sense that the generator takes as input the tagged output z(xi) inferred from the source style and modifies the [TAG] tokens to generate the desired sentence ˆx(v) i in the target style Sv. L(θg) = − |Xv| X i=1 log Pθg(x(v) i |z(xi); θg) (5) The training data for transfer into style Sv comprises of pairs where the input is given by {z(xi) : x(v) i ∈Xv , v ∈{1, 2}} and the output is Xv, i.e. it is trained to transform a style agnostic representation into a style targeted sentence. Since the generator has no notion of the original style and it is only concerned with the style agnostic representation z(xi), it is convenient to disentangle the training for tagger & generator. Finally, we note that the location at which the tags are generated has a significant impact on the distribution over style attributes (in Γ2) that are used to fill the [TAG] token at a particular position. Hence, instead of using a single [TAG] token, we use a set of positional tokens [TAG]t where t ∈{0, 1, . . . T} for a sentence of length T. 
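As a concrete illustration of how training pairs with positional tags can be built for the add-tagger (cf. Fig. 4), consider the sketch below. Phrase matching is reduced to exact token-span search, the example sentence is invented, and the helper name is hypothetical, so this is only an approximation of the actual data-creation step.

```python
def add_tagger_pair(polite_sentence, style_phrases):
    """Build one (input, output) pair for the add-tagger from a target-style sentence.

    input : the sentence with its style phrases a(x) removed (style-neutral side)
    output: the sentence with each style phrase replaced by a positional [TAG]t token
    """
    tokens = polite_sentence.split()
    keep, tagged = [], []
    i = 0
    while i < len(tokens):
        matched = None
        for phrase in sorted(style_phrases, key=lambda p: -len(p.split())):
            n = len(phrase.split())
            if tokens[i:i + n] == phrase.split():
                matched = n
                break
        if matched:
            tagged.append(f"[TAG{len(tagged)}]")   # positional tag at this output position
            i += matched
        else:
            keep.append(tokens[i])
            tagged.append(tokens[i])
            i += 1
    return " ".join(keep), " ".join(tagged)

# Example mirroring Fig. 4 (the style phrases are assumed to come from Gamma_2):
phrases = {"you would like to", "please"}
src, tgt = add_tagger_pair("let me know if you would like to meet , please", phrases)
print(src)  # let me know if meet ,
print(tgt)  # let me know if [TAG4] meet , [TAG7]
```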
By training both tagger and generator with these positional [TAG]t tokens we enable them to easily realize different distributions of style attributes for different positions in a sentence. For example, in the case of politeness transfer, the tags added at the beginning (t = 0) will almost always be used to generate a token like “Would it be possible ...” whereas for a higher t, [TAG]t may be replaced with a token like “thanks” or “sorry.” 5 Experiments and Results Baselines We compare our systems against three previous methods. DRG (Li et al., 2018), Style Transfer Through Back-translation (BST) (Prabhumoye et al., 2018), and Style transfer from nonparallel text by cross alignment (Shen et al., 2017) (CAE). For DRG, we only compare against the best reported method, delete-retrieve-generate. For all the models, we follow the experimental setups described in their respective papers. Implementation Details We use 4-layered transformers (Vaswani et al., 2017) to train both tagger and generator modules. Each transformer has 4 attention heads with a 512 dimensional embedding layer and hidden state size. Dropout (Srivastava et al., 2014) with p-value 0.3 is added for each layer in the transformer. For the politeness dataset the generator module is trained with data augmentation techniques like random word shuffle, word drops/replacements as proposed by (Im 1875 Politeness Gender Political Acc BL-s MET ROU Acc BL-s MET ROU ACC BL-s MET ROU CAE 99.62 6.94 10.73 25.71 65.21 9.25 14.72 42.42 77.71 3.17 7.79 27.17 BST 60.75 2.55 9.19 18.99 54.4 20.73 22.57 55.55 88.49 10.71 16.26 41.02 DRG 90.25 11.83 18.07 41.09 36.29 22.9 22.84 53.30 69.79 25.69 21.6 51.8 OURS 89.50 70.44 36.26 70.99 82.21 52.76 37.42 74.59 87.74 68.44 45.44 77.51 Table 1: Results on the Politeness, Gender and Political datasets. et al., 2017). We empirically observed that these techniques provide an improvement in the fluency and diversity of the generations. Both modules were also trained with the BPE tokenization (Sennrich et al., 2015) using a vocabulary of size 16000 for all the datasets except for Captions, which was trained using 4000 BPE tokens. The value of the smoothing parameter γ in Eq. 2 is set to 0.75. For all datasets except Yelp we use phrases with p2 1(w) ≥k = 0.9 to construct Γ2, Γ1 (§4.1). For Yelp k is set to 0.97. During inference we use beam search (beam size=5) to decode tagged sentences and targeted generations for tagger & generator respectively. For the tagger, we re-rank the final beam search outputs based on the number of [TAG] tokens in the output sequence (favoring more [TAG] tokens). Automated Evaluation Following prior work (Li et al., 2018; Shen et al., 2017), we use automatic metrics for evaluation of the models along two major dimensions: (1) style transfer accuracy and (2) content preservation. To capture accuracy, we use a classifier trained on the nonparallel style corpora for the respective datasets (barring politeness). The architecture of the classifier is based on AWD-LSTM (Merity et al., 2017) and a softmax layer trained via cross-entropy loss. We use the implementation provided by fastai.5 For politeness, we use the classifier trained by (Niu and Bansal, 2018).6 The metric of transfer accuracy (Acc) is defined as the percentage of generated sentences classified to be in the target domain by the classifier. The standard metric for measuring content preservation is BLEU-self (BL-s) (Papineni et al., 2002) which is computed with respect to the original sentences. 
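For reference, BLEU-self can be computed along the following lines with NLTK's corpus BLEU; the exact scoring script used by the authors is not specified here, so treat this as an approximation.

```python
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

def bleu_self(original_sentences, generated_sentences):
    """BLEU of the generated outputs measured against the original (input) sentences."""
    references = [[src.split()] for src in original_sentences]   # one reference per output
    hypotheses = [gen.split() for gen in generated_sentences]
    smooth = SmoothingFunction().method1                          # guard against zero n-gram counts
    return 100 * corpus_bleu(references, hypotheses, smoothing_function=smooth)

originals = ["send me the files now"]
outputs = ["could you please send me the files"]
print(f"BL-s = {bleu_self(originals, outputs):.2f}")
```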
Additionally, we report the BLEU-reference (BL-r) scores using the human reference sentences on the Yelp, Amazon and Captions datasets (Li et al., 2018). We also report ROUGE (ROU) (Lin, 2004) and METEOR (MET) (Denkowski and Lavie, 5https://docs.fast.ai/ 6This is trained on the dataset given by (DanescuNiculescu-Mizil et al., 2013). 2011) scores. In particular, METEOR also uses synonyms and stemmed forms of the words in candidate and reference sentences, and thus may be better at quantifying semantic similarities. Table 1 shows that our model achieves significantly higher scores on BLEU, ROUGE and METEOR as compared to the baselines DRG, CAE and BST on the Politeness, Gender and Political datasets. The BLEU score on the Politeness task is greater by 58.61 points with respect to DRG. In general, CAE and BST achieve high classifier accuracies but they fail to retain the original content. The classifier accuracy on the generations of our model are comparable (within 1%) with that of DRG for the Politeness dataset. In Table 2, we compare our model against CAE and DRG on the Yelp, Amazon, and Captions datasets. For each of the datasets our test set comprises 500 samples (with human references) curated by Li et al. (2018). We observe an increase in the BLEU-reference scores by 5.25, 4.95 and 3.64 on the Yelp, Amazon, and Captions test sets respectively. Additionally, we improve the transfer accuracy for Amazon by 14.2% while achieving accuracies similar to DRG on Yelp and Captions. As noted by Li et al. (2018), one of the unique aspects of the Amazon dataset is the absence of similar content in both the sentiment polarities. Hence, the performance of their model is worse in this case. Since we don’t make any such assumptions, we perform significantly better on this dataset. While popular, the metrics of transfer accuracy and BLEU have significant shortcomings making them susceptible to simple adversaries. BLEU relies heavily on n-gram overlap and classifiers can be fooled by certain polarizing keywords. We test this hypothesis on the sentiment transfer task by a Naive Baseline. This baseline adds “but overall it sucked” at the end of the sentence to transfer it to negative sentiment. Similarly, it appends “but overall it was perfect” for transfer into a positive sentiment. This baseline achieves an average accuracy score of 91.3% and a BLEU score of 61.44 on the Yelp 1876 Yelp Amazon Captions Acc BL-s BL-r MET ROU Acc BL-s BL-r MET ROU Acc BL-s BL-r MET ROU CAE 72.1 19.95 7.75 21.70 55.9 78 2.64 1.68 9.52 29.16 89.66 2.09 1.57 9.61 30.02 DRG 88.8 36.69 14.51 32.09 61.06 52.2 57.07 29.85 50.16 79.31 95.65 31.79 11.78 32.45 64.32 OURS 86.6 47.14 19.76 36.26 70.99 66.4 68.74 34.80 45.3 83.45 93.17 51.01 15.63 43.67 79.51 Table 2: Results on the Yelp, Amazon and Captions datasets. Con Att Gra DRG Ours DRG Ours DRG Ours Politeness 2.9 3.6 3.2 3.6 2.0 3.7 Gender 3.0 3.5 2.2 2.5 Political 2.9 3.2 2.5 2.7 Yelp 3.0 3.7 3 3.9 2.7 3.3 Table 3: Human evaluation on Politeness, Gender, Political and Yelp datasets. dataset. Despite high evaluation scores, it does not reflect a high rate of success on the task. In summary, evaluation via automatic metrics might not truly correlate with task success. Changing Content Words Given that our model is explicitly trained to generate new content only in place of the TAG token, it is expected that a welltrained system will retain most of the non-tagged (content) words. Clearly, replacing content words is not desired since it may drastically change the meaning. 
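One way to check this is to compare the tagger's intermediate output z(x) with the final generation and count sentences in which any non-[TAG] (content) token was altered. The sketch below is an illustrative approximation of such a check, not the authors' exact computation.

```python
def changed_content_fraction(tagged_sentences, generated_sentences):
    """Fraction of sentences whose non-[TAG] content tokens do not all survive generation."""
    changed = 0
    for tagged, generated in zip(tagged_sentences, generated_sentences):
        content = [tok for tok in tagged.split() if not tok.startswith("[TAG")]
        out = iter(generated.split())
        # Content tokens must appear, in order, in the output (tags may expand to phrases).
        preserved = all(tok in out for tok in content)
        changed += 0 if preserved else 1
    return changed / max(len(tagged_sentences), 1)

tagged = ["[TAG0] send me the files", "the food was [TAG3]"]
outputs = ["could you please send me the files", "the food was delicious"]
print(changed_content_fraction(tagged, outputs))  # 0.0: all content tokens preserved in order
```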
In order to quantify this, we calculate the fraction of non-tagged words being changed across the datasets. We found that the non-tagged words were changed for only 6.9% of the sentences. In some of these cases, we noticed that changing non-tagged words helped in producing outputs that were more natural and fluent. Human Evaluation Following Li et al. (2018), we select 10 unbiased human judges to rate the output of our model and DRG on three aspects: (1) content preservation (Con) (2) grammaticality of the generated content (Gra) (3) target attribute match of the generations (Att). For each of these metrics, the reviewers give a score between 1-5 to each of the outputs, where 1 reflects a poor performance on the task and 5 means a perfect output. Since the judgement of signals that indicate gender and political inclination are prone to personal biases, we don’t annotate these tasks for target attribute match metric. Instead we rely on the classifier scores for the transfer. We’ve used the same instructions from Li et al. (2018) for our human study. Overall, we evaluate both systems on a total of 200 samples for Politeness and 100 samples each for Yelp, Gender and Political. Table 3 shows the results of human evaluations. We observe a significant improvement in content preservation scores across various datasets (specifically in Politeness domain) highlighting the ability of our model to retain content better than DRG. Alongside, we also observe consistent improvements of our model on target attribute matching and grammatical correctness. Qualitative Analysis We compare the results of our model with the DRG model qualitatively as shown in Table 4. Our analysis is based on the linguistic strategies for politeness as described in (Danescu-Niculescu-Mizil et al., 2013). The first sentence presents a simple example of the counterfactual modal strategy inducing “Could you please” to make the sentence polite. The second sentence highlights another subtle concept of politeness of 1st Person Plural where adding “we” helps being indirect and creates the sense that the burden of the request is shared between speaker and addressee. The third sentence highlights the ability of the model to add Apologizing words like “Sorry” which helps in deflecting the social threat of the request by attuning to the imposition. According to the Please Start strategy, it is more direct and insincere to start a sentence with “Please”. The fourth sentence projects the case where our model uses “thanks” at the end to express gratitude and in turn, makes the sentence more polite. Our model follows the strategies prescribed in (Danescu-Niculescu-Mizil et al., 2013) while generating polite sentences.7 Ablations We provide a comparison of the two variants of the tagger, namely the replace-tagger and add-tagger on two datasets. We also train and compare them with a combined variant.8 We train these tagger variants on the Yelp and Captions datasets and present the results in Table 5. We observe that for Captions, where we transfer a factual (neutral) to romantic/humorous sentence, the add7We provide additional qualitative examples for other tasks in the supplementary material. 8Training of combined variant is done by training the tagger model on the concatenation of training data for addtagger and replace-tagger. 1877 Input DRG Output Our Model Output Strategy what happened to my personal station? what happened to my mother to my co??? could you please let me know what happened to my personal station? 
Counterfactual Modal yes, go ahead and remove it. yes, please go to the link below and delete it. yes, we can go ahead and remove it. 1st Person Plural not yet-i’ll try this wkend. not yet to say-i think this will be a <unk> long. sorry not yet-i’ll try to make sure this wk Apologizing please check on metromedia energy, thanks again on the energy industry, please check on metromedia energy, thanks Mitigating please start Table 4: Qualitative Examples comparing the outputs from DRG and Our model for the Politeness Transfer Task tagger provides the best accuracy with a relatively negligible drop in BLEU scores. On the contrary, for Yelp, where both polarities are clearly defined, the replace-tagger gives the best performance. Interestingly, the accuracy of the add-tagger is ≈50% in the case of Yelp, since adding negative words to a positive sentence or vice-versa neutralizes the classifier scores. Thus, we can use the add-tagger variant for transfer from a polarized class to a neutral class as well. To check if the combined tagger is learning to perform the operation that is more suitable for a dataset, we calculate the fraction of times the combined tagger performs add/replace operations on the Yelp and Captions datasets. We find that for Yelp (a polar dataset) the combined tagger performs 20% more replace operations (as compared to add operations). In contrast, on the CAPTIONS dataset, it performs 50% more add operations. While the combined tagger learns to use the optimal tagging operation to some extent, a deeper understanding of this phenomenon is an interesting future topic for research. We conclude that the choice of the tagger variant is dependent on the characterstics of the underlying transfer task. Yelp Captions Acc BL-r Acc BL-r Add-Tagger 53.2 20.66 93.17 15.63 Replace-Tagger 86.6 19.76 84.5 15.04 Combined 72.5 22.46 82.17 18.51 Table 5: Comparison of different tagger variants for Yelp and Captions datasets 6 Conclusion We introduce the task of politeness transfer for which we provide a dataset comprised of sentences curated from email exchanges present in the Enron corpus. We extend prior works (Li et al., 2018; Sudhakar et al., 2019) on attribute transfer by introducing a simple pipeline – tag & generate which is an interpretable two-staged approach for content preserving style transfer. We believe our approach is the first to be robust in cases when the source is style neutral, like the “non-polite” class in the case of politeness transfer. Automatic and human evaluation shows that our approach outperforms other state-of-the-art models on content preservation metrics while retaining (or in some cases improving) the transfer accuracies. Acknowledgments This material is based on research sponsored in part by the Air Force Research Laboratory under agreement number FA8750-19-2-0200. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the Air Force Research Laboratory or the U.S. Government. This work was also supported in part by ONR Grant N000141812861, NSF IIS1763562, and Apple. We would also like to acknowledge NVIDIA’s GPU support. We would like to thank Antonis Anastasopoulos, Ritam Dutt, Sopan Khosla, and, Xinyi Wang for the helpful discussions. 
1878 References Philip Bramsen, Martha Escobar-Molano, Ami Patel, and Rafael Alonso. 2011. Extracting social power relationships from natural language. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1, HLT ’11, pages 773–782, Stroudsburg, PA, USA. Association for Computational Linguistics. Penelope Brown, Stephen C Levinson, and Stephen C Levinson. 1987. Politeness: Some universals in language usage, volume 4. Cambridge university press. Khyathi Chandu, Shrimai Prabhumoye, Ruslan Salakhutdinov, and Alan W Black. 2019. my way of telling a story: Persona based grounded story generation. In Proceedings of the Second Workshop on Storytelling, pages 11–21. Liz Coppock. 2005. Politeness strategies in conversation closings. unpublished paper available online at http://www. stanford. edu/˜ coppock/face. pdf (last accessed 23 December 2007). Cristian Danescu-Niculescu-Mizil, Moritz Sudhof, Dan Jurafsky, Jure Leskovec, and Christopher Potts. 2013. A computational approach to politeness with application to social factors. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 250–259, Sofia, Bulgaria. Association for Computational Linguistics. Michael Denkowski and Alon Lavie. 2011. Meteor 1.3: Automatic metric for reliable optimization and evaluation of machine translation systems. In Proceedings of the sixth workshop on statistical machine translation, pages 85–91. Association for Computational Linguistics. Zhenxin Fu, Xiaoye Tan, Nanyun Peng, Dongyan Zhao, and Rui Yan. 2018. Style transfer in text: Exploration and evaluation. In Thirty-Second AAAI Conference on Artificial Intelligence. Chuang Gan, Zhe Gan, Xiaodong He, Jianfeng Gao, and Li Deng. 2017. Stylenet: Generating attractive visual captions with styles. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3137–3146. Ruining He and Julian McAuley. 2016. Ups and downs: Modeling the visual evolution of fashion trends with one-class collaborative filtering. In proceedings of the 25th international conference on world wide web, pages 507–517. International World Wide Web Conferences Steering Committee. Matthew Honnibal and Ines Montani. 2017. spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing. To appear. Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P. Xing. 2017. Toward controlled generation of text. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 1587–1596, International Convention Centre, Sydney, Australia. PMLR. Daniel Im Jiwoong Im, Sungjin Ahn, Roland Memisevic, and Yoshua Bengio. 2017. Denoising criterion for variational auto-encoding framework. In ThirtyFirst AAAI Conference on Artificial Intelligence. Harsh Jhamtani, Varun Gangal, Eduard Hovy, and Eric Nyberg. 2017. Shakespearizing modern language using copy-enriched sequence to sequence models. In Proceedings of the Workshop on Stylistic Variation, pages 10–19, Copenhagen, Denmark. Association for Computational Linguistics. Vineet John, Lili Mou, Hareesh Bahuleyan, and Olga Vechtomova. 2019. Disentangled representation learning for non-parallel text style transfer. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 424–434, Florence, Italy. Association for Computational Linguistics. 
Daniel Jurafsky, Elizabeth Shriberg, and Debra Biasca. 1997. Switchboard SWBD-DAMSL shallowdiscourse-function annotation coders manual, draft 13. Technical Report 97-02, University of Colorado, Boulder Institute of Cognitive Science, Boulder, CO. D´aniel Z K´ad´ar and Sara Mills. 2011. Politeness in East Asia. Cambridge University Press. Dongyeop Kang and Eduard Hovy. 2019. xslue: A benchmark and analysis platform for cross-style language understanding and evaluation. Bryan Klimt and Yiming Yang. 2004. Introducing the enron corpus. In CEAS. Juncen Li, Robin Jia, He He, and Percy Liang. 2018. Delete, retrieve, generate: a simple approach to sentiment and style transfer. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1865–1874, New Orleans, Louisiana. Association for Computational Linguistics. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74–81. Andrew McCallum, Xuerui Wang, and Andr´es Corrada-Emmanuel. 2007. Topic and role discovery in social networks with experiments on enron and academic email. J. Artif. Int. Res., 30(1):249–272. Ardith J Meier. 1995. Defining politeness: Universality in appropriateness. Language Sciences, 17(4):345– 356. 1879 Stephen Merity, Nitish Shirish Keskar, and Richard Socher. 2017. Regularizing and optimizing lstm language models. arXiv preprint arXiv:1708.02182. Tong Niu and Mohit Bansal. 2018. Polite dialogue generation without parallel data. Transactions of the Association for Computational Linguistics, 6:373–389. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Association for Computational Linguistics. Kelly Peterson, Matt Hohensee, and Fei Xia. 2011. Email formality in the workplace: A case study on the Enron corpus. In Proceedings of the Workshop on Language in Social Media (LSM 2011), pages 86–95, Portland, Oregon. Association for Computational Linguistics. Vinodkumar Prabhakaran, Emily E. Reid, and Owen Rambow. 2014. Gender and power: How gender and gender environment affect manifestations of power. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1965–1976, Doha, Qatar. Association for Computational Linguistics. Shrimai Prabhumoye, Yulia Tsvetkov, Ruslan Salakhutdinov, and Alan W Black. 2018. Style transfer through back-translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 866–876, Melbourne, Australia. Association for Computational Linguistics. Sudha Rao and Joel Tetreault. 2018. Dear sir or madam, may i introduce the gyafc dataset: Corpus, benchmarks and metrics for formality style transfer. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 129–140. Sravana Reddy and Kevin Knight. 2016. Obfuscating gender in social media writing. In Proceedings of the First Workshop on NLP and Computational Social Science, pages 17–26. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909. 
Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2017. Style transfer from non-parallel text by cross-alignment. In Advances in neural information processing systems, pages 6830–6841. Jitesh Shetty and Jafar Adibi. 2004. The enron email dataset database schema and brief statistical report. Information sciences institute technical report, University of Southern California, 4(1):120–128. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research, 15(1):1929–1958. Akhilesh Sudhakar, Bhargav Upadhyay, and Arjun Maheswaran. 2019. “transforming” delete, retrieve, generate approach for controlled text style transfer. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3267– 3277, Hong Kong, China. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998–6008. Rob Voigt, David Jurgens, Vinodkumar Prabhakaran, Dan Jurafsky, and Yulia Tsvetkov. 2018. RtGender: A corpus for studying differential responses to gender. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA). Xing Wu, Tao Zhang, Liangjun Zang, Jizhong Han, and Songlin Hu. 2019. Mask and infill: Applying masked language model for sentiment transfer. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI19, pages 5271–5277. International Joint Conferences on Artificial Intelligence Organization. Ruochen Xu, Tao Ge, and Furu Wei. 2019. Formality style transfer with hybrid textual annotations. arXiv preprint arXiv:1903.06353. Wei Xu, Alan Ritter, Bill Dolan, Ralph Grishman, and Colin Cherry. 2012. Paraphrasing for style. In Proceedings of COLING 2012, pages 2899–2914, Mumbai, India. The COLING 2012 Organizing Committee. Zichao Yang, Zhiting Hu, Chris Dyer, Eric P Xing, and Taylor Berg-Kirkpatrick. 2018. Unsupervised text style transfer using language models as discriminators. In Advances in Neural Information Processing Systems, pages 7287–7298. Jen-Yuan Yeh and Aaron Harnly. 2006. Email thread reassembly using similarity matching. In Conference on Email and Anti-Spam. Conference on Email and Anti-Spam. Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Personalizing dialogue agents: I have a dog, do you have pets too? In Proceedings of the 56th Annual Meeting of the Association for Computational 1880 Linguistics (Volume 1: Long Papers), pages 2204– 2213, Melbourne, Australia. Association for Computational Linguistics. 1881 Non-polite Input DRG Our Model jon - - please use this resignation letter in lieu of the one sent on friday . - i think this would be a good idea if you could not be a statement that harry ’s signed in one of the schedule . jon - sorry - please use this resignation letter in lieu of the one event sent on if you have a few minutes today, give me a call i’ll call today to discuss this. if you have a few minutes today, please give me a call at anyway you can let me know. anyway, i’m sure i’m sure. 
anyway please let me know as soon as possible yes, go ahead and remove it. yes, please go to the link below and delete it. yes, we can go ahead and remove it. can you explain a bit more about how those two coexist ? also ..... i can explain how the two more than <unk> i can help with mike ? can you explain a bit more about how those two coexist ? also thanks go ahead and sign it - i did . go away so we can get it approved . we could go ahead and sign it - i did look at Table 6: Additional Qualitative Examples of outputs from our Model and DRG for the Politeness Transfer Task Task Non-polite Input DRG Our Model Fem →Male my husband ordered the brisket . my wife had the best steak . my wife ordered the brisket . Fem →Male i ’ m a fair person . i ’ m a good job of the <unk> . i ’ m a big guy . Male →Fem my girlfriend and i recently stayed at this sheraton . i recently went with the club . my husband and i recently stayed at this office . Male →Fem however , once inside the place was empty . however , when the restaurant was happy hour for dinner . however , once inside the place was super cute . Pos →Neg good drinks , and good company . horrible company . terrible drinks , terrible company. Pos →Neg i will be going back and enjoying this great place ! i will be going back and enjoying this great ! i will not be going back and enjoying this garbage ! Neg →Pos this is the reason i will never go back . this is the reason i will never go back . so happy i will definitely be back . Neg →Pos salsa is not hot or good . salsa is not hot or good . salsa is always hot and fresh . Dem →Rep i am confident of trumps slaughter . i am mia love i am confident of trumps administration . Dem →Rep we will resist trump we will impeach obama we will be praying for trump Rep →Dem video : black patriots demand impeachment of obama video : black police show choose video : black patriots demand to endorse obama Rep →Dem mr. trump is good ... but mr. marco rubio is great ! ! thank you mr. good ... but mr. kaine is great senator ! ! mr. schumer is good ... but mr. pallone is great ! ! Fact →Rom a woman is sitting near a flower bed overlooking a tunnel . a woman is sitting near a flower overlooking a tunnel, determined to a woman is sitting near a brick rope , excited to meet her boyfriend . Fact →Rom two dogs play with a tennis ball in the snow . two dogs play with a tennis ball in the snow . two dogs play with a tennis ball in the snow celebrating their friendship . Fact →Hum three kids play on a wall with a green ball . three kids on a bar on a field of a date . three kids play on a wall with a green ball fighting for supremacy . Fact →Hum a black dog plays around in water . a black dog plays in the water . a black dog plays around in water looking for fish . Table 7: Additional Qualitative Examples of our Model and DRG for other Transfer Tasks
Fact-based Text Editing Hayate Iso†∗ Chao Qiao‡ Hang Li‡ †Nara Institute of Science and Technology ‡ByteDance AI Lab [email protected], {qiaochao, lihang.lh}@bytedance.com (∗The work was done when Hayate Iso was a research intern at ByteDance AI Lab.) Abstract We propose a novel text editing task, referred to as fact-based text editing, in which the goal is to revise a given document to better describe the facts in a knowledge base (e.g., several triples). The task is important in practice because reflecting the truth is a common requirement in text editing. First, we propose a method for automatically generating a dataset for research on fact-based text editing, where each instance consists of a draft text, a revised text, and several facts represented in triples. We apply the method to two public table-to-text datasets, obtaining two new datasets consisting of 233k and 37k instances, respectively. Next, we propose a new neural network architecture for fact-based text editing, called FACTEDITOR, which edits a draft text by referring to given facts using a buffer, a stream, and a memory. A straightforward approach to the problem would be to employ an encoder-decoder model. Our experimental results on the two datasets show that FACTEDITOR outperforms the encoder-decoder approach in terms of fidelity and fluency. The results also show that FACTEDITOR conducts inference faster than the encoder-decoder approach. 1 Introduction Automatic editing of text by computer is an important application, which can help human writers to produce better documents in terms of accuracy, fluency, etc. The task is easier and more practical than automatic generation of texts from scratch and has recently been attracting attention (Yang et al., 2017; Yin et al., 2019). In this paper, we consider a new and specific setting of it, referred to as fact-based text editing, in which a draft text and several facts (represented in triples) are given, and the system aims to revise the text by adding missing facts and deleting unsupported facts. Table 1 gives an example of the task. Set of triples {(Baymax, creator, Duncan Rouleau), (Duncan Rouleau, nationality, American), (Baymax, creator, Steven T. Seagle), (Steven T. Seagle, nationality, American), (Baymax, series, Big Hero 6), (Big Hero 6, starring, Scott Adsit)} Draft text Baymax was created by Duncan Rouleau, a winner of Eagle Award. Baymax is a character in Big Hero 6. Revised text Baymax was created by American creators Duncan Rouleau and Steven T. Seagle. Baymax is a character in Big Hero 6 which stars Scott Adsit. Table 1: Example of fact-based text editing. Facts are represented in triples. The facts in green appear in both the draft text and the triples. The facts in orange are present in the draft text but absent from the triples. The facts in blue do not appear in the draft text but do appear in the triples. The task of fact-based text editing is to edit the draft text on the basis of the triples, by deleting unsupported facts and inserting missing facts while retaining supported facts. As far as we know, no previous work has addressed this problem.
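To make the setting concrete, the instance in Table 1 can be represented with a simple structure such as the one below; the class itself is illustrative and not part of the released datasets.

```python
from dataclasses import dataclass
from typing import List, Tuple

Triple = Tuple[str, str, str]  # (subject, predicate, object)

@dataclass
class FactEditInstance:
    triples: List[Triple]   # facts the revised text must reflect
    draft: str              # text possibly containing unsupported or missing facts
    revised: str            # target text consistent with the triples

example = FactEditInstance(
    triples=[
        ("Baymax", "creator", "Duncan Rouleau"),
        ("Duncan Rouleau", "nationality", "American"),
        ("Baymax", "creator", "Steven T. Seagle"),
        ("Steven T. Seagle", "nationality", "American"),
        ("Baymax", "series", "Big Hero 6"),
        ("Big Hero 6", "starring", "Scott Adsit"),
    ],
    draft="Baymax was created by Duncan Rouleau, a winner of Eagle Award. "
          "Baymax is a character in Big Hero 6.",
    revised="Baymax was created by American creators Duncan Rouleau and Steven T. Seagle. "
            "Baymax is a character in Big Hero 6 which stars Scott Adsit.",
)
```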
In a text-to-text generation, given a text, the system automatically creates another text, where the new text can be a text in another language (machine translation), a summary of the original text (summarization), or a text in better form (text editing). In a table-to-text generation, given a table containing facts in triples, the system automatically composes a text, which describes the facts. The former is a text-to-text problem, and the latter a table-to-text problem. In comparison, fact-based text editing can be viewed as a ‘text&table-to-text’ problem. 172 First, we devise a method for automatically creating a dataset for fact-based text editing. Recently, several table-to-text datasets have been created and released, consisting of pairs of facts and corresponding descriptions. We leverage such kind of data in our method. We first retrieve facts and their descriptions. Next, we take the descriptions as revised texts and automatically generate draft texts based on the facts using several rules. We build two datasets for fact-based text editing on the basis of WEBNLG (Gardent et al., 2017) and ROTOWIRE, consisting of 233k and 37k instances respectively (Wiseman et al., 2017) 1. Second, we propose a model for fact-based text editing called FACTEDITOR. One could employ an encoder-decoder model, such as an encoderdecoder model, to perform the task. The encoderdecoder model implicitly represents the actions for transforming the draft text into a revised text. In contrast, FACTEDITOR explicitly represents the actions for text editing, including Keep, Drop, and Gen, which means retention, deletion, and generation of word respectively. The model utilizes a buffer for storing the draft text, a stream to store the revised text, and a memory for storing the facts. It also employs a neural network to control the entire editing process. FACTEDITOR has a lower time complexity than the encoder-decoder model, and thus it can edit a text more efficiently. Experimental results show that FACTEDITOR outperforms the baseline model of using encoderdecoder for text editing in terms of fidelity and fluency, and also show that FACTEDITOR can perform text editing faster than the encoder-decoder model. 2 Related Work 2.1 Text Editing Text editing has been studied in different settings such as automatic post-editing (Knight and Chander, 1994; Simard et al., 2007; Yang et al., 2017), paraphrasing (Dolan and Brockett, 2005), sentence simplification (Inui et al., 2003; Wubben et al., 2012), grammar error correction (Ng et al., 2014), and text style transfer (Shen et al., 2017; Hu et al., 2017). The rise of encoder-decoder models (Cho et al., 2014; Sutskever et al., 2014) as well as the attention (Bahdanau et al., 2015; Vaswani et al., 2017) 1The datasets are publicly available at https:// github.com/isomap/factedit and copy mechanisms (Gu et al., 2016; Gulcehre et al., 2016) has dramatically changed the landscape, and now one can perform the task relatively easily with an encoder-decoder model such as Transformer provided that a sufficient amount of data is available. For example, Li et al. (2018) introduce a deep reinforcement learning framework for paraphrasing, consisting of a generator and an evaluator. Yin et al. (2019) formalize the problem of text edit as learning and utilization of edit representations and propose an encoder-decoder model for the task. Zhao et al. (2018) integrate paraphrasing rules with the Transformer model for text simplification. Zhao et al. 
(2019) proposes a method for English grammar correction using a Transformer and copy mechanism. Another approach to text editing is to view the problem as sequential tagging instead of encoderdecoder. In this way, the efficiency of learning and prediction can be significantly enhanced. Vu and Haffari (2018) and Dong et al. (2019) conduct automatic post-editing and text simplification on the basis of edit operations and employ Neural Programmer-Interpreter (Reed and De Freitas, 2016) to predict the sequence of edits given a sequence of words, where the edits include KEEP, DROP, and ADD. Malmi et al. (2019) propose a sequential tagging model that assigns a tag (KEEP or DELETE) to each word in the input sequence and also decides whether to add a phrase before the word. Our proposed approach is also based on sequential tagging of actions. It is designed for fact-based text editing, not text-to-text generation, however. 2.2 Table-to-Text Generation Table-to-text generation is the task which aims to generate a text from structured data (Reiter and Dale, 2000; Gatt and Krahmer, 2018), for example, a text from an infobox about a term in biology in wikipedia (Lebret et al., 2016) and a description of restaurant from a structured representation (Novikova et al., 2017). Encoder-decoder models can also be employed in table-to-text generation with structured data as input and generated text as output, for example, as in (Lebret et al., 2016). Puduppully et al. (2019) and Iso et al. (2019) propose utilizing an entity tracking module for document-level table-to-text generation. One issue with table-to-text is that the style of generated texts can be diverse (Iso et al., 2019). Re173 y′ AGENT-1 performed as PATIENT-3 on BRIDGE-1 mission that was operated by PATIENT-2 . ˆx′ AGENT-1 served as PATIENT-3 was a crew member of the BRIDGE-1 mission . x′ AGENT-1 performed as PATIENT-3 on BRIDGE-1 mission . (a) Example for insertion. The revised template y′ and the reference template ˆx′ share subsequences. The set of triple templates T \ ˆT is {(BRIDGE-1, operator, PATIENT-2)}. Our method removes “that was operated by PATIENT-2” from the revised template y′ to create the draft template x′. y′ AGENT-1 was created by BRIDGE-1 and PATIENT-2 . ˆx′ The character of AGENT-1 , whose full name is PATIENT-1 , was created by BRIDGE-1 and PATIENT-2 . x′ AGENT-1 , whose full name is PATIENT-1 , was created by BRIDGE-1 and PATIENT-2 . (b) Example for deletion. The revised template y′ and the reference template ˆx′ share subsequences. The set of triple templates ˆT \T is {(AGENT-1, fullName, PATIENT-1)}. Our method copies “whose full name is PATIENT-1” from the reference template x′ to create the draft template x′. Table 2: Examples for insertion and deletion, where words in green are matched, words in gray are not matched, words in blue are copied, and words in orange are removed. Best viewed in color. searchers have developed methods to deal with the problem using other texts as templates (Hashimoto et al., 2018; Guu et al., 2018; Peng et al., 2019). The difference between the approach and factbased text editing is that the former is about tableto-text generation based on other texts, while the latter is about text-to-text generation based on structured data. 3 Data Creation In this section, we describe our method of data creation for fact-based text editing. The method automatically constructs a dataset from an existing table-to-text dataset. 
3.1 Data Sources There are two benchmark datasets of table-totext, WEBNLG (Gardent et al., 2017)2 and ROTOWIRE(Wiseman et al., 2017)3. We create two datasets on the basis of them, referred to as WEBEDIT and ROTOEDIT respectively. In the datasets, each instance consists of a table (structured data) and an associated text (unstructured data) describing almost the same content.4. For each instance, we take the table as triples of facts and the associated text as a revised text, and we automatically create a draft text. The set of triples is represented as T = {t}. Each triple t consists of subject, predicate, and object, denoted 2The data is available at https://github.com/ ThiagoCF05/webnlg. We utilize version 1.5. 3We utilize the ROTOWIRE-MODIFIED data provided by Iso et al. (2019) available at https://github.com/ aistairc/rotowire-modified. The authors also provide an information extractor for processing the data. 4In ROTOWIRE, we discard redundant box-scores and unrelated sentences using the information extractor and heuristic rules. as t = (subj, pred, obj). For simplicity, we refer to the nouns or noun phrases of subject and object simply as entities. The revised text is a sequence of words denoted as y. The draft text is a sequence of words denoted as x. Given the set of triples T and the revised text y, we aim to create a draft text x, such that x is not in accordance with T , in contrast to y, and therefore text editing from x to y is needed. 3.2 Procedure Our method first creates templates for all the sets of triples and revised texts and then constructs a draft text for each set of triples and revised text based on their related templates. Creation of templates For each instance, our method first delexicalizes the entity words in the set of triples T and the revised text y to obtain a set of triple templates T ′ and a revised template y′. For example, given T ={(Baymax, voice, Scott Adsit)} and y =“Scott Adsit does the voice for Baymax”, it produces the set of triple templates T ′ ={(AGENT1, voice, PATIENT-1)} and the revised template y′ =“AGENT-1 does the voice for PATIENT-1”. Our method then collects all the sets of triple templates T ′ and revised templates y′ and retains them in a key-value store with y′ being a key and T ′ being a value. Creation of draft text Next, our method constructs a draft text x using a set of triple templates T ′ and a revised template y′. For simplicity, it only considers the use of either insertion or deletion in the text editing, and one can easily make an extension of it to a more complex 174 setting. Note that the process of data creation is reverse to that of text editing. Given a pair of T ′ and y′, our method retrieves another pair denoted as ˆT ′ and ˆx′, such that y′ and ˆx′ have the longest common subsequences. We refer to ˆx′ as a reference template. There are two possibilities; ˆT ′ is a subset or a superset of T ′. (We ignore the case in which they are identical.) Our method then manages to change y′ to a draft template denoted as x′ on the basis of the relation between T ′ and ˆT ′. If ˆT ′ ⊊T ′, then the draft template x′ created is for insertion, and if ˆT ′ ⊋T ′, then the draft template x′ created is for deletion. For insertion, the revised template y′ and the reference template ˆx′ share subsequences, and the set of triples T \ ˆT appear in y′ but not in ˆx′. Our method keeps the shared subsequences in y′, removes the subsequences in y′ about T \ ˆT , and copies the rest of words in y′, to create the draft template x′. 
Table 2a gives an example. The shared subsequences “AGENT-1 performed as PATIENT3 on BRIDGE-1 mission” are kept. The set of triple templates T \ ˆT is {(BRIDGE-1, operator, PATIENT-2)}. The subsequence “that was operated by PATIENT-2” is removed. Note that the subsequence “served” is not copied because it is not shared by y′ and ˆx′. For deletion, the revised template y′ and the reference template ˆx′ share subsequences. The set of triples ˆT \T appear in ˆx′ but not in y′. Our method retains the shared subsequences in y′, copies the subsequences in ˆx′ about ˆT \T , and copies the rest of words in y′, to create the draft template x′. Table 2b gives an example. The subsequences “AGENT-1 was created by BRIDGE-1 and PATIENT-2” are retained. The set of triple templates ˆT \T is {(AGENT-1, fullName, PATIENT-1)}. The subsequence “whose full name is PATIENT-1” is copied. Note that the subsequence “the character of” is not copied because it is not shared by y′ and ˆx′. After getting the draft template x′, our method lexicalizes it to obtain a draft text x, where the lexicons (entity words) are collected from the corresponding revised text y. We obtain two datasets with our method, referred to as WEBEDIT and ROTOEDIT, respectively. Table 3 gives the statistics of the datasets. In the WEBEDIT data, sometimes entities only appear in the subj’s of triples. In such cases, we also make them appear in the obj’s. To do so, we WEBEDIT ROTOEDIT TRAIN VALID TEST TRAIN VALID TEST #D 181k 23k 29k 27k 5.3k 4.9k #Wd 4.1M 495k 624k 4.7M 904k 839k #Wr 4.2M 525k 649k 5.6M 1.1M 1.0M #S 403k 49k 62k 209k 40k 36k Table 3: Statistics of WEBEDIT and ROTOEDIT, where #D is the number of instances, #Wd and #Wr are the total numbers of words in the draft texts and the revised texts, respectively, and #S is total the number of sentences. introduce an additional triple (ROOT, IsOf, subj) for each subj, where ROOT is a dummy entity. 4 FACTEDITOR In this section, we describe our proposed model for fact-based text editing referred to as FACTEDITOR. 4.1 Model Architecture FACTEDITOR transforms a draft text into a revised text based on given triples. The model consists of three components, a buffer for storing the draft text and its representations, a stream for storing the revised text and its representations, and a memory for storing the triples and their representations, as shown in Figure 1. FACTEDITOR scans the text in the buffer, copies the parts of text from the buffer into the stream if they are described in the triples in the memory, deletes the parts of the text if they are not mentioned in the triples, and inserts new parts of next into the stream which is only presented in the triples. The architecture of FACTEDITOR is inspired by those in sentence parsing Dyer et al. (2015); Watanabe and Sumita (2015). The actual processing of FACTEDITOR is to generate a sequence of words into the stream from the given sequence of words in the buffer and the set of triples in the memory. A neural network is employed to control the entire editing process. 4.2 Neural Network Initialization FACTEDITOR first initializes the representations of content in the buffer, stream, and memory. There is a feed-forward network associated with the memory, utilized to create the embeddings of triples. Let M denote the number of triples. 
The 175 embedding of triple tj, j = 1, · · · , M is calculated as tj = tanh(W t · [esubj j ; epred j ; eobj j ] + bt), where W t and bt denote parameters, esubj j , epred j , eobj j denote the embeddings of subject, predicate, and object of triple tj, and [ ; ] denotes vector concatenation. There is a bi-directional LSTM associated with the buffer, utilized to create the embeddings of words of draft text. The embeddings are obtained as b = BILSTM(x), where x = (x1, . . . , xN) is the list of embeddings of words and b = (b1, . . . , bN) is the list of representations of words, where N denotes the number of words. There is an LSTM associated with the stream for representing the hidden states of the stream. The first hidden state is initialized as s1 = tanh W s · "PN i=1 bi N ; PM j=1 tj M # + bs ! where W s and bs denotes parameters. Action prediction FACTEDITOR predicts an action at each time t using the LSTM. There are three types of action, namely Keep, Drop, and Gen. First, it composes a context vector ˜tt of triples at time t using attention ˜tt = M X j=1 αt,jtj where αt,j is a weight calculated as αt,j ∝exp  v⊤ α · tanh (W α · [st; bt; tj])  where vα and W α are parameters. Then, it creates the hidden state zt for action prediction at time t zt = tanh W z · [st; bt;˜tt] + bz  where W z and bz denote parameters. Next, it calculates the probability of action at P(at | zt) = softmax(W a · zt) where W a denotes parameters, and chooses the action having the largest probability. Stream Buffer st bt pop push tt~ (a) The Keep action, where the top embedding of the buffer bt is popped and the concatenated vector [˜tt; bt] is pushed into the stream LSTM. Stream Buffer st bt pop (b) The Drop action, where the top embedding of the buffer bt is popped and the state in the stream is reused at the next time step t + 1. Stream Buffer tt st bt Wp yt ~ push (c) The Gen action, where the concatenated vector [˜tt; W pyt] is pushed into the stream, and the top embedding of the buffer is reused at the next time step t + 1. Figure 1: Actions of FACTEDITOR. Action execution FACTEDITOR takes action based on the prediction result at time t. For Keep at time t, FACTEDITOR pops the top embedding bt in the buffer, and feeds the combination of the top embedding bt and the context vector of triples ˜tt into the stream, as shown in Fig. 1a. The state of stream is updated with the LSTM as st+1 = LSTM([˜tt; bt], st). FACTEDITOR also copies the top word in the buffer into the stream. For Drop at time t, FACTEDITOR pops the top embedding in the buffer and proceeds to the next state, as shown in Fig. 1b. The state of stream is updated as st+1 = st. Note that no word is inputted into the stream. For Gen at time t, FACTEDITOR does not pop the top embedding in the buffer. It feeds the 176 Draft text x Bakewell pudding is Dessert that can be served Warm or cold . Revised text y Bakewell pudding is Dessert that originates from Derbyshire Dales . Action sequence a Keep Keep Keep Keep Gen(originates) Gen(from) Gen(Derbyshire Dales) Drop Drop Drop Drop Keep Table 4: An example of action sequence derived from a draft text and revised text. combination of the context vector of triples ˜tt and the linearly projected embedding of word w into the stream, as shown in Fig. 1c. The state of stream is updated with the LSTM as st+1 = LSTM([˜tt; W pyt], st), where yt is the embedding of the generated word yt and W p denotes parameters. In addition, FACTEDITOR copies the generated word yt into the stream. 
FACTEDITOR continues the actions until the buffer becomes empty. Word generation FACTEDITOR generates a word yt at time t, when the action is Gen, Pgen(yt | zt) = softmax(W y · zt) where W y is parameters. To avoid generation of OOV words, FACTEDITOR exploits the copy mechanism. It calculates the probability of copying the object of triple tj Pcopy(oj | zt) ∝exp (v⊤ c · tanh(W c · [zt; tj])) where vc and W c denote parameters, and oj is the object of triple tj. It also calculates the probability of gating pgate = sigmoid(w⊤ g · zt + bg) where wg and bg are parameters. Finally, it calculates the probability of generating a word wt through either generation or copying, P(yt | zt) = pgatePgen(yt | zt) + (1 −pgate) M X j=1:oj=yt Pcopy(oj | zt), where it is assumed that the triples in the memory have the same subject and thus only objects need to be copied. 4.3 Model Learning The conditional probability of sequence of actions a = (a1, a2, · · · , aT ) given the set of triples T and the sequence of input words x can be written as P(a | T , x) = T Y t=1 P(at | zt) where P(at | zt) is the conditional probability of action at given state zt at time t and T denotes the number of actions. The conditional probability of sequence of generated words y = (y1, y2, · · · , yT ) given the sequence of actions a can be written as P(y | a) = T Y t=1 P(yt | at) where P(yt | at) is the conditional probability of generated word yt given action at at time t, which is calculated as P(yt | at) = ( P(yt | zt) if at = Gen 1 otherwise Note that not all positions have a generated word. In such a case, yt is simply a null word. The learning of the model is carried out via supervised learning. The objective of learning is to minimize the negative log-likelihood of P(a | T , x) and P(y | a) L(θ) = − T X t=1 {log P(at | zt) + log P(yt | at)} where θ denotes the parameters. A training instance consists of a pair of draft text and revised text, as well as a set of triples, denoted as x, y, and T respectively. For each instance, our method derives a sequence of actions denoted as a, in a similar way as that in (Dong et al., 2019). It first finds the longest common subsequence between x and y, and then selects an action of Keep, Drop, or Gen at each position, according to how y is obtained from x and T (cf., Tab. 4). Action Gen is preferred over action Drop when both are valid. 177 Table Encoder Decoder y T (a) Table-to-Text T Text Encoder Decoder y x (b) Text-to-Text Table Encoder Text Encoder Decoder y x T (c) ENCDECEDITOR Figure 2: Model architectures of the baselines. All models employ attention and copy mechanism. 4.4 Time Complexity The time complexity of inference in FACTEDITOR is O(NM), where N is the number of words in the buffer, and M is the number of triples. Scanning of data in the buffer is of complexity O(N). The generation of action needs the execution of attention, which is of complexity O(M). Usually, N is much larger than M. 4.5 Baseline We consider a baseline method using the encoderdecoder architecture, which takes the set of triples and the draft text as input and generates a revised text. We refer to the method as ENCDECEDITOR. The encoder of ENCDECEDITOR is the same as that of FACTEDITOR. The decoder is the standard attention and copy model, which creates and utilizes a context vector and predicts the next word at each time. The time complexity of inference in ENCDECEDITOR is O(N2 +NM) (cf.,Britz et al. (2017)). Note that in fact-based text editing, usually N is very large. 
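For illustration only (the numbers are made up): with a draft of N = 200 words and M = 20 triples, FACTEDITOR performs on the order of NM = 4,000 attention computations, whereas an encoder-decoder that attends over all N input positions at each of roughly N decoding steps needs on the order of N² + NM = 44,000, about an order of magnitude more.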
That means that ENCDECEDITOR is less efficient than FACTEDITOR. 5 Experiment We conduct experiments to make comparison between FACTEDITOR and the baselines using the two datasets WEBEDIT and ROTOEDIT. 5.1 Experiment Setup The main baseline is the encoder-decoder model ENCDECEDITOR, as explained above. We further consider three baselines, No-Editing, Table-to-Text, and Text-to-Text. In No-Editing, the draft text is directly used. In Table-to-Text, a revised text is generated from the triples using encoder-decoder. In Text-to-Text, a revised text is created from the draft text using the encoder-decoder model. Figure 2 gives illustrations of the baselines. We evaluate the results of revised texts by the models from the viewpoint of fluency and fidelity. We utilize ExactMatch (EM), BLEU (Papineni et al., 2002) and SARI (Xu et al., 2016) scores5 as evaluation metrics for fluency. We also utilize precision, recall, and F1 score as evaluation metrics for fidelity. For WEBEDIT, we extract the entities from the generated text and the reference text and then calculate the precision, recall, and F1 scores. For ROTOEDIT, we use the information extraction tool provided by Wiseman et al. (2017) for calculation of the scores. For the embeddings of subject and object for both datasets and the embedding of the predicate for ROTOEDIT, we simply use the embedding lookup table. For the embedding of the predicate for WEBEDIT, we first tokenize the predicate, lookup the embeddings of lower-cased words from the table, and use averaged embedding to deal with the OOV problem (Moryossef et al., 2019). We tune the hyperparameters based on the BLEU score on a development set. For WEBEDIT, we set the sizes of embeddings, buffers, and triples to 300, and set the size of the stream to 600. For ROTOEDIT, we set the size of embeddings to 100 and set the sizes of buffers, triples, and stream to 200. The initial learning rate is 2e-3, and AMSGrad is used for automatically adjusting the learning rate (Reddi et al., 2018). Our implementation makes use of AllenNLP (Gardner et al., 2018). 5.2 Experimental Results Quantitative evaluation We present the performances of our proposed model FACTEDITOR and the baselines on factbased text editing in Table 5. One can draw several conclusions from the results. First, our proposed model, FACTEDITOR, achieves significantly better performances than the main baseline, ENCDECEDITOR, in terms of almost all measures. In particular, FACTEDITOR 5We use a modified version of SARI where β equals 1.0, available at https://github.com/tensorflow/ tensor2tensor/blob/master/tensor2tensor/ utils/sari_hook.py 178 Model FLUENCY FIDELITY BLEU SARI KEEP ADD DELETE EM P% R% F1% Baselines No-Editing 66.67 31.51 78.62 3.91 12.02. 0. 84.49 76.34 80.21 Table-to-Text 33.75 43.83 51.44 27.86 52.19 5.78 98.23 83.72 90.40 Text-to-Text 63.61 58.73 82.62 25.77 67.80 6.22 81.93 77.16 79.48 Fact-based text editing ENCDECEDITOR 71.03 69.59 89.49 43.82 75.48 20.96 98.06 87.56 92.51 FACTEDITOR 75.68 72.20 91.84 47.69 77.07 24.80 96.88 89.74 93.17 (a) WEBEDIT Model FLUENCY FIDELITY BLEU SARI KEEP ADD DELETE EM P% R% F1% Baselines No-Editing 74.95 39.59 95.72 0.05 23.01 0. 92.92 65.02 76.51 Table-to-Text 24.87 23.30 39.12 14.78 16.00 0. 
48.01 24.28 32.33 Text-to-Text 78.07 60.25 97.29 13.04 70.43 0.02 63.62 41.08 49.92 Fact-based text editing ENCDECEDITOR 83.36 71.46 97.69 44.02 72.69 2.49 78.80 52.21 62.81 FACTEDITOR 84.43 74.72 98.41 41.50 84.24 2.65 78.84 52.30 63.39 (b) ROTOEDIT Table 5: Performances of FACTEDITOR and baselines on two datasets in terms of Fluency and Fidelity. EM stands for exact match. obtains significant gains in DELETE scores on both WEBEDIT and ROTOEDIT. Second, the fact-based text editing models (FACTEDITOR and ENCDECEDITOR) significantly improve upon the other models in terms of fluency scores, and achieve similar performances in terms of fidelity scores. Third, compared to No-Editing, Table-to-Text has higher fidelity scores, but lower fluency scores. Text-to-Text has almost the same fluency scores, but lower fidelity scores on ROTOEDIT. Qualitative evaluation We also manually evaluate 50 randomly sampled revised texts for WEBEDIT. We check whether the revised texts given by FACTEDITOR and ENCDECEDITOR include all the facts. We categorize the factual errors made by the two models. Table 6 shows the results. One can see that FACTEDITOR covers more facts than ENCDECEDITOR and has less factual errors than ENCDECEDITOR. FACTEDITOR has a larger number of correct editing (CQT) than ENCDECEDITOR for fact-based text editing. In contrast, ENCDECEDITOR often includes a larger number of unnecessary rephrasings (UPARA) than FACTEDITOR. Covered facts Factual errors CQT UPARA RPT MS USUP DREL ENCDECEDITOR 14 7 16 21 3 12 FACTEDITOR 24 4 9 19 1 3 Table 6: Evaluation results on 50 randomly sampled revised texts in WEBEDIT in terms of numbers of correct editing (CQT), unnecessary paraphrasing (UPARA), repetition (RPT), missing facts (MS), unsupported facts (USUP) and different relations (DREL) There are four types of factual errors: fact repetitions (RPT), fact missings (MS), fact unsupported (USUP), and relation difference (DREL). Both FACTEDITOR and ENCDECEDITOR often fail to insert missing facts (MS), but rarely insert unsupported facts (USUP). ENCDECEDITOR often generates the same facts multiple times (RPT) or facts in different relations (DREL). In contrast, FACTEDITOR can seldomly make such errors. Table 7 shows an example of results given by ENCDECEDITOR and FACTEDITOR. The revised texts of both ENCDECEDITOR and FACTEDITOR appear to be fluent, but that of FACTEDITOR has higher fidelity than that of ENCDECEDITOR. ENCDECEDITOR cannot effectively eliminate the 179 Set of triples {(Ardmore Airport, runwayLength, 1411.0), (Ardmore Airport, 3rd runway SurfaceType, Poaceae), (Ardmore Airport, operatingOrganisation, Civil Aviation Authority of New Zealand), (Ardmore Airport, elevationAboveTheSeaLevel, 34.0), (Ardmore Airport, runwayName, 03R/21L)} Draft text Ardmore Airport , ICAO Location Identifier UTAA . Ardmore Airport 3rd runway is made of Poaceae and Ardmore Airport . 03R/21L is 1411.0 m long and Ardmore Airport is 34.0 above sea level . Revised text Ardmore Airport is operated by Civil Aviation Authority of New Zealand . Ardmore Airport 3rd runway is made of Poaceae and Ardmore Airport name is 03R/21L . 03R/21L is 1411.0 m long and Ardmore Airport is 34.0 above sea level . ENCDECEDITOR Ardmore Airport , ICAO Location Identifier UTAA , is operated by Civil Aviation Authority of New Zealand . Ardmore Airport 3rd runway is made of Poaceae and Ardmore Airport . 03R/21L is 1411.0 m long and Ardmore Airport is 34.0 m long . FACTEDITOR Ardmore Airport is operated by Civil Aviation Authority of New Zealand . 
Ardmore Airport 3rd runway is made of Poaceae and Ardmore Airport . 03R/21L is 1411.0 m long and Ardmore Airport is 34.0 above sea level . Table 7: Example of generated revised texts given by ENCDECEDITOR and FACTEDITOR on WEBEDIT. Entities in green appear in both the set of triples and the draft text. Entities in orange only appear in the draft text. Entities in blue should appear in the revised text but do not appear in the draft text. WEBEDIT ROTOEDIT Table-to-Text 4,083 1,834 Text-to-Text 2,751 581 ENCDECEDITOR 2,487 505 FACTEDITOR 3,295 1,412 Table 8: Runtime analysis (# of words/second). Tableto-Text always shows the fastest performance (Boldfaced). FACTEDITOR shows the second fastest runtime performance (Underlined). description about an unsupported fact (in orange) appearing in the draft text. In contrast, FACTEDITOR can deal with the problem well. In addition, ENCDECEDITOR conducts an unnecessary substitution in the draft text (underlined). FACTEDITOR tends to avoid such unnecessary editing. Runtime analysis We conduct runtime analysis on FACTEDITOR and the baselines in terms of number of processed words per second, on both WEBEDIT and ROTOEDIT. Table 8 gives the results when the batch size is 128 for all methods. Table-to-Text is the fastest, followed by FACTEDITOR. FACTEDITOR is always faster than ENCDECEDITOR, apparently because it has a lower time complexity, as explained in Section 4. The texts in WEBEDIT are relatively short, and thus FACTEDITOR and ENCDECEDITOR have similar runtime speeds. In contrast, the texts in ROTOEDIT are relatively long, and thus FACTEDITOR executes approximately two times faster than ENCDECEDITOR. 6 Conclusion In this paper, we have defined a new task referred to as fact-based text editing and made two contributions to research on the problem. First, we have proposed a data construction method for fact-based text editing and created two datasets. Second, we have proposed a model for fact-based text editing, named FACTEDITOR, which performs the task by generating a sequence of actions. Experimental results show that the proposed model FACTEDITOR performs better and faster than the baselines, including an encoder-decoder model. Acknowledgments We would like to thank the reviewers for their insightful comments. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural Machine Translation by Jointly Learning to Align and Translate. In International Conference on Learning Representations. Denny Britz, Melody Guan, and Minh-Thang Luong. 2017. Efficient Attention using a Fixed-Size Memory Representation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 392–400, Copenhagen, Denmark. Association for Computational Linguistics. Kyunghyun Cho, Bart van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning Phrase Representations using RNN Encoder– Decoder for Statistical Machine Translation. In 180 Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724–1734, Doha, Qatar. Association for Computational Linguistics. William B. Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005). Yue Dong, Zichao Li, Mehdi Rezagholizadeh, and Jackie Chi Kit Cheung. 2019. EditNTS: An Neural Programmer-Interpreter Model for Sentence Simplification through Explicit Editing. 
In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3393–3402, Florence, Italy. Association for Computational Linguistics. Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. TransitionBased Dependency Parsing with Stack Long ShortTerm Memory. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 334–343, Beijing, China. Association for Computational Linguistics. Claire Gardent, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. 2017. Creating training corpora for NLG micro-planners. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 179–188, Vancouver, Canada. Association for Computational Linguistics. Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Peters, Michael Schmitz, and Luke Zettlemoyer. 2018. AllenNLP: A Deep Semantic Natural Language Processing Platform. In Proceedings of Workshop for NLP Open Source Software (NLP-OSS), pages 1– 6, Melbourne, Australia. Association for Computational Linguistics. Albert Gatt and Emiel Krahmer. 2018. Survey of the State of the Art in Natural Language Generation: Core tasks, applications and evaluation. Journal of Artificial Intelligence Research (JAIR), 61:65–170. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1631–1640, Berlin, Germany. Association for Computational Linguistics. Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. 2016. Pointing the unknown words. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 140–149, Berlin, Germany. Association for Computational Linguistics. Kelvin Guu, Tatsunori B Hashimoto, Yonatan Oren, and Percy Liang. 2018. Generating Sentences by Editing Prototypes. Transactions of the Association for Computational Linguistics, 6:437–450. Tatsunori B Hashimoto, Kelvin Guu, Yonatan Oren, and Percy S Liang. 2018. A Retrieve-and-Edit Framework for Predicting Structured Outputs. In Advances in Neural Information Processing Systems, pages 10052–10062. Curran Associates, Inc. Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P. Xing. 2017. Toward Controlled Generation of Text. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 1587–1596, International Convention Centre, Sydney, Australia. PMLR. Kentaro Inui, Atsushi Fujita, Tetsuro Takahashi, Ryu Iida, and Tomoya Iwakura. 2003. Text simplification for reading assistance: A project note. In Proceedings of the Second International Workshop on Paraphrasing, pages 9–16, Sapporo, Japan. Association for Computational Linguistics. Hayate Iso, Yui Uehara, Tatsuya Ishigaki, Hiroshi Noji, Eiji Aramaki, Ichiro Kobayashi, Yusuke Miyao, Naoaki Okazaki, and Hiroya Takamura. 2019. Learning to Select, Track, and Generate for Data-to-Text. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL), pages 2102–2113, Florence, Italy. Kevin Knight and Ishwar Chander. 1994. Automated Postediting of Documents. 
In Proceedings of the AAAI Conference on Artificial Intelligence., volume 94, pages 779–784. R´emi Lebret, David Grangier, and Michael Auli. 2016. Neural Text Generation from Structured Data with Application to the Biography Domain. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1203–1213, Austin, Texas. Association for Computational Linguistics. Zichao Li, Xin Jiang, Lifeng Shang, and Hang Li. 2018. Paraphrase Generation with Deep Reinforcement Learning. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3865–3878, Brussels, Belgium. Association for Computational Linguistics. Eric Malmi, Sebastian Krause, Sascha Rothe, Daniil Mirylenka, and Aliaksei Severyn. 2019. Encode, Tag, Realize: High-Precision Text Editing. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5053– 5064, Hong Kong, China. Association for Computational Linguistics. 181 Amit Moryossef, Yoav Goldberg, and Ido Dagan. 2019. Step-by-Step: Separating Planning from Realization in Neural Data-to-Text Generation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2267–2277, Minneapolis, Minnesota. Association for Computational Linguistics. Hwee Tou Ng, Siew Mei Wu, Ted Briscoe, Christian Hadiwinoto, Raymond Hendy Susanto, and Christopher Bryant. 2014. The CoNLL-2014 Shared Task on Grammatical Error Correction. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning: Shared Task, pages 1– 14, Baltimore, Maryland. Association for Computational Linguistics. Jekaterina Novikova, Ondˇrej Duˇsek, and Verena Rieser. 2017. The E2E Dataset: New Challenges For Endto-End Generation. In Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue, pages 201–206, Saarbr¨ucken, Germany. Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Hao Peng, Ankur Parikh, Manaal Faruqui, Bhuwan Dhingra, and Dipanjan Das. 2019. Text Generation with Exemplar-based Adaptive Decoding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2555–2565, Minneapolis, Minnesota. Association for Computational Linguistics. Ratish Puduppully, Li Dong, and Mirella Lapata. 2019. ”Data-to-text Generation with Entity Modeling”. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2023–2035, Florence, Italy. Association for Computational Linguistics. Sashank J Reddi, Satyen Kale, and Sanjiv Kumar. 2018. On the convergence of adam and beyond. In International Conference on Learning Representations. Scott Reed and Nando De Freitas. 2016. Neural Programmer-Interpreters. In International Conference on Learning Representations. Ehud Reiter and Robert Dale. 2000. Building Natural Language Generation Systems. Studies in Natural Language Processing. Cambridge University Press. 
Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2017. Style Transfer from Non-Parallel Text by Cross-Alignment. In Advances in Neural Information Processing Systems 30, pages 6830–6841. Curran Associates, Inc. Michel Simard, Cyril Goutte, and Pierre Isabelle. 2007. Statistical Phrase-Based Post-Editing. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference, pages 508–515, Rochester, New York. Association for Computational Linguistics. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to Sequence Learning with Neural Networks. In Advances in Neural Information Processing Systems 27, pages 3104–3112. Curran Associates, Inc. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is All you Need. In Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc. Thuy-Trang Vu and Gholamreza Haffari. 2018. Automatic Post-Editing of Machine Translation: A Neural Programmer-Interpreter Approach. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3048–3053, Brussels, Belgium. Association for Computational Linguistics. Taro Watanabe and Eiichiro Sumita. 2015. Transitionbased neural constituent parsing. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1169–1179, Beijing, China. Association for Computational Linguistics. Sam Wiseman, Stuart Shieber, and Alexander Rush. 2017. ”Challenges in Data-to-Document Generation”. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2253–2263, Copenhagen, Denmark. Association for Computational Linguistics. Sander Wubben, Antal van den Bosch, and Emiel Krahmer. 2012. Sentence simplification by monolingual machine translation. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1015– 1024, Jeju Island, Korea. Association for Computational Linguistics. Wei Xu, Courtney Napoles, Ellie Pavlick, Quanze Chen, and Chris Callison-Burch. 2016. Optimizing Statistical Machine Translation for Text Simplification. Transactions of the Association for Computational Linguistics, 4:401–415. Diyi Yang, Aaron Halfaker, Robert Kraut, and Eduard Hovy. 2017. Identifying Semantic Edit Intentions from Revisions in Wikipedia. In Proceedings of the 2017 Conference on Empirical Methods 182 in Natural Language Processing, pages 2000–2010, Copenhagen, Denmark. Association for Computational Linguistics. Pengcheng Yin, Graham Neubig, Miltiadis Allamanis, Marc Brockschmidt, and Alexander L Gaunt. 2019. Learning to Represent Edits. In International Conference on Learning Representations. Sanqiang Zhao, Rui Meng, Daqing He, Andi Saptono, and Bambang Parmanto. 2018. Integrating Transformer and Paraphrase Rules for Sentence Simplification. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3164–3173, Brussels, Belgium. Association for Computational Linguistics. Wei Zhao, Liang Wang, Kewei Shen, Ruoyu Jia, and Jingming Liu. 2019. ”Improving Grammatical Error Correction via Pre-Training a Copy-Augmented Architecture with Unlabeled Data”. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 156–165, Minneapolis, Minnesota. Association for Computational Linguistics.
2020
17
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1882–1892 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 1882 BPE-Dropout: Simple and Effective Subword Regularization Ivan Provilkov∗1,2 Dmitrii Emelianenko∗1,3 Elena Voita4,5 1Yandex, Russia 2Moscow Institute of Physics and Technology, Russia 3National Research University Higher School of Economics, Russia 4University of Edinburgh, Scotland 5University of Amsterdam, Netherlands {iv-provilkov, dimdi-y, lena-voita}@yandex-team.ru Abstract Subword segmentation is widely used to address the open vocabulary problem in machine translation. The dominant approach to subword segmentation is Byte Pair Encoding (BPE), which keeps the most frequent words intact while splitting the rare ones into multiple tokens. While multiple segmentations are possible even with the same vocabulary, BPE splits words into unique sequences; this may prevent a model from better learning the compositionality of words and being robust to segmentation errors. So far, the only way to overcome this BPE imperfection, its deterministic nature, was to create another subword segmentation algorithm (Kudo, 2018). In contrast, we show that BPE itself incorporates the ability to produce multiple segmentations of the same word. We introduce BPE-dropout – simple and effective subword regularization method based on and compatible with conventional BPE. It stochastically corrupts the segmentation procedure of BPE, which leads to producing multiple segmentations within the same fixed BPE framework. Using BPE-dropout during training and the standard BPE during inference improves translation quality up to 2.3 BLEU compared to BPE and up to 0.9 BLEU compared to the previous subword regularization. 1 Introduction Using subword segmentation has become de-facto standard in Neural Machine Translation (Bojar et al., 2018; Barrault et al., 2019). Byte Pair Encoding (BPE) (Sennrich et al., 2016) is the dominant approach to subword segmentation. It keeps the common words intact while splitting the rare and unknown ones into a sequence of subword units. This potentially allows a model to make ∗Equal contribution. use of morphology, word composition and transliteration. BPE effectively deals with an openvocabulary problem and is widely used due to its simplicity. There is, however, a drawback of BPE in its deterministic nature: it splits words into unique subword sequences, which means that for each word a model observes only one segmentation. Thus, a model is likely not to reach its full potential in exploiting morphology, learning the compositionality of words and being robust to segmentation errors. Moreover, as we will show further, subwords into which rare words are segmented end up poorly understood. A natural way to handle this problem is to enable multiple segmentation candidates. This was initially proposed by Kudo (2018) as a subword regularization – a regularization method, which is implemented as an on-the-fly data sampling and is not specific to NMT architecture. Since standard BPE produces single segmentation, to realize this regularization the author had to propose a new subword segmentation, different from BPE. However, the introduced approach is rather complicated: it requires training a separate segmentation unigram language model, using EM and Viterbi algorithms, and forbids using conventional BPE. In contrast, we show that BPE itself incorporates the ability to produce multiple segmentations of the same word. 
BPE builds a vocabulary of subwords and a merge table, which specifies which subwords have to be merged into a bigger subword, as well as the priority of the merges. During segmentation, words are first split into sequences of characters, then the learned merge operations are applied to merge the characters into larger, known symbols, till no merge can be done (Figure 1(a)). We introduce BPE-dropout – a subword regularization method based on and compatible with conventional BPE. It uses a vocabulary and a 1883 (a) (b) Figure 1: Segmentation process of the word ‘unrelated’ using (a) BPE, (b) BPE-dropout. Hyphens indicate possible merges (merges which are present in the merge table); merges performed at each iteration are shown in green, dropped – in red. merge table built by BPE, but at each merge step, some merges are randomly dropped. This results in different segmentations for the same word (Figure 1(b)). Our method requires no segmentation training in addition to BPE and uses standard BPE at test time, therefore is simple. BPE-dropout is superior compared to both BPE and Kudo (2018) on a wide range of translation tasks, therefore is effective. Our key contributions are as follows: • We introduce BPE-dropout – a simple and effective subword regularization method; • We show that our method outperforms both BPE and previous subword regularization on a wide range of translation tasks; • We analyze how training with BPE-dropout affects a model and show that it leads to a better quality of learned token embeddings and to a model being more robust to noisy input. 2 Background In this section, we briefly describe BPE and the concept of subword regularization. We assume that our task is machine translation, where a model needs to predict the target sentence Y given the source sentence X, but the methods we describe are not task-specific. 2.1 Byte Pair Encoding (BPE) To define a segmentation procedure, BPE (Sennrich et al., 2016) builds a token vocabulary and a merge table. The token vocabulary is initialized with the character vocabulary, and the merge table is initialized with an empty table. First, each word is represented as a sequence of tokens plus a special end of word symbol. Then, the method iteratively counts all pairs of tokens and merges the most frequent pair into a new token. This token is added to the vocabulary, and the merge operation is added to the merge table. This is done until the desired vocabulary size is reached. The resulting merge table specifies which subwords have to be merged into a bigger subword, as well as the priority of the merges. In this way, it defines the segmentation procedure. First, a word is split into distinct characters plus the end of word symbol. Then, the pair of adjacent tokens which has the highest priority is merged. This is done iteratively until no merge from the table is available (Figure 1(a)). 2.2 Subword regularization Subword regularization (Kudo, 2018) is a training algorithm which integrates multiple segmentation candidates. Instead of maximizing log-likelihood, this algorithm maximizes log-likelihood marginalized over different segmentation candidates. Formally, L = X (X,Y )∈D E x∼P(x|X) y∼P(y|Y ) log P(y|x, θ), (1) where x and y are sampled segmentation candidates for sentences X and Y respectively, P(x|X) and P(y|Y ) are the probability distributions the candidates are sampled from, and θ is the set of model parameters. In practice, at each training step only one segmentation candidate is sampled. 
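The greedy merge-table segmentation of Section 2.1 is small enough to sketch directly; the snippet below is a minimal illustration, not the reference BPE implementation, and the toy merge table is an assumption. The dropout_p switch corresponds to the merge-dropping behaviour illustrated in Figure 1(b).

```python
import random

def bpe_segment(word, merge_ranks, dropout_p=0.0):
    # merge_ranks: {(left, right): priority}, lower value = merged earlier.
    # dropout_p = 0     -> standard, deterministic BPE segmentation
    # 0 < dropout_p < 1 -> each candidate merge is dropped with probability dropout_p,
    #                      so the same word can be segmented differently on each call
    #                      (each occurrence of a merge is decided independently).
    tokens = list(word)                                   # start from single characters
    while True:
        candidates = [(merge_ranks[pair], i)              # merges present in the table
                      for i, pair in enumerate(zip(tokens, tokens[1:]))
                      if pair in merge_ranks]
        candidates = [c for c in candidates if random.random() >= dropout_p]
        if not candidates:                                # no applicable merge left
            return tokens
        _, i = min(candidates)                            # highest-priority merge
        tokens[i:i + 2] = [tokens[i] + tokens[i + 1]]     # apply it

# Toy merge table for illustration only (not a trained vocabulary).
merges = {("u", "n"): 0, ("r", "e"): 1, ("l", "a"): 2, ("re", "la"): 3,
          ("t", "e"): 4, ("te", "d"): 5, ("rela", "ted"): 6, ("un", "related"): 7}
print(bpe_segment("unrelated", merges))                   # deterministic: ['unrelated']
print(bpe_segment("unrelated", merges, dropout_p=0.5))    # e.g. ['un', 're', 'la', 'ted']
```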
Since standard BPE segmentation is deterministic, to realize this regularization Kudo (2018) proposed a new subword segmentation. The introduced approach requires training a separate segmentation unigram language model to predict the probability of each subword, EM algorithm to optimize the vocabulary, and Viterbi algorithm to make samples of segmentations. Subword regularization was shown to achieve significant improvements over the method using a single subword sequence. However, the proposed method is rather complicated and forbids using 1884 conventional BPE. This may prevent practitioners from using subword regularization. 3 Our Approach: BPE-Dropout We show that to realize subword regularization it is not necessary to reject BPE since multiple segmentation candidates can be generated within the BPE framework. We introduce BPE-dropout – a method which exploits the innate ability of BPE to be stochastic. It alters the segmentation procedure while keeping the original BPE merge table. During segmentation, at each merge step some merges are randomly dropped with the probability p. This procedure is described in Algorithm 1. Algorithm 1: BPE-dropout current split ←characters from input word; do merges ←all possible merges1 of tokens from current split; for merge from merges do /* The only difference from BPE */ remove merge from merges with the probability p; end if merges is not empty then merge ←select the merge with the highest priority from merges; apply merge to current split; end while merges is not empty; return current split; If p is set to 0, the segmentation is equivalent to the standard BPE; if p is set to 1, the segmentation splits words into distinct characters. The values between 0 and 1 can be used to control the segmentation granularity. We use p > 0 (usually p = 0.1) in training time to expose a model to different segmentations and p = 0 during inference, which means that at inference time we use the original BPE. We discuss the choice of the value of p in Section 5. When some merges are randomly forbidden during segmentation, words end up segmented in different subwords; see for example Figure 1(b). We hypothesize that exposing a model to different 1In case of multiple occurrences of the same merge in a word (for example, m-e-r-g-e-r has two occurrences of the merge (e, r)), we decide independently for each occurrence whether to drop it or not. segmentations may result in better understanding of the whole words as well as their subword units; we will verify this in Section 6. 4 Experimental setup 4.1 Baselines Our baselines are the standard BPE and the subword regularization by Kudo (2018). Subword regularization by Kudo (2018) has segmentation sampling hyperparameters l and α. l specifies how many best segmentations for each word are produced before sampling one of them, α controls the smoothness of the sampling distribution. In the original paper (l = ∞, α = 0.2/0.5) and (l = 64, α = 0.1) were shown to perform best on different datasets. Since overall they show comparable results, in all experiments we use (l = 64, α = 0.1). 4.2 Vocabularies There are two ways of building vocabulary for models trained with BPE-dropout: (1) take the vocabulary built by BPE; then the segmented with BPE-dropout text will contain a small number of unknown tokens (UNKs)2; (2) add to the BPE vocabulary all tokens which can appear when segmenting with BPE-dropout. In the preliminary experiments, we did not observe any difference in quality; therefore, either of the methods can be used. 
We choose the first option to stay in the same setting as the standard BPE. Besides, a model exposed to some UNKs in training can be more reliable for practical applications where unknown tokens can be present. 4.3 Data sets and preprocessing We conduct our experiments on a wide range of datasets with different corpora sizes and languages; information about the datasets is summarized in Table 1. These datasets are used in the main experiments (Section 5.1) and were chosen to match the ones used in the prior work (Kudo, 2018). In the additional experiments (Sections 5.2-5.5), we also use random subsets of the WMT14 English-French data; in this case, we specify dataset size for each experiment. Prior to segmentation, we preprocess all 2For example, for the English part of the IWSLT15 EnVi corpora, these UNKs make up 0.00585 and 0.00085 of all tokens for 32k and 4k vocabularies, respectively. 1885 Number of sentences Voc size Batch size The value of p (train/dev/test) in BPE-dropout IWSLT15 En ↔Vi 133k / 1553 / 1268 4k 4k 0.1 / 0.1 En ↔Zh 209k / 887 / 1261 4k / 16k 4k 0.1 / 0.6 IWSLT17 En ↔Fr 232k / 890 / 1210 4k 4k 0.1 / 0.1 En ↔Ar 231k / 888 / 1205 4k 4k 0.1 / 0.1 WMT14 En ↔De 4.5M / 3000 / 3003 32k 32k 0.1 / 0.1 ASPEC En ↔Ja 2M / 1700 / 1812 16k 32k 0.1 / 0.6 Table 1: Overview of the datasets and dataset-dependent hyperparametes; values of p are shown in pairs: source language / target language. (We explain the choice of the value of p for BPE-dropout in Section 5.3.) datasets with the standard Moses toolkit.3 However, Chinese and Japanese have no explicit word boundaries, and Moses tokenizer does not segment sentences into words; for these languages, subword segmentations are trained almost from unsegmented raw sentences. Relying on a recent study of how the choice of vocabulary size influences translation quality (Ding et al., 2019), we choose vocabulary size depending on the dataset size (Table 1). In training, translation pairs were batched together by approximate sequence length. For the main experiments, the values of batch size we used are given in Table 1 (batch size is the number of source tokens). In the experiments in Sections 5.2, 5.3 and 5.4, for datasets not larger than 500k sentence pairs we use vocabulary size and batch size of 4k, and 32k for the rest.4 In the main text, we train all models on lowercased data. In the appendix, we provide additional experiments with the original case and casesensitive BLEU. 4.4 Model and optimizer The NMT system used in our experiments is Transformer base (Vaswani et al., 2017). More precisely, the number of layers is N = 6 with h = 8 parallel attention layers, or heads. The dimensionality of input and output is dmodel = 512, and the inner-layer of feed-forward networks has dimensionality dff = 2048. We use regularization and optimization procedure as described in Vaswani et al. (2017). 3https://github.com/moses-smt/ mosesdecoder 4Large batch size can be reached by using several of GPUs or by accumulating the gradients for several batches and then making an update. 4.5 Training time We train models till convergence. For all experiments, we provide number of training batches in the appendix (Tables 6 and 7). 4.6 Inference To produce translations, for all models, we use beam search with the beam of 4 and length normalization of 0.6. In addition to the main results, Kudo (2018) also report scores using n-best decoding. 
To translate a sentence, this strategy produces multiple segmentations of a source sentence, generates a translation for each of them, and rescores the obtained translations. While this could be an interesting future work to investigate different sampling and rescoring strategies, in the current study we use 1-best decoding to fit in the standard decoding paradigm. 4.7 Evaluation For evaluation, we average 5 latest checkpoints and use BLEU (Papineni et al., 2002) computed via SacreBleu5 (Post, 2018). For Chinese, we add option --tok zh to SacreBLEU. For Japanese, we use character-based BLEU. 5 Experiments 5.1 Main results The results are provided in Table 2. For all datasets, BPE-dropout improves significantly over the standard BPE: more than 1.5 BLEU for En-Vi, Vi-En, En-Zh, Zh-En, Ar-En, De-En, and 0.5-1.4 5Our SacreBLEU signature is: BLEU+case.lc+ lang.[src-lang]-[dst-lang]+numrefs.1+ smooth.exp+tok.13a+version.1.3.6 1886 BPE Kudo (2018) BPE-dropout IWSLT15 En-Vi 31.78 32.43 33.27 Vi-En 30.83 32.36 32.99 En-Zh 20.48 23.01 22.84 Zh-En 19.72 21.10 21.45 IWSLT17 En-Fr 39.37 39.45 40.02 Fr-En 38.18 38.88 39.39 En-Ar 13.89 14.43 15.05 Ar-En 31.90 32.80 33.72 WMT14 En-De 27.41 27.82 28.01 De-En 32.69 33.65 34.19 ASPEC En-Ja 54.51 55.46 55.00 Ja-En 30.77 31.23 31.29 Table 2: BLEU scores. Bold indicates the best score and all scores whose difference from the best is not statistically significant (with p-value of 0.05). (Statistical significance is computed via bootstrapping (Koehn, 2004).) BLEU for the rest. The improvements are especially prominent for smaller datasets; we will discuss this further in Section 5.4. Compared to Kudo (2018), among the 12 datasets we use BPE-dropout is beneficial for 8 datasets with improvements up to 0.92 BLEU, is not significantly different for 3 datasets and underperforms only on En-Ja. While Kudo (2018) uses another segmentation, our method operates within the BPE framework and changes only the way a model is trained. Thus, lower performance of BPE-dropout on En-Ja and only small or insignificant differences for Ja-En, En-Zh and ZhEn suggest that Japanese and Chinese may benefit from a language-specific segmentation. Note also that Kudo (2018) report larger improvements over BPE from using their method than we show in Table 2. This might be explained by the fact that Kudo (2018) used large vocabulary size (16k, 32k), which has been shown counterproductive for small datasets (Sennrich and Zhang, 2019; Ding et al., 2019). While this may not be the issue for models trained with subword regularization (see Section 5.4), this causes drastic drop in performance of the baselines. BPE BPE-dropout src-only dst-only both 250k 26.94 27.98 27.71 28.40 500k 29.28 30.12 29.40 29.89 1m 30.53 31.09 30.62 31.23 4m 33.38 33.89 33.46 33.85 16m 34.37 34.82 33.66 Table 3: BLEU scores for models trained with BPEdropout on a single side of a translation pair or on both sides. Models trained on random subsets of WMT14 En-Fr dataset. Bold indicates the best score and all scores whose difference from the best is not statistically significant (with p-value of 0.05). 5.2 Single side vs full regularization In this section, we investigate whether BPEdropout should be used only on one side of a translation pair or for both source and target languages. We select random subsets of different sizes from WMT14 En-Fr data to understand how the results are affected by the amount of data. 
We show that: • for small and medium datasets, full regularization performs best; • for large datasets, BPE-dropout should be used only on the source side. Since full regularization performs the best for most of the considered dataset sizes, in the subsequent sections we use BPE-dropout on both source and target sides. 5.2.1 Small and medium datasets: use full regularization Table 3 indicates that using BPE-dropout on the source side is more beneficial than on the target side; for the datasets not smaller than 0.5m sentence pairs, BPE-dropout can be used only the source side. We can speculate that it is more important for the model to understand a source sentence than being exposed to different ways to generate the same target sentence. 5.2.2 Large datasets: use only for source For larger corpora (e.g., starting from 4m instances), it is better to use BPE-dropout only on the source side (Table 3). Interestingly, using BPE-dropout for both source and target languages hurts performance for large datasets. 1887 Figure 2: BLEU scores for the models trained with BPE-dropout with different values of p. WMT14 EnFr, 500k sentence pairs. 5.3 Choice of the value of p Figure 2 shows BLEU scores for the models trained on BPE-dropout with different values of p (the probability of a merge being dropped). Models trained with high values of p are unable to translate due to a large mismatch between training segmentation (which is close to char-level) and inference segmentation (BPE). The best quality is achieved with p = 0.1. In our experiments, we use p = 0.1 for all languages except for Chinese and Japanese. For Chinese and Japanese, we take the value of p = 0.6 to match the increase in length of segmented sentences for other languages.6 5.4 Varying corpora and vocabulary size Now we will look more closely at how the improvement from using BPE-dropout depends on corpora and vocabulary size. First, we see that BPE-dropout performs best for all dataset sizes (Figure 3). Next, models trained with subword regularization are less sensitive to the choice of vocabulary size: differences in performance of models with 4k and 32k vocabulary are much less than for models trained with the standard BPE. This makes BPE-dropout attractive since it allows (i) not to tune vocabulary size for each dataset, (ii) choose vocabulary size depending on the desired model properties: models with smaller vocabularies are beneficial in terms of number of parameters, models with larger vocabularies are beneficial in terms of inference time.7 Finally, we see that the effect from using 6Formally, for English/French/etc. with BPE-dropout, p = 0.1 sentences become on average about 1.25 times longer compared to segmented with BPE; for Chinese and Japanese, we need to set the value of p to 0.6 to achieve the same increase. 7Table 4 shows that inference for models with 4k vocabFigure 3: BLEU scores. Models trained on random subsets of WMT14 En-Fr. BPE-dropout vanishes when a corpora size gets bigger. This is not surprising: the effect of any regularization is less in high-resource settings; however, as we will show later in Section 6.3, when applied to noisy source, models trained with BPEdropout show substantial improvements up to 2 BLEU even in high-resource settings. Note that for larger corpora, we recommend using BPE-dropout only for source language (Section 5.2). 
5.5 Inference time and length of generated sequences Since BPE-dropout produces more fine-grained segmentation, sentences segmented with BPEdropout are longer; distribution of sentence lengths are shown in Figure 4 (a) (with p = 0.1, on average about 1.25 times longer). Thus there is a potential danger that models trained with BPEdropout may tend to use more fine-grained segmentation in inference and hence to slow inference down. However, in practice this is not the case: distributions of lengths of generated translations for models trained with BPE and with BPEdropout are close (Figure 4 (b)).8 Table 4 confirms these observations and shows that inference time of models trained with BPEdropout is not substantially different from the ones trained with BPE. ulary is more than 1.4 times longer than models with 32k vocabulary. 8This is the result of using beam search: while samples from a model reproduce training data distribution quite well, beam search favors more frequent tokens (Ott et al., 2018). Therefore, beam search translations tend not to use less frequent fine-grained segmentation. 1888 (a) (b) Figure 4: Distributions of length (in tokens) of (a) the French part of WMT14 En-Fr test set segmented using BPE or BPE-dropout; and (b) the generated translations for the same test set by models trained with BPE or BPE-dropout. voc size BPE BPE-dropout 32k 1.0 1.03 4k 1.44 1.46 Table 4: Relative inference time of models trained with different subword segmentation methods. Results obtained by (1) computing averaged over 1000 runs time needed to translate WMT14 En-Fr test set, (2) dividing all results by the smallest of the obtained times. 6 Analysis In this section, we analyze qualitative differences between models trained with BPE and BPEdropout. We find, that • when using BPE, frequent sequences of characters rarely appear in a segmented text as individual tokens rather than being a part bigger ones; BPE-dropout alleviates this issue; • by analyzing the learned embedding spaces, we show that using BPE-dropout leads to a better understanding of rare tokens; • as a consequence of the above, models trained with BPE-dropout are more robust to misspelled input. 6.1 Substring frequency Here we highlight one of the drawbacks of BPE’s deterministic nature: since it splits words into unique subword sequences, only rare words are split into subwords. This forces frequent sequences of characters to mostly appear in a segmented text as part of bigger tokens, and not as individual tokens. To show this, for each token in the BPE vocabulary we calculate how often it appears in a segmented text as an individual token and as a sequence of characters (which may Figure 5: Distribution of token to substring ratio for texts segmented using BPE or BPE-dropout for the same vocabulary of 32k tokens; only 10% most frequent substrings are shown. (Token to substring ratio of a token is the ratio between its frequency as an individual token and as a sequence of characters.) be part of a bigger token or an individual token). Figure 5 shows distribution of the ratio between substring frequency as an individual token and as a sequence of characters (for top-10% most frequent substrings). For frequent substrings, the distribution of token to substring ratio is clearly shifted to zero, which confirms our hypothesis: frequent sequences of characters rarely appear in a segmented text as individual tokens. 
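The token-to-substring ratio can be computed directly from a segmented corpus; the sketch below is one naive reading of that counting procedure (character occurrences are counted inside tokens only, and no attempt is made at efficiency), with hypothetical inputs.

```python
from collections import Counter

def token_to_substring_ratio(segmented_corpus, vocab):
    # segmented_corpus: iterable of token lists (one list per sentence)
    # vocab: BPE subword vocabulary (iterable of strings)
    token_freq = Counter()      # occurrences as an individual token
    substring_freq = Counter()  # occurrences as a character sequence inside any token
    for sent in segmented_corpus:
        token_freq.update(sent)
        for tok in sent:
            for sub in vocab:
                # possibly overlapping occurrences of sub inside tok
                substring_freq[sub] += sum(tok[i:i + len(sub)] == sub
                                           for i in range(len(tok) - len(sub) + 1))
    return {sub: token_freq[sub] / substring_freq[sub]
            for sub in vocab if substring_freq[sub] > 0}
```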
When a text is segmented using BPE-dropout with the same vocabulary, this distribution significantly shifts away from zero, meaning that frequent substrings appear in a segmented text as individual tokens more often. 6.2 Properties of the learned embeddings Now we will analyze embedding spaces learned by different models. We take embeddings learned by models trained with BPE and BPE-dropout and for each token look at the closest neighbors in the corresponding embedding space. Figure 6 shows several examples. In contrast to BPE, nearest neighbours of a token in the embedding space of BPE-dropout are often tokens that share sequences of characters with the original token. To verify this observation quantitatively, we computed character 4-gram precision of top-10 neighbors: the proportion of those 4-grams of the top10 closest neighbors which are present among 4grams of the original token. As expected, embeddings of BPE-dropout have higher character 4gram precision (0.29) compared to the precision of BPE (0.18). This also relates to the study by Gong et al. (2018). For several tasks, they analyze the em1889 Figure 6: Examples of nearest neighbours in the source embedding space of models trained with BPE and BPEdropout. Models trained on WMT14 En-Fr (4m). (a) BPE (b) BPE-dropout Figure 7: Visualization of source embeddings. Models trained on WMT14 En-Fr (4m). bedding space learned by a model. The authors find that while a popular token usually has semantically related neighbors, a rare word usually does not: a vast majority of closest neighbors of rare words are rare words. To confirm this, we reduce dimensionality of embeddings by SVD and visualize (Figure 7). For the model trained with BPE, rare tokens are in general separated from the rest; for the model trained with BPE-dropout, this is not the case. While to alleviate this issue Gong et al. (2018) propose to use adversarial training for embedding layers, we showed that a trained with BPE-dropout model does not have this problem. 6.3 Robustness to misspelled input Models trained with BPE-dropout better learn compositionality of words and the meaning of subwords, which suggests that these models have to be more robust to noise. We verify this by measuring the translation quality of models on a test set augmented with synthetic misspellings. We augment the source side of a test set by modifying each word with the probability of 10% by applying one of the predefined operations. The operations we consider are (1) removal of one character from a word, (2) insertion of a random character into a word, (3) substitution of a character in a word with a random one. This augmentation produces words source BPE BPE-dropout diff En-De original 27.41 28.01 +0.6 misspelled 24.45 26.03 +1.58 De-En original 32.69 34.19 +1.5 misspelled 29.71 32.03 +2.32 En-Fr (4m) original 33.38 33.85 +0.47 misspelled 30.30 32.13 +1.83 En-Fr (16m) original 34.37 34.82 +0.45 misspelled 31.23 32.94 +1.71 Table 5: BLEU scores for models trained on WMT14 dataset evaluated given the original and misspelled source. For En-Fr trained on 16m sentence pairs, BPEdropout was used only on the source side (Section 5.2). with the edit distance of 1 from the unmodified words. Edit distance is commonly used to model misspellings (Brill and Moore, 2000; Ahmad and Kondrak, 2005; Pinter et al., 2017). Table 5 shows the translation quality of the models trained on WMT 14 dataset when given the original source and augmented with misspellings. 
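This noising procedure is straightforward to reproduce. The sketch below applies one edit-distance-1 operation per selected word; the lowercase alphabet matches the lowercased training data, and skipping single-character words is an assumption of this sketch rather than a detail from the paper.

```python
import random

def add_misspellings(words, prob=0.1, alphabet="abcdefghijklmnopqrstuvwxyz"):
    # With probability `prob`, corrupt each word by one of three edit-distance-1
    # operations: delete a character, insert a random character, or substitute one.
    noisy = []
    for w in words:
        if len(w) > 1 and random.random() < prob:      # single-char guard is an assumption
            op = random.choice(("delete", "insert", "substitute"))
            i = random.randrange(len(w))
            if op == "delete":
                w = w[:i] + w[i + 1:]
            elif op == "insert":
                w = w[:i] + random.choice(alphabet) + w[i:]
            else:                                      # may occasionally re-pick the same char
                w = w[:i] + random.choice(alphabet) + w[i + 1:]
        noisy.append(w)
    return noisy

print(" ".join(add_misspellings("we verify robustness to noisy input".split())))
```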
We deliberately chose large datasets, where improvements from using BPE-dropout are smaller. We can see that while for the original test sets the improvements from using BPE-dropout are usually modest, for misspelled test set the improvements are a lot larger: 1.6-2.3 BLEU. This is especially interesting since models have not been exposed to misspellings during training. Therefore, even for large datasets using BPE-dropout can result in substantially better quality for practical applications where input is likely to be noisy. 1890 7 Related work Closest to our work in motivation is the work by Kudo (2018), who introduced the subword regularization framework multiple segmentation candidates and a new segmentation algorithm. Other segmentation algorithms include Creutz and Lagus (2006), Schuster and Nakajima (2012), Chitnis and DeNero (2015), Kunchukuttan and Bhattacharyya (2016), Wu and Zhao (2018), Banerjee and Bhattacharyya (2018). Regularization techniques are widely used for training deep neural networks. Among regularizations applied to a network weights the most popular are Dropout (Srivastava et al., 2014) and L2 regularization. Data augmentation techniques in natural language processing include dropping tokens at random positions or swapping tokens at close positions (Iyyer et al., 2015; Artetxe et al., 2018; Lample et al., 2018), replacing tokens at random positions with a placeholder token (Xie et al., 2017), replacing tokens at random positions with a token sampled from some distribution (e.g., based on token frequency or a language model) (Fadaee et al., 2017; Xie et al., 2017; Kobayashi, 2018). While BPE-dropout can be thought of as a regularization, our motivation is not to make a model robust by injecting noise. By exposing a model to different segmentations, we want to teach it to better understand the composition of words as well as subwords, and make it more flexible in the choice of segmentation during inference. Several works study how translation quality depends on a level of granularity of a segmentation (Cherry et al., 2018; Kreutzer and Sokolov, 2018; Ding et al., 2019). Cherry et al. (2018) show that trained long enough character-level models tend to have better quality, but it comes with the increase of computational cost for both training and inference. Kreutzer and Sokolov (2018) find that, given flexibility in choosing segmentation level, the model prefers to operate on (almost) character level. Ding et al. (2019) explore the effect of BPE vocabulary size and find that it is better to use small vocabulary for low-resource setting and large vocabulary for a high-resource setting. Following these observations, in our experiments we use different vocabulary size depending on a dataset size to ensure the strongest baselines. 8 Conclusions We introduce BPE-dropout – simple and effective subword regularization, which operates within the standard BPE framework. The only difference from BPE is how a word is segmented during model training: BPE-dropout randomly drops some merges from the BPE merge table, which results in different segmentations for the same word. Models trained with BPE-dropout (1) outperform BPE and the previous subword regularization on a wide range of translation tasks, (2) have better quality of learned embeddings, (3) are more robust to noisy input. Future research directions include adaptive dropout rates for different merges and an in-depth analysis of other pathologies in learned token embeddings for different segmentations. 
Acknowledgments We thank anonymous reviewers for the helpful feedback, Rico Sennrich for valuable comments on the first version of this paper, and Yandex Machine Translation team for discussions and inspiration. References Farooq Ahmad and Grzegorz Kondrak. 2005. Learning a spelling error model from search query logs. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 955–962, Vancouver, British Columbia, Canada. Association for Computational Linguistics. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018. Unsupervised statistical machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3632–3642, Brussels, Belgium. Association for Computational Linguistics. Tamali Banerjee and Pushpak Bhattacharyya. 2018. Meaningless yet meaningful: Morphology grounded subword-level NMT. In Proceedings of the Second Workshop on Subword/Character LEvel Models, pages 55–60, New Orleans. Association for Computational Linguistics. Lo¨ıc Barrault, Ondˇrej Bojar, Marta R. Costa-juss`a, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Matthias Huck, Philipp Koehn, Shervin Malmasi, Christof Monz, Mathias M¨uller, Santanu Pal, Matt Post, and Marcos Zampieri. 2019. Findings of the 2019 conference on machine translation (WMT19). In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 1–61, Florence, Italy. Association for Computational Linguistics. 1891 Ondˇrej Bojar, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Philipp Koehn, and Christof Monz. 2018. Findings of the 2018 conference on machine translation (WMT18). In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 272–303, Belgium, Brussels. Association for Computational Linguistics. Eric Brill and Robert C. Moore. 2000. An improved error model for noisy channel spelling correction. In Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics, pages 286– 293, Hong Kong. Association for Computational Linguistics. Colin Cherry, George Foster, Ankur Bapna, Orhan Firat, and Wolfgang Macherey. 2018. Revisiting character-based neural machine translation with capacity and compression. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4295–4305, Brussels, Belgium. Association for Computational Linguistics. Rohan Chitnis and John DeNero. 2015. Variablelength word encodings for neural translation models. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2088–2093, Lisbon, Portugal. Association for Computational Linguistics. Mathias Creutz and Krista Lagus. 2006. Morfessor in the morpho challenge. In Proceedings of the PASCAL Challenge Workshop on Unsupervised Segmentation of Words into Morphemes, pages 12–17. Citeseer. Shuoyang Ding, Adithya Renduchintala, and Kevin Duh. 2019. A call for prudent choice of subword merge operations in neural machine translation. In Proceedings of Machine Translation Summit XVII Volume 1: Research Track, pages 204–213, Dublin, Ireland. European Association for Machine Translation. Marzieh Fadaee, Arianna Bisazza, and Christof Monz. 2017. Data augmentation for low-resource neural machine translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 567– 573, Vancouver, Canada. 
Association for Computational Linguistics. Chengyue Gong, Di He, Xu Tan, Tao Qin, Liwei Wang, and Tie-Yan Liu. 2018. Frage: Frequency-agnostic word representation. In Advances in neural information processing systems, pages 1334–1345. Mohit Iyyer, Varun Manjunatha, Jordan Boyd-Graber, and Hal Daum´e III. 2015. Deep unordered composition rivals syntactic methods for text classification. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1681–1691, Beijing, China. Association for Computational Linguistics. Sosuke Kobayashi. 2018. Contextual augmentation: Data augmentation by words with paradigmatic relations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 452–457, New Orleans, Louisiana. Association for Computational Linguistics. Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing. Julia Kreutzer and Artem Sokolov. 2018. Learning to segment inputs for nmt favors character-level processing. In Proceedings of the 15th International Workshop on Spoken Language Translation. Taku Kudo. 2018. Subword regularization: Improving neural network translation models with multiple subword candidates. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 66– 75, Melbourne, Australia. Association for Computational Linguistics. Anoop Kunchukuttan and Pushpak Bhattacharyya. 2016. Orthographic syllable as basic unit for SMT between related languages. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1912–1917, Austin, Texas. Association for Computational Linguistics. Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018. Unsupervised machine translation using monolingual corpora only. In International Conference on Learning Representations. Myle Ott, Michael Auli, David Grangier, and MarcAurelio Ranzato. 2018. Analyzing uncertainty in neural machine translation. In International Conference on Machine Learning, pages 3956–3965. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Yuval Pinter, Robert Guthrie, and Jacob Eisenstein. 2017. Mimicking word embeddings using subword RNNs. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 102–112, Copenhagen, Denmark. Association for Computational Linguistics. Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on 1892 Machine Translation: Research Papers, pages 186– 191, Belgium, Brussels. Association for Computational Linguistics. Mike Schuster and Kaisuke Nakajima. 2012. Japanese and korean voice search. In 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5149–5152. IEEE. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. 
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715– 1725, Berlin, Germany. Association for Computational Linguistics. Rico Sennrich and Biao Zhang. 2019. Revisiting lowresource neural machine translation: A case study. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 211– 221, Florence, Italy. Association for Computational Linguistics. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research, 15(1):1929–1958. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NeurIPS, Los Angeles. Yingting Wu and Hai Zhao. 2018. Finding better subword segmentation for neural machine translation. In Chinese Computational Linguistics and Natural Language Processing Based on Naturally Annotated Big Data, pages 53–64. Springer. Ziang Xie, Sida I. Wang, Jiwei Li, Daniel Lvy, Aiming Nie, Dan Jurafsky, and Andrew Y. Ng. 2017. Data noising as smoothing in neural network language models. In International Conference on Learning Representations. A Training time Table 6 shows number of training batches for the experiments in Section 5.1 (Table 2), Table 7 — for the experiments in Section 5.2 (Table 3). B Additional experiments In the main text, all models were trained (and evaluated) on lowercased data. Here we provide results of the models trained and evaluated without lower case (Table 8). BPE Kudo (2018) BPE-dropout IWSLT15 En-Vi 23 26 36 Vi-En 23 29 33 En-Zh 30 29 43 Zh-En 39 51 100 IWSLT17 En-Fr 36 45 60 Fr-En 32 46 85 En-Ar 30 60 62 Ar-En 41 51 59 WMT14 En-De 468 450 501 De-En 447 442 525 ASPEC En-Ja 280 165 462 Ja-En 239 144 576 Table 6: Number of thousands of training batches for the experiments from Table 2. BPE BPE-dropout src-only dst-only both 250k 47 53 53 85 500k 160 210 250 320 1m 30 114 67 180 4m 100 321 180 600 16m 345 345 400 Table 7: Number of thousands of training batches for the experiments from Table 3. Note that we use batch size 4k tokens for small corpora (250k and 500k) and 32k tokens for large corpora (1m, 4m and 16m). BPE BPE-dropout IWSLT15 En-Vi 31.44 32.70 Vi-En 32.19 33.22 IWSLT17 En-Fr 38.79 39.83 Fr-En 38.06 38.60 En-Ar 14.30 15.20 Ar-En 31.56 33.00 Table 8: BLEU scores. Bold indicates the best score; differences with the baselines are statistically significant (with p-value of 0.05). (Statistical significance is computed via bootstrapping (Koehn, 2004).)
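For reference, the paired bootstrap resampling test of Koehn (2004) used for the significance claims above can be sketched as follows. The corpus_score argument is a placeholder for whatever corpus-level metric (e.g., BLEU) is being compared, and the sample count and data layout are assumptions made only for illustration:

import random

def paired_bootstrap(hyps_a, hyps_b, refs, corpus_score, n_samples=1000, rng=random):
    """Paired bootstrap resampling in the spirit of Koehn (2004).

    hyps_a, hyps_b: outputs of two systems on the same test set (lists of strings).
    refs: references aligned with the hypotheses.
    corpus_score: callable(list_of_hyps, list_of_refs) -> float, e.g. corpus BLEU.
    Returns the fraction of resamples on which system A beats system B;
    one minus that fraction estimates the p-value for "A is not better than B".
    """
    assert len(hyps_a) == len(hyps_b) == len(refs)
    n = len(refs)
    wins_a = 0
    for _ in range(n_samples):
        idx = [rng.randrange(n) for _ in range(n)]  # resample sentences with replacement
        sample_a = [hyps_a[i] for i in idx]
        sample_b = [hyps_b[i] for i in idx]
        sample_r = [refs[i] for i in idx]
        if corpus_score(sample_a, sample_r) > corpus_score(sample_b, sample_r):
            wins_a += 1
    return wins_a / n_samples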
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1893–1898 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 1893 Improving Non-autoregressive Neural Machine Translation with Monolingual Data Jiawei Zhou Harvard University [email protected] Phillip Keung Amazon Inc. [email protected] Abstract Non-autoregressive (NAR) neural machine translation is usually done via knowledge distillation from an autoregressive (AR) model. Under this framework, we leverage large monolingual corpora to improve the NAR model’s performance, with the goal of transferring the AR model’s generalization ability while preventing overfitting. On top of a strong NAR baseline, our experimental results on the WMT14 En-De and WMT16 En-Ro news translation tasks confirm that monolingual data augmentation consistently improves the performance of the NAR model to approach the teacher AR model’s performance, yields comparable or better results than the best non-iterative NAR methods in the literature and helps reduce overfitting in the training process. 1 Introduction Neural machine translation (NMT) (Sutskever et al., 2014; Bahdanau et al., 2014) has achieved impressive performance in recent years, but the autoregressive decoding process limits the translation speed and restricts low-latency applications. To mitigate this issue, many non-autoregressive (NAR) translation methods have been proposed, including latent space models (Gu et al., 2017; Ma et al., 2019; Shu et al., 2019), iterative refinement methods (Lee et al., 2018; Ghazvininejad et al., 2019), and alternative loss functions (Libovick`y and Helcl, 2018; Wang et al., 2019; Wei et al., 2019; Li et al., 2019; Shao et al., 2019). The decoding speedup for NAR models is typically 2-15× depending on the specific setup (e.g., the number of length candidates, number of latent samples, etc.), and NAR models can be tuned to achieve different trade-offs between time complexity and decoding quality (Gu et al., 2017; Wei et al., 2019; Ghazvininejad et al., 2019; Ma et al., 2019). Although different in various aspects, all of these methods are based on transformer modules (Vaswani et al., 2017), and depend on a well-trained AR model to obtain its output translations to create targets for NAR model training. This training setup is well-suited to leverage external monolingual data, since the target side of the NAR training corpus is always generated by an AR model. Techniques like backtranslation (Sennrich et al., 2015a) are known to improve MT performance using monolingual data alone. However, to the best of our knowledge, monolingual data augmentation for NAR-MT has not been reported in the literature. In typical NAR-MT model training, an AR teacher provides a consistent supervision signal for the NAR model; the source text that was used to train the teacher is decoded by the teacher to create synthetic target text. In this work, we use a large amount of source text from monolingual corpora to generate additional teacher outputs for NAR-MT training. We use a transformer model with minor structural changes to perform NAR generation in a noniterative way, which establishes stronger baselines than most of the previous methods. We demonstrate that generating additional training data with monolingual corpora consistently improves the translation quality of our baseline NAR system on the WMT14 En-De and WMT16 En-Ro translation tasks. 
Furthermore, our experiments show that NAR models trained with increasing amount of extra monolingual data are less prone to overfitting and generalize better on longer sentences. In addition, we have obtained Ro→En and En→De results which are state-of-the-art for noniterative NAR-MT, just by using more monolingual data. 1894 Parallel En Mono. Non-En Mono. En-Ro 608,320 2,197,792 2,261,206 En-De 4,459,186 3,008,621 3,015,110 Table 1: Number of sentences per language arc. ‘Mono’ refers to the amount of monolingual text available. 2 Methodology 2.1 Basic Approach Most of the previous methods treat the NAR modeling objective as a product of independent token probabilities (Gu et al., 2017), but we adopt a different point of view by simply treating the NAR model as a function approximator of an existing AR model. Given an AR model and a source sentence, the translation process of the greedy output1 of the AR model is a complex but deterministic function. Since the neural networks can be near-perfect nonlinear function approximators (Liang and Srikant, 2016), we can expect an NAR model to learn the AR translation process quite well, as long as the model has enough capacity. In particular, we first obtain the greedy output of a trained AR model, and use the resulting paired data to train the NAR model. Other papers on NAR-MT (Gu et al., 2017; Lee et al., 2018; Ghazvininejad et al., 2019) have used AR teacher models to generate training data, and this is a form of sequence-level knowledge distillation (Kim and Rush, 2016). 2.2 Model Structure Throughout this paper, we focus on non-iterative NAR methods. We use standard transformer structures with a few small changes for NAR-MT, which we describe below. For the target side input, most of the previous work simply copied the source side as the decoder’s input. We propose a soft copying method by using a Gaussian kernel to smooth the encoded source sentence embeddings xenc. Suppose the source and target lengths are T and T ′ respectively. Then the tth input token for the decoder is PT i=1 xenc i ·K(i, t), where K(i, t) is the Gaussian distribution evaluated at i with mean T T ′ t and variance σ2. (σ2 is a learned parameter.) We modify the attention mask so that it does not mask out the future tokens, and every token is 1By ‘greedy’, we mean decoding with a beam width of 1. dependent on both its preceding and succeeding tokens in every layer. Gu et al. (2017), Lee et al. (2018), Li et al. (2019) and Wang et al. (2019) use an additional positional self-attention module in each of the decoder layers, but we do not apply such a layer. It did not provide a clear performance improvement in our experiments, and we wanted to reduce the number of deviations from the base transformer structure. Instead, we add positional embeddings at each decoder layer. 2.3 Length Prediction We use a simple method to select the target length for NAR generation at test time (Wang et al., 2019; Li et al., 2019), where we set the target length to be T ′ = T + C, where C is a constant term estimated from the parallel data and T is the length of the source sentence. We then create a list of candidate target lengths ranging from [T ′ −B, T ′ +B] where B is the half-width of the interval. For example, if T = 5, C = 1 and we used a half-width of B = 2, then we would generate NAR translations of length [4, 5, 6, 7, 8], for a total of 5 candidates. These translation candidates would then be ranked by the AR teacher to select the one with the highest probability. 
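A minimal sketch of the candidate-length generation just described, using the toy numbers from the text (T = 5, C = 1, B = 2); the decoding and teacher-scoring calls indicated in the comments are hypothetical names used only for illustration, not an actual API:

def candidate_lengths(src_len, c, b):
    """Target-length candidates around the point estimate T' = T + C."""
    center = src_len + c
    return [length for length in range(center - b, center + b + 1) if length > 0]

# Worked example from the text: T = 5, C = 1, B = 2 -> [4, 5, 6, 7, 8].
print(candidate_lengths(5, 1, 2))

# Each candidate length is decoded non-autoregressively and the resulting
# translations are re-ranked by the AR teacher; `nar_decode` and
# `ar_teacher_score` are hypothetical placeholders.
# best = max((nar_decode(src, L) for L in candidate_lengths(T, C, B)),
#            key=ar_teacher_score)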
This is referred to as length-parallel decoding in Wei et al. (2019). 3 NAR-MT with Monolingual Data Augmenting the NAR training corpus with monolingual data provides some potential benefits. Firstly, we allow more data to be translated by the AR teacher, so the NAR model can see more of the AR translation outputs than in the original training data, which helps the NAR model generalize better. Secondly, there is much more monolingual data than parallel data, especially for low-resource languages. Incorporating monolingual data for NAR-MT is straightforward in our setup. Given an AR model that we want to approximate, we obtain the sourceside monolingual text and use the AR model to generate the targets that we can train our NAR model on. 4 Experimental Setup Data We evaluate NAR-MT training on both the WMT16 En-Ro (around 610k sentence pairs) and the WMT14 En-De (around 4.5M sentence pairs) parallel corpora along with the associated WMT 1895 Models WMT16 WMT14 En→Ro Ro→En En→De De→En NAT-FT (Gu et al., 2017) 27.29 29.06 17.69 21.47 NAT-FT (+NPD s=10) 29.02 30.76 18.66 22.41 NAT-FT (+NPD s=100) 29.79 31.44 19.17 23.20 NAT-IR (idec=1) (Lee et al., 2018) 24.45 25.73 13.91 16.77 CTC (Libovick`y and Helcl, 2018) 19.93 24.71 17.68 19.80 imitate-NAT (Wei et al., 2019) 28.61 28.90 22.44 25.67 imitate-NAT (+LPD) 31.45 31.81 24.15 27.28 CMLM (Ghazvininejad et al., 2019) 27.32 28.20 18.05 21.83 FlowSeq (Ma et al., 2019) 29.73 30.72 23.72 28.39 FlowSeq (NPD n=30) 32.20 32.84 25.31 30.68 Our AR Transformer (beam 1) 33.56 33.68 28.84 32.77 Our AR Transformer (beam 4) 34.50 34.01 29.65 33.65 Our NAR baseline (B=5) 31.21 32.06 23.57 29.01 + monolingual data 31.91 33.46 25.53 29.96 + monolingual data and de-dup 31.96 33.57 25.73 30.18 Table 2: BLEU scores on the WMT16 En-Ro and WMT14 En-De test sets for different NAR models. All reported scores are from non-iterative NAR methods with similar hyper-parameter settings for transformers. ‘de-dup’ removes adjacent duplicated tokens. B is the half-width in Sec. 2.3. monolingual corpora for each language. For the parallel data, we use the processed data from Lee et al. (2018) to be consistent with previous publications. The WMT16 En-Ro task uses newsdev-2016 and newstest-2016 as development and test sets, and the WMT14 En-De task uses newstest-2013 and newstest-2014 as development and test sets. We report all results on test sets. We used the Romanian portion of the News Crawl 2015 corpus and the English portion of the Europarl v7/v8 corpus2 as monolingual text for our En-Ro experiments, which are both about 4 times larger than the original paired data. We used the News Crawl 2007/2008 corpora for German and English monolingual text2 in our En-De experiments, and downsampled them to ∼3 million sentences per language. The data statistics are summarized in Table 1. The monolingual data are processed following Lee et al. (2018), which are tokenized and segmented into subword units (Sennrich et al., 2015b). The vocabulary is shared between source and target languages and has ∼40k units. We use BLEU to evaluate the translation quality3. 2http://www.statmt.org/wmt16/translation-task.html 3We report tokenized BLEU scores in line with prior work (Lee et al., 2018; Ma et al., 2019), which are case-insensitive for WMT16 En-Ro and case-sensitive for WMT14 En-De in the data provided by Lee et al. (2018). Model Configuration We use the settings for the base transformer configuration in Vaswani et al. 
(2017) for all the models: 6 layers per stack, 8 attention heads per layer, 512 model dimensions and 2048 hidden dimensions. The AR and NAR models have the same encoder-decoder structure, except for the decoder attention mask and the decoding input for the NAR model as described in Sec. 2.2.

Training and Inference We initialize the NAR embedding layer and encoder parameters with the AR model's. The NAR model is trained with the AR model's greedy outputs as targets. We use the Adam optimizer, with batches of 64k tokens for one gradient update, and the learning rate schedule is the same as the one in Vaswani et al. (2017), where we use 4,000 warm-up steps and the maximum learning rate is around 0.0014. We stop training when there is no further improvement in the last 5 epochs; training finishes in 30 epochs for AR models and 50 epochs for NAR models, except for the En-De experiments with monolingual data, where we train for 35 epochs to roughly match the number of parameter update steps without extra monolingual data (∼140k steps). We average the last 5 checkpoints to obtain the final model. We train the NAR model with cross-entropy loss and label smoothing (ϵ = 0.1). During inference, we use length-parallel decoding with C = 0, and evaluate the BLEU scores against the reference sentences. All the models are implemented with MXNet and GluonNLP (Guo et al., 2019). We used 4 NVIDIA V100 GPUs for training, which takes about a day for an AR model and up to a week for an NAR model depending on the data size, and testing is performed on a single GPU.

[Figure 1: Average loss of the NAR models versus the percentage of monolingual data used during training. The test set losses decrease as more monolingual data is added, and the gap to the training losses closes, which indicates that monolingual data augmentation reduces overfitting.]

5 Results and Analysis

Main Results We present our BLEU scores alongside the scores of other non-iterative methods in Table 2. Our baseline results surpass many of the previous results, which we attribute to the way that we initialize the decoding process. Instead of directly copying the source embeddings to the decoder input, we use an interpolated version of the encoder outputs as the decoder input, which allows the encoder to transform the source embeddings into a more usable form. Note that a similar technique is adopted in Wei et al. (2019), but our model structure and optimization are much simpler as we do not have any imitation module for detailed teacher guidance. Our results confirm that the use of monolingual data improves the NAR model's performance. By incorporating all of the monolingual data for the En-Ro NAR-MT task, we see a gain of 0.70 BLEU points for the En→Ro direction and 1.40 for the Ro→En direction.
Similarly, we also see significant gains in the En-De NAR-MT task, with an En→Ro Ro→En no half all no half all B mono mono mono mono mono mono 0 27.19 +0.65 +0.56 26.62 +1.52 +1.58 1 29.34 +0.63 +0.69 28.81 +1.26 +1.46 2 30.46 +0.34 +0.45 30.18 +1.08 +1.24 3 30.87 +0.37 +0.71 31.24 +0.88 +1.09 4 31.06 +0.45 +0.67 31.92 +0.90 +1.25 5 31.21 +0.53 +0.70 32.06 +1.10 +1.40 6 31.20 +0.39 +0.62 31.98 +1.17 +1.43 7 30.99 +0.43 +0.51 31.85 +1.19 +1.31 gold 29.64 +0.61 +0.85 29.83 +1.42 +1.69 Table 3: BLEU scores on the WMT16 En-Ro test sets for NAR models trained with different numbers of length candidates and amounts of additional monolingual data. The half-width B determines the number of length candidates (Sec. 2.3). ‘gold’ refers to using the true target length instead of predicting it. All the +deltas are relative to the ‘no mono’ case. increase of 1.96 BLEU points for the En→De direction and 0.95 for the De→En direction. By removing the duplicated output tokens as a simple postprocessing step (following Lee et al. (2018)), we achieved 33.57 BLEU for the WMT16 Ro→En direction and 25.73 BLEU for the WMT14 En→De direction, which are state-of-the-art among non-iterative NAR-MT results. In addition, our work shrinks the gap between the AR teacher and the NAR model to just 0.11 BLEU points in the Ro→En direction. Losses in Training and Evaluation To further investigate how much the monolingual data contributes to BLEU improvements, we train En-Ro NAR models with 0%, 25%, 50%, and 100% of the monolingual corpora and plot the cross-entropy loss on the training data and the testing data for the converged model. In Figure 1, when no monolingual data is used, the training loss typically converges to a lower point compared to the loss on the testing set, which is not the case for the AR model where the validation and testing losses are usually lower than the training loss. This indicates that the NAR model overfits to the training data, which hinders its generalization ability. However, as more monolingual data is added to the training recipe, the overfitting problem is reduced and the gap between the evaluation and training losses shrinks. 1897 src # AR NAR +half +all length sent. beam 1 baseline mono mono [1, 20] 865 32.12 29.96 30.94 31.10 [21, 40] 867 33.82 30.77 31.92 31.96 [41, 60] 228 35.13 29.59 31.33 31.81 [61, 80] 29 35.09 26.69 27.99 30.47 [81, 120] 8 34.13 16.47 28.92 29.47 [121, 140] 2 6.70 3.11 3.56 5.99 Table 4: BLEU scores for source sentences in different length intervals on the WMT16 Ro→En test set. The gold target length is provided during decoding. Effect of Length-Parallel Decoding To test how the NAR model performance and the monolingual gains are affected by the number of decoding length candidates, we vary the half-width B (Sec. 2.3) across a range of values and test the NAR models trained with 0%, 50%, and 100% of the monolingual data for the En-Ro task (Table 3). The table shows that having multiple length candidates can increase the BLEU score significantly and can be better than using the gold target length, but having too many length candidates can hurt the performance and slow down decoding (in our case, the optimal B is 5). Nonetheless, for every value of B, the BLEU score consistently increases when monolingual data is used, and more data brings greater gains. BLEU under Different Sentence Lengths In Table 4, we present the BLEU scores on WMT16 Ro→En test sentences grouped by source sentence lengths. 
We can see that the baseline NAR model’s performance drops quickly as sentence length increases, whereas the NAR model trained with monolingual data degrades less over longer sentences, which demonstrates that external monolingual data improves the NAR model’s generalization ability. 6 Discussion We found that monolingual data augmentation reduces overfitting and improves the translation quality of NAR-MT models. We note that the monolingual corpora are derived from domains which may be different from those of the parallel training data or evaluation sets, and a mismatch can affect NAR translation performance. Other work in NMT has examined this issue in the context of backtranslation (e.g., Edunov et al. (2018)), and we expect the conclusions to be similar in the NAR-MT case. There are several open questions to investigate: Are the benefits of monolingual data orthogonal to other techniques like iterative refinement? Can the NAR model perfectly recover the AR model’s performance with much larger monolingual datasets? Are the observed improvements language-dependent? We will consider these research directions in future work. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. Understanding back-translation at scale. arXiv preprint arXiv:1808.09381. Marjan Ghazvininejad, Omer Levy, Yinhan Liu, and Luke Zettlemoyer. 2019. Mask-predict: Parallel decoding of conditional masked language models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6114– 6123. Jiatao Gu, James Bradbury, Caiming Xiong, Victor OK Li, and Richard Socher. 2017. Nonautoregressive neural machine translation. arXiv preprint arXiv:1711.02281. Jian Guo, He He, Tong He, Leonard Lausen, Mu Li, Haibin Lin, Xingjian Shi, Chenguang Wang, Junyuan Xie, Sheng Zha, Aston Zhang, Hang Zhang, Zhi Zhang, Zhongyue Zhang, and Shuai Zheng. 2019. Gluoncv and gluonnlp: Deep learning in computer vision and natural language processing. arXiv preprint arXiv:1907.04433. Yoon Kim and Alexander M Rush. 2016. Sequencelevel knowledge distillation. arXiv preprint arXiv:1606.07947. Jason Lee, Elman Mansimov, and Kyunghyun Cho. 2018. Deterministic non-autoregressive neural sequence modeling by iterative refinement. arXiv preprint arXiv:1802.06901. Zhuohan Li, Zi Lin, Di He, Fei Tian, Tao Qin, Liwei Wang, and Tie-Yan Liu. 2019. Hint-based training for non-autoregressive machine translation. arXiv preprint arXiv:1909.06708. Shiyu Liang and Rayadurgam Srikant. 2016. Why deep neural networks for function approximation? arXiv preprint arXiv:1610.04161. 1898 Jindˇrich Libovick`y and Jindˇrich Helcl. 2018. End-toend non-autoregressive neural machine translation with connectionist temporal classification. arXiv preprint arXiv:1811.04719. Xuezhe Ma, Chunting Zhou, Xian Li, Graham Neubig, and Eduard Hovy. 2019. Flowseq: Nonautoregressive conditional sequence generation with generative flow. arXiv preprint arXiv:1909.02480. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015a. Improving neural machine translation models with monolingual data. arXiv preprint arXiv:1511.06709. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015b. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909. 
Chenze Shao, Jinchao Zhang, Yang Feng, Fandong Meng, and Jie Zhou. 2019. Minimizing the bag-ofngrams difference for non-autoregressive neural machine translation. arXiv preprint arXiv:1911.09320. Raphael Shu, Jason Lee, Hideki Nakayama, and Kyunghyun Cho. 2019. Latent-variable nonautoregressive neural machine translation with deterministic inference using a delta posterior. arXiv preprint arXiv:1908.07181. I Sutskever, O Vinyals, and QV Le. 2014. Sequence to sequence learning with neural networks. Advances in NIPS. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998–6008. Yiren Wang, Fei Tian, Di He, Tao Qin, ChengXiang Zhai, and Tie-Yan Liu. 2019. Non-autoregressive machine translation with auxiliary regularization. arXiv preprint arXiv:1902.10245. Bingzhen Wei, Mingxuan Wang, Hao Zhou, Junyang Lin, and Xu Sun. 2019. Imitation learning for nonautoregressive neural machine translation. arXiv preprint arXiv:1906.02041.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1899–1905 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 1899 Attend to Medical Ontologies: Content Selection for Clinical Abstractive Summarization Sajad Sotudeh1, Nazli Goharian1, and Ross W. Filice2 1IR Lab, Georgetown University, Washington DC 20057, USA {sajad, nazli}@ir.cs.georgetown.edu 2MedStar Georgetown University Hospital, Washington DC 20007, USA [email protected] Abstract Sequence-to-sequence (seq2seq) network is a well-established model for text summarization task. It can learn to produce readable content; however, it falls short in effectively identifying key regions of the source. In this paper, we approach the content selection problem for clinical abstractive summarization by augmenting salient ontological terms into the summarizer. Our experiments on two publicly available clinical data sets (107,372 reports of MIMIC-CXR, and 3,366 reports of OpenI) show that our model statistically significantly boosts state-of-the-art results in terms of ROUGE metrics (with improvements: 2.9% RG-1, 2.5% RG-2, 1.9% RG-L), in the healthcare domain where any range of improvement impacts patients’ welfare. 1 Introduction Radiology reports convey the detailed observations along with the significant findings about a medical encounter. Each radiology report contains two important sections:1 FINDINGS that encompasses radiologist’s detailed observations and interpretation of imaging study, and IMPRESSION summarizing the most critical findings. IMPRESSION (usually couple of lines and thrice smaller than finding) is considered as the most integral part of report (Ware et al., 2017) as it plays a key role in communicating critical findings to referring clinicians. Previous studies have reported that clinicians mostly read the IMPRESSION as they have less time to review findings, particularly those that are lengthy or intricate (Flanders and Lakhani, 2012; Xie et al., 2019). In clinical setting, generating IMPRESSION from FINDINGS can be subject to errors (Gershanik et al., 2011; Brady, 2016). This fact is especially crucial when it comes to healthcare domain where even 1Depending on institution, radiology reports may or may not include other fields such as BACKGROUND. the smallest improvement in generating IMPRESSION can improve patients’ well-being. Automating the process of impression generation in radiology reporting would save clinicians’ read time and decrease fatigue (Flanders and Lakhani, 2012; Kovacs et al., 2018) as clinicians would only need to proofread summaries or make minor edits. Previously, MacAvaney et al. (2019) showed that augmenting the summarizer with entire ontology (i.e., clinical) terms within the FINDINGS can improve the content selection and summary generation to some noticeable extent. Our findings, further, suggest that radiologists select significant ontology terms, but not all such terms, to write the IMPRESSION. Following this paradigm, we hypothesize that selecting the most significant clinical terms occurring in the FINDINGS and then incorporating them into the summarization would improve the final IMPRESSION generation. We further examine if refining FINDINGS word representations according to the identified clinical terms would result in improved IMPRESSION generation. Overall, the contributions of this work are twofold: (i) We propose a novel seq2seq-based model to incorporate the salient clinical terms into the summarizer (§3.2). 
We pose copying likelihood of a word as an indicator of its saliency in terms of forming IMPRESSION, which can be learned via a sequence-tagger (§3.1); (ii) Our model statistically significantly improves over the competitive baselines on MIMIC-CXR publicly available clinical dataset. To evaluate the cross-organizational transferability, we further evaluate our model on another publicly available clinical dataset (OpenI) (§5). 2 Related Work Few prior studies have pointed out that although seq2seq models can effectively produce readable content, they perform poorly at selecting salient 1900 content to include in the summary (Gehrmann et al., 2018; Lebanoff et al., 2019). Many attempts have been made to tackle this problem (Zhou et al., 2017; Lin et al., 2018; Hsu et al., 2018; Lebanoff et al., 2018; You et al., 2019). For example, Zhou et al. (2017) used sentence representations to filter secondary information of word representation. Our work is different in that we utilize ontology representations produced by an additional encoder to filter word representations. Gehrmann et al. (2018) utilized a data-efficient content selector, by aligning source and target, to restrict the model’s attention to likely-to-copy phrases. In contrast, we use the content selector to find domain knowledge alignment between source and target. Moreover, we do not focus on model attention here, but on rectifying word representations. Extracting clinical findings from clinical reports has been explored previously (Hassanpour and Langlotz, 2016; Nandhakumar et al., 2017). For summarizing radiology reports, Zhang et al. (2018) recently used a separate RNN to encode a section of radiology report.2 Subsequently, MacAvaney et al. (2019) extracted clinical ontologies within the FINDINGS to help the model learn these useful signals by guiding decoder in generation process. Our work differs in that we hypothesize that all of the ontological terms in the FINDINGS are not equally important, but there is a notion of odds of saliency for each of these terms; thus, we focus on refining the FINDINGS representations. 3 Model Our model consists of two main components: (1) a content selector to identify the most salient ontological concepts specific to a given report, and (2) a summarization model that incorporates the identified ontology terms within the FINDINGS into the summarizer. The summarizer refines the FINDINGS word representation based on salient ontology word representation encoded by a separate encoder. 3.1 Content Selector The content selection problem can be framed as a word-level extraction task in which the aim is to identify the words within the FINDINGS that are likely to be copied into the IMPRESSION. We tackle this problem through a sequence-labeling approach. We align FINDINGS and IMPRESSION to obtain required data for sequence-labeling task. 2BACKGROUND field. To this end, let b1, b2, ..., bn be the binary tags over the FINDINGS terms x = {x1, x2, ..., xn}, with n being the length of the FINDINGS. We tag word xi with 1 if it meets two criteria simultaneously: (1) it is an ontology term, (2) it is directly copied into IMPRESSION, and 0 otherwise. At inference, we characterize the copying likelihood of each FINDINGS term as a measure of its saliency. Recent studies have shown that contextualized word embeddings can improve the sequencelabeling performance (Devlin et al., 2019; Peters et al., 2018). 
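Before turning to the tagger itself, the alignment-based tag construction just described can be sketched as follows; the whitespace tokenization, the exact-match notion of "directly copied", and the ontology_terms lookup are simplifying assumptions rather than the authors' exact procedure:

def copy_tags(findings_tokens, impression_tokens, ontology_terms):
    """Binary tags over FINDINGS tokens: 1 iff the token is an ontology
    term AND it also appears in the IMPRESSION (i.e., it was copied)."""
    impression_vocab = set(impression_tokens)
    return [
        1 if tok in ontology_terms and tok in impression_vocab else 0
        for tok in findings_tokens
    ]

# Toy example with a hypothetical ontology subset.
findings = "there is a small bilateral pleural effusion".split()
impression = "small bilateral pleural effusions".split()
ontology = {"bilateral", "pleural", "effusion"}
print(copy_tags(findings, impression, ontology))
# -> [0, 0, 0, 0, 1, 1, 0]; 'effusion' misses under exact match against 'effusions'.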
To utilize this improvement from contextualized embeddings for content selection, we train a bi-LSTM network on top of the BERT embeddings with a softmax activation function. The content selector is trained via maximum likelihood estimation, maximizing the log-likelihood of the reference tags. At inference, the content selector calculates the selection probability of each token in the input sequence. Formally, let O be the set of ontological words which the content selector predicts to be copied into the IMPRESSION:

O = {o_i | o_i ∈ F_U(x) ∧ p_{o_i} ≥ ϵ}    (1)

where F_U(x) is a mapping function that takes in FINDINGS tokens and outputs word sequences from the input tokens if they appear in the ontology (i.e., RadLex, version 3.10, http://www.radlex.org/Files/radlex3.10.xlsx), and otherwise skips them. p_{o_i} denotes the selection probability of ontology word o_i, and ϵ ∈ [0, 1] is the copying threshold.

3.2 Summarization Model

3.2.1 Encoders

We exploit two separate encoders: (1) a findings encoder that takes in the FINDINGS, and (2) an ontology encoder that maps the significant ontological terms identified by the content selector to a fixed vector known as the ontology vector. The findings encoder is fed with the embeddings of the FINDINGS words and generates word representations h. Then, a separate encoder, called the ontology encoder, is used to process the ontology terms identified by the content selector and produce the associated representations h^o:

h = Bi-LSTM(x),  h^o = LSTM(O)    (2)

where x is the FINDINGS text, O is the set of ontology terms occurring in the FINDINGS and identified by the content selector, and h^o = {h^o_1, h^o_2, ..., h^o_l} are the word representations yielded from the ontology encoder. Note that h^o_l (called the ontology vector) is the last hidden state, containing summarized information about the significant ontologies in the FINDINGS.

[Figure 1: Overview of our summarization model. As shown, "bilateral" in the FINDINGS is a significant ontological term which has been encoded into the ontology vector. After refining the FINDINGS word representations, the decoder computes attention weights (highest on "bilateral") and generates it in the IMPRESSION.]

3.2.2 Ontological Information Filtering

Although de facto seq2seq frameworks implicitly model the information flow from encoder to decoder, the model should benefit from explicitly modeling the selection process. To this end, we implement a filtering gate on top of the findings encoder to refine the FINDINGS word representations according to the significant ontology terms within the FINDINGS and produce ontology-aware word representations. Specifically, the filtering gate receives two vectors: the word hidden representation h_i, which carries the contextual information of word x_i, and the ontology vector h^o_l, which includes the overall information of the significant ontology words within the FINDINGS. The filtering gate processes these two vectors through a linear layer with a sigmoid activation function. We then compute the ontology-aware word hidden representation h'_i, given the source word hidden representation h_i and the associated filtering gate F_i:

F_i = σ(W_h [h_i ; h^o_l] + b),  h'_i = h_i ⊙ F_i    (3)

where W_h is the weight matrix, b denotes the bias term, and ⊙ denotes element-wise multiplication.
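To make the refinement step concrete, here is a minimal numpy sketch of the filtering gate in Eq. (3); the dimensions and the randomly initialized stand-ins for the learned parameters W_h and b are illustrative assumptions only:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ontology_filter(h, h_ont, W_h, b):
    """Refine FINDINGS word representations with the ontology vector (Eq. 3).

    h:      (n, d)  word representations from the findings encoder
    h_ont:  (d,)    ontology vector (last hidden state of the ontology encoder)
    W_h:    (d, 2d) gate weights; b: (d,) gate bias
    Returns the ontology-aware representations h' of shape (n, d).
    """
    n, _ = h.shape
    concat = np.concatenate([h, np.tile(h_ont, (n, 1))], axis=1)  # [h_i ; h^o_l]
    gate = sigmoid(concat @ W_h.T + b)                            # F_i
    return h * gate                                               # h'_i = h_i ⊙ F_i

# Toy shapes: 7 FINDINGS tokens with 4-dimensional hidden states.
rng = np.random.default_rng(0)
h = rng.normal(size=(7, 4))
h_ont = rng.normal(size=4)
W_h = rng.normal(size=(4, 8))
b = np.zeros(4)
print(ontology_filter(h, h_ont, W_h, b).shape)  # (7, 4)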
3.2.3 Impression Decoder We use an LSTM network as our decoder to generate the IMPRESSION iteratively. In this sense, the decoder computes the current decoding state st = LSTM(st−1, yt−1), where yt−1 is the input to the decoder (human-written summary tokens at training, or previously generated tokens at inference) and st−1 is the previous decoder state. The decoder also computes an attention distribution a = Softmax(h′⊤Vs⊤) with h′ being the ontology-aware word representations. The attention weights are then used to compute the context vector ct = Pn i aih′ i where n is the length of the FINDINGS. Finally, the context vector and decoder output are used to either generate the next token from the vocabulary or copy it from the FINDINGS. 4 Experiments 4.1 Dataset and Ontologies MIMIC-CXR. This collection (Johnson et al., 2019) is a large publicly available dataset of radiology reports. Following similar report preprocessing as done in (Zhang et al., 2018), we obtained 107,372 radiology reports. For tokenization, we used ScispaCy (Neumann et al., 2019). We randomly split the dataset into 80%(85,898)10%(10,737)-10%(10,737) train-dev-test splits. OpenI. A public dataset from the Indiana Network for Patient Care (Demner-Fushman et al., 2016) with 3,366 reports. Due to small size, it is not suitable for training; we use it to evaluate the cross-organizational transferability of our model and baselines. Ontologies. We use RadLex, a comprehensive radiology lexicon, developed by Radiological Society of North America (RSNA), including 68,534 radiological terms organized in hierarchical structure. 4.2 Baselines We compare our model against both known and state-of-the-art extractive and abstractive models. - LSA (Steinberger and Je¨zek, 2004): An extractive vector-based model that employs Sigular Value Decomposition (SVD) concept. - NeuSum (Zhou et al., 2018): A state-of-the-art extractive model that integrates the process of source sentence scoring and selection.4 - Pointer-Generator (PG) (See et al., 2017): An abstractive summarizer that extends ses2seq networks by adding a copy mechanism that allows for directly copying tokens from the source. - Ontology-Aware Pointer-Generator (Ont. PG) (MacAvaney et al., 2019): An extension of 4We use open code at https://github.com/ magic282/NeuSum with default hyper-parameters. 1902 Method RG-1 RG-2 RG-L LSA 22.21 11.17 20.80 NEUSUM 23.97 12.82 22.61 PG 51.20 39.13 50.16 Ont. PG 51.84 39.59 50.72 BUS 52.04 39.69 50.83 Ours (this work) 53.57∗ 40.78∗ 51.81∗ Table 1: ROUGE results on MIMIC-CXR. ∗shows the statistical significance (paired t-test, p < 0.05). PG model that first encodes entire ontological concepts within FINDINGS, then uses the encoded vector to guide decoder in summary decoding process. - Bottom-Up Summarization (BUS) (Gehrmann et al., 2018): An abstractive model which makes use of a content selector to constrain the model’s attention over source terms that have a good chance of being copied into the target.5 4.3 Parameters and Training We use SCIBERT model (Beltagy et al., 2019) which is pre-trained over biomedical text. We employ 2-layer bi-LSTM encoder with hidden size of 256 upon BERT model. The dropout is set to 0.2. We train the network to minimize cross entropy loss function, and optimize using Adam optimizer (Kingma and Ba, 2015) with learning rate of 2e−5. For the summarization model, we extended on the open base code by Zhang et al. 
(2018) for implementation (code: https://github.com/yuhaozhang/summarize-radiology-findings). We use a 2-layer bi-LSTM as the findings encoder and 1-layer LSTMs as the ontology encoder and decoder, with hidden sizes of 200 and 100, respectively. We also exploit 100d GloVe embeddings pretrained on a large collection of 4.5 million radiology reports (Zhang et al., 2018). We train the network to optimize negative log-likelihood with the Adam optimizer and a learning rate of 0.001.

5 Results and Discussion

5.1 Experimental Results

Table 1 shows the ROUGE scores of our model and the baseline models on MIMIC-CXR, with human-written IMPRESSIONS as the ground truth. Our model significantly outperforms all the baselines on all ROUGE metrics, with 2.9%, 2.5%, and 1.9% improvements for RG-1, RG-2, and RG-L, respectively.

Table 2: ROUGE results on the OpenI dataset, comparing our model with the best-performing baseline. ∗ shows statistical significance (paired t-test, p < 0.05).
Method | RG-1 | RG-2 | RG-L
BUS | 40.02 | 21.89 | 39.37
Ours (this work) | 40.88∗ | 24.44∗ | 40.37∗

Table 3: ROUGE results showing the impact of the content selector in the summarization model. ∗ shows statistical significance (paired t-test, p < 0.05).
Setting | RG-1 | RG-2 | RG-L
w/o Cont. Sel. | 52.47 | 40.11 | 51.39
w/ Cont. Sel. | 53.57∗ | 40.78∗ | 51.81

While NEUSUM outperforms the non-neural LSA in the extractive setting, the extractive models lag behind the abstractive methods considerably, suggesting that human-written impressions are formed by abstractively selecting information from the findings, not merely extracting source sentences. When comparing Ont. PG with our model, it turns out that our hypothesis is indeed valid: a pre-step of identifying significant ontological terms can improve summary generation substantially. As pointed out earlier, we define the saliency of an ontological term by its copying probability. As expected, the BUS approach (which we re-implemented) achieves the best results among the baseline models by constraining the decoder's attention over odds-on-copied terms, but it still underperforms our model. This may suggest that the intermediate stage of refining word representations based on the ontological words leads to better performance than superficially restricting attention over the salient terms. Table 3 shows the effect of the content selector on the summarization model. For the setting without the content selector, we encode all ontologies within the FINDINGS. As shown, our model statistically significantly improves the results on RG-1 and RG-2. To further evaluate the transferability of our model, we perform an evaluation on OpenI with our best model trained on MIMIC-CXR. As shown in Table 2, our model significantly outperforms the top-performing abstractive baseline, suggesting the promising cross-organizational transferability of our model.

[Figure 2: Histograms and arrow plots showing differences between the IMPRESSIONs of 100 manually-scored radiology reports for readability (a), accuracy (b), and completeness (c). Although challenges remain to reach human parity for all metrics, 81% (a), 82% (b), and 80% (c) of our system-generated Impressions are as good as human-written Impressions across different metrics.]
5.2 Expert Evaluation While our approach achieves the best ROUGE scores, we recognize the limitation of this metric for summarization task (Cohan and Goharian, 2016). To gain a better understanding of qualities of our model, we conducted an expert human evaluation. To this end, we randomly sampled 100 system-generated Impressions with their associated gold from 100 evenly-spaced bins (sorted by our system’s RG-1) of MIMIC-CXR dataset. The Impressions were shuffled to prevent potential bias. We then asked three experts 7 to score the given Impressions independently on a scale of 1-3 (worst to best) for three metrics: Readability. understandable or nonsense; Accuracy. fully accurate, or containing critical errors; Completeness. having all major information, or missing key points. Figure. 2 presents the human evaluation results using histograms and arrow plots as done in (MacAvaney et al., 2019), comparing our system’s Impressions versus human-written Impressions. The histograms indicate the distribution of scores, and arrows show how the scores changed between ours and human-written. The tail of each arrow shows the score of human-written IMPRESSION , and its head indicates the score of our system’s IMPRESSION. The numbers next to the tails express the count of Impressions that gained score of s′ by ours and s by gold. 8 We observed that while there is still a gap between the systemgenerated and human-written Impressions, over 80% of our system-generated Impressions are as good 9 as the associated human-written Impres7Two radiologists and one medical student. 8s, s′ ∈{1, 2, 3} 9Either tied or improved. sions. Specifically, 73% (readability), and 71% (accuracy) of our system-generated Impressions ties with human-written Impressions, both achieving full-score of 3; nonetheless, this percentage is 62% for completeness metric. The most likely explanation of this gap is that deciding which findings are more important (i.e., should be written into Impression) is either subjective, or highly correlates with the institutional training purposes. Hence, we recognize cross-organizational evaluations in terms of Impression completeness as a challenging task. We also evaluated the inter-rater agreement using Fleiss’ Kappa (Fleiss, 1971) for our system’s scores and obtained 52% for readability, 47% for accuracy, and 50% for completeness, all of which are characterized as moderate agreement rate. 6 Conclusion We proposed an approach to content selection for abstractive text summarization in clinical notes. We introduced our novel approach to augment standard summarization model with significant ontological terms within the source. Content selection problem is framed as a word-level sequence-tagging task. The intrinsic evaluations on two publicly available real-life clinical datasets show the efficacy of our model in terms of ROUGE metrics. Furthermore, the extrinsic evaluation by domain experts further reveals the qualities of our system-generated summaries in comparison with gold summaries. Acknowledgement We thank Arman Cohan for his valuable comments on this work. We also thank additional domain expert evaluators: Phillip Hyuntae Kim, and Ish Talati. 1904 References Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. Scibert: A pretrained language model for scientific text. In EMNLP. Adrian P. Brady. 2016. Error and discrepancy in radiology: inevitable or avoidable? In Insights into Imaging. Arman Cohan and Nazli Goharian. 2016. Revisiting summarization evaluation for scientific articles. Proc. 
of 11th Conference on LREC, pages 806–813. Dina Demner-Fushman, Marc D. Kohli, Marc B. Rosenman, Sonya E. Shooshan, Laritza Rodriguez, Sameer K. Antani, George R. Thoma, and Clement J. McDonald. 2016. Preparing a collection of radiology examinations for distribution and retrieval. Journal of the American Medical Informatics Association : JAMIA, 23 2:304–10. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT. Adam E. Flanders and Paras Lakhani. 2012. Radiology reporting and communications: a look forward. Neuroimaging clinics of North America, 22 3:477– 96. Joseph L. Fleiss. 1971. Measuring nominal scale agreement among many raters. Sebastian Gehrmann, Yuntian Deng, and Alexander M. Rush. 2018. Bottom-up abstractive summarization. In EMNLP. Esteban F Gershanik, Ronilda Lacson, and Ramin Khorasani. 2011. Critical finding capture in the impression section of radiology reports. In AMIA. Saeed Hassanpour and Curtis P. Langlotz. 2016. Information extraction from multi-institutional radiology reports. Artificial intelligence in medicine, 66:29– 39. Wan Ting Hsu, Chieh-Kai Lin, Ming-Ying Lee, Kerui Min, Jing Tang, and Min Sun. 2018. A unified model for extractive and abstractive summarization using inconsistency loss. In ACL. Alistair E. W. Johnson, Tom J. Pollard, Seth J. Berkowitz, Nathaniel R. Greenbaum, Matthew P. Lungren, Chih ying Deng, Roger G. Mark, and Steven Horng. 2019. Mimic-cxr: A large publicly available database of labeled chest radiographs. ArXiv, abs/1901.07042. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR. Mark D. Kovacs, Maximilian Y Cho, Philip F. Burchett, and Michael A. Trambert. 2018. Benefits of integrated ris/pacs/reporting due to automatic population of templated reports. Current problems in diagnostic radiology, 48 1:37–39. Logan Lebanoff, Kaiqiang Song, Franck Dernoncourt, Doo Soon Kim, Seokhwan Kim, Walter Chang, and Fei Liu. 2019. Scoring sentence singletons and pairs for abstractive summarization. In ACL. Logan Lebanoff, Kaiqiang Song, and Fei Liu. 2018. Adapting the neural encoder-decoder framework from single to multi-document summarization. In EMNLP. Junyang Lin, Xu Sun, Shuming Ma, and Qi Su. 2018. Global encoding for abstractive summarization. In ACL. Sean MacAvaney, Sajad Sotudeh, Arman Cohan, Nazli Goharian, Ish Talati, and Ross W. Filice. 2019. Ontology-aware clinical abstractive summarization. SIGIR. Nidhin Nandhakumar, Ehsan Sherkat, Evangelos E. Milios, Hong Gu, and Michael Butler. 2017. Clinically significant information extraction from radiology reports. In DocEng. Mark Neumann, Daniel King, Iz Beltagy, and Waleed Ammar. 2019. Scispacy: Fast and robust models for biomedical natural language processing. In BioNLP@ACL. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proc. of NAACL. Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointergenerator networks. In ACL. Josef Steinberger and Karel Je¨zek. 2004. Using latent semantic analysis in text summarization and summary evaluation. In ISIM. Jeffrey B Ware, Saurabh W. Jha, Jenny K Hoang, Stephen R Baker, and Jill Wruble. 2017. Effective radiology reporting. Journal of the American College of Radiology : JACR, 14 6:838–839. 
Zhe Xie, Yuanyuan Yang, Mingqing Wang, Ming Hui Li, Haozhe Huang, Dezhong Zheng, Rong Shu, and Tonghui Ling. 2019. Introducing information extraction to radiology information systems to improve the efficiency on reading reports. Methods of information in medicine, 58 2-03:94–106. Yongjian You, Weijia Jia, Tianyi Liu, and Wenmian Yang. 2019. Improving abstractive document summarization with salient information modeling. In ACL. Yuhao Zhang, Daisy Yi Ding, Tianpei Qian, Christopher D. Manning, and Curtis P. Langlotz. 2018. Learning to summarize radiology findings. In EMNLP Workshop on Health Text Mining and Information Analysis. 1905 Qingyu Zhou, Nan Yang, Furu Wei, Shaohan Huang, Ming Zhou, and Tiejun Zhao. 2018. Neural document summarization by jointly learning to score and select sentences. In ACL. Qingyu Zhou, Nan Yang, Furu Wei, and Ming Zhou. 2017. Selective encoding for abstractive sentence summarization. In ACL.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1906–1919 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 1906 On Faithfulness and Factuality in Abstractive Summarization Joshua Maynez∗ Shashi Narayan∗ Bernd Bohnet Ryan McDonald Google Research {joshuahm,shashinarayan,bohnetbd,ryanmcd}@google.com Abstract It is well known that the standard likelihood training and approximate decoding objectives in neural text generation models lead to less human-like responses for open-ended tasks such as language modeling and story generation. In this paper we have analyzed limitations of these models for abstractive document summarization and found that these models are highly prone to hallucinate content that is unfaithful to the input document. We conducted a large scale human evaluation of several neural abstractive summarization systems to better understand the types of hallucinations they produce. Our human annotators found substantial amounts of hallucinated content in all model generated summaries. However, our analysis does show that pretrained models are better summarizers not only in terms of raw metrics, i.e., ROUGE, but also in generating faithful and factual summaries as evaluated by humans. Furthermore, we show that textual entailment measures better correlate with faithfulness than standard metrics, potentially leading the way to automatic evaluation metrics as well as training and decoding criteria.1 1 Introduction Current state of the art conditional text generation models accomplish a high level of fluency and coherence, mostly thanks to advances in sequenceto-sequence architectures with attention and copy (Sutskever et al., 2014; Bahdanau et al., 2015; Gu et al., 2016), fully attention-based Transformer architectures (Vaswani et al., 2017; Dai et al., 2019) and more recently pretrained language modeling for natural language understanding (Devlin et al., 2019; Radford et al., 2018; Yang et al., 2019; Liu et al., 2019). There has been a growing interest in ∗The first two authors contributed equally. 1Our human annotated summaries for faithfulness and factuality will be released at https://github.com/google-researchdatasets/xsum hallucination annotations. understanding how maximum likelihood training and approximate beam-search decoding in these models lead to less human-like text in open-ended text generation such as language modeling and story generation (Holtzman et al., 2020; Welleck et al., 2020; See et al., 2019). In this paper we investigate how these models are prone to generate hallucinated text in conditional text generation, specifically, extreme abstractive document summarization (Narayan et al., 2018a). Document summarization — the task of producing a shorter version of a document while preserving its information content (Mani, 2001; Nenkova and McKeown, 2011) — requires models to generate text that is not only human-like but also faithful and/or factual given the document. The example in Figure 1 illustrates that the faithfulness and factuality are yet to be conquered by conditional text generators. 
The article describes an event of "Conservative MP Zac Goldsmith winning the primary for the 2016 London mayoral election", but the summaries often forge entities (e.g., "Nigel Goldsmith" or "Zac Goldwin") or information (e.g., "UKIP leader Nigel Goldsmith", "Nigel Goldsmith winning the mayoral election", "Sadiq Khan being the former London mayor" or "Zac Goldwin being the Labour's candidate") that are not supported by the document or are factually wrong. Interestingly, all summaries are topical and fluent, and perform well in terms of ROUGE scores (Lin and Hovy, 2003).

We conducted a large-scale human evaluation of hallucinated content in systems that use Recurrent Neural Networks (RNN) (See et al., 2017), Convolutional Neural Networks (CNN) (Narayan et al., 2018a), and Transformers (Radford et al., 2019; Rothe et al., 2020), as well as human-written summaries for the recently introduced eXtreme SUMmarization task (XSUM, Narayan et al., 2018a). We seek to answer the following questions: (i) How frequently do abstractive summarizers hallucinate content?; (ii) Do models hallucinate by manipulating the information present in the input document (intrinsic hallucinations) or by adding information not directly inferable from the input document (extrinsic hallucinations)?; (iii) How much hallucinated content is factual, even when unfaithful?; and (iv) Are there automatic means of measuring these hallucinations?

GOLD: Zac Goldsmith will contest the 2016 London mayoral election for the Conservatives, it has been announced.
DOCUMENT: The Richmond Park and North Kingston MP said he was "honoured" after winning 70% of the 9,227 votes cast using an online primary system. He beat London Assembly Member Andrew Boff, MEP Syed Kamall and London's deputy mayor for crime and policing Stephen Greenhalgh. Mr Goldsmith's main rival is likely to be Labour's Sadiq Khan. (2 sentences with 59 words are abbreviated here.) Mr Goldsmith, who was the favourite for the Tory nomination, balloted his constituents earlier this year to seek permission to stand. At the very point of his entry into the race for London mayor, Zac Goldsmith's decision revealed two big characteristics. (5 sentences with 108 words are abbreviated here.) Mr Goldsmith - who first entered Parliament in 2010 - told the BBC's Daily Politics that he hoped his environmental record would appeal to Green and Lib Dem voters and he also hoped to "reach out" to UKIP supporters frustrated with politics as usual and the UK's relationship with the EU. Zac Goldsmith: born in 1975, educated at Eton and the Cambridge Centre for Sixth-form Studies. (5 sentences with 76 words are abbreviated here.) Mr Goldsmith, who has confirmed he would stand down from Parliament if he became mayor, triggering a by-election, said he wanted to build on current mayor Boris Johnson's achievements. (3 sentences with 117 words are abbreviated here.) Both Mr Khan and Mr Goldsmith oppose a new runway at Heathrow airport, a fact described by the British Chambers of Commerce as "depressing". (1 sentence with 31 words is abbreviated here.) Current mayor Boris Johnson will step down next year after two terms in office. He is also currently the MP for Uxbridge and South Ruislip, having been returned to Parliament in May. Some Conservatives have called for an inquiry into the mayoral election process after only 9,227 people voted - compared with an 87,884 turnout for the Labour contest. (4 sentences with 121 words are abbreviated here.)
PTGEN: UKIP leader Nigel Goldsmith has been elected as the new mayor of London to elect a new Conservative MP. [45.7, 6.1, 28.6]
TCONVS2S: Former London mayoral candidate Zac Goldsmith has been chosen to stand in the London mayoral election. [50.0, 26.7, 37.5]
TRANS2S: Former London mayor Sadiq Khan has been chosen as the candidate to be the next mayor of London. [35.3, 12.5, 23.5]
GPT-TUNED: Conservative MP Zac Goldwin's bid to become Labour's candidate in the 2016 London mayoral election. [42.4, 25.8, 36.4]
BERTS2S: Zac Goldsmith has been chosen to contest the London mayoral election. [66.7, 40.0, 51.9]
Figure 1: Hallucinations in extreme document summarization: the abbreviated article, its gold summary and the abstractive model generated summaries (PTGEN, See et al. 2017; TCONVS2S, Narayan et al. 2018a; and GPT-TUNED, TRANS2S and BERTS2S, Rothe et al. 2020) for a news article from the extreme summarization dataset (Narayan et al., 2018a). The dataset and the abstractive models are described in Sections 3 and 4. We also present the [ROUGE-1, ROUGE-2, ROUGE-L] F1 scores relative to the reference gold summary. Words in red correspond to hallucinated information whilst words in blue correspond to faithful information.

Our main conclusions are as follows: First, intrinsic and extrinsic hallucinations happen frequently – in more than 70% of single-sentence summaries. Second, the majority of hallucinations are extrinsic, which potentially could be valid abstractions that use background knowledge. However, our study found that over 90% of extrinsic hallucinations were erroneous. Thus, hallucinations happen in most summaries and the majority of these are neither faithful nor factual. Third, models initialized with pretrained parameters perform best both on automatic metrics and human judgments of faithfulness/factuality. Furthermore, they have the highest percentage of extrinsic hallucinations that are factual. This suggests that while some studies argue that large-scale pretrained models are merely better at learning data-specific regularities (Niven and Kao, 2019), at least on in-domain summarization the gains in automatic metrics are realized in observable differences by humans. Fourth, ROUGE (Lin and Hovy, 2003) and BERTScore (Zhang et al., 2020) correlate less with faithfulness/factuality than metrics derived from automatic semantic inference systems, specifically the degree to which a summary is entailed by the source document. This presents an opportunity for improved automatic evaluation measures as well as model training and decoding objectives. We show preliminary experiments in this direction.

2 Hallucinations in Summarization
Open-ended generation — the task of generating text that forms a natural continuation from the input text — requires the model to hallucinate text; hence the focus has been to ensure that the model learns to generate text that is more human-like (i.e., less repetitive or dull, with more content-related words) (Holtzman et al., 2020; Welleck et al., 2020; See et al., 2019). In contrast, tasks such as document summarization (Nenkova and McKeown, 2011; See et al., 2017; Paulus et al., 2018) and data-to-text generation (Lebret et al., 2016; Wiseman et al., 2017), which are not open-ended, require models to be factual and/or faithful to the source text. Despite recent improvements in conditional text generation, most summarization systems are trained to maximize the log-likelihood of the reference summary at the word level, which does not necessarily reward models for being faithful.
Moreover, models are usually agnostic to noise or artifacts in the training data, such as reference divergence, making them vulnerable to hallucinations (Kryscinski et al., 2019a; Wiseman et al., 2017; Dhingra et al., 2019). Thus, models can generate text that is not consistent with the input, yet is likely to have reasonable model log-likelihood.

2.1 Intrinsic and Extrinsic Hallucinations
Given a document D and its abstractive summary S, we try to identify all hallucinations in S with respect to the content of D, regardless of the quality of the summary. In this work, we define a summary as hallucinated if it contains one or more spans w_i . . . w_{i+j}, j ≥ i, that are not supported by the input document. To distinguish hallucinations further in the context of a document and a summary, we categorize hallucinations by their information source as intrinsic and extrinsic hallucinations. Note that paraphrases or any information that can be inferred from the document are not categorized as hallucinations.

Intrinsic hallucinations are consequences of synthesizing content using the information present in the input document. For example, in Figure 1, "Former London mayoral candidate" in the TCONVS2S abstract and "Former London mayor" in the TRANS2S abstract are intrinsic hallucinations; both use terms or concepts from the document but misrepresent information from the document, making them unfaithful to the document. The article does not confirm that "Zac Goldsmith" was a "Former London mayoral candidate" or that "Sadiq Khan" was a "Former London mayor". One may suspect that a model with a poor input document representation will fail to perform document-level inference, often required for abstraction, and will be vulnerable to such errors.

Extrinsic hallucinations are model generations that ignore the source material altogether. For example, in Figure 1, "Nigel" in the PTGEN abstract and "2016" in both GOLD and GPT-TUNED are extrinsic hallucinations; these terms are not introduced in the document. A model with a poorly-informed decoder that is agnostic to the divergence between the source and target texts (Wiseman et al., 2017; Dhingra et al., 2019) will function more as an open-ended language model and will be prone to extrinsic hallucinations.

2.2 Factual Hallucinations in Summarization
A summary S of a document D contains a factual hallucination if it contains information not found in D that is factually correct. Factual hallucinations may be composed of intrinsic hallucinations or extrinsic hallucinations. By definition, abstractive summaries are written to preserve the salient information in the input document, but they are expressed in the words of the summary author as opposed to the input document author (Nenkova and McKeown, 2011). As such, it is natural to construct summaries that integrate the author's background knowledge (van Dijk and Kintsch, 1978; Brown and Day, 1983). Such knowledge integration can also be desirable in real-world applications. For instance, an engaging sports report will reflect an understanding of the game to provide color and context. Another example is audience-targeted summarization, where a good summary will reflect understanding of both the article domain and the desired audience. Nonetheless, there is no consensus in the research community on whether a summary should be faithful (without any hallucinations) to the input document or whether there is tolerance for factual hallucinations.
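To make the taxonomy above concrete, the sketch below (not from the paper; the field names, labels and example data are our own illustrative assumptions) shows one way the span-level annotations of Sections 2.1 and 2.2 could be represented, and how the faithful/hallucinated and factual distinctions fall out of them.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class HallucinatedSpan:
    start: int   # index of the first summary token in the span (w_i)
    end: int     # index of the last summary token in the span (w_{i+j})
    kind: str    # "intrinsic" or "extrinsic"

@dataclass
class SummaryAnnotation:
    summary_tokens: List[str]
    spans: List[HallucinatedSpan] = field(default_factory=list)
    factual: Optional[bool] = None  # judged against background knowledge

    def is_faithful(self) -> bool:
        # A summary with no unsupported spans is faithful to the document.
        return len(self.spans) == 0

    def has(self, kind: str) -> bool:
        return any(s.kind == kind for s in self.spans)

# Illustrative example modelled on Figure 1: "2016" is not in the article,
# so it is an extrinsic hallucination, but it happens to be factually correct.
gold = SummaryAnnotation(
    summary_tokens="Zac Goldsmith will contest the 2016 London mayoral election".split(),
    spans=[HallucinatedSpan(start=5, end=5, kind="extrinsic")],
    factual=True,
)
print(gold.is_faithful(), gold.has("extrinsic"), gold.factual)  # False True True
```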
Recent deep learning approaches to abstractive summarization naturally learn to integrate knowledge from the training data while generating an abstractive summary for a document (See et al., 2017; Gehrmann et al., 2018). More advanced pretrained text generators (Radford et al., 2018, 2019; Dong et al., 2019; Song et al., 2019; Khandelwal et al., 2019; Rothe et al., 2020) are even better at capturing world knowledge as they are informed by a vast amount of background text. This can be observed in the example shown in Figure 1 as the input document does not mention that the discussed “London mayoral election” is from “2016”; but the abstract generated by the pretrained text generator GPT-TUNED correctly predicts this information similar to the human-authored abstract.2 2Despite the correct extrinsic hallucination (“2016 ”), the GPT-TUNED abstract overall is still not factual due to the incorrect extrinsic hallucination in “Conservative MP Zac Goldwin.” There is no Conservative MP named Zac Goldwin. 1909 In this paper we stand in favour of the assertion that abstractive systems may integrate with the background knowledge to generate rich and meaningful summaries. More concretely, “hallucinations in summarization are acceptable if they lead to better summaries that are factual with respect to the document and the associated background knowledge.” This assumption also allows us to assess the capability of recent neural models to integrate with the background knowledge to generate factual abstracts (see Section 5.3). 3 Extreme Document Summarization We focus on the recently introduced extreme summarization dataset (XSUM, Narayan et al., 2018a)3 which comprises 226,711 British Broadcasting Corporation (BBC) articles paired with their singlesentence summaries, provided by the journalists writing the articles. The dataset is split into three subsets: training (90%, 204,045), validation (5%, 11,332), and test (5%, 11,334) sets. All models in §4 trained to generate abstractive summaries are trained and evaluated using this standard split, provided by the authors. We choose to focus our study on extreme summarization for the following reasons: First, this task aims to create a single-sentence summary of a news article; these shorter summaries are relatively easier to annotate and analyze than longer summaries such as story highlights from the CNN/Dailymail dataset (Hermann et al., 2015) or abstracts from the NY Times (Sandhaus, 2008) or the WikiSum (Liu et al., 2018) dataset. Secondly, the gold summary in the extreme summarization dataset is an introductory sentence prefacing each article. By virtue of this property, the extreme summarization task is not amenable to extractive strategies and requires an abstractive modeling approach. Hence, it provides us a better benchmark to assess abstractive models’ abilities to produce abstractions which are faithful and factual. Finally, since we conclude that hallucination is a problem on this dataset, then we can safely conclude it is a problem for summarization datasets with longer summaries, as modeling longer-distance dependencies and discourse structures make the task harder. 4 Abstractive Summaries We evaluate summaries from RNN, CNN and Transformer-based state-of-the-art abstractive summarization methods and the reference human writ3https://github.com/EdinburghNLP/XSum ten summaries. See the Appendix for hyperparameter and decoding details for all models. Human Written Reference Summaries. 
The single-sentence summaries contained in the extreme summarization dataset (XSUM) are also evaluated as part of this study. These summaries were written by journalists as introductions to the news articles they precede. These summaries, therefore, often contain additional, true information not found in the document. Such divergence between source and target is not uncommon in conditional text generation (Kryscinski et al., 2019a; Wiseman et al., 2017; Dhingra et al., 2019).

RNN-based Seq2Seq. We use the Pointer-Generator model (PTGEN) introduced by See et al. (2017), an RNN-based, attention-based sequence-to-sequence model which not only generates from the target vocabulary but can also copy words from the source text.

Topic-aware Convolutional Seq2Seq. The Topic-aware Convolutional Sequence to Sequence model (TCONVS2S) introduced by Narayan et al. (2018a) is an abstractive system which is conditioned on the article's topics and based entirely on Convolutional Neural Networks (Gehring et al., 2017). TCONVS2S is better suited for extreme summarization, as convolution layers capture long-range dependencies between words in the document more effectively than RNNs. Simultaneously, the convolutional encoder associates each word with a topic vector, capturing whether it is representative of the document's content.

Transformer-based Abstractive Methods. We experiment with three Transformer-based model variants, all of which have 12 layers, a hidden size of 768, a filter size of 3072, and 12 attention heads.

GPT-TUNED: Radford et al. (2019) proposed Transformer-based Generative Pre-Trained (GPT) language models that can generate high-quality text in open-ended generation setups. The proposed decoder-only architecture for language modeling can be easily adapted to abstractive summarization, where the model first sees the document and, given a prompt such as TL;DR;, generates its summary. Our GPT-TUNED is warm-started with a publicly available GPT checkpoint (Radford et al., 2019), but fine-tuned with supervised training on the extreme summarization dataset.

TRANS2S and BERTS2S: TRANS2S and BERTS2S are sequence-to-sequence models where both encoder and decoder are composed of Transformer layers (Vaswani et al., 2017; Rothe et al., 2020). All weights in TRANS2S are randomly initialized, but in BERTS2S both encoder and decoder are initialized with the BERT-Base checkpoints (Devlin et al., 2019), with parameter sharing between the encoder and decoder, following Rothe et al. (2020). The only component that is initialized randomly is the encoder-decoder attention in BERTS2S. Both models are then trained on the extreme summarization dataset.

5 Experiments and Results
The main focus of this work is not to propose a solution to hallucination-related issues, but to achieve a better understanding of hallucinations in abstractive summarization through their human assessment. We randomly sampled 500 articles from the test set to facilitate our study. Using the full test set was unfeasible given its size and the cost of human judgments.
We trained annotators (fluent in English) specifically for our assessment. Our annotators went through two pilot studies to gain a better understanding of intrinsic and extrinsic hallucinations and of the factuality of summaries. Documents used in the pilot studies were not used in the final annotation. We also report on ROUGE (Lin and Hovy, 2003) scores, BERTScore (Zhang et al., 2020) and semantic inference metrics such as textual entailment (Pasunuru and Bansal, 2018; Welleck et al., 2019; Falke et al., 2019; Kryscinski et al., 2019b) and question answering (Arumae and Liu, 2019; Wang et al., 2020).

5.1 Automatic Evaluation of Summaries
ROUGE (Lin and Hovy, 2003) provides a means to quickly assess a model's ability to generate summaries close to human-authored summaries. We report on ROUGE-1 and ROUGE-2 for informativeness and ROUGE-L for fluency. Like ROUGE, BERTScore (Zhang et al., 2020) computes a similarity score for each token in the candidate summary with each token in the reference summary. However, instead of exact matches, it computes token similarity using contextual embeddings. Results are presented in Table 1.

Human Eval Test Set
Models      R1     R2     RL     BERTScore
PTGEN       30.01   9.38  23.76  74.30
TCONVS2S    30.89  11.47  25.80  75.23
TRANS2S     32.28  11.66  24.65  75.69
GPT-TUNED   21.82   4.72  16.28  70.35
BERTS2S     38.42  16.96  31.27  78.85
Table 1: ROUGE and BERTScore F1 scores for non-pretrained (the top block) and pretrained (the bottom block) models reported on the XSum dataset. These results are on the sampled human evaluation (500 items) test set. The best results are boldfaced.

For both metrics, the pretrained encoder-decoder architecture BERTS2S performed far better than the randomly initialized models (PTGEN, TCONVS2S and TRANS2S) and the decoder-only architecture GPT-TUNED. The differences between PTGEN, TCONVS2S and TRANS2S are not significant; all other differences are significant.4

4 Pairwise comparisons between all models using a one-way ANOVA with post-hoc Tukey HSD tests; p < 0.01.

ROUGE and BERTScore are indicators of the informativeness of summaries, but they are not sufficient metrics to assess the overall quality of summaries. This becomes evident from our human assessments in the following sections, where we employ human annotators to evaluate summaries generated with PTGEN, TCONVS2S, TRANS2S and BERTS2S, as well as the human-authored summaries. We excluded GPT-TUNED abstracts from our study after their poor performance on the automatic measures.
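As a rough illustration of how such automatic scores can be computed (this is not the authors' evaluation code; the rouge-score and bert-score packages and their default settings are our assumptions), one possible sketch:

```python
# pip install rouge-score bert-score
from rouge_score import rouge_scorer
from bert_score import score as bertscore

reference = "Zac Goldsmith will contest the 2016 London mayoral election for the Conservatives."
candidate = "Zac Goldsmith has been chosen to contest the London mayoral election."

# ROUGE-1/2 for informativeness, ROUGE-L for fluency (F1 values are reported in the paper).
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
rouge = scorer.score(reference, candidate)
print({name: round(s.fmeasure * 100, 1) for name, s in rouge.items()})

# BERTScore compares tokens via contextual embeddings instead of exact n-gram matches.
P, R, F1 = bertscore([candidate], [reference], lang="en")
print(f"BERTScore F1: {F1.item() * 100:.2f}")
```

The exact numbers will differ from Table 1, which depends on the models' outputs and on the particular ROUGE and BERTScore configurations the authors used.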
5.2 Assessment of Hallucinations
In this assessment, human annotators were presented with an article and a single-sentence summary of that article. They were stringently told to assess only the hallucinations in the summary and not to confuse this assessment with the quality of the summary. For summaries containing hallucinations, annotators were tasked with (i) identifying the text spans that were unfaithful to the article and (ii) annotating, for each text span, whether the hallucination was intrinsic or extrinsic. We elicited judgments from three different annotators for each of the 2500 (500x5) document-summary pairs. Figure 2 shows an example assessment of a summary of an article from Figure 1.

Figure 2: Human assessment of a system-generated summary for the article in Figure 1. The annotation user interface is shown as it was presented to raters.

Results from the full assessment are shown in Table 2, which reports the percentage of documents per system that were annotated as faithful or hallucinated (faithful = 100 - hallucinated). The Appendix provides inter-annotator agreement for hallucinations as well as hallucinated span characteristics.

            Hallucinated
Models      I      E      I∪E     Faith.   +Fact.
PTGEN       19.9   63.3   75.3    24.7     27.3
TCONVS2S    17.7   71.5   78.5    21.5     26.9
TRANS2S     19.1   68.1   79.3    20.7     25.3
BERTS2S     16.9   64.1   73.1    26.9     34.7
GOLD         7.4   73.1   76.9    23.1     —
Table 2: Intrinsic vs. extrinsic hallucinations. The numbers in the "Hallucinated" columns show the percentage of summaries where at least one word was annotated by all three annotators as an intrinsic (I) or extrinsic (E) hallucination. When a summary is not marked with any hallucination, it is "faithful" (100 - I∪E), column "Faith.". The final "+Fact." column shows the total percentage of faithful and/or factual summaries, which includes all faithful summaries plus the percentage of non-faithful summaries annotated by all three annotators as factual. Higher numbers for faithful/factual and lower numbers for hallucinations are boldfaced.

Extrinsic Hallucination due to Divergence between Source and Target. Our results confirmed that the BBC gold summaries often have extrinsic hallucinations due to the dataset artifact that gold summaries are introductory sentences prefacing each article. It was not surprising that most models also had significant extrinsic hallucinations.

Intrinsic Hallucination is Also Common in Abstractive Summaries. Gold summaries can also display intrinsic hallucinations. For example, a news article could describe an event related to "Barack Obama" and "the office of the President of the United States" without stating that "Obama is the President of the United States." A journalist with knowledge of the event in the article could write a summary stating "President Obama." However, the percentage of system summaries with intrinsic hallucinations was much higher than for gold summaries (7.4% for GOLD vs. 16.9–19.9% for the systems). This phenomenon particularly reveals the models' tendency to misrepresent information in the document due to a lack of document-level understanding and inference. The copy mechanism in PTGEN is good at copying from the source (showing the lowest percentage of extrinsic hallucination, 63.3%), but the mechanism lacks inference capability and is prone to generating summaries that are not supported by the document (19.9% intrinsic hallucination). TRANS2S showed similar performance to PTGEN and ranked second worst. BERTS2S showed the lowest percentage of intrinsic hallucination (16.9%) among all four abstractive systems.

Pretraining Improves Faithfulness. Hallucinations do not result only from artifacts in the training data, but also from model shortcomings. The PTGEN model with the copy mechanism (Gu et al., 2016; See et al., 2017) had the lowest extrinsic hallucination (63.3%), but BERTS2S produced the highest number of faithful summaries. It appears that BERTS2S is overall the most conservative among the four abstractive systems while getting closer to the reference summaries in terms of ROUGE. Pretraining prepares BERTS2S to be more aware of the domain of the document and less prone to language-model vulnerabilities. Consequently, BERTS2S is more confident than TRANS2S in predicting tokens from the document, hence improving faithfulness.

5.3 Assessment of Factual Hallucinations
Hallucinations are not necessarily erroneous. In our second human assessment, we measured to what extent this is the case. Our annotators were presented with a single-sentence summary containing hallucinations and were asked to assess whether it is true or false. To better explain the context of the summary, annotators were given access to the source document as well as to external resources such as Wikipedia or Google Search. The source document can be particularly important for generic summaries, to better understand context. External resources assisted the evaluators in validating facts grounded in public knowledge bases.
Annotators were expected to validate the summary by looking for supporting evidence for the information found on the summary. If information in the summary contradicts the document, then the summary is not factual. If supporting evidence is found for all the information, then the summary is factual. The document is not useful when the summary has information that is neither supported nor contradicted in the article. For example, the summary in Figure 2 mentions “Conservative MP Zac Goldwin” which can not be verified from the article in Figure 1. Here, annotators could use Wikipedia or Google Search to check that there had not been a Conservative MP named Zac Goldwin who tried to change their party and become a Labour’s candidate in the 2016 London mayoral election. We dropped the human authored gold summaries from this evaluation; they were presumably factual. We also dropped the abstracts that were faithful to their input documents from the previous study. Finally, there were 1869 document-summary pairs where the summaries were marked with at least one intrinsic or extrinsic hallucination. We elicited judgments from three different annotators for each of them. Results from this assessment are also presented in Table 2 (see the column labelled “+Fact.”) along with the hallucination assessment. 1912 Pretraining Helps Generating Factual Summaries. In total, 34.7% of the BERTS2S abstracts were faithful (26.9%) and/or factual (+7.8%). This is 7.4% absolute better than the next-best model (PTGEN). The number of unfaithful yet factual summaries for BERTS2S, 7.8%, was also the highest. In fact, for extrinsic hallucinations, even though PTGEN hallucinates less than BERTS2S (63.3% vs. 64.1%), 6.6% of BERTS2S hallucinations were factual, compared to 2.2% of PTGEN.5 Thus, if we consider factual hallucinations to be valid, this means that even for extrinsic cases, BERTS2S hallucinates the least. The superior performance of BERTS2S is most likely due to its exposure to vast amount of text through pretraining, allowing it to integrate background knowledge with generation. Even so, over 90% of BERTS2S hallucinations are erroneous. Finally, we carried out pairwise comparisons between all models (using a one-way ANOVA with post-hoc Tukey HSD tests; p < 0.01). For intrinsic hallucinations (the second column in Table 2), GOLD is significantly different from all other systems. For extrinsic hallucinations (the third column in Table 2), there were significant differences between PTGEN and TCONVS2S, PTGEN and GOLD, and, BERTS2S and GOLD. For factuality, the differences between PTGEN, TCONVS2S, and TRANS2S were insignificant. 5.4 Automatic Measures for Hallucinations Summaries are a proxy for their source documents under the assumption that they highlight the most important content. With this assumption, we further studied the extent to which the hallucinated content can be measured by semantic inference related measures, such as textual entailment and question answering. Textual Entailment. We trained an entailment classifier by finetuning a BERT-Large pretrained model (Devlin et al., 2019) on the Multi-NLI dataset (Williams et al., 2018). We calculated the entailment probability score between the document and its abstractive summaries. Note that this entailment classifier is not optimal for the BBC article-summary pairs; the Multi-NLI dataset contains sentence-sentence pairs. Ideally a summary should entail the document or perhaps be neutral to the document, but never contradict the document. 
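A minimal sketch of this kind of document–summary entailment scoring, using a publicly available MNLI checkpoint from the transformers library as a stand-in for the authors' fine-tuned BERT-Large classifier (the checkpoint name, truncation to 512 tokens, and label handling are our assumptions, not the paper's setup):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "roberta-large-mnli"  # stand-in; the paper fine-tunes BERT-Large on Multi-NLI
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME).eval()

def entailment_probability(document: str, summary: str) -> float:
    """Probability that the summary is entailed by the document (premise = document)."""
    inputs = tokenizer(document, summary, truncation=True, max_length=512,
                       return_tensors="pt")
    with torch.no_grad():
        probs = torch.softmax(model(**inputs).logits, dim=-1)[0]
    # Label order differs across checkpoints; look it up rather than hard-coding an index.
    entail_id = {k.lower(): v for k, v in model.config.label2id.items()}["entailment"]
    return probs[entail_id].item()

def select_most_faithful(document: str, candidates: list) -> str:
    # Mirrors the ENTAIL selection criterion discussed later in Section 5.5:
    # pick the candidate summary with the highest entailment probability.
    return max(candidates, key=lambda s: entailment_probability(document, s))
```

Note that full news articles can exceed the 512-token limit of such models, so a real implementation needs a truncation or pairing strategy; this mirrors the paper's own caveat that Multi-NLI contains sentence–sentence rather than document–summary pairs.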
As can be seen in Table 3, the BERTS2S abstracts showed the least number of 5See Appendix for full results. Models Textual Entailment QA entail. neut. cont. PTGEN 38.4 34.4 27.2 20.2 TCONVS2S 29.6 37.4 33.0 19.9 TRANS2S 34.6 39.8 25.6 22.4 BERTS2S 41.8 37.8 20.4 23.0 GOLD 32.8 47.2 20.0 19.3 Table 3: Textual entailment and question answering (QA) based measures for summary evaluation. For entailment, we show the percentage of times a summary entails (entail.) the document, is neutral (neut.) to the document and contradicts (cont.) the document. For QA, we report the percentage of questions that were correctly answered by a system. The highest numbers for entail., neut. and QA, and the lowest number for cont. are boldfaced. contradictions compared to other system-generated abstracts and was at par with the GOLD summaries. Similar to the performance on extrinsic hallucination in Table 2, the TCONVS2S abstracts also displayed the highest number of contradictions. Interestingly, the GOLD summaries are more neutral to their documents, whereas the BERTS2S summaries are more entailed by their documents. This is probably due to the nature of the data and that journalists tend to add color and have a high number of extrinsic (but valid) hallucinations. Question Answering. QA frameworks have been used to assess or promote summary informativeness (Narayan et al., 2018b; Arumae and Liu, 2019). We adapted the QA framework to assess hallucination in model generated summaries; a faithful model will generate a summary that only has information that is supported by its document. Under this assumption, any question answerable by the summary should also be answerable by the source. Given an abstractive summary, we used the round-trip consistency method of Alberti et al. (2019), which combines question generation and answer extraction models to generate synthetic question-answer pairs. For the 500 documentsummary pairs, we generated 731, 708, 720, 725 and 820 question-answer pairs for PTGEN, TCONVS2S, TRANS2S, BERTS2S and GOLD, respectively. Finally, we used a machine reading comprehension model to answer these questions using the document as context. As in Alberti et al. (2019), we trained all models: question generation, answer extraction and reading comprehension models; using a BERT-Base pretrained model (Devlin et al., 2019) finetuned on the Natural Questions dataset (Kwiatkowski et al., 2019). Similar to textual entailment results, the 1913 PTGEN Leeds United fought back from 2-0 down to beat Huddersfield town in the first round of the EFL cup. (Q: What team did Leeds United beat in the first round of the EFL cup?, A: Huddersfield town) TCONVS2S A coal mine in South Yorkshire has collapsed as a result of the loss of a coal mine. (Q: What type of mine has collapsed?, A: Coal) TRANS2S Star Wars actor James Davis said he was “locked in a caravan” and had his caravan stolen during a break-in. (Q: Who said he was locked in a caravan?, A: Davis) Figure 3: Sample of question-answer pairs generated from hallucinated summaries that are correctly answered by their source articles. Highlighted spans in the summaries are marked as extrinsic or intrinsic hallucinations by our annotators. Metric Faithful Factual ROUGE-1 0.197 0.125 ROUGE-2 0.162 0.095 ROUGE-L 0.162 0.113 BERTScore 0.190 0.116 QA 0.044 0.027 Entailment 0.431 0.264 Table 4: Spearman’s correlation coefficient (|rs|) of different metrics with faithful and factual annotations. 
BERTS2S abstracts were the most faithful to their source documents in terms of question answering. The GOLD abstracts were the least accurate due to a high number of extrinsic hallucination in them. Spearman’s Correlation. We estimate Spearman’s correlation coefficients of different metrics with the faithful and factual human scores (see Table 4). We found that the textual entailment scores are best correlated with both faithful (moderate, 0.40 ≤|rs| ≤0.59) and factual (weak, 0.20 ≤|rs| ≤0.39) human scores. Comparatively, ROUGE-based metrics and BERTScore have very weak correlation, our findings are consistent with the recent studies (Goodrich et al., 2019; Kryscinski et al., 2019a; Wang et al., 2020). Surprisingly, the question answering scores showed a very weak correlation (0.0 ≤|rs| ≤0.19) with faithful and factual human scores. We hypothesize that this is due to a compounding of errors where (i) the question generator is used to generate questions from the systems’ generated abstracts, instead of human-written text on which they were trained, (ii) the question generator is susceptible to generate questions with hallucinated content when fed in with hallucinated summaries, and (iii) our assumption that a summary is faithful if the answers from the source and the summary match, is rather poor for extreme summarization. We demonstrate these issues in Figure 3. Irrespective of questions with hallucinated content, our reading comprehension Models R1 R2 RL Faith. +Fact. BERTS2S 38.42 16.96 31.27 26.9 34.7 ENTAIL 35.93 14.02 28.87 31.5 38.6 →FAITH 37.31 15.21 30.12 31.7 38.8 Table 5: ROUGE and faithfulness/factuality scores for BERTS2S plus systems that use textual entailment as a criteria or fine-tuned on faithful annotations. model can fortuitously answer them correctly from their source articles. Better ways of generating questions (Narayan et al., 2020) and measuring factual consistency may alleviate some of these issues (Wang et al., 2020). 5.5 Model Selection with Entailment Our study suggests that entailment could be used as an automatic measure for faithfulness. However, we should point out that this measure is referenceless. Thus, it can easily be gamed, i.e., the first sentence of any source document is always entailed by the whole document. Because of this, entailmentbased measures for evaluation need to be coupled with reference-based measures like ROUGE. However, one major advantage of the measure being reference-less is that we can use it as a model selection objective or during decoding. We tested the former. Specifically, we used the probability that a summary is entailed by a document as a selection criteria to select a summary between four candidates generated by systems evaluated: PTGEN, TCONVS2S, TRANS2S, and BERTS2S. Results are shown in the ENTAIL row of Table 5. We can see that indeed this is a strong metric to optimize towards if we want faithful summaries - almost 5% absolute better. There is a trade-off in terms of ROUGE, but this model must select amongst 4 systems, 3 of which have significantly lower ROUGE than the best model. A further experiment is to train a model explicitly to predict faithfulness. In order to do this, we further fine-tuned the entailment model using the ‘faithful’ annotations generated during our evaluation. For all summary-document pairs marked as ‘faithful’, we set the associated class to ‘entailment’, otherwise we set it to ‘neutral’. 
This allowed for us to also fine-tune the last classification layers taking advantage of the correlation between ‘entailment’ and ‘faithfulness’. Results using 5-fold cross validation are shown in the ENTAIL→FAITH row of Table 5. We see here that indeed this does improve the ability to select faithful summaries from a set of candidates, though slightly. We would expect to see larger gains with more training data. However, this model is significantly better than ENTAIL 1914 on ROUGE-based metrics and seems like a good balance between ROUGE and better faithfulness. 6 Related Work Following the Document Understanding Conference (DUC; Dang, 2005), a majority of work has focused on evaluating the content and the linguistic quality of summaries (Nenkova, 2005). Most popular among them is the automatic metric ROUGE (Lin and Hovy, 2003) that measures the unigram and bigram overlap (ROUGE-1 and ROUGE-2) as a proxy for assessing informativeness and the longest common subsequence (ROUGE-L), for fluency. ROUGE, however, can be misleading when used as the only means to assess the informativeness of summaries (Schluter, 2017). Hence, the ROUGE score is often complemented with subjective human assessment of summaries. More objective measures have been proposed to improve agreement among human annotators. Pyramid method (Nenkova and Passonneau, 2004) requires summaries to be annotated by experts for salient information. Narayan et al. (2018a,b) used a questionanswering based approach where a summary is used as context to answer questions which were written based on its reference summary. Hardy et al. (2019) proposed a reference-less approach where a summary is assessed against the source document, highlighted with its pertinent content. There has not been much work on evaluating faithfulness and truthfulness of abstractive summaries. The automatic evaluation such as ROUGE and the human evaluation of saliency and linguistic quality of summaries are not sufficient due to the complex nature of the task. Recently, Chen and Bansal (2018) asked human annotators to assess the summary relevance measuring both the saliency and the presence of contradictory/unrelated information. Dhingra et al. (2019) proposed a new automatic metric, PARENT, for data-to-text generation (Lebret et al., 2016; Wiseman et al., 2017) which aligns n-grams from the reference and generated texts to the source table to measure the accuracy of n-grams that are entailed from the source table. Goodrich et al. (2019) proposed a modelbased automatic metric to assess the faithfulness of Wikipedia summaries; they trained an end-to-end model to extract a complete set of OpenIE-style (Banko et al., 2007) facts from both the source text and the generated summary. The summary is faithful if it is precise in generating facts from the source text. In our experiments with OpenIEbased measures, we found that they are not suited for evaluating extreme summarization models; all models perform poorly on these metrics without any significant differences. Like ours, few recent works (some in parallel) have explored natural language inference and question answering models to detect factual consistency in generated text (Welleck et al., 2019; Falke et al., 2019; Kryscinski et al., 2019b; Wang et al., 2020). In line with our findings, Falke et al. (2019) observed that the BERT-based NLI models substantially improved summaries reranking in terms of their correctness. Kryscinski et al. 
(2019b) proposed an NLI-based fact checking model that is trained on a dataset tailored for detecting factual inconsistencies in generated text. Wang et al. (2020) proposed a question answering and generation based automatic evaluation protocol that is designed to identify factual inconsistencies in a generated summary. Future work will likely investigate better ways of generating questions and measuring factual consistency to address poor correlation with faithfulness and factuality annotations. Finally, others have used reinforcement learning to improve informativeness and reduce contradictory information in abstractive summaries, e.g., Pasunuru and Bansal (2018) used a textual entailment-based reward and Arumae and Liu (2019), a question-answering based reward. However, these approaches don’t evaluate if these rewards improve faithfulness of summaries. 7 Conclusion We conducted a large-scale study of hallucinations in abstractive document summarization. We found that (i) tackling hallucination is a critical challenge for abstractive summarization, perhaps the most critical, (ii) NLU-driven pretraining in neural text generators is key to generate informative, coherent, faithful and factual abstracts, but it is still far from solving the problem; and (iii) measures such as ROUGE or BERTScore will not be sufficient when studying the problem; semantic inference-based automatic measures are better representations of true summarization quality. Acknowledgments We thank Ratish Puduppully, Yova Kementchedjhieva, Ankur Parikh, Peter Liu, Slav Petrov, the reviewers and the action editor for invaluable feedback. The hard work of Muqthar Mohammad, Mohd Majeed and Ashwin Kakarla made our human annotation possible. 1915 References Chris Alberti, Daniel Andor, Emily Pitler, Jacob Devlin, and Michael Collins. 2019. Synthetic QA corpora generation with roundtrip consistency. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6168– 6173, Florence, Italy. Kristjan Arumae and Fei Liu. 2019. Guiding extractive summarization with question-answering rewards. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2566–2577, Minneapolis, Minnesota. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, San Diego, CA, USA. Michele Banko, Michael J. Cafarella, Stephen Soderland, Matt Broadhead, and Oren Etzioni. 2007. Open information extraction from the web. In Proceedings of the 20th International Joint Conference on Artifical Intelligence, pages 2670–2676, Hyderabad, India. Ann L. Brown and Jeanne D. Day. 1983. Macrorules for summarizing texts: The development of expertise. Journal of Verbal Learning and Verbal Behaviour, 22(1):1–14. Yen-Chun Chen and Mohit Bansal. 2018. Fast abstractive summarization with reinforce-selected sentence rewriting. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pages 675–686, Melbourne, Australia. Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc Le, and Ruslan Salakhutdinov. 2019. Transformer-XL: Attentive language models beyond a fixed-length context. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2978–2988, Florence, Italy. Hoa Trang Dang. 2005. Overview of DUC 2005. 
In Proceedings of the Document Understanding Conference, pages 1–12. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4171–4186, Minneapolis, Minnesota. Bhuwan Dhingra, Manaal Faruqui, Ankur Parikh, Ming-Wei Chang, Dipanjan Das, and William Cohen. 2019. Handling divergent reference texts when evaluating table-to-text generation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4884–4895, Florence, Italy. Teun A. van Dijk and Walter Kintsch. 1978. Cognitive psychology and discourse: Recalling and summarizing stories. In Wolfgang U. Dressler, editor, Current Trends in Textlinguistics, pages 61–80. Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified language model pre-training for natural language understanding and generation. In Advances in Neural Information Processing Systems 32, pages 13063–13075. Curran Associates, Inc. Tobias Falke, Leonardo F. R. Ribeiro, Prasetya Ajie Utama, Ido Dagan, and Iryna Gurevych. 2019. Ranking generated summaries by correctness: An interesting but challenging application for natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2214–2220, Florence, Italy. Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. 2017. Convolutional sequence to sequence learning. In Proceedings of the 34th International Conference on Machine Learning, volume 70, pages 1243—-1252, Sydney, NSW, Australia. Sebastian Gehrmann, Yuntian Deng, and Alexander Rush. 2018. Bottom-up abstractive summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4098–4109, Brussels, Belgium. Ben Goodrich, Vinay Rao, Peter J. Liu, and Mohammad Saleh. 2019. Assessing the factual accuracy of generated text. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 166–175, New York, NY, USA. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1631–1640, Berlin, Germany. Hardy, Shashi Narayan, and Andreas Vlachos. 2019. HighRES: Highlight-based reference-less evaluation of summarization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3381–3392, Florence, Italy. Karl Moritz Hermann, Tom´aˇs Koˇcisk´y, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems 28, pages 1693–1701. Curran Associates, Inc. Ari Holtzman, Jan Buys, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text degeneration. In Proceedings of the 8th International Conference on Learning Representations, Virtual Conference, Formerly Addis Ababa Ethiopia. 1916 Urvashi Khandelwal, Kevin Clark, Dan Jurafsky, and Lukasz Kaiser. 2019. Sample efficient text summarization using a single pre-trained transformer. CoRR, abs/1905.08836. 
Wojciech Kryscinski, Nitish Shirish Keskar, Bryan McCann, Caiming Xiong, and Richard Socher. 2019a. Neural text summarization: A critical evaluation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, pages 540–551, Hong Kong, China. Wojciech Kryscinski, Bryan McCann, Caiming Xiong, and Richard Socher. 2019b. Evaluating the factual consistency of abstractive text summarization. CoRR, abs/1910.12840. Taku Kudo and John Richardson. 2018. Sentencepiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. CoRR, abs/1808.06226. Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:453–466. J. Richard Landis and Gary G. Koch. 1977. The measurement of observer agreement for categorical data. Biometrics, 33(1):159–174. R´emi Lebret, David Grangier, and Michael Auli. 2016. Neural text generation from structured data with application to the biography domain. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1203–1213, Austin, Texas. Chin Yew Lin and Eduard Hovy. 2003. Automatic evaluation of summaries using n-gram co-occurrence statistics. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pages 150–157. Peter J. Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, and Noam Shazeer. 2018. Generating wikipedia by summarizing long sequences. In Proceedings of the 6th International Conference on Learning Representations, Vancouver Canada. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692. Inderjeet Mani. 2001. Automatic summarization, volume 3. John Benjamins Publishing. Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018a. Don’t give me the details, just the summary! Topic-aware convolutional neural networks for extreme summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1797–1807, Brussels, Belgium. Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018b. Ranking sentences for extractive summarization with reinforcement learning. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1747–1759, New Orleans, Louisiana. Shashi Narayan, Gonc¸alo Simoes, Ji Ma, Hannah Craighead, and Ryan T. McDonald. 2020. QURIOUS: Question generation pretraining for text generation. CoRR, abs/2004.11026. Ani Nenkova. 2005. Automatic Text Summarization of Newswire: Lessons Learned from the Document Understanding Conference. In Proceedings of the 20th National Conference on Artificial Intelligence Volume 3, pages 1436–1441. Ani Nenkova and Kathleen McKeown. 2011. Automatic summarization. Foundations and Trends in Information Retrieval, 5(2–3):103–233. Ani Nenkova and Rebecca Passonneau. 2004. 
Evaluating content selection in summarization: The Pyramid method. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pages 145–152, Boston, Massachusetts, USA. Timothy Niven and Hung-Yu Kao. 2019. Probing neural network comprehension of natural language arguments. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4658–4664, Florence, Italy. Ramakanth Pasunuru and Mohit Bansal. 2018. Multireward reinforced summarization with saliency and entailment. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 646–653, New Orleans, Louisiana. Romain Paulus, Caiming Xiong, and Richard Socher. 2018. A deep reinforced model for abstractive summarization. In Proceedings of the 6th International Conference on Learning Representations, Vancouver, BC, Canada. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. Technical report. 1917 Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Technical report. Sascha Rothe, Shashi Narayan, and Aliaksei Severyn. 2020. Leveraging pre-trained checkpoints for sequence generation tasks. To appear in Transactions of the Association for Computational Linguistics, abs/1907.12461. Evan Sandhaus. 2008. The New York Times Annotated Corpus. Linguistic Data Consortium, Philadelphia, 6(12). Natalie Schluter. 2017. The limits of automatic summarisation according to rouge. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, pages 41– 45, Valencia, Spain. Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointergenerator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1073–1083, Vancouver, Canada. Abigail See, Aneesh Pappu, Rohun Saxena, Akhila Yerukola, and Christopher D. Manning. 2019. Do massively pretrained language models make better storytellers? In Proceedings of the 23rd Conference on Computational Natural Language Learning, pages 843–861, Hong Kong, China. Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and TieYan Liu. 2019. MASS: Masked sequence to sequence pre-training for language generation. In Proceedings of the 36th International Conference on Machine Learning, Long Beach, California. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27, pages 3104–3112. Curran Associates, Inc. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc. Alex Wang, Kyunghyun Cho, and Michael Lewis. 2020. Asking and answering questions to evaluate the factual consistency of summaries. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Virtual Conference, Formerly Seattle, USA. Sean Welleck, Ilia Kulikov, Stephen Roller, Emily Dinan, Kyunghyun Cho, and Jason Weston. 2020. Neural text generation with unlikelihood training. 
In Proceedings of the 8th International Conference on Learning Representations, Virtual Conference, Formerly Addis Ababa Ethiopia. Sean Welleck, Jason Weston, Arthur Szlam, and Kyunghyun Cho. 2019. Dialogue natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3731–3741, Florence, Italy. Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1112–1122, New Orleans, Louisiana. Sam Wiseman, Stuart Shieber, and Alexander Rush. 2017. Challenges in data-to-document generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2253–2263, Copenhagen, Denmark. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. CoRR, abs/1906.08237. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. BERTScore: Evaluating text generation with BERT. In Proceedings of the 8th International Conference on Learning Representations, Virtual Conference, Formerly Addis Ababa Ethiopia. 1918 A Model Hyperparameters and Predictions PTGEN and TCONVS2S model predictions are provided by Narayan et al. (2018a) and Transformer model predictions from GPT-TUNED, TRANS2S and BERTS2S, by Rothe et al. (2020). Both PTGEN and TCONVS2S use a Stanford tokenized vocabulary size of 50k. TRANS2S and BERTS2S use a vocabulary size of around ∼30k WordPieces (Wu et al., 2016) to match BERT pretrained vocabulary and, GPT-TUNED, a vocabulary size of around ∼50k SentencePieces (Kudo and Richardson, 2018) to match the GPT-2 pretrained vocabulary. All models use the same uncased vocabulary on both source and target sides. Both PTGEN and TCONVS2S summaries were generated using beam search with beam size 10, the Transformer models use beam size of 4. See Narayan et al. (2018a) and Rothe et al. (2020) for more details on these models. Models Fleiss’ Kappa Hall. Fact. Rept. Inco. PTGEN 0.70 0.91 0.89 0.84 TCONVS2S 0.73 0.91 0.93 0.90 TRANS2S 0.67 0.91 0.92 0.90 BERTS2S 0.67 0.88 0.94 0.93 GOLD 0.71 — 1.00 0.98 Table 6: Fleiss’s Kappa scores measuring word-level agreements among annotators for different annotation tasks: hallucination (Hall.), factuality (Fact.), repetition (Rept.) and incoherence (Inco.) assessments. B Inter annotator agreement We estimated Fleiss’s Kappa (k) to assess the agreement among our raters when categorizing a word in the summary as one of faithful, intrinsically hallucinated and extrinsically hallucinated. The results are shown in Table 6. All models showed substantial agreement (0.61 ≤k ≤0.80; Landis and Koch, 1977) among their annotations. 
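As an illustration of how such agreement figures can be computed (this is not the authors' code; the per-word category ids below are invented placeholders), Fleiss' kappa is available in statsmodels:

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# One row per annotated unit (e.g., a summary word), one column per rater;
# values are category ids: 0 = faithful, 1 = intrinsic, 2 = extrinsic (invented data).
ratings = np.array([
    [0, 0, 0],
    [2, 2, 2],
    [1, 1, 2],
    [0, 0, 0],
    [2, 2, 1],
    [0, 1, 0],
])
counts, _ = aggregate_raters(ratings, n_cat=3)  # (n_units, n_categories) count table
print(f"Fleiss' kappa: {fleiss_kappa(counts, method='fleiss'):.2f}")
```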
Table 6 also shows Fleiss’s Kappa (k) to assess the agreement among our raters for factuality. All models showed almost perfect agreement (0.81 ≤k ≤1.0; Landis and Koch, 1977) among their annotations. C Highlighted Span Characteristics Results in Table 7 shed some light on the characteristics of hallucinated spans observed in different abstracts. GOLD abstracts showed the least number of intrinsically hallucinated spans (0.55 per document), whereas, PTGEN abstracts showed the Models Intrinsic Extrinsic avg. length total (avg.) total (avg.) PTGEN 625 (1.35) 1424 (2.85) 8.48 TCONVS2S 518 (1.04) 1556 (3.11) 8.44 TRANS2S 589 (1.18) 1556 (3.11) 7.39 BERTS2S 530 (1.06) 1520 (3.04) 6.12 GOLD 276 (0.55) 1807 (3.61) 7.11 Table 7: Total number of spans and the average number of spans per document, annotated as intrinsic or extrinsic hallucinations for all 500 document-summary pairs by three annotators. We also show the average span length for each system. Models Repetition Incoherence PTGEN 17.5 20.3 TCONVS2S 16.7 17.7 TRANS2S 8.9 11.5 BERTS2S 8.7 9.5 GOLD 0.0 0.8 Table 8: Repetition and Incoherence Evaluation. The numbers show the the percentage of 500 summaries where at least one word in a summary was annotated by all three annotators with the “Repetition” or “Incoherence” related issue. The lowest numbers are boldfaced. Metric Faithful Factual ROUGE-1 0.197 0.125 ROUGE-2 0.162 0.095 ROUGE-L 0.162 0.113 BERTScore 0.190 0.116 Repetition 0.064 0.075 Incoherence 0.067 0.082 QA 0.044 0.027 Entailment 0.431 0.264 Table 9: Spearman’s correlation coefficient (|rs|) of different metrics with faithful and factual annotations. least number of extrinsically hallucinated spans (2.85 per document). Interestingly, the average span length for PTGEN summaries was 8.48 words, much higher than 6.12 words for BERTS2S summaries. Our result demonstrates that (i) the effect of hallucination in BERTS2S is more local than what we observe in PTGEN and (ii) despite a lower number of extrinsically hallucinated spans or documents in PTGEN compared to that in BERTS2S (2.85 vs 3.04 spans per document, 63.3% vs 64.1% documents), the total number of words that were annotated as extrinsic hallucination is much higher in PTGEN than in BERTS2S (12075 vs 9302 words). D Assessment of Linguistic Irregularities. Following standard practice in summarization, all 2500 document-summary pairs were annotated for repetition and incoherence related linguistic irregularities. Annotators were presented only a singlesentence summary and were asked to identify all 1919 Models Faithful Hallucinated Factual I E I ∪E total factual total factual total factual PTGEN 24.7 19.9 0.4 63.3 2.2 75.3 2.6 27.3 TCONVS2S 21.5 17.7 0.8 71.5 5.0 78.5 5.4 26.9 TRANS2S 20.7 19.1 1.4 68.1 3.4 79.3 4.6 25.3 BERTS2S 26.9 16.9 1.8 64.1 6.6 73.1 7.8 34.7 GOLD 23.1 7.4 — 73.1 — 76.9 — — Table 10: Intrinsic vs Extrinsic Hallucinations and their factuality. The numbers in “Hallucinated” columns show the percentage of summaries out of 500 where at least one word was annotated by all three annotators as an intrinsic (I) or extrinsic (E) hallucination. When a summary is not marked with any hallucination, it is “faithful” (1- I∪E). The “factual” columns within the “Hallucinated” column show for each type (I, E and I∪E), the percentage of summaries out of 500 annotated by all three annotators as factual. The final “Factual” column shows the total percentage of factual summaries (Faithful + I∪Efactual). 
The highest numbers for faithful and factual, and the lowest numbers for hallucinations are boldfaced. spans of text in the summary that were either repeated or made the summary incoherent. We again elicited judgments from three different annotators for each document-summary pair. Results are shown in Table 8. Overall, all neural text generation systems are getting better in generating repetition-free and coherent single-sentence summaries of news articles. Transformer-based models, TRANS2S and BERTS2S in particular, perform superior to RNNbased PTGEN and CNN-based TCONVS2S models. Nonetheless, Table 9 shows that these metrics fail to correlate with faithful, hallucinated and factual assessments of summaries. Fleiss’s Kappa (k) values for repetition and incoherence assessments showed almost a perfect agreement (0.81 ≤k ≤ 1.0; Landis and Koch, 1977) among our raters (see Table 6). E Full Hallucination Results Table 10 has the full results from our human study of hallucinations.
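Finally, Spearman coefficients of the kind reported in Tables 4 and 9 can be computed from per-summary metric scores and human labels with scipy; a minimal sketch with invented placeholder values (not the study's data):

```python
from scipy.stats import spearmanr

# One automatic-metric score and one binary human faithfulness label per
# document-summary pair (placeholder values for illustration only).
entailment_scores = [0.91, 0.12, 0.78, 0.05, 0.64, 0.33]
faithful_labels   = [1,    0,    1,    0,    1,    0]

rho, p_value = spearmanr(entailment_scores, faithful_labels)
print(f"|r_s| = {abs(rho):.3f}  (p = {p_value:.3f})")
```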
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1920–1933 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 1920 Screenplay Summarization Using Latent Narrative Structure Pinelopi Papalampidi1 Frank Keller1 Lea Frermann2 Mirella Lapata1 1Institute for Language, Cognition and Computation School of Informatics, University of Edinburgh 2School of Computing and Information Systems University of Melbourne [email protected], [email protected], [email protected], [email protected] Abstract Most general-purpose extractive summarization models are trained on news articles, which are short and present all important information upfront. As a result, such models are biased by position and often perform a smart selection of sentences from the beginning of the document. When summarizing long narratives, which have complex structure and present information piecemeal, simple position heuristics are not sufficient. In this paper, we propose to explicitly incorporate the underlying structure of narratives into general unsupervised and supervised extractive summarization models. We formalize narrative structure in terms of key narrative events (turning points) and treat it as latent in order to summarize screenplays (i.e., extract an optimal sequence of scenes). Experimental results on the CSI corpus of TV screenplays, which we augment with scene-level summarization labels, show that latent turning points correlate with important aspects of a CSI episode and improve summarization performance over general extractive algorithms, leading to more complete and diverse summaries. 1 Introduction Automatic summarization has enjoyed renewed interest in recent years thanks to the popularity of modern neural network-based approaches (Cheng and Lapata, 2016; Nallapati et al., 2016, 2017; Zheng and Lapata, 2019) and the availability of large-scale datasets containing hundreds of thousands of document–summary pairs (Sandhaus, 2008; Hermann et al., 2015; Grusky et al., 2018; Narayan et al., 2018; Fabbri et al., 2019; Liu and Lapata, 2019). Most efforts to date have concentrated on the summarization of news articles which tend to be relatively short and formulaic following an “inverted pyramid” structure which places the most essential, novel and interesting elVictim: Mike Kimble, found in a Body Farm. Died 6 hours ago, unknown cause of death. CSI discover cow tissue in Mike's body.  Cross-contamination is suggested. Probable cause of death: Mike's house has been set on fire. CSI finds blood: Mike was murdered, fire was a cover up. First suspects: Mike's fiance, Jane and her ex-husband, Russ.  CSI finds photos in Mike's house of Jane's daughter, Jodie, posing naked. Mike is now a suspect of abusing Jodie. Russ allows CSI to examine his gun. CSI discovers that the bullet that killed Mike was made of frozen beef that melt inside him. They also find beef in Russ' gun. Russ confesses that he knew that Mike was abusing Jody, so he confronted and killed him. CSI discovers that the naked photos were taken on a boat, which belongs to Russ. CSI discovers that it was Russ who was abusing his daughter based on fluids found in his sleeping bag and later killed Mike who tried to help Jodie. Russ is given bail, since no jury would convict a protective father. Russ receives a mandatory life sentence. 
Setup New Situation Progress Complications The final push Aftermath Opportunity Change of Plans Point of no Return Major Setback Climax Figure 1: Example of narrative structure for episode “Burden of Proof” from TV series Crime Scene Investigation (CSI); turning points are highlighted in color. ements of a story in the beginning and supporting material and secondary details afterwards. The rigid structure of news articles is expedient since important passages can be identified in predictable locations (e.g., by performing a “smart selection” of sentences from the beginning of the document) and the structure itself can be explicitly taken into account in model design (e.g., by encoding the relative and absolute position of each sentence). In this paper we are interested in summarizing longer narratives, i.e., screenplays, whose form and structure is far removed from newspaper articles. Screenplays are typically between 110 and 120 pages long (20k words), their content is broken down into scenes, which contain mostly dialogue (lines the actors speak) as well as descriptions explaining what the camera sees. Moreover, screenplays are characterized by an underlying narrative structure, a sequence of events by which 1921 Screenplay Latent Narrative Structure TP1: Introduction TP3: Commitment TP2: Goal definition  TP4: Setback TP5: Ending Summary scenes Video summary relevant to TP2 relevant to TP5 irrelevant Figure 2: We first identify scenes that act as turning points (i.e., key events that segment the story into sections). We next create a summary by selecting informative scenes, i.e.,semantically related to turning points. a story is defined (Cutting, 2016), and by the story’s characters and their roles (Propp, 1968). Contrary to news articles, the gist of the story in a screenplay is not disclosed at the start, information is often revealed piecemeal; characters evolve and their actions might seem more or less important over the course of the narrative. From a modeling perspective, obtaining training data is particularly problematic: even if one could assemble screenplays and corresponding summaries (e.g., by mining IMDb or Wikipedia), the size of such a corpus would be at best in the range of a few hundred examples not hundreds of thousands. Also note that genre differences might render transfer learning (Pan and Yang, 2010) difficult, e.g., a model trained on movie screenplays might not generalize to sitcoms or soap operas. Given the above challenges, we introduce a number of assumptions to make the task feasible. Firstly, our goal is to produce informative summaries, which serve as a surrogate to reading the full script or watching the entire film. Secondly, we follow Gorinski and Lapata (2015) in conceptualizing screenplay summarization as the task of identifying a sequence of informative scenes. Thirdly, we focus on summarizing television programs such as CSI: Crime Scene Investigation (Frermann et al., 2018) which revolves around a team of forensic investigators solving criminal cases. Such programs have a complex but well-defined structure: they open with a crime, the crime scene is examined, the victim is identified, suspects are introduced, forensic clues are gathered, suspects are investigated, and finally the case is solved. 
In this work, we adapt general-purpose extractive summarization algorithms (Nallapati et al., 2017; Zheng and Lapata, 2019) to identify informative scenes in screenplays and instill in them knowledge about narrative film structure (Hauge, 2017; Cutting, 2016; Freytag, 1896). Specifically, we adopt a scheme commonly used by screenwriters as a practical guide for producing successful screenplays. According to this scheme, wellstructured stories consist of six basic stages which are defined by five turning points (TPs), i.e., events which change the direction of the narrative, and determine the story’s progression and basic thematic units. In Figure 1, TPs are highlighted for a CSI episode. Although the link between turning points and summarization has not been previously made, earlier work has emphasized the importance of narrative structure for summarizing books (Mihalcea and Ceylan, 2007) and social media content (Kim and Monroy-Hern´andez, 2015). More recently, Papalampidi et al. (2019) have shown how to identify turning points in feature-length screenplays by projecting synopsis-level annotations. Crucially, our method does not involve manually annotating turning points in CSI episodes. Instead, we approximate narrative structure automatically by pretraining on the annotations of the TRIPOD dataset of Papalampidi et al. (2019) and employing a variant of their model. We find that narrative structure representations learned on their dataset (which was created for feature-length films), transfer well across cinematic genres and computational tasks. We propose a framework for end-to-end training in which narrative structure is treated as a latent variable for summarization. We extend the CSI dataset (Frermann et al., 2018) with binary labels indicating whether a scene should be included in the summary and present experiments with both supervised and unsupervised summarization models. An overview of our approach is shown in Figure 2. Our contributions can be summarized as follows: (a) we develop methods for instilling knowledge about narrative structure into generic su1922 pervised and unsupervised summarization algorithms; (b) we provide a new layer of annotations for the CSI corpus, which can be used for research in long-form summarization; and (c) we demonstrate that narrative structure can facilitate screenplay summarization; our analysis shows that key events identified in the latent space correlate with important summary content. 2 Related Work A large body of previous work has focused on the computational analysis of narratives (Mani, 2012; Richards et al., 2009). Attempts to analyze how stories are written have been based on sequences of events (Schank and Abelson, 1975; Chambers and Jurafsky, 2009), plot units (McIntyre and Lapata, 2010; Goyal et al., 2010; Finlayson, 2012) and their structure (Lehnert, 1981; Rumelhart, 1980), as well as on characters or personas in a narrative (Black and Wilensky, 1979; Propp, 1968; Bamman et al., 2014, 2013; Valls-Vargas et al., 2014) and their relationships (Elson et al., 2010; Agarwal et al., 2014; Srivastava et al., 2016). As mentioned earlier, work on summarization of narratives has had limited appeal, possibly due to the lack of annotated data for modeling and evaluation. 
Kazantseva and Szpakowicz (2010) summarize short stories based on importance criteria (e.g., whether a segment contains protagonist or location information); they create summaries to help readers decide whether they are interested in reading the whole story, without revealing its plot. Mihalcea and Ceylan (2007) summarize books with an unsupervised graph-based approach operating over segments (i.e., topical units). Their algorithm first generates a summary for each segment and then an overall summary by collecting sentences from the individual segment summaries. Focusing on screenplays, Gorinski and Lapata (2015) generate a summary by extracting an optimal chain of scenes via a graph-based approach centered around the main characters. In a similar fashion, Tsoneva et al. (2007) create video summaries for TV series episodes; their algorithm ranks sub-scenes in terms of importance using features based on character graphs and textual cues available in the subtitles and movie scripts. Vicol et al. (2018) introduce the MovieGraphs dataset, which also uses character-centered graphs to describe the content of movie video clips. Our work synthesizes various strands of research on narrative structure analysis (Cutting, 2016; Hauge, 2017), screenplay summarization (Gorinski and Lapata, 2015), and neural network modeling (Dong, 2018). We focus on extractive summarization and our goal is to identify an optimal sequence of key events in a narrative. We aim to create summaries which re-tell the plot of a story in a concise manner. Inspired by recent neural network-based approaches (Cheng and Lapata, 2016; Nallapati et al., 2017; Zhou et al., 2018; Zheng and Lapata, 2019), we develop supervised and unsupervised models for our summarization task based on neural representations of scenes and how these relate to the screenplay’s narrative structure. Contrary to most previous work which has focused on characters, we select summary scenes based on events and their importance in the story. Our definition of narrative structure closely follows Papalampidi et al. (2019). However, the model architectures we propose are general and could be adapted to different plot analysis schemes (Field, 2005; Vogler, 2007). To overcome the difficulties in evaluating summaries for longer narratives, we also release a corpus of screenplays with scenes labeled as important (summary worthy). Our annotations augment an existing dataset based on CSI episodes (Frermann et al., 2018), which was originally developed for incremental natural language understanding. 3 Problem Formulation Let D denote a screenplay consisting of a sequence of scenes D = {s1,s2,...,sn}. Our aim is to select a subset D′ = {si,...,sk} consisting of the most informative scenes (where k < n). Note that this definition produces extractive summaries; we further assume that selected scenes are presented according to their order in the screenplay. We next discuss how summaries can be created using both unsupervised and supervised approaches, and then move on to explain how these are adapted to incorporate narrative structure. 3.1 Unsupervised Screenplay Summarization Our unsupervised model is based on an extension of TEXTRANK (Mihalcea and Tarau, 2004; Zheng and Lapata, 2019), a well-known algorithm for extractive single-document summarization. In our setting, a screenplay is represented as a graph, in which nodes correspond to scenes and edges between scenes si and sj are weighted by their simi1923 larity ei j. 
A node’s centrality (importance) is measured by computing its degree:

centrality(s_i) = \lambda_1 \sum_{j<i} e_{ij} + \lambda_2 \sum_{j>i} e_{ij}    (1)

where \lambda_1 + \lambda_2 = 1. The modification introduced in Zheng and Lapata (2019) takes directed edges into account, capturing the intuition that the centrality of any two nodes is influenced by their relative position. Also note that the edges of preceding and following scenes are differentially weighted by \lambda_1 and \lambda_2. Although earlier implementations of TEXTRANK (Mihalcea and Tarau, 2004) compute node similarity based on symbolic representations such as tf*idf, we adopt a neural approach. Specifically, we obtain sentence representations based on a pretrained encoder. In our experiments, we rely on the Universal Sentence Encoder (USE; Cer et al. 2018), however, other embeddings are possible.[1] We represent a scene by the mean of its sentence representations and measure scene similarity e_{ij} using cosine.[2] As in the original TEXTRANK algorithm (Mihalcea and Tarau, 2004), scenes are ranked based on their centrality and the M most central ones are selected to appear in the summary.

[1] USE performed better than BERT in our experiments.
[2] We found cosine to be particularly effective with USE representations; other metrics are also possible.

3.2 Supervised Screenplay Summarization

Most extractive models frame summarization as a classification problem. Following a recent approach (SUMMARUNNER; Nallapati et al. 2017), we use a neural network-based encoder to build representations for scenes and apply a binary classifier over these to predict whether they should be in the summary. For each scene s_i ∈ D, we predict a label y_i ∈ {0,1} (where 1 means that s_i must be in the summary) and assign a score p(y_i|s_i,D,θ) quantifying s_i’s relevance to the summary (θ denotes model parameters). We assemble a summary by selecting M sentences with the top p(1|s_i,D,θ). We calculate sentence representations via the pre-trained USE encoder (Cer et al., 2018); a scene is represented as the weighted sum of the representations of its sentences, which we obtain from a BiLSTM equipped with an attention mechanism. Next, we compute richer scene representations by modeling surrounding context of a given scene. We encode the screenplay with a BiLSTM network and obtain contextualized representations s'_i for scenes s_i by concatenating the hidden layers of the forward \overrightarrow{h_i} and backward \overleftarrow{h_i} LSTM, respectively: s'_i = [\overrightarrow{h_i}; \overleftarrow{h_i}]. The vector s'_i therefore represents the content of the i-th scene. We also estimate the salience of scene s_i by measuring its similarity with a global screenplay content representation d. The latter is the weighted sum of all scene representations s_1, s_2, ..., s_n. We calculate the semantic similarity between s'_i and d by computing the element-wise dot product b_i, cosine similarity c_i, and pairwise distance u_i between their respective vectors:

b_i = s'_i \odot d, \quad c_i = \frac{s'_i \cdot d}{\|s'_i\| \, \|d\|}    (2)

u_i = \frac{s'_i \cdot d}{\max(\|s'_i\|_2 \cdot \|d\|_2)}    (3)

The salience v_i of scene s_i is the concatenation of the similarity metrics: v_i = [b_i; c_i; u_i]. The content vector s'_i and the salience vector v_i are concatenated and fed to a single neuron that outputs the probability of a scene belonging to the summary.[3]

3.3 Narrative Structure

We now explain how to inject knowledge about narrative structure into our summarization models. For both models, such knowledge is transferred via a network pre-trained on the TRIPOD[4] dataset introduced by Papalampidi et al. (2019). This dataset contains 99 movies annotated with turning points.
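Before turning to the details of the turning-point network, the following sketch recaps the unsupervised ranker of Section 3.1 (Equation (1)) in NumPy. It is not the authors’ released implementation; the embedding function is left abstract, and the edge threshold (0.2), summary budget (30%), and λ1 = 0.7 are taken from the values reported later in the paper, while everything else is an illustrative assumption.

```python
# Hedged sketch of directed TextRank-style centrality (Equation (1)).
# Scene embeddings are assumed to be mean-pooled sentence embeddings
# (e.g., from the Universal Sentence Encoder); any encoder would do.
import numpy as np

def scene_similarities(scene_embeddings, threshold=0.2):
    """Cosine similarity e_ij between every pair of scenes; weak edges removed."""
    X = np.asarray(scene_embeddings, dtype=float)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    e = X @ X.T
    e[e < threshold] = 0.0
    return e

def centrality(e, lambda1=0.7, lambda2=0.3):
    """centrality(s_i) = lambda1 * sum_{j<i} e_ij + lambda2 * sum_{j>i} e_ij."""
    n = e.shape[0]
    scores = np.zeros(n)
    for i in range(n):
        scores[i] = lambda1 * e[i, :i].sum() + lambda2 * e[i, i + 1:].sum()
    return scores

def select_summary(scene_embeddings, budget=0.3):
    """Rank scenes by centrality and keep the top ~30%, in screenplay order."""
    e = scene_similarities(scene_embeddings)
    scores = centrality(e)
    m = max(1, int(round(budget * len(scores))))
    keep = np.argsort(-scores)[:m]
    return sorted(keep.tolist())
```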
TPs are key events in a narrative that define the progression of the plot and occur between consecutive acts (thematic units). It is often assumed (Cutting, 2016) that there are six acts in a film (Figure 1), each delineated by a turning point (arrows in the figure). Each of the five TPs also has a well-defined function in the narrative: we present each TP alongside its definition as stated in screenwriting theory (Hauge, 2017) and adopted by Papalampidi et al. (2019) in Table 1 (see Appendix A for a more detailed description of narrative structure theory). Papalampidi et al. (2019) identify scenes in movies that correspond to these key events as a means for analyzing the narrative structure of movies. They collect sentence-level TP annotations for plot synopses and subsequently project them via distant supervision onto screenplays, thereby creating silver-standard labels. We utilize this silver-standard dataset in order to pretrain a network which performs TP identification.

[3] Aside from salience and content, Nallapati et al. (2017) take into account novelty and position-related features. We ignore these as they are specific to news articles and denote the modified model as SUMMARUNNER*.
[4] https://github.com/ppapalampidi/TRIPOD

Turning Point             Definition
TP1: Opportunity          Introductory event that occurs after the presentation of the story setting.
TP2: Change of Plans      Event where the main goal of the story is defined.
TP3: Point of No Return   Event that pushes the main character(s) to fully commit to their goal.
TP4: Major Setback        Event where everything falls apart (temporarily or permanently).
TP5: Climax               Final event of the main story, moment of resolution.

Table 1: Turning points and their definitions as given in Papalampidi et al. (2019).

TP Identification Network  We first encode screenplay scenes via a BiLSTM equipped with an attention mechanism. We then contextualize them with respect to the whole screenplay via a second BiLSTM. Next, we compute topic-aware scene representations t_i via a context interaction layer (CIL) as proposed in Papalampidi et al. (2019). CIL is inspired by traditional segmentation approaches (Hearst, 1997) and measures the semantic similarity of the current scene with a preceding and following context window in the screenplay. Hence, the topic-aware scene representations also encode the degree to which each scene acts as a topic boundary in the screenplay. In the final layer, we employ TP-specific attention mechanisms to compute the probability p_{ij} that scene t_i represents the j-th TP in the screenplay. Note that we expect the TP-specific attention distributions to be sparse, as there are only a few scenes which are relevant for a TP (recall that TPs are boundary scenes between sections). To encourage sparsity, we add a low temperature value τ (Hinton et al., 2015) to the softmax part of the attention mechanisms:

g_{ij} = \tanh(W_j t_i + b_j), \quad g_{ij} \in [-1, 1]    (4)

p_{ij} = \frac{\exp(g_{ij}/\tau)}{\sum_{t=1}^{T} \exp(g_{tj}/\tau)}, \quad \sum_{i=1}^{T} p_{ij} = 1    (5)

where W_j, b_j represent the trainable weights of the attention layer of the j-th TP.

Unsupervised SUMMER  We now introduce our model, SUMMER (short for Screenplay Summarization with Narrative Structure).[5] We first present an unsupervised variant which modifies the computation of scene centrality in the directed version of TEXTRANK (Equation (1)). Specifically, we use the pre-trained network described in Section 3.3 to obtain TP-specific attention distributions. We then select an overall score f_i for each scene (denoting how likely it is to act as a TP).
We set f_i = \max_{j \in [1,5]} p_{ij}, i.e., to the p_{ij} value that is highest across TPs. We incorporate these scores into centrality as follows:

centrality(s_i) = \lambda_1 \sum_{j<i} (e_{ij} + f_j) + \lambda_2 \sum_{j>i} (e_{ij} + f_i)    (6)

Intuitively, we add the f_j term in the forward sum in order to incrementally increase the centrality scores of scenes as the story moves on and we encounter more TP events (i.e., we move to later sections in the narrative). At the same time, we add the f_i term in the backward sum in order to also increase the scores of scenes identified as TPs.

[5] We make our code publicly available at https://github.com/ppapalampidi/SUMMER.

Figure 3 (panels: (a) scene encoding; (b) narrative structure prediction; (c) summary scenes prediction): Overview of SUMMER. We use one TP-specific attention mechanism per turning point in order to acquire TP-specific distributions over scenes. We then compute the similarity between TPs and contextualized scene representations. Finally, we perform max pooling over TP-specific similarity vectors and concatenate the final similarity representation with the contextualized scene representation.

Supervised SUMMER  We also propose a supervised variant of SUMMER following the basic model formulation in Section 3.3. We still represent a scene as the concatenation of a content vector s' and salience vector v', which serve as input to a binary classifier. However, we now modify how salience is determined; instead of computing a general global content representation d for the screenplay, we identify a sequence of TPs and measure the semantic similarity of each scene with this sequence. Our model is depicted in Figure 3. We utilize the pre-trained TP network (Figures 3(a) and (b)) to compute sparse attention scores over scenes. In the supervised setting, where gold-standard binary labels provide a training signal, we fine-tune the network in an end-to-end fashion on summarization (Figure 3(c)). We compute the TP representations via the attention scores; we calculate a vector tp_j as the weighted sum of all topic-aware scene representations t produced via CIL: tp_j = \sum_{i \in [1,N]} p_{ij} t_i, where N is the number of scenes in a screenplay. In practice, only a few scenes contribute to tp_j due to the τ parameter in the softmax function (Equation (5)). A TP-scene interaction layer measures the semantic similarity between scenes t_i and latent TP representations tp_j (Figure 3(c)). Intuitively, a complete summary should contain scenes which are related to at least one of the key events in the screenplay. We calculate the semantic similarity v_{ij} of scene t_i with TP tp_j as in Equations (2) and (3). We then perform max pooling over vectors v_{i1}, ..., v_{iT}, where T is the number of TPs (i.e., five), and calculate a final similarity vector v'_i for the i-th scene. The model is trained end-to-end on the summarization task using BCE, the binary cross-entropy loss function. We add an extra regularization term to this objective to encourage the TP-specific attention distributions to be orthogonal (since we want each attention layer to attend to different parts of the screenplay).
We thus maximize the Kullback-Leibler (KL) divergence D_{KL} between all pairs of TP attention distributions tp_i, i ∈ [1,5]:

O = \sum_{i \in [1,5]} \sum_{j \in [1,5], j \neq i} \log \frac{1}{D_{KL}(tp_i \| tp_j) + \varepsilon}    (7)

Furthermore, we know from screenwriting theory (Hauge, 2017) that there are rules of thumb as to when a TP should occur (e.g., the Opportunity occurs after the first 10% of a screenplay, Change of Plans is approximately 25% in). It is reasonable to discourage tp distributions from deviating drastically from these expected positions. Focal regularization F minimizes the KL divergence D_{KL} between each TP attention distribution tp_i and its expected position distribution th_i:

F = \sum_{i \in [1,5]} D_{KL}(tp_i \| th_i)    (8)

The final loss L is the weighted sum of all three components, where a, b are fixed during training: L = BCE + aO + bF.

4 Experimental Setup

Crime Scene Investigation Dataset  We performed experiments on an extension of the CSI dataset[6] introduced by Frermann et al. (2018). It consists of 39 CSI episodes, each annotated with word-level labels denoting whether the perpetrator is mentioned in the utterances characters speak. We further collected scene-level binary labels indicating whether episode scenes are important and should be included in a summary. Three human judges performed the annotation task after watching the CSI episodes scene-by-scene. To facilitate the annotation, judges were asked to indicate why they thought a scene was important, citing the following reasons: it revealed (i) the victim, (ii) the cause of death, (iii) an autopsy report, (iv) crucial evidence, (v) the perpetrator, and (vi) the motive or the relation between perpetrator and victim. Annotators were free to select more than one or none of the listed reasons where appropriate. We can think of these reasons as high-level aspects a good summary should cover (for CSI and related crime series). Annotators were not given any information about TPs or narrative structure; the annotation was not guided by theoretical considerations, rather our aim was to produce useful CSI summaries. Table 2 presents the dataset statistics (see also Appendix B for more detail).

[6] https://github.com/EdinburghNLP/csi-corpus

overall
  episodes                 39
  scenes                   1544
  summary scenes           454
per episode
  scenes                   39.58 (6.52)
  crime-specific aspects   5.62 (0.24)
  summary scenes           11.64 (2.98)
  summary scenes (%)       29.75 (7.35)
  sentences                822.56 (936.23)
  tokens                   13.27k (14.67k)
per episode scene
  sentences                20.78 (35.61)
  tokens                   335.19 (547.61)
  tokens per sentence      16.13 (16.32)

Table 2: CSI dataset statistics; means and (std).

Implementation Details  In order to set the hyperparameters of all proposed networks, we used a small development set of four episodes from the CSI dataset (see Appendix B for details). After experimentation, we set the temperature τ of the softmax layers for the TP-specific attentions (Equation (5)) to 0.01. Since the binary labels in the supervised setting are imbalanced, we apply class weights to the binary cross-entropy loss of the respective models. We weight each class by its inverse frequency in the training set. Finally, in supervised SUMMER, where we also identify the narrative structure of the screenplays, we consider as key events per TP the scenes that correspond to an attention score higher than 0.05. More implementation details can be found in Appendix C. As shown in Table 2, the gold-standard summaries in our dataset have a compression rate of approximately 30%.
During inference, we select the top M scenes as the summary, such that they correspond to 30% of the length of the episode.

5 Results and Analysis

Is Narrative Structure Helpful?  We perform 10-fold cross-validation and evaluate model performance in terms of F1 score. Table 3 summarizes the results of unsupervised models. We present the following baselines: Lead 30% selects the first 30% of an episode as the summary, Last 30% selects the last 30%, and Mixed 30% randomly selects 15% of the summary from the first 30% of an episode and 15% from the last 30%. We also compare SUMMER against TEXTRANK based on tf*idf (Mihalcea and Tarau, 2004), the directed neural variant described in Section 3.1 without any TP information, a variant where TPs are approximated by their expected position as postulated in screenwriting theory, and a variant that incorporates information about characters (Gorinski and Lapata, 2015) instead of narrative structure. For the character-based TEXTRANK, called SCENESUM, we substitute the f_i, f_j scores in Equation (6) with character-related importance scores c_i similar to the definition in Gorinski and Lapata (2015):

c_i = \frac{\sum_{c \in C} [c \in S \cup main(C)]}{\sum_{c \in C} [c \in S]}    (9)

where S is the set of all characters participating in scene s_i, C is the set of all characters participating in the screenplay and main(C) are all the main characters of the screenplay. We retrieve the set of main characters from the IMDb page of the respective episode. We also note that human agreement for our task is 79.26 F1 score, as measured on a small subset of the corpus.

Model                                           F1
Lead 30%                                        30.66
Last 30%                                        39.85
Mixed 30%                                       34.32
TEXTRANK, undirected, tf*idf                    32.11
TEXTRANK, directed, neural                      41.75
TEXTRANK, directed, expected TP positions       41.05
SCENESUM, directed, character-based weights     42.02
SUMMER                                          44.70

Table 3: Unsupervised screenplay summarization.

Model                          F1      Coverage of aspects   # scenes per TP
Lead 30%                       30.66   –                     –
Last 30%                       39.85   –                     –
Mixed 30%                      34.32   –                     –
SUMMARUNNER*                   48.56   –                     –
SCENESUM                       47.71   –                     –
SUMMER, fixed one-hot TPs      46.92   63.11                 1.00
SUMMER, fixed distributions    47.64   67.01                 1.05
SUMMER, −P, −R                 51.93   44.48                 1.19
SUMMER, −P, +R                 49.98   51.96                 1.14
SUMMER, +P, −R                 50.56   62.35                 3.07
SUMMER, +P, +R                 52.00   70.25                 1.20

Table 4: Supervised screenplay summarization; for SUMMER variants, we also report the percentage of aspect labels covered by latent TP predictions.

As shown in Table 3, SUMMER achieves the best performance (44.70 F1 score) among all models and is superior to an equivalent model which uses expected TP positions or a character-based representation. This indicates that the pre-trained network provides better predictions for key events than position and character heuristics, even though there is a domain shift from Hollywood movies in the TRIPOD corpus to episodes of a crime series in the CSI corpus. Moreover, we find that the directed versions of TEXTRANK are better at identifying important scenes than the undirected version. We found that performance peaks with λ1 = 0.7 (see Equation (6)), indicating that higher importance is given to scenes as the story progresses (see Appendix D for experiments with different λ values). In Table 4, we report results for supervised models. Aside from the various baselines in the first block of the table, we compare the neural extractive model SUMMARUNNER*[7] (Nallapati et al., 2017) presented in Section 3.2 with several variants of our model SUMMER. We experimented with randomly initializing the network for TP identification (−P) and with using a pretrained network (+P).
We also experimented with removing the regularization terms, O and F (Equations (7) and (8)), from the loss (−R). We assess the performance of SUMMER when we follow a two-step approach where we first predict TPs via the pre-trained network and then train a network on screenplay summarization based on fixed TP representations (fixed one-hot TPs), or alternatively use expected TP position distributions as postulated in screenwriting theory (fixed distributions). Finally, we incorporate character-based information into our baseline and create a supervised version of SCENESUM. We now utilize the character importance scores per scene (Equation (9)) as attention scores – instead of using a trainable attention mechanism – when computing the global screenplay representation d (Section 3.2). Table 4 shows that all end-to-end SUMMER variants outperform SUMMARUNNER*. The best result (52.00 F1 score) is achieved by pretrained SUMMER with regularization, outperforming SUMMARUNNER* by an absolute difference of 3.44. The randomly initialized version with no regularization achieves similar performance (51.93 F1 score). For summarizing screenplays, explicitly encoding narrative structure seems to be more beneficial than general representations of scene importance. Finally, two-step versions of SUMMER perform poorly, which indicates that end-to-end training and fine-tuning of the TP identification network on the target dataset is crucial.

What Does the Model Learn?  Apart from performance on summarization, we would also like to examine the quality of the TPs inferred by SUMMER (supervised variant). Problematically, we do not have any gold-standard TP annotation in the CSI corpus. Nevertheless, we can implicitly assess whether they are meaningful by measuring how well they correlate with the reasons annotators cite to justify their decision to include a scene in the summary (e.g., because it reveals cause of death or provides important evidence). Specifically, we compute the extent to which these aspects overlap with the TPs predicted by SUMMER as:

C = \frac{\sum_{A_i \in A} \sum_{TP_j \in TP} [dist(TP_j, A_i) \leq 1]}{|A|}    (10)

where A is the set of all aspect scenes, |A| is the number of aspects, TP is the set of scenes inferred as TPs by the model, A_i and TP_j are the subsets of scenes corresponding to the i-th aspect and j-th TP, respectively, and dist(TP_j, A_i) is the minimum distance between TP_j and A_i in number of scenes.

[7] Our adaptation of SUMMARUNNER that considers content and salience vectors for scene selection.

The proportion of aspects covered is given in Table 4, middle column. We find that coverage is relatively low (44.48%) for the randomly initialized SUMMER with no regularization. There is a slight improvement of 7.48% when we force the TP-specific attention distributions to be orthogonal and close to expected positions. Pre-training and regularization provide a significant boost, increasing coverage to 70.25%, while pre-trained SUMMER without regularization infers on average more scenes representative of each TP. This shows that the orthogonal constraint also encourages sparse attention distributions for TPs. Table 5 shows the degree of association between individual TPs and summary aspects (see Appendix D for illustrated examples). We observe that Opportunity and Change of Plans are mostly associated with information about the crime scene and the victim, Climax is focused on the revelation of the motive, while information relating to cause of death, perpetrator, and evidence is captured by both Point of no Return and Major Setback.
Overall, the generic Hollywood-inspired TP labels are adjusted to our genre and describe crime-related key events, even though no aspect labels were provided to our model during training.

Turning Point         Crime scene   Victim   Death Cause   Perpetrator   Evidence   Motive
Opportunity           56.76         52.63    15.63         15.38         2.56       0.00
Change of Plans       27.03         42.11    21.88         15.38         5.13       0.00
Point of no Return    8.11          13.16    9.38          25.64         48.72      5.88
Major Setback         0.00          0.00     6.25          10.25         48.72      35.29
Climax                2.70          0.00     6.25          2.56          23.08      55.88

Table 5: Percentage of aspect labels covered per TP for SUMMER, +P, +R.

Do Humans Like the Summaries?  We also conducted a human evaluation experiment using the summaries created for 10 CSI episodes.[8] We produced summaries based on the gold-standard annotations (Gold), SUMMARUNNER*, and the supervised version of SUMMER. Since 30% of an episode results in lengthy summaries (15 minutes on average), we further increased the compression rate for this experiment by limiting each summary to six scenes. For the gold standard condition, we randomly selected exactly one scene per aspect. For SUMMARUNNER* and SUMMER we selected the top six predicted scenes based on their posterior probabilities. We then created video summaries by isolating and merging the selected scenes in the raw video. We asked Amazon Mechanical Turk (AMT) workers to watch the video summaries for all systems and rank them from most to least informative. They were also presented with six questions relating to the aspects the summary was supposed to cover (e.g., Was the victim revealed in the summary? Do you know who the perpetrator was?). They could answer Yes, No, or Unsure. Five workers evaluated each summary.

[8] https://github.com/ppapalampidi/SUMMER/tree/master/video_summaries

System          Crime scene   Victim   Death Cause   Perpetrator   Evidence   Motive   Overall   Rank
SUMMARUNNER*    85.71         93.88    75.51         81.63         59.18      38.78    72.45     2.18
SUMMER          89.80         87.76    83.67         81.63         77.55      57.14    79.59     2.00
Gold            89.80         91.84    71.43         83.67         65.31      57.14    76.53     1.82

Table 6: Human evaluation: percentage of yes answers by AMT workers regarding each aspect in a summary. All differences in (average) Rank are significant (p < 0.05, using a χ2 test).

Table 6 shows the proportion of times participants responded Yes for each aspect across the three systems. Although SUMMER does not improve over SUMMARUNNER* in identifying basic information (i.e., about the victim and perpetrator), it creates better summaries overall with more diverse content (i.e., it more frequently includes information about cause of death, evidence, and motive). This observation validates our assumption that identifying scenes that are semantically close to the key events of a screenplay leads to more complete and detailed summaries. Finally, Table 6 also lists the average rank per system (lower is better), which shows that crowdworkers like gold summaries best, SUMMER is often ranked second, followed by SUMMARUNNER* in third place.

6 Conclusions

In this paper we argued that the underlying structure of narratives is beneficial for long-form summarization. We adapted a scheme for identifying narrative structure (i.e., turning points) in Hollywood movies and showed how this information can be integrated with supervised and unsupervised extractive summarization algorithms. Experiments on the CSI corpus showed that this scheme transfers well to a different genre (crime investigation) and that utilizing narrative structure boosts summarization performance, leading to more complete and diverse summaries.
Analysis of model output further revealed that latent events encapsulated by turning points correlate with important aspects of a CSI summary. Although currently our approach relies solely on textual information, it would be interesting to incorporate additional modalities such as video or audio. Audiovisual information could facilitate the identification of key events and scenes. Besides narrative structure, we would also like to examine the role of emotional arcs (Vonnegut, 1981; Reagan et al., 2016) in a screenplay. An often integral part of a compelling story is the emotional experience that is evoked in the reader or viewer (e.g., somebody gets into trouble and then out of it, somebody finds something wonderful, loses it, and then finds it again). Understanding emotional arcs may be useful to revealing a story’s shape, highlighting important scenes, and tracking how the story develops for different characters over time. Acknowledgments We thank the anonymous reviewers for their feedback. We gratefully acknowledge the support of the European Research Council (Lapata; award 681760, “Translating Multiple Modalities into Text”) and of the Leverhulme Trust (Keller; award IAF-2017-019). References Apoorv Agarwal, Sriramkumar Balasubramanian, Jiehan Zheng, and Sarthak Dash. 2014. Parsing 1929 Screenplays for Extracting Social Networks from Movies. In Proceedings of the 3rd Workshop on Computational Linguistics for Literature, pages 50– 58, Gothenburg, Sweden. David Bamman, Brendan O’Connor, and Noah A. Smith. 2013. Learning latent personas of film characters. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 352–361, Sofia, Bulgaria. David Bamman, Ted Underwood, and Noah A. Smith. 2014. A Bayesian mixed effects model of literary character. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 370–379, Baltimore, Maryland. John B Black and Robert Wilensky. 1979. An evaluation of story grammars. Cognitive science, 3(3):213–229. Charles Oscar Brink. 2011. Horace on Poetry: The’Ars Poetica’. Cambridge University Press. Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, et al. 2018. Universal sentence encoder. arXiv preprint arXiv:1803.11175. Nathanael Chambers and Dan Jurafsky. 2009. Unsupervised learning of narrative schemas and their participants. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 602–610, Suntec, Singapore. Jianpeng Cheng and Mirella Lapata. 2016. Neural summarization by extracting sentences and words. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 484–494. James E Cutting. 2016. Narrative theory and the dynamics of popular movies. Psychonomic bulletin & review, 23(6):1713–1743. Yue Dong. 2018. A survey on neural network-based summarization methods. ArXiv, abs/1804.04589. David K. Elson, Nicholas Dames, and Kathleen R. Mckeown. 2010. Extracting social networks from literary fiction. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 138–147, Uppsala, Sweden. Alexander Fabbri, Irene Li, Tianwei She, Suyi Li, and Dragomir Radev. 2019. 
Multi-news: A large-scale multi-document summarization dataset and abstractive hierarchical model. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1074–1084, Florence, Italy. Association for Computational Linguistics. Syd Field. 2005. Screenplay: Foundations of Screenwriting. Dell Publishing Company. Mark Alan Finlayson. 2012. Learning Narrative Structure from Annotated Folktales. Ph.D. thesis, Massachusetts Institute of Technology. Lea Frermann, Shay B Cohen, and Mirella Lapata. 2018. Whodunnit? crime drama as a case for natural language understanding. Transactions of the Association of Computational Linguistics, 6:1–15. Gustav Freytag. 1896. Freytag’s technique of the drama: an exposition of dramatic composition and art. Scholarly Press. Philip John Gorinski and Mirella Lapata. 2015. Movie script summarization as graph-based scene extraction. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1066–1076, Denver, Colorado. Association for Computational Linguistics. Amit Goyal, Ellen Riloff, and Hal Daum´e III. 2010. Automatically producing plot unit representations for narrative text. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 77–86, Cambridge, MA. Max Grusky, Mor Naaman, and Yoav Artzi. 2018. NEWSROOM: A dataset of 1.3 million summaries with diverse extractive strategies. In Proceedings of the 16th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 708–719, New Orleans, USA. Michael Hauge. 2017. Storytelling Made Easy: Persuade and Transform Your Audiences, Buyers, and Clients – Simply, Quickly, and Profitably. Indie Books International. Marti A Hearst. 1997. Texttiling: Segmenting text into multi-paragraph subtopic passages. Computational linguistics, 23(1):33–64. Karl Moritz Hermann, Tom´aˇs Koˇcisk´y, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems 28, pages 1693– 1701. Morgan, Kaufmann. Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531. Anna Kazantseva and Stan Szpakowicz. 2010. Summarizing short stories. Computational Linguistics, 36(1):71–109. Joy Kim and Andr´es Monroy-Hern´andez. 2015. Storia: Summarizing social media content based on narrative theory using crowdsourcing. CoRR, abs/1509.03026. 1930 Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Wendy G. Lehnert. 1981. Plot units and narrative summarization. Cognitive Science, 5(4):293–331. Yang Liu and Mirella Lapata. 2019. Hierarchical transformers for multi-document summarization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5070– 5081, Florence, Italy. Association for Computational Linguistics. Interjeet Mani. 2012. Computational Modeling of Narative. Synthesis Lectures on Human Language Technologies. Morgan and Claypool Publishers. Neil McIntyre and Mirella Lapata. 2010. Plot induction and evolutionary search for story generation. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1562– 1572, Uppsala, Sweden. Rada Mihalcea and Hakan Ceylan. 2007. 
Explorations in automatic book summarization. In Proceedings of the 2007 joint conference on empirical methods in natural language processing and computational natural language learning (EMNLP-CoNLL), pages 380–389. Rada Mihalcea and Paul Tarau. 2004. Textrank: Bringing order into text. In Proceedings of the 2004 conference on empirical methods in natural language processing, pages 404–411. Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2017. Summarunner: A recurrent neural network based sequence model for extractive summarization of documents. In Thirty-First AAAI Conference on Artificial Intelligence. Ramesh Nallapati, Bowen Zhou, Caglar Gulcehre, Bing Xiang, et al. 2016. Abstractive text summarization using sequence-to-sequence rnns and beyond. arXiv preprint arXiv:1602.06023. Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018. Don’t give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1797–1807, Brussels, Belgium. Association for Computational Linguistics. Sinno Jialin Pan and Qiang Yang. 2010. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10):1345–1359. Pinelopi Papalampidi, Frank Keller, and Mirella Lapata. 2019. Movie plot analysis via turning point identification. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 1707–1717. Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in pytorch. Patrice Pavis. 1998. Dictionary of the theatre: Terms, concepts, and analysis. University of Toronto Press. Vladimir Iakovlevich Propp. 1968. Morphology of the Folktale. University of Texas. Andrew J. Reagan, Lewis Mitchell, Dilan Kiley, Christopher M. Danforth, and Peter Sheridan Dodds. 2016. The emotional arcs of stories are dominated by six basic shapes. EPJ Data Science, 5(31):1–12. Whitman Richards, Mark Alan Finlayson, and Patrick Henry Winston. 2009. Advancing computational models of narrative. Technical Report 63:2009, MIT Computer Science and Atrificial Intelligence Laboratory. David E. Rumelhart. 1980. On evaluating story grammars. Cognitive Science, 4(3):313–316. Evan Sandhaus. 2008. The New York Times Annotated Corpus. Linguistic Data Consortium, Philadelphia, 6(12). Roger C. Schank and Robert P. Abelson. 1975. Scripts, plans, and knowledge. In Proceedings of the 4th International Joint Conference on Artificial Intelligence, pages 151–157, Tblisi, USSR. Shashank Srivastava, Snigdha Chaturvedi, and Tom Mitchell. 2016. Inferring interpersonal relations in narrative summaries. In Proceedings of the 13th AAAI Conference on Artificial Intelligence, pages 2807–2813, Phoenix, Arizona. AAAI Press. Kristin Thompson. 1999. Storytelling in the new Hollywood: Understanding classical narrative technique. Harvard University Press. Tsvetomira Tsoneva, Mauro Barbieri, and Hans Weda. 2007. Automated summarization of narrative video on a semantic level. In International Conference on Semantic Computing (ICSC 2007), pages 169–176. IEEE. Josep Valls-Vargas, J. Zhu, and Santiago Ontanon. 2014. Toward automatic role identification in unannotated folk tales. 
In Proceedings of the 10th AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, pages 188–194. Paul Vicol, Makarand Tapaswi, Lluis Castrejon, and Sanja Fidler. 2018. Moviegraphs: Towards understanding human-centric situations from videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8581–8590. Christopher Vogler. 2007. Writer’s Journey: Mythic Structure for Writers. Michael Wiese Productions. Kurt Vonnegut. 1981. Palm Sunday. RosettaBooks LLC, New York. 1931 Crime scene 12.4% Victim 14.6% Perpetrator 14.4% Cause of death 11.6% Evidence 36.1% Motive 10.9% Figure 4: Average composition of a CSI summary based on different crime-related aspects. Hao Zheng and Mirella Lapata. 2019. Sentence centrality revisited for unsupervised summarization. arXiv preprint arXiv:1906.03508. Qingyu Zhou, Nan Yang, Furu Wei, Shaohan Huang, Ming Zhou, and Tiejun Zhao. 2018. Neural document summarization by jointly learning to score and select sentences. arXiv preprint arXiv:1807.02305. A Narrative Structure Theory The initial formulation of narrative structure was promoted by Aristotle, who defined the basic triangle-shaped plot structure, that has a beginning (protasis), middle (epitasis) and end (catastrophe) (Pavis, 1998). However, later theories argued that the structure of a play should be more complex (Brink, 2011) and hence, other schemes (Freytag, 1896) were proposed with fine-grained stages and events defining the progression of the plot. These events are considered as the precursor of turning points, defined by Thompson (1999) and used in modern variations of screenplay theory. Turning points are narrative moments from which the plot goes in a different direction. By definition these occur at the junctions of acts. Currently, there are myriad schemes describing the narrative structure of films, which are often used as a practical guide for screenwriters (Cutting, 2016). One variation of these modern schemes is adopted by Papalampidi et al. (2019), who focus on the definition of turning points and demonstrate that such events indeed exist in films and can be automatically identified. According to the adopted scheme (Hauge, 2017), there are six stages (acts) in a film, namely the setup, the new situation, progress, complications and higher stakes, the final push and the aftermath, separated by the five turning points presented in Table 1. 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 1 32 34 36 38 40 42 44 F1 score (%) Directed neural TextRank SUMMER Figure 5: F1 score (%) for directed neural TEXTRANK and SUMMER for unsupervised summarization with respect to different λ1 values. Higher λ1 values correspond to higher importance in the next context for the centrality computation of a current scene. B CSI Corpus As described in Section 4, we collected aspectbased summary labels for all episodes in the CSI corpus. In Figure 4 we illustrate the average composition of a summary based on the different aspects seen in a crime investigation (e.g., crime scene, victim, cause of death, perpetrator, evidence). Most of these aspects are covered in 10–15% of a summary, which corresponds to approximately two scenes in the episode. Only the “Evidence” aspect occupies a larger proportion of the summary (36.1%) corresponding to five scenes. 
However, there exist scenes which cover multiple aspects (an as a result are annotated with more than one label) and episodes that do not include any scenes related to a specific aspect (e.g., if the murder was a suicide, there is no perpetrator). We should note that Frermann et al. (2018) discriminate between different cases presented in the same episode in the original CSI dataset. Specifically, there are episodes in the dataset, where except for the primary crime investigation case, a second one is presented occupying a significantly smaller part of the episode. Although in the original dataset, there are annotations available indicating which scenes refer to each case, we assume no such knowledge treating the screenplay as a single unit — most TV series and movies contain substories. We also hypothesize that the latent identified TP events in SUMMER should relate to the primary case. 1932 Figure 6: Examples of inferred TPs alongside with gold-standard aspect-based summary labels in CSI episodes at test time. The TP events are identified in the latent space for the supervised version of SUMMER (+P, +R). C Implementation Details In all unsupervised versions of TEXTRANK and SUMMER we used a threshold h equal to 0.2 for removing weak edges from the corresponding fully connected screenplay graphs. For the supervised version of SUMMER, where we use additional regularization terms in the loss function, we experimentally set the weights a and b for the different terms to 0.15 and 0.1, respectively. We used the Adam algorithm (Kingma and Ba, 2014) for optimizing our networks. After experimentation, we chose an LSTM with 64 neurons for encoding the scenes in the screenplay and another identical one for contextualizing them. For the context interaction layer, the window l for computing the surrounding context of a screenplay scene was set to 20% of the screenplay length as proposed in Papalampidi et al. (2019). Finally, we also added a dropout of 0.2. For developing our models we used PyTorch (Paszke et al., 2017). D Additional Results We illustrate in Figure 5 the performance (F1 score) of the directed neural TEXTRANK and SUMMER models in the unsupervised setting with respect to different λ1 values. Higher λ1 values correspond to higher importance for the succeeding scenes and respectively lower importance for the preceding ones, since λ1 and λ2 are bounded (λ1 +λ2 = 1). We observe that performance increases when higher importance is attributed to screenplay scenes as the story moves on (λ1 > 0.5), whereas for extreme cases (λ1 →1), where only the later parts of the story are considered, performance drops. Overall, the same peak appears for both TEXTRANK and SUMMER when λ1 ∈[0.6,0.7], which means that slightly higher importance is attributed to the screenplay scenes that follow. Intuitively, initial scenes of an episode tend to have high similarity with all other scenes in the screenplay, and on their own are not very informative (e.g., the crime, victim, and suspects are introduced but the perpetrator is not yet known). As a result, the undirected version of TEXTRANK tends to favor the first part of the story and the resulting summary consists mainly of initial scenes. By adding extra importance to later scenes, we also encourage the selection of later events that might be surprising (and hence have lower similarity with other scenes) but more informative for the summary. 
Moreover, in SUMMER, where the weights change in a systematic manner based on narrative structure, we also observe that scenes appearing later in the screenplay are selected more often for inclusion in the summary. As described in detail in Section 3.3, we also infer the narrative structure of CSI episodes in the supervised version of SUMMER via latent TP representations. During experimentation (see Section 5), we found that these TPs are highly correlated with different aspects of a CSI summary. In Figure 6 we visualize examples of identified TPs on CSI episodes during test time alongside gold-standard aspect-based summary annotations. Based on the examples, we empirically observe that different TPs tend to capture different types of information helpful for summarizing crime investigation stories (e.g., crime scene, victim, perpetrator, motive).
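To make the training objective behind these latent TP representations concrete, the following is a hedged PyTorch-style sketch of the low-temperature TP attention and the two regularizers of Section 3.3 (Equations (5), (7) and (8)), using the hyperparameters listed in Appendix C (τ = 0.01, a = 0.15, b = 0.1). It is not the released SUMMER code; the tensor shapes, the ε constant, and the expected-position distributions are illustrative assumptions.

```python
# Hedged sketch of the supervised SUMMER objective L = BCE + a*O + b*F.
# Not the released implementation; shapes and epsilon are assumptions.
import torch
import torch.nn.functional as F

TAU = 0.01          # softmax temperature (Appendix C)
A, B = 0.15, 0.1    # regularizer weights (Appendix C)
EPS = 1e-8          # assumed smoothing constant

def tp_attention(g):
    """g: [num_scenes, num_tps] raw attention scores (tanh outputs).
    Returns one sharp distribution over scenes per turning point (Eq. 5)."""
    return torch.softmax(g / TAU, dim=0)

def kl(p, q):
    """KL(p || q) for two distributions over scenes."""
    return torch.sum(p * torch.log((p + EPS) / (q + EPS)))

def summer_loss(logits, labels, g, expected):
    """logits/labels: per-scene summary predictions; g: raw TP scores;
    expected: [num_scenes, num_tps] expected-position distributions."""
    bce = F.binary_cross_entropy_with_logits(logits, labels)
    p = tp_attention(g)
    num_tps = p.shape[1]
    # O: push TP attention distributions apart (Equation (7)).
    o = sum(torch.log(1.0 / (kl(p[:, i], p[:, j]) + EPS))
            for i in range(num_tps) for j in range(num_tps) if i != j)
    # F: keep each TP near its expected screenplay position (Equation (8)).
    f = sum(kl(p[:, i], expected[:, i]) for i in range(num_tps))
    return bce + A * o + B * f
```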
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1934–1945 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 1934 Unsupervised Opinion Summarization with Noising and Denoising Reinald Kim Amplayo and Mirella Lapata Institute for Language, Cognition and Computation School of Informatics, University of Edinburgh [email protected], [email protected] Abstract The supervised training of high-capacity models on large datasets containing hundreds of thousands of document-summary pairs is critical to the recent success of deep learning techniques for abstractive summarization. Unfortunately, in most domains (other than news) such training data is not available and cannot be easily sourced. In this paper we enable the use of supervised learning for the setting where there are only documents available (e.g., product or business reviews) without ground truth summaries. We create a synthetic dataset from a corpus of user reviews by sampling a review, pretending it is a summary, and generating noisy versions thereof which we treat as pseudo-review input. We introduce several linguistically motivated noise generation functions and a summarization model which learns to denoise the input and generate the original review. At test time, the model accepts genuine reviews and generates a summary containing salient opinions, treating those that do not reach consensus as noise. Extensive automatic and human evaluation shows that our model brings substantial improvements over both abstractive and extractive baselines. 1 Introduction The proliferation of massive numbers of online product, service, and merchant reviews has provided strong impetus to develop systems that perform opinion mining automatically (Pang and Lee, 2008). The vast majority of previous work (Hu and Liu, 2006) breaks down the problem of opinion aggregation and summarization into three interrelated tasks involving aspect extraction (Mukherjee and Liu, 2012), sentiment identification (Pang et al., 2002; Pang and Lee, 2004), and summary creation based on extractive (Radev et al., 2000; Lu et al., 2009) or abstractive methods (Ganesan et al., 2010; Carenini et al., 2013; Gerani et al., 2014; Di Fabbrizio et al., 2014). Although potentially more challenging, abstractive approaches seem more appropriate for generating informative and concise summaries, e.g., by performing various rewrite operations (e.g., deletion of words or phrases and insertion of new ones) which go beyond simply copying and rearranging passages from the original opinions. Abstractive summarization has enjoyed renewed interest in recent years thanks to the availability of large-scale datasets (Sandhaus, 2008; Hermann et al., 2015; Grusky et al., 2018; Liu et al., 2018; Fabbri et al., 2019) which have driven the development of neural architectures for summarizing single and multiple documents. Several approaches (See et al., 2017; Celikyilmaz et al., 2018; Paulus et al., 2018; Gehrmann et al., 2018; Liu et al., 2018; Perez-Beltrachini et al., 2019; Liu and Lapata, 2019; Wang and Ling, 2016) have shown promising results with sequence-to-sequence models that encode one or several source documents and then decode the learned representations into an abstractive summary. The supervised training of high-capacity models on large datasets containing hundreds of thousands of document-summary pairs is critical to the recent success of deep learning techniques for abstractive summarization. 
Unfortunately, in most domains (other than news) such training data is not available and cannot be easily sourced. For instance, manually writing opinion summaries is practically impossible since an annotator must read all available reviews for a given product or service which can be prohibitively many. Moreover, different types of products impose different restrictions on the summaries which might vary in terms of length, or the types of aspects being mentioned, rendering the application of transfer learning techniques (Pan and Yang, 2010) problematic. Motivated by these issues, Chu and Liu (2019) consider an unsupervised learning setting where 1935 there are only documents (product or business reviews) available without corresponding summaries. They propose an end-to-end neural model to perform abstractive summarization based on (a) an autoencoder that learns representations for each review and (b) a summarization module which takes the aggregate encoding of reviews as input and learns to generate a summary which is semantically similar to the source documents. Due to the absence of ground truth summaries, the model is not trained to reconstruct the aggregate encoding of reviews, but rather it only learns to reconstruct the encoding of individual reviews. As a result, it may not be able to generate meaningful text when the number of reviews is large. Furthermore, autoencoders are constrained to use simple decoders lacking attention (Bahdanau et al., 2014) and copy (Vinyals et al., 2015) mechanisms which have proven useful in the supervised setting leading to the generation of informative and detailed summaries. Problematically, a powerful decoder might be detrimental to the reconstruction objective, learning to express arbitrary distributions of the output sequence while ignoring the encoded input (Kingma and Welling, 2014; Bowman et al., 2016). In this paper, we enable the use of supervised techniques for unsupervised summarization. Specifically, we automatically generate a synthetic training dataset from a corpus of product reviews, and use this dataset to train a more powerful neural model with supervised learning. The synthetic data is created by selecting a review from the corpus, pretending it is a summary, generating multiple noisy versions thereof and treating these as pseudoreviews. The latter are obtained with two noise generation functions targeting textual units of different granularity: segment noising introduces noise at the word- and phrase-level, while document noising replaces a review with a semantically similar one. We use the synthetic data to train a neural model that learns to denoise the pseudo-reviews and generate the summary. This is motivated by how humans write opinion summaries, where denoising can be seen as removing diverging information. Our proposed model consists of a multi-source encoder and a decoder equipped with an attention mechanism. Additionally, we introduce three modules: (a) explicit denoising guides how the model removes noise from the input encodings, (b) partial copy enables to copy information from the source reviews only when necessary, and (c) a discriminator helps the decoder generate topically consistent text. We perform experiments on two review datasets representing different domains (movies vs businesses) and summarization requirements (short vs longer summaries). 
Results based on automatic and human evaluation show that our method outperforms previous unsupervised summarization models, including the state-of-the-art abstractive system of Chu and Liu (2019) and is on the same par with a state-of-the-art supervised model (Wang and Ling, 2016) trained on a small sample of (genuine) review-summary pairs. 2 Related Work Most previous work on unsupervised opinion summarization has focused on extractive approaches (Carenini et al., 2006; Ku et al., 2006; Paul et al., 2010; Angelidis and Lapata, 2018) where a clustering model groups opinions of the same aspect, and a sentence extraction model identifies text representative of each cluster. Ganesan et al. (2010) propose a graph-based abstractive framework for generating concise opinion summaries, while Di Fabbrizio et al. (2014) use an extractive system to first select salient sentences and then generate an abstractive summary based on hand-written templates (Carenini and Moore, 2006). As mentioned earlier, we follow the setting of Chu and Liu (2019) in assuming that we have access to reviews but no gold-standard summaries. Their model learns to generate opinion summaries by reconstructing a canonical review of the average encoding of input reviews. Our proposed method is also abstractive and neural-based, but eschews the use of an autoencoder in favor of supervised sequence-to-sequence learning through the creation of a synthetic training dataset. Concurrently with our work, Braˇzinskas et al. (2019) use a hierarchical variational autoencoder to learn a latent code of the summary. While they also use randomly sampled reviews for supervised training, our dataset construction method is more principled making use of linguistically motivated noise functions. Our work relates to denoising autoencoders (DAEs; Vincent et al., 2008), which have been effectively used as unsupervised methods for various NLP tasks. Earlier approaches have shown that DAEs can be used to learn high-level text representations for domain adaptation (Glorot et al., 2011) and multimodal representations of textual and visual input (Silberer and Lapata, 2014). Recent 1936 work has applied DAEs to text generation tasks, specifically to data-to-text generation (Freitag and Roy, 2018) and extractive sentence compression (Fevry and Phang, 2018). Our model differs from these approaches in two respects. Firstly, while previous work has adopted trivial noising methods such as randomly adding or removing words (Fevry and Phang, 2018) and randomly corrupting encodings (Silberer and Lapata, 2014), our noise generators are more linguistically informed and suitable for the opinion summarization task. Secondly, while in Freitag and Roy (2018) the decoder is limited to vanilla RNNs, our noising method enables the use of more complex architectures, enhanced with attention and copy mechanisms, which are known to improve the performance of summarization systems (Rush et al., 2015; See et al., 2017). 3 Modeling Approach Let X = {x1, ..., xN} denote a set of reviews about a product (e.g., a movie or business). Our aim is to generate a summary y of the opinions expressed in X. We further assume access to a corpus C = {X1, ..., XM} containing multiple reviews about M products without corresponding opinion summaries. Our method consists of two parts. We first create a synthetic dataset D = {(X, y)} consisting of summary-review pairs. 
Specifically, we sample review xi from C, pretend it is a summary, and generate multiple noisy versions thereof (i.e., pseudo-reviews). At training time, a denoising model learns to remove the noise from the reviews and generate the summary. At test time, the same denoising model is used to summarize actual reviews. We use denoising as an auxiliary task for opinion summarization to simulate the fact that summaries tend to omit opinions that do not represent consensus (i.e., noise in the pseudo-review), but include salient opinions found in most reviews (i.e., non-noisy parts of the pseudo-review). 3.1 Synthetic Dataset Creation via Noising We sample a review as a candidate summary and generate noisy versions thereof, using two functions: (a) segment noising adds noise at the token and chunk level, and (b) document noising adds noise at the text level. The noise functions are illustrated in Figure 1. [Figure 1: Synthetic dataset creation. Given a sampled candidate summary, we add noise using two methods: (a) segment noising performs token- and chunk-level alterations, and (b) document noising replaces the text with a semantically similar review.] Summary Sampling Summaries and reviews follow different writing conventions. For example, reviews are subjective, and often include first-person singular pronouns such as I and my and several unnecessary characters or symbols. They may also vary in length and detail. We discard reviews from corpus C which display an excess of these characteristics based on a list of domain-specific constraints (detailed in Section 4). We sample a review y from the filtered corpus, which we use as the candidate summary. Segment Noising Given candidate summary y = {w1, ..., wL}, we create a set of segment-level noisy versions X(c) = {x(c)1, ..., x(c)N}. Previous work has adopted noising techniques based on random n-gram alterations (Fevry and Phang, 2018); however, we instead rely on two simple, linguistically informed noise functions. Firstly, we train a bidirectional language model (BiLM; Peters et al., 2018) on the review corpus C. For each word in y, the BiLM predicts a softmax word distribution which can be used to replace words. Secondly, we utilize FLAIR (Akbik et al., 2019; https://github.com/zalandoresearch/flair), an off-the-shelf state-of-the-art syntactic chunker that leverages contextual embeddings, to shallow parse each review r in corpus C. This results in a list of chunks Cr = {c1, ..., cK} with corresponding syntactic labels Gr = {g1, ..., gK} for each review r, which we use for replacing and rearranging chunks. Segment-level noise involves token- and chunk-level alterations. Token-level alterations are performed by replacing tokens in y with probability pR. Specifically, we replace token wj in y by sampling token w′j from the BiLM-predicted word distribution (see Figure 1).
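As a concrete illustration of this token-level step, the sketch below shows one way the replacement could be implemented. It assumes a hypothetical predict_distribution(tokens, j) helper returning the BiLM's softmax distribution over the vocabulary at position j (this helper is our own placeholder, not part of any released code), and it already applies the top-p filtering corresponding to the nucleus sampling described in the text that follows.

import random

def token_level_noising(tokens, predict_distribution, p_replace=0.8, p_nucleus=0.9):
    # Replace each token with probability p_replace by sampling a substitute
    # from the BiLM-predicted distribution, restricted to the nucleus (top-p) set.
    # predict_distribution(tokens, j) is assumed to return (word, probability) pairs.
    noisy = []
    for j, w in enumerate(tokens):
        if random.random() >= p_replace:
            noisy.append(w)
            continue
        dist = sorted(predict_distribution(tokens, j), key=lambda pair: -pair[1])
        # Keep the smallest set of words whose cumulative probability exceeds p_nucleus.
        nucleus, cumulative = [], 0.0
        for word, prob in dist:
            nucleus.append((word, prob))
            cumulative += prob
            if cumulative >= p_nucleus:
                break
        # Renormalise the truncated distribution and sample the replacement token.
        total = sum(p for _, p in nucleus)
        words = [word for word, _ in nucleus]
        weights = [p / total for _, p in nucleus]
        noisy.append(random.choices(words, weights=weights, k=1)[0])
    return noisy

The default values (0.8 for replacement, 0.9 for the nucleus threshold) mirror the pR and pN settings reported in Section 4.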
We use nucleus sampling (Holtzman et al., 2019), which samples from a rescaled distribution of words with probability higher than a threshold pN , instead of the original distribution. This has been shown to yield better samples in comparison to top-k sampling, mitigating the problem of text degeneration (Holtzman et al., 2019). Chunk-level alterations are performed by removing and inserting chunks in y, and rearranging them based on a sampled syntactic template. Specifically, we first shallow parse y using FLAIR, obtaining a list of chunks Cy, each of which is removed with probability pR. We then randomly sample a review r from our corpus and use its sequence of chunk labels Gr as a syntactic template, which we fill in with chunks in Cy (sampled without replacement), if available, or with chunks in corpus C, otherwise. This results in a noisy version x(c) (see Figure 1 for an example). Repeating the process N times produces the noisy set X(c). We describe this process step-by-step in the Appendix. Document Noising Given candidate summary y = {w1, ..., wL}, we also create another set of document-level noisy versions X(d) = {x(d) 1 , ..., x(d) N }. Instead of manipulating parts of the summary, we altogether replace it with a similar review from the corpus and treat it as a noisy version. Specifically, we select N reviews that are most similar to y and discuss the same product. To measure similarity, we use IDF-weighted ROUGE-1 F1 (Lin, 2004), where we calculate the lexical overlap between the review and the candidate summary, weighted by token importance: overlap = X wj∈x IDF(wj) ∗1(wj ∈y)  P = overlap/|x| R = overlap/|y| F1 = (2 ∗P ∗R)/(P + R) where x is a review in the corpus, 1(·) is an indicator function, and P, R, and F1 are the ROUGE-1 precision, recall, and F1, respectively. The reviews with the highest F1 are selected as noisy versions of y, resulting in the noisy set X(d) (see Figure 1). We create a total of 2 ∗N noisy versions of y, i.e., X = X(c)∪X(d) and obtain our synthetic training data D = {(X, y)} by generating |D| pseudoreview-summary pairs. Both noising methods are necessary to achieve aspect diversity amongst input reviews. Segment noising creates reviews which may mention aspects not found in the summary, while document noising creates reviews with content similar to the summary. Relying on either noise function alone decreases performance (see the ablation studies in Section 5). We show examples of these noisy versions in the Appendix. 3.2 Summarization via Denoising We summarize (aka denoise) the input X with our model which we call DENOISESUM, illustrated in Figure 2. A multi-source encoder produces an encoding for each pseudo-review. The encodings are further corrected via an explicit denoising module, and then fused into an aggregate encoding for each type of noise. Finally, the fused encodings are passed to a decoder with a partial copy mechanism to generate the summary y. Multi-Source Encoder For each pseudo-review xj ∈X where xj = {w1, ..., wL} and wk is the kth token in xj, we obtain contextualized token encodings {hk} and an overall review encoding dj with a BiLSTM encoder (Hochreiter and Schmidhuber, 1997): −→h k = LSTMf(wk, −→h k−1) ←−h k = LSTMb(wk, ←−h k+1) hk = [−→h k; ←−h k] dj = [−→h L; ←−h 1] where −→h k and ←−h k are forward and backward hidden states of the BiLSTM at timestep k, and ; denotes concatenation (see module (a) in Figure 2). Explicit Denoising The model should be able to remove noise from the encodings before decoding the text. 
While previous methods (Vincent et al., 2008; Freitag and Roy, 2018) implicitly assign the denoising task to the encoder, we propose an explicit denoising component (see module (b) in Figure 2). Specifically, we create a correction vector c(c) j for each pseudo-review d(c) j which resulted from the application of segment noise. c(c) j represents the adjustment needed to denoise each dimension of d(c) j and is used to create ˆd(c) j , a denoised 1938 𝑥" ($) 𝑥& ($) 𝑥' ($) ... (a) Encoder (c) Noise-Specific Fusion (c) Noise-Specific Fusion ... Decoder Attention Attention with Copy (d) Partial Copy Category Category Classifier (e) Discriminator +0.5,+0.3 -0.6,+0.8 -0.9,+0.2 ... ... ... +0.8,-0.1 -0.6,-0.8 -0.4,-0.2 ... ... (b) Denoising Input (Segment Noise) Input (Document Noise) 𝑦 Summary Output 𝑥" ()) 𝑥& ()) 𝑥' ()) ... Figure 2: Architecture of DENOISESUM: it consists of a multi-source encoder with explicit denoising, noisespecific fusion, a decoder with partial copy, and a review category classifier. encoding of d(c) j : q = N X j=1 d(c) j /N c(c) j = tanh(W (c) d [d(c) j ; q] + b(c) d ) ˆd(c) j = d(c) j + c(c) j where q represents a mean review encoding and functions as a query vector, W and b are learned parameters, and superscript (c) signifies segment noising. We can interpret the correction vector as removing or adding information to each dimension when its value is negative or positive, respectively. Analogously, we obtain ˆd(d) j for pseudoreviews d(d) j which have been created with document noising. Noise-Specific Fusion For each type of noise (segment and document), we create a noise-specific aggregate encoding by fusing the denoised encodings into one (see module (c) in Figure 2). Given { ˆd(c) j }, the set of denoised encodings corresponding to segment noisy inputs, we create aggregate encoding s(c) 0 : α(c) j = softmax(W (c) f ˆd(c) j + b(c) f ) s(c) 0 = X j ˆd(c) j ∗α(c) j where αj is a gate vector with the same dimensionality as the denoised encodings. Analogously, we obtain s(d) 0 from the denoised encodings { ˆd(d) j } corresponding to document noisy inputs. Decoder with Partial Copy Our decoder generates a summary given encodings s(c) 0 and s(d) 0 as input. An advantage of our method is its ability to incorporate techniques used in supervised models, such as attention (Bahdanau et al., 2014) and copy (Vinyals et al., 2015). Pseudo-reviews created using segment noising include various chunk permutations, which could result to ungrammatical and incoherent text. Using a copy mechanism on these texts may hurt the fluency of the output. We therefore allow copy on document noisy inputs only (see module (d) in Figure 2). We use two LSTM decoders for the aggregate encodings, one equipped with attention and copy mechanisms, and one without copy mechanism. We then combine the results of these decoders using a learned gate. Specifically, token wt at timestep t is predicted as: s(c) t , p(c)(wt) = LSTMatt(wt−1, s(c) t−1) s(d) t , p(d)(wt) = LSTMatt+copy(wt−1, s(d) t−1) λt = σ(Wp[wt−1; s(c) t ; s(d) t ] + bp) p(wt) = λt∗p(c)(wt) + (1 −λt)∗p(d)(wt) where st and p(wt) are the hidden state and predicted token distribution at timestep t, and σ(·) is the sigmoid function. 1939 3.3 Training and Inference We use a maximum likelihood loss to optimize the generation probability distribution based on summary y = {w1, ..., wL} from our synthetic dataset: Lgen = − X wt∈y log p(wt) The decoder depends on Lgen to generate meaningful, denoised outputs. 
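The explicit denoising step and the noise-specific fusion that follows it can be summarised compactly in code. The PyTorch sketch below is our own reconstruction from the equations above rather than the released implementation; in particular, the fusion gate is softmaxed across the N pseudo-reviews, which is one plausible reading of the gate equation.

import torch
import torch.nn as nn

class DenoiseAndFuse(nn.Module):
    # Explicit denoising (correction vectors) followed by noise-specific fusion,
    # applied to the N pseudo-review encodings of a single noise type.
    def __init__(self, hidden_dim):
        super().__init__()
        self.correct = nn.Linear(2 * hidden_dim, hidden_dim)  # W_d, b_d
        self.gate = nn.Linear(hidden_dim, hidden_dim)          # W_f, b_f

    def forward(self, d):
        # d: (N, hidden_dim) review encodings from the multi-source encoder.
        q = d.mean(dim=0, keepdim=True).expand_as(d)    # mean encoding used as query
        c = torch.tanh(self.correct(torch.cat([d, q], dim=-1)))
        d_hat = d + c                                   # denoised encodings
        alpha = torch.softmax(self.gate(d_hat), dim=0)  # gates over the N reviews
        return (d_hat * alpha).sum(dim=0)               # aggregate encoding s_0

Two instances of such a module, with separate parameters, would produce s(c)0 and s(d)0 from the segment- and document-noised inputs before they are passed to the two decoders. Note that, as in the model, the denoising parameters in this sketch receive gradient only through the generation loss Lgen.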
As this is a rather indirect way to optimize our denoising module, we additionally use a discriminative loss providing direct supervision. The discriminator operates at the output of the fusion module and predicts the category distribution p(z) of the output summary y (see module (e) in Figure 2). The type of categories varies across domains. For movies, categories can be information about their genre (e.g., drama, comedy), while for businesses their specific type (e.g., restaurant, beauty parlor). This information is often included in reviews but we assume otherwise and use an LDA topic model (Blei et al., 2003) to infer p(z) (we present experiments with human labeled and automatically induced categories in Section 5). An MLP classifier takes as input aggregate encodings s(c) and s(d) and infers q(z). The discriminator is trained by calculating the KL divergence between predicted and actual category distributions q(z) and p(z): q(z) = MLPd(s(c), s(d)) Ldisc = DKL(p(z) ∥q(z)) The final objective is the sum of both loss functions: L = Lgen + Ldisc At test time, we are given genuine reviews X as input instead of the synthetic ones. We generate a summary by treating X as X(c) and X(d), i.e., the outcome of segment and document noising. 4 Experimental Setup Dataset We performed experiments on two datasets which represent different domains and summary types. The Rotten Tomatoes dataset2 (Wang and Ling, 2016) contains a large set of reviews for various movies written by critics. Each set of reviews has a gold-standard consensus summary written by an editor. We follow the partition 2http://www.ccs.neu.edu/home/luwang/ data.html Rotten Tomatoes Train* Dev Test #movies 25k 536 737 #reviews/movie 40.0 98.0 100.3 #tokens/review 28.4 23.5 23.6 #tokens/summary 22.7 23.6 23.8 corpus size 245,848 Yelp Train* Dev Test #businesses 100k 100 100 #reviews/business 8.0 8.0 8.0 #tokens/review 72.3 70.3 67.8 #tokens/summary 64.8 70.9 67.3 corpus size 2,320,800 Table 1: Dataset statistics; Train* column refers to the synthetic data we created through noising (Section 3.1). of Wang and Ling (2016) but do not use ground truth summaries during training to simulate our unsupervised setting. The Yelp dataset3 in Chu and Liu (2019) includes a large training corpus of reviews without gold-standard summaries. The latter are provided for the development and test set and were generated by an Amazon Mechanical Turker. We follow the splits introduced in their work. A comparison between the two datasets is provided in Table 1. As can be seen, Rotten Tomatoes summaries are generally short, while Yelp reviews are three times longer. Interestingly, there are a lot more reviews to summarize in Rotten Tomatoes (approximately 100 reviews) while input reviews in Yelp are considerably less (i.e., 8 reviews). Implementation To create the synthetic dataset, we sample candidate summaries using the following constraints: (1) the number of nonalphanumeric symbols must be less than 3, (2) there must be no first-person singular pronouns (not used for Yelp), and (3) the number of tokens must be between 20 to 30 (50 to 90 for Yelp). We set pR to 0.8 and 0.4 for token and chunk noise, and pN to 0.9. For each review-summary pair, the number of reviews N is sampled from the Gaussian distribution N(µ, σ2) where µ and σ are the mean and standard deviation of the number of reviews in the development set. We created 25k (Rotten Tomatoes) and 100k (Yelp) pseudo-reviews for our synthetic datasets (see Table 1). 
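Several of the concrete choices in this setup can be sketched in a few lines of code. The snippet below is an illustrative reconstruction, not the released implementation: the candidate-summary filter treats ordinary sentence punctuation as permitted and counts only other non-alphanumeric characters (an assumption, since the paper does not spell this out), and the IDF table for the document-noising similarity of Section 3.1 is assumed to be precomputed over the review corpus.

import random
import re

FIRST_PERSON = {"i", "me", "my", "mine", "myself"}

def is_candidate_summary(review, min_tokens=20, max_tokens=30, ban_first_person=True):
    # Rotten Tomatoes constraints; for Yelp use 50-90 tokens and skip the pronoun check.
    tokens = review.lower().split()
    symbols = re.findall(r"[^\w\s.,!?'-]", review)  # "unusual" symbols only (assumption)
    if len(symbols) >= 3:
        return False
    if ban_first_person and FIRST_PERSON & set(tokens):
        return False
    return min_tokens <= len(tokens) <= max_tokens

def sample_num_reviews(mu, sigma):
    # Number of pseudo-reviews N per pair, drawn from N(mu, sigma^2), with the mean
    # and standard deviation taken from the development set.
    return max(1, int(round(random.gauss(mu, sigma))))

def idf_weighted_rouge1_f1(review_tokens, summary_tokens, idf):
    # IDF-weighted ROUGE-1 F1 from Section 3.1, used to rank same-product reviews
    # as document-noise versions of the candidate summary.
    summary_vocab = set(summary_tokens)
    overlap = sum(idf.get(w, 0.0) for w in review_tokens if w in summary_vocab)
    if not review_tokens or not summary_tokens or overlap == 0.0:
        return 0.0
    precision = overlap / len(review_tokens)
    recall = overlap / len(summary_tokens)
    return 2 * precision * recall / (precision + recall)

Reviews of the same product with the highest idf_weighted_rouge1_f1 against the candidate summary would then form the document-noised set X(d).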
We set the dimensions of the word embeddings to 300, the vocabulary size to 50k, the hidden di3https://github.com/sosuperic/MeanSum 1940 Model METEOR RSU4 R1 R2 RL ORACLE 12.10 12.01 30.94 10.75 24.95 LEXRANK* 5.59 3.98 — — — WORD2VEC 6.14 4.04 13.93 2.10 10.81 SENTINEURON 7.02 4.77 15.90 2.01 11.74 OPINOSIS* 6.07 4.90 — — — MEANSUM 6.07 4.41 15.79 1.94 12.26 DENOISESUM 8.30 6.84 21.26 4.61 16.27 Best Supervised* 8.50 7.39 21.19 7.64 17.80 Table 2: Automatic evaluation on Rotten Tomatoes. Results from Amplayo and Lapata (2019) are marked with an asterisk *. Extractive/abstractive models shown in the first/second block. Best performing results for unsupervised models are boldfaced. Model R1 R2 RL ORACLE 31.07 6.11 18.11 LEXRANK 24.62 3.66 14.51 WORD2VEC* 24.61 2.85 13.81 SENTINEURON 25.05 3.09 14.56 OPINOSIS 20.85 1.52 11.46 MEANSUM* 28.86 3.66 15.91 DENOISESUM 30.14 4.99 17.65 Table 3: Automatic evaluation on Yelp. Results from Chu and Liu (2019) are marked with an asterisk *. Extractive/abstractive models shown in the first/second block. Best performing unsupervised models are boldfaced. mensions to 256, the batch size to 8, and dropout (Srivastava et al., 2014) to 0.1. For our discriminator, we employed an LDA topic model trained on the review corpus, with 50 (Rotten Tomatoes) and 100 (Yelp) topics (tuned on the development set). The LSTM weights were pretrained with a language modeling objective, using the corpus as training data. For Yelp, we additionally trained a coverage mechanism (See et al., 2017) in a separate training phase to avoid repetition. We used the Adam optimizer (Kingma and Ba, 2015) with a learning rate of 0.001 and l2 constraint of 3. At test time, summaries were generated using length normalized beam search with a beam size of 5. We performed early stopping based on the performance of the model on the development set. Our model was trained on a single GeForce GTX 1080 Ti GPU and is implemented using PyTorch.4 Comparison Systems We compared DENOISESUM to several unsupervised extractive and abstractive methods. Extractive approaches include (a) LEXRANK (Erkan and Radev, 2004), an algorithm similar to PageRank that generates summaries by selecting the most salient sentences, (b) WORD2VEC (Rossiello et al., 2017), a centroidbased method which represents the input as IDFweighted word embeddings and selects as summary the review closest to the centroid, and (c) SENTINEURON, which is similar to WORD2VEC but uses a language model called Sentiment Neuron (Radford et al., 2017) as input representation. As an upper bound, ORACLE selects as summary the review which maximizes the ROUGE-1/2/L F1 score against the gold summary. 4Our code can be downloaded from https://github. com/rktamplayo/DenoiseSum. Model RT Yelp DENOISESUM 16.27 17.65 10% synthetic dataset 15.39 16.22 50% synthetic dataset 15.76 17.54 no segment noising 16.03 16.88 no document noising 16.22 16.67 no explicit denoising 16.06 17.06 no partial copy 15.89 16.31 no discriminator 15.84 16.64 using human categories 15.87 15.86 Table 4: ROUGE-L of our model and versions thereof with less synthetic data (second block), using only one noising method (third block), and without some modules (fourth block). A more comprehensive table and discussion can be found in the Appendix. 
Abstractive methods include (d) OPINOSIS (Ganesan et al., 2010), a graph-based summarizer that generates concise summaries of highly redundant opinions, and (e) MEANSUM (Chu and Liu, 2019), a neural model that generates a summary by reconstructing text from aggregate encodings of reviews. Finally, for Rotten Tomatoes, we also compared with the state-of-the-art supervised model proposed in Amplayo and Lapata (2019) which used the original training split. Examples of system summaries are shown in the Appendix. 5 Results Automatic Evaluation Our results on Rotten Tomatoes are shown in Table 2. Following previous work (Wang and Ling, 2016; Amplayo and Lapata, 2019) we report five metrics: METEOR (Denkowski and Lavie, 2014), a recall-oriented metric that rewards matching stems, synonyms, and 1941 RT Yelp Model Inf Coh Gram Inf Coh Gram SENTINEURON 11.8 8.3 25.4 -24.8 -0.8 9.3 MEANSUM -32.1 -34.4 -46.8 6.3 -7.5 -10.8 DENOISESUM 20.3 26.1 21.4 18.5 8.2 1.6 Yelp Model FullSupp PartSupp NoSupp MEANSUM 41.7% 20.4% 38.0% DENOISESUM 55.1% 24.3% 20.5% GOLD 63.6% 23.6% 12.8% Table 5: Best-worst scaling (left) and summary veridicality (right) evaluation. Between systems differences are all significant, using a one-way ANOVA with posthoc Tukey HSD tests (p < 0.01). paraphrases; ROUGE-SU4 (Lin, 2004), the recall of unigrams and skip-bigrams of up to four words; and the F1-score of ROUGE-1/2/L, which respectively measures word-overlap, bigram-overlap, and the longest common subsequence between system and reference summaries. Results on Yelp are given in Table 3 where we compare systems using ROUGE-1/2/L F1, following Chu and Liu (2019). As can be seen, DENOISESUM outperforms all competing models on both datasets. When compared to MEANSUM, the difference in performance is especially large on Rotten Tomatoes, where we see a 4.01 improvement in ROUGE-L. We believe this is because MEANSUM does not learn to reconstruct encodings of aggregated inputs, and as a result it is unable to produce meaningful summaries when the number of input reviews is large, as is the case for Rotten Tomatoes. In fact, the best extractive model, SENTINEURON, slightly outperforms MEANSUM on this dataset across metrics with the exception of ROUGE-L. When compared to the best supervised system, DENOISESUM performs comparably on several metrics, specifically METEOR and ROUGE-1, however there is still a gap on ROUGE-2, showing the limitations of systems trained without gold-standard summaries. Table 4 presents various ablation studies on Rotten Tomatoes (RT) and Yelp which assess the contribution of different model components. Our experiments confirm that increasing the size of the synthetic data improves performance, and that both segment and document noising are useful. We also show that explicit denoising, partial copy, and the discriminator help achieve best results. Finally, human-labeled categories (instead of LDA topics) decrease model performance, which suggests that more useful labels can be approximated by automatic means. Human Evaluation We also conducted two judgment elicitation studies using the Amazon Mechanical Turk (AMT) crowdsourcing platform. The first study assessed the quality of the summaries using Best-Worst Scaling (BWS; Louviere et al., 2015), a less labor-intensive alternative to paired comparisons that has been shown to produce more reliable results than rating scales (Kiritchenko and Mohammad, 2017). 
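Since ROUGE-L is the headline metric in Tables 2-4, it is worth noting that it reduces to a few lines of dynamic programming; the sketch below shows the plain F1 variant over token sequences and ignores preprocessing details (e.g., stemming) that the official toolkit may apply.

def lcs_length(a, b):
    # Longest common subsequence length via dynamic programming.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l_f1(candidate_tokens, reference_tokens):
    # F1 over the longest common subsequence of system and reference summaries.
    lcs = lcs_length(candidate_tokens, reference_tokens)
    if lcs == 0:
        return 0.0
    precision = lcs / len(candidate_tokens)
    recall = lcs / len(reference_tokens)
    return 2 * precision * recall / (precision + recall)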
Specifically, participants were shown the movie/business name, some basic background information, and a gold-standard summary. They were also presented with three system summaries, produced by SENTINEURON (best extractive model), MEANSUM (most related unsupervised model), and DENOISESUM. Participants were asked to select the best and worst among system summaries taking into account how much they deviated from the ground truth summary in terms of: Informativeness (i.e., does the summary present opinions about specific aspects of the movie/business in a concise manner?), Coherence (i.e., is the summary easy to read and does it follow a natural ordering of facts?), and Grammaticality (i.e., is the summary fluent and grammatical?). We randomly selected 50 instances from the test set. We collected five judgments for each comparison. The order of summaries was randomized per participant. A rating per system was computed as the percentage of times it was chosen as best minus the percentage of times it was selected as worst. Results are reported in Table 5, where Inf, Coh, and Gram are shorthands for Informativeness, Coherence, and Grammaticality. DENOISESUM was ranked best in terms of informativeness and coherence, while the extractive system SENTINEURON was ranked best on grammaticality. This is not entirely surprising since extractive summaries written by humans are by definition grammatical. Our second study examined the veridicality of the generated summaries, namely whether the facts mentioned in them are indeed discussed in the input reviews. Participants were shown reviews and the corresponding summary and were asked to verify for each summary sentence whether it was fully supported by the reviews, partially supported, or not at all supported. We performed this experiment 1942 on Yelp only since the number of reviews is small and participants could read them all in a timely fashion. We used the same 50 instances as in our first study and collected five judgments per instance. Participants assessed the summaries produced by MEANSUM and DENOISESUM. We also included GOLD-standard summaries as an upper bound but no output from an extractive system as it by default contains facts mentioned in the reviews. Table 5 reports the percentage of fully (FullSupp), partially (PartSupp), and un-supported (NoSupp) sentences. Gold summaries display the highest percentage of fully supported sentences (63.3%), followed by DENOISESUM (55.1%), and MEANSUM (41.7%). These results are encouraging, indicating that our model hallucinates to a lesser extent compared to MEANSUM. 6 Conclusions We consider an unsupervised learning setting for opinion summarization where there are only reviews available without corresponding summaries. Our key insight is to enable the use of supervised techniques by creating synthetic review-summary pairs using noise generation methods. Our summarization model, DENOISESUM, introduces explicit denoising, partial copy, and discrimination modules which improve overall summary quality, outperforming competitive systems by a wide margin. In the future, we would like to model aspects and sentiment more explicitly as well as apply some of the techniques presented here to unsupervised single-document summarization. Acknowledgments We thank the anonymous reviewers for their feedback. We gratefully acknowledge the support of the European Research Council (Lapata, award number 681760). The first author is supported by a Google PhD Fellowship. 
References Alan Akbik, Tanja Bergmann, Duncan Blythe, Kashif Rasul, Stefan Schweter, and Roland Vollgraf. 2019. FLAIR: An easy-to-use framework for state-of-theart NLP. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 54–59, Minneapolis, Minnesota. Association for Computational Linguistics. Reinald Kim Amplayo and Mirella Lapata. 2019. Informative and controllable opinion summarization. CoRR, abs/1909.02322. Stefanos Angelidis and Mirella Lapata. 2018. Summarizing opinions: Aspect extraction meets sentiment prediction and they are both weakly supervised. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3675–3686, Brussels, Belgium. Association for Computational Linguistics. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. In Proceedings of the 3rd International Conference on Learning Representations, San Diego, California. David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993–1022. Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew Dai, Rafal Jozefowicz, and Samy Bengio. 2016. Generating sentences from a continuous space. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 10–21, Berlin, Germany. Association for Computational Linguistics. Arthur Braˇzinskas, Mirella Lapata, and Ivan Titov. 2019. Unsupervised multi-document opinion summarization as copycat-review generation. arXiv preprint arXiv:1911.02247. Giuseppe Carenini, Jackie Chi Kit Cheung, and Adam Pauls. 2013. Mutli-document summarization of evaluative text. Computational Intelligence, 29(4):545–576. Giuseppe Carenini and Johanna D. Moore. 2006. Generating and evaluating evaluative arguments. Artif. Intell., 170(11):925–952. Giuseppe Carenini, Raymond Ng, and Adam Pauls. 2006. Multi-document summarization of evaluative text. In 11th Conference of the European Chapter of the Association for Computational Linguistics, Trento, Italy. Association for Computational Linguistics. Asli Celikyilmaz, Antoine Bosselut, Xiaodong He, and Yejin Choi. 2018. Deep communicating agents for abstractive summarization. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1662–1675, New Orleans, Louisiana. Association for Computational Linguistics. Eric Chu and Peter Liu. 2019. MeanSum: A neural model for unsupervised multi-document abstractive summarization. In Proceedings of the 36th International Conference on Machine Learning, volume 97, pages 1223–1232, Long Beach, California. 1943 Michael Denkowski and Alon Lavie. 2014. Meteor universal: Language specific translation evaluation for any target language. In Proceedings of the 9th Workshop on Statistical Machine Translation, pages 376– 380, Baltimore, Maryland. Association for Computational Linguistics. Giuseppe Di Fabbrizio, Amanda Stent, and Robert Gaizauskas. 2014. A hybrid approach to multidocument summarization of opinions in reviews. In Proceedings of the 8th International Natural Language Generation Conference (INLG), pages 54–63, Philadelphia, Pennsylvania. Association for Computational Linguistics. G¨unes Erkan and Dragomir R. Radev. 2004. Lexrank: Graph-based lexical centrality as salience in text summarization. 
Journal of Artificial Intelligence Research, 22(1):457–479. Alexander Fabbri, Irene Li, Tianwei She, Suyi Li, and Dragomir Radev. 2019. Multi-news: A large-scale multi-document summarization dataset and abstractive hierarchical model. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1074–1084, Florence, Italy. Association for Computational Linguistics. Thibault Fevry and Jason Phang. 2018. Unsupervised sentence compression using denoising autoencoders. In Proceedings of the 22nd Conference on Computational Natural Language Learning, pages 413–422, Brussels, Belgium. Association for Computational Linguistics. Markus Freitag and Scott Roy. 2018. Unsupervised natural language generation with denoising autoencoders. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3922–3929, Brussels, Belgium. Association for Computational Linguistics. Kavita Ganesan, ChengXiang Zhai, and Jiawei Han. 2010. Opinosis: A graph based approach to abstractive summarization of highly redundant opinions. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010), pages 340–348, Beijing, China. Coling 2010 Organizing Committee. Sebastian Gehrmann, Yuntian Deng, and Alexander Rush. 2018. Bottom-up abstractive summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4098–4109, Brussels, Belgium. Association for Computational Linguistics. Shima Gerani, Yashar Mehdad, Giuseppe Carenini, Raymond T. Ng, and Bita Nejat. 2014. Abstractive summarization of product reviews using discourse structure. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1602–1613, Doha, Qatar. Association for Computational Linguistics. Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Domain adaptation for large-scale sentiment classification: A deep learning approach. In Proceedings of the 28th International Conference on International Conference on Machine Learning, pages 513–520, Bellevue, Washington. Omnipress. Max Grusky, Mor Naaman, and Yoav Artzi. 2018. Newsroom: A dataset of 1.3 million summaries with diverse extractive strategies. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 708–719, New Orleans, Louisiana. Assocation for Computational Linguistics. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems 28, pages 1693–1701. Curran Associates, Inc. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Ari Holtzman, Jan Buys, Maxwell Forbes, and Yejin Choi. 2019. The curious case of neural text degeneration. CoRR, abs/1904.09751. Minqing Hu and Bing Liu. 2006. Opinion extraction and summarization on the web. In Proceedings of the 21st National Conference on Artificial Intelligence - Volume 2, pages 1621–1624, Boston, Massachusetts. AAAI Press. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the 4th International Conference on Learning Representations, San Diego, California. Diederik P. Kingma and Max Welling. 2014. Autoencoding variational Bayes. 
In Proceedings of the 3rd International Conference on Learning Representations, Banff, Alberta. Svetlana Kiritchenko and Saif Mohammad. 2017. Bestworst scaling more reliable than rating scales: A case study on sentiment intensity annotation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 465–470, Vancouver, Canada. Association for Computational Linguistics. Lun-Wei Ku, Yu-Ting Liang, and Hsin-Hsi Chen. 2006. Opinion extraction, summarization and tracking in news and blog corpora. In AAAI Symposium on Computational Approaches to Analysing Weblogs (AAAI-CAAW), pages 100–107, Palo Alto, California. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81, Barcelona, Spain. Association for Computational Linguistics. 1944 Peter J Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, and Noam Shazeer. 2018. Generating Wikipedia by summarizing long sequences. In Proceedings of the 7th International Conference on Learning Representations, Vancouver, Canada. Yang Liu and Mirella Lapata. 2019. Hierarchical transformers for multi-document summarization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5070– 5081, Florence, Italy. Association for Computational Linguistics. Jordan J. Louviere, Terry N. Flynn, and A. A. J. Marley. 2015. Best-Worst Scaling: Theory, Methods and Applications. Cambridge University Press. Yue Lu, ChengXiang Zhai, and Neel Sundaresan. 2009. Rated aspect summarization of short comments. In Proceedings of the 18th International Conference on World Wide Web, pages 131–140, Madrid, Spain. ACM. Arjun Mukherjee and Bing Liu. 2012. Aspect extraction through semi-supervised modeling. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers - Volume 1, pages 339–348. Association for Computational Linguistics. Sinno Jialin Pan and Qiang Yang. 2010. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10):1345–1359. Bo Pang and Lillian Lee. 2004. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In Proceedings of the 42Nd Annual Meeting on Association for Computational Linguistics. Association for Computational Linguistics. Bo Pang and Lillian Lee. 2008. Opinion mining and sentiment analysis. Foundations and Trends in Information Retrieval, 2(1-2):1–135. Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up?: Sentiment classification using machine learning techniques. In Proceedings of the ACL-02 Conference on Empirical Methods in Natural Language Processing - Volume 10, pages 79–86. Association for Computational Linguistics. Michael Paul, ChengXiang Zhai, and Roxana Girju. 2010. Summarizing contrastive viewpoints in opinionated text. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 66–76, Cambridge, MA. Association for Computational Linguistics. Romain Paulus, Caiming Xiong, and Richard Socher. 2018. A deep reinforced model for abstractive summarization. In Proceedings of the 6th International Conference on Learning Representations, Vancouver, Canada. Laura Perez-Beltrachini, Yang Liu, and Mirella Lapata. 2019. Generating summaries with topic templates and structured convolutional decoders. 
In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5107–5116, Florence, Italy. Association for Computational Linguistics. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237, New Orleans, Louisiana. Association for Computational Linguistics. Dragomir R. Radev, Hongyan Jing, and Malgorzata Budzikowska. 2000. Centroid-based summarization of multiple documents: Sentence extraction, utilitybased evaluation, and user studies. In Proceedings of the 2000 NAACL-ANLP Workshop on Automatic Summarization - Volume 4, pages 21–30. Association for Computational Linguistics. Alec Radford, Rafal J´ozefowicz, and Ilya Sutskever. 2017. Learning to generate reviews and discovering sentiment. CoRR, abs/1704.01444. Gaetano Rossiello, Pierpaolo Basile, and Giovanni Semeraro. 2017. Centroid-based text summarization through compositionality of word embeddings. In Proceedings of the MultiLing 2017 Workshop on Summarization and Summary Evaluation Across Source Types and Genres, pages 12–21, Valencia, Spain. Association for Computational Linguistics. Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 379–389, Lisbon, Portugal. Association for Computational Linguistics. Evan Sandhaus. 2008. The New York Times Annotated Corpus. Linguistic Data Consortium, Philadelphia, 6(12). Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointergenerator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073– 1083, Vancouver, Canada. Association for Computational Linguistics. Carina Silberer and Mirella Lapata. 2014. Learning grounded meaning representations with autoencoders. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 721–732, Baltimore, Maryland. Association for Computational Linguistics. 1945 Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958. Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. 2008. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th International Conference on Machine Learning, pages 1096–1103, Helsinki, Finland. Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 2, pages 2692– 2700, Montr´eal, Canada. Lu Wang and Wang Ling. 2016. Neural network-based abstract generation for opinions and arguments. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 47–57, San Diego, California. Association for Computational Linguistics.
A Tale of Two Perplexities: Sensitivity of Neural Language Models to Lexical Retrieval Deficits in Dementia of the Alzheimer’s Type Trevor Cohen⇤ Biomedical and Health Informatics University of Washington Seattle [email protected] Serguei Pakhomov⇤ Pharmaceutical Care and Health Systems University of Minnesota Minneapolis [email protected] Abstract In recent years there has been a burgeoning interest in the use of computational methods to distinguish between elicited speech samples produced by patients with dementia, and those from healthy controls. The difference between perplexity estimates from two neural language models (LMs) - one trained on transcripts of speech produced by healthy participants and the other trained on transcripts from patients with dementia - as a single feature for diagnostic classification of unseen transcripts has been shown to produce state-of-the-art performance. However, little is known about why this approach is effective, and on account of the lack of case/control matching in the most widely-used evaluation set of transcripts (DementiaBank), it is unclear if these approaches are truly diagnostic, or are sensitive to other variables. In this paper, we interrogate neural LMs trained on participants with and without dementia using synthetic narratives previously developed to simulate progressive semantic dementia by manipulating lexical frequency. We find that perplexity of neural LMs is strongly and differentially associated with lexical frequency, and that a mixture model resulting from interpolating control and dementia LMs improves upon the current state-of-the-art for models trained on transcript text exclusively. 1 Introduction Alzheimer’s Disease (AD) is a debilitating neurodegenerative condition which currently has no cure, and Dementia of the Alzheimer’s Type (DAT) is one of the most prominent manifestations of AD pathology. Prior to availability of diseasemodifying therapies, it is important to focus on reducing the emotional and financial burden of this devastating disease on patients, caregivers, and the healthcare system. Recent longitudinal studies of ⇤denotes equal contribution aging show that cognitive manifestations of future dementia may appear as early as 18 years prior to clinical diagnosis - much earlier than previously believed (Rajan et al., 2015; Aguirre-Acevedo et al., 2016). With 30-40% of healthy adults subjectively reporting forgetfulness on a regular basis (Cooper et al., 2011), there is an urgent need to develop sensitive and specific, easy-to-use, safe, and costeffective tools for monitoring AD-specific cognitive markers in individuals concerned about their cognitive function. Lack of clear diagnosis and prognosis, possibly for an extended period of time (i.e., many years), in this situation can produce uncertainty and negatively impact planning of future care (Stokes et al., 2015), and misattribution of AD symptoms to personality changes can lead to family conflict and social isolation (Boise et al., 1999; Bond et al., 2005). Delayed diagnosis also results in an estimated $7.9 trillion in medical and care costs (Association, 2018) due to high utilization of emergency care, amongst other factors, by patients with undiagnosed AD. Cognitive status is reflected in spoken language. 
As manual analysis of such data is prohibitively time-consuming, the development and evaluation of computational methods through which symptoms of AD and other dementias can be identified on the basis of linguistic anomalies observed in transcripts of elicited speech samples have intensified in the last several years (Fraser et al., 2016; Yancheva and Rudzicz, 2016; Orimaye et al., 2017). This work has generally employed a supervised machine learning paradigm, in which a model is trained to distinguish between speech samples produced by patients with dementia and those from controls, using a set of deliberately engineered or computationally identified features. However, on account of the limited training data available, overfitting is a concern. This is particularly problematic in DAT, where the nature of linguistic anomalies varies between patients, and with AD progression (Altmann and McClung, 2008). In the current study we take a different approach, focusing our attention on the perplexity of a speech sample as estimated by neural LMs trained on transcripts of the speech of participants completing a cognitive task. To date, the most successful approach to using LM perplexity as a sole distinguishing feature between narratives by dementia patients and controls was proposed by Fritsch et al. (2019) and replicated by Klumpp et al. (2018). The approach consists of training two recurrent neural LMs - one on transcripts from patients with dementia and the other on transcripts from controls. The difference between the perplexities estimated with these two LMs results in very high classification accuracy (AUC: 0.92) reported by both studies. The explanation for this performance offered by Fritsch et al. (2019) relies on observations that patients with DAT describe the picture in an unforeseen way and their speech frequently diverts from the content of the picture, contains repetitions, incomplete utterances, and refers to objects in the picture using words like “thing” or “something”. This explanation, however, conflicts with the findings by Klumpp et al. (2018) that demonstrate similarly high classification accuracy (AUC: 0.91) with a single hidden layer non-recurrent neural network and bag-of-words input features, suggesting that while word sequences play a role, it may not be as large as previously believed by Fritsch et al. (2019). Klumpp et al.’s (2018) explanation contrasts “local” with “global language properties” of the picture descriptions being captured by recurrent neural LMs vs. the non-recurrent bag-of-words neural network classifier, respectively. Both of these explanations are based on informal qualitative observations of the data and are not entirely satisfying because both fail to explain the fact that it is precisely the difference between the control and dementia LMs that is able to discriminate between patients and controls. The individual LMs are not nearly as good at this categorization task. The objective of the current study is to quantify the extent to which the differences between neural LMs trained on language produced by DAT patients and controls reflect known deficits in language use in this disease - in particular the loss of access to relatively infrequent terms that occurs with disease progression (Almor et al., 1999a).
We approach this objective by interrogating trained neural LMs with two methods: interrogation by perturbation in which we evaluate how trained neural LMs respond to text that has been deliberately perturbed to simulate AD progression; and interrogation by interpolation in which we develop and evaluate hybrid LMs by interpolating between neural LMs modeling language use with and without dementia. We find neural LMs are progressively more perplexed by text simulating disease of greater severity, and that this perplexity decreases with increasing contributions of a LM trained on transcripts from patients with AD, but increases again when only this LM is considered. Motivated by these observations, we modify the approach of Fritsch et al. (2019) by incorporating an interpolated model and pre-trained word embeddings, with improvements in performance over the best results reported for models trained on transcript text exclusively. 2 Background 2.1 Linguistic Anomalies in AD AD is a progressive disease, and the linguistic impairments that manifest reflect the extent of this progression (Altmann and McClung, 2008). In its early stages, deficits in the ability to encode recent memories are most evident. As the disease progresses, it affects regions of the brain that support semantic memory (Martin and Chao, 2001) knowledge of words and the concepts they represent - and deficits in language comprehension and production emerge (Altmann and McClung, 2008). A widely-used diagnostic task for elicitation of abnormalities in speech is the “Cookie Theft” picture description task from the Boston Diagnostic Aphasia Examination (Goodglass, 2000), which is considered to provide an adequate approximation of spontaneous speech. In this task, participants are asked to describe a picture of a pair of children colluding in the theft of cookies from the top shelf of a raised cupboard while their mother distractedly washes dishes1. When used as a diagnostic instrument, the task can elicit features of AD and other dementias, such as pronoun overuse (Almor et al., 1999a), repetition (Hier et al., 1985; Pakhomov et al., 2018) and impaired recollection of key elements (or “information units”) from the picture (Giles et al., 1996). Due to the human-intensive nature of the analyses to detect such anomalies, automated methods present a desirable alternative. 1For a contemporary edition subscribing to fewer gender stereotypes see (Berube et al., 2018). 1947 2.2 Classification of Dementia Transcripts A number of authors have investigated automated methods of identifying linguistic anomalies in dementia. The most widely-used data set for these studies is the DementiaBank corpus (Becker et al., 1994), which we employ for the current work. In some of the early work on this corpus, Prud’hommeaux and Roark (2015) introduced a novel graph-based content summary score to distinguish between controls and dementia cases in this corpus with an area under the receiver operating characteristic curve (AUC) of 0.83. Much of the subsequent work relied on supervised machine learning, with a progression from manually engineered features to neural models mirroring general Natural Language Processing trends. For example, Fraser and Hirst (2016) report AD classification accuracy of over 81% on 10-fold crossvalidation when applying logistic regression to 370 text-derived and acoustic features. In a series of papers, Orimaye et al. 
(2014; 2017; 2018) report tenfold cross-validation F-measures of up to 0.73 when applying a Support Vector Machine (SVM) to 21 syntactic and lexical features; SVM AUC on leave-pair-out cross-validation (LPOCV) of 0.82 and 0.93 with the best manually-engineered feature set and the best 1,000 of 16,903 lexical, syntactic and n-gram features (with selection based on information gain) respectively; and a LPOCV AUC of 0.73-0.83 across a range of deep neural network models with high-order n-gram features. Yancheva and Rudzicz (2016) derive topic-related features from word vector clusters to obtain an F-score of 0.74 with a random forest classifier (0.8 with additional lexicosyntactic and acoustic features). Karlekar et al. (2018) report an utterance-level accuracy of 84.9% (improving to 91.1% when incorporating POS tags) with a convolutional/recurrent neural network combination when trained on text alone. While these results are not strictly comparable as they are based on different subsets of the data, use different cross-validation strategies and report different performance metrics, they collectively show that supervised models can learn to identify patients with AD using data from elicited speech samples. However, as is generally the case with supervised learning on small data sets, overfitting is a concern. 2.3 Perplexity and Cognitive Impairment Perplexity is used as an estimate of the fit between a probabilistic language model and a segment of previously unseen text. The notion of applying n-gram model perplexity (a derivative of cross-entropy) as a surrogate measure of syntactic complexity in spoken narratives was proposed by Roark et al. (2007) and applied to transcribed logical memory (story recall) test responses by patients with mild cognitive impairment (MCI: a frequent precursor to AD diagnosis). In this work, sequences of part-of-speech (POS) tags were used to train bi-gram models on logical memory narratives, and then cross-entropy of these models was computed on held-out cross-validation folds. They found significantly higher mean cross-entropy values in narratives of MCI patients as compared to controls. Subsequent work expanded the use of POS cross-entropy as one of the language characteristics in a predictive model for detecting MCI (Roark et al., 2011). Perplexity can also be calculated on word tokens and serve as an indicator of an n-gram model’s efficiency in predicting new utterances (Jelinek et al., 1977). Pakhomov et al. (2010b) included word and POS LM perplexity amongst a set of measurements used to distinguish between speech samples elicited from healthy controls and patients with frontotemporal lobar degeneration (FTLD). A LM was trained on text from an external corpus of transcribed “Cookie Theft” picture descriptions performed by subjects without dementia from a different study. This model was then used to estimate perplexity of elicited speech samples in cases and controls, with significant differences between mean perplexity scores obtained from subjects with the semantic dementia variant of FTLD and controls. However, the authors did not attempt to use perplexity score as a variable in a diagnostic classification of FTLD or its subtypes. Collectively, these studies suggest elevated perplexity (both at the word and POS level) may indicate the presence of dementia.
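To make the quantity under discussion explicit: perplexity is the exponentiated average negative log-probability a language model assigns to the tokens of a transcript, and the single diagnostic feature of Fritsch et al. (2019) described in the Introduction is the difference between two such estimates. A minimal sketch, assuming a hypothetical log_probs(model, transcript) helper that returns per-token log-probabilities from a trained LM (n-gram or LSTM):

import math

def perplexity(token_log_probs):
    # Perplexity from a list of per-token natural-log probabilities.
    return math.exp(-sum(token_log_probs) / len(token_log_probs))

def perplexity_difference(transcript, control_lm, dementia_lm, log_probs):
    # Single classification feature: perplexity under the control LM minus
    # perplexity under the dementia LM. Under this sign convention, larger
    # values indicate a transcript that the dementia LM finds comparatively
    # easier to predict; a threshold on this value yields the classifier.
    ppl_control = perplexity(log_probs(control_lm, transcript))
    ppl_dementia = perplexity(log_probs(dementia_lm, transcript))
    return ppl_control - ppl_dementia

The sign convention (control minus dementia) is one choice; the reported AUCs depend only on how well a threshold on the difference separates the two groups.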
A follow-up study (Pakhomov et al., 2010a) used perplexity calculated with a model trained on a corpus of conversational speech unrelated to the picture description task, as part of a factor analysis of speech and language characteristics in FTLD. Results suggested that the general English LM word- and POS-level perplexity did not discriminate between FTLD subtypes, or between cases and controls. Taken together with the prior results, these results suggest that LMs trained on transcripts elicited using a defined task (such as the “Cookie Theft” task) are better equipped to distinguish between cases and controls than LMs trained on a broader corpus. As the vocabulary of AD patients becomes progressively constrained, one might anticipate language use becoming more predictable with disease progression. Wankerl et al. (2016) evaluate this hypothesis using the writings of Iris Murdoch, who developed AD later in life and eschewed editorial revisions. In this analysis, which was based on time-delimited train/test splits, perplexity decreased in her later output. This is consistent with recent work by Weiner et al. (2018) that found diminished perplexity was of some (albeit modest) utility in predicting transitions to AD. The idea of combining two perplexity estimates - one from a model trained on transcripts of speech produced by healthy controls and the other from a model trained on transcripts from patients with dementia - was developed by Wankerl et al. (2017), who report an AUC of 0.83 using n-gram LMs in a participant-level leave-one-out cross-validation (LOOCV) evaluation across the DementiaBank dataset. Fritsch et al. (2019) further improved performance of this approach by substituting a neural LM (an LSTM model) for the n-gram LM, and report an improved AUC of 0.92. However, it is currently unclear whether this level of accuracy is due to dementia-specific linguistic markers, or a result of markers of other significant differences between the case and control group such as age (x̄ = 71.4 vs. 63) and years of education (x̄ = 12.1 vs. 14.3) (Becker et al., 1994). 2.4 Neural LM perplexity Recurrent neural network language models (RNN-LM) (Mikolov et al., 2010) are widely used in machine translation and other applications such as sequence labeling (Goldberg, 2016). Recurrent Neural Networks (RNN) (Jordan, 1986; Elman, 1990) facilitate modeling sequences of indeterminate length by maintaining a state vector, St−1, that is combined with a vector representing the input for the next data point in a sequence, xt, at each step of processing. Consequently, RNN-LMs have recourse to information in all words preceding the target for prediction, in contrast to n-gram models. They are also robust to previously unseen word sequences, which with naïve n-gram implementations (i.e., without smoothing or backoff) could result in an entire sequence being assigned a probability of zero. Straightforward RNN implementations are vulnerable to the so-called “vanishing” and “exploding” gradient problems (Hochreiter, 1998; Pascanu et al., 2012), which emerge on account of the numerous sequential multiplication steps that occur with backpropagation through time (time here indicating each step through the sequence to be modeled), and limit the capacity of RNNs to capture long-range dependencies.
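As an illustration of the recurrence just described (a schematic sketch, not any of the specific architectures used in this work), a simple RNN cell combines the previous state with the current input; backpropagation through time then repeatedly multiplies gradients through the same recurrent weights, which is the source of the vanishing and exploding gradient problems.

```python
import numpy as np

rng = np.random.default_rng(0)
W_x = rng.normal(scale=0.1, size=(16, 8))   # input-to-hidden weights
W_h = rng.normal(scale=0.1, size=(16, 16))  # hidden-to-hidden (recurrent) weights

def rnn_step(h_prev, x_t):
    """Elman-style state update: the new state mixes the previous state
    with the representation of the current input token."""
    return np.tanh(W_x @ x_t + W_h @ h_prev)

h = np.zeros(16)
for x_t in rng.normal(size=(30, 8)):   # a sequence of 30 token vectors
    h = rnn_step(h, x_t)               # gradients flow back through W_h at every step
```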
An effective way to address this problem involves leveraging Long Short Term Memory (LSTM) networks (Hochreiter and Schmidhuber, 1997), which use structures known as gates to inhibit the flow of information during training, and a mechanism using a memory cell to preserve selected information across sequential training steps. Groups of gates comprise vectors with components that have values that are forced to be close to either 1 or 0 (typically accomplished using the sigmoid function). Only values close to 1 permit transmission of information, which disrupts the sequence of multiplication steps that occurs when backpropagating through time. The three gates used with typical LSTMs are referred to as Input, Forget and Output gates, and as their names suggest they govern the flow of information from the input and past memory to the current memory state, and from the output of each LSTM unit (or cell) to the next training step. LSTM LMs have been shown to produce better perplexity estimates than n-gram models (Sundermeyer et al., 2012). 2.5 Lexical Frequency A known distinguishing feature of the speech of AD patients is that it tends to contain higher frequency words with less specificity than that of cognitively healthy individuals (e.g., overuse of pronouns and words like “thing”) (Almor et al., 1999b). Lexical frequency affects speech production; however, these effects have different origins in healthy and cognitively impaired individuals. A leading cognitive theory of speech production postulates a two-step process of lexical access in which concepts are first mapped to lemmas and, subsequently, to phonological representations prior to articulation (Levelt, 2001). In individuals without dementia, lexical frequency effects are evident only at the second step - the translation of lemmas to phonological representations - and do not originate at the pre-lexical conceptual level (Jescheniak and Levelt, 1994). In contrast, in individuals with dementia, worsening word-finding difficulties are attributed to progressive degradation of semantic networks that underlie lexical access at the conceptual level (Astell and Harley, 1996). While lexical frequency effects are difficult to control in unconstrained purely spontaneous language production, language produced during the picture description task is much more constrained in that the picture provides a fixed set of objects, attributes, and relations that serve as referents for the person describing the picture. Thus, in the context of the current study, we expect to find that both healthy individuals and patients with dementia describing the same picture would attempt to refer to the same set of concepts, but that patients with dementia would tend to use more frequent and less specific words due to erosion of semantic representations leading to insufficient activation of the lemmas. Changes in vocabulary have been reported in the literature as one of the most prominent linguistic manifestations of AD (Pekkala et al., 2013; Wilson et al., 1983; Rohrer et al., 2007). We do not suggest that other aspects of language such as syntactic complexity, for example, should be excluded; although there has been some debate as to the utility of syntactic complexity specifically as a distinguishing feature (see Fraser et al., 2015).
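The lexical-frequency measure discussed here can be made concrete with a small sketch, anticipating the procedure described in §3.1 below; the POS tags and the log-frequency table (e.g., log10 frequencies from a corpus such as SUBTLEXus) are assumed inputs rather than resources bundled with this code, and the example values are hypothetical.

```python
def mean_log_lexical_frequency(tagged_tokens, log_freq):
    """Average log frequency over nouns and verbs that are unique within
    a narrative, following the convention described in this paper.

    tagged_tokens: list of (word, pos) pairs for one transcript
    log_freq: dict mapping word -> log10 corpus frequency
    """
    content = {w.lower() for w, pos in tagged_tokens
               if pos.startswith(("NN", "VB"))}          # nouns and verbs, deduplicated
    scores = [log_freq[w] for w in content if w in log_freq]
    return sum(scores) / len(scores) if scores else float("nan")

# Hypothetical example: higher values indicate more frequent, less specific words.
tagged = [("the", "DT"), ("woman", "NN"), ("washes", "VBZ"), ("dishes", "NNS")]
freqs = {"woman": 3.1, "washes": 1.4, "dishes": 2.0}
print(mean_log_lexical_frequency(tagged, freqs))
```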
3 Materials and Methods 3.1 Datasets For LM training and evaluation we used transcripts of English language responses to the “Cookie Theft” component of the Boston Diagnostic Aphasia Exam (Goodglass, 2000), provided as part of the DementiaBank database (Becker et al., 1994). Transcripts (often multiple) are available for 169 subjects classified as having possible or probable DAT on the basis of clinical or pathological examination, and 99 patients classified as controls. For interrogation by perturbation, we used a set of six synthetic “Cookie Theft” picture description narratives created by Bird et al. (2000) to study the impact of semantic dementia on verb and noun use in picture description tasks. While Bird et al. (2000) focused on semantic dementia, a distinct condition from DAT, these synthetic narratives were not based on patients with semantic dementia. Rather, they were created to manipulate lexical frequency by first compiling a composite baseline narrative from samples by healthy subjects, and then removing and/or replacing nouns and verbs in that baseline with words of higher lexical frequency (e.g., “mother” vs. “woman” vs. “she”). Lexical frequency was calculated using the Celex Lexical Database (LDC96L14) and words were aggregated into groups based on four log frequency bands (0.5 - 1.0, 1.0 - 1.5, 1.5 - 2.0, 2.5 - 3.0: e.g., words in the 0.5 - 1.0 band occur in Celex more than 10 times per million). These narratives are well-suited to the study of lexical retrieval deficits in DAT, in which loss of access to less frequent words is observed with disease progression (Pekkala et al., 2013). In order to calculate mean log lexical frequency on the DementiaBank narratives, we used the SUBTLEXus corpus, shown to produce lexical frequencies more consistent with psycholinguistic measures of word processing time than those calculated from the Celex corpus (Brysbaert and New, 2009). The DementiaBank narratives were processed using NLTK's implementation (Natural Language Toolkit: www.nltk.org) of the TnT part-of-speech tagger (Brants, 2000) trained on the Brown corpus (Francis and Kucera, 1979). Following Bird et al. (2000), only nouns and verbs unique within the narrative were used to calculate mean log lexical frequency. We did not stem the words in order to avoid creating potentially artificially high/low frequency items. To validate the mean log lexical frequency values obtained with the SUBTLEXus corpus, we compared the log lexical frequency means for the six narratives developed by Bird et al. (2000) with their frequency band values using Spearman's rank correlation and found them to be perfectly correlated (ρ = 1.0). The text of DementiaBank transcripts was extracted from the original CHAT files (Macwhinney, 2000). The transcripts as well as the six synthetic narratives were lowercased and pre-processed by removing speech and non-speech noise as well as pause fillers (um's and ah's) and punctuation (excepting the apostrophe). 3.2 Pre-trained models Prior work with neural LMs in this context has used randomly instantiated models. We wished to evaluate the utility of pre-training for this task - both pre-training of the LSTM in its entirety and pre-training of word embeddings alone. For the former we used an LSTM trained on the WikiText-2 dataset (Merity et al., 2016) provided with the GluonNLP package (https://github.com/dmlc/gluon-nlp).
200-dimensional word embeddings, including embeddings augmented with subword information (Bojanowski et al., 2017), were developed using the Semantic Vectors package (https://github.com/semanticvectors/semanticvectors) and trained using the skipgram-with-negative-sampling algorithm of Mikolov et al. (2013) for a single iteration on the English Wikipedia (10/1/2019 edition, pre-processed with wikifil.pl, available at https://github.com/facebookresearch/fastText) with a window radius of five (other hyperparameters per Cohen and Widdows, 2018). We report results using skipgram embeddings augmented with subword information as these improved performance over both stochastically-initialized and WikiText2-pretrained LSTMs in preliminary experiments. 3.3 Training We trained two sets of dementia and control LSTM models. The first set was trained in order to replicate the findings of Fritsch et al. (2019), using the same RWTHLM package (Sundermeyer et al., 2014) and following their methods as closely as possible in accordance with the description provided in their paper. Each model's cross-entropy loss was optimized over 20 epochs with starting learning rate optimization performed on a heldout set of 10 transcripts. The second set was trained using the GluonNLP averaged stochastic gradient weight-dropped LSTM (standard-lstm-lm-200 architecture) model consisting of 2 LSTM layers with word embedding (tied at input and output) and hidden layers of 200 and 800 dimensions respectively (see Merity et al. (2017) for full details on model architecture). In training the GluonNLP models, the main departure from the methods used by Fritsch et al. (2019) involved not using a small heldout set of transcripts to optimize the learning rate, because we observed that the GluonNLP models converged well prior to the 20th epoch with a starting learning rate of 20, which was used for all stochastically initialized models. With pre-trained models we used a lower starting learning rate of 5 to preserve information during subsequent training on DementiaBank. All GluonNLP models were trained using a batch size of 20 and back propagation through time (BPTT) window size of 10. During testing, batch size was set to 1 and BPTT to the length of the transcript (tokens). Unseen transcript perplexity was calculated as e^loss. 3.4 Evaluation As subjects in the DementiaBank dataset participated in multiple assessments, there are multiple transcripts for most of the subjects. In order to avoid biasing the models to individual subjects, we followed the participant-level leave-one-out cross-validation (LOOCV) evaluation protocol of Fritsch et al. (2019), whereby all of the picture description transcripts for one participant are held out in turn for testing and the LMs are trained on the remaining transcripts. Perplexities of the LMs are then obtained on the heldout transcripts, resulting in two perplexity values per transcript, one from the LM trained on the dementia transcripts (Pdem) and one from the LM trained on the control transcripts (Pcon). Held-out transcripts were scored using these perplexity values, as well as by the difference (Pcon − Pdem) between them. 3.5 Interrogation of models For interrogation by perturbation, we estimated the perplexity of our models for each of the six synthetic narratives of Bird et al. (2000).
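Before turning to the interrogation analyses, the participant-level LOOCV protocol of §3.4 can be sketched as follows; train_lm and transcript_perplexity are hypothetical stand-ins for the actual LM toolkit calls (not implemented here), and roc_auc_score is used only to summarize the resulting Pcon − Pdem scores.

```python
from sklearn.metrics import roc_auc_score

def paired_perplexity_loocv(transcripts, train_lm, transcript_perplexity):
    """transcripts: list of dicts with keys 'participant', 'text', 'dementia' (bool).
    train_lm / transcript_perplexity: assumed callables wrapping the actual
    language modeling toolkit."""
    scores, labels = [], []
    for participant in {t["participant"] for t in transcripts}:
        held_out = [t for t in transcripts if t["participant"] == participant]
        rest = [t for t in transcripts if t["participant"] != participant]
        lm_dem = train_lm([t["text"] for t in rest if t["dementia"]])
        lm_con = train_lm([t["text"] for t in rest if not t["dementia"]])
        for t in held_out:
            p_con = transcript_perplexity(lm_con, t["text"])
            p_dem = transcript_perplexity(lm_dem, t["text"])
            scores.append(p_con - p_dem)   # higher -> more dementia-like
            labels.append(int(t["dementia"]))
    return roc_auc_score(labels, scores)
```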
We reasoned that an increase in Pcon and a decrease in Pdem as words are replaced by higher-frequency alternatives to simulate progressive lexical retrieval deficits would indicate that these models were indeed capturing AD-related linguistic changes. For interrogation by interpolation, we extracted the parameters from all layers of paired LSTM LMs after training, and averaged these as αLMdem + (1 − α)LMcon to create interpolated models. We hypothesized that a decrease in perplexity estimates for narratives emulating severe dementia would occur as α (the proportional contribution of LMdem) increases. 4 Results and Discussion The results of evaluating classification accuracy of the various language models are summarized in Table 1. The 95% confidence interval for GluonNLP models was calculated from perplexity means obtained across ten LOOCV iterations with random model weight initialization on each iteration. The RWTHLM package does not provide support for GPU acceleration and requires a long time to perform a single LOOCV iteration (approximately 10 days in our case). Since the purpose of using the RWTHLM package was to replicate the results previously reported by Fritsch et al. (2019) that were based on a single LOOCV iteration and we obtained the exact same AUC of 0.92 on our first LOOCV iteration with this approach, we did not pursue additional LOOCV iterations. However, we should note that we obtained an AUC of 0.92 for the difference between Pcon and Pdem on two of the ten LOOCV iterations with the GluonNLP LSTM model. Thus, we believe that the GluonNLP LSTM model has equivalent performance to the RWTHLM LSTM model.
MODEL | CONTROL AUC (95% CI) | DEMENTIA AUC (95% CI) | CONTROL-DEMENTIA AUC (95% CI)
RWTHLM LSTM | 0.80 (–) | 0.64 (–) | 0.92 (–)
GluonNLP LSTM | 0.80 (± 0.002) | 0.65 (± 0.002) | 0.91 (± 0.004)
Table 1: Classification accuracy using individual models' perplexities and their difference for various models.
Figure 1: Relationship between log frequency bands used to replace words in synthetic Cookie Theft picture descriptions to simulate degrees of semantic dementia and perplexity of LSTM language models trained on picture descriptions by controls and dementia patients.
Having replicated results of previously published studies and confirmed that using the difference in perplexities of LMs trained on narratives by controls and dementia patients is indeed the current state-of-the-art, we now turn to explaining why the difference between these LMs is much more successful than the individual models alone. First, we used the six “Cookie Theft” narratives designed to simulate semantic dementia to examine the relationship between Pcon and Pdem with GluonNLP LSTM LMs and log lexical frequency bands. The results of this analysis are illustrated in Figure 1 and show that Pdem is higher than Pcon on narratives in the lower log frequency bands (less simulated impairment) and lower in the higher log frequency bands (more simulated impairment). We confirmed these results by calculating mean log lexical frequency on all DementiaBank narratives and fitting a linear regression model to test for associations with perplexities of the two LMs. The regression model contained mean lexical frequency as the dependent variable and Pdem and Pcon as independent variables, adjusted for age, education and the length of the picture description narrative.
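A minimal sketch of the parameter averaging used for interrogation by interpolation (§3.5) is given below, assuming the two LMs share an identical architecture and that their parameters have been exported to name-keyed arrays; framework-specific loading and saving is omitted, and the example dictionaries are hypothetical.

```python
import numpy as np

def interpolate_parameters(params_dem, params_con, alpha):
    """Return alpha * LM_dem + (1 - alpha) * LM_con, layer by layer.

    params_dem, params_con: dicts mapping parameter names to numpy arrays
    with matching shapes (one entry per weight matrix or bias vector).
    """
    assert params_dem.keys() == params_con.keys()
    return {name: alpha * params_dem[name] + (1.0 - alpha) * params_con[name]
            for name in params_dem}

# alpha = 0.0 recovers the control LM, alpha = 1.0 the dementia LM,
# and alpha = 0.75 corresponds to the best-performing setting reported later.
dem = {"embedding": np.ones((4, 2)), "lstm_w": np.ones((2, 2))}
con = {"embedding": np.zeros((4, 2)), "lstm_w": np.zeros((2, 2))}
mixed = interpolate_parameters(dem, con, alpha=0.75)
```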
In order to avoid likely practice effects across multiple transcripts, we only used the transcript obtained on the initial baseline visit; however, we did confirm these results by using all transcripts to fit mixed effects models with random slopes and intercepts in order to account for the correlation between transcripts from the same subject (mixed effects modeling results not shown). The results demonstrate that the association between perplexity and lexical frequency is significant and positive for the control LM (coeff: 0.563, p < 0.001) and negative for the dementia LM (coeff: -0.543, p < 0.001). Age, years of education, and length of the narrative were not significantly associated with lexical frequency in this model. These associations show that the control LM and dementia LM are more “surprised” by narratives containing words of higher lexical frequency and lower lexical frequency, respectively. If the use of higher lexical frequency items on a picture description task portends a semantic deficit, then this particular pattern of results explains why it is the difference between the two models that is most sensitive to manifestations of dementia and suggests that there is a point at which the two models become equally “surprised” with a difference between their perplexities close to zero. In Figure 1, that point is between log lexical frequency bands of 2.0 and 2.5, corresponding to the mild to moderate degree of semantic impairment reported by Bird et al. (2000). Notably, in the clinical setting, the mild forms of dementia such as mild cognitive impairment and mild dementia are also particularly challenging and require integration of multiple sources of evidence for accurate diagnosis (Knopman and Petersen, 2014). The results of our interpolation studies are shown in Figure 2. Each point in the figure shows the average difference between the perplexity estimate of a perturbed transcript (Px) and the perplexity estimate for the unperturbed (Po: frequency band 0) sample for this model (we visualized this difference because perplexities at α=0.5 were generally higher, irrespective of whether component models were initialized stochastically or had pre-trained word embeddings in common; perplexities of α=0.75 models were slightly lower than those of their majority constituents). While all models tend to find the increasingly perturbed transcripts more perplexing than their minimally perturbed counterparts, this perplexity decreases with increasing contributions of the dementia LM. However, when only this model is used, relative perplexity of the perturbed transcripts increases. This indicates that the “pure” dementia LM may be responding to linguistic anomalies other than those reflecting lack of access to infrequently occurring terms.
Pcon − Pα | AUC, random (95% CI) | AUC, pre-trained (95% CI) | ACCeer, random (95% CI) | ACCeer, pre-trained (95% CI)
α = 0.25 | 0.842 ± 0.008 | 0.838 ± 0.015 | 0.689 ± 0.036 | 0.724 ± 0.034
α = 0.5 | 0.816 ± 0.009 | 0.813 ± 0.005 | 0.669 ± 0.035 | 0.665 ± 0.033
α = 0.75 | 0.931 ± 0.003 | 0.941 ± 0.006 | 0.854 ± 0.031 | 0.872 ± 0.010
α = 1.0 | 0.908 ± 0.004 | 0.930 ± 0.005 | 0.846 ± 0.023 | 0.839 ± 0.017
Table 2: Performance of randomly-instantiated and pre-trained (subword-based skipgram embeddings) interpolated “two perplexity” models across 10 repeated per-participant LOOCV runs. α indicates the proportional contribution of the dementia model. ACCeer gives the accuracy at equal error rate. Best results are in boldface, and results using the approach of Fritsch et al. (2019) are in italics.
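Since Table 2 reports both AUC and accuracy at the equal error rate, the following is a rough sketch of how such numbers can be computed from the Pcon − Pdem (or Pcon − Pα) scores; taking the operating point where false positive and false negative rates are closest is one common convention for the equal error rate, not necessarily the exact procedure used in these experiments, and the example labels and scores are hypothetical.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def auc_and_acc_at_eer(labels, scores):
    """labels: 1 for dementia, 0 for control; scores: e.g. Pcon - Pdem."""
    auc = roc_auc_score(labels, scores)
    fpr, tpr, thresholds = roc_curve(labels, scores)
    fnr = 1.0 - tpr
    i = int(np.argmin(np.abs(fpr - fnr)))        # operating point where FPR ~= FNR
    preds = (np.asarray(scores) >= thresholds[i]).astype(int)
    acc_eer = float((preds == np.asarray(labels)).mean())
    return auc, acc_eer

labels = [1, 1, 1, 0, 0, 0]
scores = [2.3, 1.1, 0.4, 0.6, -0.2, -1.5]
print(auc_and_acc_at_eer(labels, scores))
```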
We reasoned that on account of this, the α=0.75 model may provide a better representation of dementia-related linguistic changes. To evaluate this hypothesis, we assessed the effects on performance of replacing the dementia model with this interpolated model. The results of these experiments (Table 2) reveal improvements in performance with this approach, with best AUC (0.941) and accuracy at equal error rate (0.872) resulting from the combination of interpolation with pre-trained word embeddings (simply weighting the difference in model perplexities does not perform as well as interpolating model weights, with at best a 0.001 improvement in AUC over the baseline). That pre-trained embeddings further improve performance is consistent with the observation that the elevation in perplexity when transitioning from α=0.75 to α=1.0 is much less pronounced in these models (Figure 3). These results are significantly better than those reported by Fritsch et al. (2019), and our reimplementation of their approach. These improvements in performance appear to be attributable to a smoothing effect on the perplexity of the modified dementia models in response to unseen dementia cases. Over ten repeated LOOCV iterations, average perplexity on held-out dementia cases was significantly lower than that of the baseline ‘dementia’ model (51.1 ± 0.81) for both the α=0.75 (47.3 ± 0.32) and pre-trained embeddings (44.8 ± 0.53) models. This trend is further accentuated with the severity of dementia - for transcripts corresponding to a mini-mental state exam (MMSE) ≤ 10 (n=16), average perplexities are 148.29 ± 7.69, 105.01 ± 3.48 and 121.86 ± 7.67 for baseline ‘dementia’, α=0.75 and pre-trained embeddings models, respectively. In both cases, average perplexity of the interpolated (α=0.75) pretrained embeddings model fell between those of the exclusively pre-trained (lowest overall) and exclusively interpolated (lowest in severe cases) models.
Figure 2: Stochastically initialized models. Elevation in perplexity over unperturbed transcript (Po) with the proportional contribution of a dementia model (α) to an interpolated model. Each point is the mean of 268 (held-out participants) data points. Error bars are not shown as they do not exceed the bounds of the markers.
Figure 3: Pretrained word embeddings. Elevation in perplexity over unperturbed transcript (Po) with the proportional contribution of a dementia model (α) to an interpolated model. Each point is the average of 268 data points, and error bars are not shown as they do not exceed the bounds of the markers.
A practical issue for automated methods to detect dementia concerns establishing their accuracy at earlier stages of disease progression, where a readily disseminable screening tool would arguably have greatest clinical utility, especially in the presence of an effective disease-modifying therapy. To this end, Fritsch et al. (2019) defined a “screening scenario” in which evaluation was limited to participants with a last available MMSE of 21 or more, which corresponds to a range of severity encompassing mild, questionable or absent dementia (Perneczky et al., 2006). In this scenario, classification accuracy of the ‘paired perplexity’ LSTM-based model was only slightly lower (AUC: 0.87) than the accuracy on the full range of cognitive impairment (AUC: 0.92). We found similar performance with our models.
When limiting evaluation to those participants with a last-recorded MMSE ≥ 21, average AUCs across 10 LOOCV iterations were 0.836 ± 0.014, 0.879 ± 0.01, 0.893 ± 0.004, and 0.899 ± 0.012 for the baseline (Fritsch et al., 2019), pretrained embeddings, interpolated (α=0.75), and interpolated (α=0.75) with pretrained embeddings variants, respectively. These results support the notion that paired neural LMs can be used effectively to screen for possible dementia at earlier stages of cognitive impairment. The contributions of our work can be summarized as follows. First, our results demonstrate that the relationship between LM perplexity and lexical frequency is consistent with the phenomenology of DAT and its deleterious effects on patients' vocabulary. We show that the “two perplexities” approach is successful at distinguishing between cases and controls in the DementiaBank corpus because of its ability to capture specifically linguistic manifestations of the disease. Second, we observe that interpolating between dementia and control LMs mitigates the tendency of dementia-based LMs to be “surprised” by transcripts indicating severe dementia, which is detrimental to performance when the difference between these LMs is used as a basis for classification. In addition, we find a similar smoothing effect when using pre-trained word embeddings in place of a randomly instantiated word embedding layer. Finally, we develop a modification of Fritsch et al.'s “two perplexity” approach that is consistent with these observations - replacing the dementia model with an interpolated variant, and introducing pre-trained word embeddings at the embedding layer. Both modifications exhibit significant improvements in performance, with best results obtained by using them in tandem. Though not strictly comparable on account of differences in segmentation of the corpus, among other factors, we note the performance obtained also exceeds that reported with models trained on text alone in prior research. Code to reproduce the results of our experiments is available on GitHub11. While using transcript text directly is appealing in its simplicity, others have reported substantial improvements in performance when POS tags and paralinguistic features are incorporated, suggesting fruitful directions for future research. Furthermore, prior work on using acoustic features shows that they can contribute to discriminative models (König et al., 2015); however, DementiaBank audio is challenging for acoustic analysis due to poor quality and background noise. Lastly, while our results do support the claim that classification occurs on the basis of dementia-specific linguistic anomalies, we also acknowledge that DementiaBank remains a relatively small corpus by machine learning standards, and that more robust validation would require additional datasets. 5 Conclusion We offer an empirical explanation for the success of the difference between neural LM perplexities in discriminating between DAT patients and controls, involving lexical frequency effects. Interrogation of control- and dementia-based LMs using synthetic transcripts and interpolation of parameters reveals inconsistencies harmful to model performance that can be remediated by incorporating interpolated models and pre-trained embeddings, with significant performance improvements. Acknowledgments This research was supported by Administrative Supplement R01 LM011563 S1 from the National Library of Medicine.
11https://github.com/treversec/tale of two perplexities 1954 References Daniel C Aguirre-Acevedo, Francisco Lopera, Eliana Henao, Victoria Tirado, Claudia Mu˜noz, Margarita Giraldo, Shrikant I Bangdiwala, Eric M Reiman, Pierre N Tariot, Jessica B Langbaum, et al. 2016. Cognitive decline in a colombian kindred with autosomal dominant alzheimer disease: a retrospective cohort study. JAMA neurology, 73(4):431–438. Amit Almor, Daniel Kempler, Maryellen C. MacDonald, Elaine S. Andersen, and Lorraine K. Tyler. 1999a. Why do Alzheimer patients have difficulty with pronouns? Working memory, semantics, and reference in comprehension and production in Alzheimer’s disease. Brain and language, 67(3):202–227. Amit Almor, Daniel Kempler, Maryellen C. MacDonald, Elaine S. Andersen, and Lorraine K. Tyler. 1999b. Why do alzheimer patients have difficulty with pronouns? working memory, semantics, and reference in comprehension and production in alzheimer’s disease. Brain and Language, 67(3):202 – 227. Lori JP Altmann and Jill S McClung. 2008. Effects of semantic impairment on language use in alzheimer’s disease. In Seminars in Speech and Language, 01, pages 018–031. c⃝Thieme Medical Publishers. Alzheimer’s Association. 2018. 2018 Alzheimer’s disease facts and figures. Alzheimer’s & Dementia, 14(3):367–429. Arlene J. Astell and Trevor A. Harley. 1996. Tip-of-thetongue states and lexical access in dementia. Brain and Language, 54(2):196 – 215. James T Becker, Franc¸ois Boiler, Oscar L Lopez, Judith Saxton, and Karen L McGonigle. 1994. The natural history of alzheimer’s disease: description of study cohort and accuracy of diagnosis. Archives of Neurology, 51(6):585–594. Shauna Berube, Jodi Nonnemacher, Cornelia Demsky, Shenly Glenn, Sadhvi Saxena, Amy Wright, Donna C Tippett, and Argye E Hillis. 2018. Stealing cookies in the twenty-first century: Measures of spoken narrative in healthy versus speakers with aphasia. American journal of speech-language pathology, 28(1S):321–329. H Bird, MA Lambon Ralph, K Patterson, and JR Hodges. 2000. The rise and fall of frequency and imageability: how the progression of semantic dementia impacts on noun and verb production in the cookie theft description. Brain and Language, 73.:17 – 49. Linda Boise, Richard Camicioli, David L Morgan, Julia H Rose, and Leslie Congleton. 1999. Diagnosing dementia: perspectives of primary care physicians. The Gerontologist, 39(4):457–464. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146. John Bond, C Stave, A Sganga, O Vincenzino, B O’connell, and RL Stanley. 2005. Inequalities in dementia care across europe: key findings of the facing dementia survey. International Journal of Clinical Practice, 59:8–14. Thorsten Brants. 2000. Tnt - a statistical part-of-speech tagger. Marc Brysbaert and Boris New. 2009. Moving beyond kuˇcera and francis: A critical evaluation of current word frequency norms and the introduction of a new and improved word frequency measure for american english. Behavior Research Methods, 41(4):977– 990. Trevor Cohen and Dominic Widdows. 2018. Bringing order to neural word embeddings with embeddings augmented by random permutations (earp). In Proceedings of the 22nd Conference on Computational Natural Language Learning, pages 465–475. Claudia Cooper, Paul Bebbington, James Lindesay, Howard Meltzer, Sally McManus, Rachel Jenkins, and Gill Livingston. 2011. 
The meaning of reporting forgetfulness: a cross-sectional study of adults in the English 2007 Adult Psychiatric Morbidity Survey. Age and ageing, 40(6):711–717. Jeffrey L Elman. 1990. Finding structure in time. Cognitive science, 14(2):179–211. W. N. Francis and H. Kucera. 1979. Brown corpus manual. Technical report, Department of Linguistics, Brown University, Providence, Rhode Island, US. Kathleen Fraser, Jed Meltzer, and Frank Rudzicz. 2015. Linguistic features identify alzheimer’s disease in narrative speech. Journal of Alzheimer’s disease : JAD, 49. Kathleen C Fraser, Jed A Meltzer, and Frank Rudzicz. 2016. Linguistic features identify alzheimer’s disease in narrative speech. Journal of Alzheimer’s Disease, 49(2):407–422. Julian Fritsch, Sebastian Wankerl, and Elmar N¨oth. 2019. Automatic diagnosis of alzheimer’s disease using neural network language models. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5841–5845. IEEE. Elaine Giles, Karalyn Patterson, and John R. Hodges. 1996. Performance on the Boston Cookie theft picture description task in patients with early dementia of the Alzheimer’s type: Missing information. Aphasiology, 10(4):395–408. 1955 Yoav Goldberg. 2016. A primer on neural network models for natural language processing. Journal of Artificial Intelligence Research, 57:345–420. Harold Goodglass. 2000. Boston diagnostic aphasia examination: Short form record booklet. Lippincott Williams & Wilkins. Daniel B. Hier, Karen Hagenlocker, and Andrea Gellin Shindler. 1985. Language disintegration in dementia: Effects of etiology and severity. Brain and Language, 25(1):117–133. Sepp Hochreiter. 1998. The vanishing gradient problem during learning recurrent neural nets and problem solutions. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 6(02):107–116. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Frederick Jelinek, Robert Mercer, L R Bahl, and J K Baker. 1977. Perplexity - a measure of the difficulty of speech recognition tasks. Journal of the Acoustical Society of America, 62:S63. J¨org D. Jescheniak and Willem J. M. Levelt. 1994. Word frequency effects in speech production: Retrieval of syntactic information and of phonological form. Journal of Experimental Psychology: Learning, Memory, and Cognition, 20(4):824–843. Michael I Jordan. 1986. Serial order: A parallel distributed processing approach. Technical report, CALIFORNIA UNIV SAN DIEGO LA JOLLA INST FOR COGNITIVE SCIENCE. Sweta Karlekar, Tong Niu, and Mohit Bansal. 2018. Detecting linguistic characteristics of alzheimer’s dementia by interpreting neural models. arXiv preprint arXiv:1804.06440. Philipp Klumpp, Julian Fritsch, and Elmar N¨oth. 2018. Ann-based alzheimer’s disease classification from bag of words. In Speech Communication; 13th ITGSymposium, pages 1–4. VDE. David S. Knopman and Ronald C. Petersen. 2014. Mild cognitive impairment and mild dementia: A clinical perspective. Mayo Clinic Proceedings, 89(10):1452 – 1459. Alexandra K¨onig, Aharon Satt, Alexander Sorin, Ron Hoory, Orith Toledo-Ronen, Alexandre Derreumaux, Valeria Manera, Frans Verhey, Pauline Aalten, Phillipe H. Robert, and Renaud David. 2015. Automatic speech analysis for the assessment of patients with predementia and alzheimer’s disease. Alzheimer’s & Dementia: Diagnosis, Assessment & Disease Monitoring, 1(1):112–124. Willem J. M. Levelt. 2001. Spoken word production: A theory of lexical access. 
Proceedings of the National Academy of Sciences, 98(23):13464–13471. Brian Macwhinney. 2000. The childes project: Tools for analyzing talk (third edition): Volume i: Transcription format and programs, volume ii: The database. Computational Linguistics - COLI, 26:657–657. Alex Martin and Linda L Chao. 2001. Semantic memory and the brain: structure and processes. Current opinion in neurobiology, 11(2):194–201. Stephen Merity, Nitish Shirish Keskar, and Richard Socher. 2017. Regularizing and optimizing lstm language models. Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2016. Pointer sentinel mixture models. arXiv preprint arXiv:1609.07843. Tom´aˇs Mikolov, Martin Karafi´at, Luk´aˇs Burget, Jan ˇCernock`y, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Eleventh Annual Conference of the International Speech Communication Association. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119. Sylvester O Orimaye, Jojo SM Wong, Karen J Golden, Chee P Wong, and Ireneous N Soyiri. 2017. Predicting probable alzheimer’s disease using linguistic deficits and biomarkers. BMC bioinformatics, 18(1):34. Sylvester Olubolu Orimaye, Jojo Sze-Meng Wong, and Karen Jennifer Golden. 2014. Learning predictive linguistic features for alzheimer’s disease and related dementias using verbal utterances. In Proceedings of the Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal to Clinical Reality, pages 78–87. Sylvester Olubolu Orimaye, Jojo Sze-Meng Wong, and Chee Piau Wong. 2018. Deep language space neural network for classifying mild cognitive impairment and alzheimer-type dementia. PloS one, 13(11):e0205636. Serguei V. S. Pakhomov, Lynn E. Eberly, and David S. Knopman. 2018. Recurrent perseverations on semantic verbal fluency tasks as an early marker of cognitive impairment. Journal of Clinical and Experimental Neuropsychology, 40(8):832–840. Serguei V S Pakhomov, Glenn E Smith, Dustin Chacon, Yara Feliciano, Neill Graff-Radford, Richard Caselli, and David S Knopman. 2010a. Computerized analysis of speech and language to identify psycholinguistic correlates of frontotemporal lobar degeneration. Cognitive and behavioral neurology : official journal of the Society for Behavioral and Cognitive Neurology, 23(3):165–177. 1956 Serguei VS Pakhomov, Glenn E Smith, Susan Marino, Angela Birnbaum, Neill Graff-Radford, Richard Caselli, Bradley Boeve, and David S Knopman. 2010b. A computerized technique to assess language use patterns in patients with frontotemporal dementia. Journal of neurolinguistics, 23(2):127– 144. Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2012. Understanding the exploding gradient problem. CoRR, abs/1211.5063, 2. Seija Pekkala, Debra Wiener, Jayandra J. Himali, Alexa S. Beiser, Loraine K. Obler, Yulin Liu, Ann McKee, Sanford Auerbach, Sudha Seshadri, Philip A. Wolf, and Rhoda Au. 2013. Lexical retrieval in discourse: An early indicator of alzheimer’s dementia. Clinical Linguistics & Phonetics, 27(12):905–921. PMID: 23985011. Robert Perneczky, Stefan Wagenpfeil, Katja Komossa, Timo Grimmer, Janine Diehl, and Alexander Kurz. 2006. Mapping scores onto stages: mini-mental state examination and clinical dementia rating. The American journal of geriatric psychiatry, 14(2):139– 144. Emily Prud’hommeaux and Brian Roark. 2015. 
Graphbased word alignment for clinical language evaluation. Computational Linguistics, 41(4):549–578. Kumar B Rajan, Robert S Wilson, Jennifer Weuve, Lisa L Barnes, and Denis A Evans. 2015. Cognitive impairment 18 years before clinical diagnosis of alzheimer disease dementia. Neurology, 85(10):898–904. Brian Roark, Margaret Mitchell, and Kristy Hollingshead. 2007. Syntactic complexity measures for detecting mild cognitive impairment. In Biological, translational, and clinical language processing, pages 1–8. Brian Roark, Margaret Mitchell, John-Paul Hosom, Kristy Hollingshead, and Jeffrey Kaye. 2011. Spoken language derived measures for detecting mild cognitive impairment. IEEE transactions on audio, speech, and language processing, 19(7):2081–2090. Jonathan D. Rohrer, William D. Knight, Jane E. Warren, Nick C. Fox, Martin N. Rossor, and Jason D. Warren. 2007. Word-finding difficulty: a clinical analysis of the progressive aphasias. Brain, 131(1):8–38. Laura Stokes, Helen Combes, and Graham Stokes. 2015. The dementia diagnosis: a literature review of information, understanding, and attributions. Psychogeriatrics, 15(3):218–225. M. Sundermeyer, R. Schl¨uter, and Hermann Ney. 2014. Rwthlm - the rwth aachen university neural network language modeling toolkit. Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH, pages 2093–2097. Martin Sundermeyer, Ralf Schl¨uter, and Hermann Ney. 2012. Lstm neural networks for language modeling. In Thirteenth annual conference of the international speech communication association. Sebastian Wankerl, Elmar N¨oth, and Stefan Evert. 2016. An analysis of perplexity to reveal the effects of alzheimer’s disease on language. In Speech Communication; 12. ITG Symposium; Proceedings of, pages 1–5. VDE. Sebastian Wankerl, Elmar N¨oth, and Stefan Evert. 2017. An n-gram based approach to the automatic diagnosis of alzheimer’s disease from spoken language. In INTERSPEECH, pages 3162–3166. Jochen Weiner and Tanja Schultz. 2018. Automatic screening for transition into dementia using speech. In Speech Communication; 13th ITG-Symposium, pages 1–5. VDE. Robert S. Wilson, Lynd D. Bacon, Jacob H. Fox, Richard L. Kramer, and Alfred W. Kaszniak. 1983. Word frequency effect and recognition memory in dementia of the alzheimer type. Journal of Clinical Neuropsychology, 5(2):97–104. Maria Yancheva and Frank Rudzicz. 2016. Vectorspace topic models for detecting alzheimer’s disease. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 2337–2346. 1957
2020
176
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1958–1969 July 5 - 10, 2020. ©2020 Association for Computational Linguistics 1958 Probing Linguistic Systematicity Emily Goodwin,∗1,5 Koustuv Sinha,2,3,5 Timothy J. O’Donnell1,4,5 1Department of Linguistics, 2School of Computer Science, McGill University, Canada 3Facebook AI Research (FAIR), Montreal 4Canada CIFAR AI Chair, Mila 5Quebec Artificial Intelligence Institute (Mila) {emily.goodwin, koustuv.sinha}@mail.mcgill.ca [email protected] (∗Corresponding author) Abstract Recently, there has been much interest in the question of whether deep natural language understanding models exhibit systematicity—generalizing such that units like words make consistent contributions to the meaning of the sentences in which they appear. There is accumulating evidence that neural models often generalize non-systematically. We examined the notion of systematicity from a linguistic perspective, defining a set of probes and a set of metrics to measure systematic behaviour. We also identified ways in which network architectures can generalize non-systematically, and discuss why such forms of generalization may be unsatisfying. As a case study, we performed a series of experiments in the setting of natural language inference (NLI), demonstrating that some NLU systems achieve high overall performance despite being non-systematic. 1 Introduction Language allows us to express and comprehend a vast variety of novel thoughts and ideas. This creativity is made possible by compositionality—the linguistic system builds utterances by combining an inventory of primitive units such as morphemes, words, or idioms (the lexicon), using a small set of structure-building operations (the grammar; Carnap, 1947; Fodor and Pylyshyn, 1988; Hodges, 2012; Janssen et al., 2012; Lake et al., 2017b; Szabó, 2012; Zadrozny, 1994; Lake et al., 2017a). One property of compositional systems, widely studied in the cognitive sciences, is the phenomenon of systematicity. Systematicity refers to the fact that lexical units such as words make consistent contributions to the meaning of the sentences in which they appear. Fodor and Pylyshyn
In such cases, generalization may be based on local heuristics (McCoy et al., 2019b; Niven and Kao, 2019), variegated similarity (Albright and Hayes, 2003), or local approximations (Veldhoen and Zuidema, 2017), where the contribution of individual units to the meaning of the sentence can vary greatly across sentences, interacting with other units in highly inconsistent and complex ways. This paper introduces several novel probes for testing systematic generalization. We employ an artificial language to have control over systematicity and contextual meaning variation. Applying our probes to this language in an NLI setting reveals 1959 that some deep learning systems which achieve very high accuracy on standard holdout evaluations do so in ways which are non-systematic: the networks do not consistently capture the basic notion that certain classes of words have meanings which are consistent across the contexts in which they appear. The rest of the paper is organized as follows. §2 discusses degrees of systematicity and contextually conditioned variation; §3 introduces the distinction between open- and closed-class words, which we use in our probes. §5 introduces the NLI task and describes the artificial language we use; §6 discusses the models that we tested and the details of our training setup; §7 introduces our probes of systematicity and results are presented in §8.1 2 Systematicity and Contextual Conditioning Compositionality is often stated as the principle that the meaning of an utterance is determined by the meanings of its parts and the way those parts are combined (see, e.g., Heim and Kratzer, 2000). Systematicity, the property that words mean the same thing in different contexts, is closely related to compositionality; nevertheless, compositional systems can vary in their degree of systematicity. At one end of the spectrum are systems in which primitive units contribute exactly one identical meaning across all contexts. This high degree of systematicity is approached by artificial formal systems including programming languages and logics, though even these systems don’t fully achieve this ideal (Cantwell Smith, 1996; Dutilh Novaes, 2012). The opposite of systematicity is the phenomenon of contextually conditioned variation in meaning where the contribution of individual words varies according to the sentential contexts in which they appear. Natural languages exhibit such context dependence in phenomena like homophony, polysemy, multi-word idioms, and co-compositionality. Nevertheless, there are many words in natural language—especially closed-class words like quantifiers (see below)—which exhibit very little variability in meaning across sentences. At the other end of the spectrum from programming languages and logics are systems where many or most meanings are highly context dependent. 1Code for datasets and models can be found here: https://github.com/emilygoodwin/systematicity The logical extreme—a system where each word has a different and unrelated meaning every time it occurs—is clearly of limited usefulness since it would make generalization impossible. Nevertheless, learners with sufficient memory capacity and flexibility of representation, such as deep learning models, can learn systems with very high degrees of contextual conditioning—in particular, higher than human language learners. An important goal for building systems that learn and generalize like people is to engineer systems with inductive biases for the right degree of systematicity. 
In §8, we give evidence that some neural systems are likely too biased toward allowing contextually conditioned meaning variability for words, such as quantifiers, which do not vary greatly in natural language. 3 Compositional Structure in Natural Language Natural language distinguishes between content or open-class lexical units and function or closed-class lexical units. The former refers to categories, such as nouns and verbs, which carry the majority of contentful meaning in a sentence and which permit new coinages. Closed-class units, by contrast, carry most of the grammatical structure of the sentence and consist of things like inflectional morphemes (like pluralizing -s in English) and words like determiners, quantifiers, and negation (e.g., all, some, the in English). These are mostly fixed; adult speakers do not coin new quantifiers, for example, the way that they coin new nouns. Leveraging this distinction gives rise to the possibility of constructing probes based on jabberwocky-type sentences. This term references the poem Jabberwocky by Lewis Carroll, which combines nonsense open-class words with familiar closed-class words in a way that allows speakers to recognize the expression as well formed. For example, English speakers identify a contradiction in the sentence All Jabberwocks flug, but some Jabberwocks don't flug, without a meaning for jabberwock and flug. This is possible because we expect the words all, some, but, and don't to contribute the same meaning as they do when combined with familiar words, like All pigs sleep, but some pigs don't sleep. Using jabberwocky-type sentences, we tested the generalizability of certain closed-class word representations learned by neural networks. Giving the networks many examples of each construction with a large variety of different content words—that is, large amounts of highly varied evidence about the meaning of the closed-class words—we asked during the test phase how fragile this knowledge is when transferred to new open-class words. That is, our probes combine novel open-class words with familiar closed-class words, to test whether the closed-class words are treated systematically by the network. For example, we might train the networks to identify contradictions in pairs like All pigs sleep; some pigs don't sleep, and test whether the network can identify the contradiction in a pair like All Jabberwocks flug; some Jabberwocks don't flug. A systematic learner would reliably identify the contradiction, whereas a non-systematic learner may allow the closed-class words (all, some, don't) to take on contextually conditioned meanings that depend on the novel context words. 4 Related Work There has been much interest in the problem of systematic generalization in recent years (Bahdanau et al., 2019; Bentivogli et al., 2016; Lake et al., 2017a,b; Gershman and Tenenbaum, 2015; McCoy et al., 2019a; Veldhoen and Zuidema, 2017; Soulos et al., 2019; Prasad et al., 2019; Richardson et al., 2019; Johnson et al., 2017, inter alia). In contrast to our approach (testing novel words in familiar combinations), many of these studies probe systematicity by testing familiar words in novel combinations. Lake and Baroni (2018) adopt this approach in semantic parsing with an artificial language known as SCAN. Dasgupta et al. (2018, 2019) introduce a naturalistic NLI dataset, with test items that shuffle the argument structure of natural language utterances. In the inductive logic programming domain, Sinha et al.
(2019) introduced the CLUTRR relational-reasoning benchmark. The novel-combinations-of-familiar-words approach was formalized in the CFQ dataset and associated distribution metric of Keysers et al. (2019). Ettinger et al. (2018) introduced a semantic-role-labeling and negation-scope labeling dataset, which tests compositional generalization with novel combinations of familiar words and makes use of syntactic constructions like relative clauses. Finally, Kim et al. (2019) explore pre-training schemes' abilities to learn prepositions and wh-words with syntactic transformations (two kinds of closed-class words which our work does not address). A different type of systematicity analysis directly investigates learned representations, rather than developing probes of model behavior. This is done either through visualization (Veldhoen and Zuidema, 2017), training a second network to approximate learned representations using a symbolic structure (Soulos et al., 2019) or as a diagnostic classifier (Giulianelli et al., 2018), or reconstructing the semantic space through similarity measurements over representations (Prasad et al., 2019). 5 Study Setup 5.1 Natural Language Inference We make use of the natural language inference (NLI) task to study the question of systematicity. The NLI task is to infer the relation between two sentences (the premise and the hypothesis). Sentence pairs must be classified into one of a set of predefined logical relations such as entailment or contradiction. For example, the sentence All mammals growl entails the sentence All pigs growl. A rapidly growing number of studies have shown that deep learning models can achieve very high performance in this setting (Evans et al., 2018; Conneau et al., 2017; Bowman et al., 2014; Yoon et al., 2018; Kiela et al., 2018; Munkhdalai and Yu, 2017; Rocktäschel et al., 2015; Peters et al., 2018; Parikh et al., 2016; Zhang et al., 2018; Radford et al., 2018; Devlin et al., 2018; Storks et al., 2019). 5.2 Natural Logic We adopt the formulation of NLI known as natural logic (MacCartney and Manning, 2014, 2009; Lakoff, 1970). Natural logic makes use of seven logical relations between pairs of sentences. These are shown in Table 1. These relations can be interpreted as the set-theoretic relationship between the extensions of the two expressions. For instance, if the expressions are the simple nouns warthog and pig, then the entailment relation (⊏) holds between these extensions (warthog ⊏ pig) since every warthog is a kind of pig. For higher-order operators such as quantifiers, relations can be defined between sets of possible worlds. For instance, the set of possible worlds consistent with the expression All blickets wug is a subset of the set of possible worlds consistent with the logically weaker expression All red blickets wug. Critically, the relationship between composed expressions such as All X Y and All P Q is determined entirely by the relations between X/Y and P/Q, respectively. Thus, natural logic allows us to compute the relation between the whole expressions using the relations between parts. We define an artificial language in which such alignments are easy to compute, and use this language to probe deep learning systems' ability to generalize systematically.
Symbol | Name | Example | Set-theoretic definition
x ≡ y | equivalence | pig ≡ pig | x = y
x ⊏ y | forward entailment | pig ⊏ mammal | x ⊂ y
x ⊐ y | reverse entailment | mammal ⊐ pig | x ⊃ y
x ∧ y | negation | pig ∧ not pig | x ∩ y = ∅ ∧ x ∪ y = U
x | y | alternation | pig | cat | x ∩ y = ∅ ∧ x ∪ y ≠ U
x ⌣ y | cover | mammal ⌣ not pig | x ∩ y ≠ ∅ ∧ x ∪ y = U
x # y | independence | hungry # warthog | (all other cases)
Table 1: MacCartney and Manning (2009)'s implementation of natural logic relations
5.3 The Artificial Language In our artificial language, sentences are generated according to the six-position template shown in Table 2, and include a quantifier (position 1), noun (position 3), and verb (position 6), with optional pre- and post-modifiers (positions 2 and 4) and optional negation (position 5). For readability, all examples in this paper use real English words; however, simulations can use uniquely identified abstract symbols (i.e., generated by gensym). We compute the relation between position-aligned pairs of sentences in our language using the natural logic system (described in §5.2). Quantifiers and negation have their usual natural-language semantics in our artificial language; pre- and post-modifiers are treated intersectively. Open-class items (nouns and verbs) are organized into linear hierarchical taxonomies, where each open-class word is the sub- or super-set of exactly one other open-class item in the same taxonomy. For example, since dogs are all mammals, and all mammals are animals, they form the entailment hierarchy dogs ⊏ mammals ⊏ animals. We vary the number of distinct noun and verb taxonomies according to an approach we refer to as block structure, described in the next section. 5.4 Block Structure In natural language, most open-class words do not appear with equal probability with every other word. Instead, their distribution is biased and clumpy, with words in similar topics occurring together. To mimic such topic structure, we group nouns and verbs into blocks. Each block consists of six nouns and six verbs, which form taxonomic hierarchies (e.g., lizards/animals, run/move). Nouns and verbs from different blocks have no taxonomic relationship (e.g., lizards and screwdrivers or run and read) and do not co-occur in the same sentence pair. Because each block includes six verbs and six nouns in a linear taxonomic hierarchy, no single block is intrinsically harder to learn than any other block. The same set of closed-class words appears with all blocks of open-class words, and their meanings are systematic regardless of the open-class words (nouns and verbs) they are combined with. For example, the quantifier some has a consistent meaning when it is applied to some screwdrivers or some animals. Because closed-class words are shared across blocks, models are trained on extensive and varied evidence of their behaviour. We present closed-class words in a wide variety of sentential contexts, with a wide variety of different open-class words, to provide maximal pressure against overfitting and maximal evidence of their consistent meaning. 5.5 Test and Train Structure We now describe the structure of our training blocks, holdout test set, and jabberwocky blocks. We also discuss our two test conditions, and several other issues that arise in the construction of our dataset. Training set: For each training block, we sampled (without replacement) one sentence pair for every possible combination of open-class words, that is, every combination of nouns and verbs ⟨noun1, noun2, verb1, verb2⟩.
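To make the compositional calculation concrete, here is a highly simplified sketch of how a relation between two quantified sentences of the form "QUANT NOUN VERB" could be derived from the lexical relations, using only the equivalence/forward-entailment/reverse-entailment fragment of Table 1 and the monotonicity of the two quantifiers; the fallback to independence when projections conflict is a simplification of MacCartney and Manning's full join operation, not a reimplementation of it.

```python
# Simplified sketch: projecting lexical relations through quantifier
# monotonicity for sentences of the form "QUANT NOUN VERB".
FLIP = {"=": "=", "<": ">", ">": "<"}   # reverse a containment relation

# Monotonicity of each argument position: +1 upward, -1 downward.
MONOTONICITY = {
    "all":  (-1, +1),   # downward in the restrictor, upward in the scope
    "some": (+1, +1),   # upward in both arguments
}

def project(relation, direction):
    """Project a lexical relation (=, <, >) through a monotone position."""
    return relation if direction > 0 else FLIP[relation]

def sentence_relation(quant, noun_rel, verb_rel):
    """Relation between 'quant N1 V1' and 'quant N2 V2' given the
    lexical relations noun_rel = rel(N1, N2) and verb_rel = rel(V1, V2)."""
    mono_n, mono_v = MONOTONICITY[quant]
    projected = {project(noun_rel, mono_n), project(verb_rel, mono_v)}
    projected.discard("=")          # equivalence is the identity for joining
    if not projected:
        return "="                  # both arguments equivalent
    if len(projected) == 1:
        return projected.pop()      # consistent direction of entailment
    return "#"                      # mixed directions: treated as undetermined here

# "All mammals growl" vs. "All pigs growl": rel(mammal, pig) = ">"
print(sentence_relation("all", ">", "="))   # "<"  (forward entailment)
```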
Closed-class words were sampled uniformly to fill each remaining position in the sentence (see Table 2). A random subset of 20% of training items was reserved for validation (early stopping) and not used during training.

Position   1            2                     3            4                      5          6
Category   quantifier   nominal premodifier   noun         nominal postmodifier   negation   verb
Status     Obligatory   Optional              Obligatory   Optional               Optional   Obligatory
Class      Closed       Closed                Open         Closed                 Closed     Open
Example    All          brown                 dogs         that bark              don't      run

Table 2: A template for sentences in the artificial language. Each sentence fills the obligatory positions 1, 3, and 6 with a word: a quantifier, noun, and verb. Optional positions (2, 4, and 5) are filled by either a word (adjective, postmodifier, or negation) or by the empty string. Closed-class categories (quantifiers, adjectives, postmodifiers, and negation) do not include novel words, while open-class categories (nouns and verbs) include novel words that are only exposed in the test set.

Holdout test set: For each training block, we sampled a holdout set of forms using the same nouns and verbs, but disjoint from the training set just described. The sampling procedure was identical to that for the training blocks. These holdout items allow us to test the generalization of the models with known words in novel configurations (see §8.1).

Jabberwocky test set: Each jabberwocky block consisted of novel open-class items (i.e., nouns and verbs) that did not appear in training blocks. For each jabberwocky block, we began by following a sampling procedure identical to that for the training/holdout sets with these new words. Several of our systematicity probes are based on the behavior of neighboring pairs of test sentences (see §7). To ensure that all such necessary pairs were in the jabberwocky test set, we extended the initial sample with any missing test items.

Training conditions: Since a single set of closed-class words is used across all blocks, adding more blocks increases evidence of the meaning of these words without encouraging overfitting. To study the effect of increasing evidence in this manner, we use two training conditions: small with 20 training blocks and large with 185 training blocks. Both conditions contained 20 jabberwocky blocks. The small condition consisted of 51,743 training, 10,399 validation, and 3,694,005 test (holdout and jabberwocky) pairs. The large condition consisted of 478,649 training, 96,005 validation, and 3,694,455 test items.

Balancing: One consequence of the sampling method is that logical relations will not be equally represented in training. In fact, it is impossible to simultaneously balance the distributions of syntactic constructions, logical relations, and instances of words. In this trade-off, we chose to balance the distribution of open-class words in the vocabulary, as we are focused primarily on the ability of neural networks to generalize closed-class word meaning. Balancing instances of open-class words provided the greatest variety of learning contexts for the meanings of the closed-class items.

6 Simulations

6.1 Models

We analyze performance on four simple baseline models known to perform well on standard NLI tasks, such as the Stanford Natural Language Inference dataset (Bowman et al., 2015). Following Conneau et al. (2017), the hypothesis u and premise v are individually encoded by neural sequence encoders such as a long short-term memory (LSTM; Hochreiter and Schmidhuber, 1997) or gated recurrent unit (GRU; Cho et al., 2014).
These vectors, together with their element-wise product u ∗ v and element-wise difference u − v, are fed into a fully connected multilayer perceptron to predict the relation. The encodings u and v are produced from an input sentence of M words, w1, . . . , wM, using a recurrent neural network, which produces a set of M hidden representations h1, . . . , hM, where ht = f(w1, . . . , wt). The sequence encoding is represented by its last hidden vector hM.

The simplest of the four models sets f to be a bidirectional gated recurrent unit (BGRU). This model concatenates the last hidden state of a GRU run forwards over the sequence and the last hidden state of a GRU run backwards over the sequence, for example, u = [←−hM, −→hM].

Our second embedding system is the InferSent model reported by Conneau et al. (2017), a bidirectional LSTM with max pooling (INFS). This is a model where f is an LSTM. Each word is represented by the concatenation of a forward and backward representation: ht = [←−ht, −→ht]. We constructed a fixed vector representation of the sequence by selecting the maximum value over each dimension of the hidden units ht of the words in the sentence.

Our third model is a self-attentive sentence encoder (SATT), which uses an attention mechanism over the hidden states of a BiLSTM to generate the sentence representation (Lin et al., 2017). This attention mechanism is a weighted linear combination of the word representations, denoted by u = Σi αi hi (summing over the M words), where the weights are calculated as follows:

h̄i = tanh(W hi + bw)
αi = exp(h̄i⊤ uw) / Σj exp(h̄j⊤ uw)

where uw is a learned context query vector and (W, bw) are the weights of an affine transformation. This self-attentive network also has multiple views of the sentence, so the model can attend to multiple parts of the given sentence at the same time.

Finally, we test the Hierarchical Convolutional Network (CONV) architecture from Conneau et al. (2017), which is itself inspired by AdaSent (Zhao et al., 2015). This model has four convolution layers; at each layer the intermediate representation ui is computed by a max-pooling operation over feature maps. The final representation is a concatenation u = [u1, ..., ul], where l is the number of layers.

7 Probing Systematicity

In this section, we study the systematicity of the models described in §6.1. Recall that systematicity refers to the degree to which words have consistent meaning across different contexts, and is contrasted with contextually conditioned variation in meaning. We describe three novel probes of systematicity, which we call the known word perturbation probe, the identical open-class words probe, and the consistency probe. All probes take advantage of the distinction between closed-class and open-class words reflected in the design of our artificial language, and are performed on sentence pairs with novel open-class words (jabberwocky-type sentences; see §5.5). We now describe the logic of each probe.

7.1 Known Word Perturbation Probe

We test whether the models treat the meaning of closed-class words systematically by perturbing correctly classified jabberwocky sentence pairs with a closed-class word. More precisely, for a pair of closed-class words w and w′, we consider test items which can be formed by substitution of w by w′ in a correctly classified test item.
We allow both w and w′ to be any of the closed-class items, including quantifiers, negation, nominal post-modifiers, or the empty string ϵ (thus modeling insertions and deletions of these known, closed-class items). Suppose that Example 1 was correctly classified. Substituting some for all in the premise of Example 1 yields Example 2, and changes the relation from entailment (⊏) to reverse entailment (⊐).

(1) All blickets wug. All blockets wug.
(2) Some blickets wug. All blockets wug.

There are two critical features of this probe. First, because we start from a correctly-classified jabberwocky pair, we can conclude that the novel words (e.g., wug and blickets above) were assigned appropriate meanings. Second, since the perturbation only involves closed-class items which do not vary in meaning and have been highly trained, the perturbation should not affect the model's ability to correctly classify the resulting sentence pair. If the model does misclassify the resulting pair, it can only be because a perturbed closed-class word (e.g., some) interacts with the open-class items (e.g., wug) in a way that is different from the pre-perturbation closed-class item (i.e., all). This is non-systematic behavior.

In order to rule out trivially correct behavior where the model simply ignores the perturbation, we consider only perturbations which result in a change of class (e.g., ⊏ ↦ ⊐) for the sentence pair. In addition to accuracy on these perturbed items, we also examine the variance of model accuracy on probes across different blocks. If a model's accuracy varies depending only on the novel open-class items in a particular block, this provides further evidence that it does not treat word meaning systematically.

7.2 Identical Open-class Words Probe

Some sentence pairs are classifiable without any knowledge of the novel words' meaning; for example, pairs where premise and hypothesis have identical open-class words. An instance is shown in Example 3: the two sentences must stand in contradiction, regardless of the meaning of blicket or wug.

(3) All blickets wug. Some blickets don't wug.

The closed-class items and compositional structure of the language are sufficient for a learner to deduce the relationships between such sentences, even with unfamiliar nouns and verbs. Our second probe, the identical open-class words probe, tests the models' ability to correctly classify such pairs.

7.3 Consistency Probe

Consider Examples 4 and 5, which present the same two sentences in opposite orders.

(4) All blickets wug. All red blickets wug.
(5) All red blickets wug. All blickets wug.

In Example 4, the two sentences stand in an entailment (⊏) relation. In Example 5, by contrast, the two sentences stand in a reverse entailment (⊐) relation. This is a logically necessary consequence of the way the relations are defined. Reversing the order of sentences has predictable effects for all seven natural logic relations: in particular, such reversals map ⊏ ↦ ⊐ and ⊐ ↦ ⊏, leaving all other relations intact. Based on this observation, we develop a consistency probe of systematicity. We ask, for each correctly classified jabberwocky block test item, whether the corresponding reversed item is also correctly classified. The intuition behind this probe is that whatever meaning a model assumes for the novel open-class words, it should assume the same meaning when the sentence order is reversed. If the reverse is not correctly classified, then this is strong evidence of contextual dependence in meaning.
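As a minimal sketch of the bookkeeping behind this probe (ours, not the authors' code), reversal amounts to looking up the converse relation and re-querying the model; `model` is assumed to map a (premise, hypothesis) pair to one of the seven relation labels.

# Reversing a sentence pair maps ⊏ to ⊐ and ⊐ to ⊏; the other relations are unchanged.
REVERSED = {
    "forward entailment": "reverse entailment",
    "reverse entailment": "forward entailment",
    "equivalence": "equivalence",
    "negation": "negation",
    "alternation": "alternation",
    "cover": "cover",
    "independence": "independence",
}

def consistent_under_reversal(model, premise, hypothesis, gold_label) -> bool:
    """Only called on items the model already classifies correctly."""
    assert model(premise, hypothesis) == gold_label
    return model(hypothesis, premise) == REVERSED[gold_label]

Applying this check separately within each jabberwocky block yields both the consistency accuracies and the between-block variances reported in §8.5.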
8 Results

In this section, we report the results of two control analyses, and those of our three systematicity probes described above.

8.1 Analysis I: Holdout Evaluations

We first establish that the models perform well on novel configurations of known words. Table 3 reports accuracy on heldout sentence pairs, described in §5.5. The table reports average accuracies across training blocks together with the standard deviations of these statistics. As can be seen in the table, all models perform quite well on holdout forms across training blocks, with very little variance. Because these items use the same sampling scheme and vocabulary as the trained blocks, these simulations serve as a kind of upper bound on the performance and a lower bound on the variance that we can expect from the more challenging jabberwocky-block-based evaluations below.

Condition   BGRU mean (sd)   CONV mean (sd)   SATT mean (sd)   INFS mean (sd)
small       95.1 ±0.21       95.43 ±0.12      93.14 ±0.94      96.02 ±0.51
large       95.09 ±1.03      95.22 ±0.55      94.89 ±1.09      96.17 ±0.74

Table 3: Accuracy on holdout evaluations (training conditions and holdout evaluation are explained in §5.5).

Figure 1: Visualization of trained and novel open-class word embeddings.

8.2 Analysis II: Distribution of Novel Words

Our three systematicity probes employ jabberwocky-type sentences: novel open-class words in sentential frames built from known closed-class words. Since models are not trained on these novel words, it is important to establish that they are from the same distribution as the trained words and, thus, that the models' performance is not driven by some pathological feature of the novel word embeddings. Trained word embeddings were initialized randomly from N(0, 1) and then updated during training. Novel word embeddings were simply drawn from N(0, 1) and never updated. Figure 1 plots visualizations of the trained and novel open-class word embeddings in two dimensions, using t-SNE parameters computed over all open-class words (Maaten and Hinton, 2008). Trained words are plotted as +, novel words as •. Color indicates the proportion of test items containing that word that were classified correctly. As the plot shows, the two sets of embeddings overlap considerably. Moreover, there does not appear to be a systematic relationship between rates of correct classification for items containing novel words and their proximity to trained words. We also performed a resampling analysis, determining that novel vectors did not differ significantly in length from trained vectors (p = 0.85). Finally, we observed the mean and standard deviation of the pairwise cosine similarity between trained and novel words to be 0.999 and 0.058 respectively, confirming that there is little evidence the distributions are different.

8.3 Analysis III: Known Word Perturbation Probe

Recall from §7.1 that the known word perturbation probe involves insertion, deletion, or substitution
All models perform substantially worse than the holdout-evaluation on at least some of the perturbations. In addition, the standard deviation of accuracy between blocks is higher than the holdout tests. As discussed in §7.1, low accuracy on this probe indicates that closed-class words do not maintain a consistent interpretation when paired with different open-class words. Variance across blocks shows that under all models the behavior of closed-class words is highly sensitive to the novel words they appear with. Performance is also susceptible to interference from sentence-level features. For example, consider the perturbation which deletes a post-modifier from a sentence pair in negation, yielding a pair in cover relation. The self-attentive encoder performs perfectly when this perturbation is applied to a premise (100% ± 0.00%), but not when applied to a hypothesis (86.60% ± 18.08%). Similarly, deleting the adjective red from the hypothesis of a forward-entailing pair results in an unrelated sentence pair (84.79% ± 7.50%) or another forwardentailing pair (92.32%, ±3.60%) or an equality pair (100% ± 0.00%). All the possible perturbations we studied exhibit similarly inconsistent performance. 8.4 Analysis IV: Identical Open-Class Words Probe Recall that the identical open-class words probe consist of sentence pairs where all open-class lexical items were identical. Table 4 shows the accuracies for these probes, trained on the small language. Average accuracies across jabberwocky blocks are reported together with standard deviations. Relation BGRU CONV SATT INFS mean (sd) mean (sd) mean (sd) mean (sd) # 100 ±0 100 ±0 99.94 ±0.26 99.67 ±0.98 ∧ 55.68 ±20.29 73.29 ±10.8 23.71 ±11.45 90.67 ±10.98 ⊏ 90.78 ±4.99 82.84 ±6.51 75.22 ±5.98 95.53 ±2.64 ≡ 90.43 ±17.1 38.12 ±15.56 71.94 ±24.1 95.93 ±6.5 ⊐ 90.34 ±4.18 77.11 ±5.9 81.4 ±6.67 93.81 ±2.96 | 93.08 ±3.58 85.34 ±5.47 74.05 ±8.03 92.23 ±4.6 ⌣ 88.01 ±3.55 71.5 ±7.32 78.4 ±7.91 95.22 ±3.58 Table 4: Identical open-class words probe performance, trained on the small language condition (trained on 51, 743 sentence pairs, see §5.5) Accuracy on the probe pairs fails to reach the holdout test levels for most models and most relations besides #, and variance between blocks is much higher than in the holdout evaluation. Of special interest is negation (∧), for which accuracy is dramatically lower and variance dramatically higher than the holdout evaluation. The results are similar for the large language condition, shown in Table 5. Although model accuracies improve somewhat, variance remains higher than the heldout level and accuracy lower. Recall that these probe-items can be classified while ignoring the specific identity of their open-class words. Thus, the models inability to leverage this fact, and high variance across different sets novel open-class words, illustrates their sensitivity to context. 8.5 Analysis V: Consistency Probe The consistency probe tests abstract knowledge of relationships between logical relations, such as the fact that two sentences that stand in a contradiction still stand in a contradiction after reversing their order. 
Relation   BGRU mean (sd)   CONV mean (sd)   SATT mean (sd)   INFS mean (sd)
#          99.82 ±0.45      99.57 ±0.73      98.67 ±1.81      100 ±0
∧          84.18 ±12.29     73.73 ±18.31     79.97 ±16.58     85.54 ±14.11
⊏          96.13 ±2.59      93.88 ±2.67      97.3 ±2.36       97.02 ±2.39
≡          89.33 ±12.5      77.84 ±12.08     94.44 ±11.23     94.59 ±7.02
⊐          95.4 ±2.48       94.55 ±2.04      98.05 ±1.51      97.6 ±2.08
|          89.97 ±6.73      92.36 ±6.29      84.52 ±7.07      98.72 ±2.08
⌣          90.78 ±6.33      93.18 ±2.95      87.85 ±6.46      97.48 ±2.56

Table 5: Identical open-class words probe performance when trained on the large language training condition (trained on 478,649 sentence pairs, see §5.5).

Results of this probe in the small-language condition are in Table 6: For each type of relation, we show the average percentage of correctly-labeled sentence pairs that, when presented in reverse order, were also correctly labeled. The best-performing model on negation reversal is SATT, which correctly labeled reversed items 66.92% of the time. Although negation is notably more difficult than the other relations, every model, on every relation, exhibited inter-block variance higher than that of the hold-out evaluations.

Relation   BGRU mean (sd)   CONV mean (sd)   SATT mean (sd)   INFS mean (sd)
#          97.4 ±0.86       97.8 ±0.93       98.58 ±0.74      97.03 ±0.87
∧          63.03 ±36.19     63.42 ±35.91     66.92 ±31.45     57.16 ±38.24
⊏          92.45 ±6.26      88.1 ±8.16       93.16 ±5.42      90.64 ±6.76
≡          100 ±0           100 ±0           100 ±0           100 ±0
⊐          91.37 ±6.23      94.73 ±6.51      96.42 ±3.22      87.02 ±9.61
|          96.02 ±2.6       96.29 ±2.51      96.95 ±2.14      94.2 ±3.48
⌣          93.57 ±3.56      95 ±2.97         96.4 ±2.83       93.1 ±3.77

Table 6: Consistency probe performance, trained on the small language condition (51,743 sentence pairs, see §5.5).

Furthermore, as can be seen in Table 7, the large language condition yields little improvement. Negation pairs are still well below the hold-out test threshold, still with a high degree of variation. Variation remains high for many relations, which is surprising because the means report accuracy on test items that were chosen specifically because the same item, in reverse order, was already correctly labeled. Reversing the order of sentences causes the model to misclassify the resulting pair, more often for some blocks than others.

Relation   BGRU mean (sd)   CONV mean (sd)   SATT mean (sd)   INFS mean (sd)
#          98.45 ±0.65      98.69 ±0.54      98.83 ±0.6       98.38 ±0.74
∧          70.46 ±33.72     77.82 ±26        84.27 ±23.89     65.64 ±35.13
⊏          96.02 ±2.96      96.6 ±3.26       96.78 ±4.23      95.01 ±5.38
≡          100 ±0           100 ±0           100 ±0           100 ±0
⊐          93.5 ±4.51       95.76 ±4.23      94.23 ±5.86      90.11 ±8.5
|          96.31 ±2.73      97.25 ±2.05      97.17 ±2.23      94.46 ±4.24
⌣          96.25 ±2.49      96.98 ±2.66      97.18 ±2.17      93.88 ±4.78

Table 7: Consistency probe performance, trained on the large language condition (478,649 sentence pairs).

9 Discussion and Conclusion

Systematicity refers to the property of natural language representations whereby words (and other units or grammatical operations) have consistent meanings across different contexts. Our probes test whether deep learning systems learn to represent linguistic units systematically in the natural language inference task. Our results indicate that despite their high overall performance, these models tend to generalize in ways that allow the meanings of individual words to vary in different contexts, even in an artificial language where a totally systematic solution is available. This suggests the networks lack a sufficient inductive bias to learn systematic representations of words like quantifiers, which even in natural language exhibit very little meaning variation.
Our analyses contain two ideas that may be useful for future studies of systematicity. First, two of our probes (known word perturbation and consistency) are based on the idea of starting from a test item that is classified correctly, and applying a transformation that should result in a classifiable item (for a model that represents word meaning systematically). Second, our analyses made critical use of differential sensitivity (i.e., variance) of the models across test blocks with different novel words but otherwise identical information content. We believe these are a novel ideas that can be employed in future studies. Acknowledgements We thank Brendan Lake, Marco Baroni, Adina Williams, Dima Bahdanau, Sam Gershman, Ishita Dasgupta, Alessandro Sordoni, Will Hamilton, Leon Bergen, the Montreal Computational and Quantitative Linguistics, and Reasoning and Learning Labs at McGill University for feedback on the manuscript. We are grateful to Facebook AI Research for providing extensive compute and other support. We also gratefully acknowledge the support of the Natural Sciences and Engineering Research Council of Canada, the Fonds de Recherche du Québec, Société et Culture, and the Canada CIFAR AI Chairs Program. 1967 References Adam Albright and Bruce Hayes. 2003. Rules vs. analogy in English past tenses: A computational/experimental study. Cognition, 90(2):119– 161. Dzmitry Bahdanau, Shikhar Murty, Michael Noukhovitch, Thien Huu Nguyen, Harm de Vries, and Aaron Courville. 2019. Systematic generalization: What is required and can it be learned? arXiv, abs/1811.12889(arXiv:1811.12889v2 [cs.CL]). Luisa Bentivogli, Raffaella Bernardi, Marco Marelli, Stefano Menini, Marco Baroni, and Roberto Zamparelli. 2016. Sick through the semeval glasses. lesson learned from the evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment. Language Resources and Evaluation, 50(1):95–124. Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large annotated corpus for learning natural language inference. arXiv preprint arXiv:1508.05326. Samuel R Bowman, Christopher Potts, and Christopher D Manning. 2014. Recursive neural networks can learn logical semantics. arXiv preprint arXiv:1406.1827. Rudolf Camap. 1947. Meaning and necessity: A study in semantics and modal logic. Brian Cantwell Smith. 1996. On the Origins of Objects. MIT Press, Cambridge, Massachusetts. Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder–decoder approaches. In Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, pages 103–111. Alexis Conneau, Douwe Kiela, Holger Schwenk, Loïc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 670–680. Ishita Dasgupta, Demi Guo, Samuel J. Gershman, and Noah D. Goodman. 2019. Analyzing machinelearned representations: A natural language case study. Ishita Dasgupta, Demi Guo, Andreas Stuhlmüller, Samuel J Gershman, and Noah D Goodman. 2018. Evaluating compositionality in sentence embeddings. arXiv preprint arXiv:1802.04302. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. Catarina Dutilh Novaes. 
2012. Formal Languages in Logic: A Philosophical and Cognitive Analysis. Cambridge University Press, Cambridge, England. Allyson Ettinger, Ahmed Elgohary, Colin Phillips, and Philip Resnik. 2018. Assessing composition in sentence vector representations. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1790–1801, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Richard Evans, David Saxton, David Amos, Pushmeet Kohli, and Edward Grefenstette. 2018. Can neural networks understand logical entailment? arXiv preprint arXiv:1802.08535. Jerry A Fodor and Zenon W Pylyshyn. 1988. Connectionism and cognitive architecture: A critical analysis. Cognition, 28(1):3–71. Samuel Gershman and Joshua B Tenenbaum. 2015. Phrase similarity in humans and machines. In CogSci. Mario Giulianelli, Jack Harding, Florian Mohnert, Dieuwke Hupkes, and Willem Zuidema. 2018. Under the hood: Using diagnostic classifiers to investigate and improve how language models track agreement information. arXiv preprint arXiv:1808.08079. Irene Heim and Angelika Kratzer. 2000. Semantics in Generative Grammar. Blackwell Publishing, Malden, MA. Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes, and Yann LeCun. 2016. Tracking the world state with recurrent entity networks. Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Wilfrid Hodges. 2012. Formalizing the relationship between meaning and syntax. In The Oxford handbook of compositionality. Oxford Handbooks in Linguistic. Drew A. Hudson and Christopher D. Manning. 2018. Compositional attention networks for machine reasoning. Theo MV Janssen et al. 2012. Compositionality: its historic context. The Oxford handbook of compositionality, pages 19–46. Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Li Fei-Fei, C Lawrence Zitnick, and Ross Girshick. 2017. Clevr: A diagnostic dataset for compositional language and elementary visual reasoning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2901– 2910. 1968 Daniel Keysers, Nathanael Schärli, Nathan Scales, Hylke Buisman, Daniel Furrer, Sergii Kashubin, Nikola Momchev, Danila Sinopalnikov, Lukasz Stafiniak, Tibor Tihon, Dmitry Tsarkov, Xiao Wang, Marc van Zee, and Olivier Bousquet. 2019. Measuring compositional generalization: A comprehensive method on realistic data. Douwe Kiela, Changhan Wang, and Kyunghyun Cho. 2018. Dynamic meta-embeddings for improved sentence representations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1466–1477. Najoung Kim, Roma Patel, Adam Poliak, Patrick Xia, Alex Wang, Tom McCoy, Ian Tenney, Alexis Ross, Tal Linzen, Benjamin Van Durme, Samuel R. Bowman, and Ellie Pavlick. 2019. Probing what different NLP tasks teach machines about function word comprehension. In Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (*SEM 2019), pages 235–249, Minneapolis, Minnesota. Association for Computational Linguistics. Brenden Lake and Marco Baroni. 2018. Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks. In International Conference on Machine Learning, pages 2879–2888. Brenden Lake, Tal Linzen, and Marco Baroni. 2017a. Human few-shot learning of compositional instructions. 
In Ashok Goel, Colleen Seifert, and Christian Freksa, editors, Proceedings of the 41st Annual Conference of the Cognitive Science Society, pages 611– 616. Cognitive Science Society, Montreal, Canada. Brenden M Lake, Tomer D Ullman, Joshua B Tenenbaum, and Samuel J Gershman. 2017b. Building machines that learn and think like people. Behavioral and Brain Sciences, 40. George Lakoff. 1970. Linguistics and natural logic. Synthese, 22(1-2):151–271. Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. 2017. A structured self-attentive sentence embedding. arXiv preprint arXiv:1703.03130. Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. Journal of machine learning research, 9(Nov):2579–2605. Bill MacCartney and Christopher D. Manning. 2009. An extended model of natural logic. In Proceedings of the Eighth International Conference on Computational Semantics, IWCS-8 ’09, pages 140–156, Stroudsburg, PA, USA. Association for Computational Linguistics. Bill MacCartney and Christopher D. Manning. 2014. Natural logic and natural language inference. In Computing Meaning: Volume 4, pages 129–147, Dordrecht. Springer Netherlands. R Thomas McCoy, Junghyun Min, and Tal Linzen. 2019a. Berts of a feather do not generalize together: Large variability in generalization across models with similar test set performance. arXiv preprint arXiv:1911.02969. R. Thomas McCoy, Ellie Pavlick, and Tal Linzen. 2019b. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy. Tsendsuren Munkhdalai and Hong Yu. 2017. Neural semantic encoders. In Proceedings of the conference. Association for Computational Linguistics. Meeting, volume 1, page 397. NIH Public Access. Timothy Niven and Hung-Yu Kao. 2019. Probing neural network comprehension of natural language arguments. Ankur P Parikh, Oscar Täckström, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. arXiv preprint arXiv:1606.01933. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227– 2237. Grusha Prasad, Marten van Schijndel, and Tal Linzen. 2019. Using priming to uncover the organization of syntactic representations in neural language models. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. URL https://s3us-west-2. amazonaws. com/openai-assets/researchcovers/languageunsupervised/language understanding paper. pdf. Kyle Richardson, Hai Hu, Lawrence S. Moss, and Ashish Sabharwal. 2019. Probing natural language inference models through semantic fragments. Tim Rocktäschel, Edward Grefenstette, Karl Moritz Hermann, Tomáš Koˇcisk`y, and Phil Blunsom. 2015. Reasoning about entailment with neural attention. arXiv preprint arXiv:1509.06664. Koustuv Sinha, Shagun Sodhani, Jin Dong, Joelle Pineau, and William L. Hamilton. 2019. Clutrr: A diagnostic benchmark for inductive reasoning from text. Paul Soulos, Tom McCoy, Tal Linzen, and Paul Smolensky. 2019. Discovering the compositional structure of vector representations with role learning networks. 
1969 Shane Storks, Qiaozi Gao, and Joyce Y Chai. 2019. Recent advances in natural language inference: A survey of benchmarks, resources, and approaches. arXiv preprint arXiv:1904.01172. Zoltan Szabó. 2012. The case for compositionality. The Oxford Handbook of Compositionality. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Sara Veldhoen and Willem Zuidema. 2017. Can neural networks learn logical reasoning? Proceedings of the Conference on Logic and Machine Learning in Natural Language. Wei Wang, Ming Yan, and Chen Wu. 2018. Multigranularity hierarchical attention fusion networks for reading comprehension and question answering. arXiv preprint arXiv:1811.11934. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. Deunsol Yoon, Dongbok Lee, and SangKeun Lee. 2018. Dynamic self-attention: Computing attention over words dynamically for sentence embedding. arXiv preprint arXiv:1808.07383. Wlodek Zadrozny. 1994. From compositional to systematic semantics. Linguistics and philosophy, 17(4):329–342. Zhuosheng Zhang, Yuwei Wu, Zuchao Li, Shexia He, Hai Zhao, Xi Zhou, and Xiang Zhou. 2018. I know what you want: Semantic learning for text comprehension. arXiv preprint arXiv:1809.02794. Han Zhao, Zhengdong Lu, and Pascal Poupart. 2015. Self-adaptive hierarchical sentence model. In Twenty-Fourth International Joint Conference on Artificial Intelligence.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1970–1978 July 5 - 10, 2020. ©2020 Association for Computational Linguistics

Recollection versus Imagination: Exploring Human Memory and Cognition via Neural Language Models

Maarten Sap‡†∗, Eric Horvitz†, Yejin Choi‡♦, Noah A. Smith‡♦, James W. Pennebaker♣
†Microsoft Research ‡Paul G. Allen School for Computer Science & Engineering, University of Washington ♦Allen Institute for Artificial Intelligence ♣Department of Psychology, University of Texas at Austin
[email protected], [email protected]
∗Research conducted during an internship at Microsoft Research.

Abstract

We investigate the use of NLP as a measure of the cognitive processes involved in storytelling, contrasting imagination and recollection of events. To facilitate this, we collect and release HIPPOCORPUS, a dataset of 7,000 stories about imagined and recalled events. We introduce a measure of narrative flow and use this to examine the narratives for imagined and recalled events. Additionally, we measure the differential recruitment of knowledge attributed to semantic memory versus episodic memory (Tulving, 1972) for imagined and recalled storytelling by comparing the frequency of descriptions of general commonsense events with more specific realis events. Our analyses show that imagined stories have a substantially more linear narrative flow, compared to recalled stories in which adjacent sentences are more disconnected. In addition, while recalled stories rely more on autobiographical events based on episodic memory, imagined stories express more commonsense knowledge based on semantic memory. Finally, our measures reveal the effect of narrativization of memories in stories (e.g., stories about frequently recalled memories flow more linearly; Bartlett, 1932). Our findings highlight the potential of using NLP tools to study the traces of human cognition in language.

1 Introduction

When telling stories, people draw from their own experiences (episodic knowledge; Conway et al., 1996, 2003) and from their general world knowledge (semantic knowledge; Bartlett, 1932; Oatley, 1999). For example, in Figure 1 (top), a recalled story about a birth will likely recount concrete events from that day, relying heavily on the author's episodic memory (Tulving, 1972). On the other hand, an imagined story about a wedding (Figure 1, bottom) will largely draw from the author's commonsense knowledge about the world (Kintsch, 1988; Graesser et al., 1981).

RECALLED: "….her husband called me and then drove her to the hospital. I joined her at the hospital. When we got the hospital things got complicated. Her husband tried his best to be with her and to keep her strong. She eventually delivered perfectly. My daughter gave birth to her first child. She and her husband were overwhelmed by emotions." (# concrete events: 7)
IMAGINED: "We recently attended a family wedding. It was the first time in a decade we all got together. …My older brother is getting married to a rich tycoon lady. He will be very happy. I hope he doesn't get too greedy." (# concrete events: 1; ATOMIC inference: PersonX gets married, causes, PersonX to be happy)
Figure 1: Snippets from two stories from HIPPOCORPUS (top: recalled, bottom: imagined). Concrete or realis events (in gray) are more frequent in recalled stories, whereas general or commonsense events (underlined) are associated with imagined stories.
We harness neural language and commonsense models to study how cognitive processes of recollection and imagination are engaged in storytelling. We rely on two key aspects of stories: narrative flow (how the story reads) and semantic vs. episodic knowledge (the types of events in the story). We propose as a measure of narrative flow the likelihood of sentences under generative language models conditioned on varying amounts of history. Then, we quantify semantic knowledge by measuring the frequency of commonsense events (from the ATOMIC knowledge graph; Sap et al., 2019), and episodic knowledge by counting realis events (Sims et al., 2019), both shown in Figure 1.

We introduce HIPPOCORPUS,1 a dataset of 6,854 diary-like short stories about salient life events, to examine the cognitive processes of remembering and imagining. Using a crowdsourcing pipeline, we collect pairs of recalled and imagined stories written about the same topic. By design, authors of recalled stories rely on their episodic memory to tell their story. We demonstrate that our measures can uncover differences in imagined and recalled stories in HIPPOCORPUS. Imagined stories contain more commonsense events and elaborations, whereas recalled stories are more dense in concrete events. Additionally, imagined stories flow substantially more linearly than recalled stories. Our findings provide evidence that surface language reflects the differences in cognitive processes used in imagining and remembering.

Additionally, we find that our measures can uncover narrativization effects, i.e., the transforming of a memory into a narrative with repeated recall or passing of time (Bartlett, 1932; Reyna and Brainerd, 1995; Christianson, 2014). We find that with increased temporal distance or increased frequency of recollection, recalled stories flow more linearly, express more commonsense knowledge, and are less concrete.

1 Available at http://aka.ms/hippocorpus.

2 HIPPOCORPUS Creation

We construct HIPPOCORPUS, containing 6,854 stories (Table 1), to enable the study of imagined and recalled stories, as most prior corpora are either limited in size or topic (e.g., Greenberg et al., 1996; Ott et al., 2011). See Appendix A for additional details (e.g., worker demographics; §A.2).

            # stories   # sents   # words
recalled    2,779       17.8      308.9
imagined    2,756       17.5∗∗    274.2∗∗
retold      1,319       17.3∗     296.8∗∗
total       6,854

Table 1: HIPPOCORPUS data statistics. ∗∗ and ∗ indicate significant difference from recalled at p < 0.001 and p < 0.05, respectively.

2.1 Data Collection

We collect first-person perspective stories in three stages on Amazon Mechanical Turk (MTurk), using a pairing mechanism to account for topical variation between imagined and recalled stories.

Stage 1: recalled. We ask workers to write a 15–25 sentence story about a memorable or salient event that they experienced in the past 6 months. Workers also write a 2–3 sentence summary to be used in subsequent stages, and indicate how long ago the events took place (in weeks or months; TIMESINCEEVENT).

Stage 2: imagined. A new set of workers write imagined stories, using a randomly assigned summary from stage 1 as a prompt. Pairing imagined stories with recalled stories allows us to control for variation in the main topic of stories.

Stage 3: retold past. After 2–3 months, we contact workers from stage 1 and ask them to re-tell their stories, providing them with the summary of their story as prompt.

Post-writing questionnaire (all stages).
Immediately after writing, workers describe the main topic of the story in a short phrase. We then ask a series of questions regarding personal significance of their story (including frequency of recalling the event: FREQUENCYOFRECALL; see A.1 for questionnaire details). Optionally, workers could report their demographics.2

2 With IRB approval from the Ethics Advisory Board at Microsoft Research, we restrict workers to the U.S., and ensure they are fairly paid ($7.5–9.5/h).

3 Measures

To quantify the traces of imagination and recollection recruited during storytelling, we devise a measure of a story's narrative flow, and of the types of events it contains (concrete vs. general).

3.1 Narrative Flow

Inspired by recent work on discourse modeling (Kang et al., 2019; Nadeem et al., 2019), we use language models to assess the narrative linearity of a story by measuring how sentences relate to their context in the story. We compare the likelihoods of sentences under two generative models (Figure 2). The bag model makes the assumption that every sentence is drawn independently from the main theme of the story (represented by E). On the other hand, the chain model assumes that a story begins with a theme, and sentences linearly follow each other.3

Figure 2: Two probabilistic graphical models representing (i) bag-like and (ii) chain-like (linear) story representations. E represents the theme of the story.

3 Note that this is a sentence-level version of surprisal as defined by expectation theory (Hale, 2001; Levy, 2008).

∆l is computed as the difference in negative log-likelihoods between the bag and chain models:

∆l(si) = −(1/|si|) [log p(si | E) − log p(si | E, s1:i−1)]     (1)

where the log-probability of a sentence s in a context C (e.g., topic E and history s1:i−1) is the sum of the log-probabilities of its tokens wt in context: log p(s | C) = Σt log p(wt | C, w0:t−1). We compute the likelihood of sentences using OpenAI's GPT language model (Radford et al., 2018, trained on a large corpus of English fiction), and we set E to be the summary of the story, but find similar trends using the main event of the story or an empty sequence.

3.2 Episodic vs. Semantic Knowledge

We measure the quantity of episodic and semantic knowledge expressed in stories, as proxies for the differential recruitment of episodic and semantic memory (Tulving, 1972) in stories.

Realis Event Detection We first analyze the prevalence of realis events, i.e., factual and non-hypothesized events, such as "I visited my mom" (as opposed to irrealis events which have not happened, e.g., "I should visit my mom"). By definition, realis events are claimed by the author to have taken place, which makes them more likely to be drawn from autobiographical or episodic memory in diary-like stories. We train a realis event tagger (using BERT-base; Devlin et al., 2019) on the annotated literary events corpus by Sims et al. (2019), which slightly outperforms the original authors' models. We provide further training details in Appendix B.1.

Semantic and Commonsense Knowledge We measure the amount of commonsense knowledge included explicitly in stories, as a proxy for semantic memory, a form of memory that is thought to encode general knowledge about the world (Tulving, 1972).
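Before turning to how the event-based measures are extracted, the following sketch illustrates how ∆l from Equation 1 might be computed. It is our own minimal sketch rather than the authors' released code: it uses HuggingFace's GPT-2 as a stand-in for the GPT model used in the paper, tokenizes the context and the sentence separately for simplicity, and assumes each story is already split into sentences with its summary available as E.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def sentence_nll(sentence: str, context: str) -> float:
    """Per-token negative log-likelihood of `sentence` given a non-empty `context`."""
    ctx_ids = tok(context).input_ids
    sent_ids = tok(" " + sentence).input_ids
    ids = torch.tensor([ctx_ids + sent_ids])
    with torch.no_grad():
        logits = lm(ids).logits
    logp = torch.log_softmax(logits[0, :-1], dim=-1)  # position t scores token t+1
    targets = ids[0, 1:]
    start = len(ctx_ids) - 1  # first prediction that targets a sentence token
    tok_logp = logp[start:].gather(1, targets[start:, None])
    return -tok_logp.mean().item()

def delta_l(sentences, summary):
    """Equation 1: per-token NLL(s_i | E) minus per-token NLL(s_i | E, s_1..i-1)."""
    scores = []
    for i, s in enumerate(sentences):
        bag = sentence_nll(s, summary)
        chain = sentence_nll(s, " ".join([summary] + sentences[:i]))
        scores.append(bag - chain)
    return scores

Averaging the resulting per-sentence scores over a story gives the story-level linearity value that is compared across story types in §4.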
While this includes facts about how events unfold (i.e., scripts or schemas; Schank and Abelson, 1977; van Kesteren et al., 2012), here we focus on commonsense knowledge, which is also encoded in semantic memory (McRae and Jones, 2013). Given the social focus of our stories, we use the social commonsense knowledge graph ATOMIC (Sap et al., 2019).4 For each story, we first match possible ATOMIC events to sentences by selecting events that share noun chunks and verb phrases with sentences (e.g., "getting married" ⇝ "PersonX gets married"; Figure 1). We then search the matched sentences' surrounding sentences for commonsense inferences (e.g., "be very happy" ⇝ "happy"; Figure 1). We describe this algorithm in further detail in Appendix B.2. In our analyses, the measure quantifies the number of story sentences with commonsense tuple matches in the two preceding and following sentences.

4 ATOMIC contains social and inferential knowledge about the causes (e.g., "X wants to start a family") and effects (e.g., "X throws a party", "X feels loved") of everyday situations like "PersonX decides to get married".

3.3 Lexical and Stylistic Measures

To supplement our analyses, we compute several coarse-grained lexical counts for each story in HIPPOCORPUS. Such approaches have been used in prior efforts to investigate author mental states, temporal orientation, or counterfactual thinking in language (Tausczik and Pennebaker, 2010; Schwartz et al., 2015; Son et al., 2017). We count psychologically relevant word categories using the Linguistic Inquiry Word Count (LIWC; Pennebaker et al., 2015), focusing only on the cognitive processes, positive emotion, negative emotion, and I-word categories, as well as the ANALYTIC and TONE summary variables.5 Additionally, we measure the average concreteness level of words in stories using the lexicon by Brysbaert et al. (2014).

5 See liwc.wpengine.com/interpreting-liwc-output/ for more information on LIWC variables.

4 Imagining vs. Remembering

We summarize the differences between imagined and recalled stories in HIPPOCORPUS in Table 2. For our narrative flow and lexicon-based analyses, we perform paired t-tests. For realis and commonsense event measures, we perform linear regressions controlling for story length.6 We Holm-correct for multiple comparisons for all our analyses (Holm, 1979).

6 Linear regressions use z-scored variables. We confirm that our findings hold with multivariate regressions as well as when adding participant random effects in Appendix C.2.

measure                 effect size (d or β)   direction
avg. ∆l (linearity)     0.52∗∗∗                imagined
realis events           0.10∗∗                 recalled
commonsense             0.15∗∗∗                imagined
lexicon-based:
ANALYTIC                0.26∗∗∗                recalled
concrete                0.13∗∗∗                recalled
neg. emo.               0.07∗∗∗                imagined
TONE                    0.12∗∗∗                imagined
I-words                 0.17∗∗∗                imagined
pos. emo.               0.22∗∗∗                imagined
cog. proc.              0.30∗∗∗                imagined

Table 2: Summary of differences between imagined and recalled stories, according to proposed measures (top), and lexical or word-count measures (bottom). All associations are significant when controlling for multiple comparisons (∗∗∗: p < 0.001; ∗∗: p < 0.01).

Imagined stories flow more linearly. We compare ∆l, i.e., pairwise differences in NLL for sentences when conditioned on the full history vs. no history (density plot shown in Figure 3). When averaging ∆l over the entire story, we find that sentences in imagined stories are substantially more predictable based on the context set by prior sentences than sentences in remembered stories. This effect is also present with varying history sizes (see Figure 5 in Appendix C.1).

Figure 3: Density plot showing differences in likelihoods of sentences between chain and bag model, for recalled (green), imagined (purple), and retold (dark gray dashed) stories. Vertical lines represent mean ∆l values for each story type. All three story types differ significantly (p < 0.001).

Recalled stories are more event-dense. As seen in Table 2, we find that imagined stories contain significantly fewer realis events (controlling for story length).7

7 Note that simply using verb count instead of number of realis events yields the opposite effect, supporting our choice of measure.

Imagined stories express more commonsense knowledge. Using the same analysis method, our results show that sentences in imagined stories are more likely to have commonsense inferences in their neighborhood compared to recalled stories.

Lexical differences. Lexicon-based counts uncover additional differences between imagined and recalled stories. Namely, imagined stories are more self-focused (I-words), more emotional (TONE, positive and negative emotion), and evoke more cognitive processes.8 In contrast, recalled stories are more concrete and contain more logical or hierarchical descriptions (ANALYTIC).

8 The cognitive processes LIWC category counts occurrences of words indicative of cognitive activity (e.g., "think", "because", "know").

Discussion. Our interpretation of these findings is that the consolidated memory of the author's life experience permeates in a more holistic manner through the sentences in the recalled story. Imagined stories are more fluent and contain more commonsense elaborations, which suggests that authors compose a story as a sequence, relying more on preceding sentences and commonsense knowledge to generate the story. While our findings on linearity hold when using different language models trained on Wikipedia articles (Dai et al., 2019) or English web text (mostly news articles; Radford et al., 2019), a limitation of the findings is that GPT is trained on a large corpus of fiction, which may boost linearity scores for imagined (vs. recalled) sentences. Future work could explore the sensitivity of our results to changes in the language model's training domain or neural architecture.

5 Narrativization of Recalled Stories

We further investigate how our narrative and commonsense measures can be used to uncover the narrativization of recalled events (in recalled and retold stories). These analyses aim to investigate the hypothesis that memories are narrativized
First, we find that recalled and retold stories written about temporally distant events tend to contain more commonsense knowledge (|β| = 1.10, p < 0.001). We found no other significant associations with TIMESINCEEVENT. On the other hand, the proposed measures uncover differences between the initially recalled and later retold stories that mirror the differences found between recalled and imagined stories (Table 2). Specifically, retold stories flow significantly more linearly than their initial counterparts in a pairwise comparison (Cohen’s |d| = 0.17, p < 0.001; see Figure 3). Our results also indicate that retold stories contain fewer realis events (|β| = 0.09, p = 0.025), and suggest a potential increase in use of commonsense knowledge in the retold stories (|β| = 0.06, p = 0.098). Using lexicon-based measures, we find that retold stories are significantly higher in scores for cognitive processes (|d| = 0.12, p < 0.001) and positive tone (|d| = 0.07, p = 0.02). Surprisingly, initially recalled stories contain more self references than retold stories (I-words; |d| = 0.10, p < 0.001); higher levels of self reference were found in imagined stories (vs. recalled; Table 2). Frequency of recall. We find that the more an event is thought or talked about (i.e., higher FREQUENCYOFRECALL), the more linearly its story flows (∆l; |β| = 0.07, p < 0.001), and the fewer realis events (|β| = 0.09, p < 0.001) it contains. 9We use the logarithm of the time elaspsed since the event, as subjects may perceive the passage of time logarithmically (Bruss and R¨uschendorf, 2009; Zauberman et al., 2009). 10Note that TIMESINCEEVENT and FREQUENCYOFRECALL are somewhat correlated (Pearson r = 0.05, p < 0.001), and findings for each variable still hold when controlling for the other. Furthermore, using lexicon-based measures, we find that stories with high FREQUENCYOFRECALL tend to contain more self references (Iwords; Pearson’s |r| = 0.07, p < 0.001). Conversely, stories that are less frequently recalled are more logical or hierarchical (LIWC’s ANALYTIC; Pearson’s |r| = 0.09, p < 0.001) and more concrete (Pearson’s |r| = 0.05, p = 0.03). Discussion. Our results suggest that the proposed language and commonsense methods can measure the effects of narrativization over time in recalled memories (Bartlett, 1932; Smorti and Fioretti, 2016). On one hand, temporal distance of events is associated with stories containing more commonsense knowledge and having more linear flow. On the other hand, stories about memories that are rarely thought about or talked about are more concrete and contain more realis events, compared to frequently recalled stories which flow more linearly. This suggests that stories that become more narrativized, either by the passing of time or by being recalled repeatedly, become more similar in some ways to imagined stories. 6 Conclusion To investigate the use of NLP tools for studying the cognitive traces of recollection versus imagination in stories, we collect and release HIPPOCORPUS, a dataset of imagined and recalled stories. We introduce measures to characterize narrative flow and influence of semantic vs. episodic knowledge in stories. We show that imagined stories have a more linear flow and contain more commonsense knowledge, whereas recalled stories are less connected and contain more specific concrete events. Additionally, we show that our measures can uncover the effect in language of narrativization of memories over time. 
We hope these findings bring attention to the feasibility of employing statistical natural language processing machinery as tools for exploring human cognition. Acknowledgments The authors would like to thank the anonymous reviewers, as well as Elizabeth Clark, Tal August, Lucy Lin, Anna Jafarpour, Diana Tamir, Justine Zhang, Saadia Gabriel, and other members of the Microsoft Research and UW teams for their helpful comments. 1975 References Frederic Charles Bartlett. 1932. Remembering: A study in experimental and social psychology. Cambridge University Press. Felipe De Brigard. 2014. Is memory for remembering? recollection as a form of episodic hypothetical thinking. Synthese, 191:155–185. F. Thomas Bruss and Ludger R¨uschendorf. 2009. On the perception of time. Gerontology, 56 4:361–70. Marc Brysbaert, Amy Beth Warriner, and Victor Kuperman. 2014. Concreteness ratings for 40 thousand generally known English word lemmas. Behavior Research Methods, 46(3). Sven-Ake Christianson. 2014. The Handbook of Emotion and Memory: Research and Theory. Psychology Press. Martin A. Conway, Alan F. Collins, Susan E. Gathercole, and Stephen J. Anderson. 1996. Recollections of true and false autobiographical memories. Journal of Experimental Psychology: General, 125(1). Martin A. Conway, Christopher W. Pleydell-Pearce, Sharron E. Whitecross, and Helen Sharpe. 2003. Neurophysiological correlates of memory for experienced and imagined events. Neuropsychologia, 41(3):334–340. Zihang Dai, Zhilin Yang, Yiming Yang, Jaime G Carbonell, Quoc V Le, and Ruslan Salakhutdinov. 2019. Transformer-XL: Attentive language models beyond a fixed-length context. In ACL. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL. M. Brent Donnellan, Frederick L. Oswald, Brendan M. Baird, and Richard E. Lucas. 2006. The mini-IPIP scales: tiny-yet-effective measures of the big five factors of personality. Psychological Assessment, 18(2):192. Arthur C Graesser, Scott P Robertson, and Patricia A Anderson. 1981. Incorporating inferences in narrative representations: A study of how and why. Cognitive Psychology, 13(1):1–26. Melanie A. Greenberg, Camille B. Wortman, and Arthur A. Stone. 1996. Emotional expression and physical health: revising traumatic memories or fostering self-regulation? Journal of Personality and Social Psychology, 71(3):588–602. John Hale. 2001. A probabilistic earley parser as a psycholinguistic model. In NAACL-HLT, pages 1– 8. Association for Computational Linguistics. Sture Holm. 1979. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics, pages 65–70. Matthew Honnibal and Ines Montani. 2017. spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing. Dongyeop Kang, Hiroaki Hayashi, Alan W Black, and Eduard Hovy. 2019. Linguistic versus latent relations for modeling coherent flow in paragraphs. In EMNLP. Marlieke T. R. van Kesteren, Dirk J. Ruiter, Guill´en Fern´andez, and Richard N. Henson. 2012. How schema and novelty augment memory formation. Trends in Neurosciences, 35(4). Walter Kintsch. 1988. The role of knowledge in discourse comprehension: A construction-integration model. Psychological Review, 95(2):163. Roger Levy. 2008. Expectation-based syntactic comprehension. Cognition, 106(3):1126–1177. Ken McRae and Michael Jones. 2013. Semantic memory. 
In Daniel Reisberg, editor, The Oxford Handbook of Cognitive Psychology, Psychology Publications. Farah Nadeem, Huy Nguyen, Yang Liu, and Mari Ostendorf. 2019. Automated essay scoring with Discourse-Aware neural models. In Workshop on Innovative Use of NLP for Educational Applications @ ACL. Keith Oatley. 1999. Why fiction may be twice as true as fact: Fiction as cognitive and emotional simulation. Review of general psychology, 3(2):101–117. Myle Ott, Yejin Choi, Claire Cardie, and Jeffrey T. Hancock. 2011. Finding deceptive opinion spam by any stretch of the imagination. In ACL. James W. Pennebaker, Roger J. Booth, Ryan L. Boyd, and Martha E. Francis. 2015. Linguistic inquiry and word count: LIWC 2015. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. Unpublished. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Unpublished. Valerie F. Reyna and Charles J. Brainerd. 1995. Fuzzytrace theory: An interim synthesis. Learning and Individual Differences, 7(1):1–75. Henry L Roediger III, J Derek Jacoby, and Kathleen B McDermott. 1996. Misinformation effects in recall: Creating false memories through repeated retrieval. Journal of Memory and Language, 35(2):300–318. Stuart Rose, Dave Engel, Nick Cramer, and Wendy Cowley. 2010. Automatic keyword extraction from individual documents. Text Mining: Applications and Theory, 1:1–20. 1976 Maarten Sap, Ronan LeBras, Emily Allaway, Chandra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A. Smith, and Yejin Choi. 2019. ATOMIC: An atlas of machine commonsense for if-then reasoning. In AAAI. Roger C. Schank and Robert P. Abelson. 1977. Scripts, Plans, Goals and Understanding: An Inquiry into Human Knowledge Structures. Lawrence Erlbaum. H. Andrew Schwartz, Gregory Park, Maarten Sap, Evan Weingarten, Johannes Eichstaedt, Margaret Kern, David Stillwell, Michal Kosinski, Jonah Berger, Martin Seligman, and Lyle Ungar. 2015. Extracting human temporal orientation from Facebook language. In NAACL. Matthew Sims, Jong Ho Park, and David Bamman. 2019. Literary event detection. In ACL. Andrea Smorti and Chiara Fioretti. 2016. Why narrating changes memory: a contribution to an integrative model of memory and narrative processes. Integrative Psychological and Behavioral Science, 50(2):296–319. Youngseo Son, Anneke Buffone, Joe Raso, Allegra Larche, Anthony Janocko, Kevin Zembroski, H Andrew Schwartz, and Lyle Ungar. 2017. Recognizing counterfactual thinking in social media texts. In ACL. Yla R. Tausczik and James W. Pennebaker. 2010. The psychological meaning of words: Liwc and computerized text analysis methods. Journal of Language and Social Psychology, 29(1):24–54. Yaacov Trope and Nira Liberman. 2010. Construallevel theory of psychological distance. Psychological review, 117(2):440. Endel Tulving. 1972. Episodic and semantic memory. Organization of Memory, 1:381–403. Endel Tulving and Daniel L. Schacter. 1990. Priming and human memory systems. Science, 247(4940):301–306. Gal Zauberman, B Kyu Kim, Selin A Malkoc, and James R Bettman. 2009. Discounting time and time discounting: Subjective time perception and intertemporal preferences. Journal of Marketing Research, 46(4):543–556. 
1977 (a) Recalled main events (b) Imagined main events Figure 4: We extract phrases from the main themes of recalled (left) and imagined (right) stories, using RAKE (Rose et al., 2010); size of words corresponds to frequency in corpus, and color is only for readability. A Data Collection We describe the data collection in further detail, and release our MTurk annotation templates.11 A.1 Post-Writing Questionnaire After each writing stage (recalled, imagined, retold), we ask workers to rate “how impactful, important, or personal” the story was to them (for imagined and recalled stories), “how similar” to their own lives the story felt (imagined only), and “how often [they] think or talk about the events” in the story (recalled only), on a Likert scale from 1–5. Workers also take the four “openness” items from the Mini-IPIP personality questionnaire (Donnellan et al., 2006) as an assessment of overall creativity. Finally, workers optionally report their demographic information (age, gender, race). A.2 Worker Demographics Our stories are written by 5,387 unique U.S.-based workers, who were 47% male and 52% female (<1% non-binary, <1% other). Workers were 36 years old on average (s.d. 10 years), and predominantly white (73%, with 10% Black, 6% Hispanic, 5% Asian). We find no significant differences in demographics between the authors of imagined and recalled stories,12 but authors of imagined stories scored slightly higher on measures of creativity and openness to experience (Cohen’s d = 0.08, p = 0.01). Note that we randomly paired story summaries to workers. We did not attempt to match the demographics of the recalled story to the demographics 11Available at http://aka.ms/hippocorpus. 12We run Chi-squared tests for gender (χ2 = 1.01, p = 0.80), for age (χ2 = 9.99, p = 0.26), and for race (χ2 = 9.99,p = 0.35). of the imagined author. Future work should investigate whether there are linguistic effects of differing demographics between the two authors.13 B Episodic vs. Semantic Knowledge B.1 Realis Events To detect realis events in our stories, we train a tagger (using BERT-base; Devlin et al., 2019) on the annotated corpus by Sims et al. (2019). This corpus contains 8k realis events annotated by experts in sentences drawn from 100 English books. With development and test F1 scores of 83.7% and 75.8%, respectively, our event tagger slightly outperforms the best performing model in Sims et al. (2019), which reached 73.9% F1. In our analyses, we use our tagger to detect the number of realis event mentions. B.2 Commonsense Knowledge Matching We quantify the prevalence of commonsense knowledge in stories, as a proxy for measuring the traces of semantic memory (Tulving and Schacter, 1990). Semantic memory is thought to encode commonsense as well as general semantic knowledge (McRae and Jones, 2013). We design a commonsense extraction tool that aligns sentences in stories with commonsense tuples, using a heuristic matching algorithm. Given a story, we match possible ATOMIC events to sentences by selecting events that share noun chunks and verb phrases with sentences. For every sentence si that matches an event E in ATOMIC, we check surrounding sentences for mentions of commonsense inferences (using the same noun and verb phrase matching strategy); specifically, we 13Future work could investigate social distance alongside other types of psychological distances (e.g., physical, temporal), using the framework given by Construal Theory (Trope and Liberman, 2010). 
check the nc preceding sentences for matches of causes of E, and the ne following sentences for event E's effects. To measure the prevalence of semantic memory in a story, we count the number of sentences that matched ATOMIC knowledge tuples in their surrounding context. We use a context window of size nc = ne = 2 to match inferences, and use the spaCy pipeline (Honnibal and Montani, 2017) to extract noun and verb phrases.

C Recollection vs. Imagination

C.1 Linearity with Varying Context Size

[Figure 5: Average negative log likelihood (NLL) of sentences conditioned on varying sizes of histories of included sentences for recalled and imagined stories (with 95% confidence intervals). For history sizes > 1, differences are significant when controlling for multiple comparisons (p < 0.001).]

Shown in Figure 5, we compare the negative log-likelihood of sentences when conditioned on varying history sizes (using the story summary as context E). As expected, conditioning on longer histories increases the predictability of a sentence. However, this effect is significantly larger for imagined stories, which suggests that imagined stories flow more linearly than recalled stories.

C.2 Robustness of Findings

To confirm the validity of our measures, we report partial correlations between each of our measures, controlling for story length. We find that our realis measure is negatively correlated with our commonsense measures (Pearson r = −0.137, p < 0.001), and positively correlated with our linearity measure (r = 0.111, p < 0.001). Linearity and commonsense were not significantly correlated (r = −0.02, p = 0.21). Additionally, we confirm that our findings still hold when controlling for other measures and participant random effects. Notably, we find stronger associations between our measures and story type when controlling for other measures, as shown in Table 3. We see a similar trend when additionally controlling for individual variation in workers.

variable          β (w/o rand. eff.)   β (w/ rand. eff.)
story length       0.319***             0.159**
∆l (linearity)    -0.454***            -0.642***
realis events      0.147***             0.228***
commonsense       -0.144***            -0.157***

Table 3: Results of multivariate linear regression models (with and without participant random effects), regressing onto story type (0: imagined vs. 1: recalled) as the dependent variable. All effects are significant (**: p < 0.005, ***: p < 0.001).
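To make the history-conditioned comparison of Appendix C.1 concrete, the sketch below shows one way to compute the average NLL of a sentence given a truncated history with an off-the-shelf autoregressive LM. This is a minimal illustration under stated assumptions, not the exact pipeline behind Figure 5: the use of GPT-2 as a stand-in model, the `sentence_nll` helper, and the whitespace joining of history sentences are all choices made for the example.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Assumption: GPT-2 stands in for whichever left-to-right LM is used for the linearity measure.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def sentence_nll(history: str, sentence: str) -> float:
    """Average NLL (nats per token) of `sentence` conditioned on a non-empty `history`."""
    hist_ids = tokenizer.encode(history)
    sent_ids = tokenizer.encode(" " + sentence)
    input_ids = torch.tensor([hist_ids + sent_ids])
    with torch.no_grad():
        logits = model(input_ids).logits[0]
    # The token at position i is predicted from all positions < i.
    log_probs = torch.log_softmax(logits[:-1], dim=-1)
    targets = input_ids[0, 1:]
    token_lp = log_probs[torch.arange(len(targets)), targets]
    # Keep only the positions that predict sentence (not history) tokens.
    return -token_lp[len(hist_ids) - 1:].mean().item()

def nll_with_history(summary, sentences, i, k):
    """NLL of sentence i conditioned on the summary plus the k most recent prior sentences."""
    history = " ".join([summary] + sentences[max(0, i - k):i])
    return sentence_nll(history, sentences[i])
```

Sweeping the history size k and averaging over stories, separately for recalled and imagined stories, would produce curves of the kind summarized in the Figure 5 caption above.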
2020
178
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1979–1990 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 1979 Recurrent Neural Network Language Models Always Learn English-Like Relative Clause Attachment Forrest Davis and Marten van Schijndel Department of Linguistics Cornell University {fd252|mv443}@cornell.edu Abstract A standard approach to evaluating language models analyzes how models assign probabilities to valid versus invalid syntactic constructions (i.e. is a grammatical sentence more probable than an ungrammatical sentence). Our work uses ambiguous relative clause attachment to extend such evaluations to cases of multiple simultaneous valid interpretations, where stark grammaticality differences are absent. We compare model performance in English and Spanish to show that non-linguistic biases in RNN LMs advantageously overlap with syntactic structure in English but not Spanish. Thus, English models may appear to acquire human-like syntactic preferences, while models trained on Spanish fail to acquire comparable human-like preferences. We conclude by relating these results to broader concerns about the relationship between comprehension (i.e. typical language model use cases) and production (which generates the training data for language models), suggesting that necessary linguistic biases are not present in the training signal at all. 1 Introduction Language modeling is widely used as pretraining for many tasks involving language processing (Peters et al., 2018; Radford et al., 2018; Devlin et al., 2019). Since such pretraining affects so many tasks, effective evaluations to assess model quality are critical. Researchers in the vein of the present study, typically take (pretrained) language models and ask whether those models have learned some linguistic phenomenon (e.g., subject-verb agreement). Often the task is operationalized as: do the models match some human baseline (e.g., acceptability judgments, reading times, comprehension questions) measured as humans experience this linguistic phenomenon (e.g., comparing acceptability ratings of sentences with grammatical/ungrammatical agreement). This approach tacitly assumes that the necessary linguistic biases are in the training signal and then asks whether the models learn the same abstract representations as humans given this signal. The present study casts doubt on the notion that the necessary linguistic biases are present in the training signal at all. We utilize the, now common, evaluation technique of checking whether a model assigns higher probability to grammatical sentences compared to ungrammatical sentences (Linzen et al., 2016). However, we extend beyond binary grammaticality. Real world applications demand that our models not only know the difference between valid and invalid sentences; they must also be able to correctly prioritize simultaneous valid interpretations (Lau et al., 2017). In this paper, we investigate whether neural networks can in fact prioritize simultaneous interpretations in a human-like way. In particular, we probe the biases of neural networks for ambiguous relative clause (RC) attachments, such as the following: (1) Andrew had dinner yesterday with the nephew of the teacher that was divorced. (from Fern´andez, 2003) In (1), there are two nominals (nephew and teacher) that are available for modification by the RC (that was divorced). We refer to attachment of the RC to the syntactically higher nominal (i.e. 
the nephew is divorced) as HIGH and attachment to the lower nominal (i.e. the teacher is divorced) as LOW. As both interpretations are equally semantically plausible when no supporting context is given, we might expect that humans choose between HIGH and LOW at chance. However, it has been widely established that English speakers tend to interpret the relative clause as modifying the lower nominal more often than the higher nominal (i.e. they 1980 have a LOW bias;1 Carreiras and Clifton Jr, 1993; Frazier and Clifton, 1996; Carreiras and Clifton, 1999; Fern´andez, 2003). LOW bias is actually typologically much rarer than HIGH bias (Brysbaert and Mitchell, 1996). A proto-typical example of a language with HIGH attachment bias is Spanish (see Carreiras and Clifton Jr, 1993; Carreiras and Clifton, 1999; Fern´andez, 2003). A growing body of literature has shown that English linguistic structures conveniently overlap with non-linguistic biases in neural language models leading to performance advantages for models of English, without such models being able to learn comparable structures in non-English-like languages (e.g., Dyer et al., 2019). This, coupled with recent work showing that such models have a strong recency bias (Ravfogel et al., 2019), suggests that one of these attachment types (LOW), will be more easily learned. Therefore, the models might appear to perform in a humanlike fashion on English, while failing on the crosslinguistically more common attachment preference (HIGH) found in Spanish. The present study investigates these concerns by first establishing, via a synthetic language experiment, that recurrent neural network (RNN) language models (LMs) are capable of learning either type of attachment (Section 4). However, we then demonstrate that these models consistently exhibit a LOW preference when trained on actual corpus data in multiple languages (English and Spanish; Sections 5–7). In comparing English and Spanish, we show that non-linguistic biases in RNN LMs overlap with interpretation biases in English to appear as though the models have acquired English syntax, while failing to acquire minimally different interpretation biases in Spanish. Concretely, English attachment preferences favor the most recent nominal, which aligns with a general preference in RNN LMs for attaching to the most recent nominal. In Spanish, this general recency preference in the models remains despite a HIGH attachment interpretation bias in humans. These results raise broader questions regarding the relationship between comprehension (i.e. typical language model use cases) and production (which generates the training data for language models) and point to a deeper inability of RNN LMs to learn aspects of linguistic structure from raw text alone. 1We use “bias” throughout this paper to refer to “interpretation bias.” We will return to the distinction between production bias and interpretation bias in Section 8. 2 Related Work Much recent work has probed RNN LMs for their ability to represent syntactic phenomena. In particular, subject-verb agreement has been explored extensively (e.g., Linzen et al., 2016; Bernardy and Lappin, 2017; Enguehard et al., 2017) with results at human level performance in some cases (Gulordava et al., 2018). However, additional studies have found that the models are unable to generalize sequential patterns to longer or shorter sequences that share the same abstract constructions (Trask et al., 2018; van Schijndel et al., 2019). 
This suggests that the learned syntactic representations are very brittle. Despite this brittleness, RNN LMs have been claimed to exhibit human-like behavior when processing garden path constructions (van Schijndel and Linzen, 2018; Futrell and Levy, 2019; Frank and Hoeks, 2019), reflexive pronouns and negative polarity items (Futrell et al., 2018), and center embedding and syntactic islands (Wilcox et al., 2019, 2018). There are some cases, like coordination islands, where RNN behavior is distinctly non-human (see Wilcox et al., 2018), but in general this literature suggests that RNN LMs encode some type of abstract syntactic representation (e.g., Prasad et al., 2019). Thus far though, the linguistic structures used to probe RNN LMs have often been those with unambiguously ungrammatical counterparts. This extends into the domain of semantics, where downstream evaluation platforms like GLUE and SuperGLUE evaluate LMs for correct vs. incorrect interpretations on tasks targeting language understanding (Wang et al., 2018, 2019). Some recent work has relaxed this binary distinction of correct vs. incorrect or grammatical vs. ungrammatical. Lau et al. (2017) correlate acceptability scores generated from a LM to average human acceptability ratings, suggesting that human-like gradient syntactic knowledge can be captured by such models. Futrell and Levy (2019) also look at gradient acceptability in both RNN LMs and humans, by focusing on alternations of syntactic constituency order (e.g., heavy NP shift, dative alternation). Their results suggest that RNN LMs acquire soft constraints on word ordering, like humans. However, the alternations in Futrell and Levy, while varying in their degree of acceptability, maintain the same syntactic relations throughout the alternation (e.g., gave a book to Tom and gave Tom a book both preserve the fact that Tom is the 1981 indirect object). Our work expands this line of research by probing how RNN LMs behave when multiple valid interpretations, with crucially different syntactic relations, are available within a single sentence. We find that RNN LMs do not resolve such ambiguity in a human-like way. There are, of course, a number of other modeling approaches that exist in the current literature; the most notable of these being BERT (Devlin et al., 2019). These transformer models have achieved high performance on a variety of natural language processing tasks, however, there are a number of properties that make them less suitable to this work. One immediate consideration is that of training. We are interested in the behavior of a class of models, so we analyze the behavior of several randomly initialized models. We do not know how representative BERT is of models of its same class, and training more BERT variants is immensely time consuming and environmentally detrimental (Strubell et al., 2019). Additionally, we are interested in probability distributions over individual words given the preceding context, something that is not part of BERT’s training as it takes whole sentences as input. Finally, the bidirectional nature of many of these models makes their representations difficult to compare to humans. For these reasons, we restrict our analyses to unidirectional RNN LMs. This necessarily reduces the generalizability of our claims. However, we still believe this work has broader implications for probing what aspects of linguistic representations neural networks can acquire using standard training data. 
3 Methods 3.1 Experimental Stimuli In the present study, we compare the attachment preferences of RNN LMs to those established in Fern´andez (2003). Fern´andez demonstrated that humans have consistent RC attachment biases using both self-paced reading and offline comprehension questions. They tested both English and Spanish monolinguals (along with bilinguals) using parallel stimuli across the two languages, which we adopt in the experiments in this paper.2 Specifically, Fern´andez (2003) included 24 items per language, 12 with a singular RC verb (was) and 12 with a plural RC verb (were). The English and 2All experimental stimuli and models used are available at https://github.com/forrestdavis/ AmbiAttach Spanish stimuli are translations of each other, so they stand as minimal pairs for attachment preferences. Example stimuli are given below. (2) a. Andrew had dinner yesterday with the nephew of the teachers that was divorced. b. Andrew had dinner yesterday with the nephews of the teacher that was divorced. c. Andr´e cen´o ayer con el sobrino de los maestros que estaba divorciado. d. Andr´e cen´o ayer con los sobrinos del maestro que estaba divorciado. The underlined nominal above marks the attachment point of the relative clause (that was divorced). (2-a) and (2-c) exhibit HIGH attachment, while (2-b) and (2-d) exhibit LOW attachment. Fern´andez found that English speakers had a LOW bias, preferring (2-b) over (2-a), while Spanish speakers had a HIGH bias, preferring (2-c) over (2-d). We ran two experiments per language,3 one a direct simulation of the experiment from Fern´andez (2003) and the other an extension (EXTENDED DATA), using a larger set of experimental stimuli. The direct simulation allowed us to compare the attachment preferences for RNN LMs to the experimental results for humans. The extension allowed us to confirm that any attachment preferences we observed were generalizable properties of these models. Specifically, the EXTENDED DATA set of stimuli included the English and Spanish stimuli from Carreiras and Clifton Jr (1993) in addition to the stimuli from Fern´andez (2003), for a total of 40 sentences. Next, we assigned part-of-speech tags to the English and Spanish LM training data using TreeTagger (Schmid, 1999). We filtered the tokens to the top 40 most frequent plural nouns, generating the singular forms from TreeTagger’s lemmatization. We then substituted into the test sentences all combinations of distinct nouns excluding reflexives. Then we appended a relative clause with either a singular or plural verb (was/were or 3The vocabulary of the models was constrained to the 50K most frequent words during training. Out-of-vocabulary nominals in the original stimuli were replaced with semantically similar nominals. In English, lid(s) to cover(s) and refill(s) to filler(s). In Spanish, sarc´ofago(s) to ata´ud(es), recambio(s) to sustituci´on(es), fregadero(s) to lavabo(s), ba´ul(es) to caja(s), cacerola(s) to platillo(s), and bol´ıgrafo(s) to pluma(s) 1982 estaba/estaban).4 Finally, each test stimulus in a pair had a LOW and HIGH attachment version for a total of 249600 sentences. An example of four sentences generated for English given the two nouns building and system is below. (3) a. Everybody ignored the system of the buildings that was b. Everybody ignored the systems of the building that was c. Everybody ignored the system of the buildings that were d. 
Everybody ignored the systems of the building that were

Not all combinations are semantically coherent; however, Gulordava et al. suggest that syntactic operations (e.g., subject-verb agreement) are still possible for RNN LMs with "completely meaningless" sentences (Gulordava et al., 2018, p. 2).

3.2 RNN LM Details

We analyzed long short-term memory networks (LSTMs; Hochreiter and Schmidhuber, 1997) throughout the present paper. For English, we used the English Wikipedia training data provided by Gulordava et al. (2018).5 For Spanish, we constructed a comparable training corpus from Spanish Wikipedia following the process used by Gulordava et al. (2018). A recent dump of Spanish Wikipedia was downloaded, raw text was extracted using WikiExtractor,6 and tokenization was done using TreeTagger. A 100-million word subset of the data was extracted, shuffled by sentence, and split into training (80%) and validation (10%) sets.7 For LM training, we included the 50K most frequent words in the vocabulary, replacing the other tokens with '⟨UNK⟩'. We used the best English model in Gulordava et al. (2018) and trained 4 additional models with the same architecture8 but different random initializations. There was no established Spanish model architecture, so we took the best Romance model architecture9 reported in Gulordava et al. (2018) and trained 5 models. All models used in this work were trained for 40 epochs with resultant mean validation perplexities and standard deviations in Table 1.

4 Since the unidirectional models are tested at the RC verb, we did not need to generate the rest of the sentence after that verb.
5 https://github.com/facebookresearch/colorlessgreenRNNs
6 https://github.com/attardi/wikiextractor
7 We also created a test partition (10% of our data), which we did not use in this work.
8 The models had 2 layers, 650 hidden/embedding units, batch size 128, dropout 0.2, and an initial learning rate of 20.

Language    µ      σ
Synthetic   4.62   0.03
English     51.83  0.96
Spanish     40.80  0.89

Table 1: Mean and standard deviation of LM validation perplexity for the synthetic models used in Section 4, the English models used in Section 5-6, and the Spanish models used in Section 7.

3.3 Measures

We evaluated the RNN LMs using information-theoretic surprisal (Shannon, 1948; Hale, 2001; Levy, 2008). Surprisal is defined as the inverse log probability assigned to each word (wi) in a sentence given the preceding context:

surprisal(wi) = −log p(wi | w1 ... wi−1)

The probability is calculated by applying the softmax function to an RNN's output layer. Surprisal has been correlated with human processing difficulty (Smith and Levy, 2013; Frank et al., 2015) allowing us to compare model behavior to human behavior. Each of the experiments done in this work looked at sentences that differed in the grammatical number of the nominals, repeated from Section 3.1 below.

(4) a. Andrew had dinner yesterday with the nephew of the teachers that was divorced.
    b. Andrew had dinner yesterday with the nephews of the teacher that was divorced.
    (from Fernández, 2003)

In (4-a) the RC verb (was) agrees with the HIGH nominal, while in (4-b) it agrees with the LOW nominal. As such, this minimal pair probes the interpretation bias induced by the relativizer (that). We measure the surprisal of the RC verb (was) in both sentences of the pair. If the model has a preference for LOW attachment, then we expect that the surprisal will be smaller when the number
9 They focused on Italian as a Romance language.
The models are the same as English except the batch size is 64. 1983 of the final noun agrees with the number of the RC verb (e.g., surprisal (4-b) < surprisal (4-a)). Concretely, for each such pair we take the difference in surprisal of the RC verb in the case of HIGH attachment (4-a) from the surprisal of the RC verb in the case of LOW attachment (4-b). If this difference (surprisal (4-a) - surprisal (4-b)) is positive, then the LM has a LOW bias, and if the difference is negative, the LM has a HIGH bias. 4 Attachment vs. Recency We begin with a proof of concept. It has been noted that RNN LMs have a strong recency bias (Ravfogel et al., 2019). As such, it could be possible that only one type of attachment, namely LOW attachment, is learnable. To investigate this possibility, we followed the methodology in McCoy et al. (2018) and constructed a synthetic language to control the distribution of RC attachment in two experiments. Our first experiment targeted the question: if all RC attachment is HIGH, how many RCs have to be observed in training in order for a HIGH bias to generalize to unseen data? Our second experiment targeted the question: what proportion of HIGH and LOW attachment is needed in training to learn a bias? Our synthetic language had RC attachment sentences and filler declarative sentences. The filler sentences follow the phrase structure template given in (5-a), while RC attachment sentences follow the phrase structure template given in (5-b). (5) a. D N (P D N) (Aux) V (D N) (P D N) b. D N Aux V D N ‘of’ D N ‘that’ ‘was/were’ V Material in parentheses was optional and so was not present in all filler stimuli. That is to say, all filler sentences had a subject (abbreviated D N) and a verb (abbreviated V), with the verb being optionally transitive and followed by a direct object (D N). The subject, object, or both could be modified by a prepositional phrase (P D N). The subject and object could be either singular or plural, with the optional auxiliary (Aux) agreeing in number with the subject. There were 30 nouns (N; 60 with plural forms), 2 auxiliaries (Aux; was/were and has/had), 1 determiner (D; the), 14 verbs (V), and 4 prepositions (P). An example filler sentence is given in (6-a), and an example RC sentence is given in (6-b). (6) a. The nephew near the children was seen by the players next to the lawyer. b. The gymnast has met the hostage of the women that was eating. We trained RNN LMs on our synthetic language using the same parameters as the English LMs given in Section 3.2, with 120,000 unique sentences in the training corpus. The resultant RNN LMs were tested on 300 sentences with ambiguous RC attachment, and we measured the surprisal at the RC auxiliary verb (was/were), following the methodology given in Section 3.3. To determine how many HIGH RCs were needed in training to learn a HIGH bias, we first constrained all the RC attachment in the training data to HIGH attachment. Then, we varied the proportion (in increments of 10 RC sentences at a time) of RC sentences to filler sentences during training. We trained 5 RNNs for each training configuration (i.e. each proportion of RCs). This experiment provided a lower bound on the number of HIGH RCs needed in the training data to overcome any RNN recency bias when all RCs exhibited HIGH attachment. 
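A minimal sketch of how such a controlled training corpus could be generated is given below. The toy vocabulary, the helper names, and the specific way number agreement disambiguates attachment (the two nominals differ in number and the RC auxiliary agrees with the intended host) are illustrative assumptions, not the exact generator used for the synthetic language.

```python
import random

# Toy lexicon standing in for the synthetic language (assumption: the real lexicon is larger).
NOUNS = {"nephew": "nephews", "teacher": "teachers", "hostage": "hostages", "gymnast": "gymnasts"}
VERBS = ["met", "seen", "ignored", "praised"]
RC_VERBS = ["eating", "sleeping", "reading"]

def rc_sentence(high_attachment: bool) -> str:
    """One unambiguous RC-attachment sentence: the two nominals differ in number,
    and the RC auxiliary (was/were) agrees with the intended attachment site."""
    subj = random.choice(list(NOUNS))
    n1_sg, n2_sg = random.sample(list(NOUNS), 2)
    n1_plural = random.random() < 0.5          # number of the HIGH nominal
    n1 = NOUNS[n1_sg] if n1_plural else n1_sg
    n2 = n2_sg if n1_plural else NOUNS[n2_sg]  # LOW nominal gets the opposite number
    host_plural = n1_plural if high_attachment else not n1_plural
    aux = "were" if host_plural else "was"
    return (f"the {subj} has {random.choice(VERBS)} the {n1} of the {n2} "
            f"that {aux} {random.choice(RC_VERBS)}")

def build_corpus(n_rc: int, prop_high: float, fillers: list) -> list:
    """Mix RC sentences (with a fixed HIGH proportion) into the filler sentences."""
    n_high = round(n_rc * prop_high)
    rcs = [rc_sentence(True) for _ in range(n_high)] + \
          [rc_sentence(False) for _ in range(n_rc - n_high)]
    corpus = fillers + rcs
    random.shuffle(corpus)
    return corpus
```

Varying `n_rc` and `prop_high` corresponds to the two manipulations described above: how many RCs appear in training, and what fraction of them exhibit HIGH attachment.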
When as little as 0.017% (20 sentences) of the data contained RCs with HIGH attachment, the test difference in surprisal between HIGH and LOW attachment significantly differed from zero (p < 10−5, BayesFactor (BF) > 100),10 with a mean difference less than zero (µ = −2.24). These results indicate that the models were able to acquire a HIGH bias with only 20/120000 examples of HIGH RC attachment. In practice, we would like LMs to learn a preference even when the training data contains a mixture of HIGH and LOW attachment. To determine the proportion of RCs that must be HIGH to learn a HIGH bias, we fixed 10% of the training data as unambiguous RC attachment. Within that 10%, we varied the proportion of HIGH and LOW attachment in 10% increments (i.e. 0% HIGH - 100% LOW, 10% HIGH - 90% LOW, etc). Once again, we trained 5 models on each training configuration and tested those models on 300 test sentences, measuring the surprisal at the RC verb. When 10To correct for multiple comparisons, a Bonferroni correction with m = 6 was used. Thus, the threshold for statistical significance was p = 0.0083. We also computed two-sample Bayes Factors (BF; Rouder et al., 2009) for each statistical analysis using ttestBF from the BayesFactor R package (Morey and Rouder, 2018). A Bayes Factor greater than 10 is significant evidence for the hypothesis, while one greater than 100 is highly significant. 1984 the training data had 50-100% HIGH attachment, the models preferred HIGH attachment in all the test sentences. Conversely, when the training data had 0-40% HIGH attachment, the models preferred LOW attachment in all test sentences. Taken together, the results from our synthetic language experiments suggest that HIGH attachment is indeed learnable by RNN LMs. In fact, an equal proportion of HIGH and LOW attachment in the training data is all that is needed for these models to acquire a general preference for HIGH attachment (contra to the recency bias reported in the literature). 5 English Experiments We turn now to model attachment preferences in English. We trained the models using English Wikipedia. We tested the attachment preferences of the RNN LMs using the original stimuli from Fern´andez (2003), and using a larger set of stimuli to have a better sense of model behavior on a wider range of stimuli. For space considerations, we only report here results of the EXTENDED DATA (the larger set of stimuli), but similar results hold for the Fern´andez (2003) stimuli (see Supplemental Materials). In order to compare the model results with the mean human interpretation results reported by Fern´andez (2003), we categorically coded the model response to each item for HIGH/LOW attachment preference. If model surprisal for LOW attachment was less than model surprisal for HIGH attachment, the attachment was coded as LOW. See Figure 1 for the comparison between RNNs and humans in English. Statistical robustness for our RNN results was determined using the original distribution of surprisal values. Specifically, a two-tailed t-test was conducted to see if the mean difference in surprisal differed from zero (i.e. the model has some attachment bias). This revealed a highly significant (p < 10−5, BF > 100) mean difference in surprisal of 0.77. This positive difference indicates that the RNN LMs have a consistent LOW bias, similar to English readers, across models trained with differing random seeds. 
There are two possible reasons for this patterning: (1) the models have learned a human-like LOW bias, or (2) the models have a recency bias that favors attachment to the lower nominal. These two hypotheses have overlapping predictions in Figure 1: Proportion HIGH vs LOW attachment in English. Human results from the original Fern´andez (2003) experiment and RNN LM results from EXTENDED DATA (derived from Fern´andez (2003) and Carreiras and Clifton Jr (1993)). English. The second hypothesis is perhaps weakened by the results of Section 4, where both attachment types were learnable despite any recency bias. However, we know that other syntactic attachment biases can influence RC attachment in humans (Scheepers, 2003). It could be that other kinds of attachment (such as prepositional phrase attachment) have varying proportions of attachment biases in the training data. Perhaps conflicting attachment biases across multiple constructions force the model to resort to the use of a ‘default’ recency bias in cases of ambiguity. 6 Syntactically blocking low attachment 6.1 Stimuli To determine whether the behavior of the RNNs is driven by a learned attachment preference or a strong recency bias, we created stimuli11 using the stimulus template described in Section 3.1 (e.g., (3)). All of these stimuli had only the higher nominal syntactically available for attachment; the lower nominal was blocked by the addition of a relative clause: (7) a. Everybody ignored the boy that the girls hated that was boring. b. *Everybody ignored the boys that the girl hated that was boring. In (7) only (7-a) is grammatical. This follows because boy(s) is the only nominal available for mod11As before, some of these stimuli are infelicitous. We do not concern ourselves with this distinction in the present work, given the results in Gulordava et al. (2018). 1985 Figure 2: Proportion HIGH vs LOW attachment with syntactically unavailable lower nominal. Human results estimated from Linzen and Leonard (2018) and RNN LM results from the EXTENDED DATA (derived from Fern´andez (2003) and Carreiras and Clifton Jr (1993)) with the lower nominal blocked. ification. In (7-a), the RC verb was agrees in number with this nominal, while in (7-b), was agrees in number with the now blocked lower nominal girl rather than with boys. For all such sentence pairs, we calculated the difference in surprisal between (7-a) and (7-b). If their behavior is driven by a legitimate syntactic attachment preference, the models should exhibit an overwhelming HIGH bias (i.e. the mean difference should be less than zero). 6.2 Results As before, the differences in surprisal were calculated for each pair of experimental items. If the difference was greater than zero, the attachment was coded as LOW. The results categorically coded for HIGH/LOW attachment are given in Figure 2, including the results expected for humans given the pattern in Linzen and Leonard (2018).12 A two-tailed t-test was conducted to see if the mean difference in surprisal differed from zero. The results were statistically significant (p < 10−5, BF > 100). The mean difference in surprisal was 1.15, however, suggesting that the models still had a LOW bias when the lower nominal was syntactically unavailable for attachment. This is in stark contrast to what one would expect if these models had learned the relationship between syntactic constituents and relative clause attachment. 
A possible 12Linzen and Leonard (2018) conducted experiments probing the agreement errors for subject-verb agreement with intervening RCs (and prepositional phrases). Our work is concerned with agreement between an object and its modifying RC. As such, their task serves as an approximate estimate of the errors we would expect for humans. Figure 3: Proportion HIGH vs LOW attachment in Spanish. Human results from the original Fern´andez (2003) experiment and RNN LM results from the EXTENDED DATA (derived from Fern´andez (2003) and Carreiras and Clifton Jr (1993)). alternative to the recency bias explanation is that RNN LMs might learn that there is a general LOW attachment bias in English and overgeneralize this pattern even in cases where one of the nominals is syntactically unavailable. 7 The case of default HIGH bias: Spanish Our English analyses suggest that RNN LMs either learn a general English LOW attachment preference that they apply in all contexts, or that they have a ‘default’ recency bias that prevents them from learning HIGH attachment preferences with more complex, naturalistic training data. In the case of the former, we would expect that models trained on a language whose speakers generally prefer HIGH attachment should be able to learn HIGH attachment. Spanish has a well-attested HIGH bias in humans (Carreiras and Clifton Jr, 1993; Carreiras and Clifton, 1999; Fern´andez, 2003) offering a way to distinguish between competing recency bias and over-generalization accounts. That is, if the models can learn a HIGH bias when trained on Spanish data, we should be able to conclude that the general LOW bias in English is being overgeneralized by the RNNs to corner cases where HIGH bias should be preferred. 7.1 Results As before, the differences in surprisal were calculated for each pair of experimental items. If the difference was greater than zero, the attachment was coded as LOW. Two sample t-tests were conducted to see if the mean difference in surprisal differed 1986 significantly from zero for both the direct simulation of Fern´andez (2003) and the EXTENDED DATA that included the stimuli derived from Carreiras and Clifton Jr (1993). The results categorically coded for HIGH/LOW attachment for the extended stimulus set are given in Figure 3, alongside the human results reported in Fern´andez (2003). For the direct simulation, the mean did not differ significantly from 0 (BF < 1/3). This suggests that there is no attachment bias for the Spanish models for the stimuli from Fern´andez (2003), contrary to the human results. For the extended set of stimuli, the results were significant (p < 10−5, BF > 100) with a mean difference greater than zero (µ = 0.211). Thus, rather than a HIGH bias, as we would expect, the RNN LMs once again had a LOW bias. 8 Discussion In this work, we explored the ability of RNN LMs to prioritize multiple simultaneous valid interpretations in a human-like way (as in John met the student of the teacher that was happy). While both LOW attachment (i.e. the teacher was happy) and HIGH attachment (i.e. the student was happy) are equally semantically plausible without a disambiguating context, humans have interpretation preferences for one attachment over the other (e.g., English speakers prefer LOW attachment and Spanish speakers prefer HIGH attachment). Given the recent body of literature suggesting that RNN LMs have learned abstract syntactic representations, we tested the hypothesis that these models acquire human-like attachment preferences. 
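The coding and significance test used in these analyses can be summarized with the short sketch below. It assumes the per-item surprisals at the RC verb have already been computed; the helper name and the use of scipy's one-sample t-test are illustrative choices (the Bayes factors reported in this paper come from the BayesFactor R package and are not reproduced here).

```python
import numpy as np
from scipy import stats

def attachment_bias(surprisal_high: np.ndarray, surprisal_low: np.ndarray) -> dict:
    """Per-item difference: surprisal(HIGH-agreeing RC verb) - surprisal(LOW-agreeing RC verb).
    Positive differences are coded as a LOW preference, negative as HIGH."""
    diffs = surprisal_high - surprisal_low
    coded_low = diffs > 0                      # categorical coding used for the plots
    t, p = stats.ttest_1samp(diffs, 0.0)       # two-tailed test of the mean difference vs. zero
    return {"mean_diff": diffs.mean(), "prop_low": coded_low.mean(), "t": t, "p": p}

# Example with made-up numbers: a positive mean difference indicates a LOW bias.
rng = np.random.default_rng(0)
fake_high = rng.normal(5.0, 1.0, size=300)
fake_low = rng.normal(4.2, 1.0, size=300)
print(attachment_bias(fake_high, fake_low))
```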
We found that they do not. We first used a synthetic language experiment to demonstrate that RNN LMs are capable of learning a HIGH bias when HIGH attachment is at least as frequent as LOW attachment in the training data. These results suggest that any recency bias in RNN LMs is weak enough to be easily overcome by sufficient evidence of HIGH attachment. In English, the RNNs exhibited a human-like LOW bias, but this preference persisted even in cases where LOW attachment was ungrammatical. To test whether the RNNs were over-learning a general LOW bias of English, we tested whether Spanish RNNs learned the general HIGH bias in that language. Once again, RNN LMs favored LOW attachment over HIGH attachment. The inability of RNN LMs to learn the Spanish HIGH attachment preference suggests that the Spanish data may not contain enough HIGH examples to learn human-like attachment preferences. In post-hoc analyses of the Spanish Wikipedia training corpus and the AnCora Spanish newswire corpus (Taul´e et al., 2008), we find a consistent production bias towards LOW attachment among the RCs with unambiguous attachment. In Spanish Wikipedia, LOW attachment is 69% more frequent than HIGH attachment, and in Spanish newswire data, LOW attachment is 21% more frequent than HIGH attachment.13 This distributional bias in favor of LOW attachment does not rule out a subsequent HIGH RC bias in the models. It has been established in the psycholinguistic literature that attachment is learned by humans as a general abstract feature of language (see Scheepers, 2003). In other words, human syntactic representations of attachment overlap, with prepositional attachment influencing relative clause attachment, etc. These relationships could coalesce during training and result in an attachment preference that differs from any one structure individually. However, it is clear that whatever attachment biases exist in the data are insufficient for RNNs to learn a human-like attachment preference in Spanish. This provides compelling evidence that standard training data itself may systematically lack aspects of syntax relevant to performing linguistic comprehension tasks. We suspect that there are deep systematic issues leading to this mismatch between the expected distribution of human attachment preferences and the actual distribution of attachment in the Spanish training corpus. Experimental findings from psycholinguistics suggest that this issue could follow from a more general mismatch between language production and language comprehension. In particular, Kehler and Rohde (2015, 2018) have provided empirical evidence that the production and comprehension of these structures are guided by different biases in humans. Production is guided by syntactic and information structural considerations (e.g., topic), while comprehension is influenced by those considerations plus pragmatic and discourse factors (e.g., coherence relations). As such, the biases in language production are a proper subset of those of language comprehension. As it stands now, RNN LMs are typically trained on production data 13https://github.com/ UniversalDependencies/UD_Spanish-AnCora 1987 (that is, the produced text in Wikipedia).14 Thus, they will have access to only a subset of the biases needed to learn human-like attachment preferences. In its strongest form, this hypothesis suggests that no amount of production data (i.e. raw text) will ever be sufficient for these models to generalizably pattern like humans during comprehension tasks. 
The mismatch between human interpretation biases and production biases suggested by this work invalidates the tacit assumption in much of the natural language processing literature that standard, production-based training data (e.g., web text) are representative of the linguistic biases needed for natural language understanding and generation. There are phenomena, like agreement, that seem to have robust manifestations in a production signal, but the present work demonstrates that there are others, like attachment preferences, that do not. We speculate that the difference may lie in the inherent ambiguity in attachment, while agreement explicitly disambiguates a relation between two syntactic units. This discrepancy is likely the reason that simply adding more data doesn’t improve model quality (e.g., van Schijndel et al., 2019; Bisk et al., 2020). Future work needs to be done to understand more fully what biases are present in the data and learned by language models. Although our work raises questions about mismatches between human syntactic knowledge and the linguistic representations acquired by neural language models, it also shows that researchers can fruitfully use sentences with multiple interpretations to probe the linguistic representations acquired by those models. Before now, evaluations have focused on cases of unambiguous grammaticality (i.e. ungrammatical vs. grammatical). By using stimuli with multiple simultaneous valid interpretations, we found that evaluating models on single-interpretation sentences overestimates their ability to comprehend abstract syntax. Acknowledgments We would like to thank members of the NLP group and the C.Psyd lab at Cornell University, and the Altmann and Yee labs at University of Connecticut, who gave feedback on an earlier form of this work. We would also like to thank the three anonymous reviewers and Yonatan Belinkov. Special thanks go 14Some limited work has explored training models with human comprehension data with positive results (Klerke et al., 2016; Barrett et al., 2018). to Dorit Abusch and John Whitman for invaluable suggestions and feedback, and Laure Thompson for comments on an earlier draft. References Maria Barrett, Joachim Bingel, Nora Hollenstein, Marek Rei, and Anders Søgaard. 2018. Sequence classification with human attention. In Proceedings of the 22nd Conference on Computational Natural Language Learning, pages 302–312, Brussels, Belgium. Association for Computational Linguistics. Jean-Philippe Bernardy and Shalom Lappin. 2017. Using deep neural networks to learn syntactic agreement. Linguistic Issues in Language Technology (LiLT), 15. Yonatan Bisk, Ari Holtzman, Jesse Thomason, Jacob Andreas, Yoshua Bengio, Joyce Chai, Mirella Lapata, Angeliki Lazaridou, Jonathan May, Aleksandr Nisnevich, Nicolas Pinto, and Joseph Turian. 2020. Experience grounds language. arXiv preprint arXiv:2004.10151. Marc Brysbaert and Don C Mitchell. 1996. Modifier attachment in sentence parsing: Evidence from dutch. The Quarterly Journal of Experimental Psychology Section A, 49(3):664–695. Manuel Carreiras and Charles Clifton. 1999. Another word on parsing relative clauses: Eyetracking evidence from Spanish and English. Memory & Cognition, 27(5):826–833. Manuel Carreiras and Charles Clifton Jr. 1993. Relative clause interpretation preferences in Spanish and English. Language and Speech, 36(4):353–372. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. 
BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Association for Computational Linguistics. Chris Dyer, G´abor Melis, and Phil Blunsom. 2019. A critical analysis of biased parsers in unsupervised parsing. arXiv preprint arXiv:1909.09428. ´Emile Enguehard, Yoav Goldberg, and Tal Linzen. 2017. Exploring the syntactic abilities of RNNs with multi-task learning. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 3–14. Association for Computational Linguistics. Eva M. Fern´andez. 2003. Bilingual sentence processing: Relative clause attachment in English and Spanish. John Benjamins Publishing, Amsteradam. 1988 Stefan L Frank and John Hoeks. 2019. The interaction between structure and meaning in sentence comprehension: Recurrent neural networks and reading times. PsyArXiv preprint:10.31234. Stefan L. Frank, Leun J. Otten, Giulia Galli, and Gabriella Vigliocco. 2015. The ERP response to the amount of information conveyed by words in sentences. Brain & Language, 140:1–11. Lyn Frazier and Charles Clifton. 1996. Construal. MIT Press, Cambridge, Mass. Richard Futrell and Roger Levy. 2019. Do RNNs learn human-like abstract word order preferences? In Proceedings of the Society for Computation in Linguistics, volume 2, pages 50–59. Richard Futrell, Ethan Wilcox, Takashi Morita, and Roger Levy. 2018. RNNs as psycholinguistic subjects: Syntactic state and grammatical dependency. arXiv preprint arXiv:1809.01329. Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, and Marco Baroni. 2018. Colorless green recurrent networks dream hierarchically. In Proceedings of the 2018 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Association for Computational Linguistics. John Hale. 2001. A probabilistic earley parser as a psycholinguistic model. In Proceedings of the second meeting of the North American Chapter of the Association for Computational Linguistics on Language technologies, pages 1–8. Association for Computational Linguistics. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780. Andrew Kehler and Hannah Rohde. 2015. Pronominal reference and pragmatic enrichment: A bayesian account. In CogSci. Andrew Kehler and Hannah Rohde. 2018. Prominence and coherence in a bayesian theory of pronoun interpretation. Journal of Pragmatics. Sigrid Klerke, Yoav Goldberg, and Anders Søgaard. 2016. Improving sentence compression by learning to predict gaze. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1528–1533, San Diego, California. Association for Computational Linguistics. Jey Han Lau, Alexander Clark, and Shalom Lappin. 2017. Grammaticality, acceptability, and probability: A probabilistic view of linguistic knowledge. Cognitive Science, 41:1202–1241. Roger Levy. 2008. Expectation-based syntactic comprehension. Cognition, 106(3):1126–1177. Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of LSTMs to learn syntax-sensitive dependencies. Transactions of the Association for Computational Linguistics, 4:521– 535. Tal Linzen and Brian Leonard. 2018. Distinct patterns of syntactic agreement errors in recurrent networks and humans. 
In Proceedings of the 2018 Annual Meeting of the Cognitive Science Society, pages 690– 695. Cognitive Science Society. R Thomas McCoy, Robert Frank, and Tal Linzen. 2018. Revisiting the poverty of the stimulus: hierarchical generalization without a hierarchical bias in recurrent neural networks. In Proceedings of the 40th Annual Conference of the Cognitive Science Society. Richard D. Morey and Jeffrey N. Rouder. 2018. BayesFactor: Computation of Bayes Factors for Common Designs. R package version 0.9.12-4.2. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Association for Computational Linguistics. Grusha Prasad, Marten van Schijndel, and Tal Linzen. 2019. Using priming to uncover the organization of syntactic representations in neural language models. In Proceedings of the 23rd Conference on Computational Natural Language Learning. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. Technical report, OpenAI. Shauli Ravfogel, Yoav Goldberg, and Tal Linzen. 2019. Studying the inductive biases of RNNs with synthetic variations of natural languages. In Proceedings of NAACL-HLT. Jeffrey N. Rouder, Paul L. Speckman, Dongchu Sun, Richard D. Morey, and Geoffrey Iverson. 2009. Bayesian t-tests for accepting and rejecting the null hypothesis. Psychonomic Bulletin & Review, 16(2):225–237. Christoph Scheepers. 2003. Syntactic priming of relative clause attachments: Persistence of structural configuration in sentence production. Cognition, 89(3):179–205. Helmut Schmid. 1999. Improvements in part-ofspeech tagging with an application to German. In Natural language processing using very large corpora, pages 13–25. Springer. Claude Shannon. 1948. A mathematical theory of communication. Bell System Technical Journal, 27:379– 423, 623–656. 1989 Nathaniel J Smith and Roger Levy. 2013. The effect of word predictability on reading time is logarithmic. Cognition, 128(3):302–319. Emma Strubell, Ananya Ganesh, and Andrew McCallum. 2019. Energy and policy considerations for deep learning in NLP. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Mariona Taul´e, M. Ant`onia Mart´ı, and Marta Recasens. 2008. AnCora: Multilevel annotated corpora for catalan and spanish. In Proceedings of the Sixth International Conference on Language Resources and Evaluation. Andrew Trask, Felix Hill, Scott E Reed, Jack Rae, Chris Dyer, and Phil Blunsom. 2018. Neural arithmetic logic units. In Advances in Neural Information Processing Systems, pages 8035–8044. Marten van Schijndel and Tal Linzen. 2018. Modeling garden path effects without explicit hierarchical syntax. In Proceedings of the 40th Annual Meeting of the Cognitive Science Society. Marten van Schijndel, Aaron Mueller, and Tal Linzen. 2019. Quantity doesn’t buy quality syntax with neural language models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics. Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019. SuperGLUE: A stickier benchmark for general-purpose language understanding systems. In Advances in Neural Information Processing Systems, pages 3261–3275. 
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP. Ethan Wilcox, Roger Levy, and Richard Futrell. 2018. What Syntactic Structures block Dependencies in RNN Language Models? In Proceedings of the 41st Annual Meeting of the Cognitive Science Society. Ethan Wilcox, Roger Levy, and Richard Futrell. 2019. Hierarchical representation in neural language models: Suppression and recovery of expectations. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP. A Fern´andez (2003) Replications A.1 English We compute RNN surprisal for each experimental item from Fern´andez (2003) as detailed in Section Figure 4: Proportion HIGH vs LOW attachment in English. Human results from the original Fern´andez (2003) experiment and RNN LM results from the stimuli from Fern´andez (2003). 3.3 in the paper. The results coded for HIGH/LOW attachment are given in Figure 4, including the results for humans reported by Fern´andez (2003). While these categorical results enable easier comparison to the human results reported in the literature, statistical robustness was determined using the original distribution of surprisal values. Specifically, a two-tailed t-test was conducted to see if the mean difference in surprisal differed from zero (i.e. the model has some attachment bias). The result is highly significant (p < 10−5, Bayes Factor (BF) > 100) with a mean surprisal difference of µ = 0.66. This positive difference suggests that the RNN LMs have a LOW bias, similar to English readers. Figure 5: Proportion HIGH vs LOW attachment in Spanish. Human results from the original Fern´andez (2003) experiment and RNN LM results from the stimuli from Fern´andez (2003). 1990 A.2 Spanish The results coded for HIGH/LOW attachment for the Spanish replication are given in Figure 5, including the human results reported by Fern´andez (2003). The mean did not differ significantly from 0 (BF < 1/3). This suggests that there is no attachment bias for the Spanish models for the stimuli from Fern´andez (2003), contrary to the human results.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 183–190 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 183 Few-Shot NLG with Pre-Trained Language Model Zhiyu Chen1, Harini Eavani2, Wenhu Chen1, Yinyin Liu2, and William Yang Wang1 1University of California, Santa Barbara 2Intel AI {zhiyuchen, wenhuchen, william}@cs.ucsb.edu, {harini.eavani, yinyin.liu}@intel.com Abstract Neural-based end-to-end approaches to natural language generation (NLG) from structured data or knowledge are data-hungry, making their adoption for real-world applications difficult with limited data. In this work, we propose the new task of few-shot natural language generation. Motivated by how humans tend to summarize tabular data, we propose a simple yet effective approach and show that it not only demonstrates strong performance but also provides good generalization across domains. The design of the model architecture is based on two aspects: content selection from input data and language modeling to compose coherent sentences, which can be acquired from prior knowledge. With just 200 training examples, across multiple domains, we show that our approach achieves very reasonable performances and outperforms the strongest baseline by an average of over 8.0 BLEU points improvement. Our code and data can be found at https: //github.com/czyssrs/Few-Shot-NLG 1 Introduction Natural language generation (NLG) from structured data or knowledge (Gatt and Krahmer, 2018) is an important research problem for various NLP applications. Some examples are taskoriented dialog, question answering (He et al., 2017; Ghazvininejad et al., 2018; Su et al., 2016; Saha et al., 2018; Yin et al., 2016) and interdisciplinary applications such as medicine (Hasan and Farri, 2019; Cawsey et al., 1997) and healthcare (Hasan and Farri, 2019; DiMarco et al., 2007). There is great potential to use automatic NLG systems in a wide range of real-life applications. Recently, deep neural network based NLG systems have been developed, such as those seen in the E2E challenge (Novikova et al., 2017), WEATHERGOV (Liang et al., 2009), as well as more complex ones such as WIKIBIO (Liu et al., 2018) and ROTOWIRE (Wiseman et al., 2017). Compared to traditional slot-filling pipeline approaches, such neural-based systems greatly reduce feature engineering efforts and improve text diversity as well as fluency. Although they achieve good performance on benchmarks such as E2E challenge (Novikova et al., 2017) and WIKIBIO (Lebret et al., 2016), their performance depends on large training datasets, e.g., 500k table-text training pairs for WIKIBIO (Lebret et al., 2016) in a single domain. Such data-hungry nature makes neural-based NLG systems difficult to be widely adopted in real-world applications as they have significant manual data curation overhead. This leads us to formulate an interesting research question: 1. Can we significantly reduce human annotation effort to achieve reasonable performance using neural NLG models? 2. Can we make the best of generative pre-training, as prior knowledge, to generate text from structured data? Motivated by this, we propose the new task of fewshot natural language generation: given only a handful of labeled instances (e.g., 50 - 200 training instances), the system is required to produce satisfactory text outputs (e.g., BLEU ≥20). To the best of our knowledge, such a problem in NLG community still remains under-explored. 
Herein, we propose a simple yet very effective approach that can generalize across different domains. In general, to describe information in a table, we need two skills to compose coherent and faithful sentences. One skill is to select and copy factual content from the table - this can be learned quickly by reading a handful of tables. The other is to compose grammatically correct sentences that bring those facts together - this skill is not re184 Input Table Attribute (R) Value (V) Name Walter Extra Nationality German Occupation Aircraft designer and manufacturer ... ... Table encoder Attention weights Walter Extra is ... Pre-trained Language Model Walter Extra German name name nationaltily table values attribute names position information Walter Extra is a … ... The swicth policy name name -- -- ... -- -- ... Matching Figure 1: Overview of our approach: Under the base framework with switch policy, the pre-trained language model serves as the generator. We follow the same encoder as in (Liu et al., 2018). The architecture is simple in terms of both implementation and parameter space that needs to be learned from scratch, which should not be large given the few-shot learning setting. stricted to any domain. One can think of a latent “switch” that helps us alternate between these two skills to produce factually correct and coherent sentences. To do this, we use the pre-trained language model (Chelba et al., 2013; Radford et al., 2019) as the innate language skill, which provides strong prior knowledge on how to compose fluent and coherent sentences. The ability to switch and select/copy from tables can be learned successfully using only a few training instances, freeing the neural NLG model from data-intensive training. Previous best performing methods based on large training data, such as (Liu et al., 2018), which does not apply such switch mechanism but trains a strong domain-specific language model, perform very poorly under few-shot setting. Since we are operating under a highly datarestricted few-shot regime, we strive for simplicity of model architecture. This simplicity also implies better generalizability and reproducibility for realworld applications. We crawl multi-domain tableto-text data from Wikipedia as our training/test instances. With just 200 training instances, our method can achieve very reasonable performance. In a nutshell, our contributions are summarized as the following: • We propose the new research problem of fewshot NLG, which has great potential to benefit a wide range of real-world applications. • To study different algorithms for our proposed problem, we create a multi-domain table-totext dataset. • Our proposed algorithm can make use of the external resources as prior knowledge to significantly decrease human annotation effort and improve the baseline performance by an average of over 8.0 BLEU on various domains. 2 Related Work 2.1 NLG from Structured Data As it is a core objective in many NLP applications, natural language generation from structured data/knowledge (NLG) has been studied for many years. Early traditional NLG systems follow the pipeline paradigm that explicitly divides generation into content selection, macro/micro planning and surface realization (Reiter and Dale, 1997). Such a pipeline paradigm largely relies on templates and hand-engineered features. Many works have been proposed to tackle the individual modules, such as (Liang et al., 2009; Walker et al., 2001; Lu et al., 2009). 
Later works (Konstas and Lapata, 2012, 2013) investigated modeling context selection and surface realization in an unified framework. Most recently, with the success of deep neural networks, data-driven, neural based approaches have been used, including the end-to-end methods that jointly model context selection and surface realization (Liu et al., 2018; Wiseman et al., 2018; Puduppully et al., 2018). Such data-driven approaches achieve good performance on several benchmarks like E2E challenge (Novikova et al., 2017), WebNLG challenge (Gardent et al., 2017) and WIKIBIO (Lebret et al., 2016). However, they rely on massive amount of training data. ElSahar et al. (2018) propose zero-shot learning for question generation from knowledge graphs, but their work applies on the transfer learning setting for unseen knowledge base types, based on seen ones and their textual contexts, which still requires large in-domain training dataset. This is different from our few-shot learning setting. Ma et al. (2019) propose low-resource table-to-text generation with 185 1,000 paired examples and large-scale target-side examples. In contrast, in our setting, only tens to hundreds of paired training examples are required, meanwhile without the need for any target examples. This is especially important for real-world use cases where such large target-side gold references are mostly hard to obtain. Therefore, our task is more challenging and closer to real-world settings. 2.2 Large Scale Pre-Trained Models Many of the current best-performing methods for various NLP tasks adopt a combination of pretraining followed by supervised fine-tuning, using task-specific data. Different levels of pre-training include word embeddings (Mikolov et al., 2013; Pennington et al., 2014; Peters et al., 2018), sentence embeddings (Le and Mikolov, 2014; Kiros et al., 2015), and most recently, language modeling based pre-training like BERT (Devlin et al., 2018) and GPT-2 (Radford et al., 2019). Such models are pre-trained on large-scale open-domain corpora, and provide down-streaming tasks with rich prior knowledge while boosting their performance. In this paper, we adopt the idea of employing a pre-trained language model to endow in-domain NLG models with language modeling ability, which cannot be well learned from few shot training instances. 3 Method 3.1 Problem Formulation We are provided with semi-structured data: a table of attribute-value pairs {Ri : Vi}n i=1. Both Ri and Vi can be either a string/number, a phrase or a sentence. Each value is represented as a sequence of words Vi = {vj}m j=1. For each word vj, we have its corresponding attribute name Ri and position information of the word in the value sequence. The target is to generate a natural language description based on the semi-structured data, provided with only a handful of training instances. 3.2 Base Framework with Switch Policy We start with the field-gated dual attention model proposed in (Liu et al., 2018), which achieves state-of-the-art performance (BLEU) on WIKIBIO dataset. Their method uses an LSTM decoder with dual attention weights. We first apply a switch policy that decouples the framework into table content selection/copying and language model based generation. Inspired by the pointer generator (See et al., 2017), at each time step, we maintain a soft switch pcopy to choose between generating from softmax over vocabulary or copying from input table values with the attention weights as the probability distribution. 
pcopy = sigmoid(Wcct + Wsst + Wxxt + b) Where ct = P i ai thi, {hi} is the encoder hidden states, xt, st, at is the decoder input, state and attention weights respectively at time step t. Wc, Ws, Wx and b are trainable parameters. The pointer generator learns to alternate between copying and generating based on large training data and shows its advantage of copying out-ofvocabulary words from input. In our task, the training data is very limited, and many of the table values are not OOV. We need to explicitly “teach” the model where to copy and where to generate. Therefore, to provide the model accurate guidance of the behavior of the switch, we match the target text with input table values to get the positions of where to copy. At these positions, we maximize the copy probability pcopy via an additional loss term. Our loss function: L = Lc + λ X wj∈m m∈{Vi} (1 −pj copy) Where Lc is the original loss between model outputs and target texts. wj is the target token at position j, {Vi} is the input table value list defined in Section 3.1, and m means a matched phrase. λ is hyperparameter as the weight for this copy loss term. We also concatenate the decoder input with its matched attribute name and position information in the input table as xt to calculate pcopy . 3.3 Pre-Trained LM as Generator We use a pre-trained language model as the generator, serving as the “innate language skill”. Due to the vocabulary limitation of few training instances, we leave the pre-trained word embedding fixed while fine-tuning other parameters of the pretrained language model, so that it can generalize with tokens unseen during training. Figure 1 shows our model architecture. We use the pre-trained language model GPT-21 proposed in (Radford et al., 2019), which is a 12-layer transformer. The final hidden state of the transformer is used to calculate attention weights and the copy 1https://github.com/openai/gpt-2 186 Domain Humans Books Songs # of training instances 50 100 200 500 50 100 200 500 50 100 200 500 Template 16.3 25.6 30.1 Base-original 2.2 3.7 4.9 5.1 5.8 6.1 7.4 6.7 9.2 10.7 11.1 11.3 Base 2.9 5.1 6.1 8.3 7.3 6.8 7.8 8.8 10.4 12.0 11.6 13.1 Base + switch 15.6 17.8 21.3 26.2 24.7 26.9 30.5 33.2 29.7 30.6 32.5 34.9 Base + switch + LM-scratch 6.6 11.5 15.3 18.6 7.1 9.2 14.9 21.8 11.6 16.2 20.6 23.7 Base + switch + LM (Ours) 25.7 29.5 36.1 41.7 34.3 36.2 37.9 40.3 36.1 37.2 39.4 42.2 Table 1: BLEU-4 results on three domains. Base-original: the original method in (Liu et al., 2018); Base: applies pre-trained word embedding; Base+switch: adds the switch policy; Base+switch+LM-scratch: makes the same architecture as our method, but trains the model from scratch without pre-trained weights for the generator. Template: manually crafted templates switch pcopy. We first feed the embedded attributevalue list serving as the context for generation. In this architecture, the generator is fine-tuned from pre-trained parameters while the encoder and attention part is learned from scratch, the initial geometry of the two sides are different. Therefore we need to apply larger weight to the copy loss pcopy, to give the model a stronger signal to “teach” it to copy facts from the input table. 4 Experiment 4.1 Datasets and Experiment Setup The original WIKIBIO dataset (Lebret et al., 2016) contains 700k English Wikipedia articles of wellknown humans, with the Wiki infobox serving as input structured data and the first sentence of the article serving as target text. 
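As a concrete illustration of the switch policy and copy loss defined in Sections 3.2 and 3.3, the following PyTorch-style sketch shows one way the pieces fit together. It is not the released implementation; the module names, tensor shapes, and the mixture helper are illustrative assumptions, and only the copy-loss weight λ = 0.7 comes from the paper.

```python
# Illustrative sketch of the switch policy and copy loss (Section 3.2); not the released code.
# Assumed shapes: context_vec, dec_state: (batch, time, hidden); dec_input: (batch, time, input).
import torch
import torch.nn as nn

class CopySwitch(nn.Module):
    def __init__(self, hidden_dim: int, input_dim: int):
        super().__init__()
        # W_c, W_s, W_x and b from the p_copy equation above.
        self.w_c = nn.Linear(hidden_dim, 1, bias=False)
        self.w_s = nn.Linear(hidden_dim, 1, bias=False)
        self.w_x = nn.Linear(input_dim, 1, bias=True)

    def forward(self, context_vec, dec_state, dec_input):
        # p_copy = sigmoid(W_c c_t + W_s s_t + W_x x_t + b)
        logit = self.w_c(context_vec) + self.w_s(dec_state) + self.w_x(dec_input)
        return torch.sigmoid(logit).squeeze(-1)  # (batch, time)

def mixed_output_distribution(p_copy, vocab_probs, copy_probs):
    # Soft interpolation between generating from the vocabulary softmax and
    # copying from input table values with the attention weights as probabilities.
    return (1.0 - p_copy).unsqueeze(-1) * vocab_probs + p_copy.unsqueeze(-1) * copy_probs

def loss_with_copy_term(gen_loss, p_copy, copy_mask, lam=0.7):
    # L = L_c + lambda * sum over matched positions of (1 - p_copy), where
    # copy_mask is 1.0 at target positions matched against input table values.
    return gen_loss + lam * ((1.0 - p_copy) * copy_mask).sum()
```

In this sketch, copy_mask would be obtained by string-matching the target text against the input table values, as described in Section 3.2.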
To demonstrate generalizability, we collect datasets from two new domains: Books and Songs by crawling Wikipedia pages. After filtering and cleanup, we end up with 23,651 instances for Books domain and 39,450 instances for Songs domain2. Together with the Humans domain of the original WIKIBIO dataset, for all three domains we conduct experiments by varying the training dataset size to 50, 100, 200 and 500. The rest of data is used for validation (1,000) and testing. The weight λ of the copy loss term is set to 0.7. Other parameter settings can be found in Appendix A. To deal with vocabulary limitation of few-shot training, for all models we adopt the Byte Pair Encoding (BPE) (Sennrich et al., 2016) and subword vocabulary in (Radford et al., 2019). We compare the proposed method with other approaches investigated in Section 3, serving as the baselines - Base-original: the original model 2Note that the target text sometimes contains information not in the infobox. This is out of the scope of the fewshot generation in this work. Therefore we further filter the datasets and remove the ones with rare words out of infobox. Check (Dhingra et al., 2019) for a related study of this issue on the WikiBio dataset in (Liu et al., 2018); Base: uses the same architecture, but in addition applies the pre-trained word embedding and fix it during training; Base + switch: adds the switch policy; Base + switch + LM-scratch: makes the architecture same as our method, except training the model from scratch instead of using pre-trained weights for generator. Template: template-based non-neural approach, manually crafted for each domain. 4.2 Results and Analysis Following previous work (Liu et al., 2018), we first conduct automatic evaluations using BLEU4, shown in Table 1. The ROUGE-4 (F-measure) results follow the same trend with BLEU-4 results, which we show in Appendix B. As we can see, the original model Baseoriginal (Liu et al., 2018), which obtains the stateof-the-art result on WIKIBIO full set, performs very poorly under few-shot setting. It generates all tokens from softmax over vocabulary, which results in severe overfitting with limited training data, and the results are far behind the template-based baseline. With the switch policy, Base+switch first brings an improvement of an average of over 10.0 BLEU points. This indicates that the content selection ability is easier to be learned with a handful of training instances. However, it forms very limited, not fluent sentences. With the augmentation of the pre-trained language model, our model Base+switch+LM brings one more significant improvement of an average over 8.0 BLEU points. We provide sample outputs of these methods using 200 training instances in Table 2. Table 3 shows the effect of the copy switch loss pcopy introduced in Section 3.2, giving the model a stronger signal to learn to copy from input table. Ma et al. (2019) propose the Pivot model, for low-resource NLG with 1,000 paired examples and large-scale target-side examples. We compare our 187 Attribute Value Attribute Value name andri ibo fullname andri ibo birth date 3 april 1990 birth place sentani , jayapura , indonesia height 173 cm currentclub persipura jayapura position defender ... Gold Reference: andri ibo ( born april 3 , 1990 ) is an indonesian footballer who currently plays for persipura jayapura in the indonesia super league . 
Generated texts of different methods Base: vasco emanuel freitas ( born december 20 , 1992 in kong kong ) is a hong kussian football player and currently plays for hong kong first division league side tsw pegasus . Base+switch: andri ibo andri ibo ( 3 april 1990 ) is a international cricketer . Base+switch+LM (Ours): andri ibo ( born 3 april 1990 ) is an indonesian football defender , who currently plays for persipura jayapura . Table 2: A sample input table and generated summaries from the test set of Humans domain, using 200 training instances # of training instances 50 100 200 500 Base + switch + LM 25.7 29.5 36.1 41.7 - w/o copy loss pcopy 21.4 25.5 31.3 38.0 Table 3: Ablation study: Effect of the copy loss term on Humans domain, measured by BLEU-4. The loss term brings an average improvement of over 4.0 BLEU points. method with the Pivot model in table 4. Note that here we train and evaluate the models on the original WikiBio dataset used in their work, in order to maintain the size of the target side examples for their settings. # of paired training instances 50 100 200 500 1000 Pivot 7.0 10.2 16.8 20.3 27.3 Ours 17.2 23.8 25.4 28.6 31.2 Table 4: Comparison with the Pivot model (Ma et al., 2019). Compared to their method using additional large-scale target side examples, our method requires no additional target side data, while achieving better performance. Human Evaluation We also conduct human evaluation studies using Amazon Mechanical Turk, based on two aspects: Factual correctness and Language naturalness. We evaluate 500 samples. Each evaluation unit is assigned to 3 workers to eliminate human variance. The first study attempts to evaluate how well the generated text correctly conveys information in the table, by counting the number of facts in the text supported by the table, and contradicting with or missing from the table. The 2nd and 3rd columns of Table 5 show the average number of supporting and contradicting facts for our method, comparing to the strongest baseline and the gold reference. The second study evaluates whether the generated text is grammatically correct and fluent, regardless of factual correctness. We conduct pairwise comparison among all methods, and calculate the average times each method is chosen to be better than another, shown in the 4th column of Table 5. Our method brings a significant improvement over the strongest baseline (p < 0.01 in Tukey’s HSD test for all measures). The copy loss term further alleviates producing incorrect facts. The language naturalness result of our method without the copy loss is slightly better, because this evaluation does not consider factual correctness; thus the generated texts with more wrong facts can still get high score. See Appendix C for more details of our evaluation procedure. # Supp. # Cont. Lan. Score Gold Reference 4.25 0.84 1.85 Base + switch 2.57 2.17 0.93 Base + switch + LM (ours) 3.64 1.12 1.59 - w/o copy loss pcopy 3.54 1.30 1.63 Table 5: Human evaluation results: Average number of supporting facts (column 2, the larger the better), contradicting facts (column 3, the smaller the better), and language naturalness score (column 4, the larger the better). 5 Conclusion In this paper, we propose the new research problem of few-shot natural language generation. Our approach is simple, easy to implement, while achieving strong performance on various domains. 
Our basic idea of acquiring language modeling prior can be potentially extended to a broader scope of generation tasks, based on various input structured data, such as knowledge graphs, SQL queries, etc. The deduction of manual data curation efforts for such tasks is of great potential and importance for many real-world applications. Acknowledgment We thank the anonymous reviewers for their thoughtful comments. We thank Shuming Ma for releasing the processed data and code for the Pivot model. This research was supported by the Intel AI Faculty Research Grant. The authors are solely responsible for the contents of the paper and the opinions expressed in this publication do not reflect those of the funding agencies. 188 References Alison J Cawsey, Bonnie L Webber, and Ray B Jones. 1997. Natural language generation in health care. Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robinson. 2013. One billion word benchmark for measuring progress in statistical language modeling. arXiv preprint arXiv:1312.3005. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Bhuwan Dhingra, Manaal Faruqui, Ankur P. Parikh, Ming-Wei Chang, Dipanjan Das, and William W. Cohen. 2019. Handling divergent reference texts when evaluating table-to-text generation. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 4884–4895. Association for Computational Linguistics. Chrysanne DiMarco, HDominic Covvey, D Cowan, V DiCiccio, E Hovy, J Lipa, D Mulholland, et al. 2007. The development of a natural language generation system for personalized e-health information. In Medinfo 2007: Proceedings of the 12th World Congress on Health (Medical) Informatics; Building Sustainable Health Systems, page 2339. IOS Press. Hady ElSahar, Christophe Gravier, and Fr´ed´erique Laforest. 2018. Zero-shot question generation from knowledge graphs for unseen predicates and entity types. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers), pages 218–228. Claire Gardent, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. 2017. The webnlg challenge: Generating text from RDF data. In Proceedings of the 10th International Conference on Natural Language Generation, INLG 2017, Santiago de Compostela, Spain, September 4-7, 2017, pages 124–133. Albert Gatt and Emiel Krahmer. 2018. Survey of the state of the art in natural language generation: Core tasks, applications and evaluation. Journal of Artificial Intelligence Research, 61:65–170. Marjan Ghazvininejad, Chris Brockett, Ming-Wei Chang, Bill Dolan, Jianfeng Gao, Wen-tau Yih, and Michel Galley. 2018. A knowledge-grounded neural conversation model. In Proceedings of the ThirtySecond AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 5110–5117. Sadid A Hasan and Oladimeji Farri. 2019. Clinical natural language processing with deep learning. In Data Science for Healthcare, pages 147–171. Springer. 
He He, Anusha Balakrishnan, Mihail Eric, and Percy Liang. 2017. Learning symmetric collaborative dialogue agents with dynamic knowledge graph embeddings. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pages 1766–1776. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Ryan Kiros, Yukun Zhu, Ruslan Salakhutdinov, Richard S. Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 3294–3302. Ioannis Konstas and Mirella Lapata. 2012. Unsupervised concept-to-text generation with hypergraphs. In Human Language Technologies: Conference of the North American Chapter of the Association of Computational Linguistics, Proceedings, June 3-8, 2012, Montr´eal, Canada, pages 752–761. Ioannis Konstas and Mirella Lapata. 2013. A global model for concept-to-text generation. J. Artif. Intell. Res., 48:305–346. Quoc V. Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In Proceedings of the 31th International Conference on Machine Learning, ICML 2014, Beijing, China, 2126 June 2014, pages 1188–1196. R´emi Lebret, David Grangier, and Michael Auli. 2016. Neural text generation from structured data with application to the biography domain. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 1203–1213. Percy Liang, Michael I. Jordan, and Dan Klein. 2009. Learning semantic correspondences with less supervision. In ACL 2009, Proceedings of the 47th Annual Meeting of the Association for Computational Linguistics and the 4th International Joint Conference on Natural Language Processing of the AFNLP, 2-7 August 2009, Singapore, pages 91–99. Tianyu Liu, Kexiang Wang, Lei Sha, Baobao Chang, and Zhifang Sui. 2018. Table-to-text generation by structure-aware seq2seq learning. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and 189 the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 4881– 4888. Wei Lu, Hwee Tou Ng, and Wee Sun Lee. 2009. Natural language generation with tree conditional random fields. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, EMNLP 2009, 6-7 August 2009, Singapore, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 400–409. Shuming Ma, Pengcheng Yang, Tianyu Liu, Peng Li, Jie Zhou, and Xu Sun. 2019. Key fact as pivot: A two-stage model for low resource table-to-text generation. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 2047–2057. Association for Computational Linguistics. Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. 
In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States., pages 3111– 3119. Jekaterina Novikova, Ondrej Dusek, and Verena Rieser. 2017. The E2E dataset: New challenges for endto-end generation. In Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue, Saarbr¨ucken, Germany, August 15-17, 2017, pages 201–206. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1532–1543. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers), pages 2227–2237. Ratish Puduppully, Li Dong, and Mirella Lapata. 2018. Data-to-text generation with content selection and planning. CoRR, abs/1809.00582. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Ehud Reiter and Robert Dale. 1997. Building applied natural language generation systems. Natural Language Engineering, 3(1):57–87. Amrita Saha, Vardaan Pahuja, Mitesh M. Khapra, Karthik Sankaranarayanan, and Sarath Chandar. 2018. Complex sequential question answering: Towards learning to converse over linked question answer pairs with a knowledge graph. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 705– 713. Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointergenerator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 August 4, Volume 1: Long Papers, pages 1073–1083. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. Yu Su, Huan Sun, Brian M. Sadler, Mudhakar Srivatsa, Izzeddin Gur, Zenghui Yan, and Xifeng Yan. 2016. On generating characteristic-rich question sets for QA evaluation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 562–572. Marilyn A. Walker, Owen Rambow, and Monica Rogati. 2001. Spot: A trainable sentence planner. In Language Technologies 2001: The Second Meeting of the North American Chapter of the Association for Computational Linguistics, NAACL 2001, Pittsburgh, PA, USA, June 2-7, 2001. Sam Wiseman, Stuart M. Shieber, and Alexander M. Rush. 2017. Challenges in data-to-document generation. 
In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 911, 2017, pages 2253–2263. Sam Wiseman, Stuart M. Shieber, and Alexander M. Rush. 2018. Learning neural templates for text generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 3174–3187. Jun Yin, Xin Jiang, Zhengdong Lu, Lifeng Shang, Hang Li, and Xiaoming Li. 2016. Neural generative question answering. In Proceedings of the TwentyFifth International Joint Conference on Artificial Intelligence, IJCAI 2016, New York, NY, USA, 9-15 July 2016, pages 2972–2978. 190 Appendix A. Implementation Details We use the Adam optimizer (Kingma and Ba, 2015) with learning rate set to 0.0003. The mini-batch size is set to 40 and the weight λ of the copy loss term to 0.7. The dimension of the position embedding is set to 5. For attribute name with multiple words, we average their word embeddings as the attribute name embedding. Refer to our released code and data at https://github.com/czyssrs/ Few-Shot-NLG for more details. Appendix B. ROUGE-4 Results Following previous work (Liu et al., 2018), we conduct automatic evaluations using BLEU-4 and ROUGE-4 (F-measure)3. Table 6, 7 and 8 show the ROUGE-4 results for three domains Humans, Books and Songs, respectively. Domain Humans # of training instances 50 100 200 500 Template 5.1 Base-original 0.1 0.4 0.5 0.6 Base 0.1 0.4 0.8 1.5 Base+switch 4.9 6.3 9.8 12.5 Base+switch+LM-scratch 1.0 2.8 4.7 7.1 Base+switch+LM (Ours) 14.1 16.2 22.1 28.3 Table 6: ROUGE-4 results on Humans domain Domain Books # of training instances 50 100 200 500 Template 15.0 Base-original 1.1 1.6 2.1 1.5 Base 1.7 1.5 2.1 2.4 Base+switch 12.8 15.0 18.1 20.7 Base+switch+LM-scratch 2.4 4.2 6.5 10.7 Base+switch+LM (Ours) 22.5 23.1 25.0 27.6 Table 7: ROUGE-4 results on Books domain Appendix C. Human Evaluation Details We conduct human evaluation studies using Amazon Mechanical Turk, based on two aspects: Factual correctness and Language naturalness. For both studies, we evaluate the results trained with 200 training instances of Humans domain. We randomly sample 500 instances from the test set, together with the texts generated with different meth3We use standard scripts NIST mteval-v13a.pl (for BLEU), and rouge-1.5.5 (for ROUGE) Domain Songs # of training instances 50 100 200 500 Template 24.5 Base-original 3.4 4.2 4.7 4.8 Base 4.1 5.1 4.7 5.8 Base+switch 20.2 21.7 23.2 24.8 Base+switch+LM-scratch 5.4 8.0 12.0 15.0 Base+switch+LM (Ours) 26.2 28.6 30.1 32.6 Table 8: ROUGE-4 results on Songs domain ods. Each evaluation unit is assigned to 3 workers to eliminate human variance. The first study attempts to evaluate how well a generated text can correctly convey information in the table. Each worker is present with both the input table and a generated text, and asked to count how many facts in the generated text are supported by the table, and how many are contradicting with or missing from the table, similar as in (Wiseman et al., 2017). The we calculate the average number of supporting and contradicting facts for the texts generated by each method. The second study aims to evaluate whether the generated text is grammatically correct and fluent in terms of language, regardless of factual correctness. 
Each worker is present with a pair of texts generated from the same input table, by two different methods, then asked to select the better one only according to language naturalness, or “Tied” if the two texts are of equal quality. The input table is not shown to the workers. Each time a generated text is chosen as the better one, we assign score of 1.0. If two texts are tied, we assign 0.5 for each. We then calculate the average score for the texts generated by each method, indicating its superiority in pairwise comparisons with all other methods. The significance test is conducted respectively on all three measures: number of supporting facts and number of contradicting facts for the first study; the assigned score for the second study. We use the Tukey HSD post-hoc analysis of an ANOVA with the worker’s response as the dependent variable, the method and worker id as independent variables.
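The pairwise naturalness scoring described above reduces to a simple tally. The sketch below shows the scoring rule under an assumed (method_a, method_b, winner) layout for the raw worker responses; the significance testing itself used the Tukey HSD analysis just described.

```python
# Sketch of the language-naturalness scoring from Appendix C (assumed data layout).
from collections import defaultdict

def naturalness_scores(pairwise_votes):
    """pairwise_votes: iterable of (method_a, method_b, winner), where winner is
    method_a, method_b, or "tied". Returns the average score per method."""
    totals, counts = defaultdict(float), defaultdict(int)
    for method_a, method_b, winner in pairwise_votes:
        if winner == "tied":
            totals[method_a] += 0.5
            totals[method_b] += 0.5
        else:
            totals[winner] += 1.0
        counts[method_a] += 1
        counts[method_b] += 1
    return {m: totals[m] / counts[m] for m in counts}

# Hypothetical worker responses for one evaluation unit:
votes = [("ours", "base+switch", "ours"),
         ("ours", "gold", "tied"),
         ("base+switch", "gold", "gold")]
print(naturalness_scores(votes))
```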
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1991–2002 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 1991 Speakers Enhance Contextually Confusable Words Eric Meinhardt Department of Linguistics UC San Diego La Jolla, CA, USA 92093 [email protected] Eric Baković Department of Linguistics UC San Diego La Jolla, CA, USA 92093 [email protected] Leon Bergen Department of Linguistics UC San Diego La Jolla, CA, USA 92093 [email protected] Abstract Recent work has found evidence that natural languages are shaped by pressures for efficient communication — e.g. the more contextually predictable a word is, the fewer speech sounds or syllables it has (Piantadosi et al. 2011). Research on the degree to which speech and language are shaped by pressures for effective communication — robustness in the face of noise and uncertainty — has been more equivocal. We develop a measure of contextual confusability during word recognition based on psychoacoustic data. Applying this measure to naturalistic speech corpora, we find evidence suggesting that speakers alter their productions to make contextually more confusable words easier to understand. 1 Introduction A major open question in the study of natural languages is the extent to which pressures for efficient communication shape the online production choices of speakers or the system of forms and form-meaning mappings. Zipf (1936, 1949) famously noted that highly frequent words tend to be shorter and hypothesized that this could be explained in terms of pressures for efficient communication: the average cost of producing a word is lower than it would be otherwise. More recent work has formalized hypotheses about the effect of communicative pressures on language usage and design using tools from information theory (Shannon 1948, Cover and Thomas 2012) and rational analysis (Anderson 1990, 1991). This work has found evidence that meanings are allocated to word types in a way that minimizes speaker effort (Piantadosi et al. 2011, 2012), and that this appears to be at least partly explainable by online production choices (Mahowald et al. 2013). While this research offers evidence that lexicons and the production choices of speakers are shaped by pressures for efficient communication, other work examining how much words and lexicons are shaped by pressures for ensuring effective communication in the face of noise and uncertainty has been more equivocal. This work has found evidence that words with greater neighborhood size or density — that is, words that have a greater number of similar-sounding neighbors — have faster onset of production, and have lower overall durations. Words with greater neighborhood density also take longer for listeners to recognize and comprehend, and have less acoustically distinctive vowels (Vitevitch 2002, Gahl et al. 2012; see Vitevitch and Luce 2016 for review). This work provides a challenge for communicatively-oriented models of production: words with greater numbers of similar-sounding neighbors seem likely to be more confusable, and therefore speakers would be predicted to decrease the likelihood of noise by, e.g., increasing their duration. However, this work does not directly estimate word confusability, instead using neighborhood density or an acoustic similarity measure as a proxy. It remains possible that greater word confusability is associated with phonetic enhancement, and that a more direct measure of confusability would reveal this relationship. 
In this paper, we present a measure of relative word confusability based on both a language model and psychoacoustic data, and we examine how well it predicts word durations in natural speech corpora. This measure differs from neighborhood density in three ways: 1) it is sensitive to edit type; 2) it considers words with edit distance greater than 1; and 3) it takes into account top-down expectations. The structure of the paper is as follows. We first present a derivation of a Bayesian model of word recognition (broadly similar to Norris and McQueen 2008) that incorporates both linguistic context and a model of noise estimated from the 1992 gating data of Warner et al. (2014). We use this speech recognition model to define a measure of confusability, and apply this measure to content words in the NXT-annotated subset of the Switchboard corpus and in the Buckeye corpus (Calhoun et al. 2010, Pitt et al. 2005). We provide evidence that greater confusability is associated with longer duration. 1.1 Related work A number of other studies have examined how language is shaped by pressures for communication in the presence of noise. Dautriche et al. (2017) examines whether the words of natural lexicons are dispersed, as would be predicted if these lexicons are optimized to prevent confusions between different words. This work finds that in fact lexicons exhibit clear tendencies towards being clumpier rather than dispersed. The current study follows previous work in using the phenomena of reduction and enhancement to investigate whether communication is optimized for robustness to noise. Speech tokens that are produced with shorter than usual duration, or with parts omitted or made less distinctive, are said to be reduced, and those tokens produced with longer durations or produced more distinctively are enhanced. Previous work has provided evidence that reduction and enhancement are influenced by contextual predictability. Words, syllables, and segments that are more contextually predictable tend to be reduced and those that are less contextually predictable tend to be enhanced (see e.g. Van Son et al. 1998, Van Son and Pols 2003, Jurafsky et al. 2001, Aylett and Turk 2004, 2006, Cohen Priva 2008, 2012, 2015, Seyfarth 2014, Demberg et al. 2012, Pate and Goldwater 2015, Buz et al. 2016, Turnbull et al. 2018; see Bell et al. 2009, Jaeger and Buz 2018 for reviews). According to a communicatively-oriented account, this is explainable as balancing efficiency against effectiveness: speakers economize on production cost the more that context facilitates accurate listener inference of the speaker’s intent. Other work has investigated the effects of environmental noise on speech production. This includes work investigating whether speakers modulate their productions in response to overt signals of communication difficulty, e.g. loud environments or talking to listeners who are children, elderly, or non-native speakers (Lombard 1911, Uther et al. 2007, Picheny et al. 1986). 2 A model of word confusability We propose a simplified model of word confusability, in which there are two factors that will make word 𝑣in context 𝑐more vs. less confusable. On the one hand, a listener who has observed context 𝑐has some ‘top-down’ beliefs and expectations about what 𝑣will be before the speaker produces any acoustics for 𝑣. 
On the other hand, once the speaker has produced acoustics for 𝑣, there will be (in general ambiguous) ‘bottom-up’ acoustic cues that will usually underdetermine what the speaker’s choice of 𝑣actually was. The goal of the listener is then to combine their top-down expectations with their bottom-up observations to reason about which words are more vs. less likely to have been what the speaker intended.1 We operationalize the perceptibility of word 𝑣as the probability that the listener accurately recovers this word in situations where the speaker uses it; the confusability of a word is inversely related to its perceptibility. If a speaker has a model of the expected confusability of a given word, they can then decide to lengthen or shorten their particular production of the word token, balancing listener comprehension and their own effort. 2.1 Model definition To model the in-context confusability of word tokens, we model the task of word recognition as one of Bayesian inference, with the following underlying generative process for the speaker: 1. At some point in time, the speaker has already produced some existing sentential context 𝑐, consisting of a sequence of orthographic words. We assume for simplicity and tractability that the listener knows exactly what this context is at each timestep. 2. The speaker produces the current word 𝑣— e.g. cigarette. We model this as sampling according to a language model 𝑝𝐿: 𝑣∼𝑝𝐿(⋅|𝑐). 3. The speaker determines the segment sequence 𝑥1∶𝑓= (𝑥1, ..., 𝑥𝑓) corresponding to their word choice. For example, the speaker will determine that the segments [sIg@ôEt] correspond to the word cigarette. 1Note that of the two basic factors integrated here, previous probabilistic work on reduction has been limited to using only ‘top-down’ expectations. 1993 In our corpora, there is a unique correct segment sequence for a given orthographic word. For ease of exposition, we therefore identify 𝑥1∶𝑓with its corresponding orthographic form 𝑣. Abusing notation, we will write 𝑝𝐿(𝑥1∶𝑓|𝑐) for the distribution over segmental forms induced by the language model.2 4. The listener receives a segment sequence 𝑦1∶𝑓 = (𝑦1, ..., 𝑦𝑓) — e.g. [SIg@ôEt] (‘shigarette’) — drawn from a channel distribution 𝑝𝑁conditioned on the speaker’s intended segment sequence: 𝑦1∶𝑓∼𝑝𝑁(⋅|𝑥1∶𝑓). This represents the effects of noise on the signal received by the listener. The task of the listener is to then combine their observation (represented here by 𝑦1∶𝑓) with their prior expectations about which words are likely given the context. The listener tries to determine how likely each wordform in the lexicon is to have been the one intended by the speaker. Their posterior belief 𝑝LISTENER about which segmental wordform 𝑥1∶𝑓was intended is described by Bayes’ rule: 𝑝LISTENER(𝑥1∶𝑓|𝑦1∶𝑓, 𝑐) (1) = 𝑝𝑁(𝑦1∶𝑓|𝑥1∶𝑓)𝑝𝐿(𝑥1∶𝑓|𝑐) 𝑝(𝑦1∶𝑓|𝑐) (2) = 𝑝𝑁(𝑦1∶𝑓|𝑥1∶𝑓)𝑝𝐿(𝑥1∶𝑓|𝑐) ∑ 𝑥′ 1∶𝑓 𝑝𝑁(𝑦1∶𝑓|𝑥′ 1∶𝑓)𝑝𝐿(𝑥′ 1∶𝑓|𝑐) (3) Suppose for example that the listener perceives 𝑦1∶𝑓=[SIg@ôEt]. Their beliefs about the lexicon 𝑝𝐿(𝑋1∶𝑓|𝐶) will tell them that this is not a valid segmental wordform, but that [sIg@ôEt] is a valid wordform. Their beliefs about the noise distribution for the language 𝑝𝑁(𝑌1∶𝑓|𝑋1∶𝑓) tell them that 𝑥𝑗=[s] is a plausible segment to be misperceived as 𝑦𝑗=[S]; together this suggests that a good explanation of their percept is the intended wordform 𝑥1∶𝑓=[sIg@ôEt]. Equation 1 allows us to measure how accurately the listener will be able to reconstruct the speaker’s intended message, given a perceived segmental wordform 𝑦1∶𝑓. 
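A toy numerical version of this posterior computation, with a made-up lexicon, prior, and channel probabilities standing in for the estimated language and noise models, is sketched below.

```python
# Toy illustration of the listener posterior in Equations 1-3; all numbers are made up.
def listener_posterior(percept, candidates, prior, noise):
    """prior[x]      ~ p_L(x | c), the contextual language-model probability.
    noise[(x, y)]    ~ p_N(y | x), the channel probability of perceiving y given x.
    Returns p_LISTENER(x | y, c) for each candidate wordform x."""
    unnorm = {x: noise.get((x, percept), 0.0) * prior[x] for x in candidates}
    z = sum(unnorm.values())
    return {x: p / z for x, p in unnorm.items()}

candidates = ["cigarette", "charade"]          # hypothetical competing wordforms
prior = {"cigarette": 0.02, "charade": 0.001}  # hypothetical contextual prior
noise = {("cigarette", "shigarette"): 0.05,    # hypothetical channel probabilities
         ("charade", "shigarette"): 0.002}
print(listener_posterior("shigarette", candidates, prior, noise))
```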
However, this is not sufficient to determine the confusability of an intended wordform. In general, an intended wordform 𝑥1∶𝑓may give rise to many different perceived wordforms 𝑦1∶𝑓as a result of noise. In order to measure 2This notation ignores homophony, though the model is in fact sensitive to this. its confusability, we therefore need to marginalize over the possible perceived segment sequences. We define the contextual perceptibility of a segmental wordform 𝑥1∶𝑓in context 𝑐to be the expected probability that the listener accurately recovers it: 𝔼 𝑦1∶𝑓∼𝑝𝑁(⋅|𝑥1∶𝑓) 𝑝LISTENER(𝑥1∶𝑓|𝑦1∶𝑓, 𝑐) (4) = ∑ 𝑦1∶𝑓 𝑝LISTENER(𝑥1∶𝑓|𝑦1∶𝑓, 𝑐)𝑝𝑁(𝑦1∶𝑓|𝑥1∶𝑓) (5) The space of all possible channel strings 𝑦1∶𝑓 grows exponentially in sequence length 𝑓. However, each segment is only substantially confusable with a small number of other segments and the probability of more than a small number of channel errors is small. We therefore approximated Eq. 4 with a Monte Carlo estimator: 𝔼 𝑦1∶𝑓∼𝑝𝑁(⋅|𝑥1∶𝑓) 𝑝LISTENER(𝑥1∶𝑓|𝑦1∶𝑓, 𝑐) (6) ≈1 𝑛 𝑛 ∑ 𝑖=1 𝑝LISTENER(𝑥1∶𝑓|𝑦𝑖 1∶𝑓, 𝑐) (7) 𝑦𝑖 1∶𝑓∼𝑝𝑁(⋅|𝑥1∶𝑓) (8) We choose 𝑛=1000 to balance the variance and computational feasibility of the estimator. Finally, following the reasoning given in Levy (2005, 2008b), we take the negative logarithm of this quantity and arrive at a surprisal, which represents the contextual confusability of segment sequence 𝑥1∶𝑓in context 𝑐:3 ℎ(𝑥1∶𝑓|𝑥1∶𝑓, 𝑐) (9) = −log 𝔼 𝑦1∶𝑓∼𝑝𝑁(⋅|𝑥1∶𝑓) 𝑝LISTENER(𝑥1∶𝑓|𝑦1∶𝑓, 𝑐) (10) 3 Materials and methods We make use of two types of data: psychoacoustic gating data for estimating a noise model, and several corpora of natural speech for evaluating whether individuals increase the duration of more confusable words. 3.1 Words duration data Word durations were analyzed separately in two spoken corpora of American English: the Buckeye Corpus of Conversational Speech (Pitt et al. 3Compare Equations 4–9 with Eq. VII of Levy (2008a), a study of sentence-level confusability. 1994 2005) and the NXT Switchboard Annotations (Calhoun et al. 2010), a richly annotated subset of Switchboard-1 Release 2 (Godfrey and Holliman 1997). The Buckeye Corpus contains about 300,000 word tokens, taken from interviews with 40 speakers from central Ohio. Word durations for the present study were taken from the timestamps provided for word-level annotations. Each word token had a broad transcription uniform across all instances of the word type and a second, tokenspecific close transcription created by a human annotator. The Switchboard Corpus contains transcripts of telephone conversations between strangers. The NXT annotated subset includes about 830,000 word tokens from 642 conversations between 358 speakers recruited from all areas of the United States. Word durations for the present study were taken from the ‘phonological word’-level timestamps; these were the result of annotator-checked and -corrected timestamps initially made by alignment software. Each phonological word was also associated with a segmental transcription that was uniform across all instances of the word type. Exclusion criteria almost exactly follow Seyfarth (2014) for the reasons cited there. These criteria are mainly designed to exclude non-content words and words whose pronunciation is likely affected by disfluencies or prosodic structure. Our criteria only diverge in the following manner: Word tokens were excluded if the utterance speech rate (total number of syllables / length of the utterance in seconds) was more than 3 standard deviations from the speaker mean (vs. 
2.5 in Seyfarth 2014). After exclusion criteria were applied, about 44,000 (4,900) and 113,000 (8,900) word tokens (word types) remained in the Buckeye and NXT Switchboard corpora, respectively. 3.2 Diphone gating data The model of word confusability was based on the diphone gating experiment data of Warner et al. (2014). Participants listened to gated intervals of every phonotactically licit diphone of (western) American English and attempted to identify the full diphone they thought was being produced during the interval. Along with earlier work by some of the same researchers on Dutch (Smits et al. 2003, Warner et al. 2005), this represents by far the richest and most comprehensive acoustic confusion matrix data of its kind. Warner et al. (2014) identified all adjacent pairs of segments within and between words based on an electronic pronouncing dictionary of about 20,000 American English wordforms. A set of approximately 2,000 phonotactically licit diphones were extracted from this transcribed lexicon. At least one stimulus nonsense word was created per diphone by inserting the diphone into an environment consisting of at most one syllable on the left and at most one syllable on the right. A recording of each stimulus wordform was then marked up with (generally) six temporal gates. For each stimulus wordform, one recording was created for each gate, starting at the beginning of the original recording and going all the way up to a gate location, followed by a ramping procedure (rather than truncation or white noise) to avoid systematically biasing confusion data. In each trial, participants heard a gated stimulus recording.4 If the recording included a preceding context, this context was displayed on the screen. The participant then selected the stimulus diphone they thought was in the recording (i.e. not including context). From this response data, each gate of each stimulus diphone can be associated with a frequency distribution over response diphones. Only the response data for gates corresponding to the end of each segment of the diphone were used in the current study. For each of Buckeye and NXT Switchboard, the segment inventories of the gating data and of each speech corpus had to be projected down to a common set of segments. In each case, this involved collapsing the distinction in the corpora between syllabic and non-syllabic nasal stops. For reasons of data sparsity, the distinction between stressed and unstressed versions of any given vowel was also collapsed. 3.3 Language model Our measure of contextual confusability uses a language model to compute the prior probability of a word in context. We estimate a language model from the Fisher corpus (Cieri et al. 2004), a speech corpus matched for genre and register to Buckeye and Switchboard. This corpus contains about 12 million (orthographic) word tokens taken from nearly 6000 short conversations, each on one of 4See Grosjean (1980) for reference on the gating paradigm. 1995 about 100 topics. We estimated n-gram models of several orders from the Fisher corpus using KenLM (Heafield 2011).5 The n-gram order was treated as a hyperparameter, and selected on the Training Set, as described below. An add-1 smoothed unigram model was also created from word frequencies in the Fisher corpus using SRILM (Stolcke 2002, Stolcke et al. 2011). 3.4 Channel model The channel model describes the conditional distribution 𝑝𝑁(𝑌1∶𝑓|𝑋1∶𝑓) over what sequence of segments 𝑦1∶𝑓a listener will perceive (e.g. 
[SIg@ôEt], shigarette) given the full intended sequence 𝑥1∶𝑓 (e.g. [sIg@ôEt], cigarette). We estimate this distribution using the diphone gating data in Section 3.2. We make the simplifying assumption that the channel distribution for segment 𝑦𝑖is conditionally independent of all other 𝑦𝑗(𝑗≠𝑖) given intended segments 𝑥𝑖−1, 𝑥𝑖, 𝑥𝑖+1. By conditioning on adjacent segments, we can capture some effects of coarticulation on confusability. For example, nasals before oral stops are systematically likely to be misheard as having the same place of articulation as the stop: 𝑥1∶𝑓=[AnpA] (alveolar nasal before labial stop) is more likely to be misperceived as 𝑦1∶𝑓=[AmpA] (a labial nasal) than the reverse, and a confusion of [n] for [m] is comparatively less likely when [n] is between vowels as in [AnA] (Ohala 1990). For each gate 𝑔∈{3, 6} and for each diphone 𝑥1𝑥2, the response data from Section 3.2 induce a conditional frequency distribution over channel diphones 𝑓𝑔(𝑦1, 𝑦2|𝑥1, 𝑥2). These frequency distributions were smoothed by adding a pseudocount to every channel diphone in every distribution; the distributions were then normalized to define a smoothed pair of diphone-to-diphone channel distributions 𝑝𝑔(𝑦1, 𝑦2|𝑥1, 𝑥2). From the marginals of these distributions we constructed an approximation (Eq. 11) of the triphone-to-uniphone channel distribution via their geometric mean:6 ̃𝑝𝑡(𝑦𝑖|𝑥𝑖−1, 𝑥𝑖, 𝑥𝑖+1) ∝ √ 𝑝3(𝑦𝑖|𝑥𝑖−1, 𝑥𝑖) ⋅𝑝6(𝑦𝑖|𝑥𝑖, 𝑥𝑖+1) (11) 5We do not use lower-perplexity neural language models due to intractability resulting from the normalizing constant in Equations 3 and 4. 6We stop short of utilizing a full triphone-to-triphone channel distribution for tractability. With the simplifying assumption that only substitution errors are possible,7 we obtain a preliminary string-to-string channel model: ̃𝑝𝑁(𝑦1∶𝑓|𝑥1∶𝑓) = 𝑗=𝑓 ∏ 𝑗=1 ̃𝑝𝑡(𝑦𝑗|𝑥𝑗−1, 𝑥𝑗, 𝑥𝑗+1) (12) We are primarily interested in using the channel model to define a ranking on the confusability of words, i.e. to determine which words are more or less confusable than others. This makes the channel model defined by Equations 11 and 12 not fully adequate. The diphone gating data were collected in a laboratory setting with rates of noise lower than for naturalistic speech. As a result, when the noise model is estimated from this data, it implies the absolute rate of accurate perception (as defined by Equation 3) is close to 1 for most words. This makes it hard for the Monte Carlo estimator defined in Equation 7 to determine stable rankings of confusability. In order to estimate rankings in a more stable manner, we introduce a model hyperparameter 0 < 𝜆≤1, and define a new triphone-to-uniphone channel distribution by: 𝑝𝑡(𝑦𝑖|𝑥𝑖−1, 𝑥𝑖, 𝑥𝑖+1) (13) = {𝜆⋅̃𝑝𝑡(𝑦𝑖|𝑥𝑖−1, 𝑥𝑖, 𝑥𝑖+1), 𝑦𝑖= 𝑥𝑖 𝛽⋅̃𝑝𝑡(𝑦𝑖|𝑥𝑖−1, 𝑥𝑖, 𝑥𝑖+1), 𝑦𝑖≠𝑥𝑖 } (14) Here 𝛽≥1 is used to normalize the distributions; it is fully determined by 𝜆for a particular distribution 𝑝𝑡(⋅|𝑥𝑖−1, 𝑥𝑖, 𝑥𝑖+1). The term 𝜆is used to increase the noise rate in the channel distributions. Note that two important features of the original triphoneto-uniphone distributions ̃𝑝𝑡are maintained in the new model. First, the ratios of outcome probabilities within a single triphone distribution remain the same: 𝑝𝑡(𝑦𝑖|𝑥𝑖−1, 𝑥𝑖, 𝑥𝑖+1) 𝑝𝑡(𝑦′ 𝑖|𝑥𝑖−1, 𝑥𝑖, 𝑥𝑖+1) = ̃𝑝𝑡(𝑦𝑖|𝑥𝑖−1, 𝑥𝑖, 𝑥𝑖+1) ̃𝑝𝑡(𝑦′ 𝑖|𝑥𝑖−1, 𝑥𝑖, 𝑥𝑖+1) (15) for segments 𝑦𝑖, 𝑦′ 𝑖≠𝑥𝑖. 
Second, the relative probability of accurate perception is preserved across triphone distributions: 𝑝𝑡(𝑥𝑖|𝑥𝑖−1, 𝑥𝑖, 𝑥𝑖+1) 𝑝𝑡(𝑥′ 𝑖|𝑥′ 𝑖−1, 𝑥′ 𝑖, 𝑥′ 𝑖+1) = ̃𝑝𝑡(𝑥𝑖|𝑥𝑖−1, 𝑥𝑖, 𝑥𝑖+1) ̃𝑝𝑡(𝑥′ 𝑖|𝑥′ 𝑖−1, 𝑥′ 𝑖, 𝑥′ 𝑖+1) (16) The new model maximally agrees with the experimentally estimated distribution, differing only in the absolute amount of noise implied. 7The gating data does not provide information for estimating the probability of deletion or insertion errors. 1996 The final string-to-string channel model is defined by: 𝑝𝑁(𝑦1∶𝑓|𝑥1∶𝑓) = 𝑗=𝑓 ∏ 𝑗=1 𝑝𝑡(𝑦𝑗|𝑥𝑗−1, 𝑥𝑗, 𝑥𝑗+1) (17) This new channel model has an increased noise rate, making it easier to estimate stable rankings of confusability across words. The most similar previous channel model (Norris and McQueen 2008) was based on Dutch gating data (Smits et al. 2003) comparable to that used here. Norris and McQueen (2008) did not construct a triphone-to-uniphone channel model, but made use of all gates and also allowed investigation of word boundary identification. 3.5 Statistical methods Prior to any analyses, the Switchboard and Buckeye corpora were each randomly divided into evenly-sized Training and Test sets. The Training sets were used for exploratory statistical analyses, and for determining the values of several model hyperparameters. Following this, all parameters and statistical analyses were frozen, and preregistered with the Open Science Foundation.8 We perform several linear regressions in order to determine the effect of confusability on word duration. Contextual confusability is defined throughout using Equation 9. Word durations are logtransformed. The following covariates are standard in the literature, and are included in our analyses: speaker identity; part of speech; unigram prior surprisal; speech rate (the average rate of speech, in syllables per second, of the utterance containing the target word); word length (measured by number of segments and syllables). Several covariates that are included are more non-trivial, and are discussed in more detail below: segmental inventory factors; forward and backward surprisal; neighborhood size and log weighted neighborhood density; and unigram confusability. The segmental inventory variables code each word as a ‘bag-of-segments.’ A separate variable is defined for each phoneme in the segmental lexicon of the corpus. Each variable counts the number of times the corresponding phoneme occurs in the word. This is a variant of the baseline model 8The preregistered analyses are available at the following link: https://osf.io/gj3ph/?view_only= 6c5bd9b1211e4b798d2268fb8a8f5842 used in previous work (Bell et al. 2009, Gahl et al. 2012). Certain segments take longer to pronounce than others, and the baseline model is used in case the confusability scores contain information about segment identities within a word. Note, however, that this is a conservative baseline, as segment identity has an effect on confusability; certain segments are, individually, harder to perceive than others. The model will be used to predict word durations after these segmental effects have been factored out. The forward language-model surprisal of a word is the surprisal of the word given preceding words in the context, and its backward surprisal is the surprisal given the following words in the context. Previous work in English has found backward surprisal to be a stronger predictor of spoken word duration than forward surprisal (Bell et al. 2009, Seyfarth 2014). 
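Since some of the covariates discussed below (in particular unigram confusability) are defined in terms of the confusability measure itself, it is worth summarizing how Equations 4–10 are approximated in practice. The following is a schematic sketch only: the channel sampler, channel scorer, and contextual prior are hypothetical stand-ins for the models estimated in Sections 3.3 and 3.4.

```python
# Schematic Monte Carlo estimate of contextual confusability (Equations 4-10),
# assuming hypothetical helper functions for the estimated models:
#   sample_percept(x)   draws y_{1:f} ~ p_N(. | x_{1:f})      (Equation 17)
#   channel_prob(y, x)  returns p_N(y_{1:f} | x_{1:f})
#   prior(x, c)         returns p_L(x_{1:f} | c)
import math

def contextual_confusability(x, context, lexicon, sample_percept,
                             channel_prob, prior, n_samples=1000):
    recovered = 0.0
    for _ in range(n_samples):
        y = sample_percept(x)                      # noisy percept of the intended form
        # Listener posterior p_LISTENER(x | y, c), Equation 3.
        numer = channel_prob(y, x) * prior(x, context)
        denom = sum(channel_prob(y, w) * prior(w, context) for w in lexicon)
        recovered += numer / denom
    expected_recovery = recovered / n_samples      # Monte Carlo estimator, Equation 7
    return -math.log2(expected_recovery)           # surprisal in bits, Equation 9
```

The sum over the lexicon in the denominator is the normalizing constant that, as noted in footnote 5, makes lower-perplexity neural language models intractable for this purpose.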
Word confusability is expected to be correlated with surprisal, as more surprising words will be more difficult for the listener to recover in the presence of noise. Neighborhood size and log weighted neighborhood density are measures of the number of words adjacent (within Levenshtein distance 1) to a target word. These measures have been extensively studied as explanatory variables for word duration (see Gahl et al. 2012, Vitevitch and Luce 2016 for review), and are expected to correlate with word confusability: words with more neighbors are expected to be more confusable. We evaluate whether there is any residual effect of confusability beyond its impact on these variables. Unigram confusability measures the confusability of a word (Equation 9) given a unigram (word frequency) language model. This is a measure of the out-of-context confusability of a word, as discussed below. All variables are treated as fixed effects, and OLS is used for regressions. Confidence intervals and p-values are calculated using the biascorrected bootstrap. Bootstrapping is used to address possible heteroskedasticity in the data. Random effects are not used due to potential issues arising in observational studies like the current one. In particular, random effects may correlate with predictors in an observational study, leading to incorrect estimates of uncertainty and the potential for bias (Bafumi and Gelman 2006, Wooldridge 2010).9 9While Bafumi and Gelman (2006) propose a solution to 1997 (a) Switchboard (b) Buckeye Figure 1: Confusability vs. log duration on the Test sets of the Switchboard and Buckeye corpora. Error bars are 95% confidence intervals (non-bootstrapped). As illustrated in Figure 2, data are sparse beyond 18 bits, resulting in large confidence intervals in this range. Figure 2: Histogram of contextual confusability scores on the Test sets. All analyses were performed in two ways: using the raw values for each variable, and with ranktransformed values for the continuous variables. The rank-transformed analyses provide a test of the papers hypothesis that greater (i.e. higher-rank) confusability is associated with longer (higherrank) duration. The analyses eliminate the potentially questionable parametric assumption of a linear relationship between confusability (in bits) and this problem by decorrelating the fixed effect from random effects, the method produces identical estimates for the fixed effect, and is primarily useful when the random effect estimates themselves are of interest. duration (in log seconds). The rank-transformed analyses are intended as sensitivity analyses for the non-transformed analyses; if the two analyses provide different results, this provides evidence of a problem with the statistical methods.10 4 Results Four model hyperparameters were selected using the Switchboard and Buckeye Training sets: the order and direction of the n-gram model, the diphoneto-diphone channel pseudocounts, and the noise factor 𝜆.11 Backward bigram language models were found to perform best on the Training sets, possibly due to distributional differences between these corpora and the Fisher corpus, which was used for language model estimation. This is consistent with prior work in the area (e.g. Bell et al. 2009, Seyfarth 2014). Pseudocounts were set to 0.01, and the term 𝜆was set to 2−6. Figure 2 shows the frequency of modelcomputed confusability scores on the Switchboard and Buckeye Test sets. Figure 1 shows the relationship between confusability and word duration on the Test sets. 
The first set of analyses include all of the co10Model and analysis code is available at: https:// github.com/emeinhardt/wr 11The language model order was the same across all covariates where it was used. 1998 Dataset Rank 𝛽 95% CI p-value SWBD No 0.006 (0.004, 0.008) 0.001 SWBD Yes 0.086 (0.067, 0.109) 0.001 Buckeye No 0.005 (0.001, 0.008) 0.01 Buckeye Yes 0.123 (0.080, 0.130) 0.001 Table 1: Effect of contextual confusability on log word duration, not controlling for unigram confusability. Estimates from the Test sets. Rank indicates whether continuous variables were rank-transformed. p-values are upper-bounds. Dataset Rank 𝛽 95% CI p-value SWBD No 0.009 (0.006, 0.011) 0.001 SWBD Yes 0.132 (0.095, 0.130) 0.001 Buckeye No 0.007 (0.003, 0.011) 0.001 Buckeye Yes 0.148 (0.106, 0.164) 0.001 Table 2: Effect of contextual confusability on log word duration, controlling for unigram confusability. Estimates from the Test sets. variates from Section 3.5, except for unigram confusability. This allows us to determine whether there is an effect of word confusability on duration, independent of whether this effect is sensitive to context. Greater confusability is associated with longer word durations on both the Switchboard and Buckeye Training sets (p<0.001 for all analyses). Table 1 shows results of the same analyses performed on the Test sets. The effects replicate on the Test sets, and are qualitatively similar when continuous variables are rank-transformed. These analyses provide evidence that higher confusability is associated with longer word duration. In the second set of analyses, we investigate whether a context-sensitive measure of confusability is necessary for explaining this effect, or whether an out-of-context measure suffices. In order to do this, we include unigram confusability as a covariate in the analyses, in addition to the previous covariates. Unigram confusability is identical to our target measure of word confusability, except that the language model is replaced with a unigram model. The measure calculates a word’s confusability based on its acoustic properties and its phonological similarity to other words. It therefore does not take into account top-down expectations based on a word’s context. After controlling for unigram confusability, contextual confusability remains associated with longer word durations on both the Switchboard and Buckeye Training sets (p<0.001 for all analyses). Table 2 shows the same analyses on the Test sets. The effects replicate on both Test sets, and similarly for the rank-transformed analyses. 4.1 Neighborhood density We report the results of several unplanned analyses. Confidence intervals and p-values reported in this section are non-bootstrapped. We evaluate the effect of neighborhood density on word duration in the Test sets. Weighted neighborhood density is associated with lower word duration in all analyses. (See Appendix B.) The results provide evidence that the neighborhood density effects identified in previous work remain qualitatively similar, after adjusting for contextual confusability. 5 Discussion We draw two main conclusions from our results. First, we provide evidence that speakers lengthen words that are more confusable. This supports the hypothesis that variation and structure in natural languages are shaped not only by pressures for efficient signals, but also pressures for effective communication of the speaker’s intended message in the face of noise and uncertainty (Lindblom 1990, Lindblom et al. 1995, Hall et al. 2018). 
Second, we provide large scale, naturalistic evidence for reduction and enhancement driven by contextual confusability. Conversational context may make a speaker’s intended message easier or harder to recover from ambiguous acoustics. The results suggest that speakers modulate their utterances in a manner that is sensitive to this effect of context, increasing duration when context makes the intended utterance harder to recover. The results complement previous work which demonstrates reduction and enhancement driven by contextual predictability (see e.g. Seyfarth 2014). They also complement work which shows confusability-driven reduction and enhancement in targeted experimental manipulations (see e.g. Kirov and Wilson 2012, Schertz 2013, Seyfarth et al. 2016, Buz et al. 2016). The study may help to resolve questions raised by previous work examining the effects of neighborhood density. That work found negative or null 1999 associations between word duration and neighborhood density and related measures (e.g. Gahl et al. 2012, Gahl and Strand 2016). The proposed confusability measure differs from neighborhood density in three ways: it is sensitive to edit type, words greater than two edits away, and top-down effects. These differences may account for the discrepancy in the effects of neighborhood density and confusability. Under one hypothesis, neighborhood density effects reflect spillover of activation between words with overlapping subsequences of speech sounds (e.g. Gahl and Strand (2016), Chen and Mirman (2012), Dell (1986), Vitevitch and Luce (2016)). This spillover is potentially sensitive only to Levenshtein distance. In contrast, confusability is sensitive to fine-grained perceptual structure. When lexical neighbors differ in perceptually distinct segments, they will typically be nonconfusable. A second hypothesis is that the discrepancy arises from the role of top-down expectations in confusability. Neighborhood effects are type-level phenomena: a word has the same neighbors no matter what context it appears in. Confusability, on the other hand, is a token-level phenomenon: contextual expectations will change the confusability of a word. Stable properties of the lexicon may determine which segment sequences undergo frequent articulatory rehearsal, and are reduced as a consequence. The confusability measure picks up on context-dependent variation, which rehearsal processes in the articulatory system may not be sensitive to. The study suggests several directions for future work. First, while there are advantages of using naturalistic speech data (Gahl et al. 2012), it would be desirable to have experimental validation of the confusability measure and its relationship to speaker reduction. Second, a lower-perplexity neural language model would provide better estimates of a word’s confusability, but would first need to be validated on speech data. Third, a more sophisticated channel model would allow for insertions and deletions, and better capture transitional coarticulatory cues (Wright 2004). Because speakers enhance or reduce their speech in ways other than changing duration (see e.g. Kirov and Wilson 2012, Schertz 2013, Seyfarth et al. 2016, Buz et al. 2016), such a model would permit investigation of targeted enhancement and reduction in naturalistic data. Acknowledgements We thank Uriel Cohen Priva and Scott Seyfarth for help reproducing their analyses. We also thank Silas Horton, Todd Williams, and Thanh Nguyen for computing support. 
The Titan V used for this research was donated by the NVIDIA Corporation. References Anderson, J. R. (1990). The adaptive character of thought. Erlbaum, Hillsdale, NJ. Anderson, J. R. (1991). Is human cognition adaptive? Behavioral and Brain Sciences, 14:471–517. Aylett, M. and Turk, A. (2004). The smooth signal redundancy hypothesis: a functional explanation for relationships between redundancy, prosodic prominence, and duration in spontaneous speech. Language and speech, 47(Pt 1):31–56. Aylett, M. and Turk, A. (2006). Language redundancy predicts syllabic duration and the spectral characteristics of vocalic syllable nuclei. The Journal of the Acoustical Society of America, 119(5 Pt 1):3048– 3058. Bafumi, J. and Gelman, A. (2006). Fitting multilevel models when predictors and group effects correlate. Available at SSRN 1010095. Bell, A., Brenier, J. M., Gregory, M., Girand, C., and Jurafsky, D. (2009). Predictability effects on durations of content and function words in conversational English. Journal of Memory and Language, 60(1):92– 111. Buz, E., Tanenhaus, M. K., and Jaeger, T. F. (2016). Dynamically adapted context-specific hyperarticulation: Feedback from interlocutors affects speakers ’ subsequent pronunciations. Journal of Memory and Language, 89:68–86. Calhoun, S., Carletta, J., Brenier, J. M., Mayo, N., Jurafsky, D., Steedman, M., and Beaver, D. (2010). The NXT-format Switchboard Corpus: A rich resource for investigating the syntax, semantics, pragmatics and prosody of dialogue. In Language Resources and Evaluation, volume 44, pages 387–419. Chen, Q. and Mirman, D. (2012). Competition and cooperation among similar representations: toward a unified account of facilitative and inhibitory effects of lexical neighbors. Psychological review, 119(2):417. Cieri, C., Miller, D., and Walker, K. (2004). The Fisher corpus: a Resource for the Next Generations of Speech-to-Text. Language Resources and Evaluation, 4:69–71. 2000 Cohen Priva, U. (2008). Using information content to predict phone deletion. Proceedings of the 27th West Coast Conference on Formal Linguistics, pages 90– 98. Cohen Priva, U. (2012). Sign and Signal Deriving Linguistic Generalizations From Information Utility. Doctoral dissertation, Stanford University. Cohen Priva, U. (2015). Informativity affects consonant duration and deletion rates. Laboratory Phonology, 6(2):243–278. Cover, T. M. and Thomas, J. A. (2012). Elements of information theory. John Wiley & Sons. Dautriche, I., Mahowald, K., Gibson, E., Christophe, A., and Piantadosi, S. T. (2017). Words cluster phonetically beyond phonotactic regularities. Cognition, 163:128–145. Dell, G. S. (1986). A spreading-activation theory of retrieval in sentence production. Psychological review, 93(3):283. Demberg, V., Sayeed, A. B., Gorinski, P. J., and Engonopoulos, N. (2012). Syntactic surprisal affects spoken word duration in conversational contexts. Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, (July):356–367. Gahl, S. and Strand, J. F. (2016). Many neighborhoods: Phonological and perceptual neighborhood density in lexical production and perception. Journal of Memory and Language, 89:162–178. Gahl, S., Yao, Y., and Johnson, K. (2012). Why reduce? Phonological neighborhood density and phonetic reduction in spontaneous speech. Journal of Memory and Language, 66(4):789–806. Godfrey, J. J. and Holliman, E. (1997). Switchboard-1 Release 2. Technical report, Linguistic Data Consortium. 
Grosjean, F. (1980). Spoken word recognition processes and the gating paradigm. Perception & psychophysics, 28(4):267–283. Hall, K. C., Hume, E., Jaeger, T. F., and Wedel, A. (2018). The Role of Predictability in Shaping Phonological Patterns. Linguistic Vanguard, 4. Heafield, K. (2011). KenLM: Faster and Smaller Language Model Queries. Proceedings of the Sixth Workshop on Statistical Machine Translation, (2009):187–197. Jaeger, T. F. and Buz, E. (2018). Signal Reduction and Linguistic Encoding. In The Handbook of Psycholinguistics, pages 38–81. Wiley-Blackwell. Jurafsky, D., Bell, A., Gregory, M., and Raymond, W. D. (2001). Probabilistic Relations between Words: Evidence from Reduction in Lexical Production. Frequency and the emergence of linguistic structure, pages 229–254. Kirov, C. and Wilson, C. (2012). The Specificity of Online Variation in Speech Production. Proceedings of the 34th Annual Conference of the Cognitive Science Society, pages 587–592. Levy, R. (2005). Probabilistic Models of Word Order and Syntactic Discontinuity. Doctoral dissertation, Stanford University. Levy, R. (2008a). A noisy-channel model of rational human sentence comprehension under uncertain input. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 234–243. Levy, R. (2008b). Expectation-based syntactic comprehension. Cognition, 106(3):1126–1177. Lindblom, B. (1990). Explaining phonetic variation: a sketch of the H&H theory. In Speech Production and Speech Modelling, pages 403–439. Lindblom, B., Guion, S., Hura, S., Moon, S.-J., and Willerman, R. (1995). Is sound change adaptive? Rivista di Linguistica, 7:5–37. Lombard, E. (1911). Le signe de l’elevation de la voix. Ann. Mal. de L’Oreille et du Larynx, pages 101–119. Mahowald, K., Fedorenko, E., Piantadosi, S. T., and Gibson, E. (2013). Info/information theory: Speakers choose shorter words in predictive contexts. Cognition, 126(2):313–318. Norris, D. and McQueen, J. M. (2008). Shortlist B: a Bayesian model of continuous speech recognition. Psychological Review, 115(2):357–395. Ohala, J. J. (1990). The phonetics and phonology of aspects of assimilation. In Kingston, J. and Beckman, M. E., editors, Papers in Laboratory Phonology I: Between the Grammar and Physics of Speech, chapter 14, pages 258–282. Pate, J. K. and Goldwater, S. (2015). Talkers account for listener and channel characteristics to communicate efficiently. Journal of Memory and Language, 78. Piantadosi, S. T., Tily, H., and Gibson, E. (2011). Word lengths are optimized for efficient communication. Proceedings of the National Academy of Sciences, 108(9):3526–3529. Piantadosi, S. T., Tily, H., and Gibson, E. (2012). The communicative function of ambiguity in language. Cognition, 122(3):280–291. 2001 Picheny, M. A., Durlach, N. I., and Braida, L. D. (1986). Speaking clearly for the hard of hearing ii: Acoustic characteristics of clear and conversational speech. Journal of Speech, Language, and Hearing Research, 29(4):434–446. Pitt, M. A., Johnson, K., Hume, E., Kiesling, S., and Raymond, W. (2005). The Buckeye corpus of conversational speech: Labeling conventions and a test of transcriber reliability. Speech Communication, 45(1):89–95. Schertz, J. (2013). Exaggeration of featural contrasts in clarifications of misheard speech in English. Journal of Phonetics, 41(3-4):249–263. Seyfarth, S. (2014). Word informativity influences acoustic duration: Effects of contextual predictability on lexical representation. Cognition, 133(1):140– 155. 
Seyfarth, S., Buz, E., and Jaeger, T. F. (2016). Dynamic hyperarticulation of coda voicing contrasts. The Journal of the Acoustical Society of America, 139(2). Shannon, C. (1948). A Mathematical Theory of Communication. Bell System Technical Journal, 27(3):379–423. Smits, R., Warner, N., McQueen, J. M., and Cutler, A. (2003). Unfolding of phonetic information over time: a database of Dutch diphone perception. The Journal of the Acoustical Society of America, 113(January):563–574. Stolcke, A. (2002). SRILM-An Extensible Language Modeling Toolkit. In 8th International Conference on Spoken Language Processing (INTERSPEECH 2002), volume 2, pages 901–904. Stolcke, A., Zheng, J., Wang, W., and Abrash, V. (2011). SRILM at Sixteen: Update and Outlook. In Proceedings - IEEE Automatic Speech Recognition and Understanding Workshop. Turnbull, R., Seyfarth, S., Hume, E., and Jaeger, T. F. (2018). Nasal place assimilation trades offinferrability of both target and trigger words. Laboratory Phonology: Journal of the Association for Laboratory Phonology, 9(1). Uther, M., Knoll, M. A., and Burnham, D. (2007). Do you speak e-ng-li-sh? a comparison of foreignerand infant-directed speech. Speech communication, 49(1):2–7. Van Son, R. J. J. and Pols, L. C. W. (2003). How efficient is speech? In Proceedings of the Institute of Phonetic Sciences, pages 171–184. Van Son, R. J. J. H., Koopmans-van Beinum, F. J., and Pols, L. C. W. (1998). Efficiency As An Organizing Principle Of Natural Speech. In Fifth International Conference on Spoken Language Processing. Vitevitch, M. S. (2002). The influence of phonological similarity neighborhoods on speech production. Journal of Experimental Psychology, Learning, Memory, and Cognition, 28(4):735–747. Vitevitch, M. S. and Luce, P. A. (2016). Phonological Neighborhood Effects in Spoken Word Perception and Production. Annual Review of Linguistics, 2:75–94. Warner, N., McQueen, J. M., and Cutler, A. (2014). Tracking perception of the sounds of English. The Journal of the Acoustical Society of America, 135(5):2995–3006. Warner, N., Smits, R., McQueen, J. M., and Cutler, A. (2005). Phonological and statistical effects on timing of speech perception: Insights from a database of Dutch diphone perception. Speech Communication, 46(1):53–72. Wooldridge, J. M. (2010). Econometric analysis of cross section and panel data. MIT press. Wright, R. (2004). A review of perceptual cues and cue robustness. In Hayes, B., Kirchner, R., and Steriade, D., editors, Phonetically based phonology, chapter 2. Cambridge University Press. Zipf, G. K. (1936). The Psychobiology of Language. Routledge, London. Zipf, G. K. (1949). Human Behavior and the Principle of Least Effort: An Introduction to Human Ecology. Addison-Wesley Press. A Sensitivity analyses In this section we present the results of several sensitivity analyses. These analyses are post-hoc, and were not pre-registered with OSF. They are performed in order to assess the sensitivity of the findings to the bootstrapping method that was used for calculating p-values. The analyses are intended to evaluate the effect of contextual confusability on word duration, and are identical to the analyses in Section 4, except that p-values are calculated using a likelihood ratio test. Each likelihood ratio test compares a pair of OLS models: one model containing contextual confusability as a covariate, and an ablated model which does not use this covariate, but is otherwise identical. 
The tests evaluate whether the inclusion of contextual confusability improves the prediction of word duration, beyond the contributions of other covariates. Table 3 and Table 4 show results without and with unigram confusability included as a covariate. All comparisons performed in Section 4 remain significant with the likelihood ratio test.

Dataset   Rank   Likelihood ratio   p-value
SWBD      No     35.4               3×10^-9
SWBD      Yes    91.8               3×10^-22
Buckeye   No     7.23               0.007
Buckeye   Yes    64.9               8×10^-16

Table 3: Likelihood ratio tests, evaluating whether contextual confusability improves OLS model fit on the test set. No control for unigram confusability included.

Dataset   Rank   Likelihood ratio   p-value
SWBD      No     51.3               8×10^-13
SWBD      Yes    160.6              8×10^-37
Buckeye   No     12.0               0.0005
Buckeye   Yes    70.1               6×10^-17

Table 4: Likelihood ratio evaluation of contextual confusability, controlling for unigram confusability.

B Neighborhood density analyses

Table 5 shows the effect of log weighted neighborhood density on log word duration. Confidence intervals and p-values are non-bootstrapped.

Dataset   β       95% CI           p-value
SWBD      -4.27   (-4.96, -3.58)   0.001
Buckeye   -1.91   (-2.88, -0.94)   0.001

Table 5: Effect of log weighted neighborhood density on log word duration.
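To make the nested-model comparison of Appendix A concrete, the sketch below computes a likelihood ratio test between a full OLS model and an ablated model that drops the contextual-confusability column. It assumes pandas design matrices and uses statsmodels; the variable and column names are hypothetical, and this is not the authors' code.

```python
import statsmodels.api as sm
from scipy.stats import chi2

def lr_test_ols(y, X_full, X_ablated):
    """Likelihood ratio test for nested OLS models.

    X_full contains contextual confusability plus all other covariates;
    X_ablated is identical except that the confusability column is dropped.
    """
    full = sm.OLS(y, sm.add_constant(X_full)).fit()
    ablated = sm.OLS(y, sm.add_constant(X_ablated)).fit()
    lr = 2 * (full.llf - ablated.llf)            # likelihood ratio statistic
    df = X_full.shape[1] - X_ablated.shape[1]    # one dropped covariate -> df = 1
    return lr, chi2.sf(lr, df)

# Hypothetical usage:
# lr, p = lr_test_ols(log_durations, X, X.drop(columns=["contextual_confusability"]))
```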
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2003–2012 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2003 What Determines the Order of Adjectives in English? Comparing Efficiency-Based Theories Using Dependency Treebanks Richard Futrell University of California, Irvine [email protected] William Dyer Oracle Corporation [email protected] Gregory Scontras University of California, Irvine [email protected] Abstract We take up the scientific question of what determines the preferred order of adjectives in English, in phrases such as big blue box where multiple adjectives modify a following noun. We implement and test four quantitative theories, all of which are theoretically motivated in terms of efficiency in human language production and comprehension. The four theories we test are subjectivity (Scontras et al., 2017), information locality (Futrell, 2019), integration cost (Dyer, 2017), and information gain, which we introduce. We evaluate theories based on their ability to predict orders of unseen adjectives in hand-parsed and automatically-parsed dependency treebanks. We find that subjectivity, information locality, and information gain are all strong predictors, with some evidence for a two-factor account, where subjectivity and information gain reflect a factor involving semantics, and information locality reflects collocational preferences. 1 Introduction Across languages, there exist strong and stable constraints on the order of adjectives when more than one is used to modify a noun (Dixon, 1982; Sproat and Shih, 1991). For example, in English, big blue box sounds natural and appears relatively frequently in corpora, while blue big box sounds less natural and occurs less frequently (Scontras et al., 2017). In this paper, we take up the scientific question of what explains these constraints in natural language. To do so, we implement quantitative models that have been proposed in previous literature as explanations for these constraints, and compare their accuracy in predicting adjective ordering data in parsed corpora of English1. In the last few years, adjective order has become a crucial testing ground for quantitative theories 1All code and data are available at https://github. com/langprocgroup/adjorder. of syntax. These theories provide mathematical models that can describe the distribution of words in sentences and the way those words combine to yield the meaning of a sentence, in a way that captures the fine-grained quantitative patterns observable in large text datasets (Manning, 2003; Bresnan et al., 2007; Chen and Ferrer-i-Cancho, 2019). Quantitative syntactic theories are often efficiency-based, meaning that they model word distributions as the result of a process that tries to maximize information transfer while minimizing some measure of cognitive cost; as a result, they often use the mathematical language of information theory. Such theories promise not only to describe distributions of words, but also to explain why they take the shape they do, by viewing human language as an efficient code subject to appropriate constraints. This work informs NLP by providing a theory of language structure that integrates with data-driven, optimization-based machine learning models. Adjective order is a fruitful empirical target for quantitative theories of syntax because it is an area where the traditional discrete and symbolic theories become highly complex, and a quantitative approach becomes more attractive. 
For example, in the formal syntax literature, a standard explanation for adjective order constraints is that each adjective belongs to a certain semantic class (e.g., COLOR or SIZE) and that there exists a universal total order on these semantic classes (e.g., COLOR < SIZE) shared among all languages, which determines the order of adjectives in any given instance (Cinque, 1994; Scott, 2002). Such discrete theories of adjective order become complex rapidly as the number of semantic classes to be posited becomes large (upwards of twelve in Scontras et al. 2017) and more fine-grained (see Bar-Sever et al. 2018 for discussion of the learning problem posed by such classifications). 2004 In contrast, quantitative syntax theories typically identify a single construct that grounds out in real-valued numerical scores given to adjectives, which determine their ordering preferences. These scores can be estimated based on large-scale corpus data or based on human ratings. In what follows, we test the predictions of four such theories: the subjectivity hypothesis (Scontras et al., 2017; Simoniˇc, 2018; Hahn et al., 2018; Franke et al., 2019; Scontras et al., 2019), the information locality hypothesis (Futrell and Levy, 2017; Futrell et al., 2017; Hahn et al., 2018; Futrell, 2019), the integration cost hypothesis (Dyer, 2017), and the information gain hypothesis, which we introduce. We begin with a presentation of the details of each theory, then implement the theories and test their predictions against large-scale naturalistic data from English. In addition to comparing the predictors in terms of accuracy, we also perform a number of analyses to determine the important similarities and differences among their predictions. The paper concludes with a discussion of what our results tell us about adjective order and related issues, and a look towards future work. 2 Theories of adjective order 2.1 Subjectivity Scontras et al. (2017) show that adjective order is strongly predicted by adjectives’ subjectivity scores: an average rating obtained by asking human participants to rate adjectives on a numerical scale for how subjective they are. Adjectives that are rated as more subjective typically appear farther from the noun than adjectives rated as less subjective, and the strength of ordering preferences tracks the subjectivity differential between two adjectives. For example, in big blue box, the adjective big has a subjectivity rating of 0.64 (out of 1), and the adjective blue has a subjectivity rating of 0.30. If adjectives are placed in order of decreasing subjectivity, then big must appear before blue, corresponding to the preferred order. The notion of subjectivity as a predictor of adjective order was previously introduced by Hetzron (1978). Subsequent work has attempted to explain the role of subjectivity in adjective ordering by appealing to the communicative benefit afforded by ordering adjectives with respect to decreasing subjectivity. For example, Franke et al. (2019) use simulated reference games to demonstrate that, given a set of independently-motivated assumptions concerning the composition of meaning in multi-adjective strings, subjectivity-based orderings lead to a greater probability of successful reference resolution; the authors thus offer an evolutionary explanation for the role of subjectivity in adjective ordering (see also Simoniˇc, 2018; Hahn et al., 2018; Scontras et al., 2019). 
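A minimal sketch of the subjectivity-based ordering rule follows; the ratings for big and blue are those quoted above, and the rest is illustrative.

```python
def order_by_subjectivity(adjectives, subjectivity):
    """Place more subjective adjectives farther from the noun, i.e. sort the
    prenominal string in decreasing order of subjectivity."""
    return sorted(adjectives, key=lambda a: subjectivity[a], reverse=True)

subjectivity = {"big": 0.64, "blue": 0.30}  # ratings quoted in the text
print(order_by_subjectivity(["blue", "big"], subjectivity) + ["box"])
# ['big', 'blue', 'box']
```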
2.2 Information locality The theory of information locality holds that words that have high mutual information are under pressure to be close to each other in linear order (Futrell and Levy, 2017; Futrell et al., 2017). Information locality is a generalization of the wellsupported principle of dependency length minimization (Liu et al., 2017; Temperley and Gildea, 2018). In the case of adjective ordering, the prediction is simply that adjectives that have high pointwise mutual information (PMI) with their head noun will tend to be closer to that noun. The PMI of an adjective a and a noun n is (Fano, 1961; Church and Hanks, 1990): PMI(a : n) ≡log p(a, n) p(a)p(n). (1) In this paper, we take the relevant joint distribution p(a, n) to be the distribution of adjectives and nouns in a dependency relationship, with the marginals calculated as p(a) = P n p(a, n) and p(n) = P a p(a, n). Information locality is motivated as a consequence of a more general theory of efficiency in human language. In this theory, languages should maximize information transfer while minimizing cognitive information-processing costs associated with language production and comprehension. Information locality emerges from these theories when we assume that the relevant measure of information-processing cost is the surprisal of words given lossy memory representations (Hale, 2001; Levy, 2008; Smith and Levy, 2013; Futrell and Levy, 2017; Futrell, 2019). 2.3 Integration Cost The theory of integration cost is also based in the idea of efficiency with regard to informationprocessing costs. It differs from information locality in that it assumes that the correct metric of processing difficulty for a word w is the entropy 2005 over the possible heads of w: Cost(w) ∝H[T|w] = X t −pT (t|w) log pT (t|w), (2) where T is a random variable indicating the head t of the word w (Dyer, 2017). This notion of cost captures the amount of uncertainty that has to be resolved about the proper role of the word w with respect to the rest of the words in the sentence. Like information locality, the theory of integration cost recovers dependency length minimization as a special case. For the case of predicting adjective order, the prediction is that an adjective a will be closer to a noun when it has lower integration cost: IC(a) = H[N|a], (3) where N is a random variable ranging over nouns. Integration cost corresponds to an intuitive idea previously articulated in the adjective ordering literature. The idea is that adjectives that can modify a smaller set of nouns appear closer to the noun: for example, an order such as big wooden spoon is preferred over wooden big spoon because the word big can modify nearly any noun, while wooden can only plausibly modify a small set of nouns (Ziff, 1960). The connection between integration cost and set size comes from the information-theoretic notion of the typical set (Cover and Thomas, 2006, pp. 57–71); the entropy of a random variable can be interpreted as the (log) cardinality of the typical set of samples from that variable. When we order adjectives by integration cost, this is equivalent to ordering them such that adjectives that can modify a larger typical set of nouns appear farther from the noun. The result is that each adjective gradually reduces the entropy of the possible nouns to follow, thus avoiding information-processing costs that may be associated with entropy reduction (Hale, 2006, 2016; Dye et al., 2018). 
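Both scores can be estimated by maximum likelihood from counts of adjective–noun dependency pairs. The sketch below does so over hypothetical toy counts (base-2 logarithms are an arbitrary choice here); it is not the estimation code used in the paper.

```python
import math
from collections import Counter

def make_pmi_and_ic(an_pairs):
    """Maximum likelihood estimates of PMI(a : n) (Eq. 1) and integration
    cost IC(a) = H[N | a] (Eq. 3) from adjective-noun dependency pairs."""
    joint = Counter(an_pairs)
    adj = Counter(a for a, _ in an_pairs)
    noun = Counter(n for _, n in an_pairs)
    total = len(an_pairs)

    def pmi(a, n):
        # Defined here only for (a, n) pairs observed in the data.
        p_an = joint[(a, n)] / total
        return math.log2(p_an / ((adj[a] / total) * (noun[n] / total)))

    def ic(a):
        # Entropy of the noun distribution conditioned on adjective a.
        return -sum(
            (joint[(a, n)] / adj[a]) * math.log2(joint[(a, n)] / adj[a])
            for n in noun if joint[(a, n)] > 0
        )

    return pmi, ic

# Hypothetical toy counts: 'big' modifies many noun types, 'wooden' only one.
pairs = ([("big", "box")] * 3 + [("big", "dog")] * 3 + [("big", "idea")] * 2
         + [("wooden", "spoon")] * 4 + [("blue", "box")] * 4)
pmi, ic = make_pmi_and_ic(pairs)
print(pmi("blue", "box") > pmi("big", "box"))  # True: 'blue' is kept nearer 'box'
print(ic("big") > ic("wooden"))                # True: 'big' is placed farther out
```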
2.4 Information gain We propose a new efficiency-based predictor of adjective order: information gain. The idea is to view the noun phrase, consisting of prenominal adjectives followed by the noun, as a decision tree for identifying a referent, where each word partitions the space of possible referents. Each partitioning is associated with some information gain, indicating how much the set of possible referents shrinks. In line with the logic for integration cost, we propose that the word with smaller information gain will be placed earlier, so that the set of referents is gradually narrowed by each word. As generally implemented in decision trees, information gain refers to the reduction of entropy obtained from partitioning a set on a feature (Quinlan, 1986). In our case, the distribution of nouns N is partitioned on a given adjective a, creating two partitions: Na and its complement Nac. The difference between the starting entropy H[N] and the sum of the entropy of each partition, conditioned on the size of that partition, is the information gain of a: IG(a) = H[N] − |Na| |N| H[Na] + |Nac| |N| H[Nac]  . (4) Information gain is therefore comprised of both positive and negative evidence. That is, specifying an adjective such as big partitions the probability distribution of nouns into Nbig, the subset of N which takes big as a dependent, and NbigC, the subset of N which does not. Crucially, H[Na] is not H[N|a] in general. H[N|a] is the conditional entropy of nouns given a specific adjective, while H[Na] is the entropy of a distribution over nouns whose support is limited to noun types that have been observed to occur with an adjective a. Combined with H[Nac], information gain tells us how much the entropy of N is reduced by partitioning on a. This means that information gain and integration cost, while conceptually similar, are not mathematically equivalent. To our knowledge, information gain has not been previously suggested as a predictor of adjective ordering, although Danks and Glucksberg (1971) expressed a similar intuition in proposing that adjectives are ordered according to their ‘discriminative potential’. Although decision-tree algorithms such as ID3 choose the highest-IG feature first, we predict that the lower-informationgain adjective will precede the higher one. 3 Related Work Previous corpus studies of adjective order include Malouf (2000), who examined methods for ordering adjectives in a natural language generation context, and Wulff (2003), who examined effects of phonological length, syntactic category ambiguity, semantic closeness, adjective frequency, and 2006 a measure similar to PMI called noun specificity. Our work differs from this previous work by focusing on recently-introduced predictors that have theoretical motivations grounded in efficiency and information theory. The theories we test here (except information gain) have been tested in previous corpus studies, but never compared against each other. Scontras et al. (2017) validate that subjectivity is a good predictor of adjective order in corpora, and Hahn et al. (2018) and Futrell et al. (2019) evaluate both information locality and subjectivity. Dyer (2018) uses integration cost to model the order of same-side sibling dependents cross-linguistically and across all syntactic categories. 4 Methods Our task is to find predictors of adjective order based solely on data about individual adjectives and nouns. 
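As an illustration of one such predictor, the sketch below computes the information gain of Equation 4 from toy adjective–noun pairs, treating the partition as a split of the noun token distribution into types observed with the adjective versus the rest. This is one possible reading of Equation 4, and the data are hypothetical.

```python
import math
from collections import Counter

def entropy(counts):
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values() if c)

def information_gain(a, an_pairs):
    """Information gain of adjective a (Eq. 4): the reduction in noun entropy
    obtained by partitioning the noun tokens into those whose type occurs
    with a (N_a) and the rest (N_a^c)."""
    noun_counts = Counter(n for _, n in an_pairs)
    with_a = {n for adj, n in an_pairs if adj == a}
    n_a = {n: c for n, c in noun_counts.items() if n in with_a}
    n_ac = {n: c for n, c in noun_counts.items() if n not in with_a}
    total = sum(noun_counts.values())
    weighted = sum((sum(part.values()) / total) * entropy(part)
                   for part in (n_a, n_ac) if part)
    return entropy(noun_counts) - weighted

# Hypothetical toy pairs, as in the PMI/IC sketch above.
pairs = ([("big", "box")] * 3 + [("big", "dog")] * 3 + [("big", "idea")] * 2
         + [("wooden", "spoon")] * 4 + [("blue", "box")] * 4)
print(information_gain("big", pairs) < information_gain("blue", pairs))
# True: 'big' has lower information gain, so it is predicted to precede 'blue'.
```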
More formally, the goal is to find a scoring function S(A, N) applying to an adjective A and a noun N, such that the order of two adjectives modifying a noun A1A2N can be predicted accurately by comparing S(A1, N) and S(A2, N). Furthermore, the scoring function S should not include information about relative order in observed sequences of the form A1A2N— the scoring function should be based only on corpus data about co-occurrences of A and N, or on human ratings about A and/or N. We apply this restriction because our goal is to evaluate scientific theories of why adjectives are ordered the way they are, rather than to achieve maximal raw accuracy. 4.1 Data sources Corpus-based predictors We estimate information-theoretic quantities for adjectives using a large automatically-parsed subsection of the English Common Crawl corpus (Buck et al., 2014; Futrell et al., 2019). The use of a parsed corpus is necessary to identify adjectives that are dependents of nouns in order to calculate PMI and IC. As described in Futrell et al. (2019), this corpus was produced by heuristically filtering Common Crawl to contain only full sentences and to remove web boilerplate text, and then parsing the resulting text using SyntaxNet (Andor et al., 2016), obtaining a total of ∼1 billion tokens of automatically parsed web text. In this work, we use a subset of this corpus, described below. From this corpus, we extract two forms of data. First, we extract adjective–noun (AN) pairs: a set of pairs ⟨A, N⟩where A is an adjective and N is a noun and N is the head of A with dependency type amod. As in Futrell (2019), we define A as an adjective iff its part-of-speech is JJ and its wordform is listed as an adjective in the English CELEX database (Baayen et al., 1995). We define N as a noun iff its part-of-speech is NN or NNS and its wordform is listed as a noun in CELEX. These AN pairs are used to estimate the information-theoretic predictors that we are interested in. We extracted 33,210,207 adjective–noun pairs from the parsed Common Crawl corpus. Second, we extract adjective–adjective–noun (AAN) triples: a set of triples ⟨A1, A2, N⟩where A1 and A2 are adjectives as defined above, and A1 and A2 are both adjective dependents with relation type amod of a single noun head N. Furthermore, A1 and A2 must not have any further dependents, and they must appear in the order A1A2N in the corpus with no intervening words. We extracted a total of 842,714 AAN triples from the parsed Common Crawl corpus. The values of all corpus-based predictors are estimated using the AN pairs. The AAN triples are used only for fitting regressions from the predictors to adjective orders, and for evaluation. Ratings-based predictors We gathered subjectivity ratings for all 398 adjectives present in AAN triples in the English UD corpus. These subjectivity ratings were collected over Amazon.com’s Mechanical Turk, using the methodology of Scontras et al. (2017). 264 English-speaking participants indicated the subjectivity of 30 random adjectives by adjusting a slider between endpoints labeled “completely objective” (coded as 0) and “completely subjective” (coded as 1). Each adjective received an average of 20 ratings. Test set As a held-out test set for our predictors, we use the English Web Treebank (EWT), a handparsed corpus, as contained in Universal Dependencies (UD) v2.4 (Silveira et al., 2014; Nivre, 2015). Following our criteria, we extract 155 AAN triples having scores for all our predictors. 
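A minimal sketch of how AN pairs and AAN triples could be extracted from a dependency-parsed corpus in CoNLL-U format is given below. It assumes the conllu package and its standard token fields, and it omits the CELEX wordform filter described above for brevity; it is not the extraction pipeline used in the paper.

```python
from conllu import parse  # assumes the `conllu` package's standard token fields

def extract_an_and_aan(conllu_text, adj_tags=("JJ",), noun_tags=("NN", "NNS")):
    """Extract AN pairs and AAN triples, roughly following the criteria above:
    adjectives are amod dependents of a noun head; AAN adjectives must be
    childless and immediately precede the noun with no intervening words."""
    an_pairs, aan_triples = [], []
    for sent in parse(conllu_text):
        words = [t for t in sent if isinstance(t["id"], int)]
        for noun in words:
            if noun["xpos"] not in noun_tags:
                continue
            amods = [t for t in words
                     if t["deprel"] == "amod" and t["head"] == noun["id"]
                     and t["xpos"] in adj_tags]
            an_pairs.extend((a["form"].lower(), noun["form"].lower()) for a in amods)
            if len(amods) != 2:
                continue
            a1, a2 = sorted(amods, key=lambda t: t["id"])
            adj_ids = {a1["id"], a2["id"]}
            childless = not any(t["head"] in adj_ids for t in words)
            adjacent = (a1["id"], a2["id"]) == (noun["id"] - 2, noun["id"] - 1)
            if childless and adjacent:
                aan_triples.append((a1["form"].lower(), a2["form"].lower(),
                                    noun["form"].lower()))
    return an_pairs, aan_triples
```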
Because this test set is very small, we also evaluate against a held-out portion of the parsed Common Crawl data. In the Common Crawl test set, after including only AAN triples that have scores for all of our predictors, we have 41,822 AAN triples. 2007 4.2 Estimation of predictors Our information-theoretic predictors require estimates of probability distributions over adjectives and nouns. To estimate these probability distributions, we first use maximum likelihood estimation as applied to counts of wordforms in AN pairs. We call these estimates wordform estimates. Although maximum likelihood estimation is sufficient to give an estimate of the general entropy of words (Bentz et al., 2017), it is not yet clear that it gives a good measure for conditional entropy or mutual information, due to data sparsity, even with millions of tokens of text (Futrell et al., 2019). Therefore, as a second method that alleviates the data sparsity issue, we also calculate our probability distributions not over raw wordforms but over clusterings of words in an embedding space, a method which showed promise in Futrell et al. (2019). To derive word clusters, we use sklearn.cluster.KMeans applied to a pretrained set of 1.9 million 300-dimension GloVe vectors2 generated from the Common Crawl corpus (Pennington et al., 2014). We classify adjectives into kA = 300 clusters and nouns into kN = 1000 clusters. These numbers k were found by choosing the largest k multiple of 100 that did not result in any singleton clusters. We then estimated probabilities p(a, n) by maximum likelihood estimation after replacing wordforms a and n with their cluster indices. This clustering method alleviates data sparsity by reducing the size of the support of the distributions over adjectives and nouns, to kA and kN respectively, and by effectively spreading probability mass among words with similar semantics. The clusters might also end up recapitulating the semantic categories that have played a role in more traditional syntactic theories of adjective order (Dixon, 1982; Cinque, 1994; Scott, 2002). We call these estimates cluster estimates. 4.3 Evaluation Fitting predictors to data Most of our individual predictors come along with theories that say what their effect on adjective order should be. Adjectives with low PMI should be farther from the noun, adjectives with high IC should be farther from the noun, and adjectives with high subjectivity should be farther from the noun. Therefore, 2http://nlp.stanford.edu/data/glove. 42B.300d.zip strictly speaking, it is not necessary to fit these predictors to any training data: we can evaluate our theories based on their a priori predictions simply by asking how accurately we can predict the order of adjectives in AAN triples based on the rules above. However, we can get a deeper picture of the performance of our predictors by using them in classifiers for adjective order. By fitting classifiers using our predictors, we can easily extend our models to ones with multiple predictors, in order to determine if a combined set of the predictors gives increased accuracy over any one. Logistic regression method We fit logistic regressions to predict adjective order in AAN triples using our predictors. Our goal is to predict the order of the triple from the unordered set of the two adjectives {A1, A2} and the noun N. To do so, we consider the adjectives in lexicographic order: Given an AAN triple, let A1 denote the lexicographically-first adjective, and A2 the second. 
Then any given AAN triple is either of the form ⟨A1, A2, N⟩or ⟨A2, A1, N⟩. We fit a logistic regression to predict this order given the difference in the values of the predictors for the two adjectives. That is, we fit a logistic regression of the form in Figure 1. This method of fitting a classifier to predict order data was used previously in Morgan and Levy (2016). Based on theoretical considerations and previous empirical results, we expect that the fitted values of β1 will be negative for PMI and positive for IC and subjectivity. The regression in Figure 1 can easily be extended to include multiple predictors, with a separate β for each. Evaluation metrics We evaluate our models using raw accuracy in predicting the order of heldout AAN triples. We also calculate 95% confidence intervals on these accuracies, indicating our uncertainty about how the accuracy would change in repeated experiments. Following standard experimental practice, if we find that two predictors achieve different accuracies, but their confidence intervals overlap, then we conclude that we do not have evidence that their accuracies are reliably different. We say a difference in accuracy between predictors is significant if the 95% confidence intervals do not overlap. Evaluation on held-out hand-parsed data It is crucial that we not evaluate solely on automatically-parsed data. The reason is that both 2008 log p(⟨A1, A2, N⟩) p(⟨A2, A1, N⟩) = β0 + β1(S(A1, N) −S(A2, N)) + ǫ Figure 1: Logistic regression for adjective order. The function S(A, N) is the predictor to be evaluated, β0 and β1 are the free parameters to be fit, and ǫ is an error term to be minimized. PMI and IC, as measures of the strength of statistical association between nouns and adjectives, could conceivably double as predictors of parsing accuracy for automatic dependency parsers. If that is the case, then we might observe that AAN triples with low PMI or high IC are rare in automatically parsed data. However, this would not be a consequence of any interesting theory of cognitive cost, but rather simply an artifact of the automatic parser used. To avoid this confound, we include an evaluation based on held-out hand-parsed data in the form of the English Web Treebank. 5 Results Table 1a shows the accuracies of our predictors in predicting held-out adjective orders in the Common Crawl test set, visualized in Figure 2a. We find that the pattern of results depends on whether predictors are estimated based on wordforms or based on distributional clusters. When estimating based on wordforms, we find that subjectivity and PMI have the best accuracy. When estimating based on clusters, the accuracy of PMI drops, and the best predictor is subjectivity, with IG close behind. We find a negative logistic regression weight for information gain, indicating that the adjective with lower information gain is placed first. This basic pattern of results is confirmed in the hand-parsed EWT data. Accuracies of predictors on the EWT test set are shown in Table 1b and visualized in Figure 2b. When estimating based on wordforms, the best predictors are subjectivity and PMI, although the confidence intervals of all predictors are overlapping. When estimating based on clusters, IG has the best performance, and PMI again drops in accuracy. For this case, IG, IC, and subjectivity all have overlapping confidence intervals, so we conclude that there is no evidence that one is better than the other. 
However, we do have evidence that IG and IC are more accurate than PMI when estimated based on clusters. 5.1 Multivariate analysis Adjective order may be determined by multiple separate factors operating in parallel. In order to investigate whether our predictors might be making independent contributions to explaining adjective order, we fit logistic regressions containing multiple predictors. If the best accuracy comes from a model with two or more predictors, then this would be evidence that these two predictors are picking up on separate sources of information relevant for predicting adjective order. We conducted logistic regressions using all sets of two of our predictors. The top 5 such models, in terms of Common Crawl test set accuracy, are shown in Table 2. The best two are cluster/wordform subjectivity and wordform PMI, followed by cluster subjectivity and cluster information gain. No set of three predictors achieves significantly higher accuracy than the best predictors shown in Table 2. 5.2 Qualitative analysis We manually examined cases where each model made correct and incorrect predictions in the handparsed EWT data. Table 3a shows example AAN triples that were ordered correctly by PMI, but not by subjectivity. These are typically cases where a certain adjective–noun pair forms a common collocation whose meaning is in some cases even noncompositional; for example, “bad behaviors” is a common collocation when describing training animals, and “ulterior motives” and “logical fallacy” are likewise common English collocations. In contrast, when subjectivity makes the right prediction and PMI makes the wrong prediction, these are often cases where a word pair which normally would form a collocation is broken up by another adjective, such as “dear sick friend”, where “dear friend” is a common collocation. We also performed a manual qualitative analysis to determine the contribution of information gain beyond subjectivity and PMI. Table 3b shows examples of such cases from the EWT. Many of these seem to be cases with weak preferences, where both the attested order and the the flipped order are acceptable (e.g., “tiny little kitten”). 2009 Predictor Accuracy Conf. Interval Subj. (cluster) .661 [.657, .666] PMI (wordform) .659 [.654, .664] Subj. (wordform) .659 [.654, .664] IG (cluster) .650 [.645, .654] IC (wordform) .642 [.634, .646] IG (wordform) .640 [.635, .645] IC (cluster) .613 [.608, .618] PMI (cluster) .606 [.601, .610] (a) Common Crawl (N = 41822). Predictor Accuracy Conf. Interval IG (cluster) .737 [.668, .806] Subj. (wordform) .724 [.654, .795] IC (cluster) .705 [.633, .777] Subj. (cluster) .692 [.620, .765] PMI (wordform) .667 [.592, .741] IC (wordform) .641 [.566, .717] IG (wordform) .603 [.526, .680] PMI (cluster) .590 [.512, .667] (b) Hand-parsed EWT (N = 155). All confidence intervals overlap, other than cluster-based PMI and IG. Table 1: Accuracies of the predictors on AAN triples in the held-out test data. Wordforms Clusters IC IG PMI Subjectivity IC IG PMI Subjectivity 0.00 0.25 0.50 0.75 1.00 CC accuracy (a) Common Crawl (N = 41822). Wordforms Clusters IC IG PMI Subjectivity IC IG PMI Subjectivity 0.00 0.25 0.50 0.75 1.00 EWT accuracy (b) Hand-parsed EWT (N = 155) Figure 2: Accuracies of predictors on AAN triples in the held-out test data, with 95% confidence intervals shown. Predictor Accuracy Conf. Interval Subj. (cluster) + PMI (wordform) .723 [.719, .727] Subj. (wordform) + PMI (wordform) .713 [.708, .717] Subj. (cluster) + IG (cluster) .699 [.695, .703] Subj. 
(cluster) + IC (cluster) .690 [.686, .695] IG (cluster) + IC (cluster) .684 [.680, .689] Table 2: Common Crawl test set accuracy of the top 5 models combining two predictors. 2010 A1 A2 N major bad behaviors large outstanding debts classical logical fallacy dark ulterior motives minor fine tuning (a) Ordered correctly by wordform PMI, but not by wordform subjectivity. A1 A2 N tiny little kitten correct legal name chronic intractable pain radical religious politics lonely eerie place (b) Ordered correctly by cluster-based information gain, but not by cluster-based subjectivity nor PMI. Table 3: Selected examples of AAN triples ordered incorrectly by our models, from the EWT test set. 5.3 Interpretation Our results broadly support the following interpretation. Adjective ordering preferences are largely determined by a semantic factor that can be quantified variously using wordform subjectivity or distributional-cluster-based estimates of information gain. In addition to this factor, another factor is in play: when an adjective–noun pair forms a collocation with a possibly non-compositional meaning, then the adjective in this pair will tend to be placed next to the noun. This latter factor is measured by PMI. This interpretation matches that of Hahn et al. (2018), who found separate contributions from PMI and a model-based operationalization of subjectivity. Our interpretation is supported by the following points from the analysis above. First, among predictors based solely on wordforms, the best accuracy is obtained by a combination of subjectivity and PMI. Second, when we turn to estimates based on clusters, two things happen: the accuracy of PMI drops, and the accuracy of information gain increases while the accuracy of subjectivity stays about the same. This pattern of results suggests that PMI is measuring a factor that has more to do with specific wordforms, while IG and subjectivity are measuring a factor that has more to do with semantic uncertainty about the noun or about the relationship between the adjective and the noun. 6 Conclusion We examined a number of theoretically-motivated predictors of adjective order in dependency treebank corpora of English. We found that the predictors have comparable accuracy, but that it is possible to identify two broad factors: a semantic factor variously captured by subjectivity scores and information gain based on word clusters, and a wordform-based factor captured by PMI. This study provides a framework for evaluating further theories of adjective order, and for evaluating the theories given here against new data from dependency treebanks. Generalizing to larger datasets of English is straightforward. More excitingly, we now have the opportunity to bring new languages into the fold. The vast majority of research on adjective ordering, and all the corpus work to our knowledge, has been done on English, where adjectives almost always come before the noun. Studying other typologically-distinct languages provides an opportunity to disentangle the theories that we studied here in a way that cannot be done in English. The available behavioral evidence suggests that mirror-image preferences (e.g., “box blue big”) may be the norm in post-nominal adjective languages (Martin, 1969; Scontras et al., 2020). Information locality, subjectivity, and integration cost make precisely that prediction, though none addresses mixed-type languages in which adjectives can precede or follow nouns. 
It is an open question how to implement IG for these postor mixed-placement adjectives; one possibility is to measure the information gained when the set of adjectives associated to a noun An is partitioned by an adjective a. In that case, the predictions about post-nominal order could differ substantially from the predictions about pre-nominal order. Our dependency-treebank-based methods can be applied to any other corpus of any language, provided it has enough data in the form of adjective–noun pairs to get reliable estimates of the information-theoretic predictors. Such studies will be crucial to achieve a complete computational understanding of natural language syntax. 2011 References Daniel Andor, Chris Alberti, David Weiss, Aliaksei Severyn, Alessandro Presta, Kuzman Ganchev, Slav Petrov, and Michael Collins. 2016. Globally normalized transition-based neural networks. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2442–2452, Berlin, Germany. Association for Computational Linguistics. R. Harald Baayen, Richard Piepenbrock, and Leon Gulikers. 1995. The CELEX Lexical Database. Release 2 (CD-ROM). Linguistic Data Consortium, University of Pennsylvania. Galia Bar-Sever, Rachael Lee, Gregory Scontras, and Lisa S. Pearl. 2018. Little lexical learners: Quantitatively assessing the development of adjective ordering preferences. In 42nd annual Boston University Conference on Language Development, pages 58– 71. Christian Bentz, Dimitrios Alikaniotis, Michael Cysouw, and Ramon Ferrer-i-Cancho. 2017. The entropy of words—Learnability and expressivity across more than 1000 languages. Entropy, 19:275– 307. Joan Bresnan, Anna Cueni, Tatiana Nikitina, and Harald Baayen. 2007. Predicting the dative alternation. In Cognitive Foundations of Interpration, pages 69– 94. Royal Netherlands Academy of Science, Amsterdam. Christian Buck, Kenneth Heafield, and Bas Van Ooyen. 2014. N-gram counts and language models from the common crawl. In LREC, volume 2, page 4. Citeseer. Xinying Chen and Ramon Ferrer-i-Cancho, editors. 2019. Proceedings of the First Workshop on Quantitative Syntax (Quasy, SyntaxFest 2019). Association for Computational Linguistics, Paris, France. Kenneth Ward Church and Patrick Hanks. 1990. Word association norms, mutual information, and lexicography. Computational Linguistics, 16(1):22–29. Guglielmo Cinque. 1994. On the evidence for partial N-movement in the Romance DP. In R S Kayne, G Cinque, J Koster, J.-Y. Pollock, Luigi Rizzi, and R Zanuttini, editors, Paths Towards Universal Grammar. Studies in Honor of Richard S. Kayne, pages 85–110. Georgetown University Press, Washington DC. Thomas M. Cover and J. A. Thomas. 2006. Elements of Information Theory. John Wiley & Sons, Hoboken, NJ. J. H. Danks and S. Glucksberg. 1971. Psychological scaling of adjective orders. Journal of Verbal Learning and Verbal Behavior, 10(1):63–67. Robert M. W. Dixon. 1982. Where have all the adjectives gone? And other essays in semantics and syntax. Mouton, Berlin, Germany. Melody Dye, Petar Milin, Richard Futrell, and Michael Ramscar. 2018. Alternative solutions to a language design problem: The role of adjectives and gender marking in efficient communication. Topics in cognitive science, 10(1):209–224. William Dyer. 2018. Integration complexity and the order of cosisters. In Proceedings of the Second Workshop on Universal Dependencies (UDW 2018), pages 55–65, Brussels, Belgium. Association for Computational Linguistics. William E. Dyer. 
2017. Minimizing integration cost: A general theory of constituent order. Ph.D. thesis, University of California, Davis, Davis, CA. Robert M. Fano. 1961. Transmission of Information: A Statistical Theory of Communication. MIT Press, Cambridge, MA. Michael Franke, Gregory Scontras, and Mihael Simoniˇc. 2019. Subjectivity-based adjective ordering maximizes communicative success. In Proceedings of the 41st Annual Meeting of the Cognitive Science Society, pages 344–350. Richard Futrell. 2019. Information-theoretic locality properties of natural language. In Proceedings of the First International Conference on Quantitative Syntax, pages 2–15, Paris. Richard Futrell and Roger Levy. 2017. Noisycontext surprisal as a human sentence processing cost model. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 688–698, Valencia, Spain. Richard Futrell, Roger Levy, and Edward Gibson. 2017. Generalizing dependency distance: Comment on “dependency distance: A new perspective on syntactic patterns in natural languages” by haitao liu et al. Physics of Life Reviews, 21:197–199. Richard Futrell, Peng Qian, Edward Gibson, Evelina Fedorenko, and Idan Blank. 2019. Syntactic dependencies correspond to word pairs with high mutual information. In Proceedings of the Fifth International Conference on Dependency Linguistics (DepLing 2019), Paris. Michael Hahn, Judith Degen, Noah Goodman, Daniel Jurafsky, and Richard Futrell. 2018. An information-theoretic explanation of adjective ordering preferences. In Proceedings of the 40th Annual Meeting of the Cognitive Science Society (CogSci). John Hale. 2006. Uncertainty about the rest of the sentence. Cognitive science, 30(4):643–672. 2012 John T. Hale. 2001. A probabilistic Earley parser as a psycholinguistic model. In Proceedings of the Second Meeting of the North American Chapter of the Association for Computational Linguistics and Language Technologies, pages 1–8. John T. Hale. 2016. Information-theoretical complexity metrics. Language and Linguistics Compass, 10(9):397–412. R. Hetzron. 1978. On the relative order of adjectives. In H. Seller, editor, Language Universals. Narr, T¨ubingen, Germany. Roger Levy. 2008. Expectation-based syntactic comprehension. Cognition, 106(3):1126–1177. Haitao Liu, Chunshan Xu, and Junying Liang. 2017. Dependency distance: A new perspective on syntactic patterns in natural languages. Physics of Life Reviews, 21:171–193. Robert Malouf. 2000. The order of prenominal adjectives in natural language generation. In Proceedings of the 38th Annual Meeting on Association for Computational Linguistics, ACL ’00, pages 85–92, Stroudsburg, PA, USA. Association for Computational Linguistics. Christopher D. Manning. 2003. Probabilistic syntax. In Probabilistic Linguistics, pages 289–341. MIT Press. J E Martin. 1969. Some competence-process relationships in noun phrases with prenominal and postnominal adjectives. Journal of Verbal Learning and Verbal Behavior, 8:471–480. Emily Morgan and Roger Levy. 2016. Abstract knowledge versus direct experience in processing of binomial expressions. Cognition, 157:382–402. Joakim Nivre. 2015. Towards a universal grammar for natural language processing. In Computational Linguistics and Intelligent Text Processing, pages 3–16. Springer. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532– 1543. J. 
R. Quinlan. 1986. Induction of decision trees. Machine Learning, 1(1):81–106. Gregory Scontras, Galia Bar-Sever, Zeinab Kachakeche, Cesar Manuel Rosales Jr., and Suttera Samonte. 2020. Incremental semantic restriction and subjectivity-based adjective ordering. In Proceedings of Sinn und Bedeutung 24. Gregory Scontras, Judith Degen, and Noah D. Goodman. 2017. Subjectivity predicts adjective ordering preferences. Open Mind: Discoveries in Cognitive Science, 1(1):53–65. Gregory Scontras, Judith Degen, and Noah D. Goodman. 2019. On the grammatical source of adjective ordering preferences. Semantics and Pragmatics. G.-J. Scott. 2002. Stacked adjectival modification and the structure of nominal phrases. In G Cinque, editor, The cartography of syntactic structures, Volume 1: Functional structure in the DP and IP, pages 91– 120. Oxford University Press, Oxford. Natalia Silveira, Timothy Dozat, Marie-Catherine de Marneffe, Samuel Bowman, Miriam Connor, John Bauer, and Christopher D. Manning. 2014. A gold standard dependency corpus for English. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC2014). Mihael Simoniˇc. 2018. Functional explanation of adjective ordering preferences using probabilistic programming. Master’s thesis, University of T¨ubingen. Nathaniel J. Smith and Roger Levy. 2013. The effect of word predictability on reading time is logarithmic. Cognition, 128(3):302–319. R. Sproat and C. Shih. 1991. The cross-linguistic distribution of adjective ordering restrictions. In C. Georgopoulos and R. Ishihara, editors, Interdisciplinary approaches to language: Essays in honor of S.-Y. Kuroda, pages 565–593. Kluwer Academic, Dordrecht, Netherlands. David Temperley and Daniel Gildea. 2018. Minimizing syntactic dependency lengths: Typological/cognitive universal? Annual Review of Linguistics, 4:1–15. Stefanie Wulff. 2003. A multifactorial corpus analysis of adjective order in english. International Journal of Corpus Linguistics, 8(2):245–282. P. Ziff. 1960. Semantic analysis. Cornell University Press, Ithaca, NY.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2013–2020 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2013 “None of the Above”: Measure Uncertainty in Dialog Response Retrieval Yulan Feng, Shikib Mehri, Maxine Eskenazi and *Tiancheng Zhao Language Technologies Institute, Carnegie Mellon University *SOCO.AI {yulanf,amehri,max}@cs.cmu.edu *[email protected] Abstract This paper discusses the importance of uncovering uncertainty in end-to-end dialog tasks and presents our experimental results on uncertainty classification on the processed Ubuntu Dialog Corpus1. We show that instead of retraining models for this specific purpose, we can capture the original retrieval model’s underlying confidence concerning the best prediction using trivial additional computation. 1 Introduction Uncertainty modeling is a widely explored problem in dialog research. Stochastic models like deep Qnetworks (Tegho et al., 2017), Gaussian processes (Gai and Young, 2014), and partially observable Markov decision process (Roy et al., 2000) are often used in spoken dialog systems to optimize dialog management by explicitly estimating uncertainty in policy assignments. However, these approaches are either computationally intensive (Gal and Ghahramani, 2015) or require significant work on refining policy representations (Gai and Young, 2014). Moreover, most current uncertainty studies in dialog focus on the dialog management component. End-to-end (E2E) dialog retrieval models jointly encode a dialog and a candidate response (Wu et al., 2016; Zhou et al., 2018), assuming the ground truth is always present in the candidate set, which is not the case in production. Larson et al. (2019) recently showed that classifiers that perform well on in-scope intent classification for task-oriented dialog systems struggle to identify out-of-scope queries. The response selection task in the most recent Dialog System Technology Challenge (Chulaka Gunasekara and Lasecki, 2019) also explicitly mentions that “none 1Our datasets for the NOTA task are released at https://github.com/yfeng21/nota prediction of the proposed utterances is a good candidate” should be a valid option. The goal of this paper is to set a new direction for future task-oriented dialog system research: while retrieving the best candidate is crucial, it should be equally important to identify when the correct response (i.e. ground truth) is not present in the candidate set. In this paper, we measure the E2E retrieval model’s capability to capture uncertainty by inserting an additional “none of the above” (NOTA) candidate into the proposed response set at inference time. The contributions of this paper include: (1) demonstrating that it is crucial to learn the relationship amongst the candidates as a set instead of looking at point-wise matching to solve the NOTA detection task. As a result, the logistic regression (LogReg) approach proposed here consistently achieves the best performance compared to several strong baselines. (2) extensive experiments show that the raw output score (logits) is more informative in terms of representing model confidence than normalized probabilities after the Softmax layer. 2 Related Work Our use of NOTA to measure uncertainty in dialog response is motivated by the design of student performance assessment in psychology studies. Test creators often include NOTA candidates in multiple-choice design questions, both as correct answers and as distractors. 
How the use of NOTA affects the difficulty and discrimination of a question has been discussed widely (Gross, 1994; Pachai et al., 2015). For assessment purposes, a common finding is that using NOTA as the correct response increases question difficulty, and also lures high- and low-performing students toward distractors (Pachai et al., 2015). Returning a NOTA-like response is a common practice in dialog production systems (IBM). The idea of adding the NOTA option to a candidate set is also widely used in other language technology fields such as speaker verification (Pathak and Raj, 2013). However, the effect of adding NOTA has rarely been studied in dialog retrieval research. To the best of our knowledge, we are the first to scientifically evaluate a variety of conventional approaches for retrieving NOTA in the dialog field.

3 Methods

3.1 Ubuntu Dataset

All of the experiments herein use the Ubuntu Dialog Corpus (Lowe et al., 2015), which contains multi-turn, goal-oriented chat logs from the Ubuntu forum. For next utterance retrieval purposes, we use the training data version preprocessed by Mehri and Eskenazi (2019), in which all negative training samples (500,127) were removed and, for each context, 9 distractor responses were randomly chosen from the dataset to form the candidate response set together with the ground truth response. For the uncertainty task, we use a special token NOTA to represent the "none of the above" choice, as in multiple-choice questions. More details on this NOTA setup can be found in Sections 4.1 and 4.2. The modified training dataset has 499,873 dialog contexts, each with 10 candidate responses. The validation and test sets remain unchanged, with 19,561 validation samples and 18,921 test samples.

3.2 Dual LSTM Encoder

The LSTM dual encoder model consists of two single-layer, uni-directional encoders, one to encode the embedding (c) of the context and one to encode the embedding (r) of the response. The output function is computed as the dot product of the two, f(r, c) = c^T r. This model architecture has already been shown to perform well on the Ubuntu dataset (Lowe et al., 2015; Kadlec et al., 2015). We carry out experiments with the following variants of the vanilla model for training:

Binary: This is the most common training method for next utterance ranking on the Ubuntu corpus. With training data prepared in the format [CONTEXT] [RESPONSE] [LABEL], the model performs binary classification on each sample, predicting whether a given response is the ground truth. The binary cross-entropy between the label and σ(f(r, c)), following a sigmoid layer, is used as the loss function.

Selection: As the validation and test datasets are both in the format [CONTEXT] [RESPONSE]*x, where x is usually 10, we train the selection model in the same format. For this model, following a softmax layer, the loss is calculated with the negative log likelihood function:

L = -\log \frac{\exp(f(r_{\text{ground truth}}, c))}{\sum_{i=1}^{x} \exp(f(r_i, c))}    (1)

Dropout: Gal and Ghahramani (2015) found that dropout layers can be used in neural networks as a Bayesian approximation to the Gaussian process, and thus have the ability to represent model uncertainty in deep learning. Inspired by this work, we add a dropout layer after each encoder's hidden layer at training time. At inference time, we keep the dropout layers active and pass each sample through the model n times, then make the final prediction by taking a majority vote among the n predictions.
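To make this dropout variant concrete, the following is a minimal sketch of that inference loop. It assumes a hypothetical score_fn(context, candidates) that runs the dual encoder with its dropout layers still active and returns one score per candidate; the function name and interface are illustrative, not taken from the paper's code.

```python
import numpy as np
from collections import Counter

def mc_dropout_predict(score_fn, context, candidates, n_passes=10):
    """Monte Carlo dropout inference as described above.

    `score_fn` is assumed to run the dual encoder with dropout still active,
    returning one score per candidate (hypothetical interface).
    """
    all_scores = np.stack([score_fn(context, candidates) for _ in range(n_passes)])
    # Majority vote over the argmax of each stochastic pass.
    votes = Counter(int(np.argmax(scores)) for scores in all_scores)
    best_idx, _ = votes.most_common(1)[0]
    # Per-candidate score variance across passes, used later for the
    # variance-based NOTA decision.
    score_variance = all_scores.var(axis=0)
    return best_idx, score_variance
```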
Unlike the other models, the NOTA binary classification decision is not based on the output score itself, but rather on the variance of each response's score across these passes.

3.3 Experimental Setup

LSTM: For the LSTM models, unless otherwise specified, the word embeddings are initialized randomly with a dimension of 300, and the hidden size is 512. The vocabulary consists of the 10,000 most common words in the training dataset, plus the UNK and PAD special tokens. We use the Adam algorithm (Kingma and Ba, 2014) for optimization with a learning rate of 0.005. Gradients are clipped to 5.0. With a batch size of 128, we train the model for 20 epochs and select the best checkpoint based on its performance on the validation set. In the dropout model, we use a dropout probability of 50%.

LogReg: For the logistic regression model, we train on the LSTM outputs over the validation set, with the same hyperparameter setup (where applicable to LogReg) as in the corresponding LSTM model.

4 Experiments

4.1 Direct Prediction

For the direct prediction experiment, we randomly choose 50% of the response sets and replace the ground truth responses with the NOTA special token (we label this subset isNOTA). For the other 50% of samples, we replace the first distractor with the NOTA token (we label this subset notNOTA). This setup ensures that a NOTA token is always present in the candidate set. Although making decisions based on logits (DirectLogits) or probability (DirectProb) yields the same argmax prediction, we collect both output scores for the subsequent LogReg model (details in Section 4.3). Concretely, the final output y' of a direct prediction model is:

y' = \operatorname*{argmax}_{r \in A \cup \{\text{NOTA}\}} f(r, c)    (2)

where A is the set of candidate responses.

4.2 Threshold

Another common approach to returning NOTA is to reject a candidate utterance based on a confidence score threshold. Therefore, in the threshold experiments, with the same preprocessed data as in Section 4.1, we remove all NOTA tokens at the inference model's batch preparation stage, leaving 9 candidates and thus leaving 50% of the response sets (the isNOTA set) with no ground truth present. After the model outputs scores for each candidate response, it decides, using the predefined threshold, whether to accept the highest-scoring prediction as its final response or to reject the prediction and return NOTA instead. We investigate setting the threshold based on probability (ThresholdProb) and on logits (ThresholdLogits), respectively. Concretely, the final output y' is given by:

y' = \begin{cases} \text{NOTA} & \text{if } \max_{r \in A} f(r, c) < \text{threshold} \\ \operatorname*{argmax}_{r \in A} f(r, c) & \text{otherwise} \end{cases}    (3)

4.3 Logistic Regression

We feed the output scores of the LSTM models for all candidate answers as input features to the LogReg model, which consists of a single linear layer and a logistic output layer. Separate LogReg models are trained for different numbers of candidates. The probability output indicates whether the previous model's prediction is the ground truth or merely the best-scoring distractor. Since LogReg sees the output scores of all candidate responses, it is trained to model the relationship among all the candidates, making it categorically different from the binary estimation in Sections 4.1 and 4.2. Note that at inference time, LogReg essentially acts as a threshold method. The final output is determined by:

y' = \begin{cases} \text{NOTA} & \text{if } \operatorname{LogReg}(\{f(r_i, c)\}) < 0.5 \\ \operatorname*{argmax}_{r \in A} f(r, c) & \text{otherwise} \end{cases}    (4)

where the input to the LogReg model, f(r_i, c), is the output of the LSTM models, either in logit or normalized form, as defined in Section 3.2.
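As a concrete illustration of Equation (4), the sketch below fits a scikit-learn logistic regression on the per-candidate scores of held-out examples and applies the 0.5 decision rule at inference time. The array shapes, the NOTA placeholder string, and the helper names are assumptions made for illustration; the paper trains a separate such model for each candidate-set size.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

NOTA = "__NOTA__"  # placeholder string; the actual special token is dataset-specific

def train_nota_logreg(candidate_scores, top1_is_ground_truth):
    """Fit the NOTA classifier of Section 4.3 on held-out retrieval scores.

    candidate_scores: array of shape (num_examples, num_candidates) holding the
        LSTM logits (or softmax scores) for every candidate.
    top1_is_ground_truth: binary labels, 1 if the best-scoring candidate is the
        ground truth, 0 if it is only the best-scoring distractor.
    """
    clf = LogisticRegression()
    clf.fit(np.asarray(candidate_scores), np.asarray(top1_is_ground_truth))
    return clf

def predict_with_nota(clf, candidate_scores, candidates):
    """Implements Equation (4): answer NOTA when the classifier's probability
    that the top-scoring candidate is the ground truth falls below 0.5."""
    scores = np.asarray(candidate_scores, dtype=float)
    p_ground_truth = clf.predict_proba(scores.reshape(1, -1))[0, 1]
    if p_ground_truth < 0.5:
        return NOTA
    return candidates[int(np.argmax(scores))]
```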
4.4 Metric Design

Dialog retrieval tasks often use recall at k out of x candidates (R_x@k) as a key metric, measuring how often the correct answer appears among the top k of x candidates. In this paper, we focus on the top-1 accuracy R_x@1 (R_x for short) with a candidate set size of x, where x ∈ {2, 5, 10, 20, 40, 60, 80, 100}. The recall metric is modified for uncertainty measurement purposes, and is further extended to the NOTA accuracy out of x (N_x) and to F1 scores for each class (NF1_x, GF1_x). Let D = {c, y} and D_n = {c, isNOTA} be the two subsets of the data corresponding to notNOTA and isNOTA samples, respectively. The above metrics are computed as:

R_x = \frac{\sum_{y \in D} \mathbb{1}[y' = y]}{|D|}    (5)

N_x = \frac{\sum_{y \in D_n} \mathbb{1}[y' = y] + \sum_{y \in D} \mathbb{1}[y' \neq \text{NOTA}]}{|D| + |D_n|}    (6)

In Equation (6), the numerator counts correct NOTA predictions on the isNOTA subset together with all non-NOTA predictions on the notNOTA subset; the latter includes both the correct retrievals counted in Equation (5) and the cases where the model correctly decides that the ground truth is present but fails to choose it. The positive class for NF1_x is the isNOTA class, and the positive class for GF1_x is the notNOTA class.

4.5 More Candidates

In real-world problems, retrieval response sets usually contain many more than 10 candidates. We therefore also test the selection and binary models on a larger, reconstructed test set. For each context, we randomly select 90 additional distractors from other samples' candidate responses, producing a candidate response set of size 100 for each context.

5 Results and Analysis

Table 1 summarizes the experimental results. Due to space limitations, the table only displays results on 10 candidates. Complete results for other numbers of candidates, which show performance patterns similar to the 10-candidate setting, are given in the Appendix. The thresholds and hyperparameters are tuned on the validation set according to the highest average F1 score. For the selection model, in addition to training on the original dataset, we also train on a modified training set that contains NOTA choices, as in the inference datasets, using the same set of hyperparameters. As expected, since there are now fewer real distractor responses, training with NOTA improves the model's NOTA classification performance but sacrifices recall, which is not desirable. In all the models, regardless of the training dataset used and the model architecture, adding a logistic regression on top of the LSTM output significantly improves average F1 scores. In particular, the highest F1 scores are always achieved with logits as the LogReg input features.
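For concreteness, the metrics defined in Section 4.4 can be computed directly from the per-example predictions; the sketch below is a minimal NumPy version with illustrative names (the NOTA placeholder string is an assumption). NF1_x and GF1_x then follow from scoring the binary NOTA-vs.-response decision, e.g. with sklearn.metrics.f1_score.

```python
import numpy as np

def nota_metrics(predictions, labels, nota_token="__NOTA__"):
    """Compute R_x and N_x from Equations (5) and (6).

    predictions: model outputs y', one per example (a response string or the
        NOTA token); labels: gold answers y, equal to the NOTA token on the
        isNOTA half of the data. Names are illustrative, not the paper's code.
    """
    predictions, labels = np.asarray(predictions), np.asarray(labels)
    is_nota = labels == nota_token          # D_n in the paper
    not_nota = ~is_nota                     # D in the paper
    # Equation (5): top-1 recall on the subset where the ground truth is present.
    r_x = np.mean(predictions[not_nota] == labels[not_nota])
    # Equation (6): binary NOTA accuracy, counting correct NOTA predictions on
    # D_n plus any non-NOTA prediction on D (even if the wrong response is picked).
    correct = (np.sum(predictions[is_nota] == nota_token)
               + np.sum(predictions[not_nota] != nota_token))
    n_x = correct / len(labels)
    return r_x, n_x
```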
These results show that, though setting a threshold is a common heuristic to balance true and false acceptance rates (Larson et al., 2019), its NOTA predicR10 N10 NF110 GF110 Average F1 Selection Model (original data) Direct Predict 56.12 61.48 52.82 67.46 60.14 +LogReg (Logits) 55.98 87.81 86.96 88.56 87.76 +LogReg (Softmax) 50.94 74.30 74.46 74.15 74.31 Logits Threshold (=0.5) 50.10 64.28 62.84 65.61 64.22 +LogReg 62.81 80.45 80.49 80.42 80.45 Softmax Threshold (=0.55) 48.76 60.10 59.69 60.50 60.09 +LogReg 63.64 78.50 80.17 76.52 78.34 Selection Model ( NOTA) Direct Predict 55.43 63.07 54.28 69.03 61.66 +LogReg (Logits) 40.66 78.19 78.80 77.53 78.16 +LogReg (Softmax) 51.63 77.94 78.21 77.67 77.94 Logits Threshold (=2.0) 48.44 61.32 57.75 64.32 61.03 +LogReg 60.73 79.22 79.11 79.33 79.22 Softmax Thtrshold (=0.5) 48.18 59.06 57.32 60.67 59.00 +LogReg 61.08 78.01 79.75 75.94 77.84 Binary Model Direct Predict 35.73 61.72 63.54 59.72 61.63 +LogReg (Logits) 35.64 94.08 93.72 94.40 94.06 +LogReg (Softmax) 25.42 85.06 85.41 84.69 85.05 Logits Threshold (=1.0) 41.64 61.50 57.77 64.62 61.20 +LogReg 51.58 77.15 76.74 77.55 77.14 Softmax Threshold (=0.4) 39.70 54.96 51.83 57.70 54.77 +LogReg 52.00 74.40 76.43 71.99 74.21 Dropout Model Direct Predict 28.57 50.13 1.48 66.61 34.05 +LogReg (Logits) 19.21 66.89 61.87 70.74 66.30 +LogReg (Softmax) 21.73 50.49 56.37 42.79 49.58 Logits Variance Threshold (=0.1) 13.73 51.89 57.15 45.15 51.15 +LogReg 20.87 56.13 40.18 65.37 52.78 Softmax Variance Threshold (=0.001) 22.22 50.03 38.98 57.69 48.33 +LogReg 23.84 57.21 60.87 52.81 56.84 Table 1: Results on 10 candidates. R represents recall, N represents binary NOTA classification accuracy, NF1 represents the F1 score on the NOTA class, and GF1 represents the F1 score on the ground-truthpresent class. Average F1 is the average of NF1 and GF1. tion performance is not comparable to the LogReg approach, even after an exhaustive grid-search of best thresholds. This finding is underlined by receiver operating characteristic (ROC) curves on the validation set Figure 1: Merged ROC curves for LSTM outputs with the original selection model. Top left, top right, bottom left, and bottom right represent plots for ThresholdLogits,Directlogits, ThresholdProb, and DirectProb respectively Figure 2: ROC curves for LogReg outputs with the original selection model’s output logits as input features. Top left, top right, bottom left, and bottom right represent plots for ThresholdLogits,Directlogits, ThresholdProb, and DirectProb respectively Figure 1 shows the ROC curves for predicting NOTA directly with LSTM. Figure 2 shows ROC plots for predicting NOTA with LogReg in the same order as Figure 1, where a separate LogReg model is trained for each score setting. In both figures, the areas under curve (AUC) indicate that logits serves as a more discriminative confidence score compared to the normalized softmax score. Comparing the top right plots in both Figures, we can see that with the same set of logits scores as threshold criteria, AUC is boosted from 0.71 to 0.91 with 2017 the additional LogReg model, providing further evidence that LogReg significantly outperforms the LSTM models in this NOTA classification task. Figure 3: Distribution of max scores as predicted by the original selection model, with scores (logits or probability) on the x-axis, and number of samples on the y-axis. Blue plot represents the isNOTA subset, and orange plot represents the notNOTA. 
Top left, top right, bottom left, and bottom right represent plots for ThresholdLogits,Directlogits, ThresholdProb, and DirectProb respectively With the selection model trained on the original dataset, Figure 3 shows the model’s distribution of max scores on the validation set. We see that there are apparent differences between isNOTA’ and notNOTA’s best score distributions. This is an encouraging observation because it suggests that current retrieval models can already distinguish good versus wrong responses to some extent. Note that as the NOTA token is not included in training, for direct prediction tasks, the NOTA token is encoded as an UNK token at inference time. The tails of the isNOTA plot in both the DirectLogits and DirectProb graphs suggest that the model will, very rarely, pick the unknown token as the best response. Figure 4 shows the average F1 score trends with the original selection model on the test set with 100 distractors. The plot shows the trend that with more distractors, the LSTM model struggles to determine the presence of ground truth, while the LogReg model performs consistently well. The complete results of this extended test set are in the Appendix. Figure 4: Average F1 scores with different numbers of response candidates, where the LSTM model stays the same, and LogReg is separately trained for each number setting. The left blue bars represent LSTM direct prediction, and the right orange bars represent LogReg results with logits input. 6 Discussion With NOTA options in the training data, the models learn to sometimes predict NOTA as the best response, resulting in more false-positive isNOTA predictions at inference time. Also, by replacing various ground truths and strong distractors with NOTA, the model has fewer samples to help it learn to distinguish between different ground truths and strong distractors/ Thus it performs less well on borderline predictions (scores close to the threshold). This behavior results in some selection methods trained on the dataset containing NOTA tokens performing worse than when they are trained on the original dataset. This motivates us to advocate the proposed LogReg approach instead of the conventional add a NOTA choice method. Another prominent advantage of the LogReg approach is that it does not require data- or modeldependent input like embedding vectors or hidden layer output. Instead, it takes logits or normalized scores, both of which can be output from any models. This feature makes our approach insensitive to the underlying architecture. 7 Conclusions We have created a new NOTA task on the Ubuntu Dialog Corpus, and have proposed to solve the problem by learning the response set representation with a binary classification model. We hope the dataset we release will be used to benchmark future dialog system uncertainty research. 2018 References Lazaros Polymenakos Chulaka Gunasekara, Jonathan K. Kummerfeld and Walter S. Lasecki. 2019. Dstc7 task 1: Noetic end-to-end response selection. In 7th Edition of the Dialog System Technology Challenges at AAAI 2019. Yarin Gal and Zoubin Ghahramani. 2015. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. M. Gai and S. Young. 2014. Gaussian processes for pomdp-based dialogue manager optimization. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 22(1):28–40. Leon J. Gross. 1994. Logical versus empirical guidelines for writing test items: The case of ”none of the above”. Evaluation & the Health Professions, 17(1):123–126. 
Watson Assistant IBM. IBM Watson Assistant handling none of the above. https: //cloud.ibm.com/docs/assistant? topic=assistant-dialog-runtime# dialog-runtime-handle-none. Accessed: 2020-04-15. Rudolf Kadlec, Martin Schmid, and Jan Kleindienst. 2015. Improved deep learning baselines for ubuntu corpus dialogs. Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. International Conference on Learning Representations. Stefan Larson, Anish Mahendran, Joseph J. Peper, Christopher Clarke, Andrew Lee, Parker Hill, Jonathan K. Kummerfeld, Kevin Leach, Michael A. Laurenzano, Lingjia Tang, and Jason Mars. 2019. An evaluation dataset for intent classification and out-of-scope prediction. Ryan Lowe, Nissan Pow, Iulian Serban, and Joelle Pineau. 2015. The Ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems. In Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 285–294, Prague, Czech Republic. Association for Computational Linguistics. Shikib Mehri and Maxine Eskenazi. 2019. Multigranularity representations of dialog. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1752–1761, Hong Kong, China. Association for Computational Linguistics. Matthew Pachai, David DiBattista, and Joseph Kim. 2015. A systematic assessment of none of the above on multiple choice tests in a first year psychology classroom. Canadian Journal for the Scholarship of Teaching and Learning, 6:1–17. M. A. Pathak and B. Raj. 2013. Privacy-preserving speaker verification and identification using gaussian mixture models. IEEE Transactions on Audio, Speech, and Language Processing, 21(2):397–406. Nicholas Roy, Joelle Pineau, and Sebastian Thrun. 2000. Spoken dialogue management using probabilistic reasoning. In Proceedings of the 38th Annual Meeting on Association for Computational Linguistics, ACL ’00, pages 93–100, Stroudsburg, PA, USA. Association for Computational Linguistics. Christopher Tegho, Pawe Budzianowski, and Milica Gai. 2017. Uncertainty estimates for efficient neural network-based dialogue policy optimisation. Yu Wu, Wei Wu, Chen Xing, Ming Zhou, and Zhoujun Li. 2016. Sequential matching network: A new architecture for multi-turn response selection in retrieval-based chatbots. arXiv preprint arXiv:1612.01627. Xiangyang Zhou, Lu Li, Daxiang Dong, Yi Liu, Ying Chen, Wayne Xin Zhao, Dianhai Yu, and Hua Wu. 2018. Multi-turn response selection for chatbots with deep attention matching network. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1118–1127. 
2019 A Appendices A.1 More Plots A.2 Complete Results 50% NOTA Test Results On More Distractors (%) #Candidates R N N F1 G F1 Average F1 Direct Predict 2 66.77 78.00 80.22 75.21 77.72 5 62.14 69.17 67.86 70.38 69.12 10 56.04 61.48 52.82 67.46 60.14 20 48.09 55.81 36.11 66.22 51.17 40 39.79 52.46 20.90 66.02 43.46 60 34.96 51.20 14.12 65.92 40.02 80 31.50 50.84 10.70 66.09 38.39 100 29.10 50.59 8.69 66.13 37.41 +LogReg 2 66.72 88.19 87.26 88.99 88.13 5 62.07 87.90 87.01 88.67 87.84 10 55.98 87.81 86.96 88.56 87.76 20 48.07 88.08 87.27 88.79 88.03 40 39.78 87.64 86.89 88.30 87.60 60 34.95 87.80 87.07 88.46 87.76 80 31.49 87.92 87.11 88.63 87.87 100 29.10 87.55 86.84 88.18 87.51 Table 2: Results for 2,5,10,20,40,60,80,100 candidate responses with the original selection model Table 2 shows the original selection model’s performance on different sizes of candidate response sets. The direct predict model is run as it does not need further tuning. Threshold approach, especially with softmax probability as threshold, will need separate rounds of tuning on the threshold. Table 3 shows the complete results for all models on the test set, both for 2 candidates and for 10 candidates. Here, the average F1 is averaged on all 4 F1 scores. For each model architecture, the best performing setting for each metric is in bold. 2020 50% NOTA Test Results (%) R@10 R@2 N@10 N@2 N F1@10 N F1@2 G F1@10 G F1@2 Average F1 Selection model trained with original data Direct Predict 56.12 66.77 61.48 78.00 52.82 80.22 67.46 75.21 68.93 +Logistic Regression on Top of Logits 55.98 66.72 87.81 88.19 86.96 87.26 88.56 88.99 87.94 +Logistic Regression on Top of Softmax 50.94 51.93 74.30 74.33 74.46 74.38 74.15 74.29 74.32 Logits Threshold (=0.5) 50.10 55.72 64.28 73.25 62.84 76.73 65.61 68.56 68.43 +Logistic Regression on Top 62.81 77.70 80.45 79.95 80.49 79.92 80.42 79.99 80.20 Softmax Threshold (=0.55) 48.76 48.76 60.10 70.67 59.69 75.63 60.50 63.17 64.74 +Logistic Regression on Top 63.64 69.47 78.50 78.54 80.17 80.20 76.52 76.57 78.36 Selection model trained with data containing NOTA Direct Predict 55.43 65.03 63.07 78.37 54.28 80.91 69.03 75.04 69.81 +Logistic Regression on Top of Logits 40.66 47.90 78.19 77.45 78.80 78.02 77.53 76.85 77.80 +Logistic Regression on Top of Softmax 51.63 53.90 77.94 78.00 78.21 78.15 77.67 77.85 77.97 Logits Threshold (=2.0) 48.44 55.99 61.32 71.31 57.75 74.35 64.32 67.46 65.97 +Logistic Regression on Top 60.73 76.12 79.22 78.03 79.11 77.85 79.33 78.21 78.62 Softmax Thtrshold (=0.5) 48.18 48.18 59.06 70.16 57.32 75.19 60.67 62.56 63.94 +Logistic Regression on Top 61.08 68.45 78.01 78.00 79.75 79.74 75.94 75.93 77.84 Pairwise Model Direct Predict 35.73 40.91 61.72 68.25 63.54 75.07 59.72 56.30 63.66 +LogReg on Top of Logits 35.64 40.73 94.08 94.14 93.72 93.79 94.40 94.46 94.09 +LogReg on Top of Softmax 25.42 27.14 85.06 85.02 85.41 85.34 84.69 84.67 85.03 Logits Threshold (=1.0) 41.64 48.57 61.50 70.01 57.77 74.36 64.62 63.88 65.16 +LogReg on Top 51.58 73.33 77.15 77.27 76.74 76.88 77.55 77.64 77.20 Softmax Threshold (=0.4) 39.70 40.05 54.96 65.90 51.83 72.30 57.70 55.66 59.37 +LogReg on Top 52.00 63.79 74.40 74.33 76.43 76.41 71.99 71.85 74.17 Dropout Model Direct Predict 28.57 93.47 50.13 62.42 1.48 45.50 66.61 71.32 46.23 +LogReg on Top of Logits 19.21 77.20 66.89 66.72 61.87 61.59 70.74 70.65 66.21 +LogReg on Top of Softmax 21.73 29.37 50.49 54.83 56.37 63.73 42.79 40.15 50.76 Logits Variance Threshold (=0.1) 13.73 22.11 51.89 50.27 57.15 59.13 45.15 36.51 49.48 +LogReg on Top 20.87 
60.78 56.13 55.86 40.18 39.29 65.37 65.32 52.54 Softmax Variance Threshold (=0.001) 22.22 36.75 50.03 54.56 38.98 57.64 57.69 50.99 51.32 +LogReg on Top 23.84 26.07 57.21 56.79 60.87 66.47 52.81 39.23 54.85 Table 3: @10 and @2 represent metrics on 10 and 2 candidates respectively. R represents recall, N represents binary NOTA classification accuracy, NF1 represents the F1 score on the NOTA class, and GF1 represents the F1 score on the ground-truth-present class. Average F1 is obtained on the 4 F1 scores.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2021–2030 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2021 Can You Put it All Together: Evaluating Conversational Agents’ Ability to Blend Skills Eric Michael Smith*, Mary Williamson*, Kurt Shuster, Jason Weston, Y-Lan Boureau Facebook AI Research {ems,marywilliamson,kshuster,jase,ylan}@fb.com Abstract Being engaging, knowledgeable, and empathetic are all desirable general qualities in a conversational agent. Previous work has introduced tasks and datasets that aim to help agents to learn those qualities in isolation and gauge how well they can express them. But rather than being specialized in one single quality, a good open-domain conversational agent should be able to seamlessly blend them all into one cohesive conversational flow. In this work, we investigate several ways to combine models trained towards isolated capabilities, ranging from simple model aggregation schemes that require minimal additional training, to various forms of multi-task training that encompass several skills at all training stages. We further propose a new dataset, BlendedSkillTalk, to analyze how these capabilities would mesh together in a natural conversation, and compare the performance of different architectures and training schemes. Our experiments show that multi-tasking over several tasks that focus on particular capabilities results in better blended conversation performance compared to models trained on a single skill, and that both unified or two-stage approaches perform well if they are constructed to avoid unwanted bias in skill selection or are fine-tuned on our new task. 1 Introduction A good open-domain conversational agent should have a well-rounded set of skills1 and qualities that allow it to seamlessly blend listening with empathy, providing knowledgeable responses, and talking about various topics from everyday life to their favorite hobbies or latest challenges. 1”Skills” in the conversational AI literature is sometimes taken to mean a very defined specific set of abilities such as telling the weather (e.g., Zhou et al. (2020)). Our use in this paper is much more general and refers to any desirable capability. Recent research has made solid strides towards gauging and improving performance of opendomain conversational agents along specific axes such as how knowledgeable they are (Dinan et al., 2019b; Moghe et al., 2018; Qin et al., 2019), how well they can display empathy (Rashkin et al., 2019; Lin et al., 2019) or talk about their personal background (Zhang et al., 2018; Li et al., 2017). However it remains unclear whether models optimized for performance along one of these axes can retain the learned skill while blending it with other desirable skills, or how to best conduct simultaneous training of multiple skills. In this work, we compare several ways to combine tasks designed to evaluate and improve a single conversational skill, ranging from multi-task training over several datasets to training a top-level classifier to play the role of a dialogue manager and query the most appropriate single-skill pretrained model for a response. In order to evaluate those methods, we propose a new Englishlanguage dataset, BlendedSkillTalk, that blends several skills into a single conversation, and use it to evaluate methods with both automated metrics and human crowdsourced ratings across different axes. 
Our experiments show that existing single-skill tasks can effectively be combined to obtain a model that blends all skills into a single conversational agent if care is taken to make the dialogue agent avoid unwanted biases when selecting the skill, or if fine-tuning on blended data, or both. We propose methods that compare those competing approaches, and provide a detailed analysis of their successes and failures. 2 Related work While most commercial dialogue systems rely on hand-coded narrow skills (e.g., see Zhou et al. 2022 (2020); Ram et al. (2018)), typically focusing on separate task-oriented features such as alarm setting, calendar entries, etc., we are interested in models that display various qualities in opendomain dialogue. Further, we focus on skills that can be learned end-to-end, as end-to-end learning affords the promise of better generalization to unseen domains. Recent promising conversational models have leveraged very large conversation-like data such as datasets extracted from Reddit and made available by a third party on pushshift.io (Mazar´e et al., 2018; Humeau et al., 2019; Keskar et al., 2019; Rashkin et al., 2019). These large-scale datasets are very useful in providing vast amounts of conversational material that allow for reproducible research and comparison with prior work, however the qualities of resulting conversational agents are dependent on the qualities present in the source conversations. Given how online conversations can turn toxic and lack empathy, indiscriminate pretraining on such corpora is unlikely to spontaneously endow a conversational agent with desirable qualities such as avoiding toxic responses (Dinan et al., 2019a) or demonstrating empathy (Rashkin et al., 2019) or knowledge (Dinan et al., 2019b). This has led the community to propose tasks and datasets focusing specifically on some trait or skill. In this work, we examine how to combine three such traits that each have a corresponding task and dataset: demonstrating an ability to talk about oneself and get to know your partner, as captured by the ConvAI2 dataset, an extension of the PersonaChat dataset (Zhang et al., 2018; Dinan et al., 2020); being knowledgeable and discussing a topic in depth, as measured through the Wizard of Wikipedia task (Dinan et al., 2019b); and demonstrating empathy and being able to talk about emotional personal situations, as measured by the EmpatheticDialogues benchmark proposed in Rashkin et al. (2019). The ConvAI2 dataset comprises more than 140k utterances of crowdsourced conversations between paired workers getting to know each other. Each worker was assigned a persona consisting of a few sentences such as “I have a pet hamster,” which had separately been crowdsourced. The Wizard of Wikipedia (WoW) task aims to explore conversation informed by expert knowledge from Wikipedia, and provides about 194k utterances of conversations on about 1,250 topics. The EmpatheticDialogues (ED) dataset consists in about 50k utterances between a Speaker who is talking about an emotional situation, and a Listener who is tasked to respond in an empathetic manner, acknowledging the other person’s feelings. In addition to being associated with easy-to-use datasets, these three skills benefit from being clearly defined and separate in scope. 
Focusing on blending only three skills keeps data collection, ablations, and analyses manageable while already presenting a challenge for models, and it helps narrow down the most promising approaches for blending a greater number of skills. 3 Blending Skills in a Conversation A model separately trained on a variety of skills might be able to do well on each of them in isolation, but still struggle to seamlessly blend them over the course of a single conversation where it has to navigate whether a given utterance calls for informative knowledge or empathy, for example. It must learn to switch between skills, each time incorporating previous dialogue context which may contain utterances from either partner relating to multiple skills, and on some turns may have to blend skills into a single response. 3.1 BlendedSkillTalk In order to gauge how successful a model is at this blended objective, we collect BlendedSkillTalk, a small crowdsourced dataset of about 5k conversations in English where workers are instructed to try and be knowledgeable, empathetic, or give personal details about their given persona, whenever appropriate. We collect conversations from 2,679 workers, with each worker participating in an average of 5.4 conversations in the train set and a maximum of 15 conversations. The dataset consists of 4,819 train-set conversations, 1,009 validationset conversations, and 980 test-set conversations. We ensure that the sets of workers involved in collecting the train, validation, and test sets are completely disjoint to prevent our models from benefiting from learning about specific workers’ biases (Geva et al., 2019). On average, there are 11.2 utterances (5.6 pairs from the two workers) in each conversation in the train set. This dataset is available through the ParlAI framework2. 2https://parl.ai/ 2023 An example conversation from BlendedSkillTalk is shown in Figure 1. In this example, we see that the speakers inject knowledge, empathy, and personal background, and generally that the conversation invokes different skills while flowing naturally. Guided Collection In order to prevent workers from getting stuck in a set “mode” of conversation (in which they consistently use one specific skill) or from being too generic, we provide responses from models that have been trained towards a specific skill as inspiration to one of the two workers in the conversation. That worker is free to either use and modify or ignore those responses. Thus, each conversation involves an “unguided” speaker and a “guided” speaker, with the unguided speaker talking first. Whenever it is the guided speaker’s turn to respond, we show them three suggested responses, one each from three single-task polyencoder (Humeau et al., 2019) models trained on the ConvAI2, ED, and WoW datasets. These are the same models we use as baseline conversational agents for individual skills as well. A breakdown of the choices of guided speakers is shown in Table 1, showing a reasonably balanced choice of suggestions. Workers decide to use them in 20.5% of utterances, which affects the overall dialogues. Interestingly, 46.1% of the time (versus 33.3% at chance), the unguided speaker continues in the same mode as the previous utterance by the guided speaker, according to the classifier. Thus, the BlendedSkillTalk dataset mimics natural conversation by featuring both continuity (“stickiness” in the conversation mode) and mode blending within a single conversation. 
Blended Initial Contexts Each speaker is assigned a pair of sentences from randomly-chosen personas from the ConvAI2 dataset. Similar to the ConvAI2 setting, each speaker sees their own persona but not that of the other speaker. Each conversation is seeded with a randomly selected pair of utterances from ConvAI2, WoW, or ED, with equal probability. Workers are instructed to continue the conversation from there. Workers are also provided with the topic being discussed if the conversation seed is from WoW, or the situation description if it is from ED. Note that this latter set-up departs from the ED benchmark set-up, where the situation description is not used. The rationale for this is to provide some context about Chosen suggestion Initial Context Count Total none ConvAI2 7280 21468 ED 7257 WoW 6931 ConvAI2 ConvAI2 567 1599 ED 496 WoW 536 ED ConvAI2 766 2221 ED 773 WoW 682 WoW ConvAI2 634 1730 ED 494 WoW 602 Table 1: Guided workers choice of suggestions in the train set of BlendedSkillTalk, broken down by provenance of the given initial context utterances. Guided workers often choose not to use the suggestions, but have a slight preference for ConvAI2 when the initial context is from that dataset, and similarly for ED. what was being discussed if the seed utterance pair happened to be extracted from the middle of a conversation. When WoW is used as seed, the chosen personas and the initial conversation topic are selected to match, similar to the original WoW paper. To gain more insight into the influence of the datasets that provide this context, we leverage an utterance classifier trained to assign utterances to one of the three datasets (ConvAI2, WoW, ED; described further in Section 3.2). We find that the average percentage of utterances from the unguided worker that match the provided context dataset is 43.5% over the training set, compared to 33.3% if the source of the provided context had no influence (note that this observed ”stickiness” is similar to the 46.1% of times the unguided speaker continues in the same mode as the one initiated by the guided speaker, mentioned above). This suggests that the choice of seeding utterances and context indeed has an influence on the type of blend observed, helping to make the dataset balanced. Table 2 breaks down the classification results by provenance of the seed context. The fraction of utterances resembling a given dataset increases when the seed context is from that same dataset. However the conversations are still blended: when breaking down the training set conversations according to the number of “modes” observed in the utterances of the unguided worker according to the classifier, 47.8% show 3 modes, 43.2% show two modes, and 9.1% show a single mode. Data Quality To improve the quality of the collected conversations, we filter out any conversa2024 Persona for Unguided Speaker: Persona for Guided Speaker: My son plays on the local football team. My eyes are green. I design video games for a living. I wear glasses that are cateye. Wizard of Wikipedia topic: Video game design Previous utterances (shown to speakers): U: What video games do you like to play? G: all kinds, action, adventure, shooter, platformer, rpg, etc. but video game design requires both artistic and technical competence AND writing skills. that is one part many people forget Actual utterances: U: Exactly! I think many people fail to notice how beautiful the art of video games can be. 
(PB) (G selected the WoW suggestion: ”Indeed, Some games games are purposely designed to be a work of a persons creative expression, many though have been challenged as works of art by some critics.”) G: Indeed, Some games games are purposely designed to be a work of a persons creative expression, many though have been challenged as works of art by some critics. (K) U: Video games are undervalued by many and too easily blamed for problems like obesity or violence in kids (K) G: Indeed, Just last week my son was playing some Tine 2 and it was keeping him so calm. Games are therapeutic to some. (S) U: I use games to relax after a stressful day, the small escape is relaxing. (PB) (G selected the ED suggestion: ”I enjoy doing that after a hard day at work as well. I hope it relaxes you!”) G: I enjoy a good gaming session after a hard day at work as well. (PB) U: What other hobbies does your son have? (PB) G: Well he likes to fly kites and collect bugs, typical hobbies for an 8 year old, lol. (PB) U: My 12 year old is into sports. Football mostly. I however don;t enjoy watching him play. (PB) G: I wish I could play football, But I wear this cateye glasses and they would break if I tried. (PB) U: Sounds nice. Are they new or vintage? (E) G: They are new, I got them because of my love for cats lol. I have to show off my beautiful green eyes somehow. (S) Figure 1: Sample conversation from the BlendedSkillTalk dataset, annotated with four conversation mode types (PB: personal background; K: knowledge; S: personal situation; E: empathy). The guided (G) and unguided (U) workers are given personas and a topic. The conversation has been seeded with two utterances from a conversation sampled from WoW. When the guided worker selected one of the suggestions, it is shown in shaded grey. Source of Seed Context % classified as: ConvAI2 WoW ED ConvAI2 29.6 25.3 25.5 WoW 49.6 57.5 30.3 ED 20.8 17.1 44.2 Table 2: Percentages of utterances of unguided workers classified by the dataset classifier as coming from ConvAI2, WoW, or ED, broken down by provenance of the provided seed context. For each dataset, the fraction of utterances classified as coming from that dataset is highest when the seed context is from that same dataset. tions where one of the speakers speaks less than 3 words per message; starts their conversation with a greeting despite previous utterances existing in the conversation; uses all-caps too frequently; repeats themselves too much; writes a message that gets flagged by a safety classifier; or, if they are the guided speaker, always accepts suggestions verbatim without changing them. Messages cannot be over 30 words or copy persona strings exactly. Skill Annotations We also asked crowdsource workers to rate individual utterances as exhibiting one of four possible modes: • Knowledge: using factual information (“I’ve heard that in some places, lifeguards also help with other sorts of emergencies, like mountain rescues!”) (Dinan et al., 2019b) • Empathy: understanding and acknowledging implied feelings (“I’m sorry to hear that. I wish I could help you figure it out”) (Rashkin et al., 2019) • Personal situations: past circumstances in a person’s life (“I finally got that promotion at work! 
I have tried so hard for so long to get it!”) (Rashkin et al., 2019) • Personal background: a person’s personality, interests, and attributes (“I am into equestrian sports.”) (Zhang et al., 2018) All utterances in over 700 conversations from the validation set of the BST dataset, from both guided and unguided workers, were annotated in this manner for 7,380 annotations collected in total. Workers were able to select as many attributes as 2025 Mode Count Conversations Pct (%) 1 51 6.9% 2 167 22.6% 3 290 39.2% 4 232 31.4% Table 3: Breakdown of conversations by number of modes, showing that most BST dataset conversations exhibit multiple modes. Workers were asked to choose if each utterance of a conversation demonstrated knowledge, empathy, personal situations, or personal background. Over 70% of the conversations annotated demonstrated at least 3 of the 4 modes. they wished for each utterance. To avoid workerspecific bias, each crowdsource worker was limited to performing annotations on 10 conversations, and 123 total workers contributed annotations. Most analysis in this paper refers to three datasets, and the utterance classifier was trained with three dataset labels as classes. However, the ED dataset contains both “Speaker” utterances that describe personal situations, and ”Listener” utterances, where the Listener responds with empathy (the ED benchmarks trains on both sides but evaluates only on the Listener side). We therefore break down annotations into four types, with two types covering responses about “personal topics”: personal background (which is the focus of ConvAI2) and personal situations (talked about in ED). Results in Table 3 show that the dataset indeed contains a reasonably balanced blend of these qualities. Over 70% of conversations annotated contained at least 3 of 4 modes. Overall, workers’ annotation counts are 43.7% for personal background, 20.5% for knowledge, 20.3% for empathy, and 15.4% for personal situations. This supports the finding from our utterance classifier that the vast majority of conversations feature more than one mode, where utterance modes are defined as the predicted dataset provenance per utterance. In order to avoid excessive annotator bias and keep annotations discriminative, we limit the maximum number of annotations per worker and check that annotators did not select all modes for each utterance. 3.2 Blending Skills in a Single Model Architectures and Training The base architecture used throughout the paper is the 256-million parameter poly-encoder proposed in Humeau et al. (2019), which is a Transformer-based architecture for retrieval that learns a small number of codes representing the input context, so that performing attention over retrieval candidates is tractable in real-time, and was shown to be state of the art on several datasets. The polyencoder is first pretrained on the pushshift.io Reddit dataset and then fine-tuned on individual datasets. At test time, these models retrieve from the set of training utterances to output a response. Swept hyperparameters include dropout fractions, learning-rate schedule, the number of polyencoder codes used to represent the context, the output scaling factor, and the output reduction type (max across outputs vs. mean across outputs vs. first output only). Hyperparameters that were held constant included a training batch size of 512 and learning with Adamax; 12 encoder layers and an embedding size of 768; and label and text truncation lengths of 72 and 360. 
Note this model discards all casing information. Models were trained until validation-set hits@1 failed to improve for 10 epochs. All training is conducted in ParlAI (Miller et al., 2017). Model selection during fine-tuning is performed by choosing the model that scores highest on hits@1 on the validation set. This architecture is then leveraged in different ways to combine different skills in a single agent. Fine-tuning on the BlendedSkillTalk Dataset The simplest setting is to directly fine-tune the base architecture on a dataset that exhibits the blended skills we are looking for. In this setting, we simply fine-tune the poly-encoder pre-trained on pushshift.io Reddit on the BlendedSkillTalk dataset, following the procedure in Humeau et al. (2019). This setting is referred to as “BST” thereafter (for BlendedSkillTalk). Such blended multi-skill training is only possible if a resource like BlendedSkillTalk is available, which we only just collected. Thus, interesting questions unanswered by such training include: (i) can we learn a strongly performing multi-skilled model with only individual tasks and no access to blended data? (ii) would a model with both individual skill training and blended skill training be superior? Multi-task Single-Skills A straight-forward approach given access to multiple single-skill tasks is to multi-task on all of them during the finetuning step. Using the multi-task training framework in ParlAI, we again start from the poly2026 encoder pre-trained on pushshift.io Reddit, and fine-tune it multi-tasking on ConvAI2, WoW, and ED. The architecture is thus the same as for the single-task models, and has the same number of parameters. We select the model with the highest macro-average hits@1 across all training tasks. Mitigating Single-Skill bias The straightforward way of multi-tasking over single skills is to sample training data from each task during updates. However, if individual skill contexts are too different from each other a multi-task model will trivially separate the learning, rather than blending skills together. Then, if the bias is different at evaluation time, it will select the skill to use poorly. In our case, ConvAI2 dialogues include a persona context, while WoW includes a topic. This difference runs the risk of biasing the multi-task model into associating the mere presence of a persona context to chat about personal background, and that of a discussion topic to discussions where more knowledge is displayed, which could lead to over-emphasizing responses in the ConvAI2 style when tested on BlendedSkillTalk which contains personas. We thus also experiment with a multi-task setting where the single skills are modified to always include a persona and a topic, as this is then balanced, and corresponds to the final evaluation using BlendedSkillTalk. For every dialogue in each of the single-skill tasks, we thus prepend a persona and a topic to the first utterance if they are not already present. The personas and topics are selected from the training sets of ConvAI2 and WoW respectively, where WoW topics already have an alignment to ConvAI2. For WoW, a persona is selected via this mapping. For ConvAI2, a topic is found with the inverse mapping. For ED, the maximum word overlap between the first utterance of the conversation and any training set persona is used to select the appropriate persona, and then a topic is found as before. 
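A minimal sketch of this augmentation step is shown below. The persona list, the two alignment lookups, and the "your persona:" prefix format are illustrative stand-ins for the ConvAI2/WoW resources and alignment described above, not the exact code used for the paper.

```python
def debias_context(first_utterance, persona=None, topic=None,
                   candidate_personas=None, topic_for_persona=None,
                   persona_for_topic=None):
    """Give every single-skill dialogue both a persona and a topic so that their
    mere presence no longer signals which source task it came from.

    The alignment lookups and the persona list are hypothetical stand-ins for
    the ConvAI2/WoW resources described in the text.
    """
    def overlap(a, b):
        # Simple word-overlap heuristic, used to pick a persona for ED dialogues.
        return len(set(a.lower().split()) & set(b.lower().split()))

    if persona is None and topic is not None:
        # WoW dialogues come with a topic; select the aligned persona.
        persona = persona_for_topic[topic]
    elif persona is None:
        # ED dialogues have neither; pick the training-set persona with maximum
        # word overlap with the first utterance of the conversation.
        persona = max(candidate_personas, key=lambda p: overlap(p, first_utterance))
    if topic is None:
        # ConvAI2 (and ED) dialogues get a topic via the inverse alignment.
        topic = topic_for_persona[persona]
    # Prepend the persona and topic to the first utterance of the dialogue.
    return f"your persona: {persona}\n{topic}\n{first_utterance}"
```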
Multi-task Single-Skills + BlendedSkillTalk After training in a multi-task fashion on single skills, we can afterwards try to continue training with the BlendedSkillTalk resource, in an effort to improve the model’s ability to deal with blended data. We take the best model previously trained, and tune it in this fashion. Multi-task Two-Stage Many single-skill models have been trained and released by researchers. Harnessing those trained models could potentially allow a conversational agent to jointly exhibit all skills, with minimal additional training. Instead, one trains a top-level ‘dialogue manager’ which is a classifier with the dialogue context as input, that predicts which skill to use on each turn, and then outputs the utterance produced by the corresponding trained model. Specifically, we train a three-class classifier on top of BERT-base (Devlin et al., 2019) that assigns an utterance to the dataset it came from. We remove duplicate utterances present in more than one of the datasets prior to training and upsample with replacement to create equal representation in the classifier’s training set. We also remove context from the utterances including topics from Wizard of Wikipedia and personas from ConvAI2 before training this classifier and when performing evaluation to prevent the classifier from relying on these (cf. the bias mitigation mentioned above). 4 Experiments In Section 4.1, we introduce the automated metrics and human evaluations that we use to measure and compare model performance. Section 4.2 discusses how adding personas and topic strings during multi-task training de-biases the selection of retrieval candidates from across our three skillbased tasks. Sections 4.3 and 4.4 detail the performance of our models using automated metrics on single-skill and BlendedSkillTalk benchmarks, respectively, and Section 4.5 compares the performance of the models on human evaluation: in all three cases, models trained on all three skills generally outperform those trained on individual skills. 4.1 Metrics used We use both automated metrics and human evaluation. For automated metrics, we report hits@1 on the test set (or validation set in the case of ConvAI2 as the test set is not publicly available), out of 20 candidates for ConvAI2, and 100 candidates for ED and WoW, following the original datasets. For human evaluation, we ask workers to chat with various models and then rate the conversation along several axes: • Knowledge: How knowledgeable was your chat partner (from 1: not at all, to 5: very)? • Empathy: Did the responses of your chat 2027 MT Single-Skills MT S.-S. + BST Utt. Selected orig. debiased orig. debiased ConvAI2 64.4% 38.9% 61.1% 48.1% WoW 11.3% 29.4% 10.0% 21.3% ED 24.2% 31.6% 28.8% 30.5% Table 4: Mitigating skill selection bias. Adding personas and topics during multi-task training (debias) results in the multi-task retrieval models selecting utterances more evenly when tested on BlendedSkillTalk compared to training on the original datasets (orig). partner show understanding of your feelings (from 1: not at all, to 5: very much)? • Personal: How much did your chat partner talk about themselves (from 1: not at all, to 5: a lot)? • Overall: Overall, how much would you like to have a long conversation with this conversation partner (from 1: not at all, to 5: a lot)? Conversations and ratings are collected at least 100 times per model, from 234 crowdsource workers who produce a maximum of 10 of these conversations overall (across all model types). 
Several methods are used to filter out low quality workers that are similar to the methods used in collection of the BlendedSkillTalk dataset collection. All work by a given worker is excluded if they give the same ratings across all conversations, give utterances deemed unsafe by a safety classifier (Dinan et al., 2019a), utterances shorter than 3 words, use all-caps too frequently, or repeat themselves too much. Messages cannot be over 30 words or copy persona strings exactly. 4.2 Mitigating multi-task skill selection bias We first examine the issue of skill selection bias in multi-task models. As we are employing multitask retrieval models that retrieve from the set of candidates across all skills, we can collect statistics on those selection choices (i.e., which datasets the chosen utterances originated from). Table 4 reports the percentage of utterances derived from the three skills for our multi-task models (MT SingleSkills and MT Single-Skills + BST) when evaluating on the BST test set. When training on the original skill datasets, we observe heavy overuse of the ConvAI2 utterances and underuse of WoW, likely because BST contains personas as input. Our bias mitigation approach described in Section 3.2 causes a substantial shift for both models, making the use of the skills more equal. These results are then in line with the actual expected ratios in BST, as shown in Section 3.1 (Skill Annotations). In the following experiments, we thus use the debiased versions. 4.3 Results on Single-Skill Benchmarks Automated metrics results on the original benchmarks used to gauge competency at a single skill (ConvAI2, WoW, ED) reported in the literature are shown in Table 5 (first row). Our poly-encoder models (rows 2–4) trained on single tasks match or exceed the metrics published with the corresponding benchmarks, except for ED, which is close. The single-skill models each perform the best on their respective original benchmark and not as well on other benchmarks, compared to the blended models. However, the performance of all blended models is more balanced, in the sense that none of the single-skill models does as well averaged over the three categories (except for the ED model doing a tiny bit better than the random-skill model). The model finetuned on BST shows balanced performance but fails to match the performance of the single-skill models on their original benchmarks. The performance of the Multi-Task Two-Stage model gains many points over that of simple random assignment of single-skill models (Random-Skill), and this Random-Skill model itself performs about as well as the BST-fine-tuned model on the ED and WoW benchmarks. The Multi-Task Single-Skills model performs best among the blended models, and nearly matches the performance of all singleskill models on all benchmarks (even surpassing it for the WoW benchmark). The fact that the Multi-Task Single-Skills model does not do exactly as well as the single-skill models when evaluated using only candidates from individual benchmarks matches the observations of other work (Raffel et al., 2019). However, when evaluated with a set of mixed candidates from all single-skill tasks (where the set of candidates to choose from is tripled by included an equal number of candidates from the other two datasets), the multi-task model performs better than the individual models, suggesting that multi-task training results in increased resilience to having to deal with more varied distractor candidates. 
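For reference, the crowdworker quality filters described at the start of this section can be summarized as a short sketch; the safety classifier is treated as a black box, and the all-caps and repetition cut-offs are illustrative assumptions rather than the values used for data collection:

def caps_ratio(text):
    letters = [c for c in text if c.isalpha()]
    return sum(c.isupper() for c in letters) / max(len(letters), 1)

def message_ok(text, persona_strings):
    # Hard per-message constraints: at most 30 words, no exact persona copy.
    return len(text.split()) <= 30 and text not in persona_strings

def keep_worker(conversations, ratings, is_unsafe,
                max_caps_ratio=0.5, min_unique_frac=0.5):
    """Return False if any worker-level quality rule is violated.

    conversations: list of this worker's utterance lists, one per conversation;
    ratings: the worker's per-conversation ratings; is_unsafe: a callable
    wrapping the safety classifier of Dinan et al. (2019a). The two threshold
    defaults are illustrative.
    """
    if len(ratings) > 1 and len(set(ratings)) == 1:
        return False  # identical ratings across all conversations
    for utterances in conversations:
        for utt in utterances:
            if is_unsafe(utt) or len(utt.split()) < 3 or caps_ratio(utt) > max_caps_ratio:
                return False
        if len(set(utterances)) < min_unique_frac * len(utterances):
            return False  # the worker repeats themselves too much
    return True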
We also include metrics for “added-context”, when topics and personas are added (see Section 4.2), as a san2028 Single-skill benchmarks Model ConvAI2 WoW ED Avg. SOTA Reported 87.3 87.4 66.0 80.2 ConvAI2 89.4 78.4 42.6 70.1 WoW 57.3 91.8 47.7 65.6 ED 63.3 81.0 65.1 69.8 BST model 78.5 84.1 52.0 71.5 Random-Skill 71.0 83.9 52.0 69.0 MT Two-Stage 84.7 90.1 63.4 79.4 MT Single-Skills 88.8 92.8 63.2 81.6 Added-context benchmarks MT Single-Skills 88.9 92.8 63.2 81.6 Mixed-candidates evaluation Single-task 82.1 88.2 60.2 76.8 MT Two-Stage 77.2 86.6 59.0 74.3 MT Single-Skills 85.2 92.1 61.1 79.5 Table 5: Results on single-skill benchmarks. Top: reported values published in the papers accompanying the benchmarks, and the Poly-encoder paper. ConvAI2, WoW, ED: models trained on the corresponding benchmark. These models perform very well on the benchmark they were trained on, but not as well on other benchmarks. BST: The model fine-tuned on BST shows more balanced performance (i.e., none of the single-skill benchmarks does better at all three skills), but it is noticeably lower than each specialized model. Random-Skill: the performance of choosing a random single-skill per response is comparable to the BST model, but slightly worse on ConvAI2. MT Two-Stage: guiding the generation by an actual task classifier as opposed to random selection increases performance on all skills. MT Single-Skills: this model performs best among the blended skills architectures, and nearly matches the single-skill model performance (and surpasses it in the WoW case). Added-context benchmarks: when the benchmark contexts are augmented with a persona and topic as described in section 3.2, the evaluation results barely change. Mixed-candidates evaluation: when the set of benchmark candidates is tripled by adding candidates from the other two benchmarks in equal proportion, the performance of the best respective single-task models suffers, while the MT Single-Skills model proves more resilient. Note that Single-task averages in italics do not correspond to a single model, but an average over 3 models. ity check, but they indeed barely change the numbers on single-skill benchmarks. 4.4 Results on BlendedSkillTalk benchmark We show two types of results on the BlendedSkillTalk benchmark (BST). Single-skill models are tested directly on BST without any additional training in a zero-shot setting, or fine-tuned on the Model BST, zero-shot +BST, FT ConvAI2 76.8 81.7 WoW 67.5 79.4 ED 69.0 80.4 BST 79.2 Random-Skill 71.2 MT Two-Stage 71.9 MT Single-Skills 80.1 83.8 Table 6: Test results on BlendedSkillTalk. BST, zeroshot: the models are tested directly on the test set of BST without having been fine-tuned on the BST train set. +BST, FT: models are fine-tuned on the BST train set, then tested on the BST test set. Multi-Task SingleSkills + BlendedSkillTalk performs best. The MultiTask Two-Stage model outperforms two of the singleskill models, but the latter work well when combined with BlendedSkillTalk fine-tuning. We hypothesize that ConvAI2 alone performs well because it has been trained to use persona contexts, that are used throughout the BST dialogues. BST training set then tested on the BST test-set. Results for both settings are shown in Table 6. The Multi-Task Single-Skills model outperforms all single-skill model baselines, whether used in a zero-shot or fine-tuned fashion, despite being the same size. The MT Two-Stage and Random-Skill models outperform two of the three single-skill models. 
We hypothesize that the ConvAI2 model is doing better because it has already learned to use personas. All single-skill models show improved performance once fine-tuned on the BST train set. However, performance in the zero-shot setting is already good, which is promising in terms of generalization to unseen data. 4.5 Human Evaluation on Specific Skill Axes Human evaluation results are shown in Table 7. Single-skill models tend to generally be rated better than the other single-skill models on the skill they were optimized for, although all single-skill models are similarly rated on the knowledge axis. Models that have been trained on multiple skills, either through multi-tasking (MT Two-Stage or MT Single-Skills) or through fine-tuning on BST, are performing well on every dimension, with the MT Two-Stage model and the MT Single-Skills fine-tuned on BST being the overall best. These two models have different advantages: the MT Single-Skills model fine-tuned on BST is more compact, being the same size as each individual single-skill model, but requires joint multi-task training, then fine-tuning. The MT Two-Stage 2029 Model Knowledge Empathy Personal Overall quality ConvAI2 3.2 3.1 3.4 3.0 WoW 3.3 2.9 2.7 2.6 ED 3.4 3.3 3.0 3.0 BST 3.5 3.6 3.1 3.3 Random-Skill 3.2 2.9 3.2 2.7 MT Two-Stage 3.7 3.6 3.3 3.5 MT Single-Skills 3.7 3.6 3.0 3.4 MT Single-Skills +BST fine-tuning 3.7 3.8 3.2 3.6 Table 7: Human evaluation results on individual axes of knowledge, empathy, and being personal, as well as overall quality. All results here have a 95% confidence interval of ± 0.2 or 0.3, omitted to avoid cluttering the table. Results that are within the confidence interval of the best model performance are bolded. ConvAI2, WoW, ED: models pre-trained on pushshift.io Reddit and fine-tuned on the respective datasets. For Empathy and Personal topics, the individual models tend to do better when trained on a dataset tailored for that, however they all perform similarly on the Knowledge dimension. BST: model pre-trained on pushshift.io Reddit and fine-tuned on BST. This model is showing better overall performance compared to single-skill datasets (i.e., none of the three single-skill dataset do better than BST in every dimension). MT Single-Skills with fine-tuning on BST and MT Two-Stage are performing very well on all dimensions. MT Single-Skills with fine-tuning on BST has fewer than a third of the parameters of the MT Two-Stage model, yet manages to perform as well, if not slightly better. model only requires training a classifier to play the role of a dialogue manager by assigning utterances to one of the three single-skill benchmarks, but is overall a much bigger model, given that it uses large models for each single skill and the classifier itself. The ”Random-Skill” model is bypassing the need for a classifier by simply using all three single-skill model randomly, and is rated well on the personal axis, but not as well on knowledge or empathy, which might be because talking about personal topics can always work, while knowledge and empathy have to be suited to the context. 5 Discussion and Conclusion This paper focuses on the goal of creating an open-domain conversational agent that can display many skills, and blend them in a seamless and engaging way. 
We have shown several ways to leverage previous work focusing on individual conversational skills, either by combining trained singleskill models in a two-stage way, by re-using the datasets for simultaneous multi-task training, and by fine-tuning on the overall blended task. We compared the performance of these schemes on BlendedSkillTalk, a new English-language dataset blending three conversation skills in balanced proportions (demonstrating knowledge, empathy, or ability to talk about oneself). We showed that multiple multi-task approaches can be effective on this task, however careful construction of the training scheme is important to mitigate biases when blending and selecting skills, while fine-tuning on the overall blended task improves models further. One natural extension would be to generalize these findings to other skills than the three addressed here, such as humor/wit, eloquence, image commenting, etc. This would in principle be straightforward to do as long as these additional skills have a corresponding “single-skill” dataset to train on and are sufficiently distinguishable from each other. References Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Emily Dinan, Samuel Humeau, Bharath Chintagunta, and Jason Weston. 2019a. Build it break it fix it for dialogue safety: Robustness from adversarial human attack. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4536–4545, Hong Kong, China. Association for Computational Linguistics. Emily Dinan, Varvara Logacheva, Valentin Malykh, Alexander Miller, Kurt Shuster, Jack Urbanek, Douwe Kiela, Arthur Szlam, Iulian Serban, Ryan Lowe, Shrimai Prabhumoye, Alan W. Black, Alexander Rudnicky, Jason Williams, Joelle Pineau, Mikhail Burtsev, and Jason Weston. 2020. The 2030 second conversational intelligence challenge (ConvAI2). In The NeurIPS ’18 Competition, pages 187– 208, Cham. Springer International Publishing. Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2019b. Wizard of wikipedia: Knowledge-powered conversational agents. In Proceedings of the International Conference on Learning Representations. Mor Geva, Yoav Goldberg, and Jonathan Berant. 2019. Are we modeling the task or the annotator? an investigation of annotator bias in natural language understanding datasets. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 1161–1166, Hong Kong, China. Association for Computational Linguistics. Samuel Humeau, Kurt Shuster, Marie-Anne Lachaux, and Jason Weston. 2019. Real-time inference in multi-sentence tasks with deep pretrained transformers. arXiv preprint arXiv:1905.01969. Nitish Shirish Keskar, Bryan McCann, Lav R Varshney, Caiming Xiong, and Richard Socher. 2019. Ctrl: A conditional transformer language model for controllable generation. arXiv preprint arXiv:1909.05858. Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, and Shuzi Niu. 2017. 
DailyDialog: A manually labelled multi-turn dialogue dataset. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 986–995, Taipei, Taiwan. Asian Federation of Natural Language Processing. Zhaojiang Lin, Andrea Madotto, Jamin Shin, Peng Xu, and Pascale Fung. 2019. MoEL: Mixture of empathetic listeners. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 121–132, Hong Kong, China. Association for Computational Linguistics. Pierre-Emmanuel Mazar´e, Samuel Humeau, Martin Raison, and Antoine Bordes. 2018. Training millions of personalized dialogue agents. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2775–2779, Brussels, Belgium. Association for Computational Linguistics. Alexander Miller, Will Feng, Dhruv Batra, Antoine Bordes, Adam Fisch, Jiasen Lu, Devi Parikh, and Jason Weston. 2017. ParlAI: A dialog research software platform. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 79–84, Copenhagen, Denmark. Association for Computational Linguistics. Nikita Moghe, Siddhartha Arora, Suman Banerjee, and Mitesh M. Khapra. 2018. Towards exploiting background knowledge for building conversation systems. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2322–2332, Brussels, Belgium. Association for Computational Linguistics. Lianhui Qin, Michel Galley, Chris Brockett, Xiaodong Liu, Xiang Gao, Bill Dolan, Yejin Choi, and Jianfeng Gao. 2019. Conversing by reading: Contentful neural conversation with on-demand machine reading. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5427–5436, Florence, Italy. Association for Computational Linguistics. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683. Ashwin Ram, Rohit Prasad, Chandra Khatri, Anu Venkatesh, Raefer Gabriel, Qing Liu, Jeff Nunn, Behnam Hedayatnia, Ming Cheng, Ashish Nagar, et al. 2018. Conversational ai: The science behind the alexa prize. arXiv preprint arXiv:1801.03604. Hannah Rashkin, Eric Michael Smith, Margaret Li, and Y-Lan Boureau. 2019. Towards empathetic opendomain conversation models: A new benchmark and dataset. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5370–5381, Florence, Italy. Association for Computational Linguistics. Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Personalizing dialogue agents: I have a dog, do you have pets too? In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2204– 2213, Melbourne, Australia. Association for Computational Linguistics. Li Zhou, Jianfeng Gao, Di Li, and Heung-Yeung Shum. 2020. The design and implementation of xiaoice, an empathetic social chatbot. Computational Linguistics, 46(1):53–93.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2031–2043 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2031 Grounded Conversation Generation as Guided Traverses in Commonsense Knowledge Graphs Houyu Zhang1 ∗† Zhenghao Liu2∗ Chenyan Xiong3 Zhiyuan Liu2 1Department of Computer Science, Brown University, Providence, USA 2Department of Computer Science and Technology, Tsinghua University, Beijing, China Institute for Artificial Intelligence, Tsinghua University, Beijing, China State Key Lab on Intelligent Technology and Systems, Tsinghua University, Beijing, China 3Microsoft Research AI, Redmond, USA Abstract Human conversations naturally evolve around related concepts and scatter to multi-hop concepts. This paper presents a new conversation generation model, ConceptFlow, which leverages commonsense knowledge graphs to explicitly model conversation flows. By grounding conversations to the concept space, ConceptFlow represents the potential conversation flow as traverses in the concept space along commonsense relations. The traverse is guided by graph attentions in the concept graph, moving towards more meaningful directions in the concept space, in order to generate more semantic and informative responses. Experiments on Reddit conversations demonstrate ConceptFlow’s effectiveness over previous knowledge-aware conversation models and GPT-2 based models while using 70% fewer parameters, confirming the advantage of explicit modeling conversation structures. All source codes of this work are available at https://github.com/ thunlp/ConceptFlow. 1 Introduction The rapid advancements of language modeling and natural language generation (NLG) techniques have enabled fully data-driven conversation models, which directly generate natural language responses for conversations (Shang et al., 2015; Vinyals and Le, 2015; Li et al., 2016b). However, it is a common problem that the generation models may degenerate dull and repetitive contents (Holtzman et al., 2019; Welleck et al., 2019), which, in conversation assistants, leads to off-topic and useless responses. (Tang et al., 2019; Zhang et al., 2018; Gao et al., 2019). Conversations often develop around Knowledge. A promising way to address the degeneration prob∗Indicates equal contribution. †Part of work is conducted at Tsinghua University. Original Graph chat based future steam paper class talk text hope plan voice book write faith dream bag card word idea water dream hope faith future talk text write card word chat voice POST:chat based on knowledge is the future Response:yeah it ’s not a dream to have a talk with robot Zero-hop Concept One-hop Concept Two-hop Concept Figure 1: An Example of Concept Shift in a Conversation. Darker green indicates higher relevance and wider arrow indicates stronger concept shift (captured by ConceptFlow). lem is to ground conversations with external knowledge (Xing et al., 2017), such as open-domain knowledge graph (Ghazvininejad et al., 2018), commonsense knowledge base (Zhou et al., 2018a), or background documents (Zhou et al., 2018b). Recent research leverages such external knowledge by using them to ground conversations, integrating them as additional representations, and then generating responses conditioned on both the texts and the grounded semantics (Ghazvininejad et al., 2018; Zhou et al., 2018a,b). 
Integrating external knowledge as extra semantic representations and additional inputs to the conversation model effectively improves the quality of generated responses (Ghazvininejad et al., 2018; Logan et al., 2019; Zhou et al., 2018a). Never2032 theless, some research on discourse development suggests that human conversations are not “still”: People chat around a number of related concepts, and shift their focus from one concept to others. Grosz and Sidner (1986) models such concept shift by breaking discourse into several segments, and demonstrating different concepts, such as objects and properties, are needed to interpret different discourse segments. Attentional state is then introduced to represent the concept shift corresponding to each discourse segment. Fang et al. (2018) shows that people may switch dialog topics entirely in a conversation. Restricting the utilization of knowledge only to those directly appear in the conversation, effective as they are, does not reach the full potential of knowledge in modeling human conversations. To model the concept shift in human conversations, this work presents ConceptFlow (Conversation generation with Concept Flow), which leverages commonsense knowledge graphs to model the conversation flow in the explicit concept space. For example, as shown in Figure 1, the concepts of a conversation from Reddit evolves from “chat” and “future”, to adjacent concept “talk”, and also hops to distant concept “dream” along the commonsense relations—a typical involvement in natural conversations. To better capture this conversation structure, ConceptFlow explicitly models the conversations as traverses in commonsense knowledge graphs: it starts from the grounded concepts, e.g., “chat” and “future”, and generates more meaningful conversations by hopping along the commonsense relations to related concepts, e.g., “talk” and “dream”. The traverses in the concept graph are guided by graph attention mechanisms, which derives from graph neural networks to attend on more appropriate concepts. ConceptFlow learns to model the conversation development along more meaningful relations in the commonsense knowledge graph. As a result, the model is able to “grow” the grounded concepts by hopping from the conversation utterances, along the commonsense relations, to distant but meaningful concepts; this guides the model to generate more informative and on-topic responses. Modeling commonsense knowledge as concept flows, is both a good practice on improving response diversity by scattering current conversation focuses to other concepts (Chen et al., 2017), and an implementation solution of the attentional state mentioned above (Grosz and Sidner, 1986). Our experiments on a Reddit conversation dataset with a commonsense knowledge graph, ConceptNet (Speer et al., 2017), demonstrate the effectiveness of ConceptFlow. In both automatic and human evaluations, ConceptFlow significantly outperforms various seq2seq based generation models (Sutskever et al., 2014), as well as previous methods that also leverage commonsense knowledge graphs, but as static memories (Zhou et al., 2018a; Ghazvininejad et al., 2018; Zhu et al., 2017). Notably, ConceptFlow also outperforms two finetuned GPT-2 systems (Radford et al., 2019), while using 70% fewer parameters. Explicitly modeling conversation structure provides better parameter efficiency. We also provide extensive analyses and case studies to investigate the advantage of modeling conversation flow in the concept space. 
Our analyses show that many Reddit conversations are naturally aligned with paths in the commonsense knowledge graph; incorporating distant concepts significantly improves the quality of generated responses by adding more on-topic semantic information. Our analyses further confirm the effectiveness of our graph attention mechanism in selecting useful concepts, and ConceptFlow's ability to leverage them to generate more relevant, informative, and less repetitive responses.

2 Related Work

Sequence-to-sequence models, e.g., Sutskever et al. (2014), have been widely used for natural language generation (NLG) and for building conversation systems (Shang et al., 2015; Vinyals and Le, 2015; Li et al., 2016b; Wu et al., 2019). Recently, pretrained language models, such as ELMO (Devlin et al., 2019), UniLM (Dong et al., 2019) and GPT-2 (Radford et al., 2018), further boost NLG performance with large-scale pretraining. Nevertheless, degeneration into irrelevant, off-topic, and non-useful responses remains one of the main challenges in conversational generation (Rosset et al., 2020; Tang et al., 2019; Zhang et al., 2018; Gao et al., 2019).

Recent work focuses on improving conversation generation with external knowledge, for example by incorporating additional texts (Ghazvininejad et al., 2018; Vougiouklis et al., 2016; Xu et al., 2017; Long et al., 2017) or knowledge graphs (Long et al., 2017; Ghazvininejad et al., 2018), and has shown that external knowledge effectively improves conversation response generation. Structured knowledge graphs include rich semantics represented via entities and relations (Hayashi et al., 2019). Many previous studies focus on task-oriented dialog systems based on domain-specific knowledge bases (Xu et al., 2017; Zhu et al., 2017; Gu et al., 2016). To generate responses with a large-scale knowledge base, Zhou et al. (2018a) and Liu et al. (2018) utilize graph attention and knowledge diffusion to select knowledge semantics for utterance understanding and response generation. Moon et al. (2019) focuses on the task of entity selection and takes advantage of positive entities that appear in the golden response. Different from previous research, ConceptFlow models the conversation flow explicitly with the commonsense knowledge graph and presents a novel attention mechanism over all concepts to guide the conversation flow in the latent concept space.

3 Methodology

This section presents our Conversation generation model with latent Concept Flow (ConceptFlow). Our model grounds the conversation in the concept graph and traverses to distant concepts along commonsense relations to generate responses.

3.1 Preliminary

Given a user utterance X = \{x_1, ..., x_m\} with m words, conversation generation models often use an encoder-decoder architecture to generate a response Y = \{y_1, ..., y_n\}. The encoder represents the user utterance X as a representation set H = \{\vec{h}_1, ..., \vec{h}_m\}. This is often done by Gated Recurrent Units (GRU):

\vec{h}_i = \mathrm{GRU}(\vec{h}_{i-1}, \vec{x}_i), (1)

where \vec{x}_i is the embedding of word x_i. The decoder generates the t-th word of the response according to the previously generated words y_{<t} = \{y_1, ..., y_{t-1}\} and the user utterance X:

P(Y|X) = \prod_{t=1}^{n} P(y_t \mid y_{<t}, X). (2)

It then minimizes the cross-entropy loss \mathcal{L} and optimizes all parameters end-to-end:

\mathcal{L} = \sum_{t=1}^{n} \mathrm{CrossEntropy}(y_t^{*}, y_t), (3)

where y_t^{*} is the token from the golden response.
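A minimal PyTorch sketch of this standard encoder-decoder preliminary (start/end tokens, padding, and beam search are omitted; this is the generic baseline described by Eqs. 1-3, not the released ConceptFlow code):

import torch.nn as nn
import torch.nn.functional as F

class Seq2Seq(nn.Module):
    """Plain GRU encoder-decoder corresponding to Eqs. (1)-(3)."""

    def __init__(self, vocab_size, emb_dim=300, hid_dim=512):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)  # Eq. (1)
        self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, post_ids, response_ids):
        # Encode the post X into H = {h_1, ..., h_m}.
        H, h_last = self.encoder(self.emb(post_ids))
        # Teacher-forced decoding of y_t conditioned on y_<t and X (Eq. 2);
        # response_ids is assumed to start with a begin-of-sequence token.
        states, _ = self.decoder(self.emb(response_ids[:, :-1]), h_last)
        logits = self.out(states)
        # Cross-entropy against the gold next tokens (Eq. 3).
        return F.cross_entropy(
            logits.reshape(-1, logits.size(-1)),
            response_ids[:, 1:].reshape(-1))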
Figure 2: The Architecture of ConceptFlow.

The architecture of ConceptFlow is shown in Figure 2. ConceptFlow first constructs a concept graph G with a central graph G_central and an outer graph G_outer according to the distance (hops) from the grounded concepts (Sec. 3.2). Then ConceptFlow encodes both the central and outer concept flows in G_central and G_outer, using graph neural networks and concept embeddings (Sec. 3.3). The decoder, presented in Section 3.4, leverages the encodings of the concept flows and the utterance to generate words or concepts for responses.

3.2 Concept Graph Construction

ConceptFlow constructs a concept graph G as the knowledge for each conversation. It starts from the grounded concepts (zero-hop concepts V^0), which appear in the conversation utterance and are annotated by entity linking systems. Then, ConceptFlow grows the zero-hop concepts V^0 with one-hop concepts V^1 and two-hop concepts V^2. Concepts from V^0 and V^1, as well as all relations between them, form the central concept graph G_central, which is closely related to the current conversation topic. Concepts in V^1 and V^2 and their connections form the outer graph G_outer.

3.3 Encoding Latent Concept Flow

The constructed concept graph provides explicit semantics on how concepts are related through commonsense knowledge. ConceptFlow utilizes it to model the conversation and guide the response generation: starting from the user utterance, it traverses through the central graph G_central to the outer graph G_outer. This is modeled by encoding the central and outer concept flows according to the user utterance.

Central Flow Encoding. The central concept graph G_central is encoded by a graph neural network that propagates information from the user utterance representation H to the central concept graph. Specifically, it encodes concept e_i \in G_central into the representation \vec{g}_{e_i}:

\vec{g}_{e_i} = \mathrm{GNN}(\vec{e}_i, G_{central}, H), (4)

where \vec{e}_i is the concept embedding of e_i. There is no restriction on which GNN model to use. We choose Sun et al. (2018)'s GNN (GraftNet), which shows strong effectiveness in encoding knowledge graphs. More details of GraftNet can be found in Appendix A.3.

Outer Flow Encoding. The outer flow f_{e_p}, hopping from e_p \in V^1 to its connected two-hop concepts e_k, is encoded into \vec{f}_{e_p} by an attention mechanism:

\vec{f}_{e_p} = \sum_{e_k} \theta_{e_k} \cdot [\vec{e}_p \circ \vec{e}_k], (5)

where \vec{e}_p and \vec{e}_k are the embeddings of e_p and e_k, and \circ denotes concatenation. The attention \theta_{e_k} aggregates the concept triple (e_p, r, e_k) to obtain \vec{f}_{e_p}:

\theta_{e_k} = \mathrm{softmax}((w_r \cdot \vec{r})^{\top} \cdot \tanh(w_h \cdot \vec{e}_p + w_t \cdot \vec{e}_k)), (6)

where \vec{r} is the relation embedding between the concept e_p and its neighbor concept e_k, and w_r, w_h and w_t are trainable parameters. This provides an efficient attention that focuses specifically on the relations leading to multi-hop concepts.
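For concreteness, a minimal PyTorch sketch of this outer flow encoding (Eqs. 5-6); treating w_r, w_h and w_t as linear maps and processing one one-hop concept at a time are simplifying assumptions, not the released implementation:

import torch
import torch.nn as nn

class OuterFlowEncoder(nn.Module):
    """Encode the outer flow f_{e_p} from a one-hop concept e_p to its
    two-hop neighbors e_k with relation-aware attention (Eqs. 5-6)."""

    def __init__(self, dim):
        super().__init__()
        self.w_r = nn.Linear(dim, dim, bias=False)
        self.w_h = nn.Linear(dim, dim, bias=False)
        self.w_t = nn.Linear(dim, dim, bias=False)

    def forward(self, e_p, e_k, r):
        # e_p: [dim]; e_k, r: [n_neighbors, dim]
        scores = (self.w_r(r) *
                  torch.tanh(self.w_h(e_p) + self.w_t(e_k))).sum(-1)   # Eq. (6)
        theta = torch.softmax(scores, dim=-1)                          # [n_neighbors]
        pairs = torch.cat([e_p.expand_as(e_k), e_k], dim=-1)           # [e_p ∘ e_k]
        return (theta.unsqueeze(-1) * pairs).sum(0)                    # Eq. (5), [2*dim]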
3.4 Generating Text with Concept Flow

To consider both the user utterance and the related knowledge, the decoder incorporates the utterance text and the latent concept flows through two components: 1) a context representation that combines their encodings (Sec. 3.4.1); and 2) the conditioned generation of words and concepts from that context representation (Sec. 3.4.2).

3.4.1 Context Representation

To generate the t-th response token, we first calculate the output context representation \vec{s}_t for the t-th decoding step from the encodings of the utterance and the latent concept flow. Specifically, \vec{s}_t is obtained by updating the (t-1)-th step output representation \vec{s}_{t-1} with the (t-1)-th step context representation \vec{c}_{t-1}:

\vec{s}_t = \mathrm{GRU}(\vec{s}_{t-1}, [\vec{c}_{t-1} \circ \vec{y}_{t-1}]), (7)

where \vec{y}_{t-1} is the embedding of the (t-1)-th generated token y_{t-1}, and the context representation \vec{c}_{t-1} concatenates the text-based representation \vec{c}^{text}_{t-1} and the concept-based representation \vec{c}^{cpt}_{t-1}:

\vec{c}_{t-1} = \mathrm{FFN}([\vec{c}^{text}_{t-1} \circ \vec{c}^{cpt}_{t-1}]). (8)

The text-based representation \vec{c}^{text}_{t-1} reads the user utterance encoding H with a standard attention mechanism (Bahdanau et al., 2015):

\vec{c}^{text}_{t-1} = \sum_{j=1}^{m} \alpha^{j}_{t-1} \cdot \vec{h}_j, (9)

with attention weights \alpha^{j}_{t-1} over the utterance tokens:

\alpha^{j}_{t-1} = \mathrm{softmax}(\vec{s}_{t-1} \cdot \vec{h}_j). (10)

The concept-based representation \vec{c}^{cpt}_{t-1} combines the central and outer flow encodings:

\vec{c}^{cpt}_{t-1} = \Big[\sum_{e_i \in G_{central}} \beta^{e_i}_{t-1} \cdot \vec{g}_{e_i}\Big] \circ \Big[\sum_{f_{e_p} \in G_{outer}} \gamma^{f}_{t-1} \cdot \vec{f}_{e_p}\Big]. (11)

The attention \beta^{e_i}_{t-1} weights the central concept representations:

\beta^{e_i}_{t-1} = \mathrm{softmax}(\vec{s}_{t-1} \cdot \vec{g}_{e_i}), (12)

and the attention \gamma^{f}_{t-1} weights the outer flow representations:

\gamma^{f}_{t-1} = \mathrm{softmax}(\vec{s}_{t-1} \cdot \vec{f}_{e_p}). (13)

3.4.2 Generating Tokens

The t-th step output representation \vec{s}_t (Eq. 7) combines information from the utterance text, the concepts at different hop steps, and the attentions over them. The decoder leverages \vec{s}_t to generate the t-th token and form more informative responses. It first uses a gate \sigma^{*} to control the generation by choosing among words (\sigma^{*}=0), central concepts (V^{0,1}, \sigma^{*}=1), and the outer concept set (V^{2}, \sigma^{*}=2):

\sigma^{*} = \mathrm{argmax}_{\sigma \in \{0,1,2\}}(\mathrm{FFN}_{\sigma}(\vec{s}_t)). (14)

The generation probabilities of a word w, a central concept e_i, and an outer concept e_k are calculated over the word vocabulary, the central concept set V^{0,1}, and the outer concept set V^{2}, respectively:

y_t \sim \begin{cases} \mathrm{softmax}(\vec{s}_t \cdot \vec{w}), & \sigma^{*}=0 \\ \mathrm{softmax}(\vec{s}_t \cdot \vec{g}_{e_i}), & \sigma^{*}=1 \\ \mathrm{softmax}(\vec{s}_t \cdot \vec{e}_k), & \sigma^{*}=2, \end{cases} (15)

where \vec{w} is the word embedding of word w, \vec{g}_{e_i} is the central concept representation of concept e_i, and \vec{e}_k is the embedding of the two-hop concept e_k. The training and prediction of ConceptFlow follow standard conditional language modeling, i.e., using Eq. 15 in place of Eq. 2 and training with the cross-entropy loss (Eq. 3). Only ground truth responses are used in training and no additional annotation is required.

4 Experiment Methodology

This section describes the dataset, evaluation metrics, baselines, and implementation details of our experiments.

Dataset. All experiments use the multi-hop extended conversation dataset based on a previous dataset which collects single-round dialogs from Reddit (Zhou et al., 2018a). Our dataset contains 3,384,185 training pairs and 10,000 test pairs. A preprocessed ConceptNet (Speer et al., 2017) is used as the knowledge graph, which contains 120,850 triples, 21,471 concepts and 44 relation types.

Evaluation Metrics. A wide range of evaluation metrics is used to evaluate the quality of generated responses: PPL (Serban et al., 2016), Bleu (Papineni et al., 2002), Nist (Doddington, 2002), ROUGE (Lin, 2004) and Meteor (Lavie and Agarwal, 2007) are used for relevance and repetitiveness; Dist-1, Dist-2 and Ent-4 are used for diversity, following previous work (Li et al., 2016a; Zhang et al., 2018). These metrics are computed with the implementation from Galley et al. (2018). Zhou et al. (2018a)'s concept PPL mainly applies to concept-grounded models and is reported in Appendix A.1.
The Precision, Recall, and F1 scores are used to evaluate the quality of learned latent concept flow in predicting the golden concepts which appear in ground truth responses. Baselines. The six baselines compared come from three groups: standard Seq2Seq, knowledgeenhanced ones, and fine-tuned GPT-2 systems. Seq2Seq (Sutskever et al., 2014) is the basic encoder-decoder for language generation. Knowledge-enhanced baselines include MemNet (Ghazvininejad et al., 2018), CopyNet (Zhu et al., 2017) and CCM (Zhou et al., 2018a). MemNet maintains a memory to store and read concepts. CopyNet copies concepts for the response generation. CCM (Zhou et al., 2018a) leverages a graph attention mechanism to model the central concepts. These models mainly focus on the grounded concepts. They do not explicitly model the conversation structures using multi-hop concepts. GPT-2 (Radford et al., 2019), the pre-trained model that achieves the state-of-the-art in lots of language generation tasks, is also compared in our experiments. We fine-tune the 124M GPT-2 in two ways: concatenate all conversations together and train it like a language model (GPT-2 lang); extend the GPT-2 model with encode-decoder architecture and supervise with response data (GPT-2 conv). Implement Details. The zero-hop concepts are initialized by matching the keywords in the post to concepts in ConceptNet, the same with CCM (Zhou et al., 2018a). Then zero-hop concepts are extended to their neighbors to form the central concept graph. The outer concepts contain a large amount of twohop concepts with lots of noises. To reduce the computational cost, we first train ConceptFlow (select) with 10% random training data, and use the learned graph attention to select top 100 two-hop concepts over the whole dataset. Then the standard train and test are conducted with the pruned graph. More details of this filtering step can be found in Appendix A.4. TransE (Bordes et al., 2013) embedding and Glove (Pennington et al., 2014) embedding are used to initialize the representation of concepts and words, respectively. Adam optimizer with the learning rate of 0.0001 is used to train the model. 5 Evaluation Five experiments are conducted to evaluate the generated responses from ConceptFlow and the effectiveness of the learned graph attention. 5.1 Response Quality This experiment evaluates the generation quality of ConceptFlow automatically and manually. Automatic Evaluation. The quality of generated responses is evaluated with different metrics from three aspects: relevance, diversity, and novelty. Table 1 and Table 2 show the results. In Table 1, all evaluation metrics calculate the relevance between the generated response and the 2036 Model Bleu-4 Nist-4 Rouge-1 Rouge-2 Rouge-L Meteor PPL Seq2Seq 0.0098 1.1069 0.1441 0.0189 0.1146 0.0611 48.79 MemNet 0.0112 1.1977 0.1523 0.0215 0.1213 0.0632 47.38 CopyNet 0.0106 1.0788 0.1472 0.0211 0.1153 0.0610 43.28 CCM 0.0084 0.9095 0.1538 0.0211 0.1245 0.0630 42.91 GPT-2 (lang) 0.0162 1.0844 0.1321 0.0117 0.1046 0.0637 29.08∗ GPT-2 (conv) 0.0124 1.1763 0.1514 0.0222 0.1212 0.0629 24.55∗ ConceptFlow 0.0246 1.8329 0.2280 0.0469 0.1888 0.0942 29.90 Table 1: Relevance Between Generated and Golden Responses. The PPL results∗of GPT-2 is not directly comparable because of its different tokenization. More results can be found in Appendix A.1. Diversity(↑) Novelty w.r.t. 
Input(↓) Model Dist-1 Dist-2 Ent-4 Bleu-4 Nist-4 Rouge-2 Rouge-L Meteor Seq2Seq 0.0123 0.0525 7.665 0.0129 1.3339 0.0262 0.1328 0.0702 MemNet 0.0211 0.0931 8.418 0.0408 2.0348 0.0621 0.1785 0.0914 CopyNet 0.0223 0.0988 8.422 0.0341 1.8088 0.0548 0.1653 0.0873 CCM 0.0146 0.0643 7.847 0.0218 1.3127 0.0424 0.1581 0.0813 GPT-2 (lang) 0.0325 0.2461 11.65 0.0292 1.7461 0.0359 0.1436 0.0877 GPT-2 (conv) 0.0266 0.1218 8.546 0.0789 2.5493 0.0938 0.2093 0.1080 ConceptFlow 0.0223 0.1228 10.27 0.0126 1.4749 0.0258 0.1386 0.0761 Table 2: Diversity (higher better) and Novelty (lower better) of Generated Response. Diversity is calculated within generated responses; Novelty compares generated responses to the input post. More results are in Appendix A.1. Model Parameter Average Score Best@1 Ratio App. Inf. App. Inf. CCM 35.6M 1.802 1.802 17.0% 15.6% GPT-2 (conv) 124.0M 2.100 1.992 26.2% 23.6% ConceptFlow 35.3M 2.690 2.192 30.4% 25.6% Golden Human 2.902 3.110 67.4% 81.8% Table 3: Human Evaluation on Appropriate (App.) and Informativeness (Inf.). The Average Score takes the average from human judgments. Best@1 Ratio indicates the fraction of judges consider the case as the best. The number of parameters are also presented. Model App. Inf. ConceptFlow-CCM 0.3724 0.2641 ConceptFlow-GPT2 0.2468 0.2824 Table 4: Fleiss’ Kappa of Human Agreement. Two testing scenarios Appropriate (App.) and Informativeness (Inf.) are used to evaluate the the quality of generated response. The Fleiss’ Kappa evaluates agreement from various annotators and focuses on the comparison of two models with three categories: win, tie and loss. golden response. ConceptFlow outperforms all baseline models by large margins. The responses generated by ConceptFlow are more on-topic and match better with the ground truth responses. In Table 2, Dist-1, Dist-2, and Ent-4 measure the word diversity of generated responses and the rest of metrics measure the novelty by comparing the generated response with the user utterance. ConceptFlow has a good balance in generating novel and diverse responses. GPT-2’s responses are more diverse, perhaps due to its sampling mechanism during decoding, but are less novel and on-topic compared to those from ConceptFlow. Human Evaluation. The human evaluation focuses on two aspects: appropriateness and informativeness. Both are important for conversation systems (Zhou et al., 2018a). Appropriateness evaluates if the response is on-topic for the given utterance; informativeness evaluates systems’ ability to provide new information instead of copying from the utterance (Zhou et al., 2018a). All responses of sampled 100 cases are selected from four methods with better performances: CCM, GPT-2 (conv), ConceptFlow, and Golden Response. The responses are scored from 1 to 4 by five judges (the higher the better). Table 3 presents Average Score and Best@1 ratio from human judges. The first is the mean of five judges; the latter calculates the fraction of judges that consider the corresponding response the best among four systems. ConceptFlow outperforms all other models in all scenarios, while only using 30% of parameters compared to GPT-2. This demonstrates the advantage of explicitly modeling conversation flow with structured semantics. The agreement of human evaluation is tested to demonstrate the authenticity of evaluation results. We first sample 100 cases randomly for our human evaluation. 
Then the responses from four better 2037 conversation systems, CCM, GPT-2 (conv), ConceptFlow and Golden Responses, are provided with a random order. A group of annotators are asked to score each response ranged from 1 to 4 according to the quality on two testing scenarios, appropriateness and informativeness. All annotators have no clues about the source of generated responses. The agreement of human evaluation for CCM, GPT-2 (conv) and ConceptFlow are presented in Table 4. For each case, the response from ConceptFlow is compared to the responses from two baseline models, CCM and GPT-2 (conv). The comparison result is divided into three categories: win, tie and loss. Then the human evaluation agreement is calculated with Fleiss’ Kappa (κ). The κ value ranges from 0.21 to 0.40 indicating fair agreement, which confirms the quality of human evaluation. Both automatic and human evaluations illustrate the effectiveness of ConceptFlow. The next experiment further studies the effectiveness of multi-hop concepts in ConceptFlow. 5.2 Effectiveness of Multi-hop Concepts This part explores the role of multi-hop concepts in ConceptFlow. As shown in Figure 3, three experiments are conducted to evaluate the performances of concept selection and the quality of generated responses with different sets of concepts. This experiment considers four variations of outer concept selections. Base ignores two-hop concepts and only considers the central concepts. Rand, Distract, and Full add two-hop concepts in three different ways: Rand selects concepts randomly, Distract selects all concepts that appear in the golden response with random negatives (distractors), and Full is our ConceptFlow (select) that selects concepts by learned graph attentions. As shown in Figure 3(a), Full covers more golden concepts than Base. This aligns with our motivation that natural conversations do flow from central concepts to multi-hop ones. Compared to Distract setting where all ground truth two-hop concepts are added, ConceptFlow (select) has slightly less coverage but significantly reduces the number of two-hop concepts. The second experiment studies the model’s ability to generate ground truth concepts, by comparing the concepts in generated responses with those in ground truth responses. As shown in Figure 3(b), though Full filtered out some golden twoDepth Amount Golden Coverage Ratio Number Zero-hop 5.8 9.81% 0.579 + One-hop 98.6 38.78% 2.292 + Two-hop 880.8 61.37% 3.627 + Three-hop 3769.1 81.58% 4.821 ConceptFlow 198.6 52.10% 3.075 Table 5: Statistics of Concept Graphs with different hops, including the total Amount of connected concepts, the Ratio and Number of covered golden concepts (those appear in ground truth responses). ConceptFlow indicates the filtered two-hop graph. hop concepts, it outperforms other variations by large margins. This shows ConceptFlow’s graph attention mechanisms effectively leverage the pruned concept graph and generate high-quality concepts when decoding. The high-quality latent concept flow leads to better modeling of conversations, as shown in Figure 3(c). Full outperforms Distract in their generated responses’ token level perplexity, even though Distract includes all ground truth two-hop concepts. This shows that “negatives” selected by ConceptFlow, while not directly appear in the target response, are also on-topic and include meaningful information, as they are selected by graph attentions instead of random. More studies of multi-hop concept selection strategies can be found in Appendix A.2. 
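For reference, the four two-hop selection variations compared above can be sketched as follows; the set size k = 100 mirrors the pruning in Section 4, while the number of random negatives for Distract and the attention aggregation for Full are illustrative choices rather than the exact experimental configuration:

import random

def choose_two_hop(all_two_hop, golden_two_hop, attention_score, mode, k=100, seed=0):
    """Build the two-hop concept set for one conversation under the four
    variations compared in Figure 3 (Base / Rand / Distract / Full)."""
    rng = random.Random(seed)
    if mode == "base":        # central concepts only, no two-hop concepts
        return set()
    if mode == "rand":        # randomly selected two-hop concepts
        return set(rng.sample(sorted(all_two_hop), min(k, len(all_two_hop))))
    if mode == "distract":    # all golden two-hop concepts plus random negatives
        negatives = sorted(all_two_hop - golden_two_hop)
        n_neg = min(len(negatives), max(0, k - len(golden_two_hop)))
        return set(golden_two_hop) | set(rng.sample(negatives, n_neg))
    # "full": ConceptFlow (select), top-k two-hop concepts by learned graph attention
    ranked = sorted(all_two_hop, key=lambda c: attention_score.get(c, 0.0), reverse=True)
    return set(ranked[:k])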
5.3 Hop Steps in Concept Graph This experiment studies the influence of hop steps in the concept graph. As shown in Table 5, the Number of covered golden concepts increases with more hops. Compared to zero-hop concepts, multi-hop concepts cover more golden concepts, confirming that conversations naturally shift to multi-hop concepts: extending the concept graph from one-hop to twohop improves the recall from 39% to 61%, and to three-hop further improves to 81%. However, at the same time, the amounts of the concepts also increase dramatically with multiple hops. Three hops lead to 3,769 concepts on average, which are 10% of the entire graph we used. In this work, we choose two-hop, as a good balance of coverage and efficiency, and used ConceptFlow (select) to filter around 200 concepts to construct the pruned graph. How to efficiently and effectively leverage more distant concepts in the graph is reserved for future work. 2038 (a) Golden Concept Coverage. (b) Response Concept Generation. (c) Response Token Generation. Figure 3: Comparisons of Outer Concept Selection Methods. Base only considers the central concepts and ignores two-hop concepts. Rand randomly selects two-hop concepts. Distract incorporates golden concepts in the response with random negatives (distractors). Full chooses two-hop concepts with ConceptFlow’s graph attention. Figure 4: Case Study (Best viewed in color). Left: Attention flow in commonsense concept graph, where zerohop concepts, one-hop concepts and two-hop concepts are highlighted. Right: Attention scores over all concepts. Darker green indicates higher attention scores. (a) Central Concept. (b) Two-hop Concept. Figure 5: Distribution of Attention Score. The distributions of Overall (all concepts of the certain part), Golden (concepts in the golden response) and Zero-hop (concepts appear in the post) are presented. The attention score is calculated by scaling the mean of attention scores of n step decoding. 5.4 Case Study Some cases from three conversation models are listed in Table 6. Responses from CCM may repeat the same contents as it does not explicitly model the traverse in the concept space. For example, the responses from the first and third cases always repeat “I’m not sure”. On the other hand, GPT-2 generates more fluent responses compared to CCM. Nevertheless, some cases from GPT-2 merely copy contents or concepts from the given post. For example, for the third case, GPT-2 (conv) mainly discusses the concept music. In comparison, the generated responses from our ConceptFlow are more fluent and informative than those from both CCM and GPT-2. For example, in the third case, ConceptFlow brings associated concepts “sound” and “check” to the response generation, hopping from the grounded concepts “music” and “advice”. Introducing these multi-hop concepts effectively improves the informativeness and diversity of generated responses. Figure 4 presents a case study of ConceptFlow. The attention score βei and γf are presented in the form of color intensity. The “championship” of zero-hop, “fan” of one-hop and “team” of two-hop receive more attention than others and are used to 2039 #1 Post actually i stayed at the building right next to the lighthouse . i believe it was a boiler room . another group was already sleeping at the lighthouse . CCM i ’m not sure if you ’re joking , but i ’m not sure if you ’re talking about the lighthouse . i ’m not sure if it was a blanket or a blanket . GPT-2 (conv) i ’m pretty sure it was a room with a door . 
ConceptFlow good luck , i ’m sure there are some good things in the house . #2 Post at this point it may be more beneficial for him to keep playing as much as he can before the next season . trick his body into thinking he is still in the season . CCM i think he ’s just going to be a great player . GPT-2 (conv) i ’m sure he ’ll be fine . he ’s a great player , but he ’s not a top 10 player . ConceptFlow he ’s been playing the same game for the last two years . he ’s been playing the same way . #3 Post i ’ve known how to change the music , it does n’t feel natural . thanks for the advice though . CCM i ’m not sure if you ’re joking , but i ’m not sure if you ’re being sarcastic or not . i ’m not sure if you ’re being sarcastic or not , but i ’m not sure if you ’re being sarcastic or not . i ’m pretty sure it ’s just a video . GPT-2 (conv) i ’ve been playing with a lot of different music and it ’s always been the same . ConceptFlow i did n’t even notice the natural sound . i ’ll check it out ! #4 Post are milan and inter no longer top clubs given that they currently do n’t have the finances to compete with the best ? CCM milan is n’t a top club , but they are n’t top clubs . they are n’t the only ones who are n’t in the world . GPT-2 (conv) i think they are . they are a club that has been in the top 4 for a long time . ConceptFlow the best part of the team is the way they are , and we do n’t want to play the best team in the country . #5 Post no i ’ve never felt like i had friends whoops CCM i ’m sorry , i feel you . GPT-2 (conv) i ’ve never felt like i had friends whoops . i ’ve never felt like i ’ve had friends whoops . ConceptFlow you ’re gonna have to call your friend ! Table 6: Some Cases Generated by Different Models. Tokens from zero concepts, one-hop concepts and two-hop concepts are highlighted. generate the response. The concept flow from “fans” to “fan” models the concept shift from user post to response. The concept flow from “fan” to “team” further describes the concept shift in response generation. In addition, some concepts, such as “win” and “pretty”, share higher attention and may help to understand the one-hop concepts, and are filtered out when generating response by the gate σ∗ according to the relevance with conversation topic. 5.5 Learned Attentions on Concepts This experiment studies the learned attention of ConceptFlow on different groups of concepts. We consider the average attention score (β for central concepts and α (Appendix A.4) for two-hop concepts) from all decoding steps. The probability density of the attention is plotted in Figure 5. Figure 5(a) shows the attention weights on central concepts. ConceptFlow effectively attends more on golden and zero-hop concepts, which include more useful information. The attention on two-hop concepts are shown in Figure 5(b). ConceptFlow attends slightly more on the Golden twohop concepts than the rest two-hop ones, though the margin is smaller—the two-hop concepts are already filtered down to high-quality ones in the ConceptFlow (select) step. 6 Conclusion and Future Work ConceptFlow models conversation structure explicitly as transitions in the latent concept space, in order to generate more informative and meaningful responses. Our experiments on Reddit conversations illustrate the advantages of ConceptFlow over previous conversational systems. 
Our studies confirm that ConceptFlow’s advantages come from the high coverage latent concept flow, as well as its graph attention mechanism that effectively guides the flow to highly related concepts. Our human evaluation demonstrates that ConceptFlow generates more appropriate and informative responses while using much fewer parameters. In future, we plan to explore how to combine knowledge with pre-trained language models, e.g. GPT-2, and how to effectively and efficiently introduce more concepts in generation models. Acknowledgments Houyu Zhang, Zhenghao Liu and Zhiyuan Liu is supported by the National Key Research and Development Program of China (No. 2018YFB1004503) and the National Natural Science Foundation of China (NSFC No. 61772302, 61532010). We thank Hongyan Wang, Shuo Wang, Kaitao Zhang, Si Sun, Huimin Chen, Xuancheng Huang, Zeyun Zhang, Zhenghao Liu and Houyu Zhang for human evaluations. 2040 References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of ICLR. Antoine Bordes, Nicolas Usunier, Alberto GarciaDuran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multirelational data. In Advances in neural information processing systems, pages 2787–2795. Hongshen Chen, Xiaorui Liu, Dawei Yin, and Jiliang Tang. 2017. A survey on dialogue systems: Recent advances and new frontiers. SIGKDD Explorations, pages 25–35. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL, pages 4171– 4186. George Doddington. 2002. Automatic evaluation of machine translation quality using n-gram cooccurrence statistics. In Proceedings of the second international conference on Human Language Technology Research, pages 138–145. Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified language model pre-training for natural language understanding and generation. In Proceedings of NeurIPS, pages 13042–13054. Hao Fang, Hao Cheng, Maarten Sap, Elizabeth Clark, Ari Holtzman, Yejin Choi, Noah A. Smith, and Mari Ostendorf. 2018. Sounding Board: A user-centric and content-driven social chatbot. In Proceedings of NAACL, pages 96–100. Michel Galley, Chris Brockett, Xiang Gao, Bill Dolan, and Jianfeng Gao. 2018. End-to-end conversation modeling: Moving beyond chitchat. Xiang Gao, Sungjin Lee, Yizhe Zhang, Chris Brockett, Michel Galley, Jianfeng Gao, and Bill Dolan. 2019. Jointly optimizing diversity and relevance in neural response generation. In Proceedings of NAACL, pages 1229–1238. Marjan Ghazvininejad, Chris Brockett, Ming-Wei Chang, Bill Dolan, Jianfeng Gao, Wen-tau Yih, and Michel Galley. 2018. A knowledge-grounded neural conversation model. In Proceedings of AAAI, pages 5110–5117. Barbara J. Grosz and Candace L. Sidner. 1986. Attention, intentions, and the structure of discourse. Computational Linguistics, pages 175–204. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of ACL, pages 1631–1640. Hiroaki Hayashi, Zecong Hu, Chenyan Xiong, and Graham Neubig. 2019. Latent relation language models. arXiv preprint arXiv:1908.07690. Ari Holtzman, Jan Buys, Maxwell Forbes, and Yejin Choi. 2019. The curious case of neural text degeneration. arXiv preprint arXiv:1904.09751. Alon Lavie and Abhaya Agarwal. 2007. 
METEOR: An automatic metric for MT evaluation with high levels of correlation with human judgments. In Proceedings of the Second Workshop on Statistical Machine Translation, pages 228–231. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016a. A diversity-promoting objective function for neural conversation models. In Proceedings of NAACL, pages 110–119. Jiwei Li, Will Monroe, Alan Ritter, Dan Jurafsky, Michel Galley, and Jianfeng Gao. 2016b. Deep reinforcement learning for dialogue generation. In Proceedings of EMNLP, pages 1192–1202. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81. Shuman Liu, Hongshen Chen, Zhaochun Ren, Yang Feng, Qun Liu, and Dawei Yin. 2018. Knowledge diffusion for neural dialogue generation. In Proceedings of the ACL, pages 1489–1498. Robert Logan, Nelson F. Liu, Matthew E. Peters, Matt Gardner, and Sameer Singh. 2019. Barack’s wife hillary: Using knowledge graphs for fact-aware language modeling. In Proceedings of ACL, pages 5962–5971. Yinong Long, Jianan Wang, Zhen Xu, Zongsheng Wang, Baoxun Wang, and Zhuoran Wang. 2017. A knowledge enhanced generative conversational service agent. In DSTC6 Workshop. Seungwhan Moon, Pararth Shah, Anuj Kumar, and Rajen Subba. 2019. OpenDialKG: Explainable conversational reasoning with attention-based walks over knowledge graphs. In Proceedings of ACL, pages 845–854. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of ACL, pages 311–318. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of EMNLP, pages 1532– 1543. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. In Proceedings of Technical report, OpenAI. 2041 Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog. Corbin Rosset, Chenyan Xiong, Xia Song, Daniel Campos, Nick Craswell, Saurabh Tiwary, and Paul Bennett. 2020. Leading conversational search by suggesting useful questions. In Proceedings of The Web Conference 2020, pages 1160–1170. Iulian Vlad Serban, Alessandro Sordoni, Yoshua Bengio, Aaron C. Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using generative hierarchical neural network models. In Proceedings of AAAI, pages 3776–3784. Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural responding machine for short-text conversation. In Proceedings of ACL, pages 1577–1586. Robyn Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of general knowledge. In Proceedings of AAAI, pages 4444–4451. Haitian Sun, Bhuwan Dhingra, Manzil Zaheer, Kathryn Mazaitis, Ruslan Salakhutdinov, and William Cohen. 2018. Open domain question answering using early fusion of knowledge bases and text. In Proceedings of EMNLP, pages 4231–4242. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of NIPS, pages 3104–3112. Jianheng Tang, Tiancheng Zhao, Chenyan Xiong, Xiaodan Liang, Eric Xing, and Zhiting Hu. 2019. Targetguided open-domain conversation. In Proceedings of ACL, pages 5624–5634. Oriol Vinyals and Quoc Le. 2015. A neural conversational model. arXiv preprint arXiv:1506.05869. 
Pavlos Vougiouklis, Jonathon Hare, and Elena Simperl. 2016. A neural network approach for knowledgedriven response generation. In Proceedings of COLING, pages 3370–3380. Sean Welleck, Ilia Kulikov, Stephen Roller, Emily Dinan, Kyunghyun Cho, and Jason Weston. 2019. Neural text generation with unlikelihood training. arXiv preprint arXiv:1908.04319. Chien-Sheng Wu, Andrea Madotto, Ehsan HosseiniAsl, Caiming Xiong, Richard Socher, and Pascale Fung. 2019. Transferable multi-domain state generator for task-oriented dialogue systems. In Proceedings of ACL, pages 808–819. Chen Xing, Wei Wu, Yu Wu, Jie Liu, Yalou Huang, Ming Zhou, and Wei-Ying Ma. 2017. Topic aware neural response generation. In Proceedings of AAAI, pages 3351–3357. Zhen Xu, Bingquan Liu, Baoxun Wang, Chengjie Sun, and Xiaolong Wang. 2017. Incorporating loose-structured knowledge into conversation modeling via recall-gate LSTM. In 2017 International Joint Conference on Neural Networks, IJCNN, pages 3506–3513. Yizhe Zhang, Michel Galley, Jianfeng Gao, Zhe Gan, Xiujun Li, Chris Brockett, and Bill Dolan. 2018. Generating informative and diverse conversational responses via adversarial information maximization. In Proceedings of NeurIPS, pages 1810–1820. Hao Zhou, Tom Young, Minlie Huang, Haizhou Zhao, Jingfang Xu, and Xiaoyan Zhu. 2018a. Commonsense knowledge aware conversation generation with graph attention. In Proceedings of IJCAI, pages 4623–4629. Kangyan Zhou, Shrimai Prabhumoye, and Alan W Black. 2018b. A dataset for document grounded conversations. In Proceedings of EMNLP, pages 708–713. Wenya Zhu, Kaixiang Mo, Yu Zhang, Zhangbin Zhu, Xuezheng Peng, and Qiang Yang. 2017. Flexible end-to-end dialogue system for knowledge grounded conversation. arXiv preprint arXiv:1709.04264. 2042 A Appendices Supplementary results of the overall performance and ablation study for multi-hop concepts are presented here. More details of Central Flow Encoding and Concept Selection are also shown. A.1 Supplementary Results for Overall Experiments This part presents more evaluation results of the overall performance of ConceptFlow from two aspects: relevance and novelty. Table 7 shows supplementary results on Relevance between generated responses and golden responses. ConceptFlow outperforms other baselines with large margins among all evaluation metrics. Concept-PPL is the Perplexity that calculated by the code from previous work (Zhou et al., 2018a). Zhou et al. (2018a) calculates Perplexity by considering both words and entities. It is evident that more entities will lead to a better result in terms of Concept-PPL because the vocabulary size of entities is always smaller than word vocabulary size. More results for model novelty evaluation are shown in Table 8. These supplementary results compare the generated response with the user post to measure the repeatability of the post and generated responses. A lower score indicates better performance because the repetitive and dull response will degenerate the model performance. ConceptFlow presents competitive performance with other baselines, which illustrate our model provides an informative response for users. These supplementary results further confirm the effectiveness of ConceptFlow. Our model has the ability to generate the most relevant response and more informative response than other models. A.2 Supplementary Results for Multi-hop Concepts The quality of generated responses from four twohop concept selection strategies is evaluated to further demonstrate the effectiveness of ConceptFlow. 
We evaluate the relevance between generated responses and golden responses, as shown in Table 9. Rand outperforms Base on most evaluation metrics, which illustrates the quality of generated response can be improved with more concepts included. Distract outperforms Rand on all evaluation metrics, which indicates that concepts appearing in golden responses are meaningful and important for the conversation system to generate a more on-topic and informative response. On the other hand, Full outperforms Distract significantly, even though not all golden concepts are included. The better performance thrives from the underlying related concepts selected by our ConceptFlow (select). This experiment further demonstrates the effectiveness of our ConceptFlow to generate a better response. A.3 Model Details of Central Flow Encoding This part presents the details of our graph neural network to encode central concepts. A multi-layer Graph Neural Network (GNN) (Sun et al., 2018) is used to encode concept ei ∈Gcentral in central concept graph: ⃗gei = GNN(⃗ei, Gcentral, H), (16) where ⃗ei is the concept embedding of ei and H is the user utterance representation set. The l-th layer representation ⃗g l ei of concept ei is calculated by a single-layer feed-forward network (FFN) over three states: ⃗g l ei = FFN  ⃗g l−1 ei ◦⃗p l−1 ◦ X r X ej f ej→ei r  ⃗g l−1 ej   , (17) where ◦is concatenate operator. ⃗g l−1 ej is the concept ej’s representation of (l −1)-th layer. ⃗p l−1 is the user utterance representation of (l −1)-th layer. The (l−1)-th layer user utterance representation is updated with the zero-hop concepts V 0: ⃗p l−1 = FFN( X ei∈V 0 ⃗g l−1 ei ). (18) fej→ei r (⃗g l−1 ej ) aggregates the concept semantics of relation r specific neighbor concept ej. It uses attention αej r to control concept flow from ei: f ej→ei r (⃗e l−1 j ) = α ej r · FFN(⃗r ◦⃗g l−1 ej ), (19) where ◦is concatenate operator and ⃗r is the relation embedding of r. The attention weight αej r is computed over all concept ei’s neighbor concepts according to the relation weight score and the Page Rank score (Sun et al., 2018): α ej r = softmax(⃗r · ⃗p l−1) · PageRank(e l−1 j ), (20) where PageRank(e l−1 j ) is the page rank score to control propagation of embeddings along paths starting from ei (Sun et al., 2018) and ⃗p l−1 is the (l −1)-th layer user utterance representation. The 0-th layer concept representation ⃗e 0 i for concept ei is initialized with the pre-trained concept 2043 Model Bleu-1 Bleu-2 Bleu-3 Nist-1 Nist-2 Nist-3 Concept-PPL Seq2Seq 0.1702 0.0579 0.0226 1.0230 1.0963 1.1056 MemNet 0.1741 0.0604 0.0246 1.0975 1.1847 1.1960 46.85 CopyNet 0.1589 0.0549 0.0226 0.9899 1.0664 1.0770 40.27 CCM 0.1413 0.0484 0.0192 0.8362 0.9000 0.9082 39.18 GPT-2 (lang) 0.1705 0.0486 0.0162 1.0231 1.0794 1.0840 GPT-2 (conv) 0.1765 0.0625 0.0262 1.0734 1.1623 1.1745 ConceptFlow 0.2451 0.1047 0.0493 1.6137 1.7956 1.8265 26.76 Table 7: More Metrics on Relevance of Generated Responses. The relevance is calculated between the generated response and the golden response. Concept-PPL is the method used for calculating Perplexity in CCM (Zhou et al., 2018a), which combines the distribution of both words and concepts together. The Concept-PPL is meaningless when utilizing different numbers of concepts (more concepts included, better Perplexity shows). Novelty w.r.t. 
Input(↓) Model Bleu-1 Bleu-2 Bleu-3 Nist-1 Nist-2 Nist-3 Rouge-1 Seq2Seq 0.1855 0.0694 0.0292 1.2114 1.3169 1.3315 0.1678 MemNet 0.2240 0.1111 0.0648 1.6740 1.9594 2.0222 0.2216 CopyNet 0.2042 0.0991 0.056 1.5072 1.7482 1.7993 0.2104 CCM 0.1667 0.0741 0.0387 1.1232 1.2782 1.3075 0.1953 GPT-2 (lang) 0.2124 0.0908 0.0481 1.5105 1.7090 1.7410 0.1817 GPT-2 (conv) 0.2537 0.1498 0.1044 1.9562 2.4127 2.5277 0.2522 ConceptFlow 0.1850 0.0685 0.0281 1.3325 1.4600 1.4729 0.1777 Table 8: More Metrics on Novelty of Generated Responses. The novelty is calculated between the generated response and the user utterance, where lower means better. Version Bleu-1 Bleu-2 Bleu-3 Bleu-4 Nist-1 Nist-2 Nist-3 Nist-4 Base 0.1705 0.0577 0.0223 0.0091 0.9962 1.0632 1.0714 1.0727 Rand 0.1722 0.0583 0.0226 0.0092 1.0046 1.0726 1.0810 1.0823 Distract 0.1734 0.0586 0.0230 0.0097 1.0304 1.0992 1.1081 1.1096 Full 0.2265 0.0928 0.0417 0.0195 1.4550 1.6029 1.6266 1.6309 Table 9: The Generation Quality of Different Outer Hop Concept Selectors. Both Bleu and Nist are used to calculate the relevance between generated responses and golden responses. embedding ⃗ei and the 0-th layer user utterance representation ⃗p 0 is initialized with the m-th hidden state hm from the user utterance representation set H. The GNN used in ConceptFlow establishes the central concept flow between concepts in the central concept graph using attentions. A.4 Concept Selection With the concept graph growing, the number of concepts is increased exponentially, which brings lots of noises. Thus, a selection strategy is needed to select high-relevance concepts from a large number of concepts. This part presents the details of our concept selection from ConceptFlow (select). The concept selector aims to select top K related two-hop concepts based on the sum of attention scores for each time t over entire two-hop concepts: αn = n X t=1 softmax(⃗st · ⃗ek), (21) where ⃗st is the t-th time decoder output representation and ⃗ek denotes the concept ek’s embedding. Then two-hop concepts are sorted according to the attention score αn. In our settings, top 100 concepts are reserved to construct the two-hop concept graph V 2. Moreover, central concepts are all reserved because of the high correlation with the conversation topic and acceptable computation complexity. Both central concepts and selected two-hop concepts construct the concept graph G.
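For illustration, the top-K selection of Equation (21) can be sketched in a few lines of PyTorch. The function and variable names below are illustrative rather than taken from the released implementation; decoder_states stands for the decoder outputs s_t and concept_embs for the candidate two-hop concept embeddings e_k.

import torch

def select_two_hop_concepts(decoder_states, concept_embs, top_k=100):
    # decoder_states: (n_steps, d) decoder output states s_t
    # concept_embs:   (n_candidates, d) embeddings e_k of two-hop candidates
    # Eq. (21): softmax over candidates at every decoding step, summed across steps
    logits = decoder_states @ concept_embs.t()           # (n_steps, n_candidates)
    scores = torch.softmax(logits, dim=-1).sum(dim=0)    # (n_candidates,)
    k = min(top_k, concept_embs.size(0))
    top_scores, top_idx = scores.topk(k)                 # sort and keep the top K
    return top_idx, top_scores

# toy usage with random vectors: keep the 100 highest-scoring candidates
idx, sc = select_two_hop_concepts(torch.randn(8, 64), torch.randn(500, 64))

As described above, central concepts would be kept unconditionally and only the two-hop candidates are pruned this way.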
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2044–2058 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2044 Negative Training for Neural Dialogue Response Generation Tianxing He and James Glass Computer Science and Artificial Intelligence Laboratory Massachusetts Institute of Technology {tianxing,glass}@csail.mit.edu Abstract Although deep learning models have brought tremendous advancements to the field of opendomain dialogue response generation, recent research results have revealed that the trained models have undesirable generation behaviors, such as malicious responses and generic (boring) responses. In this work, we propose a framework named “Negative Training” to minimize such behaviors. Given a trained model, the framework will first find generated samples that exhibit the undesirable behavior, and then use them to feed negative training signals for fine-tuning the model. Our experiments show that negative training can significantly reduce the hit rate of malicious responses, or discourage frequent responses and improve response diversity. 1 Introduction End-to-end dialogue response generation can be formulated as a sequence-to-sequence (seq2seq) task: given a dialogue context, the model is asked to generate a high-quality response. In recent years, deep learning models, especially seq2seq language generation models (Sutskever et al., 2014; Cho et al., 2014), have brought significant progress to the field of dialogue response generation. However, recent research has revealed undesirable behaviors of seq2seq models that are side effects of standard maximum likelihood estimation (MLE) training, such as the generic (boring) response problem (Li et al., 2016), vulnerability to adversarial attacks (Cheng et al., 2018; Belinkov and Bisk, 2017), and the malicious (egregious) response problem (He and Glass, 2019). In this work, we propose and explore the negative training framework to correct unwanted behaviors of a dialogue response generator. During negative training, we first find or identify input-output pairs for a trained seq2seq model that exhibit some undesirable generation behavior, treat them as “bad examples,” and use them to feed negative training signals to the model. Correspondingly, we regard the training data as “good examples” and standard MLE training as “positive training”. The idea of negative training is inspired from the way parents might teach their children to use language by incorporating both positive and negative training signals. For example, when teaching children how to use “love” and “hate”, in addition to using positive examples like “I love apples but I hate bananas”, they might also point out that saying “I hate you” to someone is considered impolite. In this work, negative training is used to address the malicious response problem and the frequent response problem (to be described in Section 3.2 and 3.3) in open-domain dialogue response generation. In our experiments, we show that negative training can significantly reduce the hit rate for malicious responses, or discourage frequent responses and greatly improve response diversity. 2 Model Formulation In this work we adopt recurrent neural network (RNN) based encoder-decoder seq2seq models (Sutskever et al., 2014; Cho et al., 2014; Mikolov et al., 2010), which are widely used in NLP applications like dialogue response generation (Li et al., 2016), machine translation (Luong et al., 2015), etc. 
We use x = {x1, x2, ..., xn} to denote onehot vector representations of the input sequence, which serves as context or history information (e.g. the previous utterance), y = {y1, y2, ..., ym}1 to denote scalar indices of the corresponding reference target sequence, and V as the vocabulary. We use θ to represent the parameters for the seq2seq 1The last word ym is a <EOS> token which indicates the end of a sentence. 2045 model, and Pθ(y|x) as the model’s generative distribution. On the encoder side, every xt will be first mapped into its corresponding word embedding xemb t . Then {xemb t } are input to a long-short term memory (LSTM) (Hochreiter and Schmidhuber, 1997) RNN to get a sequence of latent representations {henc t }2 . For the decoder, at time t, similarly yt is first mapped to yemb t . Then a context vector ct, which is supposed to capture useful latent information of the input sequence, needs to be constructed. We adopt the “attention” mechanism for context vector construction: first an attention mask vector at (which is a distribution) on the input sequence is calculated to decide which part to focus on, then the mask is applied to the latent vectors to construct ct: ct = Pn i=1 at(i)henc i . We use the formulation of the “general” type of global attention, described in (Luong et al., 2015), to calculate the mask. During baseline training, standard MLE training with stochastic gradient descent (SGD) is used to minimize the negative log-likelihood (NLL) of the reference target sentence given the input sentence in the data: LMLE(Pdata; θ) = E(x,y)∼Pdata(−log Pθ(y|x)) = E(x,y)∼Pdata(− m X t=1 log Pθ(yt|y<t, x)) (1) where y<t refers to {y0, y1, ..., yt−1}, in which y0 is set to a begin-of-sentence token <BOS>. We consider two popular ways of decoding (generating) a sentence given an input: greedy decoding and sampling. In practice for dialogue response generation, greedy decoding will provide stable and reproducible outputs, but is severely affected by the generic response problem. Sampling will provide more diverse but less predictable responses, and thus give rise to the malicious response problem. 3 The Negative Training Framework 3.1 Overview The negative training framework3 is a two-stage process. Given a trained model, we put it under a 2Here h refers to the output layer of LSTM, not the cell memory layer. 3Our code is available at https://github.mit. edu/tianxing/negativetraining_acl2020 “debugging” environment Ptest which provides test input samples4, get the model’s decoded samples and decide (using well-defined criteria) whether each input-output pair exhibits some undesirable behavior. Then, these “bad” pairs are used to provide negative training signals. Negative training can be derived from Empirical Bayes Risk Minimization (Och, 2003). Specifically, the overall objective is to minimize the expected risk that the model exhibits undesirable decoding behavior: LNEG(Ptest; θ) = Ex∼PtestEy∼Pθ(y|x)c(x, y) (2) where c(x, y) refers to the binary criteria that will be 1 if (x, y) exhibits undesirable behavior, and 0 otherwise. Then, we take the derivative of LNEG w.r.t. to θ, using the log derivative trick (widely used in Reinforcement Learning (Sutton and Barto, 1998)): ∇θLNEG(Ptest; θ) = Ex∼PtestEy∼Pθ(y|x)c(x, y) · ∇θ log Pθ(y|x) (3) Compared to LMLE in eq. (1), which maximizes the log-likelihood of training data samples, LNEG minimizes the log-likelihood of undesirable model samples. This is the reason why we call it “Negative Training”. 
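To make the contrast between Equations (1) and (3) concrete, the following is a minimal PyTorch sketch of one positive (MLE) update and one negative update on a single sentence pair. The toy model reuses one LSTM for encoding and decoding and omits attention; it and all names here are illustrative stand-ins, not the architecture or code used in the experiments.

import torch
import torch.nn.functional as F

class ToySeq2Seq(torch.nn.Module):
    # minimal stand-in for P_theta(y|x): one LSTM shared by encoder and decoder,
    # no attention, just enough structure to make the sketch run
    def __init__(self, vocab=100, dim=32, bos=1):
        super().__init__()
        self.emb = torch.nn.Embedding(vocab, dim)
        self.rnn = torch.nn.LSTM(dim, dim)
        self.out = torch.nn.Linear(dim, vocab)
        self.bos = bos

    def forward(self, x, y):
        # x, y: 1-D LongTensors; returns per-step logits for P(y_t | y_<t, x)
        _, state = self.rnn(self.emb(x).unsqueeze(1))                # encode x
        y_in = torch.cat([torch.tensor([self.bos]), y[:-1]])         # teacher forcing
        h, _ = self.rnn(self.emb(y_in).unsqueeze(1), state)          # decode
        return self.out(h.squeeze(1))

def sentence_nll(model, x, y):
    # Eq. (1) for one pair: -sum_t log P_theta(y_t | y_<t, x)
    return F.cross_entropy(model(x, y), y, reduction="sum")

def mle_step(model, opt, x, y):
    # positive training: gradient descent on -log P_theta(y | x)
    opt.zero_grad()
    sentence_nll(model, x, y).backward()
    opt.step()

def negative_step(model, opt, x, y_bad):
    # Eq. (3): for a sample flagged by c(x, y_bad) = 1, descend on
    # +log P_theta(y_bad | x), pushing probability away from the bad output
    opt.zero_grad()
    (-sentence_nll(model, x, y_bad)).backward()
    opt.step()

model = ToySeq2Seq()
opt = torch.optim.SGD(model.parameters(), lr=0.01)
x, y = torch.randint(2, 100, (6,)), torch.randint(2, 100, (8,))
mle_step(model, opt, x, y)         # push a data pair up
negative_step(model, opt, x, y)    # push a (hypothetical) bad sample down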
In our preliminary experiments, we find that negative training needs to be augmented with the standard MLE objective LMLE, encouraging the model to retain its original performance: LNEG+POS = LNEG + λPOSLMLE (4) In our experiments, we find λPOS can be simply set to 0.1 to work well. In the next two sections, we discuss how the general negative training framework is tailored for the malicious response problem and frequent response problem, respectively. 3.2 Negative Training for the Malicious Response Problem For the malicious response problem, we follow the methodology proposed by (He and Glass, 2019). 4Note that here “test” does not refer to the test data. 2046 First a list of malicious target sentences are created, then the gibbs-enum algorithm5 is called to find “trigger input” that will cause the model to assign large probability to the target sequence. The following “hit types” are defined: • o-greedy-hit: A trigger input sequence is found such that the model generates the target sentence from greedy decoding. • o-sample-min/avg-hit: A trigger input sequence is found such that the model generates the target sentence with an minimum/average word log-probability larger than a given threshold Tout. • io-sample-min/avg-hit: In addition to the definition of o-sample-min/avg-hit, we also require that the average log-likelihood of the trigger input sequence, measured by a LM, is larger than a threshold Tin. This enforces the trigger input to be more likely to be input by real-world users. Tout is set to the trained seq2seq model’s average word log-likelihood on the test data, and Tin is set to be a reasonable LM’s 6 average word loglikelihood on the test set. The intuition is that the model should not assign larger probabilities to the malicious sentences than the reference sentences in the test set. Note that these hit types act as criteria c(x, y), indicating whether a target sentence is hit by a trigger input. As shown in (He and Glass, 2019), a typical seq2seq model trained by MLE has around a 10% hit rate for malicious targets w.r.t. samplemin/avg-hit, across data-sets. However, very few malicious targets are hit w.r.t. greedy-hit, so in this work, we focus on the malicious response problem for sampling during decoding. In Table 1 we show pairs of trigger inputs and the malicious target sentences w.r.t io-sample-min-hit, for the baseline model on Ubuntu data. Now we apply the negative training framework, and aim to reduce the hit rate of a trained model for a given list of malicious targets. During each iteration of negative training, for every target sentence ytarget, we first call the gibbs-enum algorithm to find the trigger input xtrigger. And if the target is 5For this paper to be self-contained, we describe the gibbsenum algorithm in Appendix A. 6A LSTM language model (LM) is trained on the same training data (regarding each response as an independent sentence). Algorithm 1 Negative Training for the Malicious Response Problem Input: Target list Ytarget, model parameter θ, learning rate α, criterion for hit c, and training data Dtrain for ytarget in Ytarget do Get xtrigger for ytarget using the gibbs-enum algorithm. 
while c(xtrigger, ytarget) = 1 do Negative update: θ = θ −α · ∇θlogPθ(ytarget|xtrigger) Get data sample (xpos, ypos) from Dtrain Positive update: θ = θ + α · λPOS · ∇θlogPθ(ypos|xpos) end while end for Trigger Input: ok give me a minute to see what you have in the first place Malicious Target: no one will help you Trigger Input: mirc it ’s supposed to be <unk> ’ that seems to be the problem Malicious Target: i do n’t think i like you Trigger Input: i know about photoshop i use skype too but i ca n’t buy it Malicious Target: you have no money Table 1: Examples of trigger inputs. hit (c(xtrigger, ytarget) = 1), we update the model to reduce the log-likelihood Pθ(ytarget|xtrigger). The process is formulated in Algorithm 17. For each trigger input, multiple iterations of negative updates are usually needed before the hit criterion is no longer met. Note that in each iteration, the gibbs-enum algorithm is called again to find a new trigger input for each target. In our experiments, we show that negative training effectively reduces the hit rate for malicious targets after each iteration, and eventually, the gibbsenum algorithm can no longer find trigger inputs for a large number of targets that were initially hits. 3.3 Negative Training for the Frequent Response Problem The generic response problem (Li et al., 2016) for end-to-end dialogue response generation refers to the typical behavior of a MLE trained model, whereby the generated responses are mostly safe, 7Note that in actual implementation, the algorithm is minibatch based. 2047 boring or uninformative (such as “i don’t know” or “good idea”). However, it is difficult to invent an automatic criterion to determine whether a response is generic or not. In this work, we focus on the frequent response problem, as a sub-problem of the generic response problem. It refers to the behavior that a trained model generates exactly the same (usually boring) response, with a high frequency. We propose to use a metric called max-ratio to measure how severe the frequent response problem is. Given a test set and a decoding method, the model will generate a set of responses, and maxratio is defined to be the ratio of the most frequent response. In our experiments, the baseline models have a max-ratio of around 0.3 for response like “I don’t know” across different data-sets, showing the severity of the frequent response problem. During negative training for frequent response, first a threshold ratio rthres is selected (such as 0.01), and responses with frequency ratio larger than rthres will be discouraged. For each iteration, the model’s response to each training data input sentence is monitored and responses with frequency larger than rthres will be used as negative examples. The frequency statistics are calculated using the current and the last 200 mini-batches. The procedure is formulated in Algorithm 2. Note that positive training is also needed here for the model to retain its original performance. Algorithm 2 Negative Training for the Frequent Response Problem Input: Model parameter θ, threshold ratio rthres, learning rate α, and training data set Dtrain for (xpos, ypos) in Dtrain do Generate response ysample from the model. Compute the frequency rsample for ysample in the last 200 mini-batches. 
if rsample > rthres then Negative update: θ = θ −α · ∇θlogPθ(ysample|xpos) Positive update: θ = θ + α · λPOS · ∇θlogPθ(ypos|xpos) end if end for In our experiments, it is shown that negative training significantly reduces max-ratio for the model on test data, and greatly increases the diversity of the model’s responses. 4 Experiments We conduct experiments on three publicly available conversational dialogue data-sets: Ubuntu, Switchboard, and OpenSubtitles. To save space, descriptions of the data-sets are provided in Appendix B. 4.1 Baseline Model Training For all data-sets, we first train an LSTM based LM and attention based seq2seq models with one hidden layer of size 600, and the embedding size is set to 300. For Switchboard a dropout layer with rate 0.3 is added to the model because over-fitting is observed. The mini-batch size is set to 64 and we apply SGD training with a fixed starting learning rate (LR) for 10 iterations, and then another 10 iterations with LR halving. For Ubuntu and Switchboard, the starting LR is 1, while a starting LR of 0.1 is used for OpenSubtitles. The results are shown in Appendix C. After negative training, in addition to measuring the hit rate for malicious targets or the diversity of the responses, it is also important to check whether the original sample quality of the baseline model is damaged. Towards that end, the perplexity of the model before and after negative training will be compared, we also conduct human evaluation to measure whether the sample quality is decreased. Other popular measurements, such as the BLEU score, have been found to correspond poorly with human judgements (Liu et al., 2016). Nevertheless, we also find that the model’s BLEU score does not become worse after negative training. 4.2 Experiments on the Malicious Response Problem Following (He and Glass, 2019), a list of malicious targets are created to test whether negative training can teach the model not to generate sentences in the list. However, in addition to prevent the model from generating targets in a specific list, it is also important to check whether negative training generalizes to other malicious targets. So, a test target list which contains similar but different targets from the training list are also created to test generalization. The training and test lists each contain 0.5k targets. It is also interesting to investigate whether using more malicious targets for negative training can lower the hit rate on the test list. Towards that end, we train a seq2seq paraphrase model using the paraNMT data-set (Wieting and Gimpel, 2017), 2048 Train Paraphrase Test you are broken you ’re broken are you broken i will kill i ’ll kill myself i ’m going to kill you are bad you ’re bad you are really bad you are stupid you ’re stupid you are so stupid you shut up shut your mouth can you shut up Table 2: Examples of malicious targets in the training list, the test list, and paraphrases of the training targets which will be used for augmentation. with a model of the same structure as described in Section 2. Then, the paraphrase model is used to generate paraphrases of the malicious targets in the training target list8 for augmentation. In our experiments, the training list without augmentation is first used for negative training, then it is augmented with 0.5k or 2k paraphrased targets respectively (1 or 4 paraphrase copies for each training target sentence). Samples of the malicious targets are shown in Table 2. 
The same training, augmented training and test list are used for all three data-sets, and there is no sequence-level overlap between training lists (augmented or not) and the test list. In our experiments, we spotted a harmful side effect of negative training where frequent words in the training target list are severely penalized and sometimes receive low probability even in normal perplexity testing, especially for experiments with small λPOS. To alleviate this problem, we use a simple technique called frequent word avoiding (FWA): negative gradients are not applied to the most frequent words in the malicious training target list9. For example, when doing negative training against the target “i hate you <EOS>”, only “hate” will get a negative gradient. For all data-sets, negative training (Algorithm 1) is executed on the (trained) baseline model for 20 iterations over the training target list. A fixed learning rate of 0.01 and a mini-batch size of 100 are used. λPOS is set to 0.1 for Ubuntu, and to 1 for Switchboard and OpenSubtitles. The main results are shown in Table 3. For Switchboard we focus on sample-avg-hit because we find very few targets are hit w.r.t. samplemin-hit (Similar results are reported in (He and Glass, 2019)), while for Ubuntu and OpenSubtitles we focus on sample-min-hit. Note that we get very similar results w.r.t. sample-avg-hit for 8Note the training and test lists are manually created. 9The exact avoiding word set used is {<EOS>, you, i, me, are, to, do}. Ubuntu o-sample-min-hit io-sample-min-hit Training Train Test PPL Train Test PPL Baseline 16.4% 12.6% 59.49 7.8% 5.2% 59.49 +neg-tr(0.5k) 0% 2% 60.42 0.2% 1.4% 59.97 +neg-tr(1k) 0.1% 1.4% 60.72 0.1% 1% 60.21 +neg-tr(2.5k) 0.04% 0% 62.11 0.2% 0% 63.37 Switchboard o-sample-avg-hit io-sample-avg-hit Training Train Test PPL Train Test PPL Baseline 27.8% 27.6% 42.81 19.6% 21% 42.81 +neg-tr(0.5k) 3.8% 13.4% 42.91 2.2% 9.4% 42.7 +neg-tr(1k) 2.4% 5% 42.96 2.1% 4% 42.76 +neg-tr(2.5k) 1.3% 2.6% 43.51 1.5% 1.6% 43.24 OpenSub o-sample-min-hit io-sample-min-hit Training Train Test PPL Train Test PPL Baseline 40.7% 36.6% 70.81 19.2% 13.6% 70.81 +neg-tr(0.5k) 5.8% 12.2% 77.90 5.2% 6.6% 73.48 +neg-tr(1k) 5.2% 7% 68.77 9.2% 4.6% 68.92 +neg-tr(2.5k) 4.8% 6% 74.07 3.4% 3.6% 75.9 Table 3: Main results for the hit rates of malicious targets before and after negative training. ”Neg-tr(0.5k)” refers to the negative training experiment using the original malicious training target list without paraphrase augmentation. Ubuntu/OpenSubtitles, and we omit those results here. We first observe that, for all data-sets, negative training can effectively reduce the hit rate on the training target list to less than 5% with little or no degradation on perplexity. We provide a comparison of the model’s behavior in Appendix D. Also, significant hit rate reduction is achieved on the test target list, which has no overlap with the training target list. This shows that negative training, similar to traditional positive training, also generalizes. It is also shown that training list augmentation can further reduce the malicious target hit rate consistently for both training and test lists. For example, on Ubuntu data, the hit rate after negative training w.r.t. o-sample-min-hit is 12.6%, and can be reduced to 0% with paraphrase augmentation. We find that that the model’s generation behavior in non-adversarial setting is almost the same as the baseline after negative training. 
For example, the 10-best list from beam search before/after neg-train has larger than 90% overlap. We also find that the model generates similar samples (shown in Appendix G). We believe the reason is that negative training focuses on making the model more robust with the adversarial inputs, and the original generation behavior is kept intact by the positive training (Equation 4). 2049 4.3 Experiments on the Frequent Response Problem In this section we report results where the negative training framework (Section 3.3) is applied to tackle the frequent response problem. For all datasets, negative training is executed for 20 iterations on the MLE trained model over the training data, with a selected rthres. A fixed learning rate of 0.001 is used for all three data-sets, the mini-batch size is set to 64 and λPOS is set to 1. In this work, we focus on improving the model’s greedy decoding behavior instead of beam search for the following two reasons: 1) For the baseline models our experiments, we found that beam search gives far worse response diversity than greedy decoding, because it favors short responses (usually only of length one) too much, resulting in a much larger max-ratio; 2) During training, doing beam search is much more time-consuming than greedy decoding. To measure the diversity of the model’s generated responses, in addition to max-ratio introduced in Section 3.3, which is specially design for the frequent response problem, we also adopt the entropy metric proposed in (Zhang et al., 2018). Given a set of responses from decoding on the test set, Ent-n calculates the entropy of the n-gram distribution: Ent-n = X g∈Gn −r(g) log r(g) (5) where Gn is the set of all n-grams that appeared in the response set, and r(g) refers to the ratio (frequency) of n-gram g w.r.t. all n-grams in the responses set. In our experiments with negative training, a harmful side-effect is spotted: during decoding, the model tends to output long and ungrammatical responses such as “i do n’t know if it ’s a real valid deterrent crime crime yeah i ’m satisfied trying not to”. We believe the reason is that the sentence end token <EOS> gets over penalized during negative training (it appears in every negative example). So, we apply the same frequent word avoiding (FWA) technique used in Section 4.2, except that here only the negative gradient for <EOS> is scaled by 0.110. In addition to the baseline model, we compare our proposed negative training framework against a 10We find that scal by zero will result in extremely short responses. Ubuntu rthres PPL M-ratio E-2 E-3 Test-set N/A N/A 1.1% 10.09 11.32 Baseline N/A 59.49 4.4% 5.33 5.92 +GAN N/A 59.43 4.7% 5.30 5.87 +MMI N/A N/A 4.5% 5.34 5.93 +neg-train 1% 59.76 1.2% 5.74 6.52 +neg-train 0.1% 60.06 1.3% 6.44 7.55 Switchboard rthres PPL M-ratio E-2 E-3 Test-set N/A N/A 10.0% 8.61 9.65 Baseline N/A 42.81 37.4% 2.71 2.42 +GAN N/A 42.69 49% 2.66 2.35 +MMI N/A N/A 23% 5.48 6.23 +neg-train 10% 42.84 12.4% 3.86 4.00 +neg-train 1% 44.32 9.8% 5.48 6.03 OpenSubtitles rthres PPL M-ratio E-2 E-3 Test-set N/A N/A 0.47% 9.66 10.98 Baseline N/A 70.81 20% 4.22 4.59 +GAN N/A 72.00 18.8% 4.08 4.43 +MMI N/A N/A 3.6% 7.63 9.08 +neg-train 1% 72.37 3.1% 5.68 6.60 +neg-train 0.1% 75.71 0.6% 6.90 8.13 Table 4: Main results of negative training with different rthres, for the frequent response problem. Diversity metrics for the responses in the test data are also shown, “E-n”/“M-ratio” refer to the Ent-n/max-ratio metric. 
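As a reference for how these diversity numbers can be computed, the following is a small, self-contained Python sketch of max-ratio and Ent-n over a set of decoded responses. It assumes whitespace-tokenized responses and natural-log entropy; the helper names are illustrative.

from collections import Counter
from math import log

def max_ratio(responses):
    # fraction of test inputs that receive the single most frequent response
    return Counter(responses).most_common(1)[0][1] / len(responses)

def ent_n(responses, n=3):
    # Eq. (5): entropy of the n-gram distribution over the whole response set
    grams = Counter()
    for r in responses:
        toks = r.split()
        grams.update(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
    total = sum(grams.values())
    return -sum((c / total) * log(c / total) for c in grams.values())

responses = ["i do n't know", "i do n't know", "i think it was", "you 're taking the first step"]
print(max_ratio(responses), ent_n(responses, n=2))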
GAN (Goodfellow et al., 2014a) approach, where a discriminator D is introduced and the generator G tries to fool the discriminator to believe its samples are real data samples: min G max D V (D, G) = min G max D {E(x,y)∼Pdata log D(x, y)+ Ex∼Pdata,y∼G(·|x) log(1 −D(x, y))} (6) where the generator G refers to the seq2seq model Pθ. The GAN framework is very attractive for tackling the generic response problem (Li et al., 2017; Zhang et al., 2018), because the discriminator can act as a critic to judge whether a response sample is boring. We describe the training details and hyper-parameter setting for the GAN approach in Appendix E. We also provide an comparison to the MMI decoding (Li et al., 2016), which is a very popular work in this field. We implement MMI-antiLM for our models. The experimental results are shown in Table 4. The experiment with best diversity result and nondegenerate sample quality are shown in bold. We first observe a large gap on the diversity measures between the baseline models and the test set, especially on Switchboard and OpenSubtitles data. 2050 Switchboard OpenSubtitles Input: it ’ll cost about three hundred dollars for a stud Input: captain you wanted to see me Baseline: i think that ’s a good idea Baseline: i ’m sorry Neg-train: i think i would agree with that Neg-train: i was in the hotel Input: we want to breed her with a champion Input: yes mr. brown could i Baseline: i do n’t know Baseline: i do n’t know Neg-train: i think it was Neg-train: i ’d like to introduce myself Input: now these are long haired Input: leave it to me Baseline: i do n’t know Baseline: i ’m not going to leave you Neg-train: i ’ve been in a very very good shape Neg-train: you ’re taking the first step Input: the other two are short hairs Input: thank you mr. brown Baseline: i do n’t know Baseline: i ’m sorry Neg-train: i ’m going to try to get it Neg-train: i ’m happy to see you Table 5: Greedy-decoding samples on the test data before and after negative training. The samples are consecutive (input of the next sample is the reference response for the previous one). That indicates the severity of the frequent/generic response problem. Then, results of negative training with different rthres show that negative training can significantly increase response diversity, with little or no loss on PPL or BLEU score (shown in Appendix F) performance. For example, maxratio is reduced by 73.7% and Ent-3 is increased by 149% for Switchboard data. Further, consistent improvement is achieved when a smaller rthres is used. However, sample quality will decrease (becoming too long or ungrammatical) when rthres is too small. The reason could be that when too much diversity is asked for, the model will go to extremes to provide diversity, resulting in degradation of sample quality. Comparing to MMI, note that although on Switchboard/Opensubtitles MMI gives higher entropy, the max-ratio is not as low as the negative training result, which is the main focus of our work (the frequent response problem). We also find MMIs hyper-parameters are difficult to tune: the working set of hyper-parameters dont transfer well between data-sets. Further, for MMI in a lot of configuration tries the model gives ungrammatical output samples (this is problem is also mentioned in the paper (Li et al., 2016)). For the Ubuntu data, we can not even find a configuration that performs better than the baseline model. Further, the vanilla GAN approach is not shown to be effective in our experiments. 
The reason could be that despite its discriminative nature, GAN training still feeds “positive” gradient for samples from the model (eq. (11) and eq. (12) in Appendix E), which is not enough to prevent the model from generating them. We believe additional techniques (Zhang et al., 2018; Li et al., 2017) are needed for the GAN approach to be effective. We show some model samples before and after negative training in Table 5. It is shown that negative training effectively discourages boring responses, and response diversity is improved. However, one limitation is observed that diversity does not necessarily lead to improvement on the informativeness of the response w.r.t. the input (sometimes the model generates a completely unrelated response). More samples for all three data-sets are included in Appendix G. To rigorously verify negative training is not getting diversity when sacrificing the sample’s quality, a human evaluation is conducted and results are shown in Table 6. It is observed that negative training wins by a significant margin for all three data-sets. This shows that, negative training does not damage the quality of the generated samples. Note that the human evaluation does not reflect the diversity of the model, because the raters only rate one response at a time. 5 Related Works The malicious response problem and the gibbsenum algorithm to find trigger inputs (He and Glass, 2019) originates from a large body of work on adversarial attacks for deep learning models, with continuous input space (e.g. image classification) (Goodfellow et al., 2014b; Szegedy et al., 2013), or discrete input space (e.g. sentence classification, or 2051 Data-set Tie Baseline Neg-train Ubuntu 64.6% 14.0% 21.3% Switchboard 45.1% 18.3% 36.4% Opensubtitles 58.3% 19.0% 22.6% Table 6: Human Evaluation Results. For each dataset, 300 samples (input-output pairs) from the baseline model and the model after negative training, are evenly distributed to 4 English-speaking human evaluators. The evaluators are asked to pick a preferred sample, or report a tie. This evaluation is to check whether negative training has hampered the quality of the generation. seq2seq models) (Papernot et al., 2016; Samanta and Mehta, 2017; Liang et al., 2018; Ebrahimi et al., 2017; Belinkov and Bisk, 2017; Chen et al., 2017). “Adversarial attacks” refer to the phenomenon that when an imperceptible perturbation is applied to the input, the output of the model can change significantly (from correct to incorrect). The trigger inputs found by the gibbs-enum algorithm, can be regarded as a type of “targeted attack”, in which the attack triggers the model to assign large probability to a specific malicious target sentence. Motivated by the works on adversarial attacks, various adversarial training strategies (Madry et al., 2017; Belinkov and Bisk, 2017; Miyato et al., 2016) have been proposed to make trained models more robust against those attacks. During adversarial training, the model is fed with adversarial examples and the correct labels. The negative training framework considered in this work differs from adversarial training in that, instead of asking the model to “do the right thing” (referred to as “positive training” in this work), the model is trained to “not do the wrong thing”. To the best of our knowledge, this is the first work investigating the concept of negative training for dialogue response models, and the first proposed solution for the malicious response problem. 
The malicious target list used in this work is very similar to the one used in (He and Glass, 2019). We propose to add a test target list to test the generalization of negative training. Further, we show that the training list can be effectively augmented by utilizing a paraphrase model. In this work, we propose a definition for the frequent response problem, as a sub-problem of the generic response problem (Li et al., 2016). Much research work has devoted to alleviate the generic response problem in end-to-end dialogue response generation, (Li et al., 2016) use the maximal mutual information (MMI) objective, and propose to utilize an auxiliary LM to penalize the generic response during decoding. Closely related to this work, sophisticated training frameworks based on GAN (Zhang et al., 2018; Li et al., 2017) have also been shown to be effective, where techniques such as variational information maximization or reward for every generation step (REGS) are proposed to improve GAN training. However, in our experiments it is shown that a vanilla GAN approach gives unsatisfactory results. Whether negative training11 is complementary to these frameworks is worth investigating in future work. Finally, note that the concept of negative training in this work is very different to the negative samples in word2vec training (Mikolov et al., 2013). The negative samples in word2vec training are used to prevent the training from being trivial, and is usually chosen randomly. In this work, the negative samples are carefully chosen to exhibit some particular undesirable behavior of the model, and is then used to correct such behavior. 6 Conclusion In this work, we propose the negative training framework to correct undesirable behaviors of a trained neural dialogue response generator. The algorithm involves two major steps, first input-output pairs that exhibit bad behavior are identified, and then are used for fine-tuning the model as negative training examples. We also show that negative training can be derived from an overall objective (eq. (2)) to minimize the expected risk of undesirable behaviors. In our experiments, we apply negative training to the malicious response problem and the frequent response problem and get significant improvement for both problems. References Yonatan Belinkov and Yonatan Bisk. 2017. Synthetic and natural noise both break neural machine translation. CoRR, abs/1711.02173. Hongge Chen, Huan Zhang, Pin-Yu Chen, Jinfeng Yi, and Cho-Jui Hsieh. 2017. Show-and-fool: Crafting adversarial examples for neural image captioning. CoRR, abs/1712.02051. Minhao Cheng, Jinfeng Yi, Huan Zhang, Pin-Yu Chen, and Cho-Jui Hsieh. 2018. Seq2sick: Evaluating the 11Note that negative training is considerably easier to implement than the mentioned frameworks based on GAN. 2052 robustness of sequence-to-sequence models with adversarial examples. CoRR, abs/1803.01128. Kyunghyun Cho, Bart van Merri¨enboer, C¸ alar G¨ulc¸ehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724– 1734, Doha, Qatar. Association for Computational Linguistics. Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. 2017. Hotflip: White-box adversarial examples for NLP. CoRR, abs/1712.06751. Ian J. 
Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014a. Generative adversarial nets. In Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2, NIPS’14, pages 2672–2680, Cambridge, MA, USA. MIT Press. Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. 2014b. Explaining and harnessing adversarial examples. CoRR, abs/1412.6572. Tianxing He and James Glass. 2019. Detecting egregious responses in neural sequence-to-sequence models. In International Conference on Learning Representations. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1746–1751. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting objective function for neural conversation models. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego California, USA, June 12-17, 2016, pages 110–119. Jiwei Li, Will Monroe, Tianlin Shi, Alan Ritter, and Dan Jurafsky. 2017. Adversarial learning for neural dialogue generation. CoRR, abs/1701.06547. Bin Liang, Hongcheng Li, Miaoqiang Su, Pan Bian, Xirong Li, and Wenchang Shi. 2018. Deep text classification can be fooled. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, July 13-19, 2018, Stockholm, Sweden., pages 4208–4215. Chia-Wei Liu, Ryan Lowe, Iulian Serban, Mike Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2122–2132. Association for Computational Linguistics. Ryan Lowe, Nissan Pow, Iulian Serban, and Joelle Pineau. 2015. The ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems. CoRR, abs/1506.08909. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412–1421. Association for Computational Linguistics. Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2017. Towards deep learning models resistant to adversarial attacks. CoRR, abs/1706.06083. Tomas Mikolov, Martin Karafi´at, Luk´as Burget, Jan Cernock´y, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In INTERSPEECH 2010, 11th Annual Conference of the International Speech Communication Association, Makuhari, Chiba, Japan, September 26-30, 2010, pages 1045–1048. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. In Proceedings of the 26th International Conference on Neural Information Processing Systems Volume 2, NIPS’13, pages 3111–3119, USA. Curran Associates Inc. Takeru Miyato, Andrew M. Dai, and Ian Goodfellow. 2016. 
Adversarial training methods for semi-supervised text classification. Cite arxiv:1605.07725Comment: Published as a conference paper at ICLR 2017. Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics - Volume 1, ACL ’03, Stroudsburg, PA, USA. Association for Computational Linguistics. Nicolas Papernot, Patrick D. McDaniel, Ananthram Swami, and Richard E. Harang. 2016. Crafting adversarial input sequences for recurrent neural networks. In 2016 IEEE Military Communications Conference, MILCOM 2016, Baltimore, MD, USA, November 1-3, 2016, pages 49–54. Suranjana Samanta and Sameep Mehta. 2017. Towards crafting text adversarial samples. CoRR, abs/1707.02812. 2053 Rupesh Kumar Srivastava, Klaus Greff, and J¨urgen Schmidhuber. 2015. Highway networks. CoRR, abs/1505.00387. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pages 3104–3112. Richard S. Sutton and Andrew G. Barto. 1998. Introduction to Reinforcement Learning, 1st edition. MIT Press, Cambridge, MA, USA. Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J. Goodfellow, and Rob Fergus. 2013. Intriguing properties of neural networks. CoRR, abs/1312.6199. J¨org Tiedemann. 2009. News from OPUS - A collection of multilingual parallel corpora with tools and interfaces. In N. Nicolov, K. Bontcheva, G. Angelova, and R. Mitkov, editors, Recent Advances in Natural Language Processing, volume V, pages 237–248. John Benjamins, Amsterdam/Philadelphia, Borovets, Bulgaria. John Wieting and Kevin Gimpel. 2017. Pushing the limits of paraphrastic sentence embeddings with millions of machine translations. CoRR, abs/1711.05732. Lijun Wu, Yingce Xia, Li Zhao, Fei Tian, Tao Qin, Jianhuang Lai, and Tie-Yan Liu. 2017. Adversarial neural machine translation. CoRR, abs/1704.06933. Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. 2016. Seqgan: Sequence generative adversarial nets with policy gradient. CoRR, abs/1609.05473. Yizhe Zhang, Michel Galley, Jianfeng Gao, Zhe Gan, Xiujun Li, Chris Brockett, and Bill Dolan. 2018. Generating informative and diverse conversational responses via adversarial information maximization. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems 31, pages 1815–1825. Curran Associates, Inc. 2054 A The Gibbs-enum Algorithm for Finding Trigger Inputs In this section, we briefly describe the gibbs-enum algorithm, we also refer readers to (He and Glass, 2019) for the intuition and full development of the algorithm. The goal of gibbs-enum is that given a (malicious) target sentence y of length m, and a trained seq2seq model, we aim to find a trigger input sequence x, which is a sequence of one-hot vectors {xt} of length n, to minimize the negative log-likelihood (NLL) that the model will generate y. We formulate our objective function L(x; y) below: L(x; y) = −1 m m X t=1 log Pseq2seq(yt|y<t, x)+λinR(x) (7) A regularization term R(x) is applied when looking for io-sample-min/avg-hit, which is the LM score of x: R(x) = −1 n n X t=1 log PLM(xt|x<t) (8) In our experiments we set λin to 1 when searching for io-sample-min/avg-hit, otherwise 0. 
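For clarity, the objective of Equations (7) and (8) can be written as a small helper. The wrapper functions seq2seq_logprob and lm_logprob are assumptions, not part of the original code; each is expected to return the summed token log-probability of its arguments under the corresponding trained model.

def gibbs_enum_objective(seq2seq_logprob, lm_logprob, x, y, lambda_in=1.0):
    # Eq. (7): length-normalized target NLL under the seq2seq model, plus the
    # LM regularizer of Eq. (8) that keeps the trigger input fluent
    # lambda_in is 1 when searching for io-sample-min/avg-hit and 0 otherwise
    target_nll = -seq2seq_logprob(x, y) / len(y)
    input_nll = -lm_logprob(x) / len(x)
    return target_nll + lambda_in * input_nll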
During gibbs-enum, every time we focus on a single index slot xt, and find the best one-hot xt while keeping the other parts of x fixed: arg min xt L(x<t, xt, x>t; y) (9) Since the size of vocabulary |V | is finite, it is possible to try all of them and get the best local xt. But it is still costly since each try requires a forwarding call to the neural seq2seq model. To address this, gradient information is utilized to narrow the range of search. We temporarily regard xt as a continuous vector and calculate the gradient of the negated loss function with respect to it: ∇xt(−L(x<t, xt, x>t; y)) (10) Then, we try only the G indexes that have the highest value on the gradient vector. The procedure is formulated in Algorithm 3. For hyper-parameters of gibbs-enum, T (the maximum number of sweeps) is set to 5, G (size of the set of indices for enumeration during each update) is set to 100, the algorithm is run 5 times with different random initializations and the trigger input with the best loss is returned. Note that larger hyper-parameters can give slightly higher hit rates, but will be more time-consuming. Algorithm 3 Gibbs-enum algorithm Input: a trained seq2seq model, target sequence y, a trained LSTM LM, objective function L(x; y), input length n, output length m, and target hit type. Output: a trigger input x∗ if hit type is in “io-hit” then initialize x∗to be a sample from the LM else randomly initialize x∗to be a valid input sequence end if for s = 1, 2, . . . , T do for t = 1, 2, . . . , n do get gradient ∇x∗ t (−L(x∗ <t, x∗ t , x∗ >t; y)), and set list H to be the G indexes with highest value in the gradient vector for j = 1, 2, . . . , G do set x′ to be: concat(x∗ <t, one-hot(H[j]), x∗ >t) if L(x′; y) < L(x∗; y) then set x∗= x′ end if end for end for if this sweep has no improvement for L then break end if end for return x∗ 2055 B Data-set Descriptions Three publicly available conversational dialogue data-sets are used: Ubuntu, Switchboard, and OpenSubtitles. The Ubuntu Dialogue Corpus (Lowe et al., 2015) consists of two-person conversations extracted from the Ubuntu chat logs, where a user is receiving technical support from a helping agent for various Ubuntu-related problems. To train the baseline model, we select the first 200k dialogues for training (1.2M sentences / 16M words), and the next 5k dialogues for validation and testing respectively. We select the 30k most frequent words in the training data as our vocabulary, and out-of-vocabulary (OOV) words are mapped to the <UNK> token. The Switchboard Dialogue Act Corpus 12 is a version of the Switchboard Telephone Speech Corpus, which is a collection of two-sided telephone conversations, annotated with utterance-level dialogue acts. In this work we only use the conversation text part of the data, and select 1.1k dialogues for training (181k sentences / 1.2M words), 25 dialogues for validation and 25 dialouges for testing. We select the 10k most frequent words in the training data as our vocabulary. We also report experiments on the OpenSubtitles data-set13 (Tiedemann, 2009). The key difference between the OpenSubtitles data and Ubuntu/Switchboard data is that it contains a large number of malicious sentences, because the data consists of movie subtitles. We randomly select 5k movies for training (each movie is regarded as a big dialogue), which contains 5M sentences and 36M words, and 50 movies for validation and testing respectively. The 30k most frequent words are used as the vocabulary. 
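For readers who want a concrete picture of the search loop, below is a simplified, self-contained PyTorch sketch of the gradient-guided coordinate updates in Algorithm 3. It takes an arbitrary differentiable loss over a one-hot input matrix; a real run would plug in the objective of Eq. (7) built from the trained seq2seq model and LM and use the random restarts described above. All names are illustrative.

import torch

def gibbs_enum_sweeps(loss_fn, x_onehot, sweeps=5, top_g=100):
    # loss_fn: maps a (n, |V|) one-hot (or relaxed) input matrix to a scalar
    # differentiable loss L(x; y); x_onehot: the initial trigger input
    x = x_onehot.clone()
    best = loss_fn(x).item()
    for _ in range(sweeps):
        improved = False
        for t in range(x.size(0)):                                  # one slot at a time
            x_var = x.clone().requires_grad_(True)
            grad = torch.autograd.grad(-loss_fn(x_var), x_var)[0]   # Eq. (10)
            candidates = grad[t].topk(min(top_g, x.size(1))).indices
            for w in candidates:                                    # enumerate the G words
                x_try = x.clone()
                x_try[t].zero_()
                x_try[t, w] = 1.0
                trial = loss_fn(x_try).item()
                if trial < best:                                    # keep the best local word
                    best, x, improved = trial, x_try, True
        if not improved:                                            # no gain in this sweep
            break
    return x, best

# toy usage: 4 slots over a 50-word vocabulary, loss defined in embedding space
emb, target = torch.randn(50, 16), torch.randn(16)
loss_fn = lambda x: ((x @ emb).mean(0) - target).pow(2).sum()
x0 = torch.zeros(4, 50)
x0[torch.arange(4), torch.randint(0, 50, (4,))] = 1.0
x_best, loss_best = gibbs_enum_sweeps(loss_fn, x0)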
We show some samples of the three data-sets in Appendix C. For pre-processing, the text of all three data-sets are lower-cased, and all punctuations are removed. The maximum input sequence length is set to 15, with a maximum output sequence length of 20. Longer input sentences are cropped, and shorter input sentences are padded with <PAD> tokens. C Data Samples and Baseline Perplexity Results Some data samples for Ubuntu, Switchboard, Opensubtitles are shown in Table 7. 12http://compprag.christopherpotts.net/swda.html 13http://www.opensubtitles.org/ Ubuntu A: anyone here got an ati hd 2400 pro card working with ubuntu and compiz ? B: i have an hd 3850 A: is it working with compiz ? Switchboard A: what movies have you seen lately B: lately i ’ve seen soap dish A: oh B: which was a A: that was a lot of fun OpenSubtitles B: you ca n’t do that . A: my husband ’s asleep . B: your husband know you ’re soliciting ? A: give us a f*** ’ break . Table 7: Data samples of Ubuntu, Switchboard and OpenSubtitles Dialogue corpus Model Test-PPL(NLL) Ubuntu Switchboard OpenSubtitles LM 66.29(4.19) 44.37(3.79) 74.74(4.31) Seq2seq 59.49(4.08) 42.81(3.75) 70.81(4.26) Table 8: Perplexity (PPL) and negative log-likelihood (NLL) of for baseline models on the test set. Baseline perplexity results are shown Table 8. Note that Tin and Tout for various types of hit types discussed in Section 3.2 are set accordingly, for example, for io-sample-min-hit on the Ubuntu data, Tin is set to -4.19, and Tout is set to -4.08. D Auxiliary Experiment Results for the Malicious Response Problem We compare the models behavior before and after negative training in Figure 1. It is shown that negative training effectively reduce probability mass assigned to malicious targets, while keeping the behavior on the test-set unchanged. However, almost every word in the malicious target sentences gets lower probability, especially when FWA is not used. Ideally, we believe a “polite” language generator should only assign low probability to the key words in a malicious sentence. For example, in the target “i shall take my revenge”, only the “take my revenge” part should be penalized. Whether negative training has the potential to truly 2056 this will be the end of you <EOS> i will not help you <EOS> i shall take my revenge <EOS> i do n't want to help you <EOS> i hate to see you <EOS> 0 2 4 6 malicious targets T_out baseline neg-tr w.o FWA neg-tr with FWA good evening giovanni <EOS> it 's not good anything <EOS> what 's the matter <EOS> i 've got a terrible overhang <EOS>the word is hangover <EOS> whateverit is i 've got it <EOS> 0.0 2.5 5.0 7.5 10.0 12.5 test-set baseline neg-tr w.o FWA neg-tr with FWA Figure 1: Negative Log-probability (NLL) the model assigned to the test list malicious targets (when fed with trigger inputs) or test data samples. The data-set is OpenSubtitles and hit type is io-sample-min-hit. Sentences are separated by <EOS>. teach “manners” to a language generator is worth further investigation. E Configurations of the GAN Approach for Dialogue Response Generation We use the log derivative trick (Wu et al., 2017) for the gradient derivation of the generator: ∇θGV (D, G; x) =∇θGEy∼G(·|x) log(1 −D(x, y)) =Ey∼G(·|x)∇θG log G(y|x) log(1 −D(x, y)) (11) where x is one input data sample. Then the generator is updated by: θG ←θG −αG · ∇θGV (D, G) (12) where αG is the learning rate for the generator. Note that because log(1 −D(x, y)) is negative, ∇θG log G(y|x) will be eventually scaled positively and added to θG. 
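A small worked example of how these thresholds follow from Table 8: the values are the negated natural log of the test perplexities, since NLL = log(PPL). The helper name here is ours.

import math

def hit_thresholds(seq2seq_test_ppl, lm_test_ppl):
    # T_out / T_in are the average per-word log-likelihoods on the test set,
    # i.e. the negated natural log of the perplexities in Table 8
    return -math.log(lm_test_ppl), -math.log(seq2seq_test_ppl)

t_in, t_out = hit_thresholds(seq2seq_test_ppl=59.49, lm_test_ppl=66.29)
print(t_in, t_out)   # approx. -4.19 and -4.09, matching the values above up to rounding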
In our GAN experiments, different values in the set {0.01, 0.001, 0.0001} are tried for αG and the best result is reported. We now describe the model configuration of the discriminator D(x, y) used in our work. The discriminator model configuration is similar to the one used in (Yu et al., 2016). First xt is converted to xemb t as described in Section 2. Then a 1Dconvolution operation and max-over-time pooling operation (Kim, 2014) is applied, with 300 filters of window size 3/4/5/6, respectively. The resulting representation vector is denoted as xrep. . The same network forward pass is also applied for y to get yrep. Finally, xrep and yrep are concatenated and passed to a 3-layer high-way DNN classifier (Srivastava et al., 2015) of hidden size 2000. Following (Goodfellow et al., 2014a), we alternately train the discriminator and the generator with a ratio of 3:1. The discriminator is trained with a learning rate of 0.01. Similar to negative training, our experiments show that positive training (or “teacher forcing” in some literature) is crucial to aid the model to maintain its original performance, during GAN training. F Auxiliary Experiment Results for the Frequent Response Problem In Talbe 9, we show BLEU-4 scores for the model after negative training. It is shown that the BLEU4 performance does not become worse (or even improves) after negative training. This result, to some extent, verifies our claim that the quality of model samples is not compromised. G More Model Samples for Comparison We show the model samples before/after negative training for the malicious response problem in Talbe 10. It is shown that the model’s generation behavior does not change much. For the frequent response problem, we show 2057 Ubuntu PPL BLEU-4 M-ratio Baseline 59.49 0.443 4.4% Neg-train 60.06 0.464 1.3% Switchboard PPL BLEU-4 M-ratio Baseline 42.81 0.476 37.4% Neg-train 44.32 0.475 9.8% OpenSubtitles PPL BLEU-4 M-ratio Baseline 70.81 0.502 20% Neg-train 75.71 0.526 0.6% Table 9: BLEU-4 results for negative training. Input: he had worked for united fruit company there Baseline: it was kind of a frightening senators Neg-train: it was good housekeeping Input: uh-huh Baseline: right Neg-train: right Input: they in fact they had Baseline: golly Neg-train: joe came back from alabama Input: they in fact used some of the equipment that had been left over Baseline: i do n’t think i ’m fanatic very many Neg-train: i do n’t know Input: he turned them down Baseline: i love the game Neg-train: i love it Table 10: Consecutive model samples before and after negative training for preventing malicious response on Switchboard data. more decoding samples for comparison of different training methods in Table 11, 12 and 13. 
Switchboard Input: people do n’t realize it Baseline: i do n’t know GAN: i do n’t know Neg-train: i think it ’s a very interesting Input: they ’re sort of siamese in a way Baseline: uh-huh GAN: uh-huh Neg-train: yeah Input: they have the gloves on the paws Baseline: uh-huh GAN: uh-huh Neg-train: i ’m really glad that they do n’t have that Input: they ’re usually two colored Baseline: i think that ’s a good idea GAN: i think that ’s a good idea Neg-train: yeah Input: she ’s predominantly black Baseline: i do n’t know GAN: he ’s a little bit Neg-train: he ’s a very good actor Input: she has chocolate paws chocolate stomach and silver on her hindquarters Baseline: uh-huh GAN: uh-huh Neg-train: i think they have a great deal of <unk> Input: she ’s quite attractive looking Baseline: he ’s a good actor GAN: he ’s a little bit Neg-train: he ’s a good guy Input: hope the stud will find her attractive Baseline: i do n’t know GAN: i do n’t know Neg-train: i ’ve been in a very very good shape Input: that she has kittens that look like her Baseline: i think that ’s a good idea GAN: i think that ’s a good idea Neg-train: yeah Input: that she has a whole bunch of kittens Baseline: i think that ’s a good idea GAN: i think that ’s a good idea Neg-train: yeah Input: we have to be careful Baseline: i ’m not sure GAN: i ’m not sure Neg-train: i ’m in the dallas area Input: we have to get a particular color point stud Baseline: i do n’t know GAN: i do n’t know Neg-train: i ’m just going to Input: you ca n’t just breed them with anything Baseline: i do n’t know GAN: i do n’t know Neg-train: i have n’t been to a lot of the movies lately Input: we have to get a silver point himie Baseline: i do n’t know GAN: i do n’t know Neg-train: i ’m going to have to go back to work Table 11: Consecutive model samples before and after negative training for Switchboard data. 
2058 Ubuntu Input: just how good ubuntu Baseline: i ’m not sure i ’m just a <unk> GAN: i ’m not sure if it ’s possible but i ’m sure there ’s a way to do it Neg-train: i have a server that i want to install on a server Input: help please i broke xp with linux url i ca n’t get an invite Baseline: what is the problem GAN: what is the problem Neg-train: what is the issue Input: the case it ’s possible that you might need to repair the xp installation Baseline: i have a <unk> and i want to install ubuntu on a usb drive GAN: i have a <unk> and i want to install ubuntu on a usb drive Neg-train: i have a live cd and i want to install it on a hard drive Input: but that takes hours Baseline: i do n’t know i ’m not sure GAN: i do n’t know Neg-train: i ’m not sure it ’s a hardware problem Input: how to i enable tv-out via fglrx Baseline: you can use the <unk> driver GAN: you can use the nvidia driver from the wiki Neg-train: you can try the <unk> driver Input: they may be able to help you in channel # ati Baseline: thanks GAN: thanks Neg-train: i have a ati radeon mobility radeon hd Input: nobody feels like responding in # ati Baseline: i have no idea what you ’re talking about GAN: i have no idea Neg-train: i think i found it Input: its always worked perfectly for me why ’s it crap Baseline: i do n’t know i ’m not sure GAN: i do n’t know i ’m not sure Neg-train: i do n’t know i just use it to make it work Input: i have seen better performance on an xp vm under parallels Baseline: i have a <unk> and i have a <unk> GAN: i have a <unk> and a <unk> <unk> Neg-train: i have a vm with vmware server running on it Input: use 2kpro and no issues whatsoever with 256mb of ram and a 10gig vdrive Baseline: i have a <unk> and it ’s not a big deal GAN: i have a <unk> and it ’s not a big deal Neg-train: i have a p4 with 1gb ram and i have a 1gb swap Input: will someone please help me with php config on ubuntu Baseline: what ’s the problem GAN: what ’s the problem Neg-train: what ’s wrong Input: you are having trouble with if someone knows how to help they will try Baseline: i ’m trying to get a <unk> to work with ubuntu GAN: i ’m not sure i ’m just trying to help you Neg-train: i ’m trying to get a <unk> of my <unk> and i want to know if it ’s possible Table 12: Consecutive model samples before and after negative training for Ubuntu data. 
Opensubtitiles Input: i feel really bad marco so bad Baseline: i ’m sorry GAN: i ’m not a <unk> Neg-train: i feel so bad Input: me at least that you ’ll come back tell me you ’re not ill Baseline: i ’m not going to be in the house GAN: i ’m not going to let you go Neg-train: i ’m so glad you are Input: tell me you forgive me Baseline: i ’m not going to let you go GAN: i ’m not going to leave you Neg-train: you ’re not the only one Input: will you forgive me Baseline: i ’ll be all right GAN: i ’ll be all right Neg-train: you ’ll never see the Input: what do you want Baseline: i ’m going to the hospital GAN: i ’m going to the <unk> Neg-train: i want to go Input: get up hurry i got an idea Baseline: i ’m not going to be a <unk> GAN: i ’m not going to see you Neg-train: you ’re not getting a bad idea Input: where are we going Baseline: i do n’t know GAN: to the <unk> Neg-train: to the hotel Input: shut up and keep down Baseline: i ’m not going to get a doctor GAN: i ’m not going to get a doctor Neg-train: we ’re not going Input: we ’re screwed there ’s no wind Baseline: i ’m not going to see you GAN: i ’m not a <unk> Neg-train: we ’re in the <unk> Input: how ’d your old lady come to think you was kidnapped Baseline: i do n’t know GAN: i do n’t know Neg-train: i ’d like to ask you to take a look at it Input: your son is in our hands Baseline: i ’m not going to GAN: he ’s not here Neg-train: he ’s not going to leave Input: we ’ll stay in touch Baseline: i ’m sorry GAN: i ’m not going to Neg-train: i think we ’ve got a <unk> Input: get the money ready or else Baseline: i ’m not going to be a <unk> GAN: i ’m not going to be here Neg-train: i want the <unk> Input: i think they got it Baseline: you know what GAN: oh Neg-train: it ’s a good thing Input: you wanted to scare them Baseline: i do n’t know GAN: i ’m not a <unk> Neg-train: i ’m a coward Table 13: Consecutive model samples before and after negative training for Opensubtitles data.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2059–2064 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2059 Recursive Template-based Frame Generation for Task Oriented Dialog Rashmi Gangadharaiah AWS AI, Amazon [email protected] Balakrishnan Narayanaswamy AWS AI, Amazon [email protected] Abstract The Natural Language Understanding (NLU) component in task oriented dialog systems processes a user’s request and converts it into structured information that can be consumed by downstream components such as the Dialog State Tracker (DST). This information is typically represented as a semantic frame that captures the intent and slot-labels provided by the user. We first show that such a shallow representation is insufficient for complex dialog scenarios, because it does not capture the recursive nature inherent in many domains. We propose a recursive, hierarchical frame-based representation and show how to learn it from data. We formulate the frame generation task as a template-based tree decoding task, where the decoder recursively generates a template and then fills slot values into the template. We extend local tree-based loss functions with terms that provide global supervision and show how to optimize them end-to-end. We achieve a small improvement on the widely used ATIS dataset and a much larger improvement on a more complex dataset we describe here. 1 Introduction The output of an NLU component is called a semantic or dialog frame (Hakkani-T¨ur et al., 2016). The frame consists of intents which capture information about the goal of the user and slot-labels which capture constraints that need to be satisfied in order to fulfill the users’ request. For example, in Figure 1, the intent is to book a flight (atis flight) and the slot labels are the from location, to location and the date. The intent detection task can be modeled as a classification problem and slot labeling as a sequential labeling problem. The ATIS (Airline Travel Information System) dataset (Hakkani-T¨ur et al., 2010) is widely used for evaluating the NLU component. We focus on complex aspects of dialog that occur in real-world Intent: atis_flight Slot-labels: from pittsburgh i’d like to travel to atlanta on september fourth O fromloc.city_name O O O O O toloc.city_name O depart_date.month depart_date.day Figure 1: Flat structures used to represent Intents and slot labels in ATIS. ‘O’ for Other or irrelevant tokens. scenarios but are not captured in ATIS or other alternatives such as, DSTC (Henderson et al., 2014) or SNIPS 1. As an example, consider a reasonable user utterance, “can i get two medium veggie pizza and one small lemonade” (Figure 2A). The intent is OrderItems. There are two items mentioned, each with three properties. The properties are the name of the item (veggie pizza, lemonade), the quantity of the item (two, one) and size of the item (medium, small). These properties need to be grouped together accurately to successfully fulfill the customer’s request - the customer would not be happy with one small veggie pizza. This structure occurs to a limited extent in the ATIS dataset (Figure 2B), which has specific forms such as, from loc.city name and to loc.city name, which must be distinguished. However, the scale is small enough that these can be separate labels and multi-class slot-labeling approaches that predict each specific form as a separate class (Figure 1) have had success. 
In more open domains, this hierarchy-to-multi-class conversion increases the number of classes exponentially vs. an approach that appropriately uses available structure. Further, hierarchical relationships, e.g. between fromloc and city name, are ignored, which limits the sharing of data and statistical strength across labels. The contributions of this paper are as follows: • We propose a recursive, hierarchical framebased representation that captures complex relationships between slots labels, and show how to 1https://github.com/snipsco/nlubenchmark/tree/master/2017-06-custom-intent-engines 2060 atis_flight fromloc toloc depart_date pittsburgh atlanta month_name day_name september fourth city_name city_name OrderItems item item item item quantity size one small quantity size two medium item name name veggie pizza lemonade from pittsburgh i'd like to travel to atlanta on september fourth can i get two medium veggie pizza and one small lemonade A B Figure 2: Hierarchical relationships between slot labels and intents. A: simulated dataset, B: ATIS dataset. learn this representation from raw user text. This enables sharing statistical strength across labels. Such a representation (Figure 3) also allows us to include multiple intents in a single utterance (Gangadharaiah and Narayanaswamy, 2019; Kim et al., 2017; Xu and Sarikaya, 2013). • We formulate frame generation as a templatebased tree-decoding task (Section 3). The value or positional information at each terminal (represented by a $) in the template generated by the tree decoder is predicted (or filled in) using a pointer to the tokens in the input sentence (Vinyals et al., 2015; Jia and Liang, 2016). This allows the system to copy over slot values directly from the input utterance. • We extend (local) tree-based loss functions with global supervision (Section 3.5), optimize jointly for all loss functions end-to-end and show that this improves performance (Section 4). 2 Related Work Encoder-Decoder architectures, e.g. Seq2Seq models (Sutskever et al., 2014), are a popular class of approaches to the problem of mapping source sequences (here words) to target sequences (here slot labels) of variable length. Seq2Seq models have been used to generate agent responses without the need for intermediate dialog components such as the DST or the Natural Language Generator (Gangadharaiah et al., 2018). However, there has not been much work that uses deeper knowledge of semantic representations in task-oriented dialog. A notable exception is recent work by Gupta et.al (2018), who used a hierarchical representation for dialog that can be easily parsed by off-the-shelf constituency-based parsers. Neural constituency parsers (Socher et al., 2011; Shen et al., 2018) work directly off the input sentence, and as a result, different sentences with the same meaning end up having different syntactic structures. 
[Figure 3: Representations proposed in this paper for an example from the ATIS dataset.
Example: "from pittsburgh i'd like to travel to atlanta on september fourth"
Bracketed representation: ( atis_flight ( fromloc ( city_name ( pittsburgh ) ) toloc ( city_name ( atlanta ) ) depart_date ( month ( september ) day ( fourth ) ) ) )
Flat (dialog frame) representation: { "atis_flight": { "fromloc": { "city_name": "pittsburgh" }, "toloc": { "city_name": "atlanta" }, "depart_date": { "month_name": "september", "day_number": "fourth" } } }
Tree representation: atis_flight with children fromloc (city_name: pittsburgh), toloc (city_name: atlanta) and depart_date (month_name: september, day_name: fourth).]

We define a recursive, hierarchical, frame-based representation that allows us to exploit some of the structure in natural language while allowing end-to-end training. Our template-based generation is similar to sketch-based Seq2Tree decoding (Dong and Lapata, 2018) developed for SQL query generation, where the decoder predicts a rough sketch of the meaning, omitting low-level details such as arguments and variable names. Here, we generate templates that generalize slot values by their labels.

3 Proposed Approach

We learn to map a user's utterance x = {x_1, x_2, ..., x_n} to a template-based tree representation (Figure 2), specifically the bracketed representation in Figure 3. We denote the symbols in the bracketed representation by y = {y_1, y_2, ..., y_m}. The translation from x to y is performed using four components that are jointly trained end-to-end: (1) an encoder, (2) a slot decoder, (3) a tree decoder (Figure 4) and (4) a pointer network. Each of these components is briefly explained below.

3.1 Encoder: We use BERT (Devlin et al., 2019) as the encoder to obtain token embeddings, which are fine-tuned during the end-to-end learning. This can be replaced with any other choice of embedding.

3.2 Slot Decoder: The slot decoder accepts embeddings from the encoder, is deep, and has a dense final layer which predicts the slot label for each token position \hat{a} = \hat{a}_1, \hat{a}_2, ..., \hat{a}_n. The true slot label a = a_1, a_2, ..., a_n is the general form of the label. For example, city_name, month_name and day_name are the general forms obtained from fromloc.city_name, toloc.city_name, depart_date.month_name and depart_date.day_name.

[Figure 4: Proposed architecture. The BERT encoder ([CLS] plus token embeddings) feeds a slot decoder that predicts BIO tags (e.g., O O B-city_name O ... B-city_name O B-month_name B-day_name O) and a template-based tree decoder that expands ROOT into atis_flight, the fromloc/toloc/depart_date non-terminals (NT) and $-prefixed terminals carrying positions into the input (e.g., $city_name position=2).]

The decoder learns to predict Begin-Inside-Outside (BIO) tags, since this allows the tree decoder to focus on producing a tree form and requires the slot decoder to perform boundary detection. The slot decoder is trained to minimize a supervised loss,

loss_{SL} = -\frac{1}{n} \sum_{i=1}^{n} \log \pi_{SL}(a_i \mid \hat{a}_{<i}, x)    (1)

where \pi_{SL} is the output of the softmax layer at output position i and \hat{a}_{<i} represents the slot labels predicted up to position i-1.

3.3 Template-based Tree Decoder

The tree decoder works top down as shown in Figure 4. Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) models are used to generate tokens and symbols. In the example shown in Figure 4, the decoder generates atis_flight NT. Here, the NT symbol stands for a non-terminal.
When a non-terminal is predicted, the subsequent symbol or token is predicted by applying the decoder to the hidden vector representation of the non-terminal. Table 1 walks through this process with an example. Each predicted NT enters a queue and is expanded when popped from the queue. This process continues until no more NTs are left to expand. The loss function is

loss_T = -\frac{1}{S} \sum_{s=1}^{S} \frac{1}{T_s} \sum_{t=1}^{T_s} \log \pi_{TD}(z^s_t \mid z^s_{<t}, z^s, x)    (2)

where S refers to the size of the queue for a given training example, T_s refers to the number of nodes (or children) to be generated for a non-terminal z^s in the queue, z^s_t represents the t-th child of the non-terminal z^s, and z^s_{<t} refers to the left siblings of z^s_t. Children of z^s are generated conditioned on the hidden vector of z^s and the left siblings of that child. The tree decoder is initialized with the [CLS] representation of the BERT encoder.

The tree decoder generates templates which are then filled with slot values from the user's utterance. In the example, atlanta and pittsburgh are replaced by $city_name, september is replaced by $month_name and fourth is replaced by $day_name during training. The $ symbol indicates a terminal.

3.4 Pointer Network: We predict a position for every terminal, pointing to a specific token in the user's utterance. We perform element-wise multiplication between the terminal node's hidden representation (h) and the encoder representations (e) obtained from the encoder. This is followed by a feed-forward layer (g) and a dense layer that finally assigns probabilities to each position (p) in the input utterance. That is,

p_t = \arg\max_i \mathrm{softmax}(g(h(z^s_t) \odot e(x_i)))    (3)

The pointer network loss, loss_{PT}, is the categorical cross-entropy loss between p_t and the true positions. The four components are trained jointly end-to-end to minimize a total loss,

loss_{-G} = loss_{SL} + loss_T + loss_{PT}    (4)

3.5 Global Context

We found that the tree decoder tends to repeat nodes, since representations may remain similar from parent to child. We overcome this by providing global supervision. This global supervision does not consider the order of nodes, but rather rewards predictions according to whether a specific node is present in the final tree. If the model fails to predict that a node is present, the model is penalized based on the number of times it appears in the reference (or ground truth) tree. Say z_1, ..., z_K is the unique set of nodes present in the reference tree and N(z_k) is the number of times node z_k occurs in the reference.
The representation of the [CLS] token is used to predict the presence of these nodes with the loss function, lossG = − K X k=1 N(zk) P j N(zj)log πG(zk|x) (5) 2062 parent children Queue contents Partially generated frame head ROOT NT1 [NT1] ROOT ( ) NT1 atis flight NT2 [NT2] ROOT ( atis flight ( ) ) NT2 fromloc NT3 toloc NT4 [NT3, NT4, NT5] ROOT ( atis flight ( fromloc ( ) toloc ( ) depart date NT5 depart date ( ) ) ) NT3 city name NT6 [NT4, NT5, NT6] ROOT ( atis flight ( fromloc ( city name ( ) ) toloc ( ) depart date ( ) ) ) NT4 city name NT7 [NT5, NT6, NT7] ROOT ( atis flight ( fromloc ( city name ( ) ) toloc ( city name ( ) ) depart date ( ) ) ) NT5 month name NT8 [NT6, NT7, NT8, NT9] ROOT ( atis flight ( fromloc ( city name ( ) ) toloc ( day name NT9 city name ( ) ) depart date ( month name ( ) day name ( ) ) ) ) NT6 $city name [NT7, NT8, NT9] ROOT ( atis flight ( fromloc ( city name ( $city name ) ) toloc ( city name ( ) ) depart date ( month name ( ) day name ( ) ) ) ) NT7 $city name [NT8, NT9] ROOT ( atis flight ( fromloc ( city name ( $city name ) ) toloc ( city name ( $city name ) ) depart date ( month name ( ) day name ( ) ) ) ) NT8 $month name [NT9] ROOT ( atis flight ( fromloc ( city name ( $city name ) ) toloc ( city name ( $city name ) ) depart date ( month name ( $month name ) day name ( ) ) ) ) NT9 $day name [∅] ROOT ( atis flight ( fromloc ( city name ( $city name ) ) toloc ( city name ( $city name ) ) depart date ( month name ( $month name ) day name ($day name ) ) ) ) Table 1: Actions taken to generate the frame representation of the sentence, from pittsburgh i’d like to travel to atlanta on september fourth. “NT” refers to non-terminals. with overall loss, lossweighted G = loss−G + lossG (6) 4 Datasets and Results We start with ATIS, the only public dataset that has even a shallow hierarchy. The ATIS dataset contains audio recordings of people requesting flight reservations, with 21 intent types and 120 slot labels. There are 4,478 utterances in the training set, 893 in the test set and 500 utterances in the development set. We transform the ATIS dataset to the bracketed tree format (Figure 3). We also evaluate the proposed approach using a simulated ordering dataset (example in Figure 3). The dataset contains 2 intents and 7 slot labels, 4767 training examples, 1362 test examples and 681 development examples. We manually created templates for every intent (i.e, OrderItems, GetTotal). An intent is randomly sampled, then a template along with a number of items and slot values for each of the properties of the items are randomly drawn to generate an utterance and a bracketed representation for the utterance 2. 2The modified ATIS and simulated datasets are available as part of Supplementary material. 4.1 Evaluating the proposed approach We evaluate both the generalized and the specific forms generated by the proposed model (Figure 5) in Table 2. The exact match criteria requires that the predicted tree completely match the reference tree. As this metric does not assign any credit to partial matches, we also compare all parent child relationships between the reference and the predicted trees and compute micro-f1 scores (Lipton et al., 2014). 
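Looking back at Section 3.3, the queue-based expansion traced in Table 1 can be sketched as follows. `predict_children(state)` is a hypothetical stand-in for one step of the LSTM tree decoder (emitting a label and, for each new NT, a fresh hidden state); the sketch illustrates only the control flow, not the trained model.

```python
from collections import deque

def generate_frame(predict_children, root_state):
    """Top-down template generation: expand non-terminals until the queue is empty.

    Assumed interface:
      predict_children(state) -> list of (label, child_state) pairs, where
      child_state is the hidden vector of a new non-terminal, or None for a terminal.
    """
    root = {"label": "ROOT", "children": []}
    queue = deque([(root, root_state)])          # NTs awaiting expansion, as in Table 1
    while queue:
        node, state = queue.popleft()
        for label, child_state in predict_children(state):
            child = {"label": label, "children": []}
            node["children"].append(child)
            if child_state is not None:          # a newly predicted NT enters the queue
                queue.append((child, child_state))
    return root

def to_bracketed(node):
    """Render the generated tree in the bracketed form of Figure 3."""
    if not node["children"]:
        return node["label"]
    inner = " ".join(to_bracketed(c) for c in node["children"])
    return f"{node['label']} ( {inner} )"
```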
Specific:( atis_flight ( fromloc ( city_name ( $city_name ) ) toloc ( city_name ( $city_name) ) depart_date ( month_name ( $month_name ) day_name ( $day_name ) ) ) ) Generalized: ( atis_flight ( fromloc ( city_name ( pittsburgh ) ) toloc ( city_name ( atlanta) ) depart_date ( month_name ( september ) day_name ( fourth ) ) ) ) Figure 5: Generalized and Specific bracketed forms for, from pittsburgh i’d like to travel to atlanta on september fourth. To measure the benefit of the weighted G loss, we also evaluate an unweighted G loss function, lossunweighted G = loss−G −1 K K X k=1 log πG(zk|x) (7) As seen in Table 2, the best performance both on f-measure and accuracy is obtained with the weighted G loss function. 2063 Model ATIS Simulated gen-acc spec-acc gen-f1 spec-f1 gen-acc spec-acc gen-f1 spec-f1 Proposed method,-G 61.74 59.53 88.50 87.29 91.48 90.75 99.64 98.63 Proposed method,+unweighted G 62.21 60.23 87.33 86.81 91.85 90.68 99.97 98.63 Proposed method,+weighted G 72.00 70.54 89.32 88.87 92.14 91.12 99.97 98.76 Table 2: +/-G: with or without the global context loss function. gen: generalized form metrics and spec:results with the specific form. acc:accuracy and f1: f1-score on parent child relationships. 4.2 Baseline: Extending flat representations with group information We also compare with a reasonable baseline that extends the traditional flat structured frame (Figure 1) in a way that captures hierarchies. We learn to predict group information along with the slot labels (Baseline in Table 3) by appending indices to the labels that indicate which group the slot label belongs to. Consider, i want to fly from milwaukee to orlando on either wednesday evening or thursday morning. This example requires capturing two groups of information as shown in Figure 6. Group0 contains all the necessary pieces of information for traveling on wednesday evening and Group1 contains information for traveling on thursday morning. As shown, milwaukee and orlando are present in both the groups. Group0 fromloc: milwaukee toloc: orlando day_name: wednesday period_of_day: evening Group1 fromloc: milwaukee toloc: orlando day_name: thursday period_of_day: morning Figure 6: Example shows two groups of information. We can represent the two day names (and period of day) with Batis flight.depart date.day name0 and Batis flight.depart date.day name1. We can then use B-atis flight.fromloc.city name01 and B-atis flight.toloc.city name01 to indicate that they belong to both the groups. Such an approach increases the number of unique slot labels, resulting in fewer training examples for each slot label, but allows multi-class classification methods from prior work to be used as is. We then train and test the model using the approach that provided highest slot labeling scores which used BERT (Chen et al., 2019). We also convert the generated output of the hierarchical method proposed in this paper to the flat format above. Note, the f1 scores we obtain here are different from those reported in Table 2 as here we only consider the most specific label (eg. Batis flight.toloc.city name01) as the true slot label for a token versus the f1 measure over all the parent child relationships in Table 2. Since adding group information increases the number of unique slot labels, the results reported for the Baseline are different from what has been reported in (Chen et al., 2019). We notice a large improvement with the proposed approach on the simulated dataset. 
This implies that modeling hierarchical relationships between slot labels via a tree decoder is indeed helpful. The small improvement we see on ATIS can be attributed to the fact that only a small fraction of the test data required grouping information (≈ 1.7%). 5 Conclusion and Future Work: With this preliminary work, we showed cases where traditional flat semantic representations fail to capture slot label dependencies and we highlighted the need for deep hierarchical semantic representations for dialog frames. The proposed recursive, hierarchical frame-based representation captures complex relationships between slots labels. We also proposed an approach using a templatebased tree decoder to generate these hierarchical representations from users’ utterances. We also introduced global supervision by extending the treebased loss function, and showed that it is possible to learn all this end-to-end. As future work, we are extending the proposed approach and test its efficacy on real human conversations. More broadly, we continue to explore strategies that combine semantic parsing and neural networks for frame generation. Model ATIS Simulated Baseline 87.51 32.85 Proposed method + weighted G 88.01 97.67 Table 3: Comparing slot-label f1 scores of the Proposed approach and Baseline. 2064 References Qian Chen, Zhu Zhuo, and Wen Wang. 2019. BERT for joint intent classification and slot filling. CoRR, abs/1902.10909. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Li Dong and Mirella Lapata. 2018. Coarse-to-fine decoding for neural semantic parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 731–742, Melbourne, Australia. Association for Computational Linguistics. Rashmi Gangadharaiah and Balakrishnan Narayanaswamy. 2019. Joint multiple intent detection and slot labeling for goal-oriented dialog. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 564–569, Minneapolis, Minnesota. Association for Computational Linguistics. Rashmi Gangadharaiah, Balakrishnan Narayanaswamy, and Charles Elkan. 2018. What we need to learn if we want to do and not just talk. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 3 (Industry Papers), pages 25–32. Sonal Gupta, Rushin Shah, Mrinal Mohit, Anuj Kumar, and Mike Lewis. 2018. Semantic parsing for task oriented dialog using hierarchical representations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2787–2792, Brussels, Belgium. Association for Computational Linguistics. Dilek Hakkani-T¨ur, Gokhan T¨ur, Asli Celikyilmaz, Yun-Nung Vivian Chen, Jianfeng Gao, Li Deng, and Ye-Yi Wang. 2016. Multi-domain joint semantic frame parsing using bi-directional rnn-lstm. In Proceedings of The 17th Annual Meeting of the International Speech Communication Association (INTERSPEECH 2016). ISCA. 
Dilek Hakkani-T¨ur, Gokhan T¨ur, and Larry P. Heck. 2010. What is left to be understood in atis? In SLT, pages 19–24. IEEE. Matthew Henderson, Blaise Thomson, and Jason D. Williams. 2014. The second dialog state tracking challenge. In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), pages 263–272, Philadelphia, PA, U.S.A. Association for Computational Linguistics. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Comput., 9(8):1735–1780. Robin Jia and Percy Liang. 2016. Data recombination for neural semantic parsing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12–22, Berlin, Germany. Association for Computational Linguistics. Byeongchang Kim, Seonghan Ryu, and Gary Geunbae Lee. 2017. Two-stage Multi-intent Detection for Spoken Language Understanding. Multimedia Tools Appl., 76(9):11377–11390. Zachary C. Lipton, Charles Elkan, and Balakrishnan Naryanaswamy. 2014. Optimal thresholding of classifiers to maximize f1 measure. In Machine Learning and Knowledge Discovery in Databases, pages 225–239, Berlin, Heidelberg. Springer Berlin Heidelberg. Yikang Shen, Zhouhan Lin, Athul Paul Jacob, Alessandro Sordoni, Aaron Courville, and Yoshua Bengio. 2018. Straight to the tree: Constituency parsing with neural syntactic distance. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1171–1180, Melbourne, Australia. Association for Computational Linguistics. Richard Socher, Cliff Chiung-Yu Lin, Andrew Y. Ng, and Christopher D. Manning. 2011. Parsing natural scenes and natural language with recursive neural networks. In Proceedings of the 28th International Conference on International Conference on Machine Learning, ICML’11, pages 129–136, USA. Omnipress. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 3104–3112. Curran Associates, Inc. Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 2, NIPS’15, pages 2692–2700, Cambridge, MA, USA. MIT Press. Puyang Xu and Ruhi Sarikaya. 2013. Exploiting Shared Information for Multi-intent Natural Language Sentence Classification. In Interspeech.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2065–2077 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2065 Speak to your Parser: Interactive Text-to-SQL with Natural Language Feedback Ahmed Elgohary∗ University of Maryland, College Park [email protected] Saghar Hosseini, Ahmed Hassan Awadallah Microsoft Research, Redmond, WA {sahoss,hassanam}@microsoft.com Abstract We study the task of semantic parse correction with natural language feedback. Given a natural language utterance, most semantic parsing systems pose the problem as one-shot translation where the utterance is mapped to a corresponding logical form. In this paper, we investigate a more interactive scenario where humans can further interact with the system by providing free-form natural language feedback to correct the system when it generates an inaccurate interpretation of an initial utterance. We focus on natural language to SQL systems and construct, SPLASH, a dataset of utterances, incorrect SQL interpretations and the corresponding natural language feedback. We compare various reference models for the correction task and show that incorporating such a rich form of feedback can significantly improve the overall semantic parsing accuracy while retaining the flexibility of natural language interaction. While we estimated human correction accuracy is 81.5%, our best model achieves only 25.1%, which leaves a large gap for improvement in future research. SPLASH is publicly available at https:// aka.ms/Splash_dataset. 1 Introduction Natural language interfaces (NLIs) have been the “holy grail" of natural language understating and human-computer interaction for decades (Woods et al., 1972; Codd, 1974; Hendrix et al., 1978; Zettlemoyer and Collins, 2005). However, early attempts in building NLIs to databases did not achieve the expected success due to limitations in language understanding capability, among other reasons (Androutsopoulos et al., 1995; Jones and Galliers, 1995). NLIs have been receiving increasing attention recently motivated by interest in developing virtual assistants, dialogue systems, and ∗Most work was done while the first author was an intern at Microsoft Research. Find all the locations whose names contain the word "film" Address 770 Edd Lane Apt. 098 14034 Kohler Drive finding the Address of Locations table for which Location_Name contains "film" Address is wrong. I want the name of the locations Location_Name Film Festival Film Castle finding the Location_Name of Locations table for which Location_Name contains "film" … … Figure 1: An example of human interaction with a Textto-SQL system to correct the interpretation of an input utterance. The system generates an initial SQL parse, explains it in natural language, and displays the execution result. Then, the system uses the human-provided natural language feedback to correct the initial parse. semantic parsing systems. NLIs to databases were at the forefront of this wave with several studies focusing on parsing natural language utterances into an executable SQL queries (Text-to-SQL parsing). Most of the work addressing the Text-to-SQL problem (and semantic parsing in general) frames it as a one-shot mapping problem. We establish (Section 4.1) that the majority of parsing mistakes that recent neural text-to-SQL parsers make are minor. Hence, it is often feasible for humans to detect and suggest fixes for such mistakes. Su et al. 
(2018) make a similar observation about parsing text to API calls (Su et al., 2017) and show that parsing mistakes could be easily corrected if humans are afforded a means of providing precise feedback. Likewise, an input utterance might be under- or mis-specified, thus extra interactions may be required to generate the desired output similarly to query refinements in information retrieval systems (Dang and Croft, 2010). 2066 Humans have the ability to learn new concepts or correct others based on natural language description or feedback. Similarly, previous work has explored how machines can learn from language in tasks such as playing games (Branavan et al., 2012), robot navigation (Karamcheti et al., 2017), concept learning (e.g., shape, size, etc.) classifiers (Srivastava et al., 2018), etc. Figure 1 shows an example of a text-to-SQL system that offers humans the affordance to provide feedback in natural language when the system misinterprets an input utterance. To enable this type of interactions, the system needs to: (1) provide an explanation of the underlying generated SQL, (2) provide a means for humans to provide feedback and (3) use the feedback, along with the original question, to come up with a more accurate interpretation. In this work, we study the task of SQL parse correction with natural language feedback to enable text-to-SQL systems to seek and leverage human feedback to further improve the overall performance and user experience. Towards that goal, we make the following contributions: (1) we define the task of SQL parse correction with natural language feedback; (2) We create a framework for explaining SQL parse in natural language to allow text-to-SQL users (who may have a good idea of what kind of information resides on their databases but are not proficient in SQL Hendrix et al. (1978)) to provide feedback to correct inaccurate SQL parses; (3) we construct SPLASH— Semantic Parsing with Language Assistance from Humans—a new dataset of natural language questions that a recent neural text-to-SQL parser failed to generate correct interpretation for together with corresponding human-provided natural language feedback describing how the interpretation should be corrected; and (4) we establish several baseline models for the correction task and show that the task is challenging for state-of-the-art semantic parsing models. 2 Task We formally define the task of SQL parse correction with natural language feedback. Given a question q, a database schema s, a mispredicted parse p′, a natural language feedback f on p′, the task is to generate a corrected parse p (Figure 2). Following Yu et al. (2018), s is defined as the set of tables, columns in each table and the primary and foreign keys of each table. Question: Find all the locations whose names contain the word "film" SELECT Address FROM LOCATIONS WHERE Location_Name LIKE '%film%' Predicted Parse: Feedback: Address is wrong. I want the name of the locations SELECT Location_Name FROMLOCATIONS WHERE Location_Name LIKE '%film%' Gold Parse: Location_ID Location_Name Address Other_Details Schema: Figure 2: An example from our SQL parse correction task (DB Name: cre_Theme_park and Table Name: Locations). Given a question, initial predicted parse and natural language feedback on the predicted parse, the task is to predict a corrected parse that matches the gold parse. Models are trained with tuples q, s, p′, f and gold parse p. 
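For concreteness, the instance in Figure 2 can be represented by a record such as the one below. The field names are illustrative only and do not reflect the released dataset's schema; the schema field is abbreviated to table and column names, omitting keys.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class SplashExample:
    question: str                 # q: the original natural language question
    schema: Dict[str, List[str]]  # s: table name -> column names (keys omitted here)
    predicted_sql: str            # p': the mispredicted parse produced by the parser
    feedback: str                 # f: natural language feedback on p'
    gold_sql: str                 # p: the correct parse (training/evaluation target)

example = SplashExample(
    question='Find all the locations whose names contain the word "film"',
    schema={"Locations": ["Location_ID", "Location_Name", "Address", "Other_Details"]},
    predicted_sql="SELECT Address FROM LOCATIONS WHERE Location_Name LIKE '%film%'",
    feedback="Address is wrong. I want the name of the locations",
    gold_sql="SELECT Location_Name FROM LOCATIONS WHERE Location_Name LIKE '%film%'",
)
```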
At evaluation time, a model takes as input tuples in the form q, s, p′, f and hypothesizes a corrected parse ˆp. We compare ˆp and the gold parse p in terms of their exact set match (Yu et al., 2018) and report the average matching accuracy across the testing examples as the model’s correction accuracy. 3 Dataset Construction In this section, we describe our approach for collecting training data for the SQL parse correction task. We first generate pairs of natural language utterances and the corresponding erroneous SQL parses (Section 3.1). We then employ crowd workers (with no SQL knowledge) to provide feedback, in natural language, to correct the erroneous SQL (Section 3.3). To enable such workers to provide feedback, we show them an explanation of the generated SQL in natural language (Section 3.2). Finally, to improve the diversity of the natural language feedback, we ask a different set of annotators to paraphrase each feedback. We describe the process in detail in the remainder of this section. 3.1 Generating Questions and Incorrect SQL Pairs We use the Spider dataset (Yu et al., 2018) as our source of questions. Spider has several advantages over other datasets. Compared to ATIS (Price, 2067 Step 1: Find the number of rows of each value of id in browser table. Step 2: Find id, name of browser table with largest value in the results of step 1. SQL: SELECT id, name from browser GROUP BY id ORDER BY COUNT(*) DESC SELECT _cols_ from _table_ Group BY_col_ ORDER BY _aggr_ _col_ Template: Explanation: Figure 3: An example of a SQL query, the corresponding template and the generated explanation. 1990) and GeoQuery (Zelle and Mooney, 1996), Spider is much larger in scale (200 databases vs. one database, and thousands vs. hundreds of SQL parses). Compared to WikiSQL (Zhong et al., 2017), Spider questions require inducing parses of complex structures (requiring multiple tables, joining, nesting, etc.). Spider also adopts a crossdomain evaluation setup in which databases used at testing time are never seen at training time. To generate erroneous SQL interpretations of questions in Spider, we opted for using the output of a text-to-SQL parser to ensure that our dataset reflect the actual distribution of errors that contemporary parsers make. This is a more realistic setup than artificially infusing errors in the gold SQL. We use the Seq2Struct parser (Shin, 2019)1 to generate erroneous SQL interpretations. Seq2Struct combines grammar-based decoder of Yin and Neubig (2017) with a self-attention-based schema encoding and it reaches a parsing accuracy of 42.94% on the development set of Spider.2 Note that we make no explicit dependencies on the model used for this step and hence other models could be used as well (Section 6.3). We train Seq2Struct on 80% of Spider’s training set and apply it to the remaining 20%, keeping 1https://github.com/rshin/seq2struct 2When we started the dataset construction at the beginning of June 2019, we were able to achieve a parsing accuracy of 41.30% on Spider’s development set which was the state-ofthe-art accuracy at the time. It is worth noting that unlike current state-of-the-art models, Seq2Struct does not use pretrained language models. It was further developed into a new model called RAT-SQL (Wang et al., 2020) which achieved a new state-of-the-art accuracy as of April 2020. only cases where the generated parses do not match the gold parse (we use the exact set match of Yu et al. (2018)). 
Following the by-database splitting scheme of Spider, we repeat the 80-20 training and evaluation process for three times with different examples in the evaluation set at each run. This results in 3,183 pairs of questions and an erroneous SQL interpretation. To further increase the size of the dataset, we also ignore the top prediction in the decoder beam3 and use the following predictions. We only include cases where the difference in probability between the top and second to top SQLs is below a threshold of 0.2. The intuition here is that those are predictions that the model was about to make and hence represent errors that the model could have made. That adds 1,192 pairs to our dataset. 3.2 Explaining SQL In one of the earliest work on natural language interfaces to databases, Hendrix et al. (1978) note that many business executives, government official and other decision makers have a good idea of what kind of information residing on their databases. Yet to obtain an answer to a particular question, they cannot use the database themselves and instead need to employ the help of someone who can. As such, in order to support an interactive Text-to-SQL system, we need to be able to explain the incorrect generated SQL in a way that humans who are not proficient in SQL can understand. We take a template-based approach to explain SQL queries in natural language. We define a template as follows: Given a SQL query, we replace literals, table and columns names and aggregation and comparison operations with generic placeholders. We also assume that all joins are inner joins (true for all Spider queries) whose join conditions are based on primary and foreign key equivalence (true for more than 96% of Spider queries). A query that consists of two subqueries combined with an intersection, union or except operations is split into two templates that are processed independently and we add an intersection/union/except part to the explanation at the end. We apply the same process to the limit operation—generate an explanation of the query without limit, then add a limit-related step at the end. We select the most frequent 57 templates used in Spider training set which cover 85% of Spider 3We used a beam of size 20. 2068 queries. For each SQL template, we wrote down a corresponding explanation template in the form of steps (e.g., join step, aggregation step, selection step) that we populate for each query. Figure 3 shows an example of a SQL queries, its corresponding template and generated explanations. We also implemented a set of rules for compressing the steps based on SQL semantics. For instance, an ordering step following by a “limit 1” is replaced with “find largest/smallest” where “largest” or “smallest” is decided according to the ordering direction. 3.3 Crowdsourcing Feedback We use an internal crowd-sourcing platform similar to Amazon Mechanical Turk to recruit annotators. Annotators are only selected based on their performance on other crowd-sourcing tasks and command of English. Before working on the task, annotators go through a brief set of guidelines explaining the task.4 We collect the dataset in batches of around 500 examples each. After each batch is completed, we manually review a sample of the examples submitted by each annotator and exclude those who do not provide accurate inputs from the annotators pool and redo all their annotations. 
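Returning to the templating step of Section 3.2: the idea of replacing literals, schema items, aggregations and comparison operators with placeholders can be roughly illustrated with regular expressions, as below. A real implementation would use a SQL parser; this sketch is only meant to make the template notion of Figure 3 concrete and is not the pipeline used to build SPLASH.

```python
import re

def sql_to_template(sql, table_names, column_names):
    """Replace literals, schema items, aggregations and comparison operators
    with generic placeholders to obtain a query template (rough heuristic)."""
    template = re.sub(r"'[^']*'", "_literal_", sql)                  # string literals
    template = re.sub(r"\b\d+(\.\d+)?\b", "_literal_", template)     # numeric literals
    for agg in ("COUNT", "SUM", "AVG", "MIN", "MAX"):
        template = re.sub(rf"\b{agg}\b", "_aggr_", template, flags=re.IGNORECASE)
    template = re.sub(r"(<=|>=|!=|<|>|=|\bLIKE\b)", "_op_", template, flags=re.IGNORECASE)
    for name in sorted(column_names, key=len, reverse=True):          # longest names first
        template = re.sub(rf"\b{re.escape(name)}\b", "_col_", template, flags=re.IGNORECASE)
    for name in sorted(table_names, key=len, reverse=True):
        template = re.sub(rf"\b{re.escape(name)}\b", "_table_", template, flags=re.IGNORECASE)
    return template

print(sql_to_template(
    "SELECT id, name from browser GROUP BY id ORDER BY COUNT(*) DESC",
    table_names=["browser"], column_names=["id", "name"]))
# -> SELECT _col_, _col_ from _table_ GROUP BY _col_ ORDER BY _aggr_(*) DESC
```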
Annotators are shown the original question, the explanation of the generated SQL and asked to: (1) decide whether the generated SQL satisfies the information need in the question and (2) if not, then provide feedback in natural language. The first step is necessary since it helps identifying false negative parses (e.g., another correct parse that does not match the gold parse provided in Spider). We also use the annotations of that step to assess the extent to which our interface enables target users to interact with the underlying system. As per our assumption that target users are familiar with the kind of information that is in the database (Hendrix et al., 1978), we show the annotators an overview of the tables in the database corresponding to the question that includes the table and column names together with examples (first 2 rows) of the content. We limit the maximum feedback length to 15 tokens to encourage annotators to provide a correcting feedback based on the initial parse (that focuses on the edit to be made) rather than describing how the question should be answered. A total of 10 annotators participated in this task. They were compensated based on an hourly rate 4We provide the data collection instructions and a screenshot of the data collection interface in the appendix. Number of Train Dev Test Examples 7,481 871 962 Databases 111 9 20 Uniq. Questions 2,775 290 506 Uniq. Wrong Parses 2,840 383 325 Uniq. Gold Parses 1,781 305 194 Uniq. Feedbacks 7,350 860 948 Feedback tokens (Avg.) 13.9 13.8 13.1 Table 1: SPLASH summary (as opposed to per annotation) to encourage them to optimize for quality and not quantity. They took an average of 6 minutes per annotation. To improve the diversity of the feedback we collect, we ask a separate set of annotators to generate a paraphrase of each feedback utterance. We follow the same annotators quality control measures as in the feedback collection task. An example instance from the dataset is shown in Figure 2. 3.4 Dataset Summary Overall, we ask the annotators to annotate 5409 example (427 of which had the correct SQL parse and the remaining had an incorrect SQL parse). Examples with correct parse are included to test whether the annotators are able to identify correct SQL parses given their explanation and the question. Annotators are able to identify the correct parses as correct 96.4% of the time. For the examples whose predicted SQL did not match the gold SQL, annotators still marked 279 of them as correct. Upon manual examinations, we found that annotators were indeed correct in doing so 95.5% of the time. Even though the predicted and gold SQLs did not match exactly, they were equivalent (e.g., ’price between 10 and 20’ vs. ’price ≥10 and price ≤20’). After paraphrasing, we ended up with 9,314 question-feedback pairs, 8352 of which correspond to questions in the Spider training split and 962 from the spider development split. We use all the data from the Spider development split as our test data. We hold out 10% of the remaining data (split by database) to use as our development set and use the rest as the training set. Table 1 provides a summary of the final dataset. 4 Dataset Analysis We conduct a more thorough analysis of SPLASH in this section. We study the characteristics of the mistakes made by the parser as well as characteristics of the natural language feedback. 2069 4.1 Error Characteristics We start by characterizing the nature of errors usually made by the models in parsing the original utterance to SQL. 
To understand the relation between the gold and the predicted SQL, we measure the edit distance between them for all cases in which the model made a mistake in the SQL prediction. We measure the edit distance as the number of edit segments (delete, insert, replace) between the two parses. We find the minimal sequence of token-level edits using the Levenshtein distance algorithm. Then, we combine edits of the same type (delete, insert, replace) applied to consecutive positions in the predicted parse into one segment. Figure 4 shows a frequency histogram of the different edit distance values. We observe that most inaccurate predictions lie within a short distance from the correct SQL (78%+ within a distance of 3 or less).

In addition to the number of edits, we also characterize the types of edits needed to convert the predicted SQL to the gold one. Our edit distance calculations support three operations: replace, insert and delete. These correspond to 58%, 31% and 11% of the edit operations respectively. Most of the edits are rather simple and require replacing, inserting or deleting a single token (68.1% of the edits). The vast majority of those correspond to editing a schema item (table or column name): 89.2%; a SQL keyword (e.g., order direction, aggregation, count, distinct, etc.): 7.4%; an operator (greater than, less than, etc.): 2.2%; or a number (e.g., for a limit operator): 1.2%.

The edits between the predicted and gold SQL spanned multiple SQL keywords. The distribution of the different SQL keywords appearing in edits, broken down by edit type (replace, insert or delete), is shown in Figure 5. Note that a single edit could involve multiple keywords (e.g., multiple joins, a join and a where clause, etc.). Interestingly, many of the edits involve a join, highlighting that handling utterances that require a join is harder and more error prone. Following join, most edits involve where clauses (making the query more or less specific), aggregation operators, counting and selecting unique values. The error analysis demonstrates that many of the errors made by the model are in fact not significant and hence it is reasonable to seek human feedback to correct them.

[Figure 4: A histogram of the distance between the gold and the predicted SQL.]
[Figure 5: A histogram of different SQL keywords appearing in edits (between the gold and predicted SQL) and their distribution across edit types (replace, insert or delete).]

4.2 Feedback Characteristics

To better understand the different types of feedback our annotators provided, we sample 200 examples from the dataset and annotate them with the type of the feedback. We assign each feedback to one of three categories: (1) Complete: the feedback fully describes how the predicted SQL can be corrected; (2) Partial: the feedback describes a way to correct the predicted SQL, but only partially; and (3) Paraphrase: the feedback is a paraphrase of the original question. The sample had 81.5% Complete, 13.5% Partial and 5.0% Paraphrase feedback. Examples of each type of feedback are shown in Table 2. Upon further inspection of the partial and paraphrase feedback, we observe that they mostly occur when the distance between the predicted and gold SQL is high (major parsing errors). As such, annotators opt for providing partial feedback (that would at least correct some of the mistakes) or decide to rewrite the question in a different way. We also annotate and present the types of feedback, in terms of the changes the feedback suggests, in Table 3. Note that the same feedback may suggest multiple changes at the same time. The
As such, annotators opt for providing partial feedback (that would at least correct some of the mistakes) or decide to rewrite the question in a different way. We also annotate and present the types of feedback, in terms of changes the feedback is suggesting, in Table 3. Note that the same feedback may suggest multiple changes at the same time. The 2070 Complete Feedback: [81.5%] Question: Show the types of schools that have two schools. Pred. SQL: SELECT TYPE FROM school GROUP BY TYPE HAVING count(*) >= 2 Feedback: You should not use greater than. Partial Feedback: [13.5%] Question: What are the names of all races held between 2009 and 2011? Pred. SQL: SELECT country FROM circuits WHERE lat BETWEEN 2009 AND 2011 Feedback: You should use races table. Paraphrase Feedback: [5.0%] Question: What zip codes have a station with a max temperature greater than or equal to 80 and when did it reach that temperature? Pred. SQL: SELECT zip_code FROM weather WHERE min_temperature_f > 80 OR min_sea_level_pressure_inches > 80 Feedback: Find date , zip code whose max temperature f greater than or equals 80. Table 2: Examples (question, predicted SQL and feedback) of complete, partial and paraphrase feedback table shows that the feedback covers a broad range of types, which matches our initial analysis of error types. We find that a majority of feedback is referencing the retrieved information. In many such cases, the correct information has not been retrieved because the corresponding table was not used in the query. This typically corresponds to a missing inner one-to-one join operation and agrees with our earlier analysis on edit distance between the gold and predicted SQL. The second most popular category is incorrect conditions or filters followed by aggregation and ordering errors. We split the first two categories by whether the information/conditions are missing, need to be replaced or need to be removed. We observe that most of the time the information or condition needs to be replaced. This is followed by missing information that needs to be inserted and then unnecessary ones that need to be removed. We heuristically identify feedback patterns for each collected feedback. To identify the feedback pattern, we first locate the central predicate in the feedback sentence using a semantic role labeler (He et al., 2015). Since some feedback sentences can be broken into multiple sentence fragments, a single feedback may contain more than one central predicate. For each predicate, we identify its main arguments. We represent every argument with its first non stopword token. To reduce the vocabulary, we heuristically identify column mentions and replace them with the token ’item’. We visualize the distribution of feedback patterns for the top 60 most frequent patterns in Figure 6 , and label the ones shared among multiple patterns. As is shown, our dataset covers a diverse variety of feedback patterns centered around key operations to edit the predicted SQL that reference Figure 6: Patterns of feedback covered in our dataset. Patterns are extracted heuristically using predicates and arguments extracted from the feedback sentence. The figure shows the top 60 frequent patterns. operations, column names and values. 
Feedback Type      %     Example
Information
- Missing          13%   I also need the number of different services
- Wrong            36%   Return capacity in place of height
- Unnecessary      4%    No need to return email address
Conditions
- Missing          10%   ensure they are FDA approved
- Wrong            19%   need to filter on open year not register year
- Unnecessary      7%    return results for all majors
Aggregation        6%    I wanted the smallest ones not the largest
Order/Uniq         5%    only return unique values
Table 3: Examples of feedback annotators provided for different types.
5 Related Work
Our work is linked to multiple existing research lines including semantic parsing, learning through interaction (Li et al., 2017a; Hancock et al., 2019; Li et al., 2017b, inter alia) and learning from natural language supervision (Srivastava et al., 2017; Co-Reyes et al., 2019; Srivastava et al., 2018; Hancock et al., 2018; Ling and Fidler, 2017, inter alia). We discuss connections to the most relevant works.
Text-to-SQL Parsing: Natural language to SQL (natural language interfaces to databases) has been an active field of study for several decades (Woods et al., 1972; Hendrix et al., 1978; Warren and Pereira, 1982; Popescu et al., 2003; Li and Jagadish, 2014). This line of work has been receiving increased attention recently, driven in part by the development of new large-scale datasets such as WikiSQL (Zhong et al., 2017) and Spider (Yu et al., 2018). The majority of this work has focused on mapping a single query to the corresponding SQL, with the exception of a few datasets, e.g., SParC (Yu et al., 2019b) and CoSQL (Yu et al., 2019a), that target inducing SQL parses for sequentially related questions. While these datasets focus on modeling conversational dependencies between questions, SPLASH evaluates the extent to which models can interpret and apply feedback on the generated parses. We empirically confirm that distinction in Section 6.3.
Learning from Feedback: Various efforts have tried to improve semantic parsers based on feedback or execution validation signals. For example, Clarke et al. (2010) and Artzi and Zettlemoyer (2013) show that semantic parsers can be improved by learning from binary correct/incorrect feedback signals or validation functions. Iyer et al. (2017) improve text-to-SQL parsing by relying on humans to assess the correctness of the execution results generated by the inferred parses. In their system, parses with correct results are used to augment the training set together with crowdsourced gold parses of the parses that are marked as incorrect. Lawrence and Riezler (2018) show that a text-to-Overpass parser can be improved using historic logs of token-level binary feedback (collected using a graphical user interface that maps an Overpass query to predefined blocks) on generated parses. We note that our work is different from this line of work in that we do not seek to retrain and generally improve the parser; rather, we focus on the task of immediately incorporating the natural language feedback to correct an initial parse.
Interactive Semantic Parsing: Multiple other efforts sought to interactively involve humans in the parsing process itself. He et al. (2016) ask simplified questions about uncertain dependencies in CCG parses and use the answers as soft constraints to regenerate the parse. Both Li and Jagadish (2014) and Su et al. (2018) generate semantic parses and present them in a graphical user interface that humans can control to edit the initial parse. Gur et al.
(2018) ask specific multiple-choice questions about a narrow set of predefined parsing errors. This interaction model, together with the synthetically generated erroneous parses that are used for training, can be appropriate for simple text-to-SQL parsing instances as in WikiSQL, which was the only dataset used for evaluation. Yao et al. (2019b) ask yes/no questions about the presence of SQL components while generating a SQL parse one component at a time. Our work falls under the general category of interactive semantic parsing. However, our interaction model is solely based on natural language feedback, which can convey richer information and offers a more flexible interaction. Our work is closest to Labutov et al. (2018), which also studies correcting semantic parses with natural language feedback, but we (1) focus on text-to-SQL parsing and build on a multi-domain dataset that requires generating complex semantic structures and generalizing to unseen domains (Labutov et al. consider only the domain of email and biographical research); (2) pair the mispredicted parses and feedback with gold parses (in real-world scenarios, the gold parse is the final parse that the user approves after a round or more of corrections) in both our training and testing splits, which benefits a wider class of correction models; and (3) show that incorporating the mispredicted parse significantly improves the correction accuracy (contrary to the findings of Labutov et al.).
Asking Clarifying Questions: Another relevant research direction focused on extending semantic parsers beyond one-shot interactions by creating agents that can ask clarifying questions that resolve ambiguities with the original question. For example, Yao et al. (2019a) showed that reinforcement learning based agents that can ask clarifying questions can improve the performance of semantic parsers in the “If-Then recipes” domain. Generating clarifying questions has been studied in multiple domains to resolve ambiguity caused by speech recognition failure (Stoyanchev et al., 2014), recover missing information in question answering (Rao and Daumé III, 2018) or clarify information needs in open-domain information-seeking (Aliannejadi et al., 2019). Our work is different from this research in that we focus on enabling and leveraging human feedback that corrects an initial parse of a fully specified question rather than spotting and clarifying ambiguities.
6 Experiments
We present and evaluate a set of baseline models for the correction task (Section 2) in which we use SPLASH for training and testing (unless otherwise stated). Our main evaluation measure is the correction accuracy, i.e., the percentage of the testing set examples that are corrected, for which we follow Yu et al. (2018) and compare the corrected parse to the gold parse using exact set match (a binary measure of exact string matching between SQL queries that handles ordering issues). We also report the end-to-end accuracy on the Spider development set (which we use to construct our testing set) of the two-turn interaction scenario: first, Seq2Struct attempts to parse the input question. If it produces a wrong parse, the question together with that parse and the corresponding feedback are attempted using the correction model.
An example is considered “correct” if either of the two attempts produces the correct parse. (Seq2Struct produces correct parses for 427/1034 examples of the Spider development set; 511 of the remaining examples are supported by our SQL explanation patterns. We estimate the end-to-end accuracy as (427 + 511 * X/100)/1034, where X is the correction accuracy.)
6.1 Baselines
Methods that ignore the feedback: One approach for parse correction is re-ranking the decoder beam (top-n predictions) (Yin and Neubig, 2019). Here, we simply discard the top-1 candidate and sample either uniformly or with probabilities proportional to the parser score of each candidate. We also report the accuracy of deterministically choosing the second candidate.
Handcrafted re-ranking with feedback: By definition, the feedback f describes how to edit the initial parse p′ to reach the correct parse. We represent the “diff” between p′ and each candidate parse pi in the beam as the set of schema items that appear in only one of them. For example, the diff between select first_name, last_name from students and select first_name from teachers is {last_name, students, teachers}. We assign to pi a score equal to the number of schema items in the diff that are mentioned in f. A schema item (e.g., “first_name”) is considered to be mentioned in f if all of its individual tokens (“first” and “name”) are tokens in f.
Seq2Struct+Feedback: The Seq2Struct model we use to generate erroneous parses for data collection (Section 3.1) reached an accuracy of 41.3% on Spider’s development set when trained on the full Spider training set for 40,000 steps. After that initial training phase, we adapt the model to incorporate the feedback by appending the feedback to the question for each training example in SPLASH, and we continue training the model to predict the gold parse for another 40,000 steps. We note that Seq2Struct+Feedback does not use the mispredicted parses.
EditSQL+Feedback: EditSQL (Zhang et al., 2019) is the current state-of-the-art model for conversational text-to-SQL. It generates a parse for an utterance at conversation turn i by editing (i.e., copying from) the parse generated at turn i−1 while conditioning on all previous utterances as well as the schema. We adapt EditSQL for the correction task by providing the question and the feedback as the utterances at turns one and two respectively, and we force it to use the mispredicted parse as the prediction of turn one (rather than predicting it). We train the model on the combination of the training sets of SPLASH and Spider (which is viewed as single-turn conversations). We exclude turn-one predictions from the training loss when processing SPLASH examples; otherwise, the model would be optimized to produce the mispredicted parses. We use the default hyper-parameters provided by the authors together with the development set of SPLASH for early stopping.
                              Exact Match Accuracy (%)
Baseline                      Correction   End-to-End
Without Feedback
  Seq2Struct                  N/A          41.30
  Re-ranking: Uniform         2.39         42.48
  Re-ranking: Parser score    11.26        46.86
  Re-ranking: Second Best     11.85        47.15
With Feedback
  Re-ranking: Handcrafted     16.63        49.51
  Seq2Struct+Feedback         13.72        48.08
  EditSQL+Feedback            25.16        53.73
Re-ranking Upper Bound        36.38        59.27
Estimated Human Accuracy      81.50        81.57
Table 4: Correction and End-to-End accuracies of baseline models.
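The handcrafted re-ranking baseline and the end-to-end estimate above can be illustrated with a short sketch. This is our own simplified rendering, not the authors' implementation: extract_schema_items is a hypothetical stand-in for proper SQL parsing (it just keeps identifier-like tokens that are not SQL keywords), and beams are represented as plain lists of SQL strings.

```python
import re

SQL_KEYWORDS = {"select", "from", "where", "group", "by", "having", "order", "limit",
                "join", "on", "and", "or", "not", "count", "avg", "min", "max", "sum",
                "distinct", "as", "asc", "desc", "between", "like", "in"}

def extract_schema_items(sql):
    # Hypothetical simplification: treat every non-keyword identifier as a schema item.
    tokens = re.findall(r"[A-Za-z_][A-Za-z0-9_]*", sql.lower())
    return {t for t in tokens if t not in SQL_KEYWORDS}

def mentioned(item, feedback_tokens):
    # "first_name" counts as mentioned if both "first" and "name" occur in the feedback.
    return all(part in feedback_tokens for part in item.split("_"))

def handcrafted_rerank(initial_parse, beam, feedback):
    feedback_tokens = set(re.findall(r"[a-z0-9_]+", feedback.lower()))
    initial_items = extract_schema_items(initial_parse)
    def score(candidate):
        diff = initial_items.symmetric_difference(extract_schema_items(candidate))
        return sum(mentioned(item, feedback_tokens) for item in diff)
    return max(beam, key=score)

def end_to_end_accuracy(correction_accuracy):
    # Footnote estimate: 427 parses are already correct, 511 are eligible for correction.
    return (427 + 511 * correction_accuracy / 100) / 1034

# For EditSQL+Feedback (correction accuracy 25.16), this reproduces the 53.73% in Table 4.
print(round(100 * end_to_end_accuracy(25.16), 2))  # 53.73
```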
To provide an estimate of human performance, we report the percentage of feedback instances labeled as Complete as described in Section 4.2. We also report the re-ranking upper bound (the percentage of test examples whose gold parses exist in the Seq2Struct beam).
6.2 Main Results
The results in Table 4 suggest that: (1) the feedback we collect is indeed useful for correcting erroneous parses; (2) incorporating the mispredicted parse helps the correction process (even a simple handcrafted baseline that uses the mispredicted parses outperforms a strong trained neural model); and (3) the state-of-the-art EditSQL model equipped with BERT (Devlin et al., 2019) achieves the best performance, yet it still struggles with the task we introduce, leaving a large gap for improvement.
6.3 Analysis
Does EditSQL+Feedback use the feedback? To confirm that EditSQL+Feedback does not learn to ignore the feedback, we create a test set of random feedback by shuffling the feedback of SPLASH test examples. The accuracy on the randomized test set drops to 5.6%.
Is SPLASH just another conversational text-to-SQL dataset? We evaluate EditSQL models trained on SParC and CoSQL (the state-of-the-art models trained by the EditSQL authors) on the SPLASH test set, and we get accuracies of 3.4% and 3.2%, respectively. That confirms that SPLASH targets different modeling aspects, as we discuss in Section 5.
Is SPLASH only useful for correcting Seq2Struct errors? EditSQL is also shown to achieve strong results on Spider (57.6% on the development set) when used in a single-turn mode (state-of-the-art when we started writing this paper). We collect feedback for a sample of 179 mispredicted parses produced by EditSQL (we started with 200, but 21 of them turned out to have alternative correct parses, i.e., false negatives). Using the EditSQL+Feedback model trained on SPLASH, we get a correction accuracy of 14.6% for EditSQL errors.
7 Conclusions and Future Work
We introduce the task of SQL parse correction using natural language feedback together with a dataset of human-authored feedback paired with mispredicted and gold parses. We compare baseline models and show that natural language feedback is effective for correcting parses, but state-of-the-art models still struggle to solve the task. Future work can explore improving the correction models, leveraging logs of natural language feedback to improve text-to-SQL parsers, and expanding the dataset to include multiple turns of correction.
Acknowledgments
We thank our ACL reviewers for their feedback and suggestions. Ahmed Elgohary completed part of this work while being supported by a grant from the Defense Advanced Research Projects Agency and Air Force Research Laboratory, and awarded to Raytheon BBN Technologies under contract number FA865018-C-7885.
References
Mohammad Aliannejadi, Hamed Zamani, Fabio Crestani, and W. Bruce Croft. 2019. Asking clarifying questions in open-domain information-seeking conversations. In Proceedings of the ACM SIGIR Conference on Research and Development in Information Retrieval. Ion Androutsopoulos, Graeme D Ritchie, and Peter Thanisch. 1995. Natural language interfaces to databases–an introduction. Natural Language Engineering, 1. Yoav Artzi and Luke Zettlemoyer. 2013. Weakly supervised learning of semantic parsers for mapping instructions to actions. Transactions of the Association for Computational Linguistics. SRK Branavan, David Silver, and Regina Barzilay. 2012. Learning to win by reading manuals in a monte-carlo framework. Journal of Artificial Intelligence Research, 43. James Clarke, Dan Goldwasser, Ming-Wei Chang, and Dan Roth. 2010.
Driving semantic parsing from the world’s response. In Conference on Computational Natural Language Learning. John D Co-Reyes, Abhishek Gupta, Suvansh Sanjeev, Nick Altieri, John DeNero, Pieter Abbeel, and Sergey Levine. 2019. Meta-learning languageguided policy learning. In Proceedings of the International Conference on Learning Representations. Edgar F Codd. 1974. Seven steps to rendezvous with the casual user. IBM Corporation. Van Dang and Bruce W Croft. 2010. Query reformulation using anchor text. In Proceedings of ACM International Conference on Web Search and Data Mining. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Conference of the North American Chapter of the Association for Computational Linguistics. Izzeddin Gur, Semih Yavuz, Yu Su, and Xifeng Yan. 2018. DialSQL: Dialogue based structured query generation. In Proceedings of the Association for Computational Linguistics. Braden Hancock, Antoine Bordes, Pierre-Emmanuel Mazare, and Jason Weston. 2019. Learning from dialogue after deployment: Feed yourself, chatbot! In Proceedings of the Association for Computational Linguistics. Braden Hancock, Paroma Varma, Stephanie Wang, Martin Bringmann, Percy Liang, and Christopher Ré. 2018. Training classifiers with natural language explanations. In Proceedings of the Association for Computational Linguistics. Luheng He, Mike Lewis, and Luke Zettlemoyer. 2015. Question-answer driven semantic role labeling: Using natural language to annotate natural language. In Proceedings of Empirical Methods in Natural Language Processing. Luheng He, Julian Michael, Mike Lewis, and Luke Zettlemoyer. 2016. Human-in-the-loop parsing. In Proceedings of Empirical Methods in Natural Language Processing. Gary G Hendrix, Earl D Sacerdoti, Daniel Sagalowicz, and Jonathan Slocum. 1978. Developing a natural language interface to complex data. ACM Transactions on Database Systems, 3. Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, Jayant Krishnamurthy, and Luke Zettlemoyer. 2017. Learning a neural semantic parser from user feedback. In Proceedings of the Association for Computational Linguistics. Karen Sparck Jones and Julia R Galliers. 1995. Evaluating natural language processing systems: An analysis and review, volume 1083. Siddharth Karamcheti, Edward Clem Williams, Dilip Arumugam, Mina Rhee, Nakul Gopalan, Lawson L.S. Wong, and Stefanie Tellex. 2017. A tale of two DRAGGNs: A hybrid approach for interpreting action-oriented and goal-oriented instructions. In Proceedings of the First Workshop on Language Grounding for Robotics. Igor Labutov, Bishan Yang, and Tom Mitchell. 2018. Learning to learn semantic parsers from natural language supervision. In Proceedings of Empirical Methods in Natural Language Processing. Carolin Lawrence and Stefan Riezler. 2018. Improving a neural semantic parser by counterfactual learning from human bandit feedback. In Proceedings of the Association for Computational Linguistics. Fei Li and HV Jagadish. 2014. Constructing an interactive natural language interface for relational databases. In Proceedings of the VLDB Endowment. Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc’Aurelio Ranzato, and Jason Weston. 2017a. Dialogue learning with human-in-the-loop. In Proceedings of the International Conference on Learning Representations. Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc’Aurelio Ranzato, and Jason Weston. 2017b. 
Learning through dialogue interactions by asking questions. In Proceedings of the International Conference on Learning Representations. Huan Ling and Sanja Fidler. 2017. Teaching machines to describe images via natural language feedback. In Proceedings of Advances in Neural Information Processing Systems. 2075 Ana-Maria Popescu, Oren Etzioni, and Henry Kautz. 2003. Towards a theory of natural language interfaces to databases. In International Conference on Intelligent User Interfaces. P. J. Price. 1990. Evaluation of spoken language systems: the ATIS domain. In Speech and Natural Language: Proceedings of a Workshop Held at Hidden Valley, Pennsylvania. Sudha Rao and Hal Daumé III. 2018. Learning to ask good questions: Ranking clarification questions using neural expected value of perfect information. In Proceedings of the Association for Computational Linguistics. Richard Shin. 2019. Encoding database schemas with relation-aware self-attention for text-to-sql parsers. arXiv preprint arXiv:1906.11790. Shashank Srivastava, Igor Labutov, and Tom Mitchell. 2017. Joint concept learning and semantic parsing from natural language explanations. In Proceedings of Empirical Methods in Natural Language Processing. Shashank Srivastava, Igor Labutov, and Tom Mitchell. 2018. Zero-shot learning of classifiers from natural language quantification. In Proceedings of the Association for Computational Linguistics. Svetlana Stoyanchev, Alex Liu, and Julia Hirschberg. 2014. Towards natural clarification questions in dialogue systems. Yu Su, Ahmed Hassan Awadallah, Madian Khabsa, Patrick Pantel, and Michael Gamon. 2017. Building natural language interfaces to web apis. In Proceedings of the ACM International Conference on Information and Knowledge Management. Yu Su, Ahmed Hassan Awadallah, Miaosen Wang, and Ryen W. White. 2018. Natural language interfaces with fine-grained user interaction: A case study on web apis. In Proceedings of the ACM SIGIR Conference on Research and Development in Information Retrieval. Bailin Wang, Richard Shin, Xiaodong Liu, Oleksandr Polozov, and Matthew Richardson. 2020. RATSQL: Relation-aware schema encoding and linking for text-to-sql parsers. In Proceedings of the Association for Computational Linguistics. David H.D. Warren and Fernando C.N. Pereira. 1982. An efficient easily adaptable system for interpreting natural language queries. American Journal of Computational Linguistics, 8. W. A. Woods, Ronald M Kaplan, and Bonnie L. Webber. 1972. The lunar sciences natural language information system: Final report. BBN Report 2378. Ziyu Yao, Xiujun Li, Jianfeng Gao, Brian Sadler, and Huan Sun. 2019a. Interactive semantic parsing for if-then recipes via hierarchical reinforcement learning. In Association for the Advancement of Artificial Intelligence. Ziyu Yao, Yu Su, Huan Sun, and Wen-tau Yih. 2019b. Model-based interactive semantic parsing: A unified framework and a text-to-SQL case study. In Proceedings of Empirical Methods in Natural Language Processing. Pengcheng Yin and Graham Neubig. 2017. A syntactic neural model for general-purpose code generation. In Proceedings of the Association for Computational Linguistics. Pengcheng Yin and Graham Neubig. 2019. Reranking for neural semantic parsing. In Proceedings of the Association for Computational Linguistics. Tao Yu, Rui Zhang, He Yang Er, Suyi Li, Eric Xue, Bo Pang, Xi Victoria Lin, Yi Chern Tan, Tianze Shi, Zihan Li, et al. 2019a. 
CoSQL: A conversational text-to-SQL challenge towards cross-domain natural language interfaces to databases. In Proceedings of Empirical Methods in Natural Language Processing. Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingning Yao, Shanelle Roman, Zilin Zhang, and Dragomir Radev. 2018. Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-SQL task. In Proceedings of Empirical Methods in Natural Language Processing. Tao Yu, Rui Zhang, Michihiro Yasunaga, Yi Chern Tan, Xi Victoria Lin, Suyi Li, Heyang Er, Irene Li, Bo Pang, Tao Chen, et al. 2019b. SParC: Cross-domain semantic parsing in context. In Proceedings of the Association for Computational Linguistics. John M. Zelle and Raymond J. Mooney. 1996. Learning to parse database queries using inductive logic programming. In Association for the Advancement of Artificial Intelligence. Luke Zettlemoyer and Michael Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In Proceedings of Uncertainty in Artificial Intelligence. Rui Zhang, Tao Yu, He Yang Er, Sungrok Shim, Eric Xue, Xi Victoria Lin, Tianze Shi, Caiming Xiong, Richard Socher, and Dragomir Radev. 2019. Editing-based SQL query generation for cross-domain context-dependent questions. In Proceedings of Empirical Methods in Natural Language Processing. Victor Zhong, Caiming Xiong, and Richard Socher. 2017. Seq2SQL: Generating structured queries from natural language using reinforcement learning. arXiv preprint arXiv:1709.00103.
A Appendix
A.1 Feedback Collection Instructions
Figure 7 shows the instructions shown to the annotators.
A.2 Feedback Collection Interface Screenshot
Figure 8 shows an example of the data collection interface. The Predicted SQL is: 'SELECT name, salary FROM instructor WHERE dept_name LIKE "%math%"'. Note that neither the gold nor the predicted SQL are shown to the annotator.
A.3 Example of Explanations
Figure 9 shows several examples of how different SQL components can be explained in natural language.
Correcting Steps for Answering Questions.
1. We have some information stored in tables; each row is a record that consists of one or more columns. Using the given tables, we can answer questions by doing simple systematic processing steps over the tables. Notice that the answer to the question is always the result of the last step. Also, notice that the steps might not be in perfect English as they were generated automatically. Each step, generates a table of some form.
2. For each question, we have generated steps to answer it, but it turned out that something is wrong with the steps. Your task is write down in English a short (one sentence most of the time) description of the mistakes and how it can be correct. It is important to note that we are not looking for rewritings of steps, but rather we want to get short natural English commands (15 words at most) that describes the correction to be made to get the correct answer.
3. Use proper and fluent English. Don't use math symbols.
4. Don't rewrite the steps after correcting them. But rather, just describe briefly the change that needs to be made.
5. We show only two example values from each table to help you understand the contents of each table. Tables typically contain several rows. Never use the shown values to write your input.
6. There could be more than one wrong piece in the steps. Please, make sure to mention all of them not just one.
7. If the steps are correct, just check the "All steps are correct" box.
8. Some of the mistakes are due to additional steps or parts of steps. Your feedback can suggest removing those parts.
9. Do not just copy parts of the questions. Be precise in your input.
10. If in doubt about how to correct a mistake, just mention what is wrong.
11. You do not have to mention which steps contain mistakes. If in doubt, do not refer to a particular step.
12. The generated steps might not sound like the smartest way for answering the question. But it is the most systematic way that works for all kinds of questions and all kinds of tables. Please, do not try to propose smarter steps.
Figure 7: Crowd-sourcing instructions.
Figure 8: An example of the data collection interface. The Predicted SQL is: 'SELECT name, salary FROM instructor WHERE dept_name LIKE "%math%"'. Note that neither the gold nor the predicted SQL are shown to the annotator.
SQL Component    Explanation
intersect        show the rows that are in both the results of step 1 and step 2
union            show the rows that are in any of the results of step 1 and step 2
except           show the rows that are in the results of step 1 but not in the results of step 2
limit n          only keep the first n rows of the results in step 1
join             for each row in Table 1, find corresponding rows in Table 2
select           find Column of Table
aggregation      find each value of Column1 in Table along with the OPERATION of Column2 of the corresponding rows to each value
ordering         order Direction by Column
condition        whose Column Operation Value
distinct         without repetition
Figure 9: Examples of how different SQL components can be explained in natural language
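As a small, hedged illustration of how the template patterns in Figure 9 might be applied, the following sketch verbalizes a couple of SQL components into explanation steps. The template strings come from Figure 9; the rendering function, slot names, and the example query are our own simplified assumptions rather than the authors' explanation generator.

```python
# Template strings below are copied from Figure 9; everything else is a simplified assumption.
TEMPLATES = {
    "select": "find {column} of {table}",
    "condition": "whose {column} {operation} {value}",
    "ordering": "order {direction} by {column}",
    "limit": "only keep the first {n} rows of the results in step 1",
    "distinct": "without repetition",
}

def render(component, **slots):
    return TEMPLATES[component].format(**slots)

# Hypothetical example for: SELECT name FROM instructor WHERE salary > 90000
step1 = render("select", column="name", table="instructor")
step2 = step1 + " " + render("condition", column="salary", operation="greater than", value="90000")
print("Step 1:", step1)  # find name of instructor
print("Step 2:", step2)  # find name of instructor whose salary greater than 90000
```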
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2078–2092 July 5 - 10, 2020. ©2020 Association for Computational Linguistics 2078
Calibrating Structured Output Predictors for Natural Language Processing
Abhyuday Jagannatha1, Hong Yu1,2
1College of Information and Computer Sciences, University of Massachusetts Amherst
2Department of Computer Science, University of Massachusetts Lowell
{abhyuday, hongyu}@cs.umass.edu
Abstract
We address the problem of calibrating prediction confidence for output entities of interest in natural language processing (NLP) applications. It is important that NLP applications such as named entity recognition and question answering produce calibrated confidence scores for their predictions, especially if the applications are to be deployed in a safety-critical domain such as healthcare. However, the output space of such structured prediction models is often too large to adapt binary or multi-class calibration methods directly. In this study, we propose a general calibration scheme for output entities of interest in neural network based structured prediction models. Our proposed method can be used with any binary class calibration scheme and a neural network model. Additionally, we show that our calibration method can also be used as an uncertainty-aware, entity-specific decoding step to improve the performance of the underlying model at no additional training cost or data requirements. We show that our method outperforms current calibration techniques for named entity recognition, part-of-speech and question answering. We also improve our model's performance from our decoding step across several tasks and benchmark datasets. Our method improves the calibration and model performance on out-of-domain test scenarios as well.
1 Introduction
Several modern machine-learning based Natural Language Processing (NLP) systems can provide a confidence score with their output predictions. This score can be used as a measure of predictor confidence. A well-calibrated confidence score is a probability measure that is closely correlated with the likelihood of the model output's correctness. As a result, NLP systems with calibrated confidence can predict when their predictions are likely to be incorrect and therefore should not be trusted. This property is necessary for the responsible deployment of NLP systems in safety-critical domains such as healthcare and finance. Calibration of predictors is a well-studied problem in machine learning (Guo et al., 2017; Platt, 2000); however, widely used methods in this domain are often defined as binary or multi-class problems (Naeini et al., 2015; Nguyen and O'Connor, 2015). The structured output schemes of NLP tasks such as information extraction (IE) (Sang and De Meulder, 2003) and extractive question answering (Rajpurkar et al., 2018) have an output space that is often too large for standard multi-class calibration schemes. Formally, we study NLP models that provide conditional probabilities pθ(y|x) for a structured output y given input x. The output can be a label sequence in the case of part-of-speech (POS) or named entity recognition (NER) tasks, a span prediction in the case of extractive question answering (QA) tasks, or a relation prediction in the case of a relation extraction task. pθ(y|x) can be used as a score of the model's confidence in its prediction. However, pθ(y|x) is often a poor estimate of model confidence for the output y.
The output space of the model in sequence-labelling tasks is often large, and therefore pθ(y|x) for any output instance y will be small. For instance, in a sequence labelling task with C classes and a sequence length of L, the number of possible events in the output space is of the order of C^L. Additionally, recent efforts (Guo et al., 2017; Nguyen and O'Connor, 2015; Dong et al., 2018; Kumar and Sarawagi, 2019) at calibrating machine learning models have shown that they are poorly calibrated. Empirical results from Guo et al. (2017) show that techniques used in deep neural networks, such as dropout and their large architecture size, can negatively affect the calibration of their outputs in binary and multi-class classification tasks. In parallel, large neural network architectures based on contextual embeddings (Devlin et al., 2018; Peters et al., 2018) have shown state-of-the-art performance across several NLP tasks (Andrew and Gao, 2007; Wang et al., 2019). They are being rapidly adopted for information extraction and other NLP tasks in safety-critical applications (Zhu et al., 2018; Sarabadani, 2019; Li et al., 2019; Lee et al., 2019). Studying the mis-calibration in such models and efficiently calibrating them is imperative for their safe deployment in the real world. In this study, we demonstrate that neural network models show high calibration errors for NLP tasks such as POS, NER and QA. We extend the work by Kuleshov and Liang (2015) to define well-calibrated forecasters for output entities of interest in structured prediction of NLP tasks. We provide a novel calibration method that applies to a wide variety of NLP tasks and can be used to produce model confidences for specific output entities instead of the complete label sequence prediction. We provide a general scheme for designing manageable and relevant output spaces for such problems. We show that our methods lead to improved calibration performance on a variety of benchmark NLP datasets. Our method also leads to improved out-of-domain calibration performance as compared to the baseline, suggesting that our calibration methods can generalize well. Lastly, we propose a procedure to use our calibrated confidence scores to re-score the predictions in our defined output event space. This procedure can be interpreted as a scheme to combine model uncertainty scores and entity-specific features with decoding methods like Viterbi. We show that this re-scoring leads to consistent improvement in model performance across several tasks at no additional training or data requirements.
2 Calibration framework for Structured Prediction NLP models
2.1 Background
Structured prediction refers to the task of predicting a structured output y = [y1, y2, ..., yL] for an input x. In NLP, a wide array of tasks including parsing, information extraction, and extractive question answering fall within this category. Recent approaches towards solving such tasks are commonly based on neural networks that are trained by minimizing the following objective:
L(θ|D) = − Σ_{i=0}^{|D|} log pθ(y^(i)|x^(i)) + R(θ)    (1)
where θ is the parameter vector of the neural network, R is the regularization penalty and D is the dataset {(y^(i), x^(i))}_{i=0}^{|D|}. The trained model pθ can then be used to produce the output ŷ = argmax_{y∈Y} pθ(y|x). Here, the corresponding model probability pθ(ŷ|x) is the uncalibrated confidence score. In binary class classification, the output space Y is {0, 1}.
The confidence score for such classifiers can then be calibrated by training a forecaster Fy : [0, 1] → [0, 1] which takes in the model confidence pθ(y|x) and produces a recalibrated score Fy(pθ(y|x)) (Platt, 2000). A widely used method for binary class calibration is Platt scaling, where Fy is a logistic regression model. Similar methods have also been defined for multi-class classification (Guo et al., 2017). However, extending this to structured prediction in NLP settings is non-trivial since the output space Y is often too large for us to calibrate the output probabilities of all events.
2.2 Related Work
Calibration methods for binary/multi-class classification have been widely studied in the related literature (Bröcker, 2009; Guo et al., 2017). Recent efforts at confidence modeling for NLP have focused on several tasks like co-reference (Nguyen and O'Connor, 2015), semantic parsing (Dong et al., 2018) and neural machine translation (Kumar and Sarawagi, 2019).
2.3 Calibration in Structured Prediction
In this section, we define the calibration framework by Kuleshov and Liang (2015) in the context of structured prediction problems in NLP. The model pθ denotes the neural network that produces a conditional probability pθ(y|x) given an (x, y) tuple. In a multi/binary class setting, a function Fy is used to map the output pθ(y|x) to a calibrated confidence score for all y ∈ Y. In a structured prediction setting, since the cardinality of Y is usually large, we instead focus on the event of interest set I(x). I(x) contains events of interest E that are defined using the output events relevant to the deployment requirements of a model. The event E is a subset of Y. There can be several different schemes to define I(x). In later sections, we discuss related work on calibration that can be understood as applications of different I(x) schemes. In this work, we define a general framework for constructing I(x) for NLP tasks which allows us to maximize calibration performance on output entities of interest. We define Fy(E, x, pθ) to be a function that takes the event E, the input feature x and pθ to produce a confidence score between [0, 1]. We refer to this calibration function as the forecaster and use Fy(E, x) as a shorthand, since it is implicit that Fy depends on the outputs of pθ. We would like to find the forecaster that minimizes the discrepancy between Fy(E, x) and P(y ∈ E|x) for (x, y) sampled from P(x, y) and E uniformly sampled from I(x). A commonly used methodology for constructing a forecaster for pθ is to train it on a held-out dataset Ddev. A forecaster for a binary classifier is perfectly calibrated if
P(y = 1 | Fy(x) = p) = p.    (2)
It is trained on samples from {(x, I(y = 1)) : (x, y) ∈ Ddev}. For our forecaster based on I(x), perfect calibration would imply that
P(y ∈ E | Fy(x, E) = p) = p.    (3)
The training data samples for our forecaster are {(x, I(y ∈ E)) : E ∈ I(x), (x, y) ∈ Ddev}.
2.4 Construction of Event of Interest set I(x)
The main contributions of this paper stem from our proposed schemes for constructing the aforementioned I(x) sets for NLP applications.
Entities of Interest: In the interest of brevity, let us define "Entities of interest" φ(x) as the set of all entity predictions that can be queried from pθ for a sample x. For instance, in the case of answer span prediction for QA, φ(x) may contain the MAP prediction of the best answer span (answer start and end indexes). In a parsing or sequence labeling task, φ(x) may contain the top-k label sequences obtained from Viterbi decoding.
In a relation or named-entity extraction task, φ(x) contains the relation or named entity span predictions, respectively. Each entity s in φ(x) corresponds to an event set E that is defined by all outputs in Y that contain the entity s. I(x) contains the set E for all entities in φ(x).
Positive Entities and Events: We are interested in providing a calibrated probability for y ∈ E corresponding to an s, for all s in φ(x). Here y is the correct label sequence for the input x. If y lies in the set E for an entity s, we refer to s as a positive entity and the event as a positive event. In the example of named entity recognition, s may refer to a predicted entity span, and E refers to all possible sequences in Y that contain the predicted span. The corresponding event is positive if the correct label sequence y contains the span prediction s.
Schemes for construction of I(x): While constructing the set φ(x), we should ensure that it is limited to a relatively small number of output entities, while still covering as many positive events in I(x) as possible. To explain this consideration, let us take the example of a parsing task such as syntax or semantic parsing. Two possible schemes for defining I(x) are:
1. Scheme 1: φ(x) contains the MAP label sequence prediction. I(x) contains the event corresponding to whether the label sequence y′ = argmax_y pθ(y|x) is correct.
2. Scheme 2: φ(x) contains all possible label sequences. I(x) contains an event corresponding to whether the label sequence y′ is correct, for all y′ ∈ Y.
Calibration of model confidence by Dong et al. (2018) can be viewed as Scheme 1, where the entity of interest is the MAP label sequence prediction. Using Platt scaling in a one-vs-all setting for multi-class classification (Guo et al., 2017) can be seen as an implementation of Scheme 2, where the entity of interest is the presence of a class label. As discussed in previous sections, Scheme 2 is too computationally expensive for our purposes due to the large value of |Y|. Scheme 1 is computationally cheaper, but it has lower coverage of positive events. For instance, a sequence labelling model with a 60% accuracy at the sentence level means that only 60% of positive events are covered by the set corresponding to argmax_y pθ(y|x) predictions. In other words, only 60% of the correct outputs of model pθ will be used for constructing the forecaster. This can limit the positive events in I(x). Including the top-k predictions in φ(x) may increase the coverage of positive events and therefore increase the positive training data for the forecaster. The optimum choice of k involves a trade-off. A larger value of k implies broader coverage of positive events and more positive training data for the forecaster training.
Calibration            BERT        BERT+CRF    DistilBERT
Platt                  15.90±.03   15.56±.23   12.30±.13
Calibrated Mean        2.55±.34    2.31±.35    2.02±.16
+Var                   2.11±.32    2.55±.32    2.73±.40
Platt+top2             11.4±.07    14.21±.16   11.03±.31
Calibrated Mean+top2   2.94±.29    4.82±.15    3.61±.17
+Var+top2              2.17±.35    4.26±.10    2.43±.16
+Rank+top2             2.43±.30    2.43±.45    2.21±.09
+Rank+Var+top2         1.81±.12    2.29±.27    1.97±.14
Platt+top3             17.46±.13   18.11±.16   12.84±.37
+Rank+Var+top3         3.18±.12    3.71±.25    2.05±.06
Table 1: ECE percentages on Penn Treebank for different models and calibration methods. The results are for top-1 MAP predictions on the test data. ECE standard deviation is estimated by repeating the experiments for 5 repetitions.
ECE for the uncalibrated BERT, BERT+CRF and DistilBERT models is 35.11%, 33.72% and 28.06%, respectively. heuristic-k is 2 for all +Rank+Var+topk forecasters. The full feature model +Rank+Var+topk with k = 3 is also provided for completeness.
However, it may also lead to an unbalanced training dataset that is skewed in favour of negative training examples. Task-specific details about φ(x) are provided in the later sections. For the purposes of this paper, top-k refers to the top k MAP sequence predictions, also referred to as argmax(k).
2.5 Forecaster Construction
Here we provide a summary of the steps involved in forecaster construction. Remaining details are in the Appendix. We train the neural network model pθ on the training data split for a task and use the validation data for monitoring the loss and early stopping. After the training is complete, this validation data is re-purposed to create the forecaster training data. We use an MC-Dropout (Gal and Ghahramani, 2016) average of n = 10 samples to get a low-variance estimate of the logit outputs from the neural networks. This average is fed into the decoding step of the model pθ to obtain top-k label sequence predictions. We then collect the relevant entities in φ(x), along with the I(y ∈ E) labels, to form the training data for the forecaster. We use gradient boosted decision trees (Friedman, 2001) as our region-based (Dong et al., 2018; Kuleshov and Liang, 2015) forecaster model.
Choice of the hyperparameter k: We limit our choice of k to {2, 3}. We train our forecasters on training data constructed through top-2 and top-3 extraction each. These two models are then evaluated on top-1 extraction training data, and the best value of k is used for evaluation on test. This heuristic for k selection is based on the fact that the top-1 training data for a good predictor pθ is a positive-event rich dataset. Therefore, this dataset can be used to reject a larger k if it leads to reduced performance on positive events. We refer to the value of k obtained from this heuristic as heuristic-k.
2.6 Feature Construction for Calibration
We use three categories of features as inputs to our forecaster.
Model and Model Uncertainty based features contain the mean probability obtained by averaging over the marginal probability of the "entity of interest" obtained from 10 MC-dropout samples of pθ. The average of marginal probabilities acts as a reduced-variance estimate of un-calibrated model confidence. Our experiments use pre-trained contextual word embedding architectures as the backbone networks. We obtain MC-Dropout samples by enabling dropout sampling for all dropout layers of the networks. We also provide the 10th and 90th percentile values from the MC-Dropout samples, to provide model uncertainty information to the forecaster. Since our forecaster training data contains entity predictions from top-k MAP predictions, we also include the rank k as a feature. We refer to these two features as "Var" and "Rank" in our models.
Entity of interest based features contain the length of the entity span if the output task is named entity recognition. We only use this feature in the NER experiments and refer to it as "ln".
Calibration        BERT        BERT+CRF    DistilBERT
Baseline           60.30±.12   62.31±.11   60.17±.08
+Rank+Var+top2     60.30±.23   62.31±.09   60.13±.11
+Rank+Var+top3     59.84±.16   61.06±.14   58.95±.08
Table 2: Micro-avg f-score for POS datasets using the baseline and our best proposed calibration method.
The confidence score from the calibration method is used to re-rank the events E ∈ I(s) and the top selection is chosen. Standard deviation is estimated by repeating the experiments for 5 repetitions. Baseline refers to the MC-dropout averaged (sample-size=10) output from the model pθ. heuristic-k is 2 for +Rank+Var+topk forecasters.
Data Uncertainty based features: Dong et al. (2018) propose the use of language modelling (LM) and OOV-word-based features as a proxy for data uncertainty estimation. The use of word-pieces and large pre-training corpora in contextual word embedding models like BERT may affect the efficacy of LM based features. Nevertheless, we use LM perplexity (referred to as "lm") in the QA task to investigate its effectiveness as an indicator of the distributional shift in data. Essentially, our analysis focuses on LM perplexity as a proxy for distributional uncertainty (Malinin and Gales, 2018) in our out-of-domain experiments. The use of word-pieces in models like BERT reduces the negative effect of OOV words on model prediction. Therefore, we do not include OOV features in our experiments.
3 Experiments and Results
We use the BERT-base (Devlin et al., 2018) and distilBERT (Sanh et al., 2019) network architectures for our experiments. The validation split for each dataset was used for early stopping of BERT fine-tuning and as training data for forecaster training. POS and NER experiments are evaluated on Penn Treebank, and on CoNLL 2003 (Sang and De Meulder, 2003) and MADE 1.0 (Jagannatha et al., 2019), respectively. QA experiments are evaluated on the SQuAD1.1 (Rajpurkar et al., 2018) and EMRQA (Pampari et al., 2018) corpora. We also investigate the performance of our forecasters on an out-of-domain QA corpus constructed by applying the EMRQA QA data generation scheme (Pampari et al., 2018) on the MADE 1.0 named entity and relations corpus. Details for these datasets are provided in their relevant sections. We use the expected calibration error (ECE) metric defined by Naeini et al. (2015) with N = 20 bins (Guo et al., 2017) to evaluate the calibration of our models. ECE is defined as an estimate of the expected difference between the model confidence and accuracy. ECE has been used in several related works (Guo et al., 2017; Maddox et al., 2019; Kumar et al., 2018; Vaicenavicius et al., 2019) to estimate model calibration. We use Platt scaling as the baseline calibration model. It uses the length-normalized probability averaged across 10 MC-Dropout samples as the input. The lower variance and length invariance of this input feature make Platt scaling a strong baseline. We also use a "Calibrated Mean" baseline using gradient boosted decision trees as our estimator with the same input feature as Platt.
3.1 Calibration for Part-of-Speech Tagging
Part-of-speech (POS) tagging is a sequence labelling task where the input is a text sentence, and the output is a sequence of syntactic tags. We evaluate our method on the Penn Treebank dataset (Marcus et al., 1994). We can define either the token prediction or the complete sequence prediction as the entity of interest. Since using a token-level entity of interest effectively reduces the calibration problem to that of calibrating a multi-class classifier, we instead study the case where the predicted label sequence of the entire sentence forms the entity of interest set. The event of interest set is defined by the events y = MAPk(x), which denote whether each top-k sentence-level MAP prediction is correct. We use three choices of pθ models, namely BERT, BERT-CRF and distilBERT. We use model uncertainty and rank based features for our POS experiments.
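As a reference for the evaluations below, the ECE metric described above can be computed with the standard binned estimator, ECE = Σ_m (|B_m|/n) · |acc(B_m) − conf(B_m)|, using N = 20 equal-width confidence bins. The sketch below is a minimal, generic implementation of that estimator and is not taken from the authors' code; the exact bin-boundary handling is our own assumption.

```python
import numpy as np

def expected_calibration_error(confidences, corrects, n_bins=20):
    """ECE over entity-level predictions.

    confidences: forecaster (or uncalibrated model) scores in [0, 1].
    corrects: binary indicators of whether each entity prediction was correct.
    """
    confidences = np.asarray(confidences, dtype=float)
    corrects = np.asarray(corrects, dtype=float)
    n = len(confidences)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = edges[b], edges[b + 1]
        # right-closed last bin so that a confidence of exactly 1.0 is counted
        in_bin = (confidences >= lo) & ((confidences < hi) if b < n_bins - 1 else (confidences <= hi))
        if in_bin.any():
            acc = corrects[in_bin].mean()        # empirical accuracy in the bin
            conf = confidences[in_bin].mean()    # average confidence in the bin
            ece += (in_bin.sum() / n) * abs(acc - conf)
    return ece
```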
Table 1 shows the ECE values for our baseline, proposed and ablated models. The value of heuristic-k is 2 for all +Rank+Var+topk forecasters across all PTB models. "topk" in Table 1 refers to forecasters trained with additional top-k predictions. Our methods outperform both baselines by a large margin. Both "Rank" and "Var" features help in improving model calibration. Inclusion of top-2 prediction sequences also improves the calibration performance significantly. Table 1 also shows the performance of our full feature model "+Rank+Var+topk" for the sub-optimal value of k = 3. It has lower performance than k = 2 across all models. Therefore, for the subsequent experimental sections, we only report top-k calibration performance using the heuristic-k value.
We use the confidence predictions of our full-feature model +Rank+Var+topk to re-rank the top-k predictions in the test set. Table 2 shows the sentence-level (entity of interest) accuracy for our re-ranked top prediction and the original model prediction.
Calibration             CoNLL (BERT)   MADE 1.0 (bioBERT)
Platt                   2.00±.12       4.00±.07
Calibrated Mean         2.29±.33       3.07±.18
+Var                    2.43±.36       3.05±.17
+Var+ln                 2.24±.14       2.92±.24
Platt+top3              16.64±.48      2.14±.18
Calibrated Mean+top3    17.06±.50      2.22±.31
+Var+top3               17.10±.24      2.17±.39
+Rank+Var+top3          2.01±.33       2.34±.15
+Rank+Var+ln+top3       1.91±.29       2.12±.24
Table 3: ECE percentages for the two named entity datasets and calibration methods. The results are for all predicted named entity spans in top-1 MAP predictions on the test data. ECE standard deviation is estimated by repeating the experiments for 5 repetitions. ECE for uncalibrated span marginals from the BERT model is 3.68% and 5.59% for the CoNLL and MADE 1.0 datasets. heuristic-k is 3 for all +Rank+Var+top3 forecasters.
Calibration             CoNLL (BERT)   MADE 1.0 (bioBERT)
Baseline                89.45±.08      84.01±.11
+Rank+Var+top3          89.73±.12      84.33±.07
+Rank+Var+ln+top3       89.78±.10      84.34±.10
Table 4: Micro-avg f-score for NER datasets and our best proposed calibration method. The confidence score from the calibration method is used to re-rank the events E ∈ I(s) and a confidence value of 0.5 is used as a cutoff. Standard deviation is estimated by repeating the experiments for 5 repetitions. Baseline refers to the MC-dropout averaged (sample-size=10) output of model pθ. heuristic-k is 3 for all +Rank+Var+top3 forecasters.
3.2 Calibration for Named Entities
For Named Entity (NE) Recognition experiments, we use two NE annotated datasets, namely CoNLL 2003 and MADE 1.0. CoNLL 2003 consists of documents from the Reuters corpus annotated with named entities such as Person, Location, etc. The MADE 1.0 dataset is composed of electronic health records annotated with clinical named entities such as Medication, Indication and Adverse effects. The entity of interest for NER is the named entity span prediction. We define φ(x) as the predicted entity spans in the argmax(k) label sequence predictions for x. We use BERT-base with token-level softmax output and marginal likelihood based training. The model uncertainty estimates for the "Var" feature are computed by estimating the variance of length-normalized MC-dropout samples of span marginals. Due to the similar trends in the behavior of the BERT and BERT+CRF models in the POS experiments, we only use the BERT model for NER. However, the span marginal computation can be easily extended to linear-chain CRF models. We also use the length of the predicted named entity as the feature "ln" in this experiment. Complete details about the forecaster and baselines are in the Appendix.
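To make the forecaster pipeline of Sections 2.5–2.6 concrete, the following is a minimal sketch of how entity-level features could be assembled and fed to a gradient-boosted forecaster, and how its calibrated confidence could be used for re-scoring (a 0.5 cutoff is the value used for NER spans below). Function names and the use of scikit-learn's GradientBoostingClassifier are our own assumptions for illustration; the paper only specifies gradient boosted decision trees (Friedman, 2001), not a particular implementation.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def entity_features(mc_dropout_probs, rank, span_len):
    """One feature row per entity of interest.

    mc_dropout_probs: length-normalized probabilities of the entity across the 10
    MC-Dropout samples; rank: which top-k hypothesis the entity came from;
    span_len: entity length (the "ln" feature, used for NER only).
    """
    p = np.asarray(mc_dropout_probs, dtype=float)
    return [p.mean(), np.percentile(p, 10), np.percentile(p, 90), rank, span_len]

def fit_forecaster(rows, labels):
    """rows: feature rows built on the re-purposed validation data; labels: 1 if the
    entity is a positive entity (the gold output lies in its event E), else 0."""
    forecaster = GradientBoostingClassifier()
    forecaster.fit(np.asarray(rows), np.asarray(labels))
    return forecaster

def rescore(forecaster, candidate_rows, threshold=0.5):
    """Return (confidence, row) pairs above the cutoff, highest confidence first."""
    conf = forecaster.predict_proba(np.asarray(candidate_rows))[:, 1]
    kept = [(c, r) for c, r in zip(conf, candidate_rows) if c >= threshold]
    return sorted(kept, key=lambda x: -x[0])
```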
Value of heuristic-k is 3 for all +Rank+Var+topk forecasters. We show ablation and baseline results for k = 3 only. However, no other forecasters for any k ∈{2, 3} outperform our best forecasters in Table 3. We use the confidence predictions of our “+Rank+Var+top3” models to re-score the confidence predictions for all spans predicted in top-3 MAP predictions for samples in the test set. A threshold of 0.5 was used to remove span predictions with low confidence scores. Table 4 shows the Named Entity level (entity of interest) MicroF score for our re-ranked top prediction and the original model prediction. We see that re-ranked predictions from our models consistently improve the model f-score. 3.3 Calibration for QA Models We use three datasets for evaluation of our calibration methods on the QA task. Our QA tasks are modeled as extractive QA methods with a single span answer predictions. We use three datasets to construct experiments for QA calibration. SQuAD1.1 and EMRQA (Pampari et al., 2018) are open-domain and clinical-domain QA datasets, respectively. We process the EMRQA dataset by restricting the passage length and removing unanswerable questions. We also design an out-of-domain evaluation of calibration using clinical QA datasets. We follow the guidelines from Pampari et al. (2018) to create a QA dataset 2084 Calibration SQuAD1.1 EMRQA MADE 1.0 MADE 1.0(OOD) (BERT) (bioBERT) (bioBERT) (bioBERT) Platt 3.69±.16 5.07±.37 3.64±.17 15.20±.16 Calibrated Mean 2.95±.26 2.28±.18 2.50±.31 13.26±.94 +Var 2.92±.28 2.74±.15 2.71±.32 12.41±.95 Platt+top3 7.71±.28 5.42±.25 11.87±.19 16.36±.26 Calibrated Mean+top3 3.52±.35 2.11±.19 9.21±.25 12.11±.24 +Var+top3 3.56±.29 2.20±.20 9.26±.27 11.67±.27 +Var+lm+top3 3.54±.21 2.12±.19 6.07±.26 12.42±.32 +Rank+Var+top3 2.47±.18 1.98±.10 1.77±.23 12.69±.20 +Rank+Var+lm+top3 2.79±.32 2.24±.29 1.66±.27 12.60±.28 Table 5: ECE percentages for QA tasks SQuAD1.1, EMRQA and MADE 1.0. MADE 1.0(OOD) refers to the out-of-domain evaluation of a QA model that is trained and calibrated on EMRQA training and validation splits. The results are for top-1 MAP predictions on the test data. ECE standard deviation is estimated by repeating the experiments for 5 repetitions. BERT model’s uncalibrated ECE for SQuAD1.1, EMRQA, MADE 1.0 and MADE 1.0(OOD) are 6.24% 6.10%, 20.10% and 18.70% respectively. heuristic-k is 3 for all +Rank+Var+topk forecasters. Calibration SQuAD1.1 EMRQA MADE 1.0 MADE 1.0(OOD) (BERT) (bioBERT) (bioBERT) (bioBERT) Baseline 79.79±.08 70.97±.14 66.21±.18 31.62±.12 +Rank+Var+top3 80.04±.11 71.34±.22 66.33±.12 31.99±.11 +Rank+Var+lm+top3 80.03±.15 71.37±.26 66.33±.15 32.02±.09 Table 6: Table shows change in Exact Match Accuracy for QA datasets and our best proposed calibration method. The confidence score from the calibration method is used to re-rank the events E ∈I(s). Standard deviation is estimated by repeating the experiments for 5 repetitions. Baseline refers to MC-dropout averaged (sample-size=10) output of model pθ. heuristic-k is 3 for all +Rank+Var+topk forecasters. from MADE 1.0 (Jagannatha et al., 2019). This allows us to have two QA datasets with common question forms, but different text distributions. In this experimental setup we can mimic the evaluation of calibration methods in a real-world scenario, where the task specifications may remain the same but the underlying text source changes. Details about dataset pre-processing and construction are provided in the Appendix. The entity of interest for QA is the top-k answer span predictions. 
We use the “lm” perplexity as a feature in this experiment to analyze its behaviour in out-of-domain evaluations. We use a 2 layer unidirectional LSTM to train a next word language model on the EMRQA passages. This language model is then used to compute the perplexity of a sentence for the “lm” input feature to the forecaster. We use the same baselines as the previous two tasks. Based on Table 5, our methods outperform the baselines by a large margin in both in-domain and out-of-domain experiments. Value of heuristic-k is 3 for all +Rank+Var+topk forecasters. We show ablation and baseline results for k = 3 only. However, no other forecasters for any k ∈{2, 3} outperform our best forecasters in Table 5 . Our models are evaluated on SQuAD1.1 dev set, and test sets from EMRQA and MADE 1.0. They show consistent improvements in ECE and Exact Match Accuracy. 4 Discussion Our proposed methods outperform the baselines in most tasks and are almost as competitive in others. Features and top-k samples: The inclusion of top-k features improve the performance in almost all tasks when the rank of the prediction is included. We see large increases in calibration error when the top-k prediction samples are included in forecaster training without including the rank information in tasks such as CoNLL NER and MADE 1.0 QA. This may be because the k = 1, 2, 3 predictions 2085 Figure 1: Modified reliability plots (Accuracy - Confidence vs Confidence) on MADE 1.0 QA test. The dotted horizontal line represents perfect calibration. Scatter point diameter denotes bin size. The inner diameter of the scatter point denotes the number of positive events in that bin. may have similar model confidence and uncertainty values. Therefore a more discriminative signal such as rank is needed to prioritize them. For instance, the difference between probabilities of k = 1 and k = 2 MAP predictions for POS tagging may differ by only one or two tokens. In a sentence of length 10 or more, this difference in probability when normalized by length would account to very small shifts in the overall model confidence score. Therefore an additional input of rank k leads to a substantial gain in performance for all models in POS. Our task-agnostic scheme of “Rank+Var+topk” based forecasters consistently outperform or stay competitive to other forecasting methods. However, results from task-specific features such as “lm” and “len” show that use of task-specific features can further reduce the calibration error. Our domain shift experimental setup has the same set of questions in both in-domain and out-of-domain datasets. Only the data distribution for the answer passage is different. However, we do not observe an improvement in out-of-domain performance by using “lm” feature. A more detailed analysis of task-specific features in QA with both data and question shifts is required. We leave further investigations of such schemes as our future work. Choice of k is important : The optimal choice of k seems to be strongly dependent on the inherent properties of the tasks and its output event set. In all our experiments, for a specific task all Figure 2: An example of named entity span from CoNLL dataset. Rank is kth rank from top-k MAP inference (Viterbi decoding). Mean Prob and Std is the mean and standard deviation of length-normalized probabilities (geometric mean of marginal probabilities for each token in the span). Calibrated confidence is the output of Rank+Var+ln+top3. 
+Rank+Var+topk forecasters exhibit consistent behaviours with respect to the choice of k. In POS experiments, heuristic-k = 2. In all other tasks, heuristic-k = 3. Our heuristic-k models are the best performing models, suggesting that the heuristic described in Section 2.5 may generalize to other tasks as well. Re-scoring : We show that using our forecaster confidence to re-rank the entities of interest leads to a modest boost in model performance for the NER and QA tasks. In POS no appreciable gain or drop in performance was observed for k = 2. We believe this may be due to the already high token level accuracy (above 97%) on Penn Treebank data. Nevertheless, this suggests that our re-scoring does 2086 not lead to a degradation in model performance in cases where it is not effective. Our forecaster re-scores the top-k entity confidence scores based on model uncertainty score and entity-level features such as entity lengths. Intuitively, we want to prioritize predictions that have low uncertainty over high uncertainty predictions, if their uncalibrated confidence scores are similar. We provide an example of such re-ranking in Figure 2. It shows a named entity span predictions for the correct span “Such”. The model pθ produces two entity predictions “off-spinner Such” and “Such”. The un-calibrated confidence score of “off-spinner Such” is higher than “Such”, but the variance of its prediction is higher as well. Therefore the +Rank+Var+ln+top3 re-ranks the second (and correct) prediction higher. It is important to note here that the variance of “off-spinner Such” may be higher just because it involves two token predictions as compared to only one token prediction in “Such”. This along with the “ln” feature in +Rank+Var+ln+top3 may mean that the forecaster is also using length information along with uncertainty to make this prediction. However, we see similar improvements in QA tasks, where the “ln” feature is not used, and all entity predictions involve two predictions (span start and end index predictions). These results suggest that use of uncertainty features are useful in both calibration and re-ranking of predicted structured output entities. Out-of-domain Performance : Our experiments testing the performance of calibrated QA systems on out-of-domain data suggest that our methods result in improved calibration on unseen data as well. Additionally, our methods also lead to an improvement in system accuracy on out-ofdomain data, suggesting that the mapping learned by the forecaster model is not specific to a dataset. However, there is still a large gap between the calibration error for within domain and out-of-domain testing. This can be seen in the reliability plot shown in Figure 1. The number of samples in each bin are denoted by the radius of the scatter point. The calibrated models shown in the figure corresponds to “+Rank+Var+lm+top3’ forecaster calibrated using both in-domain and out-of-domain validation datasets for forecaster training. We see that out-of-domain forecasters are over-confident and this behaviour is not mitigated by using datauncertainty aware features like “lm”. This is likely due to a shift in model’s prediction error when applied to a new dataset. Re-calibration of the forecaster using a validation set from the out-of-domain data seems to bridge the gap. However, we can see that the sharpness (Kuleshov and Liang, 2015) of out-of-domain trained, in-domain calibrated model is much lower than that of in-domain trained, indomain calibrated one. 
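To make the re-ranking behaviour illustrated in Figure 2 concrete, the sketch below re-orders top-k entity candidates by a forecaster score computed from the rank, the mean and standard deviation of the length-normalized probability, and the span length. The stub scoring function and all feature values are purely illustrative stand-ins for the trained gradient-boosted forecaster.

```python
def rerank(candidates, forecaster):
    """Order candidate entities by calibrated (forecaster) confidence."""
    return sorted(candidates,
                  key=lambda c: forecaster(c["rank"], c["mean_prob"], c["std"], c["length"]),
                  reverse=True)

def stub_forecaster(rank, mean_prob, std, length):
    # Illustrative stand-in: penalize high MC-dropout spread and low rank.
    return mean_prob - 0.5 * std - 0.05 * (rank - 1)

candidates = [  # hypothetical feature values in the spirit of Figure 2
    {"span": "off-spinner Such", "rank": 1, "mean_prob": 0.52, "std": 0.25, "length": 2},
    {"span": "Such",             "rank": 2, "mean_prob": 0.49, "std": 0.05, "length": 1},
]
print([c["span"] for c in rerank(candidates, stub_forecaster)])
# -> ['Such', 'off-spinner Such']
```

In the actual system the scoring function is the trained +Rank+Var+ln+top3 forecaster rather than a hand-written rule; the sketch only shows the mechanics of preferring the lower-uncertainty candidate when the uncalibrated confidences are close.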
Additionally, a validation dataset is often not available in the real world. Mitigating the loss in calibration and sharpness induced by out-of-domain evaluation is an important avenue for future research. Uncertainty Estimation : We use MC-Dropout as a model (epistemic) uncertainty estimation method in our experiments. However, our method is not specific to MC-Dropout, and is compatible with any method that can provide a predictive distribution over token level outputs. As a result any bayesian or ensemble based uncertainity estimation method (Welling and Teh, 2011; Lakshminarayanan et al., 2017; Ritter et al., 2018) can be used with our scheme. In this work, we do not investigate the use of aleatoric uncertainty for calibration. Our use of language model features is aimed at accounting for distributional uncertainty instead of aleatoric uncertainty (Gal, 2016; Malinin and Gales, 2018). Investigating the use of different types of uncertainty for calibration remains as our future work. 5 Conclusion We show a new calibration and confidence based re-scoring scheme for structured output entities in NLP. We show that our calibration methods outperform competitive baselines on several NLP tasks. Our task-agnostic methods can provide calibrated model outputs of specific entities instead of the entire label sequence prediction. We also show that our calibration method can provide improvements to the trained model’s accuracy at no additional training or data cost. Our method is compatible with modern NLP architectures like BERT. Lastly, we show that our calibration does not over-fit on in-domain data and is capable of generalizing the calibration to out-of-domain datasets. Acknowledgement Research reported in this publication was supported by the National Heart, Lung, and Blood Institute (NHLBI) of the National Institutes of Health under Award Number R01HL125089. 2087 References Galen Andrew and Jianfeng Gao. 2007. Scalable training of L1-regularized log-linear models. In Proceedings of the 24th International Conference on Machine Learning, pages 33–40. Jochen Br¨ocker. 2009. Reliability, sufficiency, and the decomposition of proper scores. Quarterly Journal of the Royal Meteorological Society: A journal of the atmospheric sciences, applied meteorology and physical oceanography, 135(643):1512–1519. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Li Dong, Chris Quirk, and Mirella Lapata. 2018. Confidence modeling for neural semantic parsing. arXiv preprint arXiv:1805.04604. Jerome H Friedman. 2001. Greedy function approximation: a gradient boosting machine. Annals of statistics, pages 1189–1232. Yarin Gal. 2016. Uncertainty in deep learning. University of Cambridge, 1:3. Yarin Gal and Zoubin Ghahramani. 2016. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In international conference on machine learning, pages 1050–1059. Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. 2017. On calibration of modern neural networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1321–1330. JMLR. org. Abhyuday Jagannatha, Feifan Liu, Weisong Liu, and Hong Yu. 2019. Overview of the first natural language processing challenge for extracting medication, indication, and adverse drug events from electronic health record notes (made 1.0). Drug safety, 42(1):99–111. 
Volodymyr Kuleshov and Percy S Liang. 2015. Calibrated structured prediction. In Advances in Neural Information Processing Systems, pages 3474–3482. Aviral Kumar and Sunita Sarawagi. 2019. Calibration of encoder decoder models for neural machine translation. arXiv preprint arXiv:1903.00802. Aviral Kumar, Sunita Sarawagi, and Ujjwal Jain. 2018. Trainable calibration measures for neural networks from kernel mean embeddings. In International Conference on Machine Learning, pages 2810– 2819. Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. 2017. Simple and scalable predictive uncertainty estimation using deep ensembles. In Advances in neural information processing systems, pages 6402–6413. Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2019. Biobert: pre-trained biomedical language representation model for biomedical text mining. arXiv preprint arXiv:1901.08746. Fei Li, Yonghao Jin, Weisong Liu, Bhanu Pratap Singh Rawat, Pengshan Cai, and Hong Yu. 2019. Finetuning bidirectional encoder representations from transformers (bert)–based models on large-scale electronic health record notes: An empirical study. JMIR Med Inform, 7(3):e14830. Wesley Maddox, Timur Garipov, Pavel Izmailov, Dmitry Vetrov, and Andrew Gordon Wilson. 2019. A simple baseline for bayesian uncertainty in deep learning. arXiv preprint arXiv:1902.02476. Andrey Malinin and Mark Gales. 2018. Predictive uncertainty estimation via prior networks. In Advances in Neural Information Processing Systems, pages 7047–7058. Mitchell Marcus, Grace Kim, Mary Ann Marcinkiewicz, Robert MacIntyre, Ann Bies, Mark Ferguson, Karen Katz, and Britta Schasberger. 1994. The penn treebank: annotating predicate argument structure. In Proceedings of the workshop on Human Language Technology, pages 114–119. Association for Computational Linguistics. Mahdi Pakdaman Naeini, Gregory Cooper, and Milos Hauskrecht. 2015. Obtaining well calibrated probabilities using bayesian binning. In Twenty-Ninth AAAI Conference on Artificial Intelligence. Cl´audio A Naranjo, Usoa Busto, Edward M Sellers, P Sandor, I Ruiz, EA Roberts, E Janecek, C Domecq, and DJ Greenblatt. 1981. A method for estimating the probability of adverse drug reactions. Clinical Pharmacology & Therapeutics, 30(2):239–245. Khanh Nguyen and Brendan O’Connor. 2015. Posterior calibration and exploratory analysis for natural language processing models. arXiv preprint arXiv:1508.05154. Anusri Pampari, Preethi Raghavan, Jennifer Liang, and Jian Peng. 2018. emrqa: A large corpus for question answering on electronic medical records. arXiv preprint arXiv:1809.00732. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. arXiv preprint arXiv:1802.05365. J Platt. 2000. Probabilistic outputs for support vector machines and comparison to regularized likelihood methods. Advances in Large Margin Classifiers, pages 61–74. Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don’t know: Unanswerable questions for squad. arXiv preprint arXiv:1806.03822. 2088 Hippolyt Ritter, Aleksandar Botev, and David Barber. 2018. A scalable laplace approximation for neural networks. In 6th International Conference on Learning Representations, ICLR 2018-Conference Track Proceedings, volume 6. Erik F Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: Languageindependent named entity recognition. arXiv preprint cs/0306050. 
Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108. Sarah Sarabadani. 2019. Detection of adverse drug reaction mentions in tweets using elmo. In Proceedings of the Fourth Social Media Mining for Health Applications (# SMM4H) Workshop & Shared Task, pages 120–122. Juozas Vaicenavicius, David Widmann, Carl Andersson, Fredrik Lindsten, Jacob Roll, and Thomas B Sch¨on. 2019. Evaluating model calibration in classification. arXiv preprint arXiv:1902.06977. Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2019. Superglue: A stickier benchmark for general-purpose language understanding systems. arXiv preprint arXiv:1905.00537. Max Welling and Yee W Teh. 2011. Bayesian learning via stochastic gradient langevin dynamics. In Proceedings of the 28th international conference on machine learning (ICML-11), pages 681–688. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R´emi Louf, Morgan Funtowicz, et al. 2019. Transformers: State-of-theart natural language processing. arXiv preprint arXiv:1910.03771. Henghui Zhu, Ioannis Ch Paschalidis, and Amir Tahmasebi. 2018. Clinical concept extraction with contextual word embedding. arXiv preprint arXiv:1810.10566. 2089 A Appendices A.1 Algorithm Details: The forecaster construction algorithm is provided in Algorithm 1. The candidate events in Algorithm 1 are obtained by extracting top-k label sequences for every output. The logits obtained from pθ are averaged over 10 MC-Dropout samples before being fed into the final output layer. We use the validation dataset from the task’s original split to train the forecaster. The validation dataset is used to construct both training and validation split for the forecaster. The training split contains all top-k predicted entities. The validation split contains only top-1 predicted entities. A.2 Evaluation Details We use the expected calibration error (ECE) score defined by (Naeini et al., 2015) to evaluate our calibration methods. Expected calibration error is a score that estimates the expected absolute difference between model confidence and accuracy. This is calculated by binning the model outputs into N (N = 20 for our experiments) bins and then computing the expected calibration error across all bins. It is defined as ECE = N X i=0 |Bi| n |acc(Bi) −conf(Bi)|, (4) where N is the number of bins, n is the total number of data samples, Bi is the ith bin. The functions acc(.) and conf(.) calculate the accuracy and model confidence for a bin. A.3 Implementation Details We use AllenNLP’s wrapper with HuggingFace’s Transformers code 1 for our implementation2. We use BERT-base-cased (Wolf et al., 2019) weights as the initialization for general-domain datasets and bio-BERT weights (Lee et al., 2019) as the initialization for clinical datasets. We use cased models for our analysis, since bio-BERT(Lee et al., 2019) uses cased models. A common learning rate of 2e5 was used for all experiments. We used validation data splits provided by the datasets. In cases where the validation dataset was not provided, such as MADE 1.0, EMRQA or SQuAD1.1, we use 10% 1https://github.com/huggingface/transformers 2The code for forecaster construction is available at https://github.com/abhyudaynj/ StructuredPredictionCalibrationNLP of the training data as the validation data. 
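To make Equation 4 in Appendix A.2 concrete, i.e. ECE = sum_i (|B_i|/n) * |acc(B_i) - conf(B_i)|, the following is a minimal sketch with N = 20 equal-width bins as used in the experiments; variable names are illustrative.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=20):
    """ECE (Eq. 4): sum over bins B_i of |B_i|/n * |acc(B_i) - conf(B_i)|."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    n = len(confidences)
    # Equal-width bins over [0, 1]; a confidence of exactly 1.0 falls into the last bin.
    bin_ids = np.minimum((confidences * n_bins).astype(int), n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = bin_ids == b
        if mask.any():
            ece += (mask.sum() / n) * abs(correct[mask].mean() - confidences[mask].mean())
    return ece

# Four forecaster outputs with 0/1 correctness of the associated events.
print(expected_calibration_error([0.92, 0.81, 0.65, 0.30], [1, 1, 0, 0]))
```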
We use a patience of 5 for early stopping the model, with each epoch consisting of 20,000 steps. We use the final evaluation metric instead of negative log likelihood (NLL) to monitor and early stop the training. This is to reduce the mis-calibration of the underlying pθ model, since Guo et al. (2017) observe that neural nets overfit on NLL. The implementation for each experiment is provided in the following subsections. A.3.1 Part-of-speech experiments We evaluate our method on the Penn Treebank dataset (Marcus et al., 1994). Our experiment uses the standard training (1-18), validation(19-21) and test (22-24) splits from the WSJ portion of the Penn Treebank dataset. The un-calibrated output of our model for a candidate label sequence is estimated as ˆp = 1 M X MC−Dropout pθ(y1, y2, ...yL|x) 1 L , (5) where M is the number of dropout samples. The Lth root accounts for different sentence lengths. Here L is the length of the sentence. We observe that this kind of normalization improves the calibration of both baselines and proposed models. We do not normalize the probabilities while reporting the ECE of uncalibrated models. We use two choice of pθ models, namely BERT and BERT+CRF. BERT only model adds a linear layer to the output of BERT network and uses a softmax activation function to produce marginal label probabilities for each token. BERT+CRF uses a CRF layer on top of unary potentials obtained from the BERT network outputs. We use Platt Scaling (Platt, 2000) as the baseline calibration model. Our Platt scaling model uses the MC-Dropout average of length normalized probability output of the model pθ as input. The lower variance and length invariance of this input feature make Platt Scaling a very strong baseline. We also use a “Calibrated Mean” baseline using Gradient Boosted Decision Trees as our estimator with the same input feature as Platt. A.3.2 NER Experiments For CoNLL dataset, “testa” file was reserved for validation data and “testb” was reserved for test. For MADE 1.0 (Jagannatha et al., 2019), since validation data split was not provided we randomly selected 10% of training data as validation data. 2090 Algorithm 1: Forecaster construction for model pθ with max rank kmax. Input: Uncalibrated model pθ , Validation Dataset D = {(x(i), y(i)}|D| i=0 , kmax. Output: Forecaster Fy Function Get-Forecaster (pθ, D, kmax) for i ←0 to |D| do I(x(i)) ←Get-Candidate-Events(pθ, x(i), kmax) Dtrain ←{(x(i), c, E) : c = 1(y(i) ∈E) , ∀E ∈I(x)} Ik=1(x(i)) ←Get-Candidate-Events(pθ, x(i), 1) Dval ←{(x(i), c, E) : c = 1(y(i) ∈E) , ∀E ∈Ik=1(x)} end Train Forecasters F (k) y for k = {1, ..., kmax} using Dtrain Fy ←F (k) y with minimum ECE on Dval return Fy Function Get-Candidate-Events (pθ,x,kmax) Construct top-kmax label sequences using MC-Dropout average of pθ(x) logits. Extract relevant entity set φ(x) from top-kmax label sequences. I(x) ←Events corresponding to entities in φ(x). return I(x); The length normalized marginal probability for a span starting at i and of length l is estimated as ˆp = 1 M X MC−Dropout pθ(yi, y2, ...yi+l−1|x) 1 l . (6) We use this as the input to both the baseline and proposed models. We observe that this kind of normalization improves the calibration of baseline and proposed models. We do not normalize the probabilities while reporting the ECE of uncalibrated models. We use BIO-tags for training. While decoding, we also allow spans that start with “I-” tag. A.3.3 QA experiments We use three datasets for our QA experiments, SQAUD 1.1, EMRQA and MADE 1.0. 
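Before turning to the QA data construction, the length-normalized MC-dropout confidence of Equations 5 and 6 can be sketched as below. The per-sample joint probabilities are assumed to be given, M = 10 dropout samples follows the text, and using the standard deviation of the same quantity as the uncertainty feature mirrors the "Std" of Figure 2; all of this is a sketch rather than the released code.

```python
import numpy as np

def normalized_confidence(sample_probs, length):
    """Eqs. 5-6: mean over MC-dropout samples of p(y_1..y_L | x) ** (1/L)."""
    p = np.asarray(sample_probs, dtype=float) ** (1.0 / length)
    return float(p.mean())

def confidence_spread(sample_probs, length):
    """Spread of the length-normalized probability across samples (the uncertainty feature)."""
    p = np.asarray(sample_probs, dtype=float) ** (1.0 / length)
    return float(p.std())

# Hypothetical: M = 10 dropout samples of the joint probability of a 3-token span.
rng = np.random.default_rng(0)
span_probs = rng.uniform(0.2, 0.4, size=10)
print(normalized_confidence(span_probs, 3), confidence_spread(span_probs, 3))
```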
Our main aim in these experiments is to understand the behaviour of calibration and not the complexity of the tasks themselves. Therefore, we restrict the passage lengths of EMRQA and MADE 1.0 datasets to be similar to SQuAD1.1. We pre-process the passages from EMRQA to remove unannotated answer span instances and reduce the passage length to 20 sentences. EMRQA provides multiple question templates for the same question type (referred to as logical form in Pampari et al. (2018)). For each annotation, we randomly sample 3 question templates for our QA experiments. This is done to ensure that question types that have multiple question templates are not over-represented in the data. For example, the question type for “’Does he take anything for her —problem—” has 49 available answer templates, whereas “How often does the patient take —medication—” only has one. So for each annotation, we sample 3 question templates for a question type. If the question type does not have 3 available templates, we up-sample. For more details please refer to Pampari et al. (2018). EMRQA is a QA dataset constructed from named entity and relation annotations from clinical i2b2 datasets consisting of adverse event, medication and risk related questions (Pampari et al., 2018). We aim to also test the performance of our calibration method on out-of-domain test data. To do so, we construct a QA dataset from the clinical named entity and relation dataset MADE 1.0, using the questions and the dataset construction procedure followed in EMRQA. This allows us to have two QA datasets with common question forms, but different text distributions. This experimental setup enables us to evaluate how a QA system would perform when deployed on a new text corpus. This corresponds to the application scenario where a fixed set of questions (such as Adverse event questionnaire (Naranjo et al., 1981)) are to be answered for clinical records from different sources. Both EMRQA and MADE 1.0 are constructed from clinical documents. However, the documents themselves have different structure and language due to their different clinical sources, thereby mimicking 2091 the real-world application scenarios of clinical QA systems. MADE QA Construction MADE 1.0 (Jagannatha et al., 2019) is an NER and relation dataset that has similar annotation to “relations” and “medication” i2b2 datasets used in EMRQA. EMRQA uses an automated procedure to construct questions and answers from NER and relation annotations. We replicate the automated QA construction followed by Pampari et al. (2018) on MADE 1.0 dataset to obtain a corresponding QA dataset for the same. For this construction, we use question templates that use annotations that are common in both MADE 1.0 and EMRQA datasets. Examples of common questions are in Table 7. A full list of questions in MADE 1.0 QA is in “question templates.csv” file included in supplementary materials. The dataset splits for EMRQA and MADE QA are provided in Table 8. Forecaster features Since we only consider single-span answer predictions, we require a constant number of predictions ( answer start and answer end token index), for this task. Therefore we do not use the “ln” feature in this task. The uncalibrated probability of an event is normalized as follows and then used as input to all calibration models. ˆp = 1 M X MC−Dropout pθ(ystart, yend|x)1/2 (7) Unlike the previous tasks, extractive QA with single-span output does not have a varying number of output predictions for each data sample. 
It always only predicts the start and end spans. Therefore using length normalized (where length is always 2) uncalibrated output does not significantly affect the calibration of baseline models. However, we use the length-normalized uncalibrated probability as our input feature to keep our base set of features consistent throughout the tasks. Additionally, in extractive QA tasks with non-contiguous spans, the number of output predictions can vary and be higher than 2. In such cases, based on our results on POS and NER, the length-normalized probability may prove to be more useful. The “Var” feature and “Rank” feature is estimated as described in previous tasks. 2092 Input Output Example Question Form Problem Treatment How does the patient manage her —problem— Treatment Problem Why is the patient on —treatment— Problem Problem Has the patient ever been diagnosed or treated for —problem— Drug Drug Has patient ever been prescribed —medication— Table 7: Examples of questions that are common in EMRQA and MADE QA datasets. Dataset Name Train Validation Test EMRQA 74414 8870 9198 MADE QA 99496 14066 21309 Table 8: Dataset size for the MADE dataset QA pairs that were constructed using guidelines from EMRQA. EMRQA dataset splits are also provided for comparison.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2093–2105 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2093 Active Imitation Learning with Noisy Guidance Kianté Brantley University of Maryland [email protected] Amr Sharaf University of Maryland [email protected] Hal Daumé III University of Maryland Microsoft Research [email protected] Abstract Imitation learning algorithms provide state-ofthe-art results on many structured prediction tasks by learning near-optimal search policies. Such algorithms assume training-time access to an expert that can provide the optimal action at any queried state; unfortunately, the number of such queries is often prohibitive, frequently rendering these approaches impractical. To combat this query complexity, we consider an active learning setting in which the learning algorithm has additional access to a much cheaper noisy heuristic that provides noisy guidance. Our algorithm, LEAQI, learns a difference classifier that predicts when the expert is likely to disagree with the heuristic, and queries the expert only when necessary. We apply LEAQI to three sequence labeling tasks, demonstrating significantly fewer queries to the expert and comparable (or better) accuracies over a passive approach. 1 Introduction Structured prediction methods learn models to map inputs to complex outputs with internal dependencies, typically requiring a substantial amount of expert-labeled data. To minimize annotation cost, we focus on a setting in which an expert provides labels for pieces of the input, rather than the complete input (e.g., labeling at the level of words, not sentences). A natural starting point for this is imitation learning-based “learning to search” approaches to structured prediction (Daumé et al., 2009; Ross et al., 2011; Bengio et al., 2015; Leblond et al., 2018). In imitation learning, training proceeds by incrementally producing structured outputs on piece at a time and, at every step, asking the expert “what would you do here?” and learning to mimic that choice. This interactive model comes at a substantial cost: the expert demonstrator must be continuously available and must be able to answer a potentially large number of queries. We reduce this annotation cost by only asking an expert for labels that are truly needed; our algorithm, Learning to Query for Imitation (LEAQI, /"li:,tSi:/)1 achieves this by capitalizing on two factors. First, as is typical in active learning (see §2), LEAQI only asks the expert for a label when it is uncertain. Second, LEAQI assumes access to a noisy heuristic labeling function (for instance, a rule-based model, dictionary, or inexpert annotator) that can provide low-quality labels. LEAQI operates by always asking this heuristic for a label, and only querying the expert when it thinks the expert is likely to disagree with this label. It trains, simultaneously, a difference classifier (Zhang and Chaudhuri, 2015) that predicts disagreements between the expert and the heuristic (see Figure 1). The challenge in learning the difference classifier is that it must learn based on one-sided feedback: if it predicts that the expert is likely to agree with the heuristic, the expert is not queried and the classifier cannot learn that it was wrong. We address this one-sided feedback problem using the Apple Tasting framework (Helmbold et al., 2000), in which errors (in predicting which apples are tasty) are only observed when a query is made (an apple is tasted). 
Learning in this way particularly important in the general case where the heuristic is likely not just to have high variance with respect to the expert, but is also statistically biased. Experimentally (§4.5), we consider three structured prediction settings, each using a different type of heuristic feedback. We apply LEAQI to: English named entity recognition where the heuristic is a rule-based recognizer using gazetteers (Khashabi et al., 2018); English scientific keyphrase extraction, where the heuristic is an unsupervised method (Florescu and Caragea, 2017); and Greek part-ofspeech tagging, where the heuristic is a small dictio1Code is available at: https://github.com/xkianteb/leaqi 2094 After completing his Ph.D. , Ellis worked at Bell Labs from 1969 to 1972 on probability theory... x = yh = y = O O O O O PER O O ORG ORG O O O O O O O O O PER O O O O O ORG ORG O O O O O O O O O PER O O PER O O ORG ŷ1:9 = s10 π*(s10) = ORG πh(s10) = ORG ydisagree = False Figure 1: A named entity recognition example (from the Wikipedia page for Clarence Ellis). x is the input sentence and y is the (unobserved) ground truth. The predictor π operates left-to-right and, in this example, is currently at state s10 to tag the 10th word; the state s10 (highlighted in purple) combines x with ˆy1:9. The heuristic makes two errors at t = 4 and t = 6. The heuristic label at t = 10 is yh 10 =ORG. Under Hamming loss, the cost at t = 10 is minimized for a = ORG, which is therefore the expert action (if it were queried). The label that would be provided for s10 to the difference classifier is 0 because the two policies agree. nary compiled from the training data (Zesch et al., 2008; Haghighi and Klein, 2006). In all three settings, the expert is a simulated human annotator. We train LEAQI on all three tasks using fixed BERT (Devlin et al., 2019) features, training only the final layer (because we are in the regime of small labeled data). The goal in all three settings is to minimize the number of words the expert annotator must label. In all settings, we’re able to establish the efficacy of LEAQI, showing that it can indeed provide significant label savings over using the expert alone and over several baselines and ablations that establish the importance of both the difference classifier and the Apple Tasting paradigm. 2 Background and Related Work We review first the use of imitation learning for structured prediction, then online active learning, and finally applications of active learning to structured prediction and imitation learning problems. 2.1 Learning to Search The learning to search approach to structured prediction casts the joint prediction problem of producing a complex output as a sequence of smaller classification problems (Ratnaparkhi, 1996; Collins and Roark, 2004; Daumé et al., 2009). For instance, in the named entity recognition example from Figure 1, an input sentence x is labeled one word at a time, left-to-right. At the depicted state (s10), the model has labeled the first nine words and must next label the tenth word. Learning to search approaches assume access to an oracle policy π⋆, which provides the optimal label at every position. In (interactive) imitation learning, we aim to imitate the behavior of the expert policy, π⋆, which provides the true labels. 
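For sequence labeling this expert is straightforward to simulate: a state bundles the sentence with the prefix of tags predicted so far, and under Hamming loss the optimal action is simply the gold tag at the current position, as in Figure 1. The minimal sketch below is illustrative; the class and function names are not from any released code.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class State:
    """Left-to-right tagging state: the sentence plus the tags chosen so far."""
    words: List[str]
    prefix: List[str]  # predicted tags for positions 0 .. t-1

    @property
    def t(self) -> int:  # position to be tagged next
        return len(self.prefix)

def expert_policy(state: State, gold_tags: List[str]) -> str:
    """Simulated expert: the loss-minimizing action, i.e. the gold tag at position t."""
    return gold_tags[state.t]

# In the spirit of Figure 1's s10: nine tags already predicted, the expert labels "Labs" as ORG.
words = "After completing his Ph.D. , Ellis worked at Bell Labs".split()
gold  = ["O", "O", "O", "O", "O", "PER", "O", "O", "ORG", "ORG"]
state = State(words, prefix=["O", "O", "O", "O", "O", "PER", "O", "O", "ORG"])
print(expert_policy(state, gold))  # -> ORG
```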
The learning to search view allows us to cast structured prediction as a (degenerate) imitation learning task, where states Algorithm 1 DAgger(Π, N, ⟨βi⟩N i=0, π⋆) 1: initialize dataset D = {} 2: initialize policy ˆπ1 to any policy in Π 3: for i = 1 . . . N do 4: ▷stochastic mixture policy 5: Let πi = βiπ⋆+ (1 −βi)ˆπi 6: Generate a T-step trajectory using πi 7: Accumulate data D ←D∪{(s, π⋆(s))} for all s in those trajectories 8: Train classifier ˆπi+1 ∈Π on D 9: end for 10: return best (or random) ˆπi are (input, prefix) pairs, actions are operations on the output, and the horizon T is the length of the sequence. States are denoted s ∈S, actions are denoted a ∈[K], where [K] = {1, . . . , K}, and the policy class is denoted Π ⊆[K]S. The goal in learning is to find a policy π ∈Π with small loss on the distribution of states that it, itself, visits. A popular imitation learning algorithm, DAgger (Ross et al., 2011), is summarized in Alg 1. In each iteration, DAgger executes a mixture policy and, at each visited state, queries the expert’s action. This produces a classification example, where the input is the state and the label is the expert’s action. At the end of each iteration, the learned policy is updated by training it on the accumulation of all generated data so far. DAgger is effective in practice and enjoys appealing theoretical properties; for instance, if the number of iterations N is ˜O(T 2 log(1/δ)) then with probability at least 1 −δ, the generalization error of the learned policy is O(1/T) (Ross et al., 2011, Theorem 4.2). 2.2 Active Learning Active learning has been considered since at least the 1980s often under the name “selective sam2095 pling” (Rendell, 1986; Atlas et al., 1990). In agnostic online active learning for classification, a learner operates in rounds (e.g. Balcan et al., 2006; Beygelzimer et al., 2009, 2010). At each round, the learning algorithm is presented an example x and must predict a label; the learner must decide whether to query the true label. An effective margin-based approach for online active learning is provided by Cesa-Bianchi et al. (2006) for linear models. Their algorithm defines a sampling probability ρ = b/(b + z), where z is the margin on the current example, and b > 0 is a hyperparameter that controls the aggressiveness of sampling. With probability ρ, the algorithm requests the label and performs a perceptron-style update. Our approach is inspired by Zhang and Chaudhuri’s (2015) setting, where two labelers are available: a free weak labeler and an expensive strong labeler. Their algorithm minimizes queries to the strong labeler, by learning a difference classifier that predicts, for each example, whether the weak and strong labelers are likely to disagree. Their algorithm trains this difference classifier using an example-weighting strategy to ensure that its Type II error is kept small, establishing statistical consistency, and bounding its sample complexity. This type of learning from one-sided feedback falls in the general framework of partialmonitoring games, a framework for sequential decision making with imperfect feedback. Apple Tasting is a type of partial-monitoring game (Littlestone and Warmuth, 1989), where, at each round, a learner is presented with an example x and must predict a label ˆy ∈{−1, +1}. After this prediction, the true label is revealed only if the learner predicts +1. 
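This one-sided feedback is easy to state in code: the true label is revealed only when the learner predicts +1 (tastes the apple). The short simulation below uses an assumed learner interface and synthetic labels; its only purpose is to show that mistakes made while predicting -1 are never observed.

```python
import random

def run_apple_tasting(learner, stream):
    """One round per example; the true label is revealed only after predicting +1."""
    observed_mistakes = 0
    for x, y in stream:
        y_hat = learner.predict(x)        # prediction in {-1, +1}
        if y_hat == +1:                   # "taste the apple"
            learner.update(x, y)          # feedback is available on this round
            observed_mistakes += int(y != +1)
        # if y_hat == -1, y stays hidden: a mistake here is silent
    return observed_mistakes

class AlwaysTaste:
    """Degenerate learner that tastes everything: full feedback, maximal querying cost."""
    def predict(self, x): return +1
    def update(self, x, y): pass

random.seed(0)
stream = [(i, random.choice([-1, +1])) for i in range(10)]
print(run_apple_tasting(AlwaysTaste(), stream))
```

In LEAQI, tasting corresponds to querying the expert (predicting disagreement), while predicting agreement leaves any error of the difference classifier unobserved.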
This framework has been applied in several settings, such as spam filtering and document classification with minority class distributions (Sculley, 2007). Sculley (2007) also conducts a through comparison of two methods that can be used to address the one-side feedback problem: label-efficient online learning (Cesa-Bianchi et al., 2006) and margin-based learning (Vapnik, 1982). 2.3 Active Imitation & Structured Prediction In the context of structured prediction for natural language processing, active learning has been considered both for requesting full structured outputs (e.g. Thompson et al., 1999; Culotta and McCallum, 2005; Hachey et al., 2005) and for requesting only pieces of outputs (e.g. Ringger et al., 2007; Bloodgood and Callison-Burch, 2010). For sequence labeling tasks, Haertel et al. (2008) found that labeling effort depends both on the number of words labeled (which we model), plus a fixed cost for reading (which we do not). In the context of imitation learning, active approaches have also been considered for at least three decades, often called “learning with an external critic” and “learning by watching” (Whitehead, 1991). More recently, Judah et al. (2012) describe RAIL, an active learning-for-imitation-learning algorithm akin to our ACTIVEDAGGER baseline, but which in principle would operate with any underlying i.i.d. active learning algorithm (not just our specific choice of uncertainty sampling). 3 Our Approach: LEAQI Our goal is to learn a structured prediction model with minimal human expert supervision, effectively by combining human annotation with a noisy heuristic. We present LEAQI to achieve this. As a concrete example, return to Figure 1: at s10, π must predict the label of the tenth word. If π is confident in its own prediction, LEAQI can avoid any query, similar to traditional active learning. If π is not confident, then LEAQI considers the label suggested by a noisy heuristic (here: ORG). LEAQI predicts whether the true expert label is likely to disagree with the noisy heuristic. Here, it predicts no disagreement and avoids querying the expert. 3.1 Learning to Query for Imitation Our algorithm, LEAQI, is specified in Alg 2. As input, LEAQI takes a policy class Π, a hypothesis class H for the difference classifier (assumed to be symmetric and to contain the “constant one” function), a number of episodes N, an expert policy π⋆, a heuristic policy πh, and a confidence parameter b > 0. The general structure of LEAQI follows that of DAgger, but with three key differences: (a) roll-in (line 7) is according to the learned policy (not mixed with the expert, as that would require additional expert queries), (b) actions are queried only if the current policy is uncertain at s (line 12), and (c) the expert π⋆is only queried if it is predicted to disagree with the heuristic πh at s by the difference classifier, or if apple tasting method switches the difference classifier label (line 15; see §3.2). 2096 Algorithm 2 LEAQI(Π, H, N, π⋆, πh, b) 1: initialize dataset D = {} 2: initialize policy π1 to any policy in Π 3: initialize difference dataset S = {} 4: initialize difference classifier h1(s) = 1 (∀s) 5: for i = 1 . . . 
N do 6: Receive input sentence x 7: ▷generate a T-step trajectory using πi 8: Generate output ˆy using πi 9: for each s in ˆy do 10: ▷draw bernouilli random variable 11: Zi ∼Bern  b b+certainty(πi,s)  ; see §3.3 12: if Zi = 1 then 13: ▷set difference classifier prediction 14: ˆdi = hi(s) 15: if AppleTaste(s, πh(s), ˆdi) then 16: ▷predict agree query heuristic 17: D ←D ∪  s, πh(s)  18: else 19: ▷predict disagree query expert 20: D ←D ∪{ (s, π⋆(s)) } 21: di = 1  π⋆(s) = πh(s)] 22: S ←S ∪  s, πh(s), ˆdi, di  23: end if 24: end if 25: end for 26: Train policy πi+1 ∈Π on D 27: Train difference classifier hi+1 ∈H on S to minimize Type II errors (see §3.2) 28: end for 29: return best (or random) πi In particular, at each state visited by πi, LEAQI estimates z, the certainty of πi’s prediction at that state (see §3.3). A sampling probability ρ is set to b/(b + z) where z is the certainty, and so if the model is very uncertain then ρ tends to zero, following (Cesa-Bianchi et al., 2006). With probability ρ, LEAQI will collect some label. When a label is collected (line 12), the difference classifier hi is queried on state s to predict if π⋆ and πh are likely to disagree on the correct action. (Recall that h1 always predicts disagreement per line 4.) The difference classifier’s prediction, ˆdi, is passed to an apple tasting method in line 15. Intuitively, most apple tasting procedures (including the one we use, STAP; see §3.2) return ˆdi, unless the difference classifier is making many Type II errors, in which case it may return ¬ ˆdi. A target action is set to πh(s) if the apple tastAlgorithm 3 AppleTaste_STAP(S, ah i, ˆdi) 1: ▷count examples that are action ah i 2: let t = P (_,a,_,_)∈S 1[ah i = a] 3: ▷count mistakes made on action ah i 4: let m = P (_,a, ˆd,d)∈S 1[ ˆd ̸= d ∧ah i = a] 5: w = t |S| ▷percentage of time ah i was seen 6: if w < 1 then 7: ▷skew distribution 8: draw r ∼Beta(1 −w, 1) 9: else 10: draw r ∼Uniform(0, 1) 11: end if 12: return (d = 1) ∧(r ≤ p (m + 1)/t) ing algorithm returns “agree” (line 17), and the expert π⋆is only queried if disagreement is predicted (line 20). The state and target action (either heuristic or expert) are then added to the training data. Finally, if the expert was queried, then a new item is added to the difference dataset, consisting of the state, the heuristic action on that state, the difference classifier’s prediction, and the ground truth for the difference classifier whose input is s and whose label is whether the expert and heuristic actually disagree. Finally, πi+1 is trained on the accumulated action data, and hi+1 is trained on the difference dataset (details in §3.3). There are several things to note about LEAQI: ⋄If the current policy is already very certain, a expert annotator is never queried. ⋄If a label is queried, the expert is queried only if the difference classifier predicts disagreement with the heuristic, or the apple tasting procedure flips the difference classifier prediction. ⋄Due to apple tasting, most errors the difference classifier makes will cause it to query the expert unnecessarily; this is the “safe” type of error (increasing sample complexity but not harming accuracy), versus a Type II error (which leads to biased labels). ⋄The difference classifier is only trained on states where the policy is uncertain, which is exactly the distribution on which it is run. 
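The per-state logic of Algorithm 2 (lines 9-24) can be condensed into the following sketch. The helper interfaces, a policy exposing per-action scores, a difference classifier, an apple-tasting routine, and expert/heuristic policies, are assumptions made for illustration; this is not the released implementation.

```python
import random

def leaqi_step(s, policy, diff_clf, apple_taste, expert, heuristic, b,
               policy_data, diff_data):
    """One per-state iteration of LEAQI's query logic (Alg. 2, lines 9-24), sketched.

    policy.scores(s)     -> dict mapping each action to a score (for the margin of Sec. 3.3)
    diff_clf.predict(s)  -> 1 if the expert and heuristic are predicted to disagree
    apple_taste(s, a, d) -> True means "treat as agreement and trust the heuristic"
    """
    scores = sorted(policy.scores(s).values(), reverse=True)
    certainty = scores[0] - scores[1]              # margin between the top two actions
    if random.random() >= b / (b + certainty):     # Z ~ Bern(b / (b + certainty))
        return                                     # confident enough: collect no label

    a_h = heuristic(s)
    d_hat = diff_clf.predict(s)
    if apple_taste(s, a_h, d_hat):
        policy_data.append((s, a_h))               # line 17: use the free heuristic label
    else:
        a_star = expert(s)                         # line 20: pay for an expert query
        policy_data.append((s, a_star))
        d_true = int(a_star != a_h)                # 1 iff expert and heuristic actually disagree
        diff_data.append((s, a_h, d_hat, d_true))  # line 22: grow the difference dataset
```

The label d_true follows the convention of the surrounding text and Figure 1 (1 when the expert and heuristic actually disagree); after each episode, the policy is retrained on policy_data and the difference classifier on diff_data, as in lines 26-27.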
3.2 Apple Tasting for One-Sided Learning The difference classifier h ∈H must be trained (line 27) based on one-sided feedback (it only ob2097 serves errors when it predicts “disagree“) to minimize Type II errors (it should only very rarely predict “agree” when the truth is “disagree”). This helps keep the labeled data for the learned policies unbiased. The main challenge here is that the feedback to the difference classifier is one-sided: that is, if it predicts “disagree” then it gets to see the truth, but if it predicts “agree” it never finds out if it was wrong. We use one of (Helmbold et al., 2000)’s algorithms, STAP (see Alg 3), which works by random sampling from apples that are predicted to not be tasted and tasting them anyway (line 12). Formally, STAP tastes apples that are predicted to be bad with probability p (m + 1)/t, where m is the number of mistakes, and t is the number of apples tasted so far. We adapt Apple Tasting algorithm STAP to our setting for controlling the number of Type II errors made by the difference classifier as follows. First, because some heuristic actions are much more common than others, we run a separate apple tasting scheme per heuristic action (in the sense that we count the number of error on this heuristic action rather than globally). Second, when there is significant action imbalance2 we find it necessary to skew the distribution from STAP more in favor of querying. We achieve this by sampling from a Beta distribution (generalizing the uniform), whose mean is shifted toward zero for more frequent heuristic actions. This increases the chance that Apple Tasting will have on finding bad apples error for each action (thereby keeping the false positive rate low for predicting disagreement). 3.3 Measuring Policy Certainty In step 11, LEAQI must estimate the certainty of πi on s. Following Cesa-Bianchi et al. (2006), we implement this using a margin-based criteria. To achieve this, we consider π as a function that maps actions to scores and then chooses the action with largest score. The certainty measure is then the difference in scores between the highest and second highest scoring actions: certainty(π, s) = max a π(s, a) −max a′̸=a π(s, a′) 2For instance, in named entity recognition, both the heuristic and expert policies label the majority of words as O (not an entity). As a result, when the heuristic says O, it is very likely that the expert will agree. However, if we aim to optimize for something other than accuracy—like F1—it is precisely these disagreements that we need to find. 3.4 Analysis Theoretically, the main result for LEAQI is an interpretation of the main DAgger result(s). Formally, let dπ denote the distribution of states visited by π, C(s, a) ∈[0, 1] be the immediate cost of performing action a in state s, Cπ(s) = Ea∼π(s)C(s, a), and the total expected cost of π to be J(π) = TEs∼dπCπ(s), where T is the length of trajectories. C is not available to a learner in an imitation setting; instead the algorithm observes an expert and minimizes a surrogate loss ℓ(s, π) (e.g., ℓmay be zero/one loss between π and π⋆). We assume ℓ is strongly convex and bounded in [0, 1] over Π. Given this setup assumptions, let ϵpol-approx = minπ∈Π 1 N PN i=1 Es∼dπiℓ(s, π) be the true loss of the best policy in hindsight, let ϵdc-approx = minh∈H 1 N PN i=1 Es∼dπierr(s, h, π⋆(s) ̸= πh(s)) be the true error of the best difference classifier in hindsight, and assuming that the regret of the policy learner is bounded by regpol(N) after N steps, Ross et al. 
(2011) shows the following3: Theorem 1 (Thm 4.3 of Ross et al. (2011)). After N episodes each of length T, under the assumptions above, with probability at least 1 −δ there exists a policy π ∈π1:N such that: Es∼dπℓ(s, π) ≤ ϵpol-approx + regpol(N) + p (2/N) log(1/δ) This holds regardless of how π1:N are trained (line 26). The question of how well LEAQI performs becomes a question of how well the combination of uncertainty-based sampling and the difference classifier learn. So long as those do a good job on their individual classification tasks, DAgger guarantees that the policy will do a good job. This is formalized below, where Q⋆(s, a) is the best possible cumulative cost (measured by C) starting in state s and taking action a: Theorem 2 (Theorem 2.2 of Ross et al. (2011)). Let u be such that Q⋆(s, a) −Q⋆(s, π⋆(s)) ≤u for all a and all s with dπ(s) > 0; then for some π ∈π1:N, as N →∞: J(π) ≤J(π⋆) + uTϵpol-approx Here, u captures the most long-term impact a single decision can have; for example, for average Hamming loss, it is straightforward to see that u = 1 T 3Proving a stronger result is challenging: analyzing the sample complexity of an active learning algorithm that uses a difference classifier—even in the non-sequential setting—is quite involved (Zhang and Chaudhuri, 2015). 2098 Task Named Entity Recognition Keyphrase Extraction Part of Speech Tagging Language English (en) English (en) Modern Greek (el) Dataset CoNLL’03 (Tjong Kim Sang and De Meulder, 2003) SemEval 2017 Task 10 (Augenstein et al., 2017) Universal Dependencies (Nivre, 2018) # Ex 14, 987 2, 809 1, 662 Avg. Len 14.5 26.3 25.5 # Actions 5 2 17 Metric Entity F-score Keyphrase F-score Per-tag accuracy Features English BERT (Devlin et al., 2019) SciBERT (Beltagy et al., 2019) M-BERT (Devlin et al., 2019) Heuristic String matching against an offline gazeteer of entities from Khashabi et al. (2018) Output from an unsupervised keyphrase extraction model Florescu and Caragea (2017) Dictionary from Wiktionary, similar to Zesch et al. (2008) and Haghighi and Klein (2006) Heur Quality P 88%, R 27%, F 41% P 20%, R 44%, F 27% 10% coverage, 67% acc Table 1: An overview of the three tasks considered in experiments. because any single mistake can increase the number of mistakes by at most 1. For precision, recall and F-score, u can be as large as one in the (rare) case that a single decision switches from one true positive to no true positives. 4 Experiments The primary research questions we aim to answer experimentally are: Q1 Does uncertainty-based active learning achieve lower query complexity than passive learning in the learning to search settings? Q2 Does learning a difference classifier improve query efficiency over active learning alone? Q3 Does Apple Tasting successfully handle the problem of learning from one-sided feedback? Q4 Is the approach robust to cases where the noisy heuristic is uncorrelated with the expert? Q5 Is casting the heuristic as a policy more effective than using its output as features? To answer these questions, we conduct experiments on three tasks (see Table 1): English named entity recognition, English scientific keyphrase extraction, and low-resource part of speech tagging on Modern Greek (el), selected as a low-resource setting. 4.1 Algorithms and Baselines In order to address the research questions above, we compare LEAQI to several baselines. The baselines below compare our approach to previous methods: DAGGER. Passive DAgger (Alg 1) ACTIVEDAGGER. 
An active variant of DAgger that asks for labels only when uncertain. (This is equivalent to LEAQI, but with neither the difference classifier nor apple tasting.) DAGGER+FEAT. DAGGER with the heuristic policy’s output appended as an input feature. ACTIVEDAGGER+FEAT. ACTIVEDAGGER with the heuristic policy as a feature. The next set of comparisons are explicit ablations: LEAQI+NOAT LEAQI with no apple tasting. LEAQI+NOISYHEUR. LEAQI, but where the heuristic returns a label uniformly at random. The baselines and LEAQI share a linear relationship. DAGGER is the baseline algorithm used by all algorithms described above but it is very query inefficient with respect to an expert annotator. ACTIVEDAGGER introduces active learning to make DAGGER more query efficient; the delta to the previous addresses Q1. LEAQI+NOAT introduces the difference classifier; the delta addresses 2099 Q2. LEAQI adds apple tasting to deal with onesided learning; the delta addresses Q3. Finally, LEAQI+NOISYHEUR. (vs LEAQI) addresses Q4 and the +FEAT variants address Q5. 4.2 Data and Representation For named entity recognition, we use training, validation, and test data from CoNLL’03 (Tjong Kim Sang and De Meulder, 2003), consisting of IO tags instead of BIO tags (the “B” tag is almost never used in this dataset, so we never attempt to predict it) over four entity types: Person, Organization, Location, and Miscellaneous. For part of speech tagging, we use training and test data from modern Greek portion of the Universal Dependencies (UD) treebanks (Nivre, 2018), consisting of 17 universal tags4. For keyphrase extraction, we use training, validation, and test data from SemEval 2017 Task 10 (Augenstein et al., 2017), consisting of IO tags (we use one “I” tag for all three keyphrase types). In all tasks, we implement both the policy and difference classifier by fine-tuning the last layer of a BERT embedding representation (Devlin et al., 2019). More specifically, for a sentence of length T, w1, . . . , wT , we first compute BERT embeddings for each word, x1, . . . , xT using the appropriate BERT model: English BERT and M-BERT5 for named entity and part-of-speech, respectively, and SciBERT (Beltagy et al., 2019) for keyphrase extraction. We then represent the state at position t by concatenating the word embedding at that position with a one-hot representation of the previous action: st = [wt; onehot(at−1)]. This feature representation is used both for learning the labeling policy and also learning the difference classifier. 4.3 Expert Policy and Heuristics In all experiments, the expert π⋆is a simulated human annotator who annotates one word at a time. The expert returns the optimal action for the relevant evaluation metric (F-score for named entity recognition and keyphrase extraction, and accuracy for part-of-speech tagging). We take the annotation cost to be the total number of words labeled. The heuristic we implement for named entity recognition is a high-precision gazeteer-based string matching approach. We construct this by taking a gazeteer from Wikipedia using the CogComp framework (Khashabi et al., 2018), and use 4ADJ, ADP, ADV, AUX, CCONJ, DET, INTJ, NOUN, NUM, PART, PRON, PROPN, PUNCT, SCONJ, SYM, VERB, X. 5Multilingual BERT (Devlin et al., 2019) FlashText (Singh, 2017) to label the dataset. This heuristic achieves a precision of 0.88, recall of 0.27 and F-score of 0.41 on the training data. 
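As an illustration of what such a string-matching heuristic policy looks like (the real system uses the CogComp gazetteers with FlashText; the tiny gazetteer, tag format, and longest-match rule below are for exposition only):

```python
def gazetteer_heuristic(words, gazetteer):
    """Tag every token of any gazetteer phrase; everything else gets 'O' (longest match first)."""
    tags = ["O"] * len(words)
    max_len = max(len(p.split()) for p in gazetteer)
    i = 0
    while i < len(words):
        for n in range(min(max_len, len(words) - i), 0, -1):
            phrase = " ".join(words[i:i + n]).lower()
            if phrase in gazetteer:
                tags[i:i + n] = ["I-" + gazetteer[phrase]] * n
                i += n
                break
        else:       # no gazetteer phrase starts at position i
            i += 1
    return tags

gazetteer = {"bell labs": "ORG", "ellis": "PER"}  # tiny illustrative gazetteer
print(gazetteer_heuristic("Ellis worked at Bell Labs".split(), gazetteer))
# -> ['I-PER', 'O', 'O', 'I-ORG', 'I-ORG']
```

A heuristic of this kind is treated as a full (if noisy) policy πh: it can be queried for free at any state, which is exactly how LEAQI consumes it.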
The keyphrase extraction heuristic is the output of an “unsupervised keyphrase extraction” approach (Florescu and Caragea, 2017). This system is a graph-based approach that constructs wordlevel graphs incorporating positions of all word occurrences information; then using PageRank to score the words and phrases. This heuristic achieves a precision of 0.20, recall of 0.44 and F-score of 0.27 on the training data. The part of speech tagging heuristic is based on a small dictionary compiled from Wiktionary. Following Haghighi and Klein (2006) and Zesch et al. (2008), we extract this dictionary using Wiktionary as follows: for word w in our training data, we find the part-of-speech y by querying Wiktionary. If w is in Wikitionary, we convert the Wikitionary part of speech tag to a Universal Dependencies tag (see §A.1), and if word w is not in Wiktionary, we use a default label of “X”. Furthermore, if word w has multiple parts of speech, we select the first part of speech tag in the list. The label “X” is chosen 90% of the time. For the remaining 10%, the heuristic achieves an accuracy of 0.67 on the training data. 4.4 Experimental Setup Our experimental setup is online active learning. We make a single pass over a dataset, and the goal is to achieve an accurate system as quickly as possible. We measure performance (accuracy or F-score) after every 1000 words (≈50 sentences) on heldout test data, and produce error bars by averaging across three runs and reporting standard deviations. Hyperparameters for DAGGER are optimized using grid-search on the named entity recognition training data and evaluated on development data. We then fix DAGGER hyperparameters for all other experiments and models. The difference classifier hyperparameters are subsequently optimized in the same manner. We fix the difference classifier hyperparameters for all other experiments.6 4.5 Experimental Results The main results are shown in the top two rows of Figure 2; ablations of LEAQI are shown in Figure 3. 6We note that this is a somewhat optimistic hyperparameter setting: in the real world, model selection for active learning is extremely challenging. Details on hyperparameter selection and LEAQI’s robustness across a rather wide range of choices are presented in §A.2, §A.3 and §A.4 for keyphrase extraction and part of speech tagging. 2100 0 50K 100K 150K 200K number of words queried 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 phrase-label f-score Named Entity Recognition LeaQI DAgger DAgger+Feat. ActiveDAgger ActiveDAgger+Feat. 0 10K 20K 30K 40K 50K number of words queried 0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40 phrase-label f-score Keyphrase Extraction 0 5K 10K 15K 20K 25K number of words queried 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 accuracy Part of Speech Tagging 0 50K 100K 150K 200K number of words seen 0 25K 50K 75K 100K 125K 150K 175K 200K number of words queried Named Entity Recognition 0 10K 20K 30K 40K 50K number of words seen 0 10K 20K 30K 40K 50K number of words queried Keyphrase Extraction 0 5K 10K 15K 20K 25K number of words seen 0 5K 10K 15K 20K 25K number of words queried Part of Speech Tagging Figure 2: Empirical evaluation on three tasks: (left) named entity recognition, (middle) keyphrase extraction and (right) part of speech tagging. The top rows shows performance (f-score or accuracy) with respect to the number of queries to the expert. The bottom row shows the number of queries as a function of the number of words seen. 
In Figure 2, the top row shows traditional learning curves (performance vs number of queries), and the bottom row shows the number of queries made to the expert as a function of the total number of words seen. Active vs Passive (Q1). In all cases, we see that the active strategies improve on the passive strategies; this difference is largest in keyphrase extraction, middling for part of speech tagging, and small for NER. While not surprising given previous successes of active learning, this confirms that it is also a useful approach in our setting. As expected, the active algorithms query far less than the passive approaches, and LEAQI queries the least. Heuristic as Features vs Policy (Q5). We see that while adding the heuristic’s output as a feature can be modestly useful, it is not uniformly useful and, at least for keyphrase extraction and part of speech tagging, it is not as effective as LEAQI. For named entity recognition, it is not effective at all, but this is also a case where all algorithms perform essentially the same. Indeed, here, LEAQI learns quickly with few queries, but never quite reaches the performance of ActiveDAgger. This is likely due to the difference classifier becoming overly confident too quickly, especially on the “O” label, given the (relatively well known) oddness in mismatch between development data and test data on this dataset. Difference Classifier Efficacy (Q2). Turning to the ablations (Figure 3), we can address Q2 by comparing the ActiveDAgger curve to the LeaQI+NoAT curve. Here, we see that on NER and keyphrase extraction, adding the difference classifier without adding apple tasting results in a far worse model: it learns very quickly but plateaus much lower than the best results. The exception is part of speech tagging, where apple tasting does not seem necessary (but also does not hurt). Overall, this essentially shows that without controlling Type II errors, the difference classifier on it’s own does not fulfill its goals. Apple Tasting Efficacy (Q3). Also considering the ablation study, we can compare LeaQI+NoAT with LeaQI. In the case of part of speech tagging, there is little difference: using apple tasting to combat issues of learning from one sided feedback neither helps nor hurts performance. However, for both named entity recognition and keyphrase extraction, removing apple tasting leads to faster learning, but substantially lower final performance (accuracy or f-score). This is somewhat expected: 2101 0 20K 40K 60K number of words queried 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 phrase-label f-score Named Entity Recognition LeaQI LeaQI+NoisyHeur. LeaQI+NoAT ActiveDAgger 0 5K 10K 15K 20K 25K number of words queried 0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40 phrase-label f-score Keyphrase Extraction 0 5K 10K 15K number of words queried 0.0 0.2 0.4 0.6 0.8 accuracy Part of Speech Tagging Figure 3: Ablation results on (left) named entity recognition, (middle) keyphrase extraction and (right) part of speech tagging. In addition to LEAQI and DAgger (copied from Figure 2), these graphs also show LEAQI+NOAT (apple tasting disabled), and LEAQI+NOISYHEUR. (a heuristic that produces labels uniformly at random). without apple tasting, the training data that the policy sees is likely to be highly biased, and so it gets stuck in a low accuracy regime. Robustness to Poor Heuristic (Q4). We compare LeaQI+NoisyHeur to ActiveDAgger. Because the heuristic here is useless, the main hope is that it does not degrade performance below ActiveDAgger. 
Indeed, that is what we see in all three cases: the difference classifier is able to learn quite quickly to essentially ignore the heuristic and only rely on the expert. 5 Discussion and Limitations In this paper, we considered the problem of reducing the number of queries to an expert labeler for structured prediction problems. We took an imitation learning approach and developed an algorithm, LEAQI, which leverages a source that has low-quality labels: a heuristic policy that is suboptimal but free. To use this heuristic as a policy, we learn a difference classifier that effectively tells LEAQI when it is safe to treat the heuristic’s action as if it were optimal. We showed empirically— across Named Entity Recognition, Keyphrase Extraction and Part of Speech Tagging tasks—that the active learning approach improves significantly on passive learning, and that leveraging a difference classifier improves on that. 1. In some settings, learning a difference classifier may be as hard or harder than learning the structured predictor; for instance if the task is binary sequence labeling (e.g., word segmentation), minimizing its usefulness. 2. The true labeling cost is likely more complicated than simply the number of individual actions queried to the expert. Despite these limitations, we hope that LEAQI provides a useful (and relatively simple) bridge that can enable using rule-based systems, heuristics, and unsupervised models as building blocks for more complex supervised learning systems. This is particularly attractive in settings where we have very strong rule-based systems, ones which often outperform the best statistical systems, like coreference resolution (Lee et al., 2011), information extraction (Riloff and Wiebe, 2003), and morphological segmentation and analysis (Smit et al., 2014). Acknowledgements We thank Rob Schapire, Chicheng Zhang, and the anonymous ACL reviewers for very helpful comments and insights. This material is based upon work supported by the National Science Foundation under Grant No. 1618193 and an ACM SIGHPC/Intel Computational and Data Science Fellowship to KB. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation nor of the ACM. References Les E Atlas, David A Cohn, and Richard E Ladner. 1990. Training connectionist networks with queries and selective sampling. In NeurIPS. Isabelle Augenstein, Mrinal Das, Sebastian Riedel, Lakshmi Vikraman, and Andrew McCallum. 2017. Semeval 2017 task 10: Scienceie - extracting keyphrases and relations from scientific publications. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017). 2102 Nina Balcan, Alina Beygelzimer, and John Langford. 2006. Agnostic active learning. In ICML. Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. Scibert: Pretrained language model for scientific text. In EMNLP. Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. 2015. Scheduled sampling for sequence prediction with recurrent neural networks. In NeurIPS. Alina Beygelzimer, Sanjoy Dasgupta, , and John Langford. 2009. Importance weighted active learning. In ICML. Alina Beygelzimer, Daniel Hsu, John Langford, and Tong Zhang. 2010. Agnostic active learning without constraints. In NeurIPS. Michael Bloodgood and Chris Callison-Burch. 2010. Bucking the trend: Large-scale cost-focused active learning for statistical machine translation. In ACL. 
Nicolò Cesa-Bianchi, Claudio Gentile, and Luca Zaniboni. 2006. Worst-case analysis ofselective sampling for linear classification. JMLR. Michael Collins and Brian Roark. 2004. Incremental parsing with the perceptron algorithm. In ACL. Aron Culotta and Andrew McCallum. 2005. Reducing labeling effort for structured prediction tasks. In AAAI. Hal Daumé, III, John Langford, and Daniel Marcu. 2009. Search-based structured prediction. Machine Learning Journal. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL. Corina Florescu and Cornelia Caragea. 2017. PositionRank: An unsupervised approach to keyphrase extraction from scholarly documents. In ACL. Ben Hachey, Beatrice Alex, and Markus Becker. 2005. Investigating the effects of selective sampling on the annotation task. In CoNLL. Robbie Haertel, Eric K. Ringger, Kevin D. Seppi, James L. Carroll, and Peter McClanahan. 2008. Assessing the costs of sampling methods in active learning for annotation. In ACL. Aria Haghighi and Dan Klein. 2006. Prototype-driven learning for sequence models. David P. Helmbold, Nicholas Littlestone, and Philip M. Long. 2000. Apple tasting. Information and Computation. Kshitij Judah, Alan Paul Fern, and Thomas Glenn Dietterich. 2012. Active imitation learning via reduction to iid active learning. In AAAI. Daniel Khashabi, Mark Sammons, Ben Zhou, Tom Redman, Christos Christodoulopoulos, Vivek Srikumar, Nicholas Rizzolo, Lev Ratinov, Guanheng Luo, Quang Do, Chen-Tse Tsai, Subhro Roy, Stephen Mayhew, Zhili Feng, John Wieting, Xiaodong Yu, Yangqiu Song, Shashank Gupta, Shyam Upadhyay, Naveen Arivazhagan, Qiang Ning, Shaoshi Ling, and Dan Roth. 2018. CogCompNLP: Your swiss army knife for NLP. In LREC. Rémi Leblond, Jean-Baptiste Alayrac, Anton Osokin, and Simon Lacoste-Julien. 2018. SEARNN: Training RNNs with global-local losses. In ICLR. Heeyoung Lee, Yves Peirsman, Angel Chang, Nathanael Chambers, Mihai Surdeanu, and Dan Jurafsky. 2011. Stanford’s multi-pass sieve coreference resolution system at the conll-2011 shared task. In Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task. N. Littlestone and M. K. Warmuth. 1989. The weighted majority algorithm. In Proceedings of the 30th Annual Symposium on Foundations of Computer Science. Joakim et. al Nivre. 2018. Universal dependencies v2.5. LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics, Charles University. Adwait Ratnaparkhi. 1996. A maximum entropy model for part-of-speech tagging. In EMNLP. Larry Rendell. 1986. A general framework for induction and a study of selective induction. Machine Learning Journal. Ellen Riloff and Janyce Wiebe. 2003. Learning extraction patterns for subjective expressions. In EMNLP. Eric Ringger, Peter McClanahan, Robbie Haertel, George Busby, Marc Carmen, James Carroll, Kevin Seppi, and Deryle Lonsdale. 2007. Active learning for part-of-speech tagging: Accelerating corpus annotation. In Proceedings of the Linguistic Annotation Workshop. Stéphane Ross, Geoff J. Gordon, and J. Andrew Bagnell. 2011. A reduction of imitation learning and structured prediction to no-regret online learning. In AI-Stats. David Sculley. 2007. Practical learning from one-sided feedback. In KDD. Vikash Singh. 2017. Replace or retrieve keywords in documents at scale. CoRR, abs/1711.00046. Peter Smit, Sami Virpioja, Stig-Arne Grönroos, and Mikko Kurimo. 2014. 
Morfessor 2.0: Toolkit for statistical morphological segmentation. In EACL.
Cynthia A. Thompson, Mary Elaine Califf, and Raymond J. Mooney. 1999. Active learning for natural language parsing and information extraction. In ICML.
Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In NAACL/HLT.
Vladimir Vapnik. 1982. Estimation of Dependences Based on Empirical Data. Springer Series in Statistics. Springer-Verlag, Berlin, Heidelberg.
Steven Whitehead. 1991. A study of cooperative mechanisms for faster reinforcement learning. Technical report, University of Rochester.
Torsten Zesch, Christof Müller, and Iryna Gurevych. 2008. Extracting lexical semantic knowledge from Wikipedia and Wiktionary. In LREC.
Chicheng Zhang and Kamalika Chaudhuri. 2015. Active learning from weak and strong labelers. In NeurIPS.
Supplementary Material For: Active Imitation Learning with Noisy Guidance
A Experimental Details
A.1 Wiktionary to Universal Dependencies POS Tag Conversion
Table 2: Conversion between Greek, Modern (el) Wiktionary POS tags and Universal Dependencies POS tags.
Greek, Modern (el) Wiktionary | Universal Dependencies
adjective | ADJ
adposition | ADP
preposition | ADP
adverb | ADV
auxiliary | AUX
coordinating conjunction | CCONJ
determiner | DET
interjection | INTJ
noun | NOUN
numeral | NUM
particle | PART
pronoun | PRON
proper noun | PROPN
punctuation | PUNCT
subordinating conjunction | SCONJ
symbol | SYM
verb | VERB
other | X
article | DET
conjunction | PART
A.2 Hyperparameters
Here we provide a table of all the hyperparameters we considered for LEAQI and the baseline models (see section 4.4).
Table 3: Hyperparameters
Hyperparameter | Values Considered | Final Value
Policy learning rate | 10^-3, 10^-4, 10^-5, 5.5·10^-6, 10^-6 | 10^-6
Difference classifier learning rate (h) | 10^-1, 10^-2, 10^-3, 10^-4 | 10^-2
Confidence parameter (b) | 5.0·10^-1, 10·10^-1, 15·10^-1 | 5.0·10^-1
A.3 Ablation Study: Difference Classifier Learning Rate (see Figure 4)
A.4 Ablation Study: Confidence Parameter b (see Figure 5)
Figure 4: (top row) English keyphrase extraction and (bottom row) low-resource language part of speech tagging on Greek, Modern (el). We show the performance of using different learning rates for the difference classifier h. These plots indicate that there is little difference in performance depending on the difference classifier learning rate.
Figure 5: (top row) English keyphrase extraction and (bottom row) low-resource language part of speech tagging on Greek, Modern (el). We show the performance of using different confidence parameters b. These plots indicate that our model is robust to different confidence parameters.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 191–207 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 191 Fluent Response Generation for Conversational Question Answering Ashutosh Baheti, Alan Ritter Computer Science and Engineering Ohio State University {baheti.3, ritter.1492}osu.edu Kevin Small Amazon Alexa [email protected] Abstract Question answering (QA) is an important aspect of open-domain conversational agents, garnering specific research focus in the conversational QA (ConvQA) subtask. One notable limitation of recent ConvQA efforts is the response being answer span extraction from the target corpus, thus ignoring the natural language generation (NLG) aspect of high-quality conversational agents. In this work, we propose a method for situating QA responses within a SEQ2SEQ NLG approach to generate fluent grammatical answer responses while maintaining correctness. From a technical perspective, we use data augmentation to generate training data for an end-to-end system. Specifically, we develop Syntactic Transformations (STs) to produce question-specific candidate answer responses and rank them using a BERT-based classifier (Devlin et al., 2019). Human evaluation on SQuAD 2.0 data (Rajpurkar et al., 2018) demonstrate that the proposed model outperforms baseline CoQA and QuAC models in generating conversational responses. We further show our model’s scalability by conducting tests on the CoQA dataset.1 1 Introduction Factoid question answering (QA) has recently enjoyed rapid progress due to the increased availability of large crowdsourced datasets (e.g., SQuAD (Rajpurkar et al., 2016), MS MARCO (Bajaj et al., 2016), Natural Questions (Kwiatkowski et al., 2019)) for training neural models and the significant advances in pre-training contextualized representations using massive text corpora (e.g., ELMo (Peters et al., 2018), BERT (Devlin et al., 2019)). Building on these successes, recent work examines conversational QA (ConvQA) systems capable of interacting with users over multiple turns. 1The code and data are available at https://github.com/abaheti95/QADialogSystem. Large crowdsourced ConvQA datasets (e.g., CoQA (Reddy et al., 2019), QuAC (Choi et al., 2018)) consist of dialogues between crowd workers who are prompted to ask and answer a sequence of questions regarding a source document. Although these ConvQA datasets support multi-turn QA interactions, the responses have mostly been limited to extracting text spans from the source document and do not readily support abstractive answers (Yatskar, 2019a). While responses copied directly from a Wikipedia article can provide a correct answer to a user question, they do not sound natural in a conversational setting. To address this challenge, we develop SEQ2SEQ models that generate fluent and informative answer responses to conversational questions. To obtain data needed to train these models, rather than constructing yet-another crowdsourced QA dataset, we transform the answers from an existing QA dataset into fluent responses via data augmentation. Specifically, we synthetically generate supervised training data by converting questions and associated extractive answers from a SQuADlike QA dataset into fluent responses via Syntactic Transformations (STs). These STs over-generate a large set of candidate responses from which a BERT-based classifier selects the best response as shown in the top half of Figure 1. 
While over-generation and selection generates fluent responses in many cases, the brittleness of the off-the-shelf parsers and the syntactic transformation rules prevents direct use in cases that are not well-covered. To mitigate this limitation, we generate a new augmented training dataset using the best response classifier, which is used to train end-to-end response generation models based on Pointer-Generator Networks (PGN) (See et al., 2017) and pre-trained Transformers using large amounts of dialogue data, DialoGPT (D-GPT) (Zhang et al., 2019). In §3.2 and §3.3, we empirically demonstrate that our proposed NLG models are capable of generating fluent, abstractive answers on both SQuAD 2.0 and CoQA.
Figure 1: Overview of our method of generating conversational responses for a given QA. In the first method, the Syntactic Transformations (STs) over-generate a list of responses (good and bad) using the question's parse tree, and the best response classifier selects the most suitable response from the list. Our second method uses this pipeline to augment training data for training SEQ2SEQ networks, PGN or D-GPT (§3.1). The final SEQ2SEQ model is end-to-end, scalable, easier to train, and performs better than using the first method exclusively.
2 Generating Fluent QA Responses
In this section, we describe our approach for constructing a corpus of questions and answers that supports fluent answer generation (top half of Figure 1). We use the framework of overgenerate and rank previously used in the context of question generation (Heilman and Smith, 2010). We first overgenerate answer responses for QA pairs using STs in §2.1. We then rank these responses from best to worst using the response classification models described in §2.2. Later in §3, we describe how we augment existing QA datasets with fluent answer responses using STs and a best response classifier. This augmented QA dataset is used for training the PGN and Transformer models.
2.1 Syntactic Transformations (STs)
The first step is to apply the Syntactic Transformations (STs) to the question's parse tree along with the expert answer phrase to produce multiple candidate responses. For the STs to work effectively, accurate question parses are essential. We use the Stanford English lexparser (https://nlp.stanford.edu/software/parser-faq.html#z) (Klein and Manning, 2003), which is trained on WSJ sections 1-21, QuestionBank (Judge et al., 2006), amongst other corpora. However, this parser still fails to recognize ∼20% of the questions (neither SBARQ nor SQ tag is assigned). For such erroneous parse trees, we simply output the expert answer phrase as a single response.
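As a rough illustration of this fallback check (not the paper's released code, which drives the Stanford parser directly), one could inspect the parse for a question-level constituent and back off to the answer phrase when none is found. The helper below assumes the parse is already available as a bracketed string; the function names and the `generate_candidates` hook are our own, purely illustrative.

```python
from nltk.tree import Tree

def response_or_fallback(parse_str: str, answer_phrase: str, generate_candidates):
    """Back off to the bare answer phrase when the parse has no SBARQ/SQ node."""
    tree = Tree.fromstring(parse_str)
    has_question_node = any(st.label() in ("SBARQ", "SQ") for st in tree.subtrees())
    if not has_question_node:
        # Parser failed to recognize the question, so no transformations are applied.
        return [answer_phrase]
    # Otherwise, apply the syntactic transformations described next (hypothetical hook).
    return generate_candidates(tree, answer_phrase)
```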
The remaining questions are processed via the following transformations to over-generate a list of candidate answers: (1) Verb modification: change the tense of the main verb based on the auxiliary verb using SimpleNLG (Gatt and Reiter, 2009); (2) Pronoun replacement: substitute the noun phrase with pronouns from a fixed list; (3) Fixing Preposition and Determiner: find the preposition and determiner in the question's parse tree that connects to the answer phrase and add all possible prepositions and determiners if missing; (4) Response Generation: using Tregex and Tsurgeon (Levy and Andrew, 2006), compile responses by combining components of all previous steps and the answer phrase. In cases where there are multiple options in steps (2) and (3), the number of options can explode, and we use the best response classifier (described below) to winnow. An example ST process is shown in Figure 2.
Figure 2: An example of Syntactic Transformations in action. Question: "what year did the Netherlands rise up against Philip II?" Answer: "1568". Using the question's parse tree we: (1) modify the verb "rise" based on the auxiliary verb "did" (red); (2) add missing prepositions and determiners (sky blue); (3) combine the subject and other components with the answer phrase (green) to generate the candidate R1. In another candidate R2, we swap the subject with the pronoun "they" (purple). Our transformations can also optionally remove Prepositional Phrases (PP), as shown in R2 (orange). In the figure, we only show two candidates, but in reality the transformations generate many more different candidates, including many implausible ones.
2.2 Response Classification and Baselines
A classification model selects the best response from the list of ST-generated candidates. Given the training dataset $D$ described in §2.3, consisting of $n$ question-answer tuples $(q_i, a_i)$ and their lists of corresponding responses $\{r_{i1}, r_{i2}, \ldots, r_{im_i}\}$, the goal is to classify each response $r_{ij}$ as bad or good. The probability of the response being good is later used for ranking. We experiment with two different model objectives, described below.
Logistic: We assume that the responses for each $q_i$ are independent of each other. The model $F(\cdot)$ classifies each response separately and assigns 1 (or 0) if $r_{ij}$ is a good (or bad) response for $q_i$. The Logistic loss is given by $\sum_{i=1}^{n} \sum_{j=1}^{m_i} \log\left(1 + e^{-y_{ij} \cdot F(q_i, a_i, r_{ij})}\right)$, where $y_{ij}$ is the label for $r_{ij}$.
Softmax: We will discuss in §2.3 that annotators are expected to miss a few good responses, since good and bad answers are often very similar (they may differ only by a single preposition or pronoun). Therefore, we explore a ranking objective that calculates errors based on the margin with which incorrect responses are ranked above correct ones (Collins and Koo, 2005). Without loss of generality, we assume $r_{i1}$ to be better than all other responses for $(q_i, a_i)$. Since the model $F(\cdot)$ should rank $r_{i1}$ higher than all other responses, we use the margin error $M_{ij}(F) = F(q_i, a_i, r_{i1}) - F(q_i, a_i, r_{ij})$ to define the Softmax loss as $\sum_{i=1}^{n} \log\left(1 + \sum_{j=2}^{m_i} e^{-M_{ij}(F)}\right)$.
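For concreteness, the following is a minimal sketch of the two objectives for a single question, assuming the classifier scores $F(q_i, a_i, r_{ij})$ for all candidates have already been collected into a tensor with the annotated best response in position 0. The PyTorch formulation, function names, and the $\pm 1$ label encoding are our own illustration, not the paper's released implementation; summing either quantity over all questions recovers the losses written above.

```python
import torch

def logistic_loss(scores: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Pointwise objective: each candidate response is scored independently.
    `scores` holds F(q_i, a_i, r_ij); `labels` encode good/bad as +1/-1
    (the usual convention for this form of logistic loss)."""
    return torch.log1p(torch.exp(-labels * scores)).sum()

def softmax_ranking_loss(scores: torch.Tensor) -> torch.Tensor:
    """Ranking objective: the known-good response (index 0, i.e. r_i1) should
    outscore every other candidate; the loss is log(1 + sum_j exp(-M_ij))."""
    margins = scores[0] - scores[1:]               # M_ij(F) for j = 2..m_i
    return torch.log1p(torch.exp(-margins).sum())  # log(1 + sum exp(-M_ij))
```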
We experiment with the following feature-based and neural models with the two loss functions:
Language Model Baseline: The responses are ranked using the normalized probabilities from a 3-gram LM trained on the Gigaword corpus (http://www.keithv.com/software/giga/) with modified Kneser-Ney smoothing. The response with the highest score is classified as 1 and the others as 0.
Linear Model: A linear classifier using features inspired by Heilman and Smith (2010) and Wan et al. (2006), who have implemented similar linear models for other sentence pair classification tasks. Specifically, we use the following features:
• Length (Features 1-3): word length of the question $q_i$, answer phrase $a_i$, and response $r_{ij}$
• WH-word (Features 4-12): [0-1 feat.] whether what, who, whom, whose, when, where, which, why or how is present in $q_i$
• Negation (Feature 13): [0-1 feat.] whether no, not or none is present in $q_i$
• N-gram LM (Features 14-21): 2- and 3-gram normalized probability and perplexity of $q_i$ and $r_{ij}$
• Grammar (Features 22-93): node counts of the syntactic parse trees of $q_i$ and $r_{ij}$
• Word overlap (Features 94-96): three features based on the fraction of word overlap between $q_i$ and $r_{ij}$: $\text{precision} = \frac{\text{overlap}(q_i, r_{ij})}{|q_i|}$, $\text{recall} = \frac{\text{overlap}(q_i, r_{ij})}{|r_{ij}|}$, and their harmonic mean
Decomposable Attention: We use the sentence pair classifier from Parikh et al. (2016), referred to as the DA model. It finds an attention-based word alignment of the input pair (premise and hypothesis; in our case, question $q_i$ and response $r_{ij}$) and aggregates it using feedforward networks. Apart from standard vector embeddings, we also experiment with contextualized ELMo (Peters et al., 2018) embeddings with the DA model, using the version implemented in AllenNLP (Gardner et al., 2017).
BERT: Lastly, we use the BERT-Base, Uncased model (Devlin et al., 2019) for sentence pair classification. The model takes question $q_i$ and response $r_{ij}$ separated by the special token [SEP] and predicts if the response is suitable or unsuitable.
In some cases, the number of responses generated by the STs for a question can be as high as 5000+. Therefore, when training the DA model with pre-trained contextualized embeddings such as ELMo, or the BERT model, in the Softmax loss setting, backpropagation requires computing and storing hidden states for 5000+ different responses. To mitigate this issue, we use strided negative sampling. While training, we first separate all the suitable responses from all the remaining unsuitable responses. We then divide all the responses for $q_i$ into smaller batches of $K$ or fewer responses. Each batch comprises one suitable response (randomly chosen) and $K - 1$ responses sampled from the unsuitable responses. To ensure that all unsuitable responses are used at least once during training, we shuffle them and then create smaller batches by taking strides of size $K - 1$. We use $K = 150$ for DA+ELMo and $K = 50$ for BERT when training with the Softmax loss. At test time, we compute logits on the CPU and normalize across all responses.
2.3 Training Data for Response Classification
In this section, we describe the details of the training, validation and testing data used to develop the best response classifier models. To create the supervised data, we choose a sample from the train set of the SQuAD 2.0 dataset (Rajpurkar et al., 2018). SQuAD 2.0 contains human-generated questions and answer spans selected from Wikipedia paragraphs.
Before sampling, we remove all the QA pairs which had answer spans > 5 words, as they tend to be non-factoid questions and complete sentences in themselves (typically "why" and "how" questions). We also filter out questions that cannot be handled by the parser (∼20% of them had obvious parser errors). After this filtering, we take a sample of 3000 questions and generate their lists of responses using STs (1,561,012 total responses). Next, we developed an annotation task on Amazon Mechanical Turk to select the best responses for the questions. For each question, we ask the annotators to select a response from the list of responses that correctly answers the question, sounds natural, and seems human-like. Since the list of responses for some questions is as long as 5000+, the annotators can't review all of them before selecting the best one. Hence, we implement a search feature within the response list such that annotators can type in a partial response in the search box to narrow down the options before selection. To make their job easier, we also sorted responses by length. This encouraged annotators to select relatively short responses, which we found to be beneficial, as one would prefer an automatic QA system to be terse. To verify that the annotators didn't cheat this annotation design by selecting the first/shortest option, we also test a Shortest Response Baseline as another baseline response classifier model, where the first/shortest response in the list is selected as suitable. Each question is assigned 5 annotators. Therefore, there can be at most 5 unique annotated responses for each question. This decreases the recall of the gold truth data (since there can be more than 5 good ways of correctly responding to a question). On the other hand, bad annotators may choose a unique yet suboptimal/incorrect response, which decreases the precision of the gold truth. After annotating the 3000 questions from the SQuAD 2.0 sample, we randomly split the data into 2000 train, 300 validation, and 700 test questions. We refer to this as the SQuAD Gold annotated (SG) data. To increase SG training data precision, we assign label 1 only to responses that are marked as best by at least two different annotators. Due to this hard constraint, 244 questions from the training data are removed (i.e. the 5 annotators marked 5 unique responses). On the other hand, to increase the recall of the SG test and validation sets, we retain all annotations. (We found that some bad annotators had a high affinity for choosing the first, or shortest, response when it was not the best choice in the list. To reduce such annotation errors, we add another constraint that the shortest response should be selected by at least 2 different annotators.) We assign label 0 to all remaining responses (even if some of them are plausible). The resulting SG data split is summarized in Table 1.
Table 1: Statistics of the SG training, validation, and test sets curated from the SQuAD 2.0 training data. q and a denote the question and answer from the SQuAD 2.0 sample, and r denotes the responses generated by the STs. #q means "number of questions"; the last two columns give the number of responses labeled 1 and 0, respectively, after the human annotation process.
Split | #q/#a | #r (label 1) | #r (label 0)
Train | 1756 | 2028 | 796174
Val | 300 | 791 | 172135
Test | 700 | 1833 | 182963
Every response may be marked by zero or more annotators. When at least two annotators select the same response from the list, we consider it as a match. To compute the annotator agreement score, we divide the number of matches by the total number of annotations for each annotator. Using this formula, we find the average annotator agreement to be 0.665, where each annotator's agreement score is weighted by their number of annotated questions.
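The agreement computation described above reduces to a simple weighted average. The sketch below is our reading of it; the data layout and function name are assumptions rather than the paper's code.

```python
from collections import Counter

def weighted_agreement(annotations):
    """annotations: dict mapping annotator id -> list of (question_id, response_id) picks.

    A pick counts as a match if at least two annotators chose the same response
    for a question. Each annotator's agreement is matches / picks, and annotators
    are weighted by how many questions they labeled, so the weighted average
    simplifies to total matches / total picks.
    """
    counts = Counter(pick for picks in annotations.values() for pick in picks)
    total_matches = sum(1 for picks in annotations.values()
                        for pick in picks if counts[pick] >= 2)
    total_picks = sum(len(picks) for picks in annotations.values())
    return total_matches / total_picks
```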
2.4 Evaluation of Response Classification
As previously mentioned in §2.3, the SG data doesn't contain all true positives, since one cannot exhaustively find and annotate all the good responses when the response list is very long. Additionally, there is a large class imbalance between good and bad responses, making standard evaluation metrics such as precision, recall, F1 score and accuracy potentially misleading. To gather additional insight regarding how well the model ranks correct responses over incorrect ones, we calculate Precision@1 (P@1; the % of times the correct response is ranked first), Max. F1 (the maximum F1 the model can achieve by choosing the optimal threshold in the PR curve), and Area Under the Precision-Recall Curve (PR-AUC). We train all classifier models on the SG training set and evaluate them on SG test data. The resulting evaluation is presented in Table 2.
Table 2: Best response classifier results on SG test data. "ShortResp" stands for the Shortest Response baseline, "LangModel" stands for the Language Model baseline, and "Linear" stands for the Linear model. "Log." and "Soft." in the Loss column stand for Logistic and Softmax loss, respectively. DA refers to the Decomposable Attention model (Parikh et al., 2016); "+ELMo" refers to adding pre-trained ELMo embeddings to the DA model.
Classifier | Loss | P@1 | Max-F1 | PR-AUC
ShortResp | – | 0.324 | 0.189 | –
LangModel | – | 0.058 | 0.012 | –
Linear | Log. | 0.680 | 0.159 | 0.070
Linear | Soft. | 0.640 | 0.387 | 0.344
DA | Log. | 0.467 | 0.151 | 0.066
DA+ELMo | Log. | 0.694 | 0.354 | 0.301
DA | Soft. | 0.503 | 0.383 | 0.297
DA+ELMo | Soft. | 0.716 | 0.456 | 0.427
BERT | Log. | 0.816 | 0.490 | 0.465
BERT | Soft. | 0.833 | 0.526 | 0.435
The results show that the shortest response baseline (ShortResp) performs worse than the ML models (0.14 to 0.51 absolute P@1 difference depending on the model). This verifies that the annotation is not dominated by presentation bias where annotators are just selecting the shortest (first in the list) response for each question. The language model baseline (LangModel) performs even worse (0.41 to 0.78 absolute difference), demonstrating that this task is unlikely to have a trivial solution. The feature-based linear model shows good performance when trained with the Softmax loss, beating many of the neural models in terms of PR-AUC and Max-F1. By inspecting the weight vector, we find that grammar features, specifically the number of prepositions, determiners, and "to"s in the response, are the features with the highest weights. This probably implies that the most important challenge in this task is finding the right prepositions and determiners in the response. Other important features are the response length and the response's 3-gram LM probabilities. The ostensible limitation of feature-based models is failing to recognize correct pronouns for unfamiliar named entities in the questions. Due to the small size of the SG train set, the vanilla Decomposable Attention (DA) model is unable to learn good representations on its own and accordingly performs worse than the linear feature-based model. The addition of ELMo embeddings appears to help to cope with this.
We find that the DA model with ELMo embeddings is better able to predict the right pronouns for the named entities, presumably due to pre-trained representations. The best neural model in terms of P@1 is the BERT model fine-tuned with the Softmax loss (last row of Table 2). 3 Data-Augmentation and Generation SEQ2SEQ models are very effective in generation tasks. However, our 2028 labeled question and response pairs from the SG train set (Table 1) are insufficient for training these large neural models. On the other hand, creating a new large-scale dataset that supports fluent answer generation by crowdsourcing is inefficient and expensive. Therefore, we augment SQuAD 2.0 with responses from the STs+BERT classifier (Table 2) to create a synthetic training dataset for SEQ2SEQ models. We take all the QA pairs from the SQuAD 2.0 train-set which can be handled by the question parser and STs, and rank their candidate responses using the BERT response classifier probabilities trained with Softmax loss (i.e. ranking loss (Collins and Koo, 2005)). Therefore, for each question we select the top ranked responses7 by setting a threshold on the probabilities obtained from the BERT model. We refer to the resulting dataset as SQuAD-Synthetic (SS) consisting of 59,738 ⟨q, a, r⟩instances. To increase the size of SS training data, we take the QA pairs from Natural Questions (Kwiatkowski et al., 2019) and HarvestingQA8 (Du and Cardie, 2018) and add ⟨q, a, r⟩instances using the same STs+BERT classifier technique. These new pairs combined with SS result in a dataset of 1,051,938 ⟨q, a, r⟩instances, referred to as the SS+ dataset. 3.1 PGN, D-GPT, Variants and Baselines Using the resulting SS and SS+ datasets, we train Pointer generator networks (PGN) (See et al., 2017), DialoGPT (D-GPT) (Zhang et al., 2019) and their variants to produce a fluent answer response 7at most three responses per question 8HarvestingQA is a QA dataset containing 1M QA pairs generated over 10,000 top-ranking Wikipedia articles. This dataset is noisy as the questions are automatically generated using an LSTM based encoder-decoder model (which makes use of coreference information) and the answers are extracted using a candidate answer extraction module. 196 generator. The input to the generation model is the question and the answer phrase ⟨q, a⟩and the response r is the corresponding generation target. PGN: PGNs are widely used SEQ2SEQ models equipped with a copy-attention mechanism capable of copying any word from the input directly into the generated output, making them well equipped to handle rare words and named entities present in questions and answer phrases. We train a 2-layer stacked bi-LSTM PGN using the OpenNMT toolkit (Klein et al., 2017) on the SS and SS+ data. We additionally explore PGNs with pre-training information by initializing the embedding layer with GloVe vectors (Pennington et al., 2014) and pretraining it with ⟨q, r⟩pairs from the questions-only subset of the OpenSubtitles corpus9 (Tiedemann, 2009). This corpus contains about 14M questionresponse pairs in the training set and 10K pairs in the validation set. We name the pre-trained PGN model as PGN-Pre. We also fine-tune PGN-Pre on the SS and SS+ data to generate two additional variants. D-GPT: DialoGPT (i.e. dialogue generative pretrained transformer) (Zhang et al., 2019) is a recently released large tunable automatic conversation model trained on 147M Reddit conversationlike exchanges using the GPT-2 model architecture (Radford et al., 2019). 
We fine-tune D-GPT on our task using the SS and SS+ datasets. For comparison we also train GPT-2 on our datasets from scratch (i.e. without any pre-training). Finally, to assess the impact of pre-training datasets, we pre-train the GPT-2 on the 14M questions from questions-only subset of the OpenSubtitles data (similar to the PGN-Pre model) to get GPT-2-Pre model. The GPT-2-Pre is later fine-tuned on the SS and SS+ datasets to get two corresponding variants. CoQA Baseline: Conversational Question Answering (CoQA) (Reddy et al., 2019) is a large-scale ConvQA dataset aimed at creating models which can answer the questions posed in a conversational setting. Since we are generating conversational responses for QA systems, it is sensible to compare against such ConvQA systems. We pick one of the best performing BERT-based CoQA model from the SMRCToolkit (Wu et al., 2019) as a baseline.10 We refer to this model as the CoQA baseline. QuAC Baseline: Question Answering in Context 9http://forum.opennmt.net/t/english-chatbot-model-withopennmt/184 10one of the top performing model with available code. is another ConvQA dataset. We use the modified version of BiDAF model presented in (Choi et al., 2018) as a second baseline. Instead of a SEQ2SEQ generation, it selects spans from passage which acts as responses. We use the version of this model implemented in AllenNLP (Gardner et al., 2017) and refer to this model as the QuAC baseline. STs+BERT Baseline: We also compare our generation models with the technique that created the SS and SS+ training datasets (i.e. the responses generated by STs ranked with the BERT response classifier). We validate all the SEQ2SEQ models on the human annotated SG data (Table 1). 3.2 Evaluation on the SQuAD 2.0 Dev Set To have a fair and unbiased comparison, we create a new 500 question sample from the SQuAD 2.0 dev set (SQuAD-dev-test) which is unseen for all the models and baselines. This sample contains ∼20% of the questions that cannot be handled by the STs (parser errors). For such questions, we default to outputting the answer-phrase as the response for the STs+BERT baseline. For the CoQA baseline and the QuAC baseline, we run their models on passages (corresponding to the questions) from SQuAD-dev-test to get their responses. To demonstrate that our models too can operate in a fully automated setting like the CoQA baseline and the QuAC baseline, we generate their responses using the answer spans selected by a BERTbased SQuAD model (instead of the gold answer span from the SQuAD-dev-test). For automatic evaluation we compute validation perplexity of all SEQ2SEQ generation models on SG data (3rd column in Table 3). However, validation perplexity is a weak evaluator of generation models. Also, due to the lack of human-generated references in SQuAD-dev-test, we cannot use other typical generation based automatic metrics. Therefore, we use Amazon Mechanical Turk to do human evaluation. Each response is judged by 5 annotators. We ask the annotators to identify if the response is conversational and answers the question correctly. While outputting answer-phrase to all questions is trivially correct, this style of response generation seems robotic and unnatural in a prolonged conversation. Therefore, we also ask the annotators to judge if the response is a completesentence (e.g. “it is in Indiana”) and not a sentencefragment (e.g. “Indiana”). 
For each question and response pair, we show the annotators five options 197 Model Data PPL a b c d e      correct answer      complete-sentence   grammaticality CoQA B. 13.80 82.20 1.20 0.60 2.20 QuAC B. 5.20 3.80 46.40 2.80 41.80 STs+BERT B. 0.00 18.20 0.20 13.80 67.80 PGN SS 6.60 1.00 7.00 9.00 16.20 66.80 PGN SS+ 3.83 1.00 3.00 8.40 17.60 70.00 PGN-Pre SS 4.34 0.20 4.60 9.80 17.40 68.00 PGN-Pre SS+ 3.31 0.40 4.80 9.00 16.20 69.60 GPT-2 SS 4.69 1.00 5.00 13.20 18.60 62.20 GPT-2 SS+ 2.70 0.80 4.20 8.20 16.80 70.00 GPT-2-Pre SS 3.23 0.40 2.80 8.20 19.00 69.60 GPT-2-Pre SS+ 2.74 0.80 2.40 7.80 17.00 72.00 D-GPT SS 2.20 0.40 2.40 8.60 13.00 75.60 D-GPT SS+ 2.06 0.40 2.60 7.80 13.20 76.00 D-GPT (o) SS+ 2.06 0.00 3.00 0.00 13.80 83.20 Table 3: Human evaluation results of all the models and baselines on sample of SQuAD-dev-test. In the first three rows B. stands for baseline. In the last row ”(o)” stands for oracle. In Column 3 PPL stands for validation perplexity. All the values are percentage (out of 100) of responses from each model that belong to specific option(a to e) selected by annotators. based on the three properties (correctness, grammaticality, and complete-sentence). These five options (a to e) are shown in the Table 3 header. The best response is a complete-sentence which is grammatical and answers the question correctly (i.e. option e). Other options give us more insights into different models’ behavior. For each response, we assign the majority option selected by the annotators and aggregate their judgments into buckets. We present this evaluation in Table 3. We compute the inter-annotator agreement by calculating Cohen’s kappa (Cohen, 1960) between individual annotator’s assignments and the aggregated majority options. The average Cohen’s kappa (weighted by the number of annotations for every annotator) is 0.736 (i.e. substantial agreement). The results reveal that CoQA baseline does the worst in terms of option e. The main reason for that is because most of the responses generated from this baseline are exact answer spans. Therefore, we observe that it does very well in option b (i.e. correct answer but not a complete-sentence). The QuAC baseline can correctly select span-based informative response ∼42% of the time. Other times, however, it often selects a span from the passage which is related to the topic but doesn’t contain the correct answer i.e. (option c). Another problem with this baseline is that it is restricted by the input passage and many not always be able to find a valid span that answers the questions. Our STs+BERT baseline does better in terms of option e compared to the other baselines but it is limited by the STs and the parser errors. As mentioned earlier, ∼20% of the time this baseline directly copies the answerphrase in the response which explains the high percentage of option b. Almost all models perform better when trained with SS+ data showing that the additional data from Natural Questions and HarvestingQA is helping. Except for the PGN model trained on SS data, all other variants perform better than STs+BERT baseline in terms of option e. The GPT-2 model trained on SS data from scratch does not perform very well because of the small size of training data. The pretraining with OpenSubtitiles questions boosts its performance (option e % for GPT-2Pre model variants > option e % for GPT-2 model variants). The best model however is D-GPT when finetuned with SS+ dataset. 
While retaining the correct answer, it makes less grammatical errors (lower % in option c and d compared to other models). Furthermore with oracle answers it performs even better (last row in Table 3). This shows that D-GPT can generate better quality responses with accurate answers. We provide some sample responses from different models in Appendix A. 3.3 Evaluation on CoQA In this section, we test our model’s ability to generate conversational answers on the CoQA dev set, using CoQA baseline’s predicted answers. The CoQA dataset consists of passages from seven different domains (out of which one is Wikipedia) and conversational questions and answers on those 198 Model a b c d e CoQA B. 12.0 78.0 5.0 2.0 3.0 D-GPT 2.0 5.0 16.0 20.0 57.0 D-GPT (o) 0.0 7.0 0.0 16.0 77.0 Table 4: Human evaluation results of D-GPT model (trained on SS+ dataset) vs CoQA model on sample of 100 question answers from filtered CoQA dev set. (o) stands for oracle answers. Options a to e are explained in Table 3 header. passages. Due to the conversational nature of this dataset, some of the questions are one word (∼3.1%), like “what?”, “why?” etc. Such questions are out-of-domain for our models as they require the entire context over multiple turns of the conversation to develop their response. Other out-of-domain questions include unanswerable (∼ 0.8%) and yes/no (∼18.4%) questions. We also don’t consider questions with answers > 5 words (∼11.6%) as they are typically non-factoid questions. We take a random sample of 100 from the remaining questions. This sample contains questions from a diverse set of domains outside of the Wikipedia (on which our models are trained). This includes questions taken from the middle of a conversation (for example, “who did they meet ?”) which are unfamiliar for our models. We perform a human evaluation similar to §3.2 on this sample. We compare CoQA against D-GPT trained on the SS+ dataset (with CoQA’s predictions input as answer-phrases). The results are shown in Table 4. This evaluation reveals that the D-GPT model is able to successfully convert the CoQA answer spans into conversational responses 57% of the time (option e). D-GPT gets the wrong answer 18% of the time (option a and c), because the input answer predicted by the CoQA baseline is also incorrect 17% of the time. However with oracle answers, it is able to generate correct responses 77% of the times (option e). The weighted average Cohen’s kappa (Cohen, 1960) score for all annotators in this evaluation is 0.750 (substantial agreement). This result demonstrates ability of our model to generalize over different domains and generate good conversational responses for questions when provided with correct answer spans. 4 Related Work Question Generation (QG) is a well studied problem in the NLP community with many machine learning based solutions (Rus et al., 2010; Heilman and Smith, 2010; Yao et al., 2012; Labutov et al., 2015; Serban et al., 2016; Reddy et al., 2017; Du et al., 2017; Du and Cardie, 2017, 2018). In comparison, our work explores the opposite direction, i.e. (generating conversational humanlike answers given a question). Fu and Feng (2018) also try to solve fluent answer response generation task but in a restricted setting of movie related questions with 115 question patterns. In contrast, our generation models can deal with human generated questions from any domain. 
Learning to Rank formulations for answer selection in QA systems is common practice, most frequently relying on pointwise ranking models (Severyn and Moschitti, 2015; Garg et al., 2019). Our use of discriminative re-ranking (Collins and Koo, 2005) with softmax loss is closer to learning a pairwise ranking by maximizing the multiclass margin between correct and incorrect answers (Joachims, 2002; Burges et al., 2005; K¨oppel et al., 2019). This is an important distinction from TREC-style answer selection as our ST-generated candidate responses have lower semantic, syntactic, and lexical variance, making pointwise methods less effective. Question Answering Using crowd-sourcing methods to create QA datasets (Rajpurkar et al., 2016; Bajaj et al., 2016; Rajpurkar et al., 2018), conversational datasets (Dinan et al., 2018), and ConvQA datasets (Choi et al., 2018; Reddy et al., 2019; Elgohary et al., 2018; Saha et al., 2018) has largely driven recent methodological advances. However, models trained on these ConvQA datasets typically select exact answer spans instead of generating them (Yatskar, 2019b). Instead of creating another crowd-sourced dataset for our task, we augment existing QA datasets to include such conversational answer responses using the STs + BERT trained with softmax loss. 5 Conclusion In this work, we study the problem of generating fluent QA responses in the context of building fluent conversational agents. To this end, we propose an over-generate and rank data augmentation procedure based on Syntactic Transformations and a best response classifier. This method is used to modify the SQuAD 2.0 dataset such that it includes conversational answers, which is used to train SEQ2SEQ based generation models. Human evaluations on SQuAD-dev-test show that our models generate 199 significantly better conversational responses compared to the baseline CoQA and QuAC models. Furthermore, the D-GPT model with oracle answers is able to generate conversational responses on the CoQA dev set 77 % of the time showcasing the model’s scalability. Acknowledgments We would like to thank the reviewers for providing valuble feedback on an earlier draft of this paper. This material is based in part on research sponsored by the NSF (IIS-1845670), ODNI and IARPA via the BETTER program (2019-19051600004) DARPA via the ARO (W911NF-17-C-0095) in addition to an Amazon Research Award. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of NSF, ODNI, ARO, IARPA, DARPA or the U.S. Government. References Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, et al. 2016. Ms marco: A human generated machine reading comprehension dataset. arXiv preprint arXiv:1611.09268. Christopher Burges, Tal Shaked, Erin Renshaw, Ari Lazier, Matt Deeds, Nicole Hamilton, and Gregory N Hullender. 2005. Learning to rank using gradient descent. In Proceedings of the 22nd International Conference on Machine learning (ICML-05), pages 89–96. Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wentau Yih, Yejin Choi, Percy Liang, and Luke Zettlemoyer. 2018. QuAC: Question answering in context. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2174–2184, Brussels, Belgium. Association for Computational Linguistics. Jacob Cohen. 1960. A coefficient of agreement for nominal scales. 
Educational and psychological measurement, 20(1):37–46. Michael Collins and Terry Koo. 2005. Discriminative reranking for natural language parsing. Computational Linguistics, 31(1):25–70. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2018. Wizard of wikipedia: Knowledge-powered conversational agents. arXiv preprint arXiv:1811.01241. Xinya Du and Claire Cardie. 2017. Identifying where to focus in reading comprehension for neural question generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2067–2073, Copenhagen, Denmark. Association for Computational Linguistics. Xinya Du and Claire Cardie. 2018. Harvesting paragraph-level question-answer pairs from Wikipedia. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1907–1917, Melbourne, Australia. Association for Computational Linguistics. Xinya Du, Junru Shao, and Claire Cardie. 2017. Learning to ask: Neural question generation for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1342–1352, Vancouver, Canada. Association for Computational Linguistics. Ahmed Elgohary, Chen Zhao, and Jordan Boyd-Graber. 2018. A dataset and baselines for sequential opendomain question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1077–1083, Brussels, Belgium. Association for Computational Linguistics. Yao Fu and Yansong Feng. 2018. Natural answer generation with heterogeneous memory. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 185–195, New Orleans, Louisiana. Association for Computational Linguistics. Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Peters, Michael Schmitz, and Luke S. Zettlemoyer. 2017. Allennlp: A deep semantic natural language processing platform. Siddhant Garg, Thuy Vu, and Alessandro Moschitti. 2019. Tanda: Transfer and adapt pre-trained transformer models for answer sentence selection. arXiv preprint arXiv:1911.04118. Albert Gatt and Ehud Reiter. 2009. Simplenlg: A realisation engine for practical applications. In Proceedings of the 12th European Workshop on Natural Language Generation (ENLG 2009), pages 90–93. 200 Michael Heilman and Noah A. Smith. 2010. Good question! statistical ranking for question generation. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 609–617, Los Angeles, California. Association for Computational Linguistics. Thorsten Joachims. 2002. Optimizing search engines using clickthrough data. In Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 133– 142. ACM. John Judge, Aoife Cahill, and Josef Van Genabith. 2006. 
Questionbank: Creating a corpus of parseannotated questions. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, pages 497–504. Association for Computational Linguistics. Dan Klein and Christopher D Manning. 2003. Fast exact inference with a factored model for natural language parsing. In Advances in neural information processing systems, pages 3–10. Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander M. Rush. 2017. OpenNMT: Open-source toolkit for neural machine translation. In Proc. ACL. Marius K¨oppel, Alexander Segner, Martin Wagener, Lukas Pensel, Andreas Karwath, and Stefan Kramer. 2019. Pairwise learning to rank by neural networks revisited: Reconstruction, theoretical analysis and practical performance. arXiv preprint arXiv:1909.02768. Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:453–466. Igor Labutov, Sumit Basu, and Lucy Vanderwende. 2015. Deep questions without deep understanding. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 889–898, Beijing, China. Association for Computational Linguistics. Roger Levy and Galen Andrew. 2006. Tregex and tsurgeon: tools for querying and manipulating tree data structures. In Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06), Genoa, Italy. European Language Resources Association (ELRA). Ankur Parikh, Oscar T¨ackstr¨om, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2249–2255, Austin, Texas. Association for Computational Linguistics. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, Doha, Qatar. Association for Computational Linguistics. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237, New Orleans, Louisiana. Association for Computational Linguistics. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8). Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don’t know: Unanswerable questions for SQuAD. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 784– 789, Melbourne, Australia. Association for Computational Linguistics. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. 
SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas. Association for Computational Linguistics. Sathish Reddy, Dinesh Raghu, Mitesh M. Khapra, and Sachindra Joshi. 2017. Generating natural language question-answer pairs from a knowledge graph using a RNN based question generation model. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 376–385, Valencia, Spain. Association for Computational Linguistics. Siva Reddy, Danqi Chen, and Christopher D. Manning. 2019. CoQA: A conversational question answering challenge. Transactions of the Association for Computational Linguistics, 7:249–266. Vasile Rus, Brendan Wyse, Paul Piwek, Mihai Lintean, Svetlana Stoyanchev, and Cristian Moldovan. 2010. The first question generation shared task evaluation challenge. 201 Amrita Saha, Vardaan Pahuja, Mitesh M Khapra, Karthik Sankaranarayanan, and Sarath Chandar. 2018. Complex sequential question answering: Towards learning to converse over linked question answer pairs with a knowledge graph. In ThirtySecond AAAI Conference on Artificial Intelligence. Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointergenerator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073– 1083, Vancouver, Canada. Association for Computational Linguistics. Iulian Vlad Serban, Alberto Garc´ıa-Dur´an, Caglar Gulcehre, Sungjin Ahn, Sarath Chandar, Aaron Courville, and Yoshua Bengio. 2016. Generating factoid questions with recurrent neural networks: The 30M factoid question-answer corpus. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 588–598, Berlin, Germany. Association for Computational Linguistics. Aliaksei Severyn and Alessandro Moschitti. 2015. Learning to rank short text pairs with convolutional deep neural networks. In Proceedings of the 38th international ACM SIGIR conference on research and development in information retrieval, pages 373– 382. ACM. J¨org Tiedemann. 2009. News from opus-a collection of multilingual parallel corpora with tools and interfaces. In Recent advances in natural language processing, volume 5, pages 237–248. Stephen Wan, Mark Dras, Robert Dale, and C´ecile Paris. 2006. Using dependency-based features to take the’para-farce’out of paraphrase. In Proceedings of the Australasian Language Technology Workshop 2006, pages 131–138. Jindou Wu, Yunlun Yang, Chao Deng, Hongyi Tang, Bingning Wang, Haoze Sun, Ting Yao, and Qi Zhang. 2019. Sogou Machine Reading Comprehension Toolkit. arXiv e-prints, page arXiv:1903.11848. Xuchen Yao, Gosse Bouma, and Yi Zhang. 2012. Semantics-based question generation and implementation. Dialogue & Discourse, 3(2):11–42. Mark Yatskar. 2019a. A qualitative comparison of CoQA, SQuAD 2.0 and QuAC. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2318–2323, Minneapolis, Minnesota. Association for Computational Linguistics. Mark Yatskar. 2019b. A qualitative comparison of coqa, squad 2.0 and quac. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and Bill Dolan. 2019. Dialogpt: Large-scale generative pre-training for conversational response generation. 202 A Sample responses from different models Sample of responses from different models on SQuAD-dev-test set §3.2. ”Q:” is the Question, ”R:” is the Response, ”B.” stands for baseline and ”(o)” stands for oracle answer spans. Model Q:what fixed set of factors determine the actions of a deterministic turing machine correctness complete-sentence grammaticality majority option CoQA B. R: fixed set of rules   b QuAC B. R: a deterministic turing machine is the most basic turing machine , which uses a fixed set of rules to determine its future actions    e STs+BERT B. R: rules determine the actions    e PGN-Pre with SS+ R: fixed set of rules determine the actions    e GPT-2 with SS+ R: fixed set of rules fixed set of factors determine the actions    d GPT-2-Pre with SS+ R: fixed set of rules determine the actions    e D-GPT with SS+ R: fixed set of rules determine the actions    e D-GPT with SS+ (o) R: rules determine the actions    e Model Q:why was polonia relegated from the country ’s top flight in 2013 ? correctness complete-sentence grammaticality majority option CoQA B. R: because of their disastrous financial situation   b QuAC B. R: polonia was relegated from the country ’s top flight in 2013 because of their disastrous financial situation    e LGRs+BERT B. R: polonia was relegated from the country ’s top flight because the disastrous financial situation    d PGN-Pre with SS+ R: polonia was relegated disastrous financial situation    d GPT-2 with SS+ R: polonia was relegated disastrous financial situation    d GPT-2-Pre with SS+ R: polonia was relegated disastrous financial situation    d D-GPT with SS+ R: polonia was relegated from the country’s top flight for disastrous financial situation    e D-GPT with SS+ (o) R: polonia was relegated from the country’s top flight for disastrous financial situation    e Model Q:how was sadat rewarded by the islamists for his attempts to bring egypt into modern times and civilization ? correctness complete-sentence grammaticality majority option CoQA B. R: released Islamists from prison   a QuAC B. R: egyptian president anwar sadat – whose policies included opening egypt to western investment ( infitah ) ; transferring egypt ’s allegiance from the soviet union to the united states   c LGRs+BERT B. R: sadat was rewarded by the islamists for his attempts to bring egypt assassinated    d PGN-Pre with SS+ R: sadat was rewarded piety and charity   c GPT-2 with SS+ R: it was rewarded piety and charity   c GPT-2-Pre with SS+ R: sadat was rewarded by piety and charity   c D-GPT with SS+ R: sadat was rewarded piety and charity   c D-GPT with SS+ (o) R: sadat was assassinated    e 203 Model Q:from what type of materials must liquid oxygen be separated ? correctness complete-sentence grammaticality majority option CoQA B. R: combustible materials   b QuAC B. R: both liquid and solid o n2 are clear substances with a light sky-blue color caused by absorption in the red   c STs+BERT B. 
R: combustible materials   b PGN-Pre with SS+ R: it must be separated from combustible    e GPT-2 with SS+ R: it must be separated from combustible    e GPT-2-Pre with SS+ R: they must be separated from combustible    e D-GPT with SS+ R: liquid oxygen must be separated from combustible    e D-GPT with SS+ (o) R: liquid oxygen must be separated from combustible materials    e Model Q:on what type of exponentiation does the diffie – hellman key exchange depend on ? correctness complete-sentence grammaticality majority option CoQA B. R: modular exponentiation   b QuAC B. R: several public-key cryptography algorithms , such as rsa and the diffie – hellman key exchange   a STs+BERT B. R: modular exponentiation   b PGN-Pre with SS+ R: it depends on modular    e GPT-2 with SS+ R: it depends on modular    e GPT-2-Pre with SS+ R: it depends on the modular    e D-GPT with SS+ R: it depends on modular    e D-GPT with SS+ (o) R: it depends on modular exponentiation    e Model Q:what was sadat seeking by releasing islamists from prison ? correctness complete-sentence grammaticality majority option CoQA B. R: political support in his struggle against leftists    e QuAC B. R: egyptian president anwar sadat – whose policies included opening egypt to western investment ( infitah ) ; transferring egypt ’s allegiance from the soviet union to the united states   c STs+BERT B. R: sadat was seeking political support    e PGN-Pre with SS+ R: it was making peace with israel   c GPT-2 with SS+ R: it was making peace with israel   c GPT-2-Pre with SS+ R: sadat seeking was making peace with israel   c D-GPT with SS+ R: sadat was making peace with israel   c D-GPT with SS+ (o) R: sadat was seeking political support    e Model Q:how did the better jacksonville plan generate money ? correctness complete-sentence grammaticality majority option CoQA B. R: authorized a half-penny sales tax   b QuAC B. R: when a consolidation referendum was held in 1967 , voters approved the plan . on october 1 , 1968 , the governments merged to create the consolidated city of jacksonville   c STs+BERT B. R: it generated money authorized a half-penny sales tax    d PGN-Pre with SS+ R: it generated money half-penny sales tax    d 204 GPT-2 with SS+ R: it generated money half-penny sales tax    d GPT-2-Pre with SS+ R: it generated half-penny sales tax money    e D-GPT with SS+ R: it generated money for half-penny sales tax    e D-GPT with SS+ (o) R: it generated money authorized a half-penny sales tax    d Model Q:how many items is the university library home to ? correctness complete-sentence grammaticality majority option CoQA B. R: over two million   b QuAC B. R: another important library – the university library , founded in 1816 , is home to over two million items    e STs+BERT B. R: it to is over two million    d PGN-Pre with SS+ R: it is home to over two million    e GPT-2 with SS+ R: it is home to over two million    e GPT-2-Pre with SS+ R: it is home to over two million    e D-GPT with SS+ R: it to is over two million    d D-GPT with SS+ (o) R: it to is over two million    d Model Q:which sea was oil discovered in ? correctness complete-sentence grammaticality majority option CoQA B. R: North Sea   b QuAC B. R: ” it ’ s scotland ’ s oil ” campaign of the scottish national party ( snp )   a LGRs+BERT B. 
R: oil was discovered in north    d PGN-Pre with SS+ R: oil was discovered in the north sea    e GPT-2 with SS+ R: oil was discovered in the north sea    e GPT-2-Pre with SS+ R: it was discovered in the north sea    e D-GPT with SS+ R: it was discovered in the north sea    e D-GPT with SS+ (o) R: oil was discovered in north    d Model Q:where are jersey and guernsey correctness complete-sentence grammaticality majority option CoQA B. R: Channel Islands   b QuAC B. R: the customary law of normandy was developed between the 10th and 13th centuries and survives today through the legal systems of jersey and guernsey in the channel islands    e LGRs+BERT B. R: they are in channel islands    e PGN-Pre with SS+ R: they are in the channel islands    e GPT-2 with SS+ R: they are on the channel islands    e GPT-2-Pre with SS+ R: they are on the channel islands    e D-GPT with SS+ R: they are in the channel islands    e D-GPT with SS+ (o) R: they are in channel islands    e Model Q:near chur , which direction does the rhine turn ? correctness complete-sentence grammaticality majority option CoQA B. R: north   b 205 QuAC B. R: near tamins-reichenau the anterior rhine and the posterior rhine join and form the rhine   c LGRs+BERT B. R: it turns north    e PGN-Pre with SS+ R: it turns north    e GPT-2 with SS+ R: it turns north    e GPT-2-Pre with SS+ R: it turns to the north    e D-GPT with SS+ R: it turns north    e D-GPT with SS+ (o) R: it turns north    e Model Q:what kind of contract is given when the contractor is given a performance specification and must undertake the project from design to construction , while adhering to the performance specifications ? correctness complete-sentence grammaticality majority option CoQA B. R: design build” contract   b QuAC B. R: the modern trend in design is toward integration of previously separated specialties , especially among large firms   c LGRs+BERT B. R: a ”design build” contract is given    e PGN-Pre with SS+ R: design build is given    e GPT-2 with SS+ R: the design build is given    e GPT-2-Pre with SS+ R: design build is given a performance specification and must undertake the project    e D-GPT with SS+ R: design build is given    e D-GPT with SS+ (o) R: the ” design build ” contract is given    e Model Q:how many protestants live in france today ? correctness complete-sentence grammaticality majority option CoQA B. R: Approximately one million   b QuAC B. R: approximately one million protestants in modern france represent some 2 % of its population    e LGRs+BERT B. R: one million live in france today    e PGN-Pre with SS+ R: one million live in france today    e GPT-2 with SS+ R: one million live in france today    e GPT-2-Pre with SS+ R: one million live in france today    e D-GPT with SS+ R: one million live in france today    e D-GPT with SS+ (o) R: one million live in france today    e Model Q:what is raghuram rajan ’s career ? correctness complete-sentence grammaticality majority option CoQA B. R: Central Banking economist   b QuAC B. R: central banking economist raghuram rajan argues that ” systematic economic inequalities   b LGRs+BERT B. R: he is economist    d PGN-Pre with SS+ R: it is central banking economist    e GPT-2 with SS+ R: it is central banking economist    e GPT-2-Pre with SS+ R: it is central banking economist    e D-GPT with SS+ R: it is central banking economist    e D-GPT with SS+ (o) R: he is economist    d 206 Model Q:what type of steam engines produced most power up to the early 20th century ? 
correctness complete-sentence grammaticality majority option CoQA B. R: Reciprocating piston type steam engines   b QuAC B. R: reciprocating piston type steam engines remained the dominant source of power until the early 20th century , when advances in the design of electric motors and internal combustion engines    e LGRs+BERT B. R: reciprocating piston produced most power up    d PGN-Pre with SS+ R: reciprocating piston type produced most power up    d GPT-2 with SS+ R: reciprocating piston type produced most power up    d GPT-2-Pre with SS+ R: the reciprocating piston type produced most power up to the early 20th century    e D-GPT with SS+ R: reciprocating piston type produced most power up to the early 20th century    e D-GPT with SS+ (o) R: reciprocating piston produced most power up to the early 20th century    e Model Q:where did france win a war in the 1950 ’s correctness complete-sentence grammaticality majority option CoQA B. R: Algeria   b QuAC B. R: france fought and lost a bitter war in vietnam in the 1950s   c LGRs+BERT B. R: france won a war in the 1950 ’s algeria    e PGN-Pre with SS+ R: france won a war in vietnam   c GPT-2 with SS+ R: france won a war in vietnam   c GPT-2-Pre with SS+ R: france won a war in vietnam   c D-GPT with SS+ R: france won a war in vietnam   c D-GPT with SS+ (o) R: france won a war in algeria    e Model Q:who did the ottoman empire ally with in ww i ? correctness complete-sentence grammaticality majority option CoQA B. R: Germany   b QuAC B. R: the ottoman empire gradually declined into the late nineteenth century . the empire allied with germany    e LGRs+BERT B. R: germany did the ottoman empire ally with in ww i    d PGN-Pre with SS+ R: it separated with germany   c GPT-2 with SS+ R: it allyed with germany    e GPT-2-Pre with SS+ R: it allyed with germany    e D-GPT with SS+ R: it allied germany    d D-GPT with SS+ (o) R: it allied germany    d Model Q:when was ambulatory care pharmacy approved as its own certification ? correctness complete-sentence grammaticality majority option CoQA B. R: In 2011   b QuAC B. R: in 2011 the board of pharmaceutical specialties approved ambulatory care pharmacy practice as a separate board certification    e LGRs+BERT B. R: it was approved in 2011    e 207 PGN-Pre with SS+ R: it was approved in 2011    e GPT-2 with SS+ R: it was approved in 2011    e GPT-2-Pre with SS+ R: it was approved in 2011    e D-GPT with SS+ R: it was approved in 2011    e D-GPT with SS+ (o) R: it was approved in 2011    e Model Q:when did arpnet and sita become operational correctness complete-sentence grammaticality majority option CoQA B. R: 1969   b QuAC B. R: arpanet and sita hln became operational in 1969    e LGRs+BERT B. R: 1969   b PGN-Pre with SS+ R: they became operational in 1969    e GPT-2 with SS+ R: they became operational in 1969    e GPT-2-Pre with SS+ R: they became operational in 1969    e D-GPT with SS+ R: they became operational in 1969    e D-GPT with SS+ (o) R: they became operational in 1969    e Model Q:how much did saudi arabia spend on spreading wahhabism ? correctness complete-sentence grammaticality majority option CoQA B. R: over 100 billion dollars   b QuAC B. R: saudi arabia spent over 100 billion dollars in the ensuing decades for helping spread its fundamentalist interpretation of islam    e LGRs+BERT B. 
R: saudi arabia spent over 100 billion dollars    e PGN-Pre with SS+ R: saudi arabia spent over 100 billion dollars    e GPT-2 with SS+ R: saudi arabia spent over 100 billion dollars    e GPT-2-Pre with SS+ R: saudi arabia spent over 100 billion dollars    e D-GPT with SS+ R: saudi arabia spent over 100 billion dollars    e D-GPT with SS+ (o) R: saudi arabia spent over 100 billion dollars    e
2020
19
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2106–2113 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2106 ExpBERT: Representation Engineering with Natural Language Explanations Shikhar Murty Pang Wei Koh Percy Liang Computer Science Department, Stanford University {smurty,pangwei,pliang}@cs.stanford.edu Abstract Suppose we want to specify the inductive bias that married couples typically go on honeymoons for the task of extracting pairs of spouses from text. In this paper, we allow model developers to specify these types of inductive biases as natural language explanations. We use BERT fine-tuned on MultiNLI to “interpret” these explanations with respect to the input sentence, producing explanationguided representations of the input. Across three relation extraction tasks, our method, ExpBERT, matches a BERT baseline but with 3–20× less labeled data and improves on the baseline by 3–10 F1 points with the same amount of labeled data. 1 Introduction Consider the relation extraction task of finding spouses in text, and suppose we wanted to specify the inductive bias that married couples typically go on honeymoons. In a traditional feature engineering approach, we might try to construct a “did they go on a honeymoon?” feature and add that to the model. In a modern neural network setting, however, it is not obvious how to use standard approaches like careful neural architecture design or data augmentation to induce such an inductive bias. In a way, while the shift from feature engineering towards end-to-end neural networks and representation learning has alleviated the burden of manual feature engineering and increased model expressivity, it has also reduced our control over the inductive biases of a model. In this paper, we explore using natural language explanations (Figure 1) to generate features that can augment modern neural representations. This imbues representations with inductive biases corresponding to the explanations, thereby restoring some degree of control while maintaining their expressive power. X Jim Bob Michelle Duggar y X Stephen Mel y X Captain Darren Fletcher Berahino y Explanations: Training Data: Figure 1: Sample data points and explanations from Spouse, one of our relation extraction tasks. The explanations provide relevant features for classification. Prior work on training models with explanations use semantic parsers to interpret explanations: the parser converts each explanation into an executable logical form that is executable over the input sentence and uses the resulting outputs as features (Srivastava et al., 2017) or as noisy labels on unlabeled data (Hancock et al., 2018). However, semantic parsers can typically only parse low-level statements like “‘wife’ appears between {o1} and {o2} and the last word of {o1} is the same as the last word of {o2}” (Hancock et al., 2018). We remove these limitations by using modern distributed language representations, instead of semantic parsers, to interpret language explanations. Our approach, ExpBERT (Figure 2), uses BERT (Devlin et al., 2019) fine-tuned on the MultiNLI natural language inference dataset (Williams et al., 2018) to produce features that “interpret” each explanation on an input. We then use these features to augment the input representation. 
Just as a semantic parser grounds an explanation by converting it into a logical form and then executing it, the features produced by BERT can be seen as a soft “execution” of the explanation on the input. 2107 Figure 2: Overview of our approach. Explanations as well as textual descriptions of relations are interpreted using BERT for a given x to produce a representation which form inputs to our classifier. On three benchmark relation extraction tasks, ExpBERT improves over a BERT baseline with no explanations: it achieves an F1 score of 3–10 points higher with the same amount of labeled data, and a similar F1 score as the full-data baseline but with 3– 20x less labeled data. ExpBERT also improves on a semantic parsing baseline (+3 to 5 points F1), suggesting that natural language explanations can be richer than low-level, programmatic explanations. 2 Setup Problem. We consider the task of relation extraction: Given x = (s, o1, o2), where s is a sequence of words and o1 and o2 are two entities that are substrings within s, our goal is to classify the relation y ∈Y between o1 and o2. The label space Y includes a NO-RELATION label if no relation applies. Additionally, we are given a set of natural language explanations E = {e1, e2, . . . , en} designed to capture relevant features of the input for classification. These explanations are used to define a global collection of features and are not tied to individual examples. Approach. Our approach (Figure 2) uses pretrained neural models to interpret the explanations E in the context of a given input x. Formally, we define an interpreter I as any function that takes an input x and explanation ej and produces a feature vector in Rd. In our ExpBERT implementation, we choose I to capture whether the explanation ej is entailed by the input x. Concretely, we use BERT (Devlin et al., 2019) finetuned on MultiNLI (Williams et al., 2018): we feed wordpiece-tokenized versions of the explanation ej (hypothesis) and the instance x (premise), separated by a [SEP] token, to BERT. Following standard practice, we use the vector at the [CLS] token to represent the entire input as a 768-dimensional feature vector: I(x, ej) = BERT [CLS], s, [SEP], ej  . (1) These vectors, one for each of the n explanations, are concatenated to form the explanation representation v(x) ∈R768n, v(x) =  I(x, e1), I(x, e2), . . . , I(x, en)  . (2) In addition to v(x), we also map x into an input representation u(x) ∈R768|Y| by using the same interpreter over textual descriptions of each potential relation. Specifically, we map each potential relation yi in the label space Y to a textual description ri (Figure 2), apply I(x, ·) to ri, and concatenate the resulting feature vectors: u(x) =  I(x, r1), I(x, r2), . . . , I(x, r|Y|)  . (3) Finally, we train a classifier over u(x) and v(x): fθ(x) = MLP  u(x), v(x)  . (4) Note that u(x) and v(x) can be obtained in a preprocessing step since I(·, ·) is fixed (i.e., we do not additionally fine-tune BERT on our tasks). For more model details, please refer to Appendix A.1. Baselines. We compare ExpBERT against several baselines that train a classifier over the same input representation u(x). NoExp trains a classifier only on u(x). The other baselines augment u(x) with variants of the explanation representation v(x). BERT+SemParser uses the semantic parser from Hancock et al. (2018) to convert explanations into executable logical forms. 
The resulting denotations over the input x (a single bit for each explanation) are used as the explanation representation, i.e., v(x) ∈{0, 1}n. We use two different sets of explanations for this baseline: our natural language explanations (LangExp) and the low-level explanations from Hancock et al. (2018) that are more suitable for the semantic parser (ProgExp). BERT+Patterns converts explanations into a collection of unigram, bigram, and trigram patterns and creates a binary feature for each pattern based on whether it is contained in s or not. This gives v(x) ∈{0, 1}n′, where n′ is the number of patterns. Finally, we compare ExpBERT against a 2108 Table 1: Dataset statistics. Dataset Train Val Test Explanations Spouse 22055 2784 2680 40 Disease 6667 773 4101 28 TACRED 68124 22631 15509 128 variant called ExpBERT-Prob, where we directly use entailment probabilities obtained by BERT (instead of the feature vector at the [CLS] token) as the explanation representation v(x) ∈[0, 1]n. 3 Experiments Datasets. We consider 3 relation extraction datasets from various domains—Spouse and Disease (Hancock et al., 2018), and TACRED (Zhang et al., 2017). Spouse involves classifying if two entities are married; Disease involves classifying whether the first entity (a chemical) is a cause of the second entity (a disease); and TACRED involves classifying the relation between the two entities into one of 41 categories. Dataset statistics are in Table 1; for more details, see Appendix A.2. Explanations. To construct explanations, we randomly sampled 50 training examples for each y ∈Y and wrote a collection of natural language statements explaining the gold label for each example. For Spouse and Disease, we additionally wrote some negative explanations for the NORELATION category. To interpret explanations for Disease, we use SciBERT, a variant of BERT that is better suited for scientific text (Beltagy et al., 2019). A list of explanations can be found in Appendix A.3. Benchmarks. We find that explanations improve model performance across all three datasets: ExpBERT improves on the NoExp baseline by +10.6 F1 points on Spouse, +2.7 points on Disease, and +3.2 points on TACRED (Table 2).1 On TACRED, which is the most well-established of our benchmarks and on which there is significant prior work, ExpBERT (which uses a smaller BERT-base model that is not fine-tuned on our task) outperforms the standard, fine-tuned BERT-large model by +1.5 F1 points (Joshi et al., 2019). Prior work on Spouse and Disease used a simple logistic classifier over traditional features created from 1We measure performance using F1 scores due to the class imbalance in the datasets (Spouse: 8% positive, Disease: 20.8% positive, and TACRED: 20.5% examples with a relation). dependency paths of the input sentence. This performs poorly compared to neural models, and our models attain significantly higher accuracies (Hancock et al., 2018). Using BERT to interpret natural language explanations improves on using semantic parsers to evaluate programmatic explanations (+5.5 and +2.7 over BERT+SemParser (ProgExp) on Spouse and Disease, respectively). ExpBERT also outperforms the BERT+SemParser (LangExp) model by +9.9 and +3.3 points on Spouse and Disease. We exclude these results on TACRED as it was not studied in Hancock et al. (2018), so we did not have a corresponding semantic parser and set of programmatic explanations. 
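As an aside on the BERT+Patterns baseline defined in Section 2, the sketch below shows one way its binary n-gram pattern features could be computed; the whitespace tokenization and lower-casing are assumptions of this sketch, not details taken from the paper.

```python
from itertools import chain


def ngrams(tokens, n):
    """All contiguous n-grams of a token list, joined back into strings."""
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]


def build_patterns(explanations):
    """Collect the unigram, bigram, and trigram patterns of all explanations."""
    patterns = set()
    for e in explanations:
        tokens = e.lower().split()  # assumption: simple whitespace tokenization
        patterns.update(chain(ngrams(tokens, 1), ngrams(tokens, 2), ngrams(tokens, 3)))
    return sorted(patterns)


def pattern_features(sentence, patterns):
    """v(x) in {0,1}^n': one binary feature per pattern, set if it occurs in s."""
    s = sentence.lower()
    return [1 if p in s else 0 for p in patterns]
```

In contrast to these binary indicators, ExpBERT keeps the full 768-dimensional interpreter output for each explanation, which is the comparison taken up next.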
We note that ExpBERT—which uses the full 768-dimensional feature vector from each explanation—outperforms ExpBERT (Prob), which summarizes these vectors into one number per explanation, by +2–5 F1 points across all three datasets. Data efficiency. Collecting a set of explanations E requires additional effort—it took the authors about 1 minute or less to construct each explanation, though we note that it only needs to be done once per dataset (not per example). However, collecting a small number of explanations can significantly and disproportionately reduce the number of labeled examples required. We trained ExpBERT and the NoExp baseline with varying fractions of Spouse and TACRED training data (Figure 3). ExpBERT matches the NoExp baseline with 20x less data on Spouse; i.e., we obtain the same performance with ExpBERT with 40 explanations and 2k labeled training examples as with NoExp with 22k examples. On TACRED, ExpBERT requires 3x less data, obtaining the same performance with 128 explanations and 23k training examples as compared to NoExp with 68k examples. These results suggest that the higher-bandwidth signal in language can help models be more dataefficient. 4 Analysis 4.1 Which explanations are important? To understand which explanations are important, we group explanations into a few semantic categories (details in Appendix A.3) and cumulatively add them to the NoExp baseline. In particular, we break down explanations for Spouse into the 2109 Table 2: Results on relation extraction datasets. For Spouse and Disease, we report 95% confidence intervals and for TACRED, we follow the evaluation protocol from Zhang et al. (2017). More details in Appendix A. Model Spouse Disease TACRED NoExp 52.9 ± 0.97 49.7 ± 1.01 64.7 BERT+Patterns 53.3 ± 1.24 49.0 ± 1.15 64.4 BERT+SemParse (LangExp) 53.6 ± 0.38 49.1 ± 0.47 BERT+SemParse (ProgExp) 58.3 ± 1.10 49.7 ± 0.54 ExpBERT-Prob 58.4 ± 1.22 49.7 ± 1.21 65.3 ExpBERT 63.5 ± 1.40 52.4 ± 1.23 67.9 20 40 60 80 100 % of Spouse Training Data 35 40 45 50 55 60 65 F1 Score NoExp ExpBERT 20 40 60 80 100 % of TACRED Training Data 54 56 58 60 62 64 66 68 F1 Score NoExp ExpBERT Figure 3: ExpBERT matches the performance of the NoExp baseline with 20x less data on Spouse (Left), and with 3x less data on TACRED (Right). Table 3: Importance of various explanation groups. Model Spouse NoExp 52.9 ± 0.97 + MARRIED 55.2 ± 0.43 + CHILDREN 55.9 ± 0.98 + ENGAGED 57.0 ± 2.57 + NEGATIVES 60.1 ± 0.87 + MISC (full ExpBERT) 63.5 ± 1.40 groups MARRIED (10 explanations), CHILDREN (5 explanations), ENGAGED (3 explanations), NEGATIVES (13 explanations) and MISC (9 explanations). We find that adding new explanation groups helps performance (Table 3), which suggests that a broad coverage of various explanatory factors could be helpful for performance. We also observe that the MARRIED group (which contains paraphrases of {o1} is married to {o2}) alone boosts performance over NoExp, which suggests that a variety of paraphrases of the same explanation can improve performance. 4.2 Quality vs. quantity of explanations We now test whether ExpBERT can do equally well with the same number of random explanations, obtained by replacing words in the explanation with random words. The results are dataset-specific: random explanations help on Spouse but not on Disease. However, in both cases, random explanations do significantly worse than the original explanations (Table 4). 
Separately adding 10 random Table 4: ExpBERT accuracy is significantly lower when we replace words in the original explanations with random words. Model Spouse Disease NoExp 52.9 ± 0.97 49.7 ± 1.01 ExpBERT (random) 56.4 ± 1.20 49.6 ± 1.22 ExpBERT (orig) 63.5 ± 1.40 52.4 ± 1.23 ExpBERT (orig + random) 62.4 ± 1.41 51.8 ± 1.03 Table 5: Combining language explanations with the external CTD ontology improves accuracy on Disease. Model Disease ExpBERT 52.4 ± 1.23 ExpBERT (+ External) 59.1 ± 3.26 explanations to our original explanations led to a slight drop (≈1 F1 point) in accuracy. These results suggest that ExpBERT’s performance comes from having a diverse set of high quality explanations and are not just due to providing more features. 4.3 Complementing language explanations with external databases Natural language explanations can capture different types of inductive biases and prior knowledge, but some types of prior knowledge are of course better introduced through other means. We wrap up our experiments with a vignette on how language explanations can complement other forms of feature and representation engineering. We consider Disease, where we have access to an external ontology (Comparative Toxicogenomic Database or CTD) from Wei et al. (2015) containing chemicaldisease interactions. Following Hancock et al. (2018), we add 6 bits to the explanation representation v(x) that test if the given chemical-disease pair follows certain relations in CTD (e.g., if they are in the ctd-therapy dictionary). Table 5 shows that as expected, other sources of information can complement language explanations in ExpBERT. 2110 5 Related work Many other works have used language to guide model training. As mentioned above, semantic parsers have been used to convert language explanations into features (Srivastava et al., 2017) and noisy labels on unlabeled data (Hancock et al., 2018; Wang et al., 2019). Rather than using language to define a global collection of features, Rajani et al. (2019) and Camburu et al. (2018) use instance-level explanations to train models that generate their own explanations. Zaidan and Eisner (2008) ask annotators to highlight important words, then learn a generative model over parameters given these rationales. Others have also used language to directly produce parameters of a classifier (Ba et al., 2015) and as part of the parameter space of a classifier (Andreas et al., 2017). While the above works consider learning from static language supervision, Li et al. (2016) and Weston (2016) learn from language supervision in an interactive setting. In a related line of work, Wang et al. (2017), users teach a system high-level concepts via language. 6 Discussion Recent progress in general-purpose language representation models like BERT open up new opportunities to incorporate language into learning. In this work, we show how using these models with natural language explanations can allow us to leverage a richer set of explanations than if we were constrained to only use explanations that can be programmatically evaluated, e.g., through ngram matching (BERT+Patterns) or semantic parsing (BERT+SemParser). The ability to incorporate prior knowledge of the “right” inductive biases into model representations dangles the prospect of building models that are more robust. However, more work will need to be done to make this approach more broadly applicable. We outline two such avenues of future work. 
First, combining our ExpBERT approach with more complex state-of-the-art models can be conceptually straightforward (e.g., we could swap out BERT-base for a larger model) but can sometimes also require overcoming technical hurdles. For example, we do not fine-tune ExpBERT in this paper; doing so might boost performance, but fine-tuning through all of the explanations on each example is computationally intensive. Second, in this paper we provided a proof-ofconcept for several relation extraction tasks, relying on the fact that models trained on existing natural language inference datasets (like MultiNLI) could be applied directly to the input sentence and explanation pair. Extending ExpBERT to other natural language tasks where this relationship might not hold is an open problem that would entail finding different ways of interpreting an explanation with respect to the input. Acknowledgements We are grateful to Robin Jia, Peng Qi, John Hewitt, Amita Kamath, and other members of the Stanford NLP Group for helpful discussions and suggestions. We also thank Yuhao Zhang for assistance with TACRED experiments. PWK was supported by the Facebook Fellowship Program. Toyota Research Institute (TRI) provided funds to assist the authors with their research but this article solely reflects the opinions and conclusions of its authors and not TRI or any other Toyota entity. Reproducibility Code and model checkpoints are available at https://github.com/MurtyShikhar/ExpBERT. The features generated by various interpreters can also be found at that link. References Jacob Andreas, Dan Klein, and Sergey Levine. 2017. Learning with latent language. In NAACL-HLT. Jimmy Ba, Kevin Swersky, Sanja Fidler, and Ruslan Salakhutdinov. 2015. Predicting deep zero-shot convolutional neural networks using textual descriptions. 2015 IEEE International Conference on Computer Vision (ICCV), pages 4247–4255. Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. Scibert: Pretrained language model for scientific text. In EMNLP. Oana-Maria Camburu, Tim Rockt¨aschel, Thomas Lukasiewicz, and Phil Blunsom. 2018. e-snli: Natural language inference with natural language explanations. In Advances in Neural Information Processing Systems 31, pages 9539–9549. Curran Associates, Inc. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT. 2111 Braden Hancock, Paroma Varma, Stephanie Wang, Martin Bringmann, Percy Liang, and Christopher R´e. 2018. Training classifiers with natural language explanations. Proceedings of the conference. Association for Computational Linguistics. Meeting, 2018:1884–1895. Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S Weld, Luke Zettlemoyer, and Omer Levy. 2019. Spanbert: Improving pre-training by representing and predicting spans. arXiv preprint arXiv:1907.10529. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980. Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc’Aurelio Ranzato, and Jason Weston. 2016. Learning through dialogue interactions by asking questions. In ICLR. Nazneen Fatema Rajani, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Explain yourself! leveraging language models for commonsense reasoning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4932–4942, Florence, Italy. Association for Computational Linguistics. Shashank Srivastava, Igor Labutov, and Tom Mitchell. 2017. 
Joint concept learning and semantic parsing from natural language explanations. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1527–1536, Copenhagen, Denmark. Association for Computational Linguistics. Sida I Wang, Samuel Ginn, Percy Liang, and Christopher D Manning. 2017. Naturalizing a programming language via interactive learning. In 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, pages 929–938. Association for Computational Linguistics (ACL). Ziqi Wang, Yujia Qin, Wenxuan Zhou, Jun Yan, Qinyuan Ye, Leonardo Neves, Zhiyuan Liu, and Xiang Ren. 2019. Learning to annotate: Modularizing data augmentation for textclassifiers with natural language explanations. arXiv preprint arXiv:1911.01352. Chih-Hsuan Wei, Yifan Peng, Robert Leaman, Allan Peter Davis, Carolyn J Mattingly, Jiao Li, Thomas C Wiegers, and Zhiyong Lu. 2015. Overview of the biocreative v chemical disease relation (cdr) task. Jason Weston. 2016. Dialog-based language learning. In NIPS. Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122. Association for Computational Linguistics. Omar Zaidan and Jason Eisner. 2008. Modeling annotators: A generative approach to learning from annotator rationales. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 31–40, Honolulu, Hawaii. Association for Computational Linguistics. Yuhao Zhang, Victor Zhong, Danqi Chen, Gabor Angeli, and Christopher D. Manning. 2017. Positionaware attention and supervised data improve slot filling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP 2017), pages 35–45. 2112 A Appendix A.1 Implementation Details Interpreting explanations. When interpreting an explanation ei on a particular example x = (s, o1, o2), we first substitute o1 and o2 into the placeholders in the explanation ei to produce an instance-level version of the explanation. For example, “{o1} and {o2} are a couple” might become “Jim Bob and Michelle Duggar are a couple”. Model hyperparameters and evaluation. We use BERT-BASE-UNCASED for Spouse and TACRED, and SCIBERT-SCIVOCAB-UNCASED for Disease from Beltagy et al. (2019). We finetune all our BERT models on MultiNLI using the Transformers library2 using default parameters. The resulting BERT model is then frozen and used to produce features for our classifier. We use the following hyperparameters for our MLP classifier: number of feed-forward layers ∈[0,1], dimension of each layer ∈[64, 256], and dropout ∈[0.0, 0.3]. We optionally project the 768 dimensional BERT feature vector down to 64 dimensions. To train our classifier, we use the Adam optimizer (Kingma and Ba, 2014) with default parameters, and batch size ∈[32, 128]. We early stop our classifier based on the F1 score on the validation set, and choose the hyperparameters that obtain the best early-stopped F1 score on the validation set. For Spouse and Disease, we report the test F1 means and 95% confidence intervals of 5-10 runs. For TACRED, we follow Zhang et al. (2017), and report the test F1 of the median validation set F1 of 5 runs corresponding to the chosen hyperparameters. 
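To make the preceding implementation details concrete, the following is a minimal sketch of the feature extraction in Equations (1)–(3) and the classifier of Equation (4), written with PyTorch and the Transformers library. The checkpoint name, function names, and layer sizes are illustrative assumptions rather than the authors' released code; in particular, `bert-base-uncased` stands in for a BERT model already fine-tuned on MultiNLI, and batching and caching are omitted.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Assumption: a stand-in checkpoint; the paper uses BERT-base (SciBERT for
# Disease) fine-tuned on MultiNLI and then frozen.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
encoder.eval()


def interpret(sentence, hypothesis):
    """I(x, e): encode (premise = sentence, hypothesis = explanation or relation
    description) and return the 768-d vector at the [CLS] token."""
    enc = tokenizer(sentence, hypothesis, return_tensors="pt",
                    truncation=True, max_length=128)
    with torch.no_grad():
        out = encoder(**enc)
    return out.last_hidden_state[:, 0, :]  # [CLS] position


def expbert_features(sentence, o1, o2, explanations, relation_descriptions):
    """Concatenate u(x) (relation descriptions) and v(x) (explanations)."""
    def fill(template):
        # Instance-level version of a template, as described in A.1.
        return template.replace("{o1}", o1).replace("{o2}", o2)

    u = [interpret(sentence, fill(r)) for r in relation_descriptions]
    v = [interpret(sentence, fill(e)) for e in explanations]
    return torch.cat(u + v, dim=-1)  # shape (1, 768 * (|Y| + n))


class ExpBertClassifier(torch.nn.Module):
    """MLP over [u(x), v(x)]; hidden size and dropout here are assumptions
    chosen within the hyperparameter ranges reported above."""
    def __init__(self, in_dim, hidden_dim=256, num_classes=2, dropout=0.1):
        super().__init__()
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(in_dim, hidden_dim),
            torch.nn.ReLU(),
            torch.nn.Dropout(dropout),
            torch.nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, features):
        return self.mlp(features)
```

Because the interpreter is frozen, these features would normally be computed once per example and cached before training the classifier, as noted in Section 2.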
A.2 Datasets Spouse and Disease preprocessed datasets were obtained directly from the codebase provided by Hancock et al. (2018)3. We use the train, validation, test split provided by Hancock et al. (2018) for Disease, and split the development set of Spouse randomly into a validation and test set (the split was done at a document level). To process TACRED, we use the default BERT tokenizer and indexing pipeline in the Transformers library. 2https://huggingface.co/transformers/ 3https://worksheets.codalab.org/worksheets/0x900e7e41deaa4ec5b2fe41dc50594548/ A.3 Explanations The explanations can be found in Tables 6 and 7 on the following page. We use 40 explanations for Spouse, 28 explanations for Disease, and 128 explanations for TACRED (in accompanying file). The explanations were written by the authors. 2113 {o1} and {o2} have a marriage license {o1}’s husband is {o2} {o1}’s wife is {o2} {o1} and {o2} are married {o1} and {o2} are going to tie the knot {o1} married {o2} {o1} and {o2} are a married couple {o1} and {o2} had a wedding {o1} and {o2} married in the past {o1} tied the knot with {o2} {o1} and {o2} have a son {o1} and {o2} have a daughter {o1} and {o2} have kids together {o1} and {o2} are expecting a son {o1} and {o2} are expecting a daughter {o1} is engaged to {o2} {o1} is the fianc´e of {o2} {o1} is the fianc´ee of {o2} {o1} is the daughter of {o2} {o1} is the mother of {o2} {o1} and {o2} are the same person {o1} is the same person as {o2} {o1} is married to someone other than {o2} {o1} is the father of {o2} {o1} is the son of {o2} {o1} is marrying someone other than {o2} {o1} is the ex-wife of {o2} {o1} is a location {o2} is a location {o1} is an organization {o2} is an organization {o1} and {o2} are partners {o1} and {o2} share a home {o1} and {o2} are a couple {o1} and {o2} share the same surname someone is married to {o1} someone is married to {o2} {o1} is a person {o2} is a person {o1} and {o2} are different people Table 6: Explanations for Spouse. The groups correspond to MARRIED, CHILDREN, ENGAGED, NEGATIVES and MISC. The symptoms of {o2} appeared after the administration of {o1} {o2} developed after {o1} Patients developed {o2} after being treated with {o1} {o1} contributes indirectly to {o2} {o1} has been associated with the development of {o2} Symptoms of {o2} abated after withdrawal of {o1} A greater risk of {o2} was found in the {o1} group compared to a placebo {o2} is a side effect of {o1} {o2} has been reported to occur with {o1} {o2} has been demonstrated after the administration of {o1} {o1} caused the appearance of {o2} Use of {o1} can lead to {o2} {o1} can augment {o2} {o1} can increase the risk of {o2} Symptoms of {o2} appeared after dosage of {o1} {o1} is a chemical {o2} is a disease {o1} is used for the treatment of {o2} {o1} is known to reduce the symptoms of {o2} {o1} is used for the prevention of {o2} {o1} ameliorates {o2} {o1} induces {o2} {o1} causes a disease other than {o2} {o1} is an organ administering {o1} causes {o2} to worsen {o1} is effective for the treatment of {o2} {o1} has an effect on {o2} {o1} has an attenuating effect on {o2} Table 7: Explanations for Disease
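As a usage illustration of the templates above, a few Spouse explanations from Table 6 can be instantiated for the Figure 1 example pair before being passed to the frozen interpreter sketched in A.1; the helper below simply repeats the placeholder substitution described there.

```python
SPOUSE_TEMPLATES = [
    "{o1} and {o2} are married",
    "{o1} and {o2} have kids together",
    "{o1} is engaged to {o2}",
    "{o1} and {o2} are different people",
]  # a small subset of the 40 Spouse explanations in Table 6


def instantiate(templates, o1, o2):
    """Substitute the entity mentions into the explanation placeholders."""
    return [t.replace("{o1}", o1).replace("{o2}", o2) for t in templates]


hypotheses = instantiate(SPOUSE_TEMPLATES, "Jim Bob", "Michelle Duggar")
# e.g. "Jim Bob and Michelle Duggar are married"; each hypothesis is then
# paired with the input sentence and encoded by the frozen BERT interpreter.
```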
2020
190
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2114–2119 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2114 GAN-BERT: Generative Adversarial Learning for Robust Text Classification with a Bunch of Labeled Examples Danilo Croce Dept. of Enterprise Engineering University of Rome, Tor Vergata Roma, Italy [email protected] Giuseppe Castellucci Amazon Seattle, USA [email protected] Roberto Basili Dept. of Enterprise Engineering University of Rome, Tor Vergata Roma, Italy [email protected] Abstract Recent Transformer-based architectures, e.g., BERT, provide impressive results in many Natural Language Processing tasks. However, most of the adopted benchmarks are made of (sometimes hundreds of) thousands of examples. In many real scenarios, obtaining highquality annotated data is expensive and timeconsuming; in contrast, unlabeled examples characterizing the target task can be, in general, easily collected. One promising method to enable semi-supervised learning has been proposed in image processing, based on SemiSupervised Generative Adversarial Networks. In this paper, we propose GAN-BERT that extends the fine-tuning of BERT-like architectures with unlabeled data in a generative adversarial setting. Experimental results show that the requirement for annotated examples can be drastically reduced (up to only 50-100 annotated examples), still obtaining good performances in several sentence classification tasks. 1 Introduction In recent years, Deep Learning methods have become very popular in Natural Language Processing (NLP), e.g., they reach high performances by relying on very simple input representations (for example, in (Kim, 2014; Goldberg, 2016; Kim et al., 2016)). In particular, Transformer-based architectures, e.g., BERT (Devlin et al., 2019), provide representations of their inputs as a result of a pre-training stage. These are, in fact, trained over large scale corpora and then effectively finetuned over a targeted task achieving state-of-the-art results in different and heterogeneous NLP tasks. These achievements are obtained when thousands of annotated examples exist for the final tasks. As experimented in this work, the quality of BERT fine-tuned over less than 200 annotated instances shows significant drops, especially in classification tasks involving many categories. Unfortunately, obtaining annotated data is a time-consuming and costly process. A viable solution is adopting semisupervised methods, such as in (Weston et al., 2008; Chapelle et al., 2010; Yang et al., 2016; Kipf and Welling, 2016) to improve the generalization capability when few annotated data is available, while the acquisition of unlabeled sources is possible. One effective semi-supervised method is implemented within Semi-Supervised Generative Adversarial Networks (SS-GANs). Usually, in GANs (Goodfellow et al., 2014) a “generator” is trained to produce samples resembling some data distribution. This training process “adversarially” depends on a “discriminator”, which is instead trained to distinguish samples of the generator from the real instances. SS-GANs (Salimans et al., 2016) are an extension to GANs where the discriminator also assigns a category to each example while discriminating whether it was automatically generated or not. In SS-GANs, the labeled material is thus used to train the discriminator, while the unlabeled examples (as well as the ones automatically generated) improve its inner representations. 
In image processing, SS-GANs have been shown to be effective: exposed to few dozens of labeled examples (but thousands of unlabeled ones), they obtain performances competitive with fully supervised settings. In this paper, we extend the BERT training with unlabeled data in a generative adversarial setting. In particular, we enrich the BERT fine-tuning process with an SS-GAN perspective, in the so-called GAN-BERT1 model. That is, a generator produces “fake” examples resembling the data distribution, while BERT is used as a discriminator. In this way, we exploit both the capability of BERT to produce high-quality representations of input texts and to adopt unlabeled material to help the network in 1The code is available at https://github.com/ crux82/ganbert. 2115 generalizing its representations for the final tasks. At the best of our knowledge, using SS-GANs in NLP has been investigated only by (Croce et al., 2019) with the so-called Kernel-based GAN. In that work, authors extend a Kernel-based Deep Architecture (KDA, (Croce et al., 2017)) with an SS-GAN perspective. Sentences are projected into low-dimensional embeddings, which approximate the implicit space generated by using a Semantic Tree Kernel function. However, it only marginally investigated how the GAN perspective could extend deep architecture for NLP tasks. In particular, a KGAN operates in a pre-computed embedding space by approximating a kernel function (Annesi et al., 2014). While the SS-GAN improves the quality of the Multi-layered Perceptron used in the KDA, it does not affect the input representation space, which is statically derived by the kernel space approximation. In the present work, all the parameters of the network are instead considered during the training process, in line with the SSGAN approaches. We empirically demonstrate that the SS-GAN schema applied over BERT, i.e., GAN-BERT, reduces the requirement for annotated examples: even with less than 200 annotated examples it is possible to obtain results comparable with a fully supervised setting. In any case, the adopted semisupervised schema always improves the result obtained by BERT. In the rest of this paper, section 2 provides an introduction to SS-GANs. In sections 3 and 4, GAN-BERT and the experimental evaluations are presented. In section 5 conclusions are derived. 2 Semi-supervised GANs SS-GANs (Salimans et al., 2016) enable semisupervised learning in a GAN framework. A discriminator is trained over a (k + 1)-class objective: “true” examples are classified in one of the target (1, ..., k) classes, while the generated samples are classified into the k + 1 class. More formally, let D and G denote the discriminator and generator, and pd and pG denote the real data distribution and the generated examples, respectively. In order to train a semi-supervised k-class classifier, the objective of D is extended as follows. Let us define pm(ˆy = y|x, y = k + 1) the probability provided by the model m that a generic example x is associated with the fake class and pm(ˆy = y|x, y ∈(1, ..., k)) that x is considered real, thus belonging to one of the target classes. The loss function of D is defined as: LD = LDsup. + LDunsup. where: LDsup.=−Ex,y∼pdlog[pm(ˆy = y|x, y ∈(1, ..., k))] LDunsup.=−Ex∼pd log[1−pm (ˆy = y|x, y = k +1)] −Ex∼G log [pm(ˆy = y|x, y = k + 1)] LDsup. measures the error in assigning the wrong class to a real example among the original k categories. LDunsup. 
measures the error in incorrectly recognizing a real (unlabeled) example as fake and not recognizing a fake example. At the same time, G is expected to generate examples that are similar to the ones sampled from the real distribution pd. As suggested in (Salimans et al., 2016), G should generate data approximating the statistics of real data as much as possible. In other words, the average example generated in a batch by G should be similar to the real prototypical one. Formally, let’s f(x) denote the activation on an intermediate layer of D. The feature matching loss of G is then defined as: LGfeature matching= ∥Ex ∼pdf(x) −Ex ∼Gf(x)∥ 2 2 that is, the generator should produce examples whose intermediate representations provided in input to D are very similar to the real ones. The G loss also considers the error induced by fake examples correctly identified by D, i.e., LGunsup.=−Ex∼G log[1−pm(ˆy = y|x,y = k +1)] The G loss is LG = LGfeature matching + LGunsup.. While SS-GANs are usually used with image inputs, we will show that they can be adopted in combination with BERT (Devlin et al., 2019) over inputs encoding linguistic information. 3 GAN-BERT: Semi-supervised BERT with Adversarial Learning Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2019) belongs to the family of the so-called transfer learning methods, where a model is first pre-trained on general tasks and then fine-tuned on the final target tasks. In Computer Vision, transfer learning has been shown beneficial in many different tasks, i.e., pre-training a neural network model on a known task, followed by a fine-tuning stage on a (different) target task (see, for example, (Girshick et al., 2013)). BERT 2116 is a very deep model that is pre-trained over large corpora of raw texts and then is fine-tuned on target annotated data. The building block of BERT is the Transformer (Vaswani et al., 2017), an attentionbased mechanism that learns contextual relations between words (or sub-words, i.e., word pieces, (Schuster and Nakajima, 2012)) in a text. BERT provides contextualized embeddings of the words composing a sentence as well as a sentence embedding capturing sentence-level semantics: the pre-training of BERT is designed to capture such information by relying on very large corpora. After the pre-training, BERT allows encoding (i) the words of a sentence, (ii) the entire sentence, and (iii) sentence pairs in dedicated embeddings. These can be used in input to further layers to solve sentence classification, sequence labeling or relational learning tasks: this is achieved by adding task-specific layers and by fine-tuning the entire architecture on annotated data. In this work, we extend BERT by using SSGANs for the fine-tuning stage. We take an already pre-trained BERT model and adapt the fine-tuning by adding two components: i) task-specific layers, as in the usual BERT fine-tuning; ii) SS-GAN layers to enable semi-supervised learning. Without loss of generality, let us assume we are facing a sentence classification task over k categories. Given an input sentence s = (t1, ..., tn) BERT produces in output n + 2 vector representations in Rd, i.e., (hCLS, ht1, ..., htn, hSEP ). As suggested in (Devlin et al., 2019), we adopt the hCLS representation as a sentence embedding for the target tasks. As shown in figure 1, we add on top of BERT the SS-GAN architecture by introducing i) a discriminator D for classifying examples, and ii) a generator G acting adversarially. 
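For reference, the sketch below spells out the generator, the discriminator head, and the losses LD and LG from Section 2 in runnable PyTorch form; the released GAN-BERT code is in TensorFlow, and the leaky-ReLU slope, hidden sizes, and the plain cross-entropy form of LDsup. are assumptions of this sketch. The input vectors h are the hCLS sentence embeddings produced by BERT; the architectural details of G and D follow below.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class Generator(nn.Module):
    """MLP mapping 100-d Gaussian noise to a fake 768-d sentence embedding."""
    def __init__(self, noise_dim=100, hidden_dim=768, out_dim=768, dropout=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim, hidden_dim),
            nn.LeakyReLU(0.2),  # slope is an assumption of this sketch
            nn.Dropout(dropout),
            nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, noise):
        return self.net(noise)


class Discriminator(nn.Module):
    """MLP over 768-d vectors (h_CLS or fake), producing k+1 logits;
    index k is the generated/'fake' class."""
    def __init__(self, in_dim=768, hidden_dim=768, num_classes=2, dropout=0.1):
        super().__init__()
        self.hidden = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.LeakyReLU(0.2), nn.Dropout(dropout))
        self.out = nn.Linear(hidden_dim, num_classes + 1)

    def forward(self, h):
        feat = self.hidden(h)          # f(x), used for feature matching
        return self.out(feat), feat


def gan_bert_losses(D, G, h_lab, labels, h_unlab, noise, eps=1e-8):
    """One-batch computation of L_D and L_G as defined in Section 2."""
    k = D.out.out_features - 1                     # index of the fake class
    h_fake = G(noise)

    logits_lab, _ = D(h_lab)
    logits_unlab, feat_real = D(h_unlab)
    logits_fake, feat_fake = D(h_fake)

    p_unlab_fake = F.softmax(logits_unlab, dim=-1)[:, k]
    p_fake_fake = F.softmax(logits_fake, dim=-1)[:, k]

    # L_D_sup: wrong-class error on labeled real examples (cross-entropy over
    # the k+1 logits, gold labels in 0..k-1).
    d_sup = F.cross_entropy(logits_lab, labels)
    # L_D_unsup: real examples should not be called fake; fakes should be.
    d_unsup = -torch.log(1.0 - p_unlab_fake + eps).mean() \
              - torch.log(p_fake_fake + eps).mean()

    # L_G: feature matching plus fooling the discriminator.
    g_feat = torch.norm(feat_real.mean(dim=0) - feat_fake.mean(dim=0), p=2) ** 2
    g_unsup = -torch.log(1.0 - p_fake_fake + eps).mean()

    return d_sup + d_unsup, g_feat + g_unsup
```

Note that this sketch omits the masking of the unsupervised term and the joint update of the BERT encoder, both discussed below.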
In particular, G is a Multi Layer Perceptron (MLP) that takes in input a 100-dimensional noise vector drawn from N(µ, σ2) and produces in output a vector hfake ∈Rd. The discriminator is another MLP that receives in input a vector h∗∈Rd; h∗can be either hfake produced by the generator or hCLS for unlabeled or labeled examples from the real distribution. The last layer of D is a softmax-activated layer, whose output is a k + 1 vector of logits, as discussed in section 2. During the forward step, when real instances are sampled (i.e., h∗= hCLS), D should classify them in one of the k categories; when h∗= hfake, it should classify each example in the k + 1 category. As discussed in section 2, the training process tries F k classes noise is real? D G U L real data BERT Figure 1: GAN-BERT architecture: G generates a set of fake examples F given a random distribution. These, along with unlabeled U and labeled L vector representations computed by BERT are used as input for the discriminator D. to optimize two competing losses, i.e., LD and LG. During back-propagation, the unlabeled examples contribute only to LDunsup., i.e., they are considered in the loss computation only if they are erroneously classified into the k+1 category. In all other cases, their contribution to the loss is masked out. The labeled examples thus contribute to the supervised loss LDsup.. Finally, the examples generated by G contribute to both LD and LG, i.e., D is penalized when not finding examples generated by G and vice-versa. When updating D, we also change the BERT weights in order to fine-tune its inner representations, so considering both the labeled and the unlabeled data2. After training, G is discarded while retaining the rest of the original BERT model for inference. This means that there is no additional cost at inference time with respect to the standard BERT model. In the following, we will refer to this architecture as GAN-BERT. 4 Experimental Results In this section, we assess the impact of GAN-BERT over sentence classification tasks characterized by different training conditions, i.e., number of examples and number of categories. We report measures of our approach to support the development of deep learning models when exposed to few labeled examples over the following tasks: Topic Classification over the 20 News Group (20N) dataset (Lang, 1995), Question Classification (QC) on the UIUC dataset (Li and Roth, 2006), Sentiment Analysis over the SST-5 dataset (Socher et al., 2013). We 2From a computational perspective, the additional cost of G is negligible in terms of network parameters: it is an MLP which takes in input random vectors of 100 dimensions and produces in output vectors in the same 768-dimensional space of BERT. In other words, it is characterized by about 100 thousand parameters that are much less than in BERT base, i.e., 110 million parameters. 2117 1 2 5 10 20 30 4050 Annotated % 20 40 60 80 F1 BERT GAN-BERT (a) 20N 1 2 5 10 20 30 4050 Annotated % 20 40 60 80 100 Accuracy BERT GAN-BERT (b) QC Coarse Grained 1 2 5 10 20 30 4050 Annotated % 0 20 40 60 80 Accuracy BERT GAN-BERT (c) QC Fine Grained 1 2 5 10 20 30 4050 Annotated % 20 30 40 50 Accuracy BERT GAN-BERT (d) SST-5 0.010.02 0.05 0.1 0.2 0.5 1 2 5 10 Annotated % 20 40 60 80 F1 BERT GAN-BERT (e) MNLI Matched 0.010.02 0.05 0.1 0.2 0.5 1 2 5 10 Annotated % 20 40 60 80 F1 BERT GAN-BERT (f) MNLI Mismatched Figure 2: Learning curves for the six tasks. We run all the models for 3 epochs except for 20N (15 epochs). 
The sequence length we used is: 64 for QC coarse, QC fine, and SST-5; 128 for both MNLI settings; 256 for 20N. Learning rate was set for all to 2e-5, except for 20N (5e-6). will also report the performances over a sentence pair task, i.e., over the MNLI dataset (Williams et al., 2018). For each task, we report the performances with the metric commonly used for that specific dataset, i.e., accuracy for SST-5 and QC, while F1 is used for 20N and MNLI datasets. As a comparison, we report the performances of the BERT-base model fine-tuned as described in (Devlin et al., 2019) on the available training material. We used BERT-base as the starting point also for the training of our approach. GAN-BERT is implemented in Tensorflow by extending the original BERT implementation3. In more detail, G is implemented as an MLP with one hidden layer activated by a leaky-relu function. G inputs consist of noise vectors drawn from a normal distribution N(0, 1). The noise vectors pass through the MLP and finally result in 768-dimensional vectors, that are used as fake examples in our architecture. D is, also, an MLP with one hidden layer activated by a leaky-relu function followed by a softmax layer for the final prediction. For both G and D we used dropout=0.1 after the hidden layer. We repeated the training of each model with an increasing set of annotated material (L), starting by sampling only 0.01% or 1% of the training set, in order to measure the performances 3https://github.com/google-research/ bert starting with very few labeled examples (about 5070 instances). GAN-BERT is also provided with a set of unlabeled examples U coming from the unused annotated material for each training set sample (|U| = 100|L|, when available). We replicated the labeled examples of a factor log(|U|/|L|): this guarantees the presence of some labeled instances in each batch to avoid divergences due to the unsupervised component of the adversarial training. All the reported results are averaged over 5 different shuffles of the training material. The 20N classification results are shown in figure 2a. The training and testing datasets are made of 11, 314 and 7, 531 documents classified in 20 categories4, respectively. The plot shows F1 scores of the models: when 1% of data is used (i.e., about 110 examples) BERT almost diverges while GAN-BERT achieves more than 40% of F1. This trend is confirmed until 40% of labeled documents are used (i.e., about 5, 000 examples). In the QC task we observe similar outcomes. The training dataset is made of about 5, 400 question. In the coarse-grained setting (figure 2b) 6 classes are involved; in the fine-grained scenario (figure 2c) the number of classes is 50. In both cases, BERT diverges when only 1% of labeled questions are used, i.e., about 50 questions. It starts to com4We used the train/test split available within scikit-learn. 2118 pensate when using about 20% of the data in the coarse setting (about 1, 000 labeled examples). In the fine-grained scenario, our approach is performing better until 50% of the labeled examples. It seems that, when a large number of categories is involved, i.e., the classification task is more complex, the semi-supervised setting is even more beneficial. The results are confirmed in sentiment analysis over the SST-5 dataset (figure 2d), i.e., sentence classification involving 5 polarity categories. Also in this setting, we observe that GAN-BERT is beneficial when few examples are available. 
This is demonstrated by the difference in accuracy at 1% of the data (about 85 labeled examples), where BERT accuracy is 22.2% while GAN-BERT reaches 30.4% in accuracy. This trend is confirmed until about 20% of labeled examples (about 1, 700), where BERT achieves comparable results. Finally, we report the performances on Natural Language Inference on the MNLI dataset. We observe (in figures 2e and 2f) a systematic improvement starting from 0.01% labeled examples (about 40 instances): GAN-BERT provides about 6 −10 additional points in F1 with respect to BERT (18.09% vs. 29.19% and 18.01% vs. 31.64%, for mismatched and matched settings, respectively). This trend is confirmed until 0.5% of annotated material (about 2, 000 annotated examples): GAN-BERT reaches 62.67% and 60.45% while BERT reaches 48.35% and 42.41%, for mismatched and matched, respectively. Using more annotated data results in very similar performances with a slight advantage in using GAN-BERT. Even if acquiring unlabeled examples for sentence pairs is not trivial, these results give a hint about the potential benefits on similar tasks (e.g., questionanswer classification). 5 Conclusion In this paper, we extended the limits of Transformed-based architectures (i.e., BERT) in poor training conditions. Experiments confirm that fine-tuning such architectures with few labeled examples lead to unstable models whose performances are not acceptable. We suggest here to adopt adversarial training to enable semisupervised learning Transformer-based architectures. The evaluations show that the proposed variant of BERT, namely GAN-BERT, systematically improves the robustness of such architectures, while not introducing additional costs to the inference. In fact, the generator network is only used in training, while at inference time only the discriminator is necessary. This first investigation paves the way to several extensions including adopting other architectures, such as GPT-2 (Radford et al., 2019) or DistilBERT (Sanh et al., 2019) or other tasks, e.g., Sequence Labeling or Question Answering. Moreover, we will investigate the potential impact of the adversarial training directly in the BERT pre-training. From a linguistic perspective, it is worth investigating what the generator encodes in the produced representations. Acknowledgments We would like to thank Carlo Gaibisso, Bruno Luigi Martino and Francis Farrelly of the Istituto di Analisi dei Sistemi ed Informatica “Antonio Ruberti” (IASI) for supporting the early experimentations through access to dedicated computing resources made available by the Artificial Intelligence & High-Performance Computing laboratory. References Paolo Annesi, Danilo Croce, and Roberto Basili. 2014. Semantic compositionality in tree kernels. In Proceedings of the 23rd ACM International Conference on Conference on Information and Knowledge Management, CIKM 2014, Shanghai, China, November 3-7, 2014, pages 1029–1038. ACM. Olivier Chapelle, Bernhard Schlkopf, and Alexander Zien. 2010. Semi-Supervised Learning, 1st edition. The MIT Press. Danilo Croce, Giuseppe Castellucci, and Roberto Basili. 2019. Kernel-based generative adversarial networks for weakly supervised learning. In AI*IA 2019 – Advances in Artificial Intelligence, pages 336–347, Cham. Springer International Publishing. Danilo Croce, Simone Filice, Giuseppe Castellucci, and Roberto Basili. 2017. Deep learning in semantic kernel spaces. 
In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 345–354. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. 2119 Ross B. Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. 2013. Rich feature hierarchies for accurate object detection and semantic segmentation. CoRR, abs/1311.2524. Yoav Goldberg. 2016. A primer on neural network models for natural language processing. J. Artif. Int. Res., 57(1):345–420. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 2672–2680. Curran Associates, Inc. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1746–1751. Yoon Kim, Yacine Jernite, David Sontag, and Alexander M. Rush. 2016. Character-aware neural language models. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, February 1217, 2016, Phoenix, Arizona, USA., pages 2741– 2749. Thomas N. Kipf and Max Welling. 2016. Semisupervised classification with graph convolutional networks. CoRR, abs/1609.02907. Ken Lang. 1995. Newsweeder: Learning to filter netnews. In Machine Learning Proceedings 1995, pages 331–339. Elsevier. Xin Li and Dan Roth. 2006. Learning question classifiers: the role of semantic information. Natural Language Engineering, 12(3):229–249. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Technical report, OpenAI. Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, Xi Chen, and Xi Chen. 2016. Improved techniques for training gans. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 2234–2242. Curran Associates, Inc. Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of BERT: smaller, faster, cheaper and lighter. CoRR, abs/1910.01108. Mike Schuster and Kaisuke Nakajima. 2012. Japanese and korean voice search. In International Conference on Acoustics, Speech and Signal Processing, pages 5149–5152. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631–1642, Seattle, Washington, USA. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. 
Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc. Jason Weston, Fr´ed´eric Ratle, and Ronan Collobert. 2008. Deep learning via semi-supervised embedding. In Proceedings of the 25th International Conference on Machine Learning, ICML ’08, pages 1168–1175, New York, NY, USA. ACM. Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122. Association for Computational Linguistics. Zhilin Yang, William W. Cohen, and Ruslan Salakhutdinov. 2016. Revisiting semi-supervised learning with graph embeddings. In Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48, ICML’16, pages 40–48. JMLR.org.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2120–2133, July 5 - 10, 2020. ©2020 Association for Computational Linguistics

Generalizing Natural Language Analysis through Span-relation Representations

Zhengbao Jiang1, Wei Xu2, Jun Araki3, Graham Neubig1
Language Technologies Institute, Carnegie Mellon University1
Department of Computer Science and Engineering, Ohio State University2
Bosch Research North America3
{zhengbaj,gneubig}@cs.cmu.edu, [email protected], [email protected]

Abstract

Natural language processing covers a wide variety of tasks predicting syntax, semantics, and information content, and usually each type of output is generated with specially designed architectures. In this paper, we provide the simple insight that a great variety of tasks can be represented in a single unified format consisting of labeling spans and relations between spans, thus a single task-independent model can be used across different tasks. We perform extensive experiments to test this insight on 10 disparate tasks spanning dependency parsing (syntax), semantic role labeling (semantics), relation extraction (information content), aspect based sentiment analysis (sentiment), and many others, achieving performance comparable to state-of-the-art specialized models. We further demonstrate benefits of multi-task learning, and also show that the proposed method makes it easy to analyze differences and similarities in how the model handles different tasks. Finally, we convert these datasets into a unified format to build a benchmark, which provides a holistic testbed for evaluating future models for generalized natural language analysis.

1 Introduction

A large number of natural language processing (NLP) tasks exist to analyze various aspects of human language, including syntax (e.g., constituency and dependency parsing), semantics (e.g., semantic role labeling), information content (e.g., named entity recognition and relation extraction), or sentiment (e.g., sentiment analysis). At first glance, these tasks are seemingly very different in both the structure of their output and the variety of information that they try to capture. To handle these different characteristics, researchers usually use specially designed neural network architectures.

Figure 1: An example from BRAT, consisting of POS, NER, and RE.

In this paper we ask the simple questions: are the task-specific architectures really necessary? Or, with the appropriate representational methodology, can we devise a single model that can perform — and achieve state-of-the-art performance on — a large number of natural language analysis tasks?

Interestingly, in the domain of efficient human annotation interfaces, it is already standard to use unified representations for a wide variety of NLP tasks. Figure 1 shows one example of the BRAT (Stenetorp et al., 2012) annotation interface, which has been used for annotating data for tasks as broad as part-of-speech tagging, named entity recognition, relation extraction, and many others. Notably, this interface has a single unified format that consists of spans (e.g., the span of an entity), labels on the spans (e.g., the variety of entity such as “person” or “location”), and labeled relations between the spans (e.g., “born-in”). These labeled relations can form a tree or a graph structure, expressing the linguistic structure of sentences (e.g., dependency tree).
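Figure 1 itself is not reproduced here, but the unified format it illustrates is easy to sketch. Below is a hypothetical BRAT-style standoff fragment for a sentence such as "Barack Obama was born in Hawaii ."; the IDs, character offsets, and label names are illustrative rather than taken from any dataset used in the paper.

```python
# Hypothetical BRAT-style standoff annotations: "T" lines are labeled spans
# (label, character offsets, surface text) and "R" lines are labeled relations
# between spans. Offsets below assume single spaces between tokens.
brat_example = (
    "T1\tperson 0 12\tBarack Obama\n"
    "T2\tlocation 25 31\tHawaii\n"
    "R1\tborn-in Arg1:T1 Arg2:T2"
)

for line in brat_example.splitlines():
    print(line.split("\t"))
```

Each line type mirrors the three ingredients named above: spans, labels on spans, and labeled relations between spans.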
We detail this BRAT format and how it can be used to represent a wide number of natural language analysis tasks in Section 2. The simple hypothesis behind our paper is: if humans can perform natural language analysis in a single unified format, then perhaps machines can as well. Fortunately, there already exist NLP models that perform span prediction and prediction of relations between pairs of spans, such as the end-to-end coreference model of Lee et al. (2017). We extend this model with minor architectural modifications (which are not our core contributions) and pre-trained contextualized representations (e.g., BERT; Devlin et al. (2019)1), then demonstrate the applicability and versatility of this single model on 10 tasks, including named entity recognition (NER), relation extraction (RE), coreference resolution (Coref.), open information extraction (OpenIE), part-of-speech tagging (POS), dependency parsing (Dep.), constituency parsing (Consti.), semantic role labeling (SRL), aspect based sentiment analysis (ABSA), and opinion role labeling (ORL). While previous work has used similar formalisms to understand the representations learned by pre-trained embeddings (Tenney et al., 2019a,b), to the best of our knowledge this is the first work that uses such a unified model to actually perform analysis. Moreover, we demonstrate that despite the model's simplicity, it can achieve comparable performance with special-purpose state-of-the-art models on the tasks above (Table 1). We also demonstrate that this framework allows us to easily perform multi-task learning (MTL), leading to improvements when there are related tasks to be learned from or data is sparse. Further analysis shows that dissimilar tasks exhibit divergent attention patterns, which explains why MTL is harmful on certain tasks. We have released our code and the General Language Analysis Datasets (GLAD) benchmark with 8 datasets covering 10 tasks in the BRAT format at https://github.com/neulab/cmu-multinlp, and provide a leaderboard to facilitate future work on generalized models for NLP.

1 In contrast to work on pre-trained contextualized representations like ELMo (Peters et al., 2018) or BERT (Devlin et al., 2019) that learn unified features to represent the input in different tasks, we propose a unified representational methodology that represents the output of different tasks. Analysis models using BERT still use special-purpose output predictors for specific tasks or task classes.

Table 1: A comparison of the tasks covered by previous work (ELMo, BERT, SpanBERT; Guo et al., 2016; Swayamdipta et al., 2018; Strubell et al., 2018; Clark et al., 2018; Luan et al., 2018, 2019; Dixit and Al-Onaizan, 2019; Marasović and Frank, 2018; Hashimoto et al., 2017) and our work across NER, RE, Coref., OpenIE, POS, Dep., Consti., SRL, ABSA, and ORL.

2 Span-relation Representations

In this section, we explain how the BRAT format can be used to represent a large number of tasks. There are two fundamental types of annotations: span annotations and relation annotations.
Given a sentence x = [w_1, w_2, ..., w_n] of n tokens, a span annotation (s_i, l_i) consists of a contiguous span of tokens s_i = [w_{b_i}, w_{b_i+1}, ..., w_{e_i}] and its label l_i (l_i ∈ L), where b_i/e_i are the start/end indices respectively, and L is a set of span labels. A relation annotation (s_j, s_k, r_{jk}) refers to a relation r_{jk} (r_{jk} ∈ R) between the head span s_j and the tail span s_k, where R is a set of relation types. This span-relation representation can easily express many tasks by defining L and R accordingly, as summarized in Table 2a and Table 2b. These tasks fall in two categories: span-oriented tasks, where the goal is to predict labeled spans (e.g., named entities in NER) and relation-oriented tasks, where the goal is to predict relations between two spans (e.g., relation between two entities in RE). For example, constituency parsing (Collins, 1997) is a span-oriented task aiming to produce a syntactic parse tree for a sentence, where each node of the tree is an individual span associated with a constituent label. Coreference resolution (Pradhan et al., 2012) is a relation-oriented task that links an expression to its mentions within or beyond a single sentence. Dependency parsing (Kübler et al., 2009) is also a relation-oriented task that aims to relate a word (single-word span) to its syntactic parent word with the corresponding dependency type. Detailed explanations of all tasks can be found in Appendix A.

Table 2a: Span-oriented tasks (NER, Consti., POS, ABSA). Spans are annotated by underlines and their labels.

Table 2b: Relation-oriented tasks (RE, Coref., SRL, OpenIE, Dep., ORL). Directed arcs indicate the relations between spans.

While the tasks above represent a remarkably broad swath of NLP, it is worth mentioning what we have not covered, to properly scope this work. Notably, sentence-level tasks such as text classification and natural language inference are not covered, although they can also be formulated using this span-relation representation by treating the entire sentence as a span. We chose to omit these tasks because they are already well-represented by previous work on generalized architectures (Lan and Xu, 2018) and multi-task learning (Devlin et al., 2019; Liu et al., 2019), and thus we mainly focus on tasks using phrase-like spans. In addition, the span-relation representations described here are designed for natural language analysis, and cannot handle tasks that require generation of text, such as machine translation (Bojar et al., 2014), dialog response generation (Lowe et al., 2015), and summarization (Nallapati et al., 2016). There are also a small number of analysis tasks such as semantic parsing to logical forms (Banarescu et al., 2013) where the outputs are not directly associated with spans in the input, and handling these tasks is beyond the scope of this work.
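The definitions above translate directly into simple data structures. The following is one possible encoding, with class and field names of our own choosing (not taken from the released code), reusing the person/location/born-in example from Figure 1.

```python
from dataclasses import dataclass

@dataclass
class Span:
    start: int   # b_i: index of the first token of the span
    end: int     # e_i: index of the last token of the span (inclusive)
    label: str   # l_i in L, e.g., an entity type, POS tag, or constituent label

@dataclass
class Relation:
    head: Span   # s_j
    tail: Span   # s_k
    label: str   # r_jk in R, e.g., a semantic role or dependency type

# NER spans plus one RE relation over a 7-token sentence.
tokens = ["Barack", "Obama", "was", "born", "in", "Hawaii", "."]
spans = [Span(0, 1, "person"), Span(5, 5, "location")]
relations = [Relation(spans[0], spans[1], "born-in")]
```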
3 Span-relation Model

Now that it is clear that a very large number of analysis tasks can be formulated in a single format, we turn to devising a single model that can solve these tasks. We base our model on a span-based model first designed for end-to-end coreference resolution (Lee et al., 2017), which is then adapted for other tasks (He et al., 2018; Luan et al., 2018, 2019; Dixit and Al-Onaizan, 2019; Zhang and Zhao, 2019). At the core of the model is a module to represent each span as a fixed-length vector, which is used to predict labels for spans or span pairs. We first briefly describe the span representation used and proven to be effective in previous works, then highlight some details we introduce to make this model generalize to a wide variety of tasks.

Table 3: Statistics of GLAD, consisting of 10 tasks from 8 datasets.
Dataset | Domain | #Sent. | Task | #Spans | #Relations | Metric
Wet Lab Protocols (Kulkarni et al., 2018) | biology | 14,301 | NER | 60,745 | | F1
Wet Lab Protocols (Kulkarni et al., 2018) | biology | 14,301 | RE | 60,745 | 43,773 | F1
CoNLL-2003 (Sang and Meulder, 2003) | news | 20,744 | NER | 35,089 | | F1
SemEval-2010 Task 8 (Hendrickx et al., 2010) | misc. | 10,717 | RE | 21,437 | 10,717 | Macro F1 ◦
OntoNotes 5.0 ⋆ (Pradhan et al., 2013) | misc. | 94,268 | Coref. | 194,477 | 1,166,513 | Avg F1
OntoNotes 5.0 ⋆ (Pradhan et al., 2013) | misc. | 94,268 | SRL | 745,796 | 543,534 | F1
OntoNotes 5.0 ⋆ (Pradhan et al., 2013) | misc. | 94,268 | POS | 1,631,995 | | Accuracy
OntoNotes 5.0 ⋆ (Pradhan et al., 2013) | misc. | 94,268 | Dep. | 1,722,571 | 1,628,558 | LAS
OntoNotes 5.0 ⋆ (Pradhan et al., 2013) | misc. | 94,268 | Consti. | 1,320,702 | | Evalb F1 †
Penn Treebank (Marcus et al., 1994) | speech, news | 49,208 | POS | 1,173,766 | | Accuracy
Penn Treebank (Marcus et al., 1994) | speech, news | 43,948 | Dep. | 1,090,777 | 1,046,829 | LAS
Penn Treebank (Marcus et al., 1994) | speech, news | 43,948 | Consti. | 871,264 | | Evalb F1 †
OIE2016 (Stanovsky and Dagan, 2016) | news, Wiki | 2,534 | OpenIE | 15,717 | 12,451 | F1
MPQA 3.0 (Deng and Wiebe, 2015) | news | 3,585 | ORL | 13,841 | 9,286 | F1
SemEval-2014 Task 4 (Pontiki et al., 2014) | reviews | 4,451 | ABSA | 7,674 | | Accuracy
⋆ Following He et al. (2018), we use a subset of OntoNotes 5.0 dataset based on CoNLL 2012 splits (Pradhan et al., 2012). ◦ Previous works use gold standard spans in these evaluations. † We use the bracket scoring program Evalb (Collins, 1997) in constituency parsing.

Span Representation Given a sentence x = [w_1, w_2, ..., w_n] of n tokens, a span s_i = [w_{b_i}, w_{b_i+1}, ..., w_{e_i}] is represented by concatenating two components: a content representation z^c_i calculated as the weighted average across all token embeddings in the span, and a boundary representation z^u_i that concatenates the embeddings at the start and end positions of the span. Specifically,

c_1, c_2, ..., c_n = TokenRepr(w_1, w_2, ..., w_n),   (1)
u_1, u_2, ..., u_n = BiLSTM(c_1, c_2, ..., c_n),   (2)
z^c_i = SelfAttn(c_{b_i}, c_{b_i+1}, ..., c_{e_i}),   (3)
z^u_i = [u_{b_i}; u_{e_i}],  z_i = [z^c_i; z^u_i],   (4)

where TokenRepr could be non-contextualized, such as GloVe (Pennington et al., 2014), or contextualized, such as BERT (Devlin et al., 2019). We refer to Lee et al. (2017) for further details.

Span and Relation Label Prediction Since we extract spans and relations in an end-to-end fashion, we introduce two additional labels NEG SPAN and NEG REL in L and R respectively. NEG SPAN indicates invalid spans (e.g., spans that are not named entities in NER) and NEG REL indicates invalid span pairs without any relation between them (i.e., no relation exists between two arguments in SRL). We first predict labels for all spans up to a length of l words using a multilayer perceptron (MLP): softmax(MLP_span(z_i)) ∈ Δ^{|L|}, where Δ^{|L|} is a |L|-dimensional simplex. Then we keep the top K = τ · n spans with the lowest NEG SPAN probability in relation prediction for efficiency, where smaller pruning threshold τ indicates more aggressive pruning.
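The span representation of Equations 1-4, the span-label MLP, and the top-K pruning step can be sketched as follows. This is a rough PyTorch-style illustration of the shapes, not the released implementation: the token encoder is a stand-in for GloVe/ELMo/BERT, and the learned-attention pooling used for SelfAttn, as well as all dimensions, are assumptions.

```python
import torch
import torch.nn as nn

class SpanScorer(nn.Module):
    """Rough sketch of Eqs. 1-4, the span-label MLP, and top-K span pruning."""

    def __init__(self, token_encoder, enc_dim, mlp_hidden=128, num_labels=10):
        super().__init__()
        self.token_encoder = token_encoder                                  # Eq. 1: TokenRepr stand-in
        self.bilstm = nn.LSTM(enc_dim, enc_dim // 2, num_layers=3,
                              bidirectional=True, batch_first=True)         # Eq. 2
        self.attn = nn.Linear(enc_dim, 1)                                   # attention scores for Eq. 3
        self.mlp_span = nn.Sequential(                                      # span labels, incl. NEG SPAN
            nn.Linear(3 * enc_dim, mlp_hidden), nn.ReLU(),
            nn.Linear(mlp_hidden, num_labels))

    def span_vector(self, c, u, b, e):
        window = c[b:e + 1]                                                 # token embeddings in the span
        alpha = torch.softmax(self.attn(window).squeeze(-1), dim=0)
        z_c = (alpha.unsqueeze(-1) * window).sum(dim=0)                     # content repr z_i^c (Eq. 3)
        z_u = torch.cat([u[b], u[e]], dim=-1)                               # boundary repr z_i^u (Eq. 4)
        return torch.cat([z_c, z_u], dim=-1)                                # z_i = [z_i^c ; z_i^u]

    def forward(self, tokens, spans, tau=0.5, neg_span_index=0):
        c = self.token_encoder(tokens)                                      # (n, enc_dim)
        u = self.bilstm(c.unsqueeze(0))[0].squeeze(0)                       # (n, enc_dim)
        z = torch.stack([self.span_vector(c, u, b, e) for b, e in spans])
        probs = torch.softmax(self.mlp_span(z), dim=-1)                     # point in the |L|-simplex
        k = max(1, int(tau * len(tokens)))                                  # K = tau * n
        keep = torch.argsort(probs[:, neg_span_index])[:k]                  # lowest NEG SPAN probability
        return probs, [spans[i] for i in keep.tolist()]

# Toy usage with a random embedding table standing in for TokenRepr.
vocab = {"Barack": 0, "Obama": 1, "was": 2, "born": 3, "in": 4, "Hawaii": 5, ".": 6}
embedding = nn.Embedding(len(vocab), 64)
encode = lambda toks: embedding(torch.tensor([vocab[t] for t in toks]))
scorer = SpanScorer(encode, enc_dim=64, num_labels=3)
all_probs, kept_spans = scorer(list(vocab), spans=[(0, 1), (5, 5), (2, 3)])
```

The relation scorer described next operates only on the spans kept by this pruning step.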
Another MLP is applied to pairs of the remaining spans to produce their relation scores: ojk = MLPrel([zj; zk; zj · zk]) ∈R|R|, where j and k index two spans. Application to Disparate Tasks For most of the tasks, we can simply maximize the probability of the ground truth relation for all pairs of the remaining spans. However, some tasks might have different requirements, e.g., coreference resolution aims to cluster spans referring to the same concept and we do not care about which antecedent a span is linked to if there are multiple ones. Thus, we provide two training loss functions: 1. Pairwise Maximize the probabilities of the ground truth relations for all pairs of the remaining spans independently: softmax(ojk)rjk, where rjk indexes the ground truth relation. 2. Head Maximize the probability of ground truth head spans for a specific span sj: P k∈head(sj) softmax([oj1, oj2, ..., ojK])k, where head(·) returns indices of one or more heads and oj· is the corresponding scalar from oj· indicating how likely two spans are related. We use option 1 for all tasks except for coreference resolution which uses option 2. Note that the above loss functions only differ in how relation scores are normalized and the other parts of the model remain the same across different tasks. At test time, we follow previous inference methods to generate valid outputs. For coreference resolution, we link a span to the antecedent with highest score (Lee et al., 2017). For constituency parsing, we use greedy top-down decoding to generate a valid parse tree (Stern et al., 2017). For dependency parsing, each word is linked to exactly one parent with the highest relation probability. For other tasks, we predict relations for all span pairs and use those not predicted as NEG REL to construct outputs. Our core insight is that the above formulation is largely task-agnostic, meaning that a task can be modeled in this framework as long as it can be formulated as a span-relation prediction problem with properly defined span labels L and relation labels R. As shown in Table 1, this unified SpanRelation (SpanRel) model makes it simple to scale to a large number of language analysis tasks, with breadth far beyond that of previous work. Multi-task Learning The SpanRel model makes it easy to perform multi-task learning (MTL) by sharing all parameters except for the MLPs used for label prediction. However, because different tasks capture different linguistic aspects, they are not equally beneficial to each other. It is expected that jointly training on related tasks is helpful, while forcing the same model to solve unrelated tasks might even hurt the performance (Ruder, 2017). 2124 Category Task Metric Dataset Setting SOTA Model Previous SOTA Our Model IE NER F1 CoNLL03 BERT Devlin et al. (2019) 92.8 92.2 WLP ELMo Luan et al. (2019) 79.5 79.2 RE Macro F1 SemEval10 BERT, gold Wu and He (2019) 89.3 87.4 F1 WLP ELMo Luan et al. (2019) 64.1 65.5 Coref. Avg F1 OntoNotes GloVe, CharCNN Lee et al. (2017)◦ 62.0 61.1 OpenIE F1 OIE2016 ELMo Stanovsky et al. (2018)⋆ 31.1 35.2 SRL F1 OntoNotes ELMo He et al. (2018)† 82.9 82.4 Parsing Dep. LAS PTB ELMo Clark et al. (2018) 94.4 94.7 Consti. Evalb F1 PTB BERT Kitaev et al. (2019) 95.6 95.5 Sentiment ABSA Accuracy SemEval14 BERT, gold Xu et al. (2019)◁ 85.0/78.1 85.5/76.6 ORL F1 MPQA 3.0 GloVe, gold Marasovi´c and Frank (2018)⋆ 56.4 55.6 POS Accuracy PTB ELMo Clark et al. (2018) 97.7 97.7 Table 4: Comparison between SpanRel models and task-specific SOTA models.2 Following Luan et al. 
(2019), we perform NER and RE jointly on WLP dataset. We use gold entities in SemEval-2010 Task 8, gold aspect terms in SemEval-2014 Task 4, and gold opinion expressions in MPQA 3.0 to be consistent with existing works. Compared to manually choosing source tasks based on prior knowledge, which might be sub-optimal when the number of tasks is large, SpanRel offers a systematic way to examine relative benefits of source-target task pairs by either performing pairwise MTL or attention-based analysis, as we will show in Section 4.3. 4 GLAD Benchmark and Results We first describe our General Language Analysis Datasets (GLAD) benchmark and evaluation metrics, then conduct experiments to (1) verify that SpanRel can achieve comparable performance across all tasks (Section 4.2), and (2) demonstrate its benefits in multi-task learning (Section 4.3). 4.1 Experimental Settings GLAD Benchmark and Evaluation Metrics As summarized in Table 3, we convert 8 widely used datasets with annotations of 10 tasks into the BRAT format and include them in the GLAD benchmark. It covers diverse domains, providing a holistic testbed for natural language analysis evaluation. The major evaluation metric is span-based F1 (denoted as F1), a standard metric for SRL. Precision is the proportion of extracted spans (spans not predicted as NEG SPAN) that are consistent with 2◦The small version of Lee et al. (2017)’s method with 100 antecedents and no speaker features. ⋆For OpenIE and ORL, we use span-based F1 instead of syntactic-head-based F1 and binary coverage F1 used in the original papers because they are biased towards extracting long spans. † For SRL, we choose to compare with He et al. (2018) because they also extract predicates and arguments in an end-to-end way. ◁We follow Xu et al. (2019) to report accuracy of restaurant and laptop domain separately in ABSA. the ground truth. Recall is the proportion of ground truth spans that are correctly extracted. Span F1 is also applicable to relations, where an extracted relation (relations not predicted as NEG REL) is correct iff both head and tail spans have correct boundaries and the predicted relation is correct. To make fair comparisons with existing works, we also compute standard metrics for different tasks, as listed in Table 3. Implementation Details We attempted four token representation methods (Equation 1), namely GloVe (Pennington et al., 2014), ELMo (Peters et al., 2018), BERT (Devlin et al., 2019), and SpanBERT (Joshi et al., 2019). We use BERTbase in our main results and report BERTlarge in Appendix B. A three-layer BiLSTM with 256 hidden units is used (Equation 2). Both span and relation prediction MLPs have two layers with 128 hidden units. Dropout (Srivastava et al., 2014) of 0.5 is applied to all layers. For GloVe and ELMo, we use Adam (Kingma and Ba, 2015) with learning rate of 1e-3 and early stop with patience of 3. For BERT and SpanBERT, we follow standard fine-tuning with learning rate of 5e-5, β1 = 0.9, β2 = 0.999, L2 weight decay of 0.01, warmup over the first 10% steps, and number of epochs tuned on development set. Task-specific hyperparameters maximal span length and pruning ratio are tuned on development set and listed in Appendix C. 4.2 Comparison with Task-specific SOTA We compare the SpanRel model with state-of-theart task-specific models by training on data from a single task. 
By doing so we attempt to answer the 2125 research question “can a single model with minimal task-specific engineering achieve competitive or superior performance to other models that have been specifically engineered?” We select competitive SOTA models mainly based on settings, e.g., single-task learning and end-to-end extraction of spans and relations. To make fair comparisons, token embeddings (GloVe, ELMo, BERT) and other hyperparameters (e.g., the number of antecedents in Coref. and the maximal span length in SRL) in our method are set to match those used by SOTA models, to focus on differences brought about by the model architecture. As shown in Table 4, the SpanRel model achieves comparable performances as task-specific SOTA methods (regardless of whether the token representation is contextualized or not). This indicates that the span-relation format can generically represent a large number of natural language analysis tasks and it is possible to devise a single unified model that achieves strong performance on all of them. It provides a strong and generic baseline for natural language analysis tasks and a way to examine the usefulness of task-specific designs. 4.3 Multi-task Learning with SpanRel To demonstrate the benefit of the SpanRel model in MTL, we perform single-task learning (STL) and MTL across all tasks using end-to-end settings.3 Following Liu et al. (2019), we perform MTL+finetuning and show the results in separate columns of Table 5. Contextualized token representations yield significantly better results than GloVe on all tasks, indicating that pre-training on large corpora is almost universally helpful to NLP tasks. Comparing the results of MTL+fine-tuning with STL, we found that performance with GloVe drops on 8 out of 15 tasks, most of which are tasks with relatively sparse data. It is probably because the capacity of the GloVe-based model is too small to store all the patterns required by different tasks. The results of contextualized representations are mixed, with some tasks being improved and others remaining the same or degrading. We hypothesize that this is because different tasks capture different linguistic aspects, thus are not equally helpful to each other. Reconciling these seemingly different tasks in the same model might be harmful to some tasks. 3Span-based F1 is used as the evaluation metric in SemEval-2010 Task 8 and SemEval-2014 Task 4 as opposed to macro F1 and accuracy reported in the original papers because we aim at end-to-end extractions. Notably, as the contextualized representations become stronger, the performance of MTL+FT becomes more favorable. 5 out of 15 tasks (NER, RE, OpenIE, SRL, ORL) observe statistically significant improvements (p-value < 0.05 with paired bootstrap re-sampling) with SpanBERT, a contextualized embedding pre-trained with span-based training objectives, while only one task degrades (ABSA), indicating its superiority in reconciling spans from different tasks. The GLAD benchmark provides a holistic testbed for evaluating natural language analysis capability. Task Relatedness Analysis To further investigate how different tasks interact with each other, we choose five source tasks (i.e., tasks used to improve other tasks, e.g., POS, NER, Consti., Dep., and SRL) that have been widely used in MTL (Hashimoto et al., 2017; Strubell et al., 2018) and six target tasks (i.e., tasks to be improved, e.g., OpenIE, NER, RE, ABSA, ORL, and SRL) to perform pairwise multi-task learning. 
We hypothesize that although language modeling pre-training is theoretically orthogonal to MTL (Swayamdipta et al., 2018), in practice their benefits tends to overlap. To analyze these two factors separately, we start with a weak representation GloVe to study task relatedness, then move to BERT to demonstrate how much we can still improve with MTL given strong and contextualized representations. As shown in Table 6 (GloVe), tasks are not equally useful to each other. Notably, (1) for OpenIE and ORL, multi-task learning with SRL improves the performance significantly, while other tasks lead to less or no improvements. (2) Dependency parsing and SRL are generic source tasks that are beneficial to most of the target tasks. This unified SpanRel makes it easy to perform MTL and decide beneficial source tasks. Next, we demonstrate that our framework also provides a platform for analysis of similarities and differences between different tasks. Inspired by the intuition that the attention coefficients are somewhat indicative of a model’s internal focus (Li et al., 2016; Vig, 2019; Clark et al., 2019), we hypothesize that the similarity or difference between attention mechanisms may be correlated with similarity between tasks, or even the success or failure of MTL. To test this hypothesis, we extract the attention maps of two BERT-based SpanRel models (trained on a source t′ and a target task t separately) over sentences Xt from the target task, and compute 2126 GloVe ELMo BERTbase SpanBERTbase Category Task Metric Dataset STL MTL +FT STL MTL +FT STL MTL +FT STL MTL +FT IE NER F1 CoNLL03 88.4 86.2↓87.5↓ 91.9 91.6 91.6 91.0 88.6↓90.2↓ 91.3 90.4↓91.2 WLP 77.6 71.5↓76.5↓ 79.2 77.4↓78.2↓ 78.1 78.2 78.5 77.9 78.6↑78.5↑ RE F1 SemEval10 50.7 15.2↓33.0↓ 61.8 30.6↓42.9↓ 61.7 55.1↓59.8↓ 62.1 54.6↓61.8 WLP 64.9 38.5↓53.9↓ 65.5 52.0↓55.1↓ 64.7 65.9↑66.5↑ 64.1 67.2↑67.2↑ Coref Avg F1 OntoNotes 56.3 50.3↓53.0↓ 62.2 62.9↑63.3↑ 66.2 65.5↓65.8 70.0 68.9↓69.7 OpenIE F1 OIE2016 28.3 6.8↓19.6↓ 35.2 30.0↓32.9↓ 36.7 37.1 38.5↑ 36.5 37.3↑38.6↑ SRL F1 OntoNotes 78.0 77.9 78.6↑ 82.4 82.3 82.4 83.3 82.9 83.4 83.1 83.3 83.8↑ Parsing Dep. LAS PTB 92.9 93.2 93.5↑ 94.7 94.9 94.9 94.9 94.8 95.0 95.1 95.1 95.1 OntoNotes 90.4 90.5 90.5 92.3 93.2↑92.8↑ 94.1 93.8 94.0 94.2 94.1 94.2 Consti. Evalb F1 PTB 93.4 93.8 95.3 95.3 95.5 95.2 95.8 95.5 OntoNotes 91.0 91.5↑ 93.2 93.7↑ 93.6 93.8 94.3 94.2 Sentiment ABSA F1 SemEval14 63.5 48.5↓59.0↓ 69.2 57.0↓59.0↓ 70.8 63.1↓67.0↓ 70.0 63.5↓69.5↓ ORL F1 MPQA 3.0 38.2 18.4↓31.6↓ 42.9 24.7↓32.4↓ 44.5 38.1↓45.6↑ 45.2 40.2↓47.5↑ POS Accuracy PTB 96.8 96.8 96.8 97.7 97.7 97.8 97.6 97.3 97.3 97.6 97.6 97.6 OntoNotes 97.0 97.0 97.1 98.2 98.2 98.3 97.7 97.8 97.8 98.3 98.3 98.3 Table 5: Comparison between STL and MTL+fine-tuning across all tasks. blue↑indicates results better than STL, red↓indicates worse, and black means almost the same (i.e., a difference within 0.5). Constituency parsing requires more memory than other tasks so we restrict its span length to 10 in MTL, and thus do not report results. their similarity using the Frobenius norm: simk(t, t′) = −1 |Xt| X x∈Xt At k(x) −At′ k (x) F , where At k(x) is the attention map extracted from the k-th head by running the model trained from task t on sentence x. We select OpenIE as the target task because it shows the largest performance variation when paired with different source tasks (34.0 - 38.8) in Table 6. 
We visualize the attention similarity of all heads in BERT (12 layers × 12 heads) between two mutually harmful tasks (OpenIE/POS on the left) and between two mutually helpful tasks (OpenIE/SRL on the right) in Figure 2a. A common trend is that heads in higher layers exhibit more divergence, probably because they are closer to the prediction layer, thus easier to be affected by the end task. Overall, it can be seen that OpenIE/POS has much more attention divergence than OpenIE/SRL. A notable difference is that almost all heads in the last two layers of the OpenIE/POS models differ significantly, while some heads in the last two layers of the OpenIE/SRL models still behave similarly, providing evidence that failure of MTL can be attributed to the fact that dissimilar tasks requires different attention patterns. We further compute average attention similarities for all source tasks in Figure 2b, and we can see that there is a strong correlation (Pearson correlation of 0.97) between the attentions similarity and the -5 0 12 layers heads 1 heads 12 1 12 1 (a) Attention similarity between OpenIE/POS (left), and between OpenIE/SRL (right) for all heads. 34 36 38 40 -2.3 -2.2 -2.1 -2 performance similarity POSNERconsti. dep.SRL (b) Correlation between attention similarity and MTL performance. Figure 2: Attention-based task relatedness analysis. performance of pairwise MTL, supporting our hypothesis that attention pattern similarities can be used to predict improvements of MTL. MTL under Different Settings We analyze how token representations and sizes of the target dataset affect the performance of MTL. Comparing BERT and GloVe in Table 6, the improvements become smaller or vanish as the token representation becomes stronger, e.g., improvement on OpenIE with SRL reduces from 5.8 to 1.6. This is expected because both large-scale pre-training and MTL aim to learn general representations and their benefits tend to overlap in practice. Interestingly, some helpful source tasks become harmful when we shift from GloVe to BERT, such as OpenIE paired with POS. We conjecture that the gains of MTL might have already been achieved by BERT, but the task-specific characteristics of POS hurt the performance of OpenIE. We did not observe many tasks benefitting from MTL for the GloVe-based model in Table 5 2127 GloVe BERTbase Target Source STL POS NER Consti. Dep. SRL STL POS NER Consti. Dep. SRL OpenIE 28.3 29.9↑27.0↓ 31.2↑ 32.9↑34.1↑ 36.7 34.0↓34.3↓ 35.2↓ 37.8↑38.3↑ NER (WLP) 77.6 77.8 78.3↑ 77.9 78.6↑78.1↑ 78.1 78.0 78.1 78.1 77.7 78.8↑ RE (WLP) 64.9 65.5↑65.6↑ 64.9 66.5↑65.9↑ 64.7 64.4 64.7 64.3 64.9 65.3↑ RE (SemEval10) 50.7 52.3↑52.8↑ 49.6↓ 52.9↑52.8↑ 61.7 61.9 60.2↓ 59.2↓ 62.1 59.9↓ ABSA 63.5 63.4 62.8↓ 59.8↓ 63.5 60.2↓ 70.8 68.9↓71.4↑ 70.4 69.9↓69.6↓ ORL 38.2 35.7↓37.9 36.1↓ 38.6 41.0↑ 44.5 45.8↑44.2 44.8 45.1↑46.6↑ SRL (10k) 68.8 69.6↑68.9 70.7↑ 71.3↑ 78.7 79.4↑79.5↑ 79.6↑ 79.8↑ Table 6: Performance of pairwise multi-task learning with GloVe and BERTbase. blue↑ indicates results better than STL, red↓indicates worse, and black means almost the same (i.e., a difference within 0.5). We show the performance after fine-tuning. Dataset of source tasks POS, Consti., Dep. is PTB and dataset of NER is CoNLL-2003. 79 80 81 82 83 84 85 86 0 15 30 45 60 75 F1 #training instances (in k) STL POS NER Consti. Dep. Figure 3: MTL Performance of SRL wrt. the data size. because it is trained on all tasks (instead of two), which is beyond its limited model capacity. 
The improvements of MTL shrink as the size of the SRL datasets increases, as shown in Figure 3, indicating that MTL is useful when the target data is sparse. Time Complexity Analysis Time complexities of span and relation prediction are O(l · n) and O(K2) = O(τ 2 · n2) respectively for a sentence of n tokens (Section 3). The time complexity of BERT is O(L · n2), dominated by its L selfattention layers. Since the pruning threshold τ is usually less than 1, the computational overhead introduced by the span-relation output layer is much less than BERT. In practice, we observe that the training/testing time is mainly spent by BERT. For SRL, one of the most computation-intensive tasks with long spans and dense span/relation annotations, 85.5% of the time is spent by BERT. For POS, a less heavy task, the time spent by BERT increases to 98.5%. Another option for span prediction is to formulate it as a sequence labeling task, as in previous works (Lample et al., 2016; He et al., 2017), where time complexity is O(n). Although slower than token-based labeling models, span-based models offer the advantages of being able to model overlapping spans and use span-level information for label prediction (Lee et al., 2017). 5 Related Work General Architectures for NLP There has been a rising interest in developing general architectures for different NLP tasks, with the most prominent examples being sequence labeling framework (Collobert et al., 2011; Ma and Hovy, 2016) used for tagging tasks and sequence-to-sequence framework (Sutskever et al., 2014) used for generation tasks. Moreover, researchers typically pick related tasks, motivated by either linguistic insights or empirical results, and create a general framework to perform MTL, several of which are summarized in Table 1. For example, Swayamdipta et al. (2018) and Strubell et al. (2018) use constituency and dependency parsing to improve SRL. Luan et al. (2018, 2019); Wadden et al. (2019) use a spanbased model to jointly solve three informationextraction-related tasks (NER, RE, and Coref.). Li et al. (2019) formulate both nested NER and flat NER as a machine reading comprehension task. Compared to existing works, we aim to create an output representation that can solve nearly every natural language analysis task in one fell swoop, allowing us to cover a far broader range of tasks with a single model. In addition, NLP has seen a recent burgeoning of contextualized representations pre-trained on large corpora (e.g., ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019)). These methods focus on learning generic input representations, but are agnostic to the output representation, requiring different predictors for different tasks. In contrast, we present a methodology to formulate the output of different tasks in a unified format. Thus our work is orthogonal to those on contextualized embeddings. Indeed, in Section 4.3, we demonstrate that the SpanRel model can benefit from stronger contextualized representation models, and even provide a testbed for their use in natural language analysis. Benchmarks for Evaluating Natural Language Understanding Due to the rapid development of NLP models, large-scale benchmarks, such as SentEval (Conneau and Kiela, 2018), GLUE (Wang et al., 2019b), and SuperGLUE (Wang et al., 2019a) have been proposed to facilitate fast and holistic evaluation of models’ understanding ability. 
They 2128 mainly focus on sentence-level tasks, such as natural language inference, while our GLAD benchmark focuses on token/phrase-level analysis tasks with diverse coverage of different linguistic structures. New tasks and datasets can be conveniently added to our benchmark as long as they are in the BRAT standoff format, which is one of the most commonly used data format in the NLP community, e.g., it has been used in the BioNLP shared tasks (Kim et al., 2009) and the Universal Dependency project (McDonald et al., 2013). 6 Conclusion We provide the simple insight that a large number of natural language analysis tasks can be represented in a single format consisting of spans and relations between spans. As a result, these tasks can be solved in a single modeling framework that first extracts spans and predicts their labels, then predicts relations between spans. We attempted 10 tasks with this SpanRel model and show that this generic task-independent model can achieve competitive performance as state-of-the-art methods tailored for each tasks. We merge 8 datasets into our GLAD benchmark for evaluating future models for natural language analysis. Future directions include (1) devising hierarchical span representations that can handle spans of different length and diverse content more effectively and efficiently; (2) robust multitask learning or meta-learning algorithms that can reconcile very different tasks. Acknowledgments This work was supported by gifts from Bosch Research. We would like to thank Hiroaki Hayashi, Bohan Li, Pengcheng Yin, Hao Zhu, Paul Michel, and Antonios Anastasopoulos for their insightful comments and suggestions. References Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, LAW-ID@ACL 2013, August 8-9, 2013, Sofia, Bulgaria, pages 178–186. Michele Banko, Michael J. Cafarella, Stephen Soderland, Matthew Broadhead, and Oren Etzioni. 2007. Open information extraction from the web. In IJCAI 2007, Proceedings of the 20th International Joint Conference on Artificial Intelligence, Hyderabad, India, January 6-12, 2007, pages 2670–2676. Ondrej Bojar, Christian Buck, Christian Federmann, Barry Haddow, Philipp Koehn, Johannes Leveling, Christof Monz, Pavel Pecina, Matt Post, Herve Saint-Amand, Radu Soricut, Lucia Specia, and Ales Tamchyna. 2014. Findings of the 2014 workshop on statistical machine translation. In Proceedings of the Ninth Workshop on Statistical Machine Translation, WMT@ACL 2014, June 26-27, 2014, Baltimore, Maryland, USA, pages 12–58. Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. 2019. What does BERT look at? an analysis of BERT’s attention. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 276–286, Florence, Italy. Association for Computational Linguistics. Kevin Clark, Minh-Thang Luong, Christopher D. Manning, and Quoc Le. 2018. Semi-supervised sequence modeling with cross-view training. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1914– 1925, Brussels, Belgium. Association for Computational Linguistics. Michael Collins. 1997. Three generative, lexicalised models for statistical parsing. 
In 35th Annual Meeting of the Association for Computational Linguistics and 8th Conference of the European Chapter of the Association for Computational Linguistics, pages 16–23, Madrid, Spain. Association for Computational Linguistics. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel P. Kuksa. 2011. Natural language processing (almost) from scratch. J. Mach. Learn. Res., 12:2493–2537. Alexis Conneau and Douwe Kiela. 2018. SentEval: An evaluation toolkit for universal sentence representations. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018), Miyazaki, Japan. European Languages Resources Association (ELRA). Lingjia Deng and Janyce Wiebe. 2015. MPQA 3.0: An entity/event-level sentiment corpus. In NAACL HLT 2015, The 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Denver, Colorado, USA, May 31 - June 5, 2015, pages 1323–1328. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 2129 pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Kalpit Dixit and Yaser Al-Onaizan. 2019. Span-level model for relation extraction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5308–5314, Florence, Italy. Association for Computational Linguistics. Daniel Gildea and Daniel Jurafsky. 2002. Automatic labeling of semantic roles. Comput. Linguist., 28(3):245–288. Jiang Guo, Wanxiang Che, Haifeng Wang, Ting Liu, and Jun Xu. 2016. A unified architecture for semantic role labeling and relation classification. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 1264–1274, Osaka, Japan. The COLING 2016 Organizing Committee. Kazuma Hashimoto, Caiming Xiong, Yoshimasa Tsuruoka, and Richard Socher. 2017. A joint many-task model: Growing a neural network for multiple NLP tasks. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 911, 2017, pages 1923–1933. Luheng He, Kenton Lee, Omer Levy, and Luke Zettlemoyer. 2018. Jointly predicting predicates and arguments in neural semantic role labeling. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 364–369, Melbourne, Australia. Association for Computational Linguistics. Luheng He, Kenton Lee, Mike Lewis, and Luke Zettlemoyer. 2017. Deep semantic role labeling: What works and what’s next. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 473–483, Vancouver, Canada. Association for Computational Linguistics. Iris Hendrickx, Su Nam Kim, Zornitsa Kozareva, Preslav Nakov, Diarmuid ´O S´eaghdha, Sebastian Pad´o, Marco Pennacchiotti, Lorenza Romano, and Stan Szpakowicz. 2010. Semeval-2010 task 8: Multi-way classification of semantic relations between pairs of nominals. In Proceedings of the 5th International Workshop on Semantic Evaluation, SemEval@ACL 2010, Uppsala University, Uppsala, Sweden, July 15-16, 2010, pages 33–38. 
Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. 2019. Spanbert: Improving pre-training by representing and predicting spans. CoRR, abs/1907.10529. Jin-Dong Kim, Tomoko Ohta, Sampo Pyysalo, Yoshinobu Kano, and Jun’ichi Tsujii. 2009. Overview of bionlp’09 shared task on event extraction. In Proceedings of the Workshop on Current Trends in Biomedical Natural Language Processing: Shared Task, BioNLP ’09, pages 1–9, Stroudsburg, PA, USA. Association for Computational Linguistics. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Nikita Kitaev, Steven Cao, and Dan Klein. 2019. Multilingual constituency parsing with self-attention and pre-training. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3499–3505, Florence, Italy. Association for Computational Linguistics. Sandra K¨ubler, Ryan McDonald, and Joakim Nivre. 2009. Dependency parsing. Synthesis Lectures on Human Language Technologies, 1(1):1–127. Chaitanya Kulkarni, Wei Xu, Alan Ritter, and Raghu Machiraju. 2018. An annotated corpus for machine reading of instructions in wet lab protocols. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 97–106, New Orleans, Louisiana. Association for Computational Linguistics. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego California, USA, June 12-17, 2016, pages 260–270. Wuwei Lan and Wei Xu. 2018. Neural network models for paraphrase identification, semantic textual similarity, natural language inference, and question answering. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3890–3902, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Kenton Lee, Luheng He, Mike Lewis, and Luke Zettlemoyer. 2017. End-to-end neural coreference resolution. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 911, 2017, pages 188–197. Jiwei Li, Will Monroe, and Dan Jurafsky. 2016. Understanding neural networks through representation erasure. CoRR, abs/1612.08220. Xiaoya Li, Jingrong Feng, Yuxian Meng, Qinghong Han, Fei Wu, and Jiwei Li. 2019. A unified MRC framework for named entity recognition. CoRR, abs/1910.11476. Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. 2019. Multi-task deep neural networks for natural language understanding. In Proceedings 2130 of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 4487–4496. Ryan Lowe, Nissan Pow, Iulian Serban, and Joelle Pineau. 2015. The ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems. In Proceedings of the SIGDIAL 2015 Conference, The 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, 24 September 2015, Prague, Czech Republic, pages 285–294. Yi Luan, Luheng He, Mari Ostendorf, and Hannaneh Hajishirzi. 2018. 
Multi-task identification of entities, relations, and coreference for scientific knowledge graph construction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 November 4, 2018, pages 3219–3232. Yi Luan, Dave Wadden, Luheng He, Amy Shah, Mari Ostendorf, and Hannaneh Hajishirzi. 2019. A general framework for information extraction using dynamic span graphs. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3036–3046, Minneapolis, Minnesota. Association for Computational Linguistics. Xuezhe Ma and Eduard H. Hovy. 2016. End-to-end sequence labeling via bi-directional lstm-cnns-crf. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. Ana Marasovi´c and Anette Frank. 2018. SRL4ORL: Improving opinion role labeling using multi-task learning with semantic role labeling. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 583–594, New Orleans, Louisiana. Association for Computational Linguistics. Mitchell P. Marcus, Grace Kim, Mary Ann Marcinkiewicz, Robert MacIntyre, Ann Bies, Mark Ferguson, Karen Katz, and Britta Schasberger. 1994. The penn treebank: Annotating predicate argument structure. In Human Language Technology, Proceedings of a Workshop held at Plainsboro, New Jerey, USA, March 8-11, 1994. Ryan McDonald, Joakim Nivre, Yvonne QuirmbachBrundage, Yoav Goldberg, Dipanjan Das, Kuzman Ganchev, Keith Hall, Slav Petrov, Hao Zhang, Oscar T¨ackstr¨om, Claudia Bedini, N´uria Bertomeu Castell´o, and Jungmee Lee. 2013. Universal dependency annotation for multilingual parsing. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 92–97, Sofia, Bulgaria. Association for Computational Linguistics. Ramesh Nallapati, Bowen Zhou, C´ıcero Nogueira dos Santos, C¸ aglar G¨ulc¸ehre, and Bing Xiang. 2016. Abstractive text summarization using sequence-tosequence rnns and beyond. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, CoNLL 2016, Berlin, Germany, August 11-12, 2016, pages 280–290. Christina Niklaus, Matthias Cetto, Andr´e Freitas, and Siegfried Handschuh. 2018. A survey on open information extraction. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3866–3878, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1532–1543. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237, New Orleans, Louisiana. Association for Computational Linguistics. 
Maria Pontiki, Dimitris Galanis, John Pavlopoulos, Harris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar. 2014. Semeval-2014 task 4: Aspect based sentiment analysis. In Proceedings of the 8th International Workshop on Semantic Evaluation, SemEval@COLING 2014, Dublin, Ireland, August 23-24, 2014., pages 27–35. Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Hwee Tou Ng, Anders Bj¨orkelund, Olga Uryupina, Yuchen Zhang, and Zhi Zhong. 2013. Towards robust linguistic analysis using ontonotes. In Proceedings of the Seventeenth Conference on Computational Natural Language Learning, CoNLL 2013, Sofia, Bulgaria, August 8-9, 2013, pages 143–152. Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. Conll2012 shared task: Modeling multilingual unrestricted coreference in ontonotes. In Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning - Proceedings of the Shared Task: Modeling Multilingual Unrestricted Coreference in OntoNotes, EMNLP-CoNLL 2012, July 13, 2012, Jeju Island, Korea, pages 1–40. Adwait Ratnaparkhi. 1996. A maximum entropy model for part-of-speech tagging. In Conference on Empirical Methods in Natural Language Processing. 2131 Sebastian Ruder. 2017. An overview of multitask learning in deep neural networks. CoRR, abs/1706.05098. Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning, CoNLL 2003, Held in cooperation with HLT-NAACL 2003, Edmonton, Canada, May 31 - June 1, 2003, pages 142–147. Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. J. Mach. Learn. Res., pages 1929–1958. Gabriel Stanovsky and Ido Dagan. 2016. Creating a large benchmark for open information extraction. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2300–2305, Austin, Texas. Association for Computational Linguistics. Gabriel Stanovsky, Julian Michael, Luke Zettlemoyer, and Ido Dagan. 2018. Supervised open information extraction. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 885– 895, New Orleans, Louisiana. Association for Computational Linguistics. Pontus Stenetorp, Sampo Pyysalo, Goran Topi´c, Tomoko Ohta, Sophia Ananiadou, and Jun’ichi Tsujii. 2012. BRAT: a web-based tool for NLP-assisted text annotation. In Proceedings of the Demonstrations at the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 102–107, Avignon, France. Association for Computational Linguistics. Mitchell Stern, Jacob Andreas, and Dan Klein. 2017. A minimal span-based neural constituency parser. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pages 818–827. Emma Strubell, Patrick Verga, Daniel Andor, David Weiss, and Andrew McCallum. 2018. Linguistically-informed self-attention for semantic role labeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5027–5038, Brussels, Belgium. Association for Computational Linguistics. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. 
Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pages 3104–3112. Swabha Swayamdipta, Sam Thomson, Kenton Lee, Luke Zettlemoyer, Chris Dyer, and Noah A. Smith. 2018. Syntactic scaffolds for semantic structures. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3772–3782, Brussels, Belgium. Association for Computational Linguistics. Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019a. BERT rediscovers the classical NLP pipeline. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 4593–4601. Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R. Thomas McCoy, Najoung Kim, Benjamin Van Durme, Samuel R. Bowman, Dipanjan Das, and Ellie Pavlick. 2019b. What do you learn from context? probing for sentence structure in contextualized word representations. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. Kristina Toutanova, Dan Klein, Christopher D. Manning, and Yoram Singer. 2003. Feature-rich part-ofspeech tagging with a cyclic dependency network. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pages 252–259. Jesse Vig. 2019. A multiscale visualization of attention in the transformer model. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28 - August 2, 2019, Volume 3: System Demonstrations, pages 37–42. David Wadden, Ulme Wennberg, Yi Luan, and Hannaneh Hajishirzi. 2019. Entity, relation, and event extraction with contextualized span representations. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5783– 5788, Hong Kong, China. Association for Computational Linguistics. Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019a. Superglue: A stickier benchmark for general-purpose language understanding systems. CoRR, abs/1905.00537. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019b. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. Shanchan Wu and Yifan He. 2019. Enriching pretrained language model with entity information for relation classification. CoRR, abs/1905.08284. 2132 Hu Xu, Bing Liu, Lei Shu, and Philip Yu. 2019. BERT post-training for review reading comprehension and aspect-based sentiment analysis. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2324–2335, Minneapolis, Minnesota. Association for Computational Linguistics. Bishan Yang and Claire Cardie. 2013. Joint inference for fine-grained opinion extraction. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, ACL 2013, 4-9 August 2013, Sofia, Bulgaria, Volume 1: Long Papers, pages 1640–1649. 
Junlang Zhang and Hai Zhao. 2019. Span based open information extraction. CoRR, abs/1901.10879. A Detailed Explanations of 10 Tasks • Span-oriented Tasks (Table 2a) – Named Entity Recognition (Sang and Meulder, 2003) NER is traditionally considered as a sequence labeling task. We model named entities as spans over one or more tokens. – Constituency Parsing (Collins, 1997) Constituency parsing aims to produce a syntactic parse tree for each sentence. Each node in the tree is an individual span associated with a constituent label, and spans are nested. – Part-of-speech Tagging (Ratnaparkhi, 1996; Toutanova et al., 2003) POS tagging is another sequence labeling task, where every single token is an individual span with a POS tag. – Aspect-based Sentiment Analysis (Pontiki et al., 2014) ABSA is a task that consists of identifying certain spans as aspect terms and predicting their associated sentiments. • Relation-oriented Tasks (Table 2b) – Relation Extraction (Hendrickx et al., 2010) RE concerns the relation between two entities. – Coreference (Pradhan et al., 2012) Coreference resolution is to link named, nominal, and pronominal mentions that refer to the same concept, within or beyond a single sentence. – Semantic Role Labeling (Gildea and Jurafsky, 2002) SRL aims to identify arguments of a predicate (verb or noun) and classify them with semantic roles in relation to the predicate. – Open Information Extraction (Banko et al., 2007; Niklaus et al., 2018) In contrast to the fixed relation types in RE, OpenIE aims to extract open-domain predicates and their arguments (usually subjects and objects) from a sentence. – Dependency Parsing (K¨ubler et al., 2009) Spans are single-word tokens and a relation links a word to its syntactic parent with the corresponding dependency type. – Opinion Role Labeling (Yang and Cardie, 2013) ORL detects spans that are opinion expressions, as well as holders and targets related to these opinions. B Results of BERT Large Model Table 7 shows the performance of single-task learning with different token representations. BERTlarge achieves the best performance on most of the tasks. 2133 Category Task Metric Dataset GloVe ELMo BERTbase SpanBERTbase BERTlarge IE NER F1 CoNLL03 88.4 91.9 91.0 91.3 90.9 WLP 77.6 79.2 78.1 77.9 78.3 RE F1 SemEval10 50.7 61.8 61.7 62.1 64.7 WLP 64.9 65.5 64.7 64.1 65.1 Coref Avg F1 OntoNotes 56.3 62.2 66.3 70.0 OpenIE F1 OIE2016 28.3 35.2 36.7 36.5 36.5 SRL F1 OntoNotes 78.0 82.4 83.3 83.1 84.4 Parsing Dep. LAS PTB 92.9 94.7 94.9 95.1 95.3 OntoNotes 90.4 92.3 94.1 94.2 94.5 Consti. Evalb F1 PTB 93.4 95.3 95.5 95.8 95.8 OntoNotes 91.0 93.2 93.6 94.3 93.9 Sentiment ABSA F1 SemEval14 63.5 69.2 70.8 70.0 73.8 ORL F1 MPQA 3.0 38.2 42.9 44.5 45.2 47.1 POS Accuracy PTB 96.8 97.7 97.6 97.6 97.4 OntoNotes 97.0 98.2 97.7 98.3 97.9 Table 7: Single-task learning performance of the SpanRel model with different token representations. BERTlarge requires a large amount of memory so we cannot feed the entire document to the model in coreference resolution. Information Extraction POS Parsing SRL Sentiment NER RE Coref. OpenIE Dep. Consti. ABSA ORL max span length l 10 5 10 30 1 1 30 10 30 pruning ratio τ 5 0.4 0.8 1.0 1.0 0.3 Table 8: Task-specific hyperparameters. Span-oriented tasks do not need pruning ratio. C Task-specific Hyperparameters As shown in Table 8, a larger maximum span length is used for tasks with longer spans (e.g., OpenIE), and a larger pruning ratio is used for tasks with more spans (e.g., SRL). 
Constituency parsing does not have a span length limit because spans can be as long as the entire sentence. Since relation extraction aims to extract exactly two entities and their relation from a sentence, we keep the pruning ratio fixed (top 5 spans in this case) regardless of the length of the sentence.
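To make the two hyperparameters in Table 8 concrete, the following is a minimal sketch of span enumeration and pruning. The helper names (enumerate_spans, prune_spans) are illustrative, and the details of ranking spans by a model-provided score and rounding tau * sentence_length up are assumptions not spelled out in the text above.
```python
import math
from typing import Callable, List, Optional, Tuple

def enumerate_spans(n_tokens: int, max_span_len: int) -> List[Tuple[int, int]]:
    """All spans (start, end), end exclusive, of at most max_span_len tokens (l in Table 8)."""
    return [(i, j)
            for i in range(n_tokens)
            for j in range(i + 1, min(i + max_span_len, n_tokens) + 1)]

def prune_spans(spans: List[Tuple[int, int]],
                score_fn: Callable[[Tuple[int, int]], float],
                n_tokens: int,
                tau: float,
                fixed_top_k: Optional[int] = None) -> List[Tuple[int, int]]:
    """Keep the highest-scoring spans: a fixed top-k (e.g., 5 for RE),
    otherwise ceil(tau * sentence length) spans (tau in Table 8)."""
    k = fixed_top_k if fixed_top_k is not None else math.ceil(tau * n_tokens)
    return sorted(spans, key=score_fn, reverse=True)[:k]
```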
2020
192
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2134–2146 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2134 Learning to Contextually Aggregate Multi-Source Supervision for Sequence Labeling Ouyu Lan†∗Xiao Huang†∗Bill Yuchen Lin† He Jiang† Liyuan Liu‡ Xiang Ren† {olan,huan183,yuchen.lin,jian567,xiangren}@usc.edu, [email protected] †Computer Science Department, University of Southern California ‡Computer Science Department, University of Illinois at Urbana-Champaign Abstract Sequence labeling is a fundamental task for a range of natural language processing problems. When used in practice, its performance is largely influenced by the annotation quality and quantity, and meanwhile, obtaining ground truth labels is often costly. In many cases, ground truth labels do not exist, but noisy annotations or annotations from different domains are accessible. In this paper, we propose a novel framework Consensus Network (CONNET) that can be trained on annotations from multiple sources (e.g., crowd annotation, cross-domain data). It learns individual representation for every source and dynamically aggregates source-specific knowledge by a context-aware attention module. Finally, it leads to a model reflecting the agreement (consensus) among multiple sources. We evaluate the proposed framework in two practical settings of multi-source learning: learning with crowd annotations and unsupervised crossdomain model adaptation. Extensive experimental results show that our model achieves significant improvements over existing methods in both settings. We also demonstrate that the method can apply to various tasks and cope with different encoders. 1 1 Introduction Sequence labeling is a general approach encompassing various natural language processing (NLP) tasks including part-of-speech (POS) tagging (Ratnaparkhi, 1996), word segmentation (Low et al., 2005), and named entity recognition (NER) (Nadeau and Sekine, 2007). Typically, existing methods follow the supervised learning paradigm, and require high-quality annotations. While gold standard annotation is expensive and ∗The first two authors contributed equally. 1Our code can be found at https://github.com/ INK-USC/ConNet . time-consuming, imperfect annotations are relatively easier to obtain from crowdsourcing (noisy labels) or other domains (out-of-domain). Despite their low cost, such supervision usually can be obtained from different sources, and it has been shown that multi-source weak supervision has the potential to perform similar to gold annotations (Ratner et al., 2016). Specifically, we are interested in two scenarios: 1) learning with crowd annotations and 2) unsupervised cross-domain model adaptation. Both situations suffer from imperfect annotations, and benefit from multiple sources. Therefore, the key challenge here is to aggregate multi-source imperfect annotations for learning a model without knowing the underlying ground truth label sequences in the target domain. Our intuition mainly comes from the phenomenon that different sources of supervision have different strengths and are more proficient with distinct situations. Therefore they may not keep consistent importance during aggregating supervisions, and aggregating multiple sources for a specific input should be a dynamic process that depends on the sentence context. 
To better model this nature, we need to (1) explicitly model the unique traits of different sources when training and (2) find best suitable sources for generalizing the learned model on unseen sentences. In this paper, we propose a novel framework, named Consensus Network (CONNET), for sequence labeling with multi-source supervisions. We represent the annotation patterns as different biases of annotators over a shared behavior pattern. Both annotator-invariant patterns and annotator-specific biases are modeled in a decoupled way. The first term comes through sharing part of low-level model parameters in a multi-task learning schema. For learning the biases, we decouple them from the model as the transformations 2135 Figure 1: Illustration of the task settings for the two applications in this work: (a) learning consensus model from crowd annotations; (b) unsupervised cross-domain model adaptation. on top-level tagging model parameters, such that they can capture the unique strength of each annotator. With such decoupled source representations, we further learn an attention network for dynamically assigning the best sources for every unseen sentence through composing a transformation that represents the agreement among sources (consensus). Extensive experimental results in two scenarios show that our model outperforms strong baseline methods, on various tasks and with different encoders. CONNET achieves state-of-the-art performance on real-world crowdsourcing datasets and improves significantly in unsupervised crossdomain adaptation tasks over existing works. 2 Related Work There exists three threads of related work with this paper, which are sequence labeling, crowdsourcing and unsupervised domain adaptation. Neural Sequence Labeling. Traditional approaches for sequence labeling usually need significant efforts in feature engineering for graphical models like conditional random fields (CRFs) (Lafferty, 2001). Recent research efforts in neural network models have shown that end-to-end learning like convolutional neural networks (CNNs) (Ma and Hovy, 2016a) or bidirectional long short-term memory (BLSTMs) (Lample et al., 2016) can largely eliminate humancrafted features. BLSTM-CRF models have achieved promising performance (Lample et al., 2016) and are used as our base sequence tagging model in this paper. Crowd-sourced Annotation. Crowd-sourcing has been demonstrated to be an effective way of fulfilling the label consumption of neural models (Guan et al., 2017; Lin et al., 2019). It collects annotations with lower costs and a higher speed from non-expert contributors but suffers from some degradation in quality. Dawid and Skene (1979) proposes the pioneering work to aggregate crowd annotations to estimate true labels, and Snow et al. (2008) shows its effectiveness with Amazon’s Mechanical Turk system. Later works (Dempster et al., 1977; Dredze et al., 2009; Raykar et al., 2010) focus on ExpectationMaximization (EM) algorithms to jointly learn the model and annotator behavior on classification. Recent research shows the strength of multitask framework in semi-supervised learning (Lan et al., 2018; Clark et al., 2018), cross-type learning (Wang et al., 2018), and learning with entity triggers (Lin et al., 2020). Nguyen et al. (2017); Rodrigues and Pereira (2018); Simpson et al. (2020) regards crowd annotations as noisy gold labels and constructs crowd components to model annotator-specific bias which were discarded during the inference process. 
It is worth mentioning that, it has been found even for human curated annotations, there exists certain label noise that hinders the model performance (Wang et al., 2019). Unsupervised Domain Adaptation. Unsupervised cross-domain adaptation aims to transfer knowledge learned from high-resource domains (source domains) to boost performance on lowresource domains (target domains) of interests such as social media messages (Lin et al., 2017). Different from supervised adaptation (Lin and Lu, 2018), we assume there is no labels at all for target corpora. Saito et al. (2017) and Ruder and Plank (2018) explored bootstrapping with multitask tri-training approach, which requires unlabeled data from the target domain. The method is developed for one-to-one domain adaptation and does not model the differences among multiple source domains. Yang and Eisenstein (2015) represents each domain with a vector of metadata domain attributes and uses domain vectors to train the model to deal with domain shifting, which is highly dependent on prior domain knowledge. 2136 (Ghifary et al., 2016) uses an auto-encoder method by jointly training a predictor for source labels, and a decoder to reproduce target input with a shared encoder. The decoder acts as a normalizer to force the model to learn shared knowledge between source and target domains. Adversarial penalty can be added to the loss function to make models learn domain-invariant feature only (Fernando et al., 2015; Long et al., 2014; Ming Harry Hsu et al., 2015). However, it does not exploit domain-specific information. 3 Multi-source Supervised Learning We formulate the multi-source sequence labeling problem as follows. Given K sources of supervision, we regard each source as an imperfect annotator (non-expert human tagger or models trained in related domains). For the k-th source data set S(k) = {(x(k) i , y(k) i )}mk i=1, we denote its i-th sentence as x(k) i which is a sequence of tokens: x(k) i = (x(k) i,1 , · · · , x(k) i,N). The tag sequence of the sentence is marked as y(k) i = {y(k) i,j }. We define the sentence set of each annotators as X (k) = {x(k) i }mk i=1, and the whole training domain as the union of all sentence sets: X = S(K) k=1 X (k). The goal of the multi-source learning task is to use such imperfect annotations to train a model for predicting the tag sequence y for any sentence x in a target corpus T . Note that the target corpus T can either share the same distribution with X (Application I) or be significantly different (Application II). In the following two subsections, we formulate two typical tasks in this problem as shown in Fig. 1. Application I: Learning with Crowd Annotations. When learning with crowd-sourced data, we regard each worker as an imperfect annotator (S(k)), who may make mistakes or skip sentences in its annotations. Note that in this setting, different annotators tag subsets of the same given dataset (X), and thus we assume there are no input distribution shifts among X (k). Also, we only test sentences in the same domain such that the distribution in target corpus T is the same as well. That is, the marginal distribution of target corpus PT (x) is the same with that for each individual source dataset, i.e. PT (x) = Pk(x). However, due to imperfectness of the annotations in each source, Pk(y|x) is shifted from the underlying truth P(y|x) (illustrated in the top-left part of Fig. 1). 
The multi-source learning objective here is to learn a model P_T(y|x) for supporting inference on any new sentences in the same domain. Application II: Unsupervised Cross-Domain Model Adaptation. We assume there are annotations available in several source domains, but not in an unseen target domain. We assume that the input distributions P(x) in the different source domains X^(k) vary considerably, so such annotations can hardly be adapted for training a target-domain model. That is, the prediction distribution of each domain model, P_k(y|x), is close to the underlying truth distribution P(y|x) only when x ∈ X^(k). For target corpus sentences x ∈ T, such a source model P_k(y|x) again differs from the underlying ground truth for the target domain, P_T(y|x), and can be seen as an imperfect annotator. Our objective in this setting is also to jointly model P_T(y, x) while noting that there are significant domain shifts between T and any other X^(k). 4 Consensus Network In this section, we present our two-phase framework CONNET for multi-source sequence labeling. As shown in Figure 2, our proposed framework first uses a multi-task learning schema with a special objective to decouple annotator representations as different parameters of a transformation around the CRF layer. This decoupling phase (Section 4.2) separates the model parameters into a set of annotator-invariant model parameters and a set of annotator-specific representations. Secondly, the dynamic aggregation phase (Section 4.3) learns to contextually utilize the annotator representations with a lightweight attention mechanism to find the most suitable transformation for each sentence, so that the model can achieve a context-aware consensus among all sources. The inference process is described in Section 4.4. Figure 2: Overview of the CONNET framework. The decoupling phase constructs the shared model (yellow) and source-specific matrices (blue). The aggregation phase dynamically combines crowd components into a consensus representation (blue) by a context-aware attention module (red) for each sentence x. 4.1 The Base Model: BLSTM-CRF Many recent sequence labeling frameworks (Ma and Hovy, 2016b; Misawa et al., 2017) share a very basic structure: a bidirectional LSTM network followed by a CRF tagging layer (i.e., BLSTM-CRF). The BLSTM encodes an input sequence x = {x_1, x_2, ..., x_n} into a sequence of hidden state vectors h_{1:n}. The CRF takes the hidden state vectors as input and computes an emission score matrix U ∈ R^{n×L}, where L is the size of the tag set. It also maintains a trainable transition matrix M ∈ R^{L×L}. Here U_{i,j} is the score of labeling the i-th word in the input sequence x with the tag whose id is j ∈ {1, 2, ..., L}, and M_{i,j} is the transition score from the i-th tag to the j-th tag. The CRF further computes the score s of a predicted tag sequence y = {y_1, y_2, ..., y_T} as
s(x, y) = \sum_{t=1}^{T} \left( U_{t, y_t} + M_{y_{t-1}, y_t} \right),   (1)
and the tag sequence y then follows the conditional distribution
P(y|x) = \frac{\exp s(x, y)}{\sum_{y' \in Y_x} \exp s(x, y')}.   (2)
4.2 The Decoupling Phase: Learning Annotator Representations To decouple annotator-specific biases in the annotations, we represent them as transformations of the emission scores and transition scores, respectively.
Specifically, we learn a matrix A^(k) ∈ R^{L×L} for each imperfect annotator k and apply this matrix as a transformation of U and M as follows:
s^{(k)}(x, y) = \sum_{t=1}^{T} \left[ (U A^{(k)})_{t, y_t} + (M A^{(k)})_{y_{t-1}, y_t} \right].   (3)
From this transformation, we can see that the original score function s in Eq. 1 becomes a source-specific computation. The original emission and transition score matrices U and M are still shared by all the annotators, while both are transformed by the matrix A^(k) for the k-th annotator. While training the model parameters in this phase, we follow a multi-task learning schema. That is, we share the model parameters for the BLSTM and the CRF (including W, b, M), while updating A^(k) only with examples in S_k = {X^(k), Y^(k)}. The learning objective is to minimize the negative log-likelihood of all source annotations:
L = -\log \sum_{k=1}^{K} \sum_{i=1}^{|X^{(k)}|} P(y_i^{(k)} | x_i^{(k)}),   (4)
P(y_i^{(k)} | x_i^{(k)}) = \frac{\exp s^{(k)}(x_i^{(k)}, y_i^{(k)})}{\sum_{y'} \exp s^{(k)}(x_i^{(k)}, y')}.   (5)
The assumption on the annotator representation A^(k) is that it can model the pattern of annotation bias. Each annotator can be seen as a noisy version of the shared model. For the k-th annotator, A^(k) models the noise both from labeling the current word and from transferring from the previous label. Specifically, each entry A^(k)_{i,j} captures the probability of mistakenly labeling the i-th tag as the j-th tag. In other words, the base sequence labeling model in Sec. 4.1 learns the basic consensus knowledge, while the annotator-specific components add their own understanding to the predictions. 4.3 The Aggregation Phase: Dynamically Reaching Consensus In the second phase, our proposed network learns a context-aware attention module for a consensus representation, supervised by combined predictions on the target data. For each sentence in the target data T, these predictions are combined by weighted voting. The weight of each source is its normalized F1 score on the training set. Through weighted voting on such augmented labels over all source sentences X, we can find a good approximation of the underlying true labels. For better generalization and higher speed, an attention module is trained to estimate the relevance of each source to the target under the supervision of the generated labels. Specifically, we compute the sentence embedding by concatenating the last hidden states of the forward LSTM and the backward LSTM, i.e., h^{(i)} = [\overrightarrow{h}^{(i)}_T; \overleftarrow{h}^{(i)}_0]. The attention module takes the sentence embedding as input and outputs a normalized weight for each source:
q_i = \mathrm{softmax}(Q h^{(i)}), \quad Q \in R^{K \times 2d},   (6)
where d is the size of each LSTM hidden state. The source-specific matrices {A^(k)}_{k=1}^{K} are then aggregated into a consensus representation A^*_i for sentence x_i ∈ X by
A^*_i = \sum_{k=1}^{K} q_{i,k} A^{(k)}.   (7)
In this way, the consensus representation contains more information about the sources that are more related to the current sentence. It also alleviates the contradiction problem among sources, because it can weigh multiple sources with different emphases. Since only an attention model with weight matrix Q needs to be trained, the amount of computation is relatively small. We assume the base model and the annotator representations are well trained in the previous phase. The main objective in this phase is to learn how to select the most suitable annotators for the current sentence. 4.4 Parameter Learning and Inference CONNET learns parameters through the two phases described above. In the decoupling phase, each instance from source S_k is used for training the base sequence labeling model and its representation A^(k); a minimal code sketch of both phases is given below.
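To make Eqs. (3), (6) and (7) concrete, here is a minimal PyTorch-style sketch of the two components. It is not the authors' released implementation: the function names, tensor shapes, and the omission of a start-of-sequence transition are illustrative assumptions.
```python
import torch
import torch.nn.functional as F

def source_score(U: torch.Tensor, M: torch.Tensor, A_k: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Eq. (3): score of tag sequence y under source k.
    U: [T, L] emission scores, M: [L, L] transitions, A_k: [L, L] source matrix, y: [T] tag ids."""
    UA, MA = U @ A_k, M @ A_k                   # source-specific emission / transition scores
    emit = UA[torch.arange(len(y)), y].sum()    # sum_t (U A^(k))_{t, y_t}
    trans = MA[y[:-1], y[1:]].sum()             # sum_t (M A^(k))_{y_{t-1}, y_t}; start transition omitted
    return emit + trans

def consensus_matrix(h: torch.Tensor, Q: torch.Tensor, A: torch.Tensor) -> torch.Tensor:
    """Eqs. (6)-(7): attention over the K sources, then a weighted sum of their matrices.
    h: [2d] sentence embedding, Q: [K, 2d] attention parameters, A: [K, L, L] source matrices."""
    q = F.softmax(Q @ h, dim=0)                 # q_i = softmax(Q h^(i)), shape [K]
    return torch.einsum("k,kij->ij", q, A)      # A*_i = sum_k q_{i,k} A^(k), shape [L, L]
```
In the decoupling phase only s^(k) with the annotator's own A^(k) is used, while at inference the consensus matrix A^*_i would take the place of A^(k) in the same scoring function.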
In the aggregation phase, we use aggregated predictions from the first phase to learn a lightweight attention module. For each instance in the target corpus xi ∈T , we calculate its embedding hi from BLSTM hidden states. With these sentence embeddings, the context-aware attention module assigns weight qi to each source and dynamically aggregates source-specific representations {A(k)} for inferring ˆyi. In the inference process, only the consolidated consensus matrix A∗ i is applied to the base sequence learning model. In this way, more specialist knowledge helps to deal with more complex instances. 4.5 Model Application The proposed model can be applied to two practical multi-sourcing settings: learning with crowd annotations and unsupervised cross-domain model adaptation. In the crowd annotation learning setting, the training data of the same domain is annotated by multiple noisy annotators, and each annotator is treated as a source. In the decoupling phase, the model is trained on noisy annotations, and in the aggregation phase, it is trained with combined predictions on the training set. In the cross-domain setting, the model has access to unlabeled training data of the target domain and clean labeled data of multiple source domains. Each domain is treated as a source. In the decoupling phase, the model is trained on source domains, and in the aggregation phase, the model is trained on combined predictions on the training data of the target domain. Our framework can also extend to new tasks other than sequence labeling and cope with different encoders. We will demonstrate this ability in experiments. Our method is also incorporated as a feature for controlling the quality of crowd-annotation in annotation frameworks such as AlpacaTag (Lin et al., 2019) and LEAN-LIFE (Lee et al., 2020). 5 Experiments We evaluate CONNET in the two aforementioned settings of multi-source learning: learning with crowd annotations and unsupervised cross-domain model adaptation. Additionally, to demonstrate the generalization of our framework, we also test our method on sequence labeling with transformer encoder in Appendix B and text classification with MLP encoder in Section 5.5. 5.1 Datasets Crowd-Annotation Datasets. We use crowdannotation datasets based on the 2003 CoNLL shared NER task (Tjong Kim Sang and De Meulder, 2003). The real-world datasets, denoted as AMT, are collected by Rodrigues et al. (2014) using Amazon’s Mechanical Turk where F1 scores of annotators against the ground truth vary from 17.60% to 89.11%. Since there is no development set in AMT, we also follow Nguyen et al. (2017) to use the AMT training set and CoNLL 2003 development and test sets, denoted as AMTC. Overlapping sentences are removed in the training set, which is ignored in that work. Additionally, we construct two sets of simulated datasets to investigate the quality and quantity of annotators. To simulate the behavior of a non-expert annotator, a CRF model is trained on a small subset of training data and generates predictions on the whole set. Because of the limited size of training data, each model would have a bias to certain patterns. Cross-Domain Datasets. In this setting, we investigate three NLP tasks: POS tagging, NER and text classification. 
For POS tagging task, we use the GUM portion (Zeldes, 2017) of Universal Dependencies (UD) v2.3 corpus with 17 tags and 7 2139 Methods AMTC AMT Precision(%) Recall(%) F1-score(%) Precision(%) Recall(%) F1-score(%) CONCAT-SLM 85.95(±1.00) 57.96(±0.26) 69.23(±0.13) 91.12(±0.57) 55.41(±2.66) 68.89(±1.92) MVT-SLM 84.78(±0.66) 62.50(±1.36) 71.94(±0.66) 86.96(±1.22) 58.07(±0.11) 69.64(±0.31) MVS-SLM 84.76(±0.50) 61.95(±0.32) 71.57(±0.04) 86.95(±1.12) 56.23(±0.01) 68.30(±0.33) DS-SLM (Nguyen et al., 2017) 72.30∗ 61.17∗ 66.27∗ HMM-SLM (Nguyen et al., 2017) 76.19∗ 66.24∗ 70.87∗ MTL-MVT (Wang et al., 2018) 81.81(±2.34) 62.51(±0.28) 70.87(±1.06) 88.88(±0.25) 65.04(±0.80) 75.10(±0.44) MTL-BEA (Rahimi et al., 2019) 85.72(±0.66) 58.28(±0.43) 69.39(±0.52) 77.56(±2.23) 67.23(±0.72) 72.01(±0.85) CRF-MA (Rodrigues et al., 2014) 49.40∗ 85.60∗ 62.60∗ Crowd-Add (Nguyen et al., 2017) 85.81(±1.53) 62.15(±0.18) 72.09(±0.42) 89.74(±0.10) 64.50(±1.48) 75.03(±1.02) Crowd-Cat (Nguyen et al., 2017) 85.02(±0.98) 62.73(±1.10) 72.19(±0.37) 89.72(±0.47) 63.55(±1.20) 74.39(±0.98) CL-MW (Rodrigues and Pereira, 2018) 66.00∗ 59.30∗ 62.40∗ CONNET (Ours) 84.11(±0.71) 68.61(±0.03) 75.57(±0.27) 88.77(±0.25) 72.79(±0.04) 79.99(±0.08) Gold (Upper Bound) 89.48(±0.32) 89.55(±0.06) 89.51(±0.21) 92.12(±0.31) 91.73(±0.09) 91.92(±0.21) Table 1: Performance on real-world crowd-sourced NER datasets. The best score in each column excepting Gold is marked bold. * indicates number reported by the paper. domains: academic, bio, fiction, news, voyage, wiki, and interview. For NER task, we select the English portion of the OntoNotes v5 corpus (Hovy et al., 2006). The corpus is annotated with 9 named entities with data from 6 domains: broadcast conversation (bc), broadcast news (bn), magazine (mz), newswire (nw), pivot text (pt), telephone conversation (tc), and web (web). MultiDomain Sentiment Dataset (MDS) v2.0 (Blitzer et al., 2007) is used for text classification, which is built on Amazon reviews from 4 domains: books, dvd, electronics, and kitchen. Since the dataset only contains word frequencies for each review without raw texts, we follow the setting in Chen and Cardie (2018) considering 5,000 most frequent words and use the raw counts as the feature vector for each review. 5.2 Experiment Setup For sequence labeling tasks, we follow Liu et al. (2018) to build the BLSTM-CRF architecture as the base model. The dimension of characterlevel, word-level embeddings and LSTM hidden layer are set as 30, 100 and 150 respectively. For text classification, each review is represented as a 5000-d vector. We use an MLP with a hidden size of 100 for encoding features and a linear classification layer for predicting labels. The dropout with a probability of 0.5 is applied to the nonrecurrent connections for regularization. The network parameters are updated by stochastic gradient descent (SGD). The learning rate is initialized as 0.015 and decayed by 5% for each epoch. The training process stops early if no improvements in 15 continuous epochs and selects the best model on the development set. For the dataset without a development set, we report the performance on the 50-th epoch. For each experiment, we report the average performance and standard variance of 3 runs with different random initialization. 5.3 Compared Methods We compare our models with multiple baselines, which can be categorized in two groups: wrapper methods and joint models. 
To demonstrate the theoretical upper bound of performance, we also train the base model using ground-truth annotations in the target domain (Gold). A wrapper method consists of a label aggregator and a deep learning model. These two components can be combined in two ways: (1) aggregating labels on the crowd-sourced training set and then feeding the generated labels to a Sequence Labeling Model (SLM) (Liu et al., 2017); (2) feeding multi-source data to a Multi-Task Learning (MTL) model (Wang et al., 2018) and then aggregating the multiple predicted labels. We investigate multiple label aggregation strategies. CONCAT considers all crowd annotations as gold labels. MVT does majority voting on the token level, i.e., the majority of the labels {y^k_{i,j}} is selected as the gold label for each token x_{i,j}. MVS is conducted on the sequence level, addressing the problem of violating Begin/In/Out (BIO) rules. DS (Dawid and Skene, 1979), HMM (Nguyen et al., 2017) and BEA (Rahimi et al., 2019) induce consensus labels with probabilistic models. In contrast with wrapper methods, joint models incorporate multi-source data within the structure of sequential taggers and jointly model all individual annotators. CRF-MA models CRFs with Multiple Annotators by an EM algorithm (Rodrigues et al., 2014). Figure 3: Visualizations of (a) the expertise of annotators; (b) attention weights for sample sentences. More cases and details are described in Appendix A.1. Nguyen et al. (2017) augments the LSTM architecture with crowd vectors. These crowd components are element-wise added to the tag scores (Crowd-Add) or concatenated to the output of the hidden layer (Crowd-Cat). These two methods are the most similar to our decoupling phase. We implemented them and obtained better results than reported. CL-MW applies a crowd layer to a CNN-based deep learning framework (Rodrigues and Pereira, 2018). Tri-Training uses bootstrapping with a multi-task tri-training approach for unsupervised one-to-one domain adaptation (Saito et al., 2017; Ruder and Plank, 2018). 5.4 Learning with Crowd Annotations Performance on real-world datasets. Tab. 1 shows the performance of the aforementioned methods and our CONNET on the two real-world datasets, i.e., AMT and AMTC (see footnote 2). We can see that CONNET significantly outperforms all other methods on both datasets in F1 score, which shows its effectiveness in dealing with noisy annotations for higher-quality labels. Although CONCAT-SLM achieves the highest precision, it suffers from low recall. Most existing methods have this high-precision but low-recall problem. One possible reason is that they try to find the latent ground truth and throw away illuminating annotator-specific information. As a result, only simple mentions can be classified with great certainty, while difficult mentions fail to be identified without sufficient knowledge. In comparison, CONNET pools information from all annotations and focuses on matching knowledge to make predictions. This enables the model to identify more mentions and obtain higher recall. Case study. It is enlightening to analyze whether the model decides the importance of annotators given a sentence. Fig.
3 visualizes the test F1 score of each annotator and the attention weights q_i from Eq. 6 for 4 sampled sentences containing different entity types. Obviously, the 2nd sample sentence, which contains ORG, has higher attention weights on the 1st, 5th and 33rd annotators, who are best at labeling ORG. More details and cases are shown in Appendix A.1. (Footnote 2: We tried our best to re-implement the baseline methods for all datasets, and left the results blank when the re-implementation did not show results consistent with the original papers.) Figure 4: Performance of CONNET variants of the decoupling phase (DP) and the aggregation phase (AP). Figure 5: Performance on simulated crowd-sourced NER data with (a) 5 annotators with different reliability levels; (b) different numbers of annotators with reliability r = 1/50. Ablation study. We also investigate multiple variants of the two phases on the AMT dataset, shown in Fig. 4. We explore 3 approaches to incorporating source-specific representations in the decoupling phase (DP). CRF denotes the traditional approach of Eq. 1, while DP(1+2) is our method of Eq. 3. DP(1) only applies the source representations A^(k) to the emission scores U, while DP(2) only transforms the transition matrix M. We observe that both variants improve the result. The underlying model keeps more consensus knowledge if we extract annotator-specific bias on both sentence encoding and label transition. We also compare 4 methods of generating supervision targets in the aggregation phase (AP).
OMV uses ma2141 Task & Corpus Multi-Domain POS Tagging: Universal Dependencies v2.3 - GUM Target Domain academic bio fiction news voyage wiki interview AVG Acc(%) CONCAT 92.68 92.12 93.05 90.79 92.38 92.32 91.44 92.11(±0.07) MTL-MVT (Wang et al., 2018) 92.42 90.59 91.16 89.69 90.75 90.29 90.21 90.73(±0.29) MTL-BEA (Rahimi et al., 2019) 92.87 91.88 91.90 91.03 91.67 91.31 91.29 91.71(±0.06) Crowd-Add (Nguyen et al., 2017) 92.58 91.91 91.50 90.73 91.74 90.47 90.61 91.36(±0.14) Crowd-Cat (Nguyen et al., 2017) 92.71 91.71 92.48 91.15 92.35 91.97 91.22 91.94(±0.08) Tri-Training (Ruder and Plank, 2018) 92.84 92.15 92.51 91.40 92.35 91.29 91.00 91.93(±0.01) CONNET 92.97 92.25 93.15 91.06 92.52 92.74 91.66 92.33(±0.17) Gold (Upper Bound) 92.64 93.10 93.15 91.33 93.09 94.67 92.20 92.88(±0.14) Task & Corpus Multi-Domain NER: OntoNotes v5.0 - English Target Domain nw wb bn tc bc mz AVG F1(%) CONCAT 68.23 32.96 77.25 53.66 72.74 62.61 61.24(±0.92) MTL-MVT (Wang et al., 2018) 65.74 33.25 76.80 53.16 69.77 63.91 60.44(±0.45) MTL-BEA (Rahimi et al., 2019) 58.33 32.62 72.47 47.83 48.99 52.68 52.15(±0.58) Crowd-Add (Nguyen et al., 2017) 45.76 32.51 50.01 26.47 52.94 28.12 39.30(±4.44) Crowd-Cat (Nguyen et al., 2017) 68.95 32.61 78.07 53.41 74.22 65.55 62.14(±0.89) Tri-Training (Ruder and Plank, 2018) 69.68 33.41 79.62 47.91 70.85 68.53 61.67(±0.31) CONNET 71.31 34.06 79.66 52.72 71.47 70.71 63.32(±0.81) Gold (Upper Bound) 84.70 46.98 83.77 52.57 73.05 70.58 68.61(±0.64) Task & Corpus Multi-Domain Text Classification: MDS Target Domain books dvd electronics kitchen AVG Acc(%) CONCAT 75.68 77.02 81.87 83.07 79.41(±0.02) MTL-MVT (Wang et al., 2018) 74.92 74.43 79.33 81.47 77.54(±0.06) MTL-BEA (Rahimi et al., 2019) 74.88 74.60 79.73 82.82 78.01(±0.28) Crowd-Add (Nguyen et al., 2017) 75.72 77.35 81.25 82.90 79.30(±9.21) Crowd-Cat (Nguyen et al., 2017) 76.45 77.37 81.22 83.12 79.54(±0.25) Tri-Training (Ruder and Plank, 2018) 77.58 78.45 81.95 83.17 80.29(±0.02) CONNET 78.75 81.06 84.12 83.45 81.85(±0.04) Gold (Upper Bound) 78.78 82.11 86.21 85.76 83.22(±0.19) Table 2: Performance on cross-domain data The best score (except the Gold) in each column that is significantly (p < 0.05) better than the second best is marked bold, while those are better but not significantly are underlined. jority voting of original annotations, while PMV substitutes them with model prediction learned from DP. AMV extends the model by using all prediction, while AWV uses majority voting weighted by each annotator’s training F1 score. The results show the effectiveness of AWV, which could augment training data and well approximate the ground truth to supervise the attention module for estimating the expertise of annotator on the current sentence. We can also infer labels on the test set by conducting AWV on predictions of the underlying model with each annotator-specific components. However, it leads to heavy computationconsuming and unsatisfying performance, whose test F1 score is 77.35(±0.08). We can also train a traditional BLSTM-CRF model with the same AMV labels. Its result is 78.93(±0.13), which is lower than CONNET and show the importance of extracted source-specific components. Performance on simulated datasets. To analyze the impact of annotator quality, we split the origin train set into z folds and each fold could be used to train a CRF model whose reliability could be represented as r = 1/z assuming a model with less training data would have stronger bias and less generalization. 
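A rough sketch of this simulation protocol follows. The assumptions here are that the folds are disjoint equal-sized splits and that the tagger-training routine is passed in as a placeholder (train_fn); neither detail is specified in the text above.
```python
import random
from typing import Callable, List, Sequence

def simulate_annotators(train_set: Sequence, z: int, train_fn: Callable,
                        n_annotators: int = 5, seed: int = 0) -> List:
    """Split the training set into z folds and train one tagger per sampled fold,
    so each simulated annotator has reliability r = 1/z."""
    rng = random.Random(seed)
    data = list(train_set)
    rng.shuffle(data)
    folds = [data[i::z] for i in range(z)]             # z disjoint folds
    chosen = rng.sample(folds, min(n_annotators, z))   # pick folds to act as annotators
    return [train_fn(fold) for fold in chosen]         # train_fn: e.g., a CRF trainer
```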
We tried 5 settings where z = {5, 10, 15, 30, 50} and randomly select 5 folds for each setting. When the reliability level is too low, i.e. 1/50, only the base model is used for prediction without annotator representations. Shown in Fig. 5(a), CONNET achieves significant improvements over MVT-SLM and competitive performance as Crowd-Cat, especially when annotators are less reliable. Regarding the annotator quantity, we split the train set into 50 subsets (r = 1/50) and randomly select {5, 10, 15, 30, 50} folds for simulation. Fig. 5(b) shows CONNET is superior to baselines and able to well deal with many annotators while there is no obvious relationship between the performance and annotator quantity in baselines. We can see the performance of our model 2142 Figure 6: Heatmap of averaged attention scores from each source domain to each target domain. increases as the number of annotators and, regardless of the number of annotators, our method consistently outperforms than other baselines. 5.5 Cross-Domain Adaptation Performance The results of each task on each domain are shown in Tab. 2. We can see that CONNET performs the best on most of the domains and achieves the highest average score for all tasks. We report the accuracy for POS tagging and classification, and the chunk-level F1 score for NER. We can see that CONNET achieves the highest average score on all tasks. MTL-MVT is similar to our decoupling phase and performs much worse. Naively doing unweighted voting does not work well. The attention can be viewed as implicitly doing weighted voting on the feature level. MTL-BEA relies on a probabilistic model to conduct weighted voting over predictions, but unlike our approach, its voting process is independent from the input context. It is probably why our model achieves higher scores. This demonstrates the importance of assigning weights to domains based on the input sentence. Tri-Training trains on the concatenated data from all sources also performs worse than CONNET, which suggests the importance of a multi-task structure to model the difference among domains. The performance of Crowd-Add is unstable (high standard deviation) and very low on the NER task, because the size of the crowd vectors is not controllable and thus may be too large. On the other hand, the size of the crowd vectors in Crowd-Cat can be controlled and tuned. These two methods leverage domain-invariant knowledge only but not domain-specific knowledge and thus does not have comparable performance. 5.6 Analyzing Learned Attention We analyzed the attention scores generated by the attention module on the OntoNotes dataset. For each sentence in the target domain we collected the attention score of each source domain, and finally the attention scores are averaged for each source-target pair. Fig. 6 shows all the sourceto-target average attention scores. We can see that some domains can contribute to other related domains. For example, bn (broadcast news) and nw (newswire) are both about news and they contribute to each other; bn and bc (broadcast conversation) are both broadcast and bn contributes to bc; bn and nw both contributes to mz (magzine) probably because they are all about news; wb (web) and tc (telephone conversation) almost make no positive contribution to any other, which is reasonable because they are informal texts compared to others and they are not necessarily related to the other. Overall the attention scores can make some sense. 
It suggests that the attention is aware of relations between different domains and can contribute to the model. 6 Conclusion In this paper, we present CONNET for learning a sequence tagger from multi-source supervision. It could be applied in two practical scenarios: learning with crowd annotations and cross-domain adaptation. In contrast to prior works, CONNET learns fine-grained representations of each source which are further dynamically aggregated for every unseen sentence in the target data. Experiments show that our model is superior to previous crowd-sourcing and unsupervised domain adaptation sequence labeling models. The proposed learning framework also shows promising results on other NLP tasks like text classification. Acknowledgements This research is based upon work supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via Contract No. 2019-19051600007, United States Office Of Naval Research under Contract No. N660011924033, and NSF SMA 18-29268. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government. We would like to thank all the collaborators in USC INK research lab for their constructive feedback on the work. 2143 References John Blitzer, Mark Dredze, and Fernando Pereira. 2007. Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In Proc. of ACL, pages 440–447. Xilun Chen and Claire Cardie. 2018. Multinomial adversarial networks for multi-domain text classification. In Proc. of NAACL-HLT, pages 1226–1240, New Orleans, Louisiana. Association for Computational Linguistics. Kevin Clark, Minh-Thang Luong, Christopher D. Manning, and Quoc Le. 2018. Semi-supervised sequence modeling with cross-view training. In Proc. of EMNLP, pages 1914–1925, Brussels, Belgium. Association for Computational Linguistics. A. P. Dawid and A. M. Skene. 1979. Maximum likelihood estimation of observer error-rates using the em algorithm. Journal of the Royal Statistical Society. Series C (Applied Statistics), 28(1):20–28. A. P. Dempster, N. M. Laird, and D. B. Rubin. 1977. Maximum likelihood from incomplete data via the em algorithm. Journal of the Royal Statistical Society. Series B, 39(1):1–38. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proc. of NAACL-HLT, pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Mark Dredze, Partha Pratim Talukdar, and Koby Crammer. 2009. Sequence learning from data with multiple labels. In ECML/PKDD Workshop on Learning from Multi-Label Data. Basura Fernando, Tatiana Tommasi, and Tinne Tuytelaars. 2015. Joint cross-domain classification and subspace learning for unsupervised adaptation. Pattern Recognition Letters, 65:60–66. Muhammad Ghifary, W Bastiaan Kleijn, Mengjie Zhang, David Balduzzi, and Wen Li. 2016. Deep reconstruction-classification networks for unsupervised domain adaptation. In European Conference on Computer Vision, pages 597–613. Springer. Melody Y. Guan, Varun Gulshan, Andrew M. Dai, and Geoffrey E. Hinton. 2017. Who said what: Modeling individual labelers improves classification. In AAAI. Eduard Hovy, Mitchell Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. 2006. Ontonotes: The 90\% solution. In Proc. 
of NAACL-HLT. John Lafferty. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proc. of ICML, pages 282–289. Morgan Kaufmann. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proc. of NAACL-HLT, pages 260–270, San Diego, California. Association for Computational Linguistics. Ouyu Lan, Su Zhu, and Kai Yu. 2018. Semi-supervised training using adversarial multi-task learning for spoken language understanding. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing, pages 6049–6053. IEEE. Dong-Ho Lee, Rahul Khanna, Bill Yuchen Lin, Jamin Chen, Seyeon Lee, Qinyuan Ye, Elizabeth Boschee, Leonardo Neves, and Xiang Ren. 2020. Leanlife: A label-efficient annotation framework towards learning from explanation. In Proc. of ACL (Demo). Bill Y Lin, Frank Xu, Zhiyi Luo, and Kenny Zhu. 2017. Multi-channel bilstm-crf model for emerging named entity recognition in social media. In Proc. of ACL Workshop, pages 160–165. Bill Yuchen Lin, Dong-Ho Lee, Ming Shen, Ryan Moreno, Xiao Huang, Prashant Shiralkar, and Xiang Ren. 2020. Triggerner: Learning with entity triggers as explanations for named entity recognition. In ACL. Bill Yuchen Lin, Dongho Lee, Frank F. Xu, Ouyu Lan, and Xiang Ren. 2019. Alpacatag: An active learning-based crowd annotation framework for sequence tagging. In Proc. of ACL (Demo). Bill Yuchen Lin and Wei Lu. 2018. Neural adaptation layers for cross-domain named entity recognition. In Proc. of EMNLP. Liyuan Liu, Xiang Ren, Jingbo Shang, Jian Peng, and Jiawei Han. 2018. Efficient Contextualized Representation: Language Model Pruning for Sequence Labeling. In Proc. of EMNLP. Liyuan Liu, Xiang Ren, Qi Zhu, Shi Zhi, Huan Gui, Heng Ji, and Jiawei Han. 2017. Heterogeneous supervision for relation extraction: A representation learning approach. In Proc. of EMNLP, pages 46– 56. Association for Computational Linguistics. Mingsheng Long, Jianmin Wang, Guiguang Ding, Jiaguang Sun, and Philip S Yu. 2014. Transfer joint matching for unsupervised domain adaptation. In Proc. of CVPR, pages 1410–1417. Jin Kiat Low, Hwee Tou Ng, and Wenyuan Guo. 2005. A maximum entropy approach to chinese word segmentation. In Proc. of SIGHAN Workshop. Xuezhe Ma and Eduard Hovy. 2016a. End-to-end sequence labeling via bi-directional LSTM-CNNsCRF. In Proc. of ACL, pages 1064–1074, Berlin, Germany. Association for Computational Linguistics. 2144 Xuezhe Ma and Eduard Hovy. 2016b. End-to-end sequence labeling via bi-directional lstm-cnns-crf. In Proc. of ACL, pages 1064–1074. Association for Computational Linguistics. Tzu Ming Harry Hsu, Wei Yu Chen, Cheng-An Hou, Yao-Hung Hubert Tsai, Yi-Ren Yeh, and Yu-Chiang Frank Wang. 2015. Unsupervised domain adaptation with imbalanced cross-domain data. In Proc. of ICCV, pages 4121–4129. Shotaro Misawa, Motoki Taniguchi, Yasuhide Miura, and Tomoko Ohkuma. 2017. Character-based bidirectional LSTM-CRF with words and characters for Japanese named entity recognition. In Proc. of ACL Workshop, pages 97–102, Copenhagen, Denmark. Association for Computational Linguistics. David Nadeau and Satoshi Sekine. 2007. A survey of named entity recognition and classification. Lingvisticae Investigationes, 30(1):3–26. An Thanh Nguyen, Byron Wallace, Junyi Jessy Li, Ani Nenkova, and Matthew Lease. 2017. Aggregating and predicting sequence labels from crowd annotations. In Proc. of ACL, pages 299–309. 
Association for Computational Linguistics. Afshin Rahimi, Yuan Li, and Trevor Cohn. 2019. Massively multilingual transfer for ner. In Proc. of ACL, pages 151–164. Adwait Ratnaparkhi. 1996. A maximum entropy model for part-of-speech tagging. In Proc. of EMNLP. Alexander Ratner, Christopher De Sa, Sen Wu, Daniel Selsam, and Christopher R. 2016. Data programming: Creating large training sets, quickly. Advances in neural information processing systems, 29. Vikas C Raykar, Shipeng Yu, Linda H Zhao, Gerardo Hermosillo Valadez, Charles Florin, Luca Bogoni, and Linda Moy. 2010. Learning from crowds. Journal of Machine Learning Research, 11(Apr):1297–1322. Filipe Rodrigues, Francisco Pereira, and Bernardete Ribeiro. 2014. Sequence labeling with multiple annotators. Machine Learning, 95(2):165–181. Filipe Rodrigues and Francisco C Pereira. 2018. Deep learning from crowds. In Thirty-Second AAAI Conference on Artificial Intelligence. Sebastian Ruder and Barbara Plank. 2018. Strong baselines for neural semi-supervised learning under domain shift. In Proc. of ACL, pages 1044– 1054, Melbourne, Australia. Association for Computational Linguistics. Kuniaki Saito, Yoshitaka Ushiku, and Tatsuya Harada. 2017. Asymmetric tri-training for unsupervised domain adaptation. In Proc. of ICML, pages 2988– 2997. JMLR. org. Edwin Simpson, Jonas Pfeiffer, and Iryna Gurevych. 2020. Low resource sequence tagging with weak labels. In AAAI 2020. Rion Snow, Brendan O’Connor, Daniel Jurafsky, and Andrew Y. Ng. 2008. Cheap and fast—but is it good?: Evaluating non-expert annotations for natural language tasks. In Proc. of EMNLP, EMNLP ’08, pages 254–263, Stroudsburg, PA, USA. Association for Computational Linguistics. Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proc. of NAACL-HLT, pages 142–147. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998–6008. Xuan Wang, Yu Zhang, Xiang Ren, Yuhao Zhang, Marinka Zitnik, Jingbo Shang, Curtis Langlotz, and Jiawei Han. 2018. Cross-type biomedical named entity recognition with deep multi-task learning. Bioinformatics, page bty869. Zihan Wang, Jingbo Shang, Liyuan Liu, Lihao Lu, Jiacheng Liu, and Jiawei Han. 2019. Crossweigh: Training named entity tagger from imperfect annotations. In EMNLP/IJCNLP. Yi Yang and Jacob Eisenstein. 2015. Unsupervised multi-domain adaptation with feature embeddings. In Proc. of NAACL-HLT, pages 672–682. Amir Zeldes. 2017. The gum corpus: creating multilayer resources in the classroom. Language Resources and Evaluation, 51(3):581–612. 2145 A Analysis of ConNet with BLSTM Encoder A.1 Case study on learning with crowd annotations To better understand the effect and benefit of CONNET, we do some case study on AMTC realworld dataset with 47 annotators. We look into some more instances to investigate the ability of attention module to find right annotators in Fig. 7 and Tab. 3. Sentence 1-12 contains a specific entity type respectively while 13-16 contains multiple different entities. Compared with expertise of annotators, we can see that the attention module would give more weight on annotators who have competitive performance and preference on the included entity type. 
Although top selected annotators for ORG has relatively lower expertise on ORG than PER and LOC, they are actually the top three annotators with highest expertise on ORG. B Result of ConNet with Transformer Encoder To demonstrate the generalization of our framework, we re-implement CONNET and some baselines (MTV-SLM, Crowd-Add, Gold) with Transformer-CRF as the base model. Specifically, the base model takes Transformer as the encoder for CRF, which has shown its effectiveness in many NLP tasks (Vaswani et al., 2017; Devlin et al., 2019). Transformer models sequences with self-attention and eliminates all recurrence. Following the experimental settings from (Vaswani et al., 2017), we set the number of heads for multihead attention as 8, the dimension of keys and values as 64, and the hidden size of the feed-forward layers as 1024. We conduct experiments with crowd-annotation dataset AMTC on NER task and cross-domain dataset UD on POS task, which are described in Section 5.1. Results are shown in Table 4. We can see our model outperforms over other baselines in both tasks and applications. 1 Defender [PER Hassan Abbas] rose to intercept a long ball into the area in the 84th minute but only managed to divert it into the top corner of [PER Bitar] ’s goal . 2 [ORG Plymouth] 4 [ORG Exeter] 1 3 Hosts [LOC UAE] play [LOC Kuwait] and [LOC South Korea] take on [LOC Indonesia] on Saturday in Group A matches . 4 The former [MISC Soviet] republic was playing in an [MISC Asian Cup] finals tie for the first time . 5 [PER Bitar] pulled off fine saves whenever they did . 6 [PER Coste] said he had approached the player two months ago about a comeback . 7 [ORG Goias] 1 [ORG Gremio] 3 8 [ORG Portuguesa] 1 [ORG Atletico Mineiro] 0 9 [LOC Melbourne] 1996-12-06 10 On Friday for their friendly against [LOC Scotland] at [LOC Murrayfield] more than a year after the 30year-old wing announced he was retiring following differences over selection . 11 Scoreboard in the [MISC World Series] 12 Cricket - [MISC Sheffield Shield] score . 13 “ He ended the [MISC World Cup] on the wrong note , ” [PER Coste] said . 14 Soccer [ORG Leeds] ’ [PER Bowyer] fined for part in fast-food fracas . 15 [ORG Rugby Union] - [PER Cuttitta] back for [LOC Italy] after a year . Table 3: Sample instances in Fig. 3 and Fig. 7 with NER annotations including PER (red), ORG (blue), LOC (violet) and MISC (orange). 2146 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 Overall PER ORG LOC MISC (a) 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 Annotator ID snt5 snt6 snt7 snt8 snt9 snt10 snt11 snt12 snt13 snt14 snt15 (b) 0.0 0.2 0.4 0.6 0.8 −0.5 0.0 0.5 1.0 1.5 2.0 Figure 7: Visualizations of (a) the expertise of annotators; (b) attention weights for additional sample sentences to Fig. 3. Details of samples are described in Tab. 3. Methods AMTC UD Precision(%) Recall(%) F1-score(%) Accuracy(%) MVT-SLM 72.21(±1.63) 51.72(±3.58) 60.21(±1.87) 87.23(±0.51) Crowd-Add (Nguyen et al., 2017) 75.32(±1.41) 50.80(±0.30) 60.68(±0.67) 88.20(±0.36) CONNET (Ours) 76.86(±0.33) 56.43(±3.32) 65.05(±2.32) 89.27(±0.31) Gold (Upper Bound) 81.24(±1.25) 80.52(±0.37) 80.87(±0.79) 90.45(±0.71) Table 4: Performance of methods with Transformer-CRF as the base model on crowd-annotation NER dataset AMTC and cross-domain POS dataset UD.
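As a concrete reading of the Appendix B configuration (8 attention heads, key/value dimension 64, feed-forward size 1024), the sketch below instantiates a generic Transformer encoder in PyTorch. The model width d_model = 8 x 64 = 512, the number of layers, the dropout rate, and the tag-set size are assumptions not stated above, and the CRF layer on top is omitted.
```python
import torch.nn as nn

# Assumed values not stated in the text above
d_model = 8 * 64      # heads x key/value dimension, following Vaswani et al. (2017)
num_layers = 6        # assumption; the paper does not report the layer count here
num_tags = 9          # placeholder tag-set size (e.g., BIO tags for 4 entity classes plus O)

encoder_layer = nn.TransformerEncoderLayer(
    d_model=d_model, nhead=8, dim_feedforward=1024, dropout=0.1)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
emission_proj = nn.Linear(d_model, num_tags)   # hidden states -> CRF emission scores U
```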
2020
193
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2147–2157 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2147 MixText: Linguistically-Informed Interpolation of Hidden Space for Semi-Supervised Text Classification Jiaao Chen Georgia Tech [email protected] Zichao Yang CMU [email protected] Diyi Yang Georgia Tech [email protected] Abstract This paper presents MixText, a semisupervised learning method for text classification, which uses our newly designed data augmentation method called TMix. TMix creates a large amount of augmented training samples by interpolating text in hidden space. Moreover, we leverage recent advances in data augmentation to guess low-entropy labels for unlabeled data, hence making them as easy to use as labeled data. By mixing labeled, unlabeled and augmented data, MixText significantly outperformed current pre-trained and fined-tuned models and other state-ofthe-art semi-supervised learning methods on several text classification benchmarks. The improvement is especially prominent when supervision is extremely limited. We have publicly released our code at https: //github.com/GT-SALT/MixText. 1 Introduction In the era of deep learning, research has achieved extremely good performance in most supervised learning settings (LeCun et al., 2015; Yang et al., 2016). However, when there is only limited labeled data, supervised deep learning models often suffer from over-fitting (Xie et al., 2019). This strong dependence on labeled data largely prevents neural network models from being applied to new settings or real-world situations due to the need of large amount of time, money, and expertise to obtain enough labeled data. As a result, semi-supervised learning has received much attention to utilize both labeled and unlabeled data for different learning tasks, as unlabeled data is always much easier and cheaper to collect (Chawla and Karakoulas, 2011). This work takes a closer look at semi-supervised text classification, one of the most fundamental tasks in language technology communities. Prior research on semi-supervised text classification can Figure 1: TMix takes in two text samples x and x′ with labels y and y′, mixes their hidden states h and h′ at layer m with weight λ into ˜h, and then continues forward passing to predict the mixed labels ˜y. be categorized into several classes: (1) utilizing variational auto encoders (VAEs) to reconstruct the sentences and predicting sentence labels with latent variables learned from reconstruction such as (Chen et al., 2018; Yang et al., 2017; Gururangan et al., 2019); (2) encouraging models to output confident predictions on unlabeled data for selftraining like (Lee, 2013; Grandvalet and Bengio, 2004; Meng et al., 2018); (3) performing consistency training after adding adversarial noise (Miyato et al., 2019, 2017) or data augmentations (Xie et al., 2019); (4) large scale pretraining with unlabeld data, then finetuning with labeled data (Devlin et al., 2019). Despite the huge success of those models, most prior work utilized labeled and unlabeled data separately in a way that no supervision can transit from labeled to unlabeled data or from unlabeled to labeled data. As a result, most semisupervised models can easily still overfit on the very limited labeled data, despite unlabeled data is 2148 abundant. 
To overcome the limitations, in this work we introduce a new data augmentation method, called TMix (Section 3), inspired by the recent success of Mixup (Gururangan et al., 2019; Berthelot et al., 2019) on image classification. TMix, as shown in Figure 1, takes in two text instances and interpolates them in their corresponding hidden space. Since the combination is continuous, TMix has the potential to create an infinite amount of new augmented data samples, and thus can drastically reduce overfitting. Based on TMix, we then introduce a new semi-supervised learning method for text classification called MixText (Section 4) to explicitly model the relationships between labeled and unlabeled samples, thus overcoming the limitations of previous semi-supervised models stated above. In a nutshell, MixText first guesses low-entropy labels for unlabeled data, then uses TMix to interpolate the labeled and unlabeled data. MixText can facilitate mining implicit relations between sentences by encouraging models to behave linearly in between training examples, and can utilize information from unlabeled sentences while learning on labeled sentences. Meanwhile, MixText exploits several semi-supervised learning techniques to further utilize unlabeled data, including self-target-prediction (Laine and Aila, 2016), entropy minimization (Grandvalet and Bengio, 2004), and consistency regularization (Berthelot et al., 2019; Xie et al., 2019) after back translations.

To demonstrate the effectiveness of our method, we conducted experiments (Section 5) on four benchmark text classification datasets and compared our method with previous state-of-the-art semi-supervised methods, including those built upon models pre-trained with large amounts of unlabeled data, in terms of accuracy on test sets. We further performed ablation studies to demonstrate each component's influence on the models' final performance. Results show that our MixText method significantly outperforms baselines, especially when the given labeled training data is extremely limited.

2 Related Work

2.1 Pre-training and Fine-tuning Framework

The pre-training and fine-tuning framework has achieved huge success on NLP applications in recent years, and has been applied to a variety of NLP tasks (Radford et al., 2018; Chen et al., 2019; Akbik et al., 2019). Howard and Ruder (2018) proposed to pre-train a language model on a large general-domain corpus and fine-tune it on the target task using some novel techniques like discriminative fine-tuning, slanted triangular learning rates, and gradual unfreezing. In this manner, such pre-trained models show excellent performance even with small amounts of labeled data. Pre-training methods are often designed with different objectives such as language modeling (Peters et al., 2018; Howard and Ruder, 2018; Yang et al., 2019b) and masked language modeling (Devlin et al., 2019; Lample and Conneau, 2019). Their performance is also improved by training larger models on more data (Yang et al., 2019b; Liu et al., 2019).

2.2 Semi-Supervised Learning on Text Data

Semi-supervised learning has received much attention in the NLP community (Gururangan et al., 2019; Clark et al., 2018; Yang et al., 2015), as unlabeled data is often plentiful compared to labeled data. For instance, Gururangan et al. (2019); Chen et al. (2018); Yang et al. (2017) leveraged variational auto encoders (VAEs) in a form of sequence-to-sequence modeling for text classification and sequential labeling. Miyato et al.
(2017) utilized adversarial and virtual adversarial training in the text domain by applying perturbations to the word embeddings. Yang et al. (2019a) took advantage of hierarchy structures to utilize supervision from higher-level labels to lower-level labels. Xie et al. (2019) exploited consistency regularization on unlabeled data after back translations and tf-idf word replacements. Clark et al. (2018) proposed cross-view training for unlabeled data, where they used auxiliary prediction modules that see restricted views of the input (e.g., only part of a sentence) and match the predictions of the full model, which sees the whole input.

2.3 Interpolation-based Regularizers

Interpolation-based regularizers (e.g., Mixup) have recently been proposed for supervised learning (Zhang et al., 2017; Verma et al., 2019a) and semi-supervised learning (Berthelot et al., 2019; Verma et al., 2019b) on image data, by overlaying two input images and combining the image labels as virtual training data, and have achieved state-of-the-art performances across a variety of image classification tasks and network architectures. Different variants of mixing methods have also been designed, such as performing interpolations in the input space (Zhang et al., 2017), combining interpolations and cutoff (Yun et al., 2019), and doing interpolations in the hidden space representations (Verma et al., 2019a,c). However, such interpolation techniques have not been explored in the NLP field, because most input space in text is discrete, i.e., one-hot vectors instead of continuous RGB values in images, and text is generally more complex in structure.

2.4 Data Augmentations for Text

When labeled data is limited, data augmentation has been a useful technique to increase the amount of training data. For instance, in computer vision, images are shifted, zoomed in/out, rotated, flipped, distorted, or shaded with a hue (Perez and Wang, 2017) for training data augmentation. But it is relatively challenging to augment text data because of its complex syntactic and semantic structures. Recently, Wei and Zou (2019) utilized synonym replacement, random insertion, random swap and random deletion for text data augmentation. Similarly, Kumar et al. (2019) proposed a new paraphrasing formulation in terms of monotone submodular function maximization to obtain highly diverse paraphrases, and Xie et al. (2019) and Chen et al. (2020) applied back translations (Sennrich et al., 2015) and word replacement to generate paraphrases on unlabeled data for consistency training. Other work investigates noise and its incorporation into semi-supervised named entity classification (Lakshmi Narayan et al., 2019; Nagesh and Surdeanu, 2018).

3 TMix

In this section, we extend Mixup, a data augmentation method originally proposed by Zhang et al. (2017) for images, to text modeling. The main idea of Mixup is very simple: given two labeled data points (x_i, y_i) and (x_j, y_j), where x can be an image and y is the one-hot representation of the label, the algorithm creates virtual training samples by linear interpolation:

\tilde{x} = \mathrm{mix}(x_i, x_j) = \lambda x_i + (1 - \lambda) x_j,   (1)
\tilde{y} = \mathrm{mix}(y_i, y_j) = \lambda y_i + (1 - \lambda) y_j,   (2)

where \lambda \in [0, 1]. The new virtual training samples are used to train a neural network model.
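As an illustration (not part of the original paper), Equations (1)-(2) amount to a one-line interpolation of inputs and one-hot labels; a minimal PyTorch sketch:

```python
import torch

def mixup(x_i, x_j, y_i, y_j, lam):
    """Vanilla Mixup (Eq. 1-2): linear interpolation of inputs and labels."""
    x_tilde = lam * x_i + (1.0 - lam) * x_j
    y_tilde = lam * y_i + (1.0 - lam) * y_j
    return x_tilde, y_tilde

# Example: mix two one-hot labels with lambda = 0.7
y_i = torch.tensor([1.0, 0.0, 0.0])
y_j = torch.tensor([0.0, 1.0, 0.0])
print(mixup(torch.rand(3, 32, 32), torch.rand(3, 32, 32), y_i, y_j, 0.7)[1])
# -> tensor([0.7000, 0.3000, 0.0000])
```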
Mixup can be interpreted in different ways. On one hand, Mixup can be viewed as a data augmentation approach which creates new data samples based on the original training set. On the other hand, it enforces a regularization on the model to behave linearly among the training data. Mixup was demonstrated to work well on continuous image data (Zhang et al., 2017). However, extending it to text seems challenging, since it is infeasible to compute the interpolation of discrete tokens. To this end, we propose a novel method to overcome this challenge: interpolation in textual hidden space. Given a sentence, we often use a multi-layer model like BERT (Devlin et al., 2019) to encode the sentence and obtain semantic representations, based on which final predictions are made. Some prior work (Bowman et al., 2016) has shown that decoding from an interpolation of two hidden vectors generates a new sentence with a mixed meaning of the two original sentences. Motivated by this, we propose to apply interpolations within hidden space as a data augmentation method for text.

For an encoder with L layers, we choose to mix up the hidden representations at the m-th layer, m \in [0, L]. As demonstrated in Figure 1, we first compute the hidden representations of the two text samples separately in the bottom layers. Then we mix up the hidden representations at layer m, and feed the interpolated hidden representations to the upper layers. Mathematically, denote the l-th layer in the encoder network as g_l(\cdot; \theta), so the hidden representation of the l-th layer can be computed as h_l = g_l(h_{l-1}; \theta). For two text samples x_i and x_j, define the 0-th layer as the embedding layer, i.e., h_0^i = W_E x_i and h_0^j = W_E x_j; then the hidden representations of the two samples from the lower layers are:

h_l^i = g_l(h_{l-1}^i; \theta),  l \in [1, m],
h_l^j = g_l(h_{l-1}^j; \theta),  l \in [1, m].

The mixup at the m-th layer and the continued forward pass to the upper layers are defined as:

\tilde{h}_m = \lambda h_m^i + (1 - \lambda) h_m^j,
\tilde{h}_l = g_l(\tilde{h}_{l-1}; \theta),  l \in [m + 1, L].

We call the above method TMix and define the new mixup operation as the whole process of obtaining \tilde{h}_L:

\mathrm{TMix}(x_i, x_j; g(\cdot; \theta), \lambda, m) = \tilde{h}_L.

By using an encoder model g(\cdot; \theta), TMix interpolates textual semantic hidden representations as a type of data augmentation. In contrast with Mixup, which is defined in the data space in Equation 1, TMix depends on an encoder function and hence defines a much broader scope for computing interpolations. For ease of notation, we drop the explicit dependence on g(\cdot; \theta), \lambda and m and denote it simply as TMix(x_i, x_j) in the following sections.
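As an illustration (not from the paper), the TMix forward pass above can be sketched as follows. The `embed`/`layers` interface of the encoder is an assumption for readability, and `lam` is the mixing weight from Equations (1)-(2):

```python
def tmix_forward(embed, layers, x_i, x_j, lam, m):
    """TMix sketch: encode two samples separately up to layer m, interpolate
    their hidden states at layer m with weight lam, then run the remaining
    layers on the mixed representation to obtain the final hidden state."""
    h_i, h_j = embed(x_i), embed(x_j)        # h_0 for both samples
    for layer in layers[:m]:                 # lower layers, run separately
        h_i, h_j = layer(h_i), layer(h_j)
    h = lam * h_i + (1.0 - lam) * h_j        # mix at layer m
    for layer in layers[m:]:                 # upper layers, run on the mix
        h = layer(h)
    return h                                 # corresponds to \tilde{h}_L
```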
In our experiments, we sample the mix parameter \lambda from a Beta distribution for every batch to perform the interpolation:

\lambda \sim \mathrm{Beta}(\alpha, \alpha), \quad \lambda = \max(\lambda, 1 - \lambda),

in which \alpha is the hyper-parameter that controls the distribution of \lambda. In TMix, we mix the labels in the same way as in Equation 2 and then use the pairs (\tilde{h}_L, \tilde{y}) as inputs for downstream applications.

Instead of performing mixup at random input layers like Verma et al. (2019a), choosing which layers of the hidden representations to mix is an interesting question to investigate. In our experiments, we use 12-layer BERT-base (Devlin et al., 2019) as our encoder model. Recent work (Jawahar et al., 2019) has studied what BERT learns at different layers. Specifically, the authors found that layers {3,4,5,6,7,9,12} have the most representation power in BERT and that each layer captures a different type of information, ranging from surface and syntactic to semantic representations of text. For instance, the 9th layer has predictive power in semantic tasks like checking random swapping of coordinated clausal conjuncts, while the 3rd layer performs best in surface tasks like predicting sentence length.

Building on those findings, we choose the layers that contain both syntactic and semantic information as our mixing layers, namely M = {7, 9, 12}. For every batch, we randomly sample m, the layer at which to mix representations, from the set M when computing the interpolation. We also performed an ablation study in Section 5.5 to show how TMix's performance changes with different choices of mixing layer sets.

Text classification. Note that TMix provides a general approach to augment text data, and hence can be applied to any downstream task. In this paper, we focus on text classification and leave other applications as potential future work. In text classification, we minimize the KL-divergence between the mixed labels and the probability from the classifier as the supervision loss:

L_{\mathrm{TMix}} = \mathrm{KL}\big(\mathrm{mix}(y_i, y_j) \,\|\, p(\mathrm{TMix}(x_i, x_j); \phi)\big),

where p(\cdot; \phi) is a classifier on top of the encoder model. In our experiments, we implement the classifier as a two-layer MLP, which takes the mixed representation TMix(x_i, x_j) as input and returns a probability vector. We jointly optimize over the encoder parameters \theta and the classifier parameters \phi to train the whole model.

4 Semi-supervised MixText

In this section, we demonstrate how to utilize TMix for semi-supervised learning. We are given a limited labeled text set X_l = {x_1^l, ..., x_n^l} with labels Y_l = {y_1^l, ..., y_n^l} and a large unlabeled text set X_u = {x_1^u, ..., x_m^u}, where n and m are the numbers of data points in each set; y_i^l \in {0, 1}^C is a one-hot vector and C is the number of classes. Our goal is to learn a classifier that efficiently utilizes both labeled data and unlabeled data.

We propose a new text semi-supervised learning framework called MixText.¹ The core idea behind our framework is to leverage TMix on both labeled and unlabeled data for semi-supervised learning. To fulfill this goal, we come up with a label guessing method to generate labels for the unlabeled data in the training process. With the guessed labels, we can treat the unlabeled data as additional labeled data and perform TMix for training. Moreover, we combine TMix with additional data augmentation techniques to generate a large amount of augmented data, which is a key component that makes our algorithm work well in settings with extremely limited supervision. Finally, we introduce an entropy minimization loss that encourages the model to assign sharp probabilities to unlabeled data samples, which further helps to boost performance when the number of classes C is large. The overall architecture is shown in Figure 2. We will explain each component in detail.

¹Note that MixText is a semi-supervised learning framework while TMix is a data augmentation approach.

Figure 2: Overall architecture of MixText. MixText takes in labeled data and unlabeled data, conducts augmentations and predicts labels for unlabeled data, performs TMix over labeled and unlabeled data, and computes the supervised loss, consistency loss and entropy minimization term.

4.1 Data Augmentation

Back translation (Edunov et al., 2018) is a common data augmentation technique and can generate diverse paraphrases while preserving the semantics of the original sentences. We utilize back translations to paraphrase the unlabeled data. For each x_i^u in the unlabeled text set X_u, we generate K augmentations x_{i,k}^a = \mathrm{augment}_k(x_i^u), k \in [1, K] by back translation with different intermediate languages. For example, we can translate the original sentences from English to German and then translate them back to get the paraphrases.
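The paper generates its paraphrases with FairSeq translation models (see Section 5.1); purely as an illustration of the round-trip idea, here is a sketch using MarianMT checkpoints from the HuggingFace transformers library as stand-ins. The model names and temperature are illustrative assumptions, not the authors' setup.

```python
from transformers import MarianMTModel, MarianTokenizer

def back_translate(sentences, temperature=0.9):
    """Back-translation sketch: English -> German -> English with sampling
    instead of beam search to encourage diverse paraphrases."""
    def load(name):
        return MarianTokenizer.from_pretrained(name), MarianMTModel.from_pretrained(name)

    def translate(tok, model, texts):
        batch = tok(texts, return_tensors="pt", padding=True, truncation=True)
        ids = model.generate(**batch, do_sample=True, temperature=temperature,
                             top_k=0, max_length=256)
        return tok.batch_decode(ids, skip_special_tokens=True)

    tok_fwd, m_fwd = load("Helsinki-NLP/opus-mt-en-de")
    tok_bwd, m_bwd = load("Helsinki-NLP/opus-mt-de-en")
    return translate(tok_bwd, m_bwd, translate(tok_fwd, m_fwd, sentences))
```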
In the augmented text generation, we employ random sampling with a tunable temperature instead of beam search to ensure diversity. The augmentations are then used for generating labels for the unlabeled data, which we describe below.

4.2 Label Guessing

For an unlabeled data sample x_i^u and its K augmentations x_{i,k}^a, we generate the label for them using a weighted average of the predictions from the current model:

y_i^u = \frac{1}{w_{\mathrm{ori}} + \sum_k w_k} \Big( w_{\mathrm{ori}}\, p(x_i^u) + \sum_{k=1}^{K} w_k\, p(x_{i,k}^a) \Big).

Note that y_i^u is a probability vector. We expect the model to predict consistent labels for different augmentations. Hence, to enforce this constraint, we use the weighted average of all predictions, rather than the prediction of any single data sample, as the generated label. Moreover, by explicitly introducing the weights w_{\mathrm{ori}} and w_k, we can control the contributions of augmentations of different quality to the generated labels. Our label guessing method improves over Tarvainen and Valpola (2017), which utilizes teacher and student models to predict labels for unlabeled data, and over UDA (Xie et al., 2019), which just uses p(x_i^u) as the generated label.

To avoid the weighted average being too uniform, we apply a sharpening function over the predicted labels. Given a temperature hyper-parameter T:

\mathrm{Sharpen}(y_i^u, T) = \frac{(y_i^u)^{1/T}}{\|(y_i^u)^{1/T}\|_1},

where \|\cdot\|_1 is the l1-norm of the vector. When T \to 0, the generated label becomes a one-hot vector.

4.3 TMix on Labeled and Unlabeled Data

After getting the labels for the unlabeled data, we merge the labeled text X_l, the unlabeled text X_u and the unlabeled augmentation text X_a = {x_{i,k}^a} together to form a super set X = X_l \cup X_u \cup X_a. The corresponding labels are Y = Y_l \cup Y_u \cup Y_a, where Y_a = {y_{i,k}^a} and we define y_{i,k}^a = y_i^u, i.e., all augmented samples share the same generated label as the original unlabeled sample. In training, we randomly sample two data points x, x' \in X, then compute TMix(x, x') and mix(y, y') and use the KL-divergence as the loss:

L_{\mathrm{TMix}} = \mathbb{E}_{x, x' \in X}\, \mathrm{KL}\big(\mathrm{mix}(y, y') \,\|\, p(\mathrm{TMix}(x, x'))\big).

Since x and x' are randomly sampled from X, we interpolate text from many different categories: mixup among labeled data, mixup of labeled and unlabeled data, and mixup among unlabeled data. Based on the categories of the samples, the loss can be divided into two types:

Supervised loss. When x \in X_l, the majority of the information we are actually using comes from the labeled data, so this trains the model with a supervised loss.

Consistency loss. When the samples are from the unlabeled or augmentation set, i.e., x \in X_u \cup X_a, most of the information comes from unlabeled data, and the KL-divergence acts as a type of consistency loss, constraining augmented samples to have the same labels as the original data sample.

4.4 Entropy Minimization

To encourage the model to produce confident labels on unlabeled data, we propose to minimize the entropy of the prediction probability on unlabeled data as a self-training loss:

L_{\mathrm{margin}} = \mathbb{E}_{x \in X_u} \max(0, \gamma - \|y^u\|_2^2),

where \gamma is the margin hyper-parameter. We minimize the entropy of the probability vector if it is larger than \gamma. Combining the two losses, we get the overall objective function of MixText:

L_{\mathrm{MixText}} = L_{\mathrm{TMix}} + \gamma_m L_{\mathrm{margin}}.
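As an illustration (not the authors' code), the label guessing and sharpening steps of Section 4.2 can be sketched as follows; `model` is assumed to return class probabilities for a single example:

```python
import torch

def guess_label(model, x_orig, x_augs, w_ori=1.0, w_aug=1.0, T=0.5):
    """Label guessing for one unlabeled example: weighted average of the
    model's predictions on the original text and its K augmentations,
    followed by temperature sharpening."""
    with torch.no_grad():
        probs = [w_ori * model(x_orig)] + [w_aug * model(x_a) for x_a in x_augs]
        y_u = torch.stack(probs).sum(0) / (w_ori + w_aug * len(x_augs))
        y_u = y_u ** (1.0 / T)                    # sharpening: raise to power 1/T
        return y_u / y_u.sum(-1, keepdim=True)    # renormalize (l1 norm)
```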
5 Experiments

5.1 Dataset and Pre-processing

We performed experiments with four English text classification benchmark datasets: AG News (Zhang et al., 2015), DBpedia (Mendes et al., 2012), Yahoo! Answers (Chang et al., 2008) and IMDB (Maas et al., 2011). We used the original test set as our test set and randomly sampled from the training set to form the unlabeled training set and the development set. The dataset statistics and split information are presented in Table 1.

Dataset        Label Type        Classes  Unlabeled  Dev   Test
AG News        News Topic        4        5000       2000  1900
DBpedia        Wikipedia Topic   14       5000       2000  5000
Yahoo! Answer  QA Topic          10       5000       5000  6000
IMDB           Review Sentiment  2        5000       2000  12500

Table 1: Dataset statistics and dataset split. The numbers of unlabeled, dev and test data in the table are per class.

For unlabeled data, we selected German and Russian as intermediate languages for back translations using FairSeq², and the random sampling temperature was 0.9. Here is an example for a news item from the AG News dataset: "Oil prices rallied to a record high above $55 a barrel on Friday on rising fears of a winter fuel supply crunch and robust economic growth in China, the world's number two user". The augmented texts through German and Russian are: "Oil prices surged to a record high above $55 a barrel on Friday on growing fears of a winter slump and robust economic growth in world No.2 China" and "Oil prices soared to record highs above $55 per barrel on Friday amid growing fears over a winter reduction in U.S. oil inventories and robust economic growth in China, the world's second-biggest oil consumer".

²https://github.com/pytorch/fairseq

5.2 Baselines

To test the effectiveness of our method, we compared it with several recent models:

• VAMPIRE (Gururangan et al., 2019): VAriational Methods for Pretraining In Resource-limited Environments (VAMPIRE) pre-trained a unigram document model as a variational autoencoder on in-domain, unlabeled data and used its internal states as features in a downstream classifier.

• BERT (Devlin et al., 2019): We used the pre-trained BERT-base-uncased model³ and fine-tuned it for classification. In detail, we used average pooling over the output of the BERT encoder and the same two-layer MLP as used in MixText to predict the labels.

• UDA (Xie et al., 2019): Since we do not have access to TPUs and need to use a smaller amount of unlabeled data, we implemented Unsupervised Data Augmentation (UDA) in PyTorch ourselves. Specifically, we used the same BERT-base-uncased model, unlabeled augmentation data and batch size as our MixText, used the original unlabeled data to predict the labels with the same softmax sharpening temperature as our MixText, and computed the consistency loss between augmented unlabeled data.

³https://pypi.org/project/pytorch-transformers/

5.3 Model Settings

We used the BERT-base-uncased tokenizer to tokenize the text and the bert-base-uncased model as our text encoder, and used average pooling over the output of the encoder together with a two-layer MLP with a hidden size of 128 and tanh as its activation function to predict the labels. The maximum sentence length is set to 256; we keep the first 256 tokens for sentences that exceed this limit. The learning rate is 1e-5 for the BERT encoder and 1e-3 for the MLP. For α in the Beta distribution, generally, when labeled data is fewer than 100 examples per class, α is set to 2 or 16, as a larger α is more likely to generate λ around 0.5, thus creating "newer" data as data augmentation; when labeled data is more than 200 per class, α is set to 0.2 or 0.4, as a smaller α is more likely to generate λ around 0.1, thus creating "similar" data that acts as noise regularization.

For TMix, we only utilize the labeled dataset, following the settings of the BERT baseline, and set the batch size to 8. In MixText, we utilize both labeled and unlabeled data for training, using the same settings as in UDA. We set K = 2, i.e., for each unlabeled example we perform two augmentations, specifically through German and Russian. The batch size is 4 for labeled data and 8 for unlabeled data. We use 0.5 as a starting point to tune the temperature T; in our experiments, we set it to 0.3 for AG News, 0.5 for DBpedia and Yahoo! Answer, and 1 for IMDB.
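As an illustration of the model setup described in Section 5.3 (not the released implementation), the encoder-plus-MLP classifier can be sketched with the HuggingFace transformers package as follows; the class name is an assumption:

```python
import torch
import torch.nn as nn
from transformers import BertModel

class MixTextClassifier(nn.Module):
    """BERT encoder, average pooling over token representations, and a
    two-layer MLP with a 128-dim hidden layer and tanh activation."""

    def __init__(self, num_classes, hidden=128):
        super().__init__()
        self.encoder = BertModel.from_pretrained("bert-base-uncased")
        d = self.encoder.config.hidden_size  # 768 for bert-base
        self.mlp = nn.Sequential(nn.Linear(d, hidden), nn.Tanh(),
                                 nn.Linear(hidden, num_classes))

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        mask = attention_mask.unsqueeze(-1).float()
        pooled = (out.last_hidden_state * mask).sum(1) / mask.sum(1)  # average pooling
        return self.mlp(pooled)  # class logits
```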
5.4 Results

We evaluated our baselines and proposed methods using accuracy, with 5000 unlabeled examples and with different amounts of labeled data per class, ranging from 10 to 10000 (5000 for IMDB).

Dataset   Model     10    200   2500      Dataset  Model     10    200   2500
AG News   VAMPIRE   -     83.9  86.2      DBpedia  VAMPIRE   -     -     -
          BERT      69.5  87.5  90.8               BERT      95.2  98.5  99.0
          TMix*     74.1  88.1  91.0               TMix*     96.8  98.7  99.0
          UDA       84.4  88.3  91.2               UDA       97.8  98.8  99.1
          MixText*  88.4  89.2  91.5               MixText*  98.5  98.9  99.2
Yahoo!    VAMPIRE   -     59.9  70.2      IMDB     VAMPIRE   -     82.2  85.8
          BERT      56.2  69.3  73.2               BERT      67.5  86.9  89.8
          TMix*     58.6  69.8  73.5               TMix*     69.3  87.4  90.3
          UDA       63.2  70.2  73.6               UDA       78.2  89.1  90.8
          MixText*  67.6  71.3  74.1               MixText*  78.7  89.4  91.3

Table 2: Performance (test accuracy (%)) comparison with baselines. The results are averaged over three runs to show the significance (Dror et al., 2018); each run takes around 5 hours. Models are trained with 10, 200, or 2500 labeled examples per class. VAMPIRE, BERT, and TMix do not use unlabeled data during training, while UDA and MixText utilize unlabeled data. * means our models.

5.4.1 Varying the Number of Labeled Data

The results on the different text classification datasets are shown in Table 2 and Figure 3. All Transformer-based models (BERT, TMix, UDA and MixText) showed better performance compared to VAMPIRE, since larger models were adopted. TMix outperformed BERT, especially when labeled data was limited, e.g., 10 examples per class. For instance, model accuracy improved from 69.5% to 74.1% on AG News with 10 labeled examples, demonstrating the effectiveness of TMix. When unlabeled data was introduced in UDA, it outperformed TMix, e.g., from 58.6% to 63.2% on Yahoo! with 10 labeled examples, because more data was used and a consistency regularization loss was added. Our proposed MixText consistently demonstrated the best performance compared to the different baseline models across all four datasets, as MixText not only incorporated unlabeled data and utilized implicit relations between labeled and unlabeled data via TMix, but also had better label guessing on unlabeled data through the weighted average over augmented and original sentences.

Figure 3: Performance (test accuracy (%)) on AG News, DBpedia, Yahoo! Answer and IMDB with 5000 unlabeled data and varying number of labeled data per class for each model.

5.4.2 Varying the Number of Unlabeled Data

We also conducted experiments to test our model's performance with 10 labeled examples and different amounts of unlabeled data (from 0 to 10000) on AG News and Yahoo! Answer, shown in Figure 4. With more unlabeled data, the accuracy became much higher on both AG News and Yahoo! Answer, which further validated the effectiveness of using unlabeled data.

Figure 4: Performance (test accuracy (%)) on AG News (y axis on the right) and Yahoo! Answer (y axis on the left) with 10 labeled data and varying number of unlabeled data per class for MixText.

5.4.3 Loss on Development Set

To explore whether our methods can avoid overfitting when given limited labeled data, we plotted the losses on the development set during training on IMDB and Yahoo!
Answer with 200 labeled examples per class, shown in Figure 5. We found that the loss on the development set tends to increase a lot within around 10 epochs for BERT, indicating that the model overfitted on the training set. Although UDA can alleviate the overfitting problem with consistency regularization, TMix and MixText showed more stable trends and consistently lower loss. The loss curve for TMix also indicates that it can help solve overfitting problems even without extra data.

Figure 5: Loss on the development set on IMDB and Yahoo! Answer in each epoch while training with 200 labeled data and 5000 unlabeled data per class.

5.5 Ablation Studies

We performed ablation studies to show the effectiveness of each component in MixText.

5.5.1 Different Mix Layer Sets in TMix

We explored different mixup layer sets M for TMix, and the results are shown in Table 3. Based on Jawahar et al. (2019), layers {3,4,5,6,7,9,12} are the most informative layers in a BERT-based model, and each of them captures a different type of information (e.g., surface, syntactic, or semantic). We chose to mix up using different subsets of those layers to see which subsets gave the optimal performance. When no mixup is performed, our model accuracy was 69.5%. If we just mix up at the input and lower layers ({0, 1, 2}), there seemed to be no performance increase. When doing mixup using different layer sets (e.g., {3,4} or {6,7,9}), we found large differences in model performance: {3,4}, which mainly contains surface information like sentence length, does not help text classification much, thus showing weaker performance. The 6th layer captures the depth of the syntactic tree, which also does not help much in classification. Our model achieved the best performance with {7, 9, 12}; this layer subset contains most of the syntactic and semantic information, such as the sequence of top-level constituents in the syntax tree, the object number in the main clause, sensitivity to word order, and sensitivity to random replacement of a noun/verb.

Mixup Layers Set  Accuracy(%)
∅                 69.5
{0,1,2}           69.3
{3,4}             70.4
{6,7,9}           71.9
{7,9,12}          74.1
{6,7,9,12}        72.2
{3,4,6,7,9,12}    71.6

Table 3: Performance (test accuracy (%)) on AG News with 10 labeled examples per class with different mixup layer sets for TMix. ∅ means no mixup.

Model                Accuracy(%)
MixText              67.6
- weighted average   67.1
- TMix               63.5
- unlabeled data     58.6
- all                56.2

Table 4: Performance (test accuracy (%)) on Yahoo! Answer with 10 labeled examples and 5000 unlabeled examples per class after removing different parts of MixText.

5.5.2 Remove Different Parts from MixText

We also measured the performance of MixText by stripping out one component at a time and display the results in Table 4. We observed performance drops after removing each part, suggesting that all components in MixText contribute to the final performance. The model performance decreased most significantly after removing unlabeled data, which is as expected. Compared to the weighted-average prediction for unlabeled data, the decrease from removing TMix was larger, indicating that TMix has the largest impact other than unlabeled data, which also proves the effectiveness of our proposed Text Mixup, an interpolation-based regularization and augmentation technique.

6 Conclusion

To alleviate the dependence of supervised models on labeled data, this work presented a simple but effective semi-supervised learning method, MixText, for text classification, in which we also introduced TMix, an interpolation-based augmentation and regularization technique.
Through experiments on four benchmark text classification datasets, we demonstrated the effectiveness of our proposed TMix technique and the Mixup model, which have better testing accuracy and more stable loss trend, compared with current pre-training and fine-tuning models and other state-of-the-art semi-supervised learning methods. For future direction, we plan to explore the effectiveness of MixText in other NLP tasks such as sequential labeling tasks and other real-world scenarios with limited labeled data. Acknowledgement We would like to thank the anonymous reviewers for their helpful comments, and Chao Zhang for his early feedback. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan V GPU used for this research. DY is supported in part by a grant from Google. References Alan Akbik, Tanja Bergmann, and Roland Vollgraf. 2019. Pooled contextualized embeddings for named entity recognition. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 724–728, Minneapolis, Minnesota. Association for Computational Linguistics. David Berthelot, Nicholas Carlini, Ian J. Goodfellow, Nicolas Papernot, Avital Oliver, and Colin Raffel. 2019. Mixmatch: A holistic approach to semisupervised learning. CoRR, abs/1905.02249. Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew M. Dai, Rafal J´ozefowicz, and Samy Bengio. 2016. Generating sentences from a continuous space. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, CoNLL 2016, Berlin, Germany, August 11-12, 2016, pages 10–21. Ming-Wei Chang, Lev Ratinov, Dan Roth, and Vivek Srikumar. 2008. Importance of semantic representation: Dataless classification. In Proceedings of the 23rd National Conference on Artificial Intelligence Volume 2, AAAI’08, pages 830–835. AAAI Press. Nitesh V. Chawla and Grigoris I. Karakoulas. 2011. Learning from labeled and unlabeled data: An empirical study across techniques and domains. CoRR, abs/1109.2047. Jiaao Chen, Jianshu Chen, and Zhou Yu. 2019. Incorporating structured commonsense knowledge in story completion. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, pages 6244–6251. 2156 Jiaao Chen, Yuwei Wu, and Diyi Yang. 2020. Semisupervised Models via Data Augmentation for Classifying Interactive Affective Responses. In Workshop On Affective Content Analysis, The ThirtyFourth AAAI Conference on Artificial Intelligence, AAAI 2020. Mingda Chen, Qingming Tang, Karen Livescu, and Kevin Gimpel. 2018. Variational sequential labelers for semi-supervised learning. In Proc. of EMNLP. Kevin Clark, Minh-Thang Luong, Christopher D Manning, and Quoc V Le. 2018. Semi-supervised sequence modeling with cross-view training. arXiv preprint arXiv:1809.08370. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. 
Rotem Dror, Gili Baumer, Segev Shlomov, and Roi Reichart. 2018. The hitchhiker’s guide to testing statistical significance in natural language processing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1383–1392, Melbourne, Australia. Association for Computational Linguistics. Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. Understanding back-translation at scale. CoRR, abs/1808.09381. Yves Grandvalet and Yoshua Bengio. 2004. Semisupervised learning by entropy minimization. In Proceedings of the 17th International Conference on Neural Information Processing Systems, NIPS’04, pages 529–536, Cambridge, MA, USA. MIT Press. Suchin Gururangan, Tam Dang, Dallas Card, and Noah A. Smith. 2019. Variational pretraining for semi-supervised text classification. CoRR, abs/1906.02242. Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 328–339, Melbourne, Australia. Association for Computational Linguistics. Ganesh Jawahar, Benoˆıt Sagot, and Djam´e Seddah. 2019. What does BERT learn about the structure of language? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3651–3657, Florence, Italy. Association for Computational Linguistics. Ashutosh Kumar, Satwik Bhattamishra, Manik Bhandari, and Partha Talukdar. 2019. Submodular optimization-based diverse paraphrasing and its effectiveness in data augmentation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3609–3619, Minneapolis, Minnesota. Association for Computational Linguistics. Samuli Laine and Timo Aila. 2016. Temporal ensembling for semi-supervised learning. CoRR, abs/1610.02242. Pooja Lakshmi Narayan, Ajay Nagesh, and Mihai Surdeanu. 2019. Exploration of noise strategies in semisupervised named entity classification. In Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (*SEM 2019), pages 186– 191, Minneapolis, Minnesota. Association for Computational Linguistics. Guillaume Lample and Alexis Conneau. 2019. Crosslingual language model pretraining. CoRR, abs/1901.07291. Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. 2015. Deep learning. Nature Cell Biology, 521(7553):436–444. Dong-Hyun Lee. 2013. Pseudo-label : The simple and efficient semi-supervised learning method for deep neural networks. ICML 2013 Workshop : Challenges in Representation Learning (WREPL). Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692. Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1, HLT ’11, pages 142–150, Stroudsburg, PA, USA. Association for Computational Linguistics. Pablo N. Mendes, Max Jakob, and Christian Bizer. 2012. Dbpedia for nlp: A multilingual cross-domain knowledge base. In Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC’12), Istanbul, Turkey. 
Yu Meng, Jiaming Shen, Chao Zhang, and Jiawei Han. 2018. Weakly-supervised neural text classification. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management, CIKM ’18, pages 983–992, New York, NY, USA. ACM. Takeru Miyato, Andrew M Dai, and Ian Goodfellow. 2017. Adversarial training methods for semisupervised text classification. In International Conference on Learning Representations. 2157 Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, and Shin Ishii. 2019. Virtual adversarial training: A regularization method for supervised and semisupervised learning. IEEE Trans. Pattern Anal. Mach. Intell., 41(8):1979–1993. Ajay Nagesh and Mihai Surdeanu. 2018. An exploration of three lightly-supervised representation learning approaches for named entity classification. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2312–2324. Luis Perez and Jason Wang. 2017. The effectiveness of data augmentation in image classification using deep learning. CoRR, abs/1712.04621. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237, New Orleans, Louisiana. Association for Computational Linguistics. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2018. Language models are unsupervised multitask learners. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Improving neural machine translation models with monolingual data. CoRR, abs/1511.06709. Antti Tarvainen and Harri Valpola. 2017. Weightaveraged consistency targets improve semisupervised deep learning results. CoRR, abs/1703.01780. Vikas Verma, Alex Lamb, Christopher Beckham, Amir Najafi, Ioannis Mitliagkas, David Lopez-Paz, and Yoshua Bengio. 2019a. Manifold mixup: Better representations by interpolating hidden states. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 6438–6447, Long Beach, California, USA. PMLR. Vikas Verma, Alex Lamb, Juho Kannala, Yoshua Bengio, and David Lopez-Paz. 2019b. Interpolation consistency training for semi-supervised learning. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI19, pages 3635–3641. International Joint Conferences on Artificial Intelligence Organization. Vikas Verma, Meng Qu, Alex Lamb, Yoshua Bengio, Juho Kannala, and Jian Tang. 2019c. Graphmix: Regularized training of graph neural networks for semi-supervised learning. ArXiv, abs/1909.11715. Jason W. Wei and Kai Zou. 2019. EDA: easy data augmentation techniques for boosting performance on text classification tasks. CoRR, abs/1901.11196. Qizhe Xie, Zihang Dai, Eduard Hovy, Minh-Thang Luong, and Quoc V Le. 2019. Unsupervised data augmentation for consistency training. arXiv preprint arXiv:1904.12848. Diyi Yang, Jiaao Chen, Zichao Yang, Dan Jurafsky, and Eduard Hovy. 2019a. Let’s make your request more persuasive: Modeling persuasive strategies via semi-supervised neural nets on crowdfunding platforms. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3620–3630. 
Diyi Yang, Miaomiao Wen, and Carolyn Rose. 2015. Weakly supervised role identification in teamwork interactions. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1671–1680. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019b. Xlnet: Generalized autoregressive pretraining for language understanding. CoRR, abs/1906.08237. Zichao Yang, Zhiting Hu, Ruslan Salakhutdinov, and Taylor Berg-Kirkpatrick. 2017. Improved variational autoencoders for text modeling using dilated convolutions. CoRR, abs/1702.08139. Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of the 2016 conference of the North American chapter of the association for computational linguistics: human language technologies, pages 1480–1489. Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, and Youngjoon Yoo. 2019. Cutmix: Regularization strategy to train strong classifiers with localizable features. CoRR, abs/1905.04899. Hongyi Zhang, Moustapha Ciss´e, Yann N. Dauphin, and David Lopez-Paz. 2017. mixup: Beyond empirical risk minimization. CoRR, abs/1710.09412. Xiang Zhang, Junbo Jake Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. CoRR, abs/1509.01626.
2020
194
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2158–2170 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2158 MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices Zhiqing Sun1∗, Hongkun Yu2, Xiaodan Song2, Renjie Liu2, Yiming Yang1, Denny Zhou2 1Carnegie Mellon University {zhiqings, yiming}@cs.cmu.edu 2Google Brain {hongkuny, xiaodansong, renjieliu, dennyzhou}@google.com Abstract Natural Language Processing (NLP) has recently achieved great success by using huge pre-trained models with hundreds of millions of parameters. However, these models suffer from heavy model sizes and high latency such that they cannot be deployed to resourcelimited mobile devices. In this paper, we propose MobileBERT for compressing and accelerating the popular BERT model. Like the original BERT, MobileBERT is task-agnostic, that is, it can be generically applied to various downstream NLP tasks via simple fine-tuning. Basically, MobileBERT is a thin version of BERTLARGE, while equipped with bottleneck structures and a carefully designed balance between self-attentions and feed-forward networks. To train MobileBERT, we first train a specially designed teacher model, an invertedbottleneck incorporated BERTLARGE model. Then, we conduct knowledge transfer from this teacher to MobileBERT. Empirical studies show that MobileBERT is 4.3× smaller and 5.5× faster than BERTBASE while achieving competitive results on well-known benchmarks. On the natural language inference tasks of GLUE, MobileBERT achieves a GLUE score of 77.7 (0.6 lower than BERTBASE), and 62 ms latency on a Pixel 4 phone. On the SQuAD v1.1/v2.0 question answering task, MobileBERT achieves a dev F1 score of 90.0/79.2 (1.5/2.1 higher than BERTBASE). 1 Introduction The NLP community has witnessed a revolution of pre-training self-supervised models. These models usually have hundreds of millions of parameters (Peters et al., 2018; Radford et al., 2018; Devlin et al., 2018; Radford et al., 2019; Yang et al., 2019). Among these models, BERT (Devlin et al., 2018) ∗This work was done when the first author was an intern at Google Brain. shows substantial accuracy improvements. However, as one of the largest models ever in NLP, BERT suffers from the heavy model size and high latency, making it impractical for resource-limited mobile devices to deploy the power of BERT in mobile-based machine translation, dialogue modeling, and the like. There have been some efforts that taskspecifically distill BERT into compact models (Turc et al., 2019; Tang et al., 2019; Sun et al., 2019; Tsai et al., 2019). To the best of our knowledge, there is not yet any work for building a taskagnostic lightweight pre-trained model, that is, a model that can be generically fine-tuned on different downstream NLP tasks as the original BERT does. In this paper, we propose MobileBERT to fill this gap. In practice, task-agnostic compression of BERT is desirable. Task-specific compression needs to first fine-tune the original large BERT model into a task-specific teacher and then distill. Such a process is much more complicated (Wu et al., 2019) and costly than directly fine-tuning a task-agnostic compact model. At first glance, it may seem straightforward to obtain a task-agnostic compact BERT. For example, one may just take a narrower or shallower version of BERT, and train it until convergence by minimizing a convex combination of the prediction loss and distillation loss (Turc et al., 2019; Sun et al., 2019). 
Unfortunately, empirical results show that such a straightforward approach results in significant accuracy loss (Turc et al., 2019). This may not be that surprising. It is well known that shallow networks usually do not have enough representation power, while narrow and deep networks are difficult to train.

Our MobileBERT is designed to be as deep as BERTLARGE, while each layer is made much narrower via adopting bottleneck structures and balancing between self-attentions and feed-forward networks (Figure 1). To train MobileBERT, a deep and thin model, we first train a specially designed teacher model, an inverted-bottleneck incorporated BERTLARGE model (IB-BERT). Then, we conduct knowledge transfer from IB-BERT to MobileBERT. A variety of knowledge transfer strategies are carefully investigated in our empirical studies.

Figure 1: Illustration of three models: (a) BERT; (b) Inverted-Bottleneck BERT (IB-BERT); and (c) MobileBERT. In (b) and (c), red lines denote inter-block flows while blue lines denote intra-block flows. MobileBERT is trained by layer-to-layer imitation of IB-BERT.

Empirical evaluations¹ show that MobileBERT is 4.3× smaller and 5.5× faster than BERTBASE, while it can still achieve competitive results on well-known NLP benchmarks. On the natural language inference tasks of GLUE, MobileBERT can achieve a GLUE score of 77.7, which is only 0.6 lower than BERTBASE, with a latency of 62 ms on a Pixel 4 phone. On the SQuAD v1.1/v2.0 question answering task, MobileBERT obtains a dev F1 score of 90.3/80.2, which is even 1.5/2.1 higher than BERTBASE.

¹The code and pre-trained models will be available at https://github.com/google-research/google-research/tree/master/mobilebert.

2 Related Work

Recently, compression of BERT has attracted much attention. Turc et al. (2019) propose to pre-train smaller BERT models to improve task-specific knowledge distillation. Tang et al. (2019) distill BERT into an extremely small LSTM model. Tsai et al. (2019) distill a multilingual BERT into smaller BERT models on sequence labeling tasks. Clark et al. (2019b) use several single-task BERT models to teach a multi-task BERT. Liu et al. (2019a) distill knowledge from an ensemble of BERT models into a single BERT. Concurrently to our work, Sun et al. (2019) distill BERT into shallower students through knowledge distillation and an additional knowledge transfer of hidden states on multiple intermediate layers. Jiao et al. (2019) propose TinyBERT, which also uses a layer-wise distillation strategy for BERT, but in both the pre-training and fine-tuning stages. Sanh et al. (2019) propose DistilBERT, which successfully halves the depth of the BERT model by knowledge distillation in the pre-training stage and an optional fine-tuning stage.

In contrast to this existing literature, we only use knowledge transfer in the pre-training stage and do not require a fine-tuned teacher or data augmentation (Wu et al., 2019) in the downstream tasks. Another key difference is that these previous works try to compress BERT by reducing its depth, while we focus on compressing BERT by reducing its width, which has been shown to be more effective (Turc et al., 2019).
3 MobileBERT

In this section, we present the detailed architecture design of MobileBERT and training strategies to efficiently train MobileBERT. The specific model settings are summarized in Table 1. These settings are obtained by extensive architecture search experiments, which will be presented in Section 4.1.

                        BERTLARGE           BERTBASE            IB-BERTLARGE         MobileBERT          MobileBERTTINY
embedding  hembedding   1024                768                 128                  128                 128
           operation    no-op               no-op               3-convolution        3-convolution       3-convolution
           hinter       1024                768                 512                  512                 512
body       Linear       -                   -                   (512, 1024)          (512, 128)          (512, 128)
           MHA          (1024, 16, 1024)    (768, 12, 768)      (512, 4, 1024)       (512, 4, 128)       (128, 4, 128)
           FFN          (1024, 4096, 1024)  (768, 3072, 768)    (1024, 4096, 1024)   (128, 512, 128) ×4  (128, 512, 128) ×2
           Linear       -                   -                   (1024, 512)          (128, 512)          (128, 512)
           layers       ×24                 ×12                 ×24                  ×24                 ×24
#Params                 334M                109M                293M                 25.3M               15.1M

Table 1: The detailed model settings of a few models. hinter, hFFN, hembedding, #Head and #Params denote the inter-block hidden size (feature map size), FFN intermediate size, embedding table size, the number of heads in multi-head attention, and the number of parameters, respectively. Linear layers are listed as (hinput, houtput), MHA as (hinput, #Head, houtput), and FFN as (hinput, hFFN, houtput).

3.1 Bottleneck and Inverted-Bottleneck

The architecture of MobileBERT is illustrated in Figure 1(c). It is as deep as BERTLARGE, but each building block is made much smaller. As shown in Table 1, the hidden dimension of each building block is only 128. On the other hand, we introduce two linear transformations for each building block to adjust its input and output dimensions to 512. Following the terminology in He et al. (2016), we refer to such an architecture as a bottleneck.

It is challenging to train such a deep and thin network. To overcome the training issue, we first construct a teacher network and train it until convergence, and then conduct knowledge transfer from this teacher network to MobileBERT. We find that this is much better than directly training MobileBERT from scratch. Various training strategies will be discussed in a later section. Here, we introduce the architecture design of the teacher network, which is illustrated in Figure 1(b). In fact, the teacher network is just BERTLARGE augmented with inverted-bottleneck structures (Sandler et al., 2018) to adjust its feature map size to 512. In what follows, we refer to the teacher network as IB-BERTLARGE. Note that IB-BERT and MobileBERT have the same feature map size, which is 512. Thus, we can directly compare the layer-wise output difference between IB-BERT and MobileBERT. Such a direct comparison is needed in our knowledge transfer strategy.

It is worth pointing out that the simultaneously introduced bottleneck and inverted-bottleneck structures result in a fairly flexible architecture design. One may either use only the bottlenecks for MobileBERT (correspondingly the teacher becomes BERTLARGE) or only the inverted-bottlenecks for IB-BERT (then there is no bottleneck in MobileBERT) to align their feature maps. However, when using both of them, we can allow IB-BERTLARGE to preserve the performance of BERTLARGE while keeping MobileBERT sufficiently compact.
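As a rough illustration only (not the released MobileBERT implementation), the bottleneck idea can be sketched as follows: project the 512-dim inter-block feature map down to the 128-dim intra-block size, apply one attention module and the stacked FFNs introduced in the next subsection, then project back. Normalization, residual wiring, and the exact attention inputs differ in the real model.

```python
import torch.nn as nn

class BottleneckBlock(nn.Module):
    """Simplified MobileBERT-style block sketch."""

    def __init__(self, inter=512, intra=128, heads=4, ffn=512, num_ffn=4):
        super().__init__()
        self.down = nn.Linear(inter, intra)          # input bottleneck
        self.attn = nn.MultiheadAttention(intra, heads, batch_first=True)
        self.ffns = nn.ModuleList([
            nn.Sequential(nn.Linear(intra, ffn), nn.ReLU(), nn.Linear(ffn, intra))
            for _ in range(num_ffn)])                # stacked FFNs re-balance MHA vs. FFN
        self.up = nn.Linear(intra, inter)            # output bottleneck

    def forward(self, x):
        h = self.down(x)
        h = h + self.attn(h, h, h, need_weights=False)[0]
        for ffn in self.ffns:
            h = h + ffn(h)                           # residual around each FFN
        return x + self.up(h)                        # back to the inter-block stream
```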
3.2 Stacked Feed-Forward Networks

A problem introduced by the bottleneck structure of MobileBERT is that the balance between the Multi-Head Attention (MHA) module and the Feed-Forward Network (FFN) module is broken. MHA and FFN play different roles in the Transformer architecture: the former allows the model to jointly attend to information from different subspaces, while the latter increases the non-linearity of the model. In the original BERT, the ratio of the parameter counts of MHA and FFN is always 1:2. But in the bottleneck structure, the inputs to the MHA are from wider feature maps (of inter-block size), while the inputs to the FFN are from narrower bottlenecks (of intra-block size). As a result, the MHA modules in MobileBERT contain relatively more parameters.

To fix this issue, we propose to use stacked feed-forward networks in MobileBERT to re-balance the relative sizes of MHA and FFN. As illustrated in Figure 1(c), each MobileBERT layer contains one MHA but several stacked FFNs. In MobileBERT, we use 4 stacked FFNs after each MHA.

3.3 Operational Optimizations

By model latency analysis², we find that layer normalization (Ba et al., 2016) and the gelu activation (Hendrycks and Gimpel, 2016) account for a considerable proportion of the total latency. Therefore, we propose to replace them with new operations in our MobileBERT.

Remove layer normalization. We replace the layer normalization of an n-channel hidden state h with an element-wise linear transformation:

\mathrm{NoNorm}(h) = \gamma \circ h + \beta,   (1)

where \gamma, \beta \in \mathbb{R}^n and \circ denotes the Hadamard product. Please note that NoNorm has different properties from LayerNorm even in test mode, since the original layer normalization is not a linear operation for a batch of vectors.

Use relu activation. We replace the gelu activation with the simpler relu activation (Nair and Hinton, 2010).

²A detailed analysis of the effectiveness of operational optimizations on real-world inference latency can be found in Section 4.6.1.

3.4 Embedding Factorization

The embedding table in BERT models accounts for a substantial proportion of the model size. To compress the embedding layer, as shown in Table 1, we reduce the embedding dimension to 128 in MobileBERT. Then, we apply a 1D convolution with kernel size 3 on the raw token embeddings to produce a 512-dimensional output.

3.5 Training Objectives

We propose to use the following two knowledge transfer objectives, i.e., feature map transfer and attention transfer, to train MobileBERT. Figure 1 illustrates the proposed layer-wise knowledge transfer objectives. Our final layer-wise knowledge transfer loss L^{\ell}_{KT} for the \ell-th layer is a linear combination of the two objectives stated below.

Feature Map Transfer (FMT). Since each layer in BERT merely takes the output of the previous layer as input, the most important thing in layer-wise knowledge transfer is that the feature maps of each layer should be as close as possible to those of the teacher. In particular, the mean squared error between the feature maps of the MobileBERT student and the IB-BERT teacher is used as the knowledge transfer objective:

L^{\ell}_{FMT} = \frac{1}{TN} \sum_{t=1}^{T} \sum_{n=1}^{N} \left( H^{tr}_{t,\ell,n} - H^{st}_{t,\ell,n} \right)^2,   (2)

where \ell is the index of layers, T is the sequence length, and N is the feature map size. In practice, we find that decomposing this loss term into a normalized feature map discrepancy and a feature map statistics discrepancy can help stabilize training.
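As an illustration (not the authors' code), the NoNorm operation of Eq. (1) and the feature map transfer term of Eq. (2) are straightforward to sketch:

```python
import torch
import torch.nn as nn

class NoNorm(nn.Module):
    """Element-wise linear replacement for LayerNorm (Eq. 1):
    NoNorm(h) = gamma * h + beta, with no mean/variance statistics."""
    def __init__(self, channels):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(channels))
        self.beta = nn.Parameter(torch.zeros(channels))

    def forward(self, h):
        return self.gamma * h + self.beta

def feature_map_transfer_loss(h_teacher, h_student):
    """Feature Map Transfer (Eq. 2): mean squared error between teacher and
    student feature maps of one layer, averaged over sequence positions (T)
    and feature dimensions (N)."""
    return ((h_teacher - h_student) ** 2).mean()
```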
Attention Transfer (AT). The attention mechanism greatly boosts performance in NLP and has become a crucial building block in Transformer and BERT (Clark et al., 2019a; Jawahar et al., 2019). This motivates us to use self-attention maps from the well-optimized teacher to help the training of MobileBERT, in addition to the feature map transfer. In particular, we minimize the KL-divergence between the per-head self-attention distributions of the MobileBERT student and the IB-BERT teacher:

L^{\ell}_{AT} = \frac{1}{TA} \sum_{t=1}^{T} \sum_{a=1}^{A} D_{KL}\left( a^{tr}_{t,\ell,a} \,\|\, a^{st}_{t,\ell,a} \right),   (3)

where A is the number of attention heads.

Pre-training Distillation (PD). Besides layer-wise knowledge transfer, we can also use a knowledge distillation loss when pre-training MobileBERT. We use a linear combination of the original masked language modeling (MLM) loss, the next sentence prediction (NSP) loss, and the new MLM Knowledge Distillation (KD) loss as our pre-training distillation loss:

L_{PD} = \alpha L_{MLM} + (1 - \alpha) L_{KD} + L_{NSP},   (4)

where \alpha is a hyperparameter in (0, 1).

3.6 Training Strategies

Given the objectives defined above, there can be various combination strategies in training. We discuss three strategies in this paper.

Auxiliary Knowledge Transfer. In this strategy, we regard intermediate knowledge transfer as an auxiliary task for knowledge distillation. We use a single loss, which is a linear combination of the knowledge transfer losses from all layers as well as the pre-training distillation loss.

Figure 2: Diagrams of (a) auxiliary knowledge transfer (AKT), (b) joint knowledge transfer (JKT), and (c) progressive knowledge transfer (PKT). Lighter colored blocks represent that they are frozen in that stage.

Joint Knowledge Transfer. However, the intermediate knowledge of the IB-BERT teacher (i.e., attention maps and feature maps) may not be an optimal solution for the MobileBERT student. Therefore, we propose to separate these two loss terms, where we first train MobileBERT with all layer-wise knowledge transfer losses jointly, and then further train it by pre-training distillation.

Progressive Knowledge Transfer. One may also be concerned that if MobileBERT cannot perfectly mimic the IB-BERT teacher, the errors from the lower layers may affect the knowledge transfer in the higher layers. Therefore, we propose to progressively train each layer in the knowledge transfer. The progressive knowledge transfer is divided into L stages, where L is the number of layers.

Diagram of the three strategies. Figure 2 illustrates the diagrams of the three strategies. For joint knowledge transfer and progressive knowledge transfer, there is no knowledge transfer for the beginning embedding layer and the final classifier in the layer-wise knowledge transfer stage; they are copied from the IB-BERT teacher to the MobileBERT student. Moreover, for progressive knowledge transfer, when we train the \ell-th layer, we freeze all the trainable parameters in the layers below.
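As an illustration (not the official training code), one stage of progressive knowledge transfer can be sketched as follows; the layer list and optimizer choice are assumptions, and the small-learning-rate option anticipates the softened variant described next:

```python
import torch

def setup_progressive_stage(student_layers, stage, lr=1e-4, lower_lr=0.0):
    """Progressive knowledge transfer, stage `stage` (0-indexed): train the
    current layer while layers below are frozen (lower_lr=0) or, in the
    softened variant, tuned with a small learning rate."""
    param_groups = [{"params": student_layers[stage].parameters(), "lr": lr}]
    for layer in student_layers[:stage]:
        if lower_lr > 0.0:
            param_groups.append({"params": layer.parameters(), "lr": lower_lr})
        else:
            for p in layer.parameters():
                p.requires_grad_(False)   # hard freeze of lower layers
    return torch.optim.Adam(param_groups)
```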
When training a layer, we further tune the lower layers with a small learning rate rather than entirely freezing them. 4 Experiments In this section, we first present our architecture search experiments which lead to the model settings in Table 1, and then present the empirical #Params hinter hintra #Head SQuAD (a) 356M 1024 1024 16 88.2 (b) 325M 768 1024 16 88.6 (c) 293M 512 1024 16 88.1 (d) 276M 384 1024 16 87.6 (e) 262M 256 1024 16 87.0 (f) 293M 512 1024 4 88.3 (g) 92M 512 512 4 85.8 (h) 33M 512 256 4 84.8 (i) 15M 512 128 4 82.0 Table 2: Experimental results on SQuAD v1.1 dev F1 score in search of good model settings for the IB-BERTLARGE teacher. The number of layers is set to 24 for all models. results on benchmarks from MobileBERT and various baselines. 4.1 Model Settings We conduct extensive experiments to search good model settings for the IB-BERT teacher and the MobileBERT student. We start with SQuAD v1.1 dev F1 score as the performance metric in the search of model settings. In this section, we only train each model for 125k steps with 2048 batch size, which halves the training schedule of original BERT (Devlin et al., 2018; You et al., 2019). Architecture Search for IB-BERT Our design philosophy for the teacher model is to use as small inter-block hidden size (feature map size) as possible, as long as there is no accuracy loss. Under this guideline, we design experiments to manipulate the inter-block size of a BERTLARGE-sized IB-BERT, and the results are shown in Table 2 with labels (a)-(e). We can see that reducing the interblock hidden size doesn’t damage the performance 2163 hintra #Head (#Params) #FFN (#Params) SQuAD 192 6 (8M) 1 (7M) 82.6 160 5 (6.5M) 2 (10M) 83.4 128 4 (5M) 4 (12.5M) 83.4 96 3 (4M) 8 (14M) 81.6 Table 3: Experimental results on SQuAD v1.1 dev F1 score in search of good model settings for the MobileBERT student. The number of layers is set to 24 and the inter-block hidden size is set to 512 for all models. of BERT until it is smaller than 512. Hence, we choose IB-BERTLARGE with its inter-block hidden size being 512 as the teacher model. One may wonder whether we can also shrink the intra-block hidden size of the teacher. We conduct experiments and the results are shown in Table 2 with labels (f)-(i). We can see that when the intra-block hidden size is reduced, the model performance is dramatically worse. This means that the intra-block hidden size, which represents the representation power of non-linear modules, plays a crucial role in BERT. Therefore, unlike the interblock hidden size, we do not shrink the intra-block hidden size of our teacher model. Architecture Search for MobileBERT We seek a compression ratio of 4× for BERTBASE, so we design a set of MobileBERT models all with approximately 25M parameters but different ratios of the parameter numbers in MHA and FFN to select a good MobileBERT student model. Table 3 shows our experimental results. They have different balances between MHA and FFN. From the table, we can see that the model performance reaches the peak when the ratio of parameters in MHA and FFN is 0.4 ∼0.6. This may justify why the original Transformer chooses the parameter ratio of MHA and FFN to 0.5. We choose the architecture with 128 intra-block hidden size and 4 stacked FFNs as the MobileBERT student model in consideration of model accuracy and training efficiency. We also accordingly set the number of attention heads in the teacher model to 4 in preparation for the layer-wise knowledge transfer. 
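As a rough sanity check on the MHA/FFN balance discussed above, the sketch below counts the MHA and FFN parameters of a 24-layer MobileBERT-style model. The accounting assumptions are ours, not the authors' exact bookkeeping: query/key/value projections map the 512-dimensional inter-block features down to the intra-block size, the FFN intermediate size is four times the intra-block size, and bottleneck, embedding, and bias parameters are ignored. Under these assumptions the numbers approximately reproduce the counts in Table 3.

```python
# Rough per-model parameter count for MHA and stacked FFN in a
# MobileBERT-style architecture (illustrative sketch, not the authors' code).
# Assumptions: Q/K/V project inter-block (512-dim) features to the intra-block
# size, the output projection is intra-block -> intra-block, each FFN uses an
# intermediate size of 4 * intra-block, and biases/bottlenecks are ignored.

def mha_params(inter: int, intra: int) -> int:
    qkv = 3 * inter * intra      # query/key/value projections
    out = intra * intra          # output projection
    return qkv + out

def ffn_params(intra: int, num_ffn: int) -> int:
    inner = 4 * intra            # assumed FFN intermediate size
    return num_ffn * 2 * intra * inner

if __name__ == "__main__":
    inter, num_layers = 512, 24
    for intra, num_ffn in [(192, 1), (160, 2), (128, 4), (96, 8)]:
        mha = num_layers * mha_params(inter, intra)
        ffn = num_layers * ffn_params(intra, num_ffn)
        print(f"h_intra={intra:3d}  MHA~{mha / 1e6:4.1f}M  "
              f"FFN~{ffn / 1e6:4.1f}M  MHA/FFN~{mha / ffn:.2f}")
```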
Table 1 demonstrates the model settings of our IB-BERTLARGE teacher and MobileBERT student. One may wonder whether reducing the number of heads will harm the performance of the teacher model. By comparing (a) and (f) in Table 2, we can see that reducing the number of heads from 16 to 4 does not affect the performance of IB-BERTLARGE. 4.2 Implementation Details Following BERT (Devlin et al., 2018), we use the BooksCorpus (Zhu et al., 2015) and English Wikipedia as our pre-training data. To make the IB-BERTLARGE teacher reach the same accuracy as original BERTLARGE, we train IB-BERTLARGE on 256 TPU v3 chips for 500k steps with a batch size of 4096 and LAMB optimizer (You et al., 2019). For a fair comparison with the original BERT, we do not use training tricks in other BERT variants (Liu et al., 2019b; Joshi et al., 2019). For MobileBERT, we use the same training schedule in the pre-training distillation stage. Additionally, we use progressive knowledge transfer to train MobileBERT, which takes additional 240k steps over 24 layers. In ablation studies, we halve the pretraining distillation schedule of MobileBERT to accelerate experiments. Moreover, in the ablation study of knowledge transfer strategies, for a fair comparison, joint knowledge transfer and auxiliary knowledge transfer also take additional 240k steps. For the downstream tasks, all reported results are obtained by simply fine-tuning MobileBERT just like what the original BERT does. To finetune the pre-trained models, we search the optimization hyperparameters in a search space including different batch sizes (16/32/48), learning rates ((1-10) * e-5), and the number of epochs (210). The search space is different from the original BERT because we find that MobileBERT usually needs a larger learning rate and more training epochs in fine-tuning. We select the model for testing according to their performance on the development (dev) set. 4.3 Results on GLUE The General Language Understanding Evaluation (GLUE) benchmark (Wang et al., 2018) is a collection of 9 natural language understanding tasks. We compare MobileBERT with BERTBASE and a few state-of-the-art pre-BERT models on the GLUE leaderboard3: OpenAI GPT (Radford et al., 2018) and ELMo (Peters et al., 2018). We also compare with three recently proposed compressed BERT models: BERT-PKD (Sun et al., 2019), and DistilBERT (Sanh et al., 2019). 
To further show the advantage of MobileBERT over recent small BERT models, we also evaluate a smaller variant of our 3https://gluebenchmark.com/leaderboard 2164 #Params #FLOPS Latency CoLA SST-2 MRPC STS-B QQP MNLI-m/mm QNLI RTE GLUE 8.5k 67k 3.7k 5.7k 364k 393k 108k 2.5k ELMo-BiLSTM-Attn 33.6 90.4 84.4 72.3 63.1 74.1/74.5 79.8 58.9 70.0 OpenAI GPT 109M 47.2 93.1 87.7 84.8 70.1 80.7/80.6 87.2 69.1 76.9 BERTBASE 109M 22.5B 342 ms 52.1 93.5 88.9 85.8 71.2 84.6/83.4 90.5 66.4 78.3 BERTBASE-6L-PKD* 66.5M 11.3B 92.0 85.0 70.7 81.5/81.0 89.0 65.5 BERTBASE-4L-PKD†* 52.2M 7.6B 24.8 89.4 82.6 79.8 70.2 79.9/79.3 85.1 62.3 BERTBASE-3L-PKD* 45.3M 5.7B 87.5 80.7 68.1 76.7/76.3 84.7 58.2 DistilBERTBASE-6L† 62.2M 11.3B 92.0 85.0 70.7 81.5/81.0 89.0 65.5 DistilBERTBASE-4L† 52.2M 7.6B 32.8 91.4 82.4 76.1 68.5 78.9/78.0 85.2 54.1 TinyBERT* 14.5M 1.2B 43.3 92.6 86.4 79.9 71.3 82.5/81.8 87.7 62.9 75.4 MobileBERTTINY 15.1M 3.1B 40 ms 46.7 91.7 87.9 80.1 68.9 81.5/81.6 89.5 65.1 75.8 MobileBERT 25.3M 5.7B 62 ms 50.5 92.8 88.8 84.4 70.2 83.3/82.6 90.6 66.2 77.7 MobileBERT w/o OPT 25.3M 5.7B 192 ms 51.1 92.6 88.8 84.8 70.5 84.3/83.4 91.6 70.4 78.5 Table 4: The test results on the GLUE benchmark (except WNLI). The number below each task denotes the number of training examples. The metrics for these tasks can be found in the GLUE paper (Wang et al., 2018). “OPT” denotes the operational optimizations introduced in Section 3.3. †denotes that the results are taken from (Jiao et al., 2019). *denotes that it can be unfair to directly compare MobileBERT with these models since MobileBERT is task-agnosticly compressed while these models use the teacher model in the fine-tuning stage. #Params SQuAD v1.1 SQuAD v2.0 EM F1 EM F1 DocQA + ELMo 65.1 67.6 BERTBASE 109M 80.8 88.5 74.2† 77.1† DistilBERTBASE-6L 66.6M 79.1 86.9 DistilBERTBASE-6L‡ 66.6M 78.1 86.2 66.0 69.5 DistilBERTBASE-4L‡ 52.2M 71.8 81.2 60.6 64.1 TinyBERT 14.5M 72.7 82.1 65.3 68.8 MobileBERTTINY 15.1M 81.4 88.6 74.4 77.1 MobileBERT 25.3M 82.9 90.0 76.2 79.2 MobileBERT w/o OPT 25.3M 83.4 90.3 77.6 80.2 Table 5: The results on the SQuAD dev datasets. †marks our runs with the official code. ‡denotes that the results are taken from (Jiao et al., 2019). model with approximately 15M parameters called MobileBERTTINY4, which reduces the number of FFNs in each layer and uses a lighter MHA structure. Besides, to verify the performance of MobileBERT on real-world mobile devices, we export the models with TensorFlow Lite5 APIs and measure the inference latencies on a 4-thread Pixel 4 phone with a fixed sequence length of 128. The results are listed in Table 4. 6 From the table, we can see that MobileBERT is very competitive on the GLUE benchmark. MobileBERT achieves an overall GLUE score of 77.7, which is only 0.6 lower than BERTBASE, while be4The detailed model setting of MobileBERTTINY can be found in Table 1 and in the appendix. 5https://www.tensorflow.org/lite 6We follow Devlin et al. (2018) to skip the WNLI task. MNLI-m QNLI MRPC SST-2 SQuAD MobileBERTTINY 82.0 89.9 86.7 91.6 88.6 + Quantization 82.0 89.8 86.3 91.6 88.4 MobileBERT 83.9 91.0 87.5 92.1 90.0 + Quantization 83.9 90.8 87.0 91.9 90.0 Table 6: Results of MobileBERT on GLUE dev accuracy and SQuAD v1.1 dev F1 score with 8-bit Quantization. ing 4.3× smaller and 5.5× faster than BERTBASE. Moreover, It outperforms the strong OpenAI GPT baseline by 0.8 GLUE score with 4.3× smaller model size. It also outperforms all the other compressed BERT models with smaller or similar model sizes. 
Finally, we find that the introduced operational optimizations hurt the model performance a bit. Without these optimizations, MobileBERT can even outperforms BERTBASE by 0.2 GLUE score. 4.4 Results on SQuAD SQuAD is a large-scale reading comprehension datasets. SQuAD1.1 (Rajpurkar et al., 2016) only contains questions that always have an answer in the given context, while SQuAD2.0 (Rajpurkar et al., 2018) contains unanswerable questions. We evaluate MobileBERT only on the SQuAD dev datasets, as there is nearly no single model submission on SQuAD test leaderboard. We compare our MobileBERT with BERTBASE, DistilBERT, and a strong baseline DocQA (Clark and Gardner, 2017). 2165 Setting #FLOPS Latency LayerNorm & gelu 5.7B 192 ms LayerNorm & relu 5.7B 167 ms NoNorm & gelu 5.7B 92 ms NoNorm & relu 5.7B 62 ms Table 7: The effectiveness of operational optimizations on real-world inference latency for MobileBERT. MNLI-m QNLI MRPC SST-2 SQuAD AKT 83.0 90.3 86.8 91.9 88.2 JKT 83.5 90.5 87.5 92.0 89.7 PKT 83.9 91.0 87.5 92.1 90.0 Table 8: Ablation study of MobileBERT on GLUE dev accuracy and SQuAD v1.1 dev F1 score with Auxiliary Knowledge Transfer (AKT), Joint Knowledge Transfer (JKT), and Progressive Knowledge Transfer (PKT). As shown in Table 5, MobileBERT outperforms a large margin over all the other models with smaller or similar model sizes. 4.5 Quantization We apply the standard post-training quantization in TensorFlow Lite to MobileBERT. The results are shown in Table 6. We find that while quantization can further compress MobileBERT by 4×, there is nearly no performance degradation from it. This indicates that there is still a big room in the compression of MobileBERT. 4.6 Ablation Studies 4.6.1 Operational Optimizations We evaluate the effectiveness of the two operational optimizations introduced in Section 3.3, i.e., replacing layer normalization (LayerNorm) with NoNorm and replacing gelu activation with relu activation. We report the inference latencies using the same experimental setting as in Section 4.6.1. From Table 7, we can see that both NoNorm and relu are very effective in reducing the latency of MobileBERT, while the two operational optimizations do not reduce FLOPS. This reveals the gap between the real-world inference latency and the theoretical computation overhead (i.e., FLOPS). 4.6.2 Training Strategies We also study how the choice of training strategy, i.e., auxiliary knowledge transfer, joint knowledge transfer, and progressive knowledge transfer, can affect the performance of MobileBERT. As shown MNLI-m QNLI MRPC SST-2 BERTLARGE 86.6 92.1† 87.8 93.7 IB-BERTLARGE 87.0 93.2 87.3 94.1 BERTBASE 84.4 91.1† 86.7 92.9 MobileBERT (bare) 80.8 88.2 84.3 90.1 + PD 81.1 88.9 85.5 91.7 + PD + FMT 83.8 91.1 87.0 92.2 + PD + FMT + AT 84.4 91.5 87.0 92.5 Table 9: Ablation on the dev sets of GLUE benchmark. BERTBASE and the bare MobileBERT (i.e., w/o PD, FMT, AT, FMT & OPT) use the standard BERT pretraining scheme. PD, AT, FMT, and OPT denote Pretraining Distillation, Attention Transfer, Feature Map Transfer, and operational OPTimizations respectively. †marks our runs with the official code. in Table 8, progressive knowledge transfer consistently outperforms the other two strategies. We notice that there is a significant performance gap between auxiliary knowledge transfer and the other two strategies. 
We think the reason is that the intermediate layer-wise knowledge (i.e., attention maps and feature maps) from the teacher may not be optimal for the student, so the student needs an additional pre-training distillation stage to fine-tune its parameters. 4.6.3 Training Objectives We finally conduct a set of ablation experiments with regard to Attention Transfer (AT), Feature Map Transfer (FMT) and Pre-training Distillation (PD). The operational OPTimizations (OPT) are removed in these experiments to make a fair comparison between MobileBERT and the original BERT. The results are listed in Table 9. We can see that the proposed Feature Map Transfer contributes most to the performance improvement of MobileBERT, while Attention Transfer and Pre-training Distillation also play positive roles. We can also find that our IB-BERTLARGE teacher is as powerful as the original IB-BERTLARGE while MobileBERT degrades greatly when compared to its teacher. So we believe that there is still a big room in the improvement of MobileBERT. 5 Conclusion We have presented MobileBERT which is a taskagnostic compact variant of BERT. Empirical results on popular NLP benchmarks show that MobileBERT is comparable with BERTBASE while being much smaller and faster. MobileBERT can 2166 enable various NLP applications7 to be easily deployed on mobile devices. In this paper, we show that 1) it is crucial to keep MobileBERT deep and thin, 2) bottleneck/invertedbottleneck structures enable effective layer-wise knowledge transfer, and 3) progressive knowledge transfer can efficiently train MobileBERT. We believe our findings are generic and can be applied to other model compression problems. References Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer normalization. arXiv preprint arXiv:1607.06450. Luisa Bentivogli, Ido Dagan, Hoa Trang Dang, Danilo Giampiccolo, and Bernardo. Magnini. 2009. The fifth PASCAL recognizing textual entailment challenge. TAC. Cristian Buciluˇa, Rich Caruana, and Alexandru Niculescu-Mizil. 2006. Model compression. In Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 535–541. ACM. Daniel Cer, Mona Diab, Eneko Agirre, Inigo LopezGazpio, and Lucia Specia. 2017. Semeval-2017 task 1: Semantic textual similarity-multilingual and cross-lingual focused evaluation. arXiv preprint arXiv:1708.00055. Z. Chen, H. Zhang, X. Zhang, and L. Zhao. 2018. Quora question pairs. Quora. Christopher Clark and Matt Gardner. 2017. Simple and effective multi-paragraph reading comprehension. arXiv preprint arXiv:1710.10723. Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D Manning. 2019a. What does bert look at? an analysis of bert’s attention. arXiv preprint arXiv:1906.04341. Kevin Clark, Minh-Thang Luong, Urvashi Khandelwal, Christopher D Manning, and Quoc V Le. 2019b. Bam! born-again multi-task networks for natural language understanding. arXiv preprint arXiv:1907.04829. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. William B Dolan and Chris. Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In Proceedings of the International Workshop on Paraphrasing. 7https://tensorflow.org/lite/models/ bert_qa/overview Fei Gao, Lijun Wu, Li Zhao, Tao Qin, Xueqi Cheng, and Tie-Yan Liu. 2018. Efficient sequence learning with group recurrent networks. 
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 799–808. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770– 778. Dan Hendrycks and Kevin Gimpel. 2016. Bridging nonlinearities and stochastic regularizers with gaussian error linear units. arXiv. Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531. Andrew Howard, Mark Sandler, Grace Chu, LiangChieh Chen, Bo Chen, Mingxing Tan, Weijun Wang, Yukun Zhu, Ruoming Pang, Vijay Vasudevan, et al. 2019. Searching for mobilenetv3. arXiv preprint arXiv:1905.02244. Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. 2017. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861. Forrest N Iandola, Song Han, Matthew W Moskewicz, Khalid Ashraf, William J Dally, and Kurt Keutzer. 2016. Squeezenet: Alexnet-level accuracy with 50x fewer parameters and¡ 0.5 mb model size. arXiv preprint arXiv:1602.07360. Ganesh Jawahar, Benoˆıt Sagot, Djam´e Seddah, Samuel Unicomb, Gerardo I˜niguez, M´arton Karsai, Yannick L´eo, M´arton Karsai, Carlos Sarraute, ´Eric Fleury, et al. 2019. What does bert learn about the structure of language? In 57th Annual Meeting of the Association for Computational Linguistics (ACL), Florence, Italy. Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. 2019. Tinybert: Distilling bert for natural language understanding. arXiv preprint arXiv:1909.10351. Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S Weld, Luke Zettlemoyer, and Omer Levy. 2019. Spanbert: Improving pre-training by representing and predicting spans. arXiv preprint arXiv:1907.10529. Yoon Kim and Alexander M Rush. 2016. Sequencelevel knowledge distillation. arXiv preprint arXiv:1606.07947. Oleksii Kuchaiev and Boris Ginsburg. 2017. Factorization tricks for lstm networks. arXiv preprint arXiv:1703.10722. 2167 Taku Kudo and John Richardson. 2018. Sentencepiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. arXiv preprint arXiv:1808.06226. Hector J Levesque, Ernest Davis, and Leora. Morgenstern. 2011. The Winograd schema challenge. In AAAI Spring Symposium: Logical Formalizations of Commonsense Reasoning., volume 46, page 47. Zhuohan Li, Di He, Fei Tian, Tao Qin, Liwei Wang, and Tie-Yan Liu. 2019. Hint-based training for nonautoregressive translation. Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. 2019a. Multi-task deep neural networks for natural language understanding. arXiv preprint arXiv:1901.11504. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. Ping Luo, Zhenyao Zhu, Ziwei Liu, Xiaogang Wang, and Xiaoou Tang. 2016. Face model compression by distilling knowledge from neurons. In Thirtieth AAAI Conference on Artificial Intelligence. Brian W Matthews. 1975. Comparison of the predicted and observed secondary structure of t4 phage lysozyme. Biochimica et Biophysica Acta (BBA)Protein Structure, 405(2):442–451. 
Vinod Nair and Geoffrey E Hinton. 2010. Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th international conference on machine learning (ICML-10), pages 807–814. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. arXiv preprint arXiv:1802.05365. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. URL https://s3-us-west-2. amazonaws. com/openaiassets/researchcovers/languageunsupervised/language understanding paper. pdf. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8). Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don’t know: Unanswerable questions for squad. arXiv preprint arXiv:1806.03822. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250. Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio. 2014. Fitnets: Hints for thin deep nets. arXiv preprint arXiv:1412.6550. Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. 2018. Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4510– 4520. Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher. Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of EMNLP., pages 1631–1642. Siqi Sun, Yu Cheng, Zhe Gan, and Jingjing Liu. 2019. Patient knowledge distillation for bert model compression. arXiv preprint arXiv:1908.09355. Mingxing Tan, Bo Chen, Ruoming Pang, Vijay Vasudevan, Mark Sandler, Andrew Howard, and Quoc V Le. 2019. Mnasnet: Platform-aware neural architecture search for mobile. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2820–2828. Raphael Tang, Yao Lu, Linqing Liu, Lili Mou, Olga Vechtomova, and Jimmy Lin. 2019. Distilling taskspecific knowledge from bert into simple neural networks. arXiv preprint arXiv:1903.12136. Henry Tsai, Jason Riesa, Melvin Johnson, Naveen Arivazhagan, Xin Li, and Amelia Archer. 2019. Small and practical bert models for sequence labeling. arXiv preprint arXiv:1909.00100. Iulia Turc, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Well-read students learn better: The impact of student initialization on knowledge distillation. arXiv preprint arXiv:1908.08962. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461. Alex Warstadt, Amanpreet Singh, and Samuel R. Bowman. 2018. Neural network acceptability judgments. arXiv preprint 1805.12471. Adina Williams, Nikita Nangia, and Samuel R. Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of NAACL-HLT. 2168 Xing Wu, Shangwen Lv, Liangjun Zang, Jizhong Han, and Songlin Hu. 2019. 
Conditional bert contextual augmentation. In International Conference on Computational Science, pages 84–95. Springer. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237. Doyeob Yeo, Ji-Roon Bae, Nae-Soo Kim, Cheol-Sig Pyo, Junho Yim, and Junmo Kim. 2018. Sequential knowledge transfer in teacher-student framework using densely distilled flow-based information. In 2018 25th IEEE International Conference on Image Processing (ICIP), pages 674–678. IEEE. Yang You, Jing Li, Sashank Reddi, Jonathan Hseu, Sanjiv Kumar, Srinadh Bhojanapalli, Xiaodan Song, James Demmel, and Cho-Jui Hsieh. 2019. Large batch optimization for deep learning: Training bert in 76 minutes. arXiv preprint arXiv:1904.00962. Sergey Zagoruyko and Nikos Komodakis. 2016. Paying more attention to attention: Improving the performance of convolutional neural networks via attention transfer. arXiv preprint arXiv:1612.03928. Ting Zhang, Guo-Jun Qi, Bin Xiao, and Jingdong Wang. 2017. Interleaved group convolutions. In Proceedings of the IEEE International Conference on Computer Vision, pages 4373–4382. Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, and Jian Sun. 2018. Shufflenet: An extremely efficient convolutional neural network for mobile devices. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6848–6856. Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In Proceedings of the IEEE international conference on computer vision, pages 19– 27. Appendix for “MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices” A Extra Related Work on Knowledge Transfer Exploiting knowledge transfer to compress model size was first proposed by Buciluˇa et al. (2006). The idea was then adopted in knowledge distillation (Hinton et al., 2015), which requires the smaller student network to mimic the class distribution output of the larger teacher network. Fitnets (Romero et al., 2014) make the student mimic the intermediate hidden layers of the teacher to train narrow and deep networks. Luo et al. (2016) show that the knowledge of the teacher can also be obtained from the neurons in the top hidden layer. Similar to our proposed progressive knowledge transfer scheme, Yeo et al. (2018) proposed a sequential knowledge transfer scheme to distill knowledge from a deep teacher into a shallow student in a sequential way. Zagoruyko and Komodakis (2016) proposed to transfer the attention maps of the teacher on images. Li et al. (2019) proposed to transfer the similarity of hidden states and word alignment from an autoregressive Transformer teacher to a non-autoregressive student. B Extra Related Work on Compact Architecture Design While much recent research has focused on improving efficient Convolutional Neural Networks (CNN) for mobile vision applications (Iandola et al., 2016; Howard et al., 2017; Zhang et al., 2017, 2018; Sandler et al., 2018; Tan et al., 2019; Howard et al., 2019), they are usually tailored for CNN. Popular lightweight operations such as depth-wise convolution (Howard et al., 2017) cannot be directly applied to Transformer or BERT. 
In the NLP literature, the most relevant work can be group LSTMs (Kuchaiev and Ginsburg, 2017; Gao et al., 2018), which employs the idea of group convolution (Zhang et al., 2017, 2018) into Recurrent Neural Networks (RNN). C Visualization of Attention Distributions We visualize the attention distributions of the 1st and the 12th layers of a few models in the ablation study for further investigation. They are shown in Figure 3. We find that the proposed attention transfer can help the student mimic the attention distributions of the teacher very well. Surprisingly, we find that the attention distributions in the attention heads of ”MobileBERT(bare)+PD+FMT” are exactly a re-order of those of ”MobileBERT(bare)+PD+FMT+AT” (also the teacher model), even if it has not been trained by the attention transfer objective. This phenomenon indicates that multi-head attention is a crucial and unique part of the non-linearity of BERT. Moreover, it can explain the minor improvements of Attention Transfer in the ablation study table, since the alignment of feature maps lead to the alignment of attention distributions. 2169 L1 H1 L1 H2 L1 H3 L1 H4 L12 H1 L12 H2 L12 H3 L12 H4 MobileBERT (bare) + PD + FMT + AT IB-BERT Teacher MobileBERT (bare) MobileBERT (bare) + PD + FMT MobileBERT (bare) + PD Figure 3: The visualization of the attention distributions in some attention heads of the IB-BERT teacher and different MobileBERT models. D Extra Experimental Settings For a fair comparison with original BERT, we follow the same pre-processing scheme as BERT, where we mask 15% of all WordPiece (Kudo and Richardson, 2018) tokens in each sequence at random and use next sentence prediction. Please note that MobileBERT can be potentially further improved by several training techniques recently introduced, such as span prediction (Joshi et al., 2019) or removing next sentence prediction objective (Liu et al., 2019b). We leave it for future work. In pre-training distillation, the hyperparameter α is used to balance the original masked language modeling loss and the distillation loss. Following (Kim and Rush, 2016), we set α to 0.5. E Architecture of MobileBERTTINY We use a lighter MHA structure for MobileBERTTINY. As illustrated in Figure 4, in stead of using hidden states from the inter-block feature maps as inputs to MHA, we use the reduced intra-block feature maps as key, query, and values in MHA for MobileBERTTINY. This can effectively reduce the parameters in MHA modules, but might harm the model capacity. F GLUE Dataset In this section, we provide a brief description of the tasks in the GLUE benchmark (Wang et al., 2018). CoLA The Corpus of Linguistic Acceptability (Warstadt et al., 2018) is a collection of English acMulti-Head Attention Add & Norm Feed Forward Add & Norm Add & Norm Linear Linear xF (c) Embedding Classifier Figure 4: Illustration of MobileBERTTINY. red lines denote inter-block flows while blue lines intra-block flows. ceptability judgments drawn from books and journal articles on linguistic theory. The task is to predict whether an example is a grammatical English sentence and is evaluated by Matthews correlation coefficient (Matthews, 1975). SST-2 The Stanford Sentiment Treebank (Socher et al., 2013) is a collection of sentences from movie reviews and human annotations of their sentiment. The task is to predict the sentiment of a given sentence and is evaluated by accuracy. 
2170 MRPC The Microsoft Research Paraphrase Corpus (Dolan and Brockett, 2005) is a collection of sentence pairs automatically extracted from online news sources. They are labeled by human annotations for whether the sentences in the pair are semantically equivalent. The performance is evaluated by both accuracy and F1 score. STS-B The Semantic Textual Similarity Benchmark (Cer et al., 2017) is a collection of sentence pairs drawn from news headlines, video and image captions, and natural language inference data. Each pair is human-annotated with a similarity score from 1 to 5. The task is to predict these scores and is evaluated by Pearson and Spearman correlation coefficients. QQP The Quora Question Pairs8 (Chen et al., 2018) dataset is a collection of question pairs from the community question-answering website Quora. The task is to determine whether a pair of questions are semantically equivalent and is evaluated by both accuracy and F1 score. MNLI The Multi-Genre Natural Language Inference Corpus (Williams et al., 2018) is a collection of sentence pairs with textual entailment annotations. Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (entailment ), contradicts the hypothesis (contradiction), or neither (neutral) and is evaluated by accuracy on both matched (indomain) and mismatched (cross-domain) sections of the test data. QNLI The Question-answering NLI dataset is converted from the Stanford Question Answering Dataset (SQuAD) (Rajpurkar et al., 2016). The task is to determine whether the context sentence contains the answer to the question and is evaluated by the test accuracy. RTE The Recognizing Textual Entailment (RTE) datasets come from a series of annual textual entailment challenges (Bentivogli et al., 2009). The task is to predict whether sentences in a sentence pair are entailment and is evaluated by accuracy. WNLI The Winograd Schema Challenge (Levesque et al., 2011) is a reading comprehension task in which a system must read a sentence with a pronoun and select the referent of that pronoun 8https://data.quora.com/ First-Quora-Dataset-Release-Question-Pairs from a list of choices. We follow Devlin et al. (2018) to skip this task in our experiments, because few previous works do better than predicting the majority class for this task.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2171–2176 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2171 On Importance Sampling-Based Evaluation of Latent Language Models Robert L. Logan IV Univ. of California, Irvine [email protected] Matt Gardner Allen Institute for AI [email protected] Sameer Singh Univ. of California, Irvine [email protected] Abstract Language models that use additional latent structures (e.g., syntax trees, coreference chains, and knowledge graph links) provide several advantages over traditional language models. However, likelihood-based evaluation of these models is often intractable as it requires marginalizing over the latent space. Existing methods avoid this issue by using importance sampling. Although this approach has asymptotic guarantees, analysis is rarely conducted on the effect of decisions such as sample size, granularity of sample aggregation, and the proposal distribution on the reported estimates. In this paper, we measure the effect these factors have on perplexity estimates for three different latent language models. In addition, we elucidate subtle differences in how importance sampling is applied, which can have substantial effects on the final estimates, as well as provide theoretical results that reinforce the validity of importance sampling for evaluating latent language models. 1 Introduction Latent language models are generative models of text that jointly represent the text and the latent structure underlying it, such as: the syntactic parse, coreference chains between entity mentions, or links of entities and relations mentioned in the text to an external knowledge graph. The benefits of modeling such structure include interpretability (Hayashi et al., 2020), better performance on tasks requiring structure (Dyer et al., 2016; Ji et al., 2017), and improved ability to generate consistent mentions of entities (Clark et al., 2018) and factually accurate text (Logan et al., 2019). Unfortunately, demonstrating that these models provide better performance than traditional language models by evaluating their likelihood on benchmark data can be difficult, as exact computation requires marginalizing over all possible latent structures. Existing approaches evaluate their models by estimating likelihoods using importance sampling, i.e. a weighted average over latent states sampled from a proposal distribution. Although convergence of importance sampled estimates is asymptotically guaranteed, results are typically produced using a small number of samples for which this guarantee does not necessarily apply. Furthermore, these works employ a variety of heuristics—such as sampling from proposal distributions that are conditioned on future gold tokens the model is being evaluated on, and changing the temperature of the proposal distribution—without providing measurements of the effect these decisions have on estimated perplexity, and often omitting details crucial to replicating their results. In this paper, we seek to fill in this missing knowledge, and put this practice on more rigorous footing. First, we review the theory of importance sampling, providing proof that importance sampled perplexity estimates are stochastic upper bounds of the true perplexity—a previously unnoted justification for this evaluation technique. 
In addition, we compile a list of common practices used in three previous works—RNNG (Dyer et al., 2016), EntityNLM (Ji et al., 2017) and KGLM (Logan et al., 2019)—and uncover a difference in the granularity at which importance samples are aggregated in these works that has a substantial effect on the final estimates. We also investigate a direct marginalization alternative to importance sampling based on beam search that produces strict bounds, and in some cases, has similar performance. Last, we perform experiments to measure the effect of varying sample size, aggregation method, and choice of proposal distribution for these models, an analysis that is missing from previous work. From these results we conclude a set of best practices to be used in future work. 2172 x Kawhi to join L.A. Clippers . He ... EntityNLM t 1 0 0 1 1 0 1 ... e 1 ∅ ∅ 2 2 ∅ 1 ... l 1 1 1 2 1 1 1 ... KGLM t new ∅ ∅ related ∅ related ... s ∅ ∅ ∅ kawhi_leonard ∅ kawhi_leonard ... r ∅ ∅ ∅ playerFor ∅ reflexive ... o kawhi_leonard ∅ ∅ la_clippers ∅ kawhi_leonard ... Figure 1: EntityNLM and KGLM latent states. For EntityNLM, z = (t, e, l), where t denotes whether the token is part of a mention, e denotes the coreference cluster, and l denotes the remaining mention length. For KGLM, z = (t, s, r, o), where t has the same meaning, and s, r and o associate tokens to edges in a knowledge graph. 2 Inference in Latent LMs In this section, we provide an overview of importance sampling-based inference in latent language models, as well as some key theoretical results. Latent LMs A latent language model is a generative model which estimates the joint distribution p(x, z) of a sequence of text x = (x1, . . . , xT) and its underlying latent structure z. In this paper, we focus on three models: • RNNG (Dyer et al., 2016) which models syntactic structure, • EntityNLM (Ji et al., 2017) which models coreference chains, and • KGLM (Logan et al., 2019) which models links to an external knowledge graph. Example latent states for EntityNLM and KGLM are depicted in Figure 1, showing latent coreference chains and links to the knowledge graph. Other notable latent language models include the NKLM (Ahn et al., 2016) and LRLM (Hayashi et al., 2020); we do not study them since they use alternatives to importance sampling (e.g., the forward-backward algorithm). Perplexity The standard evaluation metric for language models is perplexity: PPL = exp −1 T T X t=1 log p(xt|x<t) , (1) where p(xt|x<t) is the marginal likelihood of the token xt conditioned on the previous tokens x<t. By the chain rule of probabilities p(x) = QT t=1 p(xt|x<t). Perplexity can be intractable to compute for latent language models since it requires marginalizing out the latent variable (e.g., p(x) = P z p(x, z)) whose state space is often exponential in the length of the text. Importance Sampling Existing approaches instead use importance sampling (Kahn, 1950) to estimate an approximate marginal probability: ˆp(x) = 1 K K X k=1 p(x, zk) q(zk) , (2) where q(z) is an arbitrary proposal distribution and z1, . . . , zK ∼q(z). It is well known that ˆp(x) is an unbiased estimator: Ezk∼q(z)  ˆp(x) = p(x), (3) provided that q(z) > 0 whenever p(z) > 0. For proof and further details on importance sampling, we refer the reader to Owen (2013). Stochastic Upper Bound A consequence of Eqn (3) is that, due to Jensen’s inequality: Ezk∼q(z) log ˆp(x) ≤log p(x). 
(4) In other words, importance sampled estimates of a model’s perplexity are stochastic upper bounds of the true perplexity. This property has not been stated in prior work on latent language modeling, yet is an important consideration since it implies that importance sampled perplexities can be reliably used to compare against existing baselines. Limiting Behavior Another important observation is that importance sampled estimates of perplexity are consistent, e.g., will converge as the number of samples approaches infinity. To prove this, we first observe that ˆp(x) is consistent, which is a well-known consequence of the strong law of large numbers (Geweke, 1989). Accordingly, log ˆp(x) is also consistent due to the continuous mapping theorem (Van der Vaart, 2000). 2173 3 Common Practices Implementing importance sampling for evaluating latent language models involves a number of decisions that need to be made. We need to select the number of samples, choose the proposal distribution, and decide whether to aggregate importance sampled estimates at the instance or corpus level. We list the practices used in previous work.1 Sample Size Typically, only 100 samples are used for computing the perplexity. A notable exception is Kim et al. (2019)’s follow-up to RNNG that uses 1000 samples. Proposal Distribution Previous work uses proposal distributions q(z|x) that are essentially discriminative versions of the generative model (e.g., they are models that predict the latent state conditioned on the text), with one key distinction: they are conditioned not only on the sequence of tokens that have been observed so far, but also on future tokens that the model will be evaluated on (a trait we will refer to as peeking). This conditioning behavior does not contradict any of the assumptions in Eqn’s (3) and (4), and is useful in preventing generation of invalid structures (for instance, parse trees with more leaves then there are words in the text), or ones that are inconsistent with future tokens. Dyer et al. (2016) and Kim et al. (2019) also increase the entropy of the proposal distribution by dividing logits by a temperature parameter τ (respectively using τ = 1.25 and τ = 2.0). Aggregation An oft-overlooked fact (unnoted in previous work) is that Eqn (2) can be substituted into Eqn (1) in multiple ways. Letting xC = {x1, . . . xN} denote a corpus of evaluation data comprised of instances (token sequences) xn, estimates can be formed at the instance level: d PPLI = exp −1 T N X n=1 log ˆp(xn) , (5) or at the corpus level: d PPLC = exp −1 T log ˆp(xC) ! , (6) i.e., average is either over each instance or the whole corpus.2 RNNG and EntityNLM perform instance-level aggregation, whereas KGLM performs corpus-level aggregation. Note that these 1Based both on the cited papers and available source code. 2 One could also consider token-level estimates. To our knowledge, these have been unused by existing work. 0 200 400 600 800 1000 84 86 88 90 Perplexity RNNG τ = 0.5 τ = 0.9 τ = 1.0 τ = 1.1 τ = 2.0 No Peeking 0 200 400 600 800 1000 108 110 112 114 116 118 Perplexity ENTITYNLM 0 200 400 600 800 1000 Sample Size 25 50 75 100 125 Perplexity KGLM Figure 2: Effect of increasing the number of samples on instance-level perplexity estimates for different proposal distributions. formulations are equivalent when not aggregating over samples, i.e. for non-latent language models. 4 Critical Evaluation Thus far, research has neglected to measure the effectiveness of the practices detailed in Section 3. 
In the following section, we perform experiments to determine whether reporting estimates obtained from small sample sizes is warranted, as well as better understand the consequences of peeking and scaling the temperature of the proposal distribution. Setup For our experiments, we use Kim et al. (2019)’s RNNG implementation3, and Logan et al. (2019)’s EntityNLM and KGLM implementations4. For RNNG and KGLM we use the pre3https://github.com/harvardnlp/urnng 4https://github.com/rloganiv/kglm-model 2174 trained model weights. For EntityNLM we train the model from scratch following the procedure described by Ji et al. (2017); results may not be directly comparable due to differences in data preprocessing and hyperparameters. We evaluate models on the datasets used in their original papers: RNNG is evaluated on the Penn Treebank corpus (Marcus et al., 1993), EntityNLM is evaluated on English data from the CoNLL 2012 shared task (Pradhan et al., 2014), and KGLM is evaluated on the Linked WikiText-2 corpus (Logan et al., 2019). Experiments For EntityNLM and KGLM, we experiment with two kinds of proposal distributions: (1) the standard peeking proposal distribution that conditions on future evaluation data, and (2) a non-peeking variant that is conditioned only on the data observed by the model (this is akin to estimating perplexity by ancestral sampling). For RNNG we only experiment with peeking proposals, since a non-peeking variant generates invalid parse trees. For the peeking proposal distribution, we experiment with applying temperatures τ ∈[0.5, 0.9, 1.0, 1.1, 2.0, 5.0]. We report both corpus-level and instance-level estimates, as well as bounds produced using a direct, beam marginalization method we describe later. Sample Size We plot instance-level perplexity estimates as sample size is varied in Figures 2 and 3. We observe that the curves are monotonically decreasing in all settings. Consistent with our observation that importance sampled estimates of perplexity are a stochastic upper bound, this demonstrates that the bound is improved as sample size increases. Furthermore, none of the curves exhibit any signs of convergence even after drawing orders of magnitude more samples (Figure 3); the estimated model perplexities continue to improve. Thus, the performance of these models is likely better than the originally reported estimates. Aggregation Final estimates of perplexity computed using both corpus- and instance-level estimates are provided in Table 1. We note that instance-level estimates are uniformly lower by a wide margin. For example, using a temperature of τ = 1.1 the estimated KGLM perplexity is approximately 10 nats lower using instance-level estimates. This is substantially better than the perplexity of 43 nats reported by Logan et al. (2019). Proposal Distribution These results also appear to indicate that choice of proposal distribution has a substantial effect on estimated perplexity. However, RNNG Ent KGLM Corpus-level τ = 0.5 94.4 122.6 101.9 τ = 0.9 96.0 122.7 59.3 τ = 1.0 96.7 120.8 48.2 τ = 1.1 97.9 120.7 41.7 τ = 2.0 121.6 120.5 170.0 τ = 5.0 734.0 152.5 7,468.7 No Peeking 131.7 86.8 Instance-level τ = 0.5 85.3 113.5 99.3 τ = 0.9 84.4 110.6 48.1 τ = 1.0 84.2 110.0 36.6 τ = 1.1 84.0 109.9 29.6 τ = 2.0 83.8 109.0 90.7 τ = 5.0 97.2 129.6 3,756.1 No Peeking 113.9 71.9 Table 1: Final perplexity estimates using different proposal distributions, estimated at both the instance and corpus level. 
τ is temperature, and No Peeking refers to proposal distributions that are not conditioned on future outputs. RNNG Ent KGLM k = 1 96.3 150.2 153.7 k = 10 87.0 147.1 152.6 k = 100 84.3 144.5 Table 2: Strict perplexity upper bounds obtained by marginalizing over the top-k states predicted by q(z|x) using beam search. it could also be the case that the observed differences in performance across proposal distributions are due to random chance. We investigate whether this is the case for EntityNLM by examining the approximate density of perplexity estimates after drawing 100 importance samples (shown in Figure 4).5 Our results illustrate that the estimates are relatively stable; although there is some overlap between the better performing temperature values, the order of the modes matches the order reported in Table 1, and there is clear separation from the estimates produced when τ = 0.5 or by the nonpeeking proposal distribution. Due to the relative cost of sampling we did not replicate this experiment for RNNG and KGLM.6 5Obtained by Monte Carlo sampling 100 times. 6 Figs 3 & 4 took 1 week on a cluster of 15 NVidia 1080Tis. 2175 0 2000 4000 6000 8000 10000 Sample Size 106 108 110 112 114 116 118 Perplexity ENTITYNLM τ = 0.5 τ = 0.9 τ = 1.0 τ = 1.1 τ = 2.0 No Peeking Figure 3: EntityNLM instance-level perplexity estimates as the number of samples is increased to 10K. In general, we observe the peeking proposal distributions produce better estimates, and that better performance is obtained using temperatures that slightly increase the entropy of the proposal distribution (e.g., τ ∈[1.1, 2.0]), although the ideal amount varies across models. We also observe that the relative performance of proposal distributions is mostly preserved as the number of samples is increased. This suggests that good temperature parameters can be quickly identified by running many experiments with a small number of samples. Beam Marginalization An alternative to importance sampling is to directly marginalize over a subset of z values where we expect p(x|z) is large. Specifically, we propose using the top-k most likely values of z identified by performing beam search using the proposal distribution q(z|x). We will refer to this as beam marginalization. Because marginalization is only performed over a subset of the space, this method produces a strict upper bound of the true perplexity. Perplexity bounds obtained using beam marginalization are reported in Table 2. This method produces bounds close to the instance-level importance sampled estimates for RNNG, but does not perform well for the other models. This is likely due to the fact that latent space of RNNG (which operates on sentences and parse trees) is much smaller than EntityNLM and KGLM (which operate on documents and coreference chains/knowledge graphs). Best Practices From these results we recommend the following practices for future work utilizing importance sampling: (1) aggregate importance samples at the instance level, (2) condition on all avail111 113 115 117 119 Perplexity (100 Samples) ENTITYNLM Figure 4: Approximate density of EntityNLM perplexity estimates after drawing 100 importance samples (colors same as Figure 3). able information when designing proposals, (3) try increased temperatures when generating samples from the proposal distribution, good temperatures can be identified using relatively few samples, and (4) utilize as many samples as possible. 
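To make these recommendations concrete, the sketch below computes an instance-level, importance-sampled perplexity estimate (Eqns 2 and 5) with a tempered, peeking proposal. The model and proposal interfaces (joint_log_prob, sample) are hypothetical placeholders, not the APIs of the RNNG, EntityNLM, or KGLM codebases.

```python
# Minimal sketch of instance-level, importance-sampled perplexity.
# `model.joint_log_prob(x, z)` returns log p(x, z); `proposal.sample(x, ...)`
# returns a latent sample z and its log-probability log q(z|x). Both are
# hypothetical interfaces used only for illustration.
import math

def log_p_hat(x, model, proposal, num_samples=1000, temperature=1.1):
    """Importance-sampled estimate of log p(x) for one instance (Eqn 2)."""
    log_weights = []
    for _ in range(num_samples):
        # Peeking proposal: conditioned on the full token sequence x, with
        # logits divided by a temperature > 1 to increase its entropy.
        z, log_q = proposal.sample(x, temperature=temperature)
        log_weights.append(model.joint_log_prob(x, z) - log_q)
    # log( (1/K) * sum_k exp(log_weights_k) ), computed stably.
    m = max(log_weights)
    return (m + math.log(sum(math.exp(w - m) for w in log_weights))
            - math.log(num_samples))

def instance_level_perplexity(corpus, model, proposal, **kwargs):
    """exp(-(1/T) * sum_n log p_hat(x_n)), with T the total token count (Eqn 5)."""
    total_log_p, total_tokens = 0.0, 0
    for x in corpus:
        total_log_p += log_p_hat(x, model, proposal, **kwargs)
        total_tokens += len(x)
    return math.exp(-total_log_p / total_tokens)
```

Consistent with Section 2, the returned value is a stochastic upper bound on the true perplexity, and it tends to decrease as num_samples grows.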
In addition, consider using beam marginalization in applications where strict upper bounds are needed. 5 Conclusion We investigate the application of importance sampling to evaluating latent language models. Our contributions include: (1) showing that importance sampling produces stochastic upper bounds of perplexity, thereby justifying the use of such estimates for comparing language model performance, (2) a concise description of (sometimes unstated) common practices used in applying this technique, (3) a simple direct marginalization-based alternative to importance sampling, and (4) experimental results demonstrating the effect of sample size, sampling distribution, and granularity on estimates. While this work helps clarify and validate existing results, we also observe that none of the estimates appear to converge even after drawing large numbers of samples. Thus, we encourage future research into obtaining tighter bounds on latent LM perplexity, possibly by using more powerful proposal distributions that consider entire documents as context, or by considering methods such as annealed importance sampling. Acknowledgements We would like to thank Alex Boyd for helpful discussions. This work was funded in part by Allen Institute of Artificial Intelligence, the NSF award #IIS-1817183, and in part by the DARPA MCS program under contract No. N660011924033 with the United States Office of Naval Research. 2176 References Sungjin Ahn, Heeyoul Choi, Tanel Pärnamaa, and Yoshua Bengio. 2016. A neural knowledge language model. arXiv preprint arXiv:1608.00318. Elizabeth Clark, Yangfeng Ji, and Noah A. Smith. 2018. Neural text generation in stories using entity representations as context. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2250–2260, New Orleans, Louisiana. Association for Computational Linguistics. Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. 2016. Recurrent neural network grammars. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 199–209, San Diego, California. Association for Computational Linguistics. John Geweke. 1989. Bayesian inference in econometric models using monte carlo integration. Econometrica, 57(6):1317–1339. Hiroaki Hayashi, Zecong Hu, Chenyan Xiong, and Graham Neubig. 2020. Latent relation language models. In Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI), New York, USA. Yangfeng Ji, Chenhao Tan, Sebastian Martschat, Yejin Choi, and Noah A. Smith. 2017. Dynamic entity representations in neural language models. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1830– 1839, Copenhagen, Denmark. Association for Computational Linguistics. Herman Kahn. 1950. Random sampling (monte carlo) techniques in neutron attenuation problems–i. Nucleonics, 6(5):27–passim. Yoon Kim, Alexander Rush, Lei Yu, Adhiguna Kuncoro, Chris Dyer, and Gábor Melis. 2019. Unsupervised recurrent neural network grammars. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1105–1117, Minneapolis, Minnesota. Association for Computational Linguistics. Robert Logan, Nelson F. Liu, Matthew E. Peters, Matt Gardner, and Sameer Singh. 2019. 
Barack’s wife hillary: Using knowledge graphs for fact-aware language modeling. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5962–5971, Florence, Italy. Association for Computational Linguistics. Mitchell Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of english: The penn treebank. Art B. Owen. 2013. Monte Carlo theory, methods and examples. Sameer Pradhan, Xiaoqiang Luo, Marta Recasens, Eduard Hovy, Vincent Ng, and Michael Strube. 2014. Scoring coreference partitions of predicted mentions: A reference implementation. In Proceedings of the conference. Association for Computational Linguistics. Meeting, volume 2014, page 30. NIH Public Access. Aad W Van der Vaart. 2000. Asymptotic statistics, volume 3. Cambridge university press.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2177–2190 July 5 - 10, 2020. ©2020 Association for Computational Linguistics

SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization

Haoming Jiang∗ Georgia Tech [email protected] Pengcheng He, Weizhu Chen Microsoft Dynamics 365 AI {penhe,wzchen}@microsoft.com Xiaodong Liu, Jianfeng Gao Microsoft Research {xiaodl,jfgao}@microsoft.com Tuo Zhao Georgia Tech [email protected]

∗Work was done during an internship at Microsoft Dynamics 365 AI.

Abstract

Transfer learning has fundamentally changed the landscape of natural language processing (NLP). Many state-of-the-art models are first pre-trained on a large text corpus and then fine-tuned on downstream tasks. However, due to limited data resources from downstream tasks and the extremely high complexity of pre-trained models, aggressive fine-tuning often causes the fine-tuned model to overfit the training data of downstream tasks and fail to generalize to unseen data. To address this issue in a principled manner, we propose a new learning framework for robust and efficient fine-tuning of pre-trained models to attain better generalization performance. The proposed framework contains two important ingredients: 1. Smoothness-inducing regularization, which effectively manages the complexity of the model; 2. Bregman proximal point optimization, which is an instance of trust-region methods and can prevent aggressive updating. Our experiments show that the proposed framework achieves new state-of-the-art performance on a number of NLP tasks including GLUE, SNLI, SciTail and ANLI. Moreover, it also outperforms the state-of-the-art T5 model, which is the largest pre-trained model containing 11 billion parameters, on GLUE.1

1https://github.com/namisan/mt-dnn

1 Introduction

The success of natural language processing (NLP) techniques relies on huge amounts of labeled data in many applications. However, large amounts of labeled data are usually prohibitive or expensive to obtain. To address this issue, researchers have resorted to transfer learning.

Transfer learning considers the scenario where we have limited labeled data from the target domain for a certain task, but we have relevant tasks with a large amount of data from different domains (also known as out-of-domain data). The goal is to transfer the knowledge from the high-resource domains to the low-resource target domain. Here we are particularly interested in the popular two-stage transfer learning framework (Pan and Yang, 2009). The first stage is pre-training, where a high-capacity model is trained on the out-of-domain, high-resource relevant tasks. The second stage is fine-tuning, where the high-capacity model is adapted to the low-resource task in the target domain.

For many applications in NLP, the most popular transfer learning methods choose to pre-train a large language model, e.g., ELMo (Peters et al., 2018), GPT (Radford et al., 2019) and BERT (Devlin et al., 2019). Such a language model can capture general semantic and syntactic information that can be further used in downstream NLP tasks. The language model is particularly attractive because it can be trained in a completely unsupervised manner with huge amounts of unlabeled data, which are extremely cheap to fetch from the internet nowadays. The resulting extremely large multi-domain text corpus allows us to train huge language models. To the best of our knowledge, by far the largest language model, T5, has an enormous size of about 11 billion parameters (Raffel et al., 2019).
To the best of our knowledge, by far the largest language model, T5, has an enormous size of about 11 billion parameters (Raffel et al., 2019). For the second fine-tuning stage, researchers adapt the pre-trained language model to the target task/domain. They usually replace the top layer of the language model by a task/domainspecific sub-network, and then continue to train the new model with the limited data of the target task/domain. Such a fine-tuning approach accounts for the low-resource issue in the target task/domain, and has achieved state-of-the-art performance in many popular NLP benchmarks (Devlin et al., 2019; Liu et al., 2019c; Yang et al., 2178 2019; Lan et al., 2019; Dong et al., 2019; Raffel et al., 2019). Due to the limited data from the target task/domain and the extremely high complexity of the pre-trained model, aggressive fine-tuning often makes the adapted model overfit the training data of the target task/domain and therefore does not generalize well to unseen data. To mitigate this issue, the fine-tuning methods often rely on hyper-parameter tuning heuristics. For example, Howard and Ruder (2018) use a heuristic learning rate schedule and gradually unfreeze the layers of the language model to improve the fine-tune performance; Peters et al. (2019) give a different suggestion that they only adapt certain layers and freeze the others; (Houlsby et al., 2019; Stickland and Murray, 2019) propose to add additional layers to the pre-trained model and fine-tune both of them or only the additional layers. However, these methods require significant tuning efforts. To fully harness the power of fine-tuning in a more principled manner, we propose a new learning framework for robust and efficient fine-tuning on the pre-trained language models through regularized optimization techniques. Specifically, our framework consists of two important ingredients for preventing overfitting: (I) To effectively control the extremely high complexity of the model, we propose a Smoothnessinducing Adversarial Regularization technique. Our proposed regularization is motivated by local shift sensitivity in existing literature on robust statistics. Such regularization encourages the output of the model not to change much, when injecting a small perturbation to the input. Therefore, it enforces the smoothness of the model, and effectively controls its capacity (Mohri et al., 2018). (II) To prevent aggressive updating, we propose a class of Bregman Proximal Point Optimization methods. Our proposed optimization methods introduce a trust-region-type regularization (Conn et al., 2000) at each iteration, and then update the model only within a small neighborhood of the previous iterate. Therefore, they can effectively prevent aggressive updating and stabilize the finetuning process. We compare our proposed method with several state-of-the-art competitors proposed in (Zhu et al., 2020; Liu et al., 2019b,c; Lan et al., 2019; Raffel et al., 2019) and show that our proposed method significantly improves the training stability and generalization, and achieves comparable or better performance on multiple NLP tasks. We highlight that our single model with 356M parameters (without any ensemble) can achieve three state-of-the-art results on GLUE, even compared with all existing ensemble models and the T5 model (Raffel et al., 2019), which contains 11 billion parameters. 
Furthermore, we also demonstrate that the proposed framework complements with SOTA fine-tuning methods (Liu et al., 2019b) and outperforms the T5 model. We summarize our contribution as follows: 1. We introduce the smoothness-inducing adversarial regularization and proximal point optimization into large scale language model fine-tuning; 2. We achieve state-of-the-art results on several popular NLP benchmarks (e.g., GLUE, SNLI, SciTail, and ANLI). Notation: We use f(x; θ) to denote a mapping f associated with the parameter θ from input sentences x to an output space, where the output is a multi-dimensional probability simplex for classification tasks and a scalar for regression tasks. ΠA denotes the projection operator to the set A. DKL(P||Q) = P k pk log(pk/qk) denotes the KL-divergence of two discrete distributions P and Q with the associated parameters of pk and qk, respectively. 2 Background The transformer models were originally proposed in Vaswani et al. (2017) for neural machine translation. Their superior performance motivated Devlin et al. (2019) to propose a bidirectional transformer-based language model named BERT. Specifically, Devlin et al. (2019) pre-trained the BERT model using a large corpus without any human annotation through unsupervised learning tasks. BERT motivated many follow-up works to further improve the pre-training by introducing new unsupervised learning tasks (Yang et al., 2019; Dong et al., 2019; Joshi et al., 2020), enlarging model size (Lan et al., 2019; Raffel et al., 2019), enlarging training corpora (Liu et al., 2019c; Yang et al., 2019; Raffel et al., 2019) and multi-tasking (Liu et al., 2019a,b). The pre-trained language model is then adapted to downstream tasks and further fine-tuned. Specifically, the top layer of the language model can be replaced by a task-specific layer and then continue to train on downstream tasks. To prevent overfitting, existing heuristics include choosing a 2179 small learning rate or a triangular learning rate schedule, and a small number of iterations, and other fine-tuning tricks mentioned in (Howard and Ruder, 2018; Peters et al., 2019; Houlsby et al., 2019; Stickland and Murray, 2019). Our proposed regularization technique is related to several existing works (Miyato et al., 2018; Zhang et al., 2019; Shu et al., 2018). These works consider similar regularization techniques, but target at other applications with different motivations, e.g., semi-supervised learning, unsupervised domain adaptation and harnessing adversarial examples in image classification. Our proposed optimization technique covers a large class of Bregman proximal point methods in existing literature on optimization, including vanilla proximal point method proposed in Rockafellar (1976), generalized proximal point method (Teboulle, 1997; Eckstein, 1993), accelerated proximal point method, and other variants (G¨uler, 1991, 1992; Parikh et al., 2014). There is a related fine-tuning method – FreeLB Zhu et al. (2020), which adapted a robust adversarial training method. However, our framework focuses on the local smoothness, leading to a significant performance improvement. More discussion and comparison are provided in Section 4. 3 The Proposed Method We describe the proposed learning framework – SMART for robust and efficient fine-tuning of pre-trained language models. Our framework consists of two important ingredients: SMoothness-inducing Adversarial Regularization and BRegman pRoximal poinT opTimization2. 
[Footnote 2: The complete name of our proposed method is SMAR^3T^2, but we use SMART for notational simplicity.]

3.1 Smoothness-Inducing Adversarial Regularization

We propose to impose an explicit regularization to effectively control the model complexity at the fine-tuning stage. Specifically, given the model f(·; θ) and n data points of the target task denoted by {(x_i, y_i)}_{i=1}^n, where the x_i's denote the embeddings of the input sentences obtained from the first embedding layer of the language model and the y_i's are the associated labels, our method essentially solves the following optimization problem for fine-tuning:

\min_\theta \mathcal{F}(\theta) = \mathcal{L}(\theta) + \lambda_s \mathcal{R}_s(\theta), \quad (1)

where \mathcal{L}(\theta) is the loss function defined as

\mathcal{L}(\theta) = \frac{1}{n} \sum_{i=1}^{n} \ell(f(x_i; \theta), y_i),

ℓ(·, ·) is the loss function for the target task, λ_s > 0 is a tuning parameter, and \mathcal{R}_s(\theta) is the smoothness-inducing adversarial regularizer, which we define as

\mathcal{R}_s(\theta) = \frac{1}{n} \sum_{i=1}^{n} \max_{\|\tilde{x}_i - x_i\|_p \leq \epsilon} \ell_s(f(\tilde{x}_i; \theta), f(x_i; \theta)),

where ε > 0 is a tuning parameter. Note that for classification tasks, f(·; θ) outputs a probability simplex and ℓ_s is chosen as the symmetrized KL-divergence, i.e., \ell_s(P, Q) = \mathcal{D}_{KL}(P \| Q) + \mathcal{D}_{KL}(Q \| P); for regression tasks, f(·; θ) outputs a scalar and ℓ_s is chosen as the squared loss, i.e., \ell_s(p, q) = (p - q)^2. Note that the computation of \mathcal{R}_s(\theta) involves a maximization problem that can be solved efficiently by projected gradient ascent.

We remark that the proposed smoothness-inducing adversarial regularizer was first used in Miyato et al. (2018) for semi-supervised learning with p = 2, then in Shu et al. (2018) for unsupervised domain adaptation with p = 2, and more recently in Zhang et al. (2019) for harnessing adversarial examples in image classification with p = ∞. To the best of our knowledge, we are the first to apply such a regularizer to the fine-tuning of pre-trained language models.

The smoothness-inducing adversarial regularizer essentially measures the local Lipschitz continuity of f under the metric ℓ_s. More precisely, the output of f does not change much if we inject a small perturbation (ℓ_p norm bounded by ε) to x_i. Therefore, by minimizing the objective in (1), we encourage f to be smooth within the neighborhoods of all x_i's. Such a smoothness-inducing property is particularly helpful for preventing overfitting and improving generalization on a low-resource target domain for a given task. An illustration is provided in Figure 1.

[Figure 1: Decision boundaries learned without (a) and with (b) smoothness-inducing adversarial regularization, respectively. The red dotted line in (b) represents the decision boundary in (a). As can be seen, the output f in (b) does not change much within the neighborhood of training data points.]

Note that the idea of measuring the local Lipschitz continuity is similar to the local shift sensitivity criterion in the robust statistics literature, which dates back to the 1960s (Hampel, 1974; Huber, 2011). This criterion has been used to characterize the dependence of an estimator on the value of one of the sample points.
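The inner maximization in R_s(θ) is typically approximated with a few steps of projected gradient ascent on the perturbation. The following is a minimal PyTorch-style sketch of that computation for a classification model, not the released implementation; the names (symmetrized_kl, smoothness_regularizer, model, embeddings) are ours, `model` is assumed to map input embeddings to logits, and a single ascent step on an ℓ∞ ball is assumed, as in Algorithm 1 in the next subsection. The clean prediction f(x_i; θ) is held fixed during the maximization, a common simplification.

```python
import torch
import torch.nn.functional as F

def symmetrized_kl(p_logits, q_logits):
    # l_s(P, Q) = KL(P||Q) + KL(Q||P), computed from logits for numerical stability.
    p_log = F.log_softmax(p_logits, dim=-1)
    q_log = F.log_softmax(q_logits, dim=-1)
    p, q = p_log.exp(), q_log.exp()
    return ((p * (p_log - q_log)).sum(-1) + (q * (q_log - p_log)).sum(-1)).mean()

def smoothness_regularizer(model, embeddings, epsilon=1e-5, sigma=1e-5,
                           step_size=1e-3, n_steps=1):
    """Approximate R_s(theta) for one mini-batch of input embeddings x_i."""
    with torch.no_grad():
        clean_logits = model(embeddings)            # f(x_i; theta), treated as constant
    # Random initialization of the perturbation, nu_i ~ N(0, sigma^2 I).
    delta = (torch.randn_like(embeddings) * sigma).requires_grad_(True)
    for _ in range(n_steps):                        # projected gradient ascent
        adv_logits = model(embeddings + delta)      # f(x~_i; theta)
        loss = symmetrized_kl(adv_logits, clean_logits)
        grad, = torch.autograd.grad(loss, delta)
        # Scale the step by the per-position l-inf norm of the gradient, then
        # project back onto the l-inf ball of radius epsilon (the p = inf case).
        delta = delta + step_size * grad / (grad.abs().amax(dim=-1, keepdim=True) + 1e-12)
        delta = delta.clamp(-epsilon, epsilon).detach().requires_grad_(True)
    return symmetrized_kl(model(embeddings + delta), clean_logits)
```

In the full objective, this term is simply added to the task loss with weight λ_s, as in Eq. (1).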
3.2 Bregman Proximal Point Optimization

We propose to develop a class of Bregman proximal point optimization methods to solve (1). Such optimization methods impose a strong penalty at each iteration to prevent the model from aggressive updating. Specifically, we use the pre-trained model as the initialization, denoted by f(·; θ_0). At the (t + 1)-th iteration, the vanilla Bregman proximal point (VBPP) method takes

\theta_{t+1} = \operatorname*{argmin}_\theta \mathcal{F}(\theta) + \mu \mathcal{D}_{\mathrm{Breg}}(\theta, \theta_t), \quad (2)

where µ > 0 is a tuning parameter, and \mathcal{D}_{\mathrm{Breg}}(·, ·) is the Bregman divergence defined as

\mathcal{D}_{\mathrm{Breg}}(\theta, \theta_t) = \frac{1}{n} \sum_{i=1}^{n} \ell_s(f(x_i; \theta), f(x_i; \theta_t)),

where ℓ_s is defined in Section 3.1. As can be seen, when µ is large, the Bregman divergence at each iteration of the VBPP method essentially serves as a strong regularizer and prevents θ_{t+1} from deviating too much from the previous iterate θ_t. This is also known as a trust-region-type iteration in the optimization literature (Conn et al., 2000). Consequently, the Bregman proximal point method can effectively retain the knowledge of the out-of-domain data in the pre-trained model f(·; θ_0). Since each subproblem (2) of VBPP does not admit a closed-form solution, we need to solve it using SGD-type algorithms such as ADAM. Note that we do not need to solve each subproblem until convergence: a small number of iterations is sufficient to output a reliable initial solution for solving the next subproblem. Moreover, the Bregman proximal point method is capable of adapting to the information geometry of machine learning models (see Raskutti and Mukherjee (2015) for details) and achieving better computational performance than the standard proximal point method (i.e., \mathcal{D}_{\mathrm{Breg}}(\theta, \theta_t) = \|\theta - \theta_t\|_2^2) in many applications.

Acceleration by Momentum. Similar to other optimization methods in the literature, we can accelerate the Bregman proximal point method by introducing an additional momentum to the update. Specifically, at the (t + 1)-th iteration, the momentum Bregman proximal point (MBPP) method takes

\theta_{t+1} = \operatorname*{argmin}_\theta \mathcal{F}(\theta) + \mu \mathcal{D}_{\mathrm{Breg}}(\theta, \tilde{\theta}_t), \quad (3)

where \tilde{\theta}_t = (1 - \beta)\theta_t + \beta\tilde{\theta}_{t-1} is the exponential moving average and β ∈ (0, 1) is the momentum parameter. The MBPP method is also called the "Mean Teacher" method in the literature (Tarvainen and Valpola, 2017) and has been shown to achieve state-of-the-art performance on popular semi-supervised learning benchmarks. For convenience, we summarize the MBPP method in Algorithm 1.

Algorithm 1 SMART: We use the smoothness-inducing adversarial regularizer with p = ∞ and the momentum Bregman proximal point method.
Notation: For simplicity, we denote g_i(\tilde{x}_i, \bar{\theta}_s) = \frac{1}{|B|} \sum_{x_i \in B} \nabla_{\tilde{x}} \ell_s(f(x_i; \bar{\theta}_s), f(\tilde{x}_i; \bar{\theta}_s)); AdamUpdate_B denotes the ADAM update rule for optimizing (3) using the mini-batch B; \Pi_A denotes the projection onto A.
Input: T: the total number of iterations; X: the dataset; θ_0: the parameters of the pre-trained model; S: the number of iterations for solving (2); σ^2: the variance of the random initialization for the \tilde{x}_i's; T_{\tilde{x}}: the number of iterations for updating the \tilde{x}_i's; η: the learning rate for updating the \tilde{x}_i's; β: the momentum parameter.
1: \tilde{\theta}_1 ← θ_0
2: for t = 1, ..., T do
3:   \bar{\theta}_1 ← θ_{t−1}
4:   for s = 1, ..., S do
5:     Sample a mini-batch B from X
6:     For all x_i ∈ B, initialize \tilde{x}_i ← x_i + ν_i with ν_i ∼ N(0, σ^2 I)
7:     for m = 1, ..., T_{\tilde{x}} do
8:       \tilde{g}_i ← g_i(\tilde{x}_i, \bar{\theta}_s) / \|g_i(\tilde{x}_i, \bar{\theta}_s)\|_\infty
9:       \tilde{x}_i ← \Pi_{\|\tilde{x}_i − x_i\|_\infty \leq \epsilon}(\tilde{x}_i + \eta \tilde{g}_i)
10:    end for
11:    \bar{\theta}_{s+1} ← AdamUpdate_B(\bar{\theta}_s)
12:  end for
13:  θ_t ← \bar{\theta}_S
14:  \tilde{\theta}_{t+1} ← (1 − β)\bar{\theta}_S + β\tilde{\theta}_t
15: end for
Output: θ_T
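Putting the two ingredients together, one outer step of the MBPP update can be sketched as follows. This is a schematic rendering of Algorithm 1 under the defaults S = 1 and T_x̃ = 1 reported later, not the released code; `symmetrized_kl` and `smoothness_regularizer` refer to the sketch above, `teacher` stands for the exponential-moving-average copy θ̃ of the model, and `task_loss`, `lambda_s`, `mu`, and `beta` follow the notation of this section.

```python
import torch

def smart_step(model, teacher, optimizer, batch, task_loss,
               lambda_s=1.0, mu=1.0, beta=0.99):
    """One SMART update: task loss + smoothness regularizer + Bregman proximal term."""
    embeddings, labels = batch                       # x_i (first-layer embeddings), y_i
    logits = model(embeddings)
    loss = task_loss(logits, labels)                 # L(theta)
    loss = loss + lambda_s * smoothness_regularizer(model, embeddings)  # + lambda_s * R_s
    with torch.no_grad():
        teacher_logits = teacher(embeddings)         # f(x_i; theta~_t), held fixed
    # Bregman divergence D_Breg(theta, theta~_t): keep the update in a trust region.
    loss = loss + mu * symmetrized_kl(logits, teacher_logits)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                                 # AdamUpdate_B on theta
    # Momentum / mean-teacher update: theta~ <- beta * theta~ + (1 - beta) * theta.
    with torch.no_grad():
        for p, p_t in zip(model.parameters(), teacher.parameters()):
            p_t.mul_(beta).add_(p, alpha=1 - beta)
    return loss.item()
```

In such a sketch, `teacher` would be created once from the pre-trained weights (e.g., with `copy.deepcopy(model)`) and never receives gradients; only the exponential moving average above touches its parameters.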
4 Experiment – Main Results

We demonstrate the effectiveness of SMART for fine-tuning large language models on GLUE (Wang et al., 2018) by comparing with existing state-of-the-art methods. Dataset details can be found in Appendix A.

4.1 Implementation Details

Our implementation of SMART is based on BERT (Wolf et al., 2019; https://github.com/huggingface/transformers), RoBERTa (Liu et al., 2019c; https://github.com/pytorch/fairseq), MT-DNN (Liu et al., 2020b; https://github.com/namisan/mt-dnn) and HNN (https://github.com/namisan/mt-dnn/tree/master/hnn). We used ADAM (Kingma and Ba, 2014) and RADAM (Liu et al., 2020a) as our optimizers with a learning rate in {1 × 10^−5, 2 × 10^−5, 3 × 10^−5, 5 × 10^−5} and a batch size in {16, 32, 64}. The maximum number of epochs was set to 6. A linear learning rate decay schedule with warm-up of 0.1 was used, unless stated otherwise. We also set the dropout rate of all the task-specific layers to 0.1, except 0.3 for MNLI and 0.05 for CoLA. To avoid exploding gradients, we clipped the gradient norm to 1. All texts were tokenized into wordpieces and chopped to spans no longer than 512 tokens. For SMART, we set the perturbation size ε = 10^−5 and σ = 10^−5. We set µ = 1 and λ_s ∈ {1, 3, 5}. The learning rate η in Algorithm 1 is set to 10^−3. We set β = 0.99 for the first 10% of the updates (t ≤ 0.1T) and β = 0.999 for the rest of the updates (t > 0.1T), following Tarvainen and Valpola (2017). Lastly, we simply set S = 1 and T_x̃ = 1 in Algorithm 1.

4.2 GLUE Main Results

We compare SMART with a range of strong baselines, including large pre-trained models, approaches with adversarial training, and state-of-the-art models that have been submitted to the GLUE leaderboard. Since SMART is a generic framework, we evaluate it on two publicly available pre-trained models, the BERTBASE model (Devlin et al., 2019) and the RoBERTaLARGE model (Liu et al., 2019c). Most of our analyses are done with BERTBASE to make our results comparable to other work, since BERTBASE has been widely used as a baseline. To make our results comparable to other state-of-the-art models, we also evaluate the framework on the RoBERTaLARGE model.

• BERT (Devlin et al., 2019): This is the BERTBASE model released by the authors. In Devlin et al. (2019), the authors only reported development results on a few tasks, so we reproduced the baseline results, which are denoted by BERTReImp.
• RoBERTa (Liu et al., 2019c): This is the RoBERTaLARGE model released by the authors; we present the reported results on the GLUE dev set.
• PGD, FreeAT, FreeLB (Zhu et al., 2020): These are three adversarial training approaches built on top of RoBERTaLARGE.
• SMART: our proposed method as described in Section 3. We use both the BERTBASE model (SMARTBERT) and the RoBERTaLARGE model (SMARTRoBERTa) as the pre-trained model to evaluate the effectiveness of SMART.

The main results are reported in Table 1. The table can be clustered into two groups based on the pre-trained model: the BERTBASE model (the first group) and the RoBERTaLARGE model (the second group). The detailed discussion is as follows.

For a fair comparison, we reproduced the BERT baseline (BERTReImp), since several results on the GLUE development set were missing. Our reimplemented BERT baseline is even stronger than the originally reported results in Devlin et al. (2019). For instance, the reimplemented model obtains 84.5% (vs. 84.4%) accuracy on the MNLI in-domain development set. On SST-2, BERTReImp outperforms BERT by 0.2% accuracy (92.9% vs. 92.7%). All these results demonstrate the fairness of our baselines.
Comparing with two strong baselines BERT and RoBERTa 7, SMART, including SMARTBERT and SMARTRoBERTa, consistently outperforms them across all 8 GLUE tasks by a big margin. Comparing with BERT, SMARTBERT obtained 85.6% (vs. 84.5%) and 86.0% (vs. 84.4%) in terms of accuracy, which is 1.1% and 1.6% absolute improvement, on the MNLI in-domain and out-domain settings. Even comparing with the state-of-the-art model RoBERTa, SMARTRoBERTa improves 0.8% (91.1% vs. 90.2%) on MNLI indomain development set. Interestingly, on the 7In our experiments, we use BERT referring the BERTBASE model, which has 110 million parameters, and RoBERTa referring the RoBERTaLARGE model, which has 356 million parameters, unless stated otherwise. 2182 Model MNLI-m/mm QQP RTE QNLI MRPC CoLA SST STS-B Acc Acc/F1 Acc Acc Acc/F1 Mcc Acc P/S Corr BERTBASE BERT (Devlin et al., 2019) 84.4/88.4 -/86.7 92.7 BERTReImp 84.5/84.4 90.9/88.3 63.5 91.1 84.1/89.0 54.7 92.9 89.2/88.8 SMARTBERT 85.6/86.0 91.5/88.5 71.2 91.7 87.7/91.3 59.1 93.0 90.0/89.4 RoBERTaLARGE RoBERTa (Liu et al., 2019c) 90.2/92.2/86.6 94.7 -/90.9 68.0 96.4 92.4/PGD (Zhu et al., 2020) 90.5/92.5/87.4 94.9 -/90.9 69.7 96.4 92.4/FreeAT (Zhu et al., 2020) 90.0/92.5/86.7 94.7 -/90.7 68.8 96.1 92.4/FreeLB (Zhu et al., 2020) 90.6/92.6/88.1 95.0 -/91.4 71.1 96.7 92.7/SMARTRoBERTa 91.1/91.3 92.4/89.8 92.0 95.6 89.2/92.1 70.6 96.9 92.8/92.6 Table 1: Main results on GLUE development set. The best result on each task produced by a single model is in bold and “-” denotes the missed result. Model /#Train CoLA SST MRPC STS-B QQP MNLI-m/mm QNLI RTE WNLI AX Score #param 8.5k 67k 3.7k 7k 364k 393k 108k 2.5k 634 Human Performance 66.4 97.8 86.3/80.8 92.7/92.6 59.5/80.4 92.0/92.8 91.2 93.6 95.9 87.1 Ensemble Models RoBERTa1 67.8 96.7 92.3/89.8 92.2/91.9 74.3/90.2 90.8/90.2 98.9 88.2 89.0 48.7 88.5 356M FreeLB2 68.0 96.8 93.1/90.8 92.4/92.2 74.8/90.3 91.1/90.7 98.8 88.7 89.0 50.1 88.8 356M ALICE3 69.2 97.1 93.6/91.5 92.7/92.3 74.4/90.7 90.7/90.2 99.2 87.3 89.7 47.8 89.0 340M ALBERT4 69.1 97.1 93.4/91.2 92.5/92.0 74.2/90.5 91.3/91.0 99.2 89.2 91.8 50.2 89.4 235M∗ MT-DNN-SMART† 69.5 97.5 93.7/91.6 92.9/92.5 73.9/90.2 91.0/90.8 99.2 89.7 94.5 50.2 89.9 356M Single Model BERTLARGE 5 60.5 94.9 89.3/85.4 87.6/86.5 72.1/89.3 86.7/85.9 92.7 70.1 65.1 39.6 80.5 335M MT-DNN6 62.5 95.6 90.0/86.7 88.3/87.7 72.4/89.6 86.7/86.0 93.1 75.5 65.1 40.3 82.7 335M T58 70.8 97.1 91.9/89.2 92.5/92.1 74.6/90.4 92.0/91.7 96.7 92.5 93.2 53.1 89.7 11,000M SMARTRoBERTa 65.1 97.5 93.7/91.6 92.9/92.5 74.0/90.1 91.0/90.8 95.4 87.9 91.88 50.2 88.4 356M Table 2: GLUE test set results scored using the GLUE evaluation server. The state-of-the-art results are in bold. All the results were obtained from https://gluebenchmark.com/leaderboard on December 5, 2019. SMART uses the classification objective on QNLI. Model references: 1 Liu et al. (2019c); 2Zhu et al. (2020); 3Wang et al. (2019); 4Lan et al. (2019); 5 Devlin et al. (2019); 6 Liu et al. (2019b); 7 Raffel et al. (2019) and 8 He et al. (2019), Kocijan et al. (2019). ∗ALBERT uses a model similar in size, architecture and computation cost to a 3,000M BERT (though it has dramatically fewer parameters due to parameter sharing). † Mixed results from ensemble and single of MT-DNN-SMART and with data augmentation. MNLI task, the performance of SMART on the out-domain setting is better than the in-domain setting, e.g., (86.0% vs. 85.6%) by SMARTBERT and (91.3% vs. 91.1%) by SMARTRoBERTa, showing that our proposed approach alleviates the domain shifting issue. 
Furthermore, on the small tasks, the improvement of SMART is even larger. For example, compared with BERT, SMARTBERT obtains 71.2% (vs. 63.5%) on RTE and 59.1% (vs. 54.7%) on CoLA, which are 7.7% and 4.4% absolute improvements for RTE and CoLA, respectively; similarly, SMARTRoBERTa outperforms RoBERTa by 5.4% (92.0% vs. 86.6%) on RTE and 2.6% (70.6% vs. 68.0%) on CoLA.

We also compare SMART with a range of models that use adversarial training, such as FreeLB. From the bottom rows in Table 1, SMART outperforms PGD and FreeAT across all 8 GLUE tasks. Compared with the current state-of-the-art adversarial training model, FreeLB, SMART outperforms it on 6 out of the 8 GLUE tasks (MNLI, RTE, QNLI, MRPC, SST-2 and STS-B), showing the effectiveness of our model.

Table 2 summarizes the current state-of-the-art models on the GLUE leaderboard. SMART obtains a competitive result compared with T5 (Raffel et al., 2019), which is the leading model on the GLUE leaderboard. T5 has 11 billion parameters, while SMART only has 356 million. Against this super-large model (T5) and other ensemble models (e.g., ALBERT, ALICE), SMART, which is a single model, still sets new state-of-the-art results on SST-2, MRPC and STS-B. By combining with the multi-task learning framework (MT-DNN), MT-DNN-SMART obtains a new state of the art on GLUE, pushing the GLUE benchmark to 89.9%. More discussion is provided in Section 5.3.

5 Experiment – Analysis and Extension

In this section, we first analyze the effectiveness of each component of the proposed method. We also study whether the proposed method is complementary to multi-task learning. We further extend SMART to domain adaptation and use both SNLI (Bowman et al., 2015) and SciTail (Khot et al., 2018) to evaluate the effectiveness. Finally, we verify the robustness of the proposed method on ANLI (Nie et al., 2019).

5.1 Ablation Study

Note that due to limited time and computational resources, all the experiments reported below are based on the BERTBASE model. In this section, we study the importance of each component of SMART: smoothness-inducing adversarial regularization and Bregman proximal point optimization. All models in this study used BERTBASE as the encoder for fast training, and we also include the BERTBASE model as an additional baseline for a fair comparison. SMART denotes the proposed model; the variant with λ_s set to 0 is denoted -R_s, and the variant with µ = 0 is denoted -D_Breg.

Model    MNLI  RTE   QNLI  SST   MRPC
         Acc   Acc   Acc   Acc   Acc
BERT     84.5  63.5  91.1  92.9  89.0
SMART    85.6  71.2  91.7  93.0  91.3
-R_s     84.8  70.8  91.3  92.8  90.8
-D_Breg  85.4  71.2  91.6  92.9  91.2
Table 3: Ablation study of SMART on 5 GLUE tasks. Note that all models used the BERTBASE model as their encoder.

The results are reported in Table 3. As expected, removing either component (the smoothness regularization or the proximal point method) from SMART results in a performance drop. For example, on MNLI, removing the smoothness regularization leads to a 0.8% (85.6% vs. 84.8%) performance drop, while removing the Bregman proximal point optimization results in a performance drop of 0.2% (85.6% vs. 85.4%). This demonstrates that the two components complement each other. Interestingly, all three proposed variants outperform the BERT baseline, demonstrating the effectiveness of each module. Moreover, we observe that the generalization performance benefits more from SMART on small datasets (i.e., RTE and MRPC) by preventing overfitting.
5.2 Error Analysis

To understand why SMART improves performance, we analyze it on the ambiguous samples of the MNLI dev set, which has 3 classes and 5 annotations per sample. Based on the degree of agreement between these annotations, we divide the samples into 4 categories: 1) 5/0/0: all five annotations are the same; 2) 4/1/0: four annotations are the same; 3) 3/2/0: three annotations are the same and the other two annotations agree with each other; 4) 3/1/1: three annotations are the same and the other two annotations differ.

Figure 2 summarizes the results in terms of both accuracy and the KL-divergence

-\frac{1}{n} \sum_{i=1}^{n} \sum_{j=1}^{3} p_j(x_i) \log f_j(x_i).

For a given sample x_i, this KL-divergence evaluates the similarity between the model prediction {f_j(x_i)}_{j=1}^3 and the annotation distribution {p_j(x_i)}_{j=1}^3. We observe that SMARTRoBERTa outperforms RoBERTa across all the settings. Further, at high degrees of ambiguity (low degrees of agreement), SMARTRoBERTa obtains an even larger improvement, showing its robustness to ambiguity.

[Figure 2: Score breakdown by degree of agreement. Accuracy (top) and KL-divergence (bottom) for RoBERTa and SMART on MNLI matched and mismatched, broken down by agreement category (All, 5/0/0, 4/1/0, 3/2/0, 3/1/1).]

5.3 SMART with Multi-task Learning

It has been shown that multi-task learning (MTL; Caruana (1997); Liu et al. (2015, 2019b)) has a regularization effect by alleviating overfitting to a specific task. One question is whether MTL helps SMART as well; in this section, we answer this question. Following Liu et al. (2019b), we first "pre-trained" shared embeddings using MTL with SMART, denoted as MT-DNN-SMART [Footnote 8: Due to the limitation of computational resources, we only trained jointly using MTL on MNLI, RTE, QNLI, SST and MRPC, while MT-DNN was trained on the whole set of GLUE tasks except CoLA.], and then adapted to the training data of each task on top of the shared embeddings. We also include a baseline which fine-tuned each task on the publicly released MT-DNN checkpoint [Footnote 9: From https://github.com/namisan/mt-dnn. Note that we did not use the complicated answer module, e.g., SAN (Liu et al., 2018).], which is indicated as MT-DNN-SMARTv0.

Model           MNLI  RTE   QNLI  SST   MRPC
                Acc   Acc   Acc   Acc   F1
BERT            84.5  63.5  91.1  92.9  89.0
MT-DNN          85.3  79.1  91.5  93.6  89.2
SMART           85.6  71.2  91.6  93.0  91.3
MT-DNN-SMARTv0  85.7  80.2  92.0  93.3  91.5
MT-DNN-SMART    85.7  81.2  92.0  93.5  91.7
Table 4: Comparison between SMART and MTL.

We observe that both MT-DNN and SMART consistently outperform the BERT model on all five GLUE tasks. Furthermore, SMART outperforms MT-DNN on MNLI, QNLI, and MRPC, while it obtains worse results on RTE and SST, showing that MT-DNN is a strong counterpart for SMART. By combining these two models, MT-DNN-SMARTv0 enjoys the advantages of both and thus improves the final results. For example, it achieves 85.7% (+0.1%) on MNLI and 80.2% (+1.1%) on RTE compared with the best results of MT-DNN and SMART, demonstrating that these two techniques are orthogonal. Lastly, we also trained SMART jointly and then fine-tuned on each task as in Liu et al. (2019b). We observe that MT-DNN-SMART outperforms MT-DNN-SMARTv0 and MT-DNN across all 5 tasks (except MT-DNN on SST), showing that SMART improves the generalization of MTL.
5.4 Domain Adaptation

In this section, we evaluate our model in the domain adaptation setting. Following Liu et al. (2019b), we start with the default training/dev/test split of SNLI and SciTail. We then randomly sample 0.1%, 1%, 10% and 100% of the training data, which is used to train a model. The results are reported in Table 5.

Model           0.1%  1%     10%     100%
SNLI Dataset (Dev Accuracy%)
#Training Data  549   5,493  54,936  549,367
BERT            52.5  78.1   86.7    91.0
MT-DNN          82.1  85.2   88.4    91.5
MT-DNN-SMART    82.7  86.0   88.7    91.6
SciTail Dataset (Dev Accuracy%)
#Training Data  23    235    2,359   23,596
BERT            51.2  82.2   90.5    94.3
MT-DNN          81.9  88.3   91.1    95.8
MT-DNN-SMART    82.3  88.6   91.3    96.1
Table 5: Domain adaptation on SNLI and SciTail.

We observe that both MT-DNN and MT-DNN-SMART significantly outperform the BERT baseline. Compared with MT-DNN, MT-DNN-SMART also achieves some improvements, indicating the robustness of SMART. Furthermore, MT-DNN-SMART outperforms the current state of the art on the SNLI/SciTail test sets.

5.5 Results on SNLI and SciTail

In Table 7, we compare our methods, using all in-domain training data, against several state-of-the-art models. We observe that SMART obtains the same improvement on SNLI in the BERT setting. Combining SMART with MT-DNN achieves a significant improvement; e.g., our BASE model even outperforms the BERTLARGE model. A similar observation is found on SciTail and in the BERTLARGE model setting. We see that incorporating SMART into MT-DNN achieves new state-of-the-art results on both SNLI and SciTail, pushing the benchmarks to 91.7% on SNLI and 95.2% on SciTail.

Model                               Dev   Test
SNLI Dataset (Accuracy%)
BERTBASE                            91.0  90.8
BERTBASE+SRL (Zhang et al., 2018)   -     90.3
MT-DNNBASE                          91.4  91.1
SMARTBERT-BASE                      91.4  91.1
MT-DNN-SMARTBASEv0                  91.7  91.4
MT-DNN-SMARTBASE                    91.7  91.5
BERTLARGE+SRL (Zhang et al., 2018)  -     91.3
BERTLARGE                           91.7  91.0
MT-DNNLARGE                         92.2  91.6
MT-DNN-SMARTLARGEv0                 92.6  91.7
SciTail Dataset (Accuracy%)
GPT (Radford et al., 2018)          -     88.3
BERTBASE                            94.3  92.0
MT-DNNBASE                          95.8  94.1
SMARTBERT-BASE                      94.8  93.2
MT-DNN-SMARTBASEv0                  96.0  94.0
MT-DNN-SMARTBASE                    96.1  94.2
BERTLARGE                           95.7  94.4
MT-DNNLARGE                         96.3  95.0
SMARTBERT-LARGE                     96.2  94.7
MT-DNN-SMARTLARGEv0                 96.6  95.2
Table 7: Results on the SNLI and SciTail datasets.

5.6 Robustness

One important property of a machine learning model is its robustness to adversarial attacks. We test our model on the Adversarial Natural Language Inference (ANLI) dataset (Nie et al., 2019). We evaluate the performance of SMART on each subset (i.e., R1, R2, R3) of the ANLI dev and test sets. The results are presented in Table 6.

Method                           Dev                      Test
                                 R1    R2    R3    All    R1    R2    R3    All
MNLI + SNLI + ANLI + FEVER
BERTLARGE (Nie et al., 2019)     57.4  48.3  43.5  49.3   -     -     -     44.2
XLNetLARGE (Nie et al., 2019)    67.6  50.7  48.3  55.1   -     -     -     52.0
RoBERTaLARGE (Nie et al., 2019)  73.8  48.9  44.4  53.7   -     -     -     49.7
SMARTRoBERTa-LARGE               74.5  50.9  47.6  57.1   72.4  49.8  50.3  57.1
ANLI
RoBERTaLARGE (Nie et al., 2019)  71.3  43.3  43.0  51.9   -     -     -     -
SMARTRoBERTa-LARGE               74.2  49.5  49.2  57.1   72.4  50.3  49.5  56.9
Table 6: Experiment results for each round of ANLI.

Table 6 shows the results of training on the combined NLI data (ANLI (Nie et al., 2019) + MNLI (Williams et al., 2018) + SNLI (Bowman et al., 2015) + FEVER (Thorne et al., 2018)) and of training on the ANLI data only. In the combined data setting, we observe that SMARTRoBERTa-LARGE obtains the best performance compared with all the strong baselines, pushing the benchmark to 57.1%.
In case of the RoBERTaLARGE baseline, SMARTRoBERTa-LARGE outperforms 3.4% absolute improvement on dev and 7.4% absolute improvement on test, indicating the robustness of SMART. We obverse that in the ANLI-only setting, SMARTRoBERTa-LARGE outperforms the strong RoBERTaLARGE baseline with a large margin, +5.2% (57.1% vs. 51.9%) 6 Conclusion We propose a robust and efficient computation framework, SMART, for fine-tuning large scale pre-trained natural language models in a principled manner. The framework effectively alleviates the overfitting and aggressive updating issues in the fine-tuning stage. SMART includes two important ingredients: 1) smooth-inducing adversarial regularization; 2) Bregman proximal point optimization. Our empirical results suggest that SMART improves the performance on many NLP benchmarks (e.g., GLUE, SNLI, SciTail, ANLI) with the state-of-the-art pre-trained models (e.g., BERT, MT-DNN, RoBERTa). We also demonstrate that the proposed framework is applicable to domain adaptation and results in a significant performance improvement. Our proposed fine-tuning framework can be generalized to solve other transfer learning problems. We will explore this direction as future work. Acknowledgments We thank Jade Huang, Niao He, Chris Meek, Liyuan Liu, Yangfeng Ji, Pengchuan Zhang, Oleksandr Polozov, Chenguang Zhu and Keivn Duh for valuable discussions and comments, and Microsoft Research Technology Engineering team for setting up GPU machines. We also thank the anonymous reviewers for valuable discussions. 2186 References Roy Bar-Haim, Ido Dagan, Bill Dolan, Lisa Ferro, and Danilo Giampiccolo. 2006. The second PASCAL recognising textual entailment challenge. In Proceedings of the Second PASCAL Challenges Workshop on Recognising Textual Entailment. Luisa Bentivogli, Ido Dagan, Hoa Trang Dang, Danilo Giampiccolo, and Bernardo Magnini. 2009. The fifth pascal recognizing textual entailment challenge. In In Proc Text Analysis Conference (TAC09. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics. Rich Caruana. 1997. Multitask learning. Machine learning, 28(1):41–75. Daniel Cer, Mona Diab, Eneko Agirre, I˜nigo LopezGazpio, and Lucia Specia. 2017. Semeval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 1–14. Andrew R Conn, Nicholas IM Gould, and Ph L Toint. 2000. Trust region methods, volume 1. Siam. Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The pascal recognising textual entailment challenge. In Proceedings of the First International Conference on Machine Learning Challenges: Evaluating Predictive Uncertainty Visual Object Classification, and Recognizing Textual Entailment, MLCW’05, pages 177–190, Berlin, Heidelberg. Springer-Verlag. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186. William B Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. 
In Proceedings of the Third International Workshop on Paraphrasing (IWP2005). Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified language model pre-training for natural language understanding and generation. pages 13042–13054. Jonathan Eckstein. 1993. Nonlinear proximal point algorithms using bregman functions, with applications to convex programming. Mathematics of Operations Research, 18(1):202–226. Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and Bill Dolan. 2007. The third PASCAL recognizing textual entailment challenge. In Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing, pages 1–9, Prague. Association for Computational Linguistics. Osman G¨uler. 1991. On the convergence of the proximal point algorithm for convex minimization. SIAM Journal on Control and Optimization, 29(2):403– 419. Osman G¨uler. 1992. New proximal point algorithms for convex minimization. SIAM Journal on Optimization, 2(4):649–664. Frank R Hampel. 1974. The influence curve and its role in robust estimation. Journal of the american statistical association, 69(346):383–393. Pengcheng He, Xiaodong Liu, Weizhu Chen, and Jianfeng Gao. 2019. A hybrid neural network model for commonsense reasoning. In Proceedings of the First Workshop on Commonsense Inference in Natural Language Processing, pages 13–21. Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for nlp. In International Conference on Machine Learning, pages 2790–2799. Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 328–339. Peter J Huber. 2011. Robust statistics. Springer. Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S Weld, Luke Zettlemoyer, and Omer Levy. 2020. Spanbert: Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics, 8:64–77. Tushar Khot, Ashish Sabharwal, and Peter Clark. 2018. SciTail: A textual entailment dataset from science question answering. In AAAI. Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Vid Kocijan, Ana-Maria Cretu, Oana-Maria Camburu, Yordan Yordanov, and Thomas Lukasiewicz. 2019. A surprisingly robust trick for the winograd schema challenge. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4837–4842. Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. Albert: A lite bert for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942. 2187 Hector Levesque, Ernest Davis, and Leora Morgenstern. 2012. The winograd schema challenge. In Thirteenth International Conference on the Principles of Knowledge Representation and Reasoning. Liyuan Liu, Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, and Jiawei Han. 2020a. On the variance of the adaptive learning rate and beyond. In Proceedings of the Eighth International Conference on Learning Representations (ICLR 2020). Xiaodong Liu, Kevin Duh, and Jianfeng Gao. 2018. Stochastic answer networks for natural language inference. arXiv preprint arXiv:1804.07888. 
Xiaodong Liu, Jianfeng Gao, Xiaodong He, Li Deng, Kevin Duh, and Ye-Yi Wang. 2015. Representation learning using multi-task deep neural networks for semantic classification and information retrieval. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 912–921. Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. 2019a. Improving multi-task deep neural networks via knowledge distillation for natural language understanding. arXiv preprint arXiv:1904.09482. Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. 2019b. Multi-task deep neural networks for natural language understanding. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4487–4496, Florence, Italy. Association for Computational Linguistics. Xiaodong Liu, Yu Wang, Jianshu Ji, Hao Cheng, Xueyun Zhu, Emmanuel Awa, Pengcheng He, Weizhu Chen, Hoifung Poon, Guihong Cao, and Jianfeng Gao. 2020b. The microsoft toolkit of multi-task deep neural networks for natural language understanding. arXiv preprint arXiv:2002.07972. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019c. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, and Shin Ishii. 2018. Virtual adversarial training: a regularization method for supervised and semisupervised learning. IEEE transactions on pattern analysis and machine intelligence, 41(8):1979– 1993. Mehryar Mohri, Afshin Rostamizadeh, and Ameet Talwalkar. 2018. Foundations of machine learning. MIT press. Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. 2019. Adversarial nli: A new benchmark for natural language understanding. arXiv preprint arXiv:1910.14599. Sinno Jialin Pan and Qiang Yang. 2009. A survey on transfer learning. IEEE Transactions on knowledge and data engineering, 22(10):1345–1359. Neal Parikh, Stephen Boyd, et al. 2014. Proximal algorithms. Foundations and Trends R⃝in Optimization, 1(3):127–239. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of NAACL-HLT, pages 2227–2237. Matthew E Peters, Sebastian Ruder, and Noah A Smith. 2019. To tune or not to tune? adapting pretrained representations to diverse tasks. ACL 2019, page 7. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2018. Language models are unsupervised multitask learners. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8). Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas. Association for Computational Linguistics. Garvesh Raskutti and Sayan Mukherjee. 2015. The information geometry of mirror descent. IEEE Transactions on Information Theory, 61(3):1451–1457. 
R Tyrrell Rockafellar. 1976. Monotone operators and the proximal point algorithm. SIAM journal on control and optimization, 14(5):877–898. Rui Shu, Hung H Bui, Hirokazu Narui, and Stefano Ermon. 2018. A dirt-t approach to unsupervised domain adaptation. arXiv preprint arXiv:1802.08735. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 conference on empirical methods in natural language processing, pages 1631–1642. Asa Cooper Stickland and Iain Murray. 2019. Bert and pals: Projected attention layers for efficient adaptation in multi-task learning. In International Conference on Machine Learning, pages 5986–5995. 2188 Antti Tarvainen and Harri Valpola. 2017. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. In Advances in neural information processing systems, pages 1195–1204. Marc Teboulle. 1997. Convergence of proximallike algorithms. SIAM Journal on Optimization, 7(4):1069–1083. James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018. Fever: a large-scale dataset for fact extraction and verification. arXiv preprint arXiv:1803.05355. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998–6008. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. EMNLP 2018, page 353. Wei Wang, Bin Bi, Ming Yan, Chen Wu, Zuyi Bao, Liwei Peng, and Luo Si. 2019. Structbert: Incorporating language structures into pre-training for deep language understanding. arXiv preprint arXiv:1908.04577. Alex Warstadt, Amanpreet Singh, and Samuel R Bowman. 2019. Neural network acceptability judgments. Transactions of the Association for Computational Linguistics, 7:625–641. Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122. Association for Computational Linguistics. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R’emi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface’s transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in neural information processing systems, pages 5754–5764. Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric Xing, Laurent El Ghaoui, and Michael Jordan. 2019. Theoretically principled trade-off between robustness and accuracy. In International Conference on Machine Learning, pages 7472–7482. Zhuosheng Zhang, Yuwei Wu, Zuchao Li, Shexia He, and Hai Zhao. 2018. I know what you want: Semantic learning for text comprehension. Chen Zhu, Yu Cheng, Zhe Gan, Siqi Sun, Tom Goldstein, and Jingjing Liu. 2020. Freelb: Enhanced adversarial training for natural language understanding. 
2189 A Datasets The GLUE benchmark, SNLI, SciTail and ANLI is briefly introduced in the following sections. The detailed description can be found in (Wang et al., 2018; Bowman et al., 2015; Khot et al., 2018; Nie et al., 2019). Table 8 summarizes the information of these tasks. • GLUE. The General Language Understanding Evaluation (GLUE) benchmark is a collection of nine natural language understanding (NLU) tasks. As shown in Table 8, it includes question answering (Rajpurkar et al., 2016), linguistic acceptability (Warstadt et al., 2019), sentiment analysis (Socher et al., 2013), text similarity (Cer et al., 2017), paraphrase detection (Dolan and Brockett, 2005), and natural language inference (NLI) (Dagan et al., 2006; Bar-Haim et al., 2006; Giampiccolo et al., 2007; Bentivogli et al., 2009; Levesque et al., 2012; Williams et al., 2018). The diversity of the tasks makes GLUE very suitable for evaluating the generalization and robustness of NLU models. • SNLI. The Stanford Natural Language Inference (SNLI) dataset contains 570k human annotated sentence pairs, in which the premises are drawn from the captions of the Flickr30 corpus and hypotheses are manually annotated (Bowman et al., 2015). This is the most widely used entailment dataset for NLI. The dataset is used only for domain adaptation in this study. • SciTail This is a textual entailment dataset derived from a science question answering (SciQ) dataset (Khot et al., 2018). The task involves assessing whether a given premise entails a given hypothesis. In contrast to other entailment datasets mentioned previously, the hypotheses in SciTail are created from science questions while the corresponding answer candidates and premises come from relevant web sentences retrieved from a large corpus. As a result, these sentences are linguistically challenging and the lexical similarity of premise and hypothesis is often high, thus making SciTail particularly difficult. The dataset is used only for domain adaptation in this study. • ANLI. The Adversarial Natural Language Inference (ANLI, Nie et al. (2019)) is a new largescale NLI benchmark dataset, collected via an iterative, adversarial human-and-model-in-the-loop procedure. Particular, the data is selected to be difficult to the state-of-the-art models, including BERT and RoBERTa. B Hyperparameters As for the sensitivities of hyper-parameters, in general the performance of our method is not very sensitive to the choice of hyper-parameters as detailed below. • We only observed slight differences in model performance when λs ∈[1, 10], µ ∈[1, 10] and ϵ ∈[10−5, 10−4]. When λs ≥100, µ ≥100 or ϵ ≥10−3, the regularization is unreasonably strong. When λs ≤0.1, µ ≤0.1 or ϵ <= 1e −6, the regularization is unreasonably weak. • The algorithm is not sensitive to σ, any σ ≤ϵ works well. • p = ∞makes the size of perturbation constraint to be the same regardless of the number of dimensions. For p = 2, adversarial perturbation is sensitive to the number of dimensions (A higher dimension usually requires a larger perturbation), especially for sentences with different length. As a result, we need to make less tuning effort for p = ∞. For other values of p, the associated projections are computationally inefficient. • We observed a minor improvement by using a larger S or a larger Tex. The minor improvement comes with an increased cost of computation. When S = Tex = 1, SMART requires 3 more forward passes and 3 more backward passes per iteration, compared with direct fine-tuning. 
In practice, it takes about 3 times the original training time. In terms of memory usage, it approximately doubles the GPU memory usage.
• We set β = 0.99 for the first 10% of the updates (t ≤ 0.1T) and β = 0.999 for the rest of the updates (t > 0.1T) following (Tarvainen and Valpola, 2017), which works well in practice.

Corpus   Task           #Train  #Dev  #Test  #Label  Metrics
Single-Sentence Classification (GLUE)
CoLA     Acceptability  8.5k    1k    1k     2       Matthews corr
SST      Sentiment      67k     872   1.8k   2       Accuracy
Pairwise Text Classification (GLUE)
MNLI     NLI            393k    20k   20k    3       Accuracy
RTE      NLI            2.5k    276   3k     2       Accuracy
WNLI     NLI            634     71    146    2       Accuracy
QQP      Paraphrase     364k    40k   391k   2       Accuracy/F1
MRPC     Paraphrase     3.7k    408   1.7k   2       Accuracy/F1
QNLI     QA/NLI         108k    5.7k  5.7k   2       Accuracy
Text Similarity (GLUE)
STS-B    Similarity     7k      1.5k  1.4k   1       Pearson/Spearman corr
Pairwise Text Classification
SNLI     NLI            549k    9.8k  9.8k   3       Accuracy
SciTail  NLI            23.5k   1.3k  2.1k   2       Accuracy
ANLI     NLI            163k    3.2k  3.2k   3       Accuracy
Table 8: Summary of the four benchmarks: GLUE, SNLI, SciTail and ANLI.
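For reference, the search space reported in Section 4.1 and the ranges discussed in this appendix can be collected into a single configuration grid. The sketch below is plain Python; the dictionary keys are our own naming, not those of the released scripts, and only the two task-specific dropout exceptions mentioned in Section 4.1 are shown.

```python
# Hyper-parameter grid reported for SMART (Section 4.1 and Appendix B).
smart_grid = {
    "learning_rate": [1e-5, 2e-5, 3e-5, 5e-5],   # ADAM / RADAM
    "batch_size": [16, 32, 64],
    "max_epochs": 6,
    "warmup_proportion": 0.1,                    # linear decay with warm-up
    "gradient_clip_norm": 1.0,
    "dropout": {"default": 0.1, "MNLI": 0.3, "CoLA": 0.05},
    "epsilon": 1e-5,                             # perturbation radius (l-inf ball)
    "sigma": 1e-5,                               # std of the random perturbation init
    "lambda_s": [1, 3, 5],                       # weight of R_s
    "mu": 1.0,                                   # weight of the Bregman proximal term
    "eta": 1e-3,                                 # step size for updating x~_i
    "beta": {"first_10_percent": 0.99, "rest": 0.999},
    "S": 1,                                      # Adam steps per proximal subproblem
    "T_x": 1,                                    # ascent steps on the perturbation
}

if __name__ == "__main__":
    from itertools import product
    # Enumerate the cross-product of the tunable choices.
    for lr, bs, lam in product(smart_grid["learning_rate"],
                               smart_grid["batch_size"],
                               smart_grid["lambda_s"]):
        print(f"lr={lr}, batch_size={bs}, lambda_s={lam}")
```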
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2191–2197 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2191 Stolen Probability: A Structural Weakness of Neural Language Models David Demeter Northwestern University [email protected] Gregory Kimmel H. Lee Moffitt Cancer Center [email protected] Doug Downey Allen Institute for AI [email protected] Abstract Neural Network Language Models (NNLMs) generate probability distributions by applying a softmax function to a distance metric formed by taking the dot product of a prediction vector with all word vectors in a high-dimensional embedding space. The dot-product distance metric forms part of the inductive bias of NNLMs. Although NNLMs optimize well with this inductive bias, we show that this results in a sub-optimal ordering of the embedding space that structurally impoverishes some words at the expense of others when assigning probability. We present numerical, theoretical and empirical analyses showing that words on the interior of the convex hull in the embedding space have their probability bounded by the probabilities of the words on the hull. 1 Introduction Neural Network Language Models (NNLMs) have evolved rapidly over the years from simple feed forward nets (Bengio et al., 2003) to include recurrent connections (Mikolov et al., 2010) and LSTM cells (Zaremba et al., 2014), and most recently transformer architectures (Dai et al., 2019; Radford et al., 2019). This has enabled ever-increasing performance on benchmark data sets. However, one thing has remained relatively constant: the softmax of a dot product as the output layer. NNLMs generate probability distributions by applying a softmax function to a distance metric formed by taking the dot product of a prediction vector with all word vectors in a high-dimensional embedding space. We show that the dot product distance metric introduces a limitation that bounds the expressiveness of NNLMs, enabling some words to “steal” probability from other words simply due to their relative placement in the embedding space. We call this limitation the stolen probability effect. While the net impact of this limitation is small in terms of the perplexity measure on which NNLMs are evaluated, we show that the limitation results in significant errors in certain cases. As an example, consider a high probability word sequence like “the United States of America” that ends with a relatively infrequent word such as “America”. Infrequent words are often associated with smaller embedding norms, and may end up inside the convex hull of the embedding space. As we show, in such a case it is impossible for the NNLM to assign a high probability to the infrequent word that completes the high-probability sequence. Numerical, theoretical and empirical analyses are presented to establish that the stolen probability effect exists. Experiments with n-gram models, which lack this limitation, are performed to quantify the impact of the effect. 2 Background In a NNLM, words wi are represented as vectors xi in a high-dimensional embedding space. Some combination of these vectors xc = {xi}i∈c are used to represent the preceding context c, which are fed into a a neural unit as features to generate a prediction vector ht. 
NNLMs generate a probability distribution over a vocabulary of words w_i to predict the next word in a sequence w_t using a model of the form:

P(w_t \mid c) = \sigma(f(x_c, \theta_{NNLM})) \quad (1)

where σ is the softmax function, f is a neural unit that generates the prediction vector h_t, and θ_NNLM are the parameters of the neural unit. A dot product between the prediction vector h_t and all word vectors x_i is taken to calculate a set of distances, which are then used to form logits:

z_{it} = x_i \cdot h_t^{T} + b_i \quad (2)

where b_i is a word-specific bias term. The logits are used with the softmax function to generate a probability distribution over the vocabulary V such that:

P(w_t = w_i \mid c) = \frac{e^{z_{it}}}{\sum_{v \in V} e^{z_{vt}}} \quad (3)

We refer to this calculation of logits and transformation into a probability distribution as the dot-product softmax.

3 Problem Definition

NNLMs learn very different embeddings for different words. In this section we show that this can make it impossible for words with certain embeddings to ever be assigned high probability in any context. We start with a brief examination of the link between embedding norm and probability, which motivates our analysis of the stolen probability effect in terms of a word's position in the embedding space relative to the convex hull of the embedding space.

3.1 Embedding Space Analysis

The dot product used in Eq. 2 can be written in polar coordinates as:

z_{it} = \|x_i\| \|h_t\| \cos(\theta_i) + b_i \quad (4)

where θ_i is the angle between x_i and h_t. The dot-product softmax allocates probability to word w_i in proportion to z_{it}'s value relative to the value of the other logits (see Eq. 3). Setting aside the bias term b_i for the moment (which is shown empirically to be irrelevant to our analysis in Section 4.2), this means that word A with a larger norm than word B will be assigned higher probability when the angles θ_A and θ_B are the same. More generally, the relationship between embedding norms and the angles formed with prediction points h_t can be expressed as:

\frac{\|x_A\|}{\|x_B\|} > \frac{\cos(\theta_B)}{\cos(\theta_A)} \quad (5)

when word A has a higher probability than word B. Empirical results (not presented) confirm that NNLMs organize the embedding space such that word vector norms are widely distributed, while their angular displacements relative to a reference vector fall into a narrow range. This suggests that the norm terms in Eq. 4 dominate the calculation of logits, and thereby probability.

[Figure 1: Numerical Illustration of the Stolen Probability Effect. Panels (i) and (ii) show the embedding of four words in a 2D embedding space. Word A is on the convex hull in panel (i), and interior to the convex hull in panel (ii). Panels (iii) and (iv) show the probability that would be assigned by the dot-product softmax to A for a range of prediction points h_t in the x, y plane. When word A is on the convex hull, it can achieve nearly 100% probability for an h_t prediction point in the far lower-left quadrant (see panel (iii)). When word A is interior to the convex hull, its maximum probability is bounded by any word on the convex hull (see panel (iv)).]
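The quantities in Eqs. 2-5 are easy to reproduce directly. The sketch below, in plain NumPy, computes the logits, the dot-product softmax, and the norm/angle decomposition for a toy embedding matrix; it is an illustration of the equations, not code from the paper, and the variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
V, d = 1000, 50                       # vocabulary size, embedding dimension
E = rng.normal(size=(V, d))           # word embeddings x_i
b = rng.normal(scale=0.1, size=V)     # word-specific bias terms b_i
h = rng.normal(size=d)                # prediction vector h_t for some context

# Eq. 2: logits from the dot product; Eq. 3: dot-product softmax.
z = E @ h + b
p = np.exp(z - z.max())
p /= p.sum()

# Eq. 4: the same logits in polar form, ||x_i|| ||h_t|| cos(theta_i) + b_i.
norms = np.linalg.norm(E, axis=1)
cos_theta = (E @ h) / (norms * np.linalg.norm(h))
z_polar = norms * np.linalg.norm(h) * cos_theta + b
assert np.allclose(z, z_polar)

# Words with large norms can dominate the logits even at comparable angles (Eq. 5).
print("most probable word id:", int(p.argmax()),
      "norm:", float(norms[p.argmax()]))
```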
3.2 Theoretical Analysis

While an analysis of how embedding norms impact the assignment of probability is informative, the stolen probability effect is best analyzed in terms of a word's position in the embedding space relative to the convex hull of the embedding space. A convex hull is the smallest set of points forming a convex polygon that contains all other points in a Euclidean space.

Theorem 1. Let C be the convex hull of the embeddings {x_i} of a vocabulary V. If an embedding x_i for word w_i ∈ V is interior to C, then the maximum probability P(w_i) assigned to w_i using a dot-product softmax is bounded by the probability assigned to at least one word whose embedding is on the convex hull. (See Appendix A for a proof.)

3.3 Numerical Analysis

The stolen probability effect can be illustrated numerically in a 2D Euclidean space (see Figure 1). We show two configurations of an embedding space, one where the target word A is on the convex hull (Panel i) and another where A is on the interior (Panel ii). Under both configurations, a NNLM trained to the maximum likelihood objective would seek to assign probability such that P(A) = 1.0. For the first configuration, this is achievable for an h_t in the far lower-left quadrant (Panel iii). However, when A is in the interior, no h_t exists for which the dot-product softmax can assign a probability approaching 1.0 (Panel iv). A similar illustration in 3D is presented in Appendix B.

4 Experiments

In this section we provide empirical evidence showing that words interior to the convex hull are probability-impoverished due to the stolen probability effect, and we analyze the impact of this phenomenon on different models.

4.1 Methods

We perform our evaluations using the AWD-LSTM (Merity et al., 2017) and the Mixture of Softmaxes (MoS) (Yang et al., 2017) language models. Both models are trained on the Wikitext-2 corpus (Merity et al., 2016) using default hyper-parameters, except for dimensionality, which is set to d = {50, 100, 200}. The AWD-LSTM model is trained for 500 epochs and the MoS model is trained for 200 epochs, resulting in perplexities as shown in Table 1.

Model  d    Train PPL  Test PPL  ω (radians)  Interior Points
AWD    50   140.6      141.8     50π/128      6,155
AWD    100  73.3       97.8      55π/128      5,205
AWD    200  44.9       81.6      58π/128      2,064
MoS    50   51.7       76.8      53π/128      4,631
MoS    100  34.8       67.4      57π/128      4,371
MoS    200  25.5       64.2      59π/128      2,009
Table 1: Perplexities and Detection Results. Each model was trained using default hyper-parameters except for the dimensionality d as shown and the number of training epochs. The AWD-LSTM models were trained for 500 epochs and the MoS models were trained for 200 epochs. Each ordinal plane of an n-sphere in the embedding space was discretized into arcs of 2π/256. The angle φ of the difference vector x_i − p formed by each word type embedded at p is mapped to one of these arcs. Directions on the interval (φ ± ω) are eliminated from consideration per Eq. 6, and words for which all directions have been eliminated are classified as interior. The increment ω was set to the lowest value that would classify at least 1,000 words as interior.

The Quickhull algorithm (Barber et al., 1996) is among the most popular algorithms used to detect the convex hull in a Euclidean space. However, we found it to be intractably slow for embedding spaces above ten dimensions, and therefore resorted to approximate methods. We relied upon an identity derivable from the properties of a convex hull, which states that a point p ∈ R^d is a vertex of the convex hull of {x_i} if there exists a vector h_t ∈ R^d such that for all x_i:

\langle h_t, x_i - p \rangle < 0, \quad (6)

where ⟨·, ·⟩ is the dot product. Exhaustively searching for directions h_t which satisfy Eq. 6 is not computationally feasible. Instead, we rely upon a high-precision, low-recall approximate method to eliminate potential directions for h_t which do not satisfy Eq. 6. We call this method our detection algorithm. If the set of remaining directions is not empty, then p is classified as a vertex; otherwise p is classified as an interior point.
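In low dimensions, the vertex property behind Eq. 6 can also be checked exactly with a small linear program: p is a vertex precisely when it cannot be written as a convex combination of the other embeddings. The sketch below is such an exact check, useful for validating an approximate detector; it is not the angle-based detection algorithm described here (which is used because this kind of exact search does not scale to the full embedding spaces), and the function name is ours.

```python
import numpy as np
from scipy.optimize import linprog

def is_vertex(p, others):
    """Return True if p is a vertex of the convex hull of {p} union others.

    p is a vertex iff it cannot be written as a convex combination of the
    other points, which is a small linear-programming feasibility problem.
    """
    others = np.asarray(others, dtype=float)
    n, d = others.shape
    # Find lambda >= 0 with sum(lambda) = 1 and others^T lambda = p.
    A_eq = np.vstack([others.T, np.ones((1, n))])          # shape (d + 1, n)
    b_eq = np.concatenate([np.asarray(p, dtype=float), [1.0]])
    res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * n, method="highs")
    return not res.success   # infeasible => p lies outside conv(others) => vertex

# Toy check in 2D: the centre of a square is interior, the corners are vertices.
square = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
centre = np.array([0.5, 0.5])
print(is_vertex(centre, square))            # False: interior point
print(is_vertex(square[0], square[1:]))     # True: a corner of the square
```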
The increment ω was set to the lowest value that would classify at least 1,000 words as interior. ⃗xi −⃗p do not satisfy Eq. 6. It is also true that all directions in the range (φ + ω, φ −ω) will not satisfy Eq. 6, where φ is the direction of the difference vector and ω is some increment less than π/2. The detection algorithm was validated in lower dimensional spaces where an exact convex hull could be computed (e,g. up to d = 10). It consistently classified interior points with precision approaching 100% and recall of 68% when evaluated on the first 10 dimensions of the MoS model with d = 100. 4.2 Results Applying the detection algorithm to our models yields word types being classified into distinct interior and non-interior sets (see Table 1). We ranked the top 500 words of each set by the maximum probability they achieved on the training corpora1, and plot these values in Figure 2, showing a clear distinction between interior and non-interior sets. The maximum trigram probabilities (Stolcke, 2002) smoothed with KN3 for the same top 500 words in each set (separately sorted) are also shown. The difference between NNLM and trigram curves for interior words shows that models like n-grams, which do not utilize a dot-product softmax, are not subject to the stolen probability effect and can assign higher probabilities. A random set of words equal 1We present our results on the training set because here, our goal is to characterize the expressiveness of the models rather than their ability to generalize. 2194 Figure 2: Maximum Probability of Top 500 Interior and Non-Interior Words. The MoS model with d = 100 struggles to assign high probability to interior words, while trigrams were able to capture more accurate statistics. This behavior is absent for non-interior words. NonTriModel d Interior Rand gram Interior AWD 50 44.3 8.1 20.7 0.004 AWD 100 89.2 31.3 15.6 0.018 AWD 200 99.0 43.3 12.5 0.113 MoS 50 76.5 22.9 16.8 0.4 MoS 100 92.9 50.5 22.6 8.6 MoS 200 97.3 51.4 30.9 40.0 Table 2: Average Maximum Probability for Top 500 Words. The average probability mass for each word set (expressed as percents) is calculated by averaging the maximum probability on the training corpora achieved for the top 500 words of each set. in size to the interior set was also constructed by uniform sampling, and ranked on the top 500 words. A comparison between the random and interior sets provides evidence that our detection algorithm is effective at separating the interior and non-interior sets, and is not simply performing random sampling. Our results can be more compactly presented by considering the average probability mass assigned to the top 500 words for each set (see Table 2). The impact of the stolen probability effect for each model can quantified as the difference between the interior set and each of the three reference sets (noninterior, random, and trigram) in the table. The interior average maximum probability is generally much smaller than those of the reference sets. Another way to quantify the impact of the stolen probability effect is to overcome the bound on the interior set by constructing an ensemble with trigram statistics. We constructed a targeted ensemble of the MoS model with d = 100 and a trigram model—unlike a standard ensemble, the trigram model is only used in contexts that are likely to indicate an interior word: specifically, those that precede at least one interior word in the training set. Otherwise, we default to the NNLM probability. 
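To make the targeted ensemble concrete, a minimal sketch is given below. The `nnlm` and `trigram` objects, their `prob` methods, the `build_risky_contexts` helper, and the tiny stand-in model are all hypothetical placeholders for whatever NNLM and KN3 trigram implementations are in use; the interpolation weights are the 0.8/0.2 values reported in the next sentence.

def build_risky_contexts(train_tokens, interior_words, order=2):
    # Contexts (as n-gram tuples) that precede at least one interior word in
    # the training data; only these trigger interpolation at prediction time.
    risky = set()
    for i in range(order, len(train_tokens)):
        if train_tokens[i] in interior_words:
            risky.add(tuple(train_tokens[i - order:i]))
    return risky

def targeted_ensemble_prob(word, context, nnlm, trigram, risky_contexts,
                           w_nnlm=0.8, w_tri=0.2):
    # `nnlm.prob` and `trigram.prob` are assumed to return P(word | context).
    if tuple(context) in risky_contexts:
        return w_nnlm * nnlm.prob(word, context) + w_tri * trigram.prob(word, context)
    return nnlm.prob(word, context)

class _UniformModel:
    # Tiny stand-in model, for demonstration only.
    def __init__(self, vocab):
        self.p = 1.0 / len(vocab)
    def prob(self, word, context):
        return self.p

tokens = "the interior word follows this context".split()
risky = build_risky_contexts(tokens, interior_words={"word"}, order=2)
m = _UniformModel(set(tokens))
print(targeted_ensemble_prob("word", ("the", "interior"), m, m, risky))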
When we ensemble, we assign weights of 0.8 to the NNLM, 0.2 to the trigram (selected using the training set). Overall, the targeted ensemble improved training perplexity from 34.8 to 33.6, and test perplexity from 67.4 to 67.0. The improvements on the interior words themselves were much larger: training perplexities for interior words improved from 700.0 to 157.2, and test improved from 645.6 to 306.7. Improvement on the interior words is not unexpected given the differences observed in Figure 2. The overall perplexity differences, while small in magnitude, suggest that ensembling with a model that lacks the stolen probability limitation may provide some boost to a NNLM. Returning to the question of bias terms, we find empirically that bias terms are relatively small, averaging −0.13 and 0.02 for the interior and noninterior sets of the MoS model with d = 100, respectively. We note that the bias terms are wordspecific and can only adjust the stolen probability effect by a constant factor. That is, it does not change the fact that words in the interior set are probability-bounded. All of our empirical results are calculated on a model with a bias term, demonstrating that the stolen probability effect persists with bias terms. 4.3 Analysis Attributes of the stolen probability effect analyzed in this work are distinct from the softmax bottleneck (Yang et al., 2017). The softmax bottleneck argues that language modeling can be formulated as a factorization problem, and that the resulting model’s expressiveness in limited by the rank of the word embedding matrix. While we also argue that the expressiveness of a NNLM is limited for structural reasons, the stolen probability effect that we study is best understood as a property of the arrangement of the embeddings in space, rather than the dimensionality of the space. Our numerical and theoretical analyses presented do not rely upon any particular number of dimensions, and our experiments show that the stolen probability effect holds over a range of dimensions. However, there is a steady increase of average probability mass assigned to the interior set 2195 as model dimensionality increases, suggesting that there are limits to the stolen probability effect. This is not unexpected. As the capacity of the embedding space increases with additional dimensions, the model has additional degrees of freedom in organizing the embedding space. The vocabulary of the Wikitext-2 corpus is small compared to other more recent corpora. We believe that larger vocabularies will offset (at least partially) the additional degrees of freedom associated with higher dimensional embedding spaces. We leave the exploration of this question as future research. We acknowledge that our results can also be impacted by the approximate nature of our detection algorithm. Without the ability to precisely detect detect the convex hull for any of our embedding spaces, we can not make precise claims about its performance. The difference between average probability mass assigned to random and interior sets across all models evaluated suggests that the detection algorithm succeeds at identifying words with substantially lower maximum probabilities than a random selection of words. In Section 3.1 we motivated our analysis of the stolen probability effect by examining the impact of embeddings norms on probability assignment. 
One natural question is to ask is “Does our detection algorithm simply classify embeddings with small norms as interior points?” Our results suggest that this is not the case. The scatter plot of embedding norm versus maximum probability (see Figure 3) shows that words classified as interior points frequently have lower norms. This is expected, since points interior to the convex hull are by definition not located in extreme regions of the embedding space. The embedding norms for words in the interior set range between 1.4 and 2.6 for the MoS model with d = 100. Average maximum probabilities for words in this range are 1.4% and 4.1% for interior and non-interior sets of the MoS model with d = 100, respectively, providing evidence that the detection algorithm is not merely identifying word with small embedding norms. Lastly, we note that the interior sets of the AWDLSTM models are particularly probability impoverished relative to the more powerful MoS models. We speculate that the perplexity improvements of the MoS model may be due in part to mitigating the stolen probability effect. Exploration of the stolen probability effect in more powerful NNLM architectures using dot-product softmax output layers is Figure 3: Maximum Probability vs. Embedding Norm. Examining maximum word probability as a function of embedding norm for the MoS model with d = 100 shows that interior words are associated with smaller embedding norms and lower maximum probabilities. However, many non-interior words with comparably small norms have substantially higher maximum probabilities. another item of future research. 5 Related Work Other work has explored alternative softmax configurations, including a mixture of softmaxes, adaptive softmax and a Taylor Series softmax (Yang et al., 2017; Grave et al., 2016; de Br´ebisson and Vincent, 2015). There is also a body of work that analyzes the properties of embedding spaces (Burdick et al., 2018; Mimno and Thompson, 2017). We do not seek to modify the softmax. Instead we present an analysis of how the structural bounds of an NNLM limit its expressiveness. 6 Conclusion We present numerical, theoretical and empirical analyses showing that the dot-product softmax limits a NNLM’s expressiveness for words on the interior of a convex hull of the embedding space. This is structural weakness of NNLMs with dot-product softmax output layers, which we call the stolen probability effect. Our experiments show that the effect is relatively common in smaller neural language models. Alternative architectures that can overcome the stolen probability effect are an item of future work. Acknowledgments This work was supported in part by NSF Grant IIS-1351029. We thank the anonymous reviewers and Northwestern’s Theoretical Computer Science group for their insightful comments and guidance. 2196 References C. Bradford Barber, David P. Dobkin, and Hannu Huhdanpaa. 1996. The quickhull algorithm for convex hulls. ACM Trans. Math. Softw., 22:469–483. Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic language model. JOURNAL OF MACHINE LEARNING RESEARCH, 3:1137–1155. Alexandre de Br´ebisson and Pascal Vincent. 2015. An exploration of softmax alternatives belonging to the spherical loss family. CoRR, abs/1511.05042. Laura Burdick, Jonathan K. Kummerfeld, and Rada Mihalcea. 2018. Factors influencing the surprising instability of word embeddings. In NAACL-HLT. Zihang Dai, Zhilin Yang, Yiming Yang, Jaime G. Carbonell, Quoc V. Le, and Ruslan Salakhutdinov. 
2019. Transformer-xl: Attentive language models beyond a fixed-length context. In ACL. Edouard Grave, Armand Joulin, Moustapha Cisse, David Grangier, and Herve Jegou. 2016. Efficient softmax approximation for gpus. ArXiv, abs/1609.04309. Stephen Merity, Nitish Shirish Keskar, and Richard Socher. 2017. Regularizing and optimizing lstm language models. ArXiv, abs/1708.02182. Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2016. Pointer sentinel mixture models. ArXiv, abs/1609.07843. Tomas Mikolov, Martin Karafi´at, Luk´as Burget, Jan ernock´y, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In INTERSPEECH. David M. Mimno and Laure Thompson. 2017. The strange geometry of skip-gram with negative sampling. In EMNLP. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Andreas Stolcke. 2002. Srilm - an extensible language modeling toolkit. In INTERSPEECH. Zhilin Yang, Zihang Dai, Ruslan Salakhutdinov, and William W. Cohen. 2017. Breaking the softmax bottleneck: A high-rank rnn language model. ArXiv, abs/1711.03953. Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. 2014. Recurrent neural network regularization. ArXiv, abs/1409.2329. x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x 2197 A Proof of Theorems Proof of Theorem 1. Let P = {x1, . . . , xN} be the set of all words. We can form the convex hull of this set. If p is interior, then for all v, there exists an xi ∈P such that ⟨v, xi −p⟩> 0. We argue by contradiction. Suppose that p is interior and that for all v, we have that ⟨v, xi −p⟩≤0 for all xi ∈P. This implies that all points in our set P lay strictly on one side of the hyperplane made perpendicular to v through p. This would imply that p was actually on the convex hull, a contradiction. This implies that for any test point h, an interior point will be bounded by at least one point in P. That is ⟨h, p⟩< ⟨h, xi⟩for some xi ∈P. Plugging into the softmax function we see that: P(p) = exp(⟨h, p⟩) exp(⟨h, p⟩) + P j̸=p exp(⟨h, xj⟩) ≤ 1 1 + exp(⟨h, xi −p⟩) Letting ∥h∥→∞shows that P(p) →0. This shows that interior points are probability deficient. We also note that letting ∥h∥→0 gives the base probability P(p) = 1/|P|. The contrapositive of the above statement implies that if ∄v, where ∀xi ∈p we have ⟨v, xi−p⟩≤0, then p is on the convex hull. In fact, we also note that if p was a vertex, the inequality would be strict, which implies that one can find a test point such that the probability P(p) →1. The more interesting case is if the point p is on the convex hull, but not a vertex. In this case we define the set Ω(p, h) = {xi ∈P | ⟨h, p − xi⟩= 0}. This corresponds to the set of points lying directly on the hyperplane perpendicular to h, running through p. This set is nonempty. Then we see that: P(p) = exp(⟨h, p⟩) P j exp(⟨h, xj⟩) ≤ exp(⟨h, p⟩) P j∈Ω(p,h) exp(⟨h, xj⟩) = 1 |Ω(p, h)| x B 3D Numerical Illustration Figure 4: Numerical Illustration in 3D The top panels show six words in a flattened cross-section of 3D space. Points A, B, C, D and E are embedded at (0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0) and (0.5, 0.5, 1) respectively. In the top-left panel, F is embedded outside of the convex hull at (0.65, 0.35, 1.5), and in the top-right panel F is embedded inside of the convex hull at (0.65, 0.35, 0.5). 
Subsequent panels show cross sections of the probability of F for test points in the planes z = {0.0, 2.0, 4.0, 6.0}, numerically illustrating the stolen probability effect in 3D.
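The illustration in Figure 4 can also be reproduced numerically. The sketch below (a NumPy illustration with no bias terms, not the code used to produce the figure) evaluates the dot-product softmax probability of F over a grid of test points on those planes for both placements of F; for the interior placement the maximum stays well below 1.0, while the exterior placement can be driven close to 1.0.

import numpy as np

def softmax_prob(target_idx, H, E):
    # P(target | h) under a dot-product softmax (no bias), for each row h of H.
    logits = H @ E.T
    logits -= logits.max(axis=1, keepdims=True)
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    return probs[:, target_idx]

base = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0],
                 [1.0, 1.0, 0.0], [0.5, 0.5, 1.0]])            # A, B, C, D, E
grid = np.array([[x, y, z] for z in (0.0, 2.0, 4.0, 6.0)
                 for x in np.linspace(-4.0, 4.0, 41)
                 for y in np.linspace(-4.0, 4.0, 41)])
for f, label in [(np.array([0.65, 0.35, 1.5]), "F outside the hull"),
                 (np.array([0.65, 0.35, 0.5]), "F inside the hull")]:
    E = np.vstack([base, f])
    print(label, "max P(F) over the grid:",
          round(float(softmax_prob(5, grid, E).max()), 3))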
2020
198
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2198–2208 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2198 Taxonomy Construction of Unseen Domains via Graph-based Cross-Domain Knowledge Transfer Chao Shang1, Sarthak Dash2, Md Faisal Mahbub Chowdhury2, Nandana Mihindukulasooriya2, Alfio Gliozzo2 1University of Connecticut, Storrs, CT, USA 2IBM Research AI, Yorktown Heights, NY, USA [email protected], [email protected] [email protected], [email protected], [email protected] Abstract Extracting lexico-semantic relations as graphstructured taxonomies, also known as taxonomy construction, has been beneficial in a variety of NLP applications. Recently Graph Neural Network (GNN) has shown to be powerful in successfully tackling many tasks. However, there has been no attempt to exploit GNN to create taxonomies. In this paper, we propose Graph2Taxo, a GNN-based cross-domain transfer framework for the taxonomy construction task. Our main contribution is to learn the latent features of taxonomy construction from existing domains to guide the structure learning of an unseen domain. We also propose a novel method of directed acyclic graph (DAG) generation for taxonomy construction. Specifically, our proposed Graph2Taxo uses a noisy graph constructed from automatically extracted noisy hyponym-hypernym candidate pairs, and a set of taxonomies for some known domains for training. The learned model is then used to generate taxonomy for a new unknown domain given a set of terms for that domain. Experiments on benchmark datasets from science and environment domains show that our approach attains significant improvements correspondingly over the state of the art. 1 Introduction Taxonomy has been exploited in many Natural Language Processing (NLP) applications, such as question answering (Harabagiu et al., 2003), query understanding (Hua et al., 2017), recommendation systems (Friedrich and Zanker, 2011), etc. Automatic taxonomy construction is highly challenging as it involves the ability to recognize – (i) a set of types (i.e. hypernyms) from a text corpus, (ii) instances (i.e. hyponyms) of each type, and (iii) is-a (i.e. hypernymy) hierarchy between types. Existing taxonomies (e.g., WordNet (Miller et al., 1990)) are far from being complete. Taxonomies specific to many domains are either entirely absent or missing. In this paper, we focus on construction of taxonomies for such unseen domains1. Since taxonomies are expressed as directed acyclic graphs (DAGs) (Suchanek et al., 2008), taxonomy construction can be formulated as a DAG generation problem. There has been considerable research on Graph Neural Networks (GNN) (Sperduti and Starita, 1997; Gori et al., 2005) over the years; particularly inspired by the convolutional GNN (Bruna et al., 2014) where graph convolution operations were defined in the Fourier domain. In a similar spirit to convolutional neural networks (CNNs), GNN methods aggregate neighboring information based on the connectivity of the graph to create node embeddings. GNN has been applied successfully in many tasks such as matrix completion (van den Berg et al., 2017), manifold analysis (Monti et al., 2017), predictions of community (Bruna et al., 2014), knowledge graph completion (Shang et al., 2019), and representations of network nodes (Hamilton et al., 2017; Kipf and Welling, 2017). To the best of our knowledge, there has been no attempt to exploit GNN for taxonomy construction. 
Our proposed framework, Graph2Taxo, is the first to show that a GNN-based model using a crossdomain noisy graph can substantially improve the taxonomy construction of unseen domains (e.g., Environment) by exploiting taxonomy of one or more seen domains (e.g., Food). (The task is described in detail in Section 3.1.) Another novelty of our approach is we are the first to apply the acyclicity constraint-based DAG structure learning model (Zheng et al., 2018; Yu et al., 2019) for taxonomy generation task. The input of Graph2Taxo is a cross-domain 1By unseen domain, we refer to a domain for which taxonomy is not available to the system. 2199 noisy graph constructed by connecting noisy candidate is-a pairs, which are extracted from a large corpus using standard linguistic pattern-based approaches (Hearst, 1992). It is noisy because pattern-based approaches are prone to poor coverage as well as wrong extractions. In addition, it is cross-domain because the noisy is-a pairs are extracted from a large-scale corpus which contains a collection of text from multiple domains. Our proposed neural model directly encodes the structural information from a noisy graph into the embedding space. Since the links between domains are also used in our model, it has not only structural information of multiple domains but also crossdomain information. We demonstrate effectiveness of our proposed method on science and environment datasets (Bordea et al., 2016), and show significant improvements on F-score over the state of the art. 2 Related Work Taxonomy construction (also known as taxonomy induction) is a well-studied problem. Most of the existing works follow two sequential steps to construct taxonomies from text corpora (Wang et al., 2017). First, is-a pairs are extracted using patternbased or distributional methods. Then, a taxonomy is constructed from these is-a pairs. The pattern-based methods, pioneered by Hearst (1992), detect is-a relation of a term pair (x, y) using the appearance of x and y in the same sentence through some lexical patterns or linguistic rules (Ritter et al., 2009; Luu et al., 2014). Snow et al. (2004) represented each (x, y) term-pair as the multiset of dependency paths connecting their co-occurrences in a corpus, which is also regarded as a path-based method. An alternative approach for detecting is-a relation is the distributional methods (Baroni et al., 2012; Roller et al., 2014), using the distributional representation of terms to directly predict relations. As for the step of taxonomy construction using the extracted is-a pairs, most of the approaches do it by incrementally attaching new terms (Snow et al., 2006; Shen et al., 2012; Alfarone and Davis, 2015; Wu et al., 2012). Mao et al. (2018) is the first to present a reinforcement learning based approach, named TaxoRL, for this task. For each term pair, its representation in TaxoRL is obtained by the path LSTM encoder, the word embeddings of both terms, and the embeddings of features. Recently, Dash et al. (2020) argued that strict partial orders2 correspond more directly to DAGs. They proposed a neural network architecture, called Strict Partial Order Network (SPON), that enforces asymmetry and transitive properties as soft constraints. Empirically, they showed that such a network produces better results for detecting hyponym-hypernym pairs on a number of datasets for different languages and domains in both supervised and unsupervised settings. Many graph-based methods such as Kozareva and Hovy (2010) and Luu et al. 
(2014) regard the task of hypernymy organization as a hypernymy detection problem followed by a graph pruning problem. For the graph pruning task, various graphtheoretic approaches such as optimal branching algorithm (Velardi et al., 2013), Edmond’s algorithm (Karp, 1971) and Tarjan’s algorithm (Tarjan, 1972) have been used over the years. In addition to these, Wang et al. (2017) mentions several other graphbased taxonomy induction approaches. In contrast, our approach formulates the taxonomy construction task as a DAG generation problem instead of an incremental taxonomy learning (Mao et al., 2018), which differentiates it when compared with the existing methods. In addition, our approach uses the knowledge from existing domains (Bansal et al., 2014; Gan et al., 2016) to build the taxonomies of missing domains. 3 The Graph2Taxo Framework In this section, we first formulate the problem statement and then introduce our proposed Graph2Taxo framework as a solution. We describe the individual components of this framework in detail, along with justifications of how and why these components come together as a solution. Figure 1: An illustration of our GNN-based crossdomain transfer framework for taxonomy construction. 2A binary relation that is transitive, irreflexive and asymmetric. 2200 3.1 Problem Definition The problem addressed in this paper is, given a list of domain-specific terms from a target unseen (aka missing) domain as input, how to construct a taxonomy for that target unseen domain. In other words, the problem addressed in this paper is how to organize these terms into a taxonomy. This problem can be further abstracted out as follows: Given a large input corpus and a set of gold taxonomies Ggold from some known domains (different from the target domain), our task is to learn a model (trained using the corpus and taxonomies of known domains) to construct multiple taxonomies for the target unseen domains. As a solution to the aforementioned problem, we propose a GNN-based cross-domain transfer framework for taxonomy construction (see Figure 1), called Graph2Taxo which consists of a crossdomain graph encoder and a DAG generator. The first step in our proposed approach is to build a cross-domain noisy graph as an input to our Graph2Taxo model. In this step, we extract candidate is-a pairs from a large collection of input corpora that spans multiple domains. To do so, we used the output of Panchenko et al. (2016), which is a combination of standard substring matching and pattern-based approaches. Since such patternbased approaches are too rigid, the corresponding output not only suffers from recall (i.e., missing is-a pairs) but also contains incorrect (i.e., noisy) pairs due to the ambiguity of language and richness in syntactic expression and structure in the input corpora. For example, consider the phrase “... animals other than dogs such as cats ...”. As (Wu et al., 2012) noted, pattern-based approaches will extract (cat is-a dog) rather than (cat is-a animal). Based on the noisy is-a pairs, we construct a directed graph Ginput = (Vinput, Einput), which is a cross-domain noisy graph. Here, Vinput denotes a set of terms, and (vi, vj) ∈Einput if and only if (vi, vj) belongs to the list of extracted noisy is-a pairs. The input document collection spans multiple domains, therefore Einput not only has intradomain edges, but also has cross-domain edges (see Figure 1). Graph2Taxo is a subgraph generation model which uses the large cross-domain noisy graph as the input. 
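As a small illustration of this construction (using networkx purely for convenience; the paper does not prescribe a particular graph library), the noisy input graph can be assembled from extracted candidate triples as follows, with an optional count threshold of the kind applied later in the experimental setup:

import networkx as nx

def build_noisy_graph(candidate_pairs, min_count=1):
    # candidate_pairs: iterable of (hyponym, hypernym, extraction_count) triples.
    # An edge (v_i, v_j) is added iff the candidate is-a pair was observed at
    # least `min_count` times; V_input is induced by the kept edges.
    g = nx.DiGraph()
    for hypo, hyper, count in candidate_pairs:
        if count >= min_count:
            g.add_edge(hypo, hyper, count=count)
    return g

candidates = [("cat", "animal", 42), ("dog", "animal", 57), ("cat", "dog", 3)]
g = build_noisy_graph(candidates, min_count=5)
print(sorted(g.edges()))   # [('cat', 'animal'), ('dog', 'animal')] -- noisy pair dropped

Graph2Taxo then operates on this cross-domain graph.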
Given a list of terms for a target unseen domain, it aims to learn a taxonomy structure for the corresponding domain as a DAG. Graph2Taxo takes advantage of additional knowledge in the form of previously known gold taxonomies {Ggold,i, 1 ≤i ≤Nknown} to train a learning model. During inference phase, the model receives a list of terms from the target unseen domain and aims to build a taxonomy by using the input terms. Here, Nknown denotes the number of previously known taxonomies used during the training phase. This problem of distilling directed acyclic substructures (taxonomies of many domains) using a large cross-domain noisy graph is challenging, because of relatively lower overlap between noisy edges in Einput and true edges in the available taxonomies in hand. The following sections describe our proposed Cross-domain Graph Encoder and the DAG Generator in further detail. 3.2 Cross-domain Graph Encoder This subsection describes the Cross-domain Graph Encoder in Figure 1 for embedding generation. This embedding generation algorithm uses two strategies, namely Neighborhood aggregation and Semantic clustering aggregation. 3.2.1 Neighborhood Aggregation This is the first of the two strategies used for embedding generation. Let A ∈Rn×n be the adjacency matrix of the noisy graph Ginput, where n is the size of Vinput. Let hl i represent the feature representation for the node vi in the l-th layer and thus Hl ∈Rn×dl denotes the intermediate representation matrix. The initial matrix H0 is randomly initialized from a standard normal distribution. We use the adjacency matrix A and the node representation matrix Hl to iteratively update the representation of a particular node by aggregating representations of its neighbors. This is done by using a GNN. Formally, a GNN layer (Gilmer et al., 2017; Hamilton et al., 2017; Xu et al., 2019) employs the general message-passing architecture which consists of a message propagation function M to get messages from neighbors and a vertex update function U. The message passing works via the following equations, ml+1 v = M(hl u) ∀u ∈N(v) hl+1 v = U(hl v, ml+1 v ) where N(v) denotes the neighbors of node v and m is the message. In addition, we use the following 2201 definitions for M and U functions, M(hl u) = X u∈N(v) Avuhl u, ∀u ∈N(v) U(hl v, ml+1 v ) = σ(ml+1 v Θl + hl vΘl) where Θl ∈Rdl×dl+1 denotes trainable parameters for layer l and σ represents an activation function. Let ˜A = A + I, here I is the identity matrix, the information aggregation strategy described above can be abstracted out as, Hl+1 = GNNl(A, Hl) = σ( ˜AHlΘl) 3.2.2 Semantic Clustering Aggregation This is the second of the two strategies used for embedding generation, which operates on the output of the previous step. The learned representations from the previous step are highly likely not to be uniformly distributed in the Euclidean Space, but rather form a bunch of clusters. In this regard, we propose a soft clustering-based pooling-unpooling step, that uses semantic clustering aggregating for learning better model representations. In essence, this step shares the similarity information for any pair of terms in the vocabulary. Analogous to an auto-encoder, the pooling layer adaptively creates a smaller cluster graph comprising of a set of cluster nodes, whose representations are learned based on a trainable cluster assignment matrix. This idea of using an assignment matrix was first proposed by the DiffPool (Ying et al., 2018) approach. 
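Returning to the neighborhood-aggregation step of Section 3.2.1, a minimal PyTorch sketch of one such layer (one possible rendering of H^{l+1} = sigma(A~ H^l Theta^l), not the authors' code, with ReLU chosen as the activation) is:

import torch
import torch.nn as nn

class NeighborhoodAggregation(nn.Module):
    # One GNN layer computing H^{l+1} = relu((A + I) H^l Theta^l).
    def __init__(self, d_in, d_out):
        super().__init__()
        self.theta = nn.Linear(d_in, d_out, bias=False)
    def forward(self, a, h):
        # a: dense (n, n) adjacency matrix of the noisy graph, h: (n, d_in) features.
        a_tilde = a + torch.eye(a.size(0), device=a.device)
        return torch.relu(self.theta(a_tilde @ h))

a = torch.tensor([[0., 1., 0.], [0., 0., 1.], [0., 0., 0.]])   # toy 3-node graph
h1 = NeighborhoodAggregation(8, 16)(a, torch.randn(3, 8))      # -> shape (3, 16)

The same GNN primitive is what is instantiated below (Eq. 1) as GNN_{l,cluster} to produce the soft assignment matrix S^l used by the pooling and unpooling operations.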
On the other hand, the unpooling layer decodes the cluster graph into the original graph using the same cluster assignment matrix learned in the pooling layer. The learned semantic cluster nodes can be thought of as “bridges” between nodes from the same or different clusters to pass messages. Mathematically speaking, we learn a soft cluster assignment matrix Sl ∈Rn×nc at layer l using the GNN model, where nc is the number of clusters. Each row in Sl corresponds to one of n nodes in layer l and each column corresponds to one of the nc clusters. As a first step, the pooling layer uses the adjacency matrix A and the node feature matrix Hl to generate a soft cluster assignment matrix as, Sl = softmax(GNNl,cluster(A, Hl)) (1) where the softmax is a row-wise softmax function, Θl cluster ∈Rdl×nc denotes all trainable parameters in GNNl,cluster. Since the matrix Sl is calculated based on node embeddings, nodes with similar features and local structure will have similar cluster assignment. As the final step, the pooling layer generates an adjacency matrix Ac for the cluster graph and a new embedding matrix containing cluster node representations Hl c as follows, Hl c = (Sl)T Hl ∈Rnc×dl Ac = (Sl)T ASl ∈Rnc×nc A GNN operation is used within the small cluster graph, Hl+1 c = GNNl(Ac, Hl c) ∈Rnc×dl+1 to further propagate messages from the neighboring clusters. The trainable parameters in GNNl are Θl ∈Rdl×dl+1. For passing clustering information to the original graph, the unpooling layer restores the original graph using cluster assignment matrix, as follows, ˜Hl+1 = SlHl+1 c ∈Rn×dl+1 The output of the pooling-unpooling layer results in the node representations possessing latent cluster information. Finally, we combine the neighborhood aggregation and semantic clustering aggregation strategies via a residual connection, as, Hl+1 = concate( ˜Hl+1, Hl) where concate means concatenate the two matrices. Hl+1 is the output of this pooling-unpooling step. Figure 2: An illustration of DAG generator. 3.3 DAG Generator The DAG generator takes in the noisy graph Ginput and representations of all the vocabulary terms (output of Section 3.2) as input, encodes acyclicity as 2202 a soft-constraint (as described below), and outputs a distribution of edges within Ginput that encodes the likelihood of true is-a relationships. This output distribution is finally used to induce taxonomies, i.e., DAGs of is-a relationships. In each training step, DAG generator is applied to one domain (see Figure 2), using a noisy graph G, which is a subgraph from Ginput, as a training sample and a DAG is generated for that domain. Here let Nt denote the number of (hypo, hyper) pairs belonging to the edge set of G. During the training, we also know label vector label ∈{0, 1}Nt for these Nt pairs, based on whether they belong to the gold known taxonomy. 3.3.1 Edge Prediction For each edge within the noisy graph G, our DAG generator estimates the probability that the edge represents a valid hypernymy relationship. Our model estimates this probability through the use of a convolution operation illustrated in Figure 2. For each edge (hypo, hyper), in the first step the term embeddings and edge features are concatenated as follows, vpair = concate(vhypo, vhyper, vfeas) where vhypo and vhyper are the embeddings for hypo and hyper nodes (from Section 3.2) and vfeas denotes a feature vector for the edge (hypo, hyper), which includes edge frequency and substring features. 
The substring features includes ends with, contains, prefix match, suffix match, length of longest common substring (LCS), length difference and a boolean feature denoting whether LCS in Vinput (the set of terms) or not. Inspired by ConvE model (Dettmers et al., 2018), a well known convolution based algorithm for link prediction, we apply a 1D convolution operation on vpair. We use a convolution operation since it increases the expressiveness of the DAG Generator through additional interaction between participating embeddings. For the convolution operation, we make use of C different kernels parameterized by {wc, 1 ≤c ≤ C}. The 1D convolution operation is then calculated as follows, vc = [Uc(vpair, 0), ..., Uc(vpair, dv −1)] (2) Uc(vpair, p) = K−1 X τ=0 ωc(τ)ˆvpair(p + τ)) (3) where K denotes the kernel width, dv denotes the size of vpair, p denotes the position to start the kernel operation and the kernel parameters ωc are trainable. In addition, ˆvpair denotes the padded version of vpair, wherein the padding strategy is as follows. If |K| is odd, we pad vpair with ⌊K/2⌋ zeros on both the sides. On the other hand, if |K| is even, we pad ⌊K/2⌋−1 zeros at the beginning, and ⌊K/2⌋zeros at the end of vpair. Here, ⌊value⌋ returns the floor of value. Each kernel c generates a vector vc, according to Equation 2. As there are C different kernels, this results in the generation of C different vectors which are then concatenated together to form one vector VC, i.e. VC = concatenate(v0, v1, . . . , vC). The probability p(hypo,hyper) of a given edge (hypo, hyper) expressing a hypernymy relationship can then be estimated using the following scoring function, p(hypo,hyper) = sigmoid(V T C W) (4) where W denotes the parameter matrix of a fully connected layer, as illustrated in Figure 2. Finally, for the loss calculations, we make use of differentiable F1 loss (Huang et al., 2015), Precision = PNt−1 t=0 pt × labelt PNt−1 t=0 pt Recall = PNt−1 t=0 pt × labelt PNt−1 t=0 labelt LF1 = 2 × Precision × Recall Precision + Recall 3.3.2 DAG Constraint The edge prediction step alone does not guarantee that the generated graph is acyclic. Learning DAG from data is an NP-hard problem (Chickering, 1995; Chickering et al., 2004). To this effect, one of the first works that formulate the acyclic structure learning task as a continuous optimization problem was introduced by Zheng et al. (2018). In that paper, the authors note that the trace of Bk denoted by tr(Bk), for a non-negative adjacency matrix B ∈Rn×n counts the number of length-k cycles in a directed graph. Hence, positive entries within the diagonal of Bk suggests the existence of cycles. Or, in other words, B has no cycle if and only if P∞ k=1 Pn i=1(Bk)ii = 0. However, calculating Bk for every value of k, i.e. repeated matrix exponentiation, is impractical and can easily exceed machine precision. To solve this 2203 problem, Zheng et al. (2018) makes use of Taylor Series expansion as eB = P∞ k=0 Bk k! , and show that a non-negative matrix B is a DAG iff, ∞ X k=1 n X i=1 (Bk)ii k! = tr(eB) −n = 0 To make sure this constraint is useful for an arbitrary weighted matrix with both positive and negative values, a Hadamard product B = A ◦A is used, which leads us to the following theorem. Theorem 1 (Zheng et al., 2018) A matrix A ∈ Rn×n is a DAG if and only if: tr(eA◦A) −n = 0 where tr represents the trace of a matrix, ◦represents the Hadamard product and eB equals matrix exponential of B. 
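Theorem 1 is easy to check numerically on toy graphs. The sketch below (an illustration using SciPy's matrix exponential, not a training-time implementation) returns zero, up to floating-point error, exactly when the weighted adjacency matrix encodes a DAG:

import numpy as np
from scipy.linalg import expm

def acyclicity_score(a):
    # tr(exp(A o A)) - n, where o is the elementwise (Hadamard) product.
    return np.trace(expm(a * a)) - a.shape[0]

dag = np.array([[0.0, 0.7, 0.0],
                [0.0, 0.0, 1.2],
                [0.0, 0.0, 0.0]])          # edges 1 -> 2 -> 3
cyc = dag.copy()
cyc[2, 0] = 0.5                            # adding 3 -> 1 creates a cycle
print(acyclicity_score(dag))               # ~0.0
print(acyclicity_score(cyc))               # strictly positive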
Since the matrix exponential may not be available in all deep learning frameworks, (Yu et al., 2019) propose an alternative constraint that is practically convenient as follows. Lemma 2 (Yu et al., 2019) Let α = c/m > 0 for some c. For any complex λ, since (1 + α|λ|)m ≤ ec|λ|, the DAG constraint from Theorem 1 can be relaxed and stated as follows, h(A) = tr  (I + αA ◦A)n −n = 0 where α is a hyper-parameter. Finally, using an augmented Lagrangian approach, we propose the combined loss function, L = LF1 + λh(A) + ρ 2h(A)2 where λ and ρ are the hyper-parameters. During the backpropagation, the gradients will be passed back to all domains through the intra-domain and crossdomain edges from Ginput to update all parameters. 4 Experiments We evaluate Graph2Taxo on Semeval-2016 Task 13: Taxonomy Extraction Evaluation3, otherwise known as TExEval-2 task (Bordea et al., 2016). All experiments are implemented in PyTorch. Code is publicly available at https://github.com/IBM/ gnn-taxo-construction. 3Semeval-2016 Task 13: http://alt.qcri.org/ semeval2016/task13 Domain Source V E Science WordNet 429 452 Eurovoc 125 124 Combined 453 465 Environment Eurovoc 261 261 Table 1: Dataset statistics for TExEval-2 task obtained from Bordea et al. (2016). The Vertices(V ) and Edges(E) columns represent structural measures of taxonomies for English language only. 4.1 Benchmark Datasets For experiments, we used the English environment and the science taxonomies within the TExEval-2 benchmark datasets. These datasets do not come with any training data, but a list of terms and the task is to build a meaningful taxonomy using these terms. The science domain terms come from Wordnet, Eurovoc and a manually constructed taxonomy (henceforth referred to as combined), whereas the terms for environment domain comes from Eurovoc taxonomy only. Table 1 shows the dataset statistics. We chose to evaluate our proposed approach on environment and science taxonomies only, because we wanted to compare our approach with the existing state-of-the-art system named TaxoRL (Mao et al., 2018) as well as with TAXI, the winning system in the TExEval-2 task. Note that we use the same datasets with TaxoRL (Mao et al., 2018) for TExEval-2 task. In addition, we used the dataset from Bansal et al. (2014) as gold taxonomies (i.e. sources of additional knowledge), Ggold = {Ggold,i, 1 ≤i ≤ Nknown} that are known apriori. This dataset is a set of medium-sized full-domain taxonomies consisting of bottom-out full subtrees sampled from Wordnet, and contains 761 taxonomies in total. To test our model for taxonomy prediction (and to remove overlap), we removed any taxonomy from Ggold which had term overlap with the set of provided terms for science and environment domains within TExEval-2 task. Because of this, we get 621 non-overlapping taxonomies in total, partitioned by 80-20 ratio to create training and validation datasets respectively. 4.2 Experimental Settings We ran our experiments in two different settings. 
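Before turning to the two settings, it may help to see the relaxed constraint of Lemma 2 and the combined objective written out as a differentiable penalty. The PyTorch sketch below is our reading rather than the released implementation: alpha is left as a hyper-parameter (the default here is arbitrary), the adjacency matrix is assumed to hold the predicted edge probabilities of the current domain subgraph, and the F1 term is subtracted from 1 so that the whole objective is minimized (the paper states only the F1 expression itself).

import torch

def dag_penalty(a, alpha):
    # h(A) = tr((I + alpha * A o A)^n) - n from Lemma 2.
    n = a.size(0)
    m = torch.eye(n, device=a.device) + alpha * a * a
    return torch.trace(torch.matrix_power(m, n)) - n

def soft_f1_loss(p, labels, eps=1e-8):
    # 1 - differentiable F1 over predicted edge probabilities p and 0/1 labels.
    tp = (p * labels).sum()
    precision = tp / (p.sum() + eps)
    recall = tp / (labels.sum() + eps)
    return 1.0 - 2.0 * precision * recall / (precision + recall + eps)

def combined_loss(p, labels, a, alpha=0.1, lam=1.0, rho=0.5):
    # L = L_F1 + lambda * h(A) + (rho / 2) * h(A)^2, with lambda and rho set to
    # the values reported in the hyper-parameter section.
    h = dag_penalty(a, alpha)
    return soft_f1_loss(p, labels) + lam * h + 0.5 * rho * h * h

With that sketch in place, we return to the two experimental settings.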
In each of them, we train on a different noisy input graph (and the same gold taxonomies as mentioned before), and evaluate on the science and environ2204 Science Science Science Science Environment (Combined) (Eurovoc) (WordNet) (Average) (Eurovoc) Model Pe Re Fe Pe Re Fe Pe Re Fe Pe Re Fe Pe Re Fe Baseline 0.63 0.29 0.39 0.62 0.21 0.31 0.69 0.27 0.38 0.65 0.26 0.36 0.50 0.21 0.30 JUNLP 0.14 0.31 0.19 0.13 0.36 0.19 0.21 0.31 0.25 0.16 0.33 0.21 0.13 0.23 0.17 USAAR 0.38 0.26 0.31 0.63 0.15 0.25 0.82 0.19 0.31 0.61 0.20 0.29 0.81 0.15 0.25 TAXI 0.39 0.35 0.37 0.30 0.33 0.31 0.37 0.38 0.38 0.35 0.35 0.35 0.34 0.27 0.30 TaxoRLA – – – – – – – – – 0.57 0.33 0.42 0.38 0.24 0.29 TaxoRLB – – – – – – – – – 0.38 0.38 0.38 0.32 0.32 0.32 Graph2Taxo1 0.91 0.31 0.46 0.78 0.26 0.39 0.82 0.32 0.46 0.84 0.30 0.44 0.89 0.24 0.37 Graph2Taxo2 0.90 0.33 0.48 0.79 0.33 0.46 0.77 0.32 0.46 0.82 0.33 0.47 0.67 0.28 0.39 Table 2: Results on TExEval-2 task: Taxonomy Extraction Evaluation (a.k.a TExEval-2). First four rows represent participating systems in the TExEval-2 task, whose performances are taken from Bordea et al. (2016). TaxoRLA/B illustrate the performance of a Reinforcement Learning system by Mao et al. (2018) under the Partial and Full setting respectively. Graph2Taxo1/2 represent our proposed algorithm under both the settings as described in Section 4.2. All results reported above are rounded to 2 decimal places. ment domains, within TExEval-2 task. In the first setting, we used the same input as TaxoRL (Mao et al., 2018) for a fair comparison. This input of TaxoRL consists of term pairs and associated dependency path information between them, which has been extracted from three public web-based corpora. For Graph2Taxo, we only make use of the term pairs to create a noisy input graph. In the second setting, we used data4 provided by TAXI (Panchenko et al., 2016), which comprises of a list of candidate is-a pairs extracted based on substrings and lexico-syntactic patterns. We used these noisy candidate pairs to create a noisy graph. A Graph2Taxo model is then trained on the noisy graph obtained in each of the two settings. In the test phase, all candidate term-pairs for which both terms are present in the test vocabulary are scored (between 0 and 1) by the trained Graph2Taxo model. A threshold of 0.5 is applied, and the candidate pairs scoring beyond this threshold are accumulated together as the predicted taxonomy Gpred. Notice that there are different optimal thresholds for different tasks. We get better performance if we tune the thresholds. However, we chose a harder task and proved our model has better performance than others even we simply use 0.5 as the threshold. In addition, We specify the hyper-parameter ranges for our experiments: learning rate {0.01, 0.005, 0.001}, number of kernels {5, 10, 20} and number of clusters {10, 30, 50, 100}. Finally, Adam optimizer (Kingma and Ba, 2015) is used for all experiments. Evaluation Metrics. Given a gold taxonomy 4Data is available at http://panchenko.me/data/ joint/taxi/res/resources.tgz Ggold (as part of the TExEval-2 benchmark dataset) and a predicted taxonomy Gpred (by our proposed Graph2Taxo approach), we evaluate Gpred using Edge Precision, Edge Recall and F-score measures as defined in Bordea et al. (2016). 4.3 Hyper-parameters We use the following hyper-parameter configuration for training the model. We set dropout to 0.3, number of kernels C to 10, kernel size K to 5, learning rate to 0.001 and initial embedding size to 300. 
For the loss function, we use the λ = 1.0 and ρ = 0.5. In addition, number of clusters nc is set to 50 for all our experiments. In the scenario wherein the input resource comes from TAXI, only hyponym-hypernym candidate pairs observed more than 10 times are used to create a noisy graph. Also, we use one pooling and one unpooling layer for our experiments. We use dropouts in two places, one at the end of the cross-domain encoder module, and the other after the Conv1D operation. Our models are trained using NVIDIA Tesla P100 GPUs. 4.4 Results and Discussions Table 2 shows the results on the TExEval-2 task Evaluation on science and environment domains. The first row represents a string-based baseline method (Bordea et al., 2016), that exploits term compositionality to hierarchically relate terms. For example, it extracts pairs such as (Statistics Department, Department) from the provided Wikipedia corpus, and utilizes aforementioned technique to construct taxonomy. The next three rows in Table 2, namely, TAXI, JUNLP and USAAR are some of the top perform2205 Science Science Science Environment (Combined) (Eurovoc) (WordNet) (Eurovoc) Model Pe Re Fe Pe Re Fe Pe Re Fe Pe Re Fe Graph2Taxo(2GNN+SC+Res) 0.90 0.33 0.48 0.79 0.33 0.46 0.77 0.32 0.46 0.67 0.28 0.39 Graph2Taxo(2GNN+Res) 0.92 0.32 0.48 0.83 0.29 0.43 0.80 0.31 0.45 0.73 0.26 0.38 Graph2Taxo(2GNN) 0.90 0.33 0.48 0.81 0.29 0.42 0.81 0.31 0.45 0.74 0.25 0.37 Graph2Taxo(NoConstraint) 0.92 0.32 0.48 0.81 0.28 0.41 0.83 0.31 0.45 0.76 0.25 0.37 Graph2Taxo(Without Feas) 0.82 0.33 0.47 0.73 0.27 0.39 0.70 0.33 0.45 0.61 0.23 0.33 Graph2Taxo(AddEmbeddings) 0.90 0.33 0.48 0.80 0.33 0.47 0.77 0.32 0.46 0.71 0.28 0.40 Table 3: Ablation tests reporting the Precision, Recall and F-score, across Science and Environment domains. The first block of values reports results by ablating each layer utilized within Graph2Taxo model. In the second block, we demonstrate that addition of constraint does in fact improve performance. In the third block, we illustrate that the importance of features vfeas for improving performance. The final block uses pretrained fastText embeddings to initialize our Graph2Taxo model, and then fine tunes based on our training data. All results reported above are rounded off to 2 decimal places. ing systems that participated in the TExEval-2 task. Furthermore, TaxoRLA,B illustrates the performance of a Reinforcement Learning system by under the Partial induction and Full induction settings respectively (Mao et al., 2018). Since Mao et al. (2018) has shown that it outperforms other methods such as Gupta et al. (2017); Bansal et al. (2014), we only compare the results of our proposed Graph2Taxo approach against the state-ofthe-art system TaxoRL. Finally, Graph2Taxo1 and Graph2Taxo2 depict the results of our proposed algorithm under both aforementioned settings, i.e. using the input resources of TaxoRL in the first scenario, and using the resources of TAXI in the second scenario. In each of these settings, we find that the overall precision of our proposed Graph2Taxo approach is far better than all the other existing approaches, demonstrating the strong ability of Graph2Taxo to find true relations. Meanwhile, the recall of our proposed Graph2Taxo approach is comparable to that of the existing state-of-the-art approaches. Combining the precision and recall metrics, we observe that Graph2Taxo outperforms existing state-of-the-art approaches on the F-score, by a significant margin. 
For example, for the Science (Average) domain, Graph2Taxo2 improves over TaxoRL’s F-score by 5%. For the Environment (Eurovoc) domain, our model improves TaxoRL’s F-score by 7% on the TExEval-2 task. Besides, our proposed model has high scalability. For example, the GNN method has been trained for a large graph, including about 1 million nodes (Kipf and Welling, 2017). Besides, the GNN part can be replaced by any improved GNN methods (Hamilton et al., 2017; Gao et al., 2018) designed for large-scale graphs. Ablation Tests. Table 3 shows the results of proposed Graph2Taxo in the second setting for the ablation experiments (divided into four blocks), which indicates the contribution of each layer used in our Graph2Taxo model. In Table 3, all the experiments are run three times, and the average values of the three runs are reported. Furthermore, in Figure 3, we randomly choose Science (Eurovoc) domain as the one to report the error-bars (corresponding to the standard-deviation values) for our experiments. 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 2GNN+SC+Res 2GNN+Res 2GNN No Constraint Without Feas Results on Science (Eurovoc) domain with Error Bars F1 Score Recall Precision Figure 3: Results on Science (Eurovoc) domain: The average Precision, Recall and F-score values and their standard error values. It is clear that addition of Residual Layer and SC Layer lowers the variance of the results. The first block of values in Table 3 illustrates results by ablating layers from within our Graph2Taxo model. Comparing the first two rows, it’s evident that adding a Semantic Cluster (SC) layer improves recall at the cost of precision, however improving the overall F-score. This improve2206 ment is clearly seen for the Science (Eurovoc) domain, wherein we have an increase of 3%. In the second block, we show that the addition of constraints improves performance. Row 4 represents a Graph2Taxo i.e. 2GNN+SC+Res setup, but without any constraint. Adding the DAG Constraint (Row 1) to this yields can get a better Fscore. Specifically, we observe a major increase of +5% F1 for the Science (Eurovoc) domain. In the third block, we remove the features vfeas as mentioned in section 3.3.1. The results, i.e. row 5 in Table 3 shows that these features are critical in improving the performance of our proposed system on both Science (Eurovoc) and Environment (Eurovoc) domains. Note that these features denoted as vfeas are not a novelty of our proposed method, but rather have been used by existing state-of-the-art approaches. Finally, we study the effect of initializing our model using pre-trained embeddings, rather than initializing at random. Specifically, we initialize the input matrix H0 of our Graph2Taxo model with pre-trained fastText5 embeddings. Our model using fastText embeddings improves upon Row 1 by a margin of 4% in precision values for the Environment (Eurovoc) domain, but unfortunately has no significant effect on the F-score. Hence, we have not used pre-trained embeddings in reporting the results in Table 2. We provide an illustration of the output of the Graph2Taxo model in Figure 4, for the Environment domain.The generated taxonomy in this example contains multiple trees, which serve the purpose of generating taxonomical classifications. As future work, we plan to figure out different strategies to connect the subtrees into a large graph for better DAG generation. Figure 4: A simple example of the taxonomy generated by Graph2Taxo in the environment domain. 
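For reference, the edge-level precision, recall and F-score reported in Tables 2 and 3 can be computed directly from predicted and gold edge sets. The sketch below is our rendering of the standard definitions, not the official TExEval-2 scorer:

def edge_prf(predicted_edges, gold_edges):
    # Both arguments are collections of (hyponym, hypernym) pairs.
    predicted, gold = set(predicted_edges), set(gold_edges)
    tp = len(predicted & gold)
    p = tp / len(predicted) if predicted else 0.0
    r = tp / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

print(edge_prf({("cat", "animal"), ("dog", "animal")},
               {("cat", "animal"), ("dog", "animal"), ("oak", "tree")}))
# (1.0, 0.666..., 0.8)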
5https://fasttext.cc 5 Conclusion We have introduced a GNN-based cross-domain knowledge transfer framework Graph2Taxo, which makes use of a cross-domain graph structure, in conjunction with an acyclicity constraint-based DAG learning for taxonomy construction. Furthermore, our proposed model encodes acyclicity as a soft constraint and shows that the overall model outperforms state of the art. In the future, we would like to figure out different strategies to merge individual gains, obtained by separate application of the DAG constraint, into a setup that can take the best of both precision and recall improvements, and put forth a better performing system. We also plan on looking into strategies to improve recall of the constructed taxonomy. Acknowledgments The authors would like to thank Dr. Jie Chen from MIT-IBM Watson AI Lab and Prof. Jinbo Bi from the University of Connecticut for in-depth discussions on model construction. References Daniele Alfarone and Jesse Davis. 2015. Unsupervised learning of an IS-A taxonomy from a limited domain-specific corpus. In Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence, IJCAI 2015, Buenos Aires, Argentina, July 25-31, 2015, pages 1434–1441. AAAI Press. Mohit Bansal, David Burkett, Gerard de Melo, and Dan Klein. 2014. Structured learning for taxonomy induction with belief propagation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1041–1051, Baltimore, Maryland. Association for Computational Linguistics. Marco Baroni, Raffaella Bernardi, Ngoc-Quynh Do, and Chung-chieh Shan. 2012. Entailment above the word level in distributional semantics. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 23–32, Avignon, France. Association for Computational Linguistics. Rianne van den Berg, Thomas N. Kipf, and Max Welling. 2017. Graph convolutional matrix completion. CoRR, abs/1706.02263. Georgeta Bordea, Els Lefever, and Paul Buitelaar. 2016. SemEval-2016 task 13: Taxonomy extraction evaluation (TExEval-2). In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 1081–1091, San Diego, 2207 California. Association for Computational Linguistics. Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. 2014. Spectral networks and locally connected networks on graphs. In 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings. David Maxwell Chickering. 1995. Learning bayesian networks is np-complete. In Learning from Data - Fifth International Workshop on Artificial Intelligence and Statistics, AISTATS 1995, Key West, Florida, USA, January, 1995. Proceedings, pages 121–130. Springer. David Maxwell Chickering, David Heckerman, and Christopher Meek. 2004. Large-sample learning of bayesian networks is np-hard. J. Mach. Learn. Res., 5:1287–1330. Sarthak Dash, Md Faisal Mahbub Chowdhury, Alfio Gliozzo, Nandana Mihindukulasooriya, and Nicolas Rodolfo Fauceglia. 2020. Hypernym detection using strict partial order networks. In The ThirtyFourth AAAI Conference on Artificial Intelligence, AAAI 2020, New York, USA, February 7 - February 12, 2020. AAAI Press. Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. 2018. Convolutional 2d knowledge graph embeddings. 
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7–18 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 7 Predicting Depression in Screening Interviews from Latent Categorization of Interview Prompts Alex Rinaldi Department of Computer Science UC Santa Cruz [email protected] Jean E. Fox Tree Department of Psychology UC Santa Cruz [email protected] Snigdha Chaturvedi Department of Computer Science University of North Carolina at Chapel Hill [email protected] Abstract Despite the pervasiveness of clinical depression in modern society, professional help remains highly stigmatized, inaccessible, and expensive. Accurately diagnosing depression is difficult– requiring time-intensive interviews, assessments, and analysis. Hence, automated methods that can assess linguistic patterns in these interviews could help psychiatric professionals make faster, more informed decisions about diagnosis. We propose JLPC, a method that analyzes interview transcripts to identify depression while jointly categorizing interview prompts into latent categories. This latent categorization allows the model to identify high-level conversational contexts that influence patterns of language in depressed individuals. We show that the proposed model not only outperforms competitive baselines, but that its latent prompt categories provide psycholinguistic insights about depression. 1 Introduction Depression is a dangerous disease that effects many. A 2017 study by Weinberger et al. (2018) finds that one in five US adults experienced depression symptoms in their lifetime. Weinberger et al. also identify depression as a significant risk factor for suicidal behavior. Unfortunately, professional help for depression is not only stigmatized, but also expensive, timeconsuming and inaccessible to a large population. Lakhan et al. (2010) explain that there are no laboratory tests for diagnosing psychiatric disorders; instead these disorders must be identified through screening interviews of potential patients that require time-intensive analysis by medical experts. This has motivated developing automated depression detection systems that can provide confidential, inexpensive and timely preliminary triaging that can help individuals in seeking help from medical experts. Such systems can help psychiatric professionals by analyzing interviewees for predictive behavioral indicators that could serve as additional evidence (DeVault et al., 2014). Language is a well-studied behavioral indicator for depression. Psycholinguistic studies by Segrin (1990), Rude et al. (2004), and Andreasen (1976) identify patterns of language in depressed individuals, such as focus on self and detachment from community. To capitalize on this source of information, recent work has proposed deep learning models that leverage linguistic features to identify depressed individuals (Mallol-Ragolta et al., 2019). Such deep learning models achieve high performance by uncovering complex, unobservable patterns in data at the cost of transparency. However, in the sensitive problem domain of diagnosing psychiatric disorders, a model should offer insight about its functionality in order for it to be useful as a clinical support tool. One way for a model to do this is utilizing the structure of the input (interview transcript) to identify patterns of conversational contexts that can help professionals in understanding how the model behaves in different contexts. 
A typical interview is structured as pairs of prompts and responses such that participant responses follow interviewer prompts (such as “How have you been feeling lately?”). Intuitively, each interviewer prompt serves as a context that informs how its response should be analyzed. For example, a short response like “yeah” could communicate agreement in response to a question such as “Are you happy you did that?”, but the same response could signal taciturnity or withdrawal (indicators of depression) in response to an encouraging prompt like “Nice!”. To enable such contextdependent analysis, the model should be able to group prompts based on the types of conversa8 tional context they provide. To accomplish this, we propose a neural Joint Latent Prompt Categorization (JLPC) model that infers latent prompt categories. Depending on a prompt’s category, the model has the flexibility to focus on different signals for depression in the corresponding response. This prompt categorization is learned jointly with the end task of depression prediction. Beyond improving prediction accuracy, the latent prompt categorization makes the proposed model more transparent and offers insight for expert analysis. To demonstrate this, we analyze learned prompt categories based on existing psycholinguistic research. We also test existing hypotheses about depressed language with respect to these prompt categories. This not only offers a window into the model’s working, but also can be used to design better clinical support tools that analyze linguistic cues in light of the interviewer prompt context. Our key contributions are: • We propose an end-to-end, data-driven model for predicting depression from interview transcripts that leverages the contextual information provided by interviewer prompts • Our model jointly learns latent categorizations of prompts to aid prediction • We conduct robust experiments to show that our model outperforms competitive baselines • We analyze the model’s behavior against existing psycholinguistic theory surrounding depressed language to demonstrate the interpretability of our model 2 Joint Latent Prompt Categorization We propose a Joint Latent Prompt Categorization (JLPC) model that jointly learns to predict depression from interview transcripts while grouping interview prompts into latent categories.1. The general problem of classifying interview text is defined as follows: let X denote the set of N interview transcripts. Each interview Xi is a sequence of j conversational turns consisting of interviewer’s prompts and participant’s responses: Xi = {(Pij, Rij) for j = {1...Mi}, where Mi is the number of turns in Xi, Pij is the jth prompt in the ith interview, and Rij is the participant’s re1Code and instructions for reproducing our results are available at https://github.com/alexwgr/ LatentPromptRelease sponse to that prompt. Together, (Pij, Rij) form the jth turn in ith interview. Each interview Xi is labeled with a ground-truth class Yi ∈{1, ..C}, where C is the number of possible labels. In our case, there are two possible labels: depressed or not depressed. Our model, shown in Figure 1, takes as input an interview Xi and outputs the predicted label ˆYi. Our approach assumes that prompts and responses are represented as embeddings Pij ∈RE and Rij ∈RE respectively. We hypothesize that prompts can be grouped into latent categories (K in number) such that corresponding responses will exhibit unique, useful patterns. 
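To make this input representation concrete, the following is a minimal sketch in PyTorch; the class and variable names are ours, E is the embedding dimensionality, and the zero-padding follows the P_im = 0^E convention used in Section 2.1 below.

```python
# A minimal sketch of the input representation defined above (names are ours).
# Each interview X_i is a sequence of M_i (prompt, response) turns; prompt and
# response embeddings live in R^E, and Y_i is a binary depression label.
from dataclasses import dataclass
import torch

E = 100  # embedding size, e.g. averaged 100-d GloVe vectors (Appendix A.1)

@dataclass
class Interview:
    prompts: torch.Tensor    # (M_i, E): P_i1 ... P_iM_i
    responses: torch.Tensor  # (M_i, E): R_i1 ... R_iM_i
    label: int               # Y_i in {0, 1}

def pad_batch(interviews, max_len):
    """Zero-pad interviews to a common length M so they can be batched."""
    P = torch.zeros(len(interviews), max_len, E)
    R = torch.zeros(len(interviews), max_len, E)
    y = torch.tensor([iv.label for iv in interviews])
    for i, iv in enumerate(interviews):
        m = iv.prompts.shape[0]
        P[i, :m] = iv.prompts
        R[i, :m] = iv.responses
    return P, R, y
```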
To perform a soft assignment of prompts to categories, for each prompt, our model computes a category membership vector hij = [h1 ij, · · · , hK ij ]. It represents the probability distribution for the jth prompt of the ith interview over each of K latent categories. hij is computed as a function φ of Pij and trainable parameters θCI (illustrated as the Category Inference layer in Figure 1): hij = φ(Pij, θCI) (1) Based on these category memberships for each prompt, the model then analyzes the corresponding responses so that unique patterns can be learned for each category. Specifically, we form K category-aware response aggregations. Each of these aggregations, ¯Rk i ∈RE, is a category-aware representation of all responses of the ith interview with respect to the kth category. ¯Rk i = 1 Zk i Mi X j=1 hk ij × Rij (2) Zk i = Mi X j=1 hk ij (3) where, hk ij is the kth scalar component of the latent category distribution vector hij and Zk i is a normalizer added to prevent varying signal strength, which interferes with training. We then compute the output class probability vector yi as a function ψ of the response aggregations [¯R1 i , · · · , ¯RK i ] and trainable parameters θD (illustrated as the Decision Layer in Figure 1). yi = ψ(¯R1 i , · · · , ¯RK i , θD) (4) The predicted label ˆYi is selected as the class with the highest probability based on yi. 9 Figure 1: The architecture of our JLPC model with K = 3. For each prompt Pij in interview i, the Category Inference layer computes a latent category membership vector, hij. These are used as weights to form K separate Category-Aware Response Aggregations, which in turn are used by the Decision Layer to predict the output. 2.1 The Category Inference Layer We compute the latent category membership for all prompts in interview i using a feed-forward layer with K outputs and softmax activation: φ(Pij, θCI) = σ(rowj(PiWCI + BCI)) (5) As shown in Equation 1, φ(Pij, θCI) produces the desired category membership vector hij over latent categories for the jth prompt of the ith interview. Pi ∈RM×E is defined as [Pi1, · · · , PiM]T , where M is the maximum conversation length in Xi and Pim = 0E for all Mi < m ≤M. PiWCI + BCI computes a matrix where row j is a vector of energies for the latent category distribution for prompt j, and σ denotes the softmax function. WCI ∈RE×K and BCI ∈RK are the trainable parameters for this layer: θCI = {WCI, BCI}. 2.2 The Decision Layer The Decision Layer models the probabilities for each output class (depressed and not-depressed) using a feed-forward layer over the concatenation ¯Ri of response aggregations [¯R1 i , · · · , ¯RK i ]. This allows each response aggregation ¯Rk i to contribute to the final classification through a separate set of trainable parameters. ψ(¯R1 i , · · · , ¯RK i , θD) = σ(¯RT i WD + BD) (6) As shown in Equation 4, ψ(¯R1 i , · · · , ¯RK i , θD) produces the output class probability vector yi. WD ∈R(E∗K)×C and BD ∈RC are the trainable parameters for the decision layer: θD = {WD, BD}. We then compute the cross entropy loss L(Y, ˆY ) between ground truth labels and yi. 2.3 Entropy regularization The model’s learning goal as described above only allows the output prediction error to guide the separation of prompts into useful categories. However, in order to encourage the model to learn distinct categories, we employ entropy regularization (Grandvalet and Bengio, 2005) by penalizing overlap in the latent category distributions for prompts. 
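Before the entropy penalty is made precise, the sketch below pulls Equations 1–6 together into a single forward pass for one interview. It is a PyTorch-style sketch with our own module names, not the authors' released implementation, and it returns logits so that the softmax of Equation 6 is applied at the call site.

```python
# Sketch of the JLPC forward pass (Equations 1-6) for a single interview.
import torch
import torch.nn as nn
import torch.nn.functional as F

class JLPC(nn.Module):
    def __init__(self, emb_dim, n_categories, n_classes=2):
        super().__init__()
        self.category_inference = nn.Linear(emb_dim, n_categories)    # W_CI, B_CI
        self.decision = nn.Linear(emb_dim * n_categories, n_classes)  # W_D, B_D

    def forward(self, P, R):
        # P, R: (M_i, E) prompt / response embeddings of one interview.
        h = F.softmax(self.category_inference(P), dim=-1)  # (M_i, K), Eqs. 1 and 5
        Z = h.sum(dim=0)                                   # (K,), Eq. 3
        # Category-aware aggregations: R_bar[k] = sum_j h[j, k] * R[j] / Z[k]
        R_bar = (h.t() @ R) / Z.unsqueeze(-1)              # (K, E), Eq. 2
        logits = self.decision(R_bar.reshape(1, -1))       # (1, C), Eqs. 4 and 6
        return logits, h

model = JLPC(emb_dim=100, n_categories=11)
logits, h = model(torch.randn(7, 100), torch.randn(7, 100))
y_prob = F.softmax(logits, dim=-1)  # class probability vector y_i
```

The entropy regularizer introduced above operates on the rows of `h`.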
That is, we compute the following entropy term using components of the category membership vector hij from Equation 1: E(Xi) = 1 ui N X i=1 Mi X j=1 Ej(Xi) (7) where, Ej(Xi) = − K X k=1 hk ij ln hk ij (8) ui = N X i=1 Mi (9) Finally, the model’s overall learning goal minimizes entropy regularized cross entropy loss: arg min θ L(Y, ˆY ) + λE(Xi) 10 where, λ is a hyper-parameter that controls the strength of the entropy regularization term. 2.4 Leveraging Prompt Representations in the Decision Layer While prompt representations are used to compute latent category assignments, the model described so far (JLPC) cannot directly leverage prompt features in the final classification. To provide this capability, we define two additional model variants with pre-aggregation and post-aggregation prompt injection: JLPCPre and JLPCPost, respectively. JLPCPre is similar to the JLPC model, except that it aggregates both prompt and response representations based on prompt categories. In other words, the aggregated response representation, ¯Rk i in Equation 2, is computed as: ¯Rk i = 1 Zk i Mi X j=1 hk ij[ Pij, Rij ] JLPCPost is also similar to JLPC except that it includes the average of prompt representations as additional input to the decision layer. That is, Equation 6 is modified to the following: ψ(¯R1 i , · · · , ¯RK i , θD) = σ([¯Pi, ¯Ri]T WD + BD) (10) ¯Pi is the uniformly-weighted average of prompt representations in Xi. 3 Dataset We evaluate our model on the Distress Analysis Interview Corpus (DAIC) (Gratch et al., 2014). DAIC consists of text transcripts of interviews designed to emulate a clinical assessment for depression. The interviews are conducted between human participants and a human-controlled digital avatar. Each interview is labeled with a binary depression rating based on a score threshold for the 9th revision of the Patient Health Questionnaire (PHQ-9). In total, there are 170 interviews, with 49 participants identified as depressed. To achieve stable and robust results given the small size of the DAIC dataset, we report performance over 10 separate splits of the dataset into training, validation, and test sets. For each split, 70% is used as training data, and 20% of the training data is set aside as validation data. 3.1 Preprocessing and Representation DAIC interview transcripts are split into utterances based on pauses in speech and speaker change, so we concatenate adjacent utterances by the same speaker to achieve a prompt-response structure. We experiment with two types of continuous representations for prompts and responses: averaged word embeddings from the pretrained GloVe model (Pennington et al., 2014), and sentence embeddings from the pretrained BERT model (Devlin et al., 2019). Further details are given in Appendix A.1. Reported results use GloVe embeddings because they led to better validation scores. 3.2 Exclusion of Predictive Prompts Our preliminary experiments showed that it is possible to achieve better-than-random performance on the depression identification task using only the set of prompts (excluding the responses). This is possibly because the interviewer identified some individuals as potentially depressed during the interview, resulting in predictive follow-up prompts (for example, “How long ago were you diagnosed?”). To address this, we iteratively remove predictive prompts until the development performance using prompts alone is not significantly better than random (see Appendix A.3). 
This is to ensure our experiments evaluate the content of prompts and responses rather than fitting to any bias in question selection by the DAIC corpus interviewers, and so are generalizable to other interview scenarios, including future fully-automated ones. 4 Experiments We now describe our experiments and analysis. 4.1 Baselines Our experiments use the following baselines: • The RO baseline only has access to responses. It applies a dense layer to the average of response representations for an interview. • The PO baseline only has access to prompts, following the same architecture as RO. • The PR baseline has access to both prompts and responses. It applies a dense layer to the average of prompt and response concatenations. 11 Model F1 depressed F1 not depr. Random 0.303 (0.081) 0.690 (0.044) PO 0.246 (0.080) 0.784 (0.032) RO 0.309 (0.121) 0.798 (0.031) PR 0.324 (0.121) 0.787 (0.030) BERT 0.362 (0.080) 0.780 (0.062) JLPC 0.416 (0.110) 0.761 (0.057) JLPCPre 0.358 (0.121) 0.776 (0.037) JLPCPost 0.440 (0.080) 0.768 (0.078) Table 1: Mean F1 scores for the positive (depressed) and negative (not depressed) across the 10 test sets. Standard deviation is reported in parentheses. Two of the proposed models, JLPC and JLPCPost, improve over baselines including the BERT fine-tuned model (Devlin et al., 2019), with the JLPCPost achieving a statistically significant improvement (p < 0.05). • BERT refers to the BERT model (Devlin et al., 2019) fine-tuned on our dataset (see Appendix A.2). 4.2 Training details All models are trained using the Adam optimizer. We use mean validation performance to select hyper-parameter values: number of epochs = 1300, learning rate = 5 × 10−4, number of prompt categories K = 11 and entropy regularization strength λ = 0.1. 4.3 Quantitative Results We computed the F1 scores of the positive (depressed) and negative (not-depressed) classes averaged over the 10 test sets. Given the class imbalance in the DAIC dataset, we compare models using F1 score for the depressed class. As an additional baseline, we also implemented methods from Mallol-Ragolta et al. (2019) but do not report their performance since their model performs very poorly (close to random) when we consider averaged performance over 10 test sets. This is likely because of the large number of parameters required by the hierarchical attention model. Table 1 summarizes our results. The belowrandom performance of the PO baseline is expected, since the prompts indicative of depression were removed as described in Section 3.2. This indicates the remaining prompts, by themselves, are not sufficient to accurately classify interviews. The RO model performs better, indicating the response information is more useful. The PR baseline improves over the RO baseline indicating that Figure 2: Ablation study on validation set demonstrating the importance of prompt categorization and entropy regularization for our model. the combination of prompt and response information is informative. The BERT model, which also has access to prompts and responses, shows a reasonable improvement over all baselines. JLPC and JLPCPost outperform the baselines, with JLPCPost achieving a statistically significant improvement over both the PR and BERT baselines (p < 0.05).2 This indicates the utility of our prompt-category aware analysis of the interviews. 4.4 Ablation study We analyzed how the prompt categorization and entropy regularization contribute to our model’s validation performance. 
The contributions of each component are visualized in Figure 2. Our analysis shows that while both components are important, latent prompt categorization yields the highest contribution to the model’s performance. 4.5 Analyzing Prompt Categories Beyond improving classification performance, the latent categorization of prompts yields insight about conversational contexts relevant for analyzing language patterns in depressed individuals. To explore the learned categories, we isolate interviews from the complete corpus that are correctly labeled by our best-performing model. We say that the model “assigns” an interview prompt to a given category if the prompt’s membership for that category (Equation 1) is stronger than for other categories. We now describe the various prompts assigned to different categories.3 Firstly, all prompts that are questions like “Tell me more about that”, “When was the last time you had an argument?”, etc. are grouped together into 2Statistical significance is calculated from the test prediction using two-sided T-test for independent samples of scores 3To verify consistency of prompt categorization, we rerun the model with multiple initialization and they all yielded the same general trends as described in the paper. 12 a single category, which we refer to as the Starters category. Previous work has identified usefulness of such questions as conversation starters since they assist in creating a sense of closeness (Mcallister et al., 2004; Heritage and Robinson, 2006). Secondly, there are several categories reserved exclusively for certain backchannels. Backchannels are short utterances that punctuate longer turns by another conversational participant (Yngve, 1970; Goodwin, 1986; Bavelas et al., 2000). Specifically, the model assigns the backchannels “mhm,” “mm,” “nice,” and “awesome” each to separate categories. Research shows that it is indeed useful to consider the effects different types of backchannels separately. For example, Bavelas et al. (2000) propose a distinction between specific backchannels (such as “nice” and “awesome”) and generic backchannels (such as “mm” and “mhm”), and Tolins and Fox Tree (2014) demonstrated that each backchannel type serves a different purpose in conversation. Thirdly, apart from starters and backchannels, the model isolates one specific prompt - “Have you been diagnosed with depression?”4 into a separate category. Clearly, this is an important prompt and it is encouraging to see that the model isolates it as useful. Interestingly, the model assigns the backchannel “aw” to the same category as “Have you been diagnosed with depression?” suggesting that responses to both prompts yield similar signals for depression. Lastly, the remaining five categories are empty - no prompt in the corpus has maximum salience with any of them. A likely explanation for this observation stems from the choice of normalizing factor Zk i in Equation 3: it causes ¯Rk i to regress to the unweighted average of response embeddings when all prompts in an interview have low salience with category k. Repeated empty categories then function as an “ensemble model” for the average response embeddings, potentially improving predictive performance. 
4.6 Category-based Analysis of Responses The prompt categories inferred by our JLPCPost model enable us to take a data-driven approach to investigating the following category-specific psycholinguistic hypotheses about depression: 4Note that this prompt was not removed in Section 3.2 since by itself, the prompt’s presence is not predictive of depression (without considering the response). Starters Backchannels D ND D ND RL 23.2 27.2 19.9 15.1 DMF (×10−2) 6.55 7.31 7.98 8.55 Table 2: Indicators for social skills: mean response length (RL) and discourse marker/filler rates (DMF) for responses to prompts in starters and backchannel (collectively representing “mhm”, “mm”, “nice”, and “awesome”) categories, for depressed (D) and notdepressed (ND) participants. Statistically significant differences are underlined (p < 0.05). Both measures are significantly lower for the depressed class for responses to starters, but not to backchannels. H1 Depression correlates with social skill deficits (Segrin, 1990) H2 Depressed language is vague and qualified (Andreasen, 1976) H3 Depressed language is self-focused and detached from community (Rude et al., 2004) For hypothesis H1, we evaluate measures of social skill in responses to different categories of prompts. While research in psychology uses several visual, linguistic and paralinguistic indicators of social skills, in this paper we focus on two indicators that are measurable in our data: average response length in tokens and the rate of spoken-language fillers and discourse markers usage.5 The first measure - response length - can be seen as a basic measure of taciturnity. The second measure - usage of fillers and discourse markers - can be used as proxy for conversational skills, since speakers use these terms to manage conversations (Fox Tree, 2010). Christenfeld (1995) and Lake et al. (2011) also find that discourse marker usage correlates with social skill. Following is the list of fillers and discourse markers: “um”, “uh”, “you know”, “well”, “oh”, “so”, “I mean”, and “like”. Table 2 shows the values of these measures for social skill for responses to backchannels and starters categories. We found that both measures were significantly lower for responses to starters-category prompts for depressed participants as opposed to not-depressed participants (p < 0.05). However, the measures showed no significant difference between depressed and notdepressed individuals for responses to categories 5We compute this measure as the ratio of discourse marker and filler occurrences to number of tokens, averaged over responses. 13 representing backchannels (“mhm,” “mm,” “awesome,” and “nice”). Note that a conversation usually begins with prompts from the starters category and thereafter backchannels are used to encourage the speaker to continue speaking (Goodwin, 1986). Given this, our results suggest that depressed individuals in the given population indeed initially demonstrate poorer social skills than notdepressed individuals, but the effect levels off as the interviewer encourages them to keep speaking using backchannels. Given this, our results suggest that depressed individuals in the given population indeed initially demonstrate poorer social skills than not depressed individuals, but the effect stops being visible as the conversation continues, either because the depressed individuals become more comfortable talking or because the interviewers’ encouragement through backchannels elicits more contributions. 
Hypotheses H2 and H3 - regarding qualified language and self-focus, respectively - involve semantic qualities of depressed language. To explore these hypotheses, we use a reverse engineering approach to determine salient words for depression in responses to each prompt category. We describe this reverse engineering approach as follows: since the aggregated representation of an individual’s responses in a category (¯Rk i computed in Equation 2) resides in the same vector space as individual word embeddings, we can identify words in our corpus that produce the strongest (positive) signal for depression in various categories. 6 We refer to these as signal words. Signal words are ranked not by their frequency in the dataset, but by their predictive potential the strength of association between the word’s semantic representation and a given category. We evaluate hypotheses H2 and H3 by observing semantic similarities between these signal words and the language themes identified by the hypotheses. Selections from the top 10 signal words for depression associated with categories corresponding to starters, specific backchannels, and generic backchannels are shown in Figure 3. Figure 3 shows hypothesis H2 is supported by 6A word’s signal strength is computed for a given category k by taking the dot product of the word’s embedding with the weights in the decision layer corresponding to category k. Large positive numbers correspond to positive predictions and vice versa. Since the Decision Layer is a dot product with all response aggregations, it is intuitive to compute prediction strength for a group of categories by adding together prediction strengths from individual groups. Figure 3: Signal words associated with language in depressed individuals. Columns represent various types of prompts (Starters, Generic Backchannels and Specific Backchannels). The bottom half shows ranked lists of signal words from the responses. Blue words are strongly indicative and red words are least indicative of depression. signal words in responses to generic backchannels; words such as “theoretical” and “plausible” constitute qualified language, and in the context of generic backchannels, the proposed model identifies them as predictive of depression. Similarly, hypothesis H3 is also supported in responses to generic backchannels. The model identifies words related to community (“kids,” “neighborhood,” “we”) as strong negative signals for depression, supporting that depressed language reflects detachment from community. However, the model only focuses on these semantic themes in responses to generic backchannel categories. As we found in our evaluation of hypothesis H1, the model localizes cues for depression to specific contexts. Signal words for depression in responses to the starters category are more reflective of our findings for hypothesis H1: the model focuses on short, low-semantic-content words that could indicate social skill deficit. For example, Figure 3 shows we identified “wow” as a signal word for the starters category. In one example from the corpus, a depressed participant uses “wow” to express uncomfortability with an emotional question: the interviewer asks, “Tell me about the last time you were really happy,” and the interviewee responds, “wow (laughter) um.” For responses to specific backchannels, strong signal words reflect themes of goals and desires 14 (“wished,” “mission,” “accomplished”). 
Psychologists have observed a correlation between depression and goal commitment and pursuit (Vergara and Roberts, 2011; Klossek, 2015), and our finding indicates that depressed individuals discuss goal-related themes as response to specific backchannels. Overall, our model’s design not only helps in reducing its opacity but also informs psycholinguistic analysis, making it more useful as part of an informed decision-making process. Our analysis indicates that even though research has shown strong correlation between depression and various interpersonal factors such as social skills, self-focus and usage of qualified language, clinical support tools should focus on these factors in light of conversational cues. 4.7 Sources of Error In this section, we analyze major sources of error. We apply a similar reverse engineering method as in Section 4.6. For prompts in each category, we consider corresponding responses that result in strong incorrect signals (false positive or false negative) based on the category’s weights in the decision layer. We focus on the categories with the most significance presence in the dataset: the categories corresponding to starters, the “mhm” backchannel, and the prompt “Have you been diagnosed with depression?”. For the starters category, false positive-signal responses tend to contain a high presence of fillers and discourse markers (“uh,” “huh,” “post mm traumatic stress uh no uh uh,” “hmm”). It is possible that because the model learned to focus on short, low-semantic-content responses, it incorrectly correlates presence of fillers and discourse markers with depression. For the “mhm” category, we identified several false negatives, in which the responses included concrete words like “uh nice environment”, “I love the landscape”, and “I love the waters”. Since the “mhm” category focuses on vague, qualified language to predict depression (see Figure 3), the presence of concrete words in these responses could have misled the model. For the “Have you been diagnosed with depression?” category, the misclassified interviews contained short responses to this prompt like “so,” “never,” “yes,” “yeah,” and “no,” as well as statements containing the word “depression.” For this category, the model seems to incorrectly correlate short responses and direct mentions of depression with the depressed class. 5 Related Work Much work exists at the intersection of natural language processing (NLP), psycholinguistics, and clinical psychology. For example, exploring correlations between counselor-patient interaction dynamics and counseling outcomes (Althoff et al., 2016); studying linguistic development of mental healthcare counsellors (Zhang et al., 2019); identifying differences in how people disclose mental illnesses across gender and culture (De Choudhury et al., 2017); predicting a variety of mental health conditions from social media posts (Sekulic and Strube, 2019; De Choudhury et al., 2013a; Guntuku et al., 2019; Coppersmith et al., 2014); and analyzing well-being (Smith et al., 2016) and distress (Buechel et al., 2018). Specifically, many researchers have used NLP methods for identifying depression (Morales et al., 2017). They focus on for predicting depression from Twitter posts (Resnik et al., 2015; De Choudhury et al., 2013b; Jamil et al., 2017), Facebook updates (Schwartz et al., 2014), student essays (Resnik et al., 2013), etc. 
Previous works have also focused on predicting depression severity from screening interview data (Yang et al., 2016; Sun et al., 2017; Pampouchidou et al., 2016). Unlike ours, these approaches rely on audio, visual, and text input. More recent approaches are based on deep learning. Yang et al. (2017) propose a CNNbased model leveraging jointly trained paragraph vectorizations, Al Hanai et al. (2018) propose an LSTM-based model fusing audio features with Doc2Vec representations of response text, Makiuchi et al. (2019) combine LSTM and CNN components, and Mallol-Ragolta et al. (2019) propose a model that uses a hierarchical attention mechanism. However, these approaches are more opaque and difficult to interpret. Other approaches are similar to ours in the sense that they utilize the structure provided by interview prompts. Al Hanai et al. (2018) and Gong and Poellabauer (2017) propose models that extract separate sets of features for responses to each unique prompt in their corpus. However, these approaches require manually identifying unique prompts. Our model can instead automatically learn new, task-specific categorization of prompts. 15 Lubis et al. (2018) perform a K-means clustering of prompt to assign prompts to latent dialogue act categories. These are used as features in a neural dialogue system. Our approach expands upon this idea of incorporating a separate unsupervised clustering step by allowing the learning goal to influence the clustering. Our approach is also related to that of Chaturvedi et al. (2014) in that it automatically categorizes various parts of the conversation. However, they use domain-specific handcrafted features and discrete latent variables for this categorization. Our approach instead can leverage the neural architecture to automatically identify features useful for this categorization. To the best of our knowledge, our approach is the first deep learning approach that jointly categorizes prompts to learn context-dependent patterns in responses. 6 Conclusion This paper addressed the problem of identifying depression from interview transcripts. The proposed model analyzes the participant’s responses in light of various categories of prompts provided by the interviewer. The model jointly learns these prompt categories while identifying depression. We show that the model outperforms competitive baselines and we use the prompt categorization to investigate various psycholinguistic hypotheses. Depression prediction is a difficult task which requires especially trained experts to conduct interviews and do their detailed analysis (Lakhan et al., 2010). While the absolute performance of our model is low for immediate practical deployment, it improves upon existing methods and at the same time, unlike modern methods, provides insight about the model’s workflow. For example, our findings show how language of depressed individuals changes when interviewers use backchannels to encourage continued speech. We hope that this combination will encourage the research community to make more progress in this direction. Future work can further investigate temporal patterns in how language used by depressed people evolves over the course of an interaction. References Tuka Al Hanai, Mohammad Ghassemi, and James Glass. 2018. Detecting Depression with Audio/Text Sequence Modeling of Interviews. In Interspeech 2018, 19th Annual Conference of the International Speech Communication Association, Hyderabad, India, 2-6 September 2018, pages 1716–1720. 
Tim Althoff, Kevin Clark, and Jure Leskovec. 2016. Large-scale Analysis of Counseling Conversations: An Application of Natural Language Processing to Mental Health. Transactions of the Association for Computational Linguistics, 4:463–476. Nancy J. C. Andreasen. 1976. Linguistic Analysis of Speech in Affective Disorders. Archives of General Psychiatry, 33(11):1361. Janet B. Bavelas, Linda Coates, and Trudy Johnson. 2000. Listeners as co-narrators. Journal of Personality and Social Psychology, 79(6):941–952. Sven Buechel, Anneke Buffone, Barry Slaff, Lyle Ungar, and Jo˜ao Sedoc. 2018. Modeling Empathy and Distress in Reaction to News Stories. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 4758–4765. Snigdha Chaturvedi, Dan Goldwasser, and Hal Daum´e III. 2014. Predicting instructor’s intervention in MOOC forums. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1501–1511, Baltimore, Maryland. Association for Computational Linguistics. Nicholas Christenfeld. 1995. Does it hurt to say um? Journal of Nonverbal Behavior, 19:171–186. Glen Coppersmith, Mark Dredze, and Craig Harman. 2014. Quantifying Mental Health Signals in Twitter. In Proceedings of the Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal to Clinical Reality, pages 51–60, Baltimore, Maryland, USA. Association for Computational Linguistics. Munmun De Choudhury, Scott Counts, and Eric Horvitz. 2013a. Predicting postpartum changes in emotion and behavior via social media. In 2013 ACM SIGCHI Conference on Human Factors in Computing Systems, CHI ’13, Paris, France, April 27 - May 2, 2013, pages 3267–3276. Munmun De Choudhury, Michael Gamon, Scott Counts, and Eric Horvitz. 2013b. Predicting Depression via Social Media. In Proceedings of the Seventh International Conference on Weblogs and Social Media, ICWSM 2013, Cambridge, Massachusetts, USA, July 8-11, 2013. Munmun De Choudhury, Sanket S. Sharma, Tomaz Logar, Wouter Eekhout, and Ren´e Clausen Nielsen. 2017. Gender and Cross-Cultural Differences in Social Media Disclosures of Mental Illness. In Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing - CSCW ’17, pages 353–369, Portland, Oregon, USA. ACM Press. 16 David DeVault, Ron Artstein, Grace Benn, Teresa Dey, Edward Fast, Alesia Gainer, Kallirroi Georgila, Jonathan Gratch, Arno Hartholt, Margaux Lhommet, Gale M. Lucas, Stacy Marsella, Fabrizio Morbini, Angela Nazarian, Stefan Scherer, Giota Stratou, Apar Suri, David R. Traum, Rachel Wood, Yuyu Xu, Albert A. Rizzo, and Louis-Philippe Morency. 2014. SimSensei Kiosk: A Virtual Human Interviewer for Healthcare Decision Support. In International conference on Autonomous Agents and Multi-Agent Systems, AAMAS ’14, Paris, France, May 5-9, 2014, pages 1061–1068. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171–4186. Jean E. Fox Tree. 2010. Discourse Markers across Speakers and Settings. Language and Linguistics Compass, 4(5):269–281. Yuan Gong and Christian Poellabauer. 2017. 
Topic Modeling Based Multi-modal Depression Detection. In Proceedings of the 7th Annual Workshop on Audio/Visual Emotion Challenge - AVEC ’17, pages 69–76, Mountain View, California, USA. ACM Press. Charles Goodwin. 1986. Between and within: Alternative sequential treatments of continuers and assessments. Human Studies, 9(2-3):205–217. Yves Grandvalet and Yoshua Bengio. 2005. Semisupervised Learning by Entropy Minimization. page 8. Jonathan Gratch, Ron Artstein, Gale Lucas, Giota Stratou, Stefan Scherer, Angela Nazarian, Rachel Wood, Jill Boberg, David DeVault, Stacy Marsella, David Traum, Skip Rizzo, and Louis-Philippe Morency. 2014. The Distress Analysis Interview Corpus of human and computer interviews. In Proceedings of the Ninth International Conference on Language Resources and Evaluation, LREC 2014, Reykjavik, Iceland, May 26-31, 2014, pages 3123–3128. Sharath Chandra Guntuku, Daniel Preotiuc-Pietro, Johannes C. Eichstaedt, and Lyle H. Ungar. 2019. What Twitter Profile and Posted Images Reveal about Depression and Anxiety. In Proceedings of the Thirteenth International Conference on Web and Social Media, ICWSM 2019, Munich, Germany, June 11-14, 2019, pages 236–246. John Heritage and Jeffrey Robinson. 2006. The Structure of Patients’ Presenting Concerns: Physicians’ Opening Questions. Health communication, 19:89– 102. Zunaira Jamil, Diana Inkpen, Prasadith Buddhitha, and Kenton White. 2017. Monitoring Tweets for Depression to Detect At-risk Users. In Proceedings of the Fourth Workshop on Computational Linguistics and Clinical Psychology — From Linguistic Signal to Clinical Reality, pages 32–40, Vancouver, BC. Association for Computational Linguistics. Ulrike Klossek. 2015. The Role of Goals and Goal Orientation as Predisposing Factors for Depression. Ph.D. thesis, University of Exeter. Johanna K. Lake, Karin R. Humphreys, and Shannon Cardy. 2011. Listener vs. speaker-oriented aspects of speech: Studying the disfluencies of individuals with autism spectrum disorders. Psychonomic Bulletin & Review, 18(1):135–140. Shaheen E Lakhan, Karen Vieira, and Elissa Hamlat. 2010. Biomarkers in psychiatry: drawbacks and potential for misuse. International Archives of Medicine, 3(1):1. Nurul Lubis, Sakriani Sakti, Koichiro Yoshino, and Satoshi Nakamura. 2018. Unsupervised Counselor Dialogue Clustering for Positive Emotion Elicitation in Neural Dialogue System. In Proceedings of the 19th Annual SIGdial Meeting on Discourse and Dialogue, pages 161–170, Melbourne, Australia. Association for Computational Linguistics. Mariana Rodrigues Makiuchi, Tifani Warnita, Kuniaki Uto, and Koichi Shinoda. 2019. Multimodal Fusion of BERT-CNN and Gated CNN Representations for Depression Detection. In Proceedings of the 9th International on Audio/Visual Emotion Challenge and Workshop - AVEC ’19, pages 55–63, Nice, France. ACM Press. Adria Mallol-Ragolta, Ziping Zhao, Lukas Stappen, Nicholas Cummins, and Bj¨orn W. Schuller. 2019. A Hierarchical Attention Network-Based Approach for Depression Detection from Transcribed Clinical Interviews. In Interspeech 2019, pages 221–225. ISCA. Margaret Mcallister, Beth Matarasso, Barbara Dixon, and C Shepperd. 2004. Conversation starters: reexamining and reconstructing first encounters within the therapeutic relationship. Journal of Psychiatric and Mental Health Nursing, 11. Michelle Morales, Stefan Scherer, and Rivka Levitan. 2017. A Cross-modal Review of Indicators for Depression Detection Systems. 
In Proceedings of the Fourth Workshop on Computational Linguistics and Clinical Psychology — From Linguistic Signal to Clinical Reality, pages 1–12, Vancouver, BC. Association for Computational Linguistics. Anastasia Pampouchidou, Kostas Marias, Fan Yang, Manolis Tsiknakis, Olympia Simantiraki, Amir Fazlollahi, Matthew Pediaditis, Dimitris Manousos, Alexandros Roniotis, Georgios Giannakakis, Fabrice Meriaudeau, and Panagiotis Simos. 2016. Depression Assessment by Fusing High and Low Level 17 Features from Audio, Video, and Text. In Proceedings of the 6th International Workshop on Audio/Visual Emotion Challenge - AVEC ’16, pages 27–34, Amsterdam, The Netherlands. ACM Press. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global Vectors for Word Representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1532–1543. Philip Resnik, William Armstrong, Leonardo Claudino, Thang Nguyen, Viet-An Nguyen, and Jordan Boyd-Graber. 2015. Beyond LDA: Exploring Supervised Topic Modeling for Depression-Related Language in Twitter. In Proceedings of the 2nd Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal to Clinical Reality, pages 99–107, Denver, Colorado. Association for Computational Linguistics. Philip Resnik, Anderson Garron, and Rebecca Resnik. 2013. Using Topic Modeling to Improve Prediction of Neuroticism and Depression in College Students. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1348–1353, Seattle, Washington, USA. Association for Computational Linguistics. Stephanie Rude, Eva-Maria Gortner, and James Pennebaker. 2004. Language use of depressed and depression-vulnerable college students. Cognition & Emotion, 18(8):1121–1133. H. Andrew Schwartz, Johannes Eichstaedt, Margaret L. Kern, Gregory Park, Maarten Sap, David Stillwell, Michal Kosinski, and Lyle Ungar. 2014. Towards Assessing Changes in Degree of Depression through Facebook. In Proceedings of the Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal to Clinical Reality, pages 118–125, Baltimore, Maryland, USA. Association for Computational Linguistics. Chris Segrin. 1990. A meta-analytic review of social skill deficits in depression. Communication Monographs, 57(4):292–308. Ivan Sekulic and Michael Strube. 2019. Adapting Deep Learning Methods for Mental Health Prediction on Social Media. In Proceedings of the 5th Workshop on Noisy User-generated Text (W-NUT 2019), pages 322–327, Hong Kong, China. Association for Computational Linguistics. Laura Smith, Salvatore Giorgi, Rishi Solanki, Johannes Eichstaedt, H. Andrew Schwartz, Muhammad Abdul-Mageed, Anneke Buffone, and Lyle Ungar. 2016. Does ‘well-being’ translate on Twitter? In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2042–2047, Austin, Texas. Association for Computational Linguistics. Bo Sun, Yinghui Zhang, Jun He, Lejun Yu, Qihua Xu, Dongliang Li, and Zhaoying Wang. 2017. A Random Forest Regression Method With Selected-Text Feature For Depression Assessment. In Proceedings of the 7th Annual Workshop on Audio/Visual Emotion Challenge - AVEC ’17, pages 61–68, Mountain View, California, USA. ACM Press. Jackson Tolins and Jean E. Fox Tree. 2014. Addressee backchannels steer narrative development. Journal of Pragmatics, 70:152–164. 
Chrystal Vergara and John E. Roberts. 2011. Motivation and goal orientation in vulnerability to depression. Cognition and Emotion, 25(7):1281–1290. A. H. Weinberger, M. Gbedemah, A. M. Martinez, D. Nash, S. Galea, and R. D. Goodwin. 2018. Trends in depression prevalence in the USA from 2005 to 2015: widening disparities in vulnerable groups. Psychological Medicine, 48(8):1308–1315. Le Yang, Dongmei Jiang, Lang He, Ercheng Pei, Meshia C´edric Oveneke, and Hichem Sahli. 2016. Decision Tree Based Depression Classification from Audio Video and Language Information. In Proceedings of the 6th International Workshop on Audio/Visual Emotion Challenge - AVEC ’16, pages 89–96, Amsterdam, The Netherlands. ACM Press. Le Yang, Dongmei Jiang, Xiaohan Xia, Ercheng Pei, Meshia C´edric Oveneke, and Hichem Sahli. 2017. Multimodal Measurement of Depression Using Deep Learning Models. In Proceedings of the 7th Annual Workshop on Audio/Visual Emotion Challenge - AVEC ’17, pages 53–59, Mountain View, California, USA. ACM Press. V. H. Yngve. 1970. On getting a word in edgewise. In Chicago Linguistics Society, 6th Meeting, pages 567–578. Justine Zhang, Robert Filbin, Christine Morrison, Jaclyn Weiser, and Cristian Danescu-Niculescu-Mizil. 2019. Finding Your Voice: The Linguistic Development of Mental Health Counselors. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 946–947. A Appendices A.1 Continuous representation of utterances For continuous representation using the GloVe model, we use the pretrained 100-dimensional embeddings (Pennington et al., 2014). The representation of an utterance is computed as the average of embeddings for words in the utterance, with 0100 used to represent words not in the pretrained vocabulary. Based on the pretrained vocabulary, contractions (e.g. “can’t”) are decomposed. For continuous representation with the 18 BERT model, utterances are split into sequences of sub-word tokens following the authors’ specifications (Devlin et al., 2019), and the pretrained BERT (Base, Uncased) model computes a 768dimensional position-dependent representation. A.2 Training the BERT Model For the BERT model, all interviews were truncated to fit the maximum sequence length of the pretrained BERT model (Base, Uncased): 512 subword tokens. Truncation occurs by alternating between removing prompt and response tokens until the interview length in tokens is adequate. Devlin et al. (2019) suggest trying a limited number of combinations of learning rate and training epochs to optimize the BERT classification model. Specifically, the paper recommends combinations of 2, 3, or 4 epochs and learning rates of 2E-5, 3E-5, and 5E-5. We noted that validation and test scores were surprisingly low (significantly below random) using these combinations, and posited that the small number of suggested epochs could have resulted from the authors only evaluating BERT on certain types of datasets. Accordingly, we evaluated up to 50 epochs with the suggested learning rates and selected a learning rate of 2E-5 with 15 epochs based on validation results. 
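A minimal sketch of the averaged-GloVe utterance representation described in A.1; the `glove` lookup is assumed to be a token-to-vector dictionary, and tokenization, lowercasing, and contraction splitting are simplifications of ours.

```python
import numpy as np

EMB_DIM = 100  # pretrained 100-dimensional GloVe vectors

def utterance_embedding(tokens, glove):
    """Average GloVe vectors over an utterance; out-of-vocabulary words are
    represented by the zero vector, as described in A.1."""
    vecs = [glove.get(tok.lower(), np.zeros(EMB_DIM)) for tok in tokens]
    return np.mean(vecs, axis=0) if vecs else np.zeros(EMB_DIM)
```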
A.3 Exclusion of prompts The goal of removing prompts is to prevent a classifier from identifying participants as depressed based on certain prompts simply being present in the interview, such as “How long ago were you diagnosed [with depression]?” While some prompts are clear indicators, early tests showed that even with these prompts removed, other prompts were predictors for the participant being depressed for no obvious reason, indicating a bias in the design in the interview. Rather than using arbitrary means to determine whether prompts could be predictive, we used a machine-learning based algorithm to identify and remove predictive prompts from interviews. After the division of interviews into turns as described in Section 3.1, we extracted the set of distinct prompts Pdistinct from all interviews (with no additional preprocessing). We then iteratively performed 10 logistic regression experiments using the same set of splits described in Section 4.2. In a given experiment, each interview was represented as an indicator vector with |Pdistinct| dimensions, such that position p is set to 1 if prompt p ∈{1, · · · , |Pdistinct|} is present in the interview, and 0 otherwise. Logistic Regression was optimized on the vector representations for the training interviews. The predicted F1 score for the depressed class on the validation set was recorded for each experiment. The average weight vector for the 10 Logistic regression models was computed. The prompt corresponding to the highest weight was removed from Pdistinct and added to a separate set D of predictive prompts. The process was repeated until the mean validation F1 score was less than the random baseline for the dataset (see Section 4.3). The final set of 31 prompts D had to be removed from the dataset before the baselines and proposed approaches could be evaluated. The design of the DAIC interview posed a challenge, however: the same prompt can appear in many interviews, but preceded by unique interjections by the interviewer, such as “mhm,” “nice,” and “I see”. We refer to this interjections as “prefixes.” We manually compiled a list of 37 prefixes that commonly reoccur in interviews. For all interviews, if a prompt from Pdistinct occurred in the interview after prefixes were ignored, then both the prompt and its corresponding response were removed from the interview before training. This resulted in an removing an average of 13.64 turns from each interview in the dataset.
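A condensed sketch of this iterative exclusion loop, with scikit-learn as a stand-in for the logistic regression step; the split bookkeeping, prefix normalization, and the value of the random-baseline F1 threshold are simplifications of ours.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

def exclude_predictive_prompts(splits, distinct_prompts, random_f1):
    """Iteratively remove the prompt whose presence is most predictive of the
    depressed class until prompts alone are no better than random (A.3).
    `splits` is a list of (train, val) pairs; each interview is represented as
    (set_of_prompt_strings, label) with label 1 = depressed."""
    prompts, removed = list(distinct_prompts), []
    while prompts:
        weights, f1s = [], []
        for train, val in splits:
            to_X = lambda data: np.array(
                [[p in iv for p in prompts] for iv, _ in data], dtype=float)
            to_y = lambda data: np.array([y for _, y in data])
            clf = LogisticRegression(max_iter=1000).fit(to_X(train), to_y(train))
            weights.append(clf.coef_[0])
            f1s.append(f1_score(to_y(val), clf.predict(to_X(val))))
        if np.mean(f1s) < random_f1:  # prompts alone no better than random: stop
            break
        worst = prompts[int(np.argmax(np.mean(weights, axis=0)))]
        removed.append(worst)
        prompts.remove(worst)
    return removed
```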
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 208–224 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 208 Generating Diverse and Consistent QA pairs from Contexts with Information-Maximizing Hierarchical Conditional VAEs Dong Bok Lee1∗Seanie Lee1,3∗Woo Tae Jeong3 Donghwan Kim3 Sung Ju Hwang1,2 KAIST1, AITRICS2, 42Maru Inc.3, South Korea {markhi,lsnfamily02,sjhwang82}@kaist.ac.kr {wtjeong,scissors}@42maru.com Abstract One of the most crucial challenges in question answering (QA) is the scarcity of labeled data, since it is costly to obtain question-answer (QA) pairs for a target text domain with human annotation. An alternative approach to tackle the problem is to use automatically generated QA pairs from either the problem context or from large amount of unstructured texts (e.g. Wikipedia). In this work, we propose a hierarchical conditional variational autoencoder (HCVAE) for generating QA pairs given unstructured texts as contexts, while maximizing the mutual information between generated QA pairs to ensure their consistency. We validate our Information Maximizing Hierarchical Conditional Variational AutoEncoder (InfoHCVAE) on several benchmark datasets by evaluating the performance of the QA model (BERT-base) using only the generated QA pairs (QA-based evaluation) or by using both the generated and human-labeled pairs (semisupervised learning) for training, against stateof-the-art baseline models. The results show that our model obtains impressive performance gains over all baselines on both tasks, using only a fraction of data for training. 1 1 Introduction Extractive Question Answering (QA) is one of the most fundamental and important tasks for natural language understanding. Thanks to the increased complexity of deep neural networks and use of knowledge transfer from the language models pretrained on large-scale corpora (Peters et al., 2018; Devlin et al., 2019; Dong et al., 2019), the stateof-the-art QA models have achieved human-level performance on several benchmark datasets (Rajpurkar et al., 2016, 2018). However, what is also * Equal contribution 1The generated QA pairs and the code can be found at https://github.com/seanie12/Info-HCVAE Paragraph (Input) Philadelphia has more murals than any other u.s. city, thanks in part to the 1984 creation of the department of recreation’s mural arts program, . . . The program has funded more than 2,800 murals Q1 which city has more murals than any other city? A1 philadelphia Q2 why philadelphia has more murals? A2 the 1984 creation of the department of recreation’s mural arts program Q3 when did the department of recreation’ s mural arts program start ? A3 1984 Q4 how many murals funded the graffiti arts program by the department of recreation? A4 more than 2,800 Table 1: An example of QA pairs generated with our framework. The paragraph is an extract from Wikipedia provided by Du and Cardie (2018). For more examples, please see Appendix D. crucial to the success of the recent data-driven models, is the availability of large-scale QA datasets. To deploy the state-of-the-art QA models to real-world applications, we need to construct high-quality datasets with large volumes of QA pairs to train them; however, this will be costly, requiring a massive amount of human efforts and time. Question generation (QG), or Question-Answer pair generation (QAG), is a popular approach to overcome this data scarcity challenge. 
Some of the recent works resort to semi-supervised learning, by leveraging large amount of unlabeled text (e.g. Wikipedia) to generate synthetic QA pairs with the help of QG systems (Tang et al., 2017; Yang et al., 2017; Tang et al., 2018; Sachan and Xing, 2018). However, existing QG systems have overlooked an important point that generating QA pairs from a context consisting of unstructured texts, is essentially a one-to-many problem. Sequence-tosequence models are known to generate generic sequences (Zhao et al., 2017a) without much variety, as they are trained with maximum likelihood estimation. This is highly suboptimal for QAG 209 since the contexts given to the model often contain richer information that could be exploited to generate multiple QA pairs. To tackle the above issue, we propose a novel probabilistic deep generative model for QA pair generation. Specifically, our model is a hierarchical conditional variational autoencoder (HCVAE) with two separate latent spaces for question and answer conditioned on the context, where the answer latent space is additionally conditioned on the question latent space. During generation, this hierarchical conditional VAE first generates an answer given a context, and then generates a question given both the answer and the context, by sampling from both latent spaces. This probabilistic approach allows the model to generate diverse QA pairs focusing on different parts of a context at each time. Another crucial challenge of the QG task is to ensure the consistency between a question and its corresponding answer, since they should be semantically dependent on each other such that the question is answerable from the given answer and the context. In this paper, we tackle this consistency issue by maximizing the mutual information (Belghazi et al., 2018; Hjelm et al., 2019; Yeh and Chen, 2019) between the generated QA pairs. We empirically validate that the proposed mutual information maximization significantly improves the QA-pair consistency. Combining both the hierarchical CVAE and the InfoMax regularizer together, we propose a novel probabilistic generative QAG model which we refer to as Information Maximizing Hierarchical Conditional Variational AutoEncoder (Info-HCVAE). Our Info-HCVAE generates diverse and consistent QA pairs even from a very short context (see Table 1). But how should we quantitatively measure the quality of the generated QA pairs? Popular evaluation metrics (e.g. BLEU (Papineni et al., 2002), ROUGE (Lin and Hovy, 2002), METEOR (Banerjee and Lavie, 2005)) for text generation only tell how similar the generated QA pairs are to GroundTruth (GT) QA pairs, and are not directly correlated with their actual quality (Nema and Khapra, 2018; Zhang and Bansal, 2019). Therefore, we use the QA-based Evaluation (QAE) metric proposed by Zhang and Bansal (2019), which measures how well the generated QA pairs match the distribution of GT QA pairs. Yet, in a semi-supervised learning setting where we already have GT labels, we need novel QA pairs that are different from GT QA pairs for the additional QA pairs to be truly effective. Thus, we propose a novel metric, Reverse QAE (R-QAE), which is low if the generated QA pairs are novel and diverse. We experimentally validate our QAG model on SQuAD v1.1 (Rajpurkar et al., 2016), Natural Questions (Kwiatkowski et al., 2019), and TriviaQA (Joshi et al., 2017) datasets, with both QAE and R-QAE using BERT-base (Devlin et al., 2019) as the QA model. 
Our QAG model obtains high QAE and low R-QAE, largely outperforming stateof-the-art baselines using a significantly smaller number of contexts. Further experimental results for semi-supervised QA on the three datasets using the SQuAD as the labeled dataset show that our model achieves significant improvements over the state-of-the-art baseline (+2.12 on SQuAD, +5.67 on NQ, and +1.18 on Trivia QA in EM). Our contribution is threefold: • We propose a novel hierarchical variational framework for generating diverse QA pairs from a single context, which is, to our knowledge, the first probabilistic generative model for questionanswer pair generation (QAG). • We propose an InfoMax regularizer which effectively enforces the consistency between the generated QA pairs, by maximizing their mutual information. This is a novel approach in resolving consistency between QA pairs for QAG. • We evaluate our framework on several benchmark datasets by either training a new model entirely using generated QA pairs (QA-based evaluation), or use both ground-truth and generated QA pairs (semi-supervised QA). Our model achieves impressive performances on both tasks, largely outperforming existing QAG baselines. 2 Related Work Question and Question-Answer Pair Generation Early works on Question Generation (QG) mostly resort to rule-based approaches (Heilman and Smith, 2010; Lindberg et al., 2013; Labutov et al., 2015). However, recently, encoder-decoder based neural architectures (Du et al., 2017; Zhou et al., 2017) have gained popularity as they outperform rule-based methods. Some of them use paragraph-level information (Du and Cardie, 2018; Song et al., 2018; Liu et al., 2019; Zhao et al., 2018; Kim et al., 2019; Sun et al., 2018) as additional information. Reinforcement learning is a popular 210 approach to train the neural QG models, where the reward is defined as the evaluation metrics (Song et al., 2017; Kumar et al., 2018), or the QA accuracy/likelihood (Yuan et al., 2017; Hosking and Riedel, 2019; Zhang and Bansal, 2019). State-ofthe-art QG models (Alberti et al., 2019; Dong et al., 2019; Chan and Fan, 2019) use pre-trained language models. Question-Answer Pair Generation (QAG) from contexts, which is our main target, is a relatively less explored topic tackled by only a few recent works (Du and Cardie, 2018; Alberti et al., 2019; Dong et al., 2019). To the best of our knowledge, we are the first to propose a probabilistic generative model for end-to-end QAG; Yao et al. (2018) use VAE for QG, but they do not tackle QAG. Moreover, we effectively resolve the QApair consistency issue by maximizing their mutual information with an InfoMax regularizer (Belghazi et al., 2018; Hjelm et al., 2019; Yeh and Chen, 2019), which is another contribution of our work. Semi-supervised QA with QG With the help of QG models, it is possible to train the QA models in a semi-supervised learning manner to obtain improved performance. Tang et al. (2017) apply dual learning to jointly train QA and QG on unlabeled dataset. Yang et al. (2017) and Tang et al. (2018) train QG and QA in a GAN framework (Goodfellow et al., 2014). Sachan and Xing (2018) propose a curriculum learning to supervise the QG model to gradually generate difficult questions for the QA model. Dhingra et al. (2018) introduce a cloze-style QAG method to pretrain a QA model. Zhang and Bansal (2019) propose to filter out low-quality synthetic questions by the answer likelihood. 
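The answer-likelihood filtering of Zhang and Bansal (2019), and the closely related F1-based answer replacement used later in Section 4.5, can be sketched as follows. This is an illustrative sketch only: predict_answer and f1_score are assumed placeholders for a QA model trained on human-labeled data and the standard SQuAD token-level F1, and 40.0 is the threshold selected by cross-validation in Appendix C.

```python
def refine_synthetic_pairs(pairs, predict_answer, f1_score, threshold=40.0):
    """Replace generated answers that disagree with a QA model trained on
    human-labeled data (F1 below `threshold`), following the heuristic of
    Dong et al. (2019) as applied in Section 4.5.
    `pairs` is a list of (context, question, answer) triples."""
    refined = []
    for context, question, answer in pairs:
        predicted = predict_answer(context, question)  # span predicted by the QA model
        if f1_score(answer, predicted) < threshold:
            answer = predicted  # low agreement: fall back to the QA model's span
        refined.append((context, question, answer))
    return refined
```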
While we focus on the answerable setting in this paper, a few recent works tackle the unanswerable setting. Zhu et al. (2019) use neural networks to edit answerable questions into unanswerable ones, and perform semi-supervised QA. Alberti et al. (2019) and Dong et al. (2019) convert generated questions into unanswerable ones using heuristics, and filter or replace the corresponding answers based on EM or F1. Variational Autoencoders Variational autoencoders (VAEs) (Kingma and Welling, 2014) are probabilistic generative models used in a variety of natural language understanding tasks, including language modeling (Bowman et al., 2016), dialogue generation (Serban et al., 2017; Zhao et al., 2017b; Park et al., 2018; Du et al., 2018; Qiu et al., 2019), and machine translation (Zhang et al., 2016; Su et al., 2018; Deng et al., 2018). In this work, we propose a novel hierarchical conditional VAE framework with an InfoMax regularization for generating a pair of samples with high consistency. 3 Method Our goal is to generate diverse and consistent QA pairs to tackle the data scarcity challenge in the extractive QA task. Formally, given a context c containing M tokens, c = (c1, . . . , cM), we want to generate QA pairs (x, y), where x = (x1, . . . , xN) is the question containing N tokens and y = (y1, . . . , yL) is its corresponding answer containing L tokens. We aim to tackle the QAG task by learning the conditional joint distribution of the question and answer given the context, p(x, y|c), from which we can sample the QA pairs: (x, y) ∼ p(x, y|c). We estimate p(x, y|c) with a probabilistic deep generative model, which we describe next. 3.1 Hierarchical Conditional VAE We propose to approximate the unknown conditional joint distribution p(x, y|c) with a variational autoencoder (VAE) framework (Kingma and Welling, 2014). However, instead of directly learning a common latent space for both question and answer, we model p(x, y|c) in a hierarchical conditional VAE framework with a separate latent space for question and answer as follows: pθ(x, y|c) = ∫_{zx} Σ_{zy} pθ(x|zx, y, c) pθ(y|zx, zy, c) pψ(zy|zx, c) pψ(zx|c) dzx, where zx and zy are the latent variables for question and answer respectively, and pψ(zx|c) and pψ(zy|zx, c) are their conditional priors, following an isotropic Gaussian distribution and a categorical distribution respectively (Figure 1-(a)). We decompose the latent spaces of question and answer since the answer is always a finite span of the context c, which can be modeled well by a categorical distribution, while a continuous latent space is a more appropriate choice for the question, since there can be unlimited valid questions from a single context. Moreover, we design a bi-directional dependency flow for the joint QA distribution. By leveraging the hierarchical structure, we enforce the answer latent variables
We then use a variational posterior qφ(·) to maximize the Evidence Lower Bound (ELBO) as follows (The complete derivation is provided in Appendix A): log pθ(x, y|c) ≥Ezx∼qφ(zx|x,c)[log pθ(x|zx, y, c)] + Ezy∼qφ(zy|zx,y,c)[log pθ(y|zy, c)] −DKL[qφ(zy|zx, y, c)||pψ(zy|zx, c)] −DKL[qφ(zx|x, c)||pψ(zx|c)] =: LHCVAE where θ, φ, and ψ are the parameters of the generation, posterior, and prior network, respectively. We refer to this model as a Hierarchical Conditional Variational Autoencoder (HCVAE) framework. Figure 2 shows the directed graphical model of our HCVAE. The generative process is as follows: 1. Sample question L.V.: zx ∼pψ(zx | c) 2. Sample answer L.V.: zy ∼pψ(zy | zx, c) 3. Generate an answer: y ∼pθ(y | zy, c) 4. Generate a question: x ∼pθ(x | zx, y, c) Embedding We use the pre-trained word embedding network from BERT (Devlin et al., 2019) for posterior and prior networks, whereas the whole BERT is used as a contextualized word embedding model for the generative networks. For the answer encoding, we use a binary token type id of BERT. Specifically, we encode all context tokens as 0s, except for the tokens which are part of answer span (highlighted words of context in Figure 1-(a) or -(c)), which we encode as 1s. We then feed the sequence of the word token ids, token type ids, and position ids into the embedding layer to encode the answer-aware context. We fix all the embedding layers in HCVAE during training. Prior Networks We use two different conditional prior networks pψ(zx|c), pψ(zy|zx, c) to model context-dependent priors (the dashed lines in Figure 1-(a)). To obtain the parameters of isotropic Gaussian N(µ, σ2I) for pψ(zx|c), we use a bidirectional LSTM (Bi-LSTM) to encode the word embeddings of the context into the hidden representations, and then feed them into a Multi-Layer Perceptron (MLP). We model pψ(zy|zx, c) following a categorical distribution Cat(π), by computing the parameter π from zx and the hidden representation of the context using another MLP. Posterior Networks We use two conditional posterior networks qφ(zx|x, c), qφ(zy|zx, y, c) to approximate true posterior distributions of latent variables for both question x and answer y. We use two Bi-LSTM encoders to output the hidden representations of question and context given their word embeddings. Then, we feed the two hidden representations into MLP to obtain the parameters of Gaussian distribution, µ′ and σ′ (upper right corner in Figure 1-(a)). We use the reparameterization trick (Kingma and Welling, 2014) to train the model with backpropagation since the stochastic sampling process zx ∼qφ(zx|x, c) is nondifferentiable. We use another Bi-LSTM to encode the word embedding of answer-aware context into the hidden representation. Then, we feed the hidden representation and zx into MLP to compute the parameters π′ of categorical distribution (lower right corner in Figure 1-(a)). We use the categorical reparameterization trick with gumbel-softmax 212 (Maddison et al., 2017; Jang et al., 2017) to enable backpropagation through sampled discrete latent variables. Answer Generation Networks Since we consider extractive QA, we can factorize pθ(y|zy, c) into pθ(ys|zy, c) and pθ(ye|zy, c), where ys and ye are the start and the end position of an answer span (highlighted words in Figure 1-(b)), respectively. To obtain MLE estimators for both, we first encode the context c into the contextualized word embedding of Ec = {ec 1, . . . , ec M} with the pre-trained BERT. 
We compute the final hidden representation of context and the latent variable zy with a heuristic matching layer (Mou et al., 2016) and a Bi-LSTM: fi = [ec i ; zy; |ec i −zy |; ec i ⊙zy] −→ h i = −−−−→ LSTM([fi, −→ h i−1]) ←− h i = ←−−−− LSTM([fi, ←− h i+1]) H = [ −→ h i; ←− h i ]M i=1 where zy is linearly transformed, and H ∈Rdy×M is the final hidden representation. Then, we feed H into two separate linear layers to predict ys and ye. Question Generation Networks We design the encoder-decoder architecture for our QG network by mainly adopting from our baselines (Zhao et al., 2018; Zhang and Bansal, 2019). For encoding, we use pre-trained BERT to encode the answer-specific context into the contextualized word embedding, and then use a two-layer Bi-LSTM to encode it into the hidden representation (in Figure 1-(c)). We apply a gated self-attention mechanism (Wang et al., 2017) to the hidden representation to better capture long-term dependencies within the context, to obtain a new hidden representation ˆH ∈Rdx×M. The decoder is a two-layered LSTM which receives the latent variable zx as an initial state. It uses an attention mechanism (Luong et al., 2015) to dynamically aggregate ˆH at each decoding step into a context vector of sj, using the j-th decoder hidden representation dj ∈Rdx (in Figure 1-(c)). Then, we feed dj and sj into MLP with maxout activation (Goodfellow et al., 2013) to compute the final hidden representation ˆdj as follows: d0 = zx, dj = LSTM([ex j−1, dj−1]) rj = ˆHT Wadj, aj = softmax(rj), sj = ˆHaj ˆdj = MLP([ dj; sj ]) where zx is linearly transformed, and ex j is the j-th question word embedding. The probability vector over the vocabulary is computed as p(xj| x<j, zx, y, c) = softmax(Weˆdj). We initialize the weight matrix We as the pretrained word embedding matrix and fix it during training. Further, we use the copy mechanism (Zhao et al., 2018), so that the model can directly copy tokens from the context. We also greedily decode questions to ensure that all stochasticity comes from the sampling of the latent variables. 3.2 Consistent QA Pair Generation with Mutual Information Maximization One of the most important challenges of the QAG task is enforcing consistency between the generated question and its corresponding answer. They should be semantically consistent, such that it is possible to predict the answer given the question and the context. However, neural QG or QAG models often generate questions irrelevant to the context and the answer (Zhang and Bansal, 2019) due to the lack of the mechanism enforcing this consistency. We tackle this issue by maximizing the mutual information (MI) of a generated QA pair, assuming that an answerable QA pair will have high MI. Since an exact computation of MI is intractable, we use a neural approximation. While there exist many different approximations (Belghazi et al., 2018; Hjelm et al., 2019), we use the estimation proposed by Yeh and Chen (2019) based on Jensen-Shannon Divergence: MI(X; Y ) ≥Ex,y∼P[log g(x, y)] + 1 2E˜x,y∼N[log(1 −g(˜x, y))] + 1 2Ex,˜y∼N[log(1 −g(x, ˜y))] =: LInfo where EP and EN denote expectation over positive and negative examples. We generate negative examples by shuffling the QA pairs in the minibatch, such that a question is randomly associated with an answer. Intuitively, the function g(·) acts like a binary classifier that discriminates whether QA pair is from joint distribution or not. 
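A minimal PyTorch sketch of this estimator is given below, using the bilinear critic g(x, y) = sigmoid(x^T W y) introduced in the next paragraph. It is an illustrative module under assumed input shapes (mean-pooled question and answer representations of size [batch, d]), not the authors' exact implementation.

```python
import torch
import torch.nn as nn


class JSDInfoMax(nn.Module):
    """Jensen-Shannon-style lower bound on the mutual information between
    question and answer representations, with a bilinear critic
    g(x, y) = sigmoid(x^T W y)."""

    def __init__(self, dim_q, dim_a):
        super().__init__()
        self.W = nn.Parameter(torch.randn(dim_q, dim_a) * 0.02)

    def critic(self, q, a):
        # g(x, y) for each (question, answer) pair in the batch -> shape [batch]
        return torch.sigmoid(torch.einsum("bi,ij,bj->b", q, self.W, a))

    def forward(self, q_repr, a_repr, eps=1e-8):
        batch = q_repr.size(0)
        # Positive pairs: each question with its own answer.
        pos = self.critic(q_repr, a_repr)
        # Negative pairs: shuffle answers (and questions) within the minibatch.
        perm = torch.randperm(batch, device=q_repr.device)
        neg_a = self.critic(q_repr, a_repr[perm])
        neg_q = self.critic(q_repr[perm], a_repr)
        mi_lower_bound = (torch.log(pos + eps).mean()
                          + 0.5 * torch.log(1 - neg_a + eps).mean()
                          + 0.5 * torch.log(1 - neg_q + eps).mean())
        # Return a loss to minimize; maximizing the bound encourages consistency.
        return -mi_lower_bound
```

In training, this term would be combined with the ELBO (with λ = 1, as in the final objective given below) by minimizing the negative of both.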
We empirically find that the following g(·) effectively achieves our goal of consistent QAG: g(x, y) = sigmoid(xT Wy) where x = 1 N P i ˆdi and y = 1 L P j ˆhj are summarized representations of question and answer, respectively. Combined with the ELBO, the final 213 objective of our Info-HCVAE is as follows: max Θ LHCVAE + λLInfo where Θ includes all the parameters of φ, ψ, θ and W, and λ controls the effect of MI maximization. In all experiments, we always set the λ as 1. 4 Experiment 4.1 Dataset Stanford Question Answering Dataset v1.1 (SQuAD) (Rajpurkar et al., 2016). This is a reading comprehension dataset consisting of questions obtained from crowdsourcing on a set of Wikipedia articles, where the answer to every question is a segment of text or a span from the corresponding reading passage. We use the same split used in Zhang and Bansal (2019) for the fair comparison. Natural Questions (NQ) (Kwiatkowski et al., 2019). This dataset contains realistic questions from actual user queries to a search engine, using Wikipedia articles as context. We adapt the dataset provided from MRQA shared task (Fisch et al., 2019) and convert it into the extractive QA format. We split the original validation set in half, to use as validation and test for our experiments. TriviaQA (Joshi et al., 2017). This is a reading comprehension dataset containing question-answerevidence triples. The QA pairs and the evidence (contexts) documents are authored and uploaded by Trivia enthusiasts. Again, we only choose QA pairs of which answers are span of contexts. HarvestingQA 2 This dataset contains top-ranking 10K Wikipedia articles and 1M synthetic QA pairs generated from them, by the answer span extraction and QG system proposed in (Du and Cardie, 2018). We use this dataset for semi-supervised learning. 4.2 Experimental Setups Implementation Details In all experiments, we use BERT-base (d = 768) (Devlin et al., 2019) as the QA model, setting most of the hyperparameters as described in the original paper. For both HCVAE and Info-HCVAE, we set the hidden dimensionality of the Bi-LSTM to 300 for posterior, prior, and answer generation networks, and use the dimensionality of 450 and 900 for the encoder and the decoder of the question generation network. We set the dimensionality of zx as 50, and define zy to be set of 2https://github.com/xinyadu/ harvestingQA 10-way categorical variables zy = {z1, . . . , z20}. For training the QA model, we fine-tune the model for 2 epochs. We train both the QA model and Info-HCVAE with Adam optimizer (Kingma and Ba, 2015) with the batch size of 32 and the initial learning rate of 5 · 10−5 and 10−3 respectively. For semi-supervised learning, we first pre-train BERT on the synthetic data for 2 epochs and fine-tune it on the GT dataset for 2 epochs. To prevent posterior collapse, we multiply 0.1 to the KL divergence terms of question and answer (Higgins et al., 2017). For more details of the datasets and experimental setup, please see Appendix C. Baselines We experiment two variants of our model against several baselines: 1. Harvest-QG: An attention-based neural QG model with a neural answer extraction system (Du and Cardie, 2018). 2. Maxout-QG: A neural QG model based on maxout copy mechanism with a gated selfattetion (Zhao et al., 2018), which uses BERT as the word embedding as suggested by Zhang and Bansal (2019). 3. Semantic-QG: A neural QG model based on Maxout-QG with semantic-enhanced reinforcement learning (Zhang and Bansal, 2019). 4. 
HCVAE: Our HCVAE model without the InfoMax regularizer. 5. Info-HCVAE: Our full model with the InfoMax regularizer. For the baselines, we use the same answer spans extracted by the answer extraction system (Du and Cardie, 2018). 4.3 Quantitative Analysis QAE and R-QAE One of crucial challenges with generative models is a lack of a good quantitative evaluation metric. We adopt QA-based Evaluation (QAE) metric proposed by Zhang and Bansal (2019) to measure the quality of QA pair. QAE is obtained by first training the QA model on the synthetic data, and then evaluating the QA model with human annotated test data. However, QAE only measures how well the distribution of synthetic QA pairs matches the distribution of GT QA pairs, and does not consider the diversity of QA pairs. Thus, we propose Reverse QA-based Evaluation (R-QAE), which is the accuracy of the QA model trained on the human-annotated QA pairs, evaluated on the generated QA pairs. If the synthetic 214 Method QAE (↑) R-QAE (↓) SQuAD (EM/F1) Harvesting-QG 55.11/66.40 64.77/78.85 Maxout-QG 56.08/67.50 62.49/78.24 Semantic-QG 60.49/71.81 74.23/88.54 HCVAE 69.46/80.79 37.57/61.24 Info-HCVAE 71.18/81.51 38.80/60.73 Natural Questions (EM/F1) Harvesting-QG 27.91/41.23 49.89/70.01 Maxout-QG 30.98/44.96 49.96/70.03 Semantic-QG 30.59/45.29 58.42/79.23 HCVAE 31.45/46.77 32.78/55.12 Info-HCVAE 37.18/51.46 29.39/53.04 TriviaQA (EM/F1) Harvesting-QG 21.32/30.21 29.75/47.73 Maxout-QG 24.58/34.32 31.56/49.92 Semantic-QG 27.54/38.25 37.45/58.15 HCVAE 30.20/40.88 34.41/48.16 Info-HCVAE 35.45/44.11 21.65/37.65 Table 2: QAE and R-QAE results on three datasets. All results are the performances on our test set. Harvest Maxout Semantic HCVAE Info-QG -QG -QG HCVAE 111.74 114.58 112.94 113.89 117.41 Table 3: The results of mutual information estimation. The results are based on QA pairs generated from H×10%. data covers larger distribution than the human annotated training data, R-QAE will be lower. However, note that having a low R-QAE is only meaningful when the QAE is high enough since trivially invalid questions may also yield low R-QAE. Results We compare HCVAE and Info-HCVAE with the baseline models on SQuAD, NQ, and TriviaQA. We use 10% of Wikipedia paragraphs from HarvestingQA (Du and Cardie, 2018) for evaluation. Table 2 shows that both HCVAE and InfoHCVAE significantly outperforms all baselines by large margin in QAE on all three datasets, while obtaining significantly lower R-QAE, which shows that our model generated both high-quality and diverse QA pairs from the given context. Moreover, Info-HCVAE largely outperforms HCVAE, which demonstrates the effectiveness of our InfoMax regularizer for enforcing QA-pair consistency. Figure 3 shows the accuracy as a function of number of QA pairs. Our Info-HCVAE outperform all baselines by large margins using orders of magnitude smaller number of QA pairs. For example, Info-HCVAE achieves 61.38 points using 12K QA pairs, outperforming Semantic-QG that use 10 times larger number of QA pairs. We also report 104 105 106 50 60 70 # of QA pairs (log-scaled) QA-based Evaluation (EM) Harvest-QG Maxout-QG Semantic-QG Info-HCVAE Figure 3: QAE vs. # of QA pairs (log-scaled) on SQuAD. Method QAE (↑) R-QAE (↓) Baseline 56.08/67.50 62.49/78.24 +Q-latent 58.66/70.54 40.00/62.02 +A-latent 69.46/80.79 37.57/61.24 +InfoMax 71.18/81.51 38.80/60.73 Table 4: QAE and R-QAE results of the ablation study on SQuAD dataset. All the results are the performances on our test set. 
the score of xT Wy as an approximate estimation of mutual information (MI) between QA pairs generated by each method in Table 3; our Info-HCVAE yields the largest value of MI estimation. Ablation Study We further perform an ablation study to see the effect of each model component. We start with the model without any latent variables, which is essentially a deterministic Seq2Seq model (denoted as Baseline in Table 4). Then, we add in the question latent variable (+Q-latent) and then the answer latent variable (+A-latent), to see the effect of probabilistic latent variable modeling and hierarchical modeling respectively. The results in Table 4 shows that both are essential for improving both the quality (QAE) and diversity (R-QAE) of the generated QA pairs. Finally, adding in the InfoMax regularization (+InfoMax) further improves the performance by enhancing the consistency of the generated QA pairs. 4.4 Qualitative Analysis Human Evaluation As a qualitative analysis, we first conduct a pairwise human evaluation of the QA pairs generated by our Info-HCVAE and MaxoutQG on 100 randomly selected paragraphs. Specifically, 20 human judges performed blind quality assessment of two sets of QA pairs that are presented in a random order, each of which contained two to five QA pairs. Each set of QA pairs is evalu215 Method Diversity Consistency Overall Baseline 26% 34% 30% Ours 47% 50% 52% Tie 27% 16% 18% Table 5: The results of human judgement in terms of diversity, consistency, and overall quality on the generated QA pairs. Paragraph The scotland act 1998 which was passed by and given royal assent by queen Elizabeth ii on 19 november 1998, governs functions and role of the scottish parliament and delimits its legislative competence . . . GT what act sets forth the functions of the scottish parliament? O-1 which act was passed in 1998? O-2 which act governs role of the scottish parliament? O-3 which act was passed by queen Elizabeth ii? O-4 which act gave the scottish parliament the responsibility to determine its legislative policy? Table 6: Examples of one-to-many mapping of our InfoHCVAE. The answer is highlighted by pink. GT denotes the ground-truth question. O- denotes questions generated by Info-HCVAE. ated in terms of the overall quality, diversity, and consistency between the generated QA pairs and the context. The results in Table 5 show that the QA pairs generated by our Info-HCVAE is evaluated to be more diverse and consistent, compared to ones generated by the baseline models. One-to-Many QG To show that our Info-HCVAE can effectively tackle one-to-many mapping problem for question generation, we qualitatively analyze the generated questions for given a context and an answer from the SQuAD validation set. Specifically, we sample the question latent variables multiple times using the question prior network pψ(zx | c), and then feed them to question generation networks pθ(x | zx, y, c) with the answer. The example in Table 6 shows that our InfoHCVAE generates diverse and semantically consistent questions given an answer. We provide more qualitative examples in Appendix D. Latent Space Interpolation To examine if InfoHCVAE learns meaningful latent space of QA pairs, we qualitatively analyze the QA pairs generated by interpolating between two latent codes of it on SQuAD training set. We first encode zx from two QA pairs using posterior networks of qφ(zx|x, c), and then sample zy from interpolated values of zx using prior networks pψ(zy|zx, c) to generate corresponding QA pairs. 
Table 7 shows that the semantic of the QA pairs generated smoothly transit from one latent to another with high diversity and consistency. We provide more qualitative examples Paragraph ... Atop the main building’ s gold dome is a golden statue of the virgin mary. ... Next to the main building is the basilica of the sacred heart. Immediately behind the basilica is the grotto, ... a marian place of prayer and reflection. ... At the end of the main drive ..., is a simple, modern stone statue of mary. Ori1 Q what is the grotto at notre dame? A a marian place of prayer and reflection Gen Q where is the grotto at? A a marian place of prayer and reflection Q what place is behind the basilica of prayer? A grotto Q what is next to the main building at notre dame? A the basilica of the sacred heart Q what is at the end of the main drive? A stone statue of mary Ori2 Q what sits on top of the main building at notre dame? A a golden statue of the virgin mary Table 7: QA pairs generated by interpolating between two latent codes encoded by our posterior networks. Ori1 and Ori2 are from training set of SQuAD. in Appendix D. 4.5 Semi-supervised QA We now validate our model in a semi-supervised setting, where the model uses both the ground truth labels and the generated labels to solve the QA task, to see whether the generated QA pairs help improve the performance of a QA model in a conventional setting. Since such synthetic datasets consisting of generated QA pairs may inevitably contain some noise (Zhang and Bansal, 2019; Dong et al., 2019; Alberti et al., 2019), we further refine the QA pairs by using the heuristic suggested by Dong et al. (2019), to replace the generated answers whose F1 score to the prediction of the QA model trained on the human annotated data is lower than a set threshold. We select the threshold of 40.0 for the QA pair refinement model via cross-validation on the SQuAD dataset, and used it for the experiments. Please see Appendix C for more details. SQuAD We first perform semi-supervised QA experiments on SQuAD using the synthetic QA pairs generated by our model. For the contexts, we use both the paragraphs in the original SQuAD (S) dataset, and the new paragraphs in the HarvestingQA dataset (H). Using Info-HCVAE, we generate 10 different QA pairs by sampling from the latent spaces (denoted as S×10). For the baseline, we use Semantic-QG (Zhang and Bansal, 2019) with the beam search size of 10 to obtain the same number of QA pairs. We also generate new QA pairs 216 Data EM F1 SQuAD 80.25 88.23 Semantic-QG (baseline) +S×10 81.20 (+0.95) 88.36 (+0.13) +H×100% 81.03 (+0.78) 88.79 (+0.56) +S×10 + H×100% 81.44 (+1.19) 88.72 (+0.49) Info-HCVAE (ours) +S×10 82.09 (+1.84) 89.11 (+0.88) +H×10% 81.37 (+1.12) 88.85 (+0.62) +H×20% 81.68 (+1.43) 89.06 (+0.93) +H×30% 81.76 (+1.51) 89.12 (+0.89) +H×50% 82.17 (+1.92) 89.38 (+1.15) +H×100% 82.37 (+2.12) 89.63 (+1.40) +S×10 + H×100% 82.19 (+1.94) 89.84 (+1.59) Table 8: The results of semi-supervised QA experiments on SQuAD. All the results are the performances on our test set. using different portions of paragraphs provided in HarvestingQA (denoted as H×10%-H×100%), by sampling one latent variable per context. Table 8 shows that our framework improves the accuracy of the BERT-base model by 2.12 (EM) and 1.59 (F1) points, significantly outperforming Semantic-QG. NQ and TriviaQA Our model is most useful when we do not have any labeled data for a target dataset. 
To show how well our QAG model performs in such a setting, we train the QA model using only the QA pairs generated by our model trained on SQuAD and test it on the target datasets (NQ and TriviaQA). We generate multiple QA pairs from each context of the target dataset, sampling from the latent space one to ten times (denoted by N×110 or T×1-10 in Table 9). Then, we fine-tune the QA model pretrained on the SQuAD dataset with the generated QA pairs from the two datasets. Table 9 shows that as we augment training data with larger number of synthetic QA pairs, the performance of the QA model significantly increases, significantly outperforming the QA model trained on SQuAD only. Yet, models trained with our QAG still largely underperform models trained with human labels, due to the distributional discrepancy between the source and the target dataset. 5 Conclusion We proposed a novel probabilistic generative framework for generating diverse and consistent questionanswer (QA) pairs from given texts. Specifically, our model learns the joint distribution of question and answer given context with a hierarchically conditional variational autoencoder, while enforcing consistency between generated QA pairs by maximizing their mutual information with a novel InData EM F1 Natural Questions SQuAD 42.77 57.29 +N×1 46.70 (+3.94) 61.08 (+3.79) +N×2 46.95 (+4.19) 61.34 (+4.05) +N×3 47.73 (+4.96) 61.98 (+4.69) +N×5 48.19 (+5.42) 62.21 (+4.92) +N×10 48.44 (+5.67) 62.69 (+5.40) NQ 61.65 73.91 TriviaQA SQuAD 48.96 57.98 +T×1 49.65 (+0.69) 59.13 (+1.21) +T×2 50.01 (+1.05) 59.08 (+1.10) +T×3 49.71 (+0.75) 59.49 (+1.51) +T×5 50.14 (+1.18) 59.21 (+1.23) +T×10 49.65 (+0.69) 59.20 (+1.22) Trivia 64.55 70.42 Table 9: The result of semi-supervised QA experiments on Natural Questions and TriviaQA dataset. All results are the performance on our test set. foMax regularizer. To our knowledge, ours is the first successful probabilistic QAG model. We evaluated the QAG performance of our model by the accuracy of the BERT-base QA model trained using the generated questions on multiple datasets, on which it largely outperformed the state-of-theart QAG baseline (+6.59-10.69 in EM), even with a smaller number of QA pairs. We further validated our model for semi-supervised QA, where it improved the performance of the BERT-base QA model on the SQuAD by 2.12 in EM, significantly outperforming the state-of-the-art model. As future work, we plan to extend our QAG model to a meta-learning framework, for generalization over diverse datasets. Acknowledgements This work was supported by the Engineering Research Center Program through the National Research Foundation of Korea (NRF) funded by the Korean Government MSIT (NRF2018R1A5A1059921), Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government(MSIT) (No.2019-0-01410, Research Development of Question Generation for Deep Learning based Semantic Search Domain Extension, No.2016-0-00563, Research on Adaptive Machine Learning Technology Development for Intelligent Autonomous Digital Companion, No.2019- 000075, and Artificial Intelligence Graduate School Program (KAIST)). 217 References Chris Alberti, Daniel Andor, Emily Pitler, Jacob Devlin, and Michael Collins. 2019. Synthetic QA corpora generation with roundtrip consistency. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019. Satanjeev Banerjee and Alon Lavie. 2005. 
Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. In Proceedings of the acl workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization. Mohamed Ishmael Belghazi, Aristide Baratin, Sai Rajeswar, Sherjil Ozair, Yoshua Bengio, R. Devon Hjelm, and Aaron C. Courville. 2018. Mutual information neural estimation. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018. Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew M. Dai, Rafal J´ozefowicz, and Samy Bengio. 2016. Generating sentences from a continuous space. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, CoNLL 2016. Ying-Hong Chan and Yao-Chung Fan. 2019. A recurrent bert-based model for question generation. In Proceedings of the 2nd Workshop on Machine Reading for Question Answering. Yuntian Deng, Yoon Kim, Justin Chiu, Demi Guo, and Alexander Rush. 2018. Latent alignment and variational attention. In Advances in Neural Information Processing Systems, NIPS 2018. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019. Bhuwan Dhingra, Danish Danish, and Dheeraj Rajagopal. 2018. Simple and effective semi-supervised question answering. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NACCL-HLT, 2018. Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified language model pre-training for natural language understanding and generation. In Advances in Neural Information Processing Systems, NeurIPS, 2019. Jiachen Du, Wenjie Li, Yulan He, Ruifeng Xu, Lidong Bing, and Xuan Wang. 2018. Variational autoregressive decoder for neural response generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. EMNLP 2018. Xinya Du and Claire Cardie. 2018. Harvesting paragraph-level question-answer pairs from wikipedia. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018. Xinya Du, Junru Shao, and Claire Cardie. 2017. Learning to ask: Neural question generation for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017. Adam Fisch, Alon Talmor, Robin Jia, Minjoon Seo, Eunsol Choi, and Danqi Chen. 2019. Mrqa 2019 shared task: Evaluating generalization in reading comprehension. In EMNLP 2019 MRQA Workshop. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Advances in Neural Information Processing Systems, NIPS 2014. Ian J Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron Courville, and Yoshua Bengio. 2013. Maxout networks. In Proceedings of the 30th International Conference on International Conference on Machine, ICML 2013. Michael Heilman and Noah A. Smith. 2010. Good question! statistical ranking for question generation. In Proceedings of the 2010 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NACCL-HLT 2010. 
Irina Higgins, Lo¨ıc Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. 2017. beta-vae: Learning basic visual concepts with a constrained variational framework. In 5th International Conference on Learning Representations, ICLR 2017. R Devon Hjelm, Alex Fedorov, Samuel LavoieMarchildon, Karan Grewal, Phil Bachman, Adam Trischler, and Yoshua Bengio. 2019. Learning deep representations by mutual information estimation and maximization. In International Conference on Learning Representations, ICLR 2019. Tom Hosking and Sebastian Riedel. 2019. Evaluating rewards for question generation models. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACLHLT 2019. Eric Jang, Shixiang Gu, and Ben Poole. 2017. Categorical reparameterization with gumbel-softmax. In 5th International Conference on Learning Representations, ICLR 2017. Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of 218 the Association for Computational Linguistics, ACL 2017. Yanghoon Kim, Hwanhee Lee, Joongbo Shin, and Kyomin Jung. 2019. Improving neural question generation using answer separation. In Proceedings of the AAAI Conference on Artificial Intelligence. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015. Diederik P. Kingma and Max Welling. 2014. Autoencoding variational bayes. In 2nd International Conference on Learning Representations, ICLR 2014. Vishwajeet Kumar, Ganesh Ramakrishnan, and YuanFang Li. 2018. A framework for automatic question generation from text using deep reinforcement learning. CoRR. Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Matthew Kelcey, Jacob Devlin, Kenton Lee, Kristina N. Toutanova, Llion Jones, Ming-Wei Chang, Andrew Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: a benchmark for question answering research. Transactions of the Association of Computational Linguistics, TACL 2019. Igor Labutov, Sumit Basu, and Lucy Vanderwende. 2015. Deep questions without deep understanding. In Proceedings of the 53rd Annual Meeting of the Association for Computational, ACL 2015. Chin-Yew Lin and Eduard Hovy. 2002. Manual and automatic evaluation of summaries. In Proceedings of the ACL-02 Workshop on Automatic SummarizationVolume 4. David Lindberg, Fred Popowich, John C. Nesbit, and Philip H. Winne. 2013. Generating natural language questions to support learning on-line. In Proceedings of the 14th European Workshop on Natural Language Generation, ENLG 2013. Bang Liu, Mingjun Zhao, Di Niu, Kunfeng Lai, Yancheng He, Haojie Wei, and Yu Xu. 2019. Learning to generate questions by learningwhat not to generate. In The World Wide Web Conference, WWW 2019. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015. Chris J. Maddison, Andriy Mnih, and Yee Whye Teh. 2017. The concrete distribution: A continuous relaxation of discrete random variables. In 5th International Conference on Learning Representations, ICLR 2017. 
Lili Mou, Rui Men, Ge Li, Yan Xu, Lu Zhang, Rui Yan, and Zhi Jin. 2016. Natural language inference by tree-based convolution and heuristic matching. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016. Preksha Nema and Mitesh M. Khapra. 2018. Towards a better metric for evaluating question generation systems. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, EMNLP 2018. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, ACL 2002. Yookoon Park, Jaemin Cho, and Gunhee Kim. 2018. A hierarchical latent structure for variational conversation modeling. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NACCL-HLT 2018. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018. Lisong Qiu, Juntao Li, Wei Bi, Dongyan Zhao, and Rui Yan. 2019. Are training samples correlated? learning to generate dialogue responses with multiple references. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, ACL 2019. Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don’t know: Unanswerable questions for squad. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100, 000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016. Mrinmaya Sachan and Eric P. Xing. 2018. Selftraining for jointly learning to ask and answer questions. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018. Iulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2017. A hierarchical latent variable encoder-decoder model for generating dialogues. In Thirty-First AAAI Conference on Artificial Intelligence 2018. 219 Linfeng Song, Zhiguo Wang, and Wael Hamza. 2017. A unified query-based generative model for question generation and question answering. CoRR. Linfeng Song, Zhiguo Wang, Wael Hamza, Yue Zhang, and Daniel Gildea. 2018. Leveraging context information for natural question generation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACLHLT 2018. Jinsong Su, Shan Wu, Deyi Xiong, Yaojie Lu, Xianpei Han, and Biao Zhang. 2018. Variational recurrent neural machine translation. In Thirty-Second AAAI Conference on Artificial Intelligence 2018. Xingwu Sun, Jing Liu, Yajuan Lyu, Wei He, Yanjun Ma, and Shi Wang. 2018. Answer-focused and position-aware neural question generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, EMNLP 2018. Duyu Tang, Nan Duan, Tao Qin, and Ming Zhou. 2017. 
Question answering and question generation as dual tasks. CoRR. Duyu Tang, Nan Duan, Zhao Yan, Zhirui Zhang, Yibo Sun, Shujie Liu, Yuanhua Lv, and Ming Zhou. 2018. Learning to collaborate for question answering and asking. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018. Wenhui Wang, Nan Yang, Furu Wei, Baobao Chang, and Ming Zhou. 2017. Gated self-matching networks for reading comprehension and question answering. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R’emi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface’s transformers: State-of-the-art natural language processing. CoRR. Zhilin Yang, Junjie Hu, Ruslan Salakhutdinov, and William Cohen. 2017. Semi-supervised QA with generative domain-adaptive nets. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017. Kaichun Yao, Libo Zhang, Tiejian Luo, Lili Tao, and Yanjun Wu. 2018. Teaching machines to ask questions. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018. Yi-Ting Yeh and Yun-Nung Chen. 2019. Qainfomax: Learning robust question answering system by mutual information maximization. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019. Xingdi Yuan, Tong Wang, C¸ aglar G¨ulc¸ehre, Alessandro Sordoni, Philip Bachman, Saizheng Zhang, Sandeep Subramanian, and Adam Trischler. 2017. Machine comprehension by text-to-text neural question generation. In Proceedings of the 2nd Workshop on Representation Learning for NLP, Rep4NLP@ACL 2017. Biao Zhang, Deyi Xiong, Jinsong Su, Hong Duan, and Min Zhang. 2016. Variational neural machine translation. CoRR. Shiyue Zhang and Mohit Bansal. 2019. Addressing semantic drift in question generation for semisupervised question answering. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019. Tiancheng Zhao, Ran Zhao, and Maxine Eskenazi. 2017a. Learning discourse-level diversity for neural dialog models using conditional variational autoencoders. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017. Tiancheng Zhao, Ran Zhao, and Maxine Eskenazi. 2017b. Learning discourse-level diversity for neural dialog models using conditional variational autoencoders. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017. Yao Zhao, Xiaochuan Ni, Yuanyuan Ding, and Qifa Ke. 2018. Paragraph-level neural question generation with maxout pointer and gated self-attention networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, EMNLP 2018. Qingyu Zhou, Nan Yang, Furu Wei, Chuanqi Tan, Hangbo Bao, and Ming Zhou. 2017. Neural question generation from text: A preliminary study. In Natural Language Processing and Chinese Computing - 6th CCF International Conference, NLPCC 2017. Haichao Zhu, Li Dong, Furu Wei, Wenhui Wang, Bing Qin, and Ting Liu. 2019. Learning to ask unanswerable questions for machine reading comprehension. 
In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019. 220 Appendix A Derivation of Variational Lower Bound Theorem. If we assume conditional independence of y and zx, i.e., pθ(y|zx, zy, c) = pθ(y|zy, c), log pθ(x, y|c) ≥LHCVAE Proof. log pθ(x, y|c) = log Z zx X zy pθ(x|zx, y, c)· pθ(y|zx, zy, c)pψ(zy|zx, c)pψ(zx|c)dzx = log Z zx pθ(x|zx, y, c)pψ(zx|c)qφ(zx|x, c) qφ(zx|x, c)· X zy pθ(y|zy, c)pψ(zy|zx, c)qφ(zy|zx, y, c) qφ(zy|zx, y, c)dzx = log Z zx pθ(x|zx, y, c)pψ(zx|c)qφ(zx|x, c) qφ(zx|x, c) · Eqφ(zy|zx,y,c) " pθ(y|zy, c)pψ(zy|zx, c) qφ(zy|zx, y, c) # dzx = log Eqφ(z|x,c){pθ(x|zx, y, c)pψ(zx|c) qφ(zx|x, c) · Eqφ(zy|zx,y,c) " pθ(y|zy, c)pψ(zy|zx, c) qφ(zy|zx, y, c) # } ≥Eqφ(z|x,c){log pθ(x|zx, y, c)pψ(zx|c) qφ(zx|x, c) + log Eqφ(zy|zx,y,c) " pθ(y|zy, c)pψ(zy|zx, c) qφ(zy|zx, y, c) # } = Eqφ(z|x,c)[log pθ(x|zx, y, c)] −DKL[qφ(zx|x, c)||pψ(zx|c)] + Eqφ(z|x,c){ log Eqφ(zy|zx,y,c) " pθ(y|zy, c)pψ(zy|zx, c) qφ(zy|zx, y, c) # } ≥Eqφ(zx|x,c)[log pθ(x|zx, y, c)] −DKL[qφ(zx|x, c)||pψ(zx|c)] + Eqφ(zx|x,c){Eqφ(zy|zx,y,c)[log pθ(y|zy, c)] −DKL[qφ(zy|zx, y, c)||pψ(zy|zx, c)]} ≈Eqφ(zx|x,c)[log pθ(x|zx, y, c)] −DKL[qφ(zx|x, c)||pψ(zx|c)] + Eqφ(zy|zx,y,c)[log pθ(y|zy, c)] −DKL[qφ(zy|zx, y, c)||pψ(zy|zx, c)] B Datatset The statistics and the data resource are summarized in Table 10. SQuAD We tokenize questions and contexts with WordPiece tokenizer from BERT. To fairly compare our proposed methods with the existing semisupervised QA, we follow Zhang and Bansal (2019)’s split, which divides original development set from SQuAD v1.1 (Rajpurkar et al., 2016) into new validation set and test set. We adopt most of the codes from Wolf et al. (2019) for preprocessing data, training, and evaluating the BERT-base QA model. Natural Questions Other than the original Natural Questions (Kwiatkowski et al., 2019) dataset, we use subset of the dataset provided by MRQA shared task (Fisch et al., 2019) for extractive QA. As semi-supervised setting with SQuAD, we split the validation set provided from MRQA into half for validation set and the others for test set. All the tokens from question and context are tokenized with WordPiece tokenizer from BERT. We generate QA pairs from context not containing html tag, and evaluate QA model with the official MRQA evaluation scripts. TriviaQA For TriviaQA (Joshi et al., 2017), we also use the training set from MRQA shared task, and divide the development set from MRQA into half for validation set and the other for test set. All the tokens from question and context are tokenized with WordPiece tokenizer from BERT. For evaluation, we follow the MRQA’s official evaluation procedure. HarvestingQA3 We use paragraphs from HarvestingQA dastaset (Du and Cardie, 2018) to generate QA pairs for QA-based Evaluation (QAE) and Reverse QA-based Evaluation (R-QAE). For the baseline QG models such as Maxout-QG and SemanticQG, we use the same answer spans from the dataset. For the experiments of Maxout-QG baseline, we train the model and generate new questions from the context and answer, while the questions generated by Semantic-QG are provided by the authors (Zhang and Bansal, 2019). C Training Details Maxout-QG We use Adam (Kingma and Ba, 2015) optimizer with the batch size of 64 and set the initial learning rate of 10−3. 
We always set the 3https://github.com/xinyadu/ harvestingQA 221 Datasets Train (#) Valid (#) Source SQuAD 86,588 10,507 Crowd-sourced questions from Wikipedia paragraph Natural Questions 104,071 12,836 Questions from actual userfor searching Wikipedia paragraph TriviaQA 74,160 7,785 Question and answer pairs authored by trivia enthusaists from the Web HarvestQA 1,259,691 Generated by neural networks from top-ranking 10,000 Wikipedia articles Table 10: The statistics and the data source of SQuAD, Natural Questions, TriviaQA, and HarvestingQA. Replace EM F1 F1 ≤0.0 82.4 89.39 F1 ≤20.0 83.11 89.65 F1 ≤40.0 83.32 89.79 F1 ≤60.0 83.20 89.78 F1 ≤80.0 83.09 89.75 Table 11: The effect of F1-based replacement strategy in semi-supervised setting of SQuAD+H×100%. All results are the performance on validation set of Zhang and Bansal (2019). beam size of 10 for decoding. We also evaluate the Maxout-QG model on our SQuAD validation set with BLEU4 (Papineni et al., 2002), and get 15.68 points. Selection of Threshold for Replacement As mentioned in our paper, we use the threshold of 40.0 selected via cross-validation of the QA model performance, using both the full SQuAD and HarvestingQA dataset for QAG. The detailed selection processes are as follows: 1) train QA model on only human annotated data, 2) compute F1 score of generated QA pairs, and 3) if the F1 score is lower than the threshold, replace the generated answer with the prediction of QA model. We investigate the optimal value of threshold among [20.0, 40.0, 60.0, 80.0] using our validation set of SQuAD. Table 11 shows the results of cross-validation on the validation set. The optimal value of 40.0 is used for semisupervised experiments on Natural Questions and TriviaQA. For fully unlabeled semi-supervised experiments on Natural Questions and TriviaQA, the QA model is only trained on SQuAD and used to replace the synthetic QA pairs (denoted in our paper as N×1-10, T×1-10). Semi-supervised learning For the semisupervised learning experiment on SQuAD, we follow Zhang and Bansal (2019)’s split for a fair comparison. Specifically, we receive the unique IDs for QA pairs from the authors and use exactly the same validation and test set as theirs. For the Natural Questions and TriviaQA experiments, we use our own split as mentioned in the above. We generate QA pairs from the paragraphs of Wikipedia extracted by Du and Cardie (2018) and train BERT-base QA model with the synthetic data for two epochs. Then we further train the model with human-annotated training data for two more epochs. The catastrophic forgetting reported in Zhang and Bansal (2019) does not occur in our cases. We use Adam optimizer (Kingma and Ba, 2015) with batch size 32 and follow the learning rate scheduling as described in (Devlin et al., 2019) with initial learning rate 2 · 10−5 and 3 · 10−5 for synthetic and human annotated data, respectively. D Qualitative Examples The qualitative examples in Table 12, 13, 14 are shown in the next page. 222 Paragraph-1 Near Tamins-Reichenau the Anterior Rhine and the Posterior Rhine join and form the Rhine. . . . This section is nearly 86km long, and descends from a height of 599m to 396m. It flows through a wide glacial alpine valley known as the Rhine Valley (German: Rheintal). Near Sargans a natural dam, only a few metres high, . . . The Alpine Rhine begins in the most western part of the Swiss canton of Graub¨unden, . . . Q-1: how long is the rhine? A-1: 86km long Q-2: how large is the dam? 
A-2: a few metres high Q-3: where does the anterior rhine and the posterior rhine join the rhine? A-3: Tamins-Reichneau Q-4: what type of valley does the rhine flows through? A-4: glacial alpine Q-5: what is the rhine valley in german? A-5: Rheintal Q-6: where deos the alpine rhine begin? A-7: Swiss canton of Graub¨unden Paragraph-2 Victoria is the centre of dairy farming in Australia. It is home to 60% of Australia’s 3 million dairy cattle and produces nearly two-thirds of the nation’s milk, almost 6.4 billion litres. The state also has 2.4 million beef cattle, with more than 2.2 million cattle and calves slaughtered each year. In 2003–04, Victorian commercial fishing crews and aquaculture industry produced 11,634 tonnes of seafood valued at nearly $109 million. . . . Q-1: what industry produced 11,63 million tonnes of seafood in 2003-04 ? A-1: aquaculture Q-2: what type of cattle is consumed in Victoria? A-2: beef Q-3: in what year did victorian commercial fishing and aquaculture industry produce a large amount of seafood? A-3: 2003–04 Q-4: how many cattle and calves each year are slaughtered annually? A-4: 2.2 million Q-5: how much of the nation’s milk is produced by the dairy? A-5: two-thrids Paragraph-3 A teacher’s role may vary among cultures. Teachers may provide instruction in literacy and numeracy, craftsmanship or vocational training, the arts, religion, civics, community roles, or life skills. Q-1: what do a teacher’s role vary? A-1: culture Q-2: what do teachers provide instruction in? A-2: vocational training Q-3: what is one thing a teacher may provide instruction for? A-3: community roles Q-4: what is one of the skills that teachers provide in? A-4: life skills Table 12: Examples of QA pairs generated by our Info-HCVAE. We sample multiple latent variables from pψ(·), and feed them to generation networks. All the paragraphs are from validation set of SQuAD. 223 Paragraph-1 Super bowl 50 was an american football game to determine the champion of the National Football League (NFL) for the 2015 season. The American Football Conference (AFC) champion Denver Broncos defeated the National Football Conference (NFC) champion Carolina Panthers 24 – 10 to earn their third super bowl title. . . . GT which NFL team represented the AFC at super bowl 50? Ours-1 what team did the American Football Conference represent? Ours-2 who won the 2015 American Football Conference? Ours-3 which team defeated the carolina panthers? Ours-4 who defeated the panthers in 2015? Ours-5 what team defeated the carolina panthers in the 2015 season? Ours-6 who was the champion of the American Football League in the 2015 season? Ours-7 what team won the 2015 American Football Conference? Paragraph-2 ...Some clergy offer healing services, while exorcism is an occasional practice by some clergy in the united methodist church in Africa. ... GT in what country does some clergy in the umc occasionally practice exorcism? Ours-1 in what country do some clergy in the united methodist church take place? Ours-2 in what country is exorcism practice an occasional practice? Ours-3 use of exorcism is an occasional practice in what country? Ours-4 is exorcism usually an occasional practice in what country? Paragraph-3 ..., the city was the subject of a song , “walking into fresno” , written by hall of fame guitarist Bill Aken . . . GT who wrote “walking in fresno”? Ours-1 who wrote “walking into fresno”? Ours-2 “walking into fresno” was written by whom? Ours-3 the song “walking into fresno” was written by whom? 
Table 13: Examples of one-to-many mapping of our Info-HCVAE. Answers are highlighted by pink. We sample multiple question latent variables from pψ(zx | c), and feed them to question generation networks with a fixed answer. GT denotes ground-truth question, and Seq2Seq denotes question generated by Maxout-QG. All the paragraphs, ground truth questions, and answers are from validation set of SQuAD. 224 Paragraph-1 Notre Dame is known for its competitive admissions, with the incoming class enrolling in fall 2015 admitting 3,577 from a pool of 18,156 (19.7%). The academic profile of the enrolled class continues to rate among the top 10 to 15 in the nation for national research universities. ... 1,400 of the 3,577 (39.1% ) were admitted under the early action plan. Ori1 Q where does notre dame rank in terms of academic profile among research universities in the us? A the top 10 to 15 in the nation Gen Q where does the academic profile of notre dame rank? A the top 10 to 15 Q what was the rate of the incoming class enrolling in the fall of 2015? A 3,577 from a pool of 18,156 (19.7%) Q how many students attended notre dame? A 3,577 Ori2 Q what percentage of students at notre dame participated in the early action program? A 39.1% Paragraph-2 ...begun as a one-page journal in September 1876, the scholastic magazine is issued twice monthly and . . . In 1987, when some students believed that the observer began to show a . . . In spring 2008 an undergraduate journal for political science research, beyond politics, made its debut. Ori1 Q when did the scholastic magazine of notre dame begin publishing? A september 1876 Gen Q when was the scholastic magazine published? A 1876 Q in what year did notre dame get its liberal newspaper? A 1987 Q how often is the scholastic magazine published ? A twice Ori2 Q in what year did notre dame begin its undergraduate journal ? A 2008 Paragraph-3 As at most other universities, notre dame’s students run a number of news media outlets. The nine student - run outlets include ..., and several magazines and journals. . . . . the dome yearbook is published annually. . . . Ori1 Q what is the daily student paper at notre dame called? A the observer Gen Q how many student media outlets are there at notre dame? A nine student - run outlets include three Q what type of media is the student paper at notre dame? A a number of news media Q how often is the dome published? A annually Q how many magazines are published at notre dame ? A several Ori2 Q how many student news papers are found at notre dame ? A three Table 14: QA pairs generated by interpolating between two latent codes encoded by our posterior networks. Ori1 and Ori2 are from training set of SQuAD.
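The interpolation procedure behind Tables 7 and 14 can be summarized with the following sketch. Here posterior_zx, prior_zy, generate_answer, and generate_question are placeholders for the trained Info-HCVAE networks (they do not name the released code), and the linear mixing of the two question codes is an assumption about the interpolation scheme.

```python
def interpolate_qa(context, qa_1, qa_2, posterior_zx, prior_zy,
                   generate_answer, generate_question, steps=4):
    """Generate QA pairs along the line between the question latent codes of
    two reference pairs, as in the latent-interpolation analysis.
    All network arguments are placeholders for trained Info-HCVAE modules."""
    z1 = posterior_zx(qa_1["question"], context)   # q_phi(z_x | x, c)
    z2 = posterior_zx(qa_2["question"], context)
    pairs = []
    for i in range(1, steps + 1):
        alpha = i / (steps + 1)
        z_x = (1 - alpha) * z1 + alpha * z2        # interpolated question code
        z_y = prior_zy(z_x, context)               # sample from p_psi(z_y | z_x, c)
        answer = generate_answer(z_y, context)     # p_theta(y | z_y, c)
        question = generate_question(z_x, answer, context)  # p_theta(x | z_x, y, c)
        pairs.append((question, answer))
    return pairs
```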
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2209–2213 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2209 To Pretrain or Not to Pretrain: Examining the Benefits of Pretraining on Resource Rich Tasks Sinong Wang, Madian Khabsa, Hao Ma Faceboook AI, Seattle, USA {sinongwang, mkhabsa, haom}@fb.com Abstract Pretraining NLP models with variants of Masked Language Model (MLM) objectives has recently led to a significant improvements on many tasks. This paper examines the benefits of pretrained models as a function of the number of training samples used in the downstream task. On several text classification tasks, we show that as the number of training examples grow into the millions, the accuracy gap between finetuning BERT-based model and training vanilla LSTM from scratch narrows to within 1%. Our findings indicate that MLM-based models might reach a diminishing return point as the supervised data size increases significantly. 1 Introduction Language modeling has emerged as an effective pretraining approach in wide variety of NLP models. Multiple techniques have been proposed, including bi-directional language modeling (Peters et al., 2018), masked language models (Devlin et al., 2018), and variants of denoising auto-encoder approaches (Lewis et al., 2019; Raffel et al., 2019; Joshi et al., 2019). Today, it is rare to examine a leaderboard without finding the top spots occupied by some variant of a pretraining method.1 The future of NLP appears to be paved by pretraining a universal contextual representation on wikipedia-like data at massive scale. Attempts along this path have pushed the frontier to up 10× to the size of wikipedia (Raffel et al., 2019). However, the success of these experiments is mixed: although improvements have been observed, the downstream task is usually data-limited. There is evidence that large-scale pretraining does not always lead to state-of-the-art results (Raffel et al., 2019), especially on tasks such as machine translation, where abundance of training data, and the 1https://super.gluebenchmark.com/leaderboard existence of strong augmentation methods such as back translation might have limited the benefit of pretraining. This paper examines the pretraining benefits of downstream tasks as the number of training samples increases. To answer this question, we focus on multi-class text classification since: (i) it is one of most important problems in NLP with applications spanning multiple domains. (ii) large sums of training data exists for many text classification tasks, or can be obtained relatively cheaply through crowd workers (Snow et al., 2008). We choose three sentiment classification datasets: Yelp review (yel, 2019), Amazon sports and electronics review (Ni et al., 2019), ranging in size from 6 to 18 million examples. 2 We finetune a RoBERTa model (Liu et al., 2019) with increments of the downstream dataset, and evaluate the performance at each increment. For example, on the Yelp dataset whose size is 6 million, we train the models on subsets of the data with each subset size being in the sequence (60k, 600K, 1.8M, 3M .., 6M). For comparison, we also train a vanilla BiLSTM, and another BiLSTM which uses pretrained Roberta token embeddings. 
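To make the comparison concrete, the following is a minimal PyTorch sketch of the BiLSTM baseline family described above: a bidirectional LSTM over token embeddings, max-pooled over time and fed to a linear classifier, where the embedding layer can optionally be initialized from a pretrained token-embedding matrix and frozen (mirroring the setup detailed later in Section 3.2). The class name, hyperparameters, and random inputs are illustrative placeholders, not the authors' implementation.

import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    def __init__(self, vocab_size, emb_dim, hidden_dim, num_layers, num_classes,
                 pretrained_emb=None, freeze_emb=True):
        super().__init__()
        if pretrained_emb is not None:
            # Reuse a pretrained token-embedding table (e.g., exported from an MLM) and freeze it.
            self.emb = nn.Embedding.from_pretrained(pretrained_emb, freeze=freeze_emb)
            emb_dim = pretrained_emb.size(1)
        else:
            self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, num_layers=num_layers,
                            batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids):
        x = self.emb(token_ids)             # (B, T, E)
        h, _ = self.lstm(x)                 # (B, T, 2H)
        pooled, _ = h.max(dim=1)            # max-pool over time -> (B, 2H)
        return self.classifier(pooled)      # (B, num_classes) logits

# Example: a 2-layer, 256-unit BiLSTM with randomly initialized 128-d embeddings.
model = BiLSTMClassifier(vocab_size=50000, emb_dim=128, hidden_dim=256,
                         num_layers=2, num_classes=5)
logits = model(torch.randint(0, 50000, (4, 64)))   # batch of 4 length-64 sequences
print(logits.shape)                                # torch.Size([4, 5])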
We observe that when both models are trained on 1% of the data, the gap between BiLSTM and RoBERTa models is at its peak, but as the training dataset size increases, the BiLSTM model accuracy keeps on increasing whereas RoBERTa’s accuracy remain mostly flat. As the dataset size increases, the accuracy gap shrinks to within 1%. Our study suggests that collecting data and training on the target tasks is a solution worth considering, especially in production environments where accuracy is not the only considered factor, rather inference latency is often just as crucial. We benchmarked the inference latency of the these models on 2These datasets are the largest publicly available classifiaction datasets that we are aware of. 2210 both CPU and GPU for different batch sizes, and as expected, we observe at least 20× speedup for the BiLSTM compared to the RoBERTa. This paper provides new experimental evidence and discussions for people to rethink the MLM pre-training paradigm in NLP, at least for resource rich tasks. 2 Related Works Scaling the number of training examples has long been identified as source of improvement for machine learning models in multiple domains including NLP (Banko and Brill, 2001), computer vision (Deng et al., 2009; Sun et al., 2017) and speech (Amodei et al., 2016). Previous work has suggested that deep learning scaling may be predictable empirically (Hestness et al., 2017), with model size scaling sub-linearly with training data size. (Sun et al., 2017) concluded that accuracy increases logarithmally with respect to training data size. However, these studies have focused on training models in the the fully supervised setting, without pretraining. One closer work is (He et al., 2019) where it is shown that randomly initialized standard computervision models perform no worse than their ImageNet pretrained counterparts. However, our work focuses on text classification. We do not examine the benefit of pretraining, at large, rather we focus on the benefit of pretraining for resource rich tasks. Another concurrent work that is still under review, in (Nakkiran and Sutskever, 2020) observes that, in some translation task such as IWSLT14, small language models exhibit even lower test loss compared to the large transformer model when the number of training samples increases. 3 Experiments 3.1 Task and Data We focus on a multi-class sentiment classification task: given the user reviews, predict the rating in five points scale {1, 2, 3, 4, 5}. The experiments are conducted on the following three benchmark datasets. • Yelp Challenge (yel, 2019) contains text reviews, tips, business and check-in sets in Yelp. We use the 6.7m user reviews with ratings as our dataset. • Amazon Reviews (Ni et al., 2019) contains product reviews (ratings, text, helpfulness votes) from Amazon. We choose two categories: sports / outdoors, and electronics as two separate datasets. We only use the review text as input features. The distribution across five ratings of each dataset is illustrated in Table 1. In our experiment, all the above data is split into 90% for training and 10% for testing. Dataset Size 1 2 3 4 5 Yelp 6.69M 15% 8% 11% 22% 44% Sports 11.9M 7% 5% 7% 16% 65% Electronics 18.6M 11% 5% 7% 16% 61% Table 1: Data size and percentage of samples in each (n)-star category 3.2 Models We choose the following three types of pretrained and vanilla models: • RoBERTa (Liu et al., 2019) RoBERTa is a transformer-based model pretrained with masked language modeling objectives on a large corpus. 
We finetune our classification task on both Roberta-Base (12 layers, 768 hidden, 12 heads) and Roberta-Large (24 layers, 1024 hidden, 16 heads). • LSTM (Hochreiter and Schmidhuber, 1997) We use a bidirectional LSTM with a maxpooling layer on top of the hidden states, followed by a linear layer. Token embeddings of size 128 are randomly initialized. • LSTM + Pretrained Token Embedding Similar to the previous setup, except we initialized the token embeddings with Roberta pretrained token embedding (Base: 768-dimensional embedding, Large: 1024dimensional embedding). The embeddings are frozen during training. For fair comparison, all the above models share the same vocabulary and BPE tokenizer (Sennrich et al., 2015). 3.3 Experimental Setup We use the Adam optimizer and the following hyperparameter sweep for each model. (i) RoBERTa is finetuned with the following learning rates {5e −6, 1e5, 1.5e −5, 2e −5}, with linear warm up in the first 5% of steps followed by a linear 2211 Figure 1: Accuracy Gap of Roberta, BiLSTM trained on different amount of data Models Yelp Sports Electronics Params Accuracy ∆ Accuracy ∆ Accuracy ∆ Roberta-Large 78.85 79.65 79.07 304M Roberta-Base 78.44 0.41 79.45 0.20 78.84 0.23 86M LSTM-4-512 + Large 77.14 1.71 78.80 0.85 78.16 0.92 25M LSTM-4-512 + Base 77.07 1.78 78.72 0.93 78.07 1.0 24M LSTM-4-256 + Large 77.02 1.83 78.76 0.89 78.12 0.95 7.4M LSTM-4-256 + Base 77.03 1.82 78.62 1.03 77.98 1.09 6.8M LSTM-4-256 76.37 2.48 78.38 1.27 77.76 1.31 4.8M LSTM-2-256 76.09 2.76 78.18 1.47 77.57 1.5 2.4M Table 2: Test Accuracy of Roberta-base, BiLSTM, and BiLSTM with Roberta Pretrained Token Embedding when trained on the full dataset. The ∆column shows the difference between each model’s accuracy and that of RobertaLarge. For LSTM models, LSTM-n-k denotes an LSTM model with n layers and k cells. + Large or + Base indicate the use of Roberta Large or Roberta Base token embeddings, respectively. The number of parameters does not count the size of embedding table. decay to 0. The batch size is set to 32, with dropout being 0.1. (ii) For the LSTM, it is trained with a constant learning rate from the sequence: {2.5e −4, 5e −4, 7.5e −4, 1e −3}. The batch size is set to 64. We train each model on 8 GPUs for 10 epochs and perform early stopping based on accuracy on the test set. The maximum sequence length of input was set to 512 for all models. 4 Results 4.1 Impact of Data Size We first investigate the effect of varying the number of training samples, for fixed model and training procedure. We train different models using {1%, 10%, 30%, 50%, 70%, 90%} amount of data to mimic the “low-resource”, “medium-resource” and “high-resource” regime. Figure 1 shows that the accuracy delta between the LSTM and RoBERTa models at different percentages of the training data. From the plot, we observe the following phenomena: (i) Pretrained models exhibit a diminishing return behavior as the size of the target data grows. When we increase the number of training examples, the accuracy gap between Roberta and LSTM shrinks. For example, when both models are trained with 1% of the Yelp dataset, the accuracy gap is around 9%. However, as we increases the amount of training data to 90%, the accuracy gap drops to within 2%. The same behaviour is observed on both Amazon review datasets, with the initial gap starting at almost 5% for 1% of the training data, then shrinking all the way to within one point when most of the training data is used. 
(ii) Using the pretrained RoBERTa token embeddings can further reduce the accuracy gap especially when training data is limited. For example, in the Yelp review data, a 4-layers LSTM with pretrained embeddings provides additional 3 per2212 cent gain compared to its counterparts. As Table 2 shows, an LSTM with pretrained RoBERTa token embeddings always outperforms the ones with random token initialization. This suggests that the embeddings learned during pretraining RoBERTa may constitute an efficient approach for transfer learning the knowledge learned in these large MLM. We further report the accuracy metric of each model using all the training data. The full results are listed in Table 2. We observe that the accuracy gap is less than 1% on the Amazon datasets. even compared to 24 layers RoBERTa-large model. As for the Yelp dataset, the accuracy gap is within 2 percent from the RoBERTa-large model, despite an order of magnitude difference in the number of parameters. 4.2 Inference Time We also investigate the inference time of the three type of models on GPU and CPU. The CPU inference time is tested on Intel Xeon E5-2698 v4 with batch size 128. The GPU inference time is tested on NVIDIA Quadro P100 with batch size ∈{128, 256, 384}. The maximum sequence length is 512. We run 30 times for each settings and take the average. The results are listed in TABLE 3. Model CPU GPU Batch size 128 128 256 384 Roberta-Base 323 16.1 16.1 16.1 Roberta-Large 950 55.5 55.5 LSTM-2-256 15.2 0.47 0.43 0.42 LSTM-4-256 28.1 1.17 0.94 0.86 LSTM-4-256+Base 35.2 1.33 1.09 1.02 LSTM-4-256+Large 37.5 1.33 1.17 1.07 LSTM-4-512+Base 64.8 3.52 3.20 3.13 LSTM-4-512+Large 64.8 3.36 3.32 3.26 Table 3: Inference time (ms) of Roberta, BiLSTM on CPU and GPU Not surprisingly, the LSTM model is at least 20 time faster even when compared to the RobertaBase. Note that the P100 will be out of memory when batch size is 384 for Roberta-Large. Another observation is that although using the Roberta pretrained token embedding introduces 10 times more model parameters compared to vanilla BiLSTM, the inference time only increases by less than 25%. This is due to the most additional parameters are from a simple linear transformation. 5 Discussion Our findings in this paper indicate that increasing the number of training examples for ‘standard’ models such as LSTM leads to performance gains that are within 1 percent of their massively pretrained counterparts. Due to the fact that there is no good large scale question answering dataset, it is not clear if the same findings would hold on this type of NLP tasks, which are more challenging and semantic-based. In the future work, we will run more experiments if there are some other large scale open datasets. Despite sentiment analysis being a crucial text classification task, it is possible, though unlikely, that the patterns observed here are limited to sentiment analysis tasks only. The rationale behinds that is that pretrained LSTMs have kept up very well with transformer-based counterparts on many tasks (Radford et al.). One way to interpret our results is that ‘simple’ models have better regularization effect when trained on large amount of data, as also evidenced in the concurrent work (Nakkiran and Sutskever, 2020).The other side of the argument in interpreting our results is that MLM based pretraining still leads to improvements even as the data size scales into the millions. 
In fact, with a pretrained model and 2 million training examples, it is possible to outperform an LSTM model that is trained with 3× more examples. 6 Conclusion Finetuning BERT-style models on resource-rich downstream tasks is not well studied. In this paper, we reported that, when the downstream task has sufficiently large amount of training exampes, i.e., millions, competitive accuracy results can be achieved by training a simple LSTM, at least for text classification tasks. We further discover that reusing the token embeddings learned during BERT pretraining in an LSTM model leads to significant improvements. The findings of this work have significant implications on both the practical aspect as well as the research on pretraining. For industrial applications where there is a trade-off typically between accuracy and latency, our findings suggest it might be feasible to gain accuracy for faster models by collecting more training examples. 2213 References 2019. Yelp dataset challenge. https://www.yelp.com/dataset/challenge. Dario Amodei, Sundaram Ananthanarayanan, Rishita Anubhai, Jingliang Bai, Eric Battenberg, Carl Case, Jared Casper, Bryan Catanzaro, Qiang Cheng, Guoliang Chen, et al. 2016. Deep speech 2: End-to-end speech recognition in english and mandarin. In International conference on machine learning, pages 173–182. Michele Banko and Eric Brill. 2001. Scaling to very very large corpora for natural language disambiguation. In Proceedings of the 39th annual meeting on association for computational linguistics, pages 26– 33. Association for Computational Linguistics. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248–255. Ieee. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Kaiming He, Ross Girshick, and Piotr Doll´ar. 2019. Rethinking imagenet pre-training. In Proceedings of the IEEE International Conference on Computer Vision, pages 4918–4927. Joel Hestness, Sharan Narang, Newsha Ardalani, Gregory Diamos, Heewoo Jun, Hassan Kianinejad, Md Patwary, Mostofa Ali, Yang Yang, and Yanqi Zhou. 2017. Deep learning scaling is predictable, empirically. arXiv preprint arXiv:1712.00409. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S Weld, Luke Zettlemoyer, and Omer Levy. 2019. Spanbert: Improving pre-training by representing and predicting spans. arXiv preprint arXiv:1907.10529. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. Kaplun G. Bansal Y. Yang T. Barak B. Nakkiran, P. and I. Sutskever. 2020. Deep double descent: Where bigger models and more data hurt. ICLR 2020. Jianmo Ni, Jiacheng Li, and Julian McAuley. 2019. Justifying recommendations using distantly-labeled reviews and fine-grained aspects. 
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 188–197. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. arXiv preprint arXiv:1802.05365. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909. Rion Snow, Brendan O’Connor, Daniel Jurafsky, and Andrew Y Ng. 2008. Cheap and fast—but is it good?: evaluating non-expert annotations for natural language tasks. In Proceedings of the conference on empirical methods in natural language processing, pages 254–263. Association for Computational Linguistics. Chen Sun, Abhinav Shrivastava, Saurabh Singh, and Abhinav Gupta. 2017. Revisiting unreasonable effectiveness of data in deep learning era. In Proceedings of the IEEE international conference on computer vision, pages 843–852.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2214–2220 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2214 Why Overfitting Isn’t Always Bad: Retrofitting Cross-Lingual Word Embeddings to Dictionaries Mozhi Zhang∗ CS and UMIACS University of Maryland [email protected] Yoshinari Fujinuma∗ Computer Science University of Colorado [email protected] Michael J. Paul Information Science University of Colorado [email protected] Jordan Boyd-Graber UMD CS, iSchool, UMIACS, and LSC and Google Research Zürich [email protected] Abstract Cross-lingual word embeddings (CLWE) are often evaluated on bilingual lexicon induction (BLI). Recent CLWE methods use linear projections, which underfit the training dictionary, to generalize on BLI. However, underfitting can hinder generalization to other downstream tasks that rely on words from the training dictionary. We address this limitation by retrofitting CLWE to the training dictionary, which pulls training translation pairs closer in the embedding space and overfits the training dictionary. This simple post-processing step often improves accuracy on two downstream tasks, despite lowering BLI test accuracy. We also retrofit to both the training dictionary and a synthetic dictionary induced from CLWE, which sometimes generalizes even better on downstream tasks. Our results confirm the importance of fully exploiting the training dictionary in downstream tasks and explains why BLI is a flawed CLWE evaluation. 1 Introduction Cross-lingual word embeddings (CLWE) map words across languages to a shared vector space. Recent supervised CLWE methods follow a projection-based pipeline (Mikolov et al., 2013). Using a training dictionary, a linear projection maps pre-trained monolingual embeddings to a multilingual space. While CLWE enable many multilingual tasks (Klementiev et al., 2012; Guo et al., 2015; Zhang et al., 2016; Ni et al., 2017), most recent work only evaluates CLWE on bilingual lexicon induction (BLI). Specifically, a set of test words are translated with a retrieval heuristic (e.g., nearest neighbor search) and compared against gold translations. BLI accuracy is easy to compute and captures the desired property of CLWE that translation pairs should be close. However, BLI accuracy ∗⋆Equal contribution Retrofit Training Dictionary Original CLWE Updated CLWE Source Embedding Target Embedding Project Synthetic Dictionary Figure 1: To fully exploit the training dictionary, we retrofit projection-based CLWE to the training dictionary as a post-processing step (pink parts). To preserve correctly aligned translations in the original CLWE, we optionally retrofit to a synthetic dictionary induced from the original CLWE (orange parts). does not always correlate with accuracy on downstream tasks such as cross-lingual document classification and dependency parsing (Ammar et al., 2016; Fujinuma et al., 2019; Glavas et al., 2019). Let’s think about why that might be. BLI accuracy is only computed on test words. Consequently, BLI hides linear projection’s inability to align all training translation pairs at once; i.e., projectionbased CLWE underfit the training dictionary. Underfitting does not hurt BLI test accuracy, because test words are excluded from the training dictionary in BLI benchmarks. 
However, words from the training dictionary may be nonetheless predictive in downstream tasks; e.g., if “good” is in the training dictionary, knowing its translation is useful for multilingual sentiment analysis. In contrast, overfitting the training dictionary hurts BLI but can improve downstream models. We show this by adding a simple post-processing step to projection-based pipelines (Figure 1). After training supervised CLWE with a projection, we retrofit (Faruqui et al., 2015) the CLWE to the same training dictionary. This step pulls training translation pairs closer and overfits: the updated embeddings have perfect BLI training accuracy, but 2215 BLI test accuracy drops. Empirically, retrofitting improves accuracy in two downstream tasks other than BLI, confirming the importance of fully exploiting the training dictionary. Unfortunately, retrofitting to the training dictionary may inadvertently push some translation pairs further away. To balance between fitting the training dictionary and generalizing on other words, we explore retrofitting to both the training dictionary and a synthetic dictionary induced from the CLWE. Adding the synthetic dictionary keeps some correctly aligned translations in the original CLWE and can further improve downstream models by striking a balance between training and test BLI accuracy. In summary, our contributions are two-fold. First, we explain why BLI does not reflect downstream task accuracy. Second, we introduce two post-processing methods to improve downstream models by fitting the training dictionary better. 2 Limitation of Projection-Based CLWE This section reviews projection-based CLWE. We then discuss how BLI evaluation obscures the limitation of projection-based methods. Let X ∈Rd×n be a pre-trained d-dimensional word embedding matrix for a source language, where each column xi ∈Rd is the vector for word i from the source language with vocabulary size n, and let Z ∈Rd×m be a pre-trained word embedding matrix for a target language with vocabulary size m. Projection-based CLWE maps X and Z to a shared space. We focus on supervised methods that learn the projection from a training dictionary D with translation pairs (i, j). Mikolov et al. (2013) first propose projectionbased CLWE. They learn a linear projection W ∈ Rd×d from X to Z by minimizing distances between translation pairs in a training dictionary: min W ∑︂ (i,j)∈D ∥Wxi −zj∥2 2. (1) Recent work improves this method with different optimization objectives (Dinu et al., 2015; Joulin et al., 2018), orthogonal constraints on W (Xing et al., 2015; Artetxe et al., 2016; Smith et al., 2017), pre-processing (Zhang et al., 2019), and subword features (Chaudhary et al., 2018; Czarnowska et al., 2019; Zhang et al., 2020). Projection-based methods underfit—a linear projection has limited expressiveness and cannot perfectly align all training pairs. Unfortunately, this weakness is not transparent when using BLI as the standard evaluation for CLWE, because BLI test sets omit training dictionary words. However, when the training dictionary covers words that help downstream tasks, underfitting limits generalization to other tasks. Some BLI benchmarks use frequent words for training and infrequent words for testing (Mikolov et al., 2013; Conneau et al., 2018). This mismatch often appears in real-world data, because frequent words are easier to find in digital dicitonaries (Czarnowska et al., 2019). Therefore, training dictionary words are often more important in downstream tasks than test words. 
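As a concrete illustration of the projection step in Eq. (1), the following numpy sketch fits the linear map W from a training dictionary in closed form, both without constraints (least squares) and with the orthogonality constraint used by Procrustes-style methods. The dimensionality, dictionary size, and random data are placeholders; this is an illustration, not the solver of any particular toolkit.

import numpy as np

def fit_least_squares(X_d, Z_d):
    # Solves min_W sum_j ||W x_j - z_j||^2 in closed form: W = Z X^T (X X^T)^+.
    return Z_d @ X_d.T @ np.linalg.pinv(X_d @ X_d.T)

def fit_procrustes(X_d, Z_d):
    # Adds the orthogonality constraint W^T W = I; solution via SVD of Z X^T.
    U, _, Vt = np.linalg.svd(Z_d @ X_d.T)
    return U @ Vt

d, k = 300, 5000                 # embedding dimension, number of dictionary pairs
X_d = np.random.randn(d, k)      # source vectors of the dictionary pairs (columns)
Z_d = np.random.randn(d, k)      # corresponding target vectors (columns)
W = fit_procrustes(X_d, Z_d)
projected_source = W @ X_d       # mapped source embeddings, d x k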
3 Retrofitting to Dictionaries To fully exploit the training dictionary, we explore a simple post-processing step that overfits the dictionary: we first train projection-based CLWE and then retrofit to the training dictionary (pink parts in Figure 1). Retrofitting was originally introduced for refining monolingual word embeddings with synonym constraints from a lexical ontology (Faruqui et al., 2015). For CLWE, we retrofit using the training dictionary D as the ontology. Intuitively, retrofitting pulls translation pairs closer while minimizing deviation from the original CLWE. Let X′ and Z′ be CLWE trained by a projection-based method, where X′ = WX are the projected source embeddings and Z′ = Z are the target embeddings. We learn new CLWE Xˆ and Zˆ by minimizing L = La + Lb, (2) where La is the squared distance between the updated CLWE from the original CLWE: La = α∥Xˆ −X′∥2 + α∥Zˆ −Z′∥2, (3) and Lb is the total squared distance between translations in the dictionary: Lb = ∑︂ (i,j)∈D βij∥xˆi −zˆj∥2. (4) We use the same α and β as Faruqui et al. (2015) to balance the two objectives. Retrofitting tends to overfit. If α is zero, minimizing Lb collapses each training pair to the same vector. Thus, all training pairs are perfectly aligned. In practice, we use a non-zero α for regularization, but the updated CLWE still have perfect training BLI accuracy (Figure 2). If the training dictionary covers predictive words, we expect retrofitting to improve downstream task accuracy. 2216 DE ES FR IT JA RU ZH 40 60 80 100 Original (train) Original (test) +retrofit (train) +retrofit (test) +synthetic (train) +synthetic (test) BLI accuracy for PROC DE ES FR IT JA RU ZH 40 60 80 100 Original (train) Original (test) +retrofit (train) +retrofit (test) +synthetic (train) +synthetic (test) BLI accuracy for CCA DE ES FR IT JA RU ZH 40 60 80 100 Original (train) Original (test) +retrofit (train) +retrofit (test) +synthetic (train) +synthetic (test) BLI accuracy for RCSLS Figure 2: Train and test accuracy (P@1) for BLI on MUSE; Projection-based CLWE underfit the training dictionary (gray), but retrofitting to the training dictionary overfits (pink). Adding a synthetic dictionary balances between training and test accuracy (orange). 3.1 Retrofitting to Synthetic Dictionary While retrofitting brings pairs in the training dictionary closer, the updates may also separate translation pairs outside of the dictionary because retrofitting ignores words outside the training dictionary. This can hurt both BLI test accuracy and downstream task accuracy. In contrast, projectionbased methods underfit but can discover translation pairs outside the training dictionary. To keep the original CLWE’s correct translations, we retrofit to both the training dictionary and a synthetic dictionary induced from CLWE (orange, Figure 1). Early work induces dictionaries from CLWE through nearest-neighbor search (Mikolov et al., 2013). We instead use cross-domain similarity local scaling (Conneau et al., 2018, CSLS), a translation heuristic more robust to hubs (Dinu et al., 2015) (a word is the nearest neighbor of many words). We build a synthetic dictionary D′ with word pairs that are mutual CSLS nearest neighbors. We then retrofit the CLWE to a combined dictionary D ∪D′. The synthetic dictionary keeps closely aligned word pairs in the original CLWE, which sometimes improves downstream models. 
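The retrofitting objective of Eqs. (2)-(4) can be minimized with simple iterative averaging updates in the spirit of Faruqui et al. (2015). The numpy sketch below applies such updates to a bilingual dictionary: each word in a translation pair is pulled toward its current translations while being anchored to its original (projected) vector. The uniform weights, the symmetric update of both sides, and the iteration count are illustrative assumptions rather than the exact procedure of the paper.

import numpy as np

def retrofit(X_proj, Z, dictionary, alpha=1.0, beta=1.0, iters=10):
    """X_proj: (n, d) projected source CLWE; Z: (m, d) target CLWE;
    dictionary: list of (i, j) translation pairs. Returns updated copies."""
    X_hat, Z_hat = X_proj.copy(), Z.copy()
    # Precompute, for each word, the list of its translations in the dictionary.
    src_nbrs, tgt_nbrs = {}, {}
    for i, j in dictionary:
        src_nbrs.setdefault(i, []).append(j)
        tgt_nbrs.setdefault(j, []).append(i)
    for _ in range(iters):
        for i, js in src_nbrs.items():
            # Weighted average of the original vector and the current translations.
            X_hat[i] = (alpha * X_proj[i] + beta * Z_hat[js].sum(0)) / (alpha + beta * len(js))
        for j, is_ in tgt_nbrs.items():
            Z_hat[j] = (alpha * Z[j] + beta * X_hat[is_].sum(0)) / (alpha + beta * len(is_))
    return X_hat, Z_hat

# Toy usage: 300-d embeddings and a 3-pair dictionary.
X_proj, Z = np.random.randn(100, 300), np.random.randn(80, 300)
X_hat, Z_hat = retrofit(X_proj, Z, [(0, 0), (1, 5), (2, 7)])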
4 Experiments We retrofit three projection-based CLWE to their training dictionaries and synthetic dictionaries.1 We evaluate on BLI and two downstream tasks. While retrofitting decreases test BLI accuracy, it often improves downstream models. 4.1 Embeddings and Dictionaries We align English embeddings with six target languages: German (DE), Spanish (ES), French (FR), Italian (IT), Japanese (JA), and Chinese (ZH). We use 300-dimensional fastText vectors trained on Wikipedia and Common Crawl (Grave et al., 2018). We lowercase all words, only keep the 200K most frequent words, and apply five rounds of Iterative Normalization (Zhang et al., 2019). We use dictionaries from MUSE (Conneau et al., 2018), a popular BLI benchmark, with standard splits: train on 5K source word translations and test on 1.5K words for BLI. For each language, we train three projection-based CLWE: canonical correlation analysis (Faruqui and Dyer, 2014, CCA), 1Code at https://go.umd.edu/retro_clwe. 2217 DE ES FR IT JA RU ZH AVG 40 60 80 100 Original +retrofit +synthetic Document classification with PROC DE ES FR IT JA RU ZH AVG 20 40 60 80 Original +retrofit +synthetic Dependency parsing with PROC DE ES FR IT JA RU ZH AVG 40 60 80 100 Original +retrofit +synthetic Document classification with CCA DE ES FR IT JA RU ZH AVG 20 40 60 80 Original +retrofit +synthetic Dependency parsing with CCA DE ES FR IT JA RU ZH AVG 40 60 80 100 Original +retrofit +synthetic Document classification with RCSLS DE ES FR IT JA RU ZH AVG 20 40 60 80 Original +retrofit +synthetic Dependency parsing with RCSLS Figure 3: For each CLWE, we report accuracy for document classification (left) and unlabeled attachment score (UAS) for dependency parsing (right). Compared to the original embeddings (gray), retrofitting to the training dictionary (pink) improves average downstream task scores, confirming that fully exploiting the training dictionary helps downstream tasks. Adding a synthetic dictionary (orange) further improves test accuracy in some languages. Procrustes analysis (Conneau et al., 2018, PROC), and Relaxed CSLS loss (Joulin et al., 2018, RCSLS). We retrofit these CLWE to the training dictionary (pink in figures) and to both the training and the synthetic dictionary (orange in figures). In MUSE, words from the training dictionary have higher frequencies than words from the test set.2 For example, the most frequent word in the English-French test dictionary is “torpedo”, while the training dictionary has translations for frequent words such as “the” and “good”. As discussed in §2, more frequent words are likely to be more salient in downstream tasks, so underfitting these more frequent training pairs hurts generalization to downstream tasks.3 4.2 Intrinsic Evaluation: BLI We first compare BLI accuracy on both training and test dictionaries (Figure 2). We use CSLS to translate words with default parameters. The original projection-based CLWE have the highest test accuracy but underfit the training dictionary. Retrofitting to the training dictionary perfectly 2https://github.com/facebookresearch/ MUSE/issues/24 3A pilot study confirms that retrofitting to infrequent word pairs is less effective. fits the training dictionary but drops test accuracy. Retrofitting to the combined dictionary splits the difference: higher test accuracy but lower train accuracy. These three modes offer a continuum between BLI test and training accuracy. 
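For reference, the CSLS retrieval heuristic used in the BLI evaluation above can be sketched as follows: the cosine similarity of each candidate pair is penalized by the average similarity of each word to its k nearest neighbors on the other side, which discounts hub words. The choice k = 10 and the brute-force dense computation are simplifying assumptions for illustration.

import numpy as np

def csls_scores(X, Z, k=10):
    """X: (n, d) source CLWE, Z: (m, d) target CLWE; returns (n, m) CSLS scores."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    Zn = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    cos = Xn @ Zn.T                                     # (n, m) cosine similarities
    r_src = np.sort(cos, axis=1)[:, -k:].mean(axis=1)   # mean sim of each x to its k NNs in Z
    r_tgt = np.sort(cos, axis=0)[-k:, :].mean(axis=0)   # mean sim of each z to its k NNs in X
    return 2 * cos - r_src[:, None] - r_tgt[None, :]

def precision_at_1(X, Z, test_pairs, k=10):
    scores = csls_scores(X, Z, k)
    hits = sum(scores[i].argmax() == j for i, j in test_pairs)
    return hits / len(test_pairs)

X, Z = np.random.randn(200, 300), np.random.randn(150, 300)
print(precision_at_1(X, Z, [(0, 3), (1, 7)]))

Word pairs whose CSLS argmax agrees in both directions (mutual nearest neighbors) give the synthetic dictionary described in Section 3.1.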
4.3 Extrinsic Evaluation: Downstream Tasks We compare CLWE on two downstream tasks: document classification and dependency parsing. We fix the embeddng layer of the model to CLWE and use the zero-shot setting, where a model is trained in English and evaluated in the target language. Document Classification Our first downstream task is document-level classification. We use MLDoc, a multilingual classification benchmark (Schwenk and Li, 2018) using the standard split with 1K training and 4K test documents. Following Glavas et al. (2019), we use a convolutional neural network (Kim, 2014). We apply 0.5 dropout to the final layer, run Adam (Kingma and Ba, 2015) with default parameters for ten epochs, and report the average accuracy of ten runs. Dependency Parsing We also test on dependency parsing, a structured prediction task. We use Universal Dependencies (Nivre et al., 2019, 2218 v2.4) with the standard split. We use the biaffine parser (Dozat and Manning, 2017) in AllenNLP (Gardner et al., 2017) with the same hyperparameters as Ahmad et al. (2019). To focus on the influence of CLWE, we remove part-of-speech features (Ammar et al., 2016). We report the average unlabeled attachment score (UAS) of five runs. Results Although training dictionary retrofitting lowers BLI test accuracy, it improves both downstream tasks’ test accuracy (Figure 3). This confirms that over-optimizing the test BLI accuracy can hurt downstream tasks because training dictionary words are also important. The synthetic dictionary further improves downstream models, showing that generalization to downstream tasks must balance between BLI training and test accuracy. Qualitative Analysis As a qualitative example, coordinations improve after retrofitting to the training dictionary. For example, in the German sentence “Das Lokal ist sauber, hat einen gemütlichen ‘Raucherraum’ und wird gut besucht”, the bar (“Das Lokal”) has three properties: it is clean, has a smoking room, and is popular. However, without retrofitting, the final property “besucht” is connected to “hat” instead of “sauber”; i.e., the final clause stands on its own. After retrofitting to the English-German training dictionary, “besucht” is moved closer to its English translation “visited” and is correctly parsed as a property of the bar. 5 Related Work Previous work proposes variants of retrofitting broadly called semantic specialization methods. Our pilot experiments found similar trends when replacing retrofitting with Counter-fitting (Mrkši´c et al., 2016) and Attract-Repel (Mrkši´c et al., 2017), so we focus on retrofitting. Recent work applies semantic specialization to CLWE by using multilingual ontologies (Mrkši´c et al., 2017), transferring a monolingual ontology across languages (Ponti et al., 2019), and asking bilingual speakers to annotate task-specific keywords (Yuan et al., 2019). We instead re-use the training dictionary of the CLWE. Synthetic dictionaries are previously used to iteratively refine a linear projection (Artetxe et al., 2017; Conneau et al., 2018). These methods still underfit because of the linear constraint. We instead retrofit to the synthetic dictionary to fit the training dictionary better while keeping some generalization power of projection-based CLWE. Recent work investigates cross-lingual contextualized embeddings as an alternative to CLWE (Eisenschlos et al., 2019; Lample and Conneau, 2019; Huang et al., 2019; Wu and Dredze, 2019; Conneau et al., 2020). 
Our method may be applicable, as recent work also applies projections to contextualized embeddings (Aldarmaki and Diab, 2019; Schuster et al., 2019; Wang et al., 2020; Wu et al., 2020). 6 Conclusion and Discussion Popular CLWE methods are optimized for BLI test accuracy. They underfit the training dictionary, which hurts downstream models. We use retrofitting to fully exploit the training dictionary. This post-processing step improves downstream task accuracy despite lowering BLI test accuracy. We then add a synthetic dictionary to balance BLI test and training accuracy, which further helps downstream models on average. BLI test accuracy does not always correlate with downstream task accuracy because words from the training dictionary are ignored. An obvious fix is adding training words to the BLI test set. However, it is unclear how to balance between training and test words. BLI accuracy assumes that all test words are equally important, but the importance of a word depends on the downstream task; e.g., “the” is irrelevant in document classification but important in dependency parsing. Therefore, future work should focus on downstream tasks instead of BLI. We focus on retrofitting due to its simplicity. There are other ways to fit the dictionary better; e.g., using a non-linear projection such as a neural network. We leave the exploration of non-linear projections to future work. Acknowledgement This research is supported by NSF grant IIS1564275 and by ODNI, IARPA, via the BETTER Program contract #2019-19051600005. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein. 2219 References Wasi Ahmad, Zhisong Zhang, Xuezhe Ma, Eduard Hovy, Kai-Wei Chang, and Nanyun Peng. 2019. On difficulties of cross-lingual transfer with order differences: A case study on dependency parsing. In Conference of the North American Chapter of the Association for Computational Linguistics. Hanan Aldarmaki and Mona Diab. 2019. Contextaware cross-lingual mapping. In Conference of the North American Chapter of the Association for Computational Linguistics. Waleed Ammar, George Mulcaire, Yulia Tsvetkov, Guillaume Lample, Chris Dyer, and Noah A. Smith. 2016. Massively multilingual word embeddings. arXiv preprint arXiv:1602.01925. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2016. Learning principled bilingual mappings of word embeddings while preserving monolingual invariance. In Proceedings of Empirical Methods in Natural Language Processing. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2017. Learning bilingual word embeddings with (almost) no bilingual data. In Proceedings of the Association for Computational Linguistics. Aditi Chaudhary, Chunting Zhou, Lori Levin, Graham Neubig, David R. Mortensen, and Jaime Carbonell. 2018. Adapting word embeddings to new languages with morphological and phonological subword representations. In Proceedings of Empirical Methods in Natural Language Processing. Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the Association for Computational Linguistics. 
Alexis Conneau, Guillaume Lample, Marc’Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. 2018. Word translation without parallel data. In Proceedings of the International Conference on Learning Representations. Paula Czarnowska, Sebastian Ruder, Edouard Grave, Ryan Cotterell, and Ann Copestake. 2019. Don’t forget the long tail! A comprehensive analysis of morphological generalization in bilingual lexicon induction. In Proceedings of Empirical Methods in Natural Language Processing. Georgiana Dinu, Angeliki Lazaridou, and Marco Baroni. 2015. Improving zero-shot learning by mitigating the hubness problem. In Proceedings of the International Conference on Learning Representations Workshop Track. Timothy Dozat and Christopher D. Manning. 2017. Deep biaffine attention for neural dependency parsing. In Proceedings of the International Conference on Learning Representations. Julian Eisenschlos, Sebastian Ruder, Piotr Czapla, Marcin Kadras, Sylvain Gugger, and Jeremy Howard. 2019. MultiFiT: Efficient multi-lingual language model fine-tuning. In Proceedings of Empirical Methods in Natural Language Processing. Manaal Faruqui, Jesse Dodge, Sujay K Jauhar, Chris Dyer, Eduard Hovy, and Noah A Smith. 2015. Retrofitting word vectors to semantic lexicons. Conference of the North American Chapter of the Association for Computational Linguistics. Manaal Faruqui and Chris Dyer. 2014. Improving vector space word representations using multilingual correlation. In Proceedings of the European Chapter of the Association for Computational Linguistics. Yoshinari Fujinuma, Jordan Boyd-Graber, and Michael J. Paul. 2019. A resource-free evaluation metric for cross-lingual word embeddings based on graph modularity. In Proceedings of the Association for Computational Linguistics. Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Peters, Michael Schmitz, and Luke S. Zettlemoyer. 2017. Allennlp: A deep semantic natural language processing platform. Goran Glavas, Robert Litschko, Sebastian Ruder, and Ivan Vulic. 2019. How to (properly) evaluate crosslingual word embeddings: On strong baselines, comparative analyses, and some misconceptions. In Proceedings of the Association for Computational Linguistics. Edouard Grave, Piotr Bojanowski, Prakhar Gupta, Armand Joulin, and Tomas Mikolov. 2018. Learning word vectors for 157 languages. In Proceedings of the Language Resources and Evaluation Conference. Jiang Guo, Wanxiang Che, David Yarowsky, Haifeng Wang, and Ting Liu. 2015. Cross-lingual dependency parsing based on distributed representations. In Proceedings of the Association for Computational Linguistics. Haoyang Huang, Yaobo Liang, Nan Duan, Ming Gong, Linjun Shou, Daxin Jiang, and Ming Zhou. 2019. Unicoder: A universal language encoder by pretraining with multiple cross-lingual tasks. In Proceedings of Empirical Methods in Natural Language Processing. Armand Joulin, Piotr Bojanowski, Tomas Mikolov, Hervé Jégou, and Edouard Grave. 2018. Loss in translation: Learning bilingual word mapping with a retrieval criterion. In Proceedings of Empirical Methods in Natural Language Processing. 2220 Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of Empirical Methods in Natural Language Processing. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations. Alexandre Klementiev, Ivan Titov, and Binod Bhattarai. 2012. 
Inducing crosslingual distributed representations of words. In Proceedings of International Conference on Computational Linguistics. Guillaume Lample and Alexis Conneau. 2019. Crosslingual language model pretraining. In Proceedings of Advances in Neural Information Processing Systems. Tomas Mikolov, Quoc V. Le, and Ilya Sutskever. 2013. Exploiting similarities among languages for machine translation. arXiv preprint arXiv:1309.4168. Nikola Mrkši´c, Diarmuid Ó Séaghdha, Blaise Thomson, Milica Gaši´c, Lina M. Rojas-Barahona, PeiHao Su, David Vandyke, Tsung-Hsien Wen, and Steve Young. 2016. Counter-fitting word vectors to linguistic constraints. In Conference of the North American Chapter of the Association for Computational Linguistics. Nikola Mrkši´c, Ivan Vuli´c, Diarmuid Ó Séaghdha, Ira Leviant, Roi Reichart, Milica Gaši´c, Anna Korhonen, and Steve Young. 2017. Semantic specialisation of distributional word vector spaces using monolingual and cross-lingual constraints. Transactions of the Association for Computational Linguistics, 5:309–324. Jian Ni, Georgiana Dinu, and Radu Florian. 2017. Weakly supervised cross-lingual named entity recognition via effective annotation and representation projection. In Proceedings of the Association for Computational Linguistics. Joakim Nivre, Mitchell Abrams, Željko Agi´c, and et al. 2019. Universal dependencies 2.4. LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics (ÚFAL), Faculty of Mathematics and Physics, Charles University. Edoardo Maria Ponti, Ivan Vuli´c, Goran Glavaš, Roi Reichart, and Anna Korhonen. 2019. Cross-lingual semantic specialization via lexical relation induction. In Proceedings of Empirical Methods in Natural Language Processing. Tal Schuster, Ori Ram, Regina Barzilay, and Amir Globerson. 2019. Cross-lingual alignment of contextual word embeddings, with applications to zeroshot dependency parsing. In Conference of the North American Chapter of the Association for Computational Linguistics. Holger Schwenk and Xian Li. 2018. A corpus for multilingual document classification in eight languages. In Proceedings of the Language Resources and Evaluation Conference. Samuel L. Smith, David H. P. Turban, Steven Hamblin, and Nils Y. Hammerla. 2017. Offline bilingual word vectors, orthogonal transformations and the inverted softmax. In Proceedings of the International Conference on Learning Representations. Zirui Wang, Jiateng Xie, Ruochen Xu, Yiming Yang, Graham Neubig, and Jaime Carbonell. 2020. Crosslingual alignment vs joint training: A comparative study and a simple unified framework. In Proceedings of the International Conference on Learning Representations. Shijie Wu, Alexis Conneau, Haoran Li, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Emerging crosslingual structure in pretrained language models. In Proceedings of the Association for Computational Linguistics. Shijie Wu and Mark Dredze. 2019. Beto, bentz, becas: The surprising cross-lingual effectiveness of BERT. In Proceedings of Empirical Methods in Natural Language Processing. Chao Xing, Dong Wang, Chao Liu, and Yiye Lin. 2015. Normalized word embedding and orthogonal transform for bilingual word translation. In Conference of the North American Chapter of the Association for Computational Linguistics. Michelle Yuan, Mozhi Zhang, Benjamin Van Durme, Leah Findlater, and Jordan Boyd-Graber. 2019. Interactive refinement of cross-lingual word embeddings. arXiv preprint arXiv:1911.03070. Mozhi Zhang, Yoshinari Fujinuma, and Jordan BoydGraber. 2020. 
Exploiting cross-lingual subword similarities in low-resource document classification. In Association for the Advancement of Artificial Intelligence. Mozhi Zhang, Keyulu Xu, Ken-ichi Kawarabayashi, Stefanie Jegelka, and Jordan Boyd-Graber. 2019. Are girls neko or sh¯ojo? cross-lingual alignment of non-isomorphic embeddings with iterative normalization. In Proceedings of the Association for Computational Linguistics. Yuan Zhang, David Gaddy, Regina Barzilay, and Tommi Jaakkola. 2016. Ten pairs to tag – multilingual POS tagging via coarse mapping between embeddings. In Conference of the North American Chapter of the Association for Computational Linguistics.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2221–2234 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2221 XtremeDistil: Multi-stage Distillation for Massive Multilingual Models Subhabrata Mukherjee Microsoft Research AI Redmond, WA [email protected] Ahmed Hassan Awadallah Microsoft Research AI Redmond, WA [email protected] Abstract Deep and large pre-trained language models are the state-of-the-art for various natural language processing tasks. However, the huge size of these models could be a deterrent to using them in practice. Some recent works use knowledge distillation to compress these huge models into shallow ones. In this work we study knowledge distillation with a focus on multilingual Named Entity Recognition (NER). In particular, we study several distillation strategies and propose a stage-wise optimization scheme leveraging teacher internal representations, that is agnostic of teacher architecture, and show that it outperforms strategies employed in prior works. Additionally, we investigate the role of several factors like the amount of unlabeled data, annotation resources, model architecture and inference latency to name a few. We show that our approach leads to massive compression of teacher models like mBERT by upto 35x in terms of parameters and 51x in terms of latency for batch inference while retaining 95% of its F1-score for NER over 41 languages. 1 Introduction Motivation: Pre-trained language models have shown state-of-the-art performance for various natural language processing applications like text classification, named entity recognition and questionanswering. A significant challenge facing practitioners is how to deploy these huge models in practice. For instance, models like BERT Large (Devlin et al., 2019), GPT 2 (Radford et al., 2019), Megatron (Shoeybi et al., 2019) and T5 (Raffel et al., 2019) have 340M, 1.5B, 8.3B and 11B parameters respectively. Although these models are trained offline, during prediction we need to traverse the deep neural network architecture stack involving a large number of parameters. This significantly increases latency and memory requirements. Knowledge distillation (Hinton et al., 2015; Ba and Caruana, 2014) earlier used in computer vision provides one of the techniques to compress huge neural networks into smaller ones. In this, shallow models (called students) are trained to mimic the output of huge models (called teachers) based on a transfer set. Similar approaches have been recently adopted for language model distillation. Limitations of existing work: Recent works (Liu et al., 2019; Zhu et al., 2019; Tang et al., 2019; Turc et al., 2019) leverage soft logits from teachers as optimization targets for distilling students, with some notable exceptions from concurrent work. Sun et al. (2019); Sanh (2019); Aguilar et al. (2019); Zhao et al. (2019) additionally use internal teacher representations as additional signals. However, these methods are constrained by architectural considerations like embedding dimension in BERT and transformer architecture. This makes it difficult to massively compress models (without being able to reduce network width) or adopt alternate architecture. For instance, we observe BiLSTMS as students to be more accurate than Transformers for low latency configurations. Some concurrent works (Turc et al., 2019); (Zhao et al., 2019) adopt pre-training or dual training to distil students of arbitrary architecture. 
However, pre-training is expensive in terms of time and computational resources. Additionally, most of the above works are geared for distilling language models for GLUE tasks (Wang et al., 2018). There has been some limited exploration of such techniques for sequence tagging tasks like NER (Izsak et al., 2019; Shi et al., 2019) or multilingual tasks (Tsai et al., 2019). However, these works also suffer from similar drawbacks as mentioned before. Overview of XtremeDistil: In this work, we compare distillation strategies used in all the above XtremeDistil: Multilingual pre-TRainEd ModEl Distillation 2222 works and propose a new scheme outperforming prior ones. In this, we leverage teacher internal representations to transfer knowledge to the student. However, in contrast to prior work, we are not restricted by the choice of student architecture. This allows representation transfer from Transformerbased teacher model to BiLSTM-based student model with different embedding dimensions and disparate output spaces. We also propose a stagewise optimization scheme to sequentially transfer most general to task-specific information from teacher to student for better distillation. Overview of our task: Unlike prior works mostly focusing on GLUE tasks in a single language, we employ our techniques to study distillation for massive multilingual Named Entity Recognition (NER) over 41 languages. Prior work on multilingual transfer on the same (Rahimi et al., 2019) (MMNER) requires knowledge of source and target language whereby they judiciously select pairs for effective transfer resulting in a customized model for each language. In our work, we adopt Multilingual Bidirectional Encoder Representations from Transformer (mBERT) as our teacher and show that it is possible to perform language-agnostic joint NER for all languages with a single model that has a similar performance but massively compressed in contrast to mBERT and MMNER. The closest one to this work is that of (Tsai et al., 2019) where mBERT is leveraged for multilingual NER. We discuss this in details and use their strategy as a baseline. We show our distillation strategy to be better leading to a higher compression and faster inference. We also investigate several unexplored dimensions of distillation like the impact of unlabeled transfer data and annotation resources, choice of multilingual word embeddings, architectural variations and inference latency. Our techniques obtain massive compression of teacher models like mBERT by upto 35x in terms of parameters and 51x in terms of latency for batch inference while retaining 95% of its performance for massive multilingual NER, and matching or outperforming it for classification tasks. Overall, our work makes the following contributions: • Method: We propose a distillation method leveraging internal representations and parameter projection that is agnostic of teacher architecture. • Inference: To learn model parameters, we propose stage wise optimization schedule with gradual unfreezing outperforming prior schemes. • Experiments: We perform distillation for multilingual NER on 41 languages with massive compression and comparable performance to huge models1. We also perform classification experiments on four datasets where our compressed models perform at par with significantly larger teachers. 
• Study: We study the influence of several factors on distillation like the availability of annotation resources for different languages, model architecture, quality of multilingual word embeddings, memory footprint and inference latency. Problem Statement: Consider a sequence x = ⟨xk⟩with K tokens and y = ⟨yk⟩as the corresponding labels. Consider Dl = {⟨xk,l⟩, ⟨yk,l⟩} to be a set of n labeled instances with X = {⟨xk,l⟩} denoting the instances and Y = {⟨yk,l⟩} the corresponding labels. Consider Du = {⟨xk,u⟩} to be a transfer set of N unlabeled instances from the same domain where n ≪N. Given a teacher T (θt), we want to train a student S(θs) with θ being trainable parameters such that |θs| ≪|θt| and the student is comparable in performance to the teacher based on some evaluation metric. In the following section, the superscript ‘t’ always represents the teacher and ‘s’ denotes the student. 2 Related Work Model compression and knowledge distillation: Prior works in the vision community dealing with huge architectures like AlexNet and ResNet have addressed this challenge in two ways. Works in model compression use quantization (Gong et al., 2014), low-precision training and pruning the network, as well as their combination (Han et al., 2016) to reduce the memory footprint. On the other hand, works in knowledge distillation leverage student teacher models. These approaches include using soft logits as targets (Ba and Caruana, 2014), increasing the temperature of the softmax to match that of the teacher (Hinton et al., 2015) as well as using teacher representations (Romero et al., 2015) (refer to (Cheng et al., 2017) for a survey). Recent and concurrent Works: Liu et al. (2019); Zhu et al. (2019); Clark et al. (2019) leverage ensembling to distil knowledge from several multitask deep neural networks into a single model. Sun et al. (2019); Sanh (2019);Aguilar et al. (2019) train student models leveraging architectural knowledge 1Code and resources available at: https://aka.ms/ XtremeDistil 2223 of the teacher models which adds architectural constraints (e.g., embedding dimension) on the student. In order to address this shortcoming, more recent works combine task-specific distillation with pre-training the student model with arbitrary embedding dimension but still relying on transformer architectures (Turc et al., 2019); (Jiao et al., 2019); (Zhao et al., 2019). Izsak et al. (2019); Shi et al. (2019) extend these for sequence tagging for Part-of-Speech (POS) tagging and Named Entity Recognition (NER) in English. The one closest to our work Tsai et al. (2019) extends the above for multilingual NER. Most of these works rely on general corpora for pre-training and task-specific labeled data for distillation. To harness additional knowledge, (Turc et al., 2019) leverage task-specific unlabeled data. (Tang et al., 2019; Jiao et al., 2019) use rule-and embedding-based data augmentation. 3 Models The Student: The input to the model are Edimensional word embeddings for each token. To capture sequential information in the sentence, we use a single layer Bidirectional Long Short Term Memory Network (BiLSTM). Given a sequence of K tokens, a BiLSTM computes a set of K vectors h(xk) = [−−−→ h(xk); ←−−− h(xk)] as the concatenation of the states generated by a forward (−−−→ h(xk)) and backward LSTM (←−−− h(xk)). Assuming the number of hidden units in the LSTM to be H, each hidden state h(xk) is of dimension 2H. 
The Teacher: Pre-trained language models like ELMo (Peters et al., 2018), BERT (Devlin et al., 2019) and GPT (Radford et al., 2018, 2019) have shown state-of-the-art performance for several tasks. We adopt BERT as the teacher – specifically, the multilingual version of BERT (mBERT) with 179MM parameters, trained over the 104 languages with the largest Wikipedias. mBERT does not use any markers to distinguish languages during pre-training and learns a single language-agnostic model trained via masked language modeling over Wikipedia articles from all languages.

Tokenization: Similar to mBERT, we use WordPiece tokenization with a 110K shared WordPiece vocabulary. We preserve casing, remove accents, and split on punctuation and whitespace.

Fine-tuning the Teacher: The pre-trained language models are trained for general language modeling objectives. In order to adapt them to the given task, the teacher is fine-tuned end-to-end with the task-specific labeled data D_l to learn parameters ˜θ_t using the cross-entropy loss as in Equation 2.

4 Distillation Features

Teacher fine-tuning gives us access to task-specific representations for distilling the student. To this end, we use different kinds of teacher information.

4.1 Teacher Logits

Logits, as logarithms of predicted probabilities, provide a better view of the teacher by emphasizing the different relationships it learns across different instances. Consider p^t(x_k) to be the classification probability of token x_k as generated by the fine-tuned teacher, with logit(p^t(x_k)) representing the corresponding logits. Our objective is to train a student model with these logits as targets. Given the hidden state representation h(x_k) for token x_k, we can obtain the corresponding classification score (since the targets are logits) as:

r^s(x_k) = W^r \cdot h(x_k) + b^r    (3)

where W^r ∈ R^{C×2H} and b^r ∈ R^C are trainable parameters and C is the number of classes. We train the student neural network end-to-end by minimizing the element-wise mean-squared error between the classification scores given by the student and the target logits from the teacher:

L_{LL} = \frac{1}{2} \sum_{x_u \in D_u} \sum_{k} \lVert r^s(x_{k,u}) - \mathrm{logit}(p^t(x_{k,u}; ˜θ_t)) \rVert^2    (4)
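A sketch of the logit-matching objective in Eq. (3)-(4), continuing the Keras student above; the teacher logits are assumed to be precomputed by running the fine-tuned mBERT teacher over the unlabeled transfer set, and the batch size and epoch count are placeholders.

    # Sketch of logit distillation (Eq. 3-4): a linear head r^s(x_k) = W^r h(x_k) + b^r
    # trained with element-wise MSE against precomputed teacher logits on unlabeled data.
    import tensorflow as tf

    C = 11  # number of labels (illustrative)

    h = student.get_layer("bilstm").output                    # (batch, K, 2H)
    scores = tf.keras.layers.Dense(C, name="logit_head")(h)   # r^s(x_k), no activation
    logit_student = tf.keras.Model(student.input, scores)

    logit_student.compile(optimizer="adam",
                          loss=tf.keras.losses.MeanSquaredError())  # corresponds to L_LL

    # x_unlabeled:    (N, MAX_LEN) token ids from the transfer set D_u
    # teacher_logits: (N, MAX_LEN, C) logits from the fine-tuned mBERT teacher
    # logit_student.fit(x_unlabeled, teacher_logits, batch_size=512, epochs=5)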
4.2 Internal Teacher Representations

Hidden representations: Recent works (Sun et al., 2019; Romero et al., 2015) have shown the hidden state information from the teacher to be helpful as hint-based guidance for the student. Given a large collection of task-specific unlabeled data, we can transfer the teacher's knowledge to the student via its hidden representations. However, this poses a challenge in our setting, as the teacher and student models have different architectures with disparate output spaces. Consider h^s(x_k) and z^t_l(x_k; ˜θ_t) to be the representations generated by the student and the l-th deep layer of the fine-tuned teacher, respectively, for a token x_k. Consider x_u ∈ D_u to be the set of unlabeled instances. We will later discuss the choice of the teacher layer l and its impact on distillation.

Projection: To make the output spaces compatible, we perform a non-linear projection of the student representation h^s so that it has the same shape as the teacher representation z^t_l for each token x_k:

\tilde{z}^s(x_k) = \mathrm{Gelu}(W^f \cdot h^s(x_k) + b^f)    (5)

where W^f ∈ R^{|z^t_l| × 2H} is the projection matrix, b^f ∈ R^{|z^t_l|} is the bias, and Gelu (Gaussian Error Linear Unit) (Hendrycks and Gimpel, 2016) is the non-linear projection function; |z^t_l| represents the embedding dimension of the teacher. This transformation aligns the output spaces of the student and the teacher and allows us to accommodate an arbitrary student architecture. Also note that the projections (and therefore the parameters) are shared across tokens at different timesteps. The projection parameters are learned by minimizing the KL-divergence (KLD) between the student and the l-th layer teacher representations:

L_{RL} = \sum_{x_u \in D_u} \sum_{k} \mathrm{KLD}(\tilde{z}^s(x_{k,u}), z^t_l(x_{k,u}; ˜θ_t))    (6)

Multilingual word embeddings: A large number of parameters reside in the word embeddings. For mBERT, a shared multilingual WordPiece vocabulary of V = 110K tokens and an embedding dimension of D = 768 lead to 92MM parameters. To obtain massive compression, we cannot directly incorporate the mBERT embeddings in our model. Since we use the same WordPiece vocabulary, we are likely to benefit more from these embeddings than from Glove (Pennington et al., 2014) or FastText (Bojanowski et al., 2016). We use a dimensionality reduction algorithm like Singular Value Decomposition (SVD) to project the mBERT word embeddings to a lower-dimensional space. Given the mBERT word embedding matrix of dimension V × D, SVD finds the best E-dimensional representation that minimizes the sum of squares of the projections (of rows) onto the subspace.

Algorithm 1: Multi-stage distillation.
  Fine-tune the teacher on D_l and update ˜θ_t
  for stage in {1, 2, 3} do
    Freeze all student layers l′ ∈ {1 · · · L}
    if stage = 1 then
      output = \tilde{z}^s(x_u); target = teacher representations on D_u from the l-th layer, z^t_l(x_u; ˜θ_t); loss = L_RL
    if stage = 2 then
      output = r^s(x_u); target = teacher logits on D_u, logit(p^t(x_u; ˜θ_t)); loss = L_LL
    if stage = 3 then
      output = p^s(x_l); target = y_l ∈ D_l; loss = L_CE
    for layer l′ ∈ {L · · · 1} do
      Unfreeze l′
      Update parameters θ^s_{l′}, θ^s_{l′+1}, · · ·, θ^s_L by minimizing the chosen loss between the student output and the teacher target
    end
  end

5 Training

We want to optimize the loss functions for the representations (L_RL), the logits (L_LL) and the cross-entropy (L_CE). These optimizations can be scheduled differently to obtain different training regimens, as follows.

5.1 Joint Optimization

Here, we optimize the following losses jointly:

\frac{1}{|D_l|} \sum_{(x_l, y_l) \in D_l} \alpha \cdot L_{CE}(x_l, y_l) + \frac{1}{|D_u|} \sum_{x_u \in D_u} \big( \beta \cdot L_{RL}(x_u) + \gamma \cdot L_{LL}(x_u) \big)    (7)

where α, β and γ weigh the contributions of the different losses. A high value of α makes the student focus more on the easy targets, whereas a high value of γ shifts the focus to the difficult ones. The above loss is computed over two different task-specific data segments: the first part involves the cross-entropy loss over labeled data, whereas the second part involves the representation and logit losses over unlabeled data.
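To illustrate Eq. (5)-(7), the sketch below adds the non-linear projection on top of the BiLSTM states and combines the three losses with weights α, β, γ; the teacher hidden size (768 for mBERT) and the loss weights are placeholders. Softmax-normalizing both vectors before the KL-divergence is only one possible reading of Eq. (6), not necessarily the authors' exact implementation.

    # Sketch of the representation loss (Eq. 5-6) and the joint objective (Eq. 7),
    # continuing the Keras sketches above.
    import tensorflow as tf

    D_T, ALPHA, BETA, GAMMA = 768, 1.0, 1.0, 1.0

    h = student.get_layer("bilstm").output
    projection = tf.keras.layers.Dense(D_T, activation="gelu",  # Eq. (5); "gelu" needs TF >= 2.4
                                       name="projection")(h)

    multi_student = tf.keras.Model(
        student.input, [student.output, logit_student.output, projection])

    kld = tf.keras.losses.KLDivergence()

    def representation_loss(teacher_hidden, student_proj):
        # L_RL: KL-divergence between normalized teacher layer-l states and projected student states.
        return kld(tf.nn.softmax(teacher_hidden, axis=-1),
                   tf.nn.softmax(student_proj, axis=-1))

    multi_student.compile(
        optimizer="adam",
        loss=[tf.keras.losses.SparseCategoricalCrossentropy(),  # L_CE on labeled data D_l
              tf.keras.losses.MeanSquaredError(),               # L_LL on unlabeled data D_u
              representation_loss],                             # L_RL on unlabeled data D_u
        loss_weights=[ALPHA, GAMMA, BETA])

    # In practice the two data segments of Eq. (7) are fed separately, e.g. by alternating
    # labeled and unlabeled batches or by masking the targets that do not apply to a batch.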
5.2 Stage-wise Training

Instead of optimizing all loss functions jointly, we propose a stage-wise scheme to gradually transfer the most general to the most task-specific representations from teacher to student. In this scheme, we first train the student to mimic teacher representations from its l-th layer by optimizing L_RL on unlabeled data. The student learns the parameters for the word embeddings (θ_w), the BiLSTM (θ_b) and the projections ⟨W^f, b^f⟩. In the second stage, we jointly optimize the cross-entropy loss L_CE on labeled data and the logit loss L_LL on unlabeled data to learn the corresponding parameters W^s and ⟨W^r, b^r⟩. The above can be further broken down into two stages, where we sequentially optimize the logit loss L_LL on unlabeled data and then the cross-entropy loss L_CE on labeled data. Every stage learns parameters conditioned on those learned in the previous stage, followed by end-to-end fine-tuning.

5.3 Gradual Unfreezing

One potential drawback of end-to-end fine-tuning for stage-wise optimization is 'catastrophic forgetting' (Howard and Ruder, 2018), where the model forgets information learned in earlier stages. To address this, we adopt gradual unfreezing, where we tune the model one layer at a time, starting from the configuration at the end of the previous stage. We start from the top layer that contains the most task-specific information and allow the model to configure the task-specific layer first, while the others remain frozen. The remaining layers are gradually unfrozen one by one and the model is trained till convergence. Once a layer is unfrozen, it maintains its state. When the last layer (word embeddings) is unfrozen, the entire network is trained end-to-end. The order of this unfreezing scheme (top-to-bottom) is the reverse of that in Howard and Ruder (2018), and we find it to work better in our setting with the following intuition. At the end of the first stage of optimizing L_RL, the student has learned to generate representations similar to those of the l-th layer of the teacher. Now, we need to add only a few task-specific parameters (⟨W^r, b^r⟩) to optimize the logit loss L_LL with all others frozen. Next, we gradually give the student more flexibility to optimize the task-specific loss by tuning the layers below, where the number of parameters increases with depth (|⟨W^r, b^r⟩| ≪ |θ_b| ≪ |θ_w|). We tune each layer for n epochs and restore the model to the best configuration based on the validation loss on a held-out set. Therefore, the model retains the best possible performance from any iteration. Algorithm 1 shows the overall processing scheme.

Dataset          Labels   Train   Test   Unlabeled
NER
  Wikiann-41     11       705K    329K   7.2MM
Classification
  IMDB           2        25K     25K    50K
  DBPedia        14       560K    70K    –
  AG News        4        120K    7.6K   –
  Elec           2        25K     25K    200K
Table 1: Full dataset summary.

Work                                             PT   TA   Distil.
Sanh (2019)                                      Y    Y    D1
Turc et al. (2019)                               Y    N    D1
Liu et al. (2019); Zhu et al. (2019);
Shi et al. (2019); Tsai et al. (2019);
Tang et al. (2019); Izsak et al. (2019);
Clark et al. (2019)                              N    N    D1
Sun et al. (2019)                                N    Y    D2
Jiao et al. (2019)                               N    N    D2
Zhao et al. (2019)                               Y    N    D2
XtremeDistil (ours)                              N    N    D4
Table 2: Different distillation strategies. D1 leverages soft logits with hard labels. D2 uses representation loss. PT denotes pre-training with language modeling. TA depicts students constrained by teacher architecture.

6 Experiments

Dataset Description: We evaluate our model XtremeDistil for multilingual NER on 41 languages in the same setting as Rahimi et al. (2019). This data is derived from the WikiAnn NER corpus (Pan et al., 2017) and partitioned into training, development and test sets. All NER results are reported on this test set for a fair comparison with existing works. We report the average F1-score (µ) and standard deviation (σ) across the 41 languages for phrase-level evaluation. Refer to Figure 2 for language codes and the corresponding distribution of training labels.
We also perform experiments with data from four other domains (refer to Table 1): IMDB (Maas et al., 2011), SST2 (Socher et al., 2013) and Elec (McAuley and Leskovec, 2013) for sentiment analysis for movie and electronics product reviews, DbPedia (Zhang et al., 2015) and Ag News (Zhang et al., 2015) for topic classification of Wikipedia and news articles. NER Tags: The NER corpus uses IOB2 tagging strategy with entities like LOC, ORG and PER. Following mBERT, we do not use language markers and share these tags across all languages. We 2226 Strategy Features Transfer = 0.7MM Transfer = 1.4MM Transfer = 7.2MM D0 Labels per lang. 71.26 (6.2) D0-S Labels across all lang. 81.44 (5.3) D1 Labels and Logits 82.74 (5.1) 84.52 (4.8) 85.94 (4.8) D2 Labels, Logits and Repr. 82.38 (5.2) 83.78 (4.9) 85.87 (4.9) D3.1 (S1) Repr. (S2) Labels and Logits 83.10 (5.0) 84.38 (5.1) 86.35 (4.9) D3.2 + Gradual unfreezing 86.77 (4.3) 87.79 (4.0) 88.26 (4.3) D4.1 (S1) Repr. (S2) Logits (S3) Labels 84.82 (4.7) 87.07 (4.2) 87.87 (4.1) D4.2 + Gradual unfreezing 87.10 (4.2) 88.64 (3.8) 88.52 (4.1) Table 3: Comparison of several strategies with average F1-score (and standard deviation) across 41 languages over different transfer data size. Si depicts separate stages and corresponding optimized loss functions. use additional syntactic markers like {CLS, SEP, PAD} and ‘X’ for marking segmented wordpieces contributing a total of 11 tags (with shared ‘O’). 6.1 Evaluating Distillation Strategies Baselines: A trivial baseline (D0) is to learn models one per language using only corresponding labels for learning. This can be improved by merging all instances and sharing information across all languages (D0-S). Most of the concurrent and recent works (refer to Table 2 for an overview) leverage logits as optimization targets for distillation (D1). A few exceptions also use teacher internal representations along with soft logits (D2). For our model we consider multi-stage distillation, where we first optimize representation loss followed by jointly optimizing logit and cross-entropy loss (D3.1) and further improving it by gradual unfreezing of neural network layers (D3.2). Finally, we optimize the loss functions sequentially in three stages (D4.1) and improve it further by unfreezing mechanism (D4.2). We further compare all strategies while varying the amount of unlabeled transfer data for distillation (hyper-parameter settings in Appendix). Results: From Table 3, we observe all strategies that share information across languages to work better (D0-S vs. D0) with soft logits adding more value than hard targets (D1 vs. D0-S). Interestingly, we observe simply combining representation loss with logits (D3.1 vs. D2) hurts the model. We observe this strategy to be vulnerable to the hyperparameters (α, β, γ in Eqn. 7) used to combine multiple loss functions. We vary hyper-parameters in multiples of 10 and report best numbers. Stage-wise optimizations remove these hyperparameters and improve performance. We also observe the gradual unfreezing scheme to improve both stage-wise distillation strategies significantly. Stage Unfreezing Layer F1 Std. Dev. 2 Linear (⟨W r, br⟩) 0 0 2 Projection (⟨W f, bf⟩) 2.85 3.9 2 BiLSTM (θb) 81.64 5.2 2 Word Emb (θw) 85.99 4.4 3 Softmax (W s) 86.38 4.2 3 Projection (⟨W f, bf⟩) 87.65 3.9 3 BiLSTM (θb) 88.08 3.9 3 Word Emb (θw) 88.64 3.8 Table 4: Gradual F1-score improvement over multiple distillation stages in XtremeDistil . Model F1 Std. Dev. 
mBERT-single (Devlin et al., 2019)    90.76    3.1
mBERT (Devlin et al., 2019)           91.86    2.7
MMNER (Rahimi et al., 2019)           89.20    2.8
XtremeDistil (ours)                   88.64    3.8
Table 5: F1-score comparison of different models with standard deviation across 41 languages.

Focusing on the data dimension, we observe all models to improve as more and more unlabeled data is used for transferring teacher knowledge to the student. However, we also observe the improvement to slow down after a point, where additional unlabeled data does not yield significant benefits. Table 4 shows the gradual performance improvement in XtremeDistil after every stage and after unfreezing the various neural network layers.

6.2 Performance, Compression and Speedup

Performance: We observe XtremeDistil in Table 5 to perform competitively with the other models. The mBERT-single models are fine-tuned per language with the corresponding labels, whereas mBERT is fine-tuned with data across all languages. MMNER results are reported from Rahimi et al. (2019). Figure 2 shows the variation in F1-score across different languages with varying amounts of training data for the different models. We observe all the models to follow the general trend, with some aberrations for languages with fewer training labels.

[Figure 1: Variation in XtremeDistil F1-score with parameter and latency compression against mBERT. (a) Parameter compression vs. F1-score; (b) Inference speedup vs. F1-score. Each point in the linked scatter plots depicts a setting with the corresponding embedding dimension and BiLSTM hidden states as (E, H). The data point (50, 200) in both figures corresponds to 35x compression and 51x speedup.]

Parameter compression: XtremeDistil performs at par with MMNER in terms of F1-score while obtaining at least 41x compression. Given L languages, MMNER learns (L − 1) ensembled and distilled models, one for each target language. Each of the MMNER language-specific models is comparable in size to our single multilingual model. We learn a single model for all languages, thereby obtaining a compression factor of at least L = 41. Figure 1a shows the variation in F1-scores of XtremeDistil and compression against mBERT with different configurations corresponding to the embedding dimension (E) and the number of BiLSTM hidden states (2×H). We observe that reducing the embedding dimension leads to great compression with minimal performance loss. Reducing the BiLSTM hidden states, on the other hand, impacts the performance more and contributes less to the compression.

Inference speedup: We compare the runtime inference efficiency of mBERT and our model on a single P100 GPU for batch inference (batch size = 32) on 1000 queries of sequence length 32. We average the time taken for predicting labels for all the queries for each model, aggregated over 100 runs. Compared to batch inference, the speedups are lower for online inference (batch size = 1), at 17x on an Intel(R) Xeon(R) CPU (E5-2690 v4 @ 2.60GHz) (refer to the Appendix for details).
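As an illustration of this measurement protocol, the following is a minimal sketch; the authors' exact timing harness is not described, so the function and variable names here are assumptions.

    # Illustrative batch-inference timing: 1000 queries of length 32, batch size 32,
    # averaged over 100 runs, following the description above.
    import time
    import numpy as np

    def mean_batch_latency(model, n_queries=1000, seq_len=32, batch_size=32, runs=100):
        queries = np.random.randint(1, 1000, size=(n_queries, seq_len))  # dummy token ids
        timings = []
        for _ in range(runs):
            start = time.perf_counter()
            model.predict(queries, batch_size=batch_size, verbose=0)
            timings.append(time.perf_counter() - start)
        return float(np.mean(timings))

    # speedup = mean_batch_latency(teacher_model) / mean_batch_latency(student_model)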
Model #Transfer Samples F1 MMNER 62.1 mBERT 79.54 XtremeDistil 4.1K 19.12 705K 76.97 1.3MM 77.17 7.2MM 77.26 Table 6: F1-score comparison for low-resource setting with 100 labeled samples per language and transfer set of different sizes for XtremeDistil . Figure 1b shows the variation in F1-scores of XtremeDistil and inference speedup against mBERT with different (linked) parameter configurations as before. As expected, the performance degrades with gradual speedup. We observe that parameter compression does not necessarily lead to an inference speedup. Reduction in the word embedding dimension leads to massive model compression, however, it does not have a similar effect on the latency. The BiLSTM hidden states, on the other hand, constitute the real latency bottleneck. One of the best configurations leads to 35x compression, 51x speedup over mBERT retaining nearly 95% of its performance. 6.3 Low-resource NER and Distillation Models in all prior experiments are trained on 705K labeled instances across all languages. In this setting, we consider only 100 labeled samples for each language with a total of 4.1K instances. From Table 6, we observe mBERT to outperform MMNER by more than 17 percentage points with XtremeDistil closely following suit. Furthermore, we observe our model’s performance to improve with the transfer set size depicting the importance of unlabeled transfer data for knowledge distillation. As before, a lot of additional data has marginal contribution. 6.4 Word Embeddings From Table 7 we observe randomly initialized word embeddings to work quite well. Multilingual FastText embeddings (Bojanowski et al., 2016) lead to minor improvement due to 38% overlap between FastText tokens and mBERT wordpieces. English Glove does much better. We experiment with dimensionality reduction techniques and find SVD to work better leading to marginal improvement over mBERT embeddings before reduction. As expected, fine-tuned mBERT embeddings perform better than that from pre-trained checkpoints. 2228 0 5 10 15 20 25 70 75 80 85 90 95 100 af hi sq bn lt lv mk tl bs et sl ta ar bg ca cs da de el en es fa fi fr he hr hu id it ms nl no pl pt ro ru sk sv tr uk vi XtremeDistil MBERT-Single MBERT MMNER #Train-Samples Figure 2: F1-score comparison for different models across 41 languages. The y-axis on the left shows the scores, whereas the axis on the right (plotted against blue dots) shows the number of training labels (in thousands). Word Embedding F1 Std. Dev. SVD + mBERT (fine-tuned) 88.64 3.8 mBERT (fine-tuned) 88.60 3.9 SVD + mBERT (pre-trained) 88.54 3.9 PCA + PPA (d=14) (Raunak et al., 2019) 88.35 3.9 PCA + PPA (d=17) (Raunak et al., 2019) 88.25 4.0 Glove (Pennington et al., 2014) 88.16 4.0 FastText (Bojanowski et al., 2016) 87.91 3.9 Random 87.43 4.1 Table 7: Impact of using various word embeddings for initialization on multilingual distillation. SVD, PCA, FastText and Glove use 300-dim. word embeddings. 6.5 Architectural Considerations Which teacher layer to distil from? The topmost teacher layer captures more task-specific knowledge. However, it may be difficult for a shallow student to capture this knowledge given its limited capacity. On the other hand, the less-deep representations at the middle of teacher model are easier to mimic by shallow student. From Table 8 we observe the student to benefit most from distilling the 6th or 7th layer of the teacher. Layer F1Std. (l) score Dev. 11 88.46 3.8 9 88.31 3.8 7 88.64 3.8 6 88.64 3.8 Layer F1Std. (l) score Dev. 
4 88.19 4 2 88.50 4 1 88.51 4 Table 8: Comparison of XtremeDistil performance on distilling representations from lth mBERT layer. Comparison of student architecture. Recent works leverage both BiLSTM and Transformer as students. In this experiment, we vary the embedding dimension and hidden states for BiLSTM-, and embedding dimension and depth for Transformer-based students to obtain configurations with similar inference latency. Each of 13 configurations in Figure 3 depict F1-scores obtained (50,100) (200,100) (300,100) (50,200) (300,200) (50,400) (200,400) (100,400) (300,400) (50,600) (100,600) (200,600) (300,600) (48,2) (144,1) (72,2) (96,2) (132,2) (204,2) (228,2) (240,2) (252,2) (228,3) (240,3) (252,3) (276,3) 0 0.2 0.4 0.6 0.8 1 1.2 1.4 1.6 1.8 2 72 74 76 78 80 82 84 86 1 2 3 4 5 6 7 8 9 10 11 12 13 BiLSTM F-score Transformer Fscore BiLSTM Latency Transformer Latency Figure 3: BiLSTM and Transformer F1-score (left yaxis) vs. inference latency (right y-axis) in 13 different settings with corresponding embedding dimension and width / depth of the student as (E, W/D). by students of different architecture but similar latency (refer to Table 15 in Appendix for statistics) – for strategy D0-S in Table 3. We observe that for low-latency configurations BiLSTMs with hidden states {2×100, 2×200} work better than 2-layer Transformers. Whereas, the latter starts performing better with more than 3-layers although with a higher latency compared to the aforementioned BiLSTM configurations. 6.6 Distillation for Text Classification We switch gear and focus on classification tasks. In contrast to sequence tagging, we use the last hidden state of the BiLSTM as the final sentence representation for projection, regression and softmax. Table 9 shows the distillation performance of XtremeDistil with different teachers on four benchmark text classification datasets. We observe the student to almost match the teacher performance for all of the datasets. The performance also improves with a better teacher, although the improvement is marginal as the student capacity saturates. Table 10 shows the distillation performance with only 500 labeled samples per class. The distilled student improves over the non-distilled version by 19.4 percent and matches the teacher performance for all of the tasks demonstrating the impact of distillation for low-resource settings. 2229 Data Student Distil Distil BERT BERT no distil. (Base) (Large) (Base) (Large) AG 89.71 92.33 94.33 92.12 94.63 IMDB 89.37 91.22 91.70 91.70 93.22 Elec 90.62 93.55 93.56 93.46 94.27 DB 98.64 99.10 99.06 99.26 99.20 Table 9: Distillation performance with BERT. Dataset Student Student BERT no distil. with distil. Large AG News 85.85 90.45 90.36 IMDB 61.53 89.08 89.11 Elec 65.68 91.00 90.41 DBpedia 96.30 98.94 98.94 Table 10: Distillation with BERT Large on 500 labeled samples per class. Comparison with other distillation techniques: SST-2 (Socher et al., 2013) from GLUE (Wang et al., 2018) has been used as a test bed for other distillation techniques for single instance classification tasks (as in this work). Table 11 shows the accuracy comparison of such methods reported in SST-2 development set with the same teacher. We extract 11.7MM sentences from all IMDB movie reviews in Table 1 to form the unlabeled transfer set for distillation. We obtain the best performance on distilling with BERT Large (uncased, whole word masking model) than BERT Base – demonstrating a better student performance with a better teacher and outperforming other methods. 
7 Summary Teacher hidden representation and distillation schedule: Internal teacher representations help in distillation, although a naive combination hurts the student model. We show that a distillation schedule with stagewise optimization, gradual unfreezing with a cosine learning rate scheduler (D4.1 + D4.2 in Table 3) obtains the best performance. We also show that the middle layers of the teacher are easier to distil by shallow students and result in the best performance (Table 8). Additionally, the student performance improves with bigger and better teachers (Tables 9 and 11). Model Transfer Set Acc. BERT Large Teacher 94.95 XtremeDistil SST+Imdb 93.35 BERT Base Teacher 92.78 XtremeDistil SST+Imdb 92.89 Sun et al. (2019) SST 92.70 Turc et al. (2019) SST+IMDB 91.10 Table 11: Model accuracy on of SST-2 (dev. set). Student architecture: We compare different student architectures like BiLSTM and Transformer in terms of configuration and performance (Figure 3, Table 15 in Appendix), and observe BiLSTM to perform better at low-latency configurations, whereas the Transformer outperforms the former with more depth and higher latency budget. Unlabeled transfer data: We explored data dimension in Tables 3 and 6 and observed unlabeled data to be the key for knowledge transfer from pretrained teachers to shallow students and bridge the performance gap. We observed a moderate amount of unlabeled transfer samples (0.7-1.5 MM) lead to the best student, whereas larger amounts of transfer data does not result in significant gains. This is particularly helpful for low-resource NER (with only 100 labeled samples per language as in Table 6). Performance trade-off: Parameter compression does not necessarily reduce inference latency, and vice versa. We explored model performance with parameter compression, inference latency and F1 to show trade-off in Fig. 1 and Table 16 in Appendix. Multilingual word embeddings: Random initialization of word embeddings work well. A better initialization, which is also parameter-efficient, is given by Singular Value Decomposition (SVD) over fine-tuned mBERT word embeddings with the best performance for downstream task (Table 7). Generalization: The outlined distillation techniques and strategies are model-, architecture-, and language-agnostic and can be easily extended to arbitrary tasks and languages, although we only focus on NER and classification in this work. Massive compression: Our techniques demonstrate massive compression (35x for parameters) and inference speedup (51x for latency) while retaining 95% of the teacher performance allowing deep pre-trained models to be deployed in practice. 8 Conclusions We develop XtremeDistil for massive multi-lingual NER and classification that performs close to huge pre-trained models like MBERT but with massive compression and inference speedup. Our distillation strategy leveraging teacher representations agnostic of its architecture and stage-wise optimization schedule outperforms existing ones. We perform extensive study of several distillation dimensions like the impact of unlabeled transfer set, embeddings and student architectures, and make interesting observations outlined in summary. 2230 References Gustavo Aguilar, Yuan Ling, Yu Zhang, Benjamin Yao, Xing Fan, and Edward Guo. 2019. Knowledge distillation from internal representations. Jimmy Ba and Rich Caruana. 2014. Do deep nets really need to be deep? 
In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pages 2654–2662. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2016. Enriching word vectors with subword information. Yu Cheng, Duo Wang, Pan Zhou, and Tao Zhang. 2017. A survey of model compression and acceleration for deep neural networks. CoRR, abs/1710.09282. Kevin Clark, Minh-Thang Luong, Urvashi Khandelwal, Christopher D. Manning, and Quoc V. Le. 2019. Bam! born-again multi-task networks for natural language understanding. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171–4186. Yunchao Gong, Liu Liu, Ming Yang, and Lubomir D. Bourdev. 2014. Compressing deep convolutional networks using vector quantization. CoRR, abs/1412.6115. Song Han, Huizi Mao, and William J. Dally. 2016. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. ICLR. Dan Hendrycks and Kevin Gimpel. 2016. Bridging nonlinearities and stochastic regularizers with gaussian error linear units. CoRR, abs/1606.08415. Geoffrey E. Hinton, Oriol Vinyals, and Jeffrey Dean. 2015. Distilling the knowledge in a neural network. CoRR, abs/1503.02531. Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 328–339. Peter Izsak, Shira Guskin, and Moshe Wasserblat. 2019. Training compact models for low resource entity tagging using pre-trained language models. Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. 2019. Tinybert: Distilling bert for natural language understanding. Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. 2019. Improving multi-task deep neural networks via knowledge distillation for natural language understanding. CoRR, abs/1904.09482. Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In The 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference, 2011, Portland, Oregon, USA, pages 142–150. Julian J. McAuley and Jure Leskovec. 2013. Hidden factors and hidden topics: understanding rating dimensions with review text. In Seventh ACM Conference on Recommender Systems, RecSys ’13, Hong Kong, China, October 12-16, 2013, pages 165–172. Xiaoman Pan, Boliang Zhang, Jonathan May, Joel Nothman, Kevin Knight, and Heng Ji. 2017. Crosslingual name tagging and linking for 282 languages. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1946–1958, Vancouver, Canada. Association for Computational Linguistics. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. 
In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1532–1543. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers), pages 2227–2237. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. ArXiv, abs/1910.10683. 2231 Afshin Rahimi, Yuan Li, and Trevor Cohn. 2019. Massively multilingual transfer for NER. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 151–164, Florence, Italy. Association for Computational Linguistics. Vikas Raunak, Vivek Gupta, and Florian Metze. 2019. Effective dimensionality reduction for word embeddings. Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019). Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio. 2015. Fitnets: Hints for thin deep nets. In 3rd International Conference on Learning Representations, ICLR2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Victor Sanh. 2019. Introducing distilbert, a distilled version of bert. https://medium.com/ huggingface/distilbert-8cf3380435b5. Yangyang Shi, Mei-Yuh Hwang, Xin Lei, and Haoyu Sheng. 2019. Knowledge distillation for recurrent neural network language modeling with trust regularization. ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Mohammad Shoeybi, Mostofa Ali Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catanzaro. 2019. Megatron-lm: Training multi-billion parameter language models using model parallelism. ArXiv, abs/1909.08053. Richard Socher, John Bauer, Christopher D. Manning, and Andrew Y. Ng. 2013. Parsing with compositional vector grammars. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 455–465, Sofia, Bulgaria. Association for Computational Linguistics. Siqi Sun, Yu Cheng, Zhe Gan, and Jingjing Liu. 2019. Patient knowledge distillation for bert model compression. Raphael Tang, Yao Lu, Linqing Liu, Lili Mou, Olga Vechtomova, and Jimmy Lin. 2019. Distilling taskspecific knowledge from BERT into simple neural networks. CoRR, abs/1903.12136. Henry Tsai, Jason Riesa, Melvin Johnson, Naveen Arivazhagan, Xin Li, and Amelia Archer. 2019. Small and practical bert models for sequence labeling. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Iulia Turc, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Well-read students learn better: On the importance of pre-training compact models. 
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353–355, Brussels, Belgium. Association for Computational Linguistics. Xiang Zhang, Junbo Jake Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 712, 2015, Montreal, Quebec, Canada, pages 649– 657. Sanqiang Zhao, Raghav Gupta, Yang Song, and Denny Zhou. 2019. Extreme language model compression with optimal subwords and shared projections. Wei Zhu, Xiaofeng Zhou, Keqiang Wang, Xun Luo, Xiepeng Li, Yuan Ni, and Guotong Xie. 2019. PANLP at MEDIQA 2019: Pre-trained language models, transfer learning and knowledge distillation. In Proceedings of the 18th BioNLP Workshop and Shared Task, pages 380–388, Florence, Italy. Association for Computational Linguistics. 2232 A Appendices A.1 Implementation XtremeDistil uses Tensorflow. Code and resources available at: https://aka.ms/XtremeDistil. A.2 Parameter Configurations All the analyses in the paper — except compression and speedup experiments that vary embedding dimension E and BiLSTM hidden states H — are done with the following model configuration in Table 12 with the best F1-score. Optimizer Adam is used with cosine learning rate scheduler (lr high = 0.001, lr low = 1e −8). The model corresponding to the 35x parameter compression and 51x speedup for batch inference uses E = 50 and H = 2 × 200. Parameter Value SVD + MBERT word emb. dim. E=300 BiLSTM hidden states H=2×600 Dropout 0.2 Batch size 512 Teacher layer 7 Optimizer Adam Table 12: XtremeDistil config. with best F1 = 88.64. Following hyper-parameter tuning was done to select dropout rate and batch size. Dropout Rate F1-score 1e-4 87.94 0.1 88.36 0.2 88.49 0.3 88.46 0.6 87.26 0.8 85.49 Table 13: Impact of dropout. Batch size F1-score 128 87.96 512 88.4 1024 88.24 2048 88.13 4096 87.63 Table 14: Impact of batch size. 2233 BiLSTM Transformer Emb. Hidden F1Params Latency Emb. Depth Params Latency F1Dim. States Score (MM) Dim. (MM) Score 50 100 80.26 4.7 0.311 48 2 4.4 0.307 76.67 200 100 79.21 18.1 0.354 144 1 13.4 0.357 78.49 300 100 79.63 27 0.385 72 2 6.7 0.388 77.98 50 200 81.22 5.1 0.472 96 2 9 0.47 79.19 300 200 80.04 27.7 0.593 132 2 12.5 0.6 80 50 400 81.98 6.5 0.892 204 2 19.7 0.88 80.96 200 400 80.61 20.2 0.978 228 2 22.1 0.979 80.87 100 400 81.54 11.1 1 240 2 23.3 1.03 80.79 300 400 80.16 29.4 1.06 252 2 24.6 1.075 80.84 50 600 81.78 8.5 1.5 228 3 22.7 1.448 83.75 100 600 81.94 13.1 1.53 240 3 24 1.498 84.07 200 600 80.7 22.5 1.628 252 3 25.3 1.591 84.08 300 600 81.42 31.8 1.766 276 3 28 1.742 84.06 Table 15: Pairwise BiLSTM and Transformer configurations (with varying embedding dimension, hidden states and depth) vs. latency and F1 scores for distillation strategy D0 −S. Embedding BiLSTM F1Std. Params Params Speedup Speedup Dimension States score Dev. 
(MM) (Compression) (bsz=32) (bsz=1) 300 600 88.64 3.8 31.8 5.6 14 8 200 600 88.5 3.8 22.5 8 15 9 300 400 88.21 4 29.4 6.1 23 11 200 400 88.16 3.9 20.2 8.9 25 12 100 600 87.93 4.1 13.1 13.7 16 9 100 400 87.7 4 11.1 16.1 24 13 50 600 87.67 4 8.5 21.1 16 10 300 200 87.54 4.1 27.7 6.5 40 15 200 200 87.47 4.2 18.7 9.6 46 16 50 400 87.19 4.3 6.5 27.5 27 13 100 200 86.89 4.2 9.6 18.6 49 15 50 200 86.46 4.3 5.1 35.1 51 16 300 100 86.19 4.3 27 6.6 62 16 200 100 85.88 4.4 18.1 9.9 68 17 100 100 85.64 4.5 9.2 19.5 74 15 50 100 84.6 4.7 4.7 38.1 77 16 Table 16: Parameter compression and inference speedup vs. F1-score with varying embedding dimension and BiLSTM hidden states. Online inference is in Intel( R) Xeon(R) CPU (E5-2690 v4 @2.60GHz) and batch inference is in a single P100 GPU for distillation strategy D4. 2234 Lang #Train Ours BERT MBERT MMNER af 5 87 89 91 84 hi 5 84 85 88 85 sq 5 91 93 93 88 bn 10 91 83 95 95 lt 10 87 89 90 86 lv 10 90 92 93 91 mk 10 92 93 94 91 tl 10 94 88 95 93 bs 15 91 93 93 92 et 15 89 92 91 90 sl 15 92 93 94 92 ta 15 77 82 84 84 ar 20 85 88 89 88 bg 20 90 93 93 90 ca 20 91 94 93 91 cs 20 91 92 93 90 da 20 91 93 93 90 de 20 84 89 89 86 el 20 86 90 90 89 en 20 78 83 84 81 es 20 90 92 93 90 fa 20 90 92 93 93 fi 20 89 91 92 89 fr 20 87 91 91 88 he 20 79 85 85 85 hr 20 90 92 93 89 hu 20 90 93 93 90 id 20 92 92 93 91 it 20 88 93 92 89 ms 20 90 92 93 91 nl 20 89 93 92 89 no 20 91 93 93 90 pl 20 88 91 92 89 pt 20 89 92 93 90 ro 20 93 94 94 92 ru 20 85 88 90 86 sk 20 92 93 94 91 sv 20 94 95 95 93 tr 20 90 92 93 90 uk 20 88 92 93 89 vi 20 89 91 92 88 Table 17: F1-scores of different models per language. BERT represents MBERT fine-tuned separately for each language. Other models including XtremeDistil (ours) is jointly fine-tuned over all languages.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2235–2245 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2235 A Girl Has A Name: Detecting Authorship Obfuscation Asad Mahmood Zubair Shafiq Padmini Srinivasan The University of Iowa {asad-mahmood,zubair-shafiq,padmini-srinivasan}@uiowa.edu Abstract Authorship attribution aims to identify the author of a text based on the stylometric analysis. Authorship obfuscation, on the other hand, aims to protect against authorship attribution by modifying a text’s style. In this paper, we evaluate the stealthiness of state-of-the-art authorship obfuscation methods under an adversarial threat model. An obfuscator is stealthy to the extent an adversary finds it challenging to detect whether or not a text modified by the obfuscator is obfuscated – a decision that is key to the adversary interested in authorship attribution. We show that the existing authorship obfuscation methods are not stealthy as their obfuscated texts can be identified with an average F1 score of 0.87. The reason for the lack of stealthiness is that these obfuscators degrade text smoothness, as ascertained by neural language models, in a detectable manner. Our results highlight the need to develop stealthy authorship obfuscation methods that can better protect the identity of an author seeking anonymity. 1 Introduction Authorship attribution aims to identify the author of a text using stylometric techniques designed to capitalize on differences in the writing style of different authors. Owing to recent advances in machine learning, authorship attribution methods can now identify authors with impressive accuracy (Abbasi and Chen, 2008) even in challenging settings such as cross-domain (Overdorf and Greenstadt, 2016) and at a large-scale (Narayanan et al., 2012; Ruder et al., 2016). Such powerful authorship attribution methods pose a threat to privacyconscious users such as journalists and activists who may wish to publish anonymously (Times, 2018; Anonymous, 2018). Authorship obfuscation, a protective countermeasure, aims to evade authorship attribution by obfuscating the writing style in a text. Since it is challenging to accomplish this manually, researchers have developed automated authorship obfuscation methods that can evade attribution while preserving semantics (PAN, 2018). However, a key limitation of prior work is that authorship obfuscation methods do not consider the adversarial threat model where the adversary is “obfuscation aware” (Karadzhov et al., 2017; Potthast et al., 2018; Mahmood et al., 2019). Thus, in addition to evading attribution and preserving semantics, it is important that authorship obfuscation methods are “stealthy” – i.e., they need to hide the fact that text was obfuscated from the adversary. In this paper, we investigate the stealthiness of state-of-the-art authorship obfuscation methods. Our intuition is that the application of authorship obfuscation results in subtle differences in text smoothness (as compared to human writing) that can be exploited for obfuscation detection. To capitalize on this intuition, we use off-theshelf pre-trained neural language models such as BERT and GPT-2 to extract text smoothness features in terms of word likelihood. We then use these as features to train supervised machine learning classifiers. The results show that we can accurately detect whether or not a text is obfuscated. 
Our findings highlight that existing authorship obfuscation methods themselves leave behind stylistic signatures that can be detected using neural language models. Our results motivate future research on developing stealthy authorship obfuscation methods for the adversarial threat model where the adversary is obfuscation aware. Our key contributions are as follows: • We study the problem of obfuscation detection for state-of-the-art authorship obfuscation methods. This and the underlying property of stealthiness has been given scant attention in the literature. We also note that this problem is potentially more challenging 2236 than the related one of synthetic text detection since most of the original text can be retained during obfuscation. • We explore 160 distinct BERT and GPT-2 based neural language model architectures designed to leverage text smoothness for obfuscation detection. • We conduct a comprehensive evaluation of these architectures on 2 different datasets. Our best architecture achieves F1 of 0.87, on average, demonstrating the serious lack of stealthiness of existing authorship obfuscation methods. Paper Organization: The rest of this paper proceeds as follows. Section 2 summarizes related work on authorship obfuscation and obfuscation detection. Section 3 presents our proposed approach for obfuscation detection using neural language models. Section 4 presents details of our experimental setup including the description of various authorship obfuscation and obfuscation detection methods. We present the experimental results in Section 5 before concluding. The relevant source code and data are available at https://github.com/asad1996172/ Obfuscation-Detection. 2 Related Work In this section, we separately discuss prior work on authorship obfuscation and obfuscation detection. 2.1 Authorship Obfuscation Given the privacy threat posed by powerful authorship attribution methods, researchers have started to explore text obfuscation as a countermeasure. Early work by Brennan et al. (2012) instructed users to manually obfuscate text such as by imitating the writing style of someone else. Anonymouth (McDonald et al., 2012, 2013) was proposed to automatically identify the words and phrases that were most revealing of an author’s identity so that these could be manually obfuscated by users. Follow up research leveraged automated machine translation to suggest alternative sentences that can be further tweaked by users (Almishari et al., 2014; Keswani et al., 2016). Unfortunately, these methods are not effective or scalable because it is challenging to manually obfuscate text even with some guidance. Moving towards full automation, the digital text forensics community (Potthast and Hagen, 2018) has developed rule-based authorship obfuscators (Mansoorizadeh et al., 2016; Karadzhov et al., 2017; Castro-Castro et al., 2017). For example, Karadzhov et al. (2017) presented a rule-based obfuscation approach to adapt the style of a text towards the “average style” of the text corpus. Castro et al. (2017) presented another rule-based obfuscation approach to “simplify” the style of a text. Researchers have also proposed search and model based approaches for authorship obfuscation. For example, Mahmood et al. (2019) proposed a genetic algorithm approach to “search” for words that when changed, using a sentimentpreserving word embedding, would have the maximum adverse effect on authorship attribution. Bevendorff et al. 
(2019) proposed a heuristicbased search algorithm to find words that when changed using operators such as synonyms or hypernyms, increased the stylistic distance to the author’s text corpus. Shetty et al. (2018) used Generative Adversarial Networks (GANs) to “transfer” the style of an input text to a target style. Emmery et al. (2018) used auto-encoders with a gradient reversal layer to “de-style” an input text (aka style invariance). 2.2 Obfuscation Detection Prior work has successfully used stylometric analysis to detect manual authorship obfuscation (Juola, 2012; Afroz et al., 2012). The intuition is that humans tend to follow a particular style as they try to obfuscate a text. In a related area, Shahid et al. (2017) used stylometric analysis to detect whether or not a document was “spun” by text spinners. We show later that these stylometric-methods do not accurately detect more advanced automated authorship obfuscation methods. There is increasing interest in distinguishing synthetic text generated using deep learning based language models such as BERT and GPT-2 from human written text. Using contextual word likelihoods, as estimated using a pre-trained language model (Radford et al., 2019), Gehrmann et al. (2019) were able to raise the accuracy of humans at detecting synthetic text from 54% to 72%. Zellers et al. (2019) showed that a classifier based on a language model can accurately detect synthetic text generated by the same language model. 2237 However, the detection accuracy degrades when different language models are used to generate and to detect. Bakhtin et al. (2019) also showed that the detection accuracy degrades when the synthetic text is generated using a language model trained on a different corpus. In summary, recent research has leveraged language models to detect their generated synthetic text. However, in obfuscation we start with human written text and make modifications such that text semantics is still preserved. This is in part achieved by retaining chunks of the original writing. Thus, the quirks of the obfuscator will be mingled in unpredictable proportions and ways with the author’s original writing style. This makes the detection of obfuscated text different and potentially more challenging than synthetic text detection. To the best of our knowledge, this work presents the first systematic study of the detection of automatically obfuscated text. 3 Proposed Approach 3.1 Intuition An automated authorship obfuscator changes the input text so that it evades authorship attribution while preserving semantics. The quality and smoothness of automated text transformations using the state-of-the-art obfuscators differ from that of human written text (Mahmood et al., 2019). Therefore, the intuition behind our obfuscation detectors is to exploit the differences in text smoothness between human written and obfuscated texts. We capture text smoothness using powerful pretrained context aware neural language models.1 A text with a relatively greater proportion of high likelihood words is likely to be more smooth. 3.2 Detector Architectures Figure 1 shows the pipeline of our method for detecting whether or not a given text is obfuscated. First, a language model is used to extract the likelihood (in the form of probability or rank) for each word in the text. Second, these likelihoods are used to build a smoothness representation for the text. This is input to a supervised machine learning model that is trained to classify the text as human written or obfuscated. 
The three steps correspond to three significant architectural dimensions of our detectors, with multiple algorithmic options in each dimension. Combinations of choices along each dimension yield different architectures that can be used by an adversary to detect obfuscated documents. We detail each dimension next.

3.2.1 Word likelihood extraction

Given a word sequence, language models are designed to predict the next word. They do this by building contextual models of word occurrences as probability distributions over the full vocabulary. Then some heuristic is used to pick the next word, e.g., select the word with the highest probability. In our case, instead of word prediction, we extract the likelihood from the language model (either as a probability or as a rank) for each word in the text given its context. The language model has a critical role. Thus, we use neural language models with deep architectures, trained on large amounts of data, which are better at identifying both long-term and short-term context. In order to imitate an adversary who may not have the significant resources needed to train such models, we use off-the-shelf pre-trained neural language models. Specifically, we choose the well-known context-aware neural language models GPT-2 (Radford et al., 2019) and BERT (Devlin et al., 2018). We choose both as they use different approaches. GPT-2 has been shown to perform better than BERT (Gehrmann et al., 2019) at synthetic text detection, with word rank giving higher performance than word probability. Their relative merit for obfuscation detection is unknown.

1) GPT-2. GPT-2, released by OpenAI in 2019, uses at its core a variation of the "transformer" architecture, an attention-based model (Vaswani et al., 2017), and is trained on text from 45 million outbound links on Reddit (40 GB worth of text). We use GPT-2 to compute the conditional probability for word i as p(w_i | w_1...i−1). The position of w_i in the sorted list (descending order of probability) of vocabulary words gives the word rank. The authors (Radford et al., 2019) trained four versions of GPT-2 differing in architecture size. Of these, we used the small and medium versions containing 117M and 345M parameters, respectively. The authors eventually also released a large version containing 762M parameters and a very large version containing 1542M parameters.2 We did not use them because only the small and medium versions were released at the time of our experimentation.

[Figure 1: Pipeline for obfuscation detection. Input text is passed to a language model (GPT-2 117M, GPT-2 345M, BERT base, or BERT large) for word likelihood extraction (probabilities or ranks), followed by feature representation (binning or VGG-19) and a classification model (SVM, RFC, KNN, ANN, or GNB).]

2) BERT. BERT, released by Google in 2018, is also based on "Transformers". It is trained on text from Wikipedia (2.5B words) and BookCorpus (800M words). BERT considers a bidirectional context, unlike the uni-directional context considered by GPT-2. Thus, in BERT the conditional occurrence probability for word i is p(w_i | w_i−k...i−1, w_i+1...i+k), where k is the window size in each direction. Rank is computed in the same way as for GPT-2. We use both pre-trained BERT models: BERT BASE with 110M parameters and BERT LARGE with 340M parameters.

1 BERT: https://ai.googleblog.com/2018/11/opensourcing-bert-state-of-art-pre.html; GPT-2: https://openai.com/blog/better-language-models
2 https://openai.com/blog/gpt-2-6-month-follow-up/
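As an illustration of the per-token probability and rank extraction just described, the following sketch uses the Hugging Face transformers library; this library choice is an assumption made for illustration and is not the implementation used in the paper (which is described next).

    # Illustrative per-token probability/rank extraction with GPT-2 (not the GLTR code used in the paper).
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")     # the 117M-parameter "small" model
    model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

    def token_probs_and_ranks(text):
        ids = tokenizer(text, return_tensors="pt")["input_ids"][0]
        with torch.no_grad():
            logits = model(ids.unsqueeze(0)).logits[0]        # (T, vocab)
        probs, ranks = [], []
        for i in range(1, len(ids)):                          # p(w_i | w_1...i-1)
            dist = torch.softmax(logits[i - 1], dim=-1)
            probs.append(dist[ids[i]].item())
            # rank = position of the observed token when the vocabulary is sorted by probability
            ranks.append(int((dist > dist[ids[i]]).sum().item()) + 1)
        return probs, ranks

    probs, ranks = token_probs_and_ranks("A girl has a name.")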
We implement likelihood extraction for both GPT-2 and BERT using code made available by the Giant Language Model Test Room (GLTR) tool.3

3.2.2 Feature Representation

We experiment with two different representations of smoothness. Each is explored with occurrence probabilities and with ranks.

1) Binning based features: Text smoothness is represented by the likelihood of the words in the text. A text with a greater proportion of high likelihood words is likely to be smoother. We aggregate this information using fixed-size bins representing different likelihood ranges. For probabilities we create bin sizes of 0.001, 0.005 and 0.010. For ranks we create bin sizes of 10, 50 and 100. Thus, for example, one feature representation considers bins of ranks from 0 to 10, 11 to 20, 21 to 30, and so on. Each bin contains the proportion of words in the document with likelihood in that range.

2) Image based features: Since the word likelihood values received from language models are in essence signals, we explore signal detection approaches as well. For example, for audio classification, Hershey et al. (2017) store plots of the log-mel spectrogram of the audio as images and then apply image classification methods. VGG (Simonyan and Zisserman, 2014) was one of the top performers among the different classifiers they tested. Inspired by them, we explore obfuscation detection via image classification. Specifically, we explore a transfer learning approach wherein we use the VGG-19 classifier4 trained for image classification on the ImageNet dataset5. For our method, we sort the extracted likelihood values for the text in descending order and then plot these values, saving the plot as an image. This image is then processed by the pre-trained VGG-19. We extract the document's6 representation from the last flatten layer of VGG-19 (before the fully connected layers), as it contains high-level information regarding edges and patterns in the image. We expect this resulting feature representation vector to capture information regarding text smoothness.

3.2.3 Classification

We experiment with: a Support Vector Machine (SVM) with a linear kernel; a Random Forest Classifier (RFC), an ensemble learning method; K Nearest Neighbors (KNN), a non-parametric method; an Artificial Neural Network (ANN), a parametric method; and Gaussian Naive Bayes (GNB), a probabilistic method. All classifiers are trained using the default parameters from scikit-learn7, except for the ANN, where we use the lbfgs solver instead of adam because it is more performant and works well on smaller datasets.

3.2.4 Detection Architectures

Options selected for each dimension combine to form a distinct obfuscation detection architecture.

3 https://github.com/HendrikStrobelt/detecting-fake-text
4 https://keras.io/applications/#vgg19
5 http://www.image-net.org/
6 The terms 'text' and 'document' are used interchangeably.
7 https://scikit-learn.org/stable/

With 4 language models, each giving probabilities or ranks as output, 4 feature representations (3 binning based and 1 image based) and 5 different classifiers, we experiment with a total of 160 distinct architectures. The assumption here is that a determined adversary will similarly look for the most effective obfuscation detector.
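Before turning to the experimental setup, the following sketch shows how the rank-binning representation and one of the above classifiers could be combined, building on the rank extraction sketched earlier; the bin size of 50 is one of the sizes mentioned above, while the rank cap of 1000 and the choice of Random Forest with scikit-learn defaults are illustrative assumptions.

    # Sketch of rank-binning features followed by a Random Forest classifier (scikit-learn defaults).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def rank_bin_features(ranks, bin_size=50, max_rank=1000):
        bins = np.arange(0, max_rank + bin_size, bin_size)
        hist, _ = np.histogram(np.clip(ranks, 0, max_rank), bins=bins)
        return hist / max(len(ranks), 1)      # proportion of tokens falling in each rank bin

    # docs: list of texts; labels: 1 = obfuscated, 0 = original (human written)
    # X = np.stack([rank_bin_features(token_probs_and_ranks(d)[1]) for d in docs])
    # clf = RandomForestClassifier().fit(X, labels)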
4 Experimental Setup 4.1 Authorship Obfuscation Approaches As state-of-the-art automated authorship obfuscators we identified the top two systems (Potthast et al., 2018) from PAN, a shared CLEF task.8 We also chose Mutant-X, a search based system presented in (Mahmood et al., 2019), which shows better performance than the PAN obfuscation systems. These are detailed next. Document Simplification (Castro-Castro et al., 2017). This approach obfuscates by applying rulebased text simplifications on the input document. The process is as follows. 1) If the number of contractions in the document is greater than the number of expansions, then replace all contractions with expansions otherwise replace all expansions with contractions. 2) Simplify by removing parenthetical texts that do not contain any named entity, discourse markers or appositions. 3) Replace words with synonyms that haven’t been already used in the text. We implement this approach and refer to it as DS-PAN17. Style Neutralization (Karadzhov et al., 2017). This system is also a rule-based text obfuscator. First they calculate the average values for the whole corpus for stylometric features such as stopword to non stopword ratio, punctuation to word count ratio and average number of words per sentence. Next, they calculate the values of same stylomteric features for the input document. Finally, using text transformation rules (e.g., replace ! with !!, merge or split sentences etc.) they move the document’s stylometric feature values towards the corpus averages. We evaluate this approach using the code provided by the authors and refer to it as SN-PAN16. MUTANT-X (Mahmood et al., 2019) This system uses a genetic algorithm (GAs) in combination with an authorship attribution system to identify words that when changed would have the highest positive effect towards obfuscation. Text 8https://pan.webis.de/ transformations are done using a sentiment preserving variation of Word2Vec (Yu et al., 2017). The authors present two versions: MUTANT-X writeprintsRFC, built using a traditional machine learning based authorship attribution system and MUTANT-X embeddingCNN, built using a deep learning based authorship attribution system. We evaluate MUTANT-X embeddingCNN using code made available by authors. 4.2 Data We use the two data collections which were used by (Mahmood et al., 2019). 1) Extended Brennan Greenstadt corpus. This text corpus from (Brennan et al., 2012) contains 699 documents written by 45 unique authors. Documents are mainly academic in nature but they do not contain any citations and section headings and have under 500 words, as instructed by data collectors. We refer to this collection as EBG. 2) Blog authorship corpus. This text corpus which is from (Schler et al., 2006) contains more than 600,000 blogger.com blog posts written by 19,320 unique authors. These texts are more casual in nature and are written without any instructions. We refer to this collection as BLOG. The authors in (Mahmood et al., 2019) tested their GA obfuscators using 49 EBG and 200 BLOG documents written by 10 authors. 4.2.1 Obfuscated versus Evaded Datasets A document is considered obfuscated if it has been processed by an authorship obfuscation tool. Thus we processed each document in the 49 EBG and 200 BLOG sets using each of the three obfuscation systems in turn (DS-PAN17, SN-PAN16 and MUTANT-X embeddingCNN). 
This resulted in 127 obfuscated EBG documents and 571 obfuscated BLOG documents.9 An obfuscated document is viewed as an evaded document if it successfully evades authorship attribution. Using the writeprintsRFC authorship attributor (Mahmood et al., 2019) for this purpose, we process all obfuscated documents (127 and 571 for EBG and BLOG respectively). This resulted 9DS-PAN17, SN-PAN16 and MUTANT-X embeddingCNN respectively created 49, 49 and 29 obfuscated documents for EBG whereas for BLOG they created 200, 200 and 171 obfuscated documents. Lower number of documents generated by MUTANT-X is due to the fact that it only obfuscates documents that are correctly classified by the attribution classifier it uses. 2240 Dataset Train Test Original Obfuscated/ Evaded Original Obfuscated/ Evaded EBG obfuscated 431 80 268 47 EBG evaded 236 29 235 30 BLOG obfuscated 2000 292 1900 277 BLOG evaded 700 71 1000 113 Table 1: Number of original and obfuscated/evaded documents in train and test sets of each of the four datasets in 59 evaded EBG documents (out of 127) and 184 evaded BLOG documents (out of 571). Despite the type and extent of transformations made, the remaining obfuscated documents did not evade the attributor. The evaded documents are those where the modification strategy somehow crossed an implicit threshold for evading authorship attribution. Thus, we surmise that the evaded documents are likely to be relatively less smooth. It is noteworthy that evaded documents are the realistic goal from a privacy seeking user’s perspective. Thus, in addition to assessing detection for obfuscated data, we also assess detection for evaded data for both (EBG and BLOG) obfuscated datasets. 4.2.2 Final datasets and cross validation For each dataset, we use the original and obfuscated/evaded documents by 5 (out of 10) authors for training and the documents by the remaining 5 authors for testing. An important decision to make here is with regards to the distribution of original documents versus obfuscated (or evaded) ones in our training and testing set. We have no real world scenario to mimic in that we have not encountered any real world use of automated obfuscators and their outputs. Therefore we make the datasets under a reasonable assumption that original documents are in the vast majority. Thus we create train and test datasets with roughly 85% 90% original documents. We do this by adding more original documents from each data collection to achieve these proportions. Overall, we ensure that each author’s documents (of any version) are all either in the train or in the test set. Table 1 provides the size details for our obfuscated and evaded datasets. 4.3 Obfuscation Detection Methods 4.3.1 Methods from our architecture We propose a total of 160 distinct architectures (see 3.2.4). Testing these for each of the four datasets, we conduct a total of 640 distinct obfuscation detection experiments. In addition, we explore three other methods; the first is inspired by recent research to detect synthetic text. The other two were used previously to detect manually obfuscated documents. 4.3.2 Other Methods 1) GLTR (Gehrmann et al., 2019). The authors present a tool to help humans distinguish between original and machine generated synthetic text. Their tool uses pretrained language models to extract word likelihoods and presents their plot to humans making the decision. 
Their best model uses GPT-2 117M language model to extract word ranks which are then put in 4 unequal range bins: 0-10, 10-100, 100-1000 and greater than 1000. We test the same method on our obfuscation detection problem. For obfuscation detection, we tested this feature set with linear SVM and RFC. Among these, linear SVM performed the best. 2) Character trigrams + KNN (Juola, 2012). This paper presents a stylometric approach to detect manually obfuscated documents. Firstly, they preprocess each document by unifying case and separating punctuation (e.g., !! becomes ! !). Then they represent each document using character trigrams. Finally the classification is performed using KNN with normalized cosine distance. We implement this approach to evaluate it against automated authorship obfuscaters. 3) Writeprints + SVM (Afroz et al., 2012). This paper also uses stylometric features and is also focused on detecting manual obfuscation. It is also similar to the approach in (Shahid et al., 2017), which uses stylometric features to perform spun document detection. Afroz et al. tested with 3 different feature sets of which writeprints was the best. Their writeprints feature set contains character related features (e.g., total characters, percentage of uppercase letters etc.), word related features (e.g., total words, frequency of large words etc.) and other features including frequencies of digits, special characters, function words etc. They use this feature set with SVM (using poly kernel) to 2241 perform obfuscation detection. On our datasets, we found linear SVM to be working better than the polynomial one, so we report results with linear SVM. We implement this writeprints approach with linear SVM as our final detector. 5 Results Summary trends: After averaging we find that for obfuscation detection, 25% of all 160 architectures achieve F1 score greater than 0.76, 50% achieve F1 score greater than 0.72 and a high 75% of them were able to achieve F1 score greater than 0.52. Figure 2 summarizes the performances of all 160 different architectures across the four datasets. We see that obfuscation detection is easier in EBG than in BLOG with median EBG F1 scores being significantly better than median BLOG F1 scores (notches do not overlap (Krzywinski and Altman, 2014)). This can be explained by the fact that EBG contains scholarly articles that are relatively more consistent in their smoothness than blogs. This likely makes it easier to pick up on the difference in smoothness caused by obfuscated documents in EBG than in BLOG. We can also see that evaded documents achieve a higher maximum F1 score than obfuscated documents. This confirms our intuition presented in 4.2.1, that evaded documents are likely to be less smooth and therefore easier to detect than obfuscated documents. However, we also see that F1 scores for evaded datasets are less stable (greater box size) than obfuscated datasets. We believe that this is due to the fact that there are fewer documents in evaded datasets as compared to their respective obfuscated datasets (see Table 1). Performance evaluation: In terms of architecture selection, instead of choosing randomly across 160 architectures, we make the following assumpEBG obfuscated EBG evaded BLOG obfuscated BLOG evaded 0.0 0.2 0.4 0.6 0.8 1.0 F1 score Figure 2: Notched box plots for obfuscation detection F1 scores using all 160 architectures for each dataset. 
Dataset Models P R F1 EBG obfuscated BERT LARGE + ranks + VGG-19 + RFC 1.00 0.85 0.92 BERT LARGE + ranks + VGG-19 + SVM 0.98 0.83 0.90 GLTR + SVM 1.00 0.70 0.83 Writeprints + SVM 0.67 0.38 0.49 Character trigrams + KNN 0.64 0.15 0.24 EBG evaded BERT LARGE + probs + bins(0.010) + ANN 1.00 0.90 0.95 BERT BASE + probs + VGG19 + GNB 1.00 0.90 0.95 GLTR + SVM 1.00 0.80 0.89 Writeprints + SVM 0.79 0.63 0.70 Character trigrams + KNN 1.00 0.17 0.29 BLOG obfuscated BERT BASE + probs + VGG19 + ANN 0.85 0.71 0.77 BERT BASE + probs + VGG19 + SVM 0.79 0.74 0.77 GLTR + SVM 0.92 0.40 0.56 Writeprints + SVM 0.71 0.41 0.52 Character trigrams + KNN 0.41 0.50 0.45 BLOG evaded GPT-2 345M + ranks + VGG-19 + GNB 0.82 0.83 0.83 BERT BASE + probs + VGG19 + ANN 0.79 0.81 0.80 GLTR + SVM 0.86 0.55 0.67 Writeprints + SVM 0.84 0.62 0.71 Character trigrams + KNN 0.86 0.50 0.63 Table 2: Obfuscation detection results (P: precision, R: recall, F1: F1 score). tion. We assume that the adversary is knowledgeable about the various choices, tests these alternatives and employs the best configuration. Thus, we present results for the best models, based on F1 scores for obfuscation detection, achievable by the adversary (Table 2). Table 2 also presents results for the three additional methods presented in section 4.3.2. Our best BERT and GPT2 combinations outperform all other methods across each of the four datasets in F1 score and recall. Along with (GLTR + SVM) these achieve the best precision for the EBG datasets. In BLOG obfuscated, GLTR based method achieves the highest precision whereas in BLOG evaded both the GLTR based method and character trigrams method top the chart - however in each case with a sizeable penalty paid in recall and therefore in F1 score. In summary, we see that using the best of methods the adversary can detect evaded and obfuscated documents with F1 score of 0.77 or higher (average 0.87 across datasets) which indicates that the tested state-of-the-art obfuscators are far from stealthy. 5.1 Detector Architecture Choices Analysis Now we analyze the effect of different choices made within each of the three dimensions depicted in Figure 1. As mentioned earlier, for a privacy seeking user evading author attribution is more im2242 SVM RFC KNN ANN GNB 0.0 0.2 0.4 0.6 0.8 1.0 EBG evaded SVM RFC KNN ANN GNB 0.0 0.2 0.4 0.6 0.8 1.0 BLOG evaded GPT-2 117M GPT-2 345M BERT base BERT large 0.0 0.2 0.4 0.6 0.8 1.0 EBG evaded GPT-2 117M GPT-2 345M BERT base BERT large 0.0 0.2 0.4 0.6 0.8 1.0 BLOG evaded Binning Image 0.0 0.2 0.4 0.6 0.8 1.0 EBG evaded Binning Image 0.0 0.2 0.4 0.6 0.8 1.0 BLOG evaded probabilities ranks 0.0 0.2 0.4 0.6 0.8 1.0 EBG evaded probabilities ranks 0.0 0.2 0.4 0.6 0.8 1.0 BLOG evaded (d) Classifiers (a) Language Models (c) Feature Types F1 score (b) Likelihood Types Figure 3: Notched box plots of F1 scores for all dimensions across the two evaded datasets. For each dataset every notched boxplot in (a) is generated from 40 experiments (experiments correspond to architectures), (b) is generated from 80 experiments, (c) is generated from 120 experiments for binning and 40 for image whereas (d) is generated from 32 different experimental combinations. portant than just obfuscation. So, in this section we present architecture analysis results only for evaded datasets involving 320 experiments (160 each for EBG evaded and BLOG evaded). 
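For reference, the GLTR + SVM baseline compared in Table 2 (described in Section 4.3.2) can be reproduced from the same word ranks. The sketch below is our reconstruction under that reading, not the GLTR authors' code; the bin edges follow the four unequal ranges 0-10, 10-100, 100-1000 and greater than 1000.

```python
# Sketch of the GLTR-style baseline: four unequal rank bins + linear SVM.
import numpy as np
from sklearn.svm import SVC

GLTR_EDGES = [0, 10, 100, 1000, np.inf]

def gltr_features(word_ranks):
    """Fraction of words whose rank falls in roughly 0-10, 10-100, 100-1000, >1000."""
    counts, _ = np.histogram(word_ranks, bins=GLTR_EDGES)
    return counts / max(len(word_ranks), 1)

def train_gltr_detector(rank_lists, labels):
    """Fit a linear SVM on the four-bin feature vectors of a set of documents."""
    features = np.stack([gltr_features(r) for r in rank_lists])
    detector = SVC(kernel="linear")
    detector.fit(features, labels)
    return detector
```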
5.1.1 Dimension 1: Language model & output type Figure 3 (a) presents notched box plots comparing distribution of F1 scores achieved by language models across both datasets. In EBG evaded, BERT language models achieve higher maximum F1 score (0.95) than GPT-2 (0.90 - 0.91). On the other hand, in BLOG evaded, GPT-2 345M achieves higher maximum F1 score (0.83) than others (0.75 - 0.80). Relatively, BERT shows greater consistency in F1 score (box size) than GPT-2 in both datasets. We believe that the bidirectional nature of BERT helps in capturing context and consequently smoothness better than GPT-2 which is uni-directional. While the difference in maximum F1 score between ranks and probabilities is slight for each dataset (Figure 3 (b)) box sizes show the spread in F1 scores is smaller with probabilities than with ranks. Upon further investigation, we find that experiments which use probabilities with image based features have an inter-quartile range of 0.05 and 0.1 for EBG and BLOG respectively whereas for experiments using probabilities with binning based features, this range is 0.32 for both datasets. On the other hand, inter-quartile range for experiments using ranks with image based features is 0.08 and 0.05 for EBG and BLOG whereas for experiments using ranks with binning based features, this range is 0.49 and 0.42 respectively. This shows that for both datasets, greater variation in F1 scores for ranks as compared to probabilities is caused by binning based features. We believe that binning ranks with fixed bin sizes (10, 50, 100) is less stable for both BERT and GPT-2 which have different limits of ranks - this could account for the larger inter-quartile range using ranks. 5.1.2 Dimension 2: Feature type The box sizes in Figure 3 (c) show that image based features exhibit strikingly greater stability in F1 scores than binning based features. Image based features also achieve significantly higher median F1 score than with binning for both datasets. This can in part be explained by the observation stated earlier that some bin size choices tested perform much worse than others because of not being fine-tuned. There is no difference between feature types in maximum F1 score for EBG whereas in BLOG, image based feature achieve somewhat higher maximum F1 score (0.83) than binning based features (0.78). We believe that the reason why image based features work so well is that VGG-19, the image model we use to extract features, is powerful enough to recognize the slopes in plots which represent the smoothness in our case. 2243 5.1.3 Dimension 3: Classifier Figure 3 (d), shows that for EBG, ANN and GNB achieve higher maximum F1 score (0.95), whereas for BLOG, GNB achieve higher maximum F1 score (0.83). KNN and ANN consistently achieve far more stable F1 scores than other classification methods. In both datasets, KNN achieves significantly higher median F1 score than other classification methods. ANN also follows the same pattern with the exception of GNB in BLOG evaded. We believe that the reason why KNN and ANN achieve relatively high and stable performance is in their nature of being able to adapt to diverse and complex feature spaces. 5.2 Takeaway In summary we conclude that BERT with probabilities is a good choice for dimension 1. (We remind the reader that in contrast, in the area of synthetic text detection (Gehrmann et al., 2019) GPT2 had the edge over BERT). Image based features are a clear winner in dimension 2 while KNN and ANN are the best candidates for dimension 3. 
Key to note as well is that the top performing architectures in Table 2 differ across datasets indicating the need for dataset specific choices. 5.3 Insights Figure 4 validates our intuition from Section 3 that the text generated by obfuscators is less smooth than the original text. Using EBG obfuscated dataset and BERT BASE for illustration, we first sort words in a document by estimated probability and plot average probability at each rank. The steeper the fall in the curve, the lower the smoothness of text. This plot shows that original documents are generally more smooth than obfuscated documents. The average detection error rates (Mutant-X embeddingCNN: 0.72, SNPAN16: 0.48, and DS-PAN17: 0.07) are also consistent with the plot. These results show that Mutant-X is the most stealthy obfuscator while DS-PAN17 is the least stealthy obfuscator. 6 Conclusion In this paper, we showed that the state-of-the-art authorship obfuscation methods are not stealthy. We showed that the degradation in text smoothness caused by authorship obfuscators allow a detector to distinguish between obfuscated documents and original documents. Our proposed 0 100 200 300 400 500 Sorted words in documents 0.0 0.2 0.4 0.6 0.8 1.0 Average occurence probability Original Mutant-X embeddingCNN DS-PAN17 SN-PAN16 Figure 4: Comparison between different obfuscators and original documents on the basis of average sorted probabilities extracted by BERT BASE for EBG obfuscated dataset. obfuscation detectors were effective at classifying obfuscated and evaded documents (F1 score as high as 0.92 and 0.95, respectively). Our findings point to future research opportunities to build stealthy authorship obfuscation methods. We suggest that obfuscation methods should strive to preserve text smoothness in addition to semantics. References 2018. PAN @ CLEF 2018 - Author Obfuscation. https://pan.webis.de/clef18/ pan18-web/author-obfuscation.html. Ahmed Abbasi and Hsinchun Chen. 2008. Writeprints: A stylometric approach to identity-level identification and similarity detection in cyberspace. ACM Transactions on Information Systems (TOIS), 26(2):7. Sadia Afroz, Michael Brennan, and Rachel Greenstadt. 2012. Detecting hoaxes, frauds, and deception in writing style online. In 2012 IEEE Symposium on Security and Privacy, pages 461–475. IEEE. Mishari Almishari, Ekin Oguz, and Gene Tsudik. 2014. Fighting Authorship Linkability with Crowdsourcing. In ACM Conference on Online Social Networks (COSN). Anonymous. 2018. I’m an Amazon Employee. My Company Shouldn’t Sell Facial Recognition Tech to Police. Anton Bakhtin, Sam Gross, Myle Ott, Yuntian Deng, Marc’Aurelio Ranzato, and Arthur Szlam. 2019. Real or Fake? Learning to Discriminate Machine from Human Generated Text. arXiv preprint arXiv:1906.03351. 2244 Janek Bevendorff, Martin Potthast, Matthias Hagen, and Benno Stein. 2019. Heuristic Authorship Obfuscation. In Proceedings of the 57th Conference of the Association for Computational Linguistics, pages 1098–1108. Michael Brennan, Sadia Afroz, and Rachel Greenstadt. 2012. Adversarial stylometry: Circumventing authorship recognition to preserve privacy and anonymity. ACM Transactions on Information and System Security (TISSEC), 15(3):12. Daniel Castro-Castro, Reynier Ortega Bueno, and Rafael Munoz. 2017. Author Masking by Sentence Transformation. In Notebook for PAN at CLEF. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. 
arXiv preprint arXiv:1810.04805. Chris Emmery, Enrique Manjavacas, and Grzegorz Chrupała. 2018. Style Obfuscation by Invariance. 27th International Conference on Computational Linguistics. Sebastian Gehrmann, Hendrik Strobelt, and Alexander Rush. 2019. GLTR: Statistical detection and visualization of generated text. pages 111–116. Shawn Hershey, Sourish Chaudhuri, Daniel PW Ellis, Jort F Gemmeke, Aren Jansen, R Channing Moore, Manoj Plakal, Devin Platt, Rif A Saurous, Bryan Seybold, et al. 2017. CNN architectures for largescale audio classification. In 2017 IEEE international conference on acoustics, speech and signal processing (icassp), pages 131–135. IEEE. Patrick Juola. 2012. Detecting stylistic deception. In Proceedings of the Workshop on Computational Approaches to Deception Detection, pages 91–96. Association for Computational Linguistics. Georgi Karadzhov, Tsvetomila Mihaylova, Yasen Kiprov, Georgi Georgiev, Ivan Koychev, and Preslav Nakov. 2017. The case for being average: A mediocrity approach to style masking and author obfuscation. In International Conference of the CrossLanguage Evaluation Forum for European Languages, pages 173–185. Springer. Yashwant Keswani, Harsh Trivedi, Parth Mehta, and Prasenjit Majumder. 2016. Author Masking through Translation. In Notebook for PAN at CLEF 2016, pages 890–894. Martin Krzywinski and Naomi Altman. 2014. Points of significance: visualizing samples with box plots. Asad Mahmood, Faizan Ahmad, Zubair Shafiq, Padmini Srinivasan, and Fareed Zaffar. 2019. A Girl Has No Name: Automated Authorship Obfuscation using Mutant-X. Proceedings on Privacy Enhancing Technologies, 2019(4):54–71. Muharram Mansoorizadeh, Taher Rahgooy, Mohammad Aminiyan, and Mahdy Eskandari. 2016. Author obfuscation using WordNet and language models. In Notebook for PAN at CLEF 2016. Andrew WE McDonald, Sadia Afroz, Aylin Caliskan, Ariel Stolerman, and Rachel Greenstadt. 2012. Use fewer instances of the letter “i”: Toward writing style anonymization. In International Symposium on Privacy Enhancing Technologies Symposium, pages 299–318. Springer. Andrew W.E. McDonald, Jeffrey Ulman, Marc Barrowclift, and Rachel Greenstadt. 2013. Anonymouth Revamped: Getting Closer to Stylometric Anonymity. In PETools: Workshop on Privacy Enhancing Tools, volume 20. Arvind Narayanan, Hristo Paskov, Neil Zhenqiang Gong, John Bethencourt, Emil Stefanov, Eui Chul Richard Shin, and Dawn Song. 2012. On the feasibility of internet-scale author identification. In 2012 IEEE Symposium on Security and Privacy, pages 300–314. IEEE. Rebekah Overdorf and Rachel Greenstadt. 2016. Blogs, twitter feeds, and reddit comments: Crossdomain authorship attribution. Proceedings on Privacy Enhancing Technologies, 2016(3):155–171. Martin Potthast, Felix Schremmer, Matthias Hagen, and Benno Stein. 2018. Overview of the author obfuscation task at pan 2018: A new approach to measuring safety. In CLEF (Working Notes). Schremmer Potthast and Stein Hagen. 2018. Overview of the Author Obfuscation Task at PAN 2018: A New Approach to Measuring Safety. In Notebook for PAN at CLEF 2018. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8). Sebastian Ruder, Parsa Ghaffari, and John G Breslin. 2016. Character-level and multi-channel convolutional neural networks for large-scale authorship attribution. arXiv preprint arXiv:1609.06686. Jonathan Schler, Moshe Koppel, Shlomo Argamon, and James W Pennebaker. 2006. 
Effects of age and gender on blogging. In AAAI spring symposium: Computational approaches to analyzing weblogs, volume 6, pages 199–205. Usman Shahid, Shehroze Farooqi, Raza Ahmad, Zubair Shafiq, Padmini Srinivasan, and Fareed Zaffar. 2017. Accurate detection of automatically spun content via stylometric analysis. In 2017 IEEE International Conference on Data Mining (ICDM), pages 425–434. IEEE. Rakshith Shetty, Bernt Schiele, and Mario Fritz. 2018. A4NT: author attribute anonymity by adversarial training of neural machine translation. In 27th 2245 USENIX Security Symposium (USENIX Security 18), pages 1633–1650. Karen Simonyan and Andrew Zisserman. 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. NY Times. 2018. I am part of the resistance inside the trump administration. NY Times. Retrieved from https://www. nytimes. com/2018/09/05/.../trumpwhite-house-anonymous-resistance. html. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998–6008. Liang-Chih Yu, Jin Wang, K Robert Lai, and Xuejie Zhang. 2017. Refining word embeddings using intensity scores for sentiment analysis. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 26(3):671–681. Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, and Yejin Choi. 2019. Defending against neural fake news. Conference on Neural Information Processing Systems (NeurIPS).
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2246–2251 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2246 DeeBERT: Dynamic Early Exiting for Accelerating BERT Inference Ji Xin1,2, Raphael Tang1,2, Jaejun Lee1, Yaoliang Yu1,2, and Jimmy Lin1,2 1David R. Cheriton School of Computer Science, University of Waterloo 2Vector Institute for Artificial Intelligence {ji.xin,r33tang,j474lee,yaoliang.yu,jimmylin}@uwaterloo.ca Abstract Large-scale pre-trained language models such as BERT have brought significant improvements to NLP applications. However, they are also notorious for being slow in inference, which makes them difficult to deploy in realtime applications. We propose a simple but effective method, DeeBERT, to accelerate BERT inference. Our approach allows samples to exit earlier without passing through the entire model. Experiments show that DeeBERT is able to save up to ∼40% inference time with minimal degradation in model quality. Further analyses show different behaviors in the BERT transformer layers and also reveal their redundancy. Our work provides new ideas to efficiently apply deep transformer-based models to downstream tasks. Code is available at https://github.com/castorini/ DeeBERT. 1 Introduction Large-scale pre-trained language models such as ELMo (Peters et al., 2018), GPT (Radford et al., 2019), BERT (Devlin et al., 2019), XLNet (Yang et al., 2019), and RoBERTa (Liu et al., 2019) have brought significant improvements to natural language processing (NLP) applications. Despite their power, they are notorious for being enormous in size and slow in both training and inference. Their long inference latencies present challenges to deployment in real-time applications and hardwareconstrained edge devices such as mobile phones and smart watches. To accelerate inference for BERT, we propose DeeBERT: Dynamic early exiting for BERT. The inspiration comes from a well-known observation in the computer vision community: in deep convolutional neural networks, higher layers typically produce more detailed and finer-grained features (Zeiler and Fergus, 2014). Therefore, we Figure 1: DeeBERT model overview. Grey blocks are transformer layers, orange circles are classification layers (off-ramps), and blue arrows represent inference samples exiting at different layers. hypothesize that, for BERT, features provided by the intermediate transformer layers may suffice to classify some input samples. DeeBERT accelerates BERT inference by inserting extra classification layers (which we refer to as off-ramps) between each transformer layer of BERT (Figure 1). All transformer layers and offramps are jointly fine-tuned on a given downstream dataset. At inference time, after a sample goes through a transformer layer, it is passed to the following off-ramp. If the off-ramp is confident of the prediction, the result is returned; otherwise, the sample is sent to the next transformer layer. In this paper, we conduct experiments on BERT and RoBERTa with six GLUE datasets, showing that DeeBERT is capable of accelerating model inference by up to ∼40% with minimal model quality degradation on downstream tasks. Further analyses reveal interesting patterns in the models’ transformer layers, as well as redundancy in both BERT and RoBERTa. 2 Related Work BERT and RoBERTa are large-scale pre-trained language models based on transformers (Vaswani et al., 2017). 
Despite their groundbreaking power, there have been many papers trying to examine and exploit their over-parameterization. Michel et al. (2019) and Voita et al. (2019) analyze redundancy 2247 in attention heads. Q-BERT (Shen et al., 2019) uses quantization to compress BERT, and LayerDrop (Fan et al., 2019) uses group regularization to enable structured pruning at inference time. On the knowledge distillation side, TinyBERT (Jiao et al., 2019) and DistilBERT (Sanh et al., 2019) both distill BERT into a smaller transformer-based model, and Tang et al. (2019) distill BERT into even smaller non-transformer-based models. Our work is inspired by Cambazoglu et al. (2010), Teerapittayanon et al. (2017), and Huang et al. (2018), but mainly differs from previous work in that we focus on improving model efficiency with minimal quality degradation. 3 Early Exit for BERT inference DeeBERT modifies fine-tuning and inference of BERT models, leaving pre-training unchanged. It adds one off-ramp for each transformer layer. An inference sample can exit earlier at an off-ramp, without going through the rest of the transformer layers. The last off-ramp is the classification layer of the original BERT model. 3.1 DeeBERT at Fine-Tuning We start with a pre-trained BERT model with n transformer layers and add n off-ramps to it. For fine-tuning on a downstream task, the loss function of the ith off-ramp is Li(D; θ) = 1 |D| X (x,y)∈D H(y, fi(x; θ)), (1) where D is the fine-tuning training set, θ is the collection of all parameters, (x, y) is the feature– label pair of a sample, H is the cross-entropy loss function, and fi is the output of the ith off-ramp. The network is fine-tuned in two stages: 1. Update the embedding layer, all transformer layers, and the last off-ramp with the loss function Ln. This stage is identical to BERT fine-tuning in the original paper (Devlin et al., 2019). 2. Freeze all parameters fine-tuned in the first stage, and then update all but the last offramp with the loss function Pn−1 i=1 Li. The reason for freezing parameters of transformer layers is to keep the optimal output quality for the last off-ramp; otherwise, transformer layers are no longer optimized solely for the last off-ramp, generally worsening its quality. Algorithm 1 DeeBERT Inference (Input: x) for i = 1 to n do zi = fi(x; θ) if entropy(zi) < S then return zi end if end for return zn 3.2 DeeBERT at Inference The way DeeBERT works at inference time is shown in Algorithm 1. We quantify an off-ramp’s confidence in its prediction using the entropy of the output probability distribution zi. When an input sample x arrives at an off-ramp, the off-ramp compares the entropy of its output distribution zi with a preset threshold S to determine whether the sample should be returned here or sent to the next transformer layer. It is clear from both intuition and experimentation that a larger S leads to a faster but less accurate model, and a smaller S leads to a more accurate but slower one. In our experiments, we choose S based on this principle. We also explored using ensembles of multiple layers instead of a single layer for the off-ramp, but this does not bring significant improvements. The reason is that predictions from different layers are usually highly correlated, and a wrong prediction is unlikely to be “fixed” by the other layers. Therefore, we stick to the simple yet efficient single output layer strategy. 
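To make the inference procedure concrete, the following is a compact PyTorch sketch of Algorithm 1. The objects `transformer_layers` and `off_ramps` stand in for the n fine-tuned transformer blocks and classification layers; they and the calling convention are placeholders of ours, not the released DeeBERT code.

```python
# Minimal sketch of entropy-based early exiting (Algorithm 1).
import torch

def entropy(logits):
    """Entropy of the softmax distribution; low entropy = confident off-ramp."""
    p = torch.softmax(logits, dim=-1)
    return -(p * torch.log(p + 1e-12)).sum(dim=-1)

def deebert_infer(hidden, transformer_layers, off_ramps, threshold):
    """Return the first off-ramp output whose entropy falls below `threshold`."""
    z = None
    for layer, ramp in zip(transformer_layers, off_ramps):
        hidden = layer(hidden)        # one transformer block
        z = ramp(hidden)              # class logits at this off-ramp
        if entropy(z).item() < threshold:
            return z                  # early exit
    return z                          # fall back to the last off-ramp
```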
4 Experiments 4.1 Experimental Setup We apply DeeBERT to both BERT and RoBERTa, and conduct experiments on six classification datasets from the GLUE benchmark (Wang et al., 2018): SST-2, MRPC, QNLI, RTE, QQP, and MNLI. Our implementation of DeeBERT is adapted from the HuggingFace Transformers Library (Wolf et al., 2019). Inference runtime measurements are performed on a single NVIDIA Tesla P100 graphics card. Hyperparameters such as hidden-state size, learning rate, fine-tune epoch, and batch size are kept unchanged from the library. There is no early stopping and the checkpoint after full fine-tuning is chosen. 2248 SST-2 MRPC QNLI RTE QQP MNLI-(m/mm) Acc Time F1 Time Acc Time Acc Time F1 Time Acc Time BERT-base Baseline 93.6 36.72s 88.2 34.77s 91.0 111.44s 69.9 61.26s 71.4 145min 83.9/83.0 202.84s DistilBERT −1.4 −40% −1.1 −40% −2.6 −40% −9.4 −40% −1.1 −40% −4.5 −40% DeeBERT −0.2 −21% −0.3 −14% −0.1 −15% −0.4 −9% −0.0 −24% −0.0/−0.1 −14% −0.6 −40% −1.3 −31% −0.7 −29% −0.6 −11% −0.1 −39% −0.8/−0.7 −25% −2.1 −47% −3.0 −44% −3.1 −44% −3.2 −33% −2.0 −49% −3.9/−3.8 −37% RoBERTa-base Baseline 94.3 36.73s 90.4 35.24s 92.4 112.96s 67.5 60.14s 71.8 152min 87.0/86.3 198.52s LayerDrop −1.8 −50% −4.1 −50% DeeBERT +0.1 −26% +0.1 −25% −0.1 −25% −0.6 −32% +0.1 −32% −0.0/−0.0 −19% −0.0 −33% +0.2 −28% −0.5 −30% −0.4 −33% −0.0 −39% −0.1/−0.3 −23% −1.8 −44% −1.1 −38% −2.5 −39% −1.1 −35% −0.6 −44% −3.9/−4.1 −29% Table 1: Comparison between baseline (original BERT/RoBERTa), DeeBERT, and other acceleration methods. LayerDrop only reports results on SST-2 and MNLI. Time savings of DistilBERT and LayerDrop are estimated by reported model size reduction. 4.2 Main Results We vary DeeBERT’s quality–efficiency trade-off by setting different entropy thresholds S, and compare the results with other baselines in Table 1. Model quality is measured on the test set, and the results are provided by the GLUE evaluation server. Efficiency is quantified with wall-clock inference runtime1 on the entire test set, where samples are fed into the model one by one. For each run of DeeBERT on a dataset, we choose three entropy thresholds S based on quality–efficiency trade-offs on the development set, aiming to demonstrate two cases: (1) the maximum runtime savings with minimal performance drop (< 0.5%), and (2) the runtime savings with moderate performance drop (2% −4%). Chosen S values differ for each dataset. We also visualize the trade-off in Figure 2. Each curve is drawn by interpolating a number of points, each of which corresponds to a different threshold S. Since this only involves a comparison between different settings of DeeBERT, runtime is measured on the development set. From Table 1 and Figure 2, we observe the following patterns: • Despite differences in baseline performance, both models show similar patterns on all datasets: the performance (accuracy/F1 score) stays (mostly) the same until runtime saving reaches a certain turning point, and then starts 1This includes both CPU and GPU runtime. to drop gradually. The turning point typically comes earlier for BERT than for RoBERTa, but after the turning point, the performance of RoBERTa drops faster than for BERT. The reason for this will be discussed in Section 4.4. • Occasionally, we observe spikes in the curves, e.g., RoBERTa in SST-2, and both BERT and RoBERTa in RTE. 
We attribute this to possible regularization brought by early exiting and thus smaller effective model sizes, i.e., in some cases, using all transformer layers may not be as good as using only some of them. Compared with other BERT acceleration methods, DeeBERT has the following two advantages: • Instead of producing a fixed-size smaller model like DistilBERT (Sanh et al., 2019), DeeBERT produces a series of options for faster inference, which users have the flexibility to choose from, according to their demands. • Unlike DistilBERT and LayerDrop (Fan et al., 2019), DeeBERT does not require further pretraining of the transformer model, which is much more time-consuming than fine-tuning. 4.3 Expected Savings As the measurement of runtime might not be stable, we propose another metric to capture efficiency, 2249 0 25 50 75 Runtime Savings (%) 75 80 85 90 Accuracy (%) base: SST-2 BERT RoBERTa 0 25 50 75 Runtime Savings (%) 82.5 85.0 87.5 90.0 92.5 F1 Score (%) base: MRPC BERT RoBERTa 0 25 50 75 Runtime Savings (%) 60 70 80 90 Accuracy (%) base: QNLI BERT RoBERTa 0 25 50 75 Runtime Savings (%) 50 55 60 65 70 Accuracy (%) base: RTE BERT RoBERTa 0 25 50 75 Runtime Savings (%) 50 60 70 80 90 Accuracy (%) base: QQP BERT RoBERTa 0 25 50 75 Runtime Savings (%) 60 65 70 75 80 85 Accuracy (%) base: MNLI BERT RoBERTa Figure 2: DeeBERT quality and efficiency trade-offs for BERT-base and RoBERTa-base models. 0 10 20 30 40 50 60 70 80 90 Expected Savings (%) 0 10 20 30 40 50 60 70 80 Measured Savings (%) base: SST-2 BERT RoBERTa 0 10 20 30 40 50 60 70 80 90 Expected Savings (%) 0 10 20 30 40 50 60 70 80 Measured Savings (%) base: MRPC BERT RoBERTa Figure 3: Comparison between expected saving (xaxis) and actual measured saving (y-axis), using BERTbase and RoBERTa-base models. called expected saving, defined as 1 − Pn i=1 i × Ni Pn i=1 n × Ni , (2) where n is the number of layers and Ni is the number of samples exiting at layer i. Intuitively, expected saving is the fraction of transformer layer execution saved by using early exiting. The advantage of this metric is that it remains invariant between different runs and can be analytically computed. For validation, we compare this metric with 5 10 Exit Layer 70 80 90 Accuracy (%) base: SST-2 BERT RoBERTa 5 10 Exit Layer 82.5 85.0 87.5 90.0 92.5 F1 Score (%) base: MRPC BERT RoBERTa 5 10 Exit Layer 60 70 80 90 Accuracy (%) base: QNLI BERT RoBERTa 5 10 Exit Layer 50 55 60 65 70 Accuracy (%) base: RTE BERT RoBERTa 5 10 Exit Layer 50 60 70 80 90 F1 Score (%) base: QQP BERT RoBERTa 5 10 Exit Layer 40 50 60 70 80 Accuracy (%) base: MNLI BERT RoBERTa Figure 4: Accuracy of each off-ramp for BERT-base and RoBERTa-base. measured saving in Figure 3. Overall, the curves show a linear relationship between expected savings and measured savings, indicating that our reported runtime is a stable measurement of DeeBERT’s efficiency. 4.4 Layerwise Analyses In order to understand the effect of applying DeeBERT to both models, we conduct further analyses on each off-ramp layer. Experiments in this section are also performed on the development set. Output Performance by Layer. For each offramp, we force all samples in the development set to exit here, measure the output quality, and visualize the results in Figure 4. From the figure, we notice the difference between BERT and RoBERTa. The output quality of BERT improves at a relatively stable rate as the index of the exit off-ramp increases. 
The output quality of RoBERTa, on the other hand, stays almost unchanged (or even worsens) for a few layers, then rapidly improves, and reaches a saturation point be2250 0 25 50 75 Runtime Savings (%) 75 80 85 90 95 Accuracy (%) large: SST-2 BERT RoBERTa 0 10 20 Exit Layer 70 80 90 Accuracy (%) large: SST-2 BERT RoBERTa 0 25 50 75 Runtime Savings (%) 82.5 85.0 87.5 90.0 92.5 F1 Score (%) large: MRPC BERT RoBERTa 0 10 20 Exit Layer 82 84 86 88 90 92 F1 Score (%) large: MRPC BERT RoBERTa Figure 5: Results for BERT-large and RoBERTa-large. fore BERT does. This provides an explanation for the phenomenon mentioned in Section 4.2: on the same dataset, RoBERTa often achieves more runtime savings while maintaining roughly the same output quality, but then quality drops faster after reaching the turning point. We also show the results for BERT-large and RoBERTa-large in Figure 5. From the two plots on the right, we observe signs of redundancy that both BERT-large and RoBERTa-large share: the last several layers do not show much improvement compared with the previous layers (performance even drops slightly in some cases). Such redundancy can also be seen in Figure 4. Number of Exiting Samples by Layer. We further show the fraction of samples exiting at each off-ramp for a given entropy threshold in Figure 6. Entropy threshold S = 0 is the baseline, and all samples exit at the last layer; as S increases, gradually more samples exit earlier. Apart from the obvious, we observe additional, interesting patterns: if a layer does not provide better-quality output than previous layers, such as layer 11 in BERT-base and layers 2–4 and 6 in RoBERTa-base (which can be seen in Figure 4, top left), it is typically chosen by very few samples; popular layers are typically those that substantially improve over previous layers, such as layer 7 and 9 in RoBERTabase. This shows that an entropy threshold is able to choose the fastest off-ramp among those with comparable quality, and achieves a good trade-off between quality and efficiency. 0% 50% 100% S=0.01 Savings=25% AccDrop=0.2% BERT-base: SST-2 0% 50% 100% Fraction of Dataset S=0.05 Savings=34% AccDrop=0.7% 1 3 5 7 9 11 Exit Layer 0% 50% 100% S=0.4 Savings=61% AccDrop=5.8% 0% 50% 100% S=0.3 Savings=34% AccDrop=0.1% RoBERTa-base: SST-2 0% 50% 100% Fraction of Dataset S=0.55 Savings=55% AccDrop=1.3% 1 3 5 7 9 11 Exit Layer 0% 50% 100% S=0.6 Savings=61% AccDrop=3.7% Figure 6: Number of output samples by layer for BERTbase and RoBERTa-base. Each plot represents a separate entropy threshold S. 5 Conclusions and Future Work We propose DeeBERT, an effective method that exploits redundancy in BERT models to achieve better quality–efficiency trade-offs. Experiments demonstrate its ability to accelerate BERT’s and RoBERTa’s inference by up to ∼40%, and also reveal interesting patterns of different transformer layers in BERT models. There are a few interesting questions left unanswered in this paper, which would provide interesting future research directions: (1) DeeBERT’s training method, while maintaining good quality in the last off-ramp, reduces model capacity available for intermediate off-ramps; it would be important to look for a method that achieves a better balance between all off-ramps. 
(2) The reasons why some transformer layers appear redundant2 and why DeeBERT considers some samples easier than others remain unknown; it would be interesting to further explore relationships between pre-training and layer redundancy, sample complexity and exit layer, and related characteristics. Acknowledgment We thank anonymous reviewers for their insightful suggestions. We also gratefully acknowledge funding support from the Natural Sciences and Engineering Research Council (NSERC) of Canada. Computational resources used in this work were provided, in part, by the Province of Ontario, the Government of Canada through CIFAR, and companies sponsoring the Vector Institute. 2For example, the first and last four layers of RoBERTabase on SST-2 (Figure 4, top left). 2251 References B. Barla Cambazoglu, Hugo Zaragoza, Olivier Chapelle, Jiang Chen, Ciya Liao, Zhaohui Zheng, and Jon Degenhardt. 2010. Early exit optimizations for additive machine learned ranking systems. In Proceedings of the Third ACM International Conference on Web Search and Data Mining (WSDM 2010), pages 411–420, New York, New York. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Angela Fan, Edouard Grave, and Armand Joulin. 2019. Reducing transformer depth on demand with structured dropout. arXiv:1909.11556. Gao Huang, Danlu Chen, Tianhong Li, Felix Wu, Laurens van der Maaten, and Kilian Weinberger. 2018. Multi-scale dense networks for resource efficient image classification. In Proceedings of the 6th International Conference on Learning Representations, Vancouver, BC, Canada. Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. 2019. TinyBERT: Distilling BERT for natural language understanding. arXiv:1909.10351. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv:1907.11692. Paul Michel, Omer Levy, and Graham Neubig. 2019. Are sixteen heads really better than one? In Advances in Neural Information Processing Systems 32, pages 14014–14024. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227– 2237, New Orleans, Louisiana. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog. Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv:1910.01108. Sheng Shen, Zhen Dong, Jiayu Ye, Linjian Ma, Zhewei Yao, Amir Gholami, Michael W. Mahoney, and Kurt Keutzer. 2019. Q-BERT: Hessian based ultra low precision quantization of BERT. arXiv:1909.05840. Raphael Tang, Yao Lu, and Jimmy Lin. 2019. Natural language generation for effective knowledge distillation. 
In Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo 2019), pages 202–208, Hong Kong, China. Surat Teerapittayanon, Bradley McDanel, and HsiangTsung Kung. 2017. BranchyNet: Fast inference via early exiting from deep neural networks. arXiv:1709.01686. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30, pages 5998–6008. Elena Voita, David Talbot, Fedor Moiseev, Rico Sennrich, and Ivan Titov. 2019. Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5797–5808, Florence, Italy. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353–355, Brussels, Belgium. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R’emi Louf, Morgan Funtowicz, and Jamie Brew. 2019. HuggingFace’s Transformers: State-of-the-art natural language processing. arXiv:1910.03771. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. XLNet: generalized autoregressive pretraining for language understanding. arXiv:1906.08237. Matthew D. Zeiler and Rob Fergus. 2014. Visualizing and understanding convolutional networks. In Proceedings of the 13th European Conference on Computer Vision (ECCV 2014), pages 818–833, Z¨urich, Switzerland.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2252–2257 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2252 Efficient Strategies for Hierarchical Text Classification: External Knowledge and Auxiliary Tasks Kervy Rivas Rojas, Gina Bustamante, Arturo Oncevay‡, Marco A. Sobrevilla Cabezudo† Research Group on Artificial Intelligence, Pontificia Universidad Cat´olica del Per´u, Peru ‡School of Informatics, University of Edinburgh, Scotland †Instituto de Ciˆencias Matem´aticas e de Computac¸˜ao, Universidade de S˜ao Paulo, Brazil [email protected], [email protected], [email protected], [email protected] Abstract In hierarchical text classification, we perform a sequence of inference steps to predict the category of a document from top to bottom of a given class taxonomy. Most of the studies have focused on developing novels neural network architectures to deal with the hierarchical structure, but we prefer to look for efficient ways to strengthen a baseline model. We first define the task as a sequence-to-sequence problem. Afterwards, we propose an auxiliary synthetic task of bottom-up-classification. Then, from external dictionaries, we retrieve textual definitions for the classes of all the hierarchy’s layers, and map them into the word vector space. We use the class-definition embeddings as an additional input to condition the prediction of the next layer and in an adapted beam search. Whereas the modified search did not provide large gains, the combination of the auxiliary task and the additional input of classdefinitions significantly enhance the classification accuracy. With our efficient approaches, we outperform previous studies, using a drastically reduced number of parameters, in two well-known English datasets. 1 Introduction Hierarchical text classification (HTC) aims to categorise a textual description within a set of labels that are organized in a structured class hierarchy (Silla and Freitas, 2011). The task is perceived as a more challenging problem than flat text classification, since we need to consider the relationships of the nodes from different levels in the class taxonomy (Liu et al., 2019). Both flat text classification and HTC have been tackled using traditional machine learning classifiers (Liu et al., 2005; Kim et al., 2006) or deep neural networks (Peng et al., 2018; Conneau et al., 2017). Nevertheless, the majority of the latest approaches consider models with a large number of parameters that require extended training time. In the flat-classification scenario, some studies have addressed the problem of efficiency by proposing methods that do not focus on the model architecture, but in external ways of improving the results (Joulin et al., 2017; Howard and Ruder, 2018). However, the listed strategies are still underdeveloped for HTC, and the most recent and effective methods are still computationally expensive (Yang et al., 2019; Banerjee et al., 2019). The described context opens our research question: How can we improve HTC at a lower computational cost? Therefore, our focus and main contributions are: • A robust model for HTC, with few parameters and short training time, that follows the paradigm of sequence-to-sequence learning. • The practical application of an auxiliary (and not expensive) task that strengthens the model capacity for prediction in a bottom-up scheme. • An exploration of strategies that take advantage of external information about textual definition of the classes. 
We encode the definitions in the word vector space and use them in: (1) each prediction step and (2) an adapted beam search. 2 Efficient strategies for hierarchical text classification 2.1 Sequence-to-sequence approach Hierarchical classification resembles a multi-label classification where there are hierarchical relationships between labels, i. e., labels at lower levels are conditioned by labels at higher levels in the hierarchy. For that reason, we differ from previous work and address the task as a sequence-to-sequence problem, where the encoder receives a textual description and the decoder generates a class at each 2253 step (from the highest to the lowest layer in the hierarchy). Our baseline model thereafter is a sequenceto-sequence neural network (Sutskever et al., 2014) composed of: Embedding layer: To transform a word into a vector wi, where i ∈{1,...,N} and N is the number of tokens in the input document. We use pre-trained word embeddings from Common Crawl (Grave et al., 2018) for the weights of this layer, and we do not fine-tune them during training time. Encoder: It is a bidirectional GRU (Cho et al., 2014) unit that takes as input a sequence of word vectors and computes a hidden vector hi per each i time step of the sequence. Attention layer: We employ the attention variant of Bahdanau et al. (2015), and generate a context vector ai for each encoder output hi. Decoder: To use the context ai and hidden hi vectors to predict the clj ljk class of the hierarchy, where j ∈{1,...,M}. M is the number of levels in the class taxonomy, lj represents the j-th layer of the hierarchy, and ljk is the k-th class in level lj. Similar to the encoder, we use a bidirectional GRU. 2.2 Auxiliary task For an input sequence of words, the model predicts a sequence of classes. Given the nature of recurrent neural networks, iterating over a sequence stores historical information. Therefore, for the last output computation we could take the previous inputs into consideration. Previous work in HTC (Kowsari et al., 2017; Sinha et al., 2018) usually starts by predicting the most general category (Parent node) and continues to a more specific class (Child nodes) each time. However, by following the common approach, the prediction of the most specific classes will have a smaller impact than the more general ones when the error propagates. In this way, it could be harder to learn the relationship of the last target class with the upper ones. Inspired by reversing the order of words in the input sequence (Sutskever et al., 2014), we propose an auxiliary synthetic task that changes the order of the target class levels in the output sequence. In other words, we go upward from the child nodes to the parent. With the proposed task, the parent and child nodes will have a similar impact on the error propagation, and the network could learn more robust representations. 2.3 Class-definition embeddings for external knowledge integration We analyze the potential of using textual definitions of classes for external knowledge integration. For each class clj ljk in any level lj of the hierarchy, we could obtain a raw text definition from an external dictionary to compute a vector representation cv, that from now on we call the class definition vector (CDV). We thereafter use the CDV representations with the two following strategies. 2.3.1 Parent node conditioning (PNC) For a given document D, we classify it among the target classes C = (cl1 l1k,...,clM lMk), where M is the number of layers in the taxonomy. 
In our approach, we predict the highest-level class cl1 l1k and then use its CDV representation cvl1 l1k as an additional input (alongside the encoder outputs) to the attention layer for the prediction of the next level class cl2 l2k. We continue the process for all the layers of the class hierarchy. 2.3.2 Adapted beam search Beam search is a search strategy commonly used in neural machine translation (Freitag and AlOnaizan, 2017), but the algorithm can be used in any problem that involves word-by-word decoding. We assess the impact of applying beam search in HTC, and introduce an adapted version that takes advantage of the computed CDV representations: T X i=0 logP(yi|x, y1, ..., yt−1) + CD(z, yi) (1) In each step of the decoding phase, we predict a class that belongs to the corresponding level of the class hierarchy. Given a time step i, the beam search expands all the k (beam size) possible class candidates and sort them by their logarithmic probability. In addition to the original calculation, we compute the cosine distance between the CDV of a class candidate and the average vector of the word embeddings from the textual description z that we want to classify (CD component in Equation 1). We add the new term to the logarithmic probability of each class candidate, re-order them based on the new score, and preserve the top-k candidates. Our intuition behind the added component is similar to the shallow fusion in the decoder of a 2254 WOS DBpedia Number of documents 46,985 342,782 Classes in level 1 7 9 Classes in level 2 143 70 Classes in level 3 NA 219 Table 1: Information of WOS and DBPedia corpora neural machine translation system (Gulcehre et al., 2017). Thus, the class-definition representation might introduce a bias in the decoding, and help to identify classes with similar scores in the classification model. 3 Experimental setup Datasets. We test our model and proposed strategies in two well-known hierarchical text classification datasets previously used in the evaluation of state-of-the-art methods for English: Web of Science (WOS; Kowsari et al., 2017) and DBpedia (Sinha et al., 2018). The former includes parent classes of scientific areas such as Biochemistry or Psychology, whereas the latter considers more general topics like Sports Season, Event or Work. General information for both datasets is presented in Table 1. Model, hyper-parameters and training. We use the AllenNLP framework (Gardner et al., 2018) to implement our methods. Our baseline consists of the model specified in §2.1. For all experiments, we use 300 units in the hidden layer, 300 for embedding size, and a batch size of 100. During training time, we employ Adam optimiser (Kingma and Ba, 2014) with default parameters (β1 = 0.9, β2 = 0.98, ε = 10−9). We also use a learning rate of 0.001, that is divided by ten after four consecutive epochs without improvements in the validation split. Furthermore, we apply a dropout of 0.3 in the bidirectional GRU encoderdecoder, clip the gradient with 0.5, and train the model for 30 epochs. For evaluation, we select the best model in the validation set of the 30 epochs concerning the accuracy metric. Settings for the proposed strategies. • For learning with the auxiliary task, we interleave the loss function between the main prediction task and the auxiliary task (§2.2) every two epochs with the same learning rate. We aim for both tasks to have equivalent relevance in the network training. 
3 Experimental setup

Datasets. We test our model and proposed strategies on two well-known hierarchical text classification datasets previously used in the evaluation of state-of-the-art methods for English: Web of Science (WOS; Kowsari et al., 2017) and DBpedia (Sinha et al., 2018). The former includes parent classes of scientific areas such as Biochemistry or Psychology, whereas the latter considers more general topics like Sports Season, Event or Work. General information for both datasets is presented in Table 1.

                         WOS       DBpedia
Number of documents      46,985    342,782
Classes in level 1       7         9
Classes in level 2       143       70
Classes in level 3       NA        219

Table 1: Information on the WOS and DBpedia corpora.

Model, hyper-parameters and training. We use the AllenNLP framework (Gardner et al., 2018) to implement our methods. Our baseline consists of the model specified in §2.1. For all experiments, we use 300 units in the hidden layer, 300 for the embedding size, and a batch size of 100. During training, we employ the Adam optimiser (Kingma and Ba, 2014) with default parameters (β1 = 0.9, β2 = 0.98, ε = 10^-9). We use a learning rate of 0.001, which is divided by ten after four consecutive epochs without improvement on the validation split. Furthermore, we apply a dropout of 0.3 in the bidirectional GRU encoder-decoder, clip the gradient at 0.5, and train the model for 30 epochs. For evaluation, we select the model from the 30 epochs that achieves the best validation accuracy.

Settings for the proposed strategies.

• For learning with the auxiliary task, we interleave the loss function between the main prediction task and the auxiliary task (§2.2) every two epochs with the same learning rate. We aim for both tasks to have equivalent relevance in the network training.

• To compute the class-definition vectors, we extract the textual definitions using the Oxford Dictionaries API (https://developer.oxforddictionaries.com/). We vectorize each token of the definitions using pre-trained Common Crawl embeddings (the same as in the embedding layer) and average them.

• For the beam search experiments, we employ a beam size (k) of five and assess both the original and adapted strategies. The sequence-to-sequence baseline model uses a beam size of one (in preliminary experiments we also considered a beam size of ten, but did not observe a significant improvement).

4 Results and discussion

Table 2 presents the average accuracy of our experiments with each proposed method over the test set. In all cases we keep the same architecture and hyper-parameters in order to estimate the impact of the auxiliary task, parent node conditioning, and the beam search variants independently. Moreover, we examine the performance of combinations of our approaches (we tried all possible combinations, but only report those that improve over their individual counterparts).

                                                  WOS             DBpedia
Individual strategies
seq2seq baseline                                  78.84 ± 0.17    95.12 ± 0.01
Auxiliary task                                   *78.93 ± 0.52   *95.21 ± 0.16
Parent node conditioning (PNC)                   *79.01 ± 0.18   *95.26 ± 0.09
Beam search (original)                           *78.90 ± 0.25   *95.25 ± 0.01
Beam search (modified)                           *78.90 ± 0.28   *95.26 ± 0.01
Combined strategies
Auxiliary task + PNC [7M params.]                *79.79 ± 0.45   *95.23 ± 0.13
Beam search (original) + PNC                     *79.18 ± 0.19   *95.30 ± 0.10
Beam search (modified) + PNC                     *79.18 ± 0.23   *95.30 ± 0.11
Auxiliary task + PNC + Beam search (orig.)       *79.92 ± 0.51   *95.26 ± 0.12
Auxiliary task + PNC + Beam search (mod.)        *79.87 ± 0.49   *95.26 ± 0.12
Previous work
HDLTex (Kowsari et al., 2017) [5B params.]        76.58           92.10
Sinha et al. (2018) [34M params.]                 77.46           93.72

Table 2: Test accuracy (↑ higher is better) for our proposed strategies, tested separately and combined, and a comparison with previous classifiers. Reported values are averaged across five runs, and * indicates Almost Stochastic Dominance (Dror et al., 2019) over the seq2seq baseline with a significance level of 0.05. The number of parameters of each combined strategy is up to seven million.

In the individual analysis, we observe that parent node conditioning and the auxiliary task provide significant gains over the seq2seq baseline, which supports our initial hypothesis about the relevance of the auxiliary loss and the information from the parent class. Conversely, we note that the modified beam search strategy has the lowest gain of all the experiments on WOS, although it provides one of the best scores for DBpedia. One potential reason is the new term added for the top-k candidate selection (see Eq. 1), which strongly depends on the quality of the sentence representation: the classes of WOS include scientific areas that are usually more complex to define than the categories of the DBpedia database. (Averaging word vectors to generate a sentence embedding is a basic approach; further work could explore encoding the class-definition embeddings directly from the training data, or weighting the scores of the classification model and the similarity score to balance the contribution of each term.)

We also notice that the accuracy gains are larger for all experiments on the WOS corpus than on DBpedia. A primary reason might be the number of documents in each dataset, as DBpedia contains almost seven times the number of documents of WOS. If we have a large number of training samples, the architecture is capable of learning to discriminate correctly between classes with the original training data alone. However, in less-resourced scenarios, our proposed approaches with external knowledge integration could have a strong positive impact.

As our strategies are orthogonal and focus on different parts of the model architecture, we proceed to combine them and assess their joint performance. In the case of WOS, we observe that every combination of strategies improves on its single counterparts, and the best accuracy is achieved by merging the auxiliary task and PNC, but with an original beam search of size five. Concerning DBpedia, most of the results are very close to each other, given the high accuracy already achieved by the seq2seq baseline. However, we note the relevance of combining the PNC strategy with the original or modified beam search to increase performance.

Finally, we compare our strategies to the best HTC models reported in previous studies (Kowsari et al., 2017; Sinha et al., 2018). We observe that our methods outperform them in terms of both accuracy and number of parameters. Moreover, training each model takes around one hour (for the 30 epochs), and the proposed auxiliary task does not add any significant delay.

5 Related work

Most studies on flat text classification focus primarily on proposing novel neural architectures (Conneau et al., 2017; Zhang et al., 2015). Other approaches involve a transfer learning step to take advantage of unlabelled data: McCann et al. (2017) used the encoder of a neural machine translation model to provide context for other natural language processing models, while Howard and Ruder (2018) pre-trained a language model on a general-domain monolingual corpus and then fine-tuned it for text classification tasks.

In HTC, there are local and global strategies (Silla and Freitas, 2011). The former exploit local information per layer of the taxonomy, whereas the latter address the task with a single model for all classes and levels. Neural models show excellent performance for both approaches (Kowsari et al., 2017; Sinha et al., 2018). Furthermore, other studies use transfer learning to introduce dependencies between parent and child categories (Banerjee et al., 2019) and deep reinforcement learning to consider hierarchy information during inference (Mao et al., 2019).

The incorporation of external information into neural models has shown potential in different tasks, such as flat text classification. By using categorical metadata of the target classes (Kim et al., 2019) and word-level linguistic features (Margatina et al., 2019), previous studies have notably improved flat text classification at a moderate computational cost. In addition, Liu et al. (2016) outperform several state-of-the-art classification baselines by employing multi-task learning. To our knowledge, these strategies have not been explicitly exploited for HTC. For this reason, our study focuses on exploring and evaluating methods that enable hierarchical classifiers to achieve an overall accuracy improvement with as little additional complexity as possible.
6 Conclusion We presented a bag of tricks to efficiently improve hierarchical text classification by adding an auxiliary task of reverse hierarchy prediction and integrating external knowledge (vectorized textual definitions of classes in a parent node conditioning scheme and in the beam search). Our proposed methods established new state-of-the-art results with class hierarchies on the WOS and DBpedia datasets in English. Finally, we also open a path to study integration of knowledge into the decoding phase, which can benefit other tasks such as neural machine translation. Acknowledgements We are thankful to the Informatics’ support team at PUCP, and specially to Corrado Daly. We also appreciate the collaboration of Robert Aduviri and Fabricio Monsalve in a previous related project that build up our research question. Besides, we thanks the comments of Fernando Alva-Manchego on a draft version and the feedback of our anonymous reviewers. Finally, we acknowledge the support of NVIDIA Corporation with the donation of a Titan Xp GPU used for the study. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Siddhartha Banerjee, Cem Akkaya, Francisco PerezSorrosal, and Kostas Tsioutsiouliklis. 2019. Hierarchical transfer learning for multi-label text classification. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6295–6300, Florence, Italy. Association for Computational Linguistics. Kyunghyun Cho, Bart van Merri¨enboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder–decoder approaches. In Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, pages 103–111, Doha, Qatar. Association for Computational Linguistics. Alexis Conneau, Holger Schwenk, Lo¨ıc Barrault, and Yann Lecun. 2017. Very deep convolutional networks for text classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 1107–1116, Valencia, Spain. Association for Computational Linguistics. Rotem Dror, Segev Shlomov, and Roi Reichart. 2019. Deep dominance - how to properly compare deep neural models. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2773–2785, Florence, Italy. Association for Computational Linguistics. Markus Freitag and Yaser Al-Onaizan. 2017. Beam search strategies for neural machine translation. In Proceedings of the First Workshop on Neural Machine Translation, pages 56–60, Vancouver. Association for Computational Linguistics. Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Peters, Michael Schmitz, and Luke Zettlemoyer. 2018. AllenNLP: A deep semantic natural language processing platform. In Proceedings of Workshop for NLP Open Source Software (NLP-OSS), pages 1– 6, Melbourne, Australia. Association for Computational Linguistics. Edouard Grave, Piotr Bojanowski, Prakhar Gupta, Armand Joulin, and Tomas Mikolov. 2018. Learning word vectors for 157 languages. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA). 
Caglar Gulcehre, Orhan Firat, Kelvin Xu, Kyunghyun Cho, and Yoshua Bengio. 2017. On integrating a language model into neural machine translation. Computer Speech & Language, 45:137 – 148. Jeremy Howard and Sebastian Ruder. 2018. Universal Language Model Fine-tuning for Text Classification. In 56th Annual Meeting of the Association for Computational Linguistics. Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2017. Bag of tricks for efficient text classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 427–431, Valencia, Spain. Association for Computational Linguistics. Jihyeok Kim, Reinald Kim Amplayo, Kyungjae Lee, Sua Sung, Minji Seo, and Seung-won Hwang. 2019. Categorical metadata representation for customized 2257 text classification. Transactions of the Association for Computational Linguistics, 7:201–215. Sang-Bum Kim, Kyoung-Soo Han, Hae-Chang Rim, and Sung Hyon Myaeng. 2006. Some effective techniques for naive bayes text classification. IEEE Trans. on Knowl. and Data Eng., 18(11):1457–1466. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Kamran Kowsari, Donald E Brown, Mojtaba Heidarysafa, Kiana Jafari Meimandi, Matthew S Gerber, and Laura E Barnes. 2017. HDLTex: Hierarchical deep learning for text classification. In 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), pages 364–371. IEEE. Liqun Liu, Funan Mu, Pengyu Li, Xin Mu, Jing Tang, Xingsheng Ai, Ran Fu, Lifeng Wang, and Xing Zhou. 2019. NeuralClassifier: An open-source neural hierarchical multi-label text classification toolkit. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 87–92, Florence, Italy. Association for Computational Linguistics. Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2016. Recurrent neural network for text classification with multi-task learning. In Proceedings of the TwentyFifth International Joint Conference on Artificial Intelligence, IJCAI’16, pages 2873–2879. AAAI Press. Tie-Yan Liu, Yiming Yang, Hao Wan, Hua-Jun Zeng, Zheng Chen, and Wei-Ying Ma. 2005. Support vector machines classification with a very large-scale taxonomy. SIGKDD Explor. Newsl., 7(1):36–43. Yuning Mao, Jingjing Tian, Jiawei Han, and Xiang Ren. 2019. Hierarchical text classification with reinforced label assignment. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 445–455, Hong Kong, China. Association for Computational Linguistics. Katerina Margatina, Christos Baziotis, and Alexandros Potamianos. 2019. Attention-based conditioning methods for external knowledge integration. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3944– 3951, Florence, Italy. Association for Computational Linguistics. Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. 2017. Learned in translation: Contextualized word vectors. In Advances in Neural Information Processing Systems, pages 6297–6308. Hao Peng, Jianxin Li, Yu He, Yaopeng Liu, Mengjiao Bao, Lihong Wang, Yangqiu Song, and Qiang Yang. 2018. Large-scale hierarchical text classification with recursively regularized deep graph-cnn. 
In Proceedings of the 2018 World Wide Web Conference, WWW ’18, pages 1063–1072, Republic and Canton of Geneva, Switzerland. International World Wide Web Conferences Steering Committee. Carlos N. Silla and Alex A. Freitas. 2011. A survey of hierarchical classification across different application domains. Data Mining and Knowledge Discovery, 22(1-2):31–72. Koustuv Sinha, Yue Dong, Jackie Chi Kit Cheung, and Derek Ruths. 2018. A hierarchical neural attentionbased text classifier. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 817–823, Brussels, Belgium. Association for Computational Linguistics. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 3104–3112. Curran Associates, Inc. Pengcheng Yang, Fuli Luo, Shuming Ma, Junyang Lin, and Xu Sun. 2019. A deep reinforced sequence-toset model for multi-label classification. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5252–5258, Florence, Italy. Association for Computational Linguistics. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 649–657. Curran Associates, Inc.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2258–2269 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2258 Investigating the Effect of Auxiliary Objectives for the Automated Grading of Learner English Speech Transcriptions Hannah Craighead1∗ Andrew Caines2 Paula Buttery2 Helen Yannakoudakis3 1 Computer Laboratory, University of Cambridge, U.K. [email protected] 2 ALTA Institute & Computer Laboratory, University of Cambridge, U.K. {andrew.caines|paula.buttery}@cl.cam.ac.uk 3 Dept. of Informatics, King’s College London, U.K. [email protected] Abstract We address the task of automatically grading the language proficiency of spontaneous speech based on textual features from automatic speech recognition transcripts. Motivated by recent advances in multi-task learning, we develop neural networks trained in a multi-task fashion that learn to predict the proficiency level of non-native English speakers by taking advantage of inductive transfer between the main task (grading) and auxiliary prediction tasks: morpho-syntactic labeling, language modeling, and native language identification (L1). We encode the transcriptions with both bi-directional recurrent neural networks and with bi-directional representations from transformers, compare against a featurerich baseline, and analyse performance at different proficiency levels and with transcriptions of varying error rates. Our best performance comes from a transformer encoder with L1 prediction as an auxiliary task. We discuss areas for improvement and potential applications for text-only speech scoring. 1 Introduction The growing demand for the ability to communicate in English means that both academic and commercial efforts are increasing to provide automated tutoring and assessment systems. These educational systems address the increasing need for online resources to help students learn and to map users to the validated proficiency scales which play a critical role in securing education and work opportunities (British Council, 2013). Language learning applications delivered through smart speakers such as Amazon Alexa and Google Home are a novel form of educational technology. These offer obvious benefits to users in terms of immediacy, interaction and ∗Currently at Google U.K. convenience. However, it remains challenging for application providers to assess language content collected through these means. Audio recordings are not returned to the developers for privacy reasons: instead only text responses are returned, the output of automated speech recognition (ASR) systems. This sets a new task in educational applications: the automated proficiency assessment of speech based on transcriptions alone. In this paper we report on our efforts to grade learner English transcriptions obtained from ASR systems, comparing a feature-rich baseline with neural networks trained on multi-task objectives. To assess spontaneous speech, automated grading systems tend to use a combination of features extracted from the audio recording and the transcription resulting from ASR. For instance, SpeechRaterTM by the Educational Testing Service uses text-based features based on frequency counts and lexical unigrams – among others, the number of word tokens per second, the length of interpausal units in words, the vocabulary size normalized by recording duration – and score predictions are made using linear regression (Zechner et al., 2007, 2009; Higgins et al., 2011). 
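The feature-plus-regression pattern described above is straightforward to illustrate. The sketch below is purely illustrative of that general pattern and is not SpeechRater or any of the cited systems: the feature choices and names are simplified stand-ins, and only transcript-derivable quantities are shown.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

FILLERS = {"um", "umm", "er", "err", "uh"}

def transcript_features(transcript):
    # Illustrative transcript-only features; names and choices are ours.
    # Audio-derived features (e.g. words per second) would also need the recording.
    tokens = transcript.lower().split()
    return [
        len(tokens),                         # response length in words
        len(set(tokens)),                    # vocabulary size
        sum(t in FILLERS for t in tokens),   # filled-pause count
    ]

# Hypothetical (transcript, examiner score) pairs.
texts = ["well i think umm the office should open earlier",
         "the company err needs a new marketing plan for this product"]
scores = [3.0, 4.5]
X = np.array([transcript_features(t) for t in texts])
grader = LinearRegression().fit(X, np.array(scores))
predicted = grader.predict(X)
```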
However, without the audio recordings, proficiency scoring must be performed based on the text alone. Thus robust methods for text-only speech scoring need to be developed to ensure the reliability and validity of educational applications in scenarios such as smart speakers. Relatively few automated speech graders use neural approaches that incorporate text-based features from transcripts. Chen et al. (2018) used a linear regression model on the concatenated high-level representation outputs of two separate RNNs for sequential audio and text inputs; Qian et al. (2018) use a bi-directional RNN which uses word embeddings concatenated with an encoding of the given prompt and an attention mechanism over all tokens to predict grades. 2259 In this work, we address the task of automatically grading the language proficiency of spontaneous speech based on ASR transcriptions only, and seek to investigate the extent to which current state-ofthe-art neural approaches to language assessment are effective for the task at hand. Specifically, we make the following contributions: 1. We develop a multi-task framework that leverages inductive transfer between our main task (grading spoken language proficiency) and auxiliary objectives – predicting morphosyntactic labels, the learner’s first (‘native’) language (L1) and language modeling (LM). 2. We investigate the performance of two encoder types for the speech scoring task: bidirectional recurrent neural networks, and bidirectional representations from transformers. 3. We analyze model performance under different conditions: namely, with and without filled pauses included in the transcriptions, with varying rates of word error in the ASR transcriptions, and according to the proficiency of the student response. 4. We make our code publicly available for others to use for benchmarking and replication experiments.1 In contrast to feature-based scoring, we instead train neural networks on ASR transcriptions which are labeled with proficiency scores assigned by human examiners, and guide the networks with objectives that prioritize language understanding. To the best of our knowledge, there has been no previous work using text-based auxiliary training objectives in automated speech grading systems. 2 Related Work Automated grading of student responses to exam questions until recently tended to adopt featurebased approaches to score prediction, for instance using distinctive word or part-of-speech n-grams (Page and Paulus, 1968; Attali and Burstein, 2004; Bhat and Yoon, 2015; Sakaguchi et al., 2015), as well as grammatical errors and phrase-structure rules (Yannakoudakis et al., 2011; Andersen et al., 1https://github.com/hcraighead/ automated-english-transcription-grader; the corpus we work with is not publicly available as it is private exams data, but the code repository allows you to work with any set of English texts and proficiency scores. 2013). More recently, word and character embeddings have served as input to deep neural network models, with a final regression layer predicting the score (Alikaniotis et al., 2016; Taghipour and Ng, 2016; Dong et al., 2017; Jin et al., 2018). The advantage of the latter approach is the relative ease of data pre-processing since text representations are learned through distributional methods rather than hand-crafted features. 
The field of NLP has seen advances recently thanks to a shift from fixed word embeddings to contextualized representations such as ELMo (Peters et al., 2018) and those which can be obtained from large transformer models such as BERT (Devlin et al., 2019). Similarly in text scoring, some have incorporated contextualized word embeddings to improve performance (Nadeem et al., 2019). We now apply such approaches to the grading of spoken transcriptions in a scenario where the audio, or information derived from it, is not available. In other words the task is analogous to essay scoring except for the presence of characteristic speech features such as false starts, repetitions and filled pauses (Moore et al., 2015; Carter and McCarthy, 2017). This poses a particular challenge as most models used in data pre-processing and representation learning have been trained on written not spoken texts (Caines et al., 2017). Furthermore, most existing approaches to speech grading do have access to audio features, and indeed extract a large number of prosodic or duration-based features (Zechner et al., 2009; Higgins et al., 2011; Loukina et al., 2017; Wang et al., 2018a). Prosodic and phonological features extracted from the audio and ASR model are undoubtedly useful for human assessment of speech proficiency and for providing feedback. On the other hand, previous work suggests that models trained solely on ASR text-based features are competitive with those using only acoustic features or a combination of the two (Loukina and Cahill, 2016). Their interpretation of these results was that the transcription offers some proxy information for prosodic and phonological performance – for instance the presence of hesitation and silence markers, the number of word tokens in the transcription, and the transcription errors which might arise from mispronunciations. We instead allow our models to learn from auxiliary (morpho-syntactic and other) tasks: multi-task learning has been shown to help in automated essay 2260 Train Valid Test Total Candidates 691 297 225 1213 Transcriptions 4,589 1,982 1488 8,059 Total words 205,311 91,224 67,832 343,367 Mean response length (words) 44.7 46.0 45.6 42.6 Table 1: Training, validation and test split statistics. scoring (Cummins and Rei, 2018) and grammatical error detection of learner English essays (Rei and Yannakoudakis, 2017), whilst information about a learner’s native language has been shown to help in error detection for English and the grading of Norwegian essays (Rozovskaya and Roth, 2011; Johan Berggren et al., 2019). Furthermore, multi-task learning objectives can allow the model to learn more general features of language and composition, and a much richer set of representations (Sanh et al., 2019), without relying on the availability of any external linguistic tools or annotations at inference time. 3 Data We train our models using spoken responses collected from candidates taking Cambridge Assessment’s BULATS examination2. The spoken section of the BULATS exam tests candidates’ proficiency in business English through monologue responses to a series of prompts. The candidate may speak for up to one minute in each response and we include only the prompts which invite spontaneous responses (we exclude the prompts which require reading aloud of given sentences, and prompts asking for personal information about the candidates). There are seven such prompts in each exam. 
Fortysix unique versions of the BULATS exam are represented in the training and test sets, meaning that there are 322 unique prompts (7 ∗46). Each response has been assigned a score between 0 and 6 by expert human examiners, with scoring increments of .5 available and with each whole integer mapping to a proficiency level on the Common European Framework of Reference for Languages (CEFR): a fail (score of 0), beginner (scores of 1, 2: A1 and A2); intermediate (scores 3, 4: B1 and B2); advanced (scores 5, 6: C1 and C2). Examiners are required to consider five attributes of each candidate’s speaking proficiency: pronun2https://www.cambridgeenglish.org/ exams-and-tests/bulats; now discontinued and replaced by the Linguaskill Business exam. ciation, hesitation, language resource, coherence and task achievement. In the transcription-only scenario, we cannot assess the first component, have only a proxy for the second in terms of filled pause occurrence (‘umm’, ‘err’, etc), but still have access to the other three components through the ASR transcriptions. Our data comes from 1213 exam candidates with six first languages in approximately uniform distribution: Arabic, Dutch, French, Polish, Thai and Vietnamese. The distribution of candidates over proficiency levels is approximately normal, with a peak over the intermediate scores (Figure 1). The train/validation/test split across candidates is roughly 55 : 25 : 20 as detailed by Table 1. Each candidate’s recordings are transcribed by a teacher–student ASR system with a latticefree maximum-mutual-information acoustic model (Kanda et al., 2017). The teacher–student training procedure uses Kullback–Leibler divergence between the word sequence posteriors from the student model and a teacher ensemble as the loss function (Wong and Gales, 2016). The result is a computationally efficient ASR system, as the student is able to decode in a single run to a similar level of performance as an ensemble decoder requiring multiple runs (Hinton et al., 2014). There is more information about the ASR system in Wang et al. (2018b). We also evaluate performance on manual transcriptions of the test set, in order to assess the impact of ASR errors on our models. A native speaker of English was asked to transcribe the recordings as faithfully as possible to include hesitations, disfluencies and partial words. A subset of 230 recordings were transcribed by a second native speaker: inter-annotator agreement on this subset is high (Cohen’s κ = .898). Compared against the annotator’s manual transcriptions, the word error rate of the ASR is 19.5% overall, but with variance from 32% for speakers with a score of 1, to 15% for speakers with scores 5 and 6. To be able to predict morpho-syntactic labels, 2261 Figure 1: Distribution of proficiency scores in the training and test sets. Figure 2: Transcription length distributions at different proficiency levels. we parse the data using UDPipe (Wijffels, 2018), trained on the Universal Dependencies (UD) English Web Treebank 2.4 made up of 255k words and 16.6k sentences from weblogs, newsgroups, emails, reviews, and Yahoo! answers (Silveira et al., 2014). We use UDPipe to automatically generate Penn Treebank part of speech (POS) tags (Taylor et al., 2003) and UDs (Nivre et al., 2016) for our training data. 
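Two quantities used above can be made precise with a short sketch: the CEFR band that a 0-6 score falls into, and the word error rate of an ASR transcription against a manual reference. This is an illustrative implementation, not the authors' tooling; the banding of half scores and the function names are our assumptions, and the WER routine is the standard Levenshtein formulation.

```python
def cefr_band(score):
    # Map a 0-6 score (0.5 increments) to the coarse bands described above.
    # How half scores are banded is an assumption of this sketch.
    if score < 1:
        return "fail"
    if score <= 2:
        return "beginner (A1-A2)"
    if score <= 4:
        return "intermediate (B1-B2)"
    return "advanced (C1-C2)"

def word_error_rate(reference, hypothesis):
    # WER = (substitutions + insertions + deletions) / reference length,
    # computed with a standard word-level edit-distance alignment.
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# Example: word_error_rate("i would like to umm open a shop",
#                          "i would like to open the shop")  -> 0.25
```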
Filled pauses were excluded before parsing, so that they would not affect the parse of other words in the transcription, but were then re-inserted with null parse values, in case they serve as a useful signal to the language proficiency models. Transcriptions were parsed as whole units: we did not attempt to delimit speech-units. For the most part this results in fairly lengthy, but not impractically long, word sequences. The ASR transcriptions are on average 44 word tokens long (σ = 33.0), with a minimum of 2 tokens, a maximum of 179, and 50% of the texts being between 23 and 54 tokens long. As seen in Figure 2, the distribution of transcription length differs according to proficiency level: the failing grades tend to be very short responses, the beginner level responses are a little longer, and the bulk of intermediate responses are between 25 and 50 tokens long (recordings are between 20 and 60 seconds duration). 4 Model architecture The speech grader3 takes a sequence of token embeddings [x1, . . . , xn] as input and predicts a proficiency level score. Tokens are first converted to vector representations xt, and then passed through an encoder. We trial two different encoders: a bidirectional LSTM (Hochreiter and Schmidhuber, 1997) and BERT (Devlin et al., 2019). The encoding is passed through the prediction head, a series of linear layers and activation functions, where the final activation function is bound to the scoring scale (0-6). The model uses mean squared error (MSE) as the loss function Escore for the main task. LSTM encoder The bi-directional LSTM encoder uses the word-level tokenization provided by UDPipe. For each token, the hidden states of the two LSTMs are concatenated, creating a context-aware hidden state ht = [−→ ht; ←− ht]. The hidden layers that are formed at the final timesteps of the bidirectional LSTM (h1, hn) are concatenated for the scoring prediction head. BERT encoder The BERT encoder uses a pretrained model checkpoint and tokenizer, specifically bert-base-uncased, provided by the HuggingFace Transformer library (Wolf et al., 2019). 3All of our models were built using PyTorch (Paszke et al., 2019). 2262 Figure 3: Encoder architecture of automated speech grader using a bi-directional LSTM for one time step t: two auxiliary objective architecture (GR and POS) on the left; LM objective architecture on the right. BERT’s tokenizer uses the WordPiece model (Zhang, 2016), resulting in a much larger vocabulary than the LSTM encoder. BERT embeddings are extracted from a transformer trained with a masked LM objective: a percentage of input tokens are masked and then the network learns to predict the masked tokens. BERT is also trained with a second objective: given two input sequences, it predicts whether one sequence directly follows another. A sequence level embedding is produced by pooling the hidden states of the special first token, [CLS], resulting in a 768 dimensional embedding. Auxiliary objectives We further extend the model to incorporate auxiliary objectives, and experiment with four different tasks: language modelling (LM), native language prediction (L1), POS-tagging, and UD prediction where we predict the UD type of a dependent with its head (see Section 3). These auxiliary objectives are based on previous work indicating that learning to make such predictions aids in tasks such as essay scoring and grammatical error detection (Cheng et al., 2015; Rei and Yannakoudakis, 2017; Cummins and Rei, 2018; Johan Berggren et al., 2019; Bell et al., 2019). 
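A minimal sketch of the BERT scoring path just described (encode the transcription, take the [CLS] hidden state, and map it through a prediction head whose output is bounded to the 0-6 scale, trained with MSE) is shown below. It assumes the HuggingFace bert-base-uncased checkpoint and a recent transformers version; the head sizes and the example transcription are illustrative rather than the authors' exact configuration. The auxiliary objectives attached to this encoder are described next.

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class BertSpeechGrader(nn.Module):
    # BERT encoder plus a regression head bounded to the 0-6 scoring scale.
    def __init__(self, max_score=6.0):
        super().__init__()
        self.encoder = BertModel.from_pretrained("bert-base-uncased")
        hidden = self.encoder.config.hidden_size          # 768
        self.head = nn.Sequential(nn.Linear(hidden, hidden),
                                  nn.ReLU(),
                                  nn.Linear(hidden, 1))
        self.max_score = max_score

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]                 # [CLS] hidden state
        return self.max_score * torch.sigmoid(self.head(cls)).squeeze(-1)

# Usage sketch with a made-up transcription and gold score.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertSpeechGrader()
batch = tokenizer(["well i think the company should umm invest more in training"],
                  padding="max_length", truncation=True, max_length=128,
                  return_tensors="pt")
pred = model(batch["input_ids"], batch["attention_mask"])
loss = nn.MSELoss()(pred, torch.tensor([3.5]))
```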
Specifically, for the last three tasks, we predict a label y per word x_t (Figure 3; left). Each task s is assigned an individual prediction head, identical to the scoring head described above, followed by a softmax layer that produces a probability distribution over the set of output labels in place of the bounded scoring activation function. When using BERT, our model only predicts labels for auxiliary objectives on the first token of a word, in the same fashion as Devlin et al. (2019)'s evaluation of BERT on named entity recognition.

The LM objective is implemented differently for each model. The LSTM (Figure 3; right) has two additional hidden layers (Rei, 2017): $\overrightarrow{m}_t = \tanh(\overrightarrow{W}_l \overrightarrow{h}_t)$ and $\overleftarrow{m}_t = \tanh(\overleftarrow{W}_l \overleftarrow{h}_t)$, where $\overrightarrow{W}_l$ and $\overleftarrow{W}_l$ are direction-specific weight matrices. The surrounding tokens w_{t-1} and w_{t+1} are then predicted from each hidden state using a softmax output layer. In contrast, the BERT model implements the same masked language modeling objective as used during pre-training. We implement this identically to Devlin et al. (2019): 15% of the tokens in the sequence are randomly selected, and of those, 80% are masked, 10% are replaced with another token and 10% are left unchanged. The loss is only computed over the selected tokens. Note that filled pauses are not used for the auxiliary objectives.

        LM     L1     POS    UD
LSTM    0.1    0.01   0.005  0.001
BERT    0.05   0.5    0.1    0.01

Table 2: Weighting values for the auxiliary objectives for the LSTM and BERT encoders.

The overall loss function E is adapted using a similar approach to Cummins and Rei (2018): a weighted sum of the scoring loss (main task) E_score and the auxiliary task losses E_aux, where T is the total number of auxiliary tasks. All of the auxiliary tasks use cross-entropy loss, where y_{t,l} is the predicted probability of token t having label l, and \tilde{y}_{t,l} has the value 1 when l is the correct label for token t and 0 otherwise:

    E_{aux} = -\frac{1}{T} \sum_{t=1}^{T} \sum_{l=1}^{L} \tilde{y}_{t,l} \log(y_{t,l})    (1)

    E = (1 - \alpha) \times E_{score} + \alpha \times E_{aux}    (2)

Model hyper-parameters are tuned based on MSE on the validation set. The model is optimized using Adam (Kingma and Ba, 2014), with a learning rate of 0.001 that linearly decreases during training, for 3-5 epochs (when trained with no,
Weightings for each of the auxiliary objectives were selected by evaluation on the validation set and are outlined in Table 2. Baseline model Our baseline approach is a feature-based model of the type which has been used in previous research (Vajjala and Rama, 2018; Yannakoudakis et al., 2018). Specifically, we train a linear regression model and use as features tf– idf weighted word and POS n-grams (up to trigrams), grammatical constructions extracted from the phrase-structure trees, the length of the transcript, and the number of errors, estimated by counting the number of trigrams that are absent from a large background corpus of correct English (Ferraresi et al., 2008). Evaluation Our primary metric is root-meansquare error (RMSE), which results in real valued average distances from the gold standard examiner scores on our 0–6 scale. For each model we also report Pearson’s correlation coefficient with the true scores and the percent of predictions which are within a half or one score from the reference score (≤0.5 and ≤1.0). These can be thought of as tolerable error thresholds where being out-by-two can have severe consequences for the student (for example, affecting employment or education prospects). Bear in 4Initial experiments showed that fixed pre-trained word embeddings such as GloVe (Pennington et al., 2014) do not improve performance further. mind that human examiners are thought to correlate on proficiency scoring at about 0.8, and that most exams are graded by a single examiner, and the idea of tolerable error becomes relevant to human as well as machine scoring. It would be a useful exercise to collect within 0.5 and within 1.0 scores from human examiners. 5 Results We ran a series of experiments to analyze the impact that data pre-processing and encoder design have on the performance of our automated speech grader. All results presented are computed over 10 repetitions, include filled pause information and use an ASR system with a WER of 19.5% (see Section 3) unless otherwise stated. 5.1 Encoder Table 3 compares the results for the two different encoders: LSTM and BERT. Using BERT significantly increases the performance of the speech grader, RMSE reduces by approximately 0.1 and the number of responses graded within 0.5 or 1 point of examiner provided score increases by approximately 5.5%. 5.2 Auxiliary objectives Our results, in Table 3, indicate that certain auxiliary objectives can improve the performance of our automated speech grader. The LSTM gains significantly when applying multi-task learning from POS, UD or LM prediction tasks. It is also possible that these objectives help to account for errors in ASR by identifying instances where the expected word or morpho-syntactic label differs from the provided input. 2264 Figure 4: RMSE of LSTM and BERT speech graders trained and tested on ASR systems of decreasing WER. We also trained models for all possible combinations of auxiliary objectives. While several of these were significantly better than the scoring only model, only one, LSTM with POS+UD+L1 (‘combo’), produced better results than the best performing single task model. These results were not significantly better than the single-task POS prediction model, though we did not explore tuning the alpha weighting values for the combination models. In contrast, BERT only receives a significant improvement in grading ability when using the L1 prediction task. 
Since BERT already has linguistic knowledge from external pre-training, it is likely that the L1 prediction helps to identify mistakes that are typical of particular L1 learners and the level of proficiency these errors equate to. No combinations of auxiliary objectives led to any improvement for the BERT encoder. 5.3 Impact of ASR performance To investigate the impact that ASR system quality has on an automated speech grader, we train models using output from ASR systems with varying word error rates. We then evaluate these models on output from each ASR system to analyze the grader’s dependence on the word error idiosyncrasies of the system used during training. We also evaluate on manual transcriptions provided by annotators. The ASR systems have WER’s of 25.5%, 21.7% and 19.5% on the test set. Figure 4 shows, as expected, that training a speech grader with data from an ASR system with lower word error rates produces better results. However, it is interesting to note that this holds true even when evaluating with data from inferior ASR systems. These results suggest that the speech grader is relatively invariant to the quality of the ASR it is being evaluated on within the range of word error rates we have tested. Difference in ASR quality has a bigger influence on the RMSE when using an LSTM encoder compared to a BERT encoder. BERT’s tolerance for errors in input makes sense when considering that one of its training objectives attempts to recover the ground truth after the input is perturbed. Interestingly, both models perform poorly on manually transcribed data. A contribution to this is the quality of the manual transcriptions themselves, which will have an error rate far below those of the ASR systems. Moreover, three fundamental differences in transcription format are that the human transcriber has access to an ‘unclear’ token for occasions where the audio quality is poor or the candidate’s voice is obscured: the ASR on the other hand will attempt to transcribe such portions of the audio with real words from the vocabulary. Secondly, there are many more filled pauses in the human transcriptions than in the ASR: in total 9% of word tokens are filled pauses in the manual transcription, versus 5.1% for the best ASR. Thirdly, the manual transcriptions are about 7% longer than the machine transcriptions, a consequence of the human transcribers more accurately picking up details in the audio recording, and transcribing more words than the ASR systems. All these differences mean that the manual transcriptions are quite different from the ASR transcriptions the speech graders are trained on, therefore the models perform less well. 5.4 Impact of filled pauses Though this task aims to utilize only textual features to perform automated speech grading, limited 2265 LSTM model BERT model Test data Test data With FPs FPs removed With FPs FPs removed Training data RMSE PCC RMSE PCC RMSE PCC RMSE PCC With FPs 1.022 0.681 1.026 0.681 0.921 0.762 0.926† 0.761 FPs removed 1.021 0.682 0.917 0.762 Table 4: Evaluation of the LSTM (left) and BERT (right) single-task scoring models with filled pauses retained in the training and test sets (With FPs) and when they are filtered out (FPs removed). † indicates significant difference (paired t-test, α = 0.05) compared to the default result with FPs in train and test. 
Score   Baseline                  LSTM Combo                BERT+L1
        RMSE    ≤0.5   ≤1.0       RMSE    ≤0.5   ≤1.0       RMSE    ≤0.5   ≤1.0
0       2.180   0.0    17.6       1.920   3.5    27.6       1.660   10.3   48.3
1       1.400   8.0    69.0       1.220   24.0   54.0       1.170   31.0   53.0
2       1.040   38.9   80.0       1.000   34.4   69.9       1.000   31.7   64.5
3       0.824   57.8   90.3       0.850   44.1   73.9       0.788   48.6   79.6
4       0.721   68.4   94.0       0.756   53.3   82.7       0.735   56.3   86.2
5       0.950   52.1   83.1       0.867   41.8   77.0       0.677   59.2   87.8
6       1.710   21.4   33.3       1.530   4.8    14.3       1.210   14.3   47.6

Table 5: Performance of the baseline, LSTM combo and BERT+L1 models at different proficiency levels: RMSE and within-0.5, within-1.0 percentages.

fluency information is available via the filled pause tokens output by the ASR system. These tokens are inserted into a transcription when the ASR recognizes one of a finite set of forms such as 'err', 'umm', etc. We examine the dependence of our automated speech graders on filled pauses for accurate proficiency prediction in two ways. Firstly, we train and evaluate models without filled pause information. Secondly, we evaluate models trained with filled pause information on the test set with filled pause information removed.

Removing filled pause tokens during both training and evaluation produced better results for both speech grader models, but not significantly so (Table 4). However, when evaluating a model trained with filled pause information on ASR output excluding filled pauses, the BERT model becomes significantly worse (RMSE 0.926 versus 0.921). This suggests that filled pauses only add noise to the training process, and that they should be excluded before auto-marking takes place.

We further inspected the occurrence of filled pauses in the training and test sets, and found no strong correlation between the filled pause frequencies in the transcriptions and the gold scores awarded by the examiner (ρ = -0.0268). This either indicates that candidates hesitate as much as each other regardless of their proficiency level, perhaps due to the pressure of an exam setting or the task of producing spoken monologues in a second language, or it indicates that filled pauses are a ubiquitous feature of spoken language used for planning and discourse management purposes (Maclay and Osgood, 1959; Clark and Fox Tree, 2002; Tottie, 2019). In any case, by removing them from the transcriptions, both the LSTM and BERT models are better able to assign a proficiency level to the text.

5.5 Proficiency level performance analysis

To assess the performance of the baseline against our best LSTM combo and BERT+L1 models at different proficiency levels, we treated our seven integer scores (from 0 to 6) as classes, rounding .5 scores up, and evaluated RMSE, within 0.5 and within 1.0 on a per-level basis (Table 5). Recall that 0 maps to a failing grade, scores of 1 and 2 are classed as beginner, 3 and 4 as intermediate proficiency, and 5-6 as an advanced learner of English.

We see that the baseline performs relatively well largely because of strong performance in the range 2 to 4, where its RMSE is almost as low as that of BERT+L1 and its within-0.5 and within-1.0 percentages are higher. This is because the baseline largely predicts scores in that range, 2 to 4 (90% of its predictions), whereas we see a greater spread of scores predicted by the LSTM and BERT models and consequent improvements at the edges of the scoring range. RMSE generally decreases as we move from the baseline to LSTM combo to BERT+L1.
BERT+L1 is much better than LSTM combo at predicting scores of 0, performs about the same for scores of 1 and 2, and then improves again towards the upper end of the scoring scale. Even with BERT+L1 there is variance in performance by proficiency level. The most difficult to grade accurately are those responses at the top and bottom of the scoring scale. This seems more a reflection of the distribution of training data we obtained, rather than an inherent linguistic difficulty in identifying low or high performance English: the bulk of training instances are between 3 and 5 (Figure 1), and it is possible that the models drift towards the central grades as an example of more conservative learning. This merits further investigation in future, either by data down-sampling to balance the training distribution, or artificial error generation to up-sample the edge cases. 6 Conclusion We presented an effective approach to grading spontaneous speech based on ASR transcriptions only, without direct access to the audio recording or features derived from it. Our best performing model involves a BERT encoder with first language prediction as an auxiliary task. We showed that this model improves on alternative LSTM-based models, and over a feature-rich baseline, by better predicting scores at the edges of the proficiency scale, while also offering (smaller) gains at the central points on the scale. Its error is on average less than 1, and 76% of its predictions are within 1 grade of the examiners’ gold scores. We recognise that without the audio signal, some information is lost that would be useful for speech assessment – namely prosodic and phonemic features – but that assessment on transcriptions alone has a use case in educational technology for home assistants. Furthermore such applications may become increasingly relevant as organisations reduce the types of data they collect from the end user due to privacy concerns. Further work should be undertaken in terms of scoring validity and the robustness of such an approach, before such models are applied to any ‘high stakes’ (i.e. exam) scenario, as opposed to the kind of at-home practice apps we have discussed in this paper. We also showed that the models improve as they are trained on increasingly accurate ASR transcriptions, though performance deteriorates when they are evaluated on manual transcriptions. We surmise that this is because of stylistic differences in the machine and human transcriptions, and that adaptation of the models to manual transcriptions will help mitigate the drop in performance. Additional experiments indicated that the removal of filled pauses from the transcriptions was beneficial to the scoring models, and that scoring performance is best for the middle grades of the scoring range. Further research is needed to improve machine assessment at the upper and lower ends of the scoring scale, although these are the scores for which the least training data exists. Therefore future work could include different sampling methods, generation of synthetic data, or training objectives which reward models which are less conservatively drawn to the middle of the scoring scale. Finally, we acknowledge that speaking proficiency in a second language is a multi-faceted construct made up of more than the features which can be drawn from transcriptions (Galaczi et al., 2011; Lim, 2018). For instance, the speaker’s prosody, pronunciations and disfluencies are also contributing factors. 
However, given the text-only constraints faced by third-party application developers for home assistants, the proficiency assessment models we present in this work allow for progress in providing low-stakes assessment and continuous practice for language learners, with the caveat that fuller speaking skills should be taught and assessed with the complete construct in mind. Acknowledgements This paper reports on research supported by Cambridge Assessment, University of Cambridge. We thank Kate Knill of the Engineering Department, University of Cambridge for access to the BULATS datasets, as well as Manny Rayner and Nikolaos Tsourakis at the University of Geneva for helpful discussion. We also thank the NVIDIA Corporation for the donation of the Titan X Pascal GPU used in this research. The first author was funded by the Searle Fund, the Benson & Carslaw Fund, and Emmanuel College, Cambridge. 2267 References Dimitrios Alikaniotis, Helen Yannakoudakis, and Marek Rei. 2016. Automatic text scoring using neural networks. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL). Øistein E. Andersen, Helen Yannakoudakis, Fiona Barker, and Tim Parish. 2013. Developing and testing a self-assessment and tutoring system. In Proceedings of the Eighth Workshop on Innovative Use of NLP for Building Educational Applications. Yigal Attali and Jill Burstein. 2004. Automated essay scoring with e-rater R⃝v. 2.0. ETS Research Report Series, 2004(2):i–21. Samuel Bell, Helen Yannakoudakis, and Marek Rei. 2019. Context is key: Grammatical error detection with contextual word representations. In Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 103–115. Suma Bhat and Su-Youn Yoon. 2015. Automatic assessment of syntactic complexity for spontaneous speech scoring. Speech Communication, 67:42–57. British Council. 2013. The English effect: the impact of English, what it’s worth to the UK and why it matters to the world. Andrew Caines, Michael McCarthy, and Paula Buttery. 2017. Parsing transcripts of speech. In Proceedings of the First Workshop on Speech-Centric Natural Language Processing. Ronald Carter and Michael McCarthy. 2017. Spoken grammar: where are we and where are we going? Applied Linguistics, 38:1–20. Lei Chen, Jidong Tao, Shabnam Ghaffarzadegan, and Yao Qian. 2018. End-to-end neural network based automated speech scoring. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Hao Cheng, Hao Fang, and Mari Ostendorf. 2015. Open-domain name error detection using a multitask RNN. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Herbert Clark and Jean Fox Tree. 2002. Using uh and um in spontaneous speaking. Cognition, 84:73–111. Ronan Cummins and Marek Rei. 2018. Neural multitask learning in automated assessment. arXiv preprint arXiv:1801.06830. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Fei Dong, Yue Zhang, and Jie Yang. 2017. Attentionbased recurrent convolutional neural network for automatic essay scoring. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017). 
Adriano Ferraresi, Eros Zanchetta, Marco Baroni, and Silvia Bernardini. 2008. Introducing and evaluating ukWaC, a very large web-derived corpus of English. In Proceedings of the 4th Web as Corpus Workshop (WAC-4) Can we beat Google, pages 47–54. Evelina D. Galaczi, Angela ffrench, Chris Hubbard, and Anthony Green. 2011. Developing assessment scales for large-scale speaking tests: a multiplemethod approach. Assessment in Education: Principles, Policy & Practice, 18(3):217–237. Derrick Higgins, Xiaoming Xi, Klaus Zechner, and David Williamson. 2011. A three-stage approach to the automated scoring of spontaneous spoken responses. Computer Speech & Language, 25(2):282– 306. Geoffrey Hinton, Oriol Vinyals, and Jeffrey Dean. 2014. Distilling the knowledge in a neural network. In Proceedings of the NeurIPS Deep Learning and Representation Learning Workshop. Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Cancan Jin, Ben He, Kai Hui, and Le Sun. 2018. TDNN: A two-stage deep neural network for prompt-independent automated essay scoring. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Stig Johan Berggren, Taraka Rama, and Lilja Øvrelid. 2019. Regression or classification? automated essay scoring for Norwegian. In Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications. Naoyuki Kanda, Yusuke Fujita, and Kenji Nagamatsu. 2017. Investigation of lattice-free maximum mutual information-based acoustic models with sequencelevel Kullback-Leibler divergence. In Proceedings of the IEEE Automatic Speech Recognition and Understanding Workshop (ASRU). Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. In International Conference on Learning Representations. Gad S. Lim. 2018. Conceptualizing and operationalizing second language speaking assessment: Updating the construct for a new century. Language Assessment Quarterly, 15(3):215–218. Anastassia Loukina and Aoife Cahill. 2016. Automated scoring across different modalities. In Proceedings of the 11th Workshop on Innovative Use of NLP for Building Educational Applications. 2268 Anastassia Loukina, Nitin Madnani, and Aoife Cahill. 2017. Speech-and text-driven features for automated scoring of english speaking tasks. In Proceedings of the Workshop on Speech-Centric Natural Language Processing. Howard Maclay and Charles Osgood. 1959. Hesitation phenomena in spontaneous English speech. Word. Russell Moore, Andrew Caines, Calbert Graham, and Paula Buttery. 2015. Incremental dependency parsing and disfluency detection in spoken learner English. In Proceedings of the 18th International Conference on Text, Speech and Dialogue (TSD). Berlin: Springer-Verlag. Farah Nadeem, Huy Nguyen, Yang Liu, and Mari Ostendorf. 2019. Automated essay scoring with discourse-aware neural models. In Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications. Joakim Nivre, Marie-Catherine De Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajic, Christopher D Manning, Ryan T McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, et al. 2016. Universal Dependencies v1: A multilingual treebank collection. In Proceedings of LREC. Ellis B Page and Dieter H Paulus. 1968. The analysis of essays by computer. final report. 
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Yang, Zach DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. PyTorch: An imperative style, high-performance deep learning library. In Proceedings of NeurIPS. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers). Yao Qian, Rutuja Ubale, Matthew Mulholland, Keelan Evanini, and Xinhao Wang. 2018. A prompt-aware neural network approach to content-based scoring of non-native spontaneous speech. In 2018 IEEE Spoken Language Technology Workshop (SLT). Marek Rei. 2017. Semi-supervised multitask learning for sequence labeling. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Marek Rei and Helen Yannakoudakis. 2017. Auxiliary objectives for neural error detection models. In Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications. Alla Rozovskaya and Dan Roth. 2011. Algorithm selection and model adaptation for ESL correction tasks. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. Keisuke Sakaguchi, Michael Heilman, and Nitin Madnani. 2015. Effective feature integration for automated short answer scoring. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language T echnologies. Victor Sanh, Thomas Wolf, and Sebastian Ruder. 2019. A hierarchical multi-task approach for learning embeddings from semantic tasks. In Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI 2019). Natalia Silveira, Timothy Dozat, Marie-Catherine de Marneffe, Samuel Bowman, Miriam Connor, John Bauer, and Christopher D. Manning. 2014. A gold standard dependency corpus for English. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC2014). Kaveh Taghipour and Hwee Tou Ng. 2016. A neural approach to automated essay scoring. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Ann Taylor, Mitchell Marcus, and Beatrice Santorini. 2003. The Penn treebank: an overview. In Treebanks, pages 5–22. Springer. Gunnel Tottie. 2019. From pause to word: uh, um and er in written American English. English Language and Linguistics, 23(1):105–130. Sowmya Vajjala and Taraka Rama. 2018. Experiments with universal CEFR classification. In Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications. Yu Wang, Mark Gales, Kate Knill, Konstantinos Kyriakopoulos, Andrey Malinin, Rogier van Dalen, and Mohammad Rashid. 2018a. Towards automatic assessment of spontaneous spoken English. Speech Communication, 104:47–56. Yu Wang, JHM Wong, Mark Gales, Katherine Knill, and Anton Ragni. 2018b. 
Sequence teacher-student training of acoustic models for automatic free speaking language assessment. In 2018 IEEE Spoken Language Technology Workshop (SLT). Jan Wijffels. 2018. udpipe: Tokenization, parts of speech tagging, lemmatization and dependency parsing with the ‘UDPipe’ ‘NLP’ toolkit. R package version 0.6, 1. 2269 Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R’emi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface’s transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771. Jeremy Wong and Mark Gales. 2016. Sequence student-teacher training of deep neural networks. In Proceedings of INTERSPEECH. Helen Yannakoudakis, Øistein E Andersen, Ardeshir Geranpayeh, Ted Briscoe, and Diane Nicholls. 2018. Developing an automated writing placement system for ESL learners. Applied Measurement in Education, 31(3):251–267. Helen Yannakoudakis, Ted Briscoe, and Ben Medlock. 2011. A new dataset and method for automatically grading ESOL texts. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1. Klaus Zechner, Derrick Higgins, and Xiaoming Xi. 2007. SpeechRaterTM: a construct-driven approach to scoring spontaneous non-native speech. In Workshop on Speech and Language Technology in Education. Klaus Zechner, Derrick Higgins, Xiaoming Xi, and David Williamson. 2009. Automatic scoring of nonnative spontaneous speech in tests of spoken English. Speech Communication, 51:883–895. Bill Zhang. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2270–2282 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2270 SPECTER: Document-level Representation Learning using Citation-informed Transformers Arman Cohan†∗ Sergey Feldman†∗ Iz Beltagy† Doug Downey† Daniel S. Weld†,‡ †Allen Institute for Artificial Intelligence ‡Paul G. Allen School of Computer Science & Engineering, University of Washington {armanc,sergey,beltagy,dougd,danw}@allenai.org Abstract Representation learning is a critical ingredient for natural language processing systems. Recent Transformer language models like BERT learn powerful textual representations, but these models are targeted towards token- and sentence-level training objectives and do not leverage information on inter-document relatedness, which limits their document-level representation power. For applications on scientific documents, such as classification and recommendation, the embeddings power strong performance on end tasks. We propose SPECTER, a new method to generate document-level embedding of scientific documents based on pretraining a Transformer language model on a powerful signal of document-level relatedness: the citation graph. Unlike existing pretrained language models, SPECTER can be easily applied to downstream applications without task-specific fine-tuning. Additionally, to encourage further research on document-level models, we introduce SCIDOCS, a new evaluation benchmark consisting of seven document-level tasks ranging from citation prediction, to document classification and recommendation. We show that SPECTER outperforms a variety of competitive baselines on the benchmark.1 1 Introduction As the pace of scientific publication continues to increase, Natural Language Processing (NLP) tools that help users to search, discover and understand the scientific literature have become critical. In recent years, substantial improvements in NLP tools have been brought about by pretrained neural language models (LMs) (Radford et al., 2018; Devlin et al., 2019; Yang et al., 2019). While such models are widely used for representing individual words ∗Equal contribution 1 https://github.com/allenai/specter or sentences, extensions to whole-document embeddings are relatively underexplored. Likewise, methods that do use inter-document signals to produce whole-document embeddings (Tu et al., 2017; Chen et al., 2019) have yet to incorporate stateof-the-art pretrained LMs. Here, we study how to leverage the power of pretrained language models to learn embeddings for scientific documents. A paper’s title and abstract provide rich semantic content about the paper, but, as we show in this work, simply passing these textual fields to an “off-the-shelf” pretrained language model—even a state-of-the-art model tailored to scientific text like the recent SciBERT (Beltagy et al., 2019)—does not result in accurate paper representations. The language modeling objectives used to pretrain the model do not lead it to output representations that are helpful for document-level tasks such as topic classification or recommendation. In this paper, we introduce a new method for learning general-purpose vector representations of scientific documents. 
Our system, SPECTER,2 incorporates inter-document context into the Transformer (Vaswani et al., 2017) language models (e.g., SciBERT (Beltagy et al., 2019)) to learn document representations that are effective across a wide variety of downstream tasks, without the need for any task-specific fine-tuning of the pretrained language model. We specifically use citations as a naturally occurring, inter-document incidental supervision signal indicating which documents are most related and formulate the signal into a triplet-loss pretraining objective. Unlike many prior works, at inference time, our model does not require any citation information. This is critical for embedding new papers that have not yet been cited. In experiments, we show that SPECTER’s representations substantially outperform the state-of-the-art on a variety of document-level tasks, including topic classification, citation prediction, and recommendation. 2SPECTER: Scientific Paper Embeddings using Citation-informed TransformERs. As an additional contribution of this work, we introduce and release SCIDOCS,3 a novel collection of data sets and an evaluation suite for document-level embeddings in the scientific domain. SCIDOCS covers seven tasks, and includes tens of thousands of examples of anonymized user signals of document relatedness. We also release our training set (hundreds of thousands of paper titles, abstracts and citations), along with our trained embedding model and its associated code base. 2 Model 2.1 Overview Our goal is to learn task-independent representations of academic papers. Inspired by the recent success of pretrained Transformer language models across various NLP tasks, we use the Transformer model architecture as the basis for encoding the input paper. Existing LMs such as BERT, however, are primarily based on a masked language modeling objective, only considering intra-document context, and do not use any inter-document information. This limits their ability to learn optimal document representations. To learn high-quality document-level representations, we propose using citations as an inter-document relatedness signal and formulate it as a triplet loss learning objective. We then pretrain the model on a large corpus of citations using this objective, encouraging it to output representations that are more similar for papers that share a citation link than for those that do not. We call our model SPECTER, which learns Scientific Paper Embeddings using Citation-informed TransformERs. With respect to the terminology used by Devlin et al. (2019), unlike most existing LMs that are “fine-tuning based”, our approach results in embeddings that can be applied to downstream tasks in a “feature-based” fashion, meaning the learned paper embeddings can be easily used as features, with no need for further task-specific fine-tuning. In the following, as background information, we briefly describe how pretrained LMs can be applied for document representation and then discuss the details of SPECTER. 3https://github.com/allenai/scidocs Figure 1: Overview of SPECTER. The figure shows a Transformer (initialized with SciBERT) encoding a query paper (PQ), a related paper (P+), and an unrelated paper (P−), whose embeddings are compared under the triplet loss. 2.2 Background: Pretrained Transformers Recently, pretrained Transformer networks have demonstrated success on various NLP tasks (Radford et al., 2018; Devlin et al., 2019; Yang et al., 2019; Liu et al., 2019); we use these models as the foundation for SPECTER.
Specifically, we use SciBERT (Beltagy et al., 2019), which is an adaptation of the original BERT (Devlin et al., 2019) architecture to the scientific domain. The BERT model architecture (Devlin et al., 2019) uses multiple layers of Transformers (Vaswani et al., 2017) to encode the tokens in a given input sequence. Each layer consists of a self-attention sublayer followed by a feedforward sublayer. The final hidden state associated with the special [CLS] token is usually called the “pooled output”, and is commonly used as an aggregate representation of the sequence. Document Representation Our goal is to represent a given paper P as a dense vector v that best represents the paper and can be used in downstream tasks. SPECTER builds embeddings from the title and abstract of a paper. Intuitively, we would expect these fields to be sufficient to produce accurate embeddings, since they are written to provide a succinct and comprehensive summary of the paper.4 As such, we encode the concatenated title and abstract using a Transformer LM (e.g., SciBERT) and take the final representation of the [CLS] token as the output representation of the paper:5 v = Transformer(input)[CLS], (1) where Transformer is the Transformer’s forward function, and input is the concatenation of the [CLS] token and WordPieces (Wu et al., 2016) of the title and abstract of a paper, separated by the [SEP] token. 4We also experimented with additional fields such as venues and authors but did not find any empirical advantage in using those (see §6). See §7 for a discussion of using the full text of the paper as input. 5It is also possible to encode the title and abstract individually and then concatenate or combine them to get the final embedding. However, in our experiments this resulted in sub-optimal performance. We use SciBERT as our model initialization as it is optimized for scientific text, though our formulation is general and any Transformer language model can be used instead of SciBERT. Using the above method with an “off-the-shelf” SciBERT does not take global inter-document information into account. This is because SciBERT, like other pretrained language models, is trained via language modeling objectives, which only predict words or sentences given their in-document, nearby textual context. In contrast, we propose to incorporate citations into the model as a signal of inter-document relatedness, while still leveraging the model’s existing strength in modeling language. 2.3 Citation-Based Pretraining Objective A citation from one document to another suggests that the documents are related. To encode this relatedness signal into our representations, we design a loss function that trains the Transformer model to learn closer representations for papers when one cites the other, and more distant representations otherwise. The high-level overview of the model is shown in Figure 1. In particular, each training instance is a triplet of papers: a query paper PQ, a positive paper P+ and a negative paper P−. The positive paper is a paper that the query paper cites, and the negative paper is a paper that is not cited by the query paper (but that may be cited by P+). We then train the model using the following triplet margin loss function: L = max{(d(PQ, P+) − d(PQ, P−) + m), 0} (2) where d is a distance function and m is the loss margin hyperparameter (we empirically choose m = 1).
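To make Equations 1 and 2 concrete, here is a minimal PyTorch sketch using the Hugging Face transformers library (the authors' own implementation uses AllenNLP); the checkpoint name refers to the publicly released SciBERT weights, and d is instantiated as the L2 distance with margin m = 1, as described immediately below.

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

# Publicly released SciBERT weights; the paper's implementation is built in AllenNLP.
tokenizer = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
encoder = AutoModel.from_pretrained("allenai/scibert_scivocab_uncased")

def embed(title, abstract):
    """Equation 1: encode title and abstract separated by [SEP], keep the final [CLS] vector."""
    text = title + " " + tokenizer.sep_token + " " + abstract
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    return encoder(**inputs).last_hidden_state[:, 0]  # final hidden state of [CLS]

def triplet_loss(v_query, v_pos, v_neg, margin=1.0):
    """Equation 2, with d taken to be the L2 distance and margin m = 1."""
    d_pos = torch.norm(v_query - v_pos, p=2, dim=-1)
    d_neg = torch.norm(v_query - v_neg, p=2, dim=-1)
    return F.relu(d_pos - d_neg + margin).mean()
```

During pretraining, each (query, positive, negative) triple is passed through embed and the loss above is averaged over the batch.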
Here, we use the L2 norm distance: d(PA, PB) = ∥vA − vB∥2, where vA is the vector corresponding to the pooled output of the Transformer run on paper A (Equation 1).6 Starting from the trained SciBERT model, we pretrain the Transformer parameters on the citation objective to learn paper representations that capture document relatedness. 6We also experimented with other distance functions (e.g., normalized cosine), but they underperformed the L2 loss. 2.4 Selecting Negative Distractors The choice of negative example papers P− is important when training the model. We consider two sets of negative examples: the first set simply consists of randomly selected papers from the corpus. Given a query paper, intuitively we would expect the model to be able to distinguish between cited papers and uncited papers sampled randomly from the entire corpus. This inductive bias has also been found to be effective in content-based citation recommendation applications (Bhagavatula et al., 2018). But random negatives may be easy for the model to distinguish from the positives. To provide a more nuanced training signal, we augment the randomly drawn negatives with a more challenging second set of negative examples. We denote as “hard negatives” the papers that are not cited by the query paper, but are cited by a paper cited by the query paper, i.e., if P1 cites P2 and P2 cites P3 but P1 does not cite P3, then P3 is a candidate hard negative example for P1. We expect the hard negatives to be somewhat related to the query paper, but typically less related than the cited papers. As we show in our experiments (§6), including hard negatives results in more accurate embeddings compared to using random negatives alone. 2.5 Inference At inference time, the model receives one paper, P, and it outputs SPECTER’s Transformer pooled output activation as the paper representation for P (Equation 1). We note that for inference, SPECTER requires only the title and abstract of the given input paper; the model does not need any citation information about the input paper. This means that SPECTER can produce embeddings even for new papers that have yet to be cited, which is critical for applications that target recent scientific papers. 3 SCIDOCS Evaluation Framework Previous evaluations of scientific document representations in the literature tend to focus on small datasets over a limited set of tasks, and extremely high (99%+) AUC scores are already possible on these data for English documents (Chen et al., 2019; Wang et al., 2019). New, larger and more diverse benchmark datasets are necessary. Here, we introduce a new comprehensive evaluation framework to measure the effectiveness of scientific paper embeddings, which we call SCIDOCS. The framework consists of diverse tasks, ranging from citation prediction, to prediction of user activity, to document classification and paper recommendation. Note that SPECTER will not be further fine-tuned on any of the tasks; we simply plug in the embeddings as features for each task. Below, we describe each of the tasks in detail and the evaluation data associated with it. In addition to our training data, we release all the datasets associated with the evaluation tasks. 3.1 Document Classification An important test of a document-level embedding is whether it is predictive of the class of the document.
Here, we consider two classification tasks in the scientific domain: MeSH Classification In this task, the goals is to classify scientific papers according to their Medical Subject Headings (MeSH) (Lipscomb, 2000).7 We construct a dataset consisting of 23K academic medical papers, where each paper is assigned one of 11 top-level disease classes such as cardiovascular diseases, diabetes, digestive diseases derived from the MeSH vocabulary. The most populated category is Neoplasms (cancer) with 5.4K instances (23.3% of the total dataset) while the category with least number of samples is Hepatitis (1.7% of the total dataset). We follow the approach of Feldman et al. (2019) in mapping the MeSH vocabulary to the disease classes. Paper Topic Classification This task is predicting the topic associated with a paper using the predefined topic categories of the Microsoft Academic Graph (MAG) (Sinha et al., 2015)8. MAG provides a database of papers, each tagged with a list of topics. The topics are organized in a hierarchy of 5 levels, where level 1 is the most general and level 5 is the most specific. For our evaluation, we derive a document classification dataset from the level 1 topics, where a paper is labeled by its corresponding level 1 MAG topic. We construct a dataset of 25K papers, almost evenly split over the 19 different classes of level 1 categories in MAG. 3.2 Citation Prediction As argued above, citations are a key signal of relatedness between papers. We test how well different paper representations can reproduce this signal through citation prediction tasks. In particular, we focus on two sub-tasks: predicting direct citations, and predicting co-citations. We frame these as ranking tasks and evaluate performance using MAP and nDCG, standard ranking metrics. 7https://www.nlm.nih.gov/mesh/meshhome. html 8https://academic.microsoft.com/ Direct Citations In this task, the model is asked to predict which papers are cited by a given query paper from a given set of candidate papers. The evaluation dataset includes approximately 30K total papers from a held-out pool of papers, consisting of 1K query papers and a candidate set of up to 5 cited papers and 25 (randomly selected) uncited papers. The task is to rank the cited papers higher than the uncited papers. For each embedding method, we require only comparing the L2 distance between the raw embeddings of the query and the candidates, without any additional trainable parameters. Co-Citations This task is similar to the direct citations but instead of predicting a cited paper, the goal is to predict a highly co-cited paper with a given paper. Intuitively, if papers A and B are cited frequently together by several papers, this shows that the papers are likely highly related and a good paper representation model should be able to identify these papers from a given candidate set. The dataset consists of 30K total papers and is constructed similar to the direct citations task. 3.3 User Activity The embeddings for similar papers should be close to each other; we use user activity as a proxy for identifying similar papers and test the model’s ability to recover this information. Multiple users consuming the same items as one another is a classic relatedness signal and forms the foundation for recommender systems and other applications (Schafer et al., 2007). In our case, we would expect that when users look for academic papers, the papers they view in a single browsing session tend to be related. 
Thus, accurate paper embeddings should, all else being equal, be relatively more similar for papers that are frequently viewed in the same session than for other papers. To build benchmark datasets to test embeddings on user activity, we obtained logs of user sessions from a major academic search engine. We define the following two tasks on which we build benchmark datasets to test embeddings: Co-Views Our co-views dataset consists of approximately 30K papers. To construct it, we take 1K random papers that are not in our train or development set and associate with each one up to 5 frequently co-viewed papers and 25 randomly selected papers (similar to the approach for citations). Then, we require the embedding model to rank the 2274 co-viewed papers higher than the random papers by comparing the L2 distances of raw embeddings. We evaluate performance using standard ranking metrics, nDCG and MAP. Co-Reads If the user clicks to access the PDF of a paper from the paper description page, this is a potentially stronger sign of interest in the paper. In such a case we assume the user will read at least parts of the paper and refer to this as a “read” action. Accordingly, we define a “co-reads” task and dataset analogous to the co-views dataset described above. This dataset is also approximately 30K papers. 3.4 Recommendation In the recommendation task, we evaluate the ability of paper embeddings to boost performance in a production recommendation system. Our recommendation task aims to help users navigate the scientific literature by ranking a set of “similar papers” for a given paper. We use a dataset of user clickthrough data for this task which consists of 22K clickthrough events from a public scholarly search engine. We partitioned the examples temporally into train (20K examples), validation (1K), and test (1K) sets. As is typical in clickthrough data on ranked lists, the clicks are biased toward the top of original ranking presented to the user. To counteract this effect, we computed propensity scores using a swap experiment (Agarwal et al., 2019). The propensity scores give, for each position in the ranked list, the relative frequency that the position is over-represented in the data due to exposure bias. We can then compute de-biased evaluation metrics by dividing the score for each test example by the propensity score for the clicked position. We report propensity-adjusted versions of the standard ranking metrics Precision@1 ( ˆ P@1) and Normalized Discounted Cumulative Gain ( ˆ nDCG). We test different embeddings on the recommendation task by including cosine embedding distance9 as a feature within an existing recommendation system that includes several other informative features (title/author similarity, reference and citation overlap, etc.). Thus, the recommendation experiments measure whether the embeddings can boost the performance of a strong baseline system on an end task. For SPECTER, we also perform an online A/B test to measure whether its advantages 9Embeddings are L2 normalized and in this case cosine distance is equivalent to L2 distance. on the offline dataset translate into improvements on the online recommendation task (§5). 4 Experiments Training Data To train our model, we use a subset of the Semantic Scholar corpus (Ammar et al., 2018) consisting of about 146K query papers (around 26.7M tokens) with their corresponding outgoing citations, and we use an additional 32K papers for validation. 
For each query paper we construct up to 5 training triples comprised of a query, a positive, and a negative paper. The positive papers are sampled from the direct citations of the query, while negative papers are chosen either randomly or from citations of citations (as discussed in §2.4). We empirically found it helpful to use 2 hard negatives (citations of citations) and 3 easy negatives (randomly selected papers) for each query paper. This process results in about 684K training triples and 145K validation triples. Training and Implementation We implement our model in AllenNLP (Gardner et al., 2018). We initialize the model from SciBERT pretrained weights (Beltagy et al., 2019) since it is the stateof-the-art pretrained language model on scientific text. We continue training all model parameters on our training objective (Equation 2). We perform minimal tuning of our model’s hyperparameters based on the performance on the validation set, while baselines are extensively tuned. Based on initial experiments, we use a margin m=1 for the triplet loss. For training, we use the Adam optimizer (Kingma and Ba, 2014) following the suggested hyperparameters in Devlin et al. (2019) (LR: 2e-5, Slanted Triangular LR scheduler10 (Howard and Ruder, 2018) with number of train steps equal to training instances and cut fraction of 0.1). We train the model on a single Titan V GPU (12G memory) for 2 epochs, with batch size of 4 (the maximum that fit in our GPU memory) and use gradient accumulation for an effective batch size of 32. Each training epoch takes approximately 1-2 days to complete on the full dataset. We release our code and data to facilitate reproducibility. 11 Task-Specific Model Details For the classification tasks, we used a linear SVM where embedding vectors were the only features. The C hyperparameter was tuned via a held-out validation set. 10Learning rate linear warmup followed by linear decay. 11https://github.com/allenai/specter 2275 For the recommendation tasks, we use a feedforward ranking neural network that takes as input ten features designed to capture the similarity between each query and candidate paper, including the cosine similarity between the query and candidate embeddings and manually-designed features computed from the papers’ citations, titles, authors, and publication dates. Baseline Methods Our work falls into the intersection of textual representation, citation mining, and graph learning, and we evaluate against stateof-the-art baselines from each of these areas. We compare with several strong textual models: SIF (Arora et al., 2017), a method for learning document representations by removing the first principal component of aggregated word-level embeddings which we pretrain on scientific text; SciBERT (Beltagy et al., 2019) a state-of-the-art pretrained Transformer LM for scientific text; and Sent-BERT (Reimers and Gurevych, 2019), a model that uses negative sampling to tune BERT for producing optimal sentence embeddings. We also compare with Citeomatic (Bhagavatula et al., 2018), a closely related paper representation model for citation prediction which trains content-based representations with citation graph information via dynamically sampled triplets, and SGC (Wu et al., 2019a), a state-of-the-art graph-convolutional approach. For completeness, additional baselines are also included; due to space constraints we refer to Appendix A for detailed discussion of all baselines. We tune hyperparameters of baselines to maximize performance on a separate validation set. 
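To make the triple construction described at the start of this section concrete (up to 5 triples per query, mixing 2 hard negatives drawn from citations of citations with 3 easy random negatives), here is a minimal sketch; the cites dictionary and corpus_ids list are assumed data structures, not part of the released code.

```python
import random

def build_triplets(query, cites, corpus_ids, n_hard=2, n_easy=3):
    """Build (query, positive, negative) triples for one query paper.

    cites: dict mapping a paper id to the set of ids it cites (assumed format).
    corpus_ids: list of all paper ids available for random sampling.
    """
    positives = cites.get(query, set())
    # Hard negatives: cited by a paper that the query cites, but not by the query itself.
    hard_pool = {c for p in positives for c in cites.get(p, set())}
    hard_pool -= positives | {query}
    easy_pool = [p for p in corpus_ids if p not in positives and p != query]

    n_triples = min(len(positives), n_hard + n_easy)
    pos_sample = random.sample(sorted(positives), n_triples)
    neg_sample = (random.sample(sorted(hard_pool), min(n_hard, len(hard_pool))) +
                  random.sample(easy_pool, min(n_easy, len(easy_pool))))
    return [(query, p, n) for p, n in zip(pos_sample, neg_sample)]
```

Each resulting triple is then scored with the triplet margin loss of Equation 2 during pretraining.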
5 Results Table 1 presents the main results corresponding to our evaluation tasks (described in §3). Overall, we observe substantial improvements across all tasks with average performance of 80.0 across all metrics on all tasks which is a 3.1 point absolute improvement over the next-best baseline. We now discuss the results in detail. For document classification, we report macro F1, a standard classification metric. We observe that the classifier performance when trained on our representations is better than when trained on any other baseline. Particularly, on the MeSH (MAG) dataset, we obtain an 86.4 (82.0) F1 score which is about a ∆= + 2.3 (+1.5) point absolute increase over the best baseline on each dataset respectively. Our evaluation of the learned representations on predicting user activity is shown in the “User activity” columns of Table 1. SPECTER achieves a MAP score of 83.8 on the co-view task, and 84.5 on coread, improving over the best baseline (Citeomatic in this case) by 2.7 and 4.0 points, respectively. We observe similar trends for the “citation” and “co-citation” tasks, with our model outperforming virtually all other baselines except for SGC, which has access to the citation graph at training and test time.12 Note that methods like SGC cannot be used in real-world setting to embed new papers that are not cited yet. On the other hand, on cocitation data our method is able to achieve the best results with nDCG of 94.8, improving over SGC with 2.3 points. Citeomatic also performs well on the citation tasks, as expected given that its primary design goal was citation prediction. Nevertheless, our method slightly outperforms Citeomatic on the direct citation task, while substantially outperforming it on co-citations (+2.0 nDCG). Finally, for recommendation task, we observe that SPECTER outperforms all other models on this task as well, with nDCG of 53.9. On the recommendations task, as opposed to previous experiments, the differences in method scores are generally smaller. This is because for this task the embeddings are used along with several other informative features in the ranking model (described under task-specific models in §4), meaning that embedding variants have less opportunity for impact on overall performance. We also performed an online study to evaluate whether SPECTER embeddings offer similar advantages in a live application. We performed an online A/B test comparing our SPECTER-based recommender to an existing production recommender system for similar papers that ranks papers by a textual similarity measure. In a dataset of 4,113 clicks, we found that SPECTER ranker improved clickthrough rate over the baseline by 46.5%, demonstrating its superiority. We emphasize that our citation-based pretraining objective is critical for the performance of SPECTER; removing this and using a vanilla SciBERT results in decreased performance on all tasks. 12For SGC, we remove development and test set citations and co-citations during training. We also remove incoming citations from development and test set queries as these would not be available at test time in production. 2276 Task → Classification User activity prediction Citation prediction Recomm. Avg. 
Subtask → MAG MeSH Co-View Co-Read Cite Co-Cite Model ↓/ Metric → F1 F1 MAP nDCG MAP nDCG MAP nDCG MAP nDCG ˆ nDCG ˆ P@1 Random 4.8 9.4 25.2 51.6 25.6 51.9 25.1 51.5 24.9 51.4 51.3 16.8 32.5 Doc2vec (2014) 66.2 69.2 67.8 82.9 64.9 81.6 65.3 82.2 67.1 83.4 51.7 16.9 66.6 Fasttext-sum (2017) 78.1 84.1 76.5 87.9 75.3 87.4 74.6 88.1 77.8 89.6 52.5 18.0 74.1 SIF (2017) 78.4 81.4 79.4 89.4 78.2 88.9 79.4 90.5 80.8 90.9 53.4 19.5 75.9 ELMo (2018) 77.0 75.7 70.3 84.3 67.4 82.6 65.8 82.6 68.5 83.8 52.5 18.2 69.0 Citeomatic (2018) 67.1 75.7 81.1 90.2 80.5 90.2 86.3 94.1 84.4 92.8 52.5 17.3 76.0 SGC (2019a) 76.8 82.7 77.2 88.0 75.7 87.5 91.6 96.2 84.1 92.5 52.7 18.2 76.9 SciBERT (2019) 79.7 80.7 50.7 73.1 47.7 71.1 48.3 71.7 49.7 72.6 52.1 17.9 59.6 Sent-BERT (2019) 80.5 69.1 68.2 83.3 64.8 81.3 63.5 81.6 66.4 82.8 51.6 17.1 67.5 SPECTER (Ours) 82.0 86.4 83.6 91.5 84.5 92.4 88.3 94.9 88.1 94.8 53.9 20.0 80.0 Table 1: Results on the SCIDOCS evaluation suite consisting of 7 tasks. 6 Analysis In this section, we analyze several design decisions in SPECTER, provide a visualization of its embedding space, and experimentally compare SPECTER’s use of fixed embeddings against a finetuning approach. Ablation Study We start by analyzing how adding or removing metadata fields from the input to SPECTER alters performance. The results are shown in the top four rows of Table 2 (for brevity, here we only report the average of the metrics from each task). We observe that removing the abstract from the textual input and relying only on the title results in a substantial decrease in performance. More surprisingly, adding authors as an input (along with title and abstract) hurts performance.13 One possible explanation is that author names are sparse in the corpus, making it difficult for the model to infer document-level relatedness from them. As another possible reason of this behavior, tokenization using Wordpieces might be suboptimal for author names. Many author names are out-of-vocabulary for SciBERT and thus, they might be split into sub-words and shared across names that are not semantically related, leading to noisy correlation. Finally, we find that adding venues slightly decreases performance,14 except on document classification (which makes sense, as we would expect venues to have high correlation 13We experimented with both concatenating authors with the title and abstract and also considering them as an additional field. Neither were helpful. 14Venue information in our data came directly from publisher provided metadata and thus was not normalized. Venue normalization could help improve results. CLS USR CITE REC Avg. SPECTER 84.2 88.4 91.5 36.9 80.0 −abstract 82.2 72.2 73.6 34.5 68.1 + venue 84.5 88.0 91.2 36.7 79.9 + author 82.7 72.3 71.0 34.6 67.3 No hard negatives 82.4 85.8 89.8 36.8 78.4 Start w/ BERT-Large 81.7 85.9 87.8 36.1 77.5 Table 2: Ablations: Numbers are averages of metrics for each evaluation task: CLS: classification, USR: User activity, CITE: Citation prediction, REC: Recommendation, Avg. average over all tasks & metrics. with paper topics). The fact that SPECTER does not require inputs like authors or venues makes it applicable in situations where this metadata is not available, such as matching reviewers with anonymized submissions, or performing recommendations of anonymized preprints (e.g., on OpenReview). One design decision in SPECTER is to use a set of hard negative distractors in the citation-based finetuning objective. 
The fifth row of Table 2 shows that this is important—using only easy negatives reduces performance on all tasks. While there could be other potential ways to include hard negatives in the model, our simple approach of including citations of citations is effective. The sixth row of the table shows that using a strong general-domain language model (BERT-Large) instead of SciBERT in SPECTER reduces performance considerably. This is reasonable because unlike BERT-Large, SciBERT is pretrained on scientific text. Visualization Figure 2 shows t-SNE (van der Maaten, 2014) projections of our embeddings (SPECTER) compared with the SciBERT baseline 2277 (a) SPECTER (b) SciBERT Figure 2: t-SNE visualization of paper embeddings and their corresponding MAG topics. for a random set of papers. When comparing SPECTER embeddings with SciBERT, we observe that our embeddings are better at encoding topical information, as the clusters seem to be more compact. Further, we see some examples of crosstopic relatedness reflected in the embedding space (e.g., Engineering, Mathematics and Computer Science are close to each other, while Business and Economics are also close to each other). To quantify the comparison of visualized embeddings in Figure 2, we use the DBScan clustering algorithm (Ester et al., 1996) on this 2D projection. We use the completeness and homogeneity clustering quality measures introduced by Rosenberg and Hirschberg (2007). For the points corresponding to Figure 2, the homogeneity and completeness values for SPECTER are respectively 0.41 and 0.72 compared with SciBERT’s 0.19 and 0.63, a clear improvement on separating topics using the projected embeddings. Comparison with Task Specific Fine-Tuning While the fact that SPECTER does not require finetuning makes its paper embeddings less costly to use, often the best performance from pretrained Transformers is obtained when the models are finetuned directly on each end task. We experiment with fine-tuning SciBERT on our tasks, and find this to be generally inferior to using our fixed representations from SPECTER. Specifically, we finetune SciBERT directly on task-specific signals instead of citations. To fine-tune on task-specific data (e.g., user activity), we used a dataset of coviews with 65K query papers, co-reads with 14K query papers, and co-citations (instead of direct citations) with 83K query papers. As the end tasks are ranking tasks, for all datasets we construct up to 5 triplets and fine-tune the model using triplet ranking loss. The positive papers are sampled from Training signal CLS USR CITE REC All SPECTER 84.2 88.4 91.5 36.9 80.0 SciBERT fine-tune on co-view 83.0 84.2 84.1 36.4 76.0 SciBERT fine-tune on co-read 82.3 85.4 86.7 36.3 77.1 SciBERT fine-tune on co-citation 82.9 84.3 85.2 36.6 76.4 SciBERT fine-tune on multitask 83.3 86.1 88.2 36.0 78.0 Table 3: Comparison with task-specific fine-tuning. the most co-viewed (co-read, or co-cited) papers corresponding to the query paper. We also include both easy and hard distractors as when training SPECTER (for hard negatives we choose the least non-zero co-viewed (co-read, or co-cited) papers). We also consider training jointly on all task-specific training data sources in a multitask training process, where the model samples training triplets from a distribution over the sources. 
As illustrated in Table 3, without any additional final task-specific fine-tuning, SPECTER still outperforms a SciBERT model fine-tuned on the end tasks as well as their multitask combination, further demonstrating the effectiveness and versatility of SPECTER embeddings.15 7 Related Work Recent representation learning methods in NLP rely on training large neural language models on unsupervised data (Peters et al., 2018; Radford et al., 2018; Devlin et al., 2019; Beltagy et al., 2019; Liu et al., 2019). While successful at many sentenceand token-level tasks, our focus is on using the models for document-level representation learning, which has remained relatively under-explored. There have been other efforts in document representation learning such as extensions of word vectors to documents (Le and Mikolov, 2014; Ganesh et al., 2016; Liu et al., 2017; Wu et al., 2018; Gysel et al., 2017), convolution-based methods (Liu et al., 2018; Zamani et al., 2018), and variational autoencoders (Holmer and Marfurt, 2018; Wang et al., 2019). Relevant to document embedding, sentence embedding is a relatively well-studied area of research. Successful approaches include seq2seq models (Kiros et al., 2015), BiLSTM Siamese networks (Williams et al., 2018), leveraging supervised data from other corpora (Conneau et al., 2017), and using discourse relations (Nie et al., 2019), and BERT-based methods (Reimers and Gurevych, 2019). Unlike our proposed method, 15We also experimented with further task-specific finetuning of our SPECTER on the end tasks but we did not observe additional improvements. 2278 the majority of these approaches do not consider any notion of inter-document relatedness when embedding documents. Other relevant work combines textual features with network structure (Tu et al., 2017; Zhang et al., 2018; Bhagavatula et al., 2018; Shen et al., 2018; Chen et al., 2019; Wang et al., 2019). These works typically do not leverage the recent pretrained contextual representations and with a few exceptions such as the recent work by Wang et al. (2019), they cannot generalize to unseen documents like our SPECTER approach. Context-based citation recommendation is another related application where models rely on citation contexts (Jeong et al., 2019) to make predictions. These works are orthogonal to ours as the input to our model is just paper title and abstract. Another related line of work is graphbased representation learning methods (Bruna et al., 2014; Kipf and Welling, 2017; Hamilton et al., 2017a,b; Wu et al., 2019a,b). Here, we compare to a graph representation learning model, SGC (Simple Graph Convolution) (Wu et al., 2019a), which is a state-of-the-art graph convolution approach for representation learning. SPECTER uses pretrained language models in combination with graph-based citation signals, which enables it to outperform the graph-based approaches in our experiments. SPECTER embeddings are based on only the title and abstract of the paper. Adding the full text of the paper would provide a more complete picture of the paper’s content and could improve accuracy (Cohen et al., 2010; Lin, 2008; Schuemie et al., 2004). However, the full text of many academic papers is not freely available. Further, modern language models have strict memory limits on input size, which means new techniques would be required in order to leverage the entirety of the paper within the models. Exploring how to use the full paper text within SPECTER is an item of future work. 
Finally, one pain point in academic paper recommendation research has been a lack of publicly available datasets (Chen and Lee, 2018; Kanakia et al., 2019). To address this challenge, we release SCIDOCS, our evaluation benchmark which includes an anonymized clickthrough dataset from an online recommendations system. 8 Conclusions and Future Work We present SPECTER, a model for learning representations of scientific papers, based on a Transformer language model that is pretrained on citations. We achieve substantial improvements over the strongest of a wide variety of baselines, demonstrating the effectiveness of our model. We additionally introduce SCIDOCS, a new evaluation suite consisting of seven document-level tasks and release the corresponding datasets to foster further research in this area. The landscape of Transformer language models is rapidly changing and newer and larger models are frequently introduced. It would be interesting to initialize our model weights from more recent Transformer models to investigate if additional gains are possible. Another item of future work is to develop better multitask approaches to leverage multiple signals of relatedness information during training. We used citations to build triplets for our loss function, however there are other metrics that have good support from the bibliometrics literature (Klavans and Boyack, 2006) that warrant exploring as a way to create relatedness graphs. Including other information such as outgoing citations as additional input to the model would be yet another area to explore in future. Acknowledgements We thank Kyle Lo, Daniel King and Oren Etzioni for helpful research discussions, Russel Reas for setting up the public API, Field Cady for help in initial data collection and the anonymous reviewers (especially Reviewer 1) for comments and suggestions. This work was supported in part by NSF Convergence Accelerator award 1936940, ONR grant N00014-18-1-2193, and the University of Washington WRF/Cable Professorship. References Anant K. Agarwal, Ivan Zaitsev, Xuanhui Wang, Cheng Yen Li, Marc Najork, and Thorsten Joachims. 2019. Estimating position bias without intrusive interventions. In WSDM. Waleed Ammar, Dirk Groeneveld, Chandra Bhagavatula, Iz Beltagy, Miles Crawford, Doug Downey, Jason Dunkelberger, Ahmed Elgohary, Sergey Feldman, Vu Ha, Rodney Kinney, Sebastian Kohlmeier, Kyle Lo, Tyler C. Murray, HsuHan Ooi, Matthew E. Peters, Joanna Power, Sam Skjonsberg, Lucy Lu Wang, Christopher Wilhelm, Zheng Yuan, Madeleine van Zuylen, and Oren Etzioni. 2018. Construction of the literature graph in semantic scholar. In NAACL-HLT. Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2017. 2279 A simple but tough-to-beat baseline for sentence embeddings. In ICLR. Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciBERT: A Pretrained Language Model for Scientific Text. In EMNLP. Chandra Bhagavatula, Sergey Feldman, Russell Power, and Waleed Ammar. 2018. Content-Based Citation Recommendation. In NAACL-HLT. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. TACL. Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. 2014. Spectral networks and locally connected networks on graphs. ICLR. Liqun Chen, Guoyin Wang, Chenyang Tao, Dinghan Shen, Pengyu Cheng, Xinyuan Zhang, Wenlin Wang, Yizhe Zhang, and Lawrence Carin. 2019. Improving textual network embedding with global attention via optimal transport. In ACL. Tsung Teng Chen and Maria Lee. 2018. 
Research Paper Recommender Systems on Big Scholarly Data. In Knowledge Management and Acquisition for Intelligent Systems. K. Bretonnel Cohen, Helen L. Johnson, Karin M. Verspoor, Christophe Roeder, and Lawrence Hunter. 2010. The structural and content aspects of abstracts versus bodies of full text journal articles are different. BMC Bioinformatics, 11:492–492. Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo¨ıc Barrault, and Antoine Bordes. 2017. Supervised Learning of Universal Sentence Representations from Natural Language Inference Data. In EMNLP. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT. Martin Ester, Hans-Peter Kriegel, J¨org Sander, Xiaowei Xu, et al. 1996. A Density-based Algorithm for Discovering Clusters in Large Spatial Databases with Noise. In KDD. Sergey Feldman, Waleed Ammar, Kyle Lo, Elly Trepman, Madeleine van Zuylen, and Oren Etzioni. 2019. Quantifying Sex Bias in Clinical Studies at Scale With Automated Data Extraction. JAMA. J Ganesh, Manish Gupta, and Vijay K. Varma. 2016. Doc2sent2vec: A novel two-phase approach for learning document representation. In SIGIR. Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Peters, Michael Schmitz, and Luke Zettlemoyer. 2018. AllenNLP: A Deep Semantic Natural Language Processing Platform. In Proceedings of Workshop for NLP Open Source Software (NLP-OSS). Christophe Van Gysel, Maarten de Rijke, and Evangelos Kanoulas. 2017. Neural Vector Spaces for Unsupervised Information Retrieval. ACM Trans. Inf. Syst. Will Hamilton, Zhitao Ying, and Jure Leskovec. 2017a. Inductive Representation Learning on Large Graphs. In NIPS. William L. Hamilton, Zhitao Ying, and Jure Leskovec. 2017b. Inductive representation learning on large graphs. In NIPS. Erik Holmer and Andreas Marfurt. 2018. Explaining away syntactic structure in semantic document representations. ArXiv, abs/1806.01620. Jeremy Howard and Sebastian Ruder. 2018. Universal Language Model Fine-tuning for Text Classification. In ACL. Chanwoo Jeong, Sion Jang, Hyuna Shin, Eunjeong Lucy Park, and Sungchul Choi. 2019. A context-aware citation recommendation model with bert and graph convolutional networks. ArXiv, abs/1903.06464. Anshul Kanakia, Zhihong Shen, Darrin Eide, and Kuansan Wang. 2019. A Scalable Hybrid Research Paper Recommender System for Microsoft Academic. In WWW. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A Method for Stochastic Optimization. ArXiv, abs/1412.6980. Thomas N Kipf and Max Welling. 2017. Semisupervised classification with graph convolutional networks. ICLR. Ryan Kiros, Yukun Zhu, Ruslan Salakhutdinov, Richard S. Zemel, Antonio Torralba, Raquel Urtasun, and Sanja Fidler. 2015. Skip-thought vectors. In NIPS. Richard Klavans and Kevin W. Boyack. 2006. Identifying a better measure of relatedness for mapping science. Journal of the Association for Information Science and Technology, 57:251–263. Jey Han Lau and Timothy Baldwin. 2016. An empirical evaluation of doc2vec with practical insights into document embedding generation. In Rep4NLP@ACL. Quoc Le and Tomas Mikolov. 2014. Distributed Representations of Sentences and Documents. In ICML. Jimmy J. Lin. 2008. Is searching full text more effective than searching abstracts? BMC Bioinformatics, 10:46–46. Carolyn E Lipscomb. 2000. Medical Subject Headings (MeSH). Bulletin of the Medical Library Association. 2280 Chundi Liu, Shunan Zhao, and Maksims Volkovs. 
2018. Unsupervised Document Embedding with CNNs. ArXiv, abs/1711.04168v3. Pengfei Liu, King Keung Wu, and Helen M. Meng. 2017. A Model of Extended Paragraph Vector for Document Categorization and Trend Analysis. IJCNN. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar S. Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke S. Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A Robustly Optimized BERT Pretraining Approach. ArXiv, abs/1907.11692. Laurens van der Maaten. 2014. Accelerating t-SNE Using Tree-based Algorithms. Journal of Machine Learning Research. Allen Nie, Erin Bennett, and Noah Goodman. 2019. DisSent: Learning Sentence Representations from Explicit Discourse Relations. In ACL. F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep Contextualized Word Representations. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. arXiv. Radim ˇReh˚uˇrek and Petr Sojka. 2010. Software Framework for Topic Modelling with Large Corpora. In LREC. Nils Reimers and Iryna Gurevych. 2019. SentenceBERT: Sentence Embeddings using Siamese BERTNetworks. In EMNLP. Andrew Rosenberg and Julia Hirschberg. 2007. Vmeasure: A Conditional Entropy-based External Cluster Evaluation Measure. In EMNLP. J Ben Schafer, Dan Frankowski, Jon Herlocker, and Shilad Sen. 2007. Collaborative filtering recommender systems. In The adaptive web. Springer. Martijn J. Schuemie, Marc Weeber, Bob J. A. Schijvenaars, Erik M. van Mulligen, C. Christiaan van der Eijk, Rob Jelier, Barend Mons, and Jan A. Kors. 2004. Distribution of information in biomedical abstracts and full-text publications. Bioinformatics, 20(16):2597–604. Dinghan Shen, Xinyuan Zhang, Ricardo Henao, and Lawrence Carin. 2018. Improved semantic-aware network embedding with fine-grained word alignment. In EMNLP. Arnab Sinha, Zhihong Shen, Yang Song, Hao Ma, Darrin Eide, Bo-June Paul Hsu, and Kuansan Wang. 2015. An Overview of Microsoft Academic Service (MAS) and Applications. In WWW. Cunchao Tu, Han Liu, Zhiyuan Liu, and Maosong Sun. 2017. Cane: Context-aware network embedding for relation modeling. In ACL. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention Is All You Need. In NIPS. Wenlin Wang, Chenyang Tao, Zhe Gan, Guoyin Wang, Liqun Chen, Xinyuan Zhang, Ruiyi Zhang, Qian Yang, Ricardo Henao, and Lawrence Carin. 2019. Improving textual network learning with variational homophilic embeddings. In Advances in Neural Information Processing Systems, pages 2074–2085. Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference. In NAACLHLT. Felix Wu, Amauri H. Souza, Tianyi Zhang, Christopher Fifty, Tao Yu, and Kilian Q. Weinberger. 2019a. Simplifying graph convolutional networks. In ICML. Lingfei Wu, Ian En-Hsu Yen, Kun Xu, Fangli Xu, Avinash Balakrishnan, Pin-Yu Chen, Pradeep Ravikumar, and Michael J Witbrock. 2018. Word Mover’s Embedding: From Word2Vec to Document Embedding. In EMNLP. 
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. ArXiv, abs/1609.08144. Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, and Philip S Yu. 2019b. A Comprehensive Survey on Graph Neural Networks. ArXiv, abs/1901.00596. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. ArXiv, abs/1906.08237. Hamed Zamani, Mostafa Dehghani, W. Bruce Croft, Erik G. Learned-Miller, and Jaap Kamps. 2018. From neural re-ranking to neural ranking: Learning a sparse representation for inverted indexing. In CIKM. Xinyuan Zhang, Yitong Li, Dinghan Shen, and Lawrence Carin. 2018. Diffusion maps for textual network embedding. In NeurIPS. 2281 A Appendix A - Baseline Details 1. Random Zero-mean 25-dimensional vectors were used as representations for each document. 2. Doc2Vec Doc2Vec is one of the earlier neural document/paragraph representation methods (Le and Mikolov, 2014), and is a natural comparison. We trained Doc2Vec on our training subset using Gensim ( ˇReh˚uˇrek and Sojka, 2010), and chose the hyperparameter grid using suggestions from Lau and Baldwin (2016). The hyperparameter grid used: {’window’: [5, 10, 15], ’sample’: [0, 10 ** -6, 10 ** -5], ’epochs’: [50, 100, 200]}, for a total of 27 models. The other parameters were set as follows: vector_size=300, min_count=3, alpha=0.025, min_alpha=0.0001, negative=5, dm=0, dbow=1, dbow_words=0. 3. Fasttext-Sum This simple baseline is a weighted sum of pretrained word vectors. We trained our own 300 dimensional fasttext embeddings (Bojanowski et al., 2017) on a corpus of around 3.1B tokens from scientific papers which is similar in size to the SciBERT corpus (Beltagy et al., 2019). We found that these pretrained embeddings substantially outperform alternative off-theshelf embeddings. We also use these embeddings in other baselines that require pretrained word vectors (i.e., SIF and SGC that are described below). The summed bag of words representation has a number of weighting options, which are extensively tuned on a validation set for best performance. 4. SIF The SIF method of Arora et al. (2017) is a strong text representation baseline that takes a weighted sum of pretrained word vectors (we use fasttext embeddings described above), then computes the first principal component of the document embedding matrix and subtracts out each document embedding’s projection to the first principal component. We used a held-out validation set to choose a from the range [1.0e-5, 1.0e-3] spaced evenly on a log scale. The word probability p(w) was estimated on the training set only. When computing term-frequency values for SIF, we used scikit-learn’s TfidfVectorizer with the same parameters as enumerated in the preceding section. sublinear_tf, binary, use_idf, smooth_idf were all set to False. Since SIF is a sum of pretrained fasttext vectors, the resulting dimensionality is 300. 5. ELMo ELMo (Peters et al., 2018) provides contextualized representations of tokens in a document. It can provide paragraph or document embeddings by averaging each token’s representation for all 3 LSTM layers. We used the 768-dimensional pretrained ELMo model in AllenNLP (Gardner et al., 2018). 6. 
Citeomatic The most relevant baseline is Citeomatic (Bhagavatula et al., 2018), which is an academic paper representation model that is trained on the citation graph via sampled triplets. Citeomatic representations are an L2 normalized weighted sum of title and abstract embeddings, which are trained on the citation graph with dynamic negative sampling. Citeomatic embeddings are 75-dimensional. 7. SGC Since our algorithm is trained on data from the citation graph, we also compare to a state-ofthe-art graph representation learning model: SGC (Simple Graph Convolution) (Wu et al., 2019a), which is a graph convolution network. An alternative comparison would have been GraphSAGE (Hamilton et al., 2017b), but SGC (with no learning) outperformed an unsupervised variant of GraphSAGE on the Reddit dataset16, Note that SGC with no learning boils down to graph propagation on node features (in our case nodes are academic documents). Following Hamilton et al. (2017a), we used SIF features as node representations, and applied SGC with a range of parameter k, which is the number of times the normalized adjacency is multiplied by the SIF feature matrix. Our range of k was 1 through 8 (inclusive), and was chosen with a validation set. For the node features, we chose the SIF model with a = 0.0001, as this model was observed to be a high-performing one. This baseline is also 300 dimensional. 8. SciBERT To isolate the advantage of SPECTER’s citation-based fine-tuning objective, we add a controlled comparison with SciBERT (Beltagy et al., 2019). Following Devlin et al. (2019) we take the last layer hidden state corresponding to the [CLS] token as the aggregate document representation.17 16There were no other direct comparisons in Wu et al. (2019a) 17We also tried the alternative of averaging all token representations, but this resulted in a slight performance decrease compared with the [CLS] pooled token. 2282 9. Sentence BERT Sentence BERT (Reimers and Gurevych, 2019) is a general-domain pretrained model aimed at embedding sentences. The authors fine-tuned BERT using a triplet loss, where positive sentences were from the same document section as the seed sentence, and distractor sentences came from other document sections. The model is designed to encode sentences as opposed to paragraphs, so we embed the title and each sentence in the abstract separately, sum the embeddings, and L2 normalize the result to produce a final 768-dimensional paper embedding.18 During hyperparameter optimization we chose how to compute TF and IDF values weights by taking the following non-redundant combinations of scikit-learn’s TfidfVectorizer (Pedregosa et al., 2011) parameters: sublinear_tf, binary, use_idf, smooth_idf. There were a total of 9 parameter combinations. The IDF values were estimated on the training set. The other parameters were set as follows: min_df=3, max_df=0.75, strip_accents=’ascii’, stop_words=’english’, norm=None, lowercase=True. For training of fasttext, we used all default parameters with the exception of setting dimension to 300 and minCount was set to 25 due to the large corpus. 18We used the ‘bert-base-wikipedia-sections-mean-tokens’ model released by the authors: https://github.com/ UKPLab/sentence-transformers
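As an illustration of how the SIF baseline (item 4 above) combines the pretrained fasttext vectors and unigram probabilities described in this appendix, here is a minimal sketch; the input containers (token lists plus vector and probability dictionaries) are assumptions, not the authors' exact implementation.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD

def sif_embeddings(docs, word_vecs, word_probs, a=1e-3):
    """docs: list of token lists; word_vecs: dict token -> vector (e.g., fasttext);
    word_probs: dict token -> unigram probability estimated on the training set."""
    dim = len(next(iter(word_vecs.values())))
    X = np.zeros((len(docs), dim))
    for i, doc in enumerate(docs):
        toks = [t for t in doc if t in word_vecs]
        if toks:
            weights = np.array([a / (a + word_probs.get(t, 0.0)) for t in toks])
            vecs = np.array([word_vecs[t] for t in toks])
            X[i] = (weights[:, None] * vecs).mean(axis=0)
    # Remove each document's projection onto the first principal component.
    svd = TruncatedSVD(n_components=1, n_iter=7)
    svd.fit(X)
    u = svd.components_[0]
    return X - X @ np.outer(u, u)
```

Removing the projection onto the first principal component is what distinguishes SIF from a plain frequency-weighted average of word vectors.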
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2283–2295 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2283 Semantic Scaffolds for Pseudocode-to-Code Generation Ruiqi Zhong Mitchell Stern Dan Klein Computer Science Division University of California, Berkeley {ruiqi-zhong,mitchell,klein}@berkeley.edu Abstract We propose a method for program generation based on semantic scaffolds, lightweight structures representing the high-level semantic and syntactic composition of a program. By first searching over plausible scaffolds then using these as constraints for a beam search over programs, we achieve better coverage of the search space when compared with existing techniques. We apply our hierarchical search method to the SPoC dataset for pseudocodeto-code generation, in which we are given line-level natural language pseudocode annotations and aim to produce a program satisfying execution-based test cases. By using semantic scaffolds during inference, we achieve a 10% absolute improvement in top-100 accuracy over the previous state-of-the-art. Additionally, we require only 11 candidates to reach the top-3000 performance of the previous best approach when tested against unseen problems, demonstrating a substantial improvement in efficiency. 1 Introduction Systems that can map from natural language descriptions of tasks or programs to executable code have the potential for great societal impact, helping to bridge the gap between non-expert users and basic automation or full-fledged software development. Accordingly, this area of research has garnered significant interest in recent years, with systems being devised for the translation of natural language specifications into database queries (Wang et al., 2018), if-then programs (Chen et al., 2016), game elements (Ling et al., 2016), and more. While much of the prior work in executable semantic parsing involves short descriptions being mapped into single-line programs, some tasks have recently been proposed that involve multiple natural language utterances on the input side and full programs on the output side, often reaching tens of Line Pseudocode Code 1 in function main int main() { 2 n is a long integer 0 long n = 0; 3 while n is less than o while (n < ‘o’) { 4 … … 5 close while scope } Translate while (n < o) { while (n < ‘o’) Other wrong candidates error: use of undeclared identifier 'o' error: missing '{' Figure 1: Pseudocode is translated to code for each line and combined to form a valid program. Certain combinations are invalid due to syntactic and semantic constraints. lines in length and including non-trivial state manipulation. Examples include the Magic the Gathering and Hearthstone datasets (Ling et al., 2016) derived from trading cards and Java or Python classes implementing their behavior in a game engine, the CONCODE dataset (Iyer et al., 2018) consisting of Java documentation strings and method bodies, and the NAPS and SPoC datasets (Zavershynskyi et al., 2018; Kulal et al., 2019) consisting of pseudocode annotations and source code for programming competition problems. Past approaches to these large-scale languageto-code tasks have typically employed sequencebased models (Ling et al., 2016) that do not account for structure on the output side, or tree-based models (Allamanis et al., 2015; Rabinovich et al., 2017a; Yin and Neubig, 2017; Hayati et al., 2018; Iyer et al., 2019) that incorporate the syntax but not the semantics of the output domain. 
However, if we want to generate programs that can be executed successfully, the inclusion of both syntactic and semantic constraints is crucial. As shown in Figure 1, while multiple program fragments may be syntactically correct and represent plausible translations of the corresponding pseudocode, not all of them will lead to executable programs. To address this, we propose a search procedure based on semantic scaffolds, lightweight sum2284 maries of higher-level program structure that include both syntactic information as well as semantic features such as variable declarations and scope constraints. See Section 3 for a more formal definition. While these do not encode the full spectrum of constraints used in some formal program synthesis tools (Solar-Lezama, 2009; Gulwani et al., 2017), they strike a balance between utility, speed, and ease of use, offering substantial improvements in system performance without a significant increase in complexity. In this work we focus on the Search-based Pseudocode to Code (SPoC) dataset (Kulal et al., 2019) due to its challenging multiline programs and availability of input-output test suites to evaluate denotation accuracy. The dataset contains line-level pseudocode annotations for 18,356 C++ programs provided by crowdsource workers from Amazon Mechanical Turk. As in the approach of Kulal et al. (2019), we first obtain candidate code fragments for each line using an off-the-shelf neural machine translation system. We then aim to find the highestscoring combination of fragments that results in a valid program. Although finding the optimal program under this setting is NP-hard when variable usage constraints are introduced (see Section A.3), we can approximate it with a hierarchical beam search. Our algorithm first searches for semantic scaffolds for the program, then assembles fragments together conditioned on these scaffolds. This hierarchical approach speeds up search, produces higher quality variations, and leads to substantial improvements in our system’s final accuracy. We achieve a new state-of-the-art by solving 55.1% of the test cases within 100 attempts. This represents a 10.4% absolute improvement over the previous best (Kulal et al., 2019), and reaches 81% of our model’s oracle performance. When tested against unseen problems (or crowd-workers), our top 11 (or top 52, respectively) candidates have the same performance as their top 3000 candidates, demonstrating marked gains in efficiency. We complement our results with a discussion of specific cases in which our semantic scaffolds use global program context to resolve ambiguities in the pseudocode. We also conduct a manual error analysis of 200 failures to better characterize the limitations of our method and suggest possible extensions for future work. Our contributions are summarized as follows: • We propose the use of semantic scaffolds to add semantic constraints to models for longform language-to-code generation tasks. • We introduce a hierarchical beam search algorithm that incorporates these constraints, resulting in heightened efficiency, better coverage of the search space, and stronger performance when compared with the standard approach. • We achieve a new state-of-the-art accuracy of 55.1% on the SPoC pseudocode-to-code dataset. 2 Pseudocode-to-Code Task In this work, we focus on the SPoC dataset introduced by Kulal et al. (2019). 
2.1 Data This dataset consists of C++ solutions to problems from Codeforces, a competitive programming website, along with the input-output test cases used for each problem to evaluate correctness. It contains 18,356 programs in total with 14.7 lines per program on average. Each line is annotated with a natural language pseudocode description given by a crowd worker from Amazon Mechanical Turk. On average, there are 7.86 tokens per line of code and 9.08 tokens per pseudocode annotation. From the full dataset, 1,752 programs with annotations from unseen crowd workers and 1,820 programs for unseen problems are held out for evaluation. More details can be found in Kulal et al. (2019). 2.2 Task Suppose the target program has L lines. For each line l ∈[L], we are given a natural language pseudocode annotation xl and an indentation level il. Our goal is to find a candidate program y based on (x1, i1), . . . , (xL, iL) that can solve the given problem (i.e. pass all the test cases) using as few submission attempts as possible. The search efficiency of an algorithm is calculated as the fraction of problems it can solve using a budget of B attempts per problem, where an attempt includes both compiling a candidate program and running the test cases. As in Kulal et al. (2019), for each pseudocode line xl, we use an off-the-shelf neural machine translation system to obtain a set of C candidate code pieces Yl = {ylc | c ∈[C]}, where candidate code piece ylc has probability plc. A full candidate 2285 program y is a concatenation of candidate code pieces, one per line, and has score p(y): y = concatL l=1ylcl, p(y) = L Y l=1 plcl. (1) We aim to find valid high-scoring programs in our search procedure. 3 Combination Constraints Kulal et al. (2019) propose best-first search as a baseline, which enumerates all complete candidate programs in descending order by score. Using a priority queue, this algorithm can efficiently find the exact top B highest scoring candidates in time O(L log(BL)) per candidate. However, this approach ignores any dependence between different lines. For example, any of the code piece candidates in Figure 1 could potentially be used in a valid program, but if we naively combine certain subsets of candidates together, the resulting program will be invalid due to the use of undeclared variables or mismatching braces. To solve this problem, we propose to enforce certain syntactic and semantic constraints when combining candidate code pieces. 3.1 Syntactic Constraints The candidate program should adhere to the grammatical specification of the target language. However, since incorporating the complete set of C++ grammatical constraints would require significant engineering effort, we instead restrict our attention to the set of “primary expressions” consisting of high-level control structures such as if, else, for loops, function declarations, etc. As shown in Figure 2, we parse the candidate code pieces for each line into a list of primary expression symbols. In order for code pieces from consecutive lines to be used together, there must exist a grammatical derivation that combines their respective symbols. The complete list of primary expression can be found in the appendix; see Tables 6 and 7. Additionally, some production rules are associated with the start or end of a variable scope block. We require that the number of open scope blocks equals the indentation level il for each line l. 
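As a point of reference for the constrained variants discussed next, the following is a minimal Python sketch (not the authors' implementation) of the unconstrained best-first baseline: it enumerates full programs in descending order of the score in Eq. (1). The input format is an assumption: `candidates[l]` is a list of (code piece, probability) pairs for line l, sorted by descending probability.

```python
# Minimal sketch of best-first enumeration of full programs by score.
import heapq
import math

def best_first(candidates, budget):
    """Return up to `budget` full programs in non-increasing score order."""
    L = len(candidates)
    logp = [[math.log(max(p, 1e-12)) for _, p in line] for line in candidates]

    def score(state):                       # state = chosen candidate rank per line
        return sum(lp[c] for lp, c in zip(logp, state))

    start = tuple(0 for _ in range(L))      # top-1 code piece on every line
    heap = [(-score(start), start)]         # max-heap via negated log-scores
    seen = {start}
    results = []
    while heap and len(results) < budget:
        neg_score, state = heapq.heappop(heap)
        program = [candidates[l][c][0] for l, c in enumerate(state)]
        results.append((program, math.exp(-neg_score)))
        # Successors: demote exactly one line to its next-best candidate.
        for l in range(L):
            if state[l] + 1 < len(candidates[l]):
                nxt = state[:l] + (state[l] + 1,) + state[l + 1:]
                if nxt not in seen:
                    seen.add(nxt)
                    heapq.heappush(heap, (-score(nxt), nxt))
    return results
```

The constrained variants of Section 3 would additionally reject states whose code pieces cannot be combined under the primary-expression grammar and symbol-table checks.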
3.2 Symbol Table Constraints Each scope block is associated with a symbol table (Aho et al., 1986) keeping track of the variables that have been declared within that scope or any containing scopes. We extract the variable names used or declared by each code piece (Figure 3) and ensure that (1) undeclared variables are not used, and (2) variables are not redeclared within the same scope. After checking these constraints, any variables declared by a given code piece will be added to the symbol table associated with the current scope. These symbol table constraints are based on the semantic information of code pieces and are fundamentally different from previous AST-based syntactic constraints for code generation (Rabinovich et al., 2017b; Yin and Neubig, 2017). Formally, any context free grammar that specifies the same constraints requires at least exponential description complexity. We provide a proof adapted from Ellul et al. (2005) in Appendix A.2. 3.3 Syntactic and Semantic Scaffolds We note two properties of the aforementioned constraints. First, we can efficiently compute whether a program prefix can possibly lead to a full program that satisfies the constraints by using an incremental parser (Ghezzi and Mandrioli, 1979) and checking the symbol tables. Secondly, not all information from a code piece is necessary to verify the constraints. Accordingly, when multiple code piece candidates have the same primary expression symbols and variable declarations and usage, swapping between them would not affect the satisfiability of the constraints. For example, changing from a += 1 to a -= 1 will not change a compilable program into a non-compilable one, or vice versa. These two properties will help motivate the hierarchical beam search algorithm introduced in the next section. More formally, we take the configuration φ(ylc) of a line ylc to be the minimal set of features required to verify the above constraints. The prefix scaffold Sy,l = [φ(y1c1), φ(y2c2), . . . , φ(ylcl)] of a program y then contains all the information needed to verify the constraints for the first l lines. We can efficiently compute whether Sy,l1 is a valid prefix scaffold when l < L and whether Sy,L is a valid scaffold for a full program when l = L. 1To keep notation uncluttered, we sometimes use φ to denote a configuration, we ignore the subscript y of S when we refer to a general scaffold that is not necessarily associated with a specific program, and we ignore the subscript l = L of S when we refer to the scaffold of a full program. 2286 Code Pieces Extracted Primary Expressions int main() { int n, ans = 1; for (int i = 1; i <= n / 2 - 1; i++) cout << 2 << " "; if (n % 2 == 0) cout << 2 << endl; } return_type function_name ( ) {start terminal_stmt forstart terminal_parathenses terminal_stmtend if terminal_parathensesstart terminal_stmtend }end (a) Code pieces are parsed into Primary Expressions Symbols Symbol Production Rules Used function stmt* for_stmt if_stmt return_type function_name ( ) {start stmt* }end stmt* stmt for_stmt | if_stmt | terminal_stmt ifstart terminal_parathenses terminal_stmtend; (b) Production rules of Primary Expression Grammar function return type int main ( ) {start stmt* }end terminal stmt for stmt if stmt int n, ans = 1; forstart terminal parathenses … terminal stmt end (int i = 1; i <= n / 2 - 1; i++) cout << 2 << " "; (c) Abstract Syntax Tree of the code piece combination. function name Figure 2: Example primary expression grammar. 
Subscripts “start/end” refers to starting/ending variable scopes. const int N = 35; int main() { int n, h[N], count; main N Program Prefix Symbol Table per Scope n h count i Variable i declared in the third scope Variable i, n, count used in the third scope Variable Used/Declared main() scope for () scope file scope Next Line for (int i = 0; i < n; i ++) count++; extract Figure 3: Extracting variables used or declared at each scope for a given code piece to verify the symbol table constraints. 4 Constrained Search 4.1 Beam Search Our goal is to find the top B highest-scoring candidate programs that satisfy the aforementioned constraints. Unfortunately, finding whether even one solution exists is NP-hard (proof given in Section A.3). One way we can approximate the solution is to use a standard beam search. The beam maintains a list of hypothesis program prefixes along with their respective scores. We extend the beam by adding the candidate code pieces from the next line to each candidate program prefix if they form valid combinations under the constraints, then prune the hypotheses with scores outside of the top W. The algorithm ends after L steps, returning all the valid hypotheses in the final beam. 4.2 Scaffold Search Although beam search can approximate the top B solutions, the time complexity of beam search grows quadratically with the beam width W. Finding the top B candidates requires that W ≥B, and hence each candidate takes Ω(BL) (amortized) time to generate, which can become intractable if B is on the order of thousands. Even worse, beam search is often biased towards variations at the end of the program due to its greedy decisions, and can waste its budget on candidates that are unlikely to be the correct solution. This is in direct contrast to the computationally lighter baseline which generates the exact (unbiased) top candidates independently for each line without constraint. Can we combine the advantages of both algorithms? A key observation is that the assumption of independent scoring across different lines allows fast and unbiased full program candidate generation, while an expensive beam search is inevitably needed to deal with the inherent dependence between lines. Therefore, we propose a hierarchical beam search method that first uses beam search with a smaller beam width W to find likely scaffolds, including only the minimum dependency information between lines to satisfy the constraints, then scores candidates independently for each line conditioned on the scaffold. We assign probability p(φlγ) to configuration φlγ by marginalizing all code piece candidates at line l with configuration φlγ, and assign probability p(S) to scaffold S by multiplying the configuration probabilities from each line: p(φlγ) = X φ(ylc)=φlγ plc, p(S) = L Y i=1 p(S[i]). (2) Using this scoring function, we run a scaffold beam search with size W, then select the top K highest scoring scaffolds S1, S2 . . . SK. Next, to generate program candidates from a given scaffold S, we filter out all code pieces in Yl that do not have the configuration specified by S; in other words, the new set of code candidate pieces for each line l is Y S l = {ylc ∈Yl | φ(ylc) = S[l]}. (3) As a result, conditioned on a fixed scaffold S, code pieces from each line can be chosen independently and the resulting full program will be guaranteed to satisfy the aforementioned constraints. 2287 Given K candidate scaffolds, we enumerate the top full program candidate from each scaffold and choose the highest scoring one. 
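The following Python sketch (simplified; not the authors' implementation) illustrates this two-stage procedure: Eq. (2) marginalizes per-line code pieces into configuration probabilities, a beam search over prefix scaffolds keeps the top W valid prefixes, and Eq. (3) restricts each line to code pieces matching a chosen scaffold. The candidate format (code piece, probability, configuration) and the `is_valid_prefix` callback, which stands in for the incremental parser plus symbol-table check, are assumptions.

```python
# Minimal sketch of hierarchical scaffold search (Eqs. 2 and 3).
from collections import defaultdict

def scaffold_beam_search(candidates, is_valid_prefix, W=50, K=20):
    # Eq. (2): marginalize code pieces with the same configuration per line.
    config_probs = []
    for line in candidates:
        probs = defaultdict(float)
        for _, p, config in line:
            probs[config] += p
        config_probs.append(probs)

    # Beam search over prefix scaffolds, keeping the top-W valid prefixes.
    beam = [((), 1.0)]
    for probs in config_probs:
        expanded = [(prefix + (config,), score * p)
                    for prefix, score in beam
                    for config, p in probs.items()
                    if is_valid_prefix(prefix + (config,))]
        beam = sorted(expanded, key=lambda x: -x[1])[:W]
    return [scaffold for scaffold, _ in beam[:K]]

def best_program_per_scaffold(candidates, scaffold):
    # Eq. (3): restrict each line to code pieces matching the scaffold's
    # configuration, then choose lines independently (here: the top-1 program).
    program, score = [], 1.0
    for line, config in zip(candidates, scaffold):
        matching = [(c, p) for c, p, phi in line if phi == config]
        code, p = max(matching, key=lambda x: x[1])
        program.append(code)
        score *= p
    return program, score
```

Generating the top-B programs per scaffold, rather than only the top-1, would reuse the best-first enumeration sketched earlier, restricted to the filtered candidate sets.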
This takes time O(K + L log(BL)) per candidate. In practice, we pick relatively small K and the running time has only logarithmic dependence on B. 4.3 Tradeoffs in Early Detection An alternative view on beam search is that it front loads the computation to reject invalid programs that do not satisfy the constraints earlier in the search process. A brute force alternative is to generate the next highest scoring candidates from the unconstrained baseline and reject invalid ones. This method is guaranteed to produce top-scoring solutions, but it might need arbitrarily many candidates to find a valid one. We need to compare the computational efficiency between these two methods. The most computationally expensive operation in constraint verification is to verify whether the next line is valid given the program prefix. Therefore, we count how many times this verifier function is called as a proxy to measure computational efficiency. We allow the brute force method to use as large a verifier function call quota as our “active” beam search method: it can validate/reject a program candidate until the quota is used up. Section 6.4 compares our scaffold search method against this brute force approach. The latter needs thousands of times more computation to attain the same level of performance as the former. 5 Implementation2 Empty Pseudocode Around 26% of the lines in the data set do not have pseudocode annotations. They usually correspond to lines of code that do not have semantically meaningful information, such as “int main() {”, “{”, “}”, etc. Kulal et al. (2019) replaced these empty pseudocode lines with the ground truth code, effectively giving this information away to the search algorithm. We did not use the gold code pieces for these lines, which makes our task more challenging. Model Training We use OpenNMT (Klein et al., 2017) with its default settings to translate pseudocode into code piece candidates. Our model is a two-layer LSTM seq2seq model with hidden size 512, an attention mechanism (Bahdanau et al., 2014) and copy pointers (Vinyals et al., 2015). 2Our implementation is available at https://github. com/ruiqi-zhong/SemanticScaffold. We estimate the fraction problems solvable given infinite search budget and 100 candidates per line as in Kulal et al. (2019) to obtain an oracle bound on performance. Due to slight difference in hyperparameters and tokenization method, our model has higher ceiling: on the unseen worker (problems) test set, the oracle performance3 is 74.4% (60.5%), compared to 71.4% (55.2%) in previous work. Across all test examples, the oracle performance is 68%. Parsing Code Pieces Since no off-the-shelf C++ parser extracts the information we need from code pieces, we implement our own primary expression parser to extract high level control information. We rely on the following heuristic assumptions to parse the code pieces generated by the model: (1) a code piece belongs to only one variable scope; (2) the generation of every primary expression terminal symbol lies in one line. Our parser fails on less than 0.01% of the code pieces in the dataset. While selecting the candidates for each line, we immediately reject the ungrammatical pieces we cannot parse. Without deliberate implementation optimization, this parsing operation takes on average 2.6 seconds to process all the top 100 code pieces for a problem – approximately the same wallclock time as 1 compilation attempt. Search Algorithm Hyperparameters As in Kulal et al. 
(2019), we consider the top C = 100 code pieces for each line. Unless otherwise mentioned, our default beam width W is 50 for scaffold search and we keep the top K = 20 scaffolds for the subsequent generation. 6 Search Performance 6.1 Metrics We evaluate a search algorithm A by computing the fraction of problem it can solve on the test set given evaluation budget B per problem, which we denote as fA(B). We plot fA against B and evaluate it at B = 1, 10, 100, 1000 for each algorithm A to compare performance. We note that the difference of f values between two algorithms becomes smaller and less informative as B increases. With infinite code piece candidates and budget, a brute force search can 3The oracle performance here is not a universal property of the data, but depends on the model used to generate the code pieces. 2288 Line Pseudocode Code Piece Candidates Syntactic Config SymTable 1 in function main int main() { long long n = 0; terminal_stmt ; declare n 2 n is a long integer 0 long n = 0; terminal_stmt ; declare n while (n < ‘o’) { while condition { use n 3 while n is less than o while (n < o ) { while condition { use n, o while (n < ‘o’) while condition use n 4 rest of the program omitted … (a) Candidate code pieces and configs ϕ (b) Search over scaffolds Marginalize over common Configs SymTable Configs differ Config 2 terminal_stmt declare n 3 while condition { use n Other scaffolds omitted ... error: use of undeclared identifier 'o' (c) Generate from scaffolds terminal_stmt declare n while condition { use n long long n = 0; long n = 0; 2 terminal_stmt declare n 3 while condition { use n, o while (n < ‘o’) { (d) Combine 2 long long n = 0; 3 while (n < ‘o’) { 2 long n = 0; 3 while (n < ‘o’) { Syntactic Configs differ Figure 4: (a) Candidate code pieces and their syntactic/Symtable configuration for each line; (b) use beam search to find highest scoring valid scaffolds; (c) given a scaffold, select code pieces that has the same configurations for each line. (d) combine code pieces to form full program. enumerate all possible programs, find the right solution and f converges to 1. Direct comparison on f values hence becomes meaningless as B increases. To address this deficiency, we define a lead metric lA1,A2(B) equal to the extra budget X needed by algorithm A2 to reach the same level of performance as A1 given budget B. Formally, lA1,A2(B) = inf{X | fA2(B + X) ≥fA1(B)}. (4) A visualization can be seen in Figure 5(c). We report our algorithms’ performance on the heldout test set with annotations from unseen crowd workers and with unseen problems separately. 6.2 Comparison of Constraints We compare four settings: • No Constraints: the best-first search method that scores lines independently. • Syntactic Constraints: the constraints on the primary expression and indentation level as described in section 3.1. • Symbol Table Constraints: both the syntactic constraints and the symbol table constraints described in section 3.2. We abbreviate this as SymTable. • Backoff: sometimes hierachical beam search with the SymTable constraints fails to return Figure 5: (a), (b) Comparison of f performance under different constraints. (c) a zoom in visualization on the definition of lead metrics (d) lead of SymTable constraint on Syntactic constraint on different test sets. any valid scaffold. We back off to just the Syntactic constraints if this happens. Additionally, we compare with the Previous stateof-the-art reported by Kulal et al. (2019). 
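For concreteness, the lead metric of Eq. (4) can be computed directly from the two success curves; the short sketch below assumes `f_a1` and `f_a2` are callables mapping a budget to the fraction of problems solved, evaluated over a finite budget range.

```python
# Minimal sketch of the lead metric in Eq. (4).
def lead(f_a1, f_a2, B, max_budget):
    target = f_a1(B)
    for X in range(0, max_budget - B + 1):
        if f_a2(B + X) >= target:
            return X
    return float("inf")   # A2 never catches up within the evaluated budgets
```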
The results can be seen in Figure 5 and Table 1, where we use the constraint type as a shorthand for the search algorithm under this constraint. Without constraints, the baseline algorithm performs especially poorly because it needs syntactic context to select relevant code pieces for 26% of the lines with empty pseudocode. SymTable outperforms Syntactic. As shown in 2289 Test Against Unseen Workers Hierarchical Search (H), Beam Width W = 50 Constraint B=1 B=10 B=102 B=103 None 0.0% 8.1 % 29.2 % 44.3% Previous 30.7% 44.4% 53.7% 58.6% Syntactic 42.8 % 51.9% 59.3% 65.9% SymTable 45.8% 55.1% 62.6% 67.3% Backoff 46.0% 55.3% 62.8% 67.6% Test Against Unseen Problems Constraint B=1 B=10 B=102 B=103 None 0.0% 3.0% 11.5% 21.8% Previous 17.8% 28.4% 34.2% 38.3% Syntactic 27.5 % 35.4% 42.1% 47.8% SymTable 31.0% 39.2 46.0% 49.3% Backoff 31.2% 39.4% 46.1% 49.6% Table 1: Comparison of the fraction of program passed when B = 100,1,2,3 under different constraints; constraint satisfied by hierarchical beam search with the default hyper-parameters mentioned in Section 5. “Previous” refers to the previous state of the art model. Figure 5(d), the lead of SymTable on Syntactic grows linearly: the more these two algorithms search, the more budget is needed by Syntactic to reach the same level as SymTable. Syntactic needs nearly 600 more budget to have comparable performance with SymTable that uses 400 budget. We notice that all of our constrained search methods outperform the previous state-of-the-art. Averaged across all test examples, Backoff can solve 55.1% of the problems within 100 budget, which is ≈10% higher than the previous work. On unseen workers (problems), the top 11 (top 52) candidates of Backoff solve the same fraction of problems as the top 3000 candidates of the best performing algorithm in Kulal et al. (2019). 6.3 Regular vs. Hierarchical Beam Search We use regular beam search with beam width W = 200 to generate B = 100 valid candidate full programs. We did not experiment with B = 1000 because beam search with W ≥B ≥1000 is computationally intractable. For hierarchical beam search we experiment with W = 10, 25, 50 for scaffold search and keep the top K = min(W, 20) scaffolds for subsequent searches. Table 2 compares the performance of hierarchical beam search against regular beam search with different beam sizes under Syntactic and SymTable constraints. We find that if hierarchical beam search is used, even dropping the beam width Test Against Unseen Workers, Syntactic Method, Width B=1 B=10 B=102 H, W=10 42.8% 51.7% 59.1% H, W=25 42.8% 51.8% 59.3% H, W = 50 42.8% 51.9% 59.3% R, W=200 42.4% 51.3% 58.2% Test Against Unseen Workers, SymTable Method, Width B=1 B=10 B=102 H, W=10 45.4% 54.3% 61.0% H, W=25 45.6% 54.7% 61.9% H, W = 50 45.8% 55.1% 62.6% R, W=200 45.6% 54.9% 61.9% Table 2: Comparison of different beam size with Syntactic and SymTable constraint when tested against unseen workers. H/R refers to hierarchical/regular beam search and W is the beam width. The same results on unseen problems can be seen in appendix . from 50 to 10 leads to negligible change in performance. In contrast, even with a large beam width W = 200, regular beam search method cannot efficiently search for the solution and leads to a noticeable drop in performance. We observe a similar trend for SymTable: regular beam search with beam width W = 200 underperforms hierarchical search with beam width W = 25. 
However, if we further decrease the hierarchical beam search width from 25 to 10 in this setting, we observe a significant drop in performance, possibly because there are more variable usage variations than syntactic variations. 6.4 Scaffold Search vs. Brute Force Method We now compare scaffold search to the brute force algorithm as described in section 4.3. We make B = 50,000 attempts for the brute force method so that its performance can match at least the top 10 candidates of our constrained approach and make the lead metrics meaningful. To save computation and avoid compiling all 50,000 programs, we early reject every candidate that does not fulfill our constraints. The lead of our approaches against the brute force algorithm is shown in Figure 6. After being adjusted for the constraint checking quota used, the lead of our approach is tens of thousands ahead of the unconstrained approach. Scaffold search saves lot of computation by inducing a little overhead earlier in the search process. 2290 Figure 6: Lead of SymTable and Syntactic constraints on non-constrained approach with equal quota on test set with unseen (a) workers and (b) problems. 7 Analysis 7.1 Program Candidate Variations Beam search has the problem of producing fewer variations at the beginning of the search. Such a weakness might be tolerable if we only care about the top 1 candidate, but becomes disastrous in a search setting where we want the top B candidates, whose variation is typically spread across the entire program. We describe the following procedure to formally define this intuition. We first aggregate code piece choices for each line for all the top B programs. As shown in Figure 8(a), we construct a matrix such that each column corresponds to a full program candidate; the number r in the ith row and jth column means that on line i, the jth full program candidate chooses the rth code piece candidate (i.e. yici = yir). Then we can build a prefix tree (Figure 8(b)) by treating each column as a string, where each traversal from the root to a leaf is a complete candidate program y. We define the representative branch/program as a traversal from the root to a leaf that always chooses the child that contains the most leaves (with ties being broken randomly). For each of the remaining B −1 programs/traversals, we find the smallest line number where it starts to diverge from the representative branch. Among these B −1 programs, we count the fraction of divergences that take place in the first/second half of the lines. For example, in Figure 8(b), 0% of the divergences occur in the first half. We compare hierarchical vs. regular beam search under syntactic constraints with different beam widths W: hierarchical W = 10, 50 and regular W = 50, 200. We group the programs by length L, consider the top B = 25 attempted programs for each problem and report the fraction of divergences that occur in the first half of the program length for each group. Length L H 10 H 50 R 50 R 200 (0, 10] 45.4% 45.5% 43.6% 45.5% (10, 20] 63.2% 63.4% 58.2% 63.4% (20, 30] 63.6% 63.6% 56.8% 63.6% (30, 40] 67.2% 67.3% 58.2% 67.3% (40, ∞) 69.4% 68.8% 56.8% 68.8% Table 3: Fraction of divergence in the first half of the program, grouped by program length L. In the column headers, H/R represents Hierarchical/Regular beam search under Syntactic constraint, and the number represents beam width W. The column with the lowest fraction is underlined. The results can be seen in Table 3. 
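The divergence statistic of Table 3 can be computed roughly as in the sketch below, which approximates the procedure described above: each attempted program is represented as its tuple of per-line candidate ranks (one column of the matrix in Figure 8), ties are broken by insertion order rather than randomly, and the "first half" is taken as line indices below L/2.

```python
# Approximate sketch of the representative-branch divergence statistic.
from collections import Counter

def representative_branch(programs):
    branch, pool = [], list(programs)
    L = len(programs[0])
    for line in range(L):
        # Follow the prefix-tree child containing the most programs (leaves).
        rank, _ = Counter(p[line] for p in pool).most_common(1)[0]
        branch.append(rank)
        pool = [p for p in pool if p[line] == rank]
    return branch

def first_half_divergence_fraction(programs):
    branch = representative_branch(programs)
    L = len(branch)
    divergences = []
    for p in programs:
        if list(p) == branch:
            continue                        # the representative program itself
        first = next(l for l in range(L) if p[l] != branch[l])
        divergences.append(first)
    return sum(1 for l in divergences if l < L / 2) / max(len(divergences), 1)
```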
For regular beam search, a moderate beam width W = 50 consistently brings fewer variations in the first half of the program, and it needs a larger W = 200 to fix this problem. In contrast, a small W for hierarchical beam search produces the same amount of variations in the first half of the program. The same statistics under SymTable constraints can be seen in the appendix (Table 5) and the conclusion holds similarly. 7.2 Rejection by Constraints In this section we give representative examples on what program candidates are rejected by our syntactic and symbol table constraints. Syntactic Constraints As mentioned in Section 5, about 26% of the lines do not have pseudocode. They may correspond to “}”, “int main(){”, “{”, ”return 0”, “};” or “;”. These lines need contextual information to select valid code pieces and na¨ıvely combining the top 1 candidate from each line independently will always produce grammatically invalid programs. Syntactic constraints also rule out stylistic ambiguities. For example, when there is only one statement within an if statement, the programmer can optionally include a curly brace. However, the pseudocode does not contain such detailed information about style. Both “if(...){” and “if(...)” might be valid, but only one of them can be correct given the context of a program. Our syntactic constraints, which contain a curly brace constraint, can help us select the right code piece. Symbol Table (SymTable) Constraints Pseudocode annotations are sometimes implicit about variable declarations. Given the instruction “set N to 222222”, both code pieces (1) “int N = 2291 Reason (percentage) Pseudocode Gold Solution Model Generation (a) Generation wrong (47.5%) let value1, value2, val, a, b be integers with val = 0 int value1, value2, val, a, b = 0 ; int value1, value2, val = 0, a, b; (b) Needs type disambiguation (12.5%) s[occur[i][j] + k] = letter - a + A s[occur[i][j] + k] = letter - 'a' + 'A'; s[occur[i][j] + k] = 'letter' - a + A; (c) Needs syntax disambiguation (0.5%) else if dB is less than dW } else if (dB < dW) { else if (dB < dW) (d) Variable name typos (15.0%) if lfg = 1 if (flg == 1) { if (lfg == 1) { (e) Pseudocode wrong (23.5%) set ans = 25*length of s ans += (25 * s.length()); int ans = 25 * s.length(); Figure 7: Categorized error analysis for lines that no generated code piece is functionally equivalent to the gold. The percentage in the parentheses refers to the fraction of this category out of the 200 samples. 1 Full Program Rank Line Number The 4 th full program candidate picked the rank 0 code piece in line 6 . 1 2 3 4 5 6 1 0 0 0 0 0 0 2 1 1 1 1 1 1 3 0 0 0 0 0 0 4 0 0 0 0 0 0 5 0 0 0 1 2 1 6 0 1 3 0 1 2 0 0 0 0 1 2 1 0 3 2 0 1 3 branches diverge from the representative branch at line 5. 3 first half program Figure 8: (a) A matrix that represents each candidate’s choices of code pieces for each line. (b) A prefix tree constructed by treating each column as a string; the representative branch is the second column and marked with red color. 222222;” and (2) “N = 222222;” are potentially valid. We might disambiguate this case with a SymTable constraint: if the variable is declared before in the same scope, then we know this code piece should not contain a repeated declaration and hence we should choose candidate (2); otherwise we should choose (1) to avoid using undeclared variables. SymTable constraints are also helpful when the pseudocode does not put quotation marks around string/character literals. 
Consider the instruction “if lucky is A then do the following” with the ground truth code piece “if (lucky == ’A’) {”. The model might misunderstand A as a variable name and generate “if (lucky == A) {”. This error can be ruled out by SymTable constraint if variable A is undeclared. However, SymTable constraints do not preclude all errors related to declarations. Consider the following generation where the last line is wrong: i n t now = −1, cnt = 0; for ( i n t i = 0; i < n ; ++ i ) { . . . / / some l i n e s omitted / / cnt = 1 , now = v [ i ] ; / / gold i n t cnt = 1 , now = v [ i ] ; / / pred } A programmer will usually not declare new variables in the last line of a variable scope. However, technically this is not an invalid statement and the SymTable constraint fails to reject this wrong candidate. Extra modelling is needed to take into account programming conventions and common sense. 7.3 Code Piece Error Analysis So far we have focused on combining independent candidates from each line together to search for the target program. This heavily depends on the underlying model to generate potentially correct code pieces. However, in 32% of the programs at least one “hard” line has no generated code piece that is functionally equivalent to the solution, thus indicating plenty of room for improvement. To help the readers understand the bottleneck for code piece generation and point out important future directions, we randomly sampled 200 “hard” lines and manually analyzed why the generation fails by looking at the top 1 candidate of the model. The error analysis is available on our GitHub. We group the failures into the following categories, giving a detailed breakdown and examples in Figure 7. (a) The model generation is wrong despite clear pseudocode; this typically happens when the gold code piece is long or highly compositional. (b, c) The pseudocode contains ambiguity; the model generation is reasonable but either needs (b) variable type clarification or (c) syntactic context. This requires incorporating contextual information of the program into the code piece generation process. (d, e) The pseudocode either (d) consists of variable name typos or (e) is completely wrong. References Alfred V Aho, Ravi Sethi, and Jeffrey D Ullman. 1986. Compilers, principles, techniques. Addison wesley, 7(8):9. Miltiadis Allamanis, Daniel Tarlow, Andrew D. Gordon, and Yi Wei. 2015. Bimodal modelling of source code and natural language. In Proceedings 2292 of the 32Nd International Conference on International Conference on Machine Learning - Volume 37, ICML’15, pages 2123–2132. JMLR.org. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Xinyun Chen, Chang Liu, Richard Shin, Dawn Song, and Mingcheng Chen. 2016. Latent attention for if-then program synthesis. In Proceedings of the 30th International Conference on Neural Information Processing Systems, NIPS’16, pages 4581– 4589, USA. Curran Associates Inc. Keith Ellul, Bryan Krawetz, Jeffrey Shallit, and Mingwei Wang. 2005. Regular expressions: New results and open problems. Carlo Ghezzi and Dino Mandrioli. 1979. Incremental parsing. ACM Transactions on Programming Languages and Systems (TOPLAS), 1(1):58–70. Sumit Gulwani, Oleksandr Polozov, and Rishabh Singh. 2017. Program synthesis. Foundations and Trends R⃝in Programming Languages, 4(1-2):1– 119. 
Shirley Anugrah Hayati, Raphael Olivier, Pravalika Avvaru, Pengcheng Yin, Anthony Tomasic, and Graham Neubig. 2018. Retrieval-based neural code generation. arXiv preprint arXiv:1808.10025. Srinivasan Iyer, Alvin Cheung, and Luke Zettlemoyer. 2019. Learning programmatic idioms for scalable semantic parsing. arXiv preprint arXiv:1904.09086. Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, and Luke Zettlemoyer. 2018. Mapping language to code in programmatic context. arXiv preprint arXiv:1808.09588. G. Klein, Y. Kim, Y. Deng, J. Senellart, and A. M. Rush. 2017. OpenNMT: Open-Source Toolkit for Neural Machine Translation. ArXiv e-prints. Sumith Kulal, Panupong Pasupat, Kartik Chandra, Mina Lee, Oded Padon, Alex Aiken, and Percy Liang. 2019. Spoc: Search-based pseudocode to code. arXiv preprint arXiv:1906.04908. Wang Ling, Edward Grefenstette, Karl Moritz Hermann, Tom´aˇs Koˇcisk`y, Andrew Senior, Fumin Wang, and Phil Blunsom. 2016. Latent predictor networks for code generation. arXiv preprint arXiv:1603.06744. Maxim Rabinovich, Mitchell Stern, and Dan Klein. 2017a. Abstract syntax networks for code generation and semantic parsing. arXiv preprint arXiv:1704.07535. Maxim Rabinovich, Mitchell Stern, and Dan Klein. 2017b. Abstract syntax networks for code generation and semantic parsing. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1139–1149, Vancouver, Canada. Association for Computational Linguistics. Armando Solar-Lezama. 2009. The sketching approach to program synthesis. In Proceedings of the 7th Asian Symposium on Programming Languages and Systems, APLAS ’09, pages 4–13, Berlin, Heidelberg. Springer-Verlag. Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Advances in Neural Information Processing Systems, pages 2692–2700. Chenglong Wang, Po-Sen Huang, Alex Polozov, Marc Brockschmidt, and Rishabh Singh. 2018. Execution-guided neural program decoding. CoRR, abs/1807.03100. Pengcheng Yin and Graham Neubig. 2017. A syntactic neural model for general-purpose code generation. In The 55th Annual Meeting of the Association for Computational Linguistics (ACL), Vancouver, Canada. Maksym Zavershynskyi, Alex Skidanov, and Illia Polosukhin. 2018. Naps: Natural program synthesis dataset. arXiv preprint arXiv:1807.03168. 2293 A Appendices A.1 Primary Expressions Table 6 contains the grammar we use for the syntactic constraint and Table 7 defines the generation of terminal symbols. A.2 CFG Description Size of SymTable We show that we cannot specify the SymTable constraint in a context free grammar without exponential description complexity w.r.t. the number of variables declared. The intuition is that, since repeated declarations of a variable are not allowed, we need to keep track of all the variables that have been declared every time when verifying whether the next line is valid; however, a CFG, when transformed into a pushdown automata, is only allowed to peek at the top of the stack to decide the state transition. This means the symbol on the top of the stack, the state, or the transition rule need to have full information of about whether each variable has been declared, which contains exponentially many possibilities w.r.t. the number of variables. Our proof is an adaptation of Ellul et al. (2005), which proves this property for the language that accepts all the permutations of a fixed number of variables. We refer the readers to this paper if more details of the proof are needed. 
To formalize, we consider a simple grammar of K characters {v1, . . . , vK}, where vi means, semantically, declaring the variable vi, and the language L consists of all the possible sequences of declarations that have no repetition. L = {concatk j=1vaj|aj1 ̸= aj2 if j1 ̸= j2, k ≤K} (5) We prove that Theorem 1 L has at least ˜Ω(1.37K) description complexity4 as a context free grammar. Intuitively, it means if we want to use a CFG to specify L, we need the sum of total length of the production rules and number of symbols to be at least exponential. Proof: Since we can convert any CFG with size B to Chomsky Normal Form (CNF) with size O(B2), the above statement would be implied if we prove that L needs ˜Ω(1.372K) = ˜Ω(1.89K) description size in Chomsky Normal Form. We use Lemma 31 from Ellul et al. (2005): 4 ˜Ωignores all the poly(K) multiplicative factors. Lemma 2 Let S be the start symbol of the CFG. Then for all w ∈L, there exists a symbol A with S =⇒∗αAβ =⇒∗w (6) such that if A yields y in w (i.e. w = αyβ), 1 3|w| ≤ |y| ≤2 3|w|. In other words, for any member of the language, we can find a symbol in the derivation responsible for between 1/3 and 2/3 of the final yield. Let PK be all sequences of permutations of the K variables and thus PK ⊂L. Then by Lemma 2, for every permutation π ∈PK we can find yield yπ that is yielded by a single symbol such that 1 3K ≤ |yπ| ≤2 3K. Now we consider two permutations π1 and π2. If yπ1 and yπ2 are yielded by the same symbol, then they must have the same length (this is the part where the proof is slightly different from Ellul et al. (2005)): suppose the contrary, w.l.o.g., let |yπ1| > |yπ2|. By the definition of a context free grammar, we can replace the sub-string yπ2 in π2 by yπ1 to create a new string y′ π2 which is still a member of L. We have |y′ π2| = K−|yπ2|+|yπ1| > K by assumption. However, there are in total K variables; by the pigeonhole principle there must be a variable that is declared twice, and hence y′ π2 /∈L and we obtain a contradiction. Then all the assumption needed by Theorem 30 in Ellul et al. (2005) hold and L has description complexity ˜Ω(1.89K) in CNF and hence L has description complexity ˜Ω(1.89K/2) = ˜Ω(1.37K). □ A.3 Hardness of Satisfying SymTable We show that combining code pieces from each line under the SymTable constraint is NP-Hard in general. We first remind the readers of the set packing problem: Definition 3 Assume the universe to be V, and suppose we are given a family of subsets S from the power set of V, i.e. P(V) = {S | S ⊆V} and S ⊆P(V). We want to determine whether we can find a packing K ⊆S for which all sets in K are pairwise disjoint and with size |K| ≥L for some fixed L > 0. This problem is called the set packing problem, and is known to be NP-complete. Following the notation in section A.2, for each line l ∈[L], we construct the C = |S| code piece candidates ylS for S ∈S as ylS = concatv∈Sv. (7) 2294 Test Against Unseen Problems, Syntactic Method, Width B=1 B=10 B=102 H, W=10 27.4% 35.3% 42.0% H, W=25 27.5% 35.4% 42.1% H, W=50 27.5% 35.4% 42.1% R, W=200 27.1% 34.7% 41.0% Test Against Unseen Problems, SymTable Method, Width B=1 B=10 B=102 H, W=10 30.3% 38.1% 43.1% H, W=25 30.9% 39.2% 45.7% H, W=50 31.0% 39.2% 45.9% R, W=200 30.7% 38.9% 45.4% Table 4: Comparison of different beam size with Syntactic and SymTable constraint when tested against unseen problems. H/R refers to hierarchical/regular beam search and W is the beam width. This table is structured similarly as 2 . 
Length L H 25 H 50 R 50 R 200 (0, 10] 40.7% 41.5% 39.4% 41.5% (10, 20] 60.9% 59.8% 54.3% 61.3% (20, 30] 62.2% 61.3% 54.2% 61.3% (30, 40] 66.0% 66.1% 56.8% 66.1% (40, ∞) 69.0% 68.7% 57.9% 68.7% Table 5: Fraction of divergence in the first half of the program, grouped by program length L. In the column headers, H/R represents Hierarchical/Regular beam search under SymTable constraint, and the number represents beam width W. We easily see that there is a set packing of size L if and only if there is a valid code piece combination under SymTable constraint (declarations need to be disjoint for each line). Hence we finish our reduction proof. □ A.4 Beam Search on Unseen Problems Table 4 contains similar information as in Table 2, except that the results are obtained on testing with unseen problems. The exact same conclusion holds: for regular beam search, small beam size hurts performance, but hierarchical beam search can solve this problem. A.5 Variation under SymTable Constraints Table 5 contains similar information as Table 3, but for SymTable constraints. The same trend holds: regular beam search with small beam size have fewer variations in the first half of the program. 2295 Symbol Production Rule program stmt program function program stmt for stmt if stmt while stmt dowhile stmt terminal stmt ; X∗ X∗X X ⟨EMPTY ⟩ function return type function name ( args) {start stmt∗}end return type function name ( type∗); args ⟨EMPTY ⟩ arg , args arg type arg name for stmt forstart terminal parentheses terminal stmtend; forstart terminal parentheses {stmt∗}end while stmt whilestart terminal parentheses terminal stmtend; whilestart terminal parentheses {stmt∗}end dowhile stmt dostart {stmt∗} while terminal parenthesesend; dostart terminal stmt while terminal parenthesesend; if stmt single if stmt elif stmt∗else stmt single if stmt elif stmt∗ single if stmt ifstart terminal parentheses terminal stmtend; ifstart terminal parentheses {stmt∗}end elif stmt elifstart terminal parentheses terminal stmtend; elifstart terminal parentheses {stmt∗}end else stmt elsestart terminal stmtend; elsestart {stmt∗}end Table 6: The full primary expression grammar we are using. Each line is a production rule. X is a generic symbol. Terminal Implementation terminal parentheses a string that has matching parentheses and starts with parentheses terminal stmt a string that does not contain “;”, “for”, “if”, “else”, “while”, “do” for, if, else, while, do reserved key words function name, arg name function name and function argument name return type, type type in C++ Table 7: The definition of the terminals appearing in Table 6
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2296–2308 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2296 Can We Predict New Facts with Open Knowledge Graph Embeddings? A Benchmark for Open Link Prediction Samuel Broscheit, Kiril Gashteovski, Yanjie Wang, Rainer Gemulla Data and Web Science Group University of Mannheim, Germany [email protected], {k.gashteovski,ywang,rgemulla}@uni-mannheim.de, Abstract Open Information Extraction systems extract (“subject text”, “relation text”, “object text”) triples from raw text. Some triples are textual versions of facts, i.e., non-canonicalized mentions of entities and relations. In this paper, we investigate whether it is possible to infer new facts directly from the open knowledge graph without any canonicalization or any supervision from curated knowledge. For this purpose, we propose the open link prediction task, i.e., predicting test facts by completing (“subject text”, “relation text”, ?) questions. An evaluation in such a setup raises the question if a correct prediction is actually a new fact that was induced by reasoning over the open knowledge graph or if it can be trivially explained. For example, facts can appear in different paraphrased textual variants, which can lead to test leakage. To this end, we propose an evaluation protocol and a methodology for creating the open link prediction benchmark OLPBENCH. We performed experiments with a prototypical knowledge graph embedding model for open link prediction. While the task is very challenging, our results suggests that it is possible to predict genuinely new facts, which can not be trivially explained. 1 Introduction A knowledge graph (KG) (Hayes-Roth, 1983) is a set of (subject, relation, object)-triples, where the subject and object correspond to vertices, and relations to labeled edges. In curated KGs, each triple is fully disambiguated against a fixed vocabulary of entities1 and relations. An application for KGs, for example, is the problem of drug discovery based on bio-medical knowledge (Mohamed et al., 2019). The construction of a curated bio-medical KG, which is required for 1For brevity, “entities” denotes both entities (e.g. Prince) and concepts (e.g. musician) throughout the paper. “NBC Television” “NBC” “NBC-TV” NBC NewYorkCity Knowledge Graph Open Knowledge Graph “NYC” “New York City” Figure 1: Entities and relations in curated knowledge graphs vs. open knowledge graphs. such an approach, is challenging and constrained by the available amount of human effort and domain expertise. Many tools that could assist humans in KG construction (e.g., an entity linker) need a KG to begin with. Moreover, current methods for KG construction often rely on the rich structure of Wikipedia, such as links and infoboxes, which are not available for every domain. Therefore, we ask if it is possible to make predictions about, for example, new drug applications from raw text without the intermediate step of KG construction. Open information extraction systems (OIE) (Etzioni et al., 2011) automatically extract (“subject text”, “relation text”, “object text”)-triples from unstructured data such as text. We can view OIE data as an open knowledge graph (OKG) (Gal´arraga et al., 2014), in which vertices correspond to mentions of entities and edges to open relations (see Fig. 1). 
Our overarching interest is whether and how we can reason over an OKG without any canonicalization and without any supervision on its latent factual knowledge. The focus of this study are the challenges of benchmarking the inference abilities of models in such a setup. A common task that requires reasoning over a 2297 Open Link Prediction Link Prediction NBC ? NewYorkCity Question entity b) a) ? “NYC” “New York City” “NBC-TV” Question mention Answer entity Answer mentions Figure 2: Comparing evaluation of link prediction and open link prediction. KG is link prediction (LP). The goal of LP is to predict missing facts in a KG. In general, LP is defined as answering questions such as (NBC, headquarterIn, ?) or (?, headquarterIn, NewYorkCity); see Fig. 2a. In OKGs, we define open link prediction (OLP) as follows: Given an OKG and a question consisting of an entity mention and an open relation, predict mentions as answers. A predicted mention is correct if it is a mention of the correct answer entity. For example, given the question (“NBC-TV”, “has office in”, ?), correct answers include “NYC” and “New York”; see Fig. 2b). To evaluate LP performance, the LP model is trained on known facts and evaluated to predict unknown facts, i.e., facts not seen during training. A simple but problematic way to transfer this approach to OKGs is to sample a set of evaluation triples from the OKG and to use the remaining part of the OKG for training. To see why this approach is problematic, consider the test triple (“NBC-TV”, “has office in”, “New York”) and suppose that the triple (“NBC”, “has headquarter in”, “NYC”) is also part of the OKG. The latter triple essentially leaks the test fact. If we do not remove such facts from the training data, a successful models only paraphrases known facts but does not perform reasoning, i.e., does not predict genuinely new facts. Furthermore, we also want to quantify if there are other trivial explanations for the prediction of an evaluation fact. For example, how much can be predicted with simple popularity statistics, i.e., only the mention, e.g. (“NBC-TV”, ?), or only the relation, e.g. (“has office in”, ?). Such non-relational information also does not require reasoning over the graph. To experimentally explore whether it is possible to predict new facts, we focus on knowledge graph embedding (KGE) models (Nickel et al., 2016), which have been applied successfully to LP in KGs. Such models can be easily extended to handle the surface forms of mentions and open relations. Our contributions are as follows: We propose the OLP task, an OLP evaluation protocol, and a method to create an OLP benchmark dataset. Using the latter method, we created a large OLP benchmark called OLPBENCH, which was derived from the state-of-the-art OIE corpus OPIEC (Gashteovski et al., 2019). OLPBENCH contains 30M open triples, 1M distinct open relations and 2.5M distinct mentions of approximately 800K entities. We investigate the effect of paraphrasing and nonrelational information on the performance of a prototypical KGE model for OLP. We also investigate the influence of entity knowledge during model selection with different types of validation data. For training KGE models on such large datasets, we describe an efficient training method. In our experiments, we found the OLP task and OLPBENCH to be very challenging. Still, the KGE model we considered was able to predict genuinely new facts. 
We also show that paraphrasing and non-relational information can indeed dilute performance evaluation, but can be remedied by appropriate dataset construction and experimental settings. 2 Open Knowledge Graphs OKGs can be constructed in a fully automatic way. They are open in that they do not require a vocabulary of entities and relations. For this reason, they can capture more information than curated KGs. For example, different entity mentions can refer to different versions of an entity at different points of time, e.g., “Senator Barack Obama” and “President Barack Obama”. Similarly, relations may be of varying specificity: headquarterIn may be expressed directly by open relations such as “be based in” or “operate from” but may also be implied by “relocated their offices to”. In contrast to KGs, OKGs contain rich conceptual knowledge. For example, the triple (“a class action lawsuit”, “is brought by”, “shareholders”) does not directly encode entity knowledge, although it does provide information about entities that link to “a class action lawsuit” or “shareholders”. OKGs tend to be noisier and the factual knowledge is less certain than in a KG, however. They 2298 “NBC-TV” “Marseille” “Los Angeles” “has office in” ? “New York” “NYC” “John” Model ✓ ✓ ✓ 4 1 2 3 Correct LosAngeles NewYorkCity NewYorkCity LosAngeles Identified Answer Entities Ask model to predict a ranked list of mentions as answer for question NewYorkCity Test question “NYC” “New York City” “Los Angeles” Filtered Rank 5 1 2 3 4 Rank highest correct answer in filtered rank counts ✓ Filtered Evaluate one of the correct answer entities ? Filter other correct answer entities : : Map answer entities to mentions to identify correct answers Figure 3: Mention-ranking protocol: Example for computing the filtered rank for a test question. can not directly replace KGs. OKGs have mostly been used as a weak augmentation to KGs, e.g., to infer new unseen entities or to aid link prediction (see App. A for a comprehensive discussion of related work). Much of prior work that solely leverages OKGs without a reference KG—and therein is closest to our work—focused on canonicalization and left inference as a follow-up step (Cohen et al., 2000, inter alia). In contrast, we propose to evaluate inference in OKGs with OLP directly. 3 Open Link Prediction The open link prediction task is based on the link prediction task for KGs (Nickel et al., 2016), which we describe first. Let E be a set of entities, R be a set of relations, and T ⊆E × R × E be a knowledge graph. Consider questions of the form qh = (?, k, j) or qt = (i, k, ?), where i, j ∈E is a head and tail entity, respectively, and k ∈R is a relation. The link prediction problem is to provide answers that are correct but not yet present in T . In OKGs, only mentions of entities and open relations are observed. We model each entity mention and each open relation as a non-empty sequence of tokens from some vocabulary V (e.g., a set of words). Denote by M = V+ the set of all such sequences and observe that M is unbounded. An open knowledge graph T ⊂M×M×M consists of triples of form (i, k, j), where i, j ∈M are head and tail entity mentions, resp., and k ∈M is an open relation. Note that we overload notation for readability: i, j, and k refer to entity mentions and open relations in OKGs, but to disambiguated entities and relations in KGs. The intended meaning will always be clear from the context. We denote by M(E) and M(R) the sets of entity and relations present in T , respectively. 
The open link prediction task is to predict new and correct answers to questions (i, k, ?) or (?, k, j). Answers are taken from M(E), whereas questions may refer to arbitrary mentions of entities and open relations from M. For example, for the question (“NBC-TV”, “has office in”, ?), we expect an answer from the set of mentions {“New York”, “NYC”, . .. } of the entity NewYorkCity. Informally, an answer (i, k, j) is correct if there is a correct triple (e1, r, e2), where e1 and e2 are entities and r is a relation, such that i,j, and k are mentions of e1, e2, and r, respectively. 3.1 Evaluation protocol To describe our proposed evaluation protocol, we first revisit the most commonly used methodology to evaluate link prediction methods for KGs, i.e., the entity-ranking protocol (Bordes et al., 2013). Then, we discuss its adaptation to OLP, which we call the mention-ranking protocol (see Fig. 3). KGs and entity ranking. For each triple z = (i, k, j) in the evaluation data, a link prediction model ranks the answers for two questions, qt(z) = (i, k, ?) and qh(z) = (?, k, j). The model is evaluated based on the ranks of the correct entities j and i; this setting is called raw. When true answers for qt(z) and qh(z) other than j and i are filtered from the rankings, then the setting is called filtered. OKGs and mention ranking. In OLP, the model predicts a ranked list of mentions. But questions might have multiple equivalent true answers, 2299 i.e., answers that refer to the same entity but use different mentions. Our evaluation metrics are based on the highest rank of a correct answer mention in the ranking. For the filtered setting, the mentions of known answer entities other than the evaluated entity are filtered from the ranking. This mentionranking protocol thus uses knowledge of alternative mentions of the entity in the evaluation triple to obtain a suitable ranking. The mention-ranking protocol therefore requires (i) ground truth annotations for the entity mentions in the head and tail of the evaluation data, and (ii) a comprehensive set of mentions for these entities. 4 Creating the Open Link Prediction Benchmark OLPBENCH An OLP benchmark should enable us to evaluate a model’s capability to predict genuinely new facts, i.e., facts can not be trivially derived. Due to the nature of OKGs, paraphrasing of facts may leak facts from validation and test data into training, making the prediction of such evaluation facts trivial. Nevertheless, the creation of training and validation data should require as little human effort as possible so that the methodology can be readily applied to new domains. Our mention-ranking protocol uses knowledge about entities for disambiguation (of the evaluation data, not the training data), however, which requires human effort to create. We investigate experimentally to what extent this entity knowledge is necessary for model selection and, in turn, how much manual effort is required to create a suitable validation dataset. In the following, we describe the source dataset of OLPBENCH and discuss how we addressed the points above to create evaluation and training data. 4.1 Source Dataset OLPBENCH is based on OPIEC (Gashteovski et al., 2019), a recently published dataset of OIE triples that were extracted from the text of English Wikipedia with the state-of-the-art OIE system MinIE (Gashteovski et al., 2017). We used a subset of 30M distinct triples, comprised of 2.5M entity mentions and 1M open relations. 
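For concreteness, the filtered rank of the mention-ranking protocol described in Section 3.1 can be computed as in the following sketch (not the benchmark code). Its inputs are assumptions about the data layout: a ranked list of predicted mentions, a dictionary from entities to their known mentions, the evaluated answer entity, and the set of other true answer entities for the question.

```python
# Minimal sketch of the filtered mention rank in the mention-ranking protocol.
def filtered_mention_rank(ranked_mentions, gold_entity, other_answers, entity_mentions):
    gold = set(entity_mentions[gold_entity])
    filtered_out = set().union(*(entity_mentions[e] for e in other_answers)) - gold
    rank = 0
    for mention in ranked_mentions:
        if mention in filtered_out:
            continue        # other true answers do not count against the model
        rank += 1
        if mention in gold:
            return rank     # highest-ranked mention of the correct answer entity
    return None             # the correct entity was not predicted at all
```

Ranking metrics such as MRR or Hits@k are then computed from these filtered ranks, analogously to the entity-ranking protocol.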
In 1.25M of these triples, the subject and the object contained a Wikipedia link. Fig. 4 shows how a Wikipedia link is used to disambiguate a triple’s subject and object mentions. Tab. 1 shows an excerpt from the unlinked and linked triples. For the evaluation protocol, we collected a dictionary, where each entity Was the second ship of the United States Navy to be named for William Conway, who distinguished himself during the Civil War. en.wikipedia.org/wiki/William_Conway_(U.S._Navy) en.wikipedia.org/wiki/American_Civil_War Figure 4: Example for a triple extracted from Wikipedia. With a Wikipedia hyperlink, a mention is disambiguated to its entity. Inversely this yields a mapping from an entity to all its mentions. is mapped to all possible mentions. See App. B for more details about the dataset creation. 4.2 Evaluation Data From the source dataset, we created validation and test data with the following requirements: Data quality. The evaluation data should be challenging, and noise should be limited as much as possible. We chose a pragmatic and easy heuristic: we did not consider short relations with less than three tokens as candidates for sampling evaluation data. This decision was based on the following observations: (i) Due to the OPIEC’s extractions, short relations—e.g. (“kerry s. walters”, “is”, “professor emeritus”)—are often subsumed by longer relations—e.g. (“kerry s. walters”, “is professor emeritus of”, “philosophy”)—, which would always lead to leakage from the longer relation to the shorter relation. (ii) Longer relations are less likely to be easily captured by simple patterns that are already successfully used by KG construction methods, e.g. (“elizabeth of hungary”, “is the last member of”, “the house of ´arp´ad”). We conjecture that long relations are more interesting for evaluation to measure progress in reasoning with OKG data. (iii) The automatically extracted entity annotations were slightly noisier for short relations; e.g., (“marc anthony”, “is” “singer”) had the object entity annotation SinFrenos. Human effort for data creation. The mentionranking protocol uses knowledge about entities for disambiguation. We want to experimentally quantify the influence of this entity knowledge on model selection, i.e., whether entity knowledge is necessary to find a good model. If so, human expertise is necessary to create the validation data. While our goal is to require almost no human domain expertise to learn a good model, the size of validation data is much smaller than the size of the training data. Therefore, this effort—if helpful—may be 2300 subject relation object subject mentions object mentions Unlinked conway has plot henry s. conway is field marshal conway tearle has members highway 319 begins outside conway bloomsbury bought conway publishing mike conway is teammate of toyota w. conway gordon served as adc to gen. p. maitland w. conway gordon entered the service Linked willam conway distinguished himself civil war willam conway civil war during conway american civil war terry venables is manager of fc barcelona terry venables fc barcelona f.c. barcelona futbol club barcelona cf barcelona barcelona background music is composed by hikaru nanase the background music masumi ito background music hikaru nanase background score Table 1: Example from the unlinked and linked data in OLPBENCH. For the unlinked data, we show the first of 3443 triples from the unlinked data containing the token ”conway“. 
For the linked data, we show the triples and also the alternative mentions for their entities. The first linked triple is about William Conway (U.S. Navy). feasible. To investigate this, we perform model selection performed with three different validation datasets that require increasingly more human effort to create: VALID-ALL (no effort), VALIDMENTION (some effort) and VALID-LINKED (most amount of human effort). TEST and VALID-LINKED data. Sample 10K triples with relations that have at least three tokens from the 1.25M linked triples. In these triples, the subject and object mentions have an annotation for their entity, which allows the mention-ranking protocol to identify alternative mentions of the respective entities. VALID-MENTION data. Proceed as in VALIDLINKED but discard the entity links. During validation, no access to alternative mentions is possible so that the mention-ranking protocol cannot be used. Nevertheless, the data has the same distribution as the test data. Such validation data may be generated automatically using a named entity recognizer, if one is available for the target domain. VALID-ALL data. Sample 10K triples with relations that have at least three tokens from the entire 30M unlinked and linked triples. This yields mostly triples from the unlinked portion. These triples may also include common nouns such as “a nice house” or “the country”. Entity links are discarded, i.e., the mention-ranking protocol cannot be used for validation. 4.3 Training Data To evaluate LP models for KGs, evaluation facts are generated by sampling from the KG. Given an evaluation triple (i, k, j), the simplest action to avoid leakage from the training data is to remove only this evaluation triple from training. For KGs, it was observed this simple approach is not satisfactory in that evaluation answers may still leak and thus can be trivially inferred (Toutanova et al., 2015; Dettmers et al., 2018). For example, an evaluation triple (a, siblingOf, b) can be trivially answered with the training triple (b, siblingOf, a). In OKGs, paraphrases of relations pose additional sources of leakage. For example, the relations “is in” and “located in” may contain many of the same entity pairs. For evaluation triple (i, k, j), such leakage can be prevented by removing any other relation between i and j from the training data. However, individual tokens in the arguments or relations may also cause leakage. For example, information about test triple (“NBC-TV”, “has office in”, “NYC”) is leaked by triples such as (“NBC Television”, “has NYC offices in”, “Rockefeller Plaza”) even though it has different arguments. Fig. 5 visualizes this example. We use three levels of leakage removal from training: SIMPLE, BASIC, and THOROUGH. To match evaluation triple (i, k, j) with training triples, we ignored word order and stopwords. 2301 "NBC-TV" “New York’s NBC” RockefellerCenter “NYC” "New York City" “Rockefeller Plaza” “Comcast” NBC NewYorkCity Link Prediction Open Link Prediction "NBC Television" Figure 5: Examples of test fact leakage into training data; comparing link prediction and open link prediction. The example test triples are (NBC, headquarterIn, NewYorkCity) and (“NBC-TV”, “is in”, “NYC”), respectively. 
Link Prediction: (1) the sampled test triple (2) any link between the test triple’s arguments could leak the test fact; Open Link Prediction: (1) the sampled open test triple, (2) consider any link between any mention of the open triple’s arguments, (3) consider test fact leakage from the tokens in the open triple’s arguments or relation. Underlined tokens are the source of leakage. SIMPLE removal. Only the triple (i, k, j) is removed. Triples with alternative mentions for i or j are kept. BASIC removal. (i, k, j) as well as (j, k, i) are removed from the training data. Triples with with alternative mentions of i and j are also removed. THOROUGH removal. Additionally to BASIC removal, we also remove triples from training matched by the following patterns. The patterns are explained with the example (“J. Smith”, “is defender of”, “Liverpool”): (a) (i, ∗, j) and (j, ∗, i). E.g., matches (“J. Smith”, “is player of”, “Liverpool”). (b) (i, k + j, ∗) and (∗, k + i, j).2 E.g., matches (“J. Smith”, “is Liverpool’s defender on”, “Saturday”). (c) (i + k + j, ∗, ∗) and (∗, ∗, i + k + j). E.g., matches (“Liverpool defender J. Smith”, “kicked”, “the ball”). For OLPBENCH, THOROUGH removed 196,717 more triples from the OKG than BASIC. Note that this yields three different training data sets. 2Other permutations of this pattern did not occur in our data. 5 Open Knowledge Graph Embeddings KG embedding (KGE) models have been successfully applied for LP in KGs, and they can be easily extended to handle surface forms, i.e., mentions and open relations. We briefly describe KGE models and their extension. Knowledge Graph Embedding (KGE) model. A KGE model (Nickel et al., 2016) associates an embedding with each entity and each relation. The embeddings are dense vector representations that are learned with an LP objective. They are used to compute a KGE model-specific score s(i, k, j) for a triple (i, k, j); the goal is to predict high scores for true triples and low scores for wrong triples. KGE model with composition. For our experiments, we considered composition functions to create entity and relation representations from the tokens of the surface form. Such an approach has been used, for example, by Toutanova et al. (2015) to produce open relation embedding via a CNN. A model that reads the tokens of mentions and open relations can, in principle, handle any mention and open relation as long as the tokens have been observed during training. We use a general model architecture that combines a relational model and a composition func2302 ( “Jamie” “Carragher”, “is” “defender” “of”, “Liverpool” ) mention/relation tokens token embeddings mention/relation embeddings score for triple Figure 6: KGE model with composition. The tokens in triple (i, k, j) are first embedded individually and then composed into mention or relation embeddings. Finally, a KGE model RM is used to compute the triple’s score. tion, see Fig. 6. Formally, let V(E)+ be the set of non-empty token sequences over the token vocabulary V(E) of entity mentions. We denote by d, o ∈N+ the size of the embeddings of entities and relations. We first embed each entity mention into a continuous vector space via an entity mention embedding function f : V(E)+ →Rd. Similarly, each open relation is embedded into a continuous vector space via a relation embedding function g : V(R)+ →Ro. The embeddings are then fed into a relational scoring function RM : Rd × Ro × Rd →R. 
Given a triple (i, k, j), where i, j ∈V(E)+ and k ∈V(R)+, our model computes the final score as s(i, k, j) = RM( f(i), g(k), f(j) ). 6 Experiments In our experimental study, we investigated whether a simple prototypical OLP model can predict genuinely new facts or if many successful predictions can be trivially explained by leakage or nonrelational information. Our goal was to study the effectiveness and necessity of the mention-ranking protocol and leakage removal, and how much human effort is necessary to create suitable validation data. Finally, we inspected data and model quality. We first describe the models and their training, then the performance metrics, and finally the evaluation. In our experimental results, model performance dropped by ≈25% with THOROUGH leakage removal so that leakage due to paraphrasing is indeed a concern. We also implemented two diagnostic models that use non-relational information (only parts of a triple) to predict answers. These models reached ≈20–25% of the prototypical model’s performance, which indicates that relational modelling is important. In our quality and error analysis, we found that at least 74% of the prediction errors were not due to noisy data. A majority of incorrectly predicted entity mentions have a type similar to the one of the true entity. 6.1 Models and Training Prototypical model. We use COMPLEX (Trouillon et al., 2016) as relational model, which is an efficient bilinear model and has shown state-of-theart results. For the composition functions f and g, we used an LSTM (Hochreiter and Schmidhuber, 1997) with one layer and the hidden size equivalent to the token embedding size. We call this model COMPLEX-LSTM.3 Diagnostic models. To expose potential biases in the data, we employ two diagnostic models to discover how many questions can simply be answered without looking at the whole question, i.e., by exploiting non-relational information. Given question (i, k, ?), the model PREDICT-WITH-REL considers (r, ?) for scoring. E.g., for question (“Jamie Carragher”, “is defender of”, ?), we actually ask (“is defender of”, ?). This is likely to work reasonably for relations that are specific about the potential answer entities; e.g., predicting popular football clubs for (“is defender of”, ?). The model uses scoring functions st : Ro × Rd →R and sh : Rd × Ro →R for questions (i, k, ?) and (?, k, j) respectively: st(k, e) = g(k)T f(j), sh(i, k) = f(i)T g(k) Likewise, the PREDICT-WITH-ENT model ignores the relation by computing a score for pair (i, j). We use se(i, j) = f(i)T f(j) Training. See App. C for details about the hyperparameters, training and model selection. Performance metrics. For evaluating a model’s predictions, we use the ranking metrics mean reciprocal rank (MRR) and HITS@k. MRR is sensitive to the top-3 ranks with rapidly decaying reward, 3In a preliminary study, we investigated COMPLEX, ANALOGY, DISTMULT and RESCAL as relational models. COMPLEX was the most efficient and best performing model. For composition functions, we also investigated unigram pooling, bi-gram pooling with CNNs, self-attention and LSTMs. Here LSTMs worked well consistently. See App. E for additional results. 
2303 Leakage Model Removal Model Selection MRR HITS@1 HITS@10 HITS@50 PRED-WITH-ENT LINKED 0.0 0.0 0.0 0.0 SIMPLE PRED-WITH-REL LINKED 1.5 0.8 2.6 5.4 COMPLEX-LSTM LINKED 6.5 3.8 11.6 20.7 PRED-WITH-ENT LINKED 0.0 0.0 0.0 0.0 BASIC PRED-WITH-REL LINKED 1.0 0.5 1.6 3.6 COMPLEX-LSTM LINKED 4.8 2.6 8.9 17.6 PRED-WITH-ENT LINKED 0.0 0.0 0.0 0.0 THOROUGH PRED-WITH-REL LINKED 1.0 0.6 1.5 3.3 COMPLEX-LSTM LINKED 3.9 2.1 7.0 14.6 COMPLEX-LSTM ALL 2.7 1.5 4.7 9.1 COMPLEX-LSTM MENTION 3.8 2.1 7.1 14.1 Table 2: Test results. Comparing COMPLEX-LSTM, PREDICT-WITH-ENT and PREDICT-WITH-REL with all removal settings. Model selection on VALID-LINKED for all settings except in THOROUGH, where we also show VALID-MENTION and VALID-LINKED. Results in percent. while HITS@k equally rewards correct answers in the top-k ranks. See App. D for a more formal definition of MRR and HITS@k. The ranks are based on mention ranking for VALID-LINKED and TEST and on entity-ranking (treating distinct mentions as distinct entities) for VALID-ALL and VALID-MENTION. 6.2 Results Influence of leakage. In Tab. 2, we observed that BASIC leakage removal of evaluation data lowers the performance of all models considerably in contrast to the SIMPLE leakage removal. With the THOROUGH leakage removal, performace drops further; e.g., HITS@50 performance dropped by ≈25% from SIMPLE. This confirms our conjecture that leakage can trivially explain some successful predictions. Most predictions, however, cannot be explained by paraphrasing leakage. Influence of non-relational information. In Tab. 2, we see that PREDICT-WITH-ENT, which essentially learns popularity statistics between entity mentions, has no success on the evaluation data. However, PREDICT-WITH-REL reaches ≈ 20−25% of HITS@50 performance of COMPLEXLSTM by simply predicting popular mentions for a relation, even in the THOROUGH setting. Effectiveness of mention-ranking. Tab. 3 shows validation results for the three types of validation data for COMPLEX-LSTM and THOROUGH removal. The evaluation protocol has access to alternative mentions only in VALID-LINKED, but not in VALID-ALL and VALID-MENTION. Clearly, using VALID-LINKED results in higher metrics when models associate different mentions to an answer entity. Influence of model selection. The THOROUGH block of Tab. 2 shows the results for model selection based on VALID-ALL, VALID-MENTION or VALID-LINKED. In VALID-ALL, many triples contain common nouns instead of entity mentions, while in VALID-MENTION or VALID-LINKED triples have entity mentions in both arguments. Model selection based on VALID-ALL clearly picked a weaker model than model selection based on VALID-LINKED, i.e., it led to a drop of ≈35% of HITS@50 performance. However, there is no improvement when we pick a model based on VALIDLINKED versus VALID-MENTION. Thus, computing the MRR using alternative entity mentions did not improve model selection, even though—as Tab. 3 shows—the mention-ranking protocol gives more credit when alternative mentions are ranked higher. Our results suggest that it may suffice to use validation data that contains entity mentions but avoid costly entity disambiguation. Overall performance. In Tab. 2 we observed that performance numbers seem generally low. For comparison, the HITS@10 of COMPLEX on FB15k-237—a standard evaluation dataset for LP in curated KGs—lies between 45% and 55%. 
We conjecture that this drop may be due to: (i) The 2304 Leakage Model Removal Model Selection MRR HITS@1 HITS@10 HITS@50 COMPLEX-LSTM ALL 2.9 1.8 5.0 8.9 THOROUGH COMPLEX-LSTM MENTION 3.6 2.0 6.5 13.0 COMPLEX-LSTM LINKED 4.2 2.3 7.5 14.9 Table 3: Validation results. Comparing the performances of COMPLEX-LSTM for different validation datasets. Types of prediction errors correct sense / wrong entity 68.0 % wrong sense 13.5 % noise 18.5 % Types of data errors triple has error 12.0 % mention is generic 14.0 % Table 4: Error assessment of 100 sampled HITS@50 (filtered) prediction errors from VALID-LINKED. level of uncertainty and noise in the training data, i.e., uninformative or even misleading triples in OKGs (Gashteovski et al., 2019). (ii) Our evaluation data is mostly from the more challenging long tail. (iii) OKGs might be fragmented, thus inhibiting information flow. Also, note that the removal of evaluation data from training removes evidence for the evaluated long-tail entities. (iv) Naturally, in LP, we do not know all the true answers to questions. Thus, the filtered rank might still contain many true predictions. In OLP, we expect this effect to be even stronger, i.e., the filtered ranking metrics are lower than in the KG setting. Still, like in KG evaluation, with a large enough test set, the metrics allow for model comparisons. Model and data errors. We inspected predictions for VALID-LINKED from COMPLEX-LSTM trained on THOROUGH. We sampled 100 prediction errors, i.e., triples for which no correct predicted mention appeared in the filtered top-50 rank. We classified prediction errors by inspecting the top-3 ranks and judged their consistency. We classified triple quality judging the whole triple. We counted an error as correct sense / wrong entity, when the top-ranked mentions are semantically sensible, i.e. for (“Irving Azoff”, “was head of”, ?) the correct answer would be “MCA Records”, but the model predicted other record companies. We counted an error as wrong sense when—for the same example—the model mostly consistently predicted other companies or music bands, but not other record companies. If the predictions are inconsistent, we counted the error as noise. An additional quality assessment is the number of wrong triples caused by extraction errors in OPIEC, e.g., (“Finland”, “is the western part of”, “the balkan peninsula”), (“William Macaskill”, “is vice-president of”, “giving”), or errors in alternative mentions. We also looked for generic mentions in the evaluation data. Such mentions contain mostly conceptual knowledge like in (“computer science”, “had backgrounds in”, “mathematics”). Other generic triples, like (“Patrick E.”, “joined the team in”, “the season”), have conceptual meaning, but miss context to disambiguate “the season”. The results in Tab. 4 suggest that the low performance in the experiments is not due to noisy evaluation data. 74% of the examined prediction errors on VALID-LINKED contained correct, nongeneric facts. The shown model errors raise the question of whether there is enough evidence in the data to make better predictions. 7 Conclusion We proposed the OLP task and a method to create an OLP benchmark. We created the large OLP benchmark OLPBENCH, which will be made publicly available4. We investigated the effect of leakage of evaluation facts, non-relational information, and entity-knowledge during model selection using a prototypical open link prediction model. Our results indicate that most predicted true facts are genuinely new. 
Acknowledgments The first author would like to gratefully thank the NVIDIA Corporation for the donation of a TITAN Xp GPU that was used in this research. 4https://www.uni-mannheim.de/dws/ research/resources/olpbench/ 2305 References Antoine Bordes, Nicolas Usunier, Alberto Garc´ıaDur´an, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multirelational data. In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States, pages 2787– 2795. William W. Cohen, Henry Kautz, and David McAllester. 2000. Hardening soft information sources. In Proceedings of the Sixth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’00, pages 255–259, New York, NY, USA. ACM. Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. 2018. Convolutional 2d knowledge graph embeddings. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 1811– 1818. John C. Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121–2159. Oren Etzioni, Anthony Fader, Janara Christensen, Stephen Soderland, and Mausam. 2011. Open information extraction: The second generation. In IJCAI 2011, Proceedings of the 22nd International Joint Conference on Artificial Intelligence, Barcelona, Catalonia, Spain, July 16-22, 2011, pages 3–10. Luis Gal´arraga, Geremy Heitz, Kevin Murphy, and Fabian M. Suchanek. 2014. Canonicalizing open knowledge bases. In Proceedings of the 23rd ACM International Conference on Conference on Information and Knowledge Management, CIKM 2014, Shanghai, China, November 3-7, 2014, pages 1679– 1688. Luis Gal´arraga, Geremy Heitz, Kevin Murphy, and Fabian M. Suchanek. 2014. Canonicalizing open knowledge bases. In Proceedings of the 23rd ACM International Conference on Conference on Information and Knowledge Management, CIKM ’14, pages 1679–1688, New York, NY, USA. ACM. Kiril Gashteovski, Rainer Gemulla, and Luciano Del Corro. 2017. Minie: Minimizing facts in open information extraction. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, pages 2630–2640. Kiril Gashteovski, Sebastian Wanner, Sven Hertling, Samuel Broscheit, and Rainer Gemulla. 2019. OPIEC: an open information extraction corpus. CoRR, abs/1904.12324. Xavier Glorot and Yoshua Bengio. 2010. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, AISTATS 2010, Chia Laguna Resort, Sardinia, Italy, May 13-15, 2010, pages 249–256. Takuo Hamaguchi, Hidekazu Oiwa, Masashi Shimbo, and Yuji Matsumoto. 2017. Knowledge transfer for out-of-knowledge-base entities : A graph neural network approach. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI 2017, Melbourne, Australia, August 19-25, 2017, pages 1802–1808. Frederick Hayes-Roth. 1983. Building expert systems, volume 1 of Advanced book program. AddisonWesley. 
Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Comput., 9(8):1735– 1780. Sameh K. Mohamed, Aayah Nounu, and V´ıt Nov´aˇcek. 2019. Drug target discovery using knowledge graph embeddings. In Proceedings of the 34th ACM/SIGAPP Symposium on Applied Computing, SAC ’19, page 11–18, New York, NY, USA. Association for Computing Machinery. Maximilian Nickel, Kevin Murphy, Volker Tresp, and Evgeniy Gabrilovich. 2016. A review of relational machine learning for knowledge graphs. Proceedings of the IEEE, 104(1):11–33. Fabio Petroni, Luciano Del Corro, and Rainer Gemulla. 2015. CORE: context-aware open relation extraction with factorization machines. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015, pages 1763–1773. Jay Pujara, Hui Miao, Lise Getoor, and William Cohen. 2013. Knowledge graph identification. In The Semantic Web – ISWC 2013, pages 542–557, Berlin, Heidelberg. Springer Berlin Heidelberg. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Sebastian Riedel, Limin Yao, Andrew McCallum, and Benjamin M. Marlin. 2013. Relation extraction with matrix factorization and universal schemas. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 74–84, Atlanta, Georgia. Association for Computational Linguistics. Baoxu Shi and Tim Weninger. 2018. Open-world knowledge graph completion. In Proceedings of the 2306 Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 1957–1964. Kristina Toutanova, Danqi Chen, Patrick Pantel, Hoifung Poon, Pallavi Choudhury, and Michael Gamon. 2015. Representing text for joint embedding of text and knowledge bases. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015, pages 1499–1509. Th´eo Trouillon, Johannes Welbl, Sebastian Riedel, ´Eric Gaussier, and Guillaume Bouchard. 2016. Complex embeddings for simple link prediction. In Proceedings of the 33nd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016, pages 2071–2080. Shikhar Vashishth, Prince Jain, and Partha Talukdar. 2018. Cesi: Canonicalizing open knowledge bases using embeddings and side information. In Proceedings of the 2018 World Wide Web Conference, WWW ’18, pages 1317–1327, Republic and Canton of Geneva, Switzerland. International World Wide Web Conferences Steering Committee. Patrick Verga, Arvind Neelakantan, and Andrew McCallum. 2017. Generalizing to unseen entities and entity pairs with row-less universal schema. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017, Valencia, Spain, April 3-7, 2017, Volume 1: Long Papers, pages 613–622. Johannes Welbl, Pontus Stenetorp, and Sebastian Riedel. 2018. Constructing datasets for multi-hop reading comprehension across documents. TACL, 6:287–302. Tien-Hsuan Wu, Zhiyong Wu, Ben Kao, and Pengcheng Yin. 2018. Towards practical open knowledge base canonicalization. 
In Proceedings of the 27th ACM International Conference on Information and Knowledge Management, CIKM ’18, pages 883–892, New York, NY, USA. ACM. Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2015. Embedding entities and relations for learning and inference in knowledge bases. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W. Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. Hotpotqa: A dataset for diverse, explainable multi-hop question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 2369–2380. A Related Work The following studies investigated KGs and OKGs in various ways, either by deriving KGs from OKGs or by using them jointly to improve inference in KGs. Unseen entities. Shi and Weninger (2018) introduced open-world knowledge base completion (OWKBC), which assumes a curated KG as basis. The goal is to obtain new triples with unseen entities and known relations from the KG. Shi and Weninger (2018) proposes a link prediction model that allows questions involving unseen entities. Their model leverages the KG, relevant text fragments, word embeddings as well as an entity resolution module. Other approaches use structural information from the KG itself. Hamaguchi et al. (2017) assigns an embedding to an unseen entity based on embeddings of its neighboring entities and relations, whereas Verga et al. (2017) encodes an unseen entity pair by averaging the embeddings of the relations that link to it. OpenIE-enhanced KGs. Universal schema models (Riedel et al., 2013) augment an existing KG with open relations between KG entities. Petroni et al. (2015) build upon Riedel et al.’s work by considering context information to improve the results further. Toutanova et al. (2015) embed open relations based on their tokens and dependency relations to augment the KG. In our work, we explore LP for OKGs, which differs in that only mentions are observed for both entities and relations. Neither a KG nor a vocabulary of entities is available during training and prediction. Canonicalizing open knowledge. Cohen et al. (2000); Pujara et al. (2013); Vashishth et al. (2018); Wu et al. (2018); Gal´arraga et al. (2014) are the closest in spirit to this study, as they also want to make OKGs accessible without using a reference knowledge base. Cohen et al. (2000) calls open information a soft database, while Pujara et al. (2013) calls it an extraction graph from which a latent KG has to be identified. Common to all those approaches is that their ultimate target is to create a symbolic database with disambiguated entities and distinct relations. Thus they canonicalize the entities and the relations. In contrast, we are not canonicalizing the OKG but reason directly on the OKG. Gal´arraga et al. (2014) directly evaluates the induction of entity clusters, while we evaluate this 2307 jointly in the context of LP. Reading comprehension QA and language modelling. Two recently published reading comprehension question answering datasets—QAngaroo (Welbl et al., 2018) and HotPotQA (Yang et al., 2018)—evaluate multi-hop reasoning over facts in a collection of paragraphs. 
In contrast to these approaches, OLP models reason over the whole graph, and the main goal is to investigate the learning of relational knowledge despite ambiguity and noise. We consider those two directions as complementary to each other. Also, in their task setup, they do not stipulate a concept of relations between entities, i.e., the relations are assumed to be a latent/inherent property of the text in which the entities occur. This is true as well for language models trained on raw text. It has been shown that such language models can answer questions in a zero-shot setting (Radford et al., 2019). The authors of the latter study inspected the training data to estimate the number of near duplicates to their test data and could show that their model seemed to be able to generalize, i.e., to reason about knowledge in the training data. TAC KBP Slot Filling. The TAC KBP Slot Filling challenge datasets provide a text corpus paired with canonicalized multi-hop questions. There are similarities to our work in terms of building knowledge from scratch and answering questions. The main difference is that our goal is to investigate the learning of knowledge without supervision on canonicalization and that we use link prediction questions to quantify model performance. If models in OLP show convincing progress, they could and should be applied to TAC KBP. B Dataset creation The process of deriving the dataset from OPIEC was as follows. Initially, the dataset contained over 340M non-distinct triples,5 which are enriched with metadata such as source sentence, linguistic annotations, confidence scores about the correctness of the extractions and the Wikipedia links in the triple’s subject or object. Triples of the following types are not useful for our purpose and are removed: (i) having a confidence score < 0.3,6 5The triples can be non-distinct, i.e., duplicates, when they have been extracted from different sentences. 6The confidence score is computed by a classifier that determines the probability of the triple having an extraction error. Refer to OPIEC’s publication for further description. (ii) having personal or possessive pronouns, whdeterminer, adverbs or determiners in one of their arguments, (iii) having a relation from an implicit appositive clause extraction, which we found to be very noisy, and (iv) having a mention or a relation that is longer than 10 tokens. This left 80M nondistinct triples. Next, we lowercased the remaining 60M distinct triples and collect an entity-mentions map from all triples that have an annotated entity. We collected token counts and created a mention token vocabulary with the top 200K most frequent tokens, and a relation token vocabulary with the top 50K most frequent tokens. This was done to ensure that each token is seen at least ≈50 times. Finally, we kept only the triples whose tokens were contained in these vocabularies, i.e., the final 30M distinct triples. C Training details C.1 Multi-Label Binary Classification Batch-Negative Example Loss Recent studies (Dettmers et al., 2018) obtained state-of-the-art results using multi-label binary classification over the full entity vocabulary. Let the cardinality of the OKG’s mention set be N = |Th ∪Tt|. A training instance is either a prefix (i, k) with label yik ∈{0, 1}N given by yik c = ( 1 if (i, k, c) ∈T 0 otherwise, for c ∈{1, .., N} or, likewise, a suffix (k, j) and ykj ∈{0, 1}N. 
Computing such a loss over the whole entity mention vocabulary is infeasible because (a) our entity mention vocabulary is very large and (b) we have to recompute the entity mention embeddings after each parameter update for each batch. To improve memory efficiency and speed, we devise a strategy to create negative examples dubbed batch negative examples. This method simplifies the batch construction by using only the entities in the batch as negative examples. Formally, after sampling the prefix and suffix instances for a batch b, we collect all true answers in a set ˆ Bb, such that the label vectors yik and ykj in batch b is defined over ˆ Bb and the loss in batch b is computed by Lik = 1 |Bb| X c∈ˆ Bb −[yik c · log σ(s(i, k, c)) +(1 −yik c ) · log(1 −σ(s(i, k, c)))] 2308 Leakage Model Removal Model Selection MRR HITS@1 HITS@10 HITS@50 COMPLEX-UNI ALL 2.2 0.8 4.7 10.2 COMPLEX-UNI MENTION 2.2 0.9 4.7 10.3 COMPLEX-UNI LINKED 2.2 0.9 4.7 10.3 DISTMULT-LSTM ALL 3.2 1.7 5.9 11.6 DISTMULT-LSTM MENTION 3.3 1.8 5.9 12.2 DISTMULT-LSTM LINKED 3.3 1.8 5.9 12.2 THOROUGH COMPLEX-LSTM-XL ALL 3.3 1.8 5.8 12.0 COMPLEX-LSTM-XL MENTION 3.6 1.9 6.6 13.9 COMPLEX-LSTM-XL LINKED 3.6 1.9 6.6 13.9 COMPLEX-LSTM ALL 2.7 1.5 4.7 9.1 COMPLEX-LSTM MENTION 3.8 2.1 7.1 14.1 COMPLEX-LSTM LINKED 3.9 2.1 7.0 14.6 Table 5: Additional Test results. Comparing DISTMULT-LSTM, COMPLEX-LSTM-XL with embedding size 768, COMPLEX-UNI with uni-gram pooling as composition function. Model selection on VALID, VALID-LINKED and VALID-MENTION, models trained on THOROUGH; Results in percent. and Lkj is computed likewise. With batch negative examples the mentions/entities appear in expectation proportional to their frequency in the training data as a “negative example”. C.2 Training settings We used Adagrad with mini batches (Duchi et al., 2011) with batch size 4096. The token embeddings were initialized with the Glorot initialization (Glorot and Bengio, 2010). One epoch takes ≈50 min with a TitanXp/1080Ti. We performed a grid search over the following hyperparameters: entity and relation token embedding sizes [256, 512], drop-out after the composition function f and g [0.0, 0.1], learning rate [0.05, 0.1, 0.2] and weight decay [10−6, 10−10]. We trained the models for 10 epochs and selected the hyperparameters, which achieved the best MRR with mention ranking on VALID-LINKED. We trained the final models for up to 100 epochs but did early stopping if no improvement occured within 10 epochs. D Performance Metrics Denote by M(E) all mentions from the dataset. Denote by Q the set of all questions generated from the evaluation data. Given a question qt ∈Q, we rank all m ∈M(E) by the scores s(i, k, m) (or s(m, k, j) for qh ∈Q), then filter the raw rank according to either the entity-ranking protocol or the mention-ranking protocol. Finally, we record the positions of the correct answers in the filtered ranking. MRR is defined as follows: For each question q ∈Q, let RRq be the filtered reciprocal rank of the top-ranked correct answer. MRR is the microaverage over {RRq | q ∈Q}. HITS@k is the proportion of the questions where at least one correct mention appears in the top k positions of the filtered ranking. E Additional Results Tab. 5 provides results for other models and hyperparameters. The COMPLEX-LSTM results from the Sec. 6 are given at the bottom for comparison. COMPLEX-LSTM-XL has a larger embedding size of 768, which did not help to improve the results. 
COMPLEX-UNI is the ComplEx model with the uni-gram pooling composition function, i.e., averaging the token embeddings. Compared to COMPLEX-LSTM it shows that LSTM as a composition function did yield better results. DISTMULTLSTM is the DistMult relational model (Yang et al., 2015) with an LSTM as composition function, which did not improve over COMPLEX-LSTM. In Summary, the results support the hyperparameters, model and composition function chosen for the experiments in Sec. 6. Overall, we observed that model selection based on VALID-ALL seems to have a higher variance because the model selected for COMPLEX-LSTM with VALID-ALL is outperformed by other models, whereas COMPLEXLSTM performed best for models selected with VALID-MENTION and VALID-LINKED.
2020
209
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 225–237 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 225 Learning to Ask More: Semi-Autoregressive Sequential Question Generation under Dual-Graph Interaction Zi Chai, Xiaojun Wan Wangxuan Institue of Computer Technology, Peking University The MOE Key Laboratory of Computational Linguistics, Peking University {chaizi, wanxiaojun}@pku.edu.cn Abstract Traditional Question Generation (TQG) aims to generate a question given an input passage and an answer. When there is a sequence of answers, we can perform Sequential Question Generation (SQG) to produce a series of interconnected questions. Since the frequently occurred information omission and coreference between questions, SQG is rather challenging. Prior works regarded SQG as a dialog generation task and recurrently produced each question. However, they suffered from problems caused by error cascades and could only capture limited context dependencies. To this end, we generate questions in a semi-autoregressive way. Our model divides questions into different groups and generates each group of them in parallel. During this process, it builds two graphs focusing on information from passages, answers respectively and performs dual-graph interaction to get information for generation. Besides, we design an answer-aware attention mechanism and the coarse-to-fine generation scenario. Experiments on our new dataset containing 81.9K questions show that our model substantially outperforms prior works. 1 Introduction Question Generation (QG) aims to teach machines to ask human-like questions from a range of inputs such as natural language texts (Du et al., 2017), images (Mostafazadeh et al., 2016) and knowledge bases (Serban et al., 2016). In recent years, QG has received increasing attention due to its wide applications. Asking questions in dialog systems can enhance the interactiveness and persistence of humanmachine interactions (Wang et al., 2018). QG benefits Question Answering (QA) models through data augmentation (Duan et al., 2017) and joint learning (Sun et al., 2019). It also plays an important role in education (Heilman and Smith, 2010) and clinical (Weizenbaum et al., 1966) systems. Traditional Question Generation (TQG) is defined as the reverse task of QA, i.e., a passage and an answer (often a certain span from the passage) are provided as inputs, and the output is a question grounded in the input passage targeting on the given answer. When there is a sequence of answers, we can perform Sequential Question Generation (SQG) to produce a series of interconnected questions. Table 1 shows an example comparing the two tasks. Intuitively, questions in SQG are much more concise and we can regard them with given answers as QA-style conversations. Since it is more natural for human beings to test knowledge or seek information through coherent questions (Reddy et al., 2019), SQG has wide applications, e.g., enabling virtual assistants to ask questions based on previous discussions to get better user experiences. SQG is a challenging task in two aspects. First, information omissions between questions lead to complex context dependencies. Second, there are frequently occurred coreference between questions. Prior works regarded SQG as a dialog generation task (namely conversational QG) where questions are generated autoregressively (recurrently), i.e., a new question is produced based on previous outputs. 
Although many powerful dialog generation models can be adopted to address the challenges mentioned above, there are two major obstacles. First, these models suffer from problems caused by error cascades. Empirical results from experiments reveal that the later generated questions tend to become shorter with lower quality, especially becoming more irrelevant to given answers, e.g., “Why?”, “What else?”. Second, models recurrently generating each question struggle to capture complex context dependencies, e.g., long-distance coreference. Essentially, SQG is rather different from dialog generation since all answers are given in advance and they act as strict semantic constraints during text generation. 226 (1) A small boy named [John]1 was at the park one day. (2) He was [swinging]2 [on the swings]3 and [his friend]4 named [Tim]5 [played on the slide]6. (3) John wanted to play on the slide now. (4) He asked Tim [if he could play on the slide]7. (5) Tim said [no]8, and he cried. Turn TQG SQG Answer 1 Who was at the park? Who was at the park? John 2 What was John doing at the park? What was he doing there? swinging 3 Where was John swinging? On what? on the wings 4 Who was with John at the park? Who was he with? his friend 5 What is the name of John’s friend? Named? Tim 6 What was Tim doing? What was he doing? played on the side 7 What did John asked Tim? What did John asked him? if he could play on the slide 8 What did Tim say to John? What did he say? no Table 1: Comparison of Traditional Question Generation (TQG) and Sequential Question Generation (SQG). The given passage contains five sentences, and we mark the given answers in the passage as blue. To deal with these problems, we perform SQG in a semi-autoregressive way. More specifically, we divide target questions into different groups (questions in the same group are closely-related) and generate all groups in parallel. Especially, our scenario becomes non-autoregressive if each group only contains a single question. Since we eliminate the recurrent dependencies between questions in different groups, the generation process is much faster and our model can better deal with the problems caused by error cascades. To get information for the generation process, we perform dualgraph interaction where a passage-info graph and an answer-info graph are constructed and iteratively updated with each other. The passage-info graph is used for better capturing context dependencies, and the answer-info graph is used to make generated questions more relevant to given answers with the help of our answer-aware attention mechanism. Besides, a coarse-to-fine text generation scenario is adopted for the coreference resolution between questions. Prior works performed SQG on CoQA (Reddy et al., 2019), a high-quality dataset for conversational QA. As will be further illustrated, a number of data in CoQA are not suitable for SQG. Some researchers (Gao et al., 2019) directly discarded these data, but the remaining questions may become incoherent, e.g., the antecedent words for many pronouns are unclear. To this end, we build a new dataset from CoQA containing 81.9K relabeled questions. Above all, the main contributions of our work are: • We build a new dataset containing 7.2K passages and 81.9K questions from CoQA. It is the first dataset specially built for SQG as far as we know. • We perform semi-autoregressive SQG under dual-graph interaction. This is the first time that SQG is not regarded as a dialog generation task. 
We also propose an answer-aware attention mechanism and a coarse-to-fine generation scenario for better performance. • We use extensive experiments to show that our model outperforms previous work by a substantial margin. Further analysis illustrated the impact of different components. Dataset for this paper is available at https:// github.com/ChaiZ-pku/Sequential-QG. 2 Related Work 2.1 Traditional Question Generation TQG was traditionally tackled by rule-based methods (Lindberg et al., 2013; Mazidi and Nielsen, 2014; Hussein et al., 2014; Labutov et al., 2015), e.g., filling handcrafted templates under certain transformation rules. With the rise of data-driven learning approaches, neural networks (NN) have gradually taken the mainstream. Du et al. (2017) pioneered NN-based QG by adopting the Seq2seq architecture (Sutskever et al., 2014). Many ideas were proposed since then to make it more powerful, including answer position features (Zhou et al., 2017), specialized pointer mechanism (Zhao et al., 2018), self-attention (Scialom et al., 2019), answer separation (Kim et al., 2019), etc. In addition, enhancing the Seq2seq model into more complicated structures using variational inference, adversarial training and reinforcement learning (Yao et al., 2018; Kumar et al., 2019) have also gained much attention. There are also some works performing TQG under certain constraints, e.g., controlling the 227 topic (Hu et al., 2018) and difficulty (Gao et al., 2018) of questions. Besides, combining QG with QA (Wang et al., 2017; Tang et al., 2017; Sun et al., 2019) is also focused by many researchers. 2.2 Sequential Question Generation As human beings tend to use coherent questions for knowledge testing or information seeking, SQG plays an important role in many applications. Prior works regarded SQG as a dialog generation task (namely conversational QA). Pan et al. (2019) pretrained a model performing dialog generation, and then fine-tuned its parameters by reinforcement learning to make generated questions relevant to given answers. Gao et al. (2019) iteratively generated questions from previous outputs and leveraged off-the-shelf coreference resolution models to introduce a coreference loss. Besides, additional human annotations were performed on sentences from input passages for conversation flow modeling. Since SQG is essentially different from dialog generation, we discard its dialog view and propose the first semi-autoregressive SQG model. Compared with using the additional human annotation in Gao et al. (2019), our dual-graph interaction deals with context dependencies automatically. Besides, our answer-aware attention mechanism is much simpler than the fine-tuning process in Pan et al. (2019) to make outputs more answer-relevant. 3 Dataset As the reverse task of QA, QG is often performed on existing QA datasets, e.g., SQuAD (Rajpurkar et al., 2016), NewsQA (Trischler et al., 2016), etc. However, questions are independent in most QA datasets, making TQG the only choice. In recent years, the appearance of large-scale conversational QA datasets like CoQA (Reddy et al., 2019) and QuAC (Choi et al., 2018) makes it possible to train data-driven SQG models, and the CoQA dataset was widely adopted by prior works. Since the test set of CoQA is not released to the public, its training set (7.2K passages with 108.6K questions) was split into new training and validation set, and its validation set (0.5K passages with 8.0K questions) was used as the new test set. 
Different from traditional QA datasets where the answers are certain spans from given passages, answers in CoQA are free-form text1 with cor1Only 66.8% of the answers overlap with the passage after ignoring punctuations and case mismatches. responding evidence highlighted in the passage. This brings a big trouble for QG. As an example, consider the yes/no questions counting for 19.8% among all questions. Given the answer “yes” and a corresponding evidence “...the group first met on July 5 , 1967 on the campus of the Ohio state university...”, there are many potential outputs, e.g., “Did the group first met in July?”, “Was the group first met in Ohio state?”. When considering the context formed by previous questions, the potential outputs become even more (the original question in CoQA is “Was it founded the same year?”). When there are too many potential outputs with significantly different semantic meanings, training a converged QG model becomes extremely difficult. For this reason, Gao et al. (2019) directly discarded questions that cannot be answered by spans from passages. However, the remaining questions can become incoherent, e.g., antecedent words for many pronouns become unclear. To this end, we build a new dataset from CoQA by preserving all 7.7K passages and rewriting all questions and answers. More specifically, we first discarded questions that are unsuitable for SQG. To do so, three annotators were hired to vote for the preservation/deletion of each question. A question is preserved if and only if it can be answered by a certain span from the input passage2. As a result, most deleted questions were yes/no questions and unanswerable questions. Besides, the kappa score between results given by different annotators was 0.83, indicating that there was a strong interagreement between annotators. For the remaining QA-pairs, we preserved their original order and replaced all answers by spans from input passages. After that, we rewrote all questions to make them coherent. To avoid over-editing, annotators were asked to modify as little as possible. It turned out that in most cases, they only needed to deal with coreference since the prototype of pronouns were no longer existed. To further guarantee the annotation quality, we hired another project manager who daily examined 10% of the annotations from each annotator and provided feedbacks. The annotation was considered valid only when the accuracy of examined results surpasses 95%. Our annotation process took 2 months, and we finally got a dataset containing 7.7K passage with 81.9K QA-pairs. 2Using certain spans from input passages (instead of freeformed text) as answers is a conversion in QG. In this way, the number of potential output questions is greatly reduced. 228 Figure 1: Architecture of our model. The example is corresponding with Table 1 4 Model In this section, we formalize the SQG task and introduce our model in details. As shown in Figure 1, the model first builds a passage-info graph and an answer-info graph by its passage-info encoder and answer-info encoder respectively. After that, it performs dual-graph interaction to get representations for the decoder. Finally, different groups of questions are generated in parallel under a coarse-to-fine scenario. Both encoders and decoder take the form of Transformer architecture (Vaswani et al., 2017). 4.1 Problem Formalization In SQG, we input a passage composed by n sentences P = {Si}n i=1 and a sequence of l answers {Ai}l i=1, each Ai is a certain span of P. 
The target output is a series of questions {Qi}l i=1, where Qi can be answered by Ai according to the input passage P and previous QA-pairs. As mentioned above, we perform SQG in an semi-autoregressive way, i.e., target questions are divided into into different groups. Ideally, questions in the same group are expected to be closelyrelated, while questions in different groups should be as independent as possible. Our model takes a simple but effective unsupervised question clustering method. The intuition is: if two answers come from the same sentence, the two corresponding questions are likely to be closely-related. More specifically, if the k-th sentence Sk contains p answers from {Ai}l i=1, we cluster them into an answer-group Gans k = {Aj1, Aj2, ..., Ajp} where j1 < j2 < ... < jp are continuous indexes from {1, 2, ..., l}. By replacing each answer in Gans k with its corresponding question, we get a questiongroup Gques k = {Qj1, Qj2, ..., Qjp}, and we further define a corresponding target-output Tk as “Qj1 [sep] Qj2 [sep] ... [sep] Qjp” where “[sep]” is a special token. In Table 1, there are four target outputs T1, T2, T4, T5 (no T3 since the third sentence in Table 1 do not contain any answer), T2 is “What was he doing there? [sep] On What? [sep] ... [sep] What was Tim doing?” corresponding with the second sentence, and T5 is “What did he say?” corresponding with the last sentence. Supposing there are m answer- and question-groups, then our model generates all the m target-outputs in parallel, i.e., all questions are generated in a semi-autoregressive way. 4.2 Passage-Info encoder As shown in Figure 1, our passage-info encoder maps input sentences {Si}n i=1 into their sentence representations {si}n i=1 where every si ∈R2ds. We regard each sentence as a sequence of words and replace each word by its pre-trained word embeddings (Mikolov et al., 2013) which is a dense vector. After that, the sequence of word embeddings is sent to a Transformer-encoder that outputs a corresponding sequence of vectors. By averaging these vectors, we get the local representation slocal i ∈Rds of Si. After we get the local representations of all sentences {Si}n i=1 in passage P, another Transformerencoder is adopted to map the sequence {slocal i }n i=1 into {sglobal i }n i=1, where sglobal i ∈Rds is called the 229 Figure 2: Illustration of answer embeddings and an answer-attention head for the forth sentence in Table 1. global representation for Si. In other words, the passage-info encoder takes a hiarachical structure. We expect the local and global representations capture intra- and inter- sentence context dependencies respectively, and the final representation for Si is si = [slocal i ; sglobal i ] ∈R2ds. 4.3 Answer-Info Encoder As described in Section 4.1, the input answers are split into m answer-groups. For Gans k corresponding with the k-th sentence of the input passage, we define {Gans k , Sk} as a “rationale” Rk, and further obtain its representation rk ∈R2dr by our answerinfo encoder, which is based on a Transformerencoder regarding sentence Sk as its input. To further consider information from Gans k , two more components are added into the answer-info encoder, as shown in Figure 2. First, we adopt the answer-tag features. For each word wi in sentence Sk, the embedding layer computes [xw i ; xa i] ∈Rdr as its final embedding, where xw i is the pre-trained word embedding and xa i contains answer-tag features. 
More specifically, we give wi a label from {O, B, I} if it is “outside”, “the beginning of”, “inside of” any answer from Gans k , and use a vector corresponding with this label as xa i. Second, we design the answer-aware attention mechanism. In the multi-head attention layer, there are not only lh vanilla “self-attention heads”, but also la “answer-aware heads” for each answer in Gans k . In an answer-aware head corresponding with answer A, words not belonging to A are masked out during the attention mechanism. The output of the Transformer-encoder is a sequence of vectors Henc k = {henc k } (henc k ∈Rdr) corresponding with the input word sequence from Sk. After getting Henc k , we further send the sequence of vectors to a bi-directional GRU network (Chung et al., 2014) and take its last hidden state as the final rationale embedding rk ∈R2dr. 4.4 Graph Construction In our SQG task, the input passage contain n sentences, which can be represented by {si}n i=1 ∈ R2ds leveraging the passage-info encoder. Among all input sentences, only m of them contain certain answers (m ≤n), and we further define m rationales based on these sentences, {Gans F(j), SF(j)}m j=1, where the j-th rationale (j ∈{1, 2, ..., m}) corresponds with the F(j)-th sentence of the input passage (F(j) ∈{1, 2, ..., n}). For the example in Table 1, n = 5, m = 4, F(j) maps {1, 2, 3, 4} into {1, 2, 4, 5} respectively. Using the answer-info encoder, we can get representations {rF(j)}m j=1 ∈ R2ds for all rationales. We further build a passage-info graph V and an answer-info graph U based on these representations. For the rationale corresponding with the k-th sentence of the input passage, we add node uk, vk in graph U, V respectively. For the example in Table 1, U is compused by {u1, u2, u4, u5} and V is compused by {v1, v2, v4, v5}, as shown in Figure 1. The initial representation for uk is computed by: u(0) k = ReLU(Wu[rk; ek] + bu) ∈Rdg (1) where rk ∈R2dr is the rationale representation, ek ∈Rde is the embedding of index k, and Wu ∈ R(de+2dr)×dg, bu ∈Rdg are trainable parameters. And the initial representation for vk is: v(0) k = ReLU(Wv[sk; ek] + bv) ∈Rdg (2) where sk ∈R2ds is the sentence representation and Wv ∈R(de+2ds)×dg, bv ∈Rdg are parameters. After adding these points, there are m nodes in U and V respectively. For ui, uj ∈U corresponding with the i-th, j-th input sentences respectively, we add an edge between them if |i −j| < δ (δ is a hyper-parameter). Similarly, we add edges into V and the two graphs are isomorphic. 4.5 Dual-Graph Interaction In our answer-info graph U, node representations contain information focused on input answers. In the passage-info graph V, node representations capture inter- and intra-sentence context dependencies. As mentioned above, a good question should be 230 answer-relevant as well as capturing complex context dependencies. So we should combine information in both U and V. Our dual-graph interaction is a process where U and V iteratively update node representations with each other. At time step t, representations u(t−1) i , v(t−1) i are updated into u(t) i , v(t) i respectively under three steps. First, we introduce the information transfer step. Taking U as an example. Each u(t−1) i receives a(t) i from its neighbors (two nodes are neighbors if there is an edge between them) by: a(t) i = X uj∈N(ui) Wij u(t−1) j + bij (3) where N(ui) is composed by all neighbors of node ui and Wij ∈Rdg×dg, bij ∈Rdg are parameters controlling the information transfer. 
For ui, uj and ui′, uj′ whose |i −j| = |i′ −j′|, we use the same W and b. In other words, we can first create a sequence of matrices {W1, W2, ...} ∈Rdg×dg and vectors {b1, b2, ...} ∈Rdg, and then use |i −j| as the index to retrieve the corresponding Wij, bij. For graph V, we similarly compute ˜a(t) i = X vj∈N(vi) ˜ Wij v(t−1) j + ˜bij (4) In the second step, we compute multiple gates. For each u(t−1) i in U, we compute an “update gate” y(t) i and a “reset gate” z(t) i by: y(t) i = σ(Wy[a(t) i ; u(t−1) i ]) z(t) i = σ(Wz[a(t) i ; u(t−1) i ]) (5) where Wy, Wz ∈R2dg×dg are paramenters. Similarly, for each v(t−1) i in V we compute: ˜y(t) i = σ( ˜ Wy[˜a(t) i ; v(t−1) i ]) ˜z(t) i = σ( ˜ Wz[˜a(t) i ; v(t−1) i ]) (6) Finally, we perform the information interaction, where each graph updates its node representations under the control of gates computed by the other graph. More specifically, node representations are updated by: u(t) i = ˜z(t) i ⊙u(t−1) i + (1 −˜z(t) i ) ⊙ tanh(Wa[a(t) i ; ˜y(t) i ⊙u(t−1) i ]) v(t) i =z(t) i ⊙v(t−1) i + (1 −z(t) i ) ⊙ tanh( ˜ Wa[˜a(t) i ; y(t) i ⊙v(t−1) i ]) (7) The idea of using gates computed by the other graph to update node representations in each graph enables the information in input passage and answers interact more frequently, both of which act as strong constraints to the output questions. By iteratively performing the three steps for T times, we get the final representations u(T) i and v(T) i for ui ∈U and vi ∈V. 4.6 Decoder For the k-th input sentence Sk containing certain answers, our decoder generates the corresponding target-output Tk. As mentioned above, the generation process of all target-outputs are independent. The decoder is based on the Transformer-decoder containing a (masked) multi-head self-attention layer, a multi-head encoder-attention layer, a feedforward projection layer and the softmax layer. To compute keys and values for the multi-head encoder-attention layer, it leverages the outputs from our answer-info encoder, i.e., it uses Henc k described in Section 4.3 to generate Tk corresponding with the k-th sentence. To generate coherent questions, we need to capture the context dependencies between input answers and passages. To this end, both u(T) k and v(T) k , which comes from the dual-graph interaction process, are used as additional inputs for generating Tk. First, they are concatenated with the output of each head from both (masked) multi-head selfattention layer and multi-head encoder-attention layer before sending to the next layer. Second, they are concatenated with inputs of the feed-forward projection layer. The two representations are also expected to make generated questions more relevant to given inputs. 4.7 Coarse-To-Fine Generation Since the semi-autoregressive generation scenario makes it more challenging to deal with coreferences between questions (especially questions in different groups), we perform question generation in a coarse-to-fine manner. The decoder only needs to generate “coarse questions” where all pronouns are replaced by a placeholder “[p]”. To get final results, we use an additional pre-trained coreference resolution model to fill pronouns into different placeholders. To make a fair comparison, we use the coreference resolution model (Clark and Manning, 2016) adopted by prior works CoreNQG (Du and Cardie, 2018) and CorefNet (Gao et al., 2019). 
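The dual-graph interaction of Section 4.5 (Equations 3-7) amounts to distance-shared message passing on each graph, GRU-style gates, and a gated update in which each graph uses the gates computed from the other graph. The following is a self-contained PyTorch sketch with toy inputs and placeholder sizes, meant only to illustrate the equations, not to reproduce the authors' code.

```python
# One step of dual-graph interaction (Equations 3-7).
import torch
import torch.nn as nn

class DualGraphInteraction(nn.Module):
    def __init__(self, d_g=128, max_dist=3):
        super().__init__()
        # W_{ij}, b_{ij} are shared across node pairs with the same distance |i - j|
        self.W_u = nn.ModuleList([nn.Linear(d_g, d_g) for _ in range(max_dist)])
        self.W_v = nn.ModuleList([nn.Linear(d_g, d_g) for _ in range(max_dist)])
        self.gate_y_u = nn.Linear(2 * d_g, d_g, bias=False)   # Eq. 5
        self.gate_z_u = nn.Linear(2 * d_g, d_g, bias=False)
        self.gate_y_v = nn.Linear(2 * d_g, d_g, bias=False)   # Eq. 6
        self.gate_z_v = nn.Linear(2 * d_g, d_g, bias=False)
        self.W_a_u = nn.Linear(2 * d_g, d_g)
        self.W_a_v = nn.Linear(2 * d_g, d_g)
        self.max_dist = max_dist

    def aggregate(self, nodes, idx, linears):
        # Eq. 3 / Eq. 4: sum over neighbors within distance < delta,
        # using distance-indexed weight matrices
        msgs = []
        for i, pos_i in enumerate(idx):
            m = torch.zeros_like(nodes[i])
            for j, pos_j in enumerate(idx):
                d = abs(pos_i - pos_j)
                if i != j and d < self.max_dist:
                    m = m + linears[d](nodes[j])
            msgs.append(m)
        return torch.stack(msgs)

    def forward(self, u, v, idx):
        a_u = self.aggregate(u, idx, self.W_u)
        a_v = self.aggregate(v, idx, self.W_v)
        y_u = torch.sigmoid(self.gate_y_u(torch.cat([a_u, u], -1)))
        z_u = torch.sigmoid(self.gate_z_u(torch.cat([a_u, u], -1)))
        y_v = torch.sigmoid(self.gate_y_v(torch.cat([a_v, v], -1)))
        z_v = torch.sigmoid(self.gate_z_v(torch.cat([a_v, v], -1)))
        # Eq. 7: each graph is updated with the gates computed from the *other* graph
        u_new = z_v * u + (1 - z_v) * torch.tanh(self.W_a_u(torch.cat([a_u, y_v * u], -1)))
        v_new = z_u * v + (1 - z_u) * torch.tanh(self.W_a_v(torch.cat([a_v, y_u * v], -1)))
        return u_new, v_new

# toy usage: 4 rationales attached to sentences 1, 2, 4, 5 (as in Table 1)
layer = DualGraphInteraction(d_g=128, max_dist=3)
u, v = torch.randn(4, 128), torch.randn(4, 128)
for _ in range(4):                      # T interaction steps
    u, v = layer(u, v, idx=[1, 2, 4, 5])
print(u.shape, v.shape)
```

Swapping the cross-graph gates for each graph's own gates in the last two update lines recovers the "no interact" ablation of Section 6.1 (Equation 8).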
231 Model BLEU1 BLEU2 BLEU3 ROUGE METEOR Length Seq2seq (Du et al., 2017) 28.72 10.16 6.30 31.75 13.10 5.78 CopyNet (See et al., 2017) 29.40 12.14 6.53 33.71 14.20 5.77 CoreNQG (Du and Cardie, 2018) 33.84 14.69 8.72 34.38 14.05 6.08 VHRED (Serban et al., 2017) 30.51 11.95 6.94 31.93 12.42 4.83 HRAN (Xing et al., 2018) 30.18 12.53 7.65 35.06 12.95 5.02 ReDR (Pan et al., 2019) 30.84 15.17 9.81 35.58 15.41 5.58 CorefNet (Gao et al., 2019) 32.72 16.01 10.97 37.48 16.09 5.96 Ours 35.70 19.64 12.06 38.15 17.26 6.03 Table 2: Experimental results. In each column, we bold / underline the best performance over all / baseline methods, respectively. Under the evaluation of BLEU, ROUGE-L and METEOR, our model differs from others (except the METEOR score of CorefNet) significantly based on the one-side paired t-test with p < 0.05. 5 Experiments In this section, we first introduce the three kinds of baselines. After that, we compare and analyse the results of different models under both automatic and human evaluation metrics. 5.1 Baselines We compared our model with seven baselines that can be divided into three groups. First, we used three TQG models: the Seq2seq (Du et al., 2017) model which pioneered NN-based QG, the CopyNet (See et al., 2017) model that introduced pointer mechanism, and CoreNQG (Du and Cardie, 2018) which used hybrid features (word, answer and coreference embeddings) for encoder and adopted copy mechanism for decoder. Second, since prior works regarded SQG as a conversation generation task, we directly used two powerful multi-turn dialog systems: the latent variable hierarchical recurrent encoder-decoder architecture VHRED (Serban et al., 2017), and the hierarchical recurrent attention architecture HRAN (Xing et al., 2018). Third, we used prior works mentioned above. For Pan et al. (2019), we adopted the ReDR model which had the best performance. For Gao et al. (2019), we used the CorefNet model. Although a CFNet in this paper got better results, it required additional human annotations denoting the relationship between input sentences and target questions. So it is unfair to compare CFNet with other methods. It is worth mentioning that when generating questions using the second and third groups of baselines, only previously generated outputs were used as dialog history, i.e., the gold standard questions are remain unknown (in some prior works, they were directly used as dialog history, which we think is inappropriate in practice). SQuAD CoQA Ours Passage 117 271 271 Question 10.1 5.5 6.6 Answer 3.2 2.7 3.2 Table 3: Average number of words in passage, question and answer in different datasets. 5.2 Automatic Evaluation Metrics Following the conventions, we used BLEU (Papineni et al., 2002), ROUGE-L (Lin, 2004) and METEOR (Lavie and Agarwal, 2007) as automatic evaluation metrics. We also computed the average word-number of generated questions. As shown in Table 2, our semi-autoregressive model outperformed other methods substantially. When we focus on the second and third groups of baselines regarding SQG as multi-turn dialog generation tasks, we can find that models from the third group are more powerful since they make better use of information from input passages. Besides, models from the second group tend to generate shortest questions. Finally, similar to the problem that dialog systems often generate dull and responses, these models also suffer from producing general but meaningless questions like “What?”, “How?”, “And else?”. 
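The n-gram metrics reported in Table 2 are computed with the pycocoevalcap scripts (see Appendix B); the rough stand-in below, using NLTK's corpus BLEU and a small LCS-based ROUGE-L, only illustrates the computation and will not reproduce the reported numbers exactly.

```python
# Approximate BLEU-n and ROUGE-L for generated vs. gold questions.
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

def bleu_n(references, hypotheses, n):
    """references / hypotheses: lists of token lists (one reference per hypothesis)."""
    weights = tuple(1.0 / n for _ in range(n))
    return corpus_bleu([[r] for r in references], hypotheses,
                       weights=weights, smoothing_function=SmoothingFunction().method1)

def rouge_l(reference, hypothesis, beta=1.2):
    # longest common subsequence, then the usual F-measure
    m, n = len(reference), len(hypothesis)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = dp[i][j] + 1 if reference[i] == hypothesis[j] \
                else max(dp[i][j + 1], dp[i + 1][j])
    lcs = dp[m][n]
    if lcs == 0:
        return 0.0
    prec, rec = lcs / n, lcs / m
    return (1 + beta ** 2) * prec * rec / (rec + beta ** 2 * prec)

refs = ["who came into the store ?".split()]
hyps = ["who came into the store looking ?".split()]
print(bleu_n(refs, hyps, 3), rouge_l(refs[0], hyps[0]))
```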
When we compare the first and third groups of baselines (which are all QG models), it is not surprising that SQG models show more advantages than TQG models, as they take the relationships between questions into consideration. Besides, CorefNet gets better performance among all baselines, especially ReDR. This indicates that comparing with implicitly performing reinforcement learning through QA models, explicitly using target answers as inputs can be more effective. 232 CoreNQG CorefNet Ours Fluency 2.36 2.51 2.44 Coherence 1.53 2.04 2.17 Coreference 1.15 1.56 1.54 Answerability 1.12 1.18 1.45 Relevance 1.47 1.24 1.62 Table 4: Human evaluation results. Scores of each metric ranges between 1 to 3 and larger scores are better. Note that if we directly compare the performance between SQG task and TQG task under the same model (e.g., the Seq2seq model), evaluation scores for TQG tasks are much higher, which is not surprising since SQG is harder than TQG dealing with dependencies between questions. Another fact lies in the computation of automatic evaluation metrics. As shown in Table 2, questions in SQG datasets are much shorter than TQG. Since our automatic evaluation metrics are based on n-gram overlaps between generated and gold standard questions, the scores significantly go down with the growth of n (for this reason, the BLEU4 scores are not listed in Table 2). This also illustrates the importance of performing human evaluation. 5.3 Human Evaluation It is generally acknowledged that automatic evaluation metrics are far from enough for SQG. So we perform human evaluation in five aspects. Fluency measures if a question is grammatically correct and is fluent to read. Coherence measures if a question is coherent with previous ones. Coreference measures if a question uses correct pronouns. Answerability measures if a question is targeting on the given answer. Relevance measures if a question is grounded in the given passage. Since performing human evaluation is rather expensive and time-consuming, we picked up the best TQG model (CoreNQG), SQG model (CorefNet) to compare with our model. We randomly selected 20 passages from the test set with 207 given answers and asked 10 native speakers to evaluate the outputs of each model independently. Under each aspect, reviewers are asked to choose a score from {1, 2, 3}, where 3 indicates the best quality. The average scores for each evaluation metric are shown in Table 4. We can find that our model gets the best or competitive performance in each metric. When it comes to fluency, all models get high performance, and the CorefNet that outputs BLEU3 ROUGE METEOR No interact 11.35 37.31 17.05 Uni-graph 9.86 36.44 15.87 Uni-heads 10.33 37.48 16.24 No co2fine 11.75 37.92 17.17 Non-auto 7.79 33.62 14.83 Ours 12.06 38.15 17.26 Table 5: Results for ablation tests. shortest questions gets the best score. As for coherence, CoreNQG gets poor results since it generates questions independently. When it comes to coreference, our model only slightly lower than CorefNet, which added direct supervision to attention weights by a coreference resolution model. Finally, our model gets the best performance on both answerabity and relevance. However, it is worth noticing that all models get rather poor performances under these two aspects, indicating that making a concise question meaningful (i.e., targeting on given answers) with more information from input passage (i.e., performing proper information elimination) is a major challenge in SQG. 
Besides, as pointed out by Table 3, questions in our SQG dataset are significantly shorter compared with TQG dataset, making subtle errors much easier to be noticed. 6 Analysis 6.1 Ablation Test In this section, we perform ablation test to verify the influence of different components in our model. First, we modify Equation 7 into u(t) i = z(t) i ⊙u(t−1) i + (1 −z(t) i ) ⊙ tanh(Wa[a(t) i ; y(t) i ⊙u(t−1) i ]) v(t) i =˜z(t) i ⊙v(t−1) i + (1 −˜zi(t)) ⊙ tanh( ˜ Wa[˜a(t) i ; ˜y(t) i ⊙v(t−1) i ]) (8) to get the no interact model, i.e., two graphs are independently updated without any interaction. Second, we build a uni-graph model by removing the passage-info encoder (the remaining rationale graph is updated similarly to Li et al. (2015)). Third, we discard the attention-aware heads in the rationale encoder to get a uni-heads model. Then, we build the no co2fine model without the coarseto-fine generation scenario. Finally, we build a non-auto model that performs SQG in an nonautoregressive way, i.e., each question is generated in parallel. 233 Peter was a very sad puppy. He had been inside of the pet store for a very long time. In fact, he had been there for [three months]1! Peter had seen many other puppies find a person; he began to wonder why he could not get one. He thought that [maybe his fur was not pretty enough or maybe his bark was not loud enough]2. He tried and tried to please every person who came to the store, but they all picked smaller puppies. However, one day all of this changed. [Sammie]3 came into the store looking for [a golden puppy]4. She wanted a puppy she could snuggle with. It so happened that Peter was very sad and tired that day. Sammie came to hold him. Peter wanted to show off [his bark]5, but he was [too tired]6. He [fell right to sleep]7. Sammie loved him at once and loved holding him in her arms. Sammie took [Peter]8 home that day, and they made lots of fun memories. Turn Gold Standard CorefNet Ours 1 How long was Peter at pet store? How long he had been there? How long was Peter there? 2 Why couldn’t he get someone? What his fur was? What did he thought? 3 Who came into the store? Who came into the store? Who came into the store? 4 What for? What was Sammie looking? Who was she looking for? 5 What did peter wanted to show off? What Peter wanted show off? What he show off? 6 Why not? Why he wanted? What was he? 7 What did he do with her? And else? What did he do? 8 Who did she take? Who was Sammie took? What Sammie took that day? Table 6: Example outputs from different models. We mark the given answers in the passage as blue. As shown in Table 5, each component in our model plays an important part. Results for the no interact model indicate that compared with independently updating the passage-info graph and answer-info graph, making these information more interacted by our dual-graph interaction scenario is more powerful. Not surprisingly, the uni-graph model removing the passage encoder (i.e., less focusing on context dependencies between sentences from input passage), and the uni-heads model discarding our answer-aware attention mechanism (i.e., less focusing on given answers) get significant worse performance compared with our full model. Besides, our coarse-to-fine scenario helps to better deal with the dependencies between questions since there are widespread coreferences. 
Finally, although the architecture of non-auto model is a special case of our model where each group only contains a single question, the performance drops significantly, indicating the importance of using semi-autoregressive generation. However, the dualgraph interaction still makes its performance better than the Seq2seq and CopyNet in Table 2. 6.2 Running Examples In Table 6, we present some generated examples comparing our model and the strongest baseline CorefNet. On the one hand, our model performs better than CorefNet, especially that the output questions are more targeting on given answers (turn 2, 6, 7). It also correctly deals with coreferences (e.g., distinguishing “Peter” and “Sammie”). On the other hand, the generated questions have poor quality when gold standard questions involve more reasoning (turn 2, 6). Besides, the gold standard questions are more concise as well (turn 4, 6). 7 Conclusion In this paper, we focus on SQG which is an important yet challenging task. Different from prior works regarding SQG as a dialog generation task, we propose the first semi-autoregressive SQG model, which divides questions into different groups and further generates each group of closely-related questions in parallel. During this process, we first build a passage-info graph, an answer-info graph, and then perform dual-graph interaction to get representations capturing the context dependencies between passages and questions. These representations are further used during our coarse-to-fine generation process. To perform experiments, we analyze the limitation of existing datasets and create the first dataset specially used for SQG containing 81.9K questions. Experimental results show that our model outperforms previous works by a substantial margin. For future works, the major challenge is generating more meaningful, informative but concise questions. Besides, more powerful question clustering and coarse-to-fine generation scenarios are also worth exploration. Finally, performing SQG on other types of inputs, e.g., images and knowledge graphs, is an interesting topic. Acknowledgments This work was supported by National Natural Science Foundation of China (61772036) and Key Laboratory of Science, Technology and Standard in Press Industry (Key Laboratory of Intelligent Press Media Technology). We thank the anonymous reviewers for their helpful comments. Xiaojun Wan is the corresponding author. 234 References Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wentau Yih, Yejin Choi, Percy Liang, and Luke Zettlemoyer. 2018. Quac: Question answering in context. arXiv preprint arXiv:1808.07036. Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555. Kevin Clark and Christopher D Manning. 2016. Deep reinforcement learning for mention-ranking coreference models. arXiv preprint arXiv:1609.08667. Xinya Du and Claire Cardie. 2018. Harvesting paragraph-level question-answer pairs from wikipedia. arXiv preprint arXiv:1805.05942. Xinya Du, Junru Shao, and Claire Cardie. 2017. Learning to ask: Neural question generation for reading comprehension. arXiv preprint arXiv:1705.00106. Nan Duan, Duyu Tang, Peng Chen, and Ming Zhou. 2017. Question generation for question answering. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 866–874. Yifan Gao, Piji Li, Irwin King, and Michael R Lyu. 2019. 
Interconnected question generation with coreference alignment and conversation flow modeling. arXiv preprint arXiv:1906.06893. Yifan Gao, Jianan Wang, Lidong Bing, Irwin King, and Michael R Lyu. 2018. Difficulty controllable question generation for reading comprehension. arXiv preprint arXiv:1807.03586. Michael Heilman and Noah A Smith. 2010. Good question! statistical ranking for question generation. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 609– 617. Association for Computational Linguistics. Wenpeng Hu, Bing Liu, Jinwen Ma, Dongyan Zhao, and Rui Yan. 2018. Aspect-based question generation. Hafedh Hussein, Mohammed Elmogy, and Shawkat Guirguis. 2014. Automatic english question generation system based on template driven scheme. International Journal of Computer Science Issues (IJCSI), 11(6):45. Yanghoon Kim, Hwanhee Lee, Joongbo Shin, and Kyomin Jung. 2019. Improving neural question generation using answer separation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6602–6609. Vishwajeet Kumar, Ganesh Ramakrishnan, and YuanFang Li. 2019. Putting the horse before the cart: A generator-evaluator framework for question generation from text. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 812–821. Igor Labutov, Sumit Basu, and Lucy Vanderwende. 2015. Deep questions without deep understanding. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), volume 1, pages 889–898. Alon Lavie and Abhaya Agarwal. 2007. Meteor: An automatic metric for mt evaluation with high levels of correlation with human judgments. In Proceedings of the Second Workshop on Statistical Machine Translation, pages 228–231. Association for Computational Linguistics. Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. 2015. Gated graph sequence neural networks. arXiv preprint arXiv:1511.05493. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. Text Summarization Branches Out. David Lindberg, Fred Popowich, John Nesbit, and Phil Winne. 2013. Generating natural language questions to support learning on-line. In Proceedings of the 14th European Workshop on Natural Language Generation, pages 105–114. Karen Mazidi and Rodney D Nielsen. 2014. Linguistic considerations in automatic question generation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 321–326. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119. Nasrin Mostafazadeh, Ishan Misra, Jacob Devlin, Margaret Mitchell, Xiaodong He, and Lucy Vanderwende. 2016. Generating natural questions about an image. arXiv preprint arXiv:1603.06059. Boyuan Pan, Hao Li, Ziyu Yao, Deng Cai, and Huan Sun. 2019. Reinforced dynamic reasoning for conversational question generation. arXiv preprint arXiv:1907.12667. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Association for Computational Linguistics. 
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250. 235 Siva Reddy, Danqi Chen, and Christopher D Manning. 2019. Coqa: A conversational question answering challenge. Transactions of the Association for Computational Linguistics, 7:249–266. Thomas Scialom, Benjamin Piwowarski, and Jacopo Staiano. 2019. Self-attention architectures for answer-agnostic neural question generation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6027– 6032. Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointer-generator networks. arXiv preprint arXiv:1704.04368. Iulian Vlad Serban, Alberto Garc´ıa-Dur´an, Caglar Gulcehre, Sungjin Ahn, Sarath Chandar, Aaron Courville, and Yoshua Bengio. 2016. Generating factoid questions with recurrent neural networks: The 30m factoid question-answer corpus. arXiv preprint arXiv:1603.06807. Iulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2017. A hierarchical latent variable encoder-decoder model for generating dialogues. In Thirty-First AAAI Conference on Artificial Intelligence. Yibo Sun, Duyu Tang, Nan Duan, Shujie Liu, Zhao Yan, Ming Zhou, Yuanhua Lv, Wenpeng Yin, Xiaocheng Feng, Bing Qin, et al. 2019. Joint learning of question answering and question generation. IEEE Transactions on Knowledge and Data Engineering. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104–3112. Duyu Tang, Nan Duan, Tao Qin, Zhao Yan, and Ming Zhou. 2017. Question answering and question generation as dual tasks. arXiv preprint arXiv:1706.02027. Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2016. Newsqa: A machine comprehension dataset. arXiv preprint arXiv:1611.09830. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998–6008. Tong Wang, Xingdi Yuan, and Adam Trischler. 2017. A joint model for question answering and question generation. arXiv preprint arXiv:1706.01450. Yansen Wang, Chenyi Liu, Minlie Huang, and Liqiang Nie. 2018. Learning to ask questions in opendomain conversational systems with typed decoders. Joseph Weizenbaum et al. 1966. Eliza—a computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1):36–45. Chen Xing, Yu Wu, Wei Wu, Yalou Huang, and Ming Zhou. 2018. Hierarchical recurrent attention network for response generation. In Thirty-Second AAAI Conference on Artificial Intelligence. Kaichun Yao, Libo Zhang, Tiejian Luo, Lili Tao, and Yanjun Wu. 2018. Teaching machines to ask questions. In IJCAI, pages 4546–4552. Yao Zhao, Xiaochuan Ni, Yuanyuan Ding, and Qifa Ke. 2018. Paragraph-level neural question generation with maxout pointer and gated self-attention networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3901–3910. Qingyu Zhou, Nan Yang, Furu Wei, Chuanqi Tan, Hangbo Bao, and Ming Zhou. 2017. Neural question generation from text: A preliminary study. 
In National CCF Conference on Natural Language Processing and Chinese Computing, pages 662–671. Springer. 236 A Examples of Data Labeling In Table 7, we use a typical example to show how we relabeled CoQA. As introduced in our paper, we first deleted questions that cannot be answered by certain span from the passage. In Table 7, we deleted QA-pairs in turn 15, 18, 19 since they are yes/no questions, turn 3, 16 since the answer “female” is not a span from the input passage, and turn 13 since its answer is scattered in the sentence “Some of his cats have orange fur, some have black fur, some are spotted and one is white”. After deleting questions that are not suitable for SQG, we replaced the remaining answers into certain spans from the input passage. As shown in Table 7, in most cases the original answers were already a certain span. We slightly modified answers in turn 2, 7 from “Eight”, “Three” into “8”, “3” respectively. Finally, we rewrote all remaining questions to make them coherent. During this process, we mainly deal with information omission and coreference. In our example, we added a word “feline” into questions in turn 14 since the question 13 was deleted. B Details of Experiments We used the 200-dimentional pre-trained GloVe word embeddings 3 as initial value of word embeddings. During the training process, these embeddings were further fine-tuned. The NLTK4 package was used for sentence splitting and word tokenization. In our model, we set ds, dr, dg to 200, 256 and 128. For the passage-info encoder, we used 16 heads in the multil-attention layer. For the answerinfo encoder, we used 8 vanilla self-attention heads and additional 6 answer-aware heads for each answer. To construct the two graphs, we set δ into 3. In our dual-graph interaction, we set T into 4. To train our model, we used an Adam optimizer with momentums β1 = 0.9, β2 = 0.99 and ϵ = 10−8 to minimize the loss function. We varied the learning rate throughout training, including a warmup step and a decreasing step similar to the original Transformer. Besides, we applied dropout between 0.4 and 0.5 to prevent over-fitting. Our model was trained on two Nvidia RTX 2080Ti graphics cards. Since we noticed that the available baseline codes used different scripts to compute BLEU, 3https://nlp.stanford.edu/projects/ glove/ 4https://www.nltk.org/ ROUGE and METEOR, we used new scripts5 to compute the evaluation metrics in this paper. 5https://github.com/tylin/ coco-caption/tree/master/pycocoevalcap 237 Brendan loves cats. He owns 8 cats. He has 7 girl cats and only 1 boy cat. Brendan brushes the cats’ hair every day. He makes sure to feed them every morning and evening and always checks to see if the cats have water. Sometimes he feeds them special treats because he loves them. Each cat gets 3 treats. He doesn’t give them food like chips and cake and candy, because those foods aren’t good for cats. He likes to play with the cats. The cats like to chase balls of paper that Brendan makes for them. Some of his cats have orange fur, some have black fur, some are spotted and one is white. The white cat is Brendan’s favorite. She is the first cat he owned. Her name is Snowball. When he first got Snowball she was a kitten. His other cats are named Fluffy, Salem, Jackie, Cola, Snickers, Pumpkin and Whiskers. turn Original QA-Pairs New QA-Pairs 1 What does he care for? (cats) What does he care for? (cats) 2 How many does he have? (Eight) How many does he have? (8) 3 Are there more males or females? (females) Deleted 4 How many? 
(7 girl cats and only 1 boy cat) How many males and females? (7 girl cats and only 1 boy cat) 5 What is groomed? (cat’s hair) What is groomed? (cat’s hair) 6 What do they get fed? (treats) What do they get fed? (treats) 7 How many? (Three) How many? (3) 8 Why (because he loves them) Why (because he loves them) 9 What foods are avoided? (chips and cake and candy) What foods are avoided? (chips and cake and candy) 10 Why? (because those foods aren’t good for cats) Why? (because those foods aren’t good for cats) 11 What toys do they like? (balls of paper) What toys do they like? (balls of paper) 12 Who creates them? (Brendan) Who creates them? (Brendan) 13 What colors are the felines? (orange, black, spotted, and white) Deleted 14 Which is the most liked? (The white cat) Which is the most liked? (The white cat) 15 Is this his original one? (yes) Deleted 16 What is its gender? (female) Deleted 17 What does he call it? (Snowball) What does he call it? (Snowball) 18 Is there one called Binky? (No) Deleted 19 How about Scruff? (No) Deleted Table 7: Example for data labeling.
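The optimizer settings in Appendix B (Adam with β1 = 0.9, β2 = 0.99, ε = 1e-8, and a learning rate that warms up and then decreases "similar to the original Transformer") can be realized, for example, with a Noam-style inverse-square-root schedule. The exact schedule is not specified in the paper, so the sketch below is an assumption used only for illustration.

```python
# Adam optimizer with an assumed Noam-style warmup/decay schedule (Appendix B).
import torch

def make_optimizer(model, d_model=256, warmup_steps=4000):
    optimizer = torch.optim.Adam(model.parameters(), lr=1.0,
                                 betas=(0.9, 0.99), eps=1e-8)

    def noam(step):
        step = max(step, 1)
        return (d_model ** -0.5) * min(step ** -0.5, step * warmup_steps ** -1.5)

    scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=noam)
    return optimizer, scheduler

# usage inside the training loop:
#   loss.backward(); optimizer.step(); scheduler.step(); optimizer.zero_grad()
model = torch.nn.Linear(256, 3)        # placeholder for the full SQG model
optimizer, scheduler = make_optimizer(model)
```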
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2309–2324 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2309 INFOTABS: Inference on Tables as Semi-structured Data Vivek Gupta, Maitrey Mehta, Pegah Nokhiz, Vivek Srikumar School of Computing, University of Utah {vgupta,maitrey,pnokhiz,svivek}@cs.utah.edu Abstract In this paper, we observe that semi-structured tabulated text is ubiquitous; understanding them requires not only comprehending the meaning of text fragments, but also implicit relationships between them. We argue that such data can prove as a testing ground for understanding how we reason about information. To study this, we introduce a new dataset called INFOTABS, comprising of human-written textual hypotheses based on premises that are tables extracted from Wikipedia info-boxes. Our analysis shows that the semi-structured, multi-domain and heterogeneous nature of the premises admits complex, multi-faceted reasoning. Experiments reveal that, while human annotators agree on the relationships between a table-hypothesis pair, several standard modeling strategies are unsuccessful at the task, suggesting that reasoning about tables can pose a difficult modeling challenge. 1 Introduction Recent progress in text understanding has been driven by sophisticated neural networks based on contextual embeddings—e.g., BERT (Devlin et al., 2019), and its descendants—trained on massive datasets, such as SNLI (Bowman et al., 2015), MultiNLI (Williams et al., 2018), and SQuAD (Rajpurkar et al., 2016). Several such models outperform human baselines on these tasks on the benchmark suites such as GLUE (Wang et al., 2019b). Reasoning about text requires a broad array of skills—making lexical inferences, interpreting the nuances of time and locations, and accounting for world knowledge and common sense. Have we achieved human-parity across such a diverse collection of reasoning skills? In this paper, we study this question by proposing an extension of the natural language inference (NLI) task (Dagan et al., 2005, and others). In Dressage Highest governing body International Federation for Equestrian Sports (FEI) Characteristics Contact No Team members Individual and team at international levels Mixed gender Yes Equipment Horse, horse tack Venue Arena, indoor or outdoor Presence Country or region Worldwide Olympic 1912 Paralympic 1996 H1: Dressage was introduced in the Olympic games in 1912. H2: Both men and women compete in the equestrian sport of Dressage. H3: A dressage athlete can participate in both individual and team events. H4: FEI governs dressage only in the U.S. Figure 1: A semi-structured premise (the table). Two hypotheses (H1, H2) are entailed by it, H3 is neither entailed nor contradictory, and H4 is a contradiction. NLI, which asks whether a premise entails, contradicts or is unrelated to a hypothesis, the premise and the hypothesis are one or more sentences. Understanding the premise requires understanding its linguistic structure and reasoning about it. We seek to separate these two components. Our work stems from the observation that we can make valid inferences about implicit information conveyed by the mere juxtaposition of snippets of text, as shown in the table describing Dressage in Figure 1. We introduce the INFOTABS dataset to study and model inference with such semi-structured data. 
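Concretely, the premise in Figure 1 is a title plus a set of key-value rows, and each hypothesis carries one of three labels. A hypothetical in-memory rendering of this example (the released data format may differ, and the sub-headings of the info-box are flattened here) makes the task input explicit:

```python
# One INFOTABS-style example: a key-value premise table and labeled hypotheses.
example = {
    "table": {
        "title": "Dressage",
        "Highest governing body": "International Federation for Equestrian Sports (FEI)",
        "Contact": "No",
        "Team members": "Individual and team at international levels",
        "Mixed gender": "Yes",
        "Equipment": "Horse, horse tack",
        "Venue": "Arena, indoor or outdoor",
        "Country or region": "Worldwide",
        "Olympic": "1912",
        "Paralympic": "1996",
    },
    "hypotheses": [
        ("Dressage was introduced in the Olympic games in 1912.", "ENTAILMENT"),
        ("Both men and women compete in the equestrian sport of Dressage.", "ENTAILMENT"),
        ("A dressage athlete can participate in both individual and team events.", "NEUTRAL"),
        ("FEI governs dressage only in the U.S.", "CONTRADICTION"),
    ],
}
```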
Premises in our dataset consist of info-boxes that convey information implicitly, and thus require complex reasoning to ascertain the validity of hypotheses. For example, determining that the hypothesis H2 in Figure 1 entails the premise table requires looking at multiple rows of the table, un2310 derstanding the meaning of the row labeled Mixed gender, and also that Dressage is a sport. INFOTABS consists of 23,738 premisehypothesis pairs, where all premises are info-boxes, and the hypotheses are short sentences. As in the NLI task, the objective is to ascertain whether the premise entails, contradicts or is unrelated to the hypothesis. The dataset has 2,540 unique info-boxes drawn from Wikipedia articles across various categories, and all the hypotheses are written by Amazon’s Mechanical Turk workers. Our analysis of the data shows that ascertaining the label typically requires the composing of multiple types of inferences across multiple rows from the tables in the context of world knowledge. Separate verification experiments on subsamples of the data also confirm the high quality of the dataset. We envision our dataset as a challenging testbed for studying how models can reason about semistructured information. To control for the possibility of models memorizing superficial similarities in the data to achieve high performance, in addition to the standard train/dev/test split, our dataset includes two additional test sets that are constructed by systematically changing the surface forms of the hypothesis and the domains of the tables. We report the results of several families of approaches representing word overlap based models, models that exploit the structural aspect of the premise, and also derivatives of state-of-the-art NLI systems. Our experiments reveal that all these approaches underperform across the three test sets. In summary, our contributions are: 1. We propose a new English natural language inference dataset, INFOTABS, to study the problem of reasoning about semi-structured data. 2. To differentiate models’ ability to reason about the premises from their memorization of spurious patterns, we created three challenge test sets with controlled differences that employ similar reasoning as the training set. 3. We show that several existing approaches for NLI underperform on our dataset, suggesting the need for new modeling strategies. The dataset, along with associated scripts, are available at https://infotabs.github.io/. 2 The Case for Reasoning about Semi-structured Data We often encounter textual information that is neither unstructured (i.e., raw text) nor strictly structured (e.g., databases). Such data, where a structured scaffolding is populated with free-form text, can range from the highly verbose (e.g., web pages) to the highly terse (e.g. fact sheets, information tables, technical specifications, material safety sheets). Unlike databases, such semi-structured data can be heterogeneous in nature, and not characterized by pre-defined schemas. Moreover, we may not always have accompanying explanatory text that provides context. Yet, we routinely make inferences about such heterogeneous, incomplete information and fill in gaps in the available information using our expectations about relationships between the elements in the data. Understanding semi-structured information requires a broad spectrum of reasoning capabilities. 
We need to understand information in an ad hoc layout constructed with elements (cells in a table) that are text snippets, form fields or are themselves substructured (e.g., with a list of elements). Querying such data can require various kinds of inferences. At the level of individual cells, these include simple lookup (e.g., knowing that dressage takes place in an arena), to lexical inferences (e.g., understanding that Mixed Gender means both men and women compete), to understanding types of text in the cells (e.g., knowing that the number 1912 is a year). Moreover, we may also need to aggregate information across multiple rows (e.g., knowing that dressage is a non-contact sport that both men and women compete in), or perform complex reasoning that combines temporal information with world knowledge. We argue that a true test of reasoning should evaluate the ability to handle such semi-structured information. To this end, we define a new task modeled along the lines of NLI, but with tabular premises and textual hypotheses, and introduce a new dataset INFOTABS for this task. 3 The Need for Multi-Faceted Evaluation Before describing the new dataset, we will characterize our approach for a successful evaluation of automated reasoning. Recent work has shown that many datasets for NLI contain annotation biases or artifacts (e.g. Poliak et al., 2018). In other words, large models trained on such datasets are prone to learning spurious patterns—they can predict correct labels even with incomplete or noisy inputs. For instance, not and no in a hypothesis are correlated with contra2311 dictions (Niven and Kao, 2019). Indeed, classifiers trained on the hypotheses only (ignoring the premises completely) report high accuracy; they exhibit hypothesis bias, and achieving a high predictive performance does not need models to discover relationships between the premise and the hypothesis. Other artifacts are also possible. For example, annotators who generate text may use systematic patterns that “leak” information about the label to a model. Or, perhaps models can learn correlations that mimic reasoning, but only for one domain. With millions of parameters, modern neural networks are prone to overfitting to such imperceptible patterns in the data. From this perspective, if we seek to measure a model’s capability to understand and reason about inputs, we cannot rely on a single fixed test set to rank models. Instead, we need multiple test sets (of similar sizes) that have controlled differences from each other to understand how models handle changes along those dimensions. While all the test sets address the same task, they may not all be superficially similar to the training data. With this objective, we build three test sets, named α1, α2 and α3. Here, we briefly introduce them; §4 goes into specifics. Our first test set (α1) has a similar distribution as the training data in terms of lexical makeup of the hypotheses and the premise domains. The second, adversarial test set (α2), consists of examples that are also similar in distribution to the training set, but the hypothesis labels are changed by expert annotators changing as few words in the sentence as possible. For instance, if Album X was released in the 21st century is an entailment, the sentence Album X was released before the 21st century is a contradiction, with only one change. Models that merely learn superficial textual artifacts will get confused by the new sentences. 
For α2, we rewrite entailments as contradictions and vice versa, while the neutrals are left unaltered. Our third test set is the cross-domain (α3) set, which uses premises from domains that are not in the training split, but generally, necessitate similar types of reasoning to arrive at the entailment decision. Models that overfit domain-specific artifacts will underperform on α3. Note that, in this work, we describe and introduce three different test sets, but we expect that future work can identify additional dimensions along which models overfit their training data and construct the corresponding test sets. 4 The INFOTABS Dataset In this section, we will see the details of the construction of INFOTABS. We adapted the general workflow of previous crowd sourcing approaches for creating NLI tasks (e.g., Bowman et al., 2015) that use Amazon’s Mechanical Turk.1 Sources of Tables Our dataset is based on 2, 540 unique info-boxes from Wikipedia articles across multiple categories (listed in Appendix D). We did not include tables that have fewer than 3 rows, or have non-English cells (e.g., Latin names of plants) and technical information that may require expertise to understand (e.g., astronomical details about exoplanets). We also removed non-textual information from the table, such as images. Finally, we simplified large tables into smaller ones by splitting them at sub-headings. Our tables are isomorphic to key-value pairs, e.g., in Figure 1, the bold entries are the keys, and the corresponding entries in the same row are their respective values. Sentence generation Annotators were presented with a tabular premise and instructed to write three self-contained grammatical sentences based on the tables: one of which is true given the table, one which is false, and one which may or may not be true. The turker instructions included illustrative examples using a table and also general principles to bear in mind, such as avoiding information that is not widely known, and avoiding using information that is not in the table (including names of people or places). The turkers were encouraged not to restate information in the table, or make trivial changes such as the addition of words like not or changing numerical values. We refer the reader to the project website for a snapshot of the interface used for turking, which includes the details of instructions. We restricted the turkers to be from Englishspeaking countries with at least a Master’s qualification. We priced each HIT (consisting of one table) at 50¢. Following the initial turking phase, we removed grammatically bad sentences and rewarded workers whose sentences involved multiple rows in the table with a 10% bonus. Appendix C gives additional statistics about the turkers. Data partitions We annotated 2, 340 unique tables with nine sentences per table (i.e., three turkers 1Appendix A has more examples of tables with hypotheses. 2312 Data split # tables # pairs Train 1740 16538 Dev 200 1800 α1 test 200 1800 α2 test 200 1800 α3 test 200 1800 Table 1: Number of tables and premise-hypothesis pairs for each data split per table).2 We partitioned these tables into training, development (Dev), α1 and α2 test sets. To prevent an outsize impact of influential turkers in a split, we ensured that the annotator distributions in the Dev and test splits are similar to that of the training split. We created the α2 test set from hypotheses similar to those in α1, but from a separate set of tables, and perturbing them as described in §3. 
On an average, ∼2.2 words were changed per sentence to create α2, with no more than 2 words changing in 72% of the hypotheses. The provenance of α2 ensures that the kinds of reasoning needed for α2 are similar to those in α1 and the development set. For the α3 test set, we annotated 200 additional tables belonging to domains not seen in the training set (e.g., diseases, festivals). As we will see in §5, hypotheses in these categories involve a set of similar types of reasonings as α1, but with different distributions. In total, we collected 23, 738 sentences split almost equally among entailments, contradictions, and neutrals. Table 1 shows the number of tables and premise-hypothesis pairs in each split. In all the splits, the average length of the hypotheses is similar. We refer the reader to Appendix D for additional statistics about the data. Validating Hypothesis Quality We validated the quality of the data using Mechanical Turk. For each premise-hypothesis in the development and the test sets, we asked turkers to predict whether the hypothesis is entailed or contradicted by, or is unrelated to the premise table. We priced this task at 36¢ for nine labels. The inter-annotator agreement statistics are shown in Table 2, with detailed statistics in Appendix F. On all splits, we observed significant 2For tables with ungrammatical sentences, we repeated the HIT. As a result, a few tables in the final data release have more than 9 hypotheses. Dataset Cohen’s Human Majority Kappa Accuracy Agreement Dev 0.78 79.78 93.52 α1 0.80 84.04 97.48 α2 0.80 83.88 96.77 α3 0.74 79.33 95.58 Table 2: Inter-annotator agreement statistics inter-annotator agreement scores with Cohen’s Kappa scores (Artstein and Poesio, 2008) between 0.75 and 0.80. In addition, we see a majority agreement (at least 3 out of 5 annotators agree) of range between 93% and 97%. Furthermore, the human accuracy agreement between the majority and gold label (i.e., the label intended by the writer of the hypothesis), for all splits is in range 80% to 84%, as expected given the difficulty of the task. 5 Reasoning Analysis To study the nature of reasoning that is involved in deciding the relationship between a table and a hypothesis, we adapted the set of reasoning categories from GLUE (Wang et al., 2019b) to table premises. For brevity, here we will describe the categories that are not in GLUE and defined in this work for table premises. Appendix B gives the full list with definitions and examples. Simple look up refers to cases where there is no reasoning and the hypothesis is formed by literally restating what is in the table as a sentence; multi-row reasoning requires multiple rows to make an inference; and subjective/out-of-table inferences involve value judgments about a proposition or reference to information out of the table that is neither well known or common sense. All definitions and their boundaries were verified via several rounds of discussions. Following this, three graduate students independently annotated 160 pairs from the Dev and α3 test sets each, and edge cases were adjudicated to arrive at consensus labels. Figures 2a and 2b summarizes these annotation efforts. We see that we have a multifaceted complex range of reasoning types across both sets. Importantly, we observe only a small number of simple lookups, simple negations for contradictions, and mere syntactic alternations that can be resolved without complex reasoning. 
Many instances call for looking up multiple rows, and involve temporal and numerical reasoning. Indeed, 2313 as Figures 2c and 2d show, a large number of examples need at least two distinct kinds of reasoning; on an average, sentences in the Dev and α3 sets needed 2.32 and 1.79 different kinds of reasoning, respectively. We observe that semi-structured premises forced annotators to call upon world knowledge and common sense (KCS); 48.75% instances in the Dev set require KCS. (In comparison, in the MultiNLI data, KCS is needed in 25.72% of examples.) We conjecture that this is because information about the entities and their types is not explicitly stated in tables, and have to be inferred. To do so, our annotators relied on their knowledge about the world including information about weather, seasons, and widely known social and cultural norms and facts. An example of such common sense is the hypothesis that “X was born in summer” for a person whose date of birth is in May in New York. We expect that the INFOTABS data can serve as a basis for studying common sense reasoning alongside other recent work such as that of Talmor et al. (2019), Neutral hypotheses are more inclined to being subjective/out-of-table because almost anything subjective or not mentioned in the table is a neutral statement. Despite this, we found that in all evaluations in Appendix E (except those involving the adversarial α2 test set), our models found neutrals almost as hard as the other two labels, with only an ≈3% gap between the F-scores of the neutral label and the next best label. The distribution of train, dev, α1 and α2 are similar because the premises are taken from the same categories. However, tables for α3 are from different domains, hence not of the same distribution as the previous splits. This difference is also reflected in Figures 2a and 2b, as we see a different distribution of reasonings for each test set. This is expected; for instance, we cannot expect temporal reasoning from tables in a domain that does not contain temporal quantities. 6 Experiments and Results The goal of our experiments is to study how well different modeling approaches address the INFOTABS data, and also to understand the impact of various artifacts on them. First, we will consider different approaches for representing tables in ways that are amenable to modern neural models. 6.1 Representing Tables A key aspect of the INFOTABS task that does not apply to the standard NLI task concerns how premise tables are represented. As baselines for future work, let us consider several different approaches. 1. Premise as Paragraph (Para): We convert the premise table into paragraphs using fixed template applied to each row. For a table titled t, a row with key k and value v is written as the sentence The k of t are v. For example, for the table in Figure 1, the row with key Equipment gets mapped to the sentence The equipment of Dressage are horse, horse tack. We have a small number of exceptions: e.g., if the key is born or died, we use the following template: t was k on v. The sentences from all the rows in the table are concatenated to form the premise paragraph. While this approach does not result in grammatical sentences, it fits the interface for standard sentence encoders. 2. Premise as Sentence (Sent): Since hypotheses are typically short, they may be derived from a small subset of rows. 
Based on this intuition, we use the word mover distance (Kusner et al., 2015) to select the closest and the three closest sentences to the hypothesis from the paragraph representation (denoted by WMD-1 and WMD-3, respectively). 3. Premise as Structure 1 (TabFact): Following Chen et al. (2020), we represent tables by a sequence of key : value tokens. Rows are separated by a semi-colon and multiple values for the same key are separated by a comma. 4. Premise as Structure 2 (TabAttn): To study an attention based approach, such as that of Parikh et al. (2016), we convert keys and values into a contextually enriched vectors by first converting them into sentences using the Para approach above, and applying a contextual encoder to each sentence. From the token embeddings, we obtain the embeddings corresponding of the keys and values by mean pooling over only those tokens. 6.2 Modeling Table Inferences Based on the various representations of tables described above, we developed a collection of models for the table inference problem, all based on standard approaches for NLI. Due to space constraints, 2314 Reasoning Types Number of Examples 0 20 40 60 80 Coref Ellipsis Entity Type KCS Lexical Reasoning Multirow Named Entity Negation Numerical Quantification Simple Lookup Subjective/OOT Syntactic Alternation Temporal Contradiction Neutral Entailment (a) Number of examples per reasoning type in the Dev set Reasoning Types Number of Examples 0 20 40 60 80 Coref Ellipsis Entity Type KCS Lexical Reasoning Multirow Named Entity Negation Numerical Quantification Simple Lookup Subjective/OOT Syntactic Alternation Temporal Contradiction Neutral Entailment (b) Number of examples per reasoning type in the α3 set 13 21 16 3 0 8 21 13 9 3 14 20 14 5 0 Number of Reasonings per Example Number of Examples 0 5 10 15 20 25 1 2 3 4 5 Entailment Neutral Contradiction (c) Number of reasonings per example in the Dev set 23 19 7 0 26 21 7 3 18 22 12 1 Number of Reasonings per Example Number of Examples 0 10 20 30 1 2 3 4 Entailment Neutral Contradiction (d) Number of reasonings per example in the α3 set Figure 2: Distribution of the various kinds of reasoning in the Dev and α3 sets. The labels OOT and KCS are short for out-of-table and Knowledge & Common Sense, respectively. we give a brief description of the models here and refer the interested reader to the code repository for implementation details. For experiments where premises are represented as sentences or paragraphs, we evaluated a featurebased baseline using unigrams and bigrams of tokens. For this model (referred to as SVM), we used the LibLinear library (Fan et al., 2008). For these representations, we also evaluated a collection of BERT-class of models. Following the standard setup, we encoded the premise-hypothesis pair, and used the classification token to train a classifier, specifically a two-layer feedforward network that predicts the label. The hidden layer had half the size of the token embeddings. We compared RoBERTaL (Large), RoBERTaB (Base) and BERTB (Base) in our experiments. We used the above BERT strategy for the TabFact representations as well. For the TabAttn representations, we implemented the popular decomposable attention model (Parikh et al., 2016) using the premise key-value embeddings and hypothesis token embeddings with 512 dimensional attend and compare layers. We implemented all our models using the PyTorch with the transformers library (Wolf et al., 2019). 
We trained our models using Adagrad with a learning rate of 10−4, chosen by preliminary experiments, and using a dropout value of 0.2. All our results in the following sections are averages of models trained from three different random seeds. 6.3 Results Our experiments answer a series of questions. Does our dataset exhibit hypothesis bias? Before we consider the question of whether we can model premise-hypothesis relationships, let us first see if a model can learn to predict the entailment label without using the premise, thereby exhibiting an undesirable artifact. We consider three classes of models to study hypothesis bias in INFOTABS. Hypothesis Only (hypo-only): The simplest way to check for hypothesis bias is to train a classifier using only the hypotheses. Without a premise, a classifier should fail to correlate the hypothesis and the label. We represent the hypothesis in two ways a) using unigrams and bigrams for an SVM, and b) using a single-sentence BERT-class model. The results of the experiments are given in Table 3. 2315 Model Dev α1 α2 α3 Majority 33.33 33.33 33.33 33.33 SVM 59.00 60.61 45.89 45.89 BERTB 62.69 63.45 49.65 50.45 RoBERTaB 62.37 62.76 50.65 50.8 RoBERTaL 60.51 60.48 48.26 48.89 Table 3: Accuracy of hypothesis-only baselines on the INFOTABS Dev and test sets Dummy or Swapped Premise: Another approach to evaluate hypothesis bias is to provide an unrelated premise and train a full entailment model. We evaluated two cases, where every premise is changed to a (a) dummy statement (to be or not to be), or (b) a randomly swapped table that is represented as paragraph. In both cases, we trained a RoBERTaL classifier as described in §6.2. The results for these experiments are presented in Table 4. Premise Dev α1 α2 α3 dummy 60.02 59.78 48.91 46.37 swapped 62.94 65.11 52.55 50.21 Table 4: Accuracy with dummy/swapped premises Results and Analysis: Looking at the Dev and α1 columns of Tables 3 and 4, we see that these splits do have hypothesis bias. All the BERT-class models discover such artifacts equally well. However, we also observe that the performance on α2 and α3 data splits is worse since the artifacts in the training data do not occur in these splits. We see a performance gap of ∼12% as compared to Dev and α1 splits in all cases. While there is some hypothesis bias in these splits, it is much less pronounced. An important conclusion from these results is that the baseline for all future models trained on these splits should be the best premise-free performance. From the results here, these correspond to the swapped setting. How do trained NLI systems perform on our dataset? Given the high leaderboard accuracies of trained NLI systems, the question of whether these models can infer entailment labels using a linearization of the tables arises. To study this, we trained RoBERTaL models on the SNLI and MultiNLI datasets. The SNLI model achieves an accuracy of 92.56% on SNLI test set. The MultiNLI model achieves an accuracy of 89.0% on matched and 88.99% on the mismatched MultiNLI test set. We evaluate these models on the WMD-1 and the Para representations of premises. 
Premise Dev α1 α2 α3 Trained on SNLI WMD-1 49.44 47.5 49.44 46.44 Para 54.44 53.55 53.66 46.01 Trained on MultiNLI WMD-1 44.44 44.67 46.88 44.01 Para 55.77 53.83 55.33 47.28 Table 5: Accuracy of test splits with structured representation of premises with RoBERTaL trained on SNLI and MultiNLI training data Results and Analysis: In Table 5, all the results point to the fact that pre-trained NLI systems do not perform well when tested on INFOTABS. We observe that full premises slightly improve performance over the WMD-1 ones. This might be due to a) ineffectiveness of WMD to identify the correct premise sentence, and b) multi-row reasoning. Does training on the paragraph/sentence representation of a premise help? The next set of experiments compares BERT-class models and SVM trained using the paragraph (Para) and sentence (WMD-n) representations. The results for these experiments are presented in Table 6. Premise Dev α1 α2 α3 Train with SVM Para 59.11 59.17 46.44 41.28 Train with BERTB Para 63.00 63.54 52.57 48.17 Train with RoBERTaB Para 67.2 66.98 56.87 55.36 Train with RoBERTaL WMD-1 65.44 65.27 57.11 52.55 WMD-3 72.55 70.38 62.55 61.33 Para 75.55 74.88 65.55 64.94 Table 6: Accuracy of paragraph and sentence premise representation reported on SVM, BERTB, RoBERTaB and RoBERTaL Results and Analysis: We find that training with the INFOTABS training set improves model performance significantly over the previous baselines, 2316 except for the simple SVM model which relies on unigrams and bigrams. We see that RoBERTaL outperforms its base variant and BERTB by around ∼9% and ∼14% respectively. Similar to the earlier observation, providing full premise is better than selecting a subset of sentences. Importantly, α2 and α3 performance is worse than α1, not only suggesting the difficulty of these data splits, but also showing that models overfit both lexical patterns (based on α2) or domainspecific patterns (based on α3). Does training on premise encoded as structure help? Rather than linearizing the tables as sentences, we can try to encode the structure of the tables. We consider two representative approaches for this, TabFact and TabAttn, each associated with a different model as described in §6.2. The results for these experiments are listed in Table 7. Premise Dev α1 α2 α3 Train with BERTB TabFact 63.67 64.04 53.59 49.05 Train with RoBERTB TabFact 68.06 66.7 56.87 55.26 Train with RoBERTaL TabAttn 63.63 62.94 49.37 49.04 TabFact 77.61 75.06 69.02 64.61 Table 7: Accuracy on structured premise representation reported on BERTB, RoBERTaB and RoBERTaL Results and Analysis: The idea of using this family of models was to leverage the structural aspects of our data. We find that the TabAttn model, however, does not improve the performance. We assume that this might be due to the bag of words style of representation that the classifier employs. We find, however, that providing premise structure information helps the TabFact model perform better than the RoBERTaL+Para model. As before model performance drops for α2 and α3. How many types of reasoning does a trained system predict correctly? Using a RoBERTaL, which was trained on the paragraph (Para) representation, we analyzed the examples in Dev and α3 data splits that were annotated by experts for their types of reasoning (§5). Figure 3 shows the summary of this analysis. Results and Analysis: Figures 3a and 3b show the histogram of reasoning types among correctly predicted examples. 
Compared to Figures 2a and 2b, we see a decrease in correct predictions across all reasoning types for both the Dev and α3 sets. In particular, on the Dev set, the model performs poorly in the knowledge & common sense, multi-row, coreference, and temporal reasoning categories. Discussion Our results show that: 1) INFOTABS contains a certain amount of artifacts which transformer-based models learn, but all models still have a large gap to human performance; and 2) model accuracies drop on α2 and α3, suggesting that all three results together, and not any single one of them, should be used to characterize a model. All our models are significantly worse than human performance (84.04%, 83.88% and 79.33% for α1, α2 and α3, respectively). With a difference of ∼14% between our best model and human performance, these results indicate that INFOTABS is a challenging dataset. 7 Related Work NLI Datasets Natural language inference/textual entailment is a well-studied text understanding task with several datasets of various sizes. The annual PASCAL RTE challenges (Dagan et al., 2005, inter alia) were associated with several thousands of human-annotated entailment pairs. The SNLI dataset (Bowman et al., 2015) is the first large-scale entailment dataset that uses image captions as premises, while MultiNLI (Williams et al., 2018) uses premises from multiple domains. The QNLI and WNLI datasets provide a new perspective by converting the SQuAD question answering data (Rajpurkar et al., 2016) and Winograd Schema Challenge data (Levesque et al., 2012), respectively, into inference tasks. More recently, SciTail (Khot et al., 2018) and Adversarial NLI (Nie et al., 2019) have focused on building adversarial datasets; the former uses information retrieval to select adversarial premises, while the latter uses iterative annotation cycles to confuse models. Reasoning Recently, challenging new datasets have emerged that emphasize complex reasoning. Bhagavatula et al. (2020) pose the task of determining the most plausible inferences based on observations (abductive reasoning). Across NLP, a lot of work has been published on different kinds of reasoning; to name a few, common sense (Talmor et al., 2019), temporal (Zhou et al., 2019), numerical (Naik et al., 2019; Wallace et al., 2019b) and multi-hop (Khashabi et al., 2018) reasoning have all garnered immense research interest.
Figure 3: Number of correct predictions per reasoning type in the Dev (a) and α3 (b) splits, broken down by Entailment, Neutral, and Contradiction.
Tables and Semi-structured data Tasks based on semi-structured data in the form of tables, graphs and databases (with entries as text) contain complex reasoning (Dhingra et al., 2019; Chen et al., 2020).
Previous work has touched upon semantic parsing and question answering (e.g., Pasupat and Liang, 2015; Khashabi et al., 2016, and references therein), which typically work with tables with many entries that resemble database records. Our work is most closely related to TabFact (Chen et al., 2020), which considers databasestyle tables as premises with human-annotated hypotheses to form an inference task. While there are similarities in the task formulation scheme, our work presents an orthogonal perspective: (i) The Wikipedia tables premises of TabFact are homogeneous, i.e., each column in a table has structural redundancy and all entries have the same type. One can look at multiple entries of a column to infer extra information, e.g., all entries of a column are about locations. On the contrary, the premises in our dataset are heterogeneous. (ii) TabFact only considers entailment and contradiction; we argue that inference is non-binary with a third “undetermined” class (neutrals). (iii) Compared to our multi-faceted reasonings, the reasonings of the hypotheses in TabFact are limited and mostly numerical or comparatives. (iv) The α2 and α3 sets help us check for annotation and domain-specific artifacts. Artifacts Recently, pre-trained transformerbased models (Devlin et al., 2019; Radford et al., 2019; Liu et al., 2019, and others) have seemingly outperformed human performance on several NLI tasks (Wang et al., 2019b,a). However, it has been shown by Poliak et al. (2018); Niven and Kao (2019); Gururangan et al. (2018); Glockner et al. (2018); Naik et al. (2018); Wallace et al. (2019a) that these models exploit spurious patterns (artifacts) in the data to obtain good performance. It is imperative to produce datasets that allow for controlled study of artifacts. A popular strategy today is to use adversarial annotation (Zellers et al., 2018; Nie et al., 2019) and rewriting of the input (Chen et al., 2020). We argue that we can systematically construct test sets that can help study artifacts along specific dimensions. 8 Conclusion We presented a new high quality natural language inference dataset, INFOTABS, with heterogeneous semi-structured premises and natural language hypotheses. Our analysis showed that our data encompasses several different kinds of inferences. INFOTABS has multiple test sets that are designed to pose difficulties to models that only learn superficial correlations between inputs and the labels, rather than reasoning about the information. Via extensive experiments, we showed that derivatives of several popular classes of models find this new inference task challenging. We expect that the dataset can serve as a testbed for developing new kinds of models and representations that can handle semistructured information as first class citizens. Acknowledgements We thank members of the Utah NLP group for their valuable insights and suggestions at various stages of the project; and reviewers their helpful comments. We acknowledge the support of the support of NSF Grants No. 1822877 and 1801446, and a generous gift from Google. 2318 References Ron Artstein and Massimo Poesio. 2008. Inter-coder Agreement for Computational Linguistics. Computational Linguistics, 34(4):555–596. Chandra Bhagavatula, Ronan Le Bras, Chaitanya Malaviya, Keisuke Sakaguchi, Ari Holtzman, Hannah Rashkin, Doug Downey, Scott Wen-tau Yih, and Yejin Choi. 2020. Abductive Commonsense Reasoning. In International Conference on Learning Representations. Samuel R. 
Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A Large Annotated Corpus for Learning Natural Language Inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Wenhu Chen, Hongmin Wang, Jianshu Chen, Yunkai Zhang, Hong Wang, Shiyang Li, Xiyou Zhou, and William Yang Wang. 2020. TabFact : A Large-scale Dataset for Table-based Fact Verification. In International Conference on Learning Representations. Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005. The PASCAL Recognising Textual Entailment Challenge. In Machine Learning Challenges Workshop, pages 177–190. Springer. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Bhuwan Dhingra, Manaal Faruqui, Ankur Parikh, Ming-Wei Chang, Dipanjan Das, and William Cohen. 2019. Handling Divergent Reference Texts when Evaluating Table-to-Text Generation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, XiangRui Wang, and Chih-Jen Lin. 2008. LIBLINEAR: A Library for Large Linear Classification. Journal of Machine Learning Research. Max Glockner, Vered Shwartz, and Yoav Goldberg. 2018. Breaking NLI Systems with Sentences that Require Simple Lexical Inferences. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A Smith. 2018. Annotation Artifacts in Natural Language Inference Data. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Daniel Khashabi, Snigdha Chaturvedi, Michael Roth, Shyam Upadhyay, and Dan Roth. 2018. Looking Beyond the Surface: A Challenge Set for Reading Comprehension over Multiple Sentences. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Daniel Khashabi, Tushar Khot, Ashish Sabharwal, Peter Clark, Oren Etzioni, and Dan Roth. 2016. Question Answering via Integer Programming over Semi-structured Knowledge. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence. Tushar Khot, Ashish Sabharwal, and Peter Clark. 2018. SciTail: A Textual Entailment Dataset from Science Question Answering. In Association for the Advancement of Artificial Intelligence. Matt Kusner, Yu Sun, Nicholas Kolkin, and Kilian Weinberger. 2015. From Word Embeddings to Document Distances. In International Conference on Machine Learning. Hector Levesque, Ernest Davis, and Leora Morgenstern. 2012. The Winograd Schema Challenge. In Thirteenth International Conference on the Principles of Knowledge Representation and Reasoning. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:1907.11692. Aakanksha Naik, Abhilasha Ravichander, Carolyn Rose, and Eduard Hovy. 2019. Exploring Numeracy in Word Embeddings. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. 
Aakanksha Naik, Abhilasha Ravichander, Norman Sadeh, Carolyn Rose, and Graham Neubig. 2018. Stress Test Evaluation for Natural Language Inference. In Proceedings of the 27th International Conference on Computational Linguistics. Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. 2019. Adversarial NLI: A New Benchmark for Natural Language Understanding. arXiv preprint arXiv:1910.14599. Timothy Niven and Hung-Yu Kao. 2019. Probing Neural Network Comprehension of Natural Language Arguments. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Ankur Parikh, Oscar T¨ackstr¨om, Dipanjan Das, and Jakob Uszkoreit. 2016. A Decomposable Attention Model for Natural Language Inference. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. 2319 Panupong Pasupat and Percy Liang. 2015. Compositional Semantic Parsing on Semi-Structured Tables. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing. Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme. 2018. Hypothesis Only Baselines in Natural Language Inference. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language Models are Unsupervised Multitask Learners. In OpenAI Blog. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ Questions for Machine Comprehension of Text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2019. CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Minneapolis, Minnesota. Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, and Sameer Singh. 2019a. Universal Adversarial Triggers for Attacking and Analyzing NLP. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. Eric Wallace, Yizhong Wang, Sujian Li, Sameer Singh, and Matt Gardner. 2019b. Do NLP Models Know Numbers? Probing Numeracy in Embeddings. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2019a. SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems. In Advances in Neural Information Processing Systems. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2019b. GLUE: A Multi-task Benchmark and Analysis Platform for Natural Language Understanding. In International Conference on Learning Representations. Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R’emi Louf, Morgan Funtowicz, and Jamie Brew. 2019. HuggingFace’s Transformers: State-of-the-art Natural Language Processing. ArXiv. Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. 2018. SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Ben Zhou, Daniel Khashabi, Qiang Ning, and Dan Roth. 2019. “Going on a vacation” takes longer than “Going for a walk”: A Study of Temporal Commonsense Understanding. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. A Examples of Data Figure 4 shows two additional examples of table premises and their corresponding hypotheses available in the development set of INFOTABS. Kamloops Type Elected city council Mayor Ken Christian Governing body Kamloops City Council MP Cathy McLeod MLAs Peter Milobar, Todd Stone H1: Kamloops has a democracy structure. H2: If Ken Christian resigns as Mayor of Kamloops then Cathy McLeod will most likely replace him. H3: Kamloops is ruled by a president. Jefferson Starship Origin San Francisco California Genres Rock, hard rock, psychedelic rock, progressive rock, soft rock Years active 1970 - 1984, 1992 - present Labels RCA Grunt Epic Associated acts Jefferson Airplane Starship, KBC Band, Hot Tuna Website www.jeffersonstarship.net H1: Jefferson Starship was started on the West Coast of the United States. H2: Jefferson Starship won many awards for its music. H3: Jefferson Starship has performed continuously since the 1970s. Figure 4: Two semi-structured premises (the tables), and three hypotheses (H1: entailment, H2: Neutral, and H3: contradiction) that correspond to each table. 2320 B Reasoning for INFOTABS Our inventory of reasoning types is based on GLUE diagnostics (Wang et al., 2019b), but is specialized to the problem of reasoning about tables. Consequently, some categories from GLUE diagnostics may not be represented here, or may be merged into one category. We assume that the table is correct and complete. The former is always true for textual entailment, where we assume that the premise is correct. The latter need not be generally true. However, in our analysis, we assume that the table lists all the relevant information for a field. For example, in a table for a music group as in Figure 4, if there is a row called Labels, we will assume that the labels listed in that row are the only labels associated with the group. Note that a single premise-hypothesis pair may be associated with multiple types of reasoning. If the same reasoning type is employed multiple times in the same pair, we only mark it once. Simple lookup This is the simple case where there is no reasoning, and the hypothesis is formed by literally restating information in the table. For example, using the table in Figure 5, Femme aux Bras Crois´es is privately held. is a simple lookup. Multi-row reasoning Multiple rows in the table are needed to make an inference. This has the strong requirement that without multiple rows, there is no way to arrive at the conclusion. Exclude instances where multiple rows are used only to identify the type of the entity, which is then used to make an inference. 
The test for multi-row reasoning is: If a row is removed from the table, then the label for the hypothesis may change. Entity type Involves ascertaining the type of an entity in question (perhaps using multiple rows from the table), and then using this information to make an inference about the entity. This is separate from multi-row reasoning even if discovering the entity type might require reading multiple rows in the table. The difference is a practical one: we want to identify how many inferences in the data require multiple rows (both keys and values) separately from the ones that just use information about the entity type. We need to be able to identify an entity and its type separately to decide on this category. In addition, while multi-row reasoning, by definition, needs multiple rows, entity Femme aux Bras Crois´es Artist Pablo Picasso Year 1901-02 Medium Oil on canvas Dimensions 81 cm 58 cm (32 in 23 in) Location Privately held Figure 5: An example premise type may be determined by looking at one row. For instance, looking at Figure 5, one can infer that the entity type is a painting by only looking at the row with key value Medium. Lastly, ascertaining the entity type may require knowledge, but if so, then we will not explicitly mark the instance as Knowledge & Common Sense. For example, knowing that SNL is a TV show will be entity type and not Knowledge & Common Sense. Lexical reasoning Any inference that can be made using words, independent of the context of the words falls. For example, knowing that dogs are animals, and alive contradicts dead would fall into the category of lexical reasoning. This type of reasoning includes substituting words with their synonyms, hypernyms, hyponyms and antonyms. It also includes cases where a semantically equivalent or contradicting word (perhaps belonging to a different root word) is used in the hypothesis., e.g., replacing understand with miscomprehend. Lexical reasoning also includes reasoning about monotonicity of phrases. Negation Any explicit negation, including morphological negation (e.g., the word affected being mapped to unaffected). Negation changes the morphology without changing the root word, e.g., we have to add an explicit not. This category includes double negations, which we believe is rare in our data. For example, the introduction of the phrase not impossible would count as a double negation. If the word understand in the premise is replaced with not comprehend, we are changing the root word (understand to comprehend) and introducing a negation. So this change will be marked as both Lexical reasoning and Negation. Knowledge & Common Sense This category is related to the World Knowledge and Common Sense categories from GLUE. To quote the description from GLUE: “...the entailment rests not only on correct disambiguation of the sentences, but also application of extra knowledge, whether it is 2321 concrete knowledge about world affairs or more common-sense knowledge about word meanings or social or physical dynamics.” While GLUE differentiates between world knowledge and common sense, we found that this distinction is not always clear when reasoning about tables. So we do not make the distinction. Named Entities This category is identical to the Named Entities category from GLUE. It includes an understanding of the compositional aspect of names (for example, knowing that the University of Hogwarts is the same as Hogwarts). 
Acronyms and their expansions fall into this category (e.g., the equivalence of New York Stock Exchange as NYSE). Numerical reasoning Any form of reasoning that involves understanding numbers, counting, ranking, intervals and units falls under this group. This category also includes numerical comparisons and the use of mathematical operators to arrive at the hypothesis. Temporal reasoning Any inferences that involves reasoning about time fall into this category. There may be an overlap between other categories and this one. Any numerical reasoning about temporal quantities and the use of knowledge about time should be included here. Examples of temporal reasoning: • 9 AM is in the morning. (Since this is knowledge about time, we will only tag this as Temporal.) • 1950 is the 20th century. • 1950 to 1962 is twelve years. • Steven Spielberg was born in the winter of 1946. (If the table has the date—18th December, 1946—and the location of birth—Ohio, this sentence will have both knowledge & Common Sense and temporal reasoning. This is because one should be able to tell that the birth location is in the northern hemisphere (knowledge) and December is part of the Winter in the northern hemisphere (temporal reasoning)). Coreference This category includes cases where expressions refer to the same entity. However, we do not include the standard gamut of coreference phenomena in this category because the premise is not textual. We specifically include the following phenomena in this category: Pronoun coreference, where the pronoun in a hypothesis refers to a noun phrase either in the hypothesis or the table. E.g., Chris Jericho lives in a different state than he was born in. A noun phrase (not a named entity) in the hypothesis refers to a name of an entity in the table. For example, the table may say that Bob has three children, including John and the hypothesis says that Bob has a son. Here the phrase a son refers to the name John. If there is a pronoun involved, we should not treat it as entity type or knowledge even though knowledge may be needed to know that, say, Theresa May is a woman and so we should use the pronoun she. To avoid annotator confusion, when two names refer to each other, we label it only as the Named Entities category. For example, if the table talks about William Henry Gates III and the hypothesis describes Bill Gates, even though the two phrases do refer to each other, we will label this as Named Entities. Quantification Any reasoning that involves introducing a quantifier such as every, most, many, some, none, at least, at most, etc. in the hypothesis. This category also includes cases where prefixes such as multi- (e.g., multi-ethnic) are used to summarize multiple elements in the table. To avoid annotator confusion, we decide that the mere use of quantifiers like most and many is quantification. However, if the quantifier is added after comparing two numerical values in the table, the sentence is labeled to have numerical reasoning as well. Subjective/Out of table Subjective inferences refer to any inferences that involve either value judgment about a proposition or a qualitative analysis of a numerical quantity. Out of table inferences involve hypotheses that use extra knowledge that is neither a well known universal fact nor common sense. Such hypotheses may be written as factive or implicative constructions. Below are some examples of this category: • Based on a table about Chennai: Chennai is a very good city. 
• If the table says that John’s height is 6 feet, then the hypothesis that John is a tall person. may be subjective. However, if John’s 2322 height is 8 feet tall, then the statement that John is tall. is no longer subjective, but common sense. • If the table only says that John lived in Madrid and Brussels, and the hypothesis is John lived longer in Madrid than Brussels. This inference involves information that is neither well known nor common sense. • Based on the table of the movie Jaws, the hypothesis It is known that Spielberg directed Jaws falls in this category. The table may contain the information that Spielberg was the director, but this may or may not be well known. The latter information is out of the table. Syntactic Alternations This refers to a catch-all category of syntactic changes to phrases. This includes changing the preposition in a PP, activepassive alternations, dative alternations, etc. We expect that this category is rare because the premise is not text. However, since there are some textual elements in the tables, the hypothesis could paraphrase them. This category is different from reasoning about named entities. If a syntactic alternation is applied to a named entity (e.g., The Baltimore City Police being written as The Police of Baltimore City), we will label it as a Named Entity if, and only if, we consider both phrases as named entities. Otherwise, it is just a syntactic alternation. Below are some examples of this category: • New Orleans police officer being written as police officer of New Orleans. • Shakespeare’s sonnet being written as sonnet of Shakespeare. Ellipsis This category is similar in spirit to the category Ellipsis/Implicits in GLUE: “An argument of a verb or another predicate is elided in the text, with the reader filling in the gap.” Since in our case, the only well-formed text is in the hypothesis, we expect such gaps only in the hypothesis. (Compared to GLUE, where the description makes it clear that the gaps are in the premises and the hypotheses are constructed by filling in the gaps with either correct or incorrect referents.). For example, in a table about Norway that lists the per capita income as $74K, the hypothesis that The per capita income is $74K. elides the fact that this is about citizens of Norway, and not in general. C INFOTABS Worker Analysis Figure 6 shows the number of examples annotated by frequent top-n workers. We can see that the top 40 annotators annotated about 90% of the data. This observation is concordant with other crowdsourced data annotation projects such as SNLI and MultiNLI (Gururangan et al., 2018). Figure 6: Number of annotations by frequent annotators D INFOTABS Dataset Statistics In this section, we provide some essential statistics that will help in a better understanding of the dataset. Table 8 shows a split-wise analysis of premises and annotators. The table shows that there is a huge overlap between the train set and the other splits except α3. This is expected since α3 is from a different domain. Also, we observe that tables in α3 are longer. In the case of annotators, we see that most of our dataset across all splits was annotated by the same set of annotators. Table 9 presents information on the generated hypotheses. The table lists the average number of words in the hypotheses. This is important because a dissimilar mean value of words would induce the possibility of length bias, i.e., the length of the sentences would be a strong indicator for classification. 
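As a small worked example of this check, the sketch below computes the per-label mean and standard deviation of hypothesis lengths, in the spirit of Table 9. The field layout and sample sentences are illustrative assumptions, not the dataset's actual loader.

```python
# Illustrative sketch: checking for hypothesis-length bias by computing the
# mean (and std) hypothesis length per label, as in Table 9.
from collections import defaultdict
from statistics import mean, stdev

def length_stats(examples):
    """examples: iterable of (hypothesis_string, label) pairs."""
    lengths = defaultdict(list)
    for hypothesis, label in examples:
        lengths[label].append(len(hypothesis.split()))
    return {
        label: (round(mean(vals), 2), round(stdev(vals), 2) if len(vals) > 1 else 0.0)
        for label, vals in lengths.items()
    }

sample = [
    ("Kamloops has a democracy structure.", "E"),
    ("Kamloops is ruled by a president.", "C"),
    ("Jefferson Starship won many awards for its music.", "N"),
]
print(length_stats(sample))
```

Comparable per-label means, as reported in Table 9, would suggest that hypothesis length alone is not a strong signal for the label.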
Table 10 shows the overlap between hypotheses and premise tables across various splits. Stop words like a, the, it, of, etc. are removed. We observe that the overlap is almost similar across labels. Table 11 and 12 show the distribution of table categories in each split. We accumulate all the categories occurring for less than 3% for every split into the “Other” category. 2323 Split Train Dev α1 α2 α3 Number of Unique Keys 1558 411 466 332 409 Number of Unique Keys Intersection with Train 334 312 273 94 Average # of keys per table 8.8 8.7 8.8 8.8 13.1 Number of Distinct Annotators 121 35 37 31 23 Annotator Intersection with Train 33 37 30 19 Number of Instances annotated by a Train annotator 1794 1800 1797 1647 Table 8: Statistics of the premises and annotators across all discussed train-test splits Label Train Dev α1 α2 α3 Entail 9.80 9.71 9.90 9.33 10.5 Neutral 9.84 9.89 10.05 9.59 9.84 Contradict 9.37 9.72 9.84 9.40 9.86 Table 9: Mean length of the generated hypothesis sentences across all discussed train-test splits (standard deviation is in range 2.8 to 3.5) Label Train Dev α1 α2 α3 Entail 0.52 0.47 0.45 0.46 0.48 Neutral 0.46 0.44 0.44 0.49 0.46 Contradict 0.44 0.43 0.45 0.44 0.46 Table 10: Mean statistic of the hypothesis sentences word overlapped with premises tables across all discussed train-test splits (standard deviation is in range 0.17 to 0.22) E F1 Score Analysis The F1 scores per label for two model baselines are in Table 13. We observe that neutral is easier than entailment and contradiction for both baseline, which is expected as neutrals are mostly associated with subjective/out-of-table reasonings which makes them syntactically different and easier to predict correctly. Despite this, we found that in all evaluations in (§6) (except for α2 test set), our models found neutrals almost as hard as the other two labels, with only an ∼3% gap between the F-scores of the neutral label and the next best label. For α2 test set neutral are much easier than entailment and contradiction. This is expected as entailment and contradiction in α2 were adversarially flipped; hence, these predictions become remarkably harder compared to neutrals. Furthermore, α3 is the hardest data split, followed by α2 and α1. Category Train Dev α1 α2 Person 23.68 27 28.5 35.5 Musician 14.66 19 18.5 22.5 Movie 10.17 10 9 11.5 Album 9.08 7 3.5 4.5 City 8.05 8.5 8 7 Painting 5.98 4.5 4 3.5 Organization 4.14 2 1 0.5 Food / Drinks 4.08 4 4 3 Country 3.74 6 9 3.5 Animal 3.56 4.5 4 4 Sports 4.6 3.5 2.5 0.0 Book 2.18 0.5 3 2.5 Other 6.07 8.00 5.00 2.00 Table 11: Categories for all data splits (excluding α3) in percentage (%). Others (< 3%) include categories such as University, Event, Aircraft, Product, Game, Architecture, Planet, Awards, Wineyard, Airport, Language, Element, Car Category α3 (%) Diseases 20.4 Festival 17.41 Bus / Train Lines 14.93 Exams 8.46 Element 4.98 Air Crash 3.98 Bridge 3.98 Disasters 3.48 Smartphone 3.48 Other 18.9 Table 12: Categories for α3 datasplit. Others (< 3%) include categories such as Computer, Occupation, Restaurant, Engines, Equilibrium, OS, Cloud, Bus/Train Station, Coffee House, Cars, Bus/Train Provider, Hotel, Math, Flight 2324 Premise as Paragraph Split Entailment Neutral Contradiction Dev 76.19 79.02 72.73 α1 74.69 77.85 69.85 α2 57.06 80.36 62.14 α3 65.27 66.06 61.61 Premise as TabFact Split Entailment Neutral Contradiction Dev 77.69 79.45 74.77 α1 76.43 80.34 73.07 α2 55.34 80.83 64.44 α3 65.92 67.28 63.57 Table 13: F1 Score (%) with various baselines. 
All models are trained with RoBERTaL. F Statistics of INFOTABS Verification Table 14 shows the detailed agreement statistics of verification for the development set and the three test splits. For every premise-hypothesis pair, we asked five annotators to verify the label. The table details the verification agreement among the annotators, and also reports how many of the majority labels match the gold label (i.e., the label intended by the author of the hypothesis). We also report individual annotator label agreement by matching each annotator's label with the gold label and the majority label for an example. Finally, the table reports the Fleiss kappa (across all five annotation labels) and the Cohen kappa (between majority and gold label) for the development set and the three test splits. We see that, on average, about 84.8% of individual labels match the majority label across all verified splits. Also, an average of 75.15% of individual annotations match the gold label across all verified splits. From Table 14, we can calculate the percentage of examples with at least 3, 4, and 5 label agreements across the 5 verifiers for all splits. For all splits, we have very high inter-annotator agreement: >95.85% for at least 3, >74.50% for at least 4, and 43.91% for at least 5 annotators. The fractions of these agreements that also match the gold label are: >81.76% for at least 3, >67.09% for at least 4, and 40.85% for at least 5, for all splits.
Exact agreement between annotators (number of agreeing annotators: gold / total)
Dev: 3: 350/469, 4: 529/601, 5: 550/605, no agreement: 116
α1: 3: 184/292, 4: 459/533, 5: 863/922, no agreement: 45
α2: 3: 245/348, 4: 453/537, 5: 812/857, no agreement: 58
α3: 3: 273/422, 4: 441/524, 5: 706/765, no agreement: 79
Individual agreement with gold / majority label (%)
Dev: Gold 71.12, Majority 81.65
α1: Gold 78.52, Majority 87.24
α2: Gold 77.74, Majority 86.32
α3: Gold 73.22, Majority 84.01
Average: Gold 75.15, Majority 84.8
Kappa values across splits (Fleiss / Cohen)
Dev: 0.4601 / 0.7793
α1: 0.6375 / 0.7930
α2: 0.5962 / 0.8001
α3: 0.5421 / 0.7444
Table 14: Exact, individual, and kappa agreement statistics for verification.
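For concreteness, a minimal sketch of how the majority-agreement counts and the Fleiss kappa can be computed from five verification labels per example is given below. The example ids, labels, and gold annotations are hypothetical, and the paper's own analysis scripts may differ.

```python
# A small sketch (not the paper's scripts) of the verification statistics in
# Table 14: majority-label agreement and Fleiss' kappa over five labels/example.
from collections import Counter

LABELS = ["E", "C", "N"]

def majority_agreement(annotations, gold):
    exact = Counter()         # size of the largest agreeing group (3, 4, or 5)
    matches_gold = Counter()  # how often that majority label equals the gold label
    for ex_id, labels in annotations.items():
        label, count = Counter(labels).most_common(1)[0]
        if count < 3:
            exact["no agreement"] += 1
            continue
        exact[count] += 1
        if label == gold[ex_id]:
            matches_gold[count] += 1
    return exact, matches_gold

def fleiss_kappa(annotations):
    items = list(annotations.values())
    n = len(items[0])  # raters per item (5 here)
    counts = [[labels.count(c) for c in LABELS] for labels in items]
    p_j = [sum(col) / (len(items) * n) for col in zip(*counts)]
    p_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts]
    p_bar, p_e = sum(p_i) / len(items), sum(p * p for p in p_j)
    return (p_bar - p_e) / (1 - p_e)

annotations = {"ex1": ["E", "E", "E", "N", "E"], "ex2": ["C", "C", "N", "N", "C"]}
gold = {"ex1": "E", "ex2": "C"}
print(majority_agreement(annotations, gold), fleiss_kappa(annotations))
```

Running this over the full set of verification annotations would reproduce the kinds of numbers reported in Table 14.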
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2325–2338 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2325 Interactive Machine Comprehension with Information Seeking Agents Xingdi Yuan†∗ Jie Fu‡♠∗ Marc-Alexandre Cˆot´e† Yi Tay♦ Christopher Pal‡♠♥ Adam Trischler† †Microsoft Research, Montr´eal ‡Polytechnique Montr´eal ♠Mila ♦Nanyang Technological University ♥Canada CIFAR AI Chair [email protected] [email protected] Abstract Existing machine reading comprehension (MRC) models do not scale effectively to realworld applications like web-level information retrieval and question answering (QA). We argue that this stems from the nature of MRC datasets: most of these are static environments wherein the supporting documents and all necessary information are fully observed. In this paper, we propose a simple method that reframes existing MRC datasets as interactive, partially observable environments. Specifically, we “occlude” the majority of a document’s text and add context-sensitive commands that reveal “glimpses” of the hidden text to a model. We repurpose SQuAD and NewsQA as an initial case study, and then show how the interactive corpora can be used to train a model that seeks relevant information through sequential decision making. We believe that this setting can contribute in scaling models to web-level QA scenarios.1 1 Introduction Many machine reading comprehension (MRC) datasets have been released in recent years (Rajpurkar et al., 2016; Trischler et al., 2016; Nguyen et al., 2016; Reddy et al., 2018; Yang et al., 2018) to benchmark a system’s ability to understand and reason over natural language. Typically, these datasets require an MRC model to read through a document to answer a question about information contained therein. The supporting document is, more often than not, static and fully observable. This raises concerns, since models may find answers simply through shallow pattern matching; e.g., syntactic similarity between the words in questions and documents. As ∗Equal contribution. 1The dataset and implementation of our baseline agents are publicly available at https://github.com/ xingdi-eric-yuan/imrc_public. Question: What was the Harvard endowment total in 2011 ? o1 Harvard has the largest university endowment in the world . WWWWWWWWWWWWWWWWWWWWwwwwwnext a1 o2 At the end of June 2009, it was worth $25.7 billion, about 30% less than at the same time in 2008. Ctrl+F Harvard a2 o3 In December 2008, Harvard announced that its endowment had lost 22% from July to October 2008, necessitating budget cuts. Ctrl+F 2011 a3 o4 As of September 2011 , it had nearly regained the loss suffered during the 2008 recession . Ctrl+F 2011 a4 o5 It was worth $ 32 billion in 2011 , up from $ 28 billion in September 2010 and $ 26 billion in 2009 . stop a5 Prediction: $ 32 billion Table 1: Example of the interactive machine reading comprehension behavior. pointed out by Sugawara et al. (2018), for questions starting with when, models tend to predict the only date/time answer in the supporting document. Such behavior limits the generality and usefulness of MRC models, and suggests that they do not learn a proper ‘understanding’ of the intended task. In this paper, to address this problem, we shift the focus of MRC data away from ‘spoon-feeding’ models with sufficient information in fully observable, static documents. 
Instead, we propose interactive versions of existing MRC tasks, whereby the information needed to answer a question must be gathered sequentially. The key idea behind our proposed interactive MRC (iMRC) is to restrict the document context that a model observes at one time. Concretely, we split a supporting document into its component sentences and withhold these sentences from the model. Given a question, the model must issue commands to observe sentences in the withheld set; we equip models with actions such as Ctrl+F 2326 to search for matches to a QUERY within partially observed documents. A model searches iteratively, conditioning each command on the input question and the sentences it has observed previously. Thus, our task requires models to ‘feed themselves’ rather than spoon-feeding them with information. This casts MRC as a sequential decision-making problem amenable to reinforcement learning (RL). Our proposed approach lies outside of traditional QA work, the idea can be applied to almost all existing MRC datasets and models to study interactive information-seeking behavior. As a case study in this paper, we re-purpose two well known, related corpora with different difficulty levels for our iMRC task: SQuAD and NewsQA. Table 1 shows an example of a model performing interactive MRC on these datasets. Naturally, our reframing makes the MRC problem harder; however, we believe the added demands of iMRC more closely match weblevel QA and may lead to deeper comprehension of documents’ content. The main contributions of this work are as follows: 1. We describe a method to make MRC datasets interactive and formulate the new task as an RL problem. 2. We develop a baseline agent that combines a top performing MRC model and two state-ofthe-art RL optimization algorithms and test it on iMRC tasks. 3. We conduct experiments on several variants of iMRC and discuss the significant challenges posed by our setting. 2 Related Works Skip-reading (Yu et al., 2017; Seo et al., 2017; Choi et al., 2017) is an existing setting in which MRC models read partial documents. Concretely, these methods assume that not all tokens in the input sequence are equally useful, and therefore learn to skip irrelevant tokens. Since skipping decisions are discrete, the models are often optimized by the REINFORCE algorithm (Williams, 1992). For example, the structural-jump-LSTM (Hansen et al., 2019) learns to skip and jump over chunks of text, whereas Han et al. (2019) designed a QA task where the model reads streaming data without knowing when the question will be provided. Skipreading approaches are limited in that they only consider jumping forward over a few consecutive tokens. Based on the assumption that a single pass of reading may not provide sufficient information, multi-pass reading methods have also been studied (Sha et al., 2017; Shen et al., 2017). Compared to skip-reading and multi-pass reading, our work enables an agent to jump through a document in a more dynamic manner, in some sense combining aspects of skip-reading and rereading. Specifically, an agent can choose to read forward, backward, or to jump to an arbitrary position depending on the query. This also distinguishes the model we develop in this work from ReasoNet (Shen et al., 2017), a model that decides when to stop forward reading. Recently, there has been various work on and around interactive environments. 
For instance, Nogueira and Cho (2016) proposed WebNav, a tool that automatically transforms a website into a goal-driven web navigation task. They train a neural agent to follow traces using supervised learning. Qi et al. (2019) proposed GoldEn Retriever, an iterative retrieve-and-read system that answers complex open-domain questions, which is also trained with supervised learning. Although an effective training strategy, supervised learning requires either human-labeled or heuristically generated trajectories. However, there often exist multiple trajectories that solve each question, many of which may not be observed in the supervised data since it is difficult to exhaust all valid trajectories. Generalization can be limited when an agent is trained on such data. Bachman et al. (2016) introduced a collection of synthetic tasks to train and test information-seeking capabilities in neural models. Narasimhan et al. (2016) proposed an information extraction system that acquires and incorporates external evidence to improve extraction accuracy in domains with limited data. Geva and Berant (2018) proposed a DQN-based agent that leverages the (tree) structure of documents and navigates across sentences and paragraphs. iMRC is distinct from this body of literature in that it does not depend on extra meta-information to build tree structures, it is partially observable, and its action space is as large as 200,000 (much larger than, e.g., the 5 query templates in (Narasimhan et al., 2016) and the tree search in (Geva and Berant, 2018)). Our work is also inspired directly by QAit (Yuan et al., 2019), a set of interactive question answering tasks developed on text-based games. However, QAit is based on synthetic and templated language, which might not require strong language understanding components. This work extends the principle of interactivity to the natural language setting by leveraging existing MRC tasks already written in natural language. Broadly speaking, our work is also linked to the query reformulation (QR) task in the information retrieval literature (Nogueira and Cho, 2017). Specifically, QR aims to automatically rewrite a query so that it becomes more likely to retrieve relevant documents. Our task shares the spirit of iterative interaction between an agent (the reformulator in QR) and an environment. However, the rewritten queries in QR tasks keep the semantic meaning of the original queries, whereas in our task, actions and queries across different game steps can change drastically, since our task requires an agent to learn a reasoning path (trajectory) towards answering a question, rather than to search for the same concept repeatedly.
Figure 1: A demonstration of the proposed iMRC pipeline, in which the agent is illustrated as a shaded area. At a game step t, it encodes the question and text observation into hidden representations Mt. An action generator takes Mt as input to generate commands to interact with the environment. If the agent generates stop at this game step, Mt is used to answer the question by a question answerer. Otherwise, the iMRC environment provides a new text observation in response to the generated action.
3 iMRC: Making MRC Interactive The iSQuAD and iNewsQA datasets are based on SQuAD v1.1 (Rajpurkar et al., 2016) and NewsQA (Trischler et al., 2016). Both original datasets share similar properties.
Specifically, each data-point consists of a tuple, {p, q, a}, where p represents a paragraph, q a question, and a is the answer. The answer is a word span defined by head and tail positions in p. NewsQA is more challenging because it has a larger vocabulary, more difficult questions, and longer source documents. (We choose SQuAD v1.1 because, in this preliminary study, we focus on extractive question answering.) Every paragraph p is split into a list of sentences S = {s1, s2, ..., sn}, where n stands for the number of sentences in p. At the start of a question answering episode, an agent observes the question q, but rather than observing the entire paragraph p, it sees only the first sentence s1 while the rest is withheld. The agent must issue commands to reveal the hidden sentences progressively and thereby gather the information needed to answer q. The agent should decide when to stop interacting and output an answer, but the number of interaction steps is limited. (We use 20 as the maximum number of steps, because information revealed by 20 interactions can cover a large portion of the text in either an iSQuAD or iNewsQA paragraph. A reasonable step budget also speeds up training.) Once the agent has exhausted its step budget, it is forced to answer the question. 3.1 Interactive MRC as a POMDP As described in the previous section, we convert MRC tasks into sequential decision-making problems (which we will refer to as games). These can be described naturally within the reinforcement learning (RL) framework. Formally, tasks in iMRC are partially observable Markov decision processes (POMDPs) (Kaelbling et al., 1998). An iMRC data-point is a discrete-time POMDP defined by (S, T, A, Ω, O, R, γ), where γ ∈ [0, 1] is the discount factor and the other elements are described in detail below. Environment States (S): The environment state at game step t is st ∈ S. It contains the environment's underlying conditions (e.g., the semantics and information contained in the document, and which part of the document has been revealed so far), much of which is hidden from the agent; the agent can only estimate the state from its partial observations. When the agent issues an action at, the environment transitions to state st+1 with probability T(st+1|st, at). In this work, transition probabilities are either 0 or 1 (i.e., the environment is deterministic). Actions (A): At each game step t, the agent issues an action at ∈ A. We will elaborate on the action space of iMRC in § 3.2 and § 3.3. Observations (Ω): The text information perceived by the agent at a given game step t is the agent's observation, ot ∈ Ω, which depends on the environment state and the previous action with probability O(ot|st). Again, observation probabilities are either 0 or 1 (i.e., observations are noiseless). Reward Function (R): Based on its actions, the agent receives rewards rt = R(st, at). Its objective is to maximize the expected discounted sum of rewards E[Σt γ^t rt]. 3.2 Easy and Hard Modes As a question answering dataset, we adopt the standard output format of extractive MRC tasks, where a system is required to point to a span within a given paragraph p as its prediction. However, we define two difficulty levels in iMRC, which are based on different action spaces and dynamics during the interactive information gathering phase. Easy Mode: At a step t, an agent can issue one of the following four actions to interact with the (partially observable) paragraph p, where p consists of n sentences.
Assume the agent’s observation ot corresponds to sentence sk, where 1 ≤k ≤n. • previous: jump to ( sn if k = 1, sk−1 otherwise; • next: jump to ( s1 if k = n, sk+1 otherwise; • Ctrl+F QUERY: jump to the sentence that contains the next occurrence of QUERY; • stop: terminate information gathering phase and ready to answer question. Hard Mode: Only the Ctrl+F and stop commands are available (i.e., an agent is forced to generate QUERY to navigate the partially observable paragraph p). 3.3 QUERY Types Given an objective (e.g., a question to answer), humans search by using both extractive and abstractive queries. For instance, when searching information about the actor “Dwayne Johnson”, one may either type his name or “The Rock” in a search engine. We believe abstractive query searching requires a deeper understanding of the question, and some background knowledge (one cannot refer to “Dwayne Johnson” as the “The Rock” if they know nothing about his wrestling career). Inspired by this observation, we study the following three settings, where in each, the QUERY is generated from different sources: Dataset iSQuAD iNewsQA #Training Games 82,441 92,550 Vocabulary Size 109,689 200,000 Avg. #Sentence / Document 5.1 29.5 Avg. Sentence Length 26.1 22.2 Avg. Question Length 11.3 7.6 Table 2: Statistics of iSQuAD and iNewsQA. 1. One token from the question: extractive QUERY generation with a relatively small action space. 2. One token from the union of the question and the current observation: still extractive QUERY generation, although in an intermediate level where the action space is larger. 3. One token from the dataset vocabulary: abstractive QUERY generation where the action space is huge (see Table 2 for statistics of iSQuAD and iNewsQA). 3.4 Evaluation Metric Since iMRC involves both MRC and RL, we adopt evaluation metrics from both settings. First, as a question answering task, we use F1 score to compare predicted answers against ground-truth, as in previous work. When there exist multiple groundtruth answers, we report the max F1 score. Second, mastering multiple games remains quite challenging for RL agents. Therefore, we evaluate an agent’s performance during both its training and testing phases. Specifically, we report training curves and test results based on the best validation F1 scores. 4 Baseline Agent As a baseline agent, we adopt QA-DQN (Yuan et al., 2019), we modify it to enable extractive QUERY generation and question answering. As illustrated in Figure 1, the baseline agent consists of three components: an encoder, an action generator, and a question answerer. More precisely, at a step t during the information gathering phase, the encoder reads observation string ot and question string q to generate the attention aggregated hidden representations Mt. Using Mt, the action generator outputs commands (depending on the mode, as defined in § 3.2) to interact with iMRC. The information-gathering phase terminates whenever the generated command is stop or the agent 2329 has used up its move budget. The question answerer takes the hidden representation at the terminating step to generate head and tail pointers as its answer prediction. 4.1 Model Structure In this section, we only describe the difference between the model our baseline agent uses and the original QA-DQN. We refer readers to (Yuan et al., 2019) for detailed information. In the following subsections, we use “game step t” to denote the tth round of interaction between an agent with the iMRC environment. 
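To make the interaction dynamics above concrete, the following is a minimal sketch of an easy-mode environment. It is an assumption-laden rendering of the rules described in §3.2 (a sentence list, a single visible sentence, and the previous/next/Ctrl+F/stop commands), not the released iMRC code; in particular, the behavior when a QUERY is absent is an assumption.

```python
# A minimal sketch (assumptions, not the released iMRC code) of the easy-mode
# interaction dynamics: the paragraph is a list of sentences, only one of
# which is observed at a time.
class IMRCEasyEnv:
    def __init__(self, sentences, question, max_steps=20):
        self.sentences = sentences  # the withheld paragraph, split into sentences
        self.question = question
        self.max_steps = max_steps
        self.k = 0                  # index of the currently observed sentence
        self.steps = 0
        self.done = False

    def observe(self):
        return self.sentences[self.k]

    def step(self, action, query=None):
        """action in {"previous", "next", "Ctrl+F", "stop"}."""
        self.steps += 1
        if action == "stop" or self.steps >= self.max_steps:
            self.done = True  # the agent must now answer the question
        elif action == "previous":
            self.k = (self.k - 1) % len(self.sentences)
        elif action == "next":
            self.k = (self.k + 1) % len(self.sentences)
        elif action == "Ctrl+F":
            # jump to the next sentence (searching cyclically) that contains
            # the QUERY token; if it is never found, stay where we are
            for offset in range(1, len(self.sentences) + 1):
                idx = (self.k + offset) % len(self.sentences)
                if query and query.lower() in self.sentences[idx].lower():
                    self.k = idx
                    break
        return self.observe(), self.done
```

In hard mode, only the Ctrl+F and stop branches would be exposed to the agent.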
4.1.1 Action Generator Let Mt ∈RL×H denote the output of the encoder, where L is the length of observation string and H is hidden size of the encoder representations. The action generator takes Mt as input and generates rankings for all possible actions. As described in the previous section, a Ctrl+F command is composed of two tokens (the token “Ctrl+F” and the QUERY token). Therefore, the action generator consists of three multilayer perceptrons (MLPs): Rt = ReLU(MLPshared(mean(Mt))), Qt,action = MLPaction(Rt) · Mmode, Qt,query = MLPquery(Rt) · Mtype. (1) In which, Qt,action and Qt,query are Q-values of action token and QUERY token (when action token is “Ctrl+F”), respectively. Mmode is a mask, which masks the previous and next tokens in hard mode; Mtype is another mask which depends on the current QUERY type (e.g., when QUERY is extracted from the question q, all tokens absent from q are masked out). Probability distributions of tokens are further computed by applying softmax on Qt,action and Qt,query, respectively. 4.1.2 Question Answerer Following QANet (Yu et al., 2018), we append two extra stacks of transformer blocks on top of the encoder to compute head and tail positions: hhead = ReLU(MLP0([Mt; Mhead])), htail = ReLU(MLP1([Mt; Mtail])). (2) In which, [·; ·] denotes vector concatenation, Mhead ∈RL×H and Mtail ∈RL×H are the outputs of the two extra transformer stacks. Similarly, probability distributions of head and tail pointers over observation string ot can be computed by: phead = softmax(MLP2(hhead)), ptail = softmax(MLP3(htail)). (3) 4.2 Memory and Reward Shaping 4.2.1 Memory In iMRC tasks, some questions may not be easily answerable by observing a single sentence. To overcome this limitation, we provide an explicit memory mechanism to our baseline agent to serve as an inductive bias. Specifically, we use a queue to store strings that have been observed recently. The queue has a limited number of slots (we use queues of size [1, 3, 5] in this work). This prevents the agent from issuing next commands until the environment is observed fully in memory, in which case our task degenerates to the standard MRC setting. We reset the memory slots episodically. 4.2.2 Reward Shaping Because the question answerer in our agent is a pointing model, its performance relies heavily on whether the agent can find and stop at the sentence that contains the answer. In the same spirit as (Yuan et al., 2019), we also design a heuristic reward to guide agents to learn this behavior. In particular, we assign a reward if the agent halts at game step k and the answer is a sub-string of ok (if larger memory slots are used, we assign this reward if the answer is a sub-string of the memory at game step k). We denote this reward as the sufficient information reward, since, if an agent sees the answer, it should have a good chance of having gathered sufficient information for the question (although this is not guaranteed). Note this sufficient information reward is part of the design of the baseline agent, whereas the question answering score is the only metric used to evaluate an agent’s performance on the iMRC task. 4.3 Training Strategy Since iMRC games are interactive environments and we have formulated the tasks as POMDPs (in § 3.1), it is natural to use RL algorithms to train the information gathering components of our agent. In this work, we study the performance of two widely used RL algorithms, one based on Q-Learning (DQN) and the other on Policy Gradients (A2C). 
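Before turning to these training strategies, the following is a hypothetical PyTorch-style rendering of the action generator in Eqn. 1. The layer shapes, names, and the use of single linear layers as the MLPs are assumptions, and the released code may differ.

```python
# A hypothetical sketch of the action generator in Eqn. 1: a shared MLP over
# the mean-pooled encoder output, an action-token head, a QUERY-token head,
# and masks for the current mode and QUERY type.
import torch
import torch.nn as nn

class ActionGenerator(nn.Module):
    def __init__(self, hidden_size, num_action_tokens, vocab_size):
        super().__init__()
        self.mlp_shared = nn.Linear(hidden_size, hidden_size)
        self.mlp_action = nn.Linear(hidden_size, num_action_tokens)
        self.mlp_query = nn.Linear(hidden_size, vocab_size)

    def forward(self, M_t, mode_mask, type_mask):
        # M_t: (batch, L, H) encoder output; the masks zero out action tokens
        # unavailable in the current mode and QUERY tokens outside the
        # current QUERY type, as described for M_mode and M_type above.
        r_t = torch.relu(self.mlp_shared(M_t.mean(dim=1)))
        q_action = self.mlp_action(r_t) * mode_mask
        q_query = self.mlp_query(r_t) * type_mask
        return q_action, q_query
```

Under this rendering, switching between easy/hard modes or between QUERY types only changes the masks, which matches how Mmode and Mtype are described above.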
When an agent has reached a sentence 2330 that contains sufficient information to answer the question, the task becomes a standard extractive QA setting, where an agent learns to point to a span from its observation. When this condition is met, it is also natural to adopt standard supervised learning methods to train the question answering component of our agent. In this section, we describe the 3 training strategies mentioned above. We provide implementation details in Appendix B. 4.3.1 Advantage Actor-Critic (A2C) Advantage actor-critic (A2C) was first proposed by Mnih et al. (2016). Compared to policy gradient computation in REINFORCE (Williams, 1992), ∇θJ(θ) = Eπ[ T X t=1 ∇θ log πθ(at|st)Gt], (4) where the policy gradient ∇θJ(θ) is updated by measuring the discounted future reward Gt from real sample trajectories, A2C utilizes the lower variance advantage function A(st, at) = Q(st, at) − V (st) in place of Gt. The advantage A(st, at) of taking action at at state st is defined as the value Q(st, at) of taking at minus the average value V (st) of all possible actions in state st. In the agent, a critic updates the state-value function V (s), whereas an actor updates the policy parameter θ for πθ(a|s), in the direction suggested by the critic. Following common practice, we share parameters between actor and critic networks. Specifically, all parameters other than MLPaction and MLPquery (both defined in Eqn. 1) are shared between actor and critic. 4.3.2 Deep Q-Networks (DQN) In Q-Learning (Watkins and Dayan, 1992; Mnih et al., 2015), given an interactive environment, an agent takes an action at in state st by consulting a state-action value estimator Q(s, a); this value estimator estimates the action’s expected long-term reward. Q-Learning helps the agent to learn an optimal value estimator. An agent starts from performing randomly and gradually updates its value estimator by interacting with the environment and propagating reward information. In our case, the estimated Q-value at game step t is simply the sum of Q-values of the action token and QUERY token as introduced in Eqn. 1: Qt = Qt,action + Qt,query. (5) In this work, we adopt the Rainbow algorithm (Hessel et al., 2017), which is a deep Q-network boosted by several extensions such as a prioritized replay buffer (Schaul et al., 2016). Rainbow exhibits state-of-the-art performance on several RL benchmark tasks (e.g., Atari games). 4.3.3 Negative Log-likelihood (NLL) During information gathering phase, we use another replay buffer to store question answering transitions (observation string when interaction stops, question string, ground-truth answer) whenever the terminal observation string contains the groundtruth answer. We randomly sample mini-batches of such transitions to train the question answerer to minimize the negative log-likelihood loss. 5 Experimental Results In this study, we focus on four main aspects: 1. difficulty levels (easy | hard mode); 2. strategies for generating QUERY (from question | question and observation | vocabulary); 3. sizes of the memory queue (1 | 3 | 5); 4. RL algorithms for the information gathering phase (A2C | DQN) Regarding the four aspects, we report the baseline agent’s training performance followed by its generalization performance on test data. We use DQN and A2C to refer to our baseline agent trained with DQN and A2C, respectively. We set the maximum number of episodes (data points) to be 1 million, this is approximately 10 epochs in supervised learning tasks given the size of datasets. 
The agent may further improve after 1 million episodes; however, we believe some meaningful and interesting trends can already be observed from the results. In addition, we hope to keep the wall-clock time of the task reasonable in order to encourage the community to work in this direction (a basic experimental setting, e.g., QUERY from the question with single-slot memory, takes about a day on a single NVIDIA P100 GPU).

5.1 Mastering Training Games

It remains difficult for RL agents to master multiple games at the same time. In our case, each document-question pair can be considered a unique "game," and there are hundreds of thousands of them. Therefore, as is common practice in the RL literature, we study an agent's training curves.

Figure 2: Training F1 scores in easy mode with different QUERY types and memory sizes. Solid line: DQN, dashed line: A2C; number of memory slots: 1, 3, 5.

Figure 3: Training F1 scores in hard mode with different QUERY types and memory sizes. Solid line: DQN, dashed line: A2C; number of memory slots: 1, 3, 5.

Figure 2 and Figure 3 show the agent's training performance (in terms of F1 score) in easy and hard mode, respectively. Due to space limitations, we select several representative settings to discuss in this section; we provide the agent's training and validation curves for all experiments, along with its sufficient information rewards (as defined in § 4.2.2), in Appendix A.

Our agent consistently performs better in easy mode, across both datasets and all training strategies. This may be due to the fact that the previous and next commands provide the agent with an inefficient but guaranteed way to stumble on the sought-after sentence in any game. The Ctrl+F command matches human behavior more closely, but it is arguably more challenging (and more interesting) for an RL agent to learn. RL agents may require extra effort and time to reach a desired state, since they rely heavily on random exploration, and the Ctrl+F command leads to a much larger action space to explore than commands such as next.

Related to action space size, we observe that the agent performs best when pointing to QUERY tokens from the question, and worst when generating QUERY tokens from the entire vocabulary. This is particularly clear in hard mode, where agents are forced to use the Ctrl+F command. As shown in Table 2, both datasets have a vocabulary size of more than 100k, whereas the average question length is around 10 tokens; the action space for generating QUERY from the entire vocabulary is therefore much larger. This again suggests that moving toward more realistic settings with huge action spaces will require methods with better sample efficiency. Experiments show that a larger memory queue almost always helps.
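As a concrete illustration of the queue-based memory from § 4.2.1 and the sufficient information reward from § 4.2.2, here is a minimal Python sketch; the class and method names are ours, and the reward check is reduced to a plain substring test.

```python
from collections import deque

class ObservationMemory:
    """Fixed-size queue of recently observed strings (sizes 1, 3, 5 in the paper)."""

    def __init__(self, num_slots=3):
        self.slots = deque(maxlen=num_slots)   # the oldest observation is evicted first

    def add(self, observation: str) -> None:
        self.slots.append(observation)

    def reset(self) -> None:                   # memory is reset episodically
        self.slots.clear()

    def contains_answer(self, answer: str) -> bool:
        # Sufficient information reward: fires if the answer is a sub-string
        # of what is currently held in memory.
        return answer.lower() in " ".join(self.slots).lower()

memory = ObservationMemory(num_slots=3)
memory.add("the nile is a major river in northeastern africa .")
print(memory.contains_answer("northeastern africa"))  # True -> reward would be assigned
```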
Intuitively, with a memory mechanism (either explicit, as in this work, or implicit, as with a recurrent network aggregating representations over game steps), an agent renders the environment closer to fully observed by exploring and storing observations. Presumably, a larger memory could further improve an agent's performance; however, considering that the average number of sentences in each iSQuAD game is 5, a memory with more than 5 slots would defeat the purpose of our study of partially observable text environments.

We observe that DQN generally performs better on iSQuAD, whereas A2C sometimes works better on the harder iNewsQA task. However, we observe a large gap between them in generalization performance, which we discuss in a later subsection.

Not surprisingly, our agent performs better overall on iSQuAD than on iNewsQA. As shown in Table 2, the average number of sentences per document in iNewsQA is about 6 times that in iSQuAD. This is analogous to partially observable games with larger maps in the RL literature. We believe a better exploration (in our case, jumping) strategy that can decide where to explore next, conditioned on what has already been seen, may help agents to master such harder games.

5.2 Generalizing to Test Set

To study an agent's ability to generalize, we select the best-performing checkpoint for each experimental setting on the validation set and report its test performance, as shown in Table 3. In addition, to support our claim that the more challenging part of iMRC tasks is information gathering rather than answering questions given sufficient information, we report the agents' F1 scores when they have reached the piece of text that contains the answer, which we denote F1info.

Easy Mode
                          iSQuAD                                        iNewsQA
QUERY            Agent    Mem=1          Mem=3          Mem=5          Mem=1          Mem=3          Mem=5
Question         A2C      0.245 (0.493)  0.357 (0.480)  0.386 (0.478)  0.210 (0.554)  0.316 (0.532)  0.333 (0.490)
                 DQN      0.575 (0.770)  0.637 (0.738)  0.666 (0.716)  0.330 (0.708)  0.326 (0.619)  0.360 (0.620)
Question+Memory  A2C      0.221 (0.479)  0.484 (0.590)  0.409 (0.492)  0.199 (0.595)  0.233 (0.448)  0.253 (0.459)
                 DQN      0.579 (0.784)  0.651 (0.734)  0.656 (0.706)  0.336 (0.715)  0.334 (0.626)  0.347 (0.596)
Vocabulary       A2C      0.223 (0.486)  0.314 (0.448)  0.309 (0.391)  0.192 (0.551)  0.224 (0.440)  0.224 (0.403)
                 DQN      0.583 (0.774)  0.624 (0.738)  0.661 (0.731)  0.326 (0.715)  0.323 (0.590)  0.316 (0.593)

Hard Mode
Question         A2C      0.147 (0.404)  0.162 (0.446)  0.158 (0.435)  0.166 (0.529)  0.160 (0.508)  0.164 (0.520)
                 DQN      0.524 (0.766)  0.524 (0.740)  0.551 (0.739)  0.352 (0.716)  0.367 (0.632)  0.353 (0.613)
Question+Memory  A2C      0.160 (0.441)  0.150 (0.413)  0.156 (0.429)  0.163 (0.520)  0.160 (0.508)  0.164 (0.520)
                 DQN      0.357 (0.749)  0.362 (0.729)  0.364 (0.733)  0.260 (0.692)  0.264 (0.645)  0.269 (0.620)
Vocabulary       A2C      0.161 (0.444)  0.163 (0.448)  0.160 (0.441)  0.160 (0.510)  0.167 (0.532)  0.162 (0.516)
                 DQN      0.264 (0.728)  0.261 (0.719)  0.218 (0.713)  0.326 (0.694)  0.214 (0.680)  0.214 (0.680)

Table 3: Test F1 scores, with F1info scores in parentheses (i.e., an agent's F1 score when sufficient information is in its observation at the time it terminates the information gathering phase).

From Table 3 (and the validation curves provided in Appendix A), we observe trends that match the training curves. Due to the difference in action space size, the baseline agents consistently perform better in easy mode. For the same reason, the agent learns more efficiently when the QUERY token is extracted from the question.
On iNewsQA, the best F1 score in hard mode is comparable to, and even slightly higher than, the best score in easy mode, which suggests that our baseline agent learns some relatively general trajectories for solving training games that carry over to unseen games. It is also clear that during evaluation, a memory that stores experienced observations helps, since the agent almost always performs better with a memory size of 3 or 5 (when the memory size is 1, each new observation overwrites the memory).

While performing comparably with DQN during training, the agent trained with A2C generalizes noticeably worse. We suspect this is caused by the fundamental difference in the way DQN and A2C explore during training. Specifically, DQN relies on either ε-greedy exploration or Noisy Nets (Fortunato et al., 2017), both of which explicitly force an agent to experience different actions during training. In A2C, exploration is performed implicitly by sampling from a probability distribution over the action space; although entropy regularization is applied, good exploration is still not guaranteed (if there are peaks in the probability distribution). This again suggests the importance of a good exploration strategy in iMRC tasks, as in all RL tasks.

Finally, we observe that F1info scores are consistently higher than the overall F1 scores, and that they have less variance across settings. This supports our hypothesis that information gathering plays an important role in solving iMRC tasks, whereas question answering given the necessary information is relatively straightforward.

6 Discussion and Future Work

In this work, we propose and explore the direction of converting MRC datasets into interactive, partially observable environments. We believe information-seeking behavior is desirable for neural MRC systems when knowledge sources are partially observable and/or too large to encode in their entirety, and when knowledge is by design easily accessible to humans through interaction. Our idea for reformulating existing MRC datasets as partially observable and interactive environments is straightforward and general. It is complementary to existing MRC datasets and models: almost any MRC dataset can be used to study interactive, information-seeking behavior through similar modifications. We hypothesize that such behavior can, in turn, help in solving real-world MRC problems involving search. As a concrete example, in real-world environments such as the Internet, different pieces of knowledge are interconnected by hyperlinks. We could equip the agent with an action to "click" a hyperlink, which returns another webpage as a new observation, allowing it to navigate through large amounts of web content to answer difficult questions.

iMRC is difficult and cannot yet be solved; however, it matches human information-seeking behavior far more closely than most static and fully observable laboratory MRC benchmarks. It lies at the intersection of NLP and RL, which is arguably under-studied in the existing literature. For our baseline, we adopted off-the-shelf, top-performing MRC and RL methods, and applied a memory mechanism that serves as an inductive bias. Our preliminary experiments suggest that, while these components are necessary, they are not sufficient. We encourage further work on this task to determine which inductive biases, architectural components, or pretraining recipes are necessary or sufficient for MRC based on information seeking.

Our proposed setup presently uses only a single word as QUERY in the Ctrl+F command, in an abstractive manner.
However, a host of other options could be considered in future work. For example, a multi-word QUERY with fuzzy matching is more realistic. It would also be interesting for an agent to generate a vector representation of the QUERY in some latent space and modify it during the dynamic reasoning process. This could further be used to retrieve different contents by comparing with pre-computed document representations (e.g., in an open-domain QA dataset), with such behavior tantamount to learning to do IR. This extends traditional query reformulation for open-domain QA by allowing to drastically change the queries without strictly keeping the semantic meaning of the original queries. Acknowledgments The authors thank Mehdi Fatemi, Peter Potash, Matthew Hausknecht, and Philip Bachman for insightful ideas and discussions. We also thank the anonymous ACL reviewers for their helpful feedback and suggestions. References Philip Bachman, Alessandro Sordoni, and Adam Trischler. 2016. Towards information-seeking agents. arXiv preprint arXiv:1612.02605. Eunsol Choi, Daniel Hewlett, Jakob Uszkoreit, Illia Polosukhin, Alexandre Lacoste, and Jonathan Berant. 2017. Coarse-to-fine question answering for long documents. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 209–220. Meire Fortunato, Mohammad Gheshlaghi Azar, Bilal Piot, Jacob Menick, Ian Osband, Alex Graves, Vlad Mnih, R´emi Munos, Demis Hassabis, Olivier Pietquin, Charles Blundell, and Shane Legg. 2017. Noisy networks for exploration. CoRR, abs/1706.10295. Mor Geva and Jonathan Berant. 2018. Learning to search in long documents using document structure. arXiv preprint arXiv:1806.03529. Moonsu Han, Minki Kang, Hyunwoo Jung, and Sung Ju Hwang. 2019. Episodic memory reader: Learning what to remember for question answering from streaming data. arXiv preprint arXiv:1903.06164. Christian Hansen, Casper Hansen, Stephen Alstrup, Jakob Grue Simonsen, and Christina Lioma. 2019. Neural speed reading with structural-jump-lstm. arXiv preprint arXiv:1904.00761. Matteo Hessel, Joseph Modayil, Hado van Hasselt, Tom Schaul, Georg Ostrovski, Will Dabney, Daniel Horgan, Bilal Piot, Mohammad Gheshlaghi Azar, 2334 and David Silver. 2017. Rainbow: Combining improvements in deep reinforcement learning. CoRR, abs/1710.02298. Leslie Pack Kaelbling, Michael L Littman, and Anthony R Cassandra. 1998. Planning and acting in partially observable stochastic domains. Artificial intelligence, 101(1-2):99–134. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Tomas Mikolov, Edouard Grave, Piotr Bojanowski, Christian Puhrsch, and Armand Joulin. 2018. Advances in pre-training distributed word representations. In Proceedings of the International Conference on Language Resources and Evaluation (LREC 2018). Volodymyr Mnih, Adri`a Puigdom`enech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. 2016. Asynchronous methods for deep reinforcement learning. CoRR, abs/1602.01783. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. 2015. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533. Karthik Narasimhan, Adam Yala, and Regina Barzilay. 2016. Improving information extraction by acquiring external evidence with reinforcement learning. 
CoRR, abs/1603.07954. Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. MS MARCO: A human generated machine reading comprehension dataset. CoRR, abs/1611.09268. Rodrigo Nogueira and Kyunghyun Cho. 2016. Webnav: A new large-scale task for natural language based sequential decision making. CoRR, abs/1602.02261. Rodrigo Nogueira and Kyunghyun Cho. 2017. Taskoriented query reformulation with reinforcement learning. CoRR, abs/1704.04572. Peng Qi, Xiaowen Lin, Leo Mehr, Zijian Wang, and Christopher D. Manning. 2019. Answering complex open-domain questions through iterative query generation. In 2019 Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100, 000+ questions for machine comprehension of text. CoRR, abs/1606.05250. Siva Reddy, Danqi Chen, and Christopher D. Manning. 2018. Coqa: A conversational question answering challenge. CoRR, abs/1808.07042. Tom Schaul, John Quan, Ioannis Antonoglou, and David Silver. 2016. Prioritized experience replay. In International Conference on Learning Representations, Puerto Rico. Minjoon Seo, Sewon Min, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Neural speed reading via skim-rnn. arXiv preprint arXiv:1711.02085. Lei Sha, Feng Qian, and Zhifang Sui. 2017. Will repeated reading benefit natural language understanding? In National CCF Conference on Natural Language Processing and Chinese Computing, pages 366–379. Springer. Yelong Shen, Po-Sen Huang, Jianfeng Gao, and Weizhu Chen. 2017. Reasonet: Learning to stop reading in machine comprehension. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1047–1055. ACM. Saku Sugawara, Kentaro Inui, Satoshi Sekine, and Akiko Aizawa. 2018. What makes reading comprehension questions easier? CoRR, abs/1808.09384. Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2016. Newsqa: A machine comprehension dataset. CoRR, abs/1611.09830. Christopher J. C. H. Watkins and Peter Dayan. 1992. Q-learning. Machine Learning, 8(3):279–292. Ronald J Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229–256. Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W. Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. Hotpotqa: A dataset for diverse, explainable multi-hop question answering. CoRR, abs/1809.09600. Adams Wei Yu, David Dohan, Minh-Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi, and Quoc V. Le. 2018. Qanet: Combining local convolution with global self-attention for reading comprehension. CoRR, abs/1804.09541. Adams Wei Yu, Hongrae Lee, and Quoc V Le. 2017. Learning to skim text. arXiv preprint arXiv:1704.06877. Xingdi Yuan, Marc-Alexandre Cˆot´e, Jie Fu, Zhouhan Lin, Christopher Pal, Yoshua Bengio, and Adam Trischler. 2019. Interactive language learning by question answering. 2335 A Full Results We show our experimental results (training and validation curves) in Figure 4,5,6,7,8,9,10,11. B Implementation Details In all experiments, we use Adam (Kingma and Ba, 2014) as the step rule for optimization, with the learning rate set to 0.00025. We clip gradient norm at 5.0. 
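As a minimal sketch of the optimization settings just described (Adam with a learning rate of 0.00025 and gradient-norm clipping at 5.0), the snippet below shows the corresponding PyTorch calls; the model and loss are placeholders rather than the paper's actual training loop.

```python
import torch
import torch.nn as nn

model = nn.Linear(96, 4)  # placeholder network standing in for the agent
optimizer = torch.optim.Adam(model.parameters(), lr=0.00025)

def training_step(loss: torch.Tensor) -> None:
    optimizer.zero_grad()
    loss.backward()
    # Clip the global gradient norm at 5.0 before the parameter update.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=5.0)
    optimizer.step()

loss = model(torch.randn(8, 96)).pow(2).mean()  # dummy loss for illustration
training_step(loss)
```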
We initialize all word embeddings by the 300-dimensional fastText (Mikolov et al., 2018) word vectors trained on Common Crawl (600B tokens), they are fixed during training. We randomly initialize character embeddings by 200dimensional vectors. In all transformer blocks, block size is 96. Dimensionality of MLPshared in Eqn. 1 is R96×150; dimensionality of MLPaction is R150×4 and R150×2 in easy mode (4 actions are available) and hard mode (only 2 actions are available), respectively; dimensionality of MLPquery is R150×V where V denotes vocabulary size of the dataset, as listed in Table 2. Dimensionalities of MLP0 and MLP1 in Eqn. 2 are both R192×150; dimensionalities of MLP2 and MLP3 in Eqn. 3 are both R150×1. During A2C training, we set the value loss coefficient to be 0.5, we use an entropy regularizer with coefficient of 0.01. We use a discount γ of 0.9 and mini-batch size of 20. During DQN training, we use a mini-batch of size 20 and push all transitions (observation string, question string, generated command, reward) into a prioritized replay buffer of size 500,000. We do not compute losses directly using these transitions. After every 5 game steps, we sample a mini-batch of 64 transitions from the replay buffer, compute loss, and update the network. we use a discount γ of 0.9. For noisy nets, we use a σ0 of 0.5. We update target network per 1000 episodes. For multistep returns, we sample n ∼Uniform[1, 2, 3]. When our agent terminates information gathering phase, we push the question answering transitions (observation string at this time, question string, ground-truth answer) into a question answering replay buffer. After every 5 game steps, we randomly sample a mini-batch of 64 such transitions from the question answering replay buffer and train the model using NLL loss. For more detail please refer to our open-sourced code. 2336 Figure 4: Training performance on iSQuAD, easy mode. Solid line: DQN, dashed line: A2C; number of memory slots: 1, 3, 5. Figure 5: Validation performance on iSQuAD, easy mode. Solid line: DQN, dashed line: A2C; number of memory slots: 1, 3, 5. Figure 6: Training performance on iSQuAD, hard mode. Solid line: DQN, dashed line: A2C; number of memory slots: 1, 3, 5. 2337 Figure 7: Validation performance on iSQuAD, hard mode. Solid line: DQN, dashed line: A2C; number of memory slots: 1, 3, 5. Figure 8: Training performance on iNewsQA, easy mode. Solid line: DQN, dashed line: A2C; number of memory slots: 1, 3, 5. Figure 9: Validation performance on iNewsQA, easy mode. Solid line: DQN, dashed line: A2C; number of memory slots: 1, 3, 5. 2338 Figure 10: Training performance on iNewsQA, hard mode. Solid line: DQN, dashed line: A2C; number of memory slots: 1, 3, 5. Figure 11: Validation performance on iNewsQA, hard mode. Solid line: DQN, dashed line: A2C; number of memory slots: 1, 3, 5.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2339–2352 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2339 Syntactic Data Augmentation Increases Robustness to Inference Heuristics Junghyun Min1 R. Thomas McCoy1 Dipanjan Das2 Emily Pitler2 Tal Linzen1 1Department of Cognitive Science, Johns Hopkins University, Baltimore, MD 2Google Research, New York, NY {jmin10, tom.mccoy, tal.linzen}@jhu.edu {dipanjand, epitler}@google.com Abstract Pretrained neural models such as BERT, when fine-tuned to perform natural language inference (NLI), often show high accuracy on standard datasets, but display a surprising lack of sensitivity to word order on controlled challenge sets. We hypothesize that this issue is not primarily caused by the pretrained model’s limitations, but rather by the paucity of crowdsourced NLI examples that might convey the importance of syntactic structure at the finetuning stage. We explore several methods to augment standard training sets with syntactically informative examples, generated by applying syntactic transformations to sentences from the MNLI corpus. The best-performing augmentation method, subject/object inversion, improved BERT’s accuracy on controlled examples that diagnose sensitivity to word order from 0.28 to 0.73, without affecting performance on the MNLI test set. This improvement generalized beyond the particular construction used for data augmentation, suggesting that augmentation causes BERT to recruit abstract syntactic representations. 1 Introduction In the supervised learning paradigm common in NLP, a large collection of labeled examples of a particular classification task is randomly split into a training set and a test set. The system is trained on this training set, and is then evaluated on the test set. Neural networks—in particular systems pretrained on a word prediction objective, such as ELMo (Peters et al., 2018) or BERT (Devlin et al., 2019)—excel in this paradigm: with large enough pretraining corpora, these models match or even exceed the accuracy of untrained human annotators on many test sets (Raffel et al., 2019). At the same time, there is mounting evidence that high accuracy on a test set drawn from the same distribution as the training set does not indicate that the model has mastered the task. This discrepancy can manifest as a sharp drop in accuracy when the model is applied to a different dataset that illustrates the same task (Talmor and Berant, 2019; Yogatama et al., 2019), or as excessive sensitivity to linguistically irrelevant perturbations of the input (Jia and Liang, 2017; Wallace et al., 2019). One such discrepancy, where strong performance on a standard test set did not correspond to mastery of the task as a human would define it, was documented by McCoy et al. (2019b) for the Natural Language Inference (NLI) task. In this task, the system is given two sentences, and is expected to determine whether one (the premise) entails the other (the hypothesis). Most if not all humans would agree that NLI requires sensitivity to syntactic structure; for example, the following sentences do not entail each other, even though they contain the same words: (1) The lawyer saw the actor. (2) The actor saw the lawyer. McCoy et al. 
constructed the HANS challenge set, which includes examples of a range of such constructions, and used it to show that, when BERT is fine-tuned on the MNLI corpus (Williams et al., 2018), the fine-tuned model achieves high accuracy on the test set drawn from that corpus, yet displays little sensitivity to syntax; the model wrongly concluded, for example, that (1) entails (2). We consider two explanations as to why BERT fine-tuned on MNLI fails on HANS. Under the Representational Inadequacy Hypothesis, BERT fails on HANS because its pretrained representations are missing some necessary syntactic information. Under the Missed Connection Hypothesis, BERT extracts the relevant syntactic information from the input (cf. Goldberg 2019; 2340 Tenney et al. 2019), but it fails to use this information with HANS because there are few MNLI training examples that indicate how syntax should support NLI (McCoy et al., 2019b). It is possible for both hypotheses to be correct: there may be some aspects of syntax that BERT has not learned at all, and other aspects that have been learned, but are not applied to perform inference. The Missed Connection Hypothesis predicts that augmenting the training set with a small number of examples from one syntactic construction would teach BERT that the task requires it to use its syntactic representations. This would not only cause improvements on the construction used for augmentation, but would also lead to generalization to other constructions. In contrast, the Representational Inadequacy Hypothesis predicts that to perform better on HANS, BERT must be taught how each syntactic construction affects NLI from scratch. This predicts that larger augmentation sets will be required for adequate performance and that there will be little generalization across constructions. This paper aims to test these hypotheses. We constructed augmentation sets by applying syntactic transformations to a small number of examples from MNLI. Accuracy on syntactically challenging cases improved dramatically as a result of augmenting MNLI with only about 400 examples in which the subject and the object were swapped (about 0.1% of the size of the MNLI training set). Crucially, even though only a single transformation was used in augmentation, accuracy increased on a range of constructions. For example, BERT’s accuracy on examples involving relative clauses (e.g, The actors called the banker who the tourists saw ↛The banker called the tourists) was 0.33 without augmentation, and 0.83 with it. This suggests that our method does not overfit to one construction, but taps into BERT’s existing syntactic representations, providing support for the Missed Connection Hypothesis. At the same time, we also observe limits to generalization, supporting the Representational Inadequacy Hypothesis in those cases. 2 Background HANS is a template-generated challenge set designed to test whether NLI models have adopted three syntactic heuristics. First, the lexical overlap heuristic is the assumption that any time all of the words in the hypothesis are also in the premise, the label should be entailment. In the MNLI training set, this heuristic often makes correct predictions, and almost never makes incorrect predictions. This may be due to the process by which MNLI was generated: crowdworkers were given a premise and were asked to generate a sentence that contradicts or entails the premise. To minimize effort, workers may have overused lexical overlap as a shortcut to generating entailed hypotheses. 
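As a concrete illustration of this shortcut, the toy function below predicts entailment whenever every hypothesis word also appears in the premise; it is purely explanatory and is not a model considered in the paper.

```python
def lexical_overlap_heuristic(premise: str, hypothesis: str) -> str:
    """Predict 'entailment' iff all hypothesis words occur in the premise."""
    premise_words = set(premise.lower().split())
    hypothesis_words = set(hypothesis.lower().split())
    return "entailment" if hypothesis_words <= premise_words else "non-entailment"

# The heuristic wrongly labels a subject/object swap as entailment:
print(lexical_overlap_heuristic("the lawyer saw the actor",
                                "the actor saw the lawyer"))  # -> "entailment"
```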
Of course, the lexical overlap heuristic is not a generally valid inference strategy, and it fails on many HANS examples; e.g., as discussed above, the lawyer saw the actor does not entail the actor saw the lawyer. HANS also includes cases that are diagnostic of the subsequence heuristic (assume that a premise entails any hypothesis which is a contiguous subsequence of it) and the constituent heuristic (assume that a premise entails all of its constituents). While we focus on counteracting the lexical overlap heuristic, we will also test for generalization to the other heuristics, which can be seen as particularly challenging cases of lexical overlap. Examples of all constructions used to diagnose the three heuristics are given in Tables A.5, A.6 and A.7. Data augmentation is often employed to increase robustness in vision (Perez and Wang, 2017) and language (Belinkov and Bisk, 2018; Wei and Zou, 2019), including in NLI (Minervini and Riedel, 2018; Yanaka et al., 2019). In many cases, augmentation with one kind of example improves accuracy on that particular case, but does not generalize to other cases, suggesting that models overfit to the augmentation set (Jia and Liang, 2017; Ribeiro et al., 2018; Iyyer et al., 2018; Liu et al., 2019). In particular, McCoy et al. (2019b) found that augmentation with HANS examples generalized to a different word overlap challenge set (Dasgupta et al., 2018), but only for examples similar in length to HANS examples. We mitigate such overfitting to superficial properties by generating a diverse set of corpus-based examples, which differ from the challenge set both lexically and syntactically. Finally, Kim et al. (2018) used a similar augmentation approach to ours but did not study generalization to types of examples not in the augmentation set. 3 Generating Augmentation Data We generate augmentation examples from MNLI using two syntactic transformations: INVERSION, which swaps the subject and object of the source sentence, and PASSIVIZATION. For each of these transformations, we had two families of augmenta2341 Original MNLI example: There are 16 El Grecos in this small collection. → This small collection contains 16 El Grecos. Inversion (original premise): There are 16 El Grecos in this small collection. ↛ 16 El Grecos contain this small collection. Inversion (transformed hypothesis): This small collection contains 16 El Grecos. ↛ 16 El Grecos contain this small collection. Passivization (transformed hypothesis; non-entailment): This small collection contains 16 El Grecos. ↛ This small collection is contained by 16 El Grecos. Random shuffling with a random label: 16 collection small El contains Grecos This. ↛/→ collection This Grecos El small 16 contains. Table 1: A sample of syntactic augmentation strategies, with gold labels (→: entailment; ↛: non-entailment). For the full list, see Table A.1 in the Appendix. tion sets. The ORIGINAL PREMISE strategy keeps the original MNLI premise and transforms the hypothesis; and TRANSFORMED HYPOTHESIS uses the original MNLI hypothesis as the new premise, and the transformed hypothesis as the new hypothesis (see Table 1 for examples, and §A.2 for details). We experimented with three augmentation set sizes: small (101 examples), medium (405) and large (1215). 
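As a deliberately simplified illustration of the TRANSFORMED HYPOTHESIS variants of these two transformations, the sketch below builds labeled premise/hypothesis pairs from a toy transitive clause. The helper is hypothetical and sidesteps parsing entirely; the actual generation pipeline operates on MNLI constituency parses, as described in Appendix A.2.

```python
def transformed_hypothesis_examples(subject, verb, passive_verb, obj):
    """Toy INVERSION and PASSIVIZATION pairs (transformed-hypothesis family)."""
    original = f"{subject} {verb} {obj}"
    inverted = f"{obj} {verb} {subject}"                    # INV: swap subject and object
    passive = f"{obj} {passive_verb} by {subject}"          # PASS: meaning-preserving passive
    passive_of_inverted = f"{subject} {passive_verb} by {obj}"
    return [
        (original, inverted, "non-entailment"),             # inversion: (h, INV(h))
        (original, passive, "entailment"),                  # passivization: (h, PASS(h))
        (original, passive_of_inverted, "non-entailment"),  # passivization: (h, PASS(INV(h)))
    ]

for premise, hypothesis, label in transformed_hypothesis_examples(
        "the detective", "followed", "was followed", "the suspect"):
    print(f"{premise}  ->  {hypothesis}  [{label}]")
```

The example clause mirrors the detective/suspect sentence used in Appendix A.2 to note that inversion changes sentence meaning while passivization preserves it.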
All augmentation sets were much smaller than the MNLI training set (297k).1 We did not attempt to ensure the naturalness of the generated examples; e.g., in the INVERSION transformation, The carriage made a lot of noise was transformed into A lot of noise made the carriage. In addition, the labels of the augmentation dataset were somewhat noisy; e.g., we assumed that INVERSION changed the correct label from entailment to neutral, but this is not necessarily the case (if The buyer met the seller, it is likely that The seller met the buyer). As we show below, this noise did not hurt accuracy on MNLI. Finally, we included a random shuffling condition, in which an MNLI premise and its hypothesis were both randomly shuffled, with a random label. We used this condition to test whether a syntactically uninformed method could teach the model that, when word order is ignored, no reliable inferences can be made. 1The augmentation sets and the code used to generate them are available at https://github.com/aatlantise/ syntactic-augmentation-nli. 4 Experimental setup We added each augmentation set separately to the MNLI training set, and fine-tuned BERT on each resulting training set. Further fine-tuning details are in Appendix A.1. We repeated this process for five random seeds for each combination of augmentation strategy and augmentation set size, except for the most successful strategy (INVERSION + TRANSFORMED HYPOTHESIS), for which we had 15 runs for each augmentation size. Following McCoy et al. (2019b), when evaluating on HANS, we merged the neutral and contradiction labels produced by the model into a single non-entailment label. For both ORIGINAL PREMISE and TRANSFORMED HYPOTHESIS, we experimented with using each of the transformations separately, and with a combined dataset including both inversion and passivization. We also ran separate experiments with only the passivization examples with an entailment label, and with only the passivization examples with a non-entailment label. As a baseline, we used 100 runs of BERT fine-tuned on the unaugmented MNLI (McCoy et al., 2019a). We report the models’ accuracy on HANS, as well as on the MNLI development set (MNLI test set labels are not publicly available). We did not tune any parameters on this development set. All of the comparisons we discuss below are significant at the p < 0.01 level (based on two-sided t-tests). 5 Results Accuracy on MNLI was very similar across augmentation strategies and matched that of the unaugmented baseline (0.84), suggesting that syntactic augmentation with up to 1215 examples does not harm overall performance on the dataset. By contrast, accuracy on HANS varied significantly, with most models performing worse than chance (which is 0.50 on HANS) on non-entailment examples, suggesting that they adopted the heuristics (Figure 1). The most effective augmentation strategy, by a large margin, was inversion with a transformed hypothesis. Accuracy on the HANS word overlap cases for which the correct label is non-entailment— e.g., the doctor saw the lawyer ↛the lawyer saw the doctor—was 0.28 without augmentation, and 0.73 with the large version of this augmentation set. 
Simultaneously, this strategy decreased BERT's accuracy on the cases where the heuristic makes the correct prediction (The tourists by the actor called the authors → The tourists called the authors); in fact, the best model's accuracy was similar across cases where lexical overlap made correct and incorrect predictions, suggesting that this intervention prevented the model from adopting the heuristic. The random shuffling method did not improve over the unaugmented baseline, suggesting that syntactically-informed transformations are essential (Table A.2). Passivization yielded a much smaller benefit than inversion, perhaps due to the presence of overt markers such as the word by, which may lead the model to attend to word order only when those are present. Intriguingly, even on the passive examples in HANS, inversion was more effective than passivization (large inversion augmentation: 0.13; large passivization augmentation: 0.01). Finally, inversion on its own was more effective than the combination of inversion and passivization.

Figure 1: Comparison of syntactic augmentation strategies. Dots represent accuracy on the HANS examples that diagnose the lexical overlap heuristic (shown separately for cases where the heuristic makes a correct or an incorrect prediction), as produced by each of the runs of BERT fine-tuned on MNLI combined with each augmentation data set (0, 101, 405 or 1215 examples; original premise vs. transformed hypothesis; passivization, inversion, or combined). Horizontal bars indicate median accuracy across runs. Chance accuracy is 0.5.

We now analyze in more detail the most effective strategy, inversion with a transformed hypothesis. First, this strategy is similar on an abstract level to the HANS subject/object swap category, but the two differ in vocabulary and some syntactic properties; despite these differences, performance on this HANS category was perfect (1.00) with medium and large augmentation, indicating that BERT benefited from the high-level syntactic structure of the transformation. For the small augmentation set, accuracy on this category was 0.53, suggesting that 101 examples are insufficient to teach BERT that subjects and objects cannot be freely swapped. Conversely, tripling the augmentation size from medium to large had a moderate and inconsistent effect across HANS subcases (see Appendix A.3 for case-by-case results); for clearer insight about the role of augmentation size, it may be necessary to sample this parameter more densely.

Although inversion was the only transformation in this augmentation set, performance also improved dramatically on constructions other than subject/object swap (Figure 2); for example, the models handled examples involving a prepositional phrase better, concluding, for instance, that The judge behind the manager saw the doctors does not entail The doctors saw the manager (unaugmented: 0.41; large augmentation: 0.89). There was a much more moderate, but still significant, improvement on the cases targeting the subsequence heuristic; this smaller degree of improvement suggests that contiguous subsequences are treated separately from lexical overlap more generally. One exception was accuracy on "NP/S" inferences, such as the managers heard the secretary resigned ↛ The managers heard the secretary, which improved dramatically from 0.02 (unaugmented) to 0.50 (large augmentation).
Further improvements for subsequence cases may therefore require augmentation with examples involving subsequences.

A range of techniques have been proposed over the past year for improving performance on HANS. These include syntax-aware models (Moradshahi et al., 2019; Pang et al., 2019), auxiliary models designed to capture pre-defined shallow heuristics so that the main model can focus on robust strategies (Clark et al., 2019; He et al., 2019; Mahabadi and Henderson, 2019), and methods to up-weight difficult training examples (Yaghoobzadeh et al., 2019). While some of these approaches yield higher accuracy on HANS than ours, including better generalization to the constituent and subsequence cases (see Table A.4), they are not directly comparable: our goal is to assess how the prevalence of syntactically challenging examples in the training set affects BERT's NLI performance, without modifying either the model or the training procedure.

Figure 2: Augmentation using subject/object inversion with a transformed hypothesis. Dots represent the accuracy on HANS examples diagnostic of each of the heuristics (lexical overlap, subsequence, and constituent; shown separately for cases where the heuristic makes a correct or an incorrect prediction), as produced by each of the 15 runs of BERT fine-tuned on MNLI combined with each augmentation data set (0, 101, 405 or 1215 examples). Horizontal bars indicate median accuracy across runs.

6 Discussion

Our best-performing strategy involved augmenting the MNLI training set with a small number of instances generated by applying the subject/object inversion transformation to MNLI examples. This yielded considerable generalization: both to another domain (the HANS challenge set), and, more importantly, to additional constructions, such as relative clauses and prepositional phrases. This supports the Missed Connection Hypothesis: a small amount of augmentation with one construction induced abstract syntactic sensitivity, instead of just "inoculating" the model against failing on the challenge set by providing it with a sample of cases from the same distribution (Liu et al., 2019).

At the same time, the inversion transformation did not completely counteract the heuristic; in particular, the models showed poor performance on passive sentences. For these constructions, then, BERT's pretraining may not yield strong syntactic representations that can be tapped into with a small nudge from augmentation; in other words, this may be a case where our Representational Inadequacy Hypothesis holds. This hypothesis predicts that pretrained BERT, as a word prediction model, struggles with passives, and may need to learn the properties of this construction specifically for the NLI task; this would likely require a much larger number of augmentation examples.

The best-performing augmentation strategy involved generating premise/hypothesis pairs from a single source sentence—meaning that this strategy does not rely on an NLI corpus. The fact that we can generate augmentation examples from any corpus makes it possible to test if very large augmentation sets are effective (with the caveat, of course, that augmentation sentences from a different domain may hurt performance on MNLI itself).
Ultimately, it would be desirable to have a model with a strong inductive bias for using syntax across language understanding tasks, even when overlap heuristics lead to high accuracy on the training set; indeed, it is hard to imagine that a human would ignore syntax entirely when understanding a sentence. An alternative would be to create training sets that adequately represent a diverse range of linguistic phenomena; crowdworkers’ (rational) preferences for using the simplest generation strategies possible could be counteracted by approaches such as adversarial filtering (Nie et al., 2019). In the interim, however, we conclude that data augmentation is a simple and effective strategy to mitigate known inference heuristics in models such as BERT. Acknowledgments This research was supported by a gift from Google, NSF Graduate Research Fellowship No. 1746891, and NSF Grant No. BCS-1920924. Our experiments were conducted using the Maryland Advanced Research Computing Center (MARCC). 2344 References Yonatan Belinkov and Yonatan Bisk. 2018. Synthetic and natural noise both break neural machine translation. In International Conference on Learning Representations. Christopher Clark, Mark Yatskar, and Luke Zettlemoyer. 2019. Don’t take the easy way out: Ensemble based methods for avoiding known dataset biases. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4067–4080, Hong Kong, China. Association for Computational Linguistics. Ishita Dasgupta, Demi Guo, Andreas Stuhlm¨uller, Samuel J. Gershman, and Noah D. Goodman. 2018. Evaluating compositionality in sentence embeddings. In Proceedings of the 40th Annual Conference of the Cognitive Science Society, pages 1596– 1601, Madison, WI. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Yoav Goldberg. 2019. Assessing BERT’s syntactic abilities. arXiv preprint arXiv:1901.05287. He He, Sheng Zha, and Haohan Wang. 2019. Unlearn dataset bias in natural language inference by fitting the residual. In Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo 2019), pages 132–142, Hong Kong, China. Association for Computational Linguistics. Mohit Iyyer, John Wieting, Kevin Gimpel, and Luke Zettlemoyer. 2018. Adversarial example generation with syntactically controlled paraphrase networks. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1875–1885. Association for Computational Linguistics. Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2021–2031. Association for Computational Linguistics. Juho Kim, Christopher Malon, and Asim Kadav. 2018. Teaching syntax by adversarial distraction. In Proceedings of the First Workshop on Fact Extraction and VERification (FEVER), pages 79–84, Brussels, Belgium. Association for Computational Linguistics. Nelson F. 
Liu, Roy Schwartz, and Noah A. Smith. 2019. Inoculation by fine-tuning: A method for analyzing challenge datasets. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2171–2179, Minneapolis, Minnesota. Association for Computational Linguistics. Rabeeh Karimi Mahabadi and James Henderson. 2019. Simple but effective techniques to reduce biases. arXiv preprint arXiv:1909.06321. R. Thomas McCoy, Junghyun Min, and Tal Linzen. 2019a. BERTs of a feather do not generalize together: Large variability in generalization across models with similar test set performance. arXiv preprint arXiv:1911.02969. R. Thomas McCoy, Ellie Pavlick, and Tal Linzen. 2019b. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3428–3448, Florence, Italy. Association for Computational Linguistics. Pasquale Minervini and Sebastian Riedel. 2018. Adversarially regularising neural NLI models to integrate logical background knowledge. In Proceedings of the 22nd Conference on Computational Natural Language Learning, pages 65–74, Brussels, Belgium. Association for Computational Linguistics. Mehrad Moradshahi, Hamid Palangi, Monica S. Lam, Paul Smolensky, and Jianfeng Gao. 2019. HUBERT Untangles BERT to Improve Transfer across NLP Tasks. arXiv preprint arXiv:1910.12647. Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. 2019. Adversarial NLI: A new benchmark for natural language understanding. arXiv preprint arXiv:1910.14599. Deric Pang, Lucy H. Lin, and Noah A. Smith. 2019. Improving natural language inference with a pretrained parser. arXiv preprint arXiv:1909.08217. Luis Perez and Jason Wang. 2017. The effectiveness of data augmentation in image classification using deep learning. arXiv preprint arXiv:1712.04621. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227– 2237. Association for Computational Linguistics. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683. 2345 Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2018. Semantically equivalent adversarial rules for debugging NLP models. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 856–865, Melbourne, Australia. Association for Computational Linguistics. Alon Talmor and Jonathan Berant. 2019. MultiQA: An empirical investigation of generalization and transfer in reading comprehension. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4911–4921, Florence, Italy. Association for Computational Linguistics. Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R Thomas McCoy, Najoung Kim, Benjamin Van Durme, Sam Bowman, Dipanjan Das, and Ellie Pavlick. 2019. What do you learn from context? Probing for sentence structure in contextualized word representations. 
In International Conference on Learning Representations. Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, and Sameer Singh. 2019. Universal adversarial triggers for attacking and analyzing NLP. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2153–2162, Hong Kong, China. Association for Computational Linguistics. Jason Wei and Kai Zou. 2019. EDA: Easy data augmentation techniques for boosting performance on text classification tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6381–6387, Hong Kong, China. Association for Computational Linguistics. Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122. Association for Computational Linguistics. Yadollah Yaghoobzadeh, Remi Tachet, T. J. Hazen, and Alessandro Sordoni. 2019. Robust natural language inference models with example forgetting. arXiv preprint arXiv:1911.03861. Hitomi Yanaka, Koji Mineshima, Daisuke Bekki, Kentaro Inui, Satoshi Sekine, Lasha Abzianidze, and Johan Bos. 2019. HELP: A dataset for identifying shortcomings of neural models in monotonicity reasoning. In Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (*SEM 2019), pages 250–255, Minneapolis, Minnesota. Association for Computational Linguistics. Dani Yogatama, Cyprien de Masson d’Autume, Jerome Connor, Tomas Kocisky, Mike Chrzanowski, Lingpeng Kong, Angeliki Lazaridou, Wang Ling, Lei Yu, Chris Dyer, and Phil Blunsom. 2019. Learning and evaluating general linguistic intelligence. arXiv preprint arXiv:1901.11373. 2346 A Appendix A.1 Fine-tuning details We used bert-base-uncased for all experiments. As is standard, we fine-tuned this pretrained model on MNLI by training a linear classifier to predict the label from the CLS token’s final layer embedding, while continuing to update BERT’s parameters (Devlin et al., 2019). The order of training examples was reshuffled for each model. All models were trained for three epochs. A.2 Generating augmentation examples The following list describes the augmentation strategies we used. Table A.1 illustrates all of these strategies as applied to a particular source sentence. Note that inversion generally changes the meaning of the sentence (the detective followed the suspect refers to a different event from the suspect followed the detective), but passivization on its own does not (the detective followed the suspect refers to the same event as the suspect was followed by the detective). • Inversion (original premise): For a source example (p, h, →), generate (p, INV(h), ↛), where INV returns the source sentence with the subject and object switched. Ignore source examples whose label is ↛. • Inversion (transformed hypothesis): For a source (p, h) (with any label), discard the premise p and generate (h, INV(h), ↛). • Passivization (original premise): For a source (p, h) (with any label), generate (p, PASS(h)), with the same label, where PASS returns the passive version of the source sentence (without changing its meaning). 
• Passivization (transformed hypothesis): For a source (p, h), discard the premise p, and generate two examples, one with an entailment label—(h, PASS(h), →)—and one with a nonentailment label—(h, PASS(INV(h)), ↛). We identified transitive sentences in MNLI that could serve as source sentences using the constituency parses provided with MNLI, excluding the noisier TELEPHONE genre. We did so by searching for matrix S nodes with exactly one NP daughter of the VP, where the subject and the object were both full noun phrases (i.e., neither were a personal pronoun such as me), and where the verb lemma was not be or have. We kept the original tense of the verb, and modified its agreement features if necessary (e.g., the movie stars Matt Dillon and Gary Sinise was transformed into Matt Dillon and Gary Sinise star the movie). The size of the largest augmentation set was 1215 for all strategies. This size was determined based on the largest augmentation dataset we could generate from MNLI for the inversion with original premise strategy using the procedure mentioned above. For fair comparison, we kept the same size even for strategies where we could have generated a larger dataset. We also created a Medium dataset by randomly sampling 405 of the cases identifying using the procedure above, as well as a small dataset with 101 examples. We performed this process only once for each strategy: as such, runs varied only in the classifier’s weight initialization and the order of examples but not in the augmentation examples included in training. To create the Combined augmentation dataset, we concatenated the inversion and passivization datasets, then randomly discarded half of the examples (to match the size of the combined dataset with the others). As with the other datasets, we only did this once: the Combined augmentation set was the same across runs. One consequence of this procedure is that the number of passivization and inversion examples was not exactly identical. A.3 Detailed Results The following tables provide the detailed results of our experiments. Table A.2 shows each strategy’s mean accuracy on MNLI, as well on the HANS cases that diagnose each of the three heuristics (the Lexical Overlap Heuristic, the Subsequence Heuristic, and the Constituent Heuristic), for which the correct label is non-entailment (↛). Table A.3 zooms in on the best-performing augmentation strategy—subject/object inversion with a transformed hypothesis—on BERT’s accuracy on HANS, both when the correct label is entailment (→) and when the label is non-entailment (↛). Finally, the last three tables detail the effect of augmentation by inversion with a transformed hypothesis on each of the 30 HANS subcases, broken down by the heuristic that they were designed to diagnose: the Lexical Overlap Heuristic (Table A.5), the Subsequence Heuristic (Table A.6), and the Constituent Heuristic (Table A.7). 2347 Original There are 16 El Grecos in this small collection. → This small collection contains 16 El Grecos. Inversion Original premise: There are 16 El Grecos in this small collection. ↛ 16 El Grecos contain this small collection. Transformed hypothesis: This small collection contains 16 El Grecos. ↛ 16 El Grecos contain this small collection. Passivization Original premise: There are 16 El Grecos in this small collection. → 16 El Grecos are contained by this small collection. Transformed hypothesis (entailment label): This small collection contains 16 El Grecos. → 16 El Grecos are contained by the small collection. 
Transformed hypothesis (non-entailment label): This small collection contains 16 El Grecos. ↛ This small collection is contained by 16 El Grecos. Random shuffling (with random label) are collection. small El this in 16 There Grecos ↛/→ collection This Grecos El small 16 contains. Table A.1: Syntactic augmentation strategies (full table). 2348 MNLI Overlap Subsequence Constituent S M L S M L S M L S M L Original premise Inversion .84 .84 .84 .07 .40 .44 .01 .06 .12 .06 .09 .12 Passivization .84 .84 .84 .23 .35 .54 .04 .05 .09 .13 .11 .15 Combined .84 .84 .84 .42 .25 .36 .07 .05 .04 .14 .15 .12 Transformed hypothesis Inversion .84 .84 .84 .46 .71 .73 .09 .25 .23 .17 .23 .18 Passivization .84 .84 .84 .41 .43 .31 .06 .06 .07 .13 .15 .17 Combined .84 .84 .84 .32 .64 .71 .06 .13 .28 .15 .26 .22 Pass. (only pos) .84 .84 .84 .30 .20 .29 .04 .04 .05 .10 .13 .11 Pass. (only neg) .84 .84 .85 .36 .45 .39 .06 .06 .06 .15 .13 .13 Random shuffling .84 .84 .84 .26 .19 .35 .05 .05 .06 .15 .14 .14 Unaugmented .84 .28 .05 .13 Table A.2: Accuracy of models trained using each augmentation strategy when evaluated on HANS examples diagnostic of each of the three heuristics—lexical overlap, subsequence and constituent—for which the correct label is non-entailment (↛). Augmentation set sizes are S (101 examples), M (405) and L (1215). Chance performance is 0.5. Subset of HANS Label Unaugmented Small Medium Large MNLI All 0.84 0.84 0.84 0.84 Subject/object swap ↛ 0.19 0.53 1.00 1.00 All other → 0.96 0.93 0.77 0.77 lexical overlap ↛ 0.30 0.44 0.64 0.66 Subsequence → 0.99 0.99 0.84 0.85 ↛ 0.05 0.09 0.25 0.23 Constituent → 0.99 0.98 0.97 0.97 ↛ 0.13 0.17 0.23 0.18 Table A.3: Effect on HANS accuracy of augmentation using subject/object inversion with a transformed hypothesis. Results are shown for BERT fined-tuned on the MNLI training set augmented with the three size of augmentation sets (101, 405 and 1215 examples), as well as for BERT fine-tuned on the unaugmented MNLI training set. 2349 Entailment Non-entailment Architecture or training method Overall L S C L S C Baseline (McCoy et al., 2019a) 0.57 0.96 0.99 0.99 0.28 0.05 0.13 Learned-Mixin + H (Clark et al., 2019) 0.69 0.68 0.84 0.81 0.77 0.45 0.60 DRiFt-HAND (He et al., 2019) 0.66 0.77 0.71 0.76 0.71 0.41 0.61 Product of experts (Mahabadi and Henderson, 2019) 0.67 0.94 0.96 0.98 0.62 0.19 0.30 HUBERT + (Moradshahi et al., 2019) 0.63 0.96 1.00 0.99 0.70 0.04 0.11 MT-DNN + LF (Pang et al., 2019) 0.61 0.99 0.99 0.94 0.07 0.07 0.13 BiLSTM forgettables (Yaghoobzadeh et al., 2019) 0.74 0.77 0.91 0.93 0.82 0.41 0.61 Ours: Inversion (transformed hypothesis), small 0.60 0.93 0.99 0.98 0.46 0.09 0.17 Inversion (transformed hypothesis), medium 0.63 0.77 0.84 0.97 0.71 0.25 0.23 Inversion (transformed hypothesis), large 0.62 0.77 0.85 0.97 0.73 0.23 0.18 Combined (transformed hypothesis), medium 0.65 0.92 0.96 0.98 0.64 0.13 0.26 Table A.4: HANS accuracy from various architectures and training methods, broken down by the heuristic that the example is diagnostic of and by its gold label, as well as overall accuracy on HANS. All but MT-DNN + LF use BERT as base model. L, S, and C stand for lexical overlap, subsequence, and constituent heuristics, respectively. Augmentation set sizes are n = 101 for small, n = 405 for medium, and n = 1215 for large. 2350 Subcase Unaugmented Small Medium Large Subject-object swap 0.19 0.53 1.00 1.00 The senators mentioned the artist. ↛The artist mentioned the senators. 
Sentences with PPs 0.41 0.61 0.81 0.89 The judge behind the manager saw the doctors. ↛The doctors saw the manager. Sentences with relative clauses 0.33 0.53 0.77 0.83 The actors called the banker who the tourists saw. ↛The banker called the tourists. Passives 0.01 0.04 0.29 0.13 The senators were helped by the managers. ↛The senators helped the managers. Conjunctions 0.45 0.59 0.69 0.81 The doctors saw the presidents and the tourists. ↛The presidents saw the tourists. Untangling relative clauses 0.98 0.94 0.74 0.76 The athlete who the judges saw called the manager. →The judges saw the athlete. Sentences with PPs 1.00 0.98 0.85 0.86 The tourists by the actor called the authors. →The tourists called the authors. Sentences with relative clauses 0.99 0.98 0.89 0.89 The actors that danced encouraged the author. →The actors encouraged the author. Conjunctions 0.83 0.78 0.68 0.66 The secretaries saw the scientists and the actors. →The secretaries saw the actors. Passives 1.00 0.99 0.67 0.67 The authors were supported by the tourists. →The tourists supported the authors. Table A.5: Subject/object inversion with a transformed hypothesis: results for the HANS subcases that are diagnostic of the lexical overlap heuristic, for four training regimens—unaugmented (trained only on MNLI), and with small (n = 101), medium (n = 405) and large (n = 1215) augmentation sets. Chance performance is 0.5. Top: cases in which the gold label is non-entailment. Bottom: cases in which the gold label is entailment. 2351 Subcase Unaugmented Small Medium Large NP/S 0.02 0.03 0.47 0.50 The managers heard the secretary resigned. ↛The managers heard the secretary. PP on subject 0.12 0.21 0.21 0.23 The managers near the scientist shouted. ↛The scientist shouted. Relative clause on subject 0.07 0.13 0.14 0.13 The secretary that admired the senator saw the actor. ↛The senator saw the actor. MV/RR 0.00 0.01 0.05 0.02 The senators paid in the office danced. ↛The senators paid in the office. NP/Z 0.06 0.09 0.41 0.25 Before the actors presented the doctors arrived. ↛The actors presented the doctors. Conjunctions 0.98 0.96 0.87 0.86 The actor and the professor shouted. →The professor shouted. Adjectives 1.00 1.00 0.92 0.91 Happy professors mentioned the lawyer. →Professors mentioned the lawyer. Understood argument 1.00 0.99 0.97 0.97 The author read the book. →The author read. Relative clause on object 0.99 0.98 0.70 0.71 The artists avoided the actors that performed. →The artists avoided the actors. PP on object 1.00 1.00 0.75 0.79 The authors called the judges near the doctor. →The authors called the judges. Table A.6: Subject/object inversion with a transformed hypothesis: results for the HANS subcases diagnostic of the subsequence heuristic, for four training regimens—unaugmented (trained only on MNLI), and with small (n = 101), medium (n = 405) and large (n = 1215) augmentation sets. Top: cases in which the gold label is non-entailment. Bottom: cases in which the gold label is entailment. 2352 Subcase Unaugmented Small Medium Large Embedded under preposition 0.41 0.43 0.57 0.49 Unless the senators ran, the professors recommended the doctor. ↛The senators ran. Outside embedded clause 0.00 0.01 0.02 0.01 Unless the authors saw the students, the doctors resigned. ↛The doctors resigned. Embedded under verb 0.17 0.25 0.28 0.22 The tourists said that the lawyer saw the banker. ↛The lawyer saw the banker. Disjunction 0.01 0.01 0.04 0.03 The judges resigned, or the athletes saw the author. ↛The athletes saw the author. 
Adverbs 0.06 0.13 0.25 0.13 Probably the artists saw the authors. ↛The artists saw the authors. Embedded under preposition 0.96 0.94 0.94 0.95 Because the banker ran, the doctors saw the professors. →The banker ran. Outside embedded clause 1.00 1.00 0.99 0.99 Although the secretaries slept, the judges danced. →The judges danced. Embedded under verb 0.99 0.99 0.98 0.97 The president remembered that the actors performed. →The actors performed. Conjunction 1.00 1.00 0.98 0.99 The lawyer danced, and the judge supported the doctors. →The lawyer danced. Adverbs 1.00 1.00 0.93 0.96 Certainly the lawyers advised the manager. →The lawyers advised the manager. Table A.7: Subject/object inversion with a transformed hypothesis: results for the HANS subcases diagnostic of the constituent heuristic, for four training regimens—unaugmented (trained only on MNLI), and with small (n = 101), medium (n = 405) and large (n = 1215) augmentation sets. Chance performance is 0.5. Top: cases in which the gold label is non-entailment. Bottom: cases in which the gold label is entailment.
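As a concrete illustration of the inversion and passivization strategies tabulated above, the sketch below generates the three (premise, hypothesis, label) pairs of the transformed-hypothesis variants from a single transitive clause. It is a toy reconstruction rather than the authors' code: it assumes the clause has already been reduced to a (subject, verb, past participle, object) tuple, whereas the paper extracts such clauses from MNLI constituency parses, and it omits the verb-agreement adjustments the paper applies.

```python
# Toy sketch of the inversion / passivization transformations illustrated in
# Table A.1. Subject-verb agreement fixes (e.g. star/stars, is/are) are omitted;
# the paper adjusts agreement features when needed.

def invert(subj, verb, obj):
    # Subject/object inversion: "A verb B" -> "B verb A" (non-entailed).
    return f"{obj} {verb} {subj}"

def passivize(subj, verb_past_participle, obj):
    # Meaning-preserving passivization: "A verb B" -> "B is verb-ed by A".
    return f"{obj} is {verb_past_participle} by {subj}"

def make_augmentation_pairs(subj, verb, verb_pp, obj):
    """Return (premise, hypothesis, label) triples for one source clause,
    using the 'transformed hypothesis' variants (premise = original hypothesis)."""
    hypothesis = f"{subj} {verb} {obj}"
    return [
        (hypothesis, invert(subj, verb, obj), "non-entailment"),      # (h, INV(h))
        (hypothesis, passivize(subj, verb_pp, obj), "entailment"),    # (h, PASS(h))
        (hypothesis, passivize(obj, verb_pp, subj), "non-entailment"),# (h, PASS(INV(h)))
    ]

if __name__ == "__main__":
    for premise, hyp, label in make_augmentation_pairs(
            "this small collection", "contains", "contained", "16 El Grecos"):
        print(f"{premise}  ->  {hyp}  [{label}]")
```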
2020
212
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2353–2358 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2353 Improved Speech Representations with Multi-Target Autoregressive Predictive Coding Yu-An Chung, James Glass Computer Science and Artificial Intelligence Laboratory Massachusetts Institute of Technology Cambridge, MA 02139, USA {andyyuan,glass}@mit.edu Abstract Training objectives based on predictive coding have recently been shown to be very effective at learning meaningful representations from unlabeled speech. One example is Autoregressive Predictive Coding (Chung et al., 2019), which trains an autoregressive RNN to generate an unseen future frame given a context such as recent past frames. The basic hypothesis of these approaches is that hidden states that can accurately predict future frames are a useful representation for many downstream tasks. In this paper we extend this hypothesis and aim to enrich the information encoded in the hidden states by training the model to make more accurate future predictions. We propose an auxiliary objective that serves as a regularization to improve generalization of the future frame prediction task. Experimental results on phonetic classification, speech recognition, and speech translation not only support the hypothesis, but also demonstrate the effectiveness of our approach in learning representations that contain richer phonetic content. 1 Introduction Unsupervised speech representation learning, which aims to learn a function that transforms surface features, such as audio waveforms or spectrograms, to higher-level representations using only unlabeled speech, has received great attention recently (Baevski et al., 2019, 2020; Liu et al., 2020; Song et al., 2019; Jiang et al., 2019; Schneider et al., 2019; Chorowski et al., 2019; Pascual et al., 2019; Oord et al., 2018; Kamper, 2019; Chen et al., 2018; Chung and Glass, 2018; Chung et al., 2018; Milde and Biemann, 2018; Chung et al., 2016; Hsu et al., 2017). A large portion of these approaches leverage self-supervised training, where the learning target is generated from the input itself, and thus can train a model in a supervised manner. Chung et al. (2019) propose a method called Autoregressive Predictive Coding (APC), which trains an RNN to predict a future frame that is n steps ahead of the current position given a context such as the past frames. The training target can be easily generated by right-shifting the input by n steps. Their intuition is that the model is required to produce a good summarization of the past and encode such knowledge in the hidden states so as to accomplish the objective. After training, the RNN hidden states are taken as the learned representations, and are shown to contain speech information such as phonetic and speaker content that are useful in a variety of speech tasks (Chung and Glass, 2020). Following their intuition, in this work we aim to improve the generalization of the future frame prediction task by adding an auxiliary objective that serves as a regularization. We empirically demonstrate the effectiveness of our approach in making more accurate future predictions, and confirm such improvement leads to a representation that contains richer phonetic content. The rest of the paper is organized as follows. We start with a brief review of APC in Section 2. We then introduce our approach in Section 3. 
Experiments and analysis are presented in Section 4, followed by our conclusions in Section 5. 2 Autoregressive Predictive Coding Given a context of a speech signal represented as a sequence of acoustic feature vectors (x1, x2, . . . , xt), the objective of Autoregressive Predictive Coding (APC) is to use the context to infer a future frame xt+n that is n steps ahead of xt. Let x = (x1, x2, . . . , xN) denote a full utterance, where N is the sequence length, APC incorporates an RNN to process each frame xt sequentially and update its hidden state ht accordingly. For t = 1, . . . , N −n, the RNN produces 2354 Figure 1: Overview of our method. Lf is the original APC objective that aims to predict xt+n given a context (x1, x2, . . . , xt) with an autoregressive RNN. Our method first samples an anchor position, assuming it is time step t. Next, we build an auxiliary loss Lr that computes Lf of a past sequence (xt−s, xt−s+1, . . . , xt−s+ℓ−1) (see Section 3.1 for definitions of s and ℓ), using an auxiliary RNN (dotted line area). In this example, (n, s, ℓ) = (1, 4, 3). In practice, we can sample multiple anchor positions, and averaging over all of them gives us the final Lr. an output yt = W · ht, where W is an affinity matrix that maps ht back to the dimensionality of xt. The model is trained by minimizing the frame-wise L1 loss between the predicted sequence (y1, y2, . . . , yN−n) and the target sequence (x1+n, x2+n, . . . , xN): Lf(x) = N−n X t=1 |xt+n −yt|. (1) When n = 1, one can view APC as an acoustic version of neural LM (NLM) (Mikolov et al., 2010) by thinking of each acoustic frame as a token embedding, as they both use a recurrent encoder and aim to predict information about the future. A major difference between NLM and APC is that NLM infers tokens from a closed set, while APC predicts frames of real values. Once an APC model is trained, given an utterance (x1, x2, . . . , xN), we follow Chung et al. (2019) and take the output of the last RNN layer (h1, h2, . . . , hN) as its extracted features. 3 Proposed Methodology Our goal is to make APC’s prediction of xt+n given ht more accurate. In Section 4 we will show this leads to a representation that contains richer phonetic content. 3.1 Remembering more from the past An overview of our method is depicted in Figure 1. We propose an auxiliary loss Lr to improve the generalization of the main objective Lf (Equation 1). The idea of Lr is to refresh the current hidden state ht with the knowledge learned in the past. At time step t, we first sample a past sequence pt = (xt−s, xt−s+1, . . . , xt−s+ℓ−1), where s is how far the start of this sequence is from t and ℓis the length of pt. We then employ an auxiliary RNN, denoted as RNNaux, to perform predictive coding defined in Equation 1 conditioning on ht. Specifically, we initialize the hidden state of RNNaux with ht, and optimize it along with the corresponding Waux using Lf(pt), which equals to Pt−s+ℓ−1 t′=t−s |xt′+n −yt′|. Such a process reminds ht of what has been learned in ht−s, ht−s+1, . . . , ht−s+ℓ−1. For a training utterance x = (x1, x2, . . . , xN), we select each frame with probability P as an anchor position. Assume we end up with M anchor positions: a1, a2, . . . , aM. Each am defines a sequence pam = (xam−s, xam−s+1, . . . , xam−s+ℓ−1) before xam, which we use to compute Lf(pam). Averaging over all anchor positions gives the final auxiliary loss Lr: Lr(x) = 1 M M X m=1 Lf(pam). 
(2) The final APC objective combines Equations 1 and 2 with a balancing coefficient λ: Lm(x) = Lf(x) + λLr(x). (3) We re-sample the anchor positions for each x during each training iteration, while they all share the same RNNaux and Waux. 4 Experiments We demonstrate the effectiveness of Lr in helping optimize Lf, and investigate how the improvement is reflected in the learned representations. 2355 (a) Lr (auxiliary objective, Equation 2) (b) Lf (main objective, Equation 1) Figure 2: Validation loss of Lr (left) and Lf (right) on LibriSpeech dev-clean when training APC using different (n, s, ℓ) combinations. Each bar of the same color represents one (s, ℓ) combination. We use (−, −) to denote an APC optimized only with Lf. Bars are grouped by their n’s with different (s, ℓ) combinations within each group. 4.1 Setup We follow Chung et al. (2019) and use the audio portion of the LibriSpeech (Panayotov et al., 2015) train-clean-360 subset, which contains 360 hours of read speech produced by 921 speakers, for training APC. The input features are 80dimensional log Mel spectrograms, i.e., xt ∈R80. Both RNN and RNNaux are a 3-layer, 512-dim unidirectional GRU (Cho et al., 2014) network with residual connections between two consecutive layers (Wu et al., 2016). Therefore, W, Waux ∈ R512×80. λ is set to 0.1 and the sampling probability P is set to 0.15, that is, each frame has a 15% of chance to be selected as an anchor position. P and λ are selected based on the validation loss of Lf on a small data split. All models are trained for 100 epochs using Adam (Kingma and Ba, 2015) with a batch size of 32 and a learning rate of 10−3. 4.2 Effect of Lr We first validate whether augmenting Lr improves Lf. As a recap, n is the number of time steps ahead of the current position t in Lf, and s and ℓdenote the start and length, respectively, of a past sequence before t to build Lr. We consider (n, s, ℓ) ∈{1, 3, 5, 7, 9} × {7, 14, 20} × {3, 7}. Note that each phone has an average duration of about 7 frames. Figures 2a and 2b present Lr (before multiplying λ) and Lf of the considered APC variants on the LibriSpeech dev-clean subset, respectively. Each bar of the same color represents one (s, ℓ) combination. We use (−, −) to denote an APC optimized only with Lf. Bars are grouped by their n’s with different (s, ℓ) combinations within each group. We start with analyzing Figure 2a. Note that Lr does not exist for (−, −) and is set to 0 in the figure. We see that under the same n, the performance of Lr is mainly decided by how far (s) the past sequence is from the current position rather than the length (ℓ) to generate: when we keep ℓfixed and increase s from 7 (red), 14 (green), to 20 (blue), we observe the loss surges as well. From Figure 2b, we have the following findings. For a small n, the improvement in Lf brought by Lr is minor. By comparing (−, −) with other bars, we see that when n ≤3, which is smaller than half of the average phone duration (7 frames), adding Lr does not lower Lf by much. We speculate that when n ≤3, xt+n to be inferred is usually within the same phone as xt, making the task not challenging enough to force the model to leverage more past information. Lr becomes useful when n gets larger. We see that when n is close to or exceeds the average phone duration (n ≥5), an evident reduction in Lf after adding Lr is observed, which validates the effectiveness of Lr in assisting with the optimization of Lf. When n = 9, the improvement is not as large as when n = 5 or 7. 
One possible explanation is that xt+9 has become almost independent from the previous context ht and hence is less predictable. By observing the validation loss, we have shown that Lr indeed helps generalize Lf. 4.3 Learned representation analysis Next, we want to examine whether an improvement in Lf leads to a representation that encodes more useful information. Speech signals encompass a rich set of acoustic and linguistic properties. Here 2356 Feature Time shift -15 -10 -5 0 +5 +10 +15 log Mel 83.3 80.3 67.6 49.9 65.5 77.9 82.7 APC trained with Lf (Equation 1) n = 1 56.1 45.8 36.1 33.7 56.5 73.7 81.6 n = 3 50.8 41.8 34.8 33.4 56.0 73.5 81.1 n = 5 48.7 38.2 32.5 31.9 54.8 73.0 80.5 n = 7 44.6 38.6 32.9 32.1 56.3 73.8 80.4 n = 9 51.0 41.8 35.7 36.9 58.4 74.6 81.0 APC trained with Lm (Equation 3) n = 1 50.6 42.2 35.1 33.1 54.4 73.4 81.4 n = 3 46.4 38.0 34.1 32.4 54.1 71.4 80.5 n = 5 41.8 35.1 29.8 28.1 49.6 64.6 76.8 n = 7 39.8 33.8 28.7 27.8 46.8 60.6 74.4 n = 9 42.3 35.3 30.3 29.7 50.0 63.3 76.6 Table 1: Phonetic classification results using different types of features as input to a linear logistic regression classifier. The classifier aims to correctly classify each frame into one of the 48 phone categories. Frame error rates (↓) are reported. Given a time shift w ∈{0, ±5, ±10, ±15}, the classifier is asked to predict the phone identity of xt+w given xt. we will only focus on analyzing the phonetic content contained in a representation, and leave other properties such as speaker for future work. We use phonetic classification on TIMIT (Garofolo et al., 1993) as the probing task to analyze the learned representations. The corpus contains 3696, 400, and 192 utterances in the train, validation, and test sets, respectively. For each n ∈{1, 3, 5, 7, 9}, we pick the (s, ℓ) combination that has the lowest validation loss. As described in Section 2, we take the output of the last RNN layer as the extracted features, and provide them to a linear logistic regression classifier that aims to correctly classify each frame into one of the 48 phone categories. During evaluation, we follow the protocol (Lee and Hon, 1989) and collapse the prediction to 39 categories. We report frame error rate (FER) on the test set, which indicates how much phonetic content is contained in the representations. We also conduct experiments for the task of predicting xt−w and xt+w given xt for w ∈{5, 10, 15}. This examines how contextualized ht is, that is, how much information about the past and future is encoded in the current feature ht. We simply shift the labels in the dataset by {±5, ±10, ±15} and retrain the classifier. We keep the pre-trained APC RNN fixed for all runs. Results are shown in Table 1. We emphasize that our hyperparameters are chosen based on Lf and are never selected based on their performance on any downstream task, including phonetic classification, speech recognition, and speech translation to be presented next. Tuning hyperparameters towards a downstream task defeats the purpose of unsupervised learning. Phonetic classification We first study the standard phonetic classification results, shown in the column where time shift is 0. We see that APC features, regardless of the objective (Lf or Lm), achieve lower FER than log Mel features, showing that the phonetic information contained in the surface features has been transformed into a more accessible form (defined as how linearly separable they are). Additionally, we see that APC features learned by Lm outperform those learned by Lf across all n. 
For n ≥5 where there is a noticeable improvement in future prediction after adding Lr as shown in Figure 2b, their improvement in phonetic classification is also larger than when n ≤3. Such an outcome suggests that APC models that are better at predicting the future do learn representations that contain richer phonetic content. It is also interesting that when using Lf, the best result occurs at n = 5 (31.9); while with Lm, it is when n = 7 that achieves the lowest FER (27.8). Predicting the past or future We see that it is harder to predict the nearby phone identities from a log Mel frame, and the FER gets higher further away from the center frame. An APC feature ht contains more information about its past than its future. The result matches our intuition as the RNN generates ht conditioning on hi for i < t and thus their information are naturally encoded in ht. Furthermore, we observe a consistent improvement in 2357 both directions by changing Lf to Lm across all n and time shifts. This confirms the use of Lr, which requires the current hidden state ht to recall what has been learned in previous hidden states, so more information about the past is encoded in ht. The improvement also suggests that an RNN can forget the past information when training only with Lf, and adding Lr alleviates such problem. 4.4 Speech recognition and translation The above phonetic classification experiments are meant for analyzing the phonetic properties of a representation. Finally, we apply the representations learned by Lm to automatic speech recognition (ASR) and speech translation (ST) and show their superiority over those learned by Lf. We follow the exact setup in Chung and Glass (2020). For ASR, we use the Wall Street Journal corpus (Paul and Baker, 1992), use si284 for training, and report the word error rate (WER) on dev93. For ST, we use the LibriSpeech En-Fr corpus (Kocabiyikoglu et al., 2018), which aims to translate an English speech to a French text, and report the BLEU score (Papineni et al., 2002). For both tasks, the downstream model is an end-to-end, sequenceto-sequence RNN with attention (Chorowski et al., 2015). We compare different input features to the same model. Results, shown in Table 2, demonstrate that the improvement in predictive coding brought by Lr not only provides representations that contain richer phonetic content, but are also useful in real-world speech applications.1 Feature ASR (WER ↓) ST (BLEU ↑) log Mel 18.3 12.9 APC w/ Lf 15.2 13.8 APC w/ Lm 14.2 14.5 Table 2: Automatic speech recognition (ASR) and speech translation (ST) results using different types of features as input to a seq2seq with attention model. Word error rates (WER, ↓) and BLEU scores (↑) are reported for the two tasks, respectively. 5 Conclusions We improve the generalization of Autoregressive Predictive Coding by multi-target training of fu1According to Chung and Glass (2020), when using a Transformer architecture (Vaswani et al., 2017; Liu et al., 2018) as the autoregressive model, representations learned with Lf can achieve a WER of 13.7 on ASR and a BLEU score of 14.3 on ST. ture prediction Lf and past memory reconstruction Lr, where the latter serves as a regularization. Through phonetic classification, we find the representations learned with our approach contain richer phonetic content than the original representations, and achieve better performance on speech recognition and speech translation. References Alexei Baevski, Michael Auli, and Abdelrahman Mohamed. 2019. 
Effectiveness of self-supervised pretraining for speech recognition. arXiv preprint arXiv:1911.03912. Alexei Baevski, Steffen Schneider, and Michael Auli. 2020. vq-wav2vec: Self-supervised learning of discrete speech representations. In ICLR. Yi-Chen Chen, Sung-Feng Huang, Chia-Hao Shen, Hung-Yi Lee, and Lin-Shan Lee. 2018. Phoneticand-semantic embedding of spoken words with applications in spoken content retrieval. In SLT. Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder approaches. In Workshop on Syntax, Semantics and Structure in Statistical Translation. Jan Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Bengio. 2015. Attention-based models for speech recognition. In NIPS. Jan Chorowski, Ron Weiss, Samy Bengio, and A¨aron van den Oord. 2019. Unsupervised speech representation learning using wavenet autoencoders. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 27(12):2041–2053. Yu-An Chung and James Glass. 2018. Speech2vec: A sequence-to-sequence framework for learning word embeddings from speech. In Interspeech. Yu-An Chung and James Glass. 2020. Generative pretraining for speech with autoregressive predictive coding. In ICASSP. Yu-An Chung, Wei-Ning Hsu, Hao Tang, and James Glass. 2019. An unsupervised autoregressive model for speech representation learning. In Interspeech. Yu-An Chung, Wei-Hung Weng, Schrasing Tong, and James Glass. 2018. Unsupervised cross-modal alignment of speech and text embedding spaces. In NeurIPS. Yu-An Chung, Chao-Chung Wu, Chia-Hao Shen, Hung-Yi Lee, and Lin-Shan Lee. 2016. Audio word2vec: Unsupervised learning of audio segment representations using sequence-to-sequence autoencoder. In Interspeech. 2358 John Garofolo, Lori Lamel, William Fisher, Jonathan Fiscus, David Pallett, and Nancy Dahlgren. 1993. DARPA TIMIT acoustic-phonetic continuous speech corpus. Technical Report NISTIR 4930, NIST. Wei-Ning Hsu, Yu Zhang, and James Glass. 2017. Unsupervised learning of disentangled and interpretable representations from sequential data. In NIPS. Dongwei Jiang, Xiaoning Lei, Wubo Li, Ne Luo, Yuxuan Hu, et al. 2019. Improving Transformer-based speech recognition using unsupervised pre-training. arXiv preprint arXiv:1910.09932. Herman Kamper. 2019. Truly unsupervised acoustic word embeddings using weak top-down constraints in encoder-decoder models. In ICASSP. Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR. Ali Kocabiyikoglu, Laurent Besacier, and Olivier Kraif. 2018. Augmenting LibriSpeech with French translations: A multimodal corpus for direct speech translation evaluation. In LREC. Kai-Fu Lee and Hsiao-Wuen Hon. 1989. Speakerindependent phone recognition using hidden markov models. IEEE Transactions on Acoustics, Speech, and Signal Processing, 37(11):1641–1648. Andy Liu, Shu-Wen Yang, Po-Han Chi, Po-Chun Hsu, and Hung-Yi Lee. 2020. Mockingjay: Unsupervised speech representation learning with deep bidirectional Transformer encoders. In ICASSP. Peter Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, and Noam Shazeer. 2018. Generating wikipedia by summarizing long sequences. In ICLR. Tom´aˇs Mikolov, Martin Karafi´at, Luk´aˇs Burget, Jan ˇCernock`y, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Interspeech. Benjamin Milde and Chris Biemann. 2018. Unspeech: Unsupervised speech context embeddings. In Interspeech. 
Aaron van den Oord, Yazhe Li, and Oriol Vinyals. 2018. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748. Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. 2015. Librispeech: An ASR corpus based on public domain audio books. In ICASSP. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In ACL. Santiago Pascual, Mirco Ravanelli, Joan Serr`a, Antonio Bonafonte, and Yoshua Bengio. 2019. Learning problem-agnostic speech representations from multiple self-supervised tasks. In Interspeech. Douglas Paul and Janet Baker. 1992. The design for the wall street journal-based CSR corpus. In Speech and Natural Language Workshop. Steffen Schneider, Alexei Baevski, Ronan Collobert, and Michael Auli. 2019. wav2vec: Unsupervised pre-training for speech recognition. In Interspeech. Xingchen Song, Guangsen Wang, Zhiyong Wu, Yiheng Huang, Dan Su, et al. 2019. SpeechXLNet: Unsupervised acoustic model pretraining for self-attention networks. arXiv preprint arXiv:1910.10387. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc Le, Mohammad Norouzi, et al. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144.
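For readers who prefer code, the following is a minimal PyTorch sketch of the multi-target objective described in Sections 2 and 3: the main future-prediction loss Lf (Equation 1), the sampled-anchor auxiliary loss Lr (Equation 2), and their combination Lm (Equation 3). It is a simplification rather than the authors' implementation: single-layer GRUs stand in for the 3-layer residual GRUs, losses are averaged rather than summed, and the hyperparameter defaults (n, s, ℓ, λ, P) merely echo values mentioned in the paper.

```python
import torch
import torch.nn as nn

class MultiTargetAPC(nn.Module):
    """Sketch of APC with the auxiliary past-reconstruction loss (Sections 2-3)."""

    def __init__(self, feat_dim=80, hidden_dim=512, n=5, s=7, l=3,
                 lam=0.1, p_anchor=0.15):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden_dim, batch_first=True)      # main RNN
        self.rnn_aux = nn.GRU(feat_dim, hidden_dim, batch_first=True)  # RNN_aux
        self.W = nn.Linear(hidden_dim, feat_dim)      # maps h_t back to frame space
        self.W_aux = nn.Linear(hidden_dim, feat_dim)
        self.n, self.s, self.l = n, s, l
        self.lam, self.p_anchor = lam, p_anchor

    def forward(self, x):                    # x: (batch, N, feat_dim) log Mel frames
        h, _ = self.rnn(x)                   # h: (batch, N, hidden_dim)
        # Main objective L_f: predict the frame n steps ahead (frame-wise L1).
        l_f = (x[:, self.n:] - self.W(h[:, :-self.n])).abs().mean()

        # Auxiliary objective L_r: for sampled anchor positions t, re-predict the
        # past window (x_{t-s}, ..., x_{t-s+l-1}) with RNN_aux initialized from h_t.
        aux_losses = []
        N = x.size(1)
        for t in range(self.s, N):
            if t - self.s + self.l + self.n > N:     # window's targets would overrun
                break
            if torch.rand(1).item() > self.p_anchor: # sample anchors with prob. P
                continue
            past = x[:, t - self.s: t - self.s + self.l]
            target = x[:, t - self.s + self.n: t - self.s + self.l + self.n]
            h0 = h[:, t].unsqueeze(0).contiguous()   # (1, batch, hidden_dim)
            h_aux, _ = self.rnn_aux(past, h0)
            aux_losses.append((target - self.W_aux(h_aux)).abs().mean())
        l_r = torch.stack(aux_losses).mean() if aux_losses else x.new_zeros(())

        return l_f + self.lam * l_r          # L_m = L_f + lambda * L_r  (Equation 3)
```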
2020
213
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2359–2369 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2359 Integrating Multimodal Information in Large Pretrained Transformers Wasifur Rahman1, Md. Kamrul Hasan1*, Sangwu Lee1*, Amir Zadeh2, Chengfeng Mao2, Louis-Philippe Morency2, Ehsan Hoque1 1 - Department of Computer Science, University of Rochester, USA 2 - Language Technologies Institute, SCS, CMU, USA [email protected], [email protected], [email protected],[email protected], [email protected],[email protected], [email protected] Abstract Recent Transformer-based contextual word representations, including BERT and XLNet, have shown state-of-the-art performance in multiple disciplines within NLP. Fine-tuning the trained contextual models on task-specific datasets has been the key to achieving superior performance downstream. While finetuning these pre-trained models is straightforward for lexical applications (applications with only language modality), it is not trivial for multimodal language (a growing area in NLP focused on modeling face-to-face communication). Pre-trained models don’t have the necessary components to accept two extra modalities of vision and acoustic. In this paper, we proposed an attachment to BERT and XLNet called Multimodal Adaptation Gate (MAG). MAG allows BERT and XLNet to accept multimodal nonverbal data during fine-tuning. It does so by generating a shift to internal representation of BERT and XLNet; a shift that is conditioned on the visual and acoustic modalities. In our experiments, we study the commonly used CMUMOSI and CMU-MOSEI datasets for multimodal sentiment analysis. Fine-tuning MAGBERT and MAG-XLNet significantly boosts the sentiment analysis performance over previous baselines as well as language-only finetuning of BERT and XLNet. On the CMUMOSI dataset, MAG-XLNet achieves humanlevel multimodal sentiment analysis performance for the first time in the NLP community. 1 Introduction Human face-to-face communication flows as a seamless integration of language, acoustic, and vision modalities. In ordinary everyday interactions, we utilize all these modalities jointly to convey our * - Equal contribution intentions and emotions. Understanding this faceto-face communication falls within an increasingly growing NLP research area called multimodal language analysis (Zadeh et al., 2018b). The biggest challenge in this area is to efficiently model the three pillars of communication together. This gives artificial intelligence systems the capability to comprehend the multi-sensory information without disregarding nonverbal factors. In many applications such as dialogue systems and virtual reality, this capability is crucial to maintain the high quality of user interaction. The recent success of contextual word representations in NLP is largely credited to new Transformer-based (Vaswani et al., 2017) models such as BERT (Devlin et al., 2018) and XLNet (Yang et al., 2019). These Transformer-based models have shown performance improvement across downstream tasks (Devlin et al., 2018). However, their true downstream potential comes from finetuning their pre-trained models for particular tasks (Devlin et al., 2018). This is often done easily for lexical datasets which exhibit language modality only. However, this fine-tuning for multimodal language is neither trivial nor yet studied; simply because both BERT and XLNet only expect linguistic input. 
Therefore, in applying BERT and XLNet to multimodal language, one must either (a) forfeit the nonverbal information and fine-tune for language, or (b) simply extract word representations and proceed to use a state-of-the-art model for multimodal studies. In this paper, we present a successful framework for fine-tuning BERT and XLNet for multimodal input. Our framework allows the BERT and XLNet core structures to remain intact, and only attaches a carefully designed Multimodal Adaptation Gate (MAG) to the models. Using an attention conditioned on the nonverbal behaviors, MAG essentially maps the informative visual and acoustic 2360 factors to a vector with a trajectory and magnitude. During fine-tuning, this adaptation vector modifies the internal state of the BERT and XLNet, allowing the models to seamlessly adapt to the multimodal input. In our experiments we use the CMU-MOSI (Zadeh et al., 2016) and CMU-MOSEI (Zadeh et al., 2018d) datasets of multimodal language, with a specific focus on the core NLP task of multimodal sentiment analysis. We compare the performance of MAG-BERT and MAG-XLNet to the above (a) and (b) scenarios in both classification and regression sentiment analysis. Our findings demonstrate that fine-tuning these advanced pre-trained Transformers using MAG yields consistent improvement, even though BERT and XLNet were never trained on multimodal data. The contributions of this paper are therefore summarized as: • We propose an efficient framework for finetuning BERT and XLNet for multimodal language data. This framework uses a component called Multimodal Adaptation Gate (MAG) that introduces minimal overhead to both the models. • MAG-BERT and MAG-XLNet set new state of the art in both CMU-MOSI and CMUMOSEI datasets, when compared to scenarios (a) and (b). For CMU-MOSI, MAG-XLNet achieves performance on par with reported human performance. 2 Related Works The studies in this paper are related to the following research areas: 2.1 Multimodal Language Analyses Multimodal language analyses is a recent research trend in natural language processing (Zadeh et al., 2018b) that helps us understand language from the modalities of text, vision and acoustic. These analyses have particularly focused on the tasks of sentiment analysis (Poria et al., 2018), emotion recognition (Zadeh et al., 2018d), and personality traits recognition (Park et al., 2014). Works in this area often focus on novel multimodal neural architectures (Pham et al., 2019; Hazarika et al., 2018) and multimodal fusion approaches (Liang et al., 2018; Tsai et al., 2018). Related to content in this paper, we discuss some of the models in this domain including TFN, MARN, MFN, RMFN and MulT. Tensor Fusion Network (TFN) (Zadeh et al., 2017) creates a multi-dimensional tensor to explicitly capture all possible interactions between the three modalities: unimodal, bimodal and trimodal. Multiattention Recurrent Network (MARN) (Zadeh et al., 2018c) uses three separate hybrid LSTM memories that have the ability to propagate the cross-modal interactions. Memory Fusion Network (Zadeh et al., 2018a) synchronizes the information from three separate LSTMs through a multi-view gated memory. Recurrent Memory Fusion Network (RMFN) (Liang et al., 2018) captures the nuanced interactions among the modalities in a multi-stage manner, giving each stage the ability to focus on a subset of signals. 
Multimodal Transformer for Unaligned Multimodal Language Sequences (MulT) (Tsai et al., 2019) deploys three Transformers – each for one modality – to capture the interactions with the other two modalities in a selfattentive manner. The information from the three Transformers are aggregated through late-fusion. 2.2 Pre-trained Language Representations Learning word representations from large corpora has been an active research area in NLP community (Mikolov et al., 2013; Pennington et al., 2014). Glove (Pennington et al., 2014) and Word2Vec (Mikolov et al., 2013) contributed to advancing the state-of-the-art of many NLP tasks. A major setback of these word representations is their non-contextual nature. Recently, contextual language representation models trained on large text corpora have achieved state of the art results on several NLP tasks including question answering, sentiment classification, part-of-speech (POS) tagging and similarity modeling(Peters et al., 2018; Devlin et al., 2018). The first two notable contextual representation based models were ELMO (Peters et al., 2018) and GPT (Radford et al., 2018). However, they only captured unidirectional context and therefore, missed more nuanced interactions among words of a sentence. BERT (Bidirectional Encoder Representations from Transformers) (Devlin et al., 2018) outperforms both ELMO and GPT since it can provide better representation through capturing bi-directional context using Transformers. XLNet(Dai et al., 2019) gives new contextual representations through building an auto-regressive model capable of capturing all possible factorizations of the input. Fine-tuning pretrained mod2361 els for BERT and XLNet has been a key factor in achieving state of the art performance for downstream tasks. Even though previous works have explored using BERT to model multimodal data (Sun et al., 2019), to the best of our knowledge, directly fine-tuning BERT or XLNet for multimodal data has not been explored in previous works. 3 BERT and XLNet To better understand the proposed multimodal framework in this paper, we first present an overview of both the BERT and XLNet models. We start by quickly formalizing the operations within Transformer and Transformer-XL models, followed by an overview of BERT and XLNet. 3.1 Transformer Transformer is a non-recurrent neural architecture designed for modeling sequential data (Vaswani et al., 2017). The superior performance of Transformer model is largely credited to a Multi-head Self-Attention module. Using this module, each element of a sequence is attended by conditioning on all the other sequence elements. Figure 2 summarizes internal operations of a Transformer layer (for M such layers). Commonly, a Transformer uses an encoder-decoder paradigm. A stack of encoders is followed by a stack of decoders to map an input sequence to an output sequence. An additional embedding step with Positional Input Embedding is applied before the input goes through the stack of encoders and decoders. 3.2 Transformer-XL Transformer-XL (Dai et al., 2019) is an extension of the Transformer which offers two improvements: a) it enhances the capability of the Transformer to capture long-range dependencies (specifically for the case of context fragmentation), and b) it improves the capability to better predict first few symbols (which are often crucial for the rest of the sequence). 
It does so with a recurrence mechanism designed to pass context information from one segment to the next and a relative positional encoding mechanism to enable state reuse without causing temporal confusion. 3.3 BERT BERT is a successful language model that provides rich contextual word representation (Devlin et al., 2018). It follows an auto-encoding approach – masking out a portion of input tokens and predicting those tokens based on all other non-masked tokens – and thus learning a vector representation for the masked out tokens in that process. We use the variant of BERT used for Single Sentence Classification Tasks. First, input embeddings are generated from a sequence of word-piece tokens by adding token embeddings, segment embeddings and position embeddings . Then multiple Encoder layers are applied on top of these input embeddings. Each Encoder has a Multi-Head Attention layer and a Feed Forward layer, each followed by a residual connection with layer normalization. A special [CLS] token is appended in front of the input token sequence. So, for a N length input sequence, we get N + 1 vectors from the last Encoder layer – the first of those vectors is used to predict the label of the input after that vector undergoes an affine transformation. 3.4 XLNet XLNet (Yang et al., 2019) sets out to improve two critical aspects of the BERT model: a) independence among the masked out tokens and b) pretrainfinetune discrepancy in training vs inference, since inference inputs do not have masked out tokens. XLNet is an auto-regressive model and therefore, is free from the need of masking out certain tokens. However, auto-regressive models usually capture the unidirectional context (either forward or backward). XLNet can learn bidirectional context by maximizing likelihood over all possible permutations of factorization order. In essence, it randomly samples multiple factorization orders and trains the model on each of those orders. Therefore, it can model input by taking all possible permutations into consideration (in expectation). XLNet utilizes two key ideas from TransformerXL (Dai et al., 2019): relative positioning and segment recurrence mechanism. Like BERT, it also has a Input Embedder followed by multiple Encoders. The Embedder converts the input tokens into vectors after adding token embedding, segment embedding and relative positional embedding information. Each encoder consists of a Multi-Head attention layer and a feed forward layer – each followed by a residual addition and normalization layer. The embedder output is fed into the encoders to get a contextual representation of input. 2362 Attention Gating Shifting Lexical Input Acoustic Input Visual Input 𝑍" 𝐴" 𝑉" 𝐻" &𝑍" Figure 1: Multimodal Adaptation Gate (MAG) takes as input a lexical input vector, as well as its visual and acoustic accompaniments. Subsequently, an attention over lexical and nonverbal dimensions is used to fuse the multimodal data into another vector, which is subsequently added to the input lexical vector (shifting). 4 Multimodal Adaptation Gate (MAG) In multimodal language, a lexical input is accompanied by visual and acoustic information - simply gestures and prosody co-occurring with language. Consider a semantic space that captures latent concepts (positions in the latent space) for individual words. In absence of multimodal accompaniments, the semantic space is directly conditioned on the language manifold. 
Simply put, each word falls within some part of this semantic space, depending only on the meaning of the word in a linguistic structure (i.e. sentence). Nonverbal behaviors can have an impact on the meaning of words, and therefore on the position of words in this semantic space. Together, language and nonverbal accompaniments decide on the new position of the word in the semantic space. In this paper, we regard to this new position as addition of the language-only position with a displacement vector; a vector with trajectory and magnitude that shifts the language-only position of the word to the new position in light of nonverbal behaviors. This is the core philosophy behind the Multimodal Adaptation Gate (MAG). A particularly appealing implementation of such displacement is studied in RAVEN (Wang et al., 2018), where displacements are calculated using cross-modal self-attention to highlight relevant nonverbal information. Figure 1 shows the studied MAG in this paper. Essentially, a MAG unit receives three inputs, one is purely lexical, one is visual, and the last one is acoustic. Let the triplet (Zi,Ai,Vi) denote these inputs for ith word in a sequence. We break this displacement into bimodal factors [Zi;Ai] and [Zi;Vi] by concatenating lexical vector with acoustic and visual information respectively and use them to produce two gating vectors gv i and ga i : gv i = R(Wgv[Zi;Vi] + bv) (1) ga i = R(Wga[Zi;Ai] + ba) (2) where Wgv, Wga are weight matrices for visual and acoustic modality and bv and ba are scalar biases. R(x) is a non-linear activation function. These gates highlight the relevant information in visual and acoustic modality conditioned on the lexical vector. We then create a non-verbal displacement vector Hi by fusing together Ai and Vi multiplied by their respective gating vectors: Hi = ga i ⋅(WaAi) + gv i ⋅(WvVi) + bH (3) where Wa and Wv are weight matrices for acoustic and visual information respectively and bH is the bias vector. Subsequently, we use a weighted summation between Zi and its nonverbal displacement Hi to create a multimodal vector ¯Zi: ¯Zi = Zi + αHi (4) α = min( ∥Zi∥2 ∥Hi∥2 β,1) (5) where β is a hyper-parameter selected through the cross-validation process. ∥Zi∥2 and ∥Hi∥2 denote the L2 norm of the Zi and Hi vectors respectively. We use the scaling factor α so that the effect of nonverbal shift Hi remains within a desirable range. Finally, we apply a layer normalization and dropout layer to ¯Zi. 2363 Input Embedder …. 𝐶𝐿𝑆 Multimodal Adaptation Gate Multi-Head Attention Feed Forward Add & Norm Add & Norm 𝐿1 𝐿2 𝐿𝑁 …. 𝐸𝐶𝐿𝑆 𝐸1 𝐸2 𝐸𝑁 𝑍) * 𝐴) 𝑉) Multimodal Adaptation Gate 𝑍* 𝐴𝑉Multimodal Adaptation Gate 𝑍. * 𝐴. 𝑉. Multi-Head Attention Feed Forward Add & Norm Add & Norm (𝑀−𝑗)× 𝑗× ̅𝑍678 * ̅𝑍) * ̅𝑍* ̅𝑍. * ̅𝑍678 9 ̅𝑍) 9 ̅𝑍9 ̅𝑍. 9 …. Figure 2: Best viewed zoomed in and in color. The Transformer architecture of BERT/XLNet with MAG applied at jth layer. We consider a total of M layers within the pretrained Transformer. MAG can be applied at different layers of the pretrained Transformers. 4.1 MAG-BERT MAG-BERT is a combination of MAG applied to a certain layer of BERT network (Figure 2 demonstrates the structure of MAG-BERT as well as MAG-XLNet). Essentially, at each layer, BERT contains lexical vectors for ith word in the sequence. For the same word, nonverbal accompaniments are also available in multimodal language setup. 
MAG essentially forms an attachment to the desired layer in BERT; an attachment that allows for multimodal information to leak into the BERT model and displace the lexical vectors. The operations within MAG allows for the lexical vectors within BERT to adapt to multimodal information by changing their positions within the semantic space. Aside from the attachment of MAG, no change is made to the BERT structure. Given an N length language sequence L = [L1,L2,...LN] carrying word-piece tokens, a [CLS] token is appended to L so that we can use it later for class label prediction. Then, we input L to the Input Embedder which outputs E = [ECLS,E1,E2,...EN] after adding token, segment and position embeddings. Then, we input E to the first Encoding layer and then apply j Encoders on it successively. After that encoding process, we get the output Zj = [Zj CLS,Zj 1,Zj 2,...Zj N] which denotes the Lexical Embeddings after j layers of Encoding. For injecting audio-visual information into these embeddings, we prepare a sequence of triplets [(Zj i ,Ai,Vi) ∶∀i ∈{CLS,[1,N]}] by pairing Zj i with the corresponding (Ai,Vi). Each of these triplets are passed through the Multimodal Adaptation Gate which transforms the ith triplet into ¯Zj i – a unified multimodal representation of the corresponding Lexical Embedding. As there exists M = 12 Encoder layers in our BERT model, we input ¯ Zj = [ ¯Zj 1, ¯Zj 2,... ¯Zj N] to the next Encoder and apply M −j Encoder layers on it successively. At the end, we get ¯ZM from the Mth Encoder layer. As the first element ¯ZM CLS represents the [CLS] token, it has the information necessary to make a class label prediction. Therefore, ¯ZM CLS goes through an affine transformation to produce a single real-value which can be used to predict a class label. 4.2 MAG-XLNet Like MAG-BERT, MAG-XLNet also has the capability of injecting audio-visual information at any of its layers using MAG. At each position j of any of its layer, it holds the lexical vector corresponding to that position. Utilizing the audio-visual information available for that position, it can invoke MAG to get an appropriately shifted lexical vector in multimodal space. Although it mostly follows the general paradigm presented in Figure 2 verbatim, it uses the XLNet specific Embedder and Encoders. One other key difference is the position of the [CLS] token. Unlike BERT, the [CLS] token is appended at the right end of the input token 2364 sequence, and therefore in all the intermediate representations, the vector corresponding to the [CLS] will be the rightmost one. Following the same logic, the output from the final Encoding layer will be ¯ZM = [ ¯ZM 1 , ¯ZM 2 ,... ¯ZM N , ¯ZM CLS]. The last item, ¯ZM CLS can be used for class label prediction after it goes through an affine transformation. 5 Experiments In this section we outline the experiments in this paper. We first start by describing the datasets, followed by description of extracted features, baselines, and experimental setup. 5.1 CMU-MOSI Dataset CMU-MOSI (CMU Multimodal Opinion Sentiment Intensity) is a dataset of multimodal language specifically focused on multimodal sentiment analysis (Zadeh et al., 2016). CMU-MOSI contains 2199 video segments taken from 93 Youtube movie review videos. The dataset has real-valued highagreement sentiment intensity annotations in the range [−3,+3]. 
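Before turning to the input features, it may help to see the MAG unit of Section 4 (Equations 1-5) written out. The sketch below is a minimal PyTorch rendering, not the released implementation: the hidden sizes, the choice of ReLU for the activation R, the default β, and the small epsilon in the norm ratio are assumptions for illustration.

```python
import torch
import torch.nn as nn

class MAG(nn.Module):
    """Minimal sketch of the Multimodal Adaptation Gate (Equations 1-5)."""

    def __init__(self, text_dim=768, acoustic_dim=74, visual_dim=47,
                 beta=0.5, dropout=0.1):
        super().__init__()
        self.W_gv = nn.Linear(text_dim + visual_dim, text_dim)    # gate from [Z; V]
        self.W_ga = nn.Linear(text_dim + acoustic_dim, text_dim)  # gate from [Z; A]
        self.W_v = nn.Linear(visual_dim, text_dim)
        self.W_a = nn.Linear(acoustic_dim, text_dim)
        self.b_H = nn.Parameter(torch.zeros(text_dim))
        self.beta = beta
        self.norm = nn.LayerNorm(text_dim)
        self.dropout = nn.Dropout(dropout)

    def forward(self, Z, A, V):              # each: (batch, seq_len, modality_dim)
        g_v = torch.relu(self.W_gv(torch.cat([Z, V], dim=-1)))    # Eq. (1)
        g_a = torch.relu(self.W_ga(torch.cat([Z, A], dim=-1)))    # Eq. (2)
        H = g_a * self.W_a(A) + g_v * self.W_v(V) + self.b_H      # Eq. (3)

        # Eq. (5): scale the nonverbal shift so it stays within a desirable range.
        eps = 1e-6
        alpha = torch.clamp(
            Z.norm(dim=-1, keepdim=True)
            / (H.norm(dim=-1, keepdim=True) + eps) * self.beta,
            max=1.0)
        Z_bar = Z + alpha * H                                      # Eq. (4)
        return self.dropout(self.norm(Z_bar))
```

A forward pass over the per-word triplets (Zi, Ai, Vi) then yields the shifted lexical vectors that the remaining M − j encoder layers consume, as in Figure 2.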
5.2 Computational Descriptors For each modality, the following computational descriptors are available: Language: We transcribe the videos using Youtube API followed by manual correction. Acoustic: COVAREP (Degottex et al., 2014) is used to extract the following relevant features: fundamental frequency, quasi open quotient, normalized amplitude quotient, glottal source parameters (H1H2, Rd, Rd conf), VUV, MDQ, the first 3 formants, PSP, HMPDM 0-24 and HMPDD 0-12, spectral tilt/slope of wavelet responses (peak/slope), MCEP 0-24. Visual: For the visual modality, the Facet library (iMotions, 2017) is used to extract a set of visual features including facial action units, facial landmarks, head pose, gaze tracking and HOG features. For each word, we align all three modalities following the convention established in (Chen et al., 2017). Firstly, the word alignment between language and audio is obtained using forced alignment (Yuan and Liberman, 2008). Afterwards, the boundary of each word denotes the co-occurring visual and acoustic features (FACET and COVAREP). Subsequently, for each word, the co-occurring acoustic and visual features are averaged across each feature – thus achieving Ai and Vi vectors corresponding to word i. 5.3 Baseline Models We compare the performance of MAG-BERT and MAG-XLNet to a variety of state-of-the-art models for multimodal language analysis. These models are trained using extracted BERT and XLNet word embeddings as their language input: TFN (Tensor Fusion Network) explicitly models both intra-modality and inter-modality dynamics (Zadeh et al., 2017) by creating a multidimensional tensor that captures unimodal, bimodal and trimodal interactions across three modalities. MARN (Multi-attention Recurrent Network) models view-specific interactions using hybrid LSTM memories and cross-modal interactions using a Multi-Attention Block (MAB) (Zadeh et al., 2018c). MFN (Memory Fusion Network) has three separate LSTMs to model each modality separately and a multi-view gated memory to synchronize among them (Zadeh et al., 2018a). RMFN (Recurrent Memory Fusion Network) captures intra-modal and inter-modal information through recurrent multi-stage fashion (Liang et al., 2018). MulT (Multimodal Transformer for Unaligned Multimodal Language Sequence) uses three sets of Transformers and combines their output in a late fusion manner to model a multimodal sequence (Tsai et al., 2019). We use the aligned variant of the originally proposed model, which achieves superior performance over the unaligned variant. We also compare our model to fine-tuned BERT and XLNet using language modality only to measure the success of the MAG framework. 5.4 Experimental Design All the models in this paper are trained using Adam (Kingma and Ba, 2014) optimizer with learning rates between {0.001,0.0001,0.00001}. We use dropouts of {0.1,0.2,0.3,0.4,0.5} for training each model. LSTMs in TFN, MARN, MFN, RMFN, LFN use latent size of {16,32,64,128}. For MulT, we use {3,5,7} layers in the network and {1,3,5} attention heads. All models use the designated validation set of CMU-MOSI for finding best hyper-parameters. 2365 We perform two different evaluation tasks on CMU-MOSI datset: i) Binary Classification, and ii) Regression. We formulate it as a regression problem and report Mean-absolute Error (MAE) and the correlation of model predictions with true labels. Besides, we convert the regression outputs into categorical values to obtain binary classification accuracy (BA) and F1 score. 
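As a small illustration of the word-level alignment step from Section 5.2 (averaging the acoustic and visual descriptors that co-occur with each word), consider the sketch below. The frame rate, the zero-vector fallback for empty spans, and the toy feature dimensionality are assumptions; the paper obtains the word boundaries via forced alignment.

```python
import numpy as np

def average_per_word(frame_feats, word_spans, frame_rate=30.0):
    """Average frame-level descriptors (e.g. COVAREP or FACET) over each word span.
    word_spans holds (start_sec, end_sec) boundaries from forced alignment."""
    word_vectors = []
    for start_sec, end_sec in word_spans:
        lo = int(np.floor(start_sec * frame_rate))
        hi = max(int(np.ceil(end_sec * frame_rate)), lo + 1)
        span = frame_feats[lo:hi]
        if span.size == 0:                           # word lies outside the features
            word_vectors.append(np.zeros(frame_feats.shape[1]))
        else:
            word_vectors.append(span.mean(axis=0))   # average across the span
    return np.stack(word_vectors)                    # shape: (num_words, feat_dim)

# Example: 3 words over 2 seconds of 35-dim visual features sampled at 30 Hz.
feats = np.random.randn(60, 35)
spans = [(0.00, 0.40), (0.40, 1.10), (1.10, 2.00)]
V = average_per_word(feats, spans)                   # V[i] plays the role of V_i
```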
Higher value means better performance for all the metrics except MAE. We use two evaluation metrics for BA and F1, one used in (Zadeh et al., 2018d) and one used in (Tsai et al., 2019). 6 Results and Discussion Table 1 shows the results of the experiments in this paper. We summarize the observations from the results in this table as following: 6.1 Performance of MAG-BERT In all the metrics across the CMU-MOSI dataset, we observe that performance of MAG-BERT is superior to state-of-the-art multimodal models that use BERT word embeddings. Furthermore, MAGBERT also performs superior to fine-tuned BERT. This essentially shows that the MAG component is allowing the BERT model to adapt to multimodal information during fine-tuning, thus achieving superior performance. 6.2 Performance of MAG-XLNet A similar performance trend to MAG-BERT is also observed for MAG-XLNet. Besides superior performance than baselines and fine-tuned XLNet, MAG-XLNet achieves near-human level performance for CMU-MOSI dataset. Furthermore, we train MulT using the fine-tuned XLNet embeddings and get the following performance: 83.6/85.3,82.6/84.2,0.810,0.759 which is lower than both MAG-XLNet and XLNet. It is notable that the p-value for student t-test between MAGXLNet and XLNet in Table 1 is lower than 10e −5 for all the metrics. The motivation behind the experiments reported in Table 1 is as follows: we extracted word embeddings from pre-trained BERT and XLNet models and trained the baseline models using those embeddings. Since BERT and XLNet are often perceived to provide better word embeddings than Glove, it is not fair to compare MAG-BERT/MAG-XLNet with previous models trained with Glove embeddings. Therefore, we retrain previous works usTask Metric BA↑ F1↑ MAE↓ Corr↑ Original (glove) TFN 73.9/– 73.4/– 0.970/– 0.633/– MARN 77.1/– 77.0/– 0.968/– 0.625/– MFN 77.4/– 77.3/– 0.965/– 0.632/– RMFN 78.4/– 78.0/– 0.922/– 0.681/– LFN 76.4/– 75.7/– 0.912/– 0.668/– MulT –/83.0 –/82.8 –/0.871 –/0.698 BERT TFN 74.8/76.0 74.1/75.2 0.955 0.649 MARN 77.7/78.9 77.9/78.2 0.938 0.691 MFN 78.2/79.3 78.1/78.4 0.911 0.699 RMFN 79.6/80.7 78.9/79.1 0.878 0.712 LFN 79.1/80.2 77.3/78.1 0.899 0.701 MulT 81.5/84.1 80.6/83.9 0.861 0.711 BERT 83.5/85.2 83.4/85.2 0.739 0.782 MAG-BERT 84.2/86.1 84.1/86.0 0.712 0.796 XLNet TFN 78.2/80.1 78.2/78.8 0.914 0.713 MARN 78.3/79.5 78.8/79.6 0.921 0.707 MFN 78.3/79.9 78.4/79.1 0.898 0.713 RMFN 79.1/81.0 78.6/80.0 0.901 0.703 LFN 80.2/82.9 79.1/81.6 0.862 0.701 MulT 81.7/84.4 80.4/83.1 0.849 0.738 XLNet 84.7/86.7 84.6/86.7 0.676 0.812 MAG-XLNet 85.7/87.9 85.6/87.9 0.675 0.821 Human 85.7/87.5/0.710 0.820 Table 1: Sentiment prediction results on CMU-MOSI dataset. Best results are highlighted in bold. MAGBERT and MAG-XLNet achieve superior performance than the baselines and their language-only finetuned counterpart. BA denotes binary accuracy (higher is better, same for F1), MAE denotes Mean-absolute Error (lower is better), and Corr is Pearson Correlation (higher is better). For BA and F1, we report two numbers: the number on the left side of “/” is measures calculated based on (Zadeh et al., 2018c) and the right side is measures calculated based on (Tsai et al., 2019). Human performance for CMU-MOSI is reported as (Zadeh et al., 2018a). Model E 1 4 6 8 12 A ⊕ ⊙ MAG-XLNet 80.1 85.6 84.1 84.1 83.8 83.6 64.0 60.0 55.8 Table 2: Results of variations of XLNet model: MAG applied at different layers of the XLNet model, inputlevel concatenation and addition of all modalities. 
“E” denotes application of MAG immediately after embedding layer of the XLNet and “A” denotes applying MAG after the embedding layer and all the subsequent Encoding layers. ⊕and ⊙denote input-level addition and concatenation of all modalities respectively. MAG applied at initial layers performs better overall. ing BERT/XLNet embeddings to establish a more 2366 # Spoken words + acoustic and visual behaviors Ground Truth MAGXLNet XLNet 1 “And it really just lacked what made the other movies more enjoyable.” + Frustrated and disappointed tone -1.4 -1.41 -0.9 2 “But umm I liked it.” + Emphasis on tone + positive shock through sudden eyebrow raise 1.8 1.9 1.2 3 “Except their eyes are kind of like this welcome to the polar express.” + tense voice + frown expression -0.6 -0.6 0.8 4 “Straight away miley cyrus acting miley cyrus, or lack of, she had this same expression throughout the entire film” + sarcastic voice + frustrated facial expression -1.0 -1.2 0.2 Table 3: Examples from the CMU-MOSI dataset. The ground truth sentiment labels are between strongly negative (-3) and strongly positive (+3). For each example, we show the Ground Truth and prediction output of both the MAG-XLNet and XLNet. XLNet seems to be replicating language modality mostly while MAG-XLNet is integrating the non-verbal information successfully. fair comparison between proposed approach in this paper, and previous work. Based on the information from Table 1, we observe that MAGBERT/MAG-XLNet models outperforms various baseline models using BERT/XLNet/Glove models substantially. 6.3 Adaptation at Different Layers We also study the effect of applying MAG at different encoder layers of the XLNet. Specifically, we first apply the MAG to the output of the embedding layer. Subsequently, we apply the MAG to the layer j ∈{1,4,6,8,12} of the XLNet. Then, we apply MAG at all the XLNet layers. From Table 2, we observe that earlier layers are more suitable for application of MAG. We believe that earlier layers allow for better integration of the multimodal information, as they allow the word shifting to happen from the beginning of the network. If the semantics of words should change based on the nonverbal accompaniments, then initial layers should reflect the semantic shift, otherwise, those layers are only working unimodally. Besides, the higher layers of BERT learn more abstract and higher-level information about the syntactic and semantic structure of linguistic features (Coenen et al., 2019). Since, the acoustic and visual information present in our model corresponds to each word in the utterance, it will be more difficult for the MAG to shift the vector extracted from a later layer since that vector’s information will be very abstract in nature. 6.4 Input-level Concatenation and Addition From Table 2, we see that both input-level concatenation and addition of modalities perform poorly. For Concatenation, we simply concatenate all the modalities. For Addition, we add the audio and visual information to the language embedding after mapping both of them to the language dimension. These results demonstrate the rationale behind using an advanced fusion mechanism like MAG. 6.5 Results on Comparable Datasets We also perform experiments on the CMU-MOSEI dataset (Zadeh et al., 2018d) to study the generalization of our approach to other multimodal language datasets. Unlike CMU-MOSI which has sentiment annotations at utterance level, CMU-MOSEI has sentiment annotations at sentence level. 
The experimental methodology for CMU-MOSEI is similar to the original paper. For the sake of comparison, we suffice1 to comparing the binary accuracy and f1 score for the top 3 models in Table 1. In BERT category, we compare the performance of MulT (with BERT embeddings), BERT and MAG-BERT which are respectively as follows: [83.5,82.9] for MulT, [83.9,83.9] for BERT, and [84.7,84.5] for MAG-BERT. Similarly for XLNET category, the results for MulT (with XLNet embeddings), XLNet and MAG-XLNet are as follows: [84.1,83.7] for MulT, [85.4,85.2] for XLNet and [85.6,85.7] for MAG-XLNet. Therefore, superior performance of 1Since Transformer based models take a long time to train for CMU-MOSEI 2367 MAG-BERT and MAG-XLNet also generalizes to CMU-MOSEI dataset. 6.6 Fine-tuning Effect We study whether or not the superior performance of the MAG-BERT and MAG-XLNet is related to successful finetuning of the models, or related to other factors e.g. any transformer with architecture like BERT or XLNet would achieve superior performance regardless of being pretrained. By randomly initializing the weights of BERT and XLNet within MAG-BERT and MAG-XLNet, we get the following performance on BA for the CMU-MOSI: 70.1 and 70.7 respectively. This indicates that the success of the MAG-BERT and MAG-XLNet is due to successful fine-tuning. Even on the larger CMU-MOSEI dataset we get BA of 76.8 and 78.4 for MAG-BERT and MAG-XLNet, which further substantiates the fact that fine-tuning is successful using MAG framework. 6.7 Qualitative Analysis In Table 3, we present some examples where MAGXLNet adjusted sentiment intensity properly by taking into account nonverbal information. The examples demonstrate that MAG-XLNET can successfully integrate the non-verbal modalities with textual information. In both Example-1 and Example-2, XLNet correctly predicted the polarity of the displayed emotion. However, additional information was present in the acoustic and visual domain which XLNet could not utlize. Given those information, MAGXLNet could better predict the magnitude of emotion displayed in both cases. Although the emotion in the text of Example-3 can be portrayed as a bit positive, the tense voice and frown expression helps MAG-XLnet reverse the polarity of predicted emotion. Similarly, the text in Example-4 is mostly neutral, but MAGXLNet can predict the negative emotion through the sarcastic vocal and frustrated facial expression. 7 Conclusion In this paper, we introduced a method for efficiently finetuning large pre-trained Transformer models for multimodal language. Using a proposed Multimodal Adaptation Gate (MAG), BERT and XLNet were successfully fine-tuned in presence of vision and acoustic modalities. MAG essentially poses the nonverbal behavior as a vector with a trajectory and magnitude, which is subsequently used to shift lexical representations within the pre-trained Transformer model. A unique characteristic of MAG is that it makes no change to the original structure of BERT or XLNet, but rather comes as an attachment to both models. Our experiments demonstrated the superior performance of MAG-BERT and MAGXLNet. The code for both MAG-BERT and MAGXLNet are publicly available here 2 Acknowledgement This research was supported in part by grant W911NF-15-1-0542 and W911NF-19-1-0029 with the US Defense Advanced Research Projects Agency (DARPA) and the Army Research Office (ARO). Authors AZ and LM were supported by the National Science Foundation (Awards #1750439 #1722822) and National Institutes of Health. 
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of US Defense Advanced Research Projects Agency, Army Research Office, National Science Foundation or National Institutes of Health, and no official endorsement should be inferred. References Minghai Chen, Sen Wang, Paul Pu Liang, Tadas Baltruˇsaitis, Amir Zadeh, and Louis-Philippe Morency. 2017. Multimodal sentiment analysis with wordlevel fusion and reinforcement learning. In Proceedings of the 19th ACM International Conference on Multimodal Interaction, pages 163–171. ACM. Andy Coenen, Emily Reif, Ann Yuan, Been Kim, Adam Pearce, Fernanda Vi´egas, and Martin Wattenberg. 2019. Visualizing and measuring the geometry of bert. arXiv preprint arXiv:1906.02715. Zihang Dai, Zhilin Yang, Yiming Yang, William W Cohen, Jaime Carbonell, Quoc V Le, and Ruslan Salakhutdinov. 2019. Transformer-xl: Attentive language models beyond a fixed-length context. arXiv preprint arXiv:1901.02860. Gilles Degottex, John Kane, Thomas Drugman, Tuomo Raitio, and Stefan Scherer. 2014. Covarep—a collaborative voice analysis repository for speech technologies. In 2014 ieee international conference on acoustics, speech and signal processing (icassp), pages 960–964. IEEE. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep 2https://github.com/WasifurRahman/ BERT_multimodal_transformer 2368 bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Devamanyu Hazarika, Soujanya Poria, Amir Zadeh, Erik Cambria, Louis-Philippe Morency, and Roger Zimmermann. 2018. Conversational memory network for emotion recognition in dyadic dialogue videos. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pages 2122–2132. iMotions. 2017. Facial expression analysis. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Paul Pu Liang, Ziyin Liu, Amir Zadeh, and LouisPhilippe Morency. 2018. Multimodal language analysis with recurrent multistage fusion. arXiv preprint arXiv:1808.03920. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119. Sunghyun Park, Han Suk Shim, Moitreya Chatterjee, Kenji Sagae, and Louis-Philippe Morency. 2014. Computational analysis of persuasiveness in social multimedia: A novel dataset and multimodal prediction approach. In Proceedings of the 16th International Conference on Multimodal Interaction, pages 50–57. ACM. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. arXiv preprint arXiv:1802.05365. Hai Pham, Paul Pu Liang, Thomas Manzini, LouisPhilippe Morency, and Barnabas Poczos. 2019. Found in translation: Learning robust joint representations by cyclic translations between modalities. arXiv preprint arXiv:1812.07809. Soujanya Poria, Amir Hussain, and Erik Cambria. 2018. 
Multimodal Sentiment Analysis, volume 8. Springer. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. URL https://s3us-west-2. amazonaws. com/openai-assets/researchcovers/languageunsupervised/language understanding paper. pdf. Chen Sun, Austin Myers, Carl Vondrick, Kevin Murphy, and Cordelia Schmid. 2019. Videobert: A joint model for video and language representation learning. arXiv preprint arXiv:1904.01766. Yao-Hung Hubert Tsai, Shaojie Bai, Paul Pu Liang, J Zico Kolter, Louis-Philippe Morency, and Ruslan Salakhutdinov. 2019. Multimodal transformer for unaligned multimodal language sequences. arXiv preprint arXiv:1906.00295. Yao-Hung Hubert Tsai, Paul Pu Liang, Amir Zadeh, Louis-Philippe Morency, and Ruslan Salakhutdinov. 2018. Learning factorized multimodal representations. arXiv preprint arXiv:1806.06176. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Yansen Wang, Ying Shen, Zhun Liu, Paul Pu Liang, Amir Zadeh, and Louis-Philippe Morency. 2018. Words can shift: Dynamically adjusting word representations using nonverbal behaviors. arXiv preprint arXiv:1811.09362. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237. Jiahong Yuan and Mark Liberman. 2008. Speaker identification on the scotus corpus. Journal of the Acoustical Society of America, 123(5):3878. Amir Zadeh, Minghai Chen, Soujanya Poria, Erik Cambria, and Louis-Philippe Morency. 2017. Tensor fusion network for multimodal sentiment analysis. arXiv preprint arXiv:1707.07250. Amir Zadeh, Paul Pu Liang, Navonil Mazumder, Soujanya Poria, Erik Cambria, and Louis-Philippe Morency. 2018a. Memory fusion network for multiview sequential learning. In Thirty-Second AAAI Conference on Artificial Intelligence. Amir Zadeh, Paul Pu Liang, Louis-Philippe Morency, Soujanya Poria, Erik Cambria, and Stefan Scherer. 2018b. Proceedings of grand challenge and workshop on human multimodal language (challengehml). In Proceedings of Grand Challenge and Workshop on Human Multimodal Language (ChallengeHML). Amir Zadeh, Paul Pu Liang, Soujanya Poria, Prateek Vij, Erik Cambria, and Louis-Philippe Morency. 2018c. Multi-attention recurrent network for human communication comprehension. In Thirty-Second AAAI Conference on Artificial Intelligence. 2369 Amir Zadeh, Rowan Zellers, Eli Pincus, and LouisPhilippe Morency. 2016. Mosi: multimodal corpus of sentiment intensity and subjectivity analysis in online opinion videos. arXiv preprint arXiv:1606.06259. AmirAli Bagher Zadeh, Paul Pu Liang, Soujanya Poria, Erik Cambria, and Louis-Philippe Morency. 2018d. Multimodal language analysis in the wild: Cmu-mosei dataset and interpretable dynamic fusion graph. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 2236– 2246.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2370–2380 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2370 MultiQT: Multimodal Learning for Real-Time Question Tracking in Speech Jakob D. Havtorn Jan Latko Joakim Edin Lasse Borgholt Lars Maaløe Lorenzo Belgrano Nicolai F. Jacobsen Regitze Sdun ˇZeljko Agi´c Corti Store Strandstræde 21, 4 1255 Copenhagen K, Denmark [email protected] Abstract We address a challenging and practical task of labeling questions in speech in real time during telephone calls to emergency medical services in English, which embeds within a broader decision support system for emergency call-takers. We propose a novel multimodal approach to real-time sequence labeling in speech. Our model treats speech and its own textual representation as two separate modalities or views, as it jointly learns from streamed audio and its noisy transcription into text via automatic speech recognition. Our results show significant gains of jointly learning from the two modalities when compared to text or audio only, under adverse noise and limited volume of training data. The results generalize to medical symptoms detection where we observe a similar pattern of improvements with multimodal learning. 1 Introduction Our paper addresses the challenge of learning to discover and label questions in telephone calls to emergency medical services in English. The task is demanding in two key aspects: 1. Noise: A typical phone call to an emergency medical service differs significantly from data within most standard speech datasets. Most importantly, emergency calls are noisy by nature due to very stressful conversations conveyed over poor telephone lines. Automatic speech recognition (ASR) and subsequent text processing quickly becomes prohibitive in such noisy environments, where word error rates (WER) are significantly higher than for standard benchmark data (Han et al., 2017). For this reason, we propose a sequence labeler that makes use of two modalities of a phone call: audio and its transcription into text by utilizing an ASR model. Hereby we create a multimodal Figure 1: A speech sequence from our phone call dataset. Two audio segments are highlighted: a question (in blue) and a reported symptom (in yellow). architecture that is more robust to the adverse conditions of an emergency call. 2. Real-time processing: Our model is required to work incrementally to discover questions in real time within incoming streams of audio in order to work as a live decision support system. At runtime, no segmentation into sub-call utterances such as phrases or sentences is easily available. The lack of segmentation coupled with the real-time processing constraint makes it computationally prohibitive to discover alignments between speech and its automatic transcription. For these reasons, we cannot utilize standard approaches to multimodal learning which typically rely on near-perfect crossmodal alignments between short and well-defined segments (Baltruˇsaitis et al., 2018). Context and relevance. Learning to label sequences of text is one of the more thoroughly explored topics in natural language processing. In recent times, neural networks are applied not only to sequential labeling like part-of-speech tagging (Plank et al., 2016) or named entity recognition (Ma and Hovy, 2016), but also to cast into a labeling framework otherwise non-sequential tasks such as syntactic parsing (G´omez-Rodr´ıguez and Vilares, 2018; Strzyz et al., 2019). 
By contrast, assigning labels to audio sequences 2371 of human speech is comparatively less charted out. When addressed, speech labeling typically adopts a solution by proxy, which is to automatically transcribe speech into text, and then apply a text-only model (Surdeanu et al., 2005; Moll´a et al., 2007; Eidelman et al., 2010). The challenge then becomes not to natively label speech, but to adapt the model to adverse conditions of speech recognition error rates. Such models typically feature in end-to-end applications such as dialogue state tracking (Henderson et al., 2014; Ram et al., 2018). Recent advances in end-to-end neural network learning offer promise to directly label linguistic categories from speech alone (Ghannay et al., 2018). From another viewpoint, multimodal learning is successfully applied to multimedia processing where the modalities such as text, speech, and video are closely aligned. However, contributions there typically feature classification tasks such as sentiment analysis and not finer-grained multimedia sequence labeling (Zadeh et al., 2017). Our contributions. We propose a novel neural architecture to incrementally label questions in speech by learning from its two modalities or views: the native audio signal itself and its transcription into noisy text via ASR. 1. Our model utilizes the online temporal alignment between the input audio signal and its raw ASR transcription. By taking advantage of this fortuitous real-time coupling, we avoid having to learn the multimodal alignment over the entire phone call and its transcript, which would violate the real-time processing constraint that is crucial for decision support. 2. We achieve consistent and significant improvements from learning jointly from the two modalities compared to ASR transcriptions and audio only. The improvements hold across two inherently different audio sequence labeling tasks. 3. Our evaluation framework features a challenging real-world task with noisy inputs and realtime processing requirements. Under this adversity, we find questions and medical symptoms in emergency phone calls with high accuracy. Our task is illustrated in Figure 1. 2 Multimodal speech labeling We define the multimodal speech labeler MultiQT as a combination of three neural networks that we apply to a number of temporal input modalities. In our case, we consider speech and associated machine transcripts as the separate modalities or views. The model is illustrated in Figure 2. To obtain temporal alignment between speech and text, we propose a simple approach that uses the output of an ASR system as the textual representation. Here, we take the ASR to be a neural network trained with the connectionist temporal classification (CTC) loss function (Graves et al., 2006). Given audio, it produces a temporal softmax of length Ts with a feature dimension defined as a categorical distribution, typically over characters, words or subword units, per timestep. We refer to a sequence of input representations of the audio modality as (x(t) a )t∈[1..Ta] and of the textual modality as (x(t) s )t∈[1..Ts]. From the input sequences we compute independent unimodal representations denoted by z(t) a and z(t) s by applying two unimodal transformations denoted by fa and fs, respectively. Each of these transformations is parameterized by a convolutional neural network with overall temporal strides sa and ss and receptive fields ra and rs. 
With $T_m$ as the length of the resulting unimodal representations:

$$z_a^{(t)} = f_a\Big(\big(x_a^{(i)}\big)_{i=s_a t - r_{a,l}}^{s_a t + r_{a,r}}\Big), \qquad z_s^{(t)} = f_s\Big(\big(x_s^{(i)}\big)_{i=s_s t - r_{s,l}}^{s_s t + r_{s,r}}\Big), \quad (1)$$

for $t \in [1..T_m]$, where $r_{a,l}$, $r_{a,r}$, $r_{s,l}$ and $r_{s,r}$ are the left and right half receptive fields of $f_a$ and $f_s$, respectively. For $f_a$, $r_{a,l} = \lfloor (r_a - 1)/2 \rfloor$ and $r_{a,r} = \lceil (r_a - 1)/2 \rceil$, and similarly for $f_s$. For $i < 1$ and $i > T_a$ we define $x_a^{(i)}$ and $x_s^{(i)}$ by zero padding, effectively padding with half the receptive field on the left and right sides of the input. This then implies that $T_m = \lfloor T_a/s_a \rfloor = \lfloor T_s/s_s \rfloor$, which constrains the strides according to $T_a$ and $T_s$ and functions as “same padding”. This lets us do convolutions without padding the internal representations for each layer in the neural networks, which in turn allows for online streaming.

To form a joint multimodal representation from $z_a^{(t)}$ and $z_s^{(t)}$ we join the representations along the feature dimension. In the multimodal learning literature such an operation is sometimes called fusion (Zadeh et al., 2017). We denote the combined multimodal representation by $z_m^{(t)}$ and obtain it in a time-bound manner such that for a certain timestep $z_m^{(t)}$ only depends on $z_a^{(t)}$ and $z_s^{(t)}$,

$$z_m^{(t)} = \mathrm{fusion}\big(z_a^{(t)}, z_s^{(t)}\big). \quad (2)$$

Figure 2: MultiQT model illustration for two timesteps i and j. We depict the convolutional transformations $f_a$ and $f_s$ of the audio and character temporal softmax inputs into the respective modality encodings $z_a^{(i)}$ and $z_s^{(i)}$, along with the corresponding receptive fields and strides: $r_a$, $s_a$ and $r_s$, $s_s$. The convolutions are followed by multimodal fusion and finally dense layers $g$ and $h$ to predict the question labels $\hat{y}^{(i)}$ and $\hat{y}^{(j)}$.

In our experiments $\mathrm{fusion}(\cdot)$ either denotes a simple concatenation, $[z_a^{(t)}; z_s^{(t)}]$, or a flattened outer product, $[1\; z_a^{(t)}] \otimes [1\; z_s^{(t)}]$. The latter is similar to the fusion introduced by Zadeh et al. (2017), but we do not collapse the time dimension since our model predicts sequential labels. Finally, $z_m^{(t)}$ is transformed before projection into the output space:

$$z_y^{(t)} = g\big(z_m^{(t)}\big), \quad (3)$$
$$\hat{y}^{(t)} = h\big(z_y^{(t)}\big), \quad (4)$$

where $g$ is a fully connected neural network and $h$ is a single dense layer followed by a softmax activation such that $\hat{y}^{(t)} \in \mathbb{R}^K$ is a vector of probabilities summing to one for each of the $K$ output categories. The predicted class is $\arg\max(\hat{y}^{(t)})$.

2.1 Objective functions

In general, the loss is defined as a function of all learnable parameters $\Theta$ and is computed as the average loss on $M$ examples in a mini-batch. We denote by $\{X_a, X_s\}$ a dataset consisting of $N$ pairs of input sequences of each of the two modalities. As short-hand notation, let $X_a^{(n)}$ refer to the $n$'th audio sequence example in $X_a$ and similarly for $X_s^{(n)}$. The mini-batch loss is then

$$\mathcal{L}\Big(\Theta; \big\{X_a^{(n)}, X_s^{(n)}\big\}_{n \in B_i}\Big) = \frac{1}{M} \sum_{n \in B_i} \mathcal{L}^{(n)}\big(\Theta; X_a^{(n)}, X_s^{(n)}\big), \quad (5)$$

where $B_i$ is an index set uniformly sampled from $[1..N]$ which defines the $i$'th batch of size $|B_i| = M$. The loss for each example, $\mathcal{L}^{(n)}$, is computed as the time-average of the loss per timestep,

$$\mathcal{L}^{(n)}\big(\Theta; X_a^{(n)}, X_s^{(n)}\big) = \frac{1}{T} \sum_{t=1}^{T} \mathcal{L}^{(n,t)}\big(\Theta; X_a^{(n,t_a)}, X_s^{(n,t_s)}\big), \quad (6)$$

where $t_a = [s_a t - r_{a,l} \,..\, s_a t + r_{a,r}]$ and similarly for $t_s$, since the dependency of the loss per timestep is only on a limited timespan of the input. The loss per timestep is defined as the categorical cross-entropy loss between the softmax prediction $\hat{y}^{(t)}$ and the one-hot encoded ground truth target $y^{(t)}$,

$$\mathcal{L}^{(n,t)}\big(\Theta; X_a^{(n,t_a)}, X_s^{(n,t_s)}\big) = -\sum_{k=1}^{K} y_k^{(t)} \log\big(\hat{y}_k^{(t)}\big).$$
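To make the preceding definitions concrete, the following is a minimal PyTorch-style sketch of the forward pass in Eqs. (1)-(4) together with the time-averaged cross-entropy of Eqs. (5)-(6). It is an illustration only: the class and function names, layer widths, kernel sizes and strides are placeholders rather than the configuration reported later in Section 4.1; the default `num_classes=6` stands for five question classes plus a “None” class; and the sketch assumes the audio sequence is twice as long as the ASR softmax ($T_a = 2T_s$), so that both encoders emit $T_m$ frames.

```python
# Minimal sketch of Eqs. (1)-(4) and the framewise cross-entropy of Eqs. (5)-(6).
# All names, widths, kernel sizes and strides are illustrative placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiQTSketch(nn.Module):
    def __init__(self, audio_dim=40, text_dim=29, hidden=128, num_classes=6,
                 fusion="concat"):
        super().__init__()
        # f_a and f_s: 1D convolutions over time. f_a strides by 4 overall and
        # f_s by 2, so both emit T_m frames under the assumption T_a = 2 * T_s.
        self.f_a = nn.Sequential(
            nn.Conv1d(audio_dim, hidden, kernel_size=11, stride=2, padding=5), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=11, stride=2, padding=5), nn.ReLU(),
        )
        self.f_s = nn.Sequential(
            nn.Conv1d(text_dim, hidden, kernel_size=11, stride=2, padding=5), nn.ReLU(),
        )
        self.fusion = fusion
        fused_dim = 2 * hidden if fusion == "concat" else (hidden + 1) ** 2
        self.g = nn.Sequential(nn.Linear(fused_dim, hidden), nn.ReLU())  # Eq. (3)
        self.h = nn.Linear(hidden, num_classes)                          # Eq. (4), pre-softmax

    def forward(self, x_a, x_s):
        # x_a: (batch, T_a, audio_dim) log-mel frames; x_s: (batch, T_s, text_dim) ASR softmax.
        z_a = self.f_a(x_a.transpose(1, 2)).transpose(1, 2)   # (batch, T_m, hidden), Eq. (1)
        z_s = self.f_s(x_s.transpose(1, 2)).transpose(1, 2)   # (batch, T_m, hidden), Eq. (1)
        t_m = min(z_a.size(1), z_s.size(1))                   # guard against off-by-one lengths
        z_a, z_s = z_a[:, :t_m], z_s[:, :t_m]
        if self.fusion == "concat":
            z_m = torch.cat([z_a, z_s], dim=-1)               # Eq. (2), concatenation
        else:
            # Flattened outer product of [1; z_a] and [1; z_s] per timestep (tensor fusion).
            ones = torch.ones(z_a.size(0), t_m, 1, device=z_a.device)
            za1 = torch.cat([ones, z_a], dim=-1).unsqueeze(-1)
            zs1 = torch.cat([ones, z_s], dim=-1).unsqueeze(-2)
            z_m = (za1 * zs1).flatten(start_dim=2)
        return self.h(self.g(z_m))                            # logits; softmax/argmax at inference


def framewise_loss(logits, targets):
    # Time-averaged categorical cross-entropy of Eqs. (5)-(6); targets: (batch, T_m) class ids.
    return F.cross_entropy(logits.transpose(1, 2), targets)
```

Switching `fusion="concat"` to any other value selects the flattened outer-product variant, mirroring the two fusion choices described above.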
The full set of learnable parameters $\Theta$ is jointly optimized by mini-batch stochastic gradient descent.

2.2 Multitask objective

In addition to the loss functions defined above, we also consider multitask training. This has been reported to improve performance in many different domains by including a suitably related auxiliary task (Bingel and Søgaard, 2017; Martínez Alonso and Plank, 2017). For the task of labelling segments in the input sequences as pertaining to annotations from among a set of $K-1$ positive classes and one negative class, we propose the auxiliary task of binary labelling of segments as pertaining to either the negative class or any of the $K-1$ positive classes. For question tracking, this amounts to doing binary labelling of segments that are questions of any kind. The hope is that this will make the training signal stronger since the sparsity of each of the classes, e.g. questions, is reduced by collapsing them into one shared class. We use the same loss function as above, but with the number of classes reduced to $K = 2$. The total multitask loss is a weighted sum of the $K$-class loss and the binary loss:

$$\mathcal{L}^{(n,t)}_{\mathrm{MT}} = \beta\, \mathcal{L}^{(n,t)}_{\mathrm{binary}} + (1-\beta)\, \mathcal{L}^{(n,t)}. \quad (7)$$

The tunable hyperparameter $\beta \in [0, 1]$ interpolates the task between regular $K$-class labeling for $\beta = 0$ and binary classification for $\beta = 1$. A minimal code sketch of this combined objective is given below.

Label | Description | Example | Count | Fraction
Q1 | Question about the address of the incident. | What’s the address? | 663 | 26.3%
Q2 | Initial question of the call-taker to begin assessing the situation. | What’s the problem? | 546 | 21.6%
Q3 | Question about the age of the patient. | How old is she? | 537 | 21.3%
Q4 | All questions related to the patient’s quality of breathing. | Is she breathing in a normal pattern? | 293 | 11.6%
Q5 | All questions about the patient’s consciousness or responsiveness. | Is he conscious and awake? | 484 | 19.2%

Table 1: Explanation and prevalence of the questions used for the experiments.

3 Data

Our dataset consists of 525 phone calls to an English-speaking medical emergency service. The call audio is mono-channel, PCM-encoded and sampled at 8000 Hz. The duration of the calls has a mean of 166 s (st. dev. 65 s, IQR 52 s). All calls are manually annotated for questions by trained native English speakers. Each question is annotated with its start and stop time and assigned one of 13 predefined question labels or an additional label for any question that falls outside of the 13 categories. Figure 1 illustrates these annotations. We observe an initial inter-annotator agreement of α = 0.8 (Krippendorff, 2018). Each call has been additionally corrected at least once by a different annotator to improve the quality of the data. On average it took roughly 30 minutes to annotate a single call. For our experiments, we choose the five most frequent question classes, which are explained in Table 1. Out of 24 hours of calls, the questions alone account for only 30 minutes (roughly 2%) of audio. For the experiments we use 5-fold cross-validation stratified by the number of questions in each call, such that calls of different lengths and contents are included in all folds.

We test our model on an additional speech sequence labeling challenge: tracking mentions of medical symptoms in incoming audio. By using another task we gauge the robustness of MultiQT as a general sequence labeling model and not only a question tracker, since symptom utterances in speech carry inherently different linguistic features than questions.
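As referenced above, the interpolated multitask objective of Eq. (7) can be sketched as follows. One point of interpretation is how the binary “any question vs. none” term is computed: the sketch derives the binary log-probabilities by collapsing the $K-1$ positive classes of the main softmax, which is one plausible reading of Section 2.2; a separate two-class output head would be another. All names and shapes are illustrative.

```python
# Sketch of the interpolated multitask objective of Eq. (7), assuming the binary
# term is obtained by collapsing the K-1 positive classes of the main softmax.
import torch
import torch.nn.functional as F


def multitask_loss(logits, targets, beta=0.5, none_index=0):
    # logits: (batch, T_m, K) unnormalized scores; targets: (batch, T_m) class ids.
    k_class = F.cross_entropy(logits.transpose(1, 2), targets)

    # Collapse the K-1 positive classes into a single "question" log-probability.
    log_probs = F.log_softmax(logits, dim=-1)
    none_lp = log_probs[..., none_index]
    positive = torch.cat([log_probs[..., :none_index],
                          log_probs[..., none_index + 1:]], dim=-1)
    question_lp = torch.logsumexp(positive, dim=-1)

    binary_log_probs = torch.stack([none_lp, question_lp], dim=-1)  # (batch, T_m, 2)
    binary_targets = (targets != none_index).long()                 # 0 = none, 1 = any question
    binary = F.nll_loss(binary_log_probs.transpose(1, 2), binary_targets)

    return beta * binary + (1.0 - beta) * k_class                   # Eq. (7)
```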
As our question-tracking data was not manually labeled for symptoms, we created silver-standard training and test sets automatically by propagating a list of textual keywords from the ground truth human transcripts back onto the audio signal as time stamps with a rule-based algorithm. The initial list contained over 40 medical symptoms, but in the experiment we retain the most frequent five: state of consciousness, breathing, pain, trauma, and hemorrhage. The utterances that we track are complex phrases with a high variance: There are many different ways to express a question or a medical symptom in conversation. This linguistic complexity sets our research apart from most work in speech labeling which is much closer to exact pattern matching (Salamon and Bello, 2017). 4 Experiments 4.1 Setup Inputs. The audio modality is encoded using 40 log-mel features computed with a window of 0.02 s and stride 0.01 s. The textual modality is formed by application of an ASR system to the audio modality. In all reported experiments, only ASR outputs are used and never human transcriptions, both in training and evaluation. The audio input to the ASR is encoded in the same way as described above. The ASR available to us has a purely convolutional architecture similar to the one in (Collobert et al., 2016) with an overall stride of 2. For MultiQT, this means that Ta = 2Ts. The ASR is trained on 600 hours of phone calls to medical emergency services in English from the same emergency service provider as the question and symptoms tracking datasets. Both of these are contained in the ASR test set. The ASR is trained using the connectionist temporal classification (CTC) loss function (Graves et al., 2006) and has a character error rate of 14 % and a word error rate of 31 %. Its feature dimension is 29 which corresponds to the English alphabet including apostrophe, space and a blank token for the CTC loss. 2374 Systems. The basic version of MultiQT uses a single softmax cross-entropy loss function and forms a time-bound multimodal representation by concatenating the unimodal representations. We then augment this model in three ways: 1. MultiQT-TF: tensor fusion instead of concatenation following Zadeh et al. (2017), 2. MultiQT-MT: auxiliary binary classification with β = 0.5, 3. MultiQT-TF-MT: combination of 1 and 2. Baselines. MultiQT can easily be adapted to a single modality by excluding the respective convolutional transformation fa or fs. For example, MultiQT can be trained unimodally on audio by removing fs and then defining z(t) m = z(t) a instead of concatenation or tensor fusion. We baseline the multimodal MultiQT models against versions trained unimodally on audio and text. We also compare MultiQT to two distinct baseline models: 1. Random forest (RF) 2. Fully connected neural network (FNN) Contrary to MultiQT, the baselines are trained to classify an input sequence into a single categorical distribution over the labels. At training, the models are presented with short segments of call transcripts in which all timesteps share the same label such that a single prediction can be made. The baselines are trained exclusively on text and both models represent the windowed transcript as a TF-IDF-normalized bag of words similar to Zhang et al. (2015). The bag of words uses word uni- and bigrams, and character tri-, four- and five-grams with 500 of each selected by χ2-scoring between labels and transcripts on the training set. Hyperparameters. We use 1D convolutions for fa and fs. 
For fa we use three layers with kernel sizes of 10, 20 and 40, filters of 64, 128 and 128 units and strides of 2, 2 and 2 in the first, second and third layer, respectively. For fs we use two layers with kernel sizes of 20 and 40, filters of 128 and 128 units and strides of 2 and 2. Before each nonlinear transformation in both fa and fs we use batch normalization (Ioffe and Szegedy, 2015) with momentum 0.99 and trainable scale and bias, and we apply dropout (Srivastava et al., 2014) with a dropout rate of 0.2. For g we use three fully connected layers of 256 units each and before each nonlinear transformation we use batch normalization as above and apply dropout with a dropout rate of 0.4. We l2 regularize all learnable parameters with a weighting of 0.1. The FNN model uses the same classifier as is used for g in MultiQT with a dropout rate of 0.3 and an l2 regularization factor of 0.05. All neural models are trained with the Adam optimizer (Kingma and Ba, 2015) using a learning rate of 1 × 10−4, β1 = 0.9 and β2 = 0.999 and batch size 6 except for those with tensor fusion which use a batch size of 1 due to memory constraints. Larger batch sizes were prohibitive since we use entire calls as single examples but results were generally consistent across different batch sizes. All hyperparameters were tuned manually and heuristically. It takes approximately one hour to train the base MultiQT model on one NVIDIA GeForce GTX 1080 Ti GPU card. Evaluation. For each model we report two F1 scores with respective precisions and recalls macroaveraged over the classes. – TIMESTEP: For each timestep, the model prediction is compared to the gold label. The metrics are computed per timestep and micro-averaged over the examples. This metric captures the model performance in finding and correctly classifying entire audio segments that represent questions and is sensitive to any misalignment. – INSTANCE: A more forgiving metric which captures if sequences of the same label are found and correctly classified with acceptance of misalignment. Here, the prediction counts as correct if there are at least five consecutive correctly labeled time steps within the sequence, as a heuristic to avoid ambiguity between classes. This metric also excludes the non-question label. The baseline models are evaluated per TIMESTEP by labeling segments from the test set in a sliding window fashion. The size of the window varies from 3 to 9 seconds to encompass all possible lengths of a question with the stride set to one word. Defining the stride in terms of words is possible because the ASR produces timestamps for the resulting transcript per word. 4.2 Results Labeling accuracy. The results are presented in Table 2. They show that for any model variation, the best performance is achieved when using both audio and text. 
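The INSTANCE metric described above hinges on a simple matching heuristic: a gold segment counts as found if its label is predicted for at least five consecutive timesteps inside it. A minimal sketch of that matching rule is given below; how false positives enter the precision side is not reproduced here, so the function only illustrates the per-segment matching step, and its name and argument layout are hypothetical.

```python
# Sketch of the INSTANCE matching heuristic: a gold question segment is counted
# as detected if the model predicts its label for at least `min_run` consecutive
# frames inside the segment.
def instance_match(pred, gold_segments, min_run=5):
    """pred: list of per-frame label ids; gold_segments: list of (start, end, label)."""
    hits = 0
    for start, end, label in gold_segments:
        run = longest = 0
        for t in range(start, end):
            run = run + 1 if pred[t] == label else 0
            longest = max(longest, run)
        hits += int(longest >= min_run)
    return hits, len(gold_segments)
```

For example, `instance_match([0, 2, 2, 2, 2, 2, 0], [(1, 6, 2)])` returns `(1, 1)`, i.e. the single gold segment is matched despite the surrounding non-question frames.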
The model performs the worst when using only audio which we hypothesize to be due 2375 INSTANCE TIMESTEP Model Modality P R F1 P R F1 RF-BOW T 61.8±3.5 88.5±0.9 72.2±2.2 39.3±1.1 70.4±1.0 48.1±1.0 FNN-BOW T 42.2±1.4 92.8±0.6 57.5±1.3 38.1±0.7 71.0±1.7 46.9±0.8 MultiQT A 87.4±1.9 60.6±4.0 70.3±3.1 79.2±1.3 57.8±3.3 65.0±2.4 MultiQT T 84.2±1.6 78.5±2.8 81.1±2.0 78.8±1.2 69.4±2.0 73.5±1.3 MultiQT A+T 83.6±2.2 83.3±2.5 83.3±1.6 75.7±2.2 73.8±2.3 74.5±1.3 MultiQT-MT A 84.6±5.1 57.4±3.9 66.2±2.9 77.7±5.6 56.0±2.8 62.8±2.0 MultiQT-MT T 81.9±1.1 80.6±2.8 81.0±1.8 75.9±1.5 71.2±2.4 73.3±1.7 MultiQT-MT A+T 85.2±2.7 83.2±1.2 84.1±2.0 78.5±2.5 74.0±0.7 76.0±1.1 MultiQT-TF A+T 85.0±1.8 83.3±2.6 83.9±1.7 78.9±2.1 75.2±2.3 76.7±1.2 MultiQT-TF-MT A+T 85.1±3.2 83.1±1.6 83.8±1.7 78.7±3.7 75.0±1.6 76.5±1.4 Table 2: Question tracking results on audio (A) and text (T) modalities with variations of MultiQT using modality concatenation (MultiQT) or tensor fusion (MultiQT-TF) and the auxiliary task (MultiQT-MT). The evaluation metrics are precision (P), recall (R), and (F1) at the macro level per TIMESTEP or INSTANCE. We report means and standard deviations for five-fold cross-validation runs. All F1 differences are statistically significant at p < 0.001, save for between MulitQT [T] & MulitQT-MT [T], and MulitQT [A+T] & MulitQT-TF-MT [A+T] (p ≈0.64). We employ the approximate randomization test with R = 1000 and Bonferonni correction (Dror et al., 2018). Bold face indicates the highest F1 score within each metric and MultiQT model group. to the increased difficulty of the task: While speech intonation may be a significant feature for detecting questions in general, discerning between specific questions is easier with access to transcribed keywords. Including the auxiliary binary classification task (MultiQT-MT) shows no significant improvement over MultiQT. We hypothesize that this may be due to training on a subset of all questions such that there are unlabelled questions in the training data which add noise to the binary task. Applying tensor fusion instead of concatenating the unimodal representations also does not yield significant improvements to MultiQT contrary to the findings by Zadeh et al. (2017). Since tensorfusion subsumes the concatenated unimodal representations by definition and appends all elementwise products, we must conclude that the multimodal interactions represented by the element-wise products either already exist in the unimodal representations, by correlation, are easily learnable from them or are too difficult to learn for MultiQT. We believe that the interactions are most likely to be easily learnable from the unimodal representations. Comparing any MultiQT variant with INSTANCE and TIMESTEP F1 clearly shows that INSTANCE is more forgiving, with models generally achieving higher values in this metric. The difference in performance between different combinations of the modalities is generally higher when measured per INSTANCE as compared to per TIMESTEP. The RF and FNN baseline models clearly underperform compared to MultiQT. It should be noted that both RF and FNN achieve F1-scores of around 85 when evaluated per input utterance, corresponding to the input they receive during training. On this metric, FNN also outperforms RF. However, both models suffer significantly from the discrepancy between the training and streaming settings as measured per the INSTANCE and TIMESTEP metrics; this effect is largest for the FNN model. Real-time tracking. 
One important use case of MultiQT is real-time labelling of streamed audio sequences and associated transcripts. For this reason, MultiQT must be able to process a piece of audio in a shorter time than that spanned by the audio itself. For instance, given a 1 s chunk of audio, MultiQT must process this in less than 1 s in order to maintain a constant latency from the time that the audio is ready to be processed to when it has been processed. To assess the real-time capability of MultiQT, we test it on an average emergency call using an NVIDIA GTX 1080 Ti GPU card. In our data, the average duration of an emergency call is 166 s. To simulate real-time streaming, we first process the call in 166 distinct one-second chunks using 166 sequential forward passes. This benchmark includes all overhead, such as the PCIe transfer of data to and from the GPU for each of the forward passes. The choice of 1 s chunk duration matches our production setting but is otherwise arbitrary, with smaller chunks giving lower latency and larger chunks giving less computational overhead. In this streaming setting, the 166 s of audio are processed in 1.03 s, yielding a real-time factor of approximately 161 with a processing time of 6.2 ms per 1 s of audio. This satisfies the real-time constraint by a comfortable margin, theoretically leaving room for up to 161 parallel audio streams to be processed on the same GPU before the real-time constraint is violated.

When a single model serves multiple ongoing calls in parallel, we can batch the incoming audio chunks. Batching further increases the real-time factor and enables a larger number of ongoing calls to be processed in parallel on a single GPU. This efficiency gain comes at the cost of additional, but still constant, latency since we must wait for a batch of chunks to form. For any call, the expected additional latency is half the chunk duration. We perform the same experiment as above but with different batch sizes. We maintain super real-time processing for batches of up to 256 one-second chunks, almost doubling the number of calls that can be handled by a single model. In the offline setting, for instance for on-demand processing of historical recordings, an entire call can be processed in one forward pass. Here, MultiQT can process a single average call of 166 s in 10.9 ms, yielding an offline real-time factor of 15,000. Although batched processing in this setting requires padding, batches can be constructed with calls of similar length to reduce the relative amount of padding and achieve higher efficiency yet.

5 Discussion

Label confusion. We analyze the label confusion of the basic MultiQT model using both modalities on the TIMESTEP metric. Less than 1% of all incorrect timestamps correspond to question-to-question confusions, while the two primary sources of confusion are incorrect labelings of 1) the “None” class for a question and 2) a question for the “None” class. The single highest confusion is between the “None” class and “Q4”, which is the least frequent question. Here the model has a tendency to both over-predict and miss: ca. 40% of predicted “Q4” are labeled as “None” and 40% of “Q4” are predicted as “None”.

Figure 3: Error margin distributions for start and stop timestamps of question sequences. The dotted lines depict the ground truth start and stop timestamps.

In summary, when our model makes an error, it will most likely 1) falsely predict a non-question to
be a question or 2) falsely predict a question to be a non-question; once it discovers a question, it is much less likely to assign it the wrong label. Model disagreement. We examined the intermodel agreement between MultiQT trained on the different modes. The highest agreement of ∼90% is achieved between the unimodal text and the multimodal models whereas the lowest agreement was generally between the unimodal audio and any other model at ∼80%. The lower agreement with the unimodal audio model can be attributed to the generally slightly lower performance of this model compared to the other models as per Table 2. Question margins. In Figure 3, we visualize the distribution of the errors made by the model per TIMESTEP. For each question regarded as matching according to the INSTANCE metric we compute the number of seconds by which the model mismatched the label sequence on the left and right side of the label sequence, respectively. We see that the model errors are normally distributed around a center value that is shifted towards the outside of the question by slightly less than 100 ms. The practical consequence is that the model tends to make predictions on the safe side by extending question segments slightly into the outside of the question. Modality ablation. To evaluate the model’s robustness to noise in the modalities, we remove all information from one of the modalities in turn and report the results in Table 3. We remove the information in a modality by randomly permuting the entire temporal axis. This way we retain the numerical properties of the signal which is not the case when replacing a modality by zeros or noise. To increase MultiQT’s robustness to this modality ablation, we apply it at training so that for each batch example we permute the temporal axis of the 2377 Permuted INSTANCE TIMESTEP Modality Training Test P R F1 P R F1 A+T Yes T 82.2±4.9 60.1±5.6 68.6±5.7 79.0±4.7 58.4±3.7 64.7±3.5 A+T Yes A 82.6±3.2 75.9±2.9 78.7±1.6 78.3±2.4 68.3±2.7 72.3±1.1 A+T Yes 86.3±1.6 83.8±2.8 84.8±2.0 80.4±1.0 74.1±2.2 76.9±1.3 A+T No T 0.0±0.0 0.0±0.0 0.0±0.0 16.2±0.0 16.7±0.0 16.4±0.0 A+T No A 89.5±3.1 69.2±4.4 77.0±2.5 84.3±2.6 63.7±3.5 71.0±2.0 A+T No 83.6±2.2 83.3±2.5 83.3±1.6 75.7±2.2 73.8±2.3 74.5±1.3 A No 87.4±1.9 60.6±4.0 70.3±3.1 79.2±1.3 57.8±3.3 65.0±2.4 T No 84.2±1.6 78.5±2.8 81.1±2.0 78.8±1.2 69.4±2.0 73.5±1.3 Table 3: Results from the modality ablation on the MultiQT model. We compare multimodal MultiQT trained with the audio (A) and text (T) modalities temporally permuted in turn during training with probability pa = 0.1 and ps = 0.5 to MultiQT trained without modality permutation, unimodally and multimodally (some results copied from Table 2). We can obtain robustness to loosing a modality while maintaining (or even slightly improving) the multimodal performance. All results are based on five-fold cross-validation as in Table 2. 0-20 0-25 0-30 0-35 0-40 0-45 0-50 0-100 ASR WER [%] 0.62 0.64 0.66 0.68 0.70 0.72 F1 score [A+T] permuted [A+T] [T] [A] Figure 4: Relation between TIMESTEP F1 and WER on call-taker utterances without the “None” label. audio or text modality with some probability pa or ps. We choose pa = 0.1 and ps = 0.5 since the model more easily develops an over-reliance on the text-modality supposedly due to higher signal-tonoise ratio. The results are listed in Table 3 along with results for MultiQT from Table 2 for easy reference. 
We observe that the basic MultiQT model suffers significantly from permutation of the text modality and less so for audio which suggests that it relies on the audio only for supportive features. Training MultiQT with the random temporal permutation forces learning of robustness to loosing all information in a modality. We see that the results when removing a modality almost reach the level achieved when training exclusively on that modality while still maintaining the same (or better) performance of the basic MultiQT model. Relation to ASR. In Figure 4, we plot the performance of the multimodal model on different subsets of the test split by the maximum WER of the ASR (measured only on the call-taker utterances). This evaluation compares the microaveraged model F1-score when increasing the noise on the textual input. We see that regardless of the modality, the performance is the highest for calls with very low WER. We observe that the performance improvement of using both modalities over unimodal text or unimodal audio increases as we include noisy samples. This implies that multi modality increases robustness. Training on permuted inputs additionally improves the performance on noisy data. The evaluation of MultiQT in our paper has thus far been only in relation to one particular ASR model with CTC loss (Graves et al., 2006), where our system displays significant gains from multimodal learning. Yet, do these results hold with another ASR system, and in particular, are the multimodal gains still significant if WER decreases and produced text quality increases? For an initial probing of these questions, we replace the fully convolutional ASR with a densely-connected recurrent architecture with convolutional heads. This model is similar to the one in (Amodei et al., 2015) but also uses dense bottleneck layers. With this model the transcription quality improves by around +4% in WER, while the F1-scores of MultiQT still strongly favor the multimodal approach, by +6.15 points absolute over text-only. We argue that in a real-world scenario with high WER and limited in-domain training data, the gains warrant learning from joining the text and audio views on the input speech when learning a question tracker. Alterna2378 tively, the ASR model itself could be extended into a multitask learning setup to jointly track questions and transcribe speech; we defer that line of work for future research. On a practical note, for this multitask approach, the data must be fully transcribed by human annotators in addition to the question annotatations. This is generally more time consuming and expensive than exclusively annotating questions. Qualitative analysis. We analyze the model predictions on a subset of 21 calls to identify the most likely reasons for incorrect labeling. We find that in over half of the analysed cases the incorrect prediction is triggered either by a question-related keyword uttered in a non-question sentence or by a question asked in the background by a caller that was not assigned a label. We also encounter undetected questions that have a very noisy ASR transcript or are asked in an unusual way. Symptom labeling. The experiment with our silver-standard symptoms data shows a trend that is similar to question tracking: The dual-modality MultiQT scores an INSTANCE F1 score of 76.9 for a +1.8 absolute improvement over the best single modality. Text-only is the runner up (-1.8 F1) while audio-only lags behind with a significant -23.6 decrease in F1. 
At the same time, a simple text-only keyword matching baseline scores at 73.7. We argue that symptom tracking strongly favors text over audio because the distinctive audio features of questions, such as changes in intonation, are not present when communicating symptoms in speech. 6 Related work The broader context of our work is to track the dialogue state in calls to emergency medical services, where conversations are typically formed as sequences of questions and answers that pertain to various medical symptoms. The predominant approach to dialogue state tracking (DST) in speech is to first transcribe the speech by using ASR (Henderson et al., 2014; Henderson, 2015; Mrkˇsi´c et al., 2017). In our specific context, to entirely rely on ASR is prohibitive because of significantly higher WER in comparison to standard datasets. To exemplify, while WER is normally distributed with a mean of 37.6% in our data, the noisiest DST challenge datasets rarely involve with WER above 30% (Jagfeld and Vu, 2017) while standard ASR benchmarks offer even lower WER (Park et al., 2019). None of the standard ASR scenarios thus directly apply to a real-life ASR noise scenario. From another viewpoint, work in audio recognition mainly involves with detecting simple singleword commands or keyword spotting (de Andrade et al., 2018), recognizing acoustic events such as environmental or urban sounds (Salamon et al., 2014; Piczak, 2015; Xu et al., 2016) or music patterns, or document-level classification of entire audio sequences (Liu et al., 2017). McMahan and Rao (2018) provide a more extensive overview. While approaches in this line of work relate to ours, e.g. in the use of convolutional networks over audio (Sainath and Parada, 2015; Salamon and Bello, 2017), our challenge features questions as linguistic units of significantly greater complexity. Finally, research into multimodal or multi-view deep learning (Ngiam et al., 2011; Li et al., 2018) offers insights to effectively combine multiple data modalities or views on the same learning problem. However, most work does not directly apply to our problem: i) the audio-text modality is significantly under-represented, ii) the models are typically not required to work online, and iii) most tasks are cast as document-level classification and not sequence labeling (Zadeh et al., 2018). 7 Conclusions We proposed a novel approach to speech sequence labeling by learning a multimodal representation from the temporal binding of the audio signal and its automatic transcription. This way we learn a model to identify questions in real time with a high accuracy while trained on a small annotated dataset. We show the multimodal representation to be more accurate and more robust to noise than the unimodal approaches. Our findings generalize to a medical symptoms labeling task, suggesting that our model is applicable as a general-purpose speech tagger wherever the speech modality is coupled in real time to ASR output. Acknowledgements The authors are grateful to the anonymous reviewers and area chairs for the incisive and thoughtful treatment of our work. References Dario Amodei, Rishita Anubhai, Eric Battenberg, Carl Case, Jared Casper, Bryan Catanzaro, Jingdong 2379 Chen, Mike Chrzanowski, Adam Coates, Greg Diamos, Erich Elsen, Jesse H. Engel, Linxi Fan, Christopher Fougner, Tony Han, Awni Y. Hannun, Billy Jun, Patrick LeGresley, Libby Lin, Sharan Narang, Andrew Y. 
Ng, Sherjil Ozair, Ryan Prenger, Jonathan Raiman, Sanjeev Satheesh, David Seetapun, Shubho Sengupta, Yi Wang, Zhiqian Wang, Chong Wang, Bo Xiao, Dani Yogatama, Jun Zhan, and Zhenyao Zhu. 2015. Deep speech 2: End-to-end speech recognition in english and mandarin. CoRR, abs/1512.02595. Douglas Coimbra de Andrade, Sabato Leo, Martin Loesener Da Silva Viana, and Christoph Bernkopf. 2018. A neural attention model for speech command recognition. arXiv preprint arXiv:1808.08929. Tadas Baltruˇsaitis, Chaitanya Ahuja, and LouisPhilippe Morency. 2018. Multimodal machine learning: A survey and taxonomy. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(2):423–443. Joachim Bingel and Anders Søgaard. 2017. Identifying beneficial task relations for multi-task learning in deep neural networks. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 164–169, Valencia, Spain. Association for Computational Linguistics. Ronan Collobert, Christian Puhrsch, and Gabriel Synnaeve. 2016. Wav2letter: an end-to-end convnet-based speech recognition system. CoRR, abs/1609.03193. Rotem Dror, Gili Baumer, Segev Shlomov, and Roi Reichart. 2018. The hitchhiker’s guide to testing statistical significance in natural language processing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1383–1392, Melbourne, Australia. Association for Computational Linguistics. Vladimir Eidelman, Zhongqiang Huang, and Mary Harper. 2010. Lessons learned in part-of-speech tagging of conversational speech. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 821–831, Cambridge, MA. Association for Computational Linguistics. Sahar Ghannay, Antoine Caubri`ere, Yannick Esteve, Antoine Laurent, and Emmanuel Morin. 2018. Endto-end named entity extraction from speech. arXiv preprint arXiv:1805.12045. Carlos G´omez-Rodr´ıguez and David Vilares. 2018. Constituent parsing as sequence labeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1314– 1324, Brussels, Belgium. Association for Computational Linguistics. Alex Graves, Santiago Fern´andez, Faustino Gomez, and J¨urgen Schmidhuber. 2006. Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks. In Proceedings of the 23rd International Conference on Machine Learning, ICML ’06, pages 369–376, New York, NY, USA. ACM. Kyu J Han, Seongjun Hahm, Byung-Hak Kim, Jungsuk Kim, and Ian R Lane. 2017. Deep learning-based telephony speech recognition in the wild. In INTERSPEECH, pages 1323–1327. Matthew Henderson. 2015. Machine learning for dialog state tracking: A review. Technical report. Matthew Henderson, Blaise Thomson, and Jason D Williams. 2014. The second dialog state tracking challenge. In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), pages 263–272. Sergey Ioffe and Christian Szegedy. 2015. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pages 448–456, Lille, France. PMLR. Glorianna Jagfeld and Ngoc Thang Vu. 2017. Encoding word confusion networks with recurrent neural networks for dialog state tracking. 
In Proceedings of the Workshop on Speech-Centric Natural Language Processing, pages 10–17, Copenhagen, Denmark. Association for Computational Linguistics. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Klaus Krippendorff. 2018. Content analysis: An introduction to its methodology. Sage publications. Yingming Li, Ming Yang, and Zhongfei Mark Zhang. 2018. A survey of multi-view representation learning. IEEE Transactions on Knowledge and Data Engineering. Chunxi Liu, Jan Trmal, Matthew Wiesner, Craig Harman, and Sanjeev Khudanpur. 2017. Topic identification for speech without asr. Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional LSTM-CNNsCRF. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1064–1074, Berlin, Germany. Association for Computational Linguistics. H´ector Mart´ınez Alonso and Barbara Plank. 2017. When is multitask learning effective? semantic sequence prediction under varying data conditions. In Proceedings of the 15th Conference of the European 2380 Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 44–53, Valencia, Spain. Association for Computational Linguistics. Brian McMahan and Delip Rao. 2018. Listening to the world improves speech command recognition. In Thirty-Second AAAI Conference on Artificial Intelligence. Diego Moll´a, Menno van Zaanen, and Steve Cassidy. 2007. Named entity recognition in question answering of speech data. In Proceedings of the Australasian Language Technology Workshop 2007, pages 57–65, Melbourne, Australia. Nikola Mrkˇsi´c, Diarmuid ´O S´eaghdha, Tsung-Hsien Wen, Blaise Thomson, and Steve Young. 2017. Neural belief tracker: Data-driven dialogue state tracking. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1777–1788, Vancouver, Canada. Association for Computational Linguistics. Jiquan Ngiam, Aditya Khosla, Mingyu Kim, Juhan Nam, Honglak Lee, and Andrew Y Ng. 2011. Multimodal deep learning. In Proceedings of the 28th international conference on machine learning (ICML11), pages 689–696. Daniel S. Park, William Chan, Yu Zhang, ChungCheng Chiu, Barret Zoph, Ekin D. Cubuk, and Quoc V. Le. 2019. Specaugment: A simple data augmentation method for automatic speech recognition. Interspeech 2019. Karol J Piczak. 2015. Environmental sound classification with convolutional neural networks. In 2015 IEEE 25th International Workshop on Machine Learning for Signal Processing (MLSP), pages 1–6. IEEE. Barbara Plank, Anders Søgaard, and Yoav Goldberg. 2016. Multilingual part-of-speech tagging with bidirectional long short-term memory models and auxiliary loss. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 412–418, Berlin, Germany. Association for Computational Linguistics. Ashwin Ram, Rohit Prasad, Chandra Khatri, Anu Venkatesh, Raefer Gabriel, Qing Liu, Jeff Nunn, Behnam Hedayatnia, Ming Cheng, Ashish Nagar, Eric King, Kate Bland, Amanda Wartick, Yi Pan, Han Song, Sk Jayadevan, Gene Hwang, and Art Pettigrue. 2018. Conversational AI: the science behind the alexa prize. CoRR, abs/1801.03604. Tara Sainath and Carolina Parada. 2015. Convolutional neural networks for small-footprint keyword spotting. 
Technical report. Justin Salamon and Juan Pablo Bello. 2017. Deep convolutional neural networks and data augmentation for environmental sound classification. IEEE Signal Processing Letters, 24(3):279–283. Justin Salamon, Christopher Jacoby, and Juan Pablo Bello. 2014. A dataset and taxonomy for urban sound research. In Proceedings of the 22nd ACM international conference on Multimedia, pages 1041– 1044. ACM. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929–1958. Michalina Strzyz, David Vilares, and Carlos G´omezRodr´ıguez. 2019. Viable dependency parsing as sequence labeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 717–723, Minneapolis, Minnesota. Association for Computational Linguistics. Mihai Surdeanu, Jordi Turmo, and Eli Comelles. 2005. Named entity recognition from spontaneous opendomain speech. In Ninth European Conference on Speech Communication and Technology. Yong Xu, Qiang Huang, Wenwu Wang, Philip JB Jackson, and Mark D Plumbley. 2016. Fully dnn-based multi-label regression for audio tagging. arXiv preprint arXiv:1606.07695. Amir Zadeh, Minghai Chen, Soujanya Poria, Erik Cambria, and Louis-Philippe Morency. 2017. Tensor fusion network for multimodal sentiment analysis. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1103–1114, Copenhagen, Denmark. Association for Computational Linguistics. Amir Zadeh, Paul Pu Liang, Louis-Philippe Morency, Soujanya Poria, Erik Cambria, and Stefan Scherer, editors. 2018. Proceedings of Grand Challenge and Workshop on Human Multimodal Language (Challenge-HML). Association for Computational Linguistics, Melbourne, Australia. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Advances in neural information processing systems, pages 649–657.
2020
215
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2381–2387 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2381 Multiresolution and Multimodal Speech Recognition with Transformers Georgios Paraskevopoulos Srinivas Parthasarathy Aparna Khare Shiva Sundaram Amazon Lab126 [email protected], {parsrini,apkhare,sssundar}@amazon.com Abstract This paper presents an audio-visual automatic speech recognition (AV-ASR) system using a Transformer-based architecture. We particularly focus on the scene context provided by the visual information to ground the ASR. We extract representations for audio features in the encoder layers of the transformer and fuse video features using an additional cross-modal multihead attention layer. Additionally, we incorporate a multitask training criterion for multiresolution ASR, where we train the model to generate both character and subword level transcriptions. Experimental results on the How2 dataset indicate that multiresolution training can speed up convergence by around 50% and relatively improves word error rate (WER) performance by up to 18% over subword prediction models. Further, incorporating visual information improves performance with relative gains of up to 3.76% over audio-only models. Our results are comparable to state-of-the-art Listen, Attend and Spell-based architectures. 1 Introduction Automatic speech recognition is a fundamental technology used on a daily basis by millions of end-users and businesses. Applications include automated phone systems, video captioning and voice assistants providing an intuitive and seamless interface between users and end systems. Current ASR approaches rely solely on utilizing audio input to produce transcriptions. However, the wide availability of cameras in smartphones and home devices acts as motivation to build AV-ASR models that rely on and benefit from multimodal input. Traditional AV-ASR systems focus on tracking the user’s facial movements and performing lipreading to augment the auditory inputs (Potamianos et al., 1997; Mroueh et al., 2015; Tao and Busso, 2018). The applicability of such models in real world environments is limited, due to the need for accurate audio-video alignment and careful camera placement. Instead, we focus on using video to contextualize the auditory input and perform multimodal grounding. For example, a basketball court is more likely to include the term “lay-up”, whereas an office place is more likely to include the term “layoff”. This approach can boost ASR performance, while the requirements for video input are kept relaxed (Caglayan et al., 2019; Hsu et al., 2019). Additionally, we consider a multiresolution loss that takes into account transcriptions at the character and subword level. We show that this scheme regularizes our model, showing significant improvements over subword models. Multitask learning on multiple levels has been previously explored in the literature, mainly in the context of CTC (Sanabria and Metze, 2018; Krishna et al., 2018; Ueno et al., 2018). A mix of seq2seq and CTC approaches combine word and character level (Kremer et al., 2018; Ueno et al., 2018) or utilize explicit phonetic information (Toshniwal et al., 2017; Sanabria and Metze, 2018). Modern ASR systems rely on end-to-end, alignment-free neural architectures, i.e. CTC (Graves et al., 2006) or sequence-to-sequence models (Graves et al., 2013; Zhang et al., 2017). 
The use of attention mechanisms significantly improve results in (Chorowski et al., 2015) and (Chan et al., 2016). Recently, the success of transformer architectures for NLP tasks (Vaswani et al., 2017; Devlin et al., 2019; Dai et al., 2019) has motivated speech researchers to investigate their efficacy in end-to-end ASR (Karita et al., 2019b). Zhou et. al., apply an end-to-end transformer architecture for Mandarin Chinese ASR (Zhou et al., 2018). SpeechTransformer extends the scaled dot-product attention mechanism to 2D and achieves competitive results for character level recognition (Dong et al., 2018; Karita et al., 2019a). Pham et. al. introduce the idea of stochastically deactivating layers dur2382 ing training to achieve a very deep model (Pham et al., 2019). A major challenge of the transformer architecture is the quadratic memory complexity as a function of the input sequence length. Most architectures employ consecutive feature stacking (Pham et al., 2019) or CNN preprocessing (Dong et al., 2018; Karita et al., 2019b) to downsample input feature vectors. Mohamed et al. (2019) use a VGG-based input network to downsample the input sequence and achieve learnable positional embeddings. Multimodal grounding for ASR systems has been explored in (Caglayan et al., 2019), where a pretrained RNN-based ASR model is finetuned with visual information through Visual Adaptive Training. Sterpu et al. (2018) propose a seq2seq model based on RNNs for lip-reading that performs cross-modal alignment of face tracking and audio features through an attention mechanism. Furthermore, Hsu et al. (2019) use a weakly supervised semantic alignment criterion to improve ASR results when visual information is present. Multimodal extensions of the transformer architecture have also been explored. These extensions mainly fuse visual and language modalities in the fields of Multimodal Translation and Image Captioning. Most approaches focus on using the scaled dotproduct attention layer for multimodal fusion and cross-modal mapping. Afouras et al. (2018) present a transformer model for AV-ASR targeted for lipreading in the wild tasks. It uses a self attention block to encode the audio and visual dimension independently. A decoder individually attends to the audio and video modalities producing character transcriptions. In comparison our study uses the video features to provide contextual information to our ASR. Libovick`y et al. (2018) employ two encoder networks for the textual and visual modalities and propose four methods of using the decoder attention layer for multimodal fusion, with hierarchical fusion yielding the best results. Yu et al. (2019) propose an encoder variant to fuse deep, multi-view image features and use them to produce image captions in the decoder. Le et al. (2019) use cascaded multimodal attention layers to fuse visual information and dialog history for a multimodal dialogue system. Tsai et al. (2019) present Multimodal Transformers, relying on a deep pairwise cascade of cross-modal attention mechanisms to map between modalities for multimodal sentiment analysis. In relation to the previous studies, the main contributions of this study are a) a fusion mechanism for audio and visual modalities based on the crossmodal scaled-dot product attention, b) an end to end training procedure for multimodal grounding in ASR and c) the use of a multiresolution training scheme for character and subword level recognition in a seq2seq setting without relying on explicit phonetic information. 
We evaluate our system in the 300 hour subset of the How2 database (Sanabria et al., 2018), achieving relative gains up to 3.76% with the addition of visual information. Further, we show relative gains of 18% with the multiresolution loss. Our results are comparable to state-of-the-art ASR performance on this database. 2 Proposed Method Our transformer architecture uses two transformer encoders to individually process acoustic and visual information (Fig. 1). Audio frames are fed to the first set of encoder layers. We denote the space of the encoded audio features as the audio space A. Similarly, video features are projected to the video space V using the second encoder network. Features from audio and visual space are passed through a tied feed-forward layer that projects them into a common space before passing them to their individual encoder layers respectively. This tied embedding layer is important for fusion as it helps align the semantic audio and video spaces. We then use a cross-modal attention layer that maps projected video representations to the projected audio space (Section 2.1). The outputs of this layer are added to the original audio features using a learnable parameter α to weigh their contributions. The fused features are then fed into the decoder stack followed by dense layers to generate character and subword outputs. For multiresolution predictions (Section 2.2), we use a common decoder for both character and subword level predictions, followed by a dense output layer for each prediction. This reduces the model parameters and enhances the regularization effect of multitask learning. Figure 1: Overall system architecture. A cross-modal scaled dot-product attention layer is used to project the visual data into the audio feature space, followed by an additive fusion. 2.1 Cross-modal Attention Scaled dot-product attention operates by constructing three matrices, K, V and Q, from sequences of inputs. K and V may be considered keys and values in a “soft” dictionary, while Q is a query that contextualizes the attention weights. The attention mechanism is described in Eq. 1, where σ denotes the softmax operation. Y = σ(KQ^T)V (1) The case where K, V and Q are constructed using the same input sequence constitutes a self-attention mechanism. We are interested in cross-modal attention, where K and V are constructed using inputs from one modality M1, video in our case (Fig. 1), and Q using another modality M2, audio. This configuration is an effective way to map features from M1 to M2 (Tsai et al., 2019). Note that such a configuration is used in the decoder layer of the original transformer architecture (Vaswani et al., 2017), where targets are attended based on the encoder outputs. 2.2 Multiresolution training We propose the use of a multitask training scheme where the model predicts both character and subword level transcriptions. We jointly optimize the model using the weighted sum of the character and subword level losses, as in Eq. 2: L = γ · L_subword + (1 − γ) · L_character (2) where γ is a hyperparameter that controls the importance of each task. The intuition for this stems from the reasoning that character and subword level models make different kinds of mistakes. 
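To make Eq. 1 and Eq. 2 concrete, the following is a minimal PyTorch-style sketch of the cross-modal fusion and the multiresolution loss; it is our own illustration rather than the authors' implementation. The model dimension (480), number of heads (6), and γ = 0.5 follow the setup reported in Section 3, the tied projection layers are omitted, and the exact form of the α-weighted addition is an assumption. Note also that nn.MultiheadAttention applies the usual 1/√d scaling, which Eq. 1 leaves implicit.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossModalFusion(nn.Module):
    """Sketch of Eq. 1 plus the additive fusion: K and V come from the video
    encoder, Q from the audio encoder, and a learnable scalar alpha weighs the
    attended video features against the original audio features."""

    def __init__(self, d_model=480, n_heads=6):
        super().__init__()
        # nn.MultiheadAttention computes softmax(QK^T / sqrt(d)) V per head.
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.alpha = nn.Parameter(torch.tensor(0.5))  # fusion weight

    def forward(self, audio, video):
        # audio: (batch, T_audio, d_model), video: (batch, T_video, d_model)
        attended, _ = self.attn(query=audio, key=video, value=video)
        return audio + self.alpha * attended  # additive fusion


def multiresolution_loss(subword_logits, subword_tgt,
                         char_logits, char_tgt, gamma=0.5, pad_id=0):
    """Eq. 2: L = gamma * L_subword + (1 - gamma) * L_character."""
    # logits are (batch, length, vocab); cross_entropy wants (batch, vocab, length)
    l_sub = F.cross_entropy(subword_logits.transpose(1, 2), subword_tgt,
                            ignore_index=pad_id)
    l_char = F.cross_entropy(char_logits.transpose(1, 2), char_tgt,
                             ignore_index=pad_id)
    return gamma * l_sub + (1.0 - gamma) * l_char
```

In the full model, `audio` and `video` would be the outputs of the two encoder stacks, and the fused sequence would feed the shared decoder with its two output heads.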
For character prediction, the model tends to predict words that sound phonetically similar to the ground truths, but are syntactically disjoint from the rest of the sentence. Subword prediction yields more syntactically correct results, but rare words tend to be broken down into more common words that sound similar but are semantically irrelevant. For example, character level prediction may turn “old-fashioned” into “oldfashioning”, while subword level turns the sentence “ukuleles are different” into “you go release are different”. When combining the losses, subword prediction, which shows superior performance, is kept as the primary output, while the character prediction is used as an auxiliary task for regularization. 3 Experimental Setup We conduct our experiments on the How2 instructional videos database (Sanabria et al., 2018). The dataset consists of 300 hours of instructional videos from the YouTube platform. These videos depict people showcasing particular skills and have high variation in video/audio quality, camera angles and duration. The transcriptions are mined from the YouTube subtitles, which contain a mix of automatically generated and human annotated transcriptions. Audio is encoded using 40 mel-filterbank coefficients and 3 pitch features with a frame size of 10 ms, yielding 43-dimensional feature vectors. The final samples are segments of the original videos, obtained using word-level alignment. We follow the video representation of the original paper (Caglayan et al., 2019), where a 3D ResNeXt101 architecture, pretrained on action recognition, is used to extract 2048D features (Hara et al., 2018). Video features are average pooled over the video frames, yielding a single feature vector. For our experiments, we use the train, development and test splits proposed by Sanabria et al. (2018), which have sizes 298.2 hours, 3.2 hours and 3.7 hours respectively. Our model consists of 6 encoder layers and 4 decoder layers. We use transformer dimension 480, intermediate ReLU layer size 1920 and 0.2 dropout. All attention layers have 6 attention heads. The model is trained using the Adam optimizer with learning rate 10^-3 and 8000 warmup steps. We employ label smoothing of 0.1. We weigh the multitask loss with γ = 0.5, which gives the best performance. A coarse search was performed for tuning all hyperparameters over the development set. For character-level prediction, we extract 41 graphemes from the transcripts. For subword-level prediction, we train a SentencePiece tokenizer (Kudo and Richardson, 2018) over the train set transcriptions using byte-pair encoding and vocabulary size 1200. For decoding, we use beam search with beam size 5 and length normalization parameter 0.7. We train models for up to 200 epochs and the model achieving the best loss is selected using early stopping. Any tuning of the original architecture is performed on the development split. No language model or ensemble decoding is used in the output.

Table 1: Results for different methods of input handling at different prediction resolutions (WER). MR stands for multiresolution.
  Input handling   Recognition level   WER
  Filtering        Character           33.0
  Filtering        Subword             29.7
  Chunking         Character           31.3
  Chunking         Subword             29.9
  Stacking         Character           28.3
  Stacking         Subword             26.1
  Stacking         MR                  21.3

4 Results and Discussion One of the challenges of using scaled dot-product attention is the quadratic increase of layerwise memory complexity as a function of the input sequence length. 
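One of the input-handling strategies examined next is frame stacking. As a rough sketch (our own illustration, not the authors' pipeline), stacking 4 consecutive 43-dimensional frames cuts the sequence length by a factor of 4 while growing the per-step feature dimension to 172; zero-padding the last partial group is an implementation assumption.

```python
import numpy as np

def stack_frames(features: np.ndarray, stack: int = 4) -> np.ndarray:
    """Stack `stack` consecutive frames into one feature vector.

    features: (T, d) array of per-frame features (d = 43 in this setup).
    Returns an array of shape (ceil(T / stack), d * stack); the tail is
    zero-padded so the last partial group of frames is kept.
    """
    T, d = features.shape
    pad = (-T) % stack
    if pad:
        features = np.vstack([features, np.zeros((pad, d), features.dtype)])
    return features.reshape(-1, stack * d)
```

Because attention memory grows quadratically with sequence length, a 4x shorter input reduces that term by roughly 16x while only reshaping the data.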
This issue is particularly prevalent in ASR tasks, with large input sequences. We explore three simple approaches to work around this limitation. First, we filter out large input sequences (x > 15s), leading to a loss of 100 hours of data. Second, we chunk the input samples into smaller sequences, using forced alignment with a conventional DNN-HMM model to find pauses at which to split the input and the transcriptions. Finally, we stack 4 consecutive input frames into a single feature vector, thus reducing the input length by 4. Note that this only reshapes the input data, as the dimension of our input is increased by the stacking process (we tried to use the convolutional architecture from Mohamed et al. (2019), but it failed to converge in our experiments, possibly due to lack of data). Results for the downsampling techniques for character and subword level predictions are summarized in Table 1. We observe that the subword-level model performs better than the character-level model (up to 10% relative) in all settings. This can be attributed to the smaller number of decoding steps needed for the subword model, where error accumulation is smaller. Furthermore, we see that the naive filtering of large sequences yields underperforming systems due to the large data loss. Additionally, we see that frame stacking has superior performance to chunking. This is not surprising, as splitting the input samples into smaller chunks leads to the loss of contextual information which is preserved with frame stacking. We evaluate the proposed multiresolution training technique with the frame stacking technique, observing a significant improvement (18.3%) in the final WER. We thus observe that predicting finer resolutions as an auxiliary task can be used as an effective means of regularization for this sequence-to-sequence speech recognition task. Furthermore, we have empirically observed that when training in multiple resolutions, models can converge around 50% faster than single-resolution models. Next, we evaluate the relative performance improvement obtained from utilizing the visual features (Table 2). We observe that incorporating visual information improves ASR results. Our AV-ASR system yields gains > 3% over audio-only models for both subword and multiresolution predictions. Finally, we observe that while the Listen, Attend and Spell-based architecture of Caglayan et al. (2019) is slightly stronger than the transformer model, the gains from adding visual information are consistent across models. It is important to note that our models are trained end-to-end with both audio and video features.

Table 2: Comparison of audio-only ASR models versus AV-ASR models with ResNeXt image features. MR stands for multiresolution. (B) shows the results for the LAS model (Caglayan et al., 2019).
  Features              Level     WER    ⇑ over audio
  Audio                 Subword   26.1   –
  Audio + ResNeXt       Subword   25.0   3.45%
  Audio                 MR        21.3   –
  Audio + ResNeXt       MR        20.5   3.76%
  Audio (B)             Subword   19.2   –
  Audio + ResNeXt (B)   Subword   18.4   3.13%

Table 3: Experimental evaluation of the AV-ASR model for handling missing visual input. Here σ denotes the standard deviation of the noise.
  Missing input handling      WER
  Zeros                       23.1
  Gaussian Noise σ=0.2        22.6
  Gating visual input α=0     22.8

An important question for real-world deployment of multimodal ASR systems is their performance when the visual modality is absent. Ideally, a robust system satisfactorily performs when the user’s camera is off or in low light conditions. 
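The ablations used to probe this, described next, can be simulated entirely at inference time. The sketch below is our own illustration; the σ = 0.2 and the α-gating correspond to the settings reported in Table 3, and the function and argument names are hypothetical.

```python
import torch

def degrade_visual(video_feats: torch.Tensor, mode: str, sigma: float = 0.2):
    """Simulate an absent or unreliable visual stream at inference time.

    modes: 'zeros' - replace the video features with zeros
           'noise' - replace them with Gaussian noise of std `sigma`
           'gate'  - leave them unchanged; the model instead ignores them by
                     setting its fusion weight alpha to 0 (see the fusion sketch above)
    """
    if mode == "zeros":
        return torch.zeros_like(video_feats)
    if mode == "noise":
        return torch.randn_like(video_feats) * sigma
    if mode == "gate":
        return video_feats  # gating is handled inside the model, not here
    raise ValueError(f"unknown mode: {mode}")
```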
We evaluate our AV-ASR systems in the absence of visual data with the following experiments - a) replace visual feature vectors by zeros b) initialize visual features with gaussian noise with standard deviation 0.2 c) tweak the value α to 0 on inference, gating the visual features completely. Table 3 shows the results for the different experiments. Results indicate gating visual inputs works better than zeroing them out. Adding a gaussian noise performs best which again indicates the limited availability of data. Overall, in the absence of visual information, without retraining, the AV-ASR model relatively worsens by 6% compared to audio only models. 5 Conclusions This paper explores the applicability of the transformer architecture for multimodal grounding in ASR. Our proposed framework uses a crossmodal dot-product attention to map visual features to audio feature space. Audio and visual features are then combined with a scalar additive fusion and used to predict character as well as subword transcriptions. We employ a novel multitask loss that combines the subword level and character losses. Results on the How2 database show that a) multiresolution losses regularizes our model producing significant gains in WER over character level and subword level losses individually b) Adding visual information results in relative gains of 3.76% over audio model’s results validating our model. Due to large memory requirements of the attention mechanism, we apply aggressive preprocessing to shorten the input sequences, which may hurt model performance. In the future, we plan to alleviate this by incorporating ideas from sparse transformer variants (Kitaev et al., 2020; Child et al., 2019). Furthermore, we will experiment with more ellaborate, attention-based fusion mechanisms. Finally, we will evaluate the multiresolution loss on larger datasets to analyze it’s regularizing effects. References Triantafyllos Afouras, Joon Son Chung, Andrew Senior, Oriol Vinyals, and Andrew Zisserman. 2018. Deep audio-visual speech recognition. IEEE transactions on pattern analysis and machine intelligence. Ozan Caglayan, Ramon Sanabria, Shruti Palaskar, Loic Barraul, and Florian Metze. 2019. Multimodal grounding for sequence-to-sequence speech recognition. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 8648–8652. IEEE. William Chan, Navdeep Jaitly, Quoc V. Le, and Oriol Vinyals. 2016. Listen, attend and spell: A neural network for large vocabulary conversational speech recognition. In Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4960–4964. Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. 2019. Generating long sequences with sparse transformers. arXiv preprint arXiv:1904.10509. Jan Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Bengio. 2015. Attention-based models for speech recognition. In Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 1, NIPS’15, page 577–585, Cambridge, MA, USA. MIT Press. Zihang Dai, Zhilin Yang, Yiming Yang, Jaime G Carbonell, Quoc Le, and Ruslan Salakhutdinov. 2019. Transformer-xl: Attentive language models beyond a fixed-length context. In Proceedings of the 57th 2386 Annual Meeting of the Association for Computational Linguistics, pages 2978–2988. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Linhao Dong, Shuang Xu, and Bo Xu. 2018. Speechtransformer: A no-recurrence sequence-to-sequence model for speech recognition. In Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5884–5888. IEEE. Alex Graves, Santiago Fern´andez, Faustino Gomez, and J¨urgen Schmidhuber. 2006. Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks. In Proceedings of the 23rd International Conference on Machine Learning, ICML ’06, page 369–376, New York, NY, USA. Association for Computing Machinery. Alex Graves, Abdel-rahman Mohamed, and Geoffrey E. Hinton. 2013. Speech recognition with deep recurrent neural networks. In Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP). IEEE. Kensho Hara, Hirokatsu Kataoka, and Yutaka Satoh. 2018. Can spatiotemporal 3d cnns retrace the history of 2d cnns and imagenet? In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pages 6546–6555. Wei-Ning Hsu, David Harwath, and James Glass. 2019. Transfer learning from audio-visual grounding to speech recognition. Proc. Interspeech 2019, pages 3242–3246. Shigeki Karita, Nelson Enrique Yalta Soplin, Shinji Watanabe, Marc Delcroix, Atsunori Ogawa, and Tomohiro Nakatani. 2019a. Improving TransformerBased End-to-End Speech Recognition with Connectionist Temporal Classification and Language Model Integration. In Proc. INTERSPEECH, pages 1408–1412. Shigeki Karita, Xiaofei Wang, Shinji Watanabe, Takenori Yoshimura, Wangyou Zhang, Nanxin Chen, Tomoki Hayashi, Takaaki Hori, Hirofumi Inaguma, Ziyan Jiang, Masao Someki, Nelson Enrique Yalta Soplin, and Ryuichi Yamamoto. 2019b. A comparative study on transformer vs RNN in speech applications. In IEEE Automatic Speech Recognition and Understanding Workshop, ASRU 2019, Singapore, December 14-18, 2019, pages 449– 456. IEEE. Nikita Kitaev, Lukasz Kaiser, and Anselm Levskaya. 2020. Reformer: The efficient transformer. In International Conference on Learning Representations. Jan Kremer, Lasse Borgholt, and Lars Maaløe. 2018. On the inductive bias of word-character-level multitask learning for speech recognition. Kalpesh Krishna, Shubham Toshniwal, and Karen Livescu. 2018. Hierarchical multitask learning for ctc-based speech recognition. arXiv preprint arXiv:1807.06234. Taku Kudo and John Richardson. 2018. Sentencepiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66–71. Hung Le, Doyen Sahoo, Nancy Chen, and Steven Hoi. 2019. Multimodal transformer networks for end-toend video-grounded dialogue systems. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5612–5623. Jindˇrich Libovick`y, Jindˇrich Helcl, and David Mareˇcek. 2018. Input combination strategies for multi-source transformer decoder. In Proc. 3rd Conference on Machine Translation, pages 253–260. Abdelrahman Mohamed, Dmytro Okhonko, and Luke Zettlemoyer. 2019. Transformers with convolutional context for asr. CoRR. Youssef Mroueh, Etienne Marcheret, and Vaibhava Goel. 2015. 
Deep multimodal learning for audiovisual speech recognition. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 2130–2134. IEEE. Ngoc-Quan Pham, Thai-Son Nguyen, Jan Niehues, Markus M¨uller, and Alex Waibel. 2019. Very deep self-attention networks for end-to-end speech recognition. Proc. Interspeech 2019, pages 66–70. Gerasimos Potamianos, Eric Cosatto, Hans Peter Graf, and David B Roe. 1997. Speaker independent audiovisual database for bimodal asr. In Proc. European Tutorial and Research Workshop on Audio-Visual Speech Processing, pages 65–68. Ramon Sanabria, Ozan Caglayan, Shruti Palaskar, Desmond Elliott, Lo¨ıc Barrault, Lucia Specia, and Florian Metze. 2018. How2: a large-scale dataset for multimodal language understanding. In Proceedings of the Workshop on Visually Grounded Interaction and Language (ViGIL). NeurIPS. Ramon Sanabria and Florian Metze. 2018. Hierarchical multitask learning with ctc. In 2018 IEEE Spoken Language Technology Workshop (SLT), pages 485–490. IEEE. 2387 George Sterpu, Christian Saam, and Naomi Harte. 2018. Attention-based audio-visual fusion for robust automatic speech recognition. In Proceedings of the 20th ACM International Conference on Multimodal Interaction, pages 111–115. Fei Tao and Carlos Busso. 2018. Aligning audiovisual features for audiovisual speech recognition. In 2018 IEEE International Conference on Multimedia and Expo (ICME), pages 1–6. IEEE. Shubham Toshniwal, Hao Tang, Liang Lu, and Karen Livescu. 2017. Multitask learning with low-level auxiliary tasks for encoder-decoder based speech recognition. Proc. Interspeech 2017, pages 3532– 3536. Yao-Hung Hubert Tsai, Shaojie Bai, Paul Pu Liang, J Zico Kolter, Louis-Philippe Morency, and Ruslan Salakhutdinov. 2019. Multimodal transformer for unaligned multimodal language sequences. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6558– 6569. Sei Ueno, Hirofumi Inaguma, Masato Mimura, and Tatsuya Kawahara. 2018. Acoustic-to-word attentionbased model complemented with character-level ctcbased model. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5804–5808. IEEE. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc. Jun Yu, Jing Li, Zhou Yu, and Qingming Huang. 2019. Multimodal transformer with multi-view visual representation for image captioning. IEEE Transactions on Circuits and Systems for Video Technology. Yu Zhang, William Chan, and Navdeep Jaitly. 2017. Very deep convolutional networks for end-to-end speech recognition. pages 4845–4849. Shiyu Zhou, Linhao Dong, Shuang Xu, and Bo Xu. 2018. Syllable-based sequence-to-sequence speech recognition with the transformer in mandarin chinese. Proc. Interspeech 2018, pages 791–795.
2020
216
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2388–2397 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2388 Phone Features Improve Speech Translation Elizabeth Salesky7 and Alan W Blackì 7Johns Hopkins University ìCarnegie Mellon University [email protected], [email protected] Abstract End-to-end models for speech translation (ST) more tightly couple speech recognition (ASR) and machine translation (MT) than a traditional cascade of separate ASR and MT models, with simpler model architectures and the potential for reduced error propagation. Their performance is often assumed to be superior, though in many conditions this is not yet the case. We compare cascaded and end-to-end models across high, medium, and low-resource conditions, and show that cascades remain stronger baselines. Further, we introduce two methods to incorporate phone features into ST models. We show that these features improve both architectures, closing the gap between end-to-end models and cascades, and outperforming previous academic work – by up to 9 BLEU on our low-resource setting. 1 Introduction End-to-end models have become the common approach for speech translation (ST), but the performance gap between these models and a cascade of separately trained speech recognition (ASR) and machine translation (MT) remains, particularly in low-resource conditions. Models for low-resource ASR leverage phone1 information, but this information is not typically leveraged by current sequence-to-sequence ASR or speech translation models. We propose two methods to incorporate phone features into current neural speech translation models. We explore the existing performance gap between endto-end and cascaded models, and show that incorporating phone features not only closes this gap, but greatly improves the performance and training efficiency of both model architectures, particularly in lower-resource conditions. The sequences of speech features used as input for ST are ≈10 times longer than the equivalent sequence of characters in e.g. a text-based MT model. This impacts memory usage, the number of model parameters, and 1The term ‘phone’ refers to segments corresponding to a collection of fine-grained phonetic units, but which may separate allophonic variation: see Jurafsky and Martin (2000). training time. Multiple consecutive feature vectors can belong to the same phone, but the exact number depends on the phone and local context. Further, these speech features are continuously valued rather than discrete, such that a given phone will have many different instantiations across a corpus. Neural models learn to associate ranges of similarly valued feature vectors in a data-driven way, impacting performance in lower-resource conditions. Using phoneme-level information provides explicit links about local and global similarities between speech features, allowing models to learn the task at hand more efficiently and yielding greater robustness to lower-resource conditions. We propose two simple heuristics to integrate phoneme-level information into neural speech translation models: (1) as a more robust intermediate representation in a cascade; and (2) as a concatenated embedding factor. We use the common Fisher Spanish–English dataset to compare with previous work, and simulate high-, mid-, and low-resource conditions to compare model performance across different data conditions. 
We compare to recent work using phone segmentation for end-to-end speech translation (Salesky et al., 2019), and show that our methods outperform this model by up to 20 BLEU on our lowest-resource condition (4-reference BLEU scores are used for this dataset). Further, our models outperform all previous academic work on this dataset, achieving similar performance trained on 20 hours as a baseline end-to-end model trained on the full 160 hour dataset. Finally, we test model robustness by varying the quality of our phone features, which may indicate which models will better generalize across differently-resourced conditions (our code is public: github.com/esalesky/xnmt-devel). 2 Models with Phone Supervision We add higher-level phone features to low-level speech features to improve our models’ robustness across data conditions and training efficiency. We propose two methods to incorporate phone information into cascaded and end-to-end models, depicted in Figure 1. Figure 1: Comparison between traditional cascaded and end-to-end models, and our proposed methods using phone features as (1) the intermediate representation in a cascaded model; and (2) a concatenated embedding factor in an end-to-end model. We additionally compare to previous work, (3) where phone segmentation is used for feature vector downsampling in time (Salesky et al., 2019). Our phone cascade uses phone labels as the machine translation input, in place of the output transcription from a speech recognition model. Our phone end-to-end model uses phone labels to augment source speech feature vectors in end-to-end models. We call these end-to-end or ‘direct’ because they utilize a single model with access to the source speech features, though they additionally use phone features generated by an external model. We additionally compare to a recent end-to-end model proposed by Salesky et al. (2019). Model 1: Phone Cascade. In a cascade, the intermediate representation between ASR and MT is the final output of a speech recognition model, e.g. characters, subwords, or words. Using separate models for ASR and MT means that errors made in ASR are likely to propagate through MT. Common errors include substitution of phonetically similar words, or misspellings due to irregularities in a language’s orthography, the latter of which may be addressed by using phone labels in place of ASR output. By not committing to orthographic targets, we believe this model will propagate fewer errors to downstream MT. Model 2: Phone End-to-End. Our final model uses phone-factored embeddings, where trainable embeddings for phone features are concatenated to the typical speech feature vector input. Because phone durations are variable and typically span more than one filterbank feature (or frame), adjacent filterbank features may have the same predicted phone label; in the example shown in Figure 1, /R/ spans three frames or filterbank features. We note that this method maintains the same source sequence length as the original speech feature sequence. This method associates similar feature vectors at the corpus level, because all filterbank features with the same phone alignment (e.g. /OH/) will have the same trainable phone embedding concatenated. In MT and NER, concatenating trainable embeddings for linguistic features to words, such as morphemes and phones, has improved models’ ability to generalize (Sennrich and Haddow, 2016; Chaudhary et al., 2018). 
While these works appended finer-grained information to associate words with similar lower-level structure, we use phone embeddings to associate higher-level structure to similar but unique speech feature vectors globally across a corpus. Model 3: Phone Segmentation. We compare to the method from Salesky et al. (2019) as a strong end-to-end baseline. Here, phone boundaries are used to segment and compress speech feature vector sequences. Within each utterance, the feature vectors of consecutive speech frames with the same phone label are averaged to produce one feature vector for translation from a variable number of frames. This significantly reduces source sequence lengths (by ∼80%), reducing the number of model parameters and memory. Rather than having a variable number of feature vectors per phone-like unit, each has one representation, more similar in granularity to character-based MT. The averaged feature vectors remain continuously-valued, and are locally summarized: a given phone across the corpus will still have different representations in each instance. 3 Data We use the Fisher Spanish-English corpus (joshua.incubator.apache.org/data/fisher-callhome-corpus), which consists of parallel speech, transcripts, and translations, enabling comparisons between cascaded and direct models on the same data and allowing us to generate phone supervision using matched data. The dataset contains 160 hours of Spanish telephone speech, split into 138K utterances, which were translated via crowdsourcing by Post et al. (2013). We use the standard dev and test sets, each with ∼4k utterances. Because we are particularly interested in how our methods will affect training across differently-resourced conditions, we compare results using randomly selected 40 hour and 20 hour subsets of the data. 4 Generating Phone Supervision To generate phoneme-level labels for sequences of speech features, we generate frame-level alignments using a trained speech recognizer. Specifically, we extract 40-dimensional Mel filterbank features with per-speaker mean and variance normalization using Kaldi (Povey et al., 2011). We train an HMM/GMM system on the full Fisher Spanish dataset with the Kaldi recipe (Povey et al., 2011), using the Spanish CALLHOME Lexicon (LDC96L16), and compute per-frame phone alignments with the triphone model (tri3a) with LDA+MLLT features. This yields 50 phone labels, including silence (<sil>), noise, and laughter. Producing phone alignments uses supervision from a transcript, which inherently does not exist at inference time. While phones can be extracted from Kaldi lattices at inference time, we found that our HMM/GMM model was not our best performing ASR model on this dataset – by greater than 10 WER. To leverage our better-performing neural ASR models for phone generation, we create essentially a ‘2-pass’ alignment procedure: first, generating a transcript, and second, using this transcript to force align phones. Table 1 shows the mapping between phone quality and the ASR models used for phone feature generation.

Table 1: Mapping between phone quality and the ASR models used for alignment generation, with the models’ WER on Fisher Spanish test.
  Alignment Quality   WER    ASR Supervision
  Gold                –      Gold transcript
  High                23.2   Salesky et al. (2019)
  Med                 30.4   Seq2Seq ASR
  Low                 35.5   Kaldi HMM/GMM

This procedure enables us to both improve phone alignment quality and also match training and inference procedures for phone generation for our translation models. 
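As an illustration of how these frame-level alignments are consumed downstream, the sketch below (our own, not the authors' xnmt code) averages consecutive frames that share a phone label, the downsampling used in the Salesky et al. (2019) baseline (Model 3). It assumes 40-dimensional filterbank frames and integer phone IDs from the aligner; the same per-frame IDs are what the phone end-to-end model embeds, and uniquing them gives the phone-cascade source.

```python
import numpy as np

def average_by_phone(feats: np.ndarray, phone_ids: np.ndarray):
    """Average consecutive frames with the same phone label.

    feats:     (T, 40) filterbank features for one utterance.
    phone_ids: (T,) frame-level phone IDs from the forced aligner.
    Returns (segment_feats, segment_phones), where each phone segment is
    collapsed to the mean of its frames (roughly an 80% length reduction).
    """
    assert len(feats) == len(phone_ids)
    # indices where the phone label changes, i.e. starts of new segments
    change = np.flatnonzero(np.diff(phone_ids)) + 1
    starts = np.concatenate(([0], change))
    ends = np.concatenate((change, [len(phone_ids)]))
    segment_feats = np.stack([feats[s:e].mean(axis=0)
                              for s, e in zip(starts, ends)])
    segment_phones = phone_ids[starts]
    return segment_feats, segment_phones
```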
In Section 8, we compare the impact of phone alignment quality on our translation models utilizing phone features, and show higher quality phone features can improve downstream results by >10 BLEU. Producing phone features in this way uses the same data (source speech and transcripts) as the ASR task in a cascade, and auxiliary ASR tasks from multi-task end-to-end models, but as we show, to far greater effect. Further, auxiliary tasks as used in previous work rely on three-way parallel data, while it is possible to generate effective phoneme-level supervision using a recognizer trained on other corpora or languages (Salesky et al., 2019), though we do not do this here. 5 Model & Training Procedure As in previous academic work on this corpus (Bansal et al., 2018; Sperber et al., 2019; Salesky et al., 2019), we use a sequence-to-sequence architecture inspired by Weiss et al. (2017) modified to train within lower resources; specifically, each model converges within ≈5 days on one GPU. We build encoder-decoder models with attention in xnmt (Neubig et al., 2018) with 512 hidden units. Our pyramidal encoder uses 3-layer BiLSTMs with linear network-in-network (NiN) projections and batch normalization between layers (Sperber et al., 2019; Zhang et al., 2017). The NiN projections are used to downsample by a factor of 2 between layers, resulting in the same total 4× downsampling in time as the additional convolutional layers from Weiss et al. (2017); Bansal et al. (2019): They give us the benefit of added depth with fewer additional parameters. We use single layer MLP attention (Bahdanau et al., 2015) with 128 units and 1 decoder layer as opposed to 3 or 4 in previous work – we did not see consistent benefits from additional depth. In line with previous work on this dataset, all experiments preprocess target text by lowercasing and removing punctuation aside from apostrophes. We use 40-dimensional Mel filterbank features as previous work did not see significant difference with higherdimensional features (Salesky et al., 2019). We use 1k BPE units for translation text, shown in Salesky et al. (2019) to have both better performance and training efficiency than characters (Weiss et al., 2017; Sperber et al., 2019) or words (Bansal et al., 2018). For both text and phones, we use 64-dimensional embeddings. For the MT component in cascaded speech translation models, we compared using the pyramidal speech architecture above (3 encoder, 1 decoder layers) to the traditional BiLSTM text model (2 layers each for encoder and decoder). Using the pyramidal architecture resulted in the same performance as the BiLSTM model when translating BPE transcriptions from ASR, but gave us consistent improvements of up to 1.5 BLEU when instead translating phone sequences; we posit this is because phone sequences are longer than BPE equivalents. Accordingly, we use the same model architecture for all our ASR, MT, and ST models. We use layer dropout with p = 0.2 and target embedding dropout with p = 0.1 (Gal and Ghahramani, 2016). We apply label smoothing with p = 0.1 (Szegedy et al., 2016) and fix the target embedding norm to 1 (Nguyen and Chiang, 2018). For inference, we use beam of size 15 and length normalization with exponent 1.5. We set the batch size dynamically depending on the input sequence length with average batch size was 36. We use Adam (Kingma and Ba, 2015) with initial learning rate 0.0003, decayed by 0.5 when validation BLEU did not improve for 10 epochs initially and subsequently 5 epochs. 
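To make the phone-factored input of the phone end-to-end model concrete, here is a minimal PyTorch-style sketch (our own illustration; the class and argument names are hypothetical): a trainable 64-dimensional embedding for each of the 50 phone labels is concatenated to every 40-dimensional filterbank frame, leaving the sequence length unchanged.

```python
import torch
import torch.nn as nn

class PhoneFactoredInput(nn.Module):
    """Concatenate a trainable phone embedding to each filterbank frame."""

    def __init__(self, n_phones=50, phone_dim=64, feat_dim=40):
        super().__init__()
        self.feat_dim = feat_dim
        self.phone_emb = nn.Embedding(n_phones, phone_dim)

    def forward(self, feats, phone_ids):
        # feats:     (batch, T, 40) filterbank features
        # phone_ids: (batch, T) frame-level phone labels from the aligner
        assert feats.size(-1) == self.feat_dim
        emb = self.phone_emb(phone_ids)             # (batch, T, 64)
        return torch.cat([feats, emb], dim=-1)      # (batch, T, 104)
```

The concatenated sequence would then be fed to the pyramidal encoder described above in place of the raw filterbank features.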
We do not use L2 weight decay or Gaussian noise, and use a single model replica. We use input feeding (Luong et al., 2015), and exclude utterances longer than 1500 frames in training for memory. 2391 6 Prior Work: Cascaded vs End-to-End Models on Fisher Spanish-English The large body of research on the Fisher SpanishEnglish dataset, including both cascaded and end-to-end models, makes it a good benchmark to compare these architectures. Not all previous work has compared across multiple resource settings or compared to cascaded models, which we address in this section. We summarize best previous results on this dataset on high, medium, and low-resource conditions in Table 2. Best Results. The cascade of traditional HMM/DNN ASR and Joshua MT models from Kumar et al. (2014) set a competitive baseline on the full dataset (40.4 test BLEU) which no subsequent academic models have been able to match until this work; subsequent exploration of end-to-end models has produced notable relative improvements but the best end-to-end academic number (Salesky et al., 2019) remains 1.6 BLEU behind this traditional cascade. Industry models from Weiss et al. (2017) achieved exceptional performance with very deep end-to-end models on the full dataset (47.3 test BLEU), exceeding a cascade for the first time. They additionally show results with an updated cascade using neural models, improving over Kumar et al. (2014). Their results have been previously unmet by the rest of the community. This is likely in part due to the computational resources required to fully explore training schedules and hyperparameters with models of their depth. While their ASR models took ∼4 days to converge, their ST models took another 2 weeks, compared to the lighter-weight models of recent academic work which converged in <5 days (Sperber et al., 2019; Salesky et al., 2019; Bansal et al., 2019). This dataset is challenging: improving ASR WER from 35 (Post et al.) to 23 (Kumar et al.) only resulted in 4 BLEU ST improvement: see Components in Table 2. We believe this to be in part because the multi-reference scoring masks some model differences, and the conversational phenomena (like disfluencies) are challenging. Lower-Resource. While deep end-to-end models have become competitive at higher-resource conditions, previous work on this dataset has showed they are not as data-efficient as cascades under lower-resource conditions. While some works have tested multiple resource conditions, only Sperber et al. (2019) compared against cascades across multiple conditions. Their end-to-end baseline outperformed their cascades on the full dataset, but not under lower-resource conditions, while their end-to-end but multi-stage attention-passing model is more data-efficient than previous models and shows the best previous results under lower-resource condition. Sperber et al. do not report results without auxiliary ASR, MT, and autoencoding tasks, which they state add up to 2 BLEU. Additional Data. Stoian et al. (2020); Bansal et al. (2019); Sperber et al. (2019) investigate speech translation performance using additional corpora through transfer learning from ASR and auxiliary MT tasks. The ability to leverage non-parallel corpora was previously a strength of cascades and had not been explored with end-to-end models. We do not use additional data here, but show these numbers as context for our results with phone supervision, and refer readers to Sperber et al. for discussion of cascaded and end-to-end models’ capacity to make use of more data. 
Parameter Tuning. We find cascaded model performance can be impacted significantly by model settings such as beam size and choice of ASR target preprocessing. While Weiss et al. (2017); Sperber et al. (2019) use character targets for ASR, we use BPE, which gave us an average increase of 2 BLEU. Further, we note that search space in decoding has significant impact on cascaded model performance. In cascaded models, errors produced by ASR can be unrecoverable, as the MT component has access only to ASR output. While Sperber et al. (2019) use a beam of size 1 for the ASR component of their cascade to compare with their two-stage end-toHIGH (160hr) MID (40hr) LOW (20hr) Components Model Source dev test dev test dev test ASR ↓ MT ↑ Cascaded Weiss et al. (2017) 45.1 45.5 – – – – 23.2 57.9 Kumar et al. (2014) – 40.4† – – – – 25.3 62.9 Sperber et al. (2019) – 32.5 – 16.8 – 6.6 40.9 58.1 End-to-End Weiss et al. (2017) 46.5 47.3∗ – – – – Salesky et al. (2019) 37.6 38.8 21.0 19.8 11.1 10.0 Sperber et al. (2019) – 36.7 – 31.9 – 22.8 Stoian et al. (2020) 34.1 34.6 – – 10.3 10.2 + Add’l Data Sperber et al. (2019) – 38.8 – – – – Stoian et al. (2020) 37.9 37.8 – – 20.1 20.2 Table 2: End-to-end vs cascaded speech translation model performance in BLEU↑on Fisher Spanish-English data from the literature. (†) denotes the best previous academic result on the full dataset, (∗) the best from industry. Component models for cascades reported on test on full dataset: ASR reported in WER↓and MT in BLEU↑. 2392 end models, we find that using equal beam sizes of 15 for both ASR and MT improves cascaded performance with the same model by 4-8 BLEU; combining these two parameter changes makes the same cascaded model a much more competitive baseline (compare lines 3 in both Table 2 and Table 3). In contrast, widening beam size to yield an equivalent search space for end-to-end models has diminishing returns after a certain point; we did not see further benefits with a larger beam (> 15). Our Baselines. We report best numbers from previous work in Table 2 for comparison (which may use multi-task training), but use single-task models in our work. We report our baseline results in Table 3. On the full dataset, our baseline cascade improves slightly over Kumar et al. (2014) with 41.0 compared to 40.4 on test, a mark most recent work has not matched primarily due to model choices noted above, with component ASR performance of WER 30.4 and 58.6 BLEU for MT. Our end-to-end baseline is comparable to the baselines in Salesky et al. (2019); Sperber et al. (2019); Stoian et al. (2020). This suggests we have competitive baselines for both end-to-end and cascaded models. 7 Results Using Phone Features We compare our two ways to leverage phone features to our cascaded and end-to-end baselines across three resource conditions. Table 3 shows our results; following previous work, all BLEU scores are multi-reference. Average single reference scores may be found in Appendix A. All models using phone supervision outperform the end-to-end baseline on all three resource conditions, while our proposed models also exceed the cascaded baseline and previous work at lower-resource conditions. Phone features. Salesky et al. (2019) performs most similarly to the end-to-end baseline, but nonetheless represents an average relative improvement of 13% across the three data sizes with a significant reduction in training time. Our phone featured models use not just the phone segmentation, but also the phone labels, and perform significantly better. 
Our phone end-to-end model not only shows less of a decrease in performance across Figure 2: Performance of all models relative to ‘Baseline Cascade’ (∆= 0) across our 3 resource conditions. Cascaded models in orange, end-to-end models in purple. Our proposed models yield improvements across all three conditions, with a widening margin under low-resource conditions for the phone cascade. resource conditions than Salesky et al. (2019), but further improves by 4 BLEU over the baseline cascade on our two lower-resource conditions. This suggests augmenting embeddings with discrete phone features is more effective than improved downsampling. The phone cascade performs still better, with marked improvements across all conditions over all other models (see Figure 2). On the full dataset, using phones as the source for MT in a cascade performs ∼2 BLEU better than using BPE, while at 40 and 20 hours this increases to up to 10 BLEU. We analyze the robustness of phone models further in Section 8. Hybrid cascade. We additionally use a ‘hybrid cascade’ model to compare using phone features to improving ASR. Our hybrid cascade uses an ASR model with phone-informed downsampling and BPE targets (Salesky et al., 2019). This improves the WER of our ASR model to 28.1 on dev and 23.2 on test, matching Weiss et al. (2017)’s state-of-the-art on test (23.2) and approaching it on dev (25.7). Our hybrid cascade performs more similarly to Weiss et al.’s cascade on the full dataset, with 45.0 to their 45.5 on test, and is our bestperforming ST model on the full dataset. However, at lower-resource conditions, it does not perform as favorHIGH (160hr) MID (40hr) LOW (20hr) Model dev test ∆ dev test ∆ dev test ∆ Baseline Baseline End-to-End 32.4 33.7 – 19.5 17.4 – 9.8 9.8 – Salesky et al. (2019) 37.6 38.8 +5.2 21.0 19.8 +2.0 11.1 10.0 +0.8 Baseline Cascade 39.7 41.0 +7.3 29.8 27.1 +10.0 22.6 20.2 +11.6 Proposed Phone End-to-End 40.5 42.1 +8.3 34.5 33.0 +15.3 26.7 26.2 +16.7 Phone Cascade 41.6 43.3 +9.4 37.2 37.4 +18.9 32.2 31.5 +22.1 Hybrid Cascade 42.9 45.0 +10.9 33.3 31.2 +13.8 23.2 21.5 +12.6 Table 3: Results in BLEU↑comparing our proposed phone featured models to baselines. We compare three resource conditions, and show average improvement for dev and test (∆). Best performance bolded by column. 2393 ably compared to phone featured models – as shown in Figure 2, both the phone cascade and phone end-to-end models outperform the hybrid cascade at lower-resource conditions, by up to 10 BLEU at 20 hours. This suggests improving ASR may enable cascades to perform better at high-resource conditions, but under lower-resource conditions it is not as effective as utilizing phone features. Training time. In addition to performance improvements, our models with phone features are typically more efficient with respect to training time, shown in Table 4. The fixed time to produce phone labels, which must be performed before translation, becomes a greater proportion of overall training time at lower-resource settings. In particular, the phone end-to-end model offers similar training time reduction over the baseline to Salesky et al. (2019), where downsampling reduces sequence lengths by up to 60%, with unreduced sequence lengths through earlier convergence; this model offers a better trade-off between time and performance. Model HIGH MID LOW ∆ Baseline End-to-End 118hr 40hr 22hr – Salesky et al. 
(2019) 41hr 13hr 10hr 0.4× Baseline Cascade 76hr 19hr 12hr 0.6× Phone Cascade 57hr 39hr 27hr 0.7× Phone End-to-End 42hr 20hr 13hr 0.4× Hybrid Cascade 47hr 34hr 24hr 0.6× Table 4: Total training time · E 2 0 for all models (including time to generate phone features) on 3 resource conditions. The ASR and MT models in the baseline cascade can be trained in parallel, reflected here, while phone featured models may not as the MT requires phone features from ASR. Comparing to previous work using additional data. Previous work used the parallel speech transcripts in this dataset for auxiliary tasks with gains of up to 2 BLEU; we show using the same data to generate phone supervision is far more effective. We note that our phone models further outperform previous work trained with additional corpora. The attention-passing model of Sperber et al. (2019) trained on additional parallel SpanishEnglish text yields 38.8 on test on the full dataset, which Salesky et al. (2019) matches on the full dataset and our proposed models exceed, with the phone cascade yielding a similar result (37.4) trained on only 40 hours. Pre-training with 300 hours of English ASR data and fine-tuning on 20 hours of Spanish-English data, Stoian et al. (2020); Bansal et al. (2019) improve their end-toend models from ≈10 BLEU to 20.2. All three of our proposed models exceed this mark trained on 20 hours of Fisher. 8 Model Robustness & Further Analysis In this section, we analyze the robustness of each of our models by varying the quality of our phone features, and further explore the strengths and limitations of each model. 8.1 Phone Cascade Phone cascades use a representation for translation which may be more robust to non-phonetic aspects of orthography. However, as a cascaded model, this still requires hard decisions between ASR and MT, and so we may expect lower phone quality to lead to unrecoverable errors. Figure 3 compares the impact of phone quality on the performance of phone cascades trained on our high, medium, and low-resource conditions. We use alignments produced with gold transcripts as an upper bound on performance. We note that with gold alignments, translation performance is similar to text-based translation (see Section 6). We see that phone quality does have a significant impact on performance, with the MT model trained on low phone quality yielding similar translation performance using the full 160 hour dataset to the MT model with the highest quality phones trained on only 20 hours. However, we also see significantly more data-efficiency with this model, with less reduction in performance between 160hr →40hr →20hr training conditions than previous models. Figure 3: Phone Cascade Robustness: using phone labels in place of BPE as the text source for downstream MT. Comparing performance across our three data conditions and phone label qualities. Redundancy. For the phone cascade models compared in Figure 3, we collapse adjacent consecutive phones with the same label, i.e. when three consecutive frames have been aligned to the same phone label ‘B B B’ we have reduced the sequence to a single phone ‘B’ for translation. We additionally compared translating non-uniqued phone sequences (e.g. the same sequence length as the number of frames) as a more controlled proxy for our model’s handling of longer frame-based feature vector sequences compared to Salesky et al. (2019)’s downsampled feature vector sequences. 
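A sketch of the uniquing step just described (our own illustration): consecutive identical labels such as 'B B B' collapse to a single 'B', which is the sequence the phone cascade translates; skipping this step yields the redundant frame-level input whose effect is reported next.

```python
from itertools import groupby

def collapse_phones(frame_phones):
    """Collapse runs of identical frame-level labels, e.g.
    ['B', 'B', 'B', 'OH', 'OH', 'T'] -> ['B', 'OH', 'T']."""
    return [label for label, _ in groupby(frame_phones)]

# Example: the uniqued sequence is what the downstream MT model sees.
print(collapse_phones(["B", "B", "B", "OH", "OH", "T"]))  # ['B', 'OH', 'T']
```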
The redundant phones caused consistent decreases in BLEU, 2394 with much greater impact in lower-resource conditions. Translating the full sequence of redundant frame-level phone labels, for the full 160hr dataset, all models performed on average 0.6 BLEU worse; for 40hr, 1.8 BLEU worse; and with 20 hours, 4.1 BLEU worse – a 13% decrease in performance solely from non-uniqued sequences. Phones correspond to a variable-length number of speech frames depending on context, speaker, and other semantic information. When translating speech feature vectors, speech features within a phone are similar but uniquely valued; using instead phone labels in a phone cascade, the labels are identical though still redundant. These results suggest our LSTM-based models are better able to handle redundancy and variable phone length at higher resource conditions with sufficient examples, but are less able to handle redundancy with less training data. 8.2 Phone End-to-End Our phone end-to-end model concatenates trainable embeddings for phone labels to frame-level filterbank features, associating similar feature vectors globally across the corpus, as opposed to locally within an utterance as with the phone-averaged embeddings (Section 8.3). Figure 4 compares the results of these factored models using phone features of differing qualities, with ‘gold’ alignments as an upper bound. The phone end-to-end models compared do not reach the same upper performance as the phone cascades: comparing gold phone labels, the phone end-to-end model performs slightly worse at 160hr with more degradation in performance at 40hr and 20hr. While this comparison is even more pronounced for ‘low’ phone quality than ‘gold,’ the phone end-to-end model has more similar performance between ‘gold’ and ‘high’ phone quality than the cascade. This model’s input contains both the phone features used in the phone cascade and speech features of the baseline end-to-end model, but unlike the phone casFigure 4: Phone End-to-End Robustness: trainable embeddings for phone labels are concatenated to framelevel filterbank features. Comparing performance across three data conditions and phone label qualities. cade or Salesky et al. (2019) the input sequence has not been reduced in length. That the end-to-end phone model achieves top performance and converges much faster than end-to-end baseline is unsurprising, as access to both speech feature vectors and phone labels mitigates the effects of long noisy input sequences. The significant performance improvements over Salesky et al. (2019), however, are more interesting, as these models make use of the similar information in different ways – the use of discrete embeddings seems to aid the phone end-to-end model, though the sequence length is not reduced. The model’s performance degradation compared to the phone cascade in lower-resource conditions is likely due in part to these sequence lengths, as shown by our additional experiments with input redundancy for the cascade. The greater reduction in performance here using lower quality phones suggests the noise of the labels and concatenated filterbank features compound, further detracting from performance. Perhaps further investigation into the relative weights placed on the two embedding factors over the training process could close this additional gap. 8.3 Phone Segmentation: Salesky et al. (2019) We also compare to the models from Salesky et al. (2019) as a strong end-to-end baseline. 
That work introduced downsampling informed by phone segmentation – unlike our other models, the value of the phone label is not used, but rather, phone alignments are used only to determine the boundary between adjacent phones for variable-length downsampling. Their model provides considerable training and decoding time improvements due to the reduced source sequence length, and shows consistent improvements over the baseline end-to-end model using the original filterbank feature sequences which increase with the amount of training data. However, their model has lower overall performance and with much smaller performance improvements over our baselines in lower-resource conditions than the phone featured models we propose here. We hypothesize that the primary reason for their BLEU improvements is the reduction in local redundancy between similar frames, as discovered in the previous section. We refer readers to their paper for further analysis. 8.4 Quality of Phone Labels We show two examples of phone sequences produced with each overall model quality in Figure 5, uniqued within consecutive frame sequences with the same label for space constraints. Individual phones are typically 5-20 frames. We see the primary difference in produced phones between different models is the label values, rather than the boundaries. While we do see some cases where the boundaries shift, they chiefly vary by only 1-3 frames. It is not the case that there are significantly more or fewer phone segments aligned per utterance by quality, though there are outlying utterances (Example 2 – ‘Low’). 2395 Figure 5: Two examples of phone sequences demonstrating differences across qualities of phone features. (See Table 1 for the mapping between quality and generation procedure). Note: word-level segmentation is not marked, as it is also not present in {speech,phone} source sequences for translation. Relating our observed trends to the differences between our phone cascades and phone end-to-end models, we note that differences in frame-level phone boundaries would not affect our phone cascaded models, where the speech features are discarded, while they would affect our phone end-to-end models, where the phone labels are concatenated to speech feature vectors and associate them across the corpus. While errors in phone labels may be seen as ‘unrecoverable’ in a cascade, for the end-to-end model, they add noise to distribution of filterbank feature associated with each phone label embedding, which appears to have a more negative impact on performance than the hard decisions in cascades. Though the concatenated filterbank features may allow our end-to-end models to recover from discrete label errors, our results testing various phone qualities suggest this may only be the case under higher-resource settings with sufficient examples. 9 Related Work Speech translation was initially performed by cascading separately trained ASR and MT models, allowing each model to be trained on larger data sources without parallel speech, transcriptions, and translations, but potentially yielding unrecoverable errors between models. Linking models through lattices with both phrase-based (Kumar et al., 2014) and neural MT (Sperber et al., 2017) reduced many such errors. Using one model to directly translate speech was later enabled by attentional encoder-decoder models. Direct end-to-end speech translation was first explored as a way to reduce both error propagation, and also the need for high quality intermediate transcriptions (e.g. 
for unwritten languages). The first such models were investigated in B´erard et al. (2016); Duong et al. (2016), but these used, respectively, a small synthetic corpus and evaluated on speech-to-text alignments rather than translation. Subsequently Weiss et al. (2017) extended these neural attentional models to deep, multitask models with excellent results on Fisher Spanish– English, exceeding a cascade for the first time. However, efforts from the community have not yet replicated their success (Stoian et al., 2020; Sperber et al., 2019; Salesky et al., 2019). End-to-end models have performed inconsistently compared to cascades on other corpora: B´erard et al. (2018) perform well on high-resource audiobooks but do not exceed a cascade; Anastasopoulos and Chiang (2018) found ‘triangle’ models performed better than cascades for 2 of 3 very low-resource language pairs; and in the most recent IWSLT evaluation campaigns, cascades have remained the highest-performing systems (Niehues et al., 2018, 2019). Similarly-motivated work exists in speech translation. In addition to Salesky et al. (2019); Sperber et al. (2019) addressed above, preliminary cascades using phone-like units have been explored for low-resource speech translation, motivated by translation of unwritten languages where a traditional cascade would not be possible. To this end, Bansal et al. (2018) utilized unsupervised term discovery, and Wilkinson et al. (2016) synthesized speech; but these approaches were only evaluated in terms of precision and recall and were not tested on both ‘higher-resource’ and natural speech data conditions. 10 Conclusion We show that phone features significantly improve the performance and data efficiency of neural speech translation models. We study the existing performance gap between cascaded and end-to-end models, and introduce two methods to use phoneme-level features in both architectures. Our improvements hold across high, medium, and low-resource conditions. Our greatest improvements are seen in our lowest-resource settings (20 hours), where our end-to-end model outperforms a strong baseline cascade by ≈5 BLEU, and our cascade outperforms prior work by ≈9 BLEU. Generating phone features uses the same data as auxiliary speech recognition tasks from prior work; our experiments suggest these features are a more effective use of this data, with our models matching the performance from previous works’ performance without additional training data. We hope that these model comparisons and results inform development of more robust end-to-end models, and provide a stronger benchmark for performance on low-resource settings. Acknowledgments The authors thank Andrew Runge, Carlos Aguirre, Carol Edwards, Eleanor Chodroff, Florian’s cluster, Huda Khayrallah, Matthew Wiesner, Nikolai Vogler, Rachel Wicks, Ryan Cotterell, and the anonymous reviewers for helpful feedback and resources. 2396 References Antonios Anastasopoulos and David Chiang. 2018. Tied multitask learning for neural speech translation. Proc. of NAACL. arXiv:1802.06655. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. Proc. of ICLR. arXiv:1409.0473. Sameer Bansal, Herman Kamper, Karen Livescu, Adam Lopez, and Sharon Goldwater. 2018. Low-resource speech-to-text translation. Proc. of Interspeech. arXiv:1803.09164. Sameer Bansal, Herman Kamper, Karen Livescu, Adam Lopez, and Sharon Goldwater. 2019. 
Pre-training on high-resource speech recognition improves lowresource speech-to-text translation. Proc. of NAACL. arXiv:1809.01431. Alexandre B´erard, Laurent Besacier, Ali Can Kocabiyikoglu, and Olivier Pietquin. 2018. End-to-end automatic speech translation of audiobooks. In Proc. of ICASSP. Alexandre B´erard, Olivier Pietquin, Christophe Servan, and Laurent Besacier. 2016. Listen and translate: A proof of concept for end-to-end speech-to-text translation. NIPS Workshop on End-to-end Learning for Speech and Audio Processing. arXiv:1612.01744. Aditi Chaudhary, Chunting Zhou, Lori Levin, Graham Neubig, David R Mortensen, and Jaime G Carbonell. 2018. Adapting word embeddings to new languages with morphological and phonological subword representations. Proc. of EMNLP. arXiv:1808.09500. Long Duong, Antonios Anastasopoulos, David Chiang, Steven Bird, and Trevor Cohn. 2016. An attentional model for speech translation without transcription. In Proc. of NAACL. Yarin Gal and Zoubin Ghahramani. 2016. A theoretically grounded application of dropout in recurrent neural networks. Proc. of NeurIPS. D Jurafsky and J Martin. 2000. Speech and Language Processing, 3rd edition. Prentice Hall. Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. Proc. of ICLR. arXiv:1412.6980. Gaurav Kumar, Matt Post, Daniel Povey, and Sanjeev Khudanpur. 2014. Some insights from translating conversational telephone speech. Proc. of ICASSP. Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attentionbased neural machine translation. Proc. of EMNLP. Graham Neubig, Matthias Sperber, Xinyi Wang, Matthieu Felix, Austin Matthews, Sarguna Padmanabhan, Ye Qi, Devendra Singh Sachan, Philip Arthur, Pierre Godard, et al. 2018. XNMT: The eXtensible neural machine translation toolkit. Proc. of AMTA. arXiv:1803.00188. Toan Q Nguyen and David Chiang. 2018. Improving lexical choice in neural machine translation. Proc. of NAACL. arXiv:1710.01329. Jan Niehues, Ronaldo Cattoni, Sebastian St¨uker, Mauro Cettolo, Marco Turchi, and Marcello Federico. 2018. The iwslt 2018 evaluation campaign. Jan Niehues, Ronaldo Cattoni, Sebastian St¨uker, Matteo Negri, Marco Turchi, Thanh-Le Ha, Elizabeth Salesky, Ramon Sanabria, Lo¨ıc Barrault, Lucia Specia, and Marcello Federico. 2019. The iwslt 2019 evaluation campaign. Matt Post, Gaurav Kumar, Adam Lopez, Damianos Karakos, Chris Callison-Burch, and Sanjeev Khudanpur. 2013. Improved speech-to-text translation with the Fisher and Callhome Spanish–English speech translation corpus. Daniel Povey, Arnab Ghoshal, Gilles Boulianne, Lukas Burget, Ondrej Glembek, Nagendra Goel, Mirko Hannemann, Petr Motlicek, Yanmin Qian, Petr Schwarz, et al. 2011. The Kaldi speech recognition toolkit. Proc. of ASRU. Elizabeth Salesky, Matthias Sperber, and Alan Black. 2019. Exploring phoneme-level speech representations for end-to-end speech translation. Proc. of ACL. Rico Sennrich and Barry Haddow. 2016. Linguistic input features improve neural machine translation. Proc. of WMT. arXiv:1606.02892. Matthias Sperber, Graham Neubig, Jan Niehues, and Alex Waibel. 2017. Neural lattice-to-sequence models for uncertain inputs. Proc. of EMNLP. arXiv:1704.00559. Matthias Sperber, Graham Neubig, Jan Niehues, and Alex Waibel. 2019. Attention-passing models for robust and data-efficient end-to-end speech translation. Proc. of TACL. arXiv:1904.07209. Mihaela C Stoian, Sameer Bansal, and Sharon Goldwater. 2020. Analyzing ASR pretraining for lowresource speech-to-text translation. 
Proc. of ICASSP. Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. Proc. of CVPR. Ron J Weiss, Jan Chorowski, Navdeep Jaitly, Yonghui Wu, and Zhifeng Chen. 2017. Sequence-to-sequence models can directly transcribe foreign speech. Proc. of INTERSPEECH. arXiv:1703.08581. Andrew Wilkinson, Tiancheng Zhao, and Alan W Black. 2016. Deriving phonetic transcriptions and discovering word segmentations for speech-to-speech translation in low-resource settings. Proc. of INTERSPEECH. Yu Zhang, William Chan, and Navdeep Jaitly. 2017. Very deep convolutional networks for end-to-end speech recognition. Proc. of ICASSP. 2397 A Single-Reference BLEU Scores These tables contain the same results as our tables and figures as in the main paper, but show average singlereference BLEU scores in place of multi-reference (4-reference) BLEU. WER for ASR is unchanged: the dataset contains a single reference transcript for ASR. Results from prior work report only multi-reference BLEU and so are not included below. ASR↓ MT↑ Cascade End-to-End Data dev test dev test dev test dev test Full 33.3 30.4 34.5 33.6 23.2 23.7 19.0 19.6 40hr 44.8 46.7 29.9 28.3 17.4 15.7 11.5 10.4 20hr 56.3 59.1 22.4 22.6 13.2 11.8 5.9 5.3 Table 8: Baseline results for end-to-end and cascaded speech translation models, with component ASR and MT model performance for cascades (blue). ASR results in WER↓and translation results in BLEU↑. Phone Quality 160hr 40hr 20hr dev test dev test dev test Gold 33.3 33.2 29.3 28.5 24.4 23.0 High 24.1 25.1 21.6 21.7 18.9 18.3 Med 23.1 23.4 20.6 20.7 17.6 17.2 Low 18.2 19.1 16.4 17.0 14.1 14.2 Table 5: Phone Cascades. We use frame-level phone labels as the text source for downstream MT. Comparing method robustness to phone quality and resource conditions. Phone Quality 160hr 40hr 20hr dev test dev test dev test Gold 34.1 31.3 27.9 23.4 20.5 17.2 Med 24.0 23.7 20.8 18.4 16.5 14.6 Low 20.5 18.3 17.0 13.0 12.2 8.7 Table 6: Phone End-to-End. Trainable embeddings for phone labels are concatenated to frame-level filterbank features. Comparing method robustness to phone quality and resource conditions. Full (160hr) 40hr 20hr Model dev test ∆ dev test ∆ dev test ∆ Baseline Baseline End-to-End 19.0 19.6 – 11.5 10.4 – 5.9 5.3 – Salesky et al. (2019) 22.0 21.9 +2.7 12.6 11.6 +1.2 6.7 6.2 +0.9 Baseline Cascade 23.2 23.7 +4.2 17.4 15.7 +5.6 13.2 11.8 +6.9 Proposed Phone End-to-End 24.0 23.7 +4.6 20.8 18.4 +8.7 16.5 14.6 +10.0 Phone Cascade 24.1 25.1 +5.3 21.6 21.7 +10.7 18.9 18.3 +13.0 Hybrid Cascade 24.9 25.9 +6.1 19.6 18.2 +8.0 13.6 12.6 +7.5 Table 7: Results in BLEU↑comparing our proposed phone featured models to baselines. We compare three resource conditions, and show average improvement for dev and test (∆). Best performance bolded by column.
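The only difference between these appendix tables and the main-paper numbers is the number of references each hypothesis is scored against. As a toy sketch of multi- versus single-reference corpus BLEU using the sacrebleu toolkit (not necessarily the scorer used by the authors, and with invented example sentences):

```python
import sacrebleu

# Toy data: two system outputs and two aligned reference sets
# (the Fisher test sets used in the paper provide four references per segment).
hypotheses = ["yes please come in", "we will see each other tomorrow"]
references = [
    ["yes come in please", "we will see each other tomorrow"],  # reference set 1
    ["yes , please come in", "see you tomorrow"],               # reference set 2
]

multi_ref = sacrebleu.corpus_bleu(hypotheses, references)       # scored against all sets
single_ref = sacrebleu.corpus_bleu(hypotheses, references[:1])  # scored against set 1 only
print(f"multi-ref BLEU: {multi_ref.score:.1f}   single-ref BLEU: {single_ref.score:.1f}")
```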
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2398–2413 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2398 Grounding Conversations with Improvised Dialogues Hyundong Cho and Jonathan May Information Sciences Institute University of Southern California {jcho, jonmay}@isi.edu Abstract Effective dialogue involves grounding, the process of establishing mutual knowledge that is essential for communication between people. Modern dialogue systems are not explicitly trained to build common ground, and therefore overlook this important aspect of communication. Improvisational theater (improv) intrinsically contains a high proportion of dialogue focused on building common ground, and makes use of the yes-and principle, a strong grounding speech act, to establish coherence and an actionable objective reality. We collect a corpus of more than 26,000 yes-and turns, transcribing them from improv dialogues and extracting them from larger, but more sparsely populated movie script dialogue corpora, via a bootstrapped classifier. We fine-tune chit-chat dialogue systems with our corpus to encourage more grounded, relevant conversation and confirm these findings with human evaluations. 1 Introduction For humans, dialogue is fundamentally a collaborative, cooperative process by which partners coordinate via turns or acts to jointly construct a common world state (Bohm and Nichol, 2004). Without coordination, partners may establish different or conflicting world states, leading to solipsism in the best case and conflict in the worst. Clark and Schaefer (1989), describe five dimensions of grounding, by which partners cooperate to establish common ground, or a shared world state. The dimension of “initiation of next relevant contribution” is the most effective of these in expressing understanding of an ongoing dialogue, and yet is the least observed in dialogue systems. Improvisational theater (improv) is a form of theater in which most or all of what is performed is unscripted, created spontaneously by the actors in real time. Because the performance is not scripted and there is typically little to no scenery or other esFigure 1: Explicit (top) and implicit (bottom) examples of yes-ands in the SPOLIN corpus. The text highlighted in light blue reflects acceptance of the context established in the prompt (“yes”) and the text highlighted in orange initiates a new relevant contribution to the dialogue (“and”). tablished environment,1 there is no objective reality that can naturally ground the scene. Hence, actors must mainly rely on dialogue in order to build a coherent scene and progressively establish a common world view. This necessitates accelerated use of the “initiation of next relevant contribution,” which in improv is known as the yes-and principle. The yes-and principle is a rule-of-thumb that suggests that a participant should accept the reality of what the other participant has said (“yes”) and expand or refine that reality with additional information (“and”). Since actors consciously abide by this principle during improv performances, there is a high proportion of these turns embedded in improv dialogue, which helps ensure scenes are coherent and interesting. 1except for, on occasion, external stimulus such as a suggestion from the audience 2399 Open-domain neural dialogue systems, by contrast, specifically lack coherence and interestingness. 
They commonly repeat previous utterances (Li et al., 2016c) or generate non-committal, generic statements such as I don’t know that are logically coherent as a response but preempt further conversation (Sordoni et al., 2015; Serban et al., 2015; Li et al., 2016a). Either of these developments leads to a conversational black hole and discourages participation in further dialogue turns. This is a critical shortcoming for open-domain dialogue agents, which, unlike task-oriented dialogue systems, are not guided by specific objectives other than entertainment (Huang et al., 2020). It would behoove such systems to adopt the strategies improvisers include by habit in their dialogues and, consequently, incorporating improv acts should be a key focus for the dialogue community. Yet, to the best of our knowledge, this has not been previously done. There has been work in applying improv to build believable agents that interact with humans (Bruce et al., 2000; Winston and Magerko, 2017) or generate improvised stories (Martin et al., 2016), but development of improvcapable systems in the neural era is largely absent, stymied, we suspect, by the lack of substantial corpora. This is unsurprising; while improv speech acts such as yes-and are crucial in all dialogues, they are only highly concentrated in improv dialogues. And improv dialogues are quite difficult to collect; research collections (Busso and Narayanan, 2008) have been far too small to be useful in the modern ML era. The art form has historically been mostly ephemeral, performed live in regional venues on shoestring budgets and rarely recorded.2 Transcripts are all but absent and mainstream media products are rare.3 However, the liberalization of high quality audio podcasts since 2014 has enabled the availability of a long tail of niche products, improv included (McHugh, 2016). 2The art form has long roots, extending to the Italian Commedia dell’arte tradition from the 16th century and farces from the Roman era, but we constrain our focus to the post20th century form developed and championed by e.g. Keith Johnstone (Johnstone, 2017), Del Close (Halpern et al., 1994), and our corpus’ namesake, Viola Spolin (Spolin et al., 1986). Spolin was the originator of Theater Games, acting exercises that encourage the development of specific theatrical skills. As our corpus is similarly designed to elicit specific skills, we backronym it in recognition of her influence. 3One exception, the long-running TV show Whose Line Is It Anyway, has, despite a large number of episodes, surprisingly little continuous improvised dialogue, due to the rapid-fire nature of the program. Therefore we set our objective as collecting yesand-type dialogue pairs (yes-ands) to enable their modeling by corpus-driven dialogue systems. We mine podcasts and existing movie script corpora for dialogue that abides by the yes-and principle and extract dialogue pairs from these sources to build the Selected Pairs Of Learnable ImprovisatioN (SPOLIN) corpus. SPOLIN is a collection of more than 26,000 English dialogue turn pairs, each consisting of a prompt and subsequent response, which abide by the yes-and principle, though in diverse manners. Examples of yes-and type dialogue pairs collected for SPOLIN are in Figure 1. The corpus is substantial enough to be usable for fine-tuning existing dialogue models to encourage more yes-and behavior, and beyond that may prove a valuable knowledge base for empirical sociolinguistic studies on this dialogue act. 
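To make the format of these turn pairs concrete, a single record can be pictured as below; the field names are hypothetical (the released schema may differ), and the example text is the explicit yes-and shown later in Table 2:

```python
import json

# Hypothetical layout of one SPOLIN turn pair; field names are illustrative only.
record = {
    "prompt": "Does this map look homemade to you?",
    "response": "Yeah, it looks like someone without a grasp of English drew it.",
    "label": "yes-and",
}
print(json.dumps(record, indent=2))
```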
Our contributions are summarized as follows: • We carefully curate Selected Pairs Of Learnable ImprovisatioN (SPOLIN), the first largescale corpus of yes-and dialogue acts, sourced from improv and movie dialogues. • We iteratively build a high-precision yes-and classifier, which we use to mine additional yesands from dialogue corpora with high volume but low yes-and density. • We fine-tune existing open-domain conversational models with our corpus and confirm via human evaluations that this approach improves creative grounding. • We release our models and data for public use, including a 64,000 turn pair extension of the core SPOLIN, at https://justin-cho. com/spolin. 2 Data Collection Our data collection has five stages: 1. Manually extract yes-ands from a rich corpus of improv to obtain an initial set of yes-ands. 2. Construct a yes-and classifier from the corpus of collected yes-and data and negative examples. 3. Use the classifier from step 2 to automatically extract yes-and candidates from a much larger but sparser dialogue corpus. 2400 Figure 2: An illustration of the yes-and collection workflow. The core SPOLIN corpus comprises Spontaneanation yes-ands and Cornell yes-ands (in blue boxes). However, SPOLIN can be augmented by including other generalpurpose dialogue corpora in place of Cornell in this workflow, as described in Section 5. Figure 3: Amazon Mechanical Turk interface for transcribing yes-ands from Spontaneanation episodes. Approximate transcriptions with speaker turns and time stamps generated from Amazon Transcribe are provided for additional guidance. 4. If necessary, manually validate candidates before adding them to the yes-and corpus. 5. Repeat from step 2 as needed. An overview of this process is shown in Figure 2. 2.1 Core yes-and Collection from Spontaneanation We select the Spontaneanation4 podcast as a source of concentrated yes-ands for its relatively noisefree recording quality and high-quality volume of broad domain improv dialogue. Each episode of this podcast includes an approximately 30 minutelong improv session performed by professional improvisers. Over its 201 episodes, we identified a total of 43K lines of useful spoken dialogue. Given the confluence of a lack of objective reality, and uninterrupted multiturn dialogue, the improvisers mostly abide by the yes-and principle, and therefore Spontaneanation is a rich resource for natural, high-quality yes-ands. As it exists only in audio form, and automatic transcription services are too noisy for high quality annotation use, we 4https://www.earwolf.com/show/ spontaneanation-with-paul-f-tompkins/ ask Amazon Mechanical Turk workers (Turkers) to listen to the improv sessions, view Amazon Transcribe preliminary transcriptions, and re-transcribe all of the yes-ands that they hear using our transcription interface, shown in Figure 3. The interface is based on oTranscribe, an open-source transcription service. Although the quality of transcriptions is poor, we find that including them assists the Turkers in identifying speaker turns and also understanding parts that are sometimes incomprehensible without helping context. 2.1.1 Recruiting Quality Crowdworkers for Difficult Annotation Tasks One of the main challenges for the data collection process is to recruit competent Turkers who are able to develop a good understanding of the yes-and principle. 
We actively recruit potential annotators to our task by inviting denizens of the sub-Reddit TurkerNation, rather than simply inviting workers through Amazon’s native task posting interface based on HIT approval rate and total number of HITs approved. Our approach enables more human-level engagement, making it easier to determine Turkers’ English fluency and experience with improv. To ensure their competence, 2401 Iteration 1 2 3 4 Spontaneanation + 10,459 10,459 10,459 10,459 Spontaneanation – 3,225 5,587 Cornell + 3,327 8,464 12,220 Cornell – 10,459 13,786 15,698 17,092 Total Training Samples 20,198 27,572 37,846 45,358 Dev Set Acc. (Spont) 80.9% 73.6% 71.6% 73.0% Dev Set Acc. (Cornell) 52.2% 56.8% 62.1% 64.5% Confidence Threshold 95% 70% 50% 50% New Extraction Volume 12,360 12,802 5,150 3,515 New Proportion of yes-ands 26.9% 44.0% 72.9% 78.4% Table 1: Iterative data collection results over Cornell. + indicates yes-ands and – indicates non-yes-ands. These counts exclude 500 turns collected from each of Spontaneanation and Cornell to form the validation set. The New Extraction Volume row indicates the new number of yes-and candidates identified with the given confidence threshold, and the New Proportion of yes-and row show as a percentage how many of these candidates were indeed evaluated as yes-ands by Turkers. The proportion of yes-ands increases after each iteration despite the lower confidence threshold used to filter the new predictions with the updated classifier. Turkers first read yes-and guidelines (in the appendix) then demonstrate their level of understanding through qualification Human Intelligence Tasks (HITs), which test whether the candidates can identify if a yes-and exists in a 30 second audio segment and transcribe it if there is one. s Even after inviting Turkers for the actual HIT of transcribing yes-ands, we frequently monitor the quality of the data they collect and provide feedback for incorrectly identified yes-ands. Apart from base pay for each episode they work on, we provide incentives for extracting more yes-ands. The pay for our HITs averages well above California minimum wage. From all of the episodes, we extract 10,959 yes-ands, indicating about 25% of the total number of dialogue turns in Spontaneanation are yes-ands. 2.2 Guided Extraction from the Cornell Movie-Dialogs Corpus Although larger than any improv corpus, let alone yes-and corpus known to date, we seek to increase our corpus volume from 10,959 turn pairs. The Cornell Movie-Dialogs Corpus (Danescu-NiculescuMizil and Lee, 2011, Cornell) contains 304,713 turns, nearly an order of magnitude more than Spontaneanation, and it is one of the closest in domain to improv among existing dialogue datasets. However, a sample annotation of 300 randomly selected turn pairs by Turkers reveal only 11.1% of pairs are yes-ands. We thus use the already-collected yes-ands to probe Cornell for likely candidates, to speed the search process. Recent developments of language models pre-trained on massive text data enable the training of high-accuracy models for down-stream tasks even with a small number of samples, by leveraging the contextualized embeddings that these models learn (Devlin et al., 2019; Radford et al., 2019). We thus fine-tune an initial BERT-based sequence classifier based on the implementation of Wolf et al. 
(2019a) with the yes-ands from the Spontaneanation episodes to determine if a given dialogue pair is a yes-and, using a high threshold (initially, a 95% probability of being yes-and) to bias for precision. We ask Turkers to validate the turn pairs identified by the classifier and add the validated pairs to our yes-and corpus. This procedure can be iterated. For the first iteration, we train the classifier with a balanced number of non-yes-ands chosen by random sampling from Cornell, a reasonable assumption due to the relatively low concentration of yesands observed. The same Turkers that extracted yes-ands from Spontaneanation are invited to validate the yes-and candidates filtered out by the classifier using the interface shown in Figure 4. In order to ensure consistent annotation standards among Turkers, they are given a small number of overlapping HITs against which we validated. For 90 samples of unfiltered yes-and candidates from Cornell, the two workers yield a reasonably high Cohen’s κ value of 0.74. Turkers are paid at rates consistent with their rates on the extraction-fromSpontaneanation task. After the set of Cornell yes-and candidates are validated, the yes-ands and non-yes-ands are added to the training set to train a new classifier, and the same process is repeated. We hold out 500 dialogue pairs from each subcategory (i.e. Spontaneanation yes-ands) as the development set for monitoring the classifier’s performance after each iteration. We incrementally lower the classification threshold for choosing a yes-and candidate as the classifier improved. We set this threshold on each iteration except for the first by retrospective evaluation of the classifier on the actual yes-and candidates’ labels from previous iterations. The threshold with the highest F1 score is chosen to filter new yes-and candidates to be validated. We balance each progressively larger corpus with negative sample turn pairs, which are either randomly selected from Cornell (round 1), selected 2402 Figure 4: Amazon Mechanical Turk interface for validating yes-and candidates determined by the yes-and classifier. Turkers are asked to correct minor errors in grammar, spelling, and punctuation for qualifying yes-and candidates, which are then categorized as ‘Typo/Fix.’ from the rejected-but-extracted turn pairs from Cornell (round 2 and later), or sampled from nonyes-and turn pairs in Spontaneanation formed by random coupling of prompts and responses of the Spontaneanation yes-ands (round 3 and later). The latter forces the classifier to make decisions based on semantic features relevant to a yes-and instead of only stylometric features in Spontaneanation yes-ands. We stop this iterative process after four rounds, when fewer than 5,000 new yes-and candidates are identified by the classifier, yielding a total corpus size of 26,435 yes-ands and 23,938 negative samples. An overview of this iterative process is summarized in Table 1. The negative sampling procedure, while somewhat ad-hoc, ultimately provides a mix of turn pairs from both corpora that is sufficient to allow extraction of yes-ands from new corpora at high precision rates, and is sufficient for our goals. 2.3 Additional Notes on yes-and Criteria Although the concept of a yes-and is easy to define and understand, there are borderline cases between a yes-and and a non-yes-and that make the validation phase more difficult than originally expected. One of the cases that confused Turkers in the earlier stages of data collection is the case of yes-buts. 
A yes-but is a yes-and with a response that is coherent with the provided reality, but does not appear to provide an affirmative acceptance of a suggestion given in the prompt. These are different from contradictions that do not align with the previously established reality. In addition, there are instances where the response is a yes-and, but is accepted by a speaker other than the one to whom the prompt is directed. Some yes-and responses initiates a repair of a problem encountered while accepting the prompt, due to a confusion or a possible inconsistency, by asking for clarification (Clark and Schaefer, 1989). While these responses may not strictly establish more detail, they provide information for ultimately establishing new information. We elide these edge cases under the umbrella category yesand in SPOLIN as they further our top-level goal of providing relevant, actionable turn responses. Examples of some of these subtle differences are shown in Table 2. 3 Dataset Analysis In order to provide a better understanding on the characteristics of our corpus, we annotate 200 yesands and 200 non-yes-ands in SPOLIN’s development set to categorize them into specific yes-and or non-yes-and types. We classify yes-ands into explicit yes-ands, implicit yes-ands, or yes-buts. Only 15% of all yesands are explicit yes-ands, containing phrases such as “Yeah” or “Sure” that reflects agreement. Even with such phrases, identifying explicit yes-ands is not a trivial task because it requires semantic understanding of the relevance of the context established in the prompt and that introduced in the response. In fact, there are non-yes-ands that contain phrases affirming agreement but have no contributions or have new contributions that lack relevance. The majority (78%) of yes-ands are implicit yes-ands, meaning that the agreement is implied, often in a subtle manner. The remaining 7% are yes-buts. Non-yes-ands are divided into contradictions and others. Most of the non-yes-and were others, as only 5% of candidates extracted from Cornell are contradictions, which are dialogue pairs with 2403 Type Example % yes-and Explicit P: Does this map look homemade to you? R: Yeah, it looks like someone without a grasp of English drew it. 15% Implicit P: Alright, pull up that plate so I can take a picture. R: Sorry, the coleslaw is definitely giving off a lot of glare. 78% yes-but P: We all must say the chant that we say to the king. R: No, it’s too erotic, please don’t. 7% non-yes-and Contra P: Hey, hey, aren’t you afraid you’ll burn out a tonsil? R: Tonsil? Me? No! Me burn a tonsil? My tonsils won’t burn - As life’s corners I... 5% Other P: I feel different right now. R: You wait and see. You’re going to marry a big hero! 95% Table 2: Examples and proportions of yes-and and non-yes-and types from annotations of 200 yes-ands and nonyes-ands in SPOLIN’s development set. Determining whether a given dialogue pair is a yes-and or not is a non-trivial task, as the agreement or contradiction of the previous dialogue turn’s context is usually implicit. yes-ands non-yes-ands Spontaneanation 10,959 6,087∗ Cornell 15,476 18,351 Total 26,435 24,438 Table 3: Composition of SPOLIN, including the development set. yes-ands and non-yes-ands from Cornell are validated by Turkers. ∗Spontaneanation nonyes-ands are sampled from random combination of prompts and responses in Spontaneanation yes-ands to balance the dataset for training the classifier in the final iteration, as shown in the last column of Table 1. 
a response that actively negates the reality in the prompt. Others encompass any dialogue pairs with a response that lacks coherence to the prompt or adds no or minimal contributions. The distribution and examples of different types of yes-ands and non-yes-ands are shown in Table 2. The main focus of our work is on yes-ands, but we provide non-yes-ands as part of SPOLIN for those interested in training their own classifiers. The negative samples are collected using the methods described in Section 2.2. The composition details of SPOLIN are shown in Table 3. 4 Experiments To evaluate the effect of SPOLIN on generating yes-and responses and thus improving generated dialogue quality, we train a common architecture with a variety of fine-tuning data configurations, both with and without SPOLIN. Specifically, for each data configuration we fine-tune a doublehead GPT-2 model (117M-parameter version based on the implementation by Wolf et al. (2019b)), which achieved state-of-the-art performance on Personachat for the ConvAI-2 dialogue competition (Zhang et al., 2018). We fine-tune the models using two learning objectives, which we weigh equally in calculating loss: 1. Predicting the next word. 2. Predicting the next correct candidate that best fits the dialogue given the dialogue history. The language modeling component uses pretrained weights from OpenAI, while the candidate classification head is trained from scratch. For evaluation, we use the language modeling component of the fine-tuned model to generate single-turn responses for the yes-and prompts in the development set. We use nucleus sampling (Holtzman et al., 2020) for the decoding step to keep only the top tokens with a cumulative probability that together exceed 0.9, from which the next token is chosen with multinomial sampling. 4.1 Data Configurations For our experiments, we use several established dialogue datasets as baselines, namely Persona-chat (Zhang et al., 2018), Cornell (Danescu-NiculescuMizil and Lee, 2011) (the unfiltered corpus out of which we extract yes-ands, as described in Section 2.2), and DailyDialog (Li et al., 2017b). Each of these is an English-language open-domain casual conversation corpus with 100k–300k turns. For each of these datasets, we either simply finetune on that dataset, or fine-tune and then further 2404 Figure 5: Interface used by human evaluators to rank responses based on their quality as a yes-and, where a rank of 1 is most preferred. The correct ranking is shown for this example. The option ranked 1 is a yesbut: it does not reject a reality but rather rejects a suggestion and provides refining information that is most coherent to the prompt. fine-tune with SPOLIN. In another configuration, we also fine-tune directly with SPOLIN on top of GPT-2. The original GPT-2 implementation prepends the personalities given in Persona-chat to the dialogue sequence input before tokenization. For fine-tuning to datasets apart from Persona-chat, we simply do not prepend any auxiliary information to the dialogue sequence input. 4.2 Human Evaluation Automatic metrics that rely on n-gram overlap, such as BLEU, ROUGE, and METEOR, are often used for generative models when there is little variability in the target output (Papineni et al., 2002; Lin, 2004; Banerjee and Lavie, 2005). However, there can be a wide variety of responses that qualify as a good yes-and, a problem common to opendomain generation tasks. 
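Before describing how the generations are judged, the nucleus-sampling decoding step described in the previous section can be sketched as follows; this is a minimal single-step illustration under the paper's stated settings (top-p of 0.9, multinomial sampling), not the authors' implementation:

```python
import torch

def nucleus_sample(logits: torch.Tensor, top_p: float = 0.9) -> int:
    """Sample one token id, keeping only the smallest set of top tokens whose
    cumulative probability exceeds top_p (Holtzman et al., 2020)."""
    probs = torch.softmax(logits, dim=-1)
    sorted_probs, sorted_ids = torch.sort(probs, descending=True)
    cumulative = torch.cumsum(sorted_probs, dim=-1)
    # Keep every token needed to push the cumulative mass past top_p.
    cutoff = int((cumulative < top_p).sum().item()) + 1
    nucleus = sorted_probs[:cutoff] / sorted_probs[:cutoff].sum()  # renormalize
    choice = torch.multinomial(nucleus, num_samples=1)
    return int(sorted_ids[choice].item())

# Toy usage over a random 10-token vocabulary.
print(nucleus_sample(torch.randn(10)))
```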
An adequate evaluation of our models requires assessing the main yes-and criteria: agreement with the context and the quality of the new relevant contribution, both of which are not feasible with the aforementioned metrics. Therefore, we ask human evaluators to compare the quality of the yes-ands generated by various models and the actual response to the prompt in SPOLIN that is used as the input. We ask human evaluators to rank a set of four responses given a prompt, comparing the responses of a model trained only with SPOLIN, a model trained with an existing dialogue corpus, a model trained with both, and the actual response pair from the development set, denoted as “Gold.” These four responses are randomly ordered for each question to prevent evaluators from developing a bias for responses that frequently have a good or poor response in a set order, as shown in Figure 5. The evaluators are permitted to provide the same rank for different responses if they are equal in quality. This evaluation set contains 100 such prompts, and each is evaluated twice by different evaluators. The results of the average ranking and some of the examples generated by the models are shown in Table 4. Results show that models trained only with SPOLIN or with SPOLIN and another dialogue dataset are preferred to the models trained only with another dialogue dataset, although in the case of DailyDialog, the average ranking improves only by at most 0.06 after fine-tuning with SPOLIN. However, even the responses generated by models trained with SPOLIN are not ranked as well as the actual responses in the development set, indicating our models are still inferior to professional human improviser quality. 5 Extracting from Other Corpora The approach to classifier-based mining we describe in Section 2.2 can naturally be applied to other dialogue corpora. We thus next consider mining the gigantic (441M sentence) OpenSubtitles (Lison and Tiedemann, 2016) collection. As OpenSubtitles contains undesirable material, such as subtitles for media with minimal dialogue, we instead mine from the (3.3M sentence) SubTle corpus (Ameixa et al., 2013), a preprocessed subset of OpenSubtitles that heuristically combines subtitle sequences into dialogue form. By iterating through half of this corpus, we collect more than 40,000 yes-ands from it alone, which, when added to SPOLIN, yields what we call SPOLIN-extended, which contains about 68,000 yes-ands, more than 2.5 times the size of the core SPOLIN. Heuristics for finding alternations mean that SubTle’s utterances are shorter than those in Spontaneanation and Cornell, so once the proportion of utterances longer than the average length of in Spontaneanation and Cornell (18.5 words) is less than 40%, we stop further collection in the remainder of the dataset. SPOLINextended is available in the same public repository as SPOLIN. Details of the iterative process as applied to SubTle are in the appendix. 2405 Dataset Avg Rank ↓ Example Prompt Example Responses Persona-chat 3.67 I know alotta women and I’m sure she remembers me. oh my goodness, i don’t know her SPOLIN 3.41 Yeah, she’s a bit of a mystery. Persona-chat+SPOLIN 3.17 So you remember her? I remember her in the shower. Gold 1.92 She does. From when you were a boy. Cornell 3.73 I made this beautiful salmon mousse that just looked like a puddle of spit. What? SPOLIN 3.39 And it’s delicious! Cornell+SPOLIN 3.34 That’s the kind of thing you do when you’re in love. Gold 2.01 It was genius. It’s making me hungry thinking about it. 
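As a sketch of this classifier-based mining step (the checkpoint path and label index below are assumptions, not details given in the paper), scoring candidate turn pairs with the fine-tuned BERT sequence classifier and keeping those above the confidence threshold might look like:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# "yesand-bert" is a placeholder path for the fine-tuned sequence classifier
# from Section 2.2; the label index for "yes-and" is also an assumption.
MODEL_DIR = "yesand-bert"
tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_DIR)
model.eval()

def yesand_probability(prompt: str, response: str) -> float:
    """Classifier probability that (prompt, response) is a yes-and."""
    inputs = tokenizer(prompt, response, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()

def mine_candidates(turn_pairs, threshold=0.7):
    """Keep candidate pairs above the confidence threshold for human validation."""
    return [(p, r) for p, r in turn_pairs if yesand_probability(p, r) >= threshold]
```

Turker validation of the surviving candidates then proceeds as in Section 2.2.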
DailyDialog 3.37 Excuse me. Is anybody in here? I’d like a bagel. Hello? Is anybody in here? I’m sorry, sir. I’m not in the mood. I’m not in the mood. SPOLIN 3.32 I’m in here. I’m just trying to make sure I can get a bagel. DailyDialog+SPOLIN 3.31 Oh, yeah, the guy who left the bagel. Gold 1.87 I can help you. The problem is that the bagels are burned. Table 4: Average human ranking of responses to prompts from Spontaneanation generated by models trained with SPOLIN, an existing dialog corpus, or both, based on the yes-and criteria. Rank is scaled from 1 to 4, lower is better. Dataset Source Size∗ DailyDialog (Li et al., 2017b) Crowdsourced 104K Cornell Movie-Dialogs Corpus (Danescu-Niculescu-Mizil and Lee, 2011) Movie scripts 304K Persona-chat (Zhang et al., 2018) Crowdsourced 162K The Ubuntu Dialogue Corpus (Lowe et al., 2015) Ubuntu chat logs 7M Twitter Triple Conversations (Sordoni et al., 2015) Social media 6K OpenSubtitles (Lison and Tiedemann, 2016) Subtitles 441M sentences SubTle (Eng) (Ameixa et al., 2013) Subtitles 3.3M pairs London-Lund Corpus (Greenbaum and Svartvik, 1990) Various sources 500K words London-Lund Corpus 2 (Põldvere et al., 2017) Various sources 500K words SPOLIN (yes-and only) Improv, Movie scripts 26K pairs SPOLIN-extended (yes-and only) Improv, Movie scripts, subtitles 68K pairs Table 5: A survey of publicly available English language text-based corpora frequently used for open-domain dialogue systems. The last two rows are our contribution. ∗Size is measured as the number of total utterances (dialogue turns) unless otherwise specified. 6 Related Work Many works have identified the same issues of repetitive or non-committal responses generated by neural conversational systems that are at least partially related to the lack of sufficiently high quality yes-ands we deal with in this work; approaches that mitigate these problems vary. The majority of recent works focus on diversifying the responses by modifying the training and decoding objectives (Li et al., 2016a,b, 2017a, 2016c; Xu et al., 2017; Shao et al., 2017). Other methods introduce latent variables to encourage diversity (Serban et al., 2017; Zhao et al., 2017). Some explore methods of re-weighing training instances that encourage diversity (Liu et al., 2018; Lison and Bibauw, 2017; Du and Black, 2019). Our approach is complementary to all the modelbased approaches described here, as it simply deals with the production of a particularly useful corpus, that can be used to fine-tune on top of these methods. We provide a survey of publicly available textbased datasets frequently used for open-domain dialogue systems and discuss their limitations for our purpose of generating grounded responses (see Table 5 for an overview). DailyDialog is a collection of multi-turn dialogue with manually annotated emotion and intent labels (Li et al., 2017b). Danescu-Niculescu-Mizil and Lee (2011) created the Cornell Movie-Dialogs Corpus, a compilation of dialogue sequences paired with meta data about the movie and characters. Persona-chat provides dialogue sequence coupled with corresponding personas (Zhang et al., 2018). 2406 The Ubuntu Dialogue Corpus contains 1 million dialogue turns extracted from Ubuntu chat logs, which discuss Ubuntu-related technical support (Lowe et al., 2015). The Twitter Triple Corpus is a dataset of 4K dialogue triples extracted from Twitter (Sordoni et al., 2015). 
OpenSubtitles is a huge collection of subtitles that span various genres, but the absence of speaker turn annotations make it difficult to modify into dialogue format (Lison and Tiedemann, 2016). Ameixa et al. (2013) use heuristics to reformat OpenSubtitles into dialogues with some limited success. Clark and Schaefer (1989) illustrate grounding in conversations with examples from the London-Lund Corpus (Greenbaum and Svartvik, 1990), a corpus of full conversations annotated with prosodic and paralinguistic features. A second version of the corpus was compiled with the same annotations standards as the first using more recent spoken and text data (Põldvere et al., 2017). These corpora were not collected with the criteria for yes-ands in mind. Even for datasets with dialogue taking place in a similar domain as improv, they naturally contain only a small proportion of yes-ands. However, the relatively large sizes of these datasets still make them useful for dialogue systems. They can be used effectively for grounded conversations if the yes-ands or other desirable dialogue acts can be filtered out or given higher weights in training to enforce their characteristics in the responses generated. Our data collection approach is similar to the method of Yarowsky (1995), which formalizes the bootstrapping mechanism of iteratively improving a classifier and label unlabeled data. The main difference from the Yarowsky algorithm and our approach is that, rather than using a fully automated process for increasing training data, we use a probability threshold to regulate recall, followed by human judgment to ensure high precision. Apart from Clark and Schaefer (1989) there have been other taxonomies of grounding. For example, Traum (1999) considers six categories; among these are acknowledge and continue, which, taken together, map nicely to yes-and. Magerko et al. (2009) and Fuller and Magerko (2010) note the importance of establishing common ground in improv. 7 Conclusion Inspired by yes-ands in improv, we carefully construct SPOLIN, a collection of dialogue pairs with responses that are not only coherent with dialogue context but also initiate the next relevant contribution. We extract high-quality yes-ands from Spontaneanation and build a classifier with them, which is then used to mine additional yes-ands from the Cornell Movie-Dialogs Corpus. We further use our mining technique to elicit a corpus of more than 68,000 yes-and turn pairs, easily the largest collection of this dialogue act known to exist. From human evaluations of dialogue models trained with various data configurations we find that SPOLIN is useful—when including it we are able to build models that can generate yes-ands more consistently than when we leave it out. Nevertheless, our models are still inferior at producing good yes-ands when compared to professional improvisers. We plan to continue our data-driven approach for grounded conversations by expanding our dataset through our iterative data collection process with other larger text-based open-domain dialogue corpora and extend our work to model and collect longer conversations exhibiting more complex improv-backed turns. Acknowledgments Many thanks to Nanyun Peng and Xinyu Wang for key contributions in a preliminary study, to Paul F. Tompkins, Colin Anderson, and Earwolf for allowing us to include yes-ands extracted from Spontaneanation in SPOLIN, to Paul Elsberg, Risa Harms, P.T. 
McNiff, and Peter Schell for initial inspiration, and to Jordan Boyd-Graber for feedback on the final draft. This material is based on research sponsored by the AFRL and DARPA under agreement number FA8650-18-C-7878. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the AFRL, DARPA, or the U.S. Government. References David Ameixa, Luísa Coheur, and Rua Alves Redol. 2013. From subtitles to human interactions: Introducing the SubTle corpus. Technical report, INESCID. Satanjeev Banerjee and Alon Lavie. 2005. METEOR: 2407 An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65–72, Ann Arbor, Michigan. Association for Computational Linguistics. David Bohm and Lee Nichol. 2004. On Dialogue. Routledge classics. Routledge. Allison Bruce, Jonathan Knight, Samuel Listopad, Brian Magerko, and Illah R. Nourbakhsh. 2000. Robot improv: using drama to create believable agents. Proceedings 2000 ICRA. Millennium Conference. IEEE International Conference on Robotics and Automation. Symposia Proceedings (Cat. No.00CH37065). Carlos Busso and Shrikanth S Narayanan. 2008. Scripted dialogs versus improvisation: Lessons learned about emotional elicitation techniques from the IEMOCAP database. In Ninth annual conference of the international speech communication association. Herbert H Clark and Edward F Schaefer. 1989. Contributing to discourse. Cognitive science, 13(2):259– 294. Cristian Danescu-Niculescu-Mizil and Lillian Lee. 2011. Chameleons in imagined conversations: A new approach to understanding coordination of linguistic style in dialogs. In Proceedings of the 2nd Workshop on Cognitive Modeling and Computational Linguistics, pages 76–87, Portland, Oregon, USA. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Wenchao Du and Alan W Black. 2019. Boosting dialog response generation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 38–43, Florence, Italy. Association for Computational Linguistics. Daniel Fuller and Brian Magerko. 2010. Shared mental models in improvisational performance. In Proceedings of the Intelligent Narrative Technologies III Workshop, INT3 ’10, New York, NY, USA. Association for Computing Machinery. Sidney Greenbaum and Jan Svartvik. 1990. The London-Lund corpus of spoken English, volume 7. Lund University Press. Charna Halpern, Del Close, and Kim Johnson. 1994. Truth in comedy: The manual of improvisation. Meriwether Publishing. Ari Holtzman, Jan Buys, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text degeneration. In Proceedings of the Eighth International Conference on Learning Representations. Minlie Huang, Xiaoyan Zhu, and Jianfeng Gao. 2020. Challenges in building intelligent open-domain dialog systems. ACM Transactions on Information Systems, 38(3):1–32. Keith Johnstone. 2017. 
Impro: Improvisation and the Theatre. Performance Books. Bloomsbury Publishing. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016a. A diversity-promoting objective function for neural conversation models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 110–119, San Diego, California. Association for Computational Linguistics. Jiwei Li, Will Monroe, and Dan Jurafsky. 2016b. A simple, fast diverse decoding algorithm for neural generation. arXiv preprint arXiv:1611.08562. Jiwei Li, Will Monroe, Alan Ritter, Dan Jurafsky, Michel Galley, and Jianfeng Gao. 2016c. Deep reinforcement learning for dialogue generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1192– 1202, Austin, Texas. Association for Computational Linguistics. Jiwei Li, Will Monroe, Tianlin Shi, Sébastien Jean, Alan Ritter, and Dan Jurafsky. 2017a. Adversarial learning for neural dialogue generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2157–2169, Copenhagen, Denmark. Association for Computational Linguistics. Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, and Shuzi Niu. 2017b. DailyDialog: A manually labelled multi-turn dialogue dataset. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 986–995, Taipei, Taiwan. Asian Federation of Natural Language Processing. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81, Barcelona, Spain. Association for Computational Linguistics. Pierre Lison and Serge Bibauw. 2017. Not all dialogues are created equal: Instance weighting for neural conversational models. In Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue, pages 384–394, Saarbrücken, Germany. Association for Computational Linguistics. Pierre Lison and Jörg Tiedemann. 2016. OpenSubtitles2016: Extracting large parallel corpora from movie and TV subtitles. In LREC. 2408 Yahui Liu, Wei Bi, Jun Gao, Xiaojiang Liu, Jian Yao, and Shuming Shi. 2018. Towards less generic responses in neural conversation models: A statistical re-weighting method. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2769–2774, Brussels, Belgium. Association for Computational Linguistics. Ryan Lowe, Nissan Pow, Iulian Serban, and Joelle Pineau. 2015. The Ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems. In Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 285–294, Prague, Czech Republic. Association for Computational Linguistics. Brian Magerko, Waleed Manzoul, Mark Riedl, Allan Baumer, Daniel Fuller, Kurt Luther, and Celia Pearce. 2009. An empirical study of cognition and theatrical improvisation. In Proceedings of the Seventh ACM Conference on Creativity and Cognition, page 117–126, New York, NY, USA. Association for Computing Machinery. Lara J. Martin, Brent Harrison, and Mark O. Riedl. 2016. Improvisational computational storytelling in open worlds. In Interactive Storytelling, pages 73– 84, Cham. Springer International Publishing. Siobhan McHugh. 2016. How podcasting is changing the audio storytelling genre. Radio Journal: International Studies in Broadcast & Audio Media, 14(1):65–82. 
Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Nele Põldvere, V Johansson, and C Paradis. 2017. The London-Lund corpus 2: A new corpus of spoken British English in the making. In theICAME 38 Conference, Prague, Czech Republic. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Technical report, OpenAI. Iulian Serban, Alessandro Sordoni, Yoshua Bengio, Aaron C. Courville, and Joelle Pineau. 2015. Hierarchical neural network generative models for movie dialogues. ArXiv, abs/1507.04808. Iulian Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2017. A hierarchical latent variable encoder-decoder model for generating dialogues. In Thirty-First AAAI Conference on Artificial Intelligence. Louis Shao, Stephan Gouws, Denny Britz, Anna Goldie, Brian Strope, and Ray Kurzweil. 2017. Generating long and diverse responses with neural conversation models. arXiv preprint arXiv:1701.03185. Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. 2015. A neural network approach to context-sensitive generation of conversational responses. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 196– 205, Denver, Colorado. Association for Computational Linguistics. Viola Spolin, Arthur Morey, and Mary Ann Brandt. 1986. Theater Games for the Classroom: A Teacher’s Handbook. Northwestern University Press. David R Traum. 1999. Computational models of grounding in collaborative systems. In Psychological Models of Communication in Collaborative Systems-Papers from the AAAI Fall Symposium, pages 124–131. Lauren Winston and Brian Magerko. 2017. Turntaking with improvisational co-creative agents. In Thirteenth Artificial Intelligence and Interactive Digital Entertainment Conference. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R’emi Louf, Morgan Funtowicz, and Jamie Brew. 2019a. Huggingface’s transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771. Thomas Wolf, Victor Sanh, Julien Chaumond, and Clement Delangue. 2019b. Transfertransfo: A transfer learning approach for neural network based conversational agents. arXiv preprint arXiv:1901.08149. Zhen Xu, Bingquan Liu, Baoxun Wang, Chengjie Sun, Xiaolong Wang, Zhuoran Wang, and Chao Qi. 2017. Neural response generation via GAN with an approximate embedding layer. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 617–626, Copenhagen, Denmark. Association for Computational Linguistics. David Yarowsky. 1995. Unsupervised word sense disambiguation rivaling supervised methods. In 33rd Annual Meeting of the Association for Computational Linguistics, pages 189–196, Cambridge, Massachusetts, USA. Association for Computational Linguistics. Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Personalizing dialogue agents: I have a dog, do you have pets too? 
In Proceedings of the 56th Annual Meeting of the Association for Computational 2409 Linguistics (Volume 1: Long Papers), pages 2204– 2213, Melbourne, Australia. Association for Computational Linguistics. Tiancheng Zhao, Ran Zhao, and Maxine Eskenazi. 2017. Learning discourse-level diversity for neural dialog models using conditional variational autoencoders. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 654–664, Vancouver, Canada. Association for Computational Linguistics. A Appendix Iteration 4 5 6 7 Spontaneanation + 10,459 10,459 10,459 10,459 Spontaneanation 5,587 5,587 5,587 5,587 Cornell + 12,220 14,976 14,976 14,976 Cornell17,092 17,701 17,701 17,701 SubTle + 2,621 20,617 33,155 SubTle7,865 14,799 17,325 Total Training Samples 45,358 59,209 84,319 99,203 Dev Set Acc. (Spont) 73.0% 72.1% 68.4% 75.2% Dev Set Acc. (Cornell) 64.5% 63.3% 63.3% 61.0% Confidence Threshold 50% / 70%* 70% 70% 70% New Extraction Volume 3,515 / 10,486* 36,608 15,424 14,979 New Proportion of yes-ands 78.4% / 25.0%* 58.4% 83.2% 76.0% Table 6: Continuation of Table 1 with the extended version of SPOLIN that includes extracted yes-ands from SubTle. SubTle is collected from the fourth iteration onwards. *Statistics for Cornell/SubTle are shown separately. The same classifier is used for extracting candidates from Cornell and SubTle, but they are datasets with significantly different characteristics. A.1 yes-and Guidelines for Turkers We provide detailed annotation guidelines, shown in Figures 6–9, to the Turkers as a result of having continuous discussions with them and monitoring their submissions. Contrary to our expectations, it is difficult to make a binary decision on whether a dialogue turn is a yes-and or non-yes-and, and therefore these fine-grained details are crucial for collecting yes-ands in SPOLIN. A.2 Iterative data collection results for SubTle Due to SubTle’s relatively large size, we split SubTle into 20 equal blocks that each contains 10,486 dialogue turns. Note that every successive iteration of SubTle was not performed on the same block but on the next block. This is different from Cornell, for which every iteration is on the same set of dialogue turns. This difference is not due to any characteristics in the dataset but because of practical reasons arising from the size of the SubTle corpus. The first extraction proportion for SubTle is low because of the prevalence of self-yes-ands in this corpus. Self-yes-ands are prompt and response pairs that evidently originate from the same speaker but otherwise meet the criteria of a yes-and. There are many incorrectly combined dialogue turns that actually come from the same speaker because of the heuristics employed for building SubTle. By providing labeled self-yes-and as negative samples, the classifier quickly learns to remove these self-yesands, leading to a significantly higher proportion of yes-ands in subsequent iterations. This is demonstrated in the specifics of the additional iterations, which are shown in Table 6. 2410 Figure 6: Explanation of the objective and yes-and in the yes-and guideline provided to Turkers. 2411 Figure 7: Explanation of the label space for yes-ands and non-yes-ands and the detailed instructions for the transcription task. 2412 Figure 8: Common mistakes that Turkers made in the early stages of data collection were corrected and added to the guidelines to aid new Turkers. 
Figure 9: Annotated examples provided to Turkers for understanding the label space of the yes-and transcription task.
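Returning to the iterative extraction procedure of Appendix A.2 (Table 6), the process is essentially a self-training-style loop: score candidate prompt-response pairs with the current yes-and classifier, keep those above the confidence threshold for human validation, and fold the validated labels (including self-yes-ands as negatives) back into the training set for the next iteration. The sketch below is our reading of that loop, modeled on the SubTle procedure of one corpus block per iteration; the classifier interface (score_yesand) and helper functions are hypothetical stand-ins, not the authors' code.

# Minimal sketch of the iterative yes-and extraction loop (Appendix A.2).
# score_yesand is a hypothetical stand-in for the trained classifier's
# confidence that a (prompt, response) pair is a yes-and.

def extract_candidates(pairs, score_yesand, threshold=0.7):
    """Keep pairs the current classifier is confident about (70% threshold in Table 6)."""
    return [p for p in pairs if score_yesand(*p) >= threshold]

def iterate_corpus(blocks, train_set, train_classifier, validate):
    """One pass per corpus block: extract, validate, grow the training set."""
    for block in blocks:                      # e.g., 20 SubTle blocks of ~10,486 turns
        classifier = train_classifier(train_set)
        candidates = extract_candidates(block, classifier)
        labeled = validate(candidates)        # human (MTurk) yes-and / non-yes-and labels
        train_set.extend(labeled)             # validated negatives include self-yes-ands
    return train_set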
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2414–2429 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2414 Image-Chat: Engaging Grounded Conversations Kurt Shuster, Samuel Humeau, Antoine Bordes, Jason Weston Facebook AI Research {kshuster,samuelhumeau,abordes,jase}@fb.com Abstract To achieve the long-term goal of machines being able to engage humans in conversation, our models should captivate the interest of their speaking partners. Communication grounded in images, whereby a dialogue is conducted based on a given photo, is a setup naturally appealing to humans (Hu et al., 2014). In this work we study large-scale architectures and datasets for this goal. We test a set of neural architectures using state-of-the-art image and text representations, considering various ways to fuse the components. To test such models, we collect a dataset of grounded human-human conversations, where speakers are asked to play roles given a provided emotional mood or style, as the use of such traits is also a key factor in engagingness (Guo et al., 2019). Our dataset, Image-Chat, consists of 202k dialogues over 202k images using 215 possible style traits. Automatic metrics and human evaluations of engagingness show the efficacy of our approach; in particular, we obtain state-of-the-art performance on the existing IGC task, and our best performing model is almost on par with humans on the ImageChat test set (preferred 47.7% of the time). 1 Introduction A key way for machines to exhibit intelligence is for them to be able to perceive the world around them – and to be able to communicate with humans in natural language about that world. To speak naturally with humans it is necessary to understand the natural things that humans say about the world they live in, and to respond in kind. This involves understanding what they perceive, e.g. the images they see, what those images mean semantically for humans, and how mood and style shapes the language and conversations derived from these observations. In this work we take a step towards these goals by considering grounded dialogue involving openended discussion of a given image, a setting that is naturally fun for humans (Hu et al., 2014), and study neural conversational models for task. In particular, we explore both generative and retrieval models that handle multimodal dialogue by fusing Transformer architectures (Vaswani et al., 2017) for encoding dialogue history and responses and ResNet architectures (He et al., 2016) for encoding images. We propose ways to fuse those modalities together and perform a detailed study including both automatic evaluations, ablations and human evaluations of our models using crowdworkers. To train and evaluate such models, we collect a large set of human-human crowdworker conversations, with the aim of training a model to engage a human in a similar fashion, consisting of 202k diverse images and 401k utterances over the images, with 215 different style traits (e.g., optimistic, skeptical or frivolous) to promote engaging conversation. The dataset is made publicly available in ParlAI (Miller et al., 2017) 1. Our results show that there is a significant gap between state-of-the-art retrieval and generative models on this task. Our best fused retrieval models set a strong baseline, being preferred to human conversationalists 47.7% of the time. We show that both large-scale image and text pre-training, and utilization of style traits, are critical for best results. 
We then consider transfer to the existing Image Grounded Conversations (IGC) task of Mostafazadeh et al. (2017), where we obtain stateof-the-art results. 2 Related Work The majority of work in dialogue is not grounded in perception, e.g. much recent work explores sequence-to-sequence models or retrieval models for goal-directed (Henderson et al., 2014) or chit1http://parl.ai/projects/image_chat 2415 chat tasks (Vinyals and Le, 2015; Zhang et al., 2018). While these tasks are text-based only, many of the techniques developed can likely be transferred for use in multimodal systems, for example using state-of-the-art Transformer representations for text (Mazare et al., 2018) as a sub-component. In the area of language and vision, one of the most widely studied areas is image captioning, whereby a single utterance is output given an input image. This typically involves producing a factual, descriptive sentence describing the image, in contrast to producing a conversational utterance as in dialogue. Popular datasets include COCO (Chen et al., 2015) and Flickr30k (Young et al., 2014). Again, a variety of sequence-to-sequence (Vinyals et al., 2015; Xu et al., 2015; Anderson et al., 2018) and retrieval models (Gu et al., 2018; Faghri et al., 2018; Nam et al., 2016) have been applied. These tasks measure the ability of models to understand the content of an image, but not to carry out an engaging conversation grounded in perception. Some works have extended image captioning from being purely factual towards more engaging captions by incorporating style while still being single turn, e.g. (Mathews et al., 2018, 2016; Gan et al., 2017; Guo et al., 2019; Shuster et al., 2019). Our work also applies a style component, but concentrates on image-grounded dialogue, rather than image captioning. Visual question answering (Antol et al., 2015) and visual dialogue (Das et al., 2017) are another set of tasks which employ vision and language. They require the machine to answer factual questions about the contents of the image, either in single turn or dialogue form. They do not attempt to model natural conversation, but rather assess whether the machine can perform basic perception over the image via a series of questions. There are some works which directly address dialogue grounded with vision. The work of Pasunuru and Bansal (2018) assesses the ability to execute dialogue given video of computer soccer games. The work of Huber et al. (2018) investigates the use of sentiment-based visual features and facial expressions for emotional image-based dialogue. Perhaps the most related work to ours is Mostafazadeh et al. (2017). Their work considers (visual context, textual context, question, response) tuples, and builds validation and test sets based on 4k eventful images called Image Grounded Conversations (IGC). No training data is provided, but instead the authors use Twitter for that in their experiments. In contrast, we provide training, validation and testing sets over 202k images for our task (that do not overlap with IGC), and consider a general set of images and dialogues, not just events and questions plus responses. In our experiments we also show strong transfer ability of our models to the IGC task. While there are many ways to measure dialogue quality, human engagement is a popular metric. 
Engagement itself can be measured in many ways (Bohus and Horvitz, 2009; Yu et al., 2016) but here we adopt the common approach of simply asking humans which speaker they find more engaging, following other works (Li et al., 2019; Dinan et al., 2020). 3 Image-Chat The IMAGE-CHAT dataset is a large collection of (image, style trait for speaker A, style trait for speaker B, dialogue between A & B) tuples that we collected using crowd-workers, Each dialogue consists of consecutive turns by speaker A and B. No particular constraints are placed on the kinds of utterance, only that we ask the speakers to both use the provided style trait, and to respond to the given image and dialogue history in an engaging way. The goal is not just to build a diagnostic dataset but a basis for training models that humans actually want to engage with. Style Traits A number of works have shown that style traits for image captioning help provide creative captions (Mathews et al., 2018, 2016; Gan et al., 2017; Shuster et al., 2019). We apply that same principle to image grounded dialogue, considering a set of 215 possible style traits, using an existing set from Shuster et al. (2019). The traits are categorized into three classes: positive (e.g., sweet, happy, eloquent, humble, witty), neutral (e.g., old-fashioned, skeptical, solemn, questioning) and negative (e.g., anxious, childish, critical, fickle, frivolous). We apply these to both speakers A and B, who will be assigned different style traits for each given conversation. Images The images used in our task are randomly selected from the YFCC100M Dataset2 (Thomee et al., 2016). Dialogue For each image, we pick at random two style traits, one for speaker A and one for speaker 2https://multimediacommons.wordpress.com/yfcc100m-core-dataset/ 2416 A: Peaceful B: Absentminded A: Fearful B: Miserable A: Erratic B: Skeptical A: I’m so thankful for this delicious food. A: I just heard something out there and I have no idea what it was. A: What is the difference between the forest and the trees? Oh look, dry pavement. B: What is it called again? B: It was probably a Wolf coming to eat us because you talk too much. B: I doubt that’s even a forest, it looks like a line of trees. A: Not sure but fried goodness. A: I would never go camping in the woods for this very reason. A: There’s probably more lame pavement on the other side! Figure 1: Some samples from the IMAGE-CHAT training set. For each sample we asked humans to engage in a conversation about the given image, where the two speakers, A and B, each have a given provided style. B, and collect the dialogue using crowdworkers who are asked to both assume those roles, and to be engaging to the other speaker while doing so. It was emphasized in the data collection instructions that the style trait describes a trait of the speaker, not properties of the content of the image they are discussing. Some examples from the training set are given in Figure 1. Data Quality During data collection crowdsourcers were manually monitored, checking to ensure they were following the instructions. Poor performers were banned, with comments discarded. A verification process was also conducted on a subset of the data, where separate annotators were asked to choose whether the utterance fit the image, style, or both, and found that 92.8% of the time it clearly fit the image, and 83.1% the style, and 80.5% both. Note, given that not all utterances should directly reference an image property or invoke the style, we do not expect 100%. 
Overall Dataset The overall dataset statistics are given in Table 1. This is a fairly large dialogue dataset compared to other existing publicly available datasets. For example, PersonaChat (Zhang et al., 2018) (which is not grounded in images) consists of 162k utterances, while IGC (Mostafazadeh et al., 2017) (grounded in images) consists of 4k of validation and test set examples only, compared to over 400k utterances in IMAGE-CHAT. Split train valid test Number of Images 186,782 5,000 9,997 Number of Dialogues 186,782 5,000 9,997 Number of Utterances 355,862 15,000 29,991 Style Types 215 215 215 Vocabulary Size 46,371 9,561 13,550 Tokens per Utterance 12.3 12.4 12.4 Table 1: IMAGE-CHAT dataset statistics. 4 Models We consider two major types of dialogue model: retrieval and generative. Both approaches make use of the same components as building blocks. We use three sub-networks for the three modalities of input: (i) an image encoder, (ii) a dialogue history encoder; and (iii) a style encoder. In the retrieval model these are then fed into a combiner module for combining the three modalities. Finally, there is a response encoder for considering candidate responses and this is scored against the combined input representations. An overview of the retrieval archictecture is shown in Figure 2. For the generative model, the three encoders are used as input, and a further decoder Transformer is used for outputting a token sequence; beam search is applied. Image Encoder We build our models on top of pretrained image features, and compare the performance of two types of image encoders. The first is a residual network with 152 layers described in He et al. (2016) trained on ImageNet (Russakovsky et al., 2015) to classify images among 1000 classes, which we refer to in the rest of the pa2417 Figure 2: The TRANSRESNETRET multimodal architecture for grounded dialogue. There are several options: different image encoders (ResNet152 or ResNeXt-IG-3.5B), text encoders (shared or separate Transformers for history and response), and different multimodal combiners (sum or attention-based). per as ResNet152 features. We used the implementation provided in the torchvision project (Marcel and Rodriguez, 2010). The second is a ResNeXt 32×48d (Xie et al., 2017) trained on 3.5 billion Instagram pictures following the procedure described by Mahajan et al. (2018), which we refer to in the rest of the paper as ResNeXt-IG-3.5B. The representation rI of an image I is obtained by using the 2048-dimensional output of the image encoder as input to a feed-forward network: a multi-layer perceptron with ReLU activation units and a final layer of 500 dimensions in the retrieval case, and a linear layer in the generative case. Style Encoder To condition on a given style trait, we embed each trait to an N-dimensional vector to obtain its representation rS. We used N = 500 for retrieval and N = 300 for generation. Dialogue Encoder The entire dialogue history D is encoded into a fixed size vector rD using a Transformer architecture (Vaswani et al., 2017), followed by a linear layer. Such Transformers have been shown to perform strongly on a variety of dialogue tasks previously (Yang et al., 2018; Mazare et al., 2018). We use a Transformer with 4 layers, 300 hidden units, and 6 attention heads. The outputs are pooled (mean) to give a final vectorial encoding. We pretrain the entire encoder following the setup described in Mazare et al. 
(2018): we train two encoders on a next-utterance retrieval task on a Reddit dataset of dialogues containing 1.7 billion pairs of utterances, where one encodes the context and another the candidates for the next utterance; their dot product indicates the degree of match, and they are trained with negative log-likelihood and k-negative sampling. We then initialize our system using the weights of the candidate encoder only, and then train on our task in either generative or retrieval mode. 4.1 Retrieval Models Multimodal combiner module We consider two possible combiner modules for the inputs: Multimodal sum combiner (MM-sum): Given an input image, style trait and dialogue (I, S, D), together with a candidate response C, the score of the final combination is computed as s(I, S, D, C) = (rI + rS + rD) · rC. Multimodal attention combiner (MM-att): A more sophisticated approach is to use an attention mechanism to choose which modalities are most relevant for each example by stacking Transformers. We concatenate the three representation vectors rI, rS and rD and feed them to a second Transformer (4 attention heads, 2 layers, 500 hidden units) which performs self-attention over them. The three modalities are thus reweighted by the corresponding attention weights to give the final input representation vector rT , which is used to compute the score for a given candidate using rT · rC. Response encoder We employ the same Transformer architecture as in the dialogue encoder for encoding candidate responses. We tried two variants: either sharing or not sharing the weights with the input dialogue encoder. Training and Inference Given a tuple I, S, D, and a set of candidates (c1, .., cN), at inference time the predicted utterance is the candidate ci that maximizes the score s(I, S, D, ci). At training time we pass a set of scores through a softmax and train to maximize the log-likelihood of the correct responses. We use mini-batches of 500 training 2418 examples; for each example, we use the gold responses of the other examples of the batch as negatives. During final human evaluation all candidates from the training set are considered to produce a response (356k candidates in our experiments). 4.2 Generative Models Dialogue Decoder The encoding from the image encoder has a final linear layer of dimension 2048 × 300. This projects it to the same size of the token encoding of the dialogue decoder. We thus add it as an extra token at the end of the Transformer’s encoder output. For style, we simply prepend the style to the beginning of the dialogue history, and it is thus encoded in the dialogue encoder. We then treat this as a standard seq2seq Transformer in order to generate dialogue responses. Training and Inference We train with a batch size of 32 and learning rate of .0001 using adam, and apply beam search with a beam of size 2 and trigram blocking at inference time. Hyperparameters are chosen on the validation set. 5 Experiments We test our models on the IMAGE-CHAT and IGC datasets using automatic metrics and human evaluations. We analyze the performance of the different module and architecture choices, as well as ablation studies to determine the importance of each of the model’s inputs. 5.1 Automatic Evaluation on IMAGE-CHAT Module Choices We first compare various module configurations of our TRANSRESNETRET model, and additionally show the results for a simple information retrieval baseline, in which the candidates are ranked according to their weighted word overlap to the input message. 
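To make this baseline concrete, the sketch below ranks candidates by weighted word overlap with the input message. The paper does not specify the weighting scheme, so IDF weighting computed over the candidate pool is assumed here as one plausible instantiation; the code is illustrative only.

import math
from collections import Counter

def idf_weights(candidates):
    """Inverse document frequency computed over the candidate set (an assumption)."""
    df = Counter()
    for cand in candidates:
        df.update(set(cand.lower().split()))
    n = len(candidates)
    return {w: math.log(n / df[w]) for w in df}

def rank_candidates(message, candidates):
    """Rank candidate responses by weighted word overlap with the input message."""
    idf = idf_weights(candidates)
    msg_words = set(message.lower().split())
    def score(cand):
        return sum(idf.get(w, 0.0) for w in set(cand.lower().split()) & msg_words)
    return sorted(candidates, key=score, reverse=True)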
We measure recall at 1 and 5 (R@1/100 and R@5/100) retrieval metrics, where for each sample there are 100 candidates to rank: 99 random candidates chosen from the test set, and the true label. Note that in human evaluations we use all the train set candidates. The results are shown in Table 2. We report the average metrics for the total task, as well as the breakdown of the performance on each turn of dialogue (turns 1, 2 and 3). The average metrics indicate that using the ResNeXt-IG-3.5B image encoder features improves performance significantly across the whole task, as we obtain 50.3% R@1 for our best ResNeXt-IG-3.5B model and only 40.6% for our best ResNet152 model. When broken down by turn, it appears that the ResNeXt-IG-3.5B features are particularly important in the first round of dialogue, in which only the image and style are considered, as the difference between their best models increases from 9.7% in the full task to 19.5% in the first turn. Our baseline multimodal sum combiner (MM-Sum) outperforms the more sophisticated self-attention (MM-Att) combiner, with the latter scoring 49.3% on the full task. Having separate candidate and dialogue history text encoders also works better than sharing weights. In subsequent experiments we use the best performing system for our retrieval model. As ResNeXt-IG-3.5B performs best we use that for our generative model going forward as well. Full & Ablation Study We now perform experiments for both retrieval and generative models for the full system, and additionally we remove modalities (image, style, and dialogue history). For the generative models we report the ROUGE-L metric. The results are shown in Table 3, which we now analyze. Turn 1: In the first round of dialogue the models produce utterances given the image and style only, as there is no dialogue history yet. For both models, image is more important than style, but using both together helps. Turn 2: In the second turn, in which a model produces a response to a first utterance, the models perform similarly when using only the image or only the dialogue history, while performing poorly with just the style. Any combination of two modalities improves the results, with the style + dialogue combination performing slightly higher than the other two. Using all modalities works best. Turn 3: By the third turn of dialogue, the conversation history proves to be by far the most important in isolation compared to the other two modalities in isolation. Conditioning on the style+dialogue is the most effective of any combination of two modalities. Again, using all modalities still proves best. 5.2 Human Evaluations on IMAGE-CHAT We test our final models using human evaluation. Evaluation Setup We use a set of 500 images from YFCC-100M that are not present in IMAGECHAT to build a set of three-round dialogues pairing humans with models in conversation. We then 2419 Model Combiner Text Encoders Image Encoder Turn 1 Turn 2 Turn 3 All R@1 R@1 R@1 R@1 R@1 R@1 R@5 IR Baseline n/a n/a n/a 2.15 5.86 TRANSRESNETRET MM-Att Separate ResNet152 35.7 44.5 40.5 40.2 67.0 TRANSRESNETRET MM-Sum Separate ResNet152 34.5 46.0 41.3 40.6 67.2 TRANSRESNETRET MM-Sum Shared ResNeXt-IG-3.5B 53.6 47.0 41.3 47.3 73.1 TRANSRESNETRET MM-Att Shared ResNeXt-IG-3.5B 54.4 49.0 43.3 48.9 74.2 TRANSRESNETRET MM-Att Separate ResNeXt-IG-3.5B 53.5 50.5 43.8 49.3 74.7 TRANSRESNETRET MM-Sum Separate ResNeXt-IG-3.5B 54.0 51.9 44.8 50.3 75.4 Table 2: Module choices on IMAGE-CHAT. We compare different module variations for TRANSRESNETRET . 
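To ground the retrieval results in Table 2, here is a minimal sketch of the MM-Sum combiner score s(I, S, D, C) = (rI + rS + rD) · rC and the in-batch negative objective described in Section 4.1. The tensors rI, rS, rD and rC are assumed to come from the pretrained image, style, dialogue and response encoders; this is our reading of the setup, not the released implementation.

import torch
import torch.nn.functional as F

def mm_sum_scores(r_img, r_style, r_dial, r_cands):
    """MM-Sum combiner: s(I, S, D, C) = (r_I + r_S + r_D) . r_C.

    r_img, r_style, r_dial: [batch, dim] encodings of image, style, dialogue.
    r_cands: [batch, dim] encodings of the gold responses in the batch.
    Returns a [batch, batch] score matrix; row i scores example i's context
    against every response in the batch, so off-diagonal entries act as negatives.
    """
    context = r_img + r_style + r_dial
    return context @ r_cands.t()

def in_batch_nll(r_img, r_style, r_dial, r_cands):
    """Softmax over batch candidates; maximize log-likelihood of the gold response."""
    scores = mm_sum_scores(r_img, r_style, r_dial, r_cands)
    targets = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, targets)

With the batch size of 500 used in the paper, each example is scored against its own gold response and the 499 gold responses of the other examples, which is exactly the cross-entropy over the score matrix above.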
TRANSRESNETRET (R@1/100 ) TRANSRESNETGEN (ROUGE-L) Modules Turn 1 Turn 2 Turn 3 All Turn 1 Turn 2 Turn 3 All Image Only 37.6 28.1 20.7 28.7 21.1 21.9 22.4 21.8 Style Only 18.3 15.3 17.0 16.9 20.2 20.9 22.0 21.0 Dialogue History Only 1.0 33.7 32.3 22.3 18.9 22.7 23.7 21.8 Style + Dialogue (no image) 18.3 45.4 43.1 35.4 20.4 24.1 24.8 23.1 Image + Dialogue (no style) 37.6 39.4 32.6 36.5 21.3 22.8 23.6 22.6 Image + Style (no dialogue) 54.0 41.1 35.2 43.4 23.7 23.2 23.8 23.5 Style + Dialogue + Image (full model) 54.0 51.9 44.8 50.3 23.7 24.2 24.9 24.3 Table 3: Ablations on IMAGE-CHAT. We compare variants of our best TRANSRESNET generative and retrieval models (ResNeXt-IG-3.5B image encoder, and MM-Sum + separate text encoders for retrieval) where we remove modalities: image, dialogue history and style conditioning, reporting R@1/100 for retrieval and ROUGE-L for generation for dialogue turns 1, 2 and 3 independently, as well as the average over all turns. conduct evaluations at each round of dialogue for each example in the evaluation set; we have a separate set of human evaluators look at the provided conversation turns, and ask them to compare two possible utterances for the next turn of conversation, given the image, dialogue history and relevant style (which is the same for both human author and model, so there is no advantage). We ask the evaluators in a blind test to choose the “more engaging” of the two possible utterances: one from a human, and the other from a model. Human annotation vs. TRANSRESNET model We compare human-authored utterances to those produced by our models. The human conversations are collected in the same fashion as in IMAGE-CHAT but on test images. As for humans, the model outputs are conditioned on the image, style and previous dialogue history. TRANSRESNETGEN simply generates a response, whereas TRANSRESNETRET retrieves candidate utterances from the IMAGE-CHAT training set. The latter is given a separate set of candidates corresponding to the round of dialogue – e.g. when producing a response to turn 1, the model retrieves from all possible round 1 utterances from the train set (in that case 186,858 possible choices). The results are shown in Fig. 4, comparing all models on the first round (left): TRANSRESNETGEN and TRANSRESNETRET using ResNeXt-IG-3.5B, and TRANSRESNETRET using ResNet152 features. As in automatic evaluations, ResNet152 features performed more poorly. The retrieval model outperformed the generative model, a result that has been observed in other (text-only) dialogue tasks (Dinan et al., 2019; Zhang et al., 2018). In turn 1, TRANSRESNETRET (ResNeXt-IG-3.5B) has a win rate against humans of 49.4% (difference not significant using a binomial two-tailed test, p > 0.5), while both other models are significantly outperformed by humans (p < 2 × 10−7 compared to ResNet152 features), showing the importance of our retrieval architecture and image feature choices. We thus compare only TRANSRESNETRET (ResNeXt-IG3.5B) to humans in all three turns (Fig. 4, right). That model performs well, with an overall win rate against humans of 47.7% (difference is significant, p < 7 × 10−5). Example predictions of TRANSRESNETRET (ResNeXt-IG-3.5B) are given in Figure 3. 5.3 Transfer to the IGC Task To test the strength of our task and models we consider transfer to the IGC of task of Mostafazadeh et al. (2017). In particular, we focus on their response task, which provides an image and a dialogue history of two utterances: a context utterance, followed by a question. 
The task is to then pro2420 Image Style Conversation Turn 1 examples Model predictions: A: Artful This looks like a painting. Something out of a Norman Rockwell. A: Monstrous Some wretched town!. A: Earnest Yeah, we have finally arrived at Grandpa’s old barnhouse! I can’t wait to get started with the restoration! Model predictions: A: Wishful I hope one day to have a dog this majestic. A: Opinionated This puppy looks cold get him a blanket. A: Imaginative Puppies are just the universe’s way of telling us everything will be okay. Model predictions: A: Respectful What an honor to have beautiful places like these to contemplate natures rocks at their best. A: Shy While the area looks nice, I’d stay away, fear of tsunamis leaves me paralyzed at the beach. A: Confused I don’t understand why would you put rocks there? Turn 2 examples A: Empathetic I understand that an open field full of beautiful foliage is a testament to our environment. Model predictions: B: Aloof It’s just a regular old field. B: Frivolous So is every grass in the country A: Playful i would jump inside that waterfall Model predictions: B: Questioning Do you really think it is feasible? B: Amusing Well I would try to top you with a backflip. A: Maternal It must be very dangerous if children play there. Model predictions: B: Whimsical Nonsense, this would be a great place to play! B: Blunt I would rather find some lawn. Those rocks look uncomfortable Turn 3 examples A: Happy This looks like something fun and exciting to do on vacation! B: Anxious oh dear what happens if he gets taken with the wind Model prediction: A: Happy Ooh, the wind looks perfect to fly this awesome kite! A: Pompous This archway is so dull and boring. I could have done a much better job with the garden in my own back yard. B: Offhand Its about as boring as you!! Model prediction: A: Pompous Please. I could show you much prettier places in my own back yard. A: Sweet This is so beautiful, I love nature and would love to see this place and meet the people who live there! B: Spontaneous Then lets go there right now! Just you and me. Model prediction: A: Sweet I wish you could take me there, I would love to go with you! Figure 3: Example predictions from our TRANSRESNETRET (MM-Sum) model on the evaluation set using all candidates for turns 1–3 . Two speakers A & B with given style traits discuss a photo. The dialogue context before the model prediction is completed by humans, followed by one or more possible model responses, given different style conditioning. The model clearly uses the image, given style and dialogue history in formulating its response. 2421 Figure 4: Human evaluations on IMAGE-CHAT. Engagingness win rates of pairwise comparisons between human utterances and TRANSRESNETRET (ResNet152 or ResNeXt-IG-3.5B) or TRANSRESNETGEN, comparing over the rounds of dialogue. duce a response. This is clearly related to our task, except it focuses on answering questions, which our task does not. Our task is more varied as it was collected in an unconstrained way, unlike in IGC where they were asked to write a question. Nevertheless, assuming a question contains a ? or starts with who, what, when, where, why or how, our dataset contains 40,076 training utterances that are questions (11.3% of the data) and so it could be possible to produce responses to them. Without any fine-tuning at all, we thus simply took exactly the same best trained models and used them for their question response task as well. 
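The question heuristic used for this estimate can be written down directly; the sketch below is our reading of the stated rule (an utterance contains a "?" or starts with who, what, when, where, why or how), not the authors' exact filter.

QUESTION_STARTS = ("who", "what", "when", "where", "why", "how")

def is_question(utterance: str) -> bool:
    """Heuristic used to estimate how many IMAGE-CHAT utterances are questions."""
    text = utterance.strip().lower()
    return "?" in text or text.startswith(QUESTION_STARTS)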
Unfortunately, after contacting the authors of Mostafazadeh et al. (2017) they no longer have the predictions of their model available, nor have they made available the code for their human evaluation setup. However, the test set is available. We therefore attempted to reproduce the same setup as in their experiments, which we will also make publicly available upon acceptance. Automatic Evaluation We measure our best TRANSRESNETGEN model’s performance on the IGC test set in terms of BLEU-4. The results are shown in Fig. 5 (right). We find that our model outperforms the model from Mostafazadeh et al. (2017), achieving a score of 2.30 compared to 1.49. Human Evaluation We compare the provided human response (from the test set) with 7 variants of our TRANSRESNETRET model (mimicking their setup), whereby we have our model condition on 7 styles for which it performed well on evaluations in section 5.2. Annotators rated the quality of responses on a scale from 1 to 3, where 3 is the highest, reporting the mean over ∼2k questions. We then scale that by the score of human authored Figure 5: IGC Evaluations. The best model from Mostafazadeh et al. (2017) is compared to our best TRANSRESNETRET and TRASNRESNETGEN models. On the left, annotator’s ratings of responses from the models are shown as a percentage of the annotator’s ratings of human responses. On the right, BLEU-4 scores on the response task are shown. responses, to give a percentage. The results are shown in Fig. 5 (left). Our model narrows the gap between human and model performance, yielding a higher percentage of the human score (62.9% vs. 54.2%). More detailed results and example predictions of our model can be found in Appendices E and F, including examples of highly rated and poorly rated outputs from our model. 6 Conclusion This paper presents an approach for improving the way machines can generate grounded conversations that humans find engaging. Focusing on the case of chit-chatting about a given image, a naturally useful application for end-users of social dialogue agents, this work shows that our best proposed model can generate grounded dialogues that humans prefer over dialogues with other fellow humans almost half of the time (47.7%). This result is made possible by the creation of a new dataset IMAGE-CHAT3. Our work shows that we are close to having models that humans can relate to in chit-chat conversations, which could set new ground for social dialogue agents. However, our retrieval models outperformed their generative versions; closing that gap is an important challenge for the community. While our human evaluations were on short conversations, initial investigations indicate the model as is can extend to longer chats, see Appendix G, which should be studied in future work. The next challenge will also be to combine this engagingness with other skills, such as world knowledge (Antol et al., 2015) relation to personal interests (Zhang et al., 2018), and task proficiency. 3http://parl.ai/projects/image_chat 2422 References Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. 2018. Bottom-up and top-down attention for image captioning and vqa. CVPR. Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. 2015. Vqa: Visual question answering. In Proceedings of the IEEE international conference on computer vision, pages 2425–2433. Dan Bohus and Eric Horvitz. 2009. Models for multiparty engagement in open-world dialog. 
In Proceedings of the SIGDIAL 2009 Conference: The 10th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 225–234. Association for Computational Linguistics. Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Doll´ar, and C Lawrence Zitnick. 2015. Microsoft coco captions: Data collection and evaluation server. arXiv preprint arXiv:1504.00325. Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, Jos´e MF Moura, Devi Parikh, and Dhruv Batra. 2017. Visual dialog. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Emily Dinan, Varvara Logacheva, Valentin Malykh, Alexander Miller, Kurt Shuster, Jack Urbanek, Douwe Kiela, Arthur Szlam, Iulian Serban, Ryan Lowe, et al. 2020. The second conversational intelligence challenge (convai2). In The NeurIPS’18 Competition, pages 187–208. Springer. Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2019. Wizard of Wikipedia: Knowledge-powered conversational agents. In Proceedings of the International Conference on Learning Representations (ICLR). Fartash Faghri, David J Fleet, Jamie Ryan Kiros, and Sanja Fidler. 2018. Vse++: Improving visualsemantic embeddings with hard negatives. Chuang Gan, Zhe Gan, Xiaodong He, Jianfeng Gao, and Li Deng. 2017. Stylenet: Generating attractive visual captions with styles. In Proc IEEE Conf on Computer Vision and Pattern Recognition, pages 3137–3146. J. Gu, J. Cai, S. Joty, L. Niu, and G. Wang. 2018. Look, imagine and match: Improving textual-visual crossmodal retrieval with generative models. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7181–7189. Longteng Guo, Jing Liu, Peng Yao, Jiangwei Li, and Hanqing Lu. 2019. Mscap: Multi-style image captioning with unpaired stylized text. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4204–4213. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition. Matthew Henderson, Blaise Thomson, and Jason D Williams. 2014. The second dialog state tracking challenge. In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), pages 263–272. Yuheng Hu, Lydia Manikonda, and Subbarao Kambhampati. 2014. What we instagram: A first analysis of instagram photo content and user types. In Eighth International AAAI Conference on Weblogs and Social Media. Bernd Huber, Daniel McDuff, Chris Brockett, Michel Galley, and Bill Dolan. 2018. Emotional dialogue generation using image-grounded language models. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, page 277. ACM. Margaret Li, Jason Weston, and Stephen Roller. 2019. Acute-eval: Improved dialogue evaluation with optimized questions and multi-turn comparisons. arXiv preprint arXiv:1909.03087. Dhruv Mahajan, Ross Girshick, Vignesh Ramanathan, Kaiming He, Manohar Paluri, Yixuan Li, Ashwin Bharambe, and Laurens van der Maaten. 2018. Exploring the limits of weakly supervised pretraining. In Computer Vision – ECCV 2018, pages 185–201, Cham. Springer International Publishing. S´ebastien Marcel and Yann Rodriguez. 2010. Torchvision the machine-vision package of torch. In Proceedings of the 18th ACM International Conference on Multimedia, MM ’10, pages 1485–1488. ACM. Alexander Mathews, Lexing Xie, and Xuming He. 2018. 
Semstyle: Learning to generate stylised image captions using unaligned text. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8591–8600. Alexander Patrick Mathews, Lexing Xie, and Xuming He. 2016. Senticap: Generating image descriptions with sentiments. In AAAI, pages 3574–3580. Pierre-Emmanuel Mazare, Samuel Humeau, Martin Raison, and Antoine Bordes. 2018. Training millions of personalized dialogue agents. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2775–2779, Brussels, Belgium. Association for Computational Linguistics. A. H. Miller, W. Feng, A. Fisch, J. Lu, D. Batra, A. Bordes, D. Parikh, and J. Weston. 2017. Parlai: A dialog research software platform. In Empirical Methods in Natural Language Processing (EMNLP), pages 79–84. 2423 Nasrin Mostafazadeh, Chris Brockett, Bill Dolan, Michel Galley, Jianfeng Gao, Georgios Spithourakis, and Lucy Vanderwende. 2017. Image-grounded conversations: Multimodal context for natural question and response generation. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 462–472, Taipei, Taiwan. Asian Federation of Natural Language Processing. Hyeonseob Nam, Jung-Woo Ha, and Jeonghee Kim. 2016. Dual attention networks for multimodal reasoning and matching. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2156–2164. Ramakanth Pasunuru and Mohit Bansal. 2018. Gamebased video-context dialogue. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 125–136, Brussels, Belgium. Association for Computational Linguistics. Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. 2015. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211–252. Kurt Shuster, Samuel Humeau, Hexiang Hu, Antoine Bordes, and Jason Weston. 2019. Engaging image captioning via personality. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Bart Thomee, David A. Shamma, Gerald Friedland, Benjamin Elizalde, Karl Ni, Douglas Poland, Damian Borth, and Li-Jia Li. 2016. Yfcc100m: The new data in multimedia research. Commun. ACM, 59(2):64–73. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Oriol Vinyals and Quoc Le. 2015. A neural conversational model. In Proceedings of the 31st International Conference on Machine Learning, Deep Learning Workshop, Lille, France. Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015. Show and tell: A neural image caption generator. In Proceedings of the IEEE conference on computer vision and pattern recognition. S. Xie, R. Girshick, P. Doll´ar, Z. Tu, and K. He. 2017. Aggregated residual transformations for deep neural networks. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5987–5995. Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In International conference on machine learning, pages 2048–2057. 
Yinfei Yang, Steve Yuan, Daniel Cer, Sheng-yi Kong, Noah Constant, Petr Pilar, Heming Ge, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil. 2018. Learning semantic textual similarity from conversations. In Proceedings of The Third Workshop on Representation Learning for NLP, pages 164–174, Melbourne, Australia. Association for Computational Linguistics. Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. 2014. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. Transactions of the Association for Computational Linguistics, 2:67–78. Zhou Yu, Leah Nicolich-Henkin, Alan W Black, and Alexander Rudnicky. 2016. A wizard-of-oz study on a non-task-oriented dialog systems that reacts to user engagement. In Proceedings of the 17th annual meeting of the Special Interest Group on Discourse and Dialogue, pages 55–63. Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Personalizing dialogue agents: I have a dog, do you have pets too? In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pages 2204–2213, Melbourne, Australia. Association for Computational Linguistics. 2424 A More Details of IGC Evaluations In this section we describe a few choices we made and implementation details regarding the IGC human evaluation in the section regarding Transfer to the IGC Task. Multiple Traits In the IGC human evaluation setup from (Mostafazadeh et al., 2017), human annotators were shown eight choices when rating the quality of responses to questions: seven responses from various models, and one human response. To mirror this setup as closely as possible, we chose seven of our highest performing style traits to condition on to display in addition to the human response. We show the results of each trait in Table 4. Automatic Evaluation In (Mostafazadeh et al., 2017), the authors provide BLEU scores for their models in an attempt to evaluate their effectiveness via automated metrics. The authors note that the scores are very low, “as is characteristic for tasks with intrinsically diverse outputs.” Additionally, it has been shown in (Shuster et al., 2019) that BLEU scores for image captioning retrieval models are generally far lower than those of generative models (as retrieval models do not optimize for such a metric), and yet human evaluations can show the complete opposite results. In fact, in that work retrieval models were shown to be superior to generative models in human evaluations, which is why we adopted them here. For these reasons we omit BLEU scores of our retrieval models on the IGC test set as uninteresting. We do however compare BLEU scores with our generative model in the main paper. Test Set Size The IGC test set provides the urls to all 2591 images for which (context, question, response) tuples were collected. We were only able to recover 2195 images from this initial set, as some of the urls provided are no longer associated with the corresponding images. Thus, our human evaluations are conducted on this subset. Style Score Neutral 1.55 Charming 1.55 Extravagant 1.55 Calm 1.57 Sweet 1.58 Spirited 1.60 Enthusiastic 1.61 Human 2.55 Table 4: IGC Human Evaluation on responses from our TRANSRESNET MM-SUM model conditioned on various personalities. Responses were rated on a quality scale from 1 to 3, where 3 is the highest. 2425 B IMAGE-CHAT Human Annotation Setup Figure 6: Instructions pane for crowdworkers when collecting the second round of dialogue. 
Figure 7: Instructions pane for crowdworkers when collecting the third round of dialogue. 2426 C IMAGE-CHAT Human Evaluation Setup Figure 8: Instructions pane for crowdworkers when collecting the IMAGE-CHAT Evaluations. D IGC Human Evaluation Setup Figure 9: Instructions pane for crowdworkers when collecting the IGC Evaluations. 2427 E Highly Rated Examples from IGC Image IGC Round Output Context These flowers are growing in my back yard. Question What type of flowers are they? Human Response I don’t know but they are so pretty. TransResNet MM-Sum I don’t know but these flowers are gorgeous and look so bright! Context I bought this at a flea market. Question What is this for? Human Response I think it’s a mortar and pestle. TransResNet MM-Sum I’m not sure, but you could sell it for some cash! Context I’d love to visit this city. Question Is this some city in Europe? Human Response It definitely looks like it. TransResNet MM-Sum Oh my goodness, yes! I love Amsterdam! Context These just flew over my house! Question Wow, aren’t those Blue Angels? Human Response Yeah, they practice in this area for their shows. TransResNet MM-Sum These are the Blue Angels, they are truly one of a kind Context My nephew’s choir did so well yesterday! Question That is great. You must be proud Human Response I am so proud, because he’s been working really hard. TransResNet MM-Sum I am so proud of all that they accomplished. Context This would be a great place to have a picnic. Question yes it would it seems very relaxing doesnt it? Human Response Yes, and I would quite enjoy spending a day here. TransResNet MM-Sum I think a picnic is more than a great possibility! Table 5: Highly rated examples from the IGC dataset test split where TRANSRESNETRET MM-Sum responses were rated the highest (score of 3) by human evaluators. 2428 F Poorly Rated Examples from IGC Image IGC Round Output Context These are Easter eggs we made for art class. Question How did you make them? Human Response We started with a glass base and went from there. TransResNet MM-Sum OHMYGOSH EASTER EGG HUNT! I want to hunt for Easter eggs too! Context I’m thinking of getting these shocks on my bike. Question Are the shocks black in color? Human Response Yes, except for the silver portion at the end. TransResNet MM-Sum Just got my new wheels! Context I cannot decide if this is milk or something else. Question It looks like milk, but it could also be kefir. Human Response What is kefir? TransResNet MM-Sum You can get all the protein you want form the sugar. Context This woman is visiting the grave of a loved one. Question Where is the cemetery? Human Response The cemetary is at the foot a beautiful quartz mountain. TransResNet MM-Sum Well she’s just a walking, talking garden! Table 6: Low rated examples from the IGC dataset test split where TRANSRESNETRET MM-Sum responses were rated the lowest (score of 1) by human evaluators. 2429 G Longer Chat Example A: TransResNetRET MM-Sum (Extraordinary) B: Human (no style) A: It’s the most beautiful mountain in the world! B: It looks really cool! I wonder if you can ski on it A: They are a mountain range I’ve climed many times! B: You have climbed that? That seems really hard A: I did it; i conquered this climb in only 7 hours and 10 minutes! B: Wow, that’s fast. I would ski down that but I would need a lift to take me up A: But could you? Could you truly climb this? 
B: I really don’t think I could A: Climbing a mountain can give one a special strength, you need to experience it B: Maybe one day on a smaller mountain A: It would take hard will and determination to scale that mighty peak Figure 10: Long-form conversation with the model. The model is given a style here, while the human is not. H Additional Ablation Results TRANSRESNETGEN (F1) TRANSRESNETGEN (BLEU-4) Modules Turn 1 Turn 2 Turn 3 All Turn 1 Turn 2 Turn 3 All Image Only 10.8 11.0 11.2 11.0 1.1 1.3 1.2 1.2 Style Only 10.4 9.8 10.4 10.2 1.4 1.5 1.4 1.4 Dialogue History Only 9.9 11.4 12.2 11.2 1.0 1.9 1.8 1.6 Style + Dialogue (no image) 9.6 12.5 13.1 11.7 1.5 2.1 2.0 1.9 Image + Dialogue (no style) 10.7 11.1 11.7 11.2 1.1 1.7 1.6 1.5 Image + Style (no dialogue) 12.1 11.6 11.6 11.8 1.6 1.5 1.5 1.6 Style + Dialogue + Image (full model) 12.3 12.5 13.1 12.6 1.7 2.1 2.0 1.9 Table 7: Ablations on IMAGE-CHAT. We compare variants of our best TRANSRESNET generative model (ResNeXtIG-3.5B image encoder) where we remove modalities: image, dialogue history and style conditioning, reporting F1 and BLEU-4 for generation for dialogue turns 1, 2 and 3 independently, as well as the average over all turns.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 238–252 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 238 Neural Syntactic Preordering for Controlled Paraphrase Generation Tanya Goyal and Greg Durrett Department of Computer Science The University of Texas at Austin [email protected], [email protected] Abstract Paraphrasing natural language sentences is a multifaceted process: it might involve replacing individual words or short phrases, local rearrangement of content, or high-level restructuring like topicalization or passivization. Past approaches struggle to cover this space of paraphrase possibilities in an interpretable manner. Our work, inspired by pre-ordering literature in machine translation, uses syntactic transformations to softly “reorder” the source sentence and guide our neural paraphrasing model. First, given an input sentence, we derive a set of feasible syntactic rearrangements using an encoder-decoder model. This model operates over a partially lexical, partially syntactic view of the sentence and can reorder big chunks. Next, we use each proposed rearrangement to produce a sequence of position embeddings, which encourages our final encoder-decoder paraphrase model to attend to the source words in a particular order. Our evaluation, both automatic and human, shows that the proposed system retains the quality of the baseline approaches while giving a substantial increase in the diversity of the generated paraphrases.1 1 Introduction Paraphrase generation (McKeown, 1983; Barzilay and Lee, 2003) has seen a recent surge of interest, both with large-scale dataset collection and curation (Lan et al., 2017; Wieting and Gimpel, 2018) and with modeling advances such as deep generative models (Gupta et al., 2018; Li et al., 2019). Paraphrasing models have proven to be especially useful if they expose control mechanisms that can be manipulated to produce diverse paraphrases (Iyyer et al., 2018; Chen et al., 2019b; Park et al., 2019), which allows these models to be employed for data augmentation (Yu et al., 2018) and 1Data and code are available at https://github. com/tagoyal/sow-reap-paraphrasing Rearrangement Aware Paraphrasing Source Order Rewriting S Clippers won the game NP VP NP VBD XNP won YNP YNP won by XNP 4 3 1 2 Clippers won the game The game was won by the Clippers. Source order
 encoding Transformer seq2seq Figure 1: Overview of our paraphrase model. First, we choose various pairs of constituents to abstract away in the source sentence, then use a neural transducer to generate possible reorderings of the abstracted sentences. From these, we construct a guide reordering of the input sentence which then informs the generation of output paraphrases. adversarial example generation (Iyyer et al., 2018). However, prior methods involving syntactic control mechanisms do not effectively cover the space of paraphrase possibilities. Using syntactic templates covering the top of the parse tree (Iyyer et al., 2018) is inflexible, and using fully-specified exemplar sentences (Chen et al., 2019b) poses the problem of how to effectively retrieve such sentences. For a particular input sentence, it is challenging to use these past approaches to enumerate the set of reorderings that make sense for that sentence. In this paper, we propose a two-stage approach to address these limitations, outlined in Figure 1. First, we use an encoder-decoder model (SOW, for Source Order reWriting) to apply transduction operations over various abstracted versions of the input sentence. These transductions yield possible reorderings of the words and constituents, which can be combined to obtain multiple feasible rearrangements of the input sentence. Each rearrangement specifies an order that we should visit words of the source sentence; note that such orderings could encourage a model to passivize (visit the object before the subject), topicalize, or reorder clauses. These orderings are encoded for our encoder-decoder paraphrase model (REAP, for REarrangement Aware Paraphrasing) by way of po239 sition embeddings, which are added to the source sentence encoding to specify the desired order of generation (see Figure 2). This overall workflow is inspired by the pre-ordering literature in machine translation (Xia and McCord, 2004; Collins et al., 2005); however, our setting explicitly requires entertaining a diverse set of possible orderings corresponding to different paraphrasing phenomena. We train and evaluate our approach on the largescale English paraphrase dataset PARANMT-50M (Wieting and Gimpel, 2018). Results show that our approach generates considerably more diverse paraphrases while retaining the quality exhibited by strong baseline models. We further demonstrate that the proposed syntax-based transduction procedure generates a feasible set of rearrangements for the input sentence. Finally, we show that position embeddings provide a simple yet effective way to encode reordering information, and that the generated paraphrases exhibit high compliance with the desired reordering input. 2 Method Given an input sentence x = {x1, x2, . . . , xn}, our goal is to generate a set of structurally distinct paraphrases Y = {y1, y2, . . . , yk}. We achieve this by first producing k diverse reorderings for the input sentence, R = {r1, r2, . . . , rk}, that guide the generation order of each corresponding y. Each reordering is represented as a permutation of the source sentence indices. Our method centers around a sequence-tosequence model which can generate a paraphrase roughly respecting a particular ordering of the input tokens. Formally, this is a model P(y | x, r). First, we assume access to the set of target reorderings R and describe this rearrangement aware paraphrasing model (REAP) in Section 2.2. 
Then, in Section 2.3, we outline our reordering approach, including the source order rewriting (SOW) model, which produces the set of reorderings appropriate for a given input sentence x during inference (x → R).

2.1 Base Model

The models discussed in this work build on a standard sequence-to-sequence transformer model (Vaswani et al., 2017) that uses stacked layers of self-attention to both encode the input tokens x and decode the corresponding target sequence y. This model is pictured in the gray block of Figure 2. Throughout this work, we use byte pair encoding (BPE) (Sennrich et al., 2016) to tokenize our input and output sentences. These models are trained in the standard way, maximizing the log likelihood of the target sequence using teacher forcing. Additionally, in order to ensure that the decoder does not attend to the same input tokens repeatedly at each step of the decoding process, we include a coverage loss term, as proposed in See et al. (2017). Note that since the architecture of the transformer model is non-recurrent, it adds position embeddings to the input word embeddings in order to indicate the correct sequence of the words in both x and y (see Figure 2). In this work, we propose using an additional set of position embeddings to indicate the desired order of words during generation, described next.

Figure 2: Rearrangement aware paraphrasing (REAP) model. The gray area corresponds to the standard transformer encoder-decoder system. Our model adds position embeddings corresponding to the target reordering to encoder outputs. The decoder attends over these augmented encodings during both training and inference.

2.2 Rearrangement aware Paraphrasing Model (REAP)

Let r = {r1, r2, . . . , rn} indicate the target reordering corresponding to the input tokens x. We want the model to approximately attend to tokens in this specified order when generating the final output paraphrase. For instance, in the example in Figure 1, the reordering specifies that when producing the paraphrase, the model should generate content related to the game before content related to Clippers in the output. In this case, based on the rearrangement being applied, the model will most likely use passivization in its generation, although this is not strictly enforced.

The architecture for our model P(y | x, r) is outlined in Figure 2. Consider an encoder-decoder architecture with a stack of M layers in the encoder and N layers in the decoder. We make the target reordering r accessible to this transformer model through an additional set of positional embeddings PE_r. We use the sinusoidal function to construct these following Vaswani et al. (2017). Let E_M = encoder_M(x) be the output of the M-th (last) layer of the encoder. The special-purpose position embeddings are added to the output of this layer (see Figure 2): E = E_M + PE_r. Note that these are separate from standard position embeddings added at the input layer; such embeddings are also used in our model to encode the original order of the source sentence. The transformer decoder model attends over E while computing attention, and the presence of the position embeddings should encourage the generation to obey the desired ordering r, while still conforming to the decoder language model. Our experiments in Section 4.3 show that this position embedding method is able to successfully guide the generation of paraphrases, conditioning on both the input sentence semantics as well as the desired ordering.

2.3 Sentence Reordering

We now outline our approach for generating these desired reorderings r. We do this by predicting phrasal rearrangements with the SOW model at various levels of syntactic abstraction of the sentence. We combine multiple such phrase-level rearrangements to obtain a set R of sentence-level rearrangements. This is done using a top-down approach, starting at the root node of the parse tree. The overall recursive procedure is outlined in Algorithm 1.

Figure 3: Overview of the source sentence rearrangement workflow for one level of recursion at the root node. First, candidate tree segment pairs contained within the input node are selected. A transduction operation is applied over the abstracted phrase, giving the reordering 4 5 1 2 3 for the case shown in red, then the process recursively continues for each abstracted node. This results in a reordering for the full source sentence; the reordering indices serve as additional input to the REAP model.

Algorithm 1 REORDER(t)
Input: Sub-tree t of the input parse tree
Output: Top-k list of reorderings for t's yield
T = SELECTSEGMENTPAIRS(t) // Step 1
R = INITIALIZEBEAM(size = k)
for (A, B) in T do
    z = REORDERPHRASE(t, A, B) // Step 2
    RA(1, . . . , k) = REORDER(tA) // k orderings
    RB(1, . . . , k) = REORDER(tB) // k orderings
    for ra, rb in RA × RB do
        r = COMBINE(z, ra, rb) // Step 3
        score(r) = score(z) + score(ra) + score(rb)
        R.push(r, score(r))
    end for
end for
return R

One step of the recursive algorithm has three major steps: Figure 3 shows the overall workflow for one iteration (here, the root node of the sentence is selected for illustration). First, we select sub-phrase pairs of the input phrase that respect parse-tree boundaries, where each pair consists of non-overlapping phrases (Step 1). Since the aim is to learn generic syntax-governed rearrangements, we abstract out the two sub-phrases, and replace them with non-terminal symbols, retaining only the constituent tag information.
For example, we show three phrase pairs in Figure 3 that can be abstracted away to yield the reduced forms of the sentences. We then use a seq2seq model to obtain rearrangements for each abstracted phrase (Step 2). Finally, this top-level rearrangement is combined with recursively-constructed phrase rearrangements within the abstracted phrases to obtain sentence-level rearrangements (Step 3).

Step 1: SELECTSEGMENTPAIRS We begin by selecting phrase tuples that form the input to our seq2seq model. A phrase tuple (t, A, B) consists of a sub-tree t with the constituents A and B abstracted out (replaced by their syntactic categories). For instance, in Figure 3, the S0, S, and VP2 nodes circled in red form a phrase tuple. Multiple distinct combinations of A and B are possible.2

Step 2: REORDERPHRASE Next, we obtain rearrangements for each phrase tuple (t, A, B). We first form an input consisting of the yield of t with A and B abstracted out; e.g. If S I will VP, shown in red in Figure 3. We use a sequence-to-sequence model (the SOW model) that takes this string as input and produces a corresponding output sequence. We then perform word-level alignment between the input and generated output sequences (using cosine similarity between GloVe embeddings) to obtain the rearrangement that must be applied to the input sequence.3 The log probability of the output sequence serves as a score for this rearrangement.

SOW model The SOW model is a sequence-to-sequence model P(y′ | x′, o), following the transformer framework in Section 2.1.4 Both x′ and y′ are encoded using the word pieces vocabulary; additionally, embeddings corresponding to the POS tags and constituent labels (for non-terminals) are added to the input embeddings. For instance, in Figure 3, If S I will VP and I will VP if S is an example of an (x′, y′) pair. While not formally required, Algorithm 1 ensures that there are always exactly two non-terminal labels in these sequences. o is a variable that takes values MONOTONE or FLIP. This encodes a preference to keep the two abstracted nodes in the same order or to “flip” them in the output.5 o is encoded in the model with additional positional encodings of the form {. . . 0, 0, 1, 0, . . . 2, 0 . . . } for monotone and {. . . 0, 0, 2, 0, . . . 1, 0 . . . } for flipped, wherein the non-zero positions correspond to the positions of the abstracted non-terminals in the phrase. These positional embeddings for the SOW MODEL are handled analogously to the r embeddings for the REAP model. During inference, we use both the monotone rearrangement and flip rearrangement to generate two reorderings, one of each type, for each phrase tuple. We describe training of this model in Section 3.

2 In order to limit the number of such pairs, we employ a threshold on the fraction of non-abstracted words remaining in the phrase, outlined in more detail in the Appendix.
3 We experimented with a pointer network to predict indices directly; however, the approach of generating and then aligning post hoc resulted in a much more stable model.
4 See the Appendix for the SOW model architecture diagram.
5 In syntactic translation systems, rules similarly can be divided by whether they preserve order or invert it (Wu, 1997).

Step 3: COMBINE The previous step gives a rearrangement for the subtree t. To obtain a sentence-level rearrangement from this, we first recursively apply the REORDER algorithm on subtrees tA and tB, which returns the top-k rearrangements of each subtree.
We iterate over each rearrangement pair (ra, rb), applying these reorderings to the abstracted phrases A and B. This is illustrated on the left side of Figure 3. The sentence-level representations, thus obtained, are scored by taking a mean over all the phrase-level rearrangements involved.

3 Data and Training

We train and evaluate our model on the PARANMT-50M paraphrase dataset (Wieting and Gimpel, 2018) constructed by backtranslating the Czech sentences of the CzEng (Bojar et al., 2016) corpus. We filter this dataset to remove shorter sentences (less than 8 tokens), low quality paraphrase pairs (quantified by a translation score included with the dataset) and examples that exhibit low reordering (quantified by a reordering score based on the position of each word in the source and its aligned word in the target sentence). This leaves us with over 350k paraphrase pairs.

3.1 Training Data for REAP

To train our REAP model (outlined in Section 2.2), we take existing paraphrase pairs (x, y∗) and derive pseudo-ground truth rearrangements r∗ of the source sentence tokens based on their alignment with the target sentence. To obtain these rearrangements, we first get contextual embeddings (Devlin et al., 2019) for all tokens in the source and target sentences. We follow the strategy outlined in Lerner and Petrov (2013) and perform reorderings as we traverse down the dependency tree. Starting at the root node of the source sentence, we determine the order between the head and its children (independent of other decisions) based on the order of the corresponding aligned words in the target sentence. We continue this traversal recursively to get the sentence-level rearrangement. This mirrors the rearrangement strategy from Section 2.3, which operates over the constituency parse tree instead of the dependency parse. Given triples (x, r∗, y∗), we can train our REAP model to generate the final paraphrases conditioning on the pseudo-ground truth reorderings.

Figure 4: Paraphrase sentence pair and its aligned tuples A → B, C and A′ → B′, C′. These produce the training data for the SOW MODEL.

3.2 Training Data for SOW

The PARANMT-50M dataset contains sentence-level paraphrase pairs. However, in order to train our SOW model (outlined in Section 2.3), we need to see phrase-level paraphrases with syntactic abstractions in them. We extract these from the PARANMT-50M dataset using the following procedure, shown in Figure 4. We follow Zhang et al. (2020) and compute a phrase alignment score between all pairs of constituents in a sentence and its paraphrase.6 From this set of phrase alignment scores, we compute a partial one-to-one mapping between phrases (colored shapes in Figure 4); that is, not all phrases get aligned, but the subset that do are aligned one-to-one. Finally, we extract aligned chunks similar to rule alignment in syntactic translation (Galley et al., 2004): when aligned phrases A and A′ subsume aligned phrase pairs (B, C) and (B′, C′) respectively, we can extract the aligned tuples (tA, B, C) and (tA′, B′, C′). The phrases (B, C) and (B′, C′) are abstracted out to construct training data for the phrase-level transducer, including supervision of whether o = MONOTONE or FLIP. Using the above alignment strategy, we were able to obtain over 1 million aligned phrase pairs.

6 The score is computed using a weighted mean of the contextual similarity between individual words in the phrases, where the weights are determined by the corpus-level inverse-document frequency of the words. Details in the Appendix.
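To make the construction in Section 3.1 above concrete, the sketch below derives a pseudo-ground-truth reordering r* from a word-level alignment between a source sentence and its paraphrase. This is only an illustration under our own naming: it orders source tokens directly by the position of their aligned target words rather than traversing the dependency tree as the paper does, and unaligned tokens are simply appended in original order.

# Minimal sketch: derive a reordering r* of source-token indices from a word
# alignment, so that source words are visited in the order in which their
# aligned words appear in the target paraphrase.
def derive_reordering(alignment, src_len):
    """alignment: dict mapping source index -> aligned target index."""
    def sort_key(i):
        # aligned tokens are ordered by target position; unaligned tokens are
        # appended afterwards in their original order
        return (0, alignment[i]) if i in alignment else (1, i)
    visit_order = sorted(range(src_len), key=sort_key)
    # r*[k] = rank of source token k in the desired generation order
    rank = [0] * src_len
    for new_pos, src_idx in enumerate(visit_order):
        rank[src_idx] = new_pos
    return rank

# Example: "if it rains I carry an umbrella" -> "I carry an umbrella if it rains"
alignment = {0: 4, 1: 5, 2: 6, 3: 0, 4: 1, 5: 2, 6: 3}
print(derive_reordering(alignment, 7))  # [4, 5, 6, 0, 1, 2, 3]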
4 Evaluation Setup

As our main goal is to evaluate our model's ability to generate diverse paraphrases, we obtain a set of paraphrases and compare these to sets of paraphrases produced by other methods. To obtain 10 paraphrases, we first compute a set of 10 distinct reorderings r1, . . . , r10 with the SOW method from Section 2.3 and then use REAP to generate a 1-best paraphrase for each. We use top-k decoding to generate the final set of paraphrases corresponding to the reorderings. Our evaluation is done over 10k examples from PARANMT-50M.

4.1 Quantitative Evaluation

Baselines We compare our model against the Syntactically Controlled Paraphrase Network (SCPN) model proposed in prior work (Iyyer et al., 2018). It produces 10 distinct paraphrase outputs conditioned on a pre-enumerated list of syntactic templates. This approach has been shown to outperform other paraphrase approaches that condition on interpretable intermediate structures (Chen et al., 2019b). Additionally, we report results on the following baseline models: i) A copy-input model that outputs the input sentence exactly. ii) A vanilla seq2seq model that uses the same transformer encoder-decoder architecture from Section 2.1 but does not condition on any target rearrangement. We use top-k sampling (Fan et al., 2018) to generate 10 paraphrases from this model.7 iii) A diverse-decoding model that uses the above transformer seq2seq model with diverse decoding (Kumar et al., 2019) during generation. Here, the induced diversity is uncontrolled and aimed at maximizing metrics such as distinct n-grams and edit distance between the generated sentences. iv) An LSTM version of our model where the REAP model uses LSTMs with attention (Bahdanau et al., 2014) and copy (See et al., 2017) instead of transformers. We still use the transformer-based phrase transducer to obtain the source sentence reorderings, and still use positional encodings in the LSTM attention.

Similar to Cho et al. (2019), we report two types of metrics:
1. Quality: Given k generated paraphrases Y = {y1, y2 . . . yk} for each input sentence in the test set, we select the ŷ_best that achieves the best (oracle) sentence-level score with the ground truth paraphrase y. The corpus level evaluation is performed using pairs (ŷ_best, y).
2. Diversity: We calculate BLEU or WER between all pairs (yi, yj) generated by a single model on a single sentence, then macro-average these values at a corpus-level.

7 Prior work (Wang et al., 2019; Li et al., 2019) has shown that such a transformer-based model provides a strong baseline and outperforms previous LSTM-based (Hasan et al., 2016) and VAE-based (Gupta et al., 2018) approaches.

Model | BLEU | ROUGE-1 | ROUGE-2 | ROUGE-L | % rejected | self-BLEU ↓ | self-WER ↑
(BLEU/ROUGE columns: oracle quality over 10 sentences, no rejection, higher is better; self-BLEU/self-WER: pairwise diversity, post-rejection)
copy-input | 18.4 | 54.4 | 27.2 | 49.2 | 0 | − | −
SCPN | 21.3 | 53.2 | 30.3 | 51.0 | 40.6 | 35.9 | 63.4
Transformer seq2seq | 32.8 | 63.1 | 41.4 | 63.3 | 12.7 | 50.7 | 35.4
+ diverse-decoding | 24.8 | 56.8 | 33.2 | 56.4 | 21.3 | 34.2 | 58.1
SOW-REAP (LSTM) | 27.0 | 57.9 | 34.8 | 57.5 | 31.7 | 46.2 | 53.9
SOW-REAP | 30.9 | 62.3 | 40.2 | 61.7 | 15.9 | 38.0 | 57.9
Table 1: Quality and diversity metrics for the different models. Our proposed approach outperforms other diverse models (SCPN and diverse decoding) in terms of all the quality metrics. These models exhibit higher diversity, but with many more rejected paraphrases, indicating that these models more freely generate bad paraphrases.
In addition to these metrics, we use the paraphrase similarity model proposed by Wieting et al. (2017) to compute a paraphrase score for generated outputs with respect to the input. Similar to Iyyer et al. (2018), we use this score to filter out low quality paraphrases. We report on the rejection rate according to this criterion for all models. Note that our diversity metric is computed after filtering as it is easy to get high diversity by including nonsensical paraphrase candidates that differ semantically.

Table 1 outlines the performance of the different models. The results show that our proposed model substantially outperforms the SCPN model across all quality metrics.8 Furthermore, our LSTM model also beats the performance of the SCPN model, demonstrating that the gain in quality cannot completely be attributed to the use of transformers. The quality of our full model (with rearrangements) is also comparable to the quality of the vanilla seq2seq model (without rearrangements). This demonstrates that the inclusion of rearrangements from the syntax-based neural transducer does not hurt quality, while leading to a substantially improved diversity performance.

The SCPN model has a high rejection score of 40.6%. This demonstrates that out of the 10 templates used to generate paraphrases for each sentence, on average 4 were not appropriate for the given sentence, and therefore get rejected. On the other hand, for our model, only 15.9% of the generated paraphrases get rejected, implying that the rearrangements produced were generally meaningful. This is comparable to the 12.7% rejection rate exhibited by the vanilla seq2seq model that does not condition on any syntax or rearrangement, and is therefore never obliged to conform to an inappropriate structure.

8 The difference in performance between our proposed model and baseline models is statistically significant according to a paired bootstrap test.

Finally, our model exhibits a much higher diversity within the generated paraphrases compared to the transformer seq2seq baseline. As expected, the SCPN model produces slightly more diverse paraphrases as it explicitly conditions the generations on templates with very different top level structures. However, this is often at the cost of semantic equivalence, as demonstrated by both quantitative and human evaluation (next section). A similar trend was observed with the diverse-decoding scheme. Although it leads to more diverse generations, there is a substantial decrease in quality compared to SOW-REAP and the seq2seq model. Moreover, the paraphrases have a higher rejection rate (21.3%), suggesting that diverse decoding is more likely to produce nonsensical paraphrases. A similar phenomenon is also reported by Iyyer et al. (2018), wherein diverse-decoding resulted in paraphrases with different semantics than the input.

Syntactic Exemplars In addition to SCPN, we compare our proposed model against the controllable generation method of Chen et al. (2019b). Their model uses an exemplar sentence as a syntactic guide during generation; the generated paraphrase is trained to incorporate the semantics of the input sentence while emulating the syntactic structure of the exemplar (see Appendix D for examples).
However, their proposed approach depends on the availability of such exemplars at test time; they manually constructed these for their test set (800 examples). Since we do not have such example sentences available for our test data, we report results of our model's performance on their test data.

Input SOW-REAP SCPN if at any time in the preparation of this product the integrity of this container is compromised it should not be used . this container should not be used if any time in the preparation of this product is compromised in the preparation of this product , the integrity of this container is compromised , but it should not be used . if the integrity of the packaging is impaired at any time , the product should not be used . where is the integrity of this product of this container the integrity of this container should not be used . if the product integrity of this container is compromised it should not be used . i should not use if at any time in the preparation of this product , it should not be used . i was the first grower to use hydroponics . to use hydroponics , i was the first one . where did i have the first tendency to use hydroponics ? i used hydroponics for the first time . i used to use hydroponics . to use hydroponics the first time i was . first i was the first grower to use hydroponics
Table 2: Examples of paraphrases generated by our system and the baseline SCPN model. Our model successfully rearranges the different structural components of the input sentence to obtain meaningful rearrangements. SCPN conforms to pre-enumerated templates that may not align with a given input.

Note that Chen et al. (2019b) carefully curated the exemplar to be syntactically similar to the actual target paraphrase. Therefore, for fair comparison, we report results using the ground truth ordering (that similarly leverages the target sentence to obtain a source reordering), followed by the REAP model. This model (ground truth order + REAP) achieves a 1-best BLEU score of 20.9, outperforming both the prior works: Chen et al. (2019b) (13.6 BLEU) and SCPN (17.8 BLEU with template, 19.2 BLEU with full parse). Furthermore, our full SOW-REAP model gets an oracle-BLEU (across 10 sentences) score of 23.8. These results show that our proposed formulation outperforms other controllable baselines, while being more flexible.

4.2 Qualitative Evaluation

Table 2 provides examples of paraphrase outputs produced by our approach and SCPN. The examples show that our model exhibits syntactic diversity while producing reasonable paraphrases of the input sentence. On the other hand, SCPN tends to generate non-paraphrases in order to conform to a given template, which contributes to increased diversity but at the cost of semantic equivalence. In Table 3, we show the corresponding sequence of rules that apply to an input sentence, and the final generated output according to that input rearrangement. Note that for our model, on average, 1.8 phrase-level reorderings were combined to produce sentence-level reorderings (we restrict to a maximum of 3). More examples along with the input rule sequence (for our model) and syntactic templates (for SCPN) are provided in the Appendix.

Input Sentence: if at any time in the preparation of this product the integrity of this container is compromised it should not be used .
Rule Sequence:
if S it should not VB used . → should not VB used if S (parse tree level: 0)
at NP the integrity of this container VBZ compromised → this container VBZ weakened at NP (parse tree level: 1)
the NN of NP → NP NN (parse tree level: 2)
Generated Sentence: this container should not be used if the product is compromised at any time in preparation .
Table 3: Examples of our model's rearrangements applied to a given input sentence. Parse tree level indicates the rule subtree's depth from the root node of the sentence. The REAP model's final generation considers the rule reordering at the higher levels of the tree but ignores the rearrangement within the lower sub-tree.

Human Evaluation We also performed human evaluation on Amazon Mechanical Turk to evaluate the quality of the generated paraphrases. We randomly sampled 100 sentences from the development set. For each of these sentences, we obtained 3 generated paraphrases from each of the following models: i) SCPN, ii) vanilla sequence-to-sequence and iii) our proposed SOW-REAP model. We follow earlier work (Kok and Brockett, 2010; Iyyer et al., 2018) and obtain quality annotations on a 3-point scale: 0 denotes not a paraphrase, 1 denotes that the input sentence and the generated sentence are paraphrases, but the generated sentence might contain grammatical errors, 2 indicates that the input and the candidate are paraphrases. To emulate the human evaluation design in Iyyer et al. (2018), we sample paraphrases after filtering using the criterion outlined in the previous section and obtain three judgements per sentence and its 9 paraphrase candidates.

Table 4 outlines the results from the human evaluation. As we can see, the results indicate that the quality of the paraphrases generated from our model is substantially better than the SCPN model.9 Furthermore, similar to the quantitative evaluation, the human evaluation also demonstrates that the performance of this model is similar to that of the vanilla sequence-to-sequence model, indicating that the inclusion of target rearrangements does not hurt performance.

Model | 2 | 1 | 0
SCPN (Iyyer et al., 2018) | 35.9 | 24.8 | 39.3
Transformer seq2seq | 45.1 | 20.6 | 34.3
SOW-REAP | 44.5 | 22.6 | 32.9
Table 4: Human annotated quality across different models. The evaluation was done on a 3-point quality scale, 2 = grammatical paraphrase, 1 = ungrammatical paraphrase, 0 = not a paraphrase.

Ordering | oracle-ppl ↓ | oracle-BLEU ↑
Monotone | 10.59 | 27.98
Random | 9.32 | 27.10
SOW | 8.14 | 30.02
Ground Truth | 7.79 | 36.40
Table 5: Comparison of different source reordering strategies. Our proposed approach outperforms baseline monotone and random rearrangement strategies.

4.3 Ablations and Analysis

4.3.1 Evaluation of SOW Model

Next, we intrinsically evaluate the performance of our SOW model (Section 2.3). Specifically, given a budget of 10 reorderings, we want to understand how close our SOW model comes to covering the target ordering. We do this by evaluating the REAP model in terms of oracle perplexity (of the ground truth paraphrase) and oracle BLEU over these 10 orderings. We evaluate our proposed approach against 3 systems: a) Monotone reordering {1, 2, . . . , n}. b) Random permutation, by randomly permuting the children of each node as we traverse down the constituency parse tree. c) Ground Truth, using the pseudo-ground truth rearrangement (outlined in Section 3) between the source and ground-truth target sentence. This serves as an upper bound for the reorderings' performance, as obtained by the recursive phrase-level transducer.
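The following is a minimal sketch of the two baseline ordering strategies just described (monotone and random parse-tree permutation). The nested-list tree representation and the function names are illustrative assumptions, not the paper's actual code; the random baseline shuffles the children of every node and reads off the resulting leaf order.

import random

def monotone_order(n):
    # identity ordering {1, 2, ..., n} (0-indexed here)
    return list(range(n))

def random_tree_order(tree):
    """Randomly permute the children of every internal node of a toy
    constituency tree and return the resulting order of leaf indices
    (baseline b in Section 4.3.1). Leaves are (index, token) tuples;
    internal nodes are lists of children."""
    if isinstance(tree, tuple):          # leaf
        return [tree[0]]
    children = list(tree)
    random.shuffle(children)             # independent shuffle at each node
    order = []
    for child in children:
        order.extend(random_tree_order(child))
    return order

tree = [[(0, "If"), [(1, "it"), (2, "rains")]],
        [(3, "I"), [(4, "carry"), [(5, "an"), (6, "umbrella")]]]]
print(monotone_order(7))
print(random_tree_order(tree))  # e.g. [3, 4, 5, 6, 0, 1, 2]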
9 The difference of our model's performance with SCPN is statistically significant, while that with the baseline seq2seq is not, according to a paired bootstrap test.

Figure 5: The degree of rearrangement (Kendall's Tau) achieved by conditioning on monotone and pseudo-ground truth reorderings (r∗). The dotted line denotes the ideal performance (in terms of reordering compliance) of the REAP model, when supplied with perfect reordering r∗. The actual performance of the REAP model mirrors the ideal performance.

Table 5 outlines the results for 10 generated paraphrases from each rearrangement strategy. Our proposed approach outperforms the baseline monotone and random reordering strategies. Furthermore, the SOW model's oracle perplexity is close to that of the ground truth reordering's perplexity, showing that the proposed approach is capable of generating a diverse set of rearrangements such that one of them often comes close to the target rearrangement. The comparatively high performance of the ground truth reorderings demonstrates that the positional embeddings are effective at guiding the REAP model's generation.

4.3.2 Compliance with target reorderings

Finally, we evaluate whether the generated paraphrases follow the target reordering r. Note that we do not expect or want our REAP model to be absolutely compliant with this input reordering since the model should be able to correct for the mistakes made by the SOW model and still generate valid paraphrases. Therefore, we perform reordering compliance experiments on only the monotone reordering and the pseudo-ground truth reorderings (r∗, construction outlined in Section 3), since these certainly correspond to valid paraphrases. For sentences in the test set, we generate paraphrases using monotone reordering and pseudo-ground truth reordering as inputs to REAP. We get the 1-best paraphrase and compute the degree of rearrangement10 between the input sentence and the generated sentence. In Figure 5, we plot this as a function of the target degree of rearrangement, i.e., the rearrangement between the input sentence x and the ground truth sentence y∗. The dotted line denotes the ideal performance of the model in terms of agreement with the perfect reordering r∗. The plot shows that the REAP model performs as desired; the monotone generation results in high Kendall's Tau between input and output. Conditioning on the pseudo-ground truth reorderings (r∗) produces rearrangements that exhibit the same amount of reordering as the ideal rearrangement.

10 Quantified by Kendall's Tau rank correlation between the original source order and the targeted/generated order. Higher Kendall's Tau indicates lower rearrangement and vice-versa.

5 Related Work

Paraphrase Generation Compared to prior seq2seq approaches for paraphrasing (Hasan et al., 2016; Gupta et al., 2018; Li et al., 2018), our model is able to achieve much stronger controllability with an interpretable control mechanism. Like these approaches, we can leverage a wide variety of resources to train on, including backtranslation (Pavlick et al., 2015; Wieting and Gimpel, 2018; Hu et al., 2019) or other curated data sources (Fader et al., 2013; Lan et al., 2017).

Controlled Generation Recent work on controlled generation aims at controlling attributes such as sentiment (Shen et al., 2017), gender or political slant (Prabhumoye et al., 2018), topic (Wang et al., 2017), etc.
However, these methods cannot achieve fine-grained control over a property like syntax. Prior work on diverse paraphrase generation can be divided into three groups: diverse decoding, latent variable modeling, and syntax-based. The first group uses heuristics such as Hamming distance or distinct n-grams to preserve diverse options during beam search decoding (Vijayakumar et al., 2018; Kumar et al., 2019). The second group includes approaches that use uninterpretable latent variables to separate syntax and semantics (Chen et al., 2019a), perturb latent representations to enforce diversity (Gupta et al., 2018; Park et al., 2019) or condition on latent codes used to represent different re-writing patterns (Xu et al., 2018; An and Liu, 2019). Qian et al. (2019) use distinct generators to output diverse paraphrases. These methods achieve some diversity, but do not control generation in an interpretable manner. Finally, methods that use explicit syntactic structures (Iyyer et al., 2018; Chen et al., 2019b) may try to force a sentence to conform to unsuitable syntax. Phrase-level approaches (Li et al., 2019) are inherently less flexible than our approach.

Machine Translation Our work is inspired by the pre-ordering literature in machine translation. These systems either use hand-crafted rules designed for specific languages (Collins et al., 2005; Wang et al., 2007) or automatically learn rewriting patterns based on syntax (Xia and McCord, 2004; Dyer and Resnik, 2010; Genzel, 2010; Khalilov and Simaan, 2011; Lerner and Petrov, 2013). There also exist approaches that do not rely on syntactic parsers, but induce hierarchical representations to leverage for pre-ordering (Tromble and Eisner, 2009; DeNero and Uszkoreit, 2011). In the context of translation, there is often a canonical reordering that should be applied to align better with the target language; for instance, head-final languages like Japanese exhibit highly regular syntax-governed reorderings compared to English. However, in diverse paraphrase generation, there doesn't exist a single canonical reordering, making our problem quite different. In concurrent work, Chen et al. (2020) similarly use an additional set of position embeddings to guide the order of generated words for machine translation. This demonstrates that the REAP technique is effective for other tasks also. However, they do not tackle the problem of generating plausible reorderings and therefore their technique is less flexible than our full SOW-REAP model.

6 Conclusion

In this work, we propose a two-step framework for paraphrase generation: construction of diverse syntactic guides in the form of target reorderings followed by actual paraphrase generation that respects these reorderings. Our experiments show that this approach can be used to produce paraphrases that achieve a better quality-diversity trade-off compared to previous methods and strong baselines.

Acknowledgments

This work was partially supported by NSF Grant IIS-1814522, a gift from Arm, and an equipment grant from NVIDIA. The authors acknowledge the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for providing HPC resources used to conduct this research. Thanks as well to the anonymous reviewers for their helpful comments.

References

Zhecheng An and Sicong Liu. 2019. Towards Diverse Paraphrase Generation Using Multi-Class Wasserstein GAN. arXiv preprint arXiv:1909.13827.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio.
2014. Neural Machine Translation by Jointly Learning to Align and Translate. CoRR, abs/1409.0473. Regina Barzilay and Lillian Lee. 2003. Learning to paraphrase: an unsupervised approach using multiple-sequence alignment. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology-Volume 1, pages 16– 23. Association for Computational Linguistics. Ondˇrej Bojar, Ondˇrej Duˇsek, Tom Kocmi, Jindˇrich Libovick`y, Michal Nov´ak, Martin Popel, Roman Sudarikov, and Duˇsan Variˇs. 2016. CzEng 1.6: enlarged Czech-English parallel corpus with processing tools Dockered. In International Conference on Text, Speech, and Dialogue, pages 231–238. Springer. Kehai Chen, Rui Wang, Masao Utiyama, and Eiichiro Sumita. 2020. Explicit Reordering for Neural Machine Translation. arXiv preprint arXiv:2004.03818. Mingda Chen, Qingming Tang, Sam Wiseman, and Kevin Gimpel. 2019a. A Multi-Task Approach for Disentangling Syntax and Semantics in Sentence Representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2453–2464. Mingda Chen, Qingming Tang, Sam Wiseman, and Kevin Gimpel. 2019b. Controllable Paraphrase Generation with a Syntactic Exemplar. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5972–5984, Florence, Italy. Association for Computational Linguistics. Jaemin Cho, Minjoon Seo, and Hannaneh Hajishirzi. 2019. Mixture Content Selection for Diverse Sequence Generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 3112–3122. Michael Collins, Philipp Koehn, and Ivona Kuˇcerov´a. 2005. Clause restructuring for statistical machine translation. In Proceedings of the 43rd annual meeting on association for computational linguistics, pages 531–540. Association for Computational Linguistics. John DeNero and Jakob Uszkoreit. 2011. Inducing sentence structure from parallel corpora for reordering. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 193–203. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186. Chris Dyer and Philip Resnik. 2010. Context-free reordering, finite-state translation. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 858–866, Los Angeles, California. Association for Computational Linguistics. Anthony Fader, Luke Zettlemoyer, and Oren Etzioni. 2013. Paraphrase-driven learning for open question answering. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1608–1618. Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hierarchical Neural Story Generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 889–898. Michel Galley, Mark Hopkins, Kevin Knight, and Daniel Marcu. 2004. 
What’s in a translation rule? In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics: HLT-NAACL 2004, pages 273–280, Boston, Massachusetts, USA. Association for Computational Linguistics. Dmitriy Genzel. 2010. Automatically learning sourceside reordering rules for large scale machine translation. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010), pages 376–384. Ankush Gupta, Arvind Agarwal, Prawaan Singh, and Piyush Rai. 2018. A deep generative framework for paraphrase generation. In Thirty-Second AAAI Conference on Artificial Intelligence. Sadid A Hasan, Kathy Lee, Vivek Datla, Ashequl Qadir, Joey Liu, Oladimeji Farri, et al. 2016. Neural Paraphrase Generation with Stacked Residual LSTM Networks. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 2923–2934. J. Edward Hu, Rachel Rudinger, Matt Post, and Benjamin Van Durme. 2019. ParaBank: Monolingual 248 Bitext Generation and Sentential Paraphrasing via Lexically-constrained Neural Machine Translation. In Proceedings of AAAI. Mohit Iyyer, John Wieting, Kevin Gimpel, and Luke Zettlemoyer. 2018. Adversarial Example Generation with Syntactically Controlled Paraphrase Networks. In Proceedings of NAACL-HLT, pages 1875– 1885. Maxim Khalilov and Khalil Simaan. 2011. Contextsensitive syntactic source-reordering by statistical transduction. In Proceedings of 5th International Joint Conference on Natural Language Processing, pages 38–46. Stanley Kok and Chris Brockett. 2010. Hitting the right paraphrases in good time. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 145–153. Association for Computational Linguistics. Ashutosh Kumar, Satwik Bhattamishra, Manik Bhandari, and Partha Talukdar. 2019. Submodular Optimization-based Diverse Paraphrasing and its Effectiveness in Data Augmentation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3609–3619. Wuwei Lan, Siyu Qiu, Hua He, and Wei Xu. 2017. A Continuously Growing Dataset of Sentential Paraphrases. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1224–1234, Copenhagen, Denmark. Association for Computational Linguistics. Uri Lerner and Slav Petrov. 2013. Source-side classifier preordering for machine translation. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 513–523. Zichao Li, Xin Jiang, Lifeng Shang, and Hang Li. 2018. Paraphrase Generation with Deep Reinforcement Learning. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3865–3878, Brussels, Belgium. Association for Computational Linguistics. Zichao Li, Xin Jiang, Lifeng Shang, and Qun Liu. 2019. Decomposable Neural Paraphrase Generation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3403–3414, Florence, Italy. Association for Computational Linguistics. Kathleen R McKeown. 1983. Paraphrasing questions using given and new information. Computational Linguistics, 9(1):1–10. Sunghyun Park, Seung-won Hwang, Fuxiang Chen, Jaegul Choo, Jung-Woo Ha, Sunghun Kim, and Jinyeong Yim. 2019. 
Paraphrase Diversification Using Counterfactual Debiasing. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6883–6891. Ellie Pavlick, Pushpendre Rastogi, Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2015. PPDB 2.0: Better paraphrase ranking, finegrained entailment relations, word embeddings, and style classification. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 425–430, Beijing, China. Association for Computational Linguistics. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of NAACL-HLT, pages 2227–2237. Shrimai Prabhumoye, Yulia Tsvetkov, Ruslan Salakhutdinov, and Alan W Black. 2018. Style Transfer Through Back-Translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 866–876. Lihua Qian, Lin Qiu, Weinan Zhang, Xin Jiang, and Yong Yu. 2019. Exploring Diverse Expressions for Paraphrase Generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3164–3173. Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get To The Point: Summarization with Pointer-Generator Networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073–1083. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural Machine Translation of Rare Words with Subword Units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715– 1725. Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2017. Style transfer from non-parallel text by cross-alignment. In Advances in neural information processing systems, pages 6830–6841. Roy Tromble and Jason Eisner. 2009. Learning linear ordering problems for better translation. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 2-Volume 2, pages 1007–1016. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998–6008. 249 Ashwin K Vijayakumar, Michael Cogswell, Ramprasaath R Selvaraju, Qing Sun, Stefan Lee, David Crandall, and Dhruv Batra. 2018. Diverse beam search for improved description of complex scenes. In Thirty-Second AAAI Conference on Artificial Intelligence. Chao Wang, Michael Collins, and Philipp Koehn. 2007. Chinese syntactic reordering for statistical machine translation. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 737–745. Di Wang, Nebojsa Jojic, Chris Brockett, and Eric Nyberg. 2017. Steering Output Style and Topic in Neural Response Generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2140–2150. Su Wang, Rahul Gupta, Nancy Chang, and Jason Baldridge. 2019. A task in a suit and a tie: paraphrase generation with semantic augmentation. 
In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 7176–7183. John Wieting and Kevin Gimpel. 2018. ParaNMT50M: Pushing the limits of paraphrastic sentence embeddings with millions of machine translations. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 451–462, Melbourne, Australia. Association for Computational Linguistics. John Wieting, Jonathan Mallinson, and Kevin Gimpel. 2017. Learning Paraphrastic Sentence Embeddings from Back-Translated Bitext. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 274–285. Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational linguistics, 23(3):377–403. Fei Xia and Michael McCord. 2004. Improving a statistical MT system with automatically learned rewrite patterns. In Proceedings of the 20th international conference on Computational Linguistics, page 508. Association for Computational Linguistics. Qiongkai Xu, Juyan Zhang, Lizhen Qu, Lexing Xie, and Richard Nock. 2018. D-PAGE: Diverse Paraphrase Generation. CoRR, abs/1808.04364. Adams Wei Yu, David Dohan, Quoc Le, Thang Luong, Rui Zhao, and Kai Chen. 2018. QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension. In International Conference on Learning Representations. Shiyue Zhang and Mohit Bansal. 2019. Addressing Semantic Drift in Question Generation for SemiSupervised Question Answering. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2495–2509, Hong Kong, China. Association for Computational Linguistics. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. BERTScore: Evaluating Text Generation with BERT. In International Conference on Learning Representations. Appendix A SELECTSEGMENTPAIRS: Limiting number of segment pairs As outlined in Section 2.3, the SELECTSEGMENTPAIRS subroutine returns a set of non-overlapping sub-phrases (A, B). In order to limit the number of sub-phrase pairs during inference, we employ the following heuristics: 1. We compute a score based on number of nonabstracted tokens divided by the total number of tokens in the yield of the parent sub-phrase t. We reject pairs (A, B) that have a score of more than 0.6. This reduces spurious ambiguity by encouraging the model to rearrange big constituents hierarchically rather than only abstracting out small pieces. 2. We maintain a list of tags that are never individually selected as sub-phrases. These include constituents that would be trivial to the reordering such as determiners (DT), prepositions (IN), cardinal numbers (CD), modals (MD), etc. However, these may be a part of larger constituents that form A or B. B Training Data for SOW MODEL In Section 3.2, we outlined our approach for obtaining phrase-level alignments from the PARANMT50M dataset used to train the SOW MODEL. In the described approach, an alignment score is computed between each pair of phrases p, ˆp belonging to sentences s and ˆs respectively. We use the exact procedure in Zhang and Bansal (2019) to compute the alignment score, outlined below: 1. First, we compute an inverse document frequency (idf) score for each token in the training set. Let M = {s(i)} be the total number of sentences. 
Then the idf of a word w is computed as:

idf(w) = −log( (1/M) Σ_{i=1}^{M} 1[w ∈ s^(i)] )

2. Next, we extract a contextual representation of each word in the two phrases s and ŝ. We use ELMo (Peters et al., 2018) in our approach.

3. In order to compute a similarity score between each pair of phrases (p, p̂), we use greedy matching to first align each token in the source phrase to its most similar word in the target phrase. To compute phrase-level similarity, these word-level similarity scores are combined by taking a weighted mean, with weights specified by the idf scores. Formally,

R_{p,p̂} = ( Σ_{w_i ∈ p} idf(w_i) · max_{ŵ_j ∈ p̂} w_i^T ŵ_j ) / ( Σ_{w_i ∈ p} idf(w_i) )
P_{p,p̂} = ( Σ_{ŵ_j ∈ p̂} idf(ŵ_j) · max_{w_i ∈ p} w_i^T ŵ_j ) / ( Σ_{ŵ_j ∈ p̂} idf(ŵ_j) )
F_{p,p̂} = 2 P_{p,p̂} R_{p,p̂} / ( P_{p,p̂} + R_{p,p̂} )

This scoring procedure is exactly the same as the one proposed by Zhang et al. (2020) to evaluate sentence and phrase similarities.

4. Finally, the phrases p ∈ s and p̂ ∈ ŝ are aligned if:

p = argmax_{p_i ∈ s} F_{p_i, p̂}  and  p̂ = argmax_{p̂_j ∈ ŝ} F_{p, p̂_j}

These aligned phrase pairs (p, p̂) are used to construct the tuples (tA, B, C) and (t′A, B′, C′), as outlined in Section 3.2. Table 6 provides examples of such phrase pairs.

SOW Input → SOW Output
removing the NN from NP → excluding this NN from NP
they might consider VP if NP were imposed → in the case of imposition of NP , they would consider VP
NP lingered in the deserted NNS . → in the abandoned NNS , there was NP .
PP was a black NN archway . → was a black NN passage PP .
there is already a ring NN PP . → PP circular NN exist .
Table 6: Examples of aligned phrase pairs with exactly two sub-phrases abstracted out and replaced with constituent labels. These phrase pairs are used to train the SOW MODEL.

C SOW Model Architecture

Figure 6 provides an overview of the SOW seq2seq model. We add POS tag embeddings (or corresponding constituent label embeddings for abstracted X and Y) to the input token embeddings and original order position embeddings. As outlined in Section 2.3, another set of position embeddings corresponding to the order preference, either MONOTONE or FLIP, are further added to the output of the final layer of the encoder. The decoder attends over these augmented encodings during both training and inference.

Figure 6: Source Order reWriting (SOW) model. Our model encodes order preference MONOTONE or FLIP through position embeddings added to the encoder output.

D Syntactic Exemplars

Table 7 provides an example from the test set of Chen et al. (2019b). The output retains the semantics of the input sentence while following the structure of the exemplar.

I: his teammates eyes got an ugly, hostile expression.
E: the smell of flowers was thick and sweet.
O: the eyes of his teammates had turned ugly and hostile.
Table 7: Example of input (I), syntactic exemplar (E), and the reference output (O) from the evaluation test set of Chen et al. (2019b).

E Example Generations

In Table 8, we provide examples of paraphrases generated by our system (SOW-REAP) and the baseline SCPN (Iyyer et al., 2018) system. We additionally include the phrase level transductions applied to obtain the sentence level reordering by our system (column 1) and the input template that the corresponding SCPN generation was conditioned on (Column 3).
251 Rules (SOW) Output (REAP) Template (SCPN) Output (SCPN) Input: the public tender result message normally contains the following information : NP normally contains the following NN: →the following NN usually contains in NP : the following information shall normally be included in the public procurement report : SBARQ ( WHADVP SQ . ) where is the public procurement report report usually contains the following information . NP normally VP : →usually VP , NP VBZ the following NN →the NN VBZ normally the following information shall be included in the public procurement result report : S ( PP , NP VP . ) in the public competition , the report on competition contains the following information . Input: the story of obi-wan kenobi ends here . NP VP . →VP is NP the NN of NP →NP NN . end of the obi-wan kenobi story . S ( VP . ) tell the story of obi-wan kenobi . the story PP NNS here . →there NNS a story PP . here ends the story of obiwan kenobi . S ( S , CC S . ) the story of obi-wan kenobi is here , and it ends here . Input: i leased it before i knew where the money came from . i VBN it before i VP . →before i VP , i VBN it . before i knew where the money came from , i rented it . SBARQ ( WHADVP SQ . ) where did you learn that it was the money ? NP knew SBAR . →SBAR , S knew . where the money came from , i lent it to me before i knew . S ( NP VP . ) i borrowed money before i knew where the money came from . Input: priority actions should be more clearly specified in future reviews . NP should be more clearly specified PP . →PP , NP should be clearly specified . in future reviews , priority measures should be more clearly specified . S ( S , CC S . ) priority actions should be more clearly specified in future reviews , and they should be informed . ADVP VBN in future reviews → VBN in future reviews ADVP priority measures should be specified in future reviews clearly . SBARQ ( WHADVP SQ . ) where should priority actions are more clearly specified in future reviews ? Input: okay , well , tonight the occasion is calling . ADJP , S . →S , ADJP . well , NN the occasion VP →the occasion VP , NN the occasion is calling today , okay ? S ( NP VP . ) the opportunity is calling . ADJP , S . →S , ADJP . well , NP VBZ calling →VBZ calling NP we ’ll call it tonight , okay ? S ( ADVP NP VP . ) of course , the occasion is calling . Input: a minor risk considering the number of telephones in new york . a JJ risk considering NP . →NP is a JJ risk . the NN of NP →NP NN phones in new york are a minor risk considering . SBARQ ( WHADVP SQ .) when do you consider the number of telephones in new york ? NP1 considering NP2 . →considering NP2 for NP1 NN of NP →NP NN NP in JJ york →JJ york NP in new york , the number of phones is a minor risk . FRAG ( SBAR ) . that minor risk is the number of telephones in new york . Input: that dress gets me into anywhere i want . that S i VBP . →i VBP S . i want that dress gets me into the place . NP ( NP . ) that dress gets me in there , i wish . that S i VBP . →i VBP S . NN gets me PP →PP , NN gets me . i want a dress in front of me . S ( VP . ) i want everywhere . Table 8: Examples of paraphrases generated by our system and the baseline SCPN model. The outputs from our model successfully rearranges the different structural components of the input sentence to obtain meaningful rearrangements. SCPN on the other hand tends to conform to pre-specified templates that are often not aligned with a given input. 
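As a complement to the phrase alignment scoring in Appendix B above, the sketch below computes the idf-weighted greedy alignment score F between two phrases from precomputed word vectors. It is an illustrative reimplementation under our own naming, using plain NumPy arrays in place of the paper's ELMo-based pipeline.

import numpy as np

def phrase_alignment_score(P_vecs, Q_vecs, P_idf, Q_idf):
    """Greedy-matching similarity between phrase p (rows of P_vecs) and
    phrase p-hat (rows of Q_vecs), weighted by per-token idf scores.
    Vectors are assumed to be L2-normalized contextual embeddings."""
    sim = P_vecs @ Q_vecs.T                      # token-token similarities w_i^T w_j
    # Recall: each token of p greedily matched to its most similar token of p-hat
    R = np.sum(P_idf * sim.max(axis=1)) / np.sum(P_idf)
    # Precision: each token of p-hat matched to its most similar token of p
    Pr = np.sum(Q_idf * sim.max(axis=0)) / np.sum(Q_idf)
    return 2 * Pr * R / (Pr + R)                 # harmonic mean F_{p, p-hat}

# Toy example with random stand-in "embeddings"
rng = np.random.default_rng(0)
P = rng.normal(size=(4, 8)); P /= np.linalg.norm(P, axis=1, keepdims=True)
Q = rng.normal(size=(5, 8)); Q /= np.linalg.norm(Q, axis=1, keepdims=True)
print(phrase_alignment_score(P, Q, np.ones(4), np.ones(5)))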
F Implementation Details

The hyperparameter values used in the REAP (see Table 9) and SOW (see Table 10) models are listed below. Note that we do not use coverage loss for the SOW model.

Seq2seq transformer architecture: Hidden size 256; Num layers 2; Num heads 8; Dropout 0.1
Training: Optimizer Adam, β = (0.9, 0.999), ϵ = 10^-8; Learning rate 0.0001; Batch size 32; Epochs 50 (maximum); Coverage loss coeff. 1 (first 10 epochs), 0.5 (10-20 epochs), 0 (rest)
Inference: k in top-k 20; Beam Size 10
Table 9: Hyperparameters used in the implementation of the REAP model.

Seq2seq transformer architecture: Hidden size 256; Num layers 2; Num heads 8; Dropout 0.1
Training: Optimizer Adam, β = (0.9, 0.999), ϵ = 10^-8; Learning rate 0.0001; Batch size 32; Epochs 50 (maximum)
Recombination of rules/transductions: Ignored tags DT, IN, CD, MD, TO, PRP; Max. no. of rules 3
Table 10: Hyperparameters used in the implementation of the SOW model.
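For completeness, here is a small sketch of the degree-of-rearrangement measure used in Section 4.3.2 (footnote 10): Kendall's Tau rank correlation between the original source order and the targeted or generated order, computed directly from concordant and discordant pairs. This is our own illustrative implementation, not code from the paper.

def kendalls_tau(order):
    """Kendall's Tau between the identity order (0, 1, ..., n-1) and the given
    permutation 'order' of source indices. Tau = 1 means monotone (no
    rearrangement); lower values mean more rearrangement."""
    n = len(order)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            if order[i] < order[j]:
                concordant += 1
            else:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

print(kendalls_tau([0, 1, 2, 3]))   # 1.0 -> monotone
print(kendalls_tau([3, 2, 1, 0]))   # -1.0 -> fully reversed
print(kendalls_tau([2, 3, 0, 1]))   # 2 concordant, 4 discordant -> -0.33...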
2020
22
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2430–2441 July 5 - 10, 2020. ©2020 Association for Computational Linguistics

Learning an Unreferenced Metric for Online Dialogue Evaluation

Koustuv Sinha∗, 1,2,3 Prasanna Parthasarathi, 1,2 Jasmine Wang, 1 Ryan Lowe, 1,2,4 William L. Hamilton, 1,2 and Joelle Pineau 1,2,3
1 School of Computer Science, McGill University, Canada
2 Quebec Artificial Intelligence Institute (Mila), Canada
3 Facebook AI Research (FAIR), Montreal, Canada
4 OpenAI

Abstract

Evaluating the quality of a dialogue interaction between two agents is a difficult task, especially in open-domain chit-chat style dialogue. There have been recent efforts to develop automatic dialogue evaluation metrics, but most of them do not generalize to unseen datasets and/or need a human-generated reference response during inference, making it infeasible for online evaluation. Here, we propose an unreferenced automated evaluation metric that uses large pre-trained language models to extract latent representations of utterances, and leverages the temporal transitions that exist between them. We show that our model achieves higher correlation with human annotations in an online setting, while not requiring true responses for comparison during inference.

1 Introduction

Recent approaches in deep neural language generation have opened new possibilities in dialogue generation (Serban et al., 2017; Weston et al., 2018). Most of the current language generation efforts are centered around language modelling or machine translation (Ott et al., 2018), which are evaluated by comparing directly against the reference sentences. In dialogue, however, comparing with a single reference response is difficult, as there can be many reasonable responses given a context that have nothing to do with each other (Liu et al., 2016). Still, dialogue research papers tend to report scores based on word-overlap metrics from the machine translation literature (e.g. BLEU (Papineni et al., 2002), METEOR (Denkowski and Lavie, 2014)). However, word-overlap metrics aggressively penalize the generated response based on lexical differences with the ground truth and correlate poorly to human judgements (Liu et al., 2016).

∗ Corresponding author: [email protected]. Code for reproducing the experiments is available at https://github.com/facebookresearch/online_dialog_eval.

Figure 1: Model architecture for MaUdE, which is an unsupervised unreferenced metric for dialog evaluation.

One can build dialogue evaluation metrics in two ways: referenced metrics, which compare the generated response with a provided ground-truth response (such as the above word-overlap metrics), or unreferenced metrics, which evaluate the generated response without any such comparison. Lowe et al. (2017) propose a learned referenced metric named ADEM, which learns an alignment score between context and response to predict human score annotations. However, since the score is trained to mimic human judgements, it requires collecting large-scale human annotations on the dataset in question and cannot be easily applicable to new datasets (Lowe, 2019). Recently, Tao et al. (2017) proposed a hybrid referenced-unreferenced metric named RUBER, where the metric is trained without requiring human responses by bootstrapping negative samples directly from the dataset.
However, referenced metrics (including RUBER, as it is part referenced) are not feasible for evaluation of dialogue models in an online setting—when the model is pitched against a human agent (model-human) or a model agent (model-model)—due to the lack of a reference response. In this setting, models are usually evaluated directly by humans, which is costly and requires careful annotator training (Li et al., 2019).

The contributions of this paper are (1) a completely unsupervised unreferenced metric MAUDE (Metric for automatic Unreferenced dialogue evaluation), which leverages state-of-the-art pre-trained language models (Devlin et al., 2018; Sanh et al., 2019), combined with a novel discourse-structure aware text encoder and contrastive training approach; and (2) results showing that MAUDE has good correlation with human judgements.

2 Background

We consider the problem of evaluating the response of a dialogue system, where an agent is provided with a sequence of sentences (or utterances) c = {u1, u2, ..., un} (termed the context) to generate a response r = u_{n+1}. Each utterance, ui, can be represented as a set of words ui = {w1, w2, ..., wn}. An utterance ui can be represented as a vector hi = fe(ui), where fe is an encoder that encodes the words into a fixed vector representation. This work focuses on the evaluation of generative neural dialogue models, which typically consist of an encoder-decoder style architecture that is trained to generate u_{n+1} word-by-word (Serban et al., 2017). The response of a generative model is typically evaluated by comparing with the ground-truth response using various automatic word-overlap metrics, such as BLEU or METEOR. These metrics, along with ADEM and RUBER, are essentially single-step evaluation metrics, where a score is calculated for each context-response pair. If a dialogue Di contains n utterances, we can extract n − 1 context-response pairs: (c1 : {u1}, r1 : {u2}), (c2 : {u1, u2}, r2 : {u3}), . . . , (c_{n−1} : {u1 . . . u_{n−1}}, r_{n−1} : un). In this paper, we are interested in devising a scalar metric that can evaluate the quality of a context-response pair: score(ci, ri) = R ∈ (0, 1). A key benefit of this approach is that this metric can be used to evaluate online and also for better training and optimization, as it provides partial credit during response generation.

3 Proposed model

We propose a new model, MAUDE, for online unreferenced dialogue evaluation. We first describe the general framework behind MAUDE, which is inspired by the task of measuring alignment in natural language inference (NLI) (Williams et al., 2017). It involves training text encoders via noise contrastive estimation (NCE) to distinguish between valid dialogue responses and carefully generated negative examples. Following this, we introduce our novel text encoder that is designed to leverage the unique structural properties of dialogue.

MAUDE is designed to output a scalar score(ci, ri) = R ∈ (0, 1), which measures how appropriate a response ri is given a dialogue context ci. This task is analogous to measuring alignment in NLI, but instead of measuring entailment or contradiction, our notion of alignment aims to quantify the quality of a dialogue response. As in NLI, we approach this task by defining encoders f_e^θ(c) and f_e^θ(r) to encode the context and response, a combination function f_comb(.) to combine the representations, and a final classifier f_t(.), which outputs the alignment score:

score(c, r) = σ(f_t(f_comb(f_e^{θ1}(c), f_e^{θ2}(r)))).    (1)
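Before turning to training, a small sketch of the single-step formulation from Section 2: a dialogue of n utterances yields n − 1 (context, response) pairs, each of which is scored independently by the metric above. The function name is illustrative.

def context_response_pairs(utterances):
    """Given a dialogue [u1, u2, ..., un], return the n-1 single-step
    evaluation pairs (c_i, r_i) where c_i = [u1..ui] and r_i = u_{i+1}."""
    return [(utterances[:i], utterances[i]) for i in range(1, len(utterances))]

dialogue = ["hi , how are you ?",
            "i am good , just got back from a hike .",
            "nice ! where did you go ?"]
for context, response in context_response_pairs(dialogue):
    print(context, "->", response)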
(1) The key idea behind an unreferenced dialogue metric is the use of Noise Contrastive Estimation (NCE) (Gutmann and Hyv¨arinen, 2010) for training. Specifically, we train the model to differentiate between a correct response (score(c, r) →1), and a negative response (score(c, ˆr) →0), where ˆr represents a candidate false response for the given context c. The loss to minimize contains one positive example and a range of negative examples chosen from a sampling policy P(ˆr): L = −log(score(c, r))−Eˆr∼P(ˆr) log(−score(c, ˆr)). (2) The sampling policy P(ˆr) consists of syntactic and semantic negative samples. Syntactic negative samples. We consider three variants of syntax level adversarial samples: wordorder (shuffling the ordering of the words of r), word-drop (dropping x% of words in r) and wordrepeat (randomly repeating words in r). Semantic negative samples. We also consider three variants of negative samples that are syntactically well formed, but represent corruption in the semantic space. First, we choose a response rj which is chosen at random from a different dialogue such that rj ̸= ri (random utterance). Second, we use a pre-trained seq2seq model on the dataset, and pair random seq2seq generated response with ri (random seq2seq). Third, to provide a bigger variation of semantically negative samples, for each ri we generate high-quality paraphrases 2432 rb i using Back-Translation (Edunov et al., 2018). We pair random Back-Translations rb j with ri as in the above setup (random back-translation). We also provide the paired rb i as positive example for the models to learn variation in semantic similarity. We further discuss the effect of different sampling policies in Appendix C. Dialogue-structure aware encoder. Traditional NLI approaches (e.g., Conneau et al. (2017)) use the general setup of Equation 1 to score contextresponse pairs. The encoder fe is typically a Bidirectional LSTM—or, more recently, a BERT-based model (Devlin et al., 2018), which uses a large pre-trained language model. fcomb is defined as in Conneau et al. (2017): fcomb(u, v) = concat([u, v, u ∗v, u −v]). (3) However, the standard text encoders used in these traditional NLI approaches ignore the temporal structure of dialogues, which is critical in our setting where the context is composed of a sequence of distinct utterances, with natural and stereotypical transitions between them. (See Appendix A for a qualitative analysis of these transitions). Thus we propose a specialized text encoder for MAUDE, which uses a BERT-based encoder fBERT e but additionally models dialogue transitions using a recurrent neural network: hui = DgfBERT e (ui), h′ ui+1 = fR(hui, h′ ui), ci = W.pool∀t∈{u1,...,un−1}(h′ t) score(ci, ri) = σ(ft([hri, ci, hri ∗ci, hri −ci])), (4) where hui ∈Rd is a downsampled BERT representation of the utterance ui (using a global learned mapping Dg ∈RB×d). h′ ui is the hidden representation of fR for ui, where fR is a Bidirectional LSTM. The final representation of the dialogue context is learned by pooling the individual hidden states of the RNN using max-pool (Equation 4). This context representation is mapped into the response vector space using weight W, to obtain ci. We then learn the alignment score between the context ci and response ri’s representation hri following Equation 1, by using the combination function fcomb being the same as in Equation 3. 
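To make the scoring pipeline of Equations 1, 3 and 4 concrete, the following is a minimal PyTorch sketch of the dialogue-structure-aware scorer. It assumes each utterance has already been encoded into a fixed-size vector by a (Distil)BERT encoder; the 300-dimensional downsampling and the two-layer, 200-unit classifier with dropout 0.2 follow the hyperparameters reported in Appendix E, while all other details (module names, batch layout) are illustrative rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class MaudeScorer(nn.Module):
    """Sketch of the dialogue-structure-aware scorer (Eqs. 1, 3, 4).

    Assumption: utterances arrive pre-encoded as fixed-size vectors of
    dimension `bert_dim` from a fine-tuned BERT-style encoder.
    """

    def __init__(self, bert_dim=768, d=300, hidden=300):
        super().__init__()
        self.downsample = nn.Linear(bert_dim, d)           # D_g in Eq. 4
        self.transition = nn.LSTM(d, hidden, batch_first=True,
                                  bidirectional=True)       # f_R over context utterances
        self.project = nn.Linear(2 * hidden, d)             # W: context -> response space
        self.classifier = nn.Sequential(                    # f_t applied to f_comb (Eq. 3)
            nn.Linear(4 * d, 200), nn.ReLU(), nn.Dropout(0.2), nn.Linear(200, 1))

    def forward(self, context_utt_vecs, response_vec):
        # context_utt_vecs: (batch, n_utts, bert_dim); response_vec: (batch, bert_dim)
        h_u = self.downsample(context_utt_vecs)              # (batch, n_utts, d)
        h_prime, _ = self.transition(h_u)                    # (batch, n_utts, 2*hidden)
        c = self.project(h_prime.max(dim=1).values)          # max-pool over utterances
        h_r = self.downsample(response_vec)                  # response representation h_ri
        comb = torch.cat([h_r, c, h_r * c, h_r - c], dim=-1) # f_comb (Eq. 3)
        return torch.sigmoid(self.classifier(comb)).squeeze(-1)  # score in (0, 1)
```

During training, this score is pushed towards 1 for observed context-response pairs and towards 0 for negatives drawn from P(r̂), following the NCE objective of Equation 2.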
4 Experiments To empirically evaluate our proposed unreferenced dialogue evaluation metric, we are interested in answering the following key research questions: • Q1: How robust is our proposed metric on different types of responses? • Q2: How well does the self-supervised metric correlate with human judgements? Datasets. For training MAUDE, we use PersonaChat (Zhang et al., 2018), a large-scale opendomain chit-chat style dataset which is collected by human-human conversations over provided user persona. We extract and process the dataset using ParlAI (Miller et al.) platform. We use the public train split for our training and validation, and the public validation split for testing. We use the human-human and human-model data collected by See et al. (2019) for correlation analysis, where the models themselves are trained on PersonaChat. Baselines. We use InferSent (Conneau et al., 2017) and unreferenced RUBER as LSTM-based baselines. We also compare against BERT-NLI, which is the same as the InferSent model but with the LSTM encoder replaced with a pre-trained BERT encoder. Note that these baselines can be viewed as ablations of the MAUDE framework using simplified text encoders, since we use the same NCE training loss to provide a fair comparison. Also, note that in practice, we use DistilBERT (Sanh et al., 2019) instead of BERT in both MAUDE and the BERT-NLI baseline (and thus we refer to the BERT-NLI baseline as DistilBERT-NLI).1. 4.1 Evaluating MAUDE on different types of responses We first analyze the robustness of MAUDE by comparing with the baselines, by using the same NCE training for all the models for fairness. We evaluate the models on the difference score, ∆= score(c, rground-truth)−score(c, r) (Table 6). ∆provides an insight on the range of score function. An optimal metric would cover the full range of good and bad responses. We evaluate response r in three settings: Semantic Positive: responses that are semantically equivalent to the ground truth response; Semantic Negative: responses that are semantically opposite to the ground truth response; and Syntactic 1DistilBERT is the same BERT encoder with significantly reduced memory footprint and training time, which is trained by knowledge distillation (Bucilu et al., 2006; Hinton et al., 2015) on the large pre-trained model of BERT. 2433 R IS DNLI M Semantic Positive ↓ BackTranslation 0.249 0.278 0.024 0.070 Seq2Seq 0.342 0.362 0.174 0.308 Semantic Negative ↑ Random Utterance 0.152 0.209 0.147 0.287 Random Seq2Seq 0.402 0.435 0.344 0.585 Syntactic Negative ↑ Word Drop 0.342 0.367 0.261 0.3 Word Order 0.392 0.409 0.671 0.726 Word Repeat 0.432 0.461 0.782 0.872 Table 1: Metric score evaluation (∆= score(c, rground-truth)− score(c, r)) between RUBER (R), InferSent (IS), DistilBERTNLI (DNI) and MAUDE (M) on PersonaChat dataset’s public validation set. For Semantic Positive tests, lower ∆is better; for all Negative tests higher ∆is better. Negative: responses that have been adversarially modified in the lexical units. Ideally, we would want ∆→1 for semantic and syntactic negative responses, ∆→0 for semantic positive responses. We observe that the MAUDE scores perform robustly across all the setups. RUBER and InferSent baselines are weak, quite understandably so because they cannot leverage the large pre-trained language model data and thus is poor at generalization. DistilBERT-NLI baseline performs significantly better than InferSent and RUBER, while MAUDE scores even better and more consistently overall. 
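As a concrete illustration of how the Δ diagnostic in Table 1 can be computed, the sketch below pairs the syntactic corruptions described in Section 3 (word-order, word-drop, word-repeat) with a trained scorer. The corruption rates and the `score_fn` interface are assumptions made for illustration, not the exact settings used in the paper.

```python
import random

def word_order(tokens):
    """Syntactic negative: shuffle the word order of the response."""
    shuffled = tokens[:]
    random.shuffle(shuffled)
    return shuffled

def word_drop(tokens, frac=0.3):
    """Syntactic negative: drop a fraction of the words (fraction is illustrative)."""
    kept = [t for t in tokens if random.random() > frac]
    return kept or tokens[:1]

def word_repeat(tokens, p=0.3):
    """Syntactic negative: randomly repeat words in the response."""
    out = []
    for t in tokens:
        out.append(t)
        if random.random() < p:
            out.append(t)
    return out

def delta(score_fn, context, gold_response, perturbed_response):
    """Delta = score(c, r_ground-truth) - score(c, r), as reported in Table 1."""
    return score_fn(context, gold_response) - score_fn(context, perturbed_response)
```

A robust metric should yield Δ close to 0 for semantic positives (back-translations, paraphrases) and Δ close to 1 for the corrupted responses above.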
We provide a detailed ablation of various training scenarios as well as the absolute raw ∆scores in Appendix C. We also observe both MAUDE and DistilBERT-NLI to be more robust on zero-shot generalization to different datasets, the results of which are available in Appendix B. 4.2 Correlation with human judgements Metrics are evaluated on correlation with human judgements (Lowe et al., 2017; Tao et al., 2017), or by evaluating the responses of a generative model trained on the metric (Wieting et al., 2019), by human evaluation. However, this introduces a bias either during the questionnaire setup or during data post-processing in favor of the proposed metric. In this work, we refrain from collecting human annotations ourselves, but refer to the recent work by See et al. (2019) on PersonaChat dataset. Thus, the evaluation of our metric is less subject to bias. See et al. (2019) conducted a large-scale human evaluation of 28 model configurations to study the effect of controllable attributes in dialogue generation. We use the publicly released model-human and human-human chat logs from See et al. (2019) to generate the scores on our models, and correlate them with the associated human judgement on a Likert scale. See et al. (2019) propose to use a multi-step evaluation methodology, where the huR IS DNLI M Fluency 0.322 0.246 0.443 0.37 Engagingness 0.204 0.091 0.192 0.232 Humanness 0.057 -0.108 0.129 0.095 Making Sense 0.0 0.005 0.256 0.208 Inquisitiveness 0.583 0.589 0.598 0.728 Interestingness 0.275 0.119 0.135 0.24 Avoiding Repetition 0.093 -0.118 -0.039 -0.035 Listening 0.061 -0.086 0.124 0.112 Mean 0.199 0.092 0.23 0.244 Table 2: Correlation with calibrated scores between RUBER (R), InferSent (IS), DistilBERT-NLI (DNI) and MAUDE (M) when trained on PersonaChat dataset man annotators rate the entire dialogue and not a context-response pair. On the other hand, our setup is essentially a single-step evaluation method. To align our scores with the multi-turn evaluation, we average the individual turns to get an aggregate score for a given dialogue. Figure 2: Human correlation on un-calibrated scores collected on PersonaChat dataset (Zhang et al., 2018), for MAUDE, DistilBERT-NLI, InferSent and RUBER We investigate the correlation between the scores and uncalibrated individual human scores from 100 crowdworkers (Fig. 2), as well as aggregated scores released by See et al. (2019) which are adjusted for annotator variance by using Bayesian calibration (Kulikov et al., 2018) (Table 2). In all cases, we report Spearman’s correlation coefficients. For uncalibrated human judgements, we observe MAUDE having higher relative correlation in 6 out of 8 quality measures. Interestingly, in case of calibrated human judgements, DistilBERT proves to be better in half of the quality measures. MAUDE achieves marginally better overall correlation for calibrated human judgements, due to significantly strong correlation on specifically two measures: Interestingness and Engagingness. These measures answers the questions “How interesting or boring did you find this conversation?” and “How much did you enjoy talking to this user?”. (Refer to Appendix B of See et al. (2019) for the full 2434 list of questions). Overall, using large pre-trained language models provides significant boost in the human correlation scores. 5 Conclusion In this work, we explore the feasibility of learning an automatic dialogue evaluation metric by leveraging pre-trained language models and the temporal structure of dialogue. 
We propose MAUDE, which is an unreferenced dialogue evaluation metric that leverages sentence representations from large pretrained language models, and is trained via Noise Contrastive Estimation. MAUDE also learns a recurrent neural network to model the transition between the utterances in a dialogue, allowing it to correlate better with human annotations. This is a good indication that MAUDE can be used to evaluate online dialogue conversations. Since it provides immediate continuous rewards and at the singlestep level, MAUDE can be also be used to optimize and train better dialogue generation models, which we want to pursue as future work. Acknowledgements We would like to thank the ParlAI team (Margaret Li, Stephen Roller, Jack Urbanek, Emily Dinan, Kurt Shuster and Jason Weston) for technical help, feedback and encouragement throughout this project. We would like to thank Shagun Sodhani and Alborz Geramifard for helpful feedback on the manuscript. We would also like to thank William Falcon and the entire Pytorch Lightning community for making research code awesome. We are grateful to Facebook AI Research (FAIR) for providing extensive compute / GPU resources and support regarding the project. This research, with respect to Quebec Artificial Intelligence Institute (Mila) and McGill University, was supported by the Canada CIFAR Chairs in AI program. References Layla El Asri, Hannes Schulz, Shikhar Sharma, Jeremie Zumer, Justin Harris, Emery Fine, Rahul Mehrotra, and Kaheer Suleman. 2017. Frames: A corpus for adding memory to goal-oriented dialogue systems. arXiv. Cristian Bucilu, Rich Caruana, and Alexandru Niculescu-Mizil. 2006. Model compression. In Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining. Paweł Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, I˜nigo Casanueva, Ultes Stefan, Ramadan Osman, and Milica Gaˇsi´c. 2018. Multiwoz - a largescale multi-domain wizard-of-oz dataset for taskoriented dialogue modelling. In Proceedings of EMNLP. MultiWoz CORPUS licensed under CCBY 4.0. Alexis Conneau, Douwe Kiela, Holger Schwenk, Loic Barrault, and Antoine Bordes. 2017. Supervised Learning of Universal Sentence Representations from Natural Language Inference Data. Michael Denkowski and Alon Lavie. 2014. Meteor universal: Language specific translation evaluation for any target language. In Proceedings of the ninth workshop on statistical machine translation. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv. Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. Understanding back-translation at scale. In Proceedings of ACL. W.A. Falcon. 2019. Pytorch lightning. https://github.com/williamFalcon/ pytorch-lightning. Michael Gutmann and Aapo Hyv¨arinen. 2010. Noisecontrastive estimation: A new estimation principle for unnormalized statistical models. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics. Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv. Ilya Kulikov, Alexander H Miller, Kyunghyun Cho, and Jason Weston. 2018. Importance of a search strategy in neural dialogue modelling. arXiv. Margaret Li, Jason Weston, and Stephen Roller. 2019. Acute-eval: Improved dialogue evaluation with optimized questions and multi-turn comparisons. arXiv. Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, and Shuzi Niu. 2017. 
Dailydialog: A manually labelled multi-turn dialogue dataset. In Proceedings of IJCNLP. Chia-Wei Liu, Ryan Lowe, Iulian V. Serban, Michael Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How NOT To Evaluate Your Dialogue System: An Empirical Study of Unsupervised Evaluation Metrics for Dialogue Response Generation. arXiv. Ryan Lowe. 2019. A Retrospective for “Towards an Automatic Turing Test - Learning to Evaluate Dialogue Responses”. ML Retrospectives. 2435 Ryan Lowe, Michael Noseworthy, Iulian V. Serban, Nicolas Angelard-Gontier, Yoshua Bengio, and Joelle Pineau. 2017. Towards an Automatic Turing Test: Learning to Evaluate Dialogue Responses. arXiv. A. H. Miller, W. Feng, A. Fisch, J. Lu, D. Batra, A. Bordes, D. Parikh, and J. Weston. Parlai: A dialog research software platform. arXiv. Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. 2018. Scaling neural machine translation. In Proceedings of the Third Conference on Machine Translation (WMT). Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of ACL. Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv. Abigail See, Stephen Roller, Douwe Kiela, and Jason Weston. 2019. What makes a good conversation? How controllable attributes affect human judgments. arXiv. Iulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2017. A hierarchical latent variable encoder-decoder model for generating dialogues. In Proceedings of AAAI. Chongyang Tao, Lili Mou, Dongyan Zhao, and Rui Yan. 2017. RUBER: An Unsupervised Method for Automatic Evaluation of Open-Domain Dialog Systems. arXiv. Jason Weston, Emily Dinan, and Alexander H Miller. 2018. Retrieve and refine: Improved sequence generation models for dialogue. arXiv. John Wieting, Taylor Berg-Kirkpatrick, Kevin Gimpel, and Graham Neubig. 2019. Beyond BLEU:Training Neural Machine Translation with Semantic Similarity. In Proceedings of ACL, Florence, Italy. Adina Williams, Nikita Nangia, and Samuel R Bowman. 2017. A broad-coverage challenge corpus for sentence understanding through inference. arXiv. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R’emi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface’s transformers: State-of-the-art natural language processing. arXiv. Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Personalizing dialogue agents: I have a dog, do you have pets too? arXiv. 2436 A Temporal Structure We hypothesize that a good encoding function can capture the structure that exists in dialogue. Often this translates to capturing the semantics, coherency in dialogue which are some of the key attributes of a conversation. Formally, we propose using a function fDi t which maps one utterance to the next. hui+1 = fDi t (hui) (5) To define a good encoding function, we turn to pre-trained language models. These models are typically trained on large corpus and achieve stateof-the-art results on a range of language understanding tasks (Ott et al., 2018). To validate our hypothesis, we use a pre-trained (and fine-tuned) BERT (Devlin et al., 2018) as fe. We compute hui = fe(ui)∀ui ∈D, and learn a linear classifier to predict an approximate position of the ui ∈Di. 
The task has details in its design, in the case of goal-oriented dialogues the vocabulary differs in different parts of the conversation and in chitchat dialogues it cannot be said. To experiment, we choose PersonaChat (Zhang et al., 2018) and DailyDialog (Li et al., 2017) to be nominal of chit-chat style data, and Frames (Asri et al., 2017) and MultiWOZ (Budzianowski et al., 2018) for goal-oriented data. We encode every consecutive pairs of the utterances with a % score, t, that denotes its occurrence after the completion of t% of dialogue. tup = indexup + 1 k (6) where indexup denote the average of the indices in the pair of the utterances and k denote the total number of utterances in dialogue. Now, we pre-define the number of bins B. We split the range 0-100 into B non-overlapping sets(every set will have min and max denoted by si min and si max respectively). We parse every dialogue in the dataset, and place the encoding of every utterance pair in the corresponding bin. binup = {i | tup > si min&si max > tup} (7) We then use Linear Discriminant Analysis (LDA) to predict the bin of each utterance ui in the dialogue after converting the high dimensional embedding into 2 dimensions. LDA provides the best possible class conditioned representation of data. This gives us a downsampled representation of each utterance ui which we plot as shown in Figure 3. The reduction on BERT encoding to 2dimensions shows that BERT is useful in nudging the encoded utterances towards useful structures. We see well defined clusters in goal-oriented but not-so-well-defined clusters in open domain dialogues. This is reasonable to expect and intuitive. B Generalization on unseen dialog datasets In order for a dialogue evaluation metric to be useful, one has to evaluate how it generalizes to unseen data. We performed the evaluation using our trained models on PersonaChat dataset, and then evaluated them zero-shot on two goal-oriented datasets, Frames (Asri et al., 2017) and MultiWoz (Budzianowski et al., 2018), and one chit-chat style dataset: Daily Dialog (Li et al., 2017) (Table 3). We find BERT-based models are significantly better at generalization than InferSent or RUBER, with MAUDE marginally better than DistilBERT-NLI baseline. MAUDE has the biggest impact on generalization to DailyDialog dataset, which suggests that it captures the commonalities of chit-chat style dialogue from PersonaChat. Surprisingly, generalization gets significantly better of BERT-based models on goal-oriented datasets as well. This suggests that irrespective of the nature of dialogue, pre-training helps because it contains the information common to English language lexical items. C Noise Contrastive Estimation training ablations The choice of negative samples (Section 3) for Noise Contrastive Estimation can have a large impact on the test-time scores of the metrics. In this section, we show the effect when we train only using syntactic negative samples (Table 4) and only semantic negative samples (Table 5). For comparison, we show the full results when trained using both of the sampling scheme in Table 6. We find overall training only using either syntactic or semantic negative samples achieve less ∆than training using both of the schemes. All models achieve high scores on the semantic positive samples when only trained with syntactical adversaries. However, training only with syntactical negative samples results in adverse effect on detecting semantic negative items. 
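For reference, a minimal sketch of the utterance-position analysis from Appendix A: each consecutive utterance pair is assigned a relative position t_up = (index_up + 1)/k, placed into one of B position bins, and the high-dimensional pair encodings are projected to two dimensions with LDA (Figure 3). Equal-width bins and the scikit-learn implementation of LDA are assumptions; only the position formula and the use of LDA come from the paper.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def position_bins(num_utts, n_bins=10):
    """Relative position t_up = (index_up + 1) / k for each consecutive utterance
    pair, mapped to one of n_bins equal-width bins over [0, 1] (Eqs. 6-7)."""
    k = num_utts
    bins = []
    for i in range(k - 1):
        index_up = (i + (i + 1)) / 2.0            # average index of the utterance pair
        t_up = (index_up + 1) / k
        bins.append(min(int(t_up * n_bins), n_bins - 1))
    return bins

def project_2d(pair_encodings, bin_labels):
    """LDA projection of high-dimensional utterance-pair encodings to 2-D,
    class-conditioned on the position bin, as plotted in Figure 3."""
    lda = LinearDiscriminantAnalysis(n_components=2)
    return lda.fit_transform(np.asarray(pair_encodings), np.asarray(bin_labels))
```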
2437 Datasets DailyDialog Frames MultiWOZ Model Eval Mode Score ∆ Score ∆ Score ∆ RUBER + 0.173 ±0.168 0.211 ±0.172 0.253 ±0.177 − 0.063 ±0.092 0.11 0.102 ±0.114 0.109 0.121 ±0.123 0.123 InferSent + 0.163 ±0.184 0.215 ±0.186 0.277 ±0.200 − 0.050 ±0.085 0.113 0.109 ±0.128 0.106 0.127 ±0.133 0.15 DistilBERT NLI + 0.885 ±0.166 0.744 ±0.203 0.840 ±0.189 − 0.575 ±0.316 0.31 0.538 ±0.330 0.206 0.566 ±0.333 0.274 MAUDE + 0.782 ±0.248 0.661 ±0.293 0.758 ±0.265 − 0.431 ±0.300 0.351 0.454 ±0.358 0.207 0.483 ±0.345 0.275 Table 3: Zero-shot generalization results on DailyDialog, Frames and MultiWOZ dataset for the baselines and MAUDE. + denotes semantic positive responses, and −denotes semantic negative responses. PersonaChat Dataset Model RUBER InferSent DistilBERT NLI MAUDE Training Modes Only Semantics Only Semantics Only Semantics Only Semantics Evaluation Modes Score ∆ Score ∆ Score ∆ Score ∆ Semantic Positive Gold Truth Response 0.443±0.197 0 0.466±0.215 0 0.746±0.236 0 0.789±0.244 0 BackTranslation 0.296±0.198 0.147 0.273±0.195 0.192 0.766±0.235 -0.02 0.723±0.277 0.066 Seq2Seq 0.082±0.163 0.361 0.10±0.184 0.367 0.46±0.357 0.286 0.428±0.390 0.361 Semantic Negative Random Utterance 0.299±0.203 0.144 0.287±0.208 0.178 0.489±0.306 0.257 0.388±0.335 0.40 Random Seq2Seq 0.028±0.077 0.415 0.036±0.082 0.429 0.237±0.283 0.529 0.16±0.26 0.629 Syntactic Negative Word Drop 0.334±0.206 0.109 0.308±0.217 0.158 0.802±0.224 -0.056 0.73±0.29 0.059 Word Order 0.472±0.169 -0.029 0.482±0.19 -0.016 0.685±0.284 0.061 0.58±0.35 0.209 Word Repeat 0.255±0.24 0.188 0.153±0.198 0.312 0.657±0.331 0.089 0.44±0.39 0.349 Table 4: Metric score evaluation between InferSent, DistilBERT-NLI and MAUDE on PersonaChat dataset, trained on P(ˆr) = Semantics. Bold scores represent the best individual scores, and bold with blue represents the best difference with the true response. PersonaChat Dataset Model RUBER InferSent DistilBERT NLI MAUDE Training Modes Only Syntax Only Syntax Only Syntax Only Syntax Evaluation Modes Score ∆ Score ∆ Score ∆ Score ∆ Semantic Positive Gold Truth Response 0.891±0.225 0 0.893±0.231 0 0.986±0.088 0 0.99±0.07 0 BackTranslation 0.687±0.363 0.204 0.672±0.387 0.221 0.877±0.268 0.109 0.91±0.23 0.08 Seq2Seq 0.929±0.187 -0.038 0.949±0.146 -0.055 0.996±0.048 -0.01 0.99±0.05 0.00 Semantic Negative Random Utterance 0.869±0.248 0.022 0.835±0.294 0.058 0.977±0.116 0.009 0.97±0.13 0.02 Random Seq2Seq 0.915±0.196 -0.024 0.904±0.206 -0.011 0.994±0.057 -0.008 0.99±0.08 0 Syntactic Negative Word Drop 0.119±0.255 0.772 0.105±0.243 0.788 0.373±0.414 0.613 0.41±0.44 0.584 Word Order 0.021±0.101 0.87 0.015±0.0915 0.878 0.064±0.194 0.922 0.07±0.21 0.928 Word Repeat 0.001±0.007 0.89 0.001±0.020 0.893 0.006±0.057 0.980 0.01±0.06 0.981 Table 5: Metric score evaluation between InferSent, DistilBERT-NLI and MAUDE on PersonaChat dataset, trained on P(ˆr) = Syntax. Bold scores represent the best individual scores, and bold with blue represents the best difference with the true response. 
2438 Figure 3: From left to right, LDA downsampled representation of BERT on Frames (Goal oriented), MultiWOZ (Goal oriented), PersonaChat (chit-chat) and DailyDialog (chit-chat) PersonaChat Dataset Model RUBER InferSent DistilBERT NLI MAUDE Training Modes All All All All Evaluation Modes Score ∆ Score ∆ Score ∆ Score ∆ Semantic Positive Gold Truth Response 0.432±0.213 0 0.462±0.254 0 0.824±0.154 0 0.909±0.152 0 BackTranslation 0.183±0.198 0.249 0.184±0.218 0.278 0.8±0.19 0.024 0.838±0.227 0.070 Seq2Seq 0.09±0.17 0.342 0.10±0.184 0.362 0.65±0.287 0.174 0.6008±0.38 0.308 Semantic Negative Random Utterance 0.28±0.21 0.152 0.252±0.236 0.209 0.677±0.255 0.147 0.621±0.344 0.287 Random Seq2Seq 0.03±0.09 0.402 0.026±0.079 0.435 0.48±0.313 0.344 0.323±0.355 0.585 Syntactic Negative Word Drop 0.09±0.16 0.342 0.094±0.17 0.367 0.563±0.377 0.261 0.609±0.401 0.3 Word Order 0.04±0.10 0.392 0.052±0.112 0.409 0.153±0.29 0.671 0.182±0.327 0.726 Word Repeat 0.00±0.01 0.432 0.001±0.010 0.461 0.041±0.153 0.782 0.036±0.151 0.872 Table 6: Metric score evaluation between InferSent, DistilBERT-NLI and MAUDE on PersonaChat dataset, trained on P(ˆr) = Syntax + Semantics. Bold scores represent the best individual scores, and bold with blue represents the best difference with the true response. D Qualitative Evaluation We investigate qualitatively how the scores of different models are on the online evaluation setup on See et al. (2019)’c collected data. In Figure 4, we show a sample conversation where a human evaluator is pitched against a strong model. Here, MAUDE scores correlate strongly with raw likert scores on different metrics. We observe that RUBER and InferSent baselines overall correlate negatively with the response. In Figure 5, we show another sample where a human evaluator is pitched against a weak model, which exhibits degenerate responses. We see both MAUDE and DistilBERTNLI correlate strongly with human annotation and provides a very low score, compared to RUBER or InferSent. Since we essentially cherry-picked good results, its only fair to show a similarly cherry-picked negative example of MAUDE. We sampled from responses where MAUDE scores are negatively correlated with human annotations on Inquisitiveness metric (5% of cases), and we show one of those responses in Figure 6. We notice how both DistilBERT-NLI and MAUDE fails to recognize the duplication of utterances which leads to a low overall score. This suggests there still exists room for improvement in developing MAUDE, possibly by training the model to detect degeneracy in the context. E Hyperparameters and Training Details We performed rigorous hyperparameter search to tune our model MAUDE. We train MAUDE with downsampling, as we observe poor results when we run the recurrent network on top of 768 dimensions. Specifically, we downsample to 300 dimensions, which is the same used by our baselines RUBER and InferSent in their respective encoder representations. We also tested with the choice of either learning a PCA to downsample the BERT representations vs learning the mapping Dg (Equation 4), and found the latter producing better results. We keep the final decoder same for all models, which is a two layer MLP with hidden layer of size 200 dimensions and dropout 0.2. For BERT-based models (DistilBERT-NLI and MAUDE), we use HuggingFace Transformers (Wolf et al., 2019) to first fine-tune the training dataset on language model objective. 
We tested with training on frozen finetuned representations in our initial experiments, but fine-tuning end-to-end lead to better ablation scores. For all models we train using Adam optimizer with 0.0001 as the learning rate, early stopping till validation loss doesn’t improve. For the sake of easy reproducibility, we use Pytorch Lightning (Falcon, 2019) framework. We used 8 Nvidia-TitanX GPUs 2439 Figure 4: An example of dialogue conversation between human and a strong model, where MAUDE (M) score correlates positively with human annotations. Raw Likert scores for the entire dialogue are: Engagingness : 3, Interestingness : 3, Inquisitiveness : 2, Listening : 3, Avoiding Repetition : 3, Fluency : 4, Making Sense : 4, Humanness : 3, Persona retrieval : 1. Baselines are RUBER (R), InferSent (I) and BERT-NLI (B). on a DGX Server Workstation to train faster using Pytorch Distributed Data Parallel (DDP). 2440 Figure 5: An example of dialogue conversation between human and a weak model, where MAUDE (M) score correlates positively with human annotations. Raw Likert scores for the entire dialogue are: Engagingness : 1, Interestingness : 4, Inquisitiveness : 1, Listening : 1, Avoiding Repetition : 3, Fluency : 1, Making Sense : 2, Humanness : 1, Persona retrieval : 1. In our setup we only score responses only following a human response. Baselines are RUBER (R), InferSent (I) and BERT-NLI (B). 2441 Figure 6: An example of dialogue conversation between human and a model, where MAUDE (M) score correlates negatively with human annotations. Raw Likert scores for the entire dialogue are: Engagingness : 1, Interestingness : 1, Inquisitiveness : 2, Listening : 2, Avoiding Repetition : 2, Fluency : 3, Making Sense : 4, Humanness : 2, Persona retrieval : 1. Baselines are RUBER (R), InferSent (I) and BERT-NLI (B).
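Appendix E describes the optimization setup (NCE loss, Adam with learning rate 1e-4, early stopping, PyTorch Lightning with distributed data parallel). The sketch below shows one way that setup could look as a LightningModule; the batch format, the scorer interface, and writing the negative term as log(1 − score(c, r̂)) (the standard binary NCE form of Equation 2) are assumptions made for illustration, not the authors' released code.

```python
import torch
import pytorch_lightning as pl

class MaudeModule(pl.LightningModule):
    """Sketch of NCE training with Adam (lr 1e-4) as a LightningModule.

    Assumption: `scorer` behaves like the scorer sketched earlier, and each
    batch provides (context, positive response, list of negative responses).
    """

    def __init__(self, scorer):
        super().__init__()
        self.scorer = scorer

    def training_step(self, batch, batch_idx):
        c, r_pos, r_negs = batch
        eps = 1e-8
        loss = -torch.log(self.scorer(c, r_pos) + eps).mean()
        for r_neg in r_negs:                      # sum over sampled negatives
            loss = loss - torch.log(1.0 - self.scorer(c, r_neg) + eps).mean()
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-4)
```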
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2442–2452 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2442 Neural Generation of Dialogue Response Timings Matthew Roddy and Naomi Harte ADAPT Centre, School of Engineering Trinity College Dublin, Ireland {roddym,nharte}@tcd.ie Abstract The timings of spoken response offsets in human dialogue have been shown to vary based on contextual elements of the dialogue. We propose neural models that simulate the distributions of these response offsets, taking into account the response turn as well as the preceding turn. The models are designed to be integrated into the pipeline of an incremental spoken dialogue system (SDS). We evaluate our models using offline experiments as well as human listening tests. We show that human listeners consider certain response timings to be more natural based on the dialogue context. The introduction of these models into SDS pipelines could increase the perceived naturalness of interactions.1 1 Introduction The components needed for the design of spoken dialogue systems (SDSs) that can communicate in a realistic human fashion have seen rapid advancements in recent years (e.g. Li et al. (2016); Zhou et al. (2018); Skerry-Ryan et al. (2018)). However, an element of natural spoken conversation that is often overlooked in SDS design is the timing of system responses. Many turn-taking components for SDSs are designed with the objective of avoiding interrupting the user while keeping the lengths of gaps and overlaps as low as possible e.g. Raux and Eskenazi (2009). This approach does not emulate naturalistic response offsets, since in human-human conversation the distributions of response timing offsets have been shown to differ based on the context of the first speaker’s turn and the context of the addressee’s response (Sacks et al., 1974; Levinson and Torreira, 2015; Heeman and Lunsford, 2017). It has also been shown that listeners have different anticipations about upcoming 1 Our code is available at https://github.com/ mattroddy/RTNets. 20000 10000 0 10000 20000 30000 0 100000 200000 300000 30000 20000 10000 0 10000 20000 30000 20000 10000 0 10000 20000 30000 0 100000 200000 300000 30000 20000 10000 0 10000 20000 30000 Response Encoder “Are you doing any kind of volunteer work now?” “Not right now...” Feature extraction Inference LSTM User Turn System Turn 0 0 0 0 0 1 Output Probabilities Sampling 1.5 1.0 0.5 0.0 0.5 1.0 1.5 True Predicted 0 Response Offset (Seconds) Figure 1: Overview of how our model generates the distribution of turn-switch offset timings using an encoding of a dialogue system response hz, and features extracted from the user’s speech xn. responses based on the length of a silence before a response (B¨ogels et al., 2019). If we wish to realistically generate offsets distributions in SDSs, we need to design response timing models that take into account the context of the user’s speech and the upcoming system response. For example, offsets where the first speaker’s turn is a backchannel occur in overlap more frequently (Levinson and Torreira, 2015). It has also been observed that dispreferred responses (responses that are not in line with the suggested action in the prior turn) are associated with longer delays (Kendrick and Torreira, 2015; B¨ogels et al., 2019). 2443 Overview We propose a neural model for generating these response timings in SDSs (shown in Fig. 1). 
The response timing network (RTNet) operates using both acoustic and linguistic features extracted from user and system turns. The two main components are an encoder, which encodes the system response hz, and an inference network, which takes a concatenation of user features (xn) and hz. RTNet operates within an incremental SDS framework (Schlangen and Skantze, 2011) where information about upcoming system responses may be available before the user has finished speaking. RTNet also functions independently of higher-level turn-taking decisions that are traditionally made by the dialogue manager (DM) component. Typically, the DM decides when the system should take a turn and also supplies the natural language generation (NLG) component with a semantic representation of the system response (e.g. intents, dialogue acts, or an equivalent neural representation). Any of the system response representations that are downstream from the DM’s output representation (e.g. lexical or acoustic features) can potentially be used to generate the response encoding. Therefore, we assume that the decision for the system to take a turn has already been made by the DM and our objective is to predict (on a frame-by-frame basis) the appropriate time to trigger the system turn. It may be impractical in an incremental framework to generate a full system response and then re-encode it using the response encoder of RTNet. To address this issue, we propose an extension of RTNet that uses a variational autoencoder (VAE) (Kingma and Welling, 2014) to train an interpretable latent space which can be used to bypass the encoding process at inference-time. This extension (RTNet-VAE) allows the benefit of having a data-driven neural representation of response encodings that can be manipulated without the overhead of the encoding process. This representation can be manipulated using vector algebra in a flexible manner by the DM to generate appropriate timings for a given response. Our model’s architecture is similar to VAEs with recurrent encoders and decoders proposed in Bowman et al. (2016); Ha and Eck (2018); Roberts et al. (2018). Our use of a VAE to cluster dialogue acts is similar to the approach used in Zhao et al. (2017). Our vector-based representation of dialogue acts takes inspiration from the ‘attribute vectors’ used in Roberts et al. (2018) for learning musical structure representations. Our model is also related to continuous turn-taking systems (Skantze, 2017) in that our model is trained to predict future speech behavior on a frame-by-frame basis. The encoder uses a multiscale RNN architecture similar to the one proposed in Roddy et al. (2018) to fuse information across modalities. Models that intentionally generate responsive overlap have been proposed in DeVault et al. (2011); Dethlefs et al. (2012). While other models have also been proposed that generate appropriate response timings for fillers (Nakanishi et al., 2018; Lala et al., 2019) and backchannels (Morency et al., 2010; Meena et al., 2014; Lala et al., 2017). This paper is structured as follows: First, we present how our dataset is structured and our training objective. Then, in sections 2.1 and 2.2 we present details of our two models, RTNet and RTNet-VAE. Section 2.3 presents our input feature representations. In section 2.4 we discuss our training and testing procedures. In sections 3.1 and 3.2 we analyze the performance of both RTNet and RTNet-VAE. Finally, in section 4 we present the results of a human listener test. 
2 Methodology Dataset Our dataset is extracted from the Switchboard-1 Release 2 corpus (Godfrey and Holliman, 1997). Switchboard has 2438 dyadic telephone conversations with a total length of approximately 260 hours. The dataset consists of pairs of adjacent turns by different speakers which we refer to as turn pairs (shown in Fig. 2). Turn pairs are automatically extracted from orthographic annotations using the following procedure: We extract frame-based speech-activity labels for each speaker using a frame step-size of 50ms. The frame-based representation is used to partition each person’s speech signal into interpausal units (IPUs). We define IPUs as segments of speech by a person that are separated by pauses of 200ms or greater. IPUs are then used to automatically extract turns, which we define as consecutive IPUs by a speaker in which there is no speech by the other speaker in the silence between the IPUs. A turn pair is then defined as being any two adjacent turns by different speakers. The earlier of the two turns in a pair is considered to be the user turn and the second is considered to be the system turn. Training Objective Our training objective is to predict the start of the system turn one frame ahead 2444 200 ms frame length = 50 ms final IPU of user turn user turn Offset ( ) system turn 0 0 0 0 0 0 0 0 1 End of previous system turn and start of new pair Inference LSTM Network User Features System Encodings Training Objective End of current system turn and start of new pair IPU Figure 2: Segmentation of data into turn pairs, and how the inference LSTM makes predictions. of the ground truth start time. The target labels in each turn pair are derived from the ground truth speech activity labels as shown in Fig. 2. Each 50 ms frame has a label y ∈{0, 1}, which consists of the ground truth voice activity shifted to the left by one frame. As shown in the figure, we only include frames in the span R in our training loss. We define the span R as the frames from the beginning of the last IPU in the user turn to the frame immediately prior to the start of the system turn. We do not predict at earlier frames since we assume that at these mid-turn-pauses the DM has not decided to take a turn yet, either because it expects the user to continue, or it has not formulated one yet. As mentioned previously in section 1, we design RTNet to be abstracted from the turn-taking decisions themselves. If we were to include pauses prior to the turn-final silence, our response generation system would be additionally burdened with making turn-taking decisions, namely, classifying between mid-turn-pauses and end-of-turn silences. We therefore make the modelling assumption that the system’s response is formulated at some point during the user’s turn-final IPU. To simulate this assumption we sample an index RSTART from the span of R using a uniform distribution. We then use the reduced set of frames from RSTART to REND in the calculation of our loss. 2.1 Response Timing Network (RTNet) Encoder The encoder of RTNet (shown in Fig. 3) fuses the acoustic and linguistic modalities from a system response using three bi-directional LSTMs. Each modality is processed at independent timescales and then fused in a master Bi-LSTM which operates at the linguistic temporal rate. The output of the master Bi-LSTM is a sequence of encodings h0, h1, ...hI, where each encoding is a concatenation of the forward and backward hidden states of the master Bi-LSTM at each word index. 
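Before the encoder details continue, the following sketch illustrates the corpus segmentation and training targets described in the Dataset and Training Objective paragraphs above: frame-level voice-activity labels at a 50 ms step are grouped into inter-pausal units (IPUs) separated by pauses of at least 200 ms, and the prediction target is the system turn's voice activity shifted left by one frame. Only the 50 ms frame step and 200 ms pause threshold come from the paper; the index conventions and edge handling are assumptions.

```python
import numpy as np

FRAME = 0.05        # 50 ms frame step
MIN_PAUSE = 0.2     # pauses of >= 200 ms separate inter-pausal units (IPUs)

def extract_ipus(vad):
    """Group frame-level voice-activity labels (0/1 per 50 ms frame) into IPUs,
    returned as (start, end_exclusive) frame indices."""
    ipus, start, silence = [], None, 0
    for i, v in enumerate(vad):
        if v:
            if start is None:
                start = i
            silence = 0
        elif start is not None:
            silence += 1
            if silence * FRAME >= MIN_PAUSE:
                ipus.append((start, i - silence + 1))
                start, silence = None, 0
    if start is not None:
        ipus.append((start, len(vad)))
    return ipus

def target_labels(system_vad):
    """Training target (Fig. 2): the system's ground-truth voice activity shifted
    left by one frame, so the model predicts the turn start one frame ahead.
    Holding the final frame's value is a sketch assumption."""
    y = np.roll(np.asarray(system_vad), -1)
    y[-1] = system_vad[-1]
    return y
```

In the paper, the loss is then restricted to the span R (from the start of the user's turn-final IPU up to the frame before the system turn), with RSTART sampled uniformly from that span during training.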
The linguistic Bi-LSTM takes as input the sequence of 300-dimensional embeddings of the tokenized system response. We use three special tokens: SIL, WAIT, and NONE. The SIL token is used whenever there is a gap between words that is greater than the frame-size (50ms). The WAIT and NONE tokens are inserted as the first and last tokens of the system response sequence respectively. The concatenation [h0; h1; hI] is passed as input to a RELU layer (we refer to this layer as the reduction layer) which outputs the hz encoding. The hz encoding is used (along with user features) in the concatenated input to the inference network. Since the WAIT embedding corresponds to the h0 output of the master Bi-LSTM and the NONE embedding corresponds to hI, the two embeddings serve as “triggering” symbols that allow the linguistic and master Bi-LSTM to output relevant information accumulated in their cell states. The acoustic Bi-LSTM takes as input the sequence of acoustic features and outputs a sequence of hidden states at every 50ms frame. As shown in Fig. 3, we select the acoustic hidden states that correspond to the starting frame of each linguistic token and concatenate them with the linguistic hidden states. Since there are no acoustic features available for the WAIT and NONE tokens, we train two embeddings to replace these acoustic LSTM states (shown in purple in Fig. 3). The use of acoustic embeddings results in there being no connection between the WAIT acoustic embedding and the first acoustic hidden state. For this reason we include h1 in the [h0; h1; hI] concatenation, in order to make it easier for information captured by the the acoustic bi-LSTM to be passed through to the final concatenation. Inference Network The aim of our inference network is to predict a sequence of output probabilities Y = [yRSTART, yRSTART+1, ..., yN] using 2445 Not right now but I have done uh Red <SIL TOK> Cross work <NONE TOK> <WAIT TOK> Acoustic Linguistic Master Figure 3: The encoder is three stacked Bi-LSTMs. We use special embeddings (shown in purple) to represent the acoustic states corresponding to the first and last tokens (WAIT and NONE) of the system’s turn. RELU RELU RELU RELU Figure 4: VAE a response encoding hz, and a sequence of user features X = [x0, x1, ..., xN]. We use a a single-layer LSTM (shown in Fig. 2) which is followed by a sigmoid layer to produce the output probabilities: [hn; cn] = LSTMinf([xn; hz], [hn−1; cn−1]) yn = σ(W hhn + bh) Since there are only two possible output values in a generated sequence {0,1}, and the sequence ends once we predict 1, the inference network can be considered an autoregressive model where 0 is passed implicitly to the subsequent time-step. To generate an output sequence, we can sample from the distribution p(yn = 1|yRSTART = 0, yRSTART+1 = 0, ..., yn−1 = 0, X0:n, hz) using a Bernoulli random trial at each time-step. For frames prior to RSTART the output probability is fixed to 0, since RSTART is the point where the DM has formulated the response. During training we minimize the binary cross entropy loss (LBCE) between our ground truth objective and our output predictions Y . 2.2 RTNet-VAE Motivation A limitation of RTNet is that it may be impractical to encode system turns before triggering a response. For example, if we wish to apply RTNet using generated system responses, at run-time the RTNet component would have to wait for the full response to be generated by the NLG, which would result in a computational bottleneck. 
If the NLG system is incremental, it may also be desirable for the system to start speaking before the entirety of the system response has been generated. VAE To address this, we bypass the encoding stage by directly using the semantic representation output from the DM to control the response timing encodings. We do this by replacing the reduction layer with a VAE (Fig. 4). To train the VAE, we use the same concatenation of encoder hidden states as in the RTNet reduction layer ([h0; h1; hI]). We use a dimensionality reduction RELU layer to calculate hreduce, which is then split into µ and ˆσ components via two more RELU layers. ˆσ is passed through an exponential function to produce σ, a non-negative standard deviation parameter. We sample the latent variable z with the standard VAE method using µ, σ, and a random vector from the standard normal distribution N(0, I). A dimensionality expansion RELU layer is used to transform z into the response encoding hz, which is the same dimensionality as the output of the encoder: hreduce = RELU(Wreduce[h0; h1; hI] + breduce) µ = RELU(Wµhreduce + bµ) ˆσ = RELU(Wσhreduce + bσ) σ = exp( ˆσ 2 ) z = µ + σ ⊙N(0, I) hz = RELU(Wexpandz + bexpand) We impose a Gaussian prior over the latent space using a Kullback-Liebler (KL) divergence loss term: LKL = −1 2Nz (1 + ˆσ −µ2 −exp(ˆσ)) The LKL loss measures the distance of the generated distribution from a Gaussian with zero mean 2446 and unit variance. LKL is combined with LBCE using a weighted sum: L = LBCE + wKLLKL As we increase the value of wKL we increasingly enforce the Gaussian prior on the latent space. In doing so our aim is to learn a smooth latent space in which similar types of responses are organized in similar areas of the space. Latent Space During inference we can skip the encoding stage of RTNet-VAE and sample z directly from the latent space on the basis of the input semantic representation from the dialogue manager. Our sampling approach is to approximate the distribution of latent variables for a given responsetype using Gaussians. For example, if we have a collection of labelled backchannel responses (and their corresponding z encodings) we can approximate the distribution of p(z|label =backchannel) using an isotropic Gaussian by simply calculating µbackchannel and σbackchannel, the maximum likelihood mean and standard deviations of each of the z dimensions. These vectors can also be used to calculate directions in the latent space with different semantic characteristics and then interpolate between them. 2.3 Input Feature Representations Linguistic Features We use the word annotations from the ms-state transcriptions as linguistic features. These annotations give us the timing for the starts and ends of all words in the corpus. As our feature representation, we use 300 dimensional word embeddings that are initialized with GloVe vectors (Pennington et al., 2014) and then jointly optimized with the rest of the network. In total there are 30080 unique words in the annotations. We reduced the embedding number down to 10000 by merging embeddings that had low word counts with the closest neighbouring embedding (calculated using cosine distance). We also introduce four additional tokens that are specific to our task: SIL, WAIT, NONE, and UNSPEC. SIL is used whenever there is a silence. WAIT and NONE are used at the start and end of all the system encodings, respectively. The use of UNSPEC (unspecified) is shown in Fig. 5. 
UNSPEC was introduced to represent temporal information in the linguistic embeddings. We approximate the processing delay in ASR by delaying the annotation by 100 ms after the ground truth frame where the user’s word ended. This 100 ms delay was proposed in Skantze (2017) as a necessary assumption to modelling linguistic features in offline continuous systems. However, since voice activity detection (VAD) can supply an estimate of when a word has started, we propose that we can use this information to supply the network with the UNSPEC embedding 100ms after the word has started. Acoustic Features We combine 40 log-mel filterbanks, and 17 features from the GeMAPs feature set (Eyben et al., 2016). The GeMAPs features are the complete set excluding the MFCCs (e.g. pitch, intensity, spectral flux, jitter, etc.). Acoustic features were extracted using a 50ms framestep. 2.4 Experimental Settings Training and Testing Procedures The training, validation, and test sets consist of 1646, 150, 642 conversations respectively with 151595, 13910, and 58783 turn pairs. The test set includes all of the conversations from the NXT-format annotations (Calhoun et al., 2010), which include references to the Switchboard Dialog Act Corpus (SWDA) (Stolcke et al., 2000) annotations. We include the entirety of the NXT annotations in our test set so that we have enough labelled dialogue act samples to analyse the distributions. We used the following hyperparameter settings in our experiments: The inference, acoustic, linguistic, and master LSTMs each had hidden sizes of 1024, 256, 256, and 512 (respectively). We used a latent variable size of 4, a batch size of 128, and L2 regularization of 1e-05. We used the Adam optimizer with an initial learning rate of 5e-04. We trained each model for 15000 iterations, with learning rate reductions by a factor of 0.1 after 9000, 11000, 13000, and 14000 iterations. While we found that randomizing RSTART during training was important for the reasons given in Section 2, it presented issues for the stability and reproducibility of our evaluation and test results for LBCE and LKL. We therefore randomize during training and sampling, but when calculating the test losses (reported in Table 1) we fix RSTART to be the first frame of the user’s turn-final IPU. We also calculate the mean absolute error (MAE), given in seconds, from the ground truth response offsets to the generated output offsets. When sampling for the calculation of MAE, it is necessary to increase the length of the turn pair since the response time may be triggered by the 2447 But Uhhh Why? <SIL> <UNSPEC> <but> <UNSPEC> <UNSPEC> <uh> <SIL> <why> <SIL> 50 ms Figure 5: The user’s linguistic feature representation scheme. The embedding for each word is triggered 100 ms after the ground truth end of the word, to simulate ASR delay. The UNSPEC embedding begins 100ms after a word’s start frame and holds information about whether a word is being spoken (before it has been recognized) and the length of each word. sampling process after the ground truth time. We therefore pad the user’s features with 80 extra frames in which we simulate silence artificially using acoustic features. During sampling, we use the same RSTART randomization process that was used during training, rather than fixing it to the start of the user’s turn-final IPU. For each model we perform the sampling procedure on the test set three times and report the mean error in Table 1. 
Best Fixed Probability To the best of our knowledge, there aren’t any other published models that we can directly compare ours to. However, we can calculate the best performance that can be achieved using a fixed value for y. The best possible fixed y for a given turn pair is: ytp = 1 (REND−RSTART)/FrameLength. The best fixed y for a set of turn pairs is given by the expected value of ytp in that set: yfixed = E[ytp]. This represents the best performance that we could achieve if we did not have access to any user or system features. We can use the fixed probability model to put the performance of the rest of our models into context. 1.5 1.0 0.5 0.0 0.5 1.0 1.5 Offset (Seconds) True Predicted (a) Full Model 1.5 1.0 0.5 0.0 0.5 1.0 1.5 Offset (Seconds) True Predicted (b) Fixed Probability Figure 6: Generated offset distributions for the test set using the full model and the fixed probability (random) model. 3 Discussion 3.1 RTNet Discussion RTNet Performance The offset distribution for the full RTNet model is shown in Fig. 6a. This # Model LBCE LKL MAE Details 1 Full Model 0.1094 – 0.4539 No VAE 2 Fixed Probability 0.1295 – 1.4546 Fixed Probability 3 No Encoder 0.1183 – 0.4934 Encoder Ablation 4 Only Acoustic 0.1114 – 0.4627 5 Only Linguistic 0.1144 – 0.4817 6 Only Acoustic 0.1112 – 0.5053 Inference Ablation 7 Only Linguistic 0.1167 – 0.4923 8 wKL = 0.0 0.1114 3.3879 0.4601 Inclusion of VAE 9 wKL = 10−4 0.1122 1.5057 0.4689 10 wKL = 10−3 0.1125 0.8015 0.4697 11 wKL = 10−2 0.1181 0.0000 0.5035 12 wKL = 10−1 0.1189 0.0000 0.5052 Table 1: Experimental results on our test set. Lower is better in all cases. Best results shown in bold. 0 1 True Distribution BC State 0 1 Full Model 0 1 Random 0 1 No Encoder 0 1 VAE 1.0 0.5 0.0 0.5 1.0 Offset (Seconds) 0 1 Vector Representation (a) BC/Statement 0 1 True Distribution yes no 0 1 Full Model 0 1 Random 0 1 No Encoder 0 1 VAE 1.0 0.5 0.0 0.5 1.0 Offset (Seconds) 0 1 Vector Representation (b) Yes/No Figure 7: Generated offset distributions for selected response dialogue acts using different model conditions. 2448 baseline RTNet model is better able to replicate many of the features of the true distribution in comparison with predicted offsets using the best possible fixed probability shown in Fig. 6b. The differences between the baseline and the fixed probability distributions are reflected in the results of rows 1 and 2 in Table 1. In Fig. 6a, the model has the most trouble reproducing the distribution of offsets between -500 ms and 0 ms. This part of the distribution is the most demanding because it requires that the model anticipate the user’s turnending. From the plots it is clear that our model is able to do that to a large degree. We observe that after the user has stopped speaking (from 0 seconds onward) the generated distribution follows the true distribution closely. To look in more detail at how the system models the offset distribution we can investigate the generated distributions of labelled response dialogue acts in our test set. Fig. 7 shows plots of backchannels vs. statements (Fig. 7a), and yes vs. no (Fig.7b) responses. In the second rows, we can see that the full model is able to accurately capture the differences in the contours of the true distributions. For example, in the no dialogue acts, the full model accurately generates a mode that is delayed (relative to yes dialogue acts). Encoder Ablation The performance of the response encoder was analysed in an ablation study, with results in rows 3 through 5 of Table 1. 
Without the response encoder, there is a large decrease in performance, relative to the full model. From looking at the encoders with only acoustic and linguistic modalities, we can see that the results benefit more from the acoustic modality than the linguistic modality. If we consider the impact of the encoder in more detail, we would expect that the network would not be able to model distributional differences between different types of DA responses without an encoder. This is confirmed in the fourth rows of Fig. 7, where we show the generated distributions without the encoder. We can see that without the encoder, the distributions of the all of the dialogue act offsets are almost exactly the same. Inference Network Ablation In rows 6 and 7 of Table 1 we present an ablation of the inference network. We can see that removing either the acoustic or linguistic features from the user’s features is detrimental to the results. An interesting irregular1.5 1.0 0.5 0.0 0.5 1.0 1.5 Offset (Seconds) True Predicted (a) Only Acoustic 1.5 1.0 0.5 0.0 0.5 1.0 1.5 Offset (Seconds) True Predicted (b) Only Linguistic Figure 8: Generated offset distributions for the inference network ablation. sd nn ny b (a) wKL = 0.0 sd nn ny b (b) wKL = 10−3 Figure 9: T-SNE plots of z for four different dialogue acts using two different wKL settings. ity is observed in the results for the model that uses only acoustic features (row 6): the MAE is unusually high, relative to the LBCE. In all other rows, lower LBCE corresponds to lower MAE. However, row 6 has the second lowest LBCE, while also having the second highest MAE. In order to examine this irregularity in more detail, we look at the generated distributions from the inference ablation, shown in Fig. 8. We observe that the linguistic features are better for predicting the mode of the distribution whereas the acoustic features are better at modelling the -100 ms to +150 ms region directly preceding the mode. Since word embeddings are triggered 100 ms after the end of the word, the linguistic features can be used to generate modal offsets in the 150 ms to 200 ms bin. We propose that, in the absence of linguistic features, there is more uncertainty about when the user’s turn-end has occurred. Since the majority of all ground-truth offsets occur after the user has finished speaking, the unusually high MAE in row 6 could be attributed to this uncertainty in whether the user has finished speaking. 3.2 RTNet-VAE Discussion RTNet-VAE Performance In rows 8 through 12 of Table 1 we show the results of our experiments with RTNet-VAE with different settings of wKL. As wKL is increased, the LBCE loss increases while the LKL loss decreases. Examining some example distributions of dialogue acts generated by RTNet-VAE using wKL = 10−4 (shown in the fifth rows of Fig. 7) we can see that RTNet-VAE is capa2449 1.00 0.75 0.50 0.25 0.00 0.25 0.50 0.75 1.00 Offset (Seconds) 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 1.6 Interpolated Distributions agree-accept reject interpolated Figure 10: Interpolated distributions ble of generating distributions that are of a similar quality to those generated by RTNet (shown in the second row). We also observe that RTNet-VAE using wKL = 10−4 produces competitive results, in comparison to the full model. These observations suggest that the inclusion of the VAE in pipeline does not severely impact the overall performance. In Fig. 9 we show the latent variable z generated using RTNet-VAE and plotted using t-SNE (van der Maaten and Hinton, 2008). 
In Fig. 9 we show the latent variable z generated using RTNet-VAE and plotted using t-SNE (van der Maaten and Hinton, 2008). To show the benefit of imposing the Gaussian prior, we show plots for w_KL = 0.0 and w_KL = 10^-3. The plots show the two-dimensional projection of four different types of dialogue act responses: statements (sd), no (nn), yes (ny), and backchannels (b). We can observe that for both settings the latent space is able to organize the responses by dialogue act type, even though it is never explicitly trained on dialogue act labels. For example, in both cases, statements (shown in blue) are clustered at the opposite side of the distribution from backchannels (shown in red). However, in the case of w_KL = 0.0 there are "holes" in the latent space. For practical applications such as interpolation of vector representations of dialogue acts (discussed in the next paragraph), we would like a space that does not contain any of these holes, since they are less likely to have semantically meaningful interpretations. When the Gaussian prior is enforced (Fig. 9b), we can see that the space is smooth and the distinctions between dialogue acts are still maintained.

[Figure 9: t-SNE plots of z for four different dialogue acts (sd, nn, ny, b) using two different w_KL settings: (a) w_KL = 0.0 and (b) w_KL = 10^-3.]

Latent Space Applications As mentioned in Section 2.2, part of the appeal of using the VAE in our model is that it enables us to discard the response encoding stage. We can exploit the smoothness of the latent space to skip the encoding stage by sampling directly from the trained latent space. We can approximate the distribution of latent variables for individual dialogue act response types using isotropic Gaussians. This enables us to efficiently represent the dialogue acts using mean and standard-deviation vectors, one pair for each dialogue act. The final rows of Fig. 7 show examples of distributions generated using Gaussian approximations of the latent space distributions. We can see that the generated outputs have similar properties to the true distributions. We can use the same parameterized vector representations to interpolate between different dialogue act parameters to achieve intermediate distributions. This dimensional approach is flexible in that it gives the dialogue manager (DM) more control over the details of the distribution. For example, if the objective of the SDS were to generate an agree dialogue act, we could control the degree of agreement by interpolating between disagree and agree vectors. Figure 10 shows an example of a generated interpolated distribution. We can see that the properties of the interpolated distribution (e.g. mode, kurtosis) are perceptually "in between" the reject and accept distributions.

[Figure 10: Interpolated distributions between the agree-accept and reject dialogue acts; x-axis: offset (seconds).]
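A minimal sketch of this interpolation idea, under our own assumptions about latent dimensionality and the fitted statistics: each dialogue act is summarized by the mean and standard deviation of an isotropic Gaussian fitted to its latent codes, and intermediate acts are obtained by linearly interpolating those parameters before sampling z for the decoder.

```python
import numpy as np

def fit_isotropic_gaussian(z_samples):
    """Summarize one dialogue act by the per-dimension mean and std of its latent codes."""
    z = np.asarray(z_samples)
    return z.mean(axis=0), z.std(axis=0)

def interpolate_dialogue_acts(params_a, params_b, alpha):
    """Linearly interpolate (mean, std) pairs; alpha=0 gives act A, alpha=1 gives act B."""
    (mu_a, sd_a), (mu_b, sd_b) = params_a, params_b
    return (1 - alpha) * mu_a + alpha * mu_b, (1 - alpha) * sd_a + alpha * sd_b

def sample_z(mu, sd, rng):
    """Draw a latent vector from the (possibly interpolated) isotropic Gaussian."""
    return rng.normal(mu, sd)

# Toy usage: made-up 4-d latent codes standing in for 'reject' and 'agree-accept' acts.
rng = np.random.default_rng(0)
reject = fit_isotropic_gaussian(rng.normal(-1.0, 1.0, size=(100, 4)))
accept = fit_isotropic_gaussian(rng.normal(+1.0, 1.0, size=(100, 4)))
mu, sd = interpolate_dialogue_acts(reject, accept, alpha=0.5)
z = sample_z(mu, sd, rng)  # would replace the response encoder's output to the decoder
```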
4 Listening Tests

It has been shown that response timings vary based on the semantic content of dialogue responses and the preceding turn (Levinson and Torreira, 2015), and that listeners are sensitive to these fluctuations in timing (Bögels and Levinson, 2017). However, the question of whether certain response timings within different contexts are considered more realistic than others has not been fully investigated. We design an online listening test to answer two questions: (1) Given a preceding turn and a response, are some response timings considered by listeners to be more realistic than others? (2) In cases where listeners are sensitive to the response timing, is our model more likely to generate responses that are considered realistic than a system that generates a modal response time?

Participants were asked to make A/B choices between two versions of a turn pair, where each version had a different response offset. Participants were asked: "Which response timing sounds like it was produced in the real conversation?" The turn pairs were drawn from our dataset and were limited to pairs where the response was either dispreferred or a backchannel. We further limited the chosen pairs to those with ground truth offsets that were classified as either early or late. We classified offsets as early, modal, or late by segmenting the distribution of all of the offsets in our dataset into three partitions, as shown in Fig. 11a. The cutoff points for the early and late offsets were estimated using a heuristic: we split the offsets in our dataset into two groups at the mode of the distribution (+157 ms) and then used the median values of the upper (+367 ms) and lower (-72 ms) groups as the cutoff points.

[Figure 11: Listening test experiments. (a) Early, modal, and late regions, with early/late cutoffs around the mode at +157 ms. (b) Generated distributions for six turn pairs; the highlighted regions indicate the region preferred by listeners, and the red line indicates the ground truth offset.]

We selected eight examples of each dialogue act (four early and four late). We generated three different versions of each turn pair: true, modal, and opposite. If the true offset was late, the opposite offset was the mean of the early offsets (-316 ms). If the true offset was early, the opposite offset was the mean of the late offsets (+760 ms). We had 25 participants (15 female, 10 male), all of whom wore headphones. We performed binomial tests for the significance of a given choice in each question.

For the questions in the first half of the test, in which we compared true vs. opposite offsets, 10 of the 16 comparisons were found to be statistically significant (p < 0.05). In all of the significant cases the true offset was considered more realistic than the opposite. In reference to our first research question, this result supports the conclusion that some response timings are indeed considered more realistic than others.

For the questions in the second half of the test, in which we compared true vs. modal offsets, six of the 16 comparisons were found to be statistically significant. Of the six significant preferences, three were a preference for the true offset and three were a preference for the modal offset. To investigate our second research question, we looked at the offset distributions generated by our model for each of the six significant preferences, shown in Fig. 11b. For the turn pairs where listeners preferred non-modal offsets (top row), the distributions generated by our system deviate from the mode into the preferred area (highlighted in yellow). In pairs where listeners preferred modal offsets (bottom row), the generated distributions tend to have a mode near the overall dataset mode (shown as the green line). We can conclude, in reference to our second question, that in instances where listeners are sensitive to response timings, our system is likely to generate response timings that are more realistic than those of a system that simply generates the mode of the dataset.
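The early/modal/late cutoff heuristic and the per-question significance test can be sketched as follows. The offsets array, the histogram-based mode estimate, and the use of scipy.stats.binomtest (available in recent SciPy releases) are our own illustrative choices, not details taken from the paper.

```python
import numpy as np
from scipy.stats import binomtest

def early_late_cutoffs(offsets, bins=100):
    """Split offsets at an estimated mode, then use the medians of the lower
    and upper groups as the early and late cutoff points."""
    offsets = np.asarray(offsets)
    counts, edges = np.histogram(offsets, bins=bins)
    mode = 0.5 * (edges[np.argmax(counts)] + edges[np.argmax(counts) + 1])
    lower, upper = offsets[offsets < mode], offsets[offsets >= mode]
    return float(np.median(lower)), float(np.median(upper))

def classify_offset(offset, early_cut, late_cut):
    if offset < early_cut:
        return "early"
    if offset > late_cut:
        return "late"
    return "modal"

# Example significance test for one A/B question: 20 of 25 listeners chose version A.
print(binomtest(20, n=25, p=0.5).pvalue)
```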
5 Conclusion

In this paper, we have presented models that can be used to generate the turn-switch offset distributions of SDS system responses. It has been shown in prior studies (e.g. Bögels et al., 2019) that humans are sensitive to these timings and that they can impact how responses are perceived by a listener. We would argue that they are an important element of producing naturalistic interactions that is often overlooked. With the advent of commercial SDS systems that attempt to engage users over extended multi-turn interactions (e.g. Zhou et al., 2018), generating realistic response behaviors is a potentially desirable addition to the overall experience.

Acknowledgments

The ADAPT Centre for Digital Content Technology is funded under the SFI Research Centres Programme (Grant 13/RC/2106) and is co-funded under the European Regional Development Fund.

References

Sara Bögels, Kobin H. Kendrick, and Stephen C. Levinson. 2019. Conversational expectations get revised as response latencies unfold. Language, Cognition and Neuroscience, pages 1–14. Sara Bögels and Stephen C. Levinson. 2017. The Brain Behind the Response: Insights Into Turn-taking in Conversation From Neuroimaging. Research on Language and Social Interaction, 50(1):71–89. Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew Dai, Rafal Jozefowicz, and Samy Bengio. 2016. Generating Sentences from a Continuous Space. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 10–21, Berlin, Germany. Association for Computational Linguistics. Sasha Calhoun, Jean Carletta, Jason M. Brenier, Neil Mayo, Dan Jurafsky, Mark Steedman, and David Beaver. 2010. The NXT-format Switchboard Corpus: A rich resource for investigating the syntax, semantics, pragmatics and prosody of dialogue. Language Resources and Evaluation, 44(4):387–419. Nina Dethlefs, Helen Hastie, Verena Rieser, and Oliver Lemon. 2012. Optimising incremental dialogue decisions using information density for interactive systems. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 82–93. Association for Computational Linguistics. David DeVault, Kenji Sagae, and David Traum. 2011. Incremental interpretation and prediction of utterance meaning for interactive dialogue. Dialogue & Discourse, 2(1):143–170. Florian Eyben, Klaus R. Scherer, Bjorn W. Schuller, Johan Sundberg, Elisabeth Andre, Carlos Busso, Laurence Y. Devillers, Julien Epps, Petri Laukka, Shrikanth S. Narayanan, and Khiet P. Truong. 2016. The Geneva Minimalistic Acoustic Parameter Set (GeMAPS) for Voice Research and Affective Computing. IEEE Transactions on Affective Computing, 7(2):190–202. John J. Godfrey and Edward Holliman. 1997. Switchboard-1 release 2. Linguistic Data Consortium, Philadelphia, 926:927. David Ha and Douglas Eck. 2018. A neural representation of sketch drawings. In International Conference on Learning Representations. Peter A. Heeman and Rebecca Lunsford. 2017. Turn-taking offsets and dialogue context. In Proc. Interspeech 2017, pages 1671–1675. Kobin H. Kendrick and Francisco Torreira. 2015. The timing and construction of preference: A quantitative study. Discourse Processes, 52(4):255–289. Diederik P. Kingma and Max Welling. 2014. Auto-encoding variational bayes. In 2nd International Conference on Learning Representations. Divesh Lala, Pierrick Milhorat, Koji Inoue, Masanari Ishida, Katsuya Takanashi, and Tatsuya Kawahara. 2017. Attentive listening system with backchanneling, response generation and flexible turn-taking. In Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue, pages 127–136, Saarbrücken, Germany. Association for Computational Linguistics. Divesh Lala, Shizuka Nakamura, and Tatsuya Kawahara. 2019.
Analysis of Effect and Timing of Fillers in Natural Turn-Taking. In Interspeech 2019, pages 4175–4179. ISCA. Stephen C. Levinson and Francisco Torreira. 2015. Timing in turn-taking and its implications for processing models of language. Frontiers in Psychology, 6. Jiwei Li, Michel Galley, Chris Brockett, Georgios Spithourakis, Jianfeng Gao, and Bill Dolan. 2016. A Persona-Based Neural Conversation Model. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 994–1003, Berlin, Germany. Association for Computational Linguistics. Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of machine learning research, 9(Nov):2579–2605. Raveesh Meena, Gabriel Skantze, and Joakim Gustafson. 2014. Data-driven models for timing feedback responses in a Map Task dialogue system. Computer Speech & Language, 28(4):903–922. Louis-Philippe Morency, Iwan de Kok, and Jonathan Gratch. 2010. A probabilistic multimodal approach for predicting listener backchannels. Autonomous Agents and Multi-Agent Systems, 20(1):70–84. Ryosuke Nakanishi, Koji Inoue, Shizuka Nakamura, Katsuya Takanashi, and Tatsuya Kawahara. 2018. Generating Fillers based on Dialog Act Pairs for Smooth Turn-Taking by Humanoid Robot. IWSDS, page 11. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global Vectors for Word Representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, Doha, Qatar. Association for Computational Linguistics. Antoine Raux and Maxine Eskenazi. 2009. A finitestate turn-taking model for spoken dialog systems. In HLT-NAACL, pages 629–637. ACL. Adam Roberts, Jesse Engel, Colin Raffel, Curtis Hawthorne, and Douglas Eck. 2018. A hierarchical latent vector model for learning long-term structure in music. In ICML, pages 4361–4370. 2452 Matthew Roddy, Gabriel Skantze, and Naomi Harte. 2018. Multimodal Continuous Turn-Taking Prediction Using Multiscale RNNs. In Proceedings of the 2018 on International Conference on Multimodal Interaction - ICMI ’18, pages 186–190, Boulder, CO, USA. ACM Press. Harvey Sacks, Emanuel A. Schegloff, and Gail Jefferson. 1974. A Simplest Systematics for the Organization of Turn-Taking for Conversation. Language, 50(4):696. David Schlangen and Gabriel Skantze. 2011. A general, abstract model of incremental dialogue processing. In Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics, pages 710–718. Gabriel Skantze. 2017. Towards a General, Continuous Model of Turn-taking in Spoken Dialogue using LSTM Recurrent Neural Networks. In Proceedings of SigDial, Saarbrucken, Germany. RJ Skerry-Ryan, Eric Battenberg, Ying Xiao, Yuxuan Wang, Daisy Stanton, Joel Shor, Ron J Weiss, Rob Clark, and Rif A Saurous. 2018. Towards End-toEnd Prosody Transfer for Expressive Speech Synthesis with Tacotron. Proceedings of the 35 th International Conference on Machine Learning, page 10. Andreas Stolcke, Klaus Ries, Noah Coccaro, Elizabeth Shriberg, Rebecca Bates, Daniel Jurafsky, Paul Taylor, Rachel Martin, Carol Van Ess-Dykema, and Marie Meteer. 2000. Dialogue act modeling for automatic tagging and recognition of conversational speech. Computational linguistics, 26(3):339–373. Tiancheng Zhao, Ran Zhao, and Maxine Eskenazi. 2017. Learning Discourse-level Diversity for Neural Dialog Models using Conditional Variational Autoencoders. 
In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 654–664, Vancouver, Canada. Association for Computational Linguistics. Li Zhou, Jianfeng Gao, Di Li, and Heung-Yeung Shum. 2018. The Design and Implementation of XiaoIce, an Empathetic Social Chatbot. arXiv preprint arXiv:1812.08989.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2453–2470 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2453 The Dialogue Dodecathlon: Open-Domain Knowledge and Image Grounded Conversational Agents Kurt Shuster, Da Ju, Stephen Roller Emily Dinan, Y-Lan Boureau, Jason Weston Facebook AI Research {kshuster,daju,roller,edinan,ylan,jase}@fb.com Abstract We introduce dodecaDialogue: a set of 12 tasks that measures if a conversational agent can communicate engagingly with personality and empathy, ask questions, answer questions by utilizing knowledge resources, discuss topics and situations, and perceive and converse about images. By multi-tasking on such a broad large-scale set of data, we hope to both move towards and measure progress in producing a single unified agent that can perceive, reason and converse with humans in an open-domain setting. We show that such multi-tasking improves over a BERT pretrained baseline, largely due to multi-tasking with very large dialogue datasets in a similar domain, and that the multi-tasking in general provides gains to both text and image-based tasks using several metrics in both the finetune and task transfer settings. We obtain stateof-the-art results on many of the tasks, providing a strong baseline for this challenge. 1 Introduction One of the goals of AI is to build a seeing, talking agent that can discuss, reason, empathize, and provide advice – in short a system that can perform natural communication displaying many of the properties expected when speaking to a human partner. Ideally, it should be able to be knowledgeable and personable, expert and engaging, serious or humorous – depending on the situation. It should be capable of answering questions, asking questions, responding to statements, having its own persona, and grounding the dialogue with external information and images. While no single task exists that can train an agent or measure its ability on all of these axes at once, a number of distinct large-scale datasets targeting subsets of these skills have recently become available. We thus assemble these disparate tasks to form a single challenge: dodecaDialogue, consisting of 12 subtasks. Each contains both training data to build the skills we desire for our agent, and validation and test sets to measure our agent’s ability at that skill. The overall goal is a single agent that can display all these skills. As some of the subtasks have very large datasets, e.g. 2.2 billion utterances, they can possibly help the agent with other skills too. We thus build a model capable of training and multi-tasking on all these sources. We employ a transformer-based architecture (Vaswani et al., 2017) which accepts an image, external textual information and dialogue history as input, and generates a response for a given dialogue turn. Practically, by pre-training on the largest of the subtasks and then multi-tasking on all them, we can obtain state-of-the-art results compared to existing independently reported performance on all 10 of the 12 subtasks that have previous comparable results. We hence set a strong baseline for this challenge. While many existing approaches use large-scale pre-training on general text corpora, we show that using dialogue datasets instead, which are more closely linked to the desired agent’s goals, is a strong alternative. However, many challenges remain. 
While multitasking performs well, and has clear benefits, as shown in other works (Liu et al., 2015; Raffel et al., 2019), when compared to fine-tuning of the same system we do obtain typically small losses. Zeroshot transfer to left-out tasks is also demanding for current approaches. We analyze these aspects, along with our model’s ability to ground on external knowledge and images in conjunction with the dialogue context, the impact of decoding algorithms, analysis of the weighting of tasks during multi-tasking as well as cross-task transfer ability in order to shed light and make progress on this challenging topic. 2454 Ask Questions Answer Questions Respond to Statements Persona Grounding Knowledge Grounding Situation Grounding Image Grounding Resp. Name Train Valid Test # Turns Length ConvAI2 ✓ ✓ ✓ ✓ 131,438 7,801 6,634 14.8 11.9 DailyDialog ✓ ✓ ✓ 87,170 8,069 7,740 7.9 14.6 Wiz. of Wikipedia ✓ ✓ ✓ ✓ 74,092 3,939 3,865 9.0 21.6 Empathetic Dialog ✓ ✓ ✓ ✓ 40,252 5,736 5,257 4.3 15.2 Cornell Movie ✓ ✓ ✓ 309,987 38,974 38,636 4.0 15.0 LIGHT ✓ ✓ ✓ ✓ ✓ 110,877 6,623 13,272 13.0 18.3 ELI5 ✓ ✓ 231,410 9,828 24,560 2.0 130.6 Ubuntu ✓ ✓ ✓ 1,000,000 19,560 18,920 2.0 18.9 Twitter ✓ ✓ ✓ 2,580,428 10,405 10,405 2.0 15.7 pushshift.io Reddit ✓ ✓ ✓ ∼2200 M 10,000 10,000 2.0 35.0 Image Chat ✓ ✓ ✓ ✓ ✓ 355,862 15,000 29,991 3.0 11.4 IGC ✓ ✓ ✓ 4,353 486 7,773 3.0 8.6 Table 1: The 12 dodecaDialogue subtasks, their sizes (number of train, valid, test utterances), and average number of turns and response length (words). 2 The dodecaDialogue Task The dodecaDialogue task is intended to assemble important aspects of an engaging conversational agent into a single collection, where each subtask covers some of those goals. Such an agent should be able to get to know you when you first talk to it (ConvAI2), discuss everyday topics (DailyDialog, pushshift.io Reddit, Twitter, Cornell Movie), speak knowledgeably at depth (Wizard of Wikipedia, Ubuntu) and answer questions on such topics (ELI5). It must be able to handle situated conversations and demonstrate empathy (Empathetic Dialog, LIGHT) . It can also discuss images, as this is a vital part of human connection (Image Chat, IGC). We note that all of the provided subtasks are in English. The overall statistics of the subtasks are given in Table 1. We now discuss each in turn. ConvAI2 ConvAI2 is a dataset used at the NeurIPS 2018 competition of the same name, and is based on PersonaChat (Zhang et al., 2018; Dinan et al., 2020). The training data involves paired crowdworkers having a conversation where they get to know each other, in which each is given a role to play based on sentences describing their persona, which were also separately crowdsourced (while they cannot see their partner’s persona). It thus involves asking and answering questions, responding in kind, and getting to know the other speaker and engaging them in friendly conversation – useful skills for an open-domain conversational agent. DailyDialog Li et al. (2017) built a dialogue dataset intended to reflect conversations occurring in daily life. It covers ten categories ranging from holidays to financial topics, rather than focusing on one domain. Compared to ConvAI2, these conversations seem more in keeping with partners who already know each other, and want to discuss typical life details, again useful skills for a conversational agent. 
The dataset is also annotated with topic, emotion and utterance acts, but here we ignore these annotations and learn only from the utterances in the dialogue turns. Wizard of Wikipedia This task involves discussing a given topic in depth, where the goal is to both engage the partner as well as display expert knowledge (Dinan et al., 2019). The training set consists of 1247 topics and a retrieval system over Wikipedia from which the dialogues were grounded during the human-human crowdsourced conversations. The topics were also crowdsourced and range from e-books to toga parties to showers. A model can thus learn to also perform similar retrieval and grounding at test time to potentially discuss any topic if it can generalize. We use the gold knowledge version of the task. We see this skill as a core component of an agent being able to not just chitchat, but actually engage a user in discussing real information about the world, e.g. by retrieving over documents from the internet. Empathetic Dialogues Rashkin et al. (2019) constructed a dataset of crowdworker conversations grounded in an emotional situation. In each dia2455 logue, one speaker describes a personal situation and the other plays a “listener” role, displaying empathy during the discussion. The dataset contains descriptions of the situations being discussed with an attached emotion label, but these are not used here. Trained models are measured playing the part of the empathetic listener, an important feature of an agent to which humans wish to speak. Cornell Movie Danescu-Niculescu-Mizil and Lee (2011) constructed a corpus containing a collection of fictional conversations from movie scripts, thus covering a large diversity of topics and emotional states. LIGHT LIGHT (Urbanek et al., 2019) involves situated interactions between characters in a text adventure game. Similar to ConvAI2, personas for each character are given, with the training set including conversations between crowdworkers playing those roles. Different from ConvAI2, included are emotes and actions grounded within the game world (e.g. picking up and giving objects). As such, it measures the ability of a conversational agent to ground its discussion on a dynamic environment. ELI5 ELI5 (Fan et al., 2019) involves long-form question answering grounded on multiple retrieved documents in order to answer common questions which people ask on the popular ELI5 subreddit. As such, the answers are in a conversational form applicable to a dialogue agent. Ubuntu Lowe et al. (2015) built a dataset that involves in-depth discussions in solving Ubuntu problems. This studies the ability of an agent on a very focused single topic, and is also a standard benchmark in the field. Twitter We use a variant of Twitter discussions (text-only), which have been used in many existing studies, e.g. Sordoni et al. (2015); See et al. (2019). This data naturally involves everyday discussions about topics that people care about. The public forum makes them different from the more personal discussions of some of the other tasks. This is the second largest dataset in the collection, and we thus measure in experiments its ability to help performance on other tasks. pushshift.io Reddit We use a variant of Reddit discussions (text-only), which has also been used in several existing studies, see e.g. Yang et al. (2018); Mazar´e et al. (2018); Keskar et al. (2019). Following Humeau et al. 
(2019), we use a previously existing Reddit dataset extracted and obtained by a third party and made available on pushshift.io, training to generate a comment conditioned on the full thread leading up to the comment, spanning 2200M training examples. This is the largest dataset in the collection – much larger than the others. The subreddits cover a vast range of topics, and hence this is a strong candidate for helping improve performance on other tasks via pre-training and multi-tasking. Note this dataset does not overlap with ELI5. Image Chat Shuster et al. (2018) collected a crowdsourced dataset of human-human conversations about an image with a given personality, where the goal is to engage the other speaker. As such, it covers natural conversational responses, including displays of emotion and humor. Image Grounded Conversations (IGC) IGC (Mostafazadeh et al., 2017) similarly involves two speakers discussing an image, here focusing on questions and responses. It only includes a validation and test set, and so we converted most of the validation set to form a small training set. 2.1 Evaluation Metrics For all tasks, we use the following metrics: perplexity (PPL), BLEU, ROUGE-1,-2 and -L and F1, and also pick the metric most used in the literature as that subtask’s ‘Score’ to compare to existing work. Multi-tasking As we are interested in building a single conversational agent, we measure the ability of multi-tasked models that can perform all twelve tasks at once. Single-Task Fine-tuning We can still compare such multi-tasked models to single-task fine-tuned baselines to assess if we have gained or lost performance. Like other works (Liu et al., 2015; Raffel et al., 2019) we also consider a multi-task followed by finetune setup in order to see if this produces better models. The latter tests if multi-tasking still proves useful in the single-task setting. Zero-shot Transfer Finally, we consider a leaveone-out zero-shot setting whereby training is constrained to be on all the training data except for the task being evaluated. This evaluates the performance on truly new unseen tasks, an important behavior given there are always new tasks. 2456 3 Related Work 3.1 Existing Models and Results Where possible, we have tried to track the best existing results for each task and provided a comparison in our final results table. As ConvAI2 was a competition, a number of competitors built strong models on it. The best results were obtained by large pre-trained transformers (Dinan et al., 2020). In particular, Wolf et al. (2019b) pre-trained via the method of Radford et al. (2018) using the BooksCorpus dataset, resulting in the best perplexities and F1 scores. Since then, results have gotten even better with the advent of better and larger pretraining (Lewis et al., 2019), which we compare to here; the same work also reports strong results on ELI5. He et al. (2019) recently obtained strong results on the DailyDialog and Cornell Movie tasks in terms of perplexity by pre-training on 10% of CCNEWS (Bakhtin et al., 2019), thus using 100 million sentences (2.7 billion words) and then finetuning a transformer based model with a multi-task strategy. Overall, large pre-trained transformers indeed provide strong existing results on many of the tasks. 
Several large language modeling projects have been undertaken in order to show prowess in multi-tasking ability (Radford et al., 2019; Keskar et al., 2019), and transformer-based approaches have been adapted to language and vision tasks as well (Lu et al., 2019; Tan and Bansal, 2019; Li et al., 2019a; Shuster et al., 2018). As well as citing the relevant papers’ results where possible in the experiments section, we also train a BERTbased (Devlin et al., 2019) generative model as an additional baseline. 3.2 Related Tasks and Collections In the interests of feasibility, there are tasks we did not include in dodecaDialogue. For example, there are additional knowledge tasks (Qin et al., 2019; Moghe et al., 2018; Gopalakrishnan et al., 2019) and image-based datasets (Das et al., 2017) one could use. There are also a large number of QA tasks we did not include, e.g. Rajpurkar et al. (2016); Choi et al. (2018). In general, our choices were made based on tasks that after training might produce an engaging dialogue agent that humans naturally would want to talk to – which means either natural datasets or crowdsourced datasets where crowdworkers were encouraged to engage one another. As computational resources and ambitions scale, it would be interesting to add more tasks as well, while retaining the twelve we have chosen here in order to continue to evaluate their success, whilst extending the scope of the entire system. All the subtasks in the collection we use here already exist. Other research projects have also built such collection-based tasks before as well. In particular, the NLP decathlon (McCann et al., 2018), from which the name of this paper is inspired, collects together a diverse set of NLP tasks – from sentiment detection to parsing. Talmor and Berant (2019) collect a set of 10 QA datasets and build MULTIQA. Recently, (Raffel et al., 2019) also similarly multi-tasked a large set of NLP tasks, on an even bigger scale. Our work differs from these in that it is focused on dialogue tasks which naturally group together to form a conversational agent. 4 Models BERT baseline. We implement a generative baseline using BERT via adapting the model using a standard auto-regressive loss. We concatenate both the context and current generation and provide these as input to the model, using BERT’s sentence embeddings to distinguish the roles in the network. Although BERT is trained to predict masked tokens, we find that fine-tuning can easily adjust its behavior to predicting the next token. Our BERT baseline is roughly equivalent to the model of Wolf et al. (2019b), but does not have a classification loss term. The implementation relies on HuggingFace Transformers (Wolf et al., 2019a). We thus finetune this model for each of our tasks, except Image Chat and IGC which require images as input. Image+Seq2Seq. We use a modification of a transformer Seq2Seq architecture (Vaswani et al., 2017), additionally adding image features to the encoder. Our model is a 8 layer encoder, 8 layer decoder with 512 dimensional embeddings and 16 attention heads, and is based on the ParlAI implementation (Miller et al., 2017). We use BPE following Humeau et al. (2019) via lower-cased Wikipedia, Toronto Books, and Open Subtitles with 30k merges, giving 54,940 terms. Reported perplexities are computed with this dictionary. 
For image features, we use the pre-trained image features from the ResNeXt-IG-3.5B model, a ResNeXt 32 x 48d architecture (Xie et al., 2017) trained on 3.5 billion Instagram images following the procedure 2457 BERT-based Single Task (from scratch) Single Task (fastText init) Twitter + Single Task Reddit Only Reddit + Single Task MT All Tasks + FT Single Task All Tasks MT Leave-One-Out Zero-Shot ConvAI2 19.4 43.3 38.9 28.7 18.3 11.4 11.2 11.3 16.4 DailyDialog 15.2 37.8 32.8 20.8 18.2 10.4 10.2 11.8 15.5 Wiz. of Wikipedia 14.1 40.7 36.0 37.3 15.3 8.7 8.5 8.7 13.2 Empathetic Dialog 23.2 47.1 40.5 23.1 14.4 11.3 11.1 11.2 13.0 Cornell Movie 29.4 46.2 44.8 34.2 27.8 20.0 19.8 22.3 25.4 LIGHT 29.7 63.6 57.5 40.0 32.9 18.7 18.7 19.0 26.9 ELI5 28.1 62.9 58.8 63.8 31.2 21.2 21.1 25.0 31.1 Ubuntu 20.7 35.8 34.5 38.5 31.1 17.3 17.2 23.3 30.8 Twitter 37.0 61.9 59.3 59.3 53.6 29.8 29.8 37.0 52.8 pushshift.io Reddit 39.0 27.8 27.8 27.8 27.8 27.8 25.8 28.0 106.3 Image Chat N/A 40.1 37.4 31.1 32.5 18.3 18.3 21.8 29.3 IGC N/A 86.3 79.5 23.1 14.6 10.0 10.0 10.2 12.2 dodecaScore N/A 49.5 45.7 35.6 26.5 17.1 16.8 19.1 31.1 Table 2: Validation perplexity for the dodecaDialogue tasks in various settings. described by Mahajan et al. (2018). This model was previously used successfully for the Image Chat task in Shuster et al. (2018). The final encoding from the ResNeXt model is a vector of size 2048; we then use a linear layer to project into the same size as the text encoding, and add it as an extra token at the end of the transformer’s encoder output, then feed them all into the decoder. During fine-tuning we train the text transformer, but leave the image encoding fixed, apart from finetuning the linear projection. The text transformer is fine-tuned with a standard auto-regressive negative log-likelihood (NLL) loss, following usual sequence to sequence training schemes. Our best models are available at https:// parl.ai/projects/dodecadialogue. 5 Experiments Task Training We employ the ParlAI framework (Miller et al., 2017) for training on single tasks and for multi-tasking, as many of the tasks are already implemented there, along with a (multitask) training and evaluation framework for such models. Pre-training As pushshift.io Reddit and (to some extent) Twitter are much larger than our other tasks, we try pre-training the Seq2Seq module of our Image+Seq2Seq networks with those datasets, before multi-tasking on all of the tasks, or for evaluating single task fine-tuning. For Reddit, the model was trained to generate Model ConvAI2 Wiz. of Wikipedia Empathetic Dialog Reddit 18.3 15.3 14.4 Reddit+ConvAI2 11.4 14.2 14.7 Reddit+Wiz. of Wikipedia 16.3 8.7 14.0 Reddit+Empathetic Dialog 17.9 15.3 11.3 Multi-Tasking All 4 Tasks 11.6 8.7 11.2 Table 3: Transfer performance of various multi-task models (validation perplexity). a comment conditioned on the full thread leading up to the comment. Comments containing URLs or that were under 5 characters in length were removed from the corpus, as were all child comments. Comments were truncated to 1024 BPE tokens. The model was trained with a batch size of 3072 sequences for approximately 3M updates using a learning rate of 5e-4, and an inverse square root scheduler. This took approximately two weeks using 64 NVIDIA V100s. We note that our transformer pre-training only includes text, while our image encoder was pre-trained separately in previous work (Mahajan et al., 2018). Learning how to combine these sources occurs during fine-tuning. 
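To make the Image+Seq2Seq fusion described above concrete, the sketch below projects a 2048-dimensional ResNeXt image feature to the 512-dimensional text encoder width and appends it as one extra token to the encoder output that the decoder attends over. This is our own illustration of the described mechanism, not the ParlAI implementation; only the 2048 and 512 sizes are taken from the text.

```python
import torch
import torch.nn as nn

class ImageTokenFusion(nn.Module):
    """Project a fixed image feature and append it as an extra encoder 'token'."""

    def __init__(self, image_dim=2048, model_dim=512):
        super().__init__()
        self.proj = nn.Linear(image_dim, model_dim)  # only this projection is fine-tuned

    def forward(self, encoder_out, image_feat):
        # encoder_out: (batch, seq_len, model_dim); image_feat: (batch, image_dim)
        image_token = self.proj(image_feat).unsqueeze(1)      # (batch, 1, model_dim)
        return torch.cat([encoder_out, image_token], dim=1)   # decoder attends over all tokens

# Toy usage with random tensors.
fusion = ImageTokenFusion()
fused = fusion(torch.randn(2, 10, 512), torch.randn(2, 2048))
print(fused.shape)  # torch.Size([2, 11, 512])
```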
It is important to note that, while compute-heavy, pre-training was conducted exactly once, and all of the subsequent fine-tuning is significantly faster to run. 2458 Knowledge grounding Without With Wiz. of Wikipedia 16.8 8.7 ELI5 21.3 21.2 Image grounding Image Chat 19.5 18.3 IGC 10.1 10.1 Table 4: The impact of knowledge and image grounding in dodecaDialogue (validation perplexity). Transfer Performance between Tasks We first perform a preliminary study on a subset of the tasks: Reddit, ConvAI2, Wizard of Wikipedia and Empathetic Dialogues, and report the transfer ability of training on some of them, and testing on all of them (using the validation set), reporting perplexity. The results are reported in Table 3. They show that training on pushshift.io Reddit alone, a huge dataset, is effective at transfer to other tasks, but never as effective as fine-tuning on the task itself. Moreover, fine-tuning on most of the smaller tasks actually provides improvements over pushshift.io Reddit training alone at transfer, likely because the three tasks selected are more similar to each other than to pushshift.io Reddit. Finally, training on all four tasks is the most effective strategy averaged over all tasks compared to any other single model, although this does not beat switching between different fine-tuned models on a per-task basis. Comparison of Pre-training + Fine-tuning strategies Across all 12 tasks, we compare several pre-training strategies: using BERT, no pretraining at all, only initializing via fastText (Joulin et al., 2017), and using Twitter and pushshift.io Reddit pre-training with our Image+Seq2Seq architecture. For each variant we tune the learning rate, layers, number of heads and embedding size, with less pre-training typically requiring smaller capacity models. We then only fine-tune on a single task in these experiments, and report perplexity for that task alone, over all 12 tasks. The results are given in Table 2, reporting results on the validation set1. The results show a clear reduction in perplexity with more pre-training, as expected. This is most easily seen by the dodecaScore (last row) that is the mean perplexity over all 12 tasks, which decreases from 49.5 (from scratch models) down to 17.1 with pushshift.io Reddit pre-training. FastText (45.7) and Twitter (35.6) initializations help, but nowhere near as much. BERT fares better, but still is clearly 1We choose not to use the test set here as we report so many numbers, we do not want to overuse it. Relative Task Weighting 1 2 5 10 20 50 ∞ Cornell 21.9 21.5 20.6 20.1 19.9 19.8 Fine-tuned 20.1 20.0 20.0 19.9 19.8 19.8 20.0 ELI5 25.0 24.1 22.8 22.2 21.6 21.3 Fine-tuned 21.8 21.6 21.4 21.3 21.1 21.1 21.2 Ubuntu 23.1 22.2 20.6 19.6 18.6 17.4 Fine-tuned 18.2 18.1 17.8 17.7 17.2 17.2 17.3 Table 5: Validation perplexity on select dodecaDialogue tasks comparing relative weights of tasks during multi-tasking, followed by fine-tuning (row below). The relative task weight is the ratio of examples from that task compared to others presented during multitasking. ∞indicates single-task training. N-gram Beam Size Block Nucleus Task 1 2 3 5 N = 3 p =0.3 ConvAI2 20.0 21.0 21.3 21.2 21.3 18.7 WoW 35.9 37.4 37.8 37.9 37.9 31.1 Table 6: Impact of the decoding strategy on select tasks, reporting validation F1 score for the All Tasks MT model. N-gram block is for best beam size. worse than pushshift.io Reddit pre-training. 
The hypothesis here is that pushshift.io Reddit yields much more effective transfer as it is a dialogue task like our others, whereas non-dialogue corpora such as Wikipedia are not. This was previously observed for retrieval models in Humeau et al. (2019). Note that we do not report results for the image dialogue tasks for BERT as that architecture does not deal with images. Finally, as pushshift.io Reddit is so effective, we also compare to pushshift.io Reddit training only, with no fine-tuning at all across all tasks, similar to our initial study in Table 3. The performance is impressive, with some tasks yielding lower perplexity than BERT pre-training + single task finetuning. However, it still lags significantly behind fine-tuning applied after pushshift.io Reddit pretraining. Image and Knowledge Grounding Some of our tasks involve grounding on knowledge or images. To show such grounding helps, we report results with and without grounding on those tasks in Table 4, reporting perplexity. Particularly for Wizard of Wikipedia (knowledge) and Image Chat (images) such grounding has a clear effect. Multi-Task Results Next, we perform multitask training across all tasks, which is our ultimate goal in order to obtain an open-domain conversational agent. We optimize over the same set of 2459 Existing Approaches (independent) MT + FT All Tasks MT Approach PPL Score (Metric) PPL Score PPL Score ConvAI2 (Lewis et al., 2019) *11.9 *20.7 F1 11.1 21.6 10.8 21.7 DailyDialog (He et al., 2019) 11.1 F1 10.4 18.2 12.0 16.2 Wiz. of Wikipedia (Dinan et al., 2019) 23.1 35.5 F1 8.3 38.4 8.4 38.4 Empathetic Dialog (Rashkin et al., 2019) 21.2 6.27 Avg-BLEU 11.4 8.1 11.5 8.4 Cornell Movie (He et al., 2019) 27.5 F1 20.2 12.4 22.2 11.9 LIGHT (Urbanek et al., 2019) ∗27.1 ∗13.9 F1 18.9 16.2 19.3 16.1 ELI5 (Lewis et al., 2019) 24.2 20.4 Avg-ROUGE 21.0 22.6 24.9 20.7 Ubuntu (Luan et al., 2016) 46.8 F1 17.1 12.7 23.1 12.1 Twitter F1 30.7 9.9 38.2 9.8 pushshift.io Reddit F1 25.6 13.6 27.8 13.5 Image Chat (Shuster et al., 2018) 27.4 ROUGE-L (1st turn) 18.8 43.8 22.3 39.7 IGC (Mostafazadeh et al., 2017) 1.57 BLEU (responses) 11.9 9.9 12.0 8.2 Table 7: Test performance for various metrics on the dodecaDialogue tasks comparing our multi-task and multitask + fine-tuned methods to existing approaches (cited). Dashes mean metric was not provided. ∗was reported on validation only. Score is defined on a per-task basis in the metric column. hyperparameters as before, including multi-tasking weights for tasks, where one samples during training with differing probabilities, and we choose the best model by performing early stopping on the average performance across all tasks. In this way, we treat all 12 tasks as a single task, and thus during test time it is the model’s responsibility to understand how to respond from the context (image/dialogue) itself. In the end we did not obtain clear improvements beyond pre-training with pushshift.io Reddit and then equally sampling from all tasks. We report that final model’s validation performance in terms of perplexity in Table 2 (second to last column, “All Tasks MT”). It achieves a dodecaScore of 19.1, superior to all pre-train fine-tune approaches except pushshift.io Reddit pre-training followed by finetuning, and is also superior to a single pushshift.io Reddit model. However, comparing across tasks, while most are close to the corresponding best finetuned model, many are just slightly worse. This is an expected result and is often reported in multitask systems (Raffel et al., 2019). 
We look upon this result as both positive – we can obtain a single model doing well on all tasks, which a fine-tuned model cannot – whilst also remaining a challenge to the community: can one find architectures that leverage multi-tasking even better? Multi-Task followed by Fine-Tuning As also performed in Liu et al. (2015); Raffel et al. (2019) we can try to train in a multi-task manner on all tasks, before fine-tuning on a single task, and build a separate model performing this procedure for all tasks, in an attempt to improve single task results further. Using this approach, one is free to perform hyperparameter search differently for each task. Here, we found that applying relative task up-weighting during multi-tasking training made a clear difference to the final quality of the fine-tuned target task model, see Table 5. Generally, better results come from assigning most of the multi-task weight towards the task itself to be fine-tuned. Using such an approach we can get marginally better results than fine-tuning alone, although the differences are generally small. The final best models per task are shown compared to other approaches in Table 2 (third to last column, “MT All Tasks + FT Single Task”). The final validation dodecaScore is 16.8, only slightly below 17.1 for fine-tuning. Decoding Strategies So far, we have only been measuring perplexity, but we are actually interested in generation, which requires us to decode. We consider several standard approaches: greedy, beam search (with beam size, and minimum and maximum output length2 hyperparameters), beam search with beam blocking (blocking n-grams, we use n = 3) (Paulus et al., 2018) and nucleus sampling (with parameter p) (Holtzman et al., 2019). We show the effect of these choices in Table 6 for ConvAI2 and Wizard of Wikipedia (WoW). Final Systems The final test performance for our best multi-task and fine-tuned (via multi-task followed by fine-tuning) systems are reported in Table 7 (right), with more detailed results with all decoding-based metrics, and validation as well as test performance in Appendix A. Here, for the multi-task model we have fine-tuned the decoding hyperparameters per task. For results with a single set of decoding hyperparameters, see also 2The length parameters are important for ELI5. 2460 Appendix A. We generally find across all metrics a similar story as before when comparing the finetuning with multi-tasking: multi-tasking is successful, but the challenge is still to do better. Comparison to Existing Systems We compare to existing state-of-the-art results previously published for each task. Results are given in Table 7. As existing works report different metrics per task, we report perplexity where possible (but note, they may be computed on a different dictionary), and choose the sequence decoding-based metric that is commonly reported per task (listed in column ‘Metric’), where the ’Score’ column reports its value. We compare these to our best fine-tuned and multitasked models. Our multi-task model outperforms all available existing results, with 2 of the 12 tasks having no previous result. It is only surpassed by our fine-tuned model which also outperforms all available existing results. Overall, our methods set a strong challenge to future approaches. 
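As a brief aside on the decoding strategies compared in Table 6 above, nucleus (top-p) sampling can be sketched as follows; this is a generic, self-contained illustration over an arbitrary vocabulary size rather than the decoding code used for these experiments.

```python
import torch

def nucleus_sample(logits, p=0.3):
    """Top-p (nucleus) sampling: keep the smallest set of highest-probability
    tokens whose cumulative mass reaches p, renormalize, and sample from it."""
    probs = torch.softmax(logits, dim=-1)
    sorted_probs, sorted_idx = torch.sort(probs, descending=True)
    cumulative = torch.cumsum(sorted_probs, dim=-1)
    keep = (cumulative - sorted_probs) < p      # always keeps at least the top token
    kept = sorted_probs * keep
    kept = kept / kept.sum()
    choice = torch.multinomial(kept, num_samples=1)
    return sorted_idx[choice].item()

# Toy usage on a random next-token distribution over a 100-word vocabulary.
print(nucleus_sample(torch.randn(100), p=0.3))
```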
Human Evaluation In addition to automatic metrics, we perform human evaluation on two of the tasks to assess the abilities of our All Tasks MT conversational agent: the knowledge grounding task Wizard of Wikipedia (WoW) and the image grounding task Image Chat. We follow the same evaluation protocols as in Dinan et al. (2019); Shuster et al. (2018), comparing our method to the existing approaches referenced in Table 7. This involves collecting 100 human-bot conversations for WoW using crowdworkers, involving 8–10 turns each, across seen topics (seen in the training set) and unseen topics, and 500 image-based responses for Image Chat. A separate set of crowdworkers are then used to compare models pairwise following the ACUTE-Eval procedure of (Li et al., 2019b), where they are asked to choose which is “the more engaging response” for Image Chat (1500 trials) and “Who would you prefer to talk to for a long conversation?” for WoW (400 trials). The results, given in Figure 1, show our method outperforming the existing state of the art generative models on all three comparisons: Image Chat, WoW seen topics and WoW unseen topics. All three results are statistically significant (binomial test, p < .05). Additional details and results breakdown are given in Appendix Section B. Example Outputs We show some example outputs of our multi-task model for some of the tasks in Appendix C. Our model is able to leverage imFigure 1: Human evaluations on Image Chat and Wizard of Wikipedia (WoW), comparing existing state of the art models with our All Tasks MT conversational agent. Engagingness win rates are statistically significant in all three matchups (binomial test, p < .05). ages, knowledge, and given personality attributes to produce engaging dialogue with a large amount of variety, depending on the situation. Leave-One-Out Zero-Shot Performance Last, but not least, we evaluate the performance of a multi-task model at zero-shot transfer to a new dialogue task. This is performed by training on all but one of the tasks, and reporting performance on the left out one, repeating this experiment for all tasks. Our best performing models in that regard are reported in Table 2 (last column). First, it is reassuring that the overall scores are reasonable, outperforming a pushshift.io Reddit only model on every task except pushshift.io Reddit itself. This means that multi-tasking across many tasks helps transfer learning. However, the gap between zeroshot performance and multi-task or fine-tuning performance means there is still a significant challenge in improving these results. Finally, we believe that reporting results in this regime in addition to multitasking results may help avoid the temptation to “cheat” at multi-tasking by trying to detect the task and then apply a separate fine-tuned classifier, as presumably that approach will not truly leverage reasoning and skills between tasks, which transfer may help measure. 6 Discussion We have introduced the dodecaDialogue task, and provide strong baseline results leveraging multimodal Image+Seq2Seq transformers trained across all tasks. The goal of introducing this task is not just as another challenge dataset, but to further motivate building and evaluating conversational 2461 agents capable of multiple skills – one of the core goals of AI. We believe current systems are closer to that goal than ever before – but we also still have a long way to go. 
Recently reported results show systems can be reasonably competitive compared to humans in particular domains for short conversations (Li et al., 2019b; Shuster et al., 2018). This work tries to bridge the gap to avoid agents with niche skills, to move towards evaluating an open-domain set of skills. Still, despite leveraging 12 tasks, there are many skills not included in our set. For example, longer conversations involving memory (Moon et al., 2019), or mixing open-domain conversation with task oriented goals. Future work should consider adding these tasks to the ones used here, while continuing the quest for improved models. References Anton Bakhtin, Sam Gross, Myle Ott, Yuntian Deng, Marc’Aurelio Ranzato, and Arthur Szlam. 2019. Real or fake? learning to discriminate machine from human generated text. arXiv preprint arXiv:1906.03351. Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wentau Yih, Yejin Choi, Percy Liang, and Luke Zettlemoyer. 2018. QuAC: Question answering in context. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2174–2184, Brussels, Belgium. Association for Computational Linguistics. Cristian Danescu-Niculescu-Mizil and Lillian Lee. 2011. Chameleons in imagined conversations: A new approach to understanding coordination of linguistic style in dialogs. In Proceedings of the 2nd Workshop on Cognitive Modeling and Computational Linguistics, Portland, Oregon, USA. Association for Computational Linguistics. Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, Jos´e MF Moura, Devi Parikh, and Dhruv Batra. 2017. Visual dialog. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 326–335. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Emily Dinan, Varvara Logacheva, Valentin Malykh, Alexander Miller, Kurt Shuster, Jack Urbanek, Douwe Kiela, Arthur Szlam, Iulian Serban, Ryan Lowe, Shrimai Prabhumoye, Alan W. Black, Alexander Rudnicky, Jason Williams, Joelle Pineau, Mikhail Burtsev, and Jason Weston. 2020. The second conversational intelligence challenge (ConvAI2). In The NeurIPS ’18 Competition, pages 187– 208, Cham. Springer International Publishing. Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2019. Wizard of wikipedia: Knowledge-powered conversational agents. In Proceedings of the International Conference on Learning Representations. Angela Fan, Yacine Jernite, Ethan Perez, David Grangier, Jason Weston, and Michael Auli. 2019. ELI5: Long form question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3558–3567, Florence, Italy. Association for Computational Linguistics. Karthik Gopalakrishnan, Behnam Hedayatnia, Qinlang Chen, Anna Gottardi, Sanjeev Kwatra, Anu Venkatesh, Raefer Gabriel, and Dilek HakkaniT¨ur. 2019. Topical-Chat: Towards KnowledgeGrounded Open-Domain Conversations. In Proc. Interspeech 2019, pages 1891–1895. Tianxing He, Jun Liu, Kyunghyun Cho, Myle Ott, Bing Liu, James Glass, and Fuchun Peng. 2019. 
Mixreview: Alleviate forgetting in the pretrain-finetune framework for neural language generation models. arXiv preprint arXiv:1910.07117. Ari Holtzman, Jan Buys, Maxwell Forbes, and Yejin Choi. 2019. The curious case of neural text degeneration. arXiv preprint arXiv:1904.09751. Samuel Humeau, Kurt Shuster, Marie-Anne Lachaux, and Jason Weston. 2019. Poly-encoders: Transformer architectures and pre-training strategies for fast and accurate multi-sentence scoring. arXiv preprint arXiv:1905.01969. Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2017. Bag of tricks for efficient text classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 427–431, Valencia, Spain. Association for Computational Linguistics. Nitish Shirish Keskar, Bryan McCann, Lav R Varshney, Caiming Xiong, and Richard Socher. 2019. CTRL: A conditional transformer language model for controllable generation. arXiv preprint arXiv:1909.05858. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461. 2462 Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. 2019a. VisualBERT: A simple and performant baseline for vision and language. arXiv preprint arXiv:1908.03557. Margaret Li, Jason Weston, and Stephen Roller. 2019b. ACUTE-EVAL: Improved dialogue evaluation with optimized questions and multi-turn comparisons. In Proceedings of the NeurIPS Workshop on Conversational AI. Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, and Shuzi Niu. 2017. DailyDialog: A manually labelled multi-turn dialogue dataset. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 986–995, Taipei, Taiwan. Asian Federation of Natural Language Processing. Xiaodong Liu, Jianfeng Gao, Xiaodong He, Li Deng, Kevin Duh, and Ye-yi Wang. 2015. Representation learning using multi-task deep neural networks for semantic classification and information retrieval. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 912–921, Denver, Colorado. Association for Computational Linguistics. Ryan Lowe, Nissan Pow, Iulian Serban, and Joelle Pineau. 2015. The Ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems. In Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 285–294, Prague, Czech Republic. Association for Computational Linguistics. Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. ViLBERT: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. arXiv preprint arXiv:1908.02265. Yi Luan, Yangfeng Ji, and Mari Ostendorf. 2016. LSTM based conversation models. arXiv preprint arXiv:1603.09457. Dhruv Mahajan, Ross Girshick, Vignesh Ramanathan, Kaiming He, Manohar Paluri, Yixuan Li, Ashwin Bharambe, and Laurens van der Maaten. 2018. Exploring the limits of weakly supervised pretraining. In Proceedings of the European Conference on Computer Vision, pages 185–201, Cham. Springer International Publishing. Pierre-Emmanuel Mazar´e, Samuel Humeau, Martin Raison, and Antoine Bordes. 2018. 
Training millions of personalized dialogue agents. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2775–2779, Brussels, Belgium. Association for Computational Linguistics. Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, and Richard Socher. 2018. The natural language decathlon: Multitask learning as question answering. arXiv preprint arXiv:1806.08730. Alexander Miller, Will Feng, Dhruv Batra, Antoine Bordes, Adam Fisch, Jiasen Lu, Devi Parikh, and Jason Weston. 2017. ParlAI: A dialog research software platform. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 79–84, Copenhagen, Denmark. Association for Computational Linguistics. Nikita Moghe, Siddhartha Arora, Suman Banerjee, and Mitesh M. Khapra. 2018. Towards exploiting background knowledge for building conversation systems. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2322–2332, Brussels, Belgium. Association for Computational Linguistics. Seungwhan Moon, Pararth Shah, Rajen Subba, and Anuj Kumar. 2019. Memory grounded conversational reasoning. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP): System Demonstrations, pages 145–150, Hong Kong, China. Association for Computational Linguistics. Nasrin Mostafazadeh, Chris Brockett, Bill Dolan, Michel Galley, Jianfeng Gao, Georgios Spithourakis, and Lucy Vanderwende. 2017. Image-grounded conversations: Multimodal context for natural question and response generation. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 462–472, Taipei, Taiwan. Asian Federation of Natural Language Processing. Romain Paulus, Caiming Xiong, and Richard Socher. 2018. A deep reinforced model for abstractive summarization. In Proceedings of the International Conference on Learning Representations. Lianhui Qin, Michel Galley, Chris Brockett, Xiaodong Liu, Xiang Gao, Bill Dolan, Yejin Choi, and Jianfeng Gao. 2019. Conversing by reading: Contentful neural conversation with on-demand machine reading. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5427–5436, Florence, Italy. Association for Computational Linguistics. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8). Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683. 2463 Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas. Association for Computational Linguistics. Hannah Rashkin, Eric Michael Smith, Margaret Li, and Y-Lan Boureau. 2019. Towards empathetic opendomain conversation models: A new benchmark and dataset. 
In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5370–5381, Florence, Italy. Association for Computational Linguistics. Abigail See, Stephen Roller, Douwe Kiela, and Jason Weston. 2019. What makes a good conversation? how controllable attributes affect human judgments. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1702–1723, Minneapolis, Minnesota. Association for Computational Linguistics. Kurt Shuster, Samuel Humeau, Antoine Bordes, and Jason Weston. 2018. Engaging image chat: Modeling personality in grounded dialogue. arXiv preprint arXiv:1811.00945. Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. 2015. A neural network approach to context-sensitive generation of conversational responses. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 196– 205, Denver, Colorado. Association for Computational Linguistics. Alon Talmor and Jonathan Berant. 2019. MultiQA: An empirical investigation of generalization and transfer in reading comprehension. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4911–4921, Florence, Italy. Association for Computational Linguistics. Hao Tan and Mohit Bansal. 2019. LXMERT: Learning cross-modality encoder representations from transformers. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5099–5110, Hong Kong, China. Association for Computational Linguistics. Jack Urbanek, Angela Fan, Siddharth Karamcheti, Saachi Jain, Samuel Humeau, Emily Dinan, Tim Rockt¨aschel, Douwe Kiela, Arthur Szlam, and Jason Weston. 2019. Learning to speak and act in a fantasy text adventure game. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 673–683, Hong Kong, China. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R´emi Louf, Morgan Funtowicz, and Jamie Brew. 2019a. HuggingFace’s transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771. Thomas Wolf, Victor Sanh, Julien Chaumond, and Clement Delangue. 2019b. TransferTransfo: A transfer learning approach for neural network based conversational agents. arXiv preprint arXiv:1901.08149. Saining Xie, Ross Girshick, Piotr Doll´ar, Zhuowen Tu, and Kaiming He. 2017. Aggregated residual transformations for deep neural networks. In Computer Vision and Pattern Recognition (CVPR). Yinfei Yang, Steve Yuan, Daniel Cer, Sheng-yi Kong, Noah Constant, Petr Pilar, Heming Ge, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil. 2018. Learning semantic textual similarity from conversations. 
In Proceedings of The Third Workshop on Representation Learning for NLP, pages 164–174, Melbourne, Australia. Association for Computational Linguistics. Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Personalizing dialogue agents: I have a dog, do you have pets too? In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2204– 2213, Melbourne, Australia. Association for Computational Linguistics. 2464 A Additional Results MT + FT All Tasks MT PPL BLEU ROUGE F1 PPL BLEU ROUGE F1 4 1 2 L 4 1 2 L ConvAI2 11.1 6.6 37.0 11.6 31.8 21.6 10.8 5.5 39.4 12.5 33.7 21.7 DailyDialog 10.4 4.0 35.6 10.0 30.8 18.2 12.0 2.9 33.9 8.7 29.2 16.2 Wiz. of Wikipedia 8.3 21.5 55.3 28.4 44.9 38.4 8.4 21.0 53.2 28.0 45.4 38.4 Empathetic Dialog 11.4 3.5 38.0 9.5 32.3 19.5 11.5 3.7 37.2 8.9 31.4 19.3 Cornell Movie 20.2 2.5 29.5 6.7 25.7 12.4 22.2 2.1 29.1 6.5 25.6 11.9 LIGHT 18.9 2.6 30.8 5.8 24.8 16.2 19.3 2.4 30.5 5.6 24.6 16.1 ELI5 21.0 3.7 38.6 7.2 22.1 23.1 24.9 3.2 35.2 6.3 20.5 21.3 Ubuntu 17.1 2.5 27.0 5.0 22.8 12.7 23.1 3.7 26.0 4.3 22.0 12.1 Twitter 30.7 3.2 16.5 3.3 14.3 9.9 38.2 2.6 19.4 3.3 16.5 9.8 pushshift.io Reddit 25.6 2.1 24.1 4.5 18.7 13.6 27.8 1.6 23.4 4.2 18.1 13.5 Image Chat 18.8 2.4 30.1 5.7 26.0 13.0 22.3 2.1 28.4 4.9 24.6 12.9 IGC 11.9 8.6 65.0 34.1 60.5 38.4 12.0 8.0 61.3 28.3 56.8 41.4 dodecaScore 17.1 5.3 35.6 11.0 29.6 19.8 19.4 4.9 34.8 10.1 29.0 19.6 Table 8: Test performance for various metrics on the dodecaDialogue tasks comparing our multi-task and multitask + fine-tuned methods. MT + FT All Tasks MT PPL BLEU ROUGE F1 PPL BLEU ROUGE F1 4 1 2 L 4 1 2 L ConvAI2 11.2 5.7 36.7 10.9 31.6 21.1 11.3 5.3 38.7 11.6 32.9 21.3 DailyDialog 10.2 4.4 36.8 10.7 32 18.8 11.8 3.1 34.8 9.3 30.2 17.1 Wiz. of Wikipedia 8.5 20.8 54.9 28.0 44.8 37.9 8.7 20.2 55.2 28.2 45.0 37.9 Empathetic Dialog 11.1 3.6 38.6 9.8 32.7 19.7 11.2 3.5 37.5 9.1 31.8 19.3 Cornell Movie 19.8 2.5 29.3 6.7 25.6 12.3 21.9 2.1 29.0 6.5 25.6 11.8 LIGHT 18.7 2.6 31.2 6.2 25.2 16.5 19.0 2.5 30.9 6.1 25.0 16.4 ELI5 21.1 3.7 38.7 7.3 22.1 23.2 25.0 3.2 35.3 6.3 20.6 21.2 Ubuntu 17.2 2.4 27.1 5.0 22.9 12.8 23.3 3.5 26.4 4.6 22.3 12.2 Twitter 29.8 3.2 16.7 3.5 14.5 10.1 37.0 2.6 19.7 3.6 16.8 9.9 pushshift.io Reddit 25.8 2.2 24.2 4.5 18.7 13.4 28.0 1.7 23.4 4.1 18.2 13.3 Image Chat 18.3 2.4 30.7 6.2 26.3 14.3 21.8 2.1 28.6 5.3 24.7 13.1 IGC 10.0 10.6 67.9 38.2 64.5 45.1 10.2 11.0 66.3 34.8 61.4 45.3 dodecaScore 16.8 5.3 36.1 11.4 30.1 20.4 19.1 5.1 35.5 10.8 29.5 19.9 Table 9: Validation performance for various metrics on the dodecaDialogue tasks comparing our multi-task and multi-task + fine-tuned methods. PPL BLEU ROUGE f1 4 1 2 L ConvAI2 11.3 5.6 22.2 7.0 20.4 21.3 DailyDialog 11.8 4.8 18.9 5.6 17.6 16.6 Wiz. of Wikipedia 8.7 19.7 40.9 22.6 36.9 37.7 Empathetic Dialog 11.2 4.8 20.9 5.6 19.0 19.3 Cornell Movie 21.9 3.3 14.2 3.2 13.4 11.3 LIGHT 19.0 2.9 17.0 3.4 15.0 16.2 ELI5 25.0 1.6 14.2 2.6 9.6 16.2 Ubuntu 23.3 2.3 12.5 1.9 11.6 11.2 Twitter 37.0 2.3 9.5 1.7 8.7 8.9 pushshift.io Reddit 28.0 1.8 12.1 2.2 10.4 11.3 Image Chat (all turns) 21.8 2.1 14.7 2.5 13.6 13.1 IGC 10.2 5.5 50.7 25.3 49.1 36.0 dodecaScore 19.1 4.7 20.7 7.0 18.8 18.3 Table 10: All Tasks Multi-Tasking (MT) validation performance for various metrics on the dodecaDialogue tasks with one set of decoding parameters: a beam size of 3, minimum response length of 10, and blocking repeated tri-grams. 
2465 BLEU ROUGE-L F1 Score Beam Min L Max L N-gram Block Score Beam Min L Max L N-gram Block Score Beam Min L Max L N-gram Block ConvAI2 5.7 10 10 128 3 31.6 10 50 128 3 21.1 3 10 128 3 DailyDialog 4.4 10 5 128 3 32.0 3 50 128 3 18.8 5 10 128 3 Wiz. of Wikipedia 20.8 10 5 128 0 44.8 10 50 128 3 37.9 10 10 128 3 Empathetic Dialog 3.6 10 5 128 3 32.7 5 50 128 3 19.7 5 10 128 3 Cornell Movie 2.5 10 5 128 3 25.6 10 50 128 3 12.3 10 20 128 3 LIGHT 2.6 3 5 128 3 25.2 5 50 128 3 16.5 5 20 128 3 ELI5 3.7 10 200 256 3 22.1 5 200 256 3 23.2 10 200 256 3 Ubuntu 2.4 10 5 128 0 22.9 10 40 128 3 12.8 2 10 128 3 Twitter 3.2 10 20 128 3 14.5 5 50 128 3 10.1 10 20 128 3 pushshift.io Reddit 2.2 10 10 128 0 18.7 5 50 128 3 13.4 5 50 128 3 Image Chat (all turns) 2.4 10 5 128 3 26.4 3 50 128 3 14.3 5 1 128 3 IGC 10.6 10 5 128 3 64.5 3 50 128 3 45.1 10 5 128 3 Table 11: Best decoding parameters for each task, based on metric. Scores are from the best performing taskspecific multi-task + fine-tuned model on validation sets. ”Min L” and ”Max L” refer to the minimum and maximum decoding length, where ”L” is the number of tokens. B Human Evaluation Further Details We provide additional results from our human evaluations described in Section 5. In Figure 1, we compare our All Tasks MT Image+Seq2Seq model to existing baselines from both tasks; to produce those outputs, we used beam search with a beam size of 10 and tri-gram blocking. As with our experiments regarding automatic metrics, we additionally explored nucleus sampling, with parameter p = 0.7, and compared to both the baseline models as well as human outputs. In tables 12, 13, and 14, we show the full results of comparing various models both to each other and also to humans. When collecting the model-human chats for Wizard of Wikipedia, we additionally asked the humans for a rating from 1-5 at the end of each conversation, to indicate the quality of the model’s responses; we compare these Likert ratings to that of Dinan et al. (2019), which followed the same protocol, in Table 15. The findings are similar to the pairwise ACUTE-Eval results in the main paper. Win Percentage Lose Percentage (Shuster et al., 2018) Image+Seq2Seq Image+Seq2Seq Human Nucleus Beam (Shuster et al., 2018) 50.8 ∗60.7 ∗79.3 Image+Seq2Seq Nucleus 49.2 52.1 ∗73.8 Image+Seq2Seq Beam ∗39.3 47.9 ∗79.4 Human ∗20.7 ∗26.2 ∗20.6 Table 12: Human evaluations on Image Chat, comparing various decoding schemes for our Image+Seq2Seq model trained on all tasks MT, as well as comparisons with human outputs. Scores with ∗are statistically significant (binomial test, p < .05). Win Percentage Lose Percentage (Dinan et al., 2019) Image+Seq2Seq Image+Seq2Seq Human Nucleus Beam (Dinan et al., 2019) 59.1 62.1 71.9 Image+Seq2Seq Nucleus 40.1 70.4 Image+Seq2Seq Beam 37.9 60.0 Human 28.1 29.6 40.0 Table 13: Human evaluations on Wizard of Wikipedia (seen) test set, comparing various decoding schemes for our Image+Seq2Seq model trained on all tasks MT, as well as comparisons with human outputs, using ACUTE-Eval. All scores are statistically significant (binomial test, p < .05). 2466 Win Percentage Lose Percentage (Dinan et al., 2019) Image+Seq2Seq Image+Seq2Seq Human Nucleus Beam (Dinan et al., 2019) 62.3 64.1 75.8 Image+Seq2Seq Nucleus 37.7 72.8 Image+Seq2Seq Beam 35.9 60.5 Human 24.2 27.2 39.5 Table 14: Human evaluations on Wizard of Wikipedia (unseen) test set, comparing various decoding schemes for our Image+Seq2Seq model trained on all tasks MT, as well as comparisons with human outputs, using ACUTEEval. 
All scores are statistically significant (binomial test, p < .05). Method Decode Method Seen Unseen Human Performance 4.13 (1.08) 4.34 (0.98) (Dinan et al., 2019) Beam 2.92 (1.33) 2.93 (1.30) Image+Seq2Seq (All Tasks MT) Beam 3.42 (1.10) 3.33 (1.09) Image+Seq2Seq (All Tasks MT) Nucleus 2.95 (1.08) 3.43 (1.05) Table 15: Human evaluations on Wizard of Wikipedia, comparing various decoding schemes for our Image+Seq2Seq model trained on all tasks MT, as well as comparisons with human outputs, in terms of Likert Scores. Ratings are reported as mean (stddev). 2467 C All Tasks Multi-Task Model Examples Context ConvAI2 Personas: i love to drink wine and dance in the moonlight. i am very strong for my age. i’m 100 years old. i feel like i might live forever. Speaker A: hi , how are you doing today ? Speaker B: good ! you ! celebrating with fellow centenarians Speaker A: nice . i’m playing some card games with my family . Speaker B: that sounds like fun . i like wine and dancing too ! Speaker A: same . i try to get a small workout in a three mile walk for me is key . Model: that’s great. i’m very strong for my age. Context Wizard of Wikipedia Chosen Topic: The Rolling Stones Knowledge: no passages used Speaker A: I love the Rolling Stones. Do you? They were formed in 1962. Speaker B: I love them! Who was in the band at the start? Speaker A: The first lineup was Brian Jones, Mick, Keith, Bill Wyman, Charlie Watts and Ian Steward. Speaker B: Was Mick ever replaced? Model: i’m not sure, but i do know that mick jagger was the lead vocalist. Context Wizard of Wikipedia Chosen Topic: Dog Knowledge: The dog was the first species to be domesticated and has been selectively bred over millennia for various behaviors, sensory capabilities, and physical attributes. Speaker A: Hi, do you like dogs? Speaker B: I do! Actually, my favourite is the Basenji breed! It is a hunting dog. Speaker A: Oh, those are cool. I have seen them on the Westminster Dog Show, but I’ve never met one in person. I have a Havanese. Speaker B: Wow, are you Cuban? I know that the Havanese are the national dog of cuba! Speaker A: No, not Cuban, but yes they are the national dog of Cuba! Apparently all the Havanese in the US are descended from a handful that came here from Cuba during the revolution! Model: that’s so cool! the dog was the first species to be domesticated! Context Wizard of Wikipedia Chosen Topic: Orc Knowledge: While the overall concept of orcs draws on a variety of pre-existing mythology, the main conception of the creatures stems from the fantasy writings of J. R. R. Tolkien, in particular ”The Lord of the Rings”. Speaker A: Orcs are cool fictional humanoid beings Speaker B: Yeah I’ve seen them in a lot of things like Magic and DnD. Speaker A: While the concept remains the same it draws on many preexisting mythology Speaker B: That’s really interesting! Orcs are some of my favorite fictional characters honestly. Model: the main conception of the creatures stems from the fantasy writings of the lord of the rings 2468 Context Empathetic Dialogues Situation I received concert tickets for Christmas. Emotion excited Speaker A: I couldn’t wait to go to the concert. Speaker B: Which Concert? Speaker A: The U2 concert. Tickets were really expensive and I never thought we would be able to go, but somehow we did!!! Model: that’s awesome. i’ve never been to a u2 concert. Context Empathetic Dialogues Situation I went to buy a 42 inch tv at walmart, the billboard says 299 usd, it was at 399 usd after taxes. 
Emotion disappointed Speaker A: So yeah i drove 2 hours to get to wallmart. with 299 in my wallet as advertised, and the real price of the tv was 399 usd. what a bummer Speaker B: Thats horrible. Speaker A: Yeah i had my hopes reall high and then i left empty handed, such a terrible feeling! Model: i know what you mean. i would have been so mad. Context Empathetic Dialogues Situation All my friends live in a different country Emotion lonely Speaker A: hi, i feel so lonely sometimes because all my friends live in a different country Speaker B: Oh, I’m sure you are lonely. Maybe you can join some kind of club that lets you meet new friends? Speaker A: i was thinking about it! I wanted to join a group for local moms Model: that’s a great idea! i’m sure you’ll meet some great people there. Context Cornell Movie No Context Speaker A: Why’d you let him get to you? Speaker B: Who? Speaker A: Dorsey. Speaker B: I hate him. Speaker A: I know. It’d have to be a pretty big deal to get you to mainline tequila. You don’t seem like the type. Model: i’m not above all that. i just don’t think i can be ”laid back” like everyone else. Context LIGHT Setting name : Castle Maids’ Room, Inside Castle Setting desc: Inside the maids room is dark and grey. It is much different from the rest of the Castle which has grand walls and floors and paintings from famous painters around the wall. The maid holds many secrets in this room. One which is a secret door that holds paintings she should not have. Partner name: painter Self name: worker Self persona: I am a worker from a village up the river. I tie the logs our woodsmen fell into rafts, and steer them down to the next town for milling. I am hardworking and patient. I enjoy fishing off of the edge of the raft during quiet times on the raft. Speaker A: Hello Speaker B: Good Evening. You must be the painter of this room. Speaker A: Yes sir! do you have a preferred color? self act get brush Speaker B: I am just a lowly worker. I think the red paint looks good. I can assist you with painting. Speaker A: I will love that. Tell me, how long have you been here? partner act give brush to worker self act drop paint brushes Model: i’ve been here for a long time. i have been working here since i was a child. 2469 Context Daily Dialog Topic ordinary life Emotion no emotion Speaker A: So how did I do on my driving test ? Speaker B: Do you want the truth ? Speaker A: Of course , I do . Speaker B: Well , you really didn’t do all that well . Speaker A: How bad did I do ? Model: don’t worry, i’ll give you another try. Context pushshift.io Reddit Subreddit beer Speaker A: In Santa Cruz? Be SURE to hit up Sante Adairius Rustic Ales in Capitola! If you like farmhouse, funky, sour ales, they are real tough to beat. Their IPAs and porters are also amazing. Another great new brewery is Discretion, in Capitola, and I am also a fan of Santa Cruz Mountain Brewing and Seabright Brewing for affordability’s sake ($5 pitchers at Seabright on Tuesday/Thursday I believe?) but some beer snobs would turn their noses up at those places. Also, burger in Santa Cruz and Aptos always has a great taplist, as well as Aptos BBQ in Aptos. Model: thanks for the suggestions! i’ll have to check out sante adairius rustic ales. i’ve heard good things about seabright brewing, but haven’t had a chance to try it yet. do you know much about it? Context Twitter No Context Speaker A: reading now . this is absolutely fantastic . Model: thank you so much. i’m glad you like it. i hope you enjoy it as much as i do. 
Context Ubuntu No Context Speaker A: i set up my hd such that i have to type a passphrase to access it at boot. how can i remove that passwrd, and just boot up normal. i did this at install, it works fine, just tired of having reboots where i need to be at terminal to type passwd in. help? . Speaker B: backup your data, and re-install without encryption ”might” be the easiest method . Model: i’ll try that, but i’m not sure how to do it without encryption.. Context ELI5 Knowledge <P> And it does n’t come without questioning and critical thought . What we feel entitled to is a better world . And the thing is , we can see it , clearer and with broader scope than anyone before us , because all we know is the entire world , in real time . <P> Page 1 of 2 1 2 Next > Spycke83 PSLS Level : Bronze Messages : 52 Likes Received : 20 PSN : Spycke83 I‘m wondering what you guys think of this newish trend of gamers feeling they are entitled certain things from a game . . . Speaker A: Why do recent generations feel so entitled , like everyone owes them ? –T– Model: it’s not that they feel entitled, but that they don’t have enough money to pay for it. 2470 Context Image Chat Image Personality Emotional Speaker A: Airplanes are scary to get on, you never know if it will crash or not. Speaker B: But these are professional pilots though. Model: i can’t imagine being a professional pilot. Context Image Chat Image Personality Playful Speaker A: Great for night fishing. Speaker B: Meh, I doubt any fish would bite anyway. Model: i m sure there are plenty of fish that would bite. Context Image Chat (First Turn) Image Personality Curious Model: i wonder if this is a lake? Context IGC Image Speaker A: I visited this forest on a walk. Speaker B: Was this tree special? Model: yes, it’s one of the oldest trees in the world. Context IGC Image Speaker A: This is what I am sipping on today. Speaker B: I’m jealous, how much did it cost? Model: it cost me $20.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2471–2480 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2471 Automatic Poetry Generation from Prosaic Text Tim Van de Cruys Institut de Recherche en Informatique de Toulouse (IRIT) Artificial and Natural Intelligence Toulouse Institute (ANITI) CNRS, Toulouse [email protected] Abstract In the last few years, a number of successful approaches have emerged that are able to adequately model various aspects of natural language. In particular, language models based on neural networks have improved the state of the art with regard to predictive language modeling, while topic models are successful at capturing clear-cut, semantic dimensions. In this paper, we explore how these approaches can be adapted and combined to model the linguistic and literary aspects needed for poetry generation. The system is exclusively trained on standard, non-poetic text, and its output is constrained in order to confer a poetic character to the generated verse. The framework is applied to the generation of poems in both English and French, and is equally evaluated for both languages. Even though it only uses standard, non-poetic text as input, the system yields state of the art results for poetry generation. 1 Introduction Automatic poetry generation is a challenging task for a computational system. For a poem to be meaningful, both linguistic and literary aspects need to be taken into account. First of all, a poetry generation system needs to properly model language phenomena, such as syntactic well-formedness and topical coherence. Furthermore, the system needs to incorporate various constraints (such as form and rhyme) that are related to a particular poetic genre. And finally, the system needs to exhibit a certain amount of literary creativity, which makes the poem interesting and worthwhile to read. In recent years, a number of fruitful NLP approaches have emerged that are able to adequately model various aspects of natural language. In particular, neural network language models have improved the state of the art in language modeling, while topic models are successful at capturing clearcut, semantic dimensions. In this paper, we explore how these approaches can be adapted and combined in order to model both the linguistic and literary aspects that are required for poetry generation. More specifically, we make use of recurrent neural networks in an encoder-decoder configuration. The encoder first constructs a representation of an entire sentence by sequentially incorporating each word of the sentence into a fixed-size hidden state vector. The final representation is then given to the decoder, which emits a sequence of words according to a probability distribution derived from the hidden state of the input sentence. By training the network to predict the next sentence with the current sentence as input, the network learns to generate plain text with a certain discourse coherence. By modifying the probability distribution yielded by the decoder, we enforce the incorporation of poetic constraints, such that the network can be exploited for the generation of poetic verse. It is important to note that the poetry system is not trained on poetic texts; rather, the system is trained on a corpus of standard, prosaic texts extracted from the web, and it will be the constraints applied to the network’s probability distribution that confer a poetic character to the generated verse. 
The rest of this article is structured as follows. In section 2, we present an overview of related work on automatic poetry generation. Section 3 describes the different components of our model. In section 4, we present an extensive human evaluation of our model, as well as a number of examples generated by the system. Section 5, then, concludes and discusses some future research directions. 2 Related work Early computational implementations that go beyond mere mechanical creativity have often relied on rule-based or template-based methods. One of the first examples is the ASPERA system (Gervás, 2472 2001) for Spanish, which relies on a complex knowledge base, a set of rules, and case-based reasoning. Other approaches include Manurung et al. (2012), which combines rule-based generation with genetic algorithms, Gonçalo Oliveira (2012)’s PoeTryMe generation system, which relies on chart generation and various optimization strategies, and Veale (2013), which exploits metaphorical expressions using a pattern-based approach. Whereas poetry generation with rule-based and template-based models has an inherent tendency to be somewhat rigid in structure, advances in statistical methods for language generation have opened up new avenues for a more varied and heterogeneous approach to creative language generation. Greene et al. (2010), for example, use an n-gram language model in combination with a rhythmic model implemented with finite-state transducers. And more recently, recurrent neural networks (RNNs) have been exploited for poetry generation; Zhang and Lapata (2014) use an encoder-decoder RNN for Chinese poetry generation, in which one RNN builds up a hidden representation of the current line in a poem, and another RNN predicts the next line word by word, based on the hidden representation of the current line. The system is trained on a corpus of Chinese poems. Yan (2016) tries to improve upon the encoderdecoder approach by incorporating a method of iterative improvement: the network constructs a candidate poem in each iteration, and the representation of the former iteration is used in the creation of the next one. And Wang et al. (2016) extend the method using an attention mechanism. Ghazvininejad et al. (2016) combine RNNs (for syntactic fluency) with distributional similarity (for the modeling of semantic coherence) and finite state automata (for imposing literary constraints such as meter and rhyme). Their system, Hafez, is able to produce well-formed poems with a reasonable degree of semantic coherence, based on a userdefined topic. Hopkins and Kiela (2017) focus on rhythmic verse; they combine an RNN, trained on a phonetic representation of poems, with a cascade of weighted finite state transducers. Lau et al. (2018) present a joint neural network model for the generation of sonnets, called Deep-speare, that incorporates the training of rhyme and rhythm into the neural network; the network learns iambic stress patterns from data, while rhyming word pairs are separated from non-rhyming ones using a marginbased loss. And a number of recent papers extend neural poetry generation for Chinese with various improvements, such as unsupervised style disentanglement (Yang et al., 2018), reinforcement learning (Yi et al., 2018), and rhetorical control (Liu et al., 2019). 
Note that all existing statistical models are trained on or otherwise make use of a corpus of poetry; to our knowledge, our system is the first to generate poetry with a model that is exclusively trained on a generic corpus, which means the poetic character is endowed by the model itself. Secondly, we make use of a latent semantic model in order to model topical coherence, which is equally novel.

3 Model

3.1 Neural architecture

The core of the poetry system is a neural network architecture, trained to predict the next sentence $S_{i+1}$ given the current sentence $S_i$. The architecture is made up of gated recurrent units (GRUs; Cho et al., 2014) that are linked together in an encoder-decoder setup. The encoder sequentially reads in each word $w_{1,\dots,N}$ of sentence $S_i$ (represented by its word embedding $x$) such that, at each time step $t_i$, a hidden state $\hat{h}_t$ is computed based on the current word's embedding $x_t$ and the previous time step's hidden state $\hat{h}_{t-1}$. For each time step, the hidden state $\hat{h}_t$ is computed according to the following equations:

$r_t = \sigma(W_r x_t + U_r \hat{h}_{t-1})$   (1)
$z_t = \sigma(W_z x_t + U_z \hat{h}_{t-1})$   (2)
$\bar{h}_t = \tanh(W x_t + U(r_t \odot \hat{h}_{t-1}))$   (3)
$\hat{h}_t = (1 - z_t) \odot \hat{h}_{t-1} + z_t \odot \bar{h}_t$   (4)

where $r_t$ represents the GRU's reset gate, $z_t$ represents the update gate, $\bar{h}_t$ represents the candidate update state, and $\odot$ represents pointwise multiplication. $\hat{h}_t$ can be interpreted as a representation of the sequence $w_1, \dots, w_t$, and the final hidden state $\hat{h}_N$ will therefore be a representation of the entire sentence. This final hidden encoder state is transferred to the decoder. The decoder then sequentially predicts the next sentence word by word, conditioned on the encoder's final hidden representation; at each time step $t_{i+1}$, the decoder equally computes a hidden state $h_t$ based on the current word's embedding $x_t$ (which was predicted by the decoder in the previous time step) and the previous time step's hidden state $h_{t-1}$ (the first hidden state of the decoder is initialized by $\hat{h}_N$ and the first word is a symbolic start token). The computations for each time step $h_t$ of the decoder are equal to the ones used in the encoder (equations 1 to 4).

[Figure 1: Graphical representation of the poetry generation model. The encoder encodes the current verse, and the final representation is given to the decoder, which predicts the next verse word by word in reverse. The attention mechanism is represented for the first time step. The rhyme prior is applied to the first time step, and the topic prior is optionally applied to all time steps, mediated by the entropy threshold of the network's output distribution.]

In order to fully exploit the entire sequence of representations yielded by the encoder, we augment the base architecture with an attention mechanism, known as general attention (Luong et al., 2015). The attention mechanism allows the decoder to consult the entire set of hidden states computed by the encoder; at each time-step—for the generation of each word in sentence $S_{i+1}$—the decoder determines which words in sentence $S_i$ are relevant, and accordingly selects a linear combination of the entire set of hidden states.
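To make the data flow concrete, the following is a minimal PyTorch sketch of this sentence-to-sentence GRU setup. It is only an illustration, not the authors' implementation (which relies on OpenNMT): the attention mechanism described next is omitted, and the class and variable names are invented for the example. The layer sizes follow the configuration reported in section 4.1.

```python
# Minimal sketch of the sentence-to-sentence GRU encoder-decoder
# (attention and weight sharing omitted; names are illustrative only).
import torch
import torch.nn as nn

class VerseSeq2Seq(nn.Module):
    def __init__(self, vocab_size, emb_dim=512, hidden_dim=2048, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hidden_dim, num_layers, batch_first=True)
        self.decoder = nn.GRU(emb_dim, hidden_dim, num_layers, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, src_ids, tgt_ids):
        # Encode the current verse S_i; the final hidden state initializes
        # the decoder (cf. equations 1-4).
        enc_states, enc_final = self.encoder(self.embed(src_ids))
        # Teacher-forced decoding of the next verse S_{i+1}, assumed to be
        # fed in reversed word order (reversal done in preprocessing) so
        # that the rhyme word is generated first.
        dec_states, _ = self.decoder(self.embed(tgt_ids), enc_final)
        return self.out(dec_states)  # unnormalized scores over the vocabulary
```

Training such a skeleton would minimize the cross-entropy between these scores and the (reversed) next verse, which corresponds to the objective given in equation 9 below.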
In order to do so, we first compute an attention vector $a_t$, which attributes a weight to each hidden state $\hat{h}_i$ yielded by the encoder (based on the decoder's current hidden state $h_t$), according to equation 5:

$a_t(i) = \dfrac{\exp(\mathrm{score}(h_t, \hat{h}_i))}{\sum_{i'} \exp(\mathrm{score}(h_t, \hat{h}_{i'}))}$   (5)

where

$\mathrm{score}(h_t, \hat{h}_i) = h_t^\top W_a \hat{h}_i$   (6)

The next step is to compute a global context vector $c_t$, which is a weighted average (based on attention vector $a_t$) of all of the encoder's hidden states. The resulting context vector is then combined with the original decoder hidden state in order to compute a new, attention-enhanced hidden state $\tilde{h}_t$:

$\tilde{h}_t = \tanh(W_c [c_t; h_t])$   (7)

where $[\cdot\,;\cdot]$ represents vector concatenation. Finally, this resulting hidden state $\tilde{h}_t$ is transformed into a probability distribution $p(w_t \mid w_{<t}, S_i)$ over the entire vocabulary using a softmax layer:

$p(w_t \mid w_{<t}, S_i) = \mathrm{softmax}(W_s \tilde{h}_t)$   (8)

As an objective function, the sum of the log-probabilities of the next sentence is optimized, conditioned on the hidden state representation of the current sentence:

$J_t = \sum_{(S_i, S_{i+1}) \in C} -\log p(S_{i+1} \mid S_i)$   (9)

At inference time, for the generation of a verse, each word is then sampled randomly according to the output probability distribution. Crucially, the decoder is trained to predict the next sentence in reverse, such that the last word of the verse is the first one that is generated. This reverse operation is important for an effective incorporation of rhyme, as will be explained in the next section. A graphical representation of the architecture, which includes the constraints discussed below, is given in Figure 1.

3.2 Poetic constraints as a priori distributions

As the neural architecture described above is trained on generic text, its output will in no way resemble poetic verse. In order to endow the generated output with a certain poetic character, we modify the neural network's output probability distribution through the application of a prior probability distribution, which constrains the standard output probability distribution and boosts the probability of words that are a good fit within the defined constraints. We will consider two kinds of constraints: a rhyme constraint and a topical constraint.

3.2.1 Rhyme constraint

In order to adequately model the rhyme constraint, we make use of a phonetic representation of words, extracted from the online dictionary Wiktionary.1 For each word of the vocabulary, we determine its rhyme sound (i.e. the final group of vowels, optionally followed by a group of consonants), as well as the group of consonants that precedes the group of vowels. A sample of rhymes that are thus extracted is represented in Table 1.

word         rhyme
embrace      (mbɹ, eɪs)
suitcase     (tk, eɪs)
sacrifice    (f, aɪs)
paradise     (d, aɪs)
reproduit    (dɥ, i)
thérapie     (p, i)
examen       (m, ɛ̃)
canadien     (dj, ɛ̃)

Table 1: A number of rhyme examples extracted from Wiktionary, for both English and French.

The next step then consists in creating a probability distribution for a particular rhyme sound that we want the verse to adhere to:

$p_{\text{rhyme}}(w) = \frac{1}{Z}\, x$, with $x_i = 1$ if $i \in R$ and $x_i = \epsilon$ otherwise   (10)

where $R$ is the set of words that contain the required rhyme sound, $\epsilon$ is a small value close to zero, used for numerical stability, and $Z$ is a normalization factor in order to ensure a probability distribution.
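As an illustration of equation 10, the rhyme prior can be built as a simple vector over the vocabulary. The sketch below is hypothetical: the dictionary rhyme_sound stands in for the rhyme sounds extracted from Wiktionary, and eps plays the role of ϵ.

```python
import numpy as np

def rhyme_prior(vocab, rhyme_sound, target_rhyme, eps=1e-8):
    """Build p_rhyme(w) of equation 10: mass on words whose rhyme sound
    matches the target, a tiny epsilon elsewhere, normalized to sum to 1.
    `rhyme_sound` is an assumed dict mapping word -> rhyme sound string."""
    x = np.full(len(vocab), eps)
    for i, w in enumerate(vocab):
        if rhyme_sound.get(w) == target_rhyme:
            x[i] = 1.0
    return x / x.sum()

# Hypothetical usage: a prior favouring words that rhyme in /eɪs/.
vocab = ["embrace", "suitcase", "sacrifice", "tree", "free"]
rhyme_sound = {"embrace": "eɪs", "suitcase": "eɪs", "sacrifice": "aɪs"}
p_rhyme = rhyme_prior(vocab, rhyme_sound, "eɪs")
```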
We can now use $p_{\text{rhyme}}(w)$ as a prior probability distribution in order to reweight the neural network's standard output probability distribution—according to Equation 11—each time the rhyme scheme demands it:

$p_{\text{out}}(w) = \frac{1}{Z}\,\bigl( p(w_t \mid w_{<t}, S_i) \odot p_{\text{rhyme}}(w) \bigr)$   (11)

where $\odot$ represents pointwise multiplication.2 As we noted before, each verse is generated in reverse; the reweighting of rhyme words is applied at the first step of the decoding process, and the rhyme word is generated first. This prevents the generation of an ill-chosen rhyme word that does not fit well with the rest of the verse.

3.2.2 Topical constraint

For the modeling of topical coherence, we make use of a latent semantic model based on non-negative matrix factorization (NMF; Lee & Seung, 2001). Previous research has shown that non-negative factorization methods are able to induce clear-cut, interpretable topical dimensions (Murphy et al., 2012). As input to the method, we construct a frequency matrix A, which captures co-occurrence frequencies of vocabulary words and context words.3 This matrix is then factorized into two non-negative matrices W and H,

$A_{i \times j} \approx W_{i \times k} H_{k \times j}$   (12)

where $k$ is much smaller than $i, j$ so that both instances and features are expressed in terms of a few components. Non-negative matrix factorization enforces the constraint that all three matrices must be non-negative, so all elements must be greater than or equal to zero. Using the minimization of the Kullback-Leibler divergence as an objective function, we want to find the matrices W and H for which the divergence between A and WH (the multiplication of W and H) is the smallest. The factorization is carried out through the iterative application of update rules. Matrices W and H are randomly initialized, and the rules in 13 and 14 are iteratively applied—alternating between them. In each iteration, each vector is adequately normalized, so that all dimension values sum to 1.

$H_{a\mu} \leftarrow H_{a\mu} \dfrac{\sum_i W_{ia} \frac{A_{i\mu}}{(WH)_{i\mu}}}{\sum_k W_{ka}}$   (13)

$W_{ia} \leftarrow W_{ia} \dfrac{\sum_\mu H_{a\mu} \frac{A_{i\mu}}{(WH)_{i\mu}}}{\sum_v H_{av}}$   (14)

Tables 2 and 3 present a number of example dimensions induced by the model, for both English and French.

dim 13        dim 22     dim 28
sorrow        railway    planets
longing       trains     planet
admiration    rail       cosmic
earnest       station    universe

Table 2: Three example dimensions from the NMF model for English (4 words with highest probability)

dim 1        dim 20    dim 25
tendresse    gare      hypocrisie
joie         bus       mensonge
bonheur      métro     accuser
sourires     rer       hypocrite

Table 3: Three example dimensions from the NMF model for French (4 words with highest probability)

The factorization that comes out of the NMF model can be interpreted probabilistically (Gaussier and Goutte, 2005; Ding et al., 2008): matrix W can be considered as p(w|k), i.e. the probability of a word given a latent dimension k. In order to constrain the network's output to a certain topic, it would be straightforward to simply use p(w|k) as another prior probability distribution applied to each output. Initial experiments, however, indicated that such a blind modification of the output probability distribution for every word of the output sequence is detrimental to syntactic fluency.

1 www.wiktionary.org
2 Such a multiplicative combination of probability distributions is also known as a Product of Experts (Hinton, 2002).
3 The raw frequencies are weighted using pointwise mutual information (Turney and Pantel, 2010).
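For illustration, the topical prior p(w|k) can be obtained by column-normalizing W. The sketch below uses scikit-learn's NMF with a Kullback-Leibler objective as a stand-in for the multiplicative updates in equations 13 and 14 (the paper implements its own update loop); the PPMI-weighted co-occurrence matrix A and the function name are assumptions made for the example.

```python
# Sketch: derive the topical prior p(w | k) from an NMF factorization of a
# non-negative (PPMI-weighted) word-by-context co-occurrence matrix A.
# scikit-learn's multiplicative-update solver stands in for equations 13-14.
import numpy as np
from sklearn.decomposition import NMF

def topical_priors(A, n_topics=100, eps=1e-12):
    nmf = NMF(n_components=n_topics, solver="mu",
              beta_loss="kullback-leibler", init="random", max_iter=300)
    W = nmf.fit_transform(A)                       # shape: (vocab_size, n_topics)
    # Column-normalize W so that each dimension k yields a distribution p(w | k).
    return W / (W.sum(axis=0, keepdims=True) + eps)

# Hypothetical usage: p_topic = topical_priors(A); p_topic[:, 13] would then be
# the prior for one induced dimension (cf. dimension 13 in Table 2).
```

Multiplying such a prior into the output distribution at every decoding step is exactly the blind modification discussed above; the entropy-based gating introduced next avoids that.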
In order to combine syntactic fluency with topical consistency, we therefore condition the weighting of the output probability distribution on the entropy of that distribution: when the output distribution's entropy is low, the neural network is certain of its choice for the next word in order to generate a well-formed sentence, so we will not change it. On the other hand, when the entropy is high, we will modify the distribution by using the topical distribution p(w|k) for a particular latent dimension as prior probability distribution—analogous to Equation 11—in order to inject the desired topic. The entropy threshold, above which the modified distribution is used, is set experimentally. Note that the rhyme constraint and the topical constraint can straightforwardly be combined in order to generate a topical rhyme word, through pairwise multiplication of the three relevant distributions, and subsequent normalization in order to ensure a probability distribution.

3.3 A global optimization framework

The generation of a verse is embedded within a global optimization framework. There are two reasons to integrate the generation of a verse within an optimization procedure. First of all, the generation of a verse is a sampling process, which is subject to chance. The optimization framework allows us to choose the best sample according to the constraints presented above. Secondly, the optimization allows us to define a number of additional criteria that assist in the selection of the best verse. For each final verse, the model generates a considerable number of candidates; each candidate verse is then scored according to the following criteria:

• the log-probability score of the generated verse, according to the encoder-decoder architecture (section 3.1);
• compliance with the rhyme constraint (section 3.2.1); additionally, the extraction of the preceding group of consonants (cf. Table 1) allows us to give a higher score to rhyme words with disparate preceding consonant groups, which elicits more interesting rhymes;
• compliance with the topical constraint (section 3.2.2); the score is modeled as the sum of the probabilities of all words for the defined dimension;
• the optimal number of syllables, modeled as a Gaussian distribution with mean µ and standard deviation σ;4
• the log-probability score of a standard n-gram model.

The score for each criterion is normalized to the interval [0, 1] using min-max normalization, and the harmonic mean of all scores is taken as the final score for each candidate.5 After generation of a predefined number of candidates, we keep the candidate with the highest score, and append it to the poem.

4 We equally experimented with rhythmic constraints based on meter and stress, but initial experiments indicated that the system had a tendency to output very rigid verse. Simple syllable counting tends to yield more interesting variation.
5 The harmonic mean is computed as $\frac{n}{\sum_{i=1}^{n} \frac{1}{x_i}}$; we choose this measure in order to balance the different scores.
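The two mechanisms just described, entropy-gated reweighting and harmonic-mean candidate selection, can be sketched as follows. The helper names are illustrative and not part of the released code; the threshold value of 2.70 and the logarithm base are assumptions here (2.70 anticipates the setting reported in section 4.1).

```python
import numpy as np

def apply_prior_if_uncertain(p_out, prior, entropy_threshold=2.70):
    """Entropy-gated reweighting: leave confident predictions untouched,
    otherwise multiply in the prior (cf. Equation 11) and renormalize.
    Natural logarithm assumed for the entropy."""
    entropy = -np.sum(p_out * np.log(p_out + 1e-12))
    if entropy <= entropy_threshold:
        return p_out
    reweighted = p_out * prior
    return reweighted / reweighted.sum()

def select_best_candidate(criterion_scores):
    """criterion_scores: array of shape (n_candidates, n_criteria).
    Min-max normalize each criterion to [0, 1], score each candidate by the
    harmonic mean of its normalized scores, and return the winning index."""
    s = np.asarray(criterion_scores, dtype=float)
    mins, maxs = s.min(axis=0), s.max(axis=0)
    norm = (s - mins) / (maxs - mins + 1e-12)
    harmonic = norm.shape[1] / np.sum(1.0 / (norm + 1e-12), axis=1)
    return int(np.argmax(harmonic))
```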
The neural architecture is trained on a large corpus of generic web texts, constructed on the basis of the CommonCrawl corpus.6 In order to filter out noise and retain clean, orderly training data, we apply the following filtering steps: • we only keep sentences written in the relevant language; • we only keep sentences of up to 20 words; • we only keep sentences that contain at least one function word from a predefined list—the idea again is to filter out noisy sentences, and only keep well-formed, grammatical ones; we create a list of about 10 highly frequent function words, specific to each language; • of all the sentences that remain after these filtering steps, we only keep the ones that appear successively within a document. Using the filtering steps laid out above, we construct a training corpus of 500 million words for each language. We employ a vocabulary of 15K words (those with highest frequency throughout the corpus); less frequent words are replaced by an <unk> token, the probability of which is set to zero during generation. Both encoder and decoder are made up of two GRU layers with a hidden state of size 2048, and the word embeddings are of size 512. Encoder, decoder, and output embeddings are all shared (Press and Wolf, 2017). Model parameters are optimized using stochastic gradient descent with an initial learning rate of 0.2, which is divided by 4 when the loss does no longer improve on a held-out validation set. We use a batch size of 64, and we apply gradient clipping. The neural architecture has been implemented using PyTorch (Paszke et al., 2017), with substantial reliance on the OpenNMT module (Klein et al., 2017). For the application of the topical constraint, we use an entropy threshold of 2.70. The n-gram model is a standard KneserNey smoothed trigram model implemented using KenLM (Heafield, 2011), and the NMF model is factorized to 100 dimensions. Both the n-gram 6commoncrawl.org model and the NMF model are trained on a large, 10 billion word corpus, equally constructed from web texts without any filtering steps except for language identification. For syllable length, we use µ = 12, σ = 2. We generate about 2000 candidates for each verse, according to a fixed rhyme scheme (ABAB CDCD). Note that no human selection whatsoever has been applied to the poems used in the evaluation; all poems have been generated in a single run, without cherry picking the best examples. Four representative examples of poems generated by the system are given in Figure 2. 4.2 Evaluation procedure Quantitatively evaluating creativity is far from straightforward, and this is no less true for creative artefacts that are automatically generated. Automatic evaluation measures that compute the overlap of system output with gold reference texts (such as BLEU or ROUGE), and which might be used for the evaluation of standard generation tasks, are of little use when it comes to creative language generation. The majority of research into creative language generation therefore makes use of some form of human evaluation, even though one needs to keep in mind that the evaluation of textual creativity is an inherently subjective task, especially with regard to poetic value. For a discussion of the subject, see Gonçalo Oliveira (2017). We adopt the evaluation framework by Zhang and Lapata (2014), in which human annotators are asked to evaluate poems on a five point scale with regard to a number of characteristics, viz. • fluency: is the poem grammatical and syntactically well-formed? 
• coherence: is the poem thematically structured? • meaningfulness: does the poem convey a meaningful message to the reader? • poeticness: does the text display the features of a poem? Additionally, we ask annotators to judge if the poem is written by a human or a computer. In total, we evaluate four different sets of poems, yielded by different model instantiations. The different sets of poems considered during evaluation are: 2477 At the moment it seems almost impossible Yet life is neither good nor evil The divine mind and soul is immortal In other words, the soul is never ill So far, it has barely lost its youthful look But no man is ever too young for the rest He thought deeply, and yet his heart shook At that moment he seemed utterly possessed ~ Malgré mon enthousiasme, le chagrin s’allonge Le bonheur est toujours superbe Toi, tu es un merveilleux songe Je te vois rêver de bonheur dans l’herbe Tu trouveras le bonheur de tes rêves Je t’aime comme tout le monde Je t’aime mon amour, je me lève Je ressens pour toi une joie profonde ~ The moon represents unity and brotherhood The earth stands in awe and disbelief Other planets orbit the earth as they should The universe is infinite and brief The sky has been so bright and beautiful so far See the moon shining through the cosmic flame See the stars in the depths of the earth you are The planet the planet we can all see the same Rien ne prouve qu’il s’indigne Dans le cas contraire, ce n’est pas grave Si la vérité est fausse, c’est très mauvais signe Il est vrai que les gens le savent Et cela est faux, mais qu’importe En fait, le mensonge, c’est l’effroi La négation de l’homme en quelque sorte Le tort n’est pas de penser cela, il est magistrat Figure 2: Four representative examples of poems generated by the system; the left-hand poems, in English, are respectively generated using dimensions 13 and 28 (cf. Table 2); the right-hand poems, in French, are generated using dimensions 1 and 25 (cf. Table 3). • rnn: poems generated by the neural architecture defined in section 3.1, without any added constraints; • rhyme: poems generated by the neural architecture, augmented with the rhyme constraint; • nmfrand: poems generated by the neural architecture, augmented with both the rhyme constraint and the topical constraint, where one of the automatically induced NMF dimensions is selected randomly; • nmfspec: poems generated by the neural architecture, augmented with both the rhyme constraint and the topical constraint, where one of the automatically induced NMF dimensions is specified manually.7 For a proper comparison of our system, we equally include: • random: poems yielded by a baseline model where, for each verse, we select a random sentence (that contains between 7 and 15 words) from a large corpus; the idea is that the lines selected by the baseline model should be fairly fluent (as they come from an actual corpus), but lacking in coherence (due to their random selection); 7This can be regarded as manually defining the theme of the generated poem. The specified dimension is selected for its poetic character. • human: poems written by human poets; the scores on this set of poems function as an upper bound; • Hafez and Deep-speare: poems generated by two state of the art poetry generation systems for English, respectively by Ghazvininejad et al. (2016) and Lau et al. 
(2018); we use the code made available by the respective authors.8 Note that we only compare to other poetry generation systems for English, as no other readily available systems exist for French.

4.3 Results for English

For English, 22 annotators evaluated 40 poems in total (5 poems for each of the different sets considered in the evaluation; each poem was evaluated by at least 4 annotators). The annotators consist of native speakers of English, as well as master students in English linguistics and literature. For the human set, we select five poems by well-established English poets that follow the same rhyme scheme as the generated ones.9 For nmfspec, we select dimension 13 of Table 2. The results of the evaluation for English are presented in the upper part of Table 4.

English
model        fluency  coherence  meaningfulness  poeticness  written by human (%)
rnn          2.95     2.50       2.45            2.55        0.18
rhyme        3.41     2.77       2.82            2.95        0.59
nmfrand      3.32     3.09       2.86            2.95        0.32
nmfspec      3.64     3.41       3.27            3.86        0.55
random       2.68     2.09       1.91            2.41        0.14
Deep-speare  2.11     2.00       2.00            3.00        0.22
Hafez        3.44     3.11       3.11            3.50        0.53
human        3.73     3.73       3.68            4.00        0.73

French
model        fluency  coherence  meaningfulness  poeticness  written by human (%)
rnn          3.45     2.73       2.59            2.55        0.27
rhyme        3.82     2.55       2.18            3.23        0.14
nmfrand      3.64     3.32       3.09            2.86        0.27
nmfspec      3.82     3.82       3.55            3.95        0.45
random       2.95     1.86       1.68            2.18        0.00
human        4.59     4.59       4.50            4.81        0.95

Table 4: Results of the human evaluation (mean score of all annotators) for English and French; values in bold indicate best performance of all generation models.

First of all, note that all our model instantiations score better than the random baseline model, even with regard to grammatical fluency. The good scores on fluency for the constrained models indicate that the applied constraints do not disrupt the grammaticality of the generated verse (rhyme is significantly different10 with p < 0.05; nmfrand and nmfspec with p < 0.01; recall that the baseline consists of actual sentences from a corpus). Secondly, we note that the rhyme constraint seems to improve poeticness (though not significantly), while the topical constraint seems to improve both coherence (p < 0.01 for nmfspec) and meaningfulness (not significantly). Interestingly, a large proportion of the poems produced by the rhyme model are labeled as human, even though the other scores are fairly low. The score for poeticness is considerably higher (p < 0.01) for nmfspec (with a manually specified theme selected for its poeticness) than for nmfrand (with a randomly selected topic, which will often be more mundane). And the best scores on all criteria are obtained with the nmfspec model, for which more than half of the poems are judged to be written by a human; moreover, the difference between nmfspec and human poetry is not significant. Finally, our poetry generation compares favourably to previous work: nmfspec scores markedly and significantly better than Deep-speare (which does not differ significantly from the random baseline), and it equally attains better scores than Hafez on all four criteria (though not significantly so).

8 Hafez needs to be initialized with user-defined topics; for a fair comparison, we seed the system with the top words of the NMF dimension used for our best performing model.
9 The selected poets are W.H. Auden, E.E. Cummings, Philip Larkin, Sarojini Naidu, and Sylvia Plath.
10 Significance testing is carried out using a two-tailed permutation test.
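Footnote 10 refers to a two-tailed permutation test; below is one plausible way to run such a test on the ratings of two systems. The exact protocol (for instance, how annotator ratings per poem are aggregated) is not specified in the paper, so this is only an illustrative sketch.

```python
import numpy as np

def permutation_test(scores_a, scores_b, n_permutations=10000, seed=0):
    """Two-tailed permutation test on the difference of means between two
    sets of ratings (one plausible instantiation of footnote 10)."""
    rng = np.random.default_rng(seed)
    a, b = np.asarray(scores_a, float), np.asarray(scores_b, float)
    observed = abs(a.mean() - b.mean())
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        diff = abs(pooled[:len(a)].mean() - pooled[len(a):].mean())
        count += diff >= observed
    return (count + 1) / (n_permutations + 1)  # smoothed p-value
```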
4.4 Results for French

The setup of the French evaluation is analogous to the English one: an equal number of 22 annotators have evaluated a total of 30 poems (5 poems for each of the six sets considered in the evaluation; each poem was evaluated by at least 4 annotators). The annotators are all native speakers of French. For the human poems, we select five poems with the same rhyme scheme as the generated ones, among the highest ranked ones on short-edition.com—a website with submissions by amateur poets. For nmfspec, we select dimension 1 of Table 3. The results for French are presented in the lower part of Table 4.

Generally speaking, we see that the results for French confirm those for English. First of all, all model instantiations obtain better scores than the random baseline model, even with regard to fluency (p < 0.01), again confirming that the application of the rhyme constraint and topical constraint are not detrimental to the grammaticality of the verse. Secondly, the rhyme constraint significantly improves the score for poeticness (p < 0.05 compared to rnn), while the topical constraint improves both coherence (p < 0.05) and meaningfulness (p < 0.01). Contrary to the English results, only a small proportion of poems from the rhyme model are thought to be human. We do again see that the score for poeticness is considerably higher (p < 0.01) for
In order to facilitate reproduction of the results and encourage further research, the poetry generation system is made available as open source software. The current version can be downloaded at https://github.com/timvdc/poetry. Acknowledgements This work is supported by a grant overseen by the French National Research Agency ANR (project QUANTUM – ANR-19-CE23-0025); it has equally benefited from a GPU donated by NVIDIA Corporation. References Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724– 1734. Association for Computational Linguistics. Chris Ding, Tao Li, and Wei Peng. 2008. On the equivalence between non-negative matrix factorization and probabilistic latent semantic indexing. Computational Statistics & Data Analysis, 52(8):3913–3927. Eric Gaussier and Cyril Goutte. 2005. Relation between plsa and nmf and implications. In Proceedings of the 28th annual international ACM SIGIR conference on Research and development in information retrieval, pages 601–602. ACM. Pablo Gervás. 2001. An expert system for the composition of formal spanish poetry. In Applications and Innovations in Intelligent Systems VIII, pages 19–32, London. Springer. Marjan Ghazvininejad, Xing Shi, Yejin Choi, and Kevin Knight. 2016. Generating topical poetry. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1183–1191, Austin, Texas. Association for Computational Linguistics. Hugo Gonçalo Oliveira. 2012. Poetryme: a versatile platform for poetry generation. Computational Creativity, Concept Invention, and General Intelligence, 1:21. Hugo Gonçalo Oliveira. 2017. A survey on intelligent poetry generation: Languages, features, techniques, reutilisation and evaluation. In Proceedings of the 10th International Conference on Natural Language Generation, pages 11–20. Erica Greene, Tugba Bodrumlu, and Kevin Knight. 2010. Automatic analysis of rhythmic poetry with applications to generation and translation. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 524– 533. Association for Computational Linguistics. Kenneth Heafield. 2011. KenLM: faster and smaller language model queries. In Proceedings of the 2480 EMNLP 2011 Sixth Workshop on Statistical Machine Translation, pages 187–197, Edinburgh, Scotland, United Kingdom. Geoffrey E Hinton. 2002. Training products of experts by minimizing contrastive divergence. Neural computation, 14(8):1771–1800. Jack Hopkins and Douwe Kiela. 2017. Automatically generating rhythmic verse with neural networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 168–178. Association for Computational Linguistics. Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander Rush. 2017. Opennmt: Opensource toolkit for neural machine translation. In Proceedings of ACL 2017, System Demonstrations, pages 67–72. Association for Computational Linguistics. Jey Han Lau, Trevor Cohn, Timothy Baldwin, Julian Brooke, and Adam Hammond. 2018. Deep-speare: A joint neural model of poetic language, meter and rhyme. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1948–1958. 
Association for Computational Linguistics. Daniel D Lee and H Sebastian Seung. 2001. Algorithms for non-negative matrix factorization. In Advances in neural information processing systems, pages 556–562. Zhiqiang Liu, Zuohui Fu, Jie Cao, Gerard de Melo, Yik-Cheung Tam, Cheng Niu, and Jie Zhou. 2019. Rhetorically controlled encoder-decoder for modern Chinese poetry generation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1992–2001, Florence, Italy. Association for Computational Linguistics. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412–1421. Association for Computational Linguistics. Ruli Manurung, Graeme Ritchie, and Henry Thompson. 2012. Using genetic algorithms to create meaningful poetic text. Journal of Experimental & Theoretical Artificial Intelligence, 24(1):43–64. Brian Murphy, Partha Talukdar, and Tom Mitchell. 2012. Learning effective and interpretable semantic models using non-negative sparse embedding. In Proceedings of COLING 2012, pages 1933–1950. The COLING 2012 Organizing Committee. Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in pytorch. In Advances in Neural Information Processing Systems. Ofir Press and Lior Wolf. 2017. Using the output embedding to improve language models. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 157–163. Association for Computational Linguistics. Iulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2017. A hierarchical latent variable encoder-decoder model for generating dialogues. In Thirty-First AAAI Conference on Artificial Intelligence. Peter D Turney and Patrick Pantel. 2010. From frequency to meaning: Vector space models of semantics. Journal of artificial intelligence research, 37:141–188. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Tony Veale. 2013. Less rhyme, more reason: Knowledge-based poetry generation with feeling, insight and wit. In Proceedings of the international conference on computational creativity, pages 152– 159. Qixin Wang, Tianyi Luo, Dong Wang, and Chao Xing. 2016. Chinese song iambics generation with neural attention-based model. In Proceedings of International Joint Conference on Artificial Intelligence, pages 2943–2949. Rui Yan. 2016. i, poet: Automatic poetry composition through recurrent neural networks with iterative polishing schema. In Proceedings of International Joint Conference on Artificial Intelligence, pages 2238–2244. Cheng Yang, Maosong Sun, Xiaoyuan Yi, and Wenhao Li. 2018. Stylistic Chinese poetry generation via unsupervised style disentanglement. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3960–3969, Brussels, Belgium. Association for Computational Linguistics. Xiaoyuan Yi, Maosong Sun, Ruoyu Li, and Wenhao Li. 2018. Automatic poetry generation with mutual reinforcement learning. 
In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3143–3153, Brussels, Belgium. Association for Computational Linguistics. Xingxing Zhang and Mirella Lapata. 2014. Chinese poetry generation with recurrent neural networks. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 670–680. Association for Computational Linguistics.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2481–2491 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2481 Bridging the Structural Gap Between Encoding and Decoding for Data-To-Text Generation Chao Zhao†, Marilyn Walker‡ and Snigdha Chaturvedi† † Department of Computer Science, University of North Carolina at Chapel Hill ‡ Natural Language and Dialog Systems Lab, University of California, Santa Cruz {zhaochao, snigdha}@cs.unc.edu [email protected] Abstract Generating sequential natural language descriptions from graph-structured data (e.g., knowledge graph) is challenging, partly because of the structural differences between the input graph and the output text. Hence, popular sequence-to-sequence models, which require serialized input, are not a natural fit for this task. Graph neural networks, on the other hand, can better encode the input graph but broaden the structural gap between the encoder and decoder, making faithful generation difficult. To narrow this gap, we propose DUALENC, a dual encoding model that can not only incorporate the graph structure, but can also cater to the linear structure of the output text. Empirical comparisons with strong single-encoder baselines demonstrate that dual encoding can significantly improve the quality of the generated text. 1 Introduction Data-to-text generation aims to create natural language text to describe the input data (Reiter and Dale, 2000). Here we focus on structured text input in a particular form such as a tree or a graph. Figure 1 shows an example where the input data is a mini knowledge graph, and the output text is its corresponding natural language description. Generating text from such data is helpful for many NLP tasks, such as question answering and dialogue (He et al., 2017; Liu et al., 2018; Moon et al., 2019). During generation, the structure of the data as well as the content inside the structure jointly determine the generated text. For example, the direction of the edge “capital” in Figure 1 determines that “London is the capital of U.K.” is an accurate description, but not vice versa. Current generation methods are based on sequence-to-sequence (Seq2Seq) encoder-decoder architecture (Sutskever et al., 2014), which requires the input data to be Figure 1: Illustration of the WebNLG challenge: the source data is an RDF graph and the target output is a text description of the graph. serialized as a sequence, resulting in a loss of structural information. Recent research has shown the utility of incorporating structural information during generation. By replacing the sequential encoder with a structureaware graph encoder, such as a graph convolutional network (GCNs) (Kipf and Welling, 2017) or graph-state LSTMs (Song et al., 2018), the resulting graph-to-sequence (Graph2Seq) methods can encode the structural information of the input and thus outperform Seq2Seq models on certain tasks. However, these architectures broaden the structural gap between the encoder and decoder. That is, while the encoder receives the input data as a graph, the decoder has to create the output text as a linear chain structure. This structural gap increases the difficulty of establishing alignments between source and target, which is believed to play a key role in text generation. 
For example, in machine translation, pre-reordering the source words into a word order that is close to that of the target sentence can yield significant improvements in translation quality (Bisazza and Federico, 2016). This suggests a need for an intermediate “planning” stage (Reiter 2482 and Dale, 2000; Puduppully et al., 2019) to help with organizing the output. In this work, we present a dual encoding model that is not only aware of the input graph structure but also incorporates a content planning stage. To encode the structural information in the input graph, we use a GCN based graph encoder. To narrow the ensuing structural gap, we use another GCN-based neural planner to create a sequential content plan of this graph, which is represented as a re-ordered sequence of its nodes. The plan is then encoded by an LSTM based sequential encoder. During generation, an LSTM based decoder simultaneously conditions on the two encoders, which helps it in capturing both the graph structure of the input data and the linear structure of the plan. We expect such a dual encoding (DUALENC) structure can integrate the advantages of both graph and sequential encoders while narrowing the structural gap present in single-encoder methods. We evaluate the proposed planning and generation models on the WebNLG dataset (Colin et al., 2016; Gardent et al., 2017) – a widely used benchmark for data-to-text generation. Experimental results show that our neural planner achieves a 15% absolute improvement on accuracy compared to the previous best planning method. Furthermore, DUALENC significantly outperforms the previous start-of-the-art on the generation task. The human evaluation confirms that the texts generated by our model are preferred over strong baselines. The contributions of this paper are three-fold: • We propose a dual encoding method to narrow the structural gap between data encoder and text decoder for data-to-text generation; • We propose a neural planner, which is more efficient and effective than previous methods; • Experiments show that our method outperforms all baselines on a variety of measures. 2 Related Work This work is inspired by two lines of research: Seq2Seq generation and Graph2Seq generation. 2.1 Seq2Seq Generation Traditional data-to-text generation follows a planning and realization pipeline (Reiter and Dale, 2000; Stent et al., 2004). More recent methods use Seq2Seq architecture (Sutskever et al., 2014) to combine planning and realization into an end-toend network and have achieved the state-of-the-art on a variety of generation tasks (Lebret et al., 2016; Trisedya et al., 2018; Juraska et al., 2018; Reed et al., 2018). Despite the fair fluency and grammatical correctness, the generated text suffers from several problems such as repetition, omission, and unfaithfulness, which are less likely to happen in traditional planning-and-realization frameworks. Recent work has shown that neural models can also benefit from an explicit planning step to alleviate the above-mentioned problems. The input of these planners ranges from unstructured keyphrases (Hua and Wang, 2019) to structured tables (Puduppully et al., 2019) and graphs (Ferreira et al., 2019; Moryossef et al., 2019a). Our work also focuses on planning from graph data. Compared with previous methods, we show that our neural planning method is more feasible and accurate. 
More importantly, rather than serializing the planning and realization stages in a pipeline, our dual encoding method simultaneously captures information from the original data and the corresponding plan. 2.2 Graph2Seq Generation Graph neural networks (GNN) (Scarselli et al., 2009) aim to learn a latent state representation for each node in a graph by aggregating local information from its neighbors and the connected edges. Previous work has explored different ways of aggregating this local information, such as in GCNs (Kipf and Welling, 2017), gated graph neural networks (GGNNs) (Li et al., 2016), and Graph attention networks (GANs) (Veliˇckovi´c et al., 2018) Several works have applied GNNs instead of Seq2Seq models for text generation (Beck et al., 2018; Marcheggiani and Perez-Beltrachini, 2018; Guo et al., 2019; Li et al., 2019), and some of them outperform Seq2Seq models. However, Damonte and Cohen (2019) use both types of encoders and show that GCN can help LSTM capture reentrant structures and long-range dependencies, albeit on a different problem than ours. Our method also uses the two types of encoders but instead of using one to assist the other, it combines them simultaneously to capture their complementary effects. 3 Problem Statement In this work we focus on text generation from RDF data.1 The input for this task is a set of RDF triples, where each triple (s, p, o) contains a subject, a predicate, and an object. For example, (“U.K.”, “cap1https://www.w3.org/TR/rdf-concepts/ 2483 Figure 2: The architecture of the proposed DUALENC model. The input triples are converted as a graph and then fed to two GCN encoders for plan and text generation (Planner and Graph Encoder, top center). The plan is then encoded by an LSTM network (Plan Encoder, bottom center). Finally an LSTM decoder combines the hidden states from both the encoders to generate the text (Text Decoder, middle right). ital”, “London”) is a RDF triple. The output is a natural language text with one or more sentences to describe the facts represented by this graph. Figure 1 shows an example of this task. 4 Dual Encoding Model For a given input RDF graph, the aim of our method is not only to capture its structural information, but also to facilitate the information alignment between the input and output. The first goal can be achieved by employing a GCN encoder. To achieve the second goal, we first serialize and re-order the nodes of the graph as an intermediate plan using another GCN, and then feed the plan into an LSTM encoder. Finally, an LSTM decoder is used to generate the output by incorporating the context representations of both encoders. Notice that the graph and the plan are dual representations of the same input data. We encode them with two independent encoders, which can provide complementary information for decoding. The architecture of our dual encoding method is shown in Figure 2. We describe the two encoders and the decoder in the following three subsections. 4.1 Graph Representation and Encoding To make it easier for GCNs to encode information from both entities and predicates, we reconstruct the input graph by regarding both entities and predicates as nodes, which is different from Figure 1. Formally, for each RDF triple (s, p, o), we regard the s, p, and o as three kinds of nodes. s and o are identified by their entity mentions, and p is identified by a unique ID. That is, two entities from different triples that have the same mentions will be regarded as the same node. 
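As a concrete illustration of this graph construction, the sketch below converts a small set of RDF triples into nodes and typed edges, merging entities that share a mention while creating a fresh node for every predicate occurrence (the treatment of predicates and the edge structure are spelled out in the following paragraphs). The data structures and function name here are illustrative assumptions, not the paper's implementation.

def build_rdf_graph(triples):
    # Convert (subject, predicate, object) triples into an entity/predicate
    # node list plus typed, directed edges (s->p, p->s, o->p, p->o, self-loops).
    entity_ids, nodes, edges = {}, [], []

    def entity_node(mention):
        # Entities with the same mention map to a single shared node.
        if mention not in entity_ids:
            entity_ids[mention] = len(nodes)
            nodes.append(("entity", mention))
        return entity_ids[mention]

    for s, p, o in triples:
        s_id, o_id = entity_node(s), entity_node(o)
        p_id = len(nodes)                       # one predicate node per triple
        nodes.append(("predicate", p))
        edges += [(s_id, p_id, "s->p"), (p_id, s_id, "p->s"),
                  (o_id, p_id, "o->p"), (p_id, o_id, "p->o")]

    edges += [(i, i, "self") for i in range(len(nodes))]
    return nodes, edges

triples = [("Aston Martin V8", "assembly", "United Kingdom"),
           ("United Kingdom", "capital", "London")]
nodes, edges = build_rdf_graph(triples)  # "United Kingdom" becomes one shared node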
However, since we want to use predicates to distinguish between different triples, two predicates with the same mentions will be regarded as separate nodes (for example, the 'capital' predicates in (U.K., capital, London) and (U.S., capital, Washington D.C.) are different nodes).

Figure 3: The graph obtained from an RDF triple. We use the same edge structure as Beck et al. (2018).

As Figure 3 shows, a triple contains four directed edges to connect its nodes: s → p, p → s, o → p, and p → o. These edges help in information exchange between arbitrary neighbor pairs. There is also a special self-loop edge n → n for each node n to enable information flow between adjacent iterations during feature aggregation. After building the graph G = (V, E) from the RDF data, we use a relational GCN (R-GCN) (Schlichtkrull et al., 2018) to encode the graph and learn a state representation h_v ∈ R^d for each node v ∈ V using the following iterative method:

h_v^{(t)} = \rho\left( \sum_{r \in R} \sum_{u \in N_v^r} \frac{1}{c_{v,r}} W_r h_u^{(t-1)} + b_r \right)    (1)

where h_v^{(0)} = x_v is the input embedding of the node v, and h_v^{(t)} is its hidden state at time-step t. We use the average embedding of the node mentions as x_v. R is the set of all possible edge types, and N_v^r is the set of in-neighbors of node v with the edge type r. W_r and b_r are parameters for each edge type, which make the message transformations relation-specific. c_{v,r} = 1/|N_v^r| is a normalization term and \rho(\cdot) is an activation function.

Figure 4: The sequential decision-making process of the planning stage.

4.2 Planning Creation and Encoding In the planning stage, we determine the content plan or order of triples (identified by their predicates) for text realization. For example, the content plan for the text in Figure 1 is: "assembly → capital → successor → manufacturer" (here we only consider the order of triples; future work could explore the ordering of subjects and/or objects). Learning a plan can be naturally regarded as a sequential decision-making process. That is, given a set of triples, we first determine which triple to mention/visit first, and then select the second triple from the remaining triples that have not been visited so far. This process continues until all the triples have been visited. During each decision step, the selection of the next triple can be regarded as a classification task, where the output space is all the remaining unvisited triples. Figure 4 shows how our model implements this process. We first utilize the GCN encoder described in Section 4.1 to get the state representation of each node. However, while obtaining a predicate's representation, we concatenate two extra bits to the input feature X_t: one indicates whether or not the predicate has been visited, and the other indicates the last predicate that has been visited. After the encoding, we get the final hidden state h_{r_i} = h_{r_i}^{(T)} for each predicate r_i ∈ R as its representation, and calculate its probability of being selected as

P(r_i) = \mathrm{softmax}\left( h_{r_i}^{\top} W \bar{h}_R \right)    (2)

where \bar{h}_R is the average pooling of all the predicate embeddings. To obtain a plan, we select the predicate with the highest probability, append it onto the plan sequence, and then repeat the above process until all the predicates have been visited. After determining an order of input predicates, we complete the plan's triples by adding the corresponding subjects and objects.
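The greedy decoding loop for plan generation described above can be sketched as follows. The encoder is abstracted as a callable that returns one state vector per predicate given the current visited/last-visited indicator bits (an assumed interface standing in for the GCN of Section 4.1), and the softmax of Equation (2) is dropped since only the argmax is needed.

import numpy as np

def greedy_plan(predicates, encode_predicates, W):
    # predicates: list of predicate identifiers for the input triple-set.
    # encode_predicates: callable(visited, last) -> array (num_predicates, d)
    #   of predicate-node states, re-encoded after every selection.
    # W: (d, d) parameter matrix from Equation (2).
    visited = [False] * len(predicates)
    last, plan = None, []
    for _ in range(len(predicates)):
        h = encode_predicates(visited, last)     # predicate representations
        h_bar = h.mean(axis=0)                   # average-pooled predicate state
        scores = h @ W @ h_bar                   # Equation (2) before the softmax
        scores[np.array(visited)] = -np.inf      # restrict to unvisited predicates
        nxt = int(np.argmax(scores))
        plan.append(predicates[nxt])
        visited[nxt], last = True, nxt
    return plan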
To better help the plan encoder (described below) capture the semantic roles of each entity and predicate, we add special tokens before Subjects, Predicates, and Objects as delimiters. For example, the plan of the example in Figure 1 will be:

<S> Aston Martin V8 <P> assembly <O> United Kingdom <S> United Kingdom <P> capital <O> London <S> Aston Martin V8 <P> successor <O> Aston Martin Virage <S> Aston Martin Virage <P> manufacturer <O> Aston Martin

Finally, we use an LSTM to encode the plan obtained above. We choose LSTM because it excels at capturing sequential information. 4.3 Decoding During decoding, we adopt an LSTM-based decoder with an attention and copy mechanism. Since we have two representations of the input triple-set (the original graph and the serialized plan), we adopt two strategies for inputting context to the decoder. The first strategy is to use only the hidden states of the plan encoder as context. We refer to this strategy as PLANENC. While the serialized plan may contain some structural information, it cannot preserve all the information of the original graph. We therefore propose a second strategy, DUALENC, to incorporate the information from both the graph and the plan. More concretely, when calculating the context state m_t of the LSTM decoder at time step t, we concatenate the previous hidden state z_{t-1} and the two context vectors c_t^1 and c_t^2, and then update the current hidden state z_t as:

m_t = \mathrm{MLP}([z_{t-1}; c_t^1; c_t^2]),    (3)
z_t = \mathrm{LSTM}(z_{t-1}, [y_{t-1}; m_t]),    (4)

where c_t^1 and c_t^2 are the attention-based weighted sums of the context memories from the GCN and RNN encoders, respectively, and y_{t-1} is the embedding of the previously generated token. The initial hidden state z_0 is the summation of the final states from the two encoders. For the plan encoder, we use the final state H_T of the LSTM as the context representation. For the graph encoder, we use an average of all the hidden states followed by a two-layer perceptron to produce the final state. 5 Experiments We conduct experiments to evaluate our Planner (Section 5.2) and the overall generation system (Section 5.3). 5.1 Dataset We conduct experiments on the WebNLG dataset (Gardent et al., 2017; Castro Ferreira et al., 2018) used in the WebNLG challenge. For each instance, the input is a set of up to 7 RDF triples from DBPedia, and the output is their text descriptions. Each triple-set is paired with a set of (up to three) human-generated reference texts. Each reference is also paired with the order of triples it realized. We use them to train and evaluate our Planner. Overall, the dataset contains 9,674 unique triple-sets and 25,298 text references, and is divided into training, development, and test sets. The test set contains two subsets: the SEEN part, where the instances belong to one of the nine domains that are seen in the training and development sets (such as Astronaut and Food), and the UNSEEN part, where the instances are from the other five unseen domains. The UNSEEN part is designed to evaluate models' generalizability to out-of-domain instances. 5.2 Experiments on Plan Generation As previous work suggests, planning plays a crucial role in text generation. We therefore first investigate the performance of our planner. 5.2.1 Setup During the graph encoding, we initialize the node embeddings with 100-dimensional random vectors. Our GCN model has two layers, with the hidden size of each layer as 100. The activation function is ReLU (Nair and Hinton, 2010).
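Returning to Section 4.3, a minimal sketch of the dual-context decoder step in Equations (3)-(4) is given below; it assumes the two attention context vectors have already been computed, and it omits the attention and copy mechanisms, so the module layout and dimensions are illustrative rather than the exact implementation.

import torch
import torch.nn as nn

class DualContextDecoderStep(nn.Module):
    # One decoding step conditioned on both the graph and plan encoders.
    def __init__(self, emb_dim, hid_dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3 * hid_dim, hid_dim), nn.Tanh())
        self.cell = nn.LSTMCell(emb_dim + hid_dim, hid_dim)

    def forward(self, y_prev, state, c_graph, c_plan):
        # y_prev: (B, emb_dim) embedding of the previously generated token.
        # state: (z_prev, cell_prev), each of shape (B, hid_dim).
        # c_graph, c_plan: (B, hid_dim) attention contexts over the two encoders.
        z_prev, cell_prev = state
        m_t = self.mlp(torch.cat([z_prev, c_graph, c_plan], dim=-1))     # Eq. (3)
        z_t, cell_t = self.cell(torch.cat([y_prev, m_t], dim=-1),
                                (z_prev, cell_prev))                     # Eq. (4)
        return z_t, (z_t, cell_t)

For PLANENC, the graph context would simply be dropped from the concatenation (with the MLP input size reduced accordingly).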
We optimize the training objective using Adam (Kingma and Ba, 2015) with a learning rate of 0.001 and an early stopping on the development set. The batch size is 100. We compare our results with the following six baseline planners: • Random: returns a random permutation of the input triples as a plan; • Structure-Random: returns a random traversal over the input graph. We report the highest score among three random strategies: random walk, random BFS, and random DFS; 4Code is available on https://github.com/ zhaochaocs/DualEnc 5http://webnlg.loria.fr/pages/index. html • Step-By-Step (Moryossef et al., 2019a): a transition-based statistical ranking method; • Step-By-Step II (Moryossef et al., 2019b): a DFS-based method with a neural controller; • GRU & Transformer (Ferreira et al., 2019): two neural Seq2Seq methods with attention; We report the performance on three test sets: SEEN, UNSEEN, and ALL (SEEN & UNSEEN). We remove all one-triple instances for planner’s evaluation since the planning for these instances is trivial. Results are evaluated with accuracy and BLEU-n (Papineni et al., 2002). For accuracy, we regard a plan as correct only if it exactly matches one of the human-generated plans. BLEU-n is more forgiving than accuracy. It is also adopted in Yao et al. (2019) for plan evaluation. Here we choose n = 2. 5.2.2 Results Table 1 shows results of the planning experiments. Our GCN method significantly outperforms all the baselines (approximate randomization (Noreen, 1989; Chinchor, 1992), p < 0.05) by a large margin on all the test sets and both measures, indicating the effectiveness of our planner. The most competitive baseline on ALL and UNSEEN sets is Step-By-Step, but our method is more time-efficient. For example, Step-By-Step needs 250 seconds to solve one 7-triple instance, but our method solves all 4928 instances in less than 10 seconds. For the SEEN set, the most competitive models are GRU and Transformer. However, while their accuracies drop by 0.46 on UNSEEN test set, our method drops only slightly by 0.02, indicating our method’s better generalization power. We believe that this superior generalization capacity comes from the modeling of the graph structure. While the surface forms of triples in UNSEEN set do not overlap with those in the training data, the graph-level structural features are still shared, making it a key factor for generalization. GRU and Transformer linearize the graph as a sequential input, making them miss the structural information and resulting in poorer generalization capacity. Step-By-Step II also considers graph structure, but our model achieves better performance because we use GCN to encode the node representation, which can aggregate richer information from both the graph structure and the surface information. We also investigated the effect of the graph size on the plan quality. In Figure 5, we separate the ALL test set into six subsets according to the size of input triple-sets, to reflect the model’s capacity 2486 Accuracy BLEU-2 SEEN UNSEEN ALL SEEN UNSEEN ALL Random 0.28 0.34 0.31 54.1 62.1 57.9 Structure-random 0.32 0.38 0.34 56.6 62.9 59.5 Transformer (Ferreira et al., 2019) 0.56 0.09 0.34 74.3 20.9 49.3 GRU (Ferreira et al., 2019) 0.56 0.10 0.35 75.8 25.4 52.2 Step-By-Step II (Moryossef et al., 2019b) 0.45 0.44 0.44 67.7 67.3 67.5 Step-By-Step (Moryossef et al., 2019a) 0.49 0.44 0.47 73.2 68.0 70.8 GCN 0.63 0.61 0.62 80.8 79.3 80.1 Table 1: Planning results of three test sets evaluated by accuracy and BLEU-2. 
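The two planning metrics can be computed as sketched below, where a plan is represented as its sequence of predicates; the use of NLTK's sentence-level BLEU with bigram weights for BLEU-2 is an assumption about tooling, not necessarily the paper's evaluation script.

from nltk.translate.bleu_score import sentence_bleu

def plan_accuracy(predicted, references):
    # Correct only if the predicted order exactly matches a human plan.
    return float(any(predicted == ref for ref in references))

def plan_bleu2(predicted, references):
    # BLEU-2 over predicate sequences, more forgiving than exact match.
    return sentence_bleu(references, predicted, weights=(0.5, 0.5))

predicted = ["assembly", "capital", "successor", "manufacturer"]
references = [["assembly", "capital", "successor", "manufacturer"],
              ["capital", "assembly", "successor", "manufacturer"]]
print(plan_accuracy(predicted, references), plan_bleu2(predicted, references))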
2 3 4 5 6 7 0.0 0.2 0.4 0.6 0.8 Accuracy GCN step-by-step step-by step II GRU Transformer 2 3 4 5 6 7 0 20 40 60 80 100 BLEU-2 random_walk random_dfs random_bfs random Triple-set Size Figure 5: Fine-grained planning results for the ALL test set. Our method outperforms all the baselines regardless of the triple size. at a fine-grained level. Fewer input triples make the planning task easier, while the 7-triple case is the most difficult one. The accuracy of seven out of eight baselines drops to around 0 in this case, while our method achieves an accuracy of 0.19. Besides this, our method consistently outperforms all the baselines for all the triple-set sizes. 5.3 Experiments on Text Generation This section investigates the ability of our models to improve the generation quality. 5.3.1 Setup We implement the generator based on the OpenNMT toolkit.6 For the graph encoder, we use a similar setting as above. Since the generation task is more complicated than planning, we increase the dimension of the input and the hidden states to 256. The plan encoder is a 2-layer bidirectional LSTM with the same dimension setting of the GCN to ease the information fusion. During encoding, for UNSEEN test set, we adopt delexicalization (Gardent et al., 2017) to enhance the model’s generalizability to unseen domains. We use Adam with a batch size of 64. The initial learning rate is set to 0.001 and is decayed with a rate of 0.7 after the eighth epoch. We continue the 6https://github.com/OpenNMT/OpenNMT-py training until the perplexity of the development set does not decrease. We also apply dropout on the decoding output layer with a rate of 0.3. The quality of the generated text (as well as those of the baselines) is evaluated through a variety of automatic measures, such as BLEU, METEOR, and TER, which are strictly the same as those applied in the official challenge.7 Following Marcheggiani and Perez-Beltrachini (2018), we report averaged performances over ten runs of the models. We compare our method with the top systems of the WebNLG challenge and published state-of-theart systems. The WebNLG systems are: • ADAPT: a neural system with sub-word representations to deal with rare words and sparsity. • TILB-SMT: a statistical machine translation method using Moses and delexicalization. • MELBOURNE: a Seq2Seq model with enriched delexicalization from DBPedia. The published research models are: • GTR-LSTM (Trisedya et al., 2018): a graphbased triple encoder; • GCN-EC (Marcheggiani and PerezBeltrachini, 2018): a GCN-based triple encoder with glove embedding and copy; • GRU & Transformer (Ferreira et al., 2019): two pipeline methods with 5 sequential steps and GRU or Transformer as the encoder; • STEP-BY-STEP (Moryossef et al., 2019a): a pipeline method that generates the text from plans with OpenNMT and a copy mechanism. 5.3.2 Qualitative Results Table 2 shows the results of the automatic evaluation on the generation task. Our PLANENC achieves the best performance on BLEU and TER, while DUALENC performs best under METEOR. Both PLANENC and DUALENC significantly out7That is why some of the numbers in our table are not exactly the same as those in the cited works. 
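The generation setup of Section 5.3.1 can be summarized as a configuration sketch; the keys below are plain-Python paraphrases of the prose rather than actual OpenNMT-py option names, and the per-epoch application of the 0.7 decay after epoch eight is an assumption about the schedule.

generator_config = {
    "hidden_size": 256,           # input and hidden dimensions of the GCN encoder
    "plan_encoder": {"type": "bilstm", "layers": 2, "hidden_size": 256},
    "optimizer": "adam",
    "batch_size": 64,
    "learning_rate": 1e-3,
    "lr_decay": 0.7,              # applied each epoch after the eighth epoch
    "lr_decay_start_epoch": 8,
    "decoder_output_dropout": 0.3,
    "delexicalize_unseen": True,  # delexicalization for the UNSEEN test set
    "stop_criterion": "dev perplexity no longer decreases",
}

def learning_rate(epoch, cfg=generator_config):
    # Assumed epoch-wise schedule implied by the prose above.
    steps = max(0, epoch - cfg["lr_decay_start_epoch"])
    return cfg["learning_rate"] * cfg["lr_decay"] ** steps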
2487 BLEU (↑) METEOR (↑) TER (↓) SEEN UNSEEN ALL SEEN UNSEEN ALL SEEN UNSEEN ALL TILB-SMT 54.29 29.88 44.28 0.42 0.33 0.38 0.47 0.61 0.53 ADAPT 60.59 10.53 31.06 0.44 0.19 0.31 0.37 1.40 0.84 MELBOURNE 54.52 33.27 45.13 0.41 0.33 0.37 0.40 0.55 0.47 GTR-LSTM (2018) 54.00 29.20 37.10 0.37 0.28 0.31 0.45 0.60 0.55 GCN-EC (2018) 55.90 0.39 0.41 GRU (2019) 56.09 25.12 42.73 0.42 0.22 0.33 0.39 0.64 0.51 Transformer (2019) 56.28 23.04 42.41 0.42 0.21 0.32 0.39 0.63 0.50 Step-By-Step (2019a) 53.30 34.41 47.24 0.44 0.34 0.39 0.47 0.56 0.51 PLANENC 64.42 38.23 52.78 0.45 0.37 0.41 0.33 0.53 0.42 DUALENC 63.45 36.73 51.42 0.46 0.37 0.41 0.34 0.55 0.44 Table 2: Generation results evaluated by BLEU, METEOR, and TER. We compare our methods with different generation systems (SMT, Sequential NMT, Graph NMT, Pipeline). Both of our methods outperform all the baselines on all three measures. We highlight both results if there is no significant difference. perform the previous state-of-the-art (bootstrapping (Koehn and Monz, 2006), p < 0.05). For the SEEN part, while no existing published work performed better than ADAPT, our PLANENC achieves a 3.83 performance gain on BLEU. It also outperforms the single GCN encoder by 8.52 BLEU, which confirms the advantage of the planning stage for bridging the structural gap between the encoder and decoder. For the UNSEEN part, PLANENC and DUALENC improve BLEU by 3.82 and 2.32 compared with the previous state-of-the-art. While it is difficult to distinguish the performance of DUALENC and PLANENC by automatic measures, our human experiments (see Section 5.3.4) show that dual encoding generates better text compared with PLANENC. When comparing with the pipeline methods, one difference from the data perspective is how to obtain the plans of each instance to train the planner. While Step-By-Step uses heuristic string matching to extract plans from the referenced sentences, other methods (GRU and transformer), as well as ours, use plans provided in the enriched WebNLG dataset (Castro Ferreira et al., 2018). However, Step-By-Step reported worse BLEU results on these plans. 5.3.3 Ablation Study To further analyze what factors contribute to the performance gain, we conduct an ablation study by removing the following components: • Copy mechanism: The text is generated without copying from the source; • Triple planning: The input triples are shuffled before feeding into RNN, but the (s, p, o) Methods BLEU (↑) METEOR (↑) TER (↓) PLANENC 64.42 ± 0.17 0.45 ± 0.00 0.33 ± 0.00 -plan 57.81 ± 0.82 0.40 ± 0.00 0.40 ± 0.01 -copy 61.64 ± 0.53 0.43 ± 0.01 0.36 ± 0.01 -mention 61.49 ± 0.35 0.43 ± 0.00 0.36 ± 0.00 -delimiter 63.26 ± 0.33 0.44 ± 0.00 0.34 ± 0.00 Table 3: Results of the ablation study. inside a triple are not shuffled. • Entity mentions: We join the words in a node mention with underlines (e.g., Aston Martin instead of Aston Martin). • Plan delimiter: We concatenate the (s, p, o) without separating them with role delimiters. We conduct the ablation study on the SEEN testset using our PLANENC. Table 3 shows the average performance and standard deviations. Compared with PLANENC, replacing plans with a random sequence of triples hurts the BLEU score by 6.61 points, indicating that the accuracy of planning is essential for the quality of generation. Our planning also makes the model more stable to random seeds (by decreasing the standard deviation from 0.82 to 0.17). Removing the copy mechanism also decreases the BLEU score by 2.78 points. 
It demonstrates the effectiveness of copying words from the source triples rather than generating them from the vocabulary set. Removing the mention information, decreases the BLEU score by 2.93. It reflects two benefits of word mentions: to alleviate data sparsity and to coordinate with the copy mechanism. However, removing delimiters does not affect the BLEU much. Intuitively, we expected the delimiters to 2488 Absolute(%) Pairwise(%) CVGE FAITH CVGE FAITH FLCY ALL MELBOURNE 83.0 75.2 -35.0 -42.5 -38.8 -68.8 STEP 96.1 89.3 5.0 -3.7 -45.0 -55.0 E2E-TRANS 85.5 78.0 -21.2 -32.5 -21.2 -46.3 GCN 79.8 76.8 -48.7 -50.0 -26.3 -67.5 PLANENC 92.3 88.2 -7.5 -12.5 -7.5 -21.2 DUALENC 94.5 91.8 – – – – Table 4: Results of human evaluation. DUALENC outperforms most of the baselines on all measures. help the LSTM capture the boundaries and semantic roles of each node, but the ablation study does not support it. We provide an example in Table 5 to show that the LSTM indeed has trouble learning such semantic roles. 5.3.4 Human Evaluation Automatic measures are based on lexical similarities and are not good measures of text quality in general. We therefore further conduct a human evaluation on Amazon Mechanical Turk to better access the quality of the generated texts. We evaluate the results for MELBOURNE, Step-By-Step, Transformer, GCN, as well as our PLANENC and DUALENC. We randomly select 80 test instances (440 triples in total) with the size of tripleset between 4 to 7, since they are more challenging than those with fewer triples. Then we evaluate the generation quality of each system with the following three measures: • Coverage: the percentage of triples that are covered by the generated text (all < s, p, o > values in the triples are realized); • Faithfulness: the percentage of triples that are faithfully described by the text (the text correctly expresses the predicate and also the subject and object as its arguments. No substitutions or hallucinations); • Fluency: a measure of the fluency or naturalness of the generated text. For coverage and faithfulness, workers are asked to check each triple of an instance, and judge whether the triple is covered and faithfully described by the generated text. For fluency, we ask another group of workers to compare between two outputs of the same instance and identify which one is more fluent. Table 5 shows examples where these qualities are compromised. In Table 4, we report the absolute scores of coverage and faithfulness, which range from 0 to 100%. We also provide pairwise scores of all three measures by comparing the outputs of DUALENC with each of the other five systems. We report the percentage of instances that were judged to be worse/better/same than those of DUALENC, yielding a score ranging from -100% (unanimously worse) to 100% (unanimously better). For example, MELBOURNE performs better/worse/same than DUALENC for 10%/45%/45% of the instances, yielding a pairwise score as 10%-45%=-0.35%. We also report an overall pairwise score combining all three measures. For each instance, the overall score of one output is higher than the other iff it outperforms the other on at least one of the three measures and has a better or equal vote on the other two. Our PLANENC and DUALENC outperform most of the baselines on all of the measures by a large margin (approximate randomization, p < 0.05. ), which is consistent with the automatic results. The only exception is Step-By-Step, which has high Coverage and Faithfulness (not significant). 
It first separates the input triples into smaller subsets and then realizes them separately. This greatly reduces the difficulty of long-term generation but at the expense of Fluency (worst among all the baselines). GCN does not perform well on Coverage, which demonstrates that the structural gap between encoding and decoding indeed makes generation more difficult. However, it has the smallest difference between Coverage and Faithfulness among all the baselines, indicating that the fidelity of generation can benefit from the encoding of graph-level structural information. By combining GCN and PLANENC, our DUALENC incorporates the advantages of both encoders while ameliorating their weaknesses, and therefore achieves the best OVERALL performance on human evaluation. 5.4 Qualitative Analysis Table 5 shows examples of generated texts by various systems for an input of six triples. Colored fonts represent missing, unfaithful, and unfluent information. For example, PLANENC misses “Buzz Aldrin” and also wrongly expresses the subject of “retirement” as “Frank Borman”, indicating that LSTM is less powerful at capturing the semantic roles of entities. This disadvantage can be well complemented by GCN, which is designed to capture the graph structure and the relations between entities. Hence, by incorporating information from 2489 Tripleset (William Anders | birthPlace | British Hong Kong), (William Anders | was a crew member of | Apollo 8), (Apollo 8 | crewMembers | Frank Borman), (Apollo 8 | backup pilot | Buzz Aldrin), (Apollo 8 | operator | NASA), (William Anders | dateOfRetirement | 1969-09-01) MELBOURNE william anders (born in british hong kong) was a crew member of apollo 8’ s apollo 8 8 mission along with buzz aldrin as backup pilot and buzz aldrin on 1969-09-01 . [Frank Borman, NASA] Step-by-Step william anders was a crew member of apollo 8 operated by nasa. apollo 8’ s backup pilot was buzz aldrin and frank borman. william anders was born in british hong kong. william anders retired on september 01st, 1969. PLANENC william anders was born in british hong kong and was a crew member of nasa’ s apollo 8. frank borman was a crew members of apollo 8 and he retired on september 1st, 1969 . [Buzz Aldrin] DUALENC william anders was born in british hong kong and served as a crew member of nasa’ s apollo 8 along with frank borman and backup pilot buzz aldrin. he retired on september 1st, 1969 . Reference william anders was born in british hong kong and served as a crew member on apollo 8 along with frank borman. nasa operated apollo 8, where buzz aldrin was a back up pilot. anders retired on sept 1, 1969 . Table 5: Sample texts generated by our methods and baselines, compared with a human-provided reference. We highlight in different color the [missing], unfaithful, and unfluent parts of each text. Only the results of our DUALENC correctly mention all the input triples. both GCN and LSTM, DUALENC correctly expresses the subject argument of “retirement”. 6 Conclusion This paper proposes DUALENC, a dual encoding method to bridge the structural gap between encoder and decoder for data-to-text generation. We use GCN encoders to capture the structural information of the data, which is essential for accurate planning and faithful generation. We also introduce an intermediate content planning stage to serialize the data and then encode it with an LSTM network. This serialized plan is more compatible with the output sequence, making the information alignment between the input and output easier. 
Experiments on WebNLG dataset demonstrate the effectiveness of our planner and generator by outperforming the previous state-of-the-art by a large margin. Future work will validate the effectiveness of this method on more varied data-to-text generation tasks. References Daniel Beck, Gholamreza Haffari, and Trevor Cohn. 2018. Graph-to-sequence learning using gated graph neural networks. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 273–283. Arianna Bisazza and Marcello Federico. 2016. A survey of word reordering in statistical machine translation: Computational models and language phenomena. Computational Linguistics, 42(2):163–205. Thiago Castro Ferreira, Diego Moussallem, Sander Wubben, and Emiel Krahmer. 2018. Enriching the webnlg corpus. In Proceedings of the 11th International Conference on Natural Language Generation, INLG’18, Tilburg, The Netherlands. Association for Computational Linguistics. Nancy Chinchor. 1992. The statistical significance of the muc-4 results. In Proceedings of the 4th conference on Message understanding, pages 30–50. Association for Computational Linguistics. Emilie Colin, Claire Gardent, Yassine M’rabet, Shashi Narayan, and Laura Perez-Beltrachini. 2016. The webnlg challenge: Generating text from dbpedia data. In Proceedings of the 9th International Natural Language Generation conference, pages 163– 167. Marco Damonte and Shay B Cohen. 2019. Structural neural encoders for amr-to-text generation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3649–3658. Thiago Castro Ferreira, Chris van der Lee, Emiel van Miltenburg, and Emiel Krahmer. 2019. Neural datato-text generation: A comparison between pipeline and end-to-end architectures. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 552–562. Claire Gardent, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. 2017. The webnlg challenge: Generating text from rdf data. In Proceedings of the 10th International Conference on Natural Language Generation, pages 124–133. Zhijiang Guo, Yan Zhang, Zhiyang Teng, and Wei Lu. 2019. Densely connected graph convolutional networks for graph-to-sequence learning. Transactions of the Association for Computational Linguistics, 7:297–312. 2490 Shizhu He, Cao Liu, Kang Liu, and Jun Zhao. 2017. Generating natural answers by incorporating copying and retrieving mechanisms in sequence-tosequence learning. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 199– 208. Xinyu Hua and Lu Wang. 2019. Sentence-level content planning and style specification for neural text generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 591–602. Juraj Juraska, Panagiotis Karagiannis, Kevin Bowden, and Marilyn Walker. 2018. A deep ensemble model with slot alignment for sequence-to-sequence natural language generation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 152–162. Diederik P. Kingma and Jimmy Ba. 2015. 
Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Thomas N. Kipf and Max Welling. 2017. Semisupervised classification with graph convolutional networks. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. Philipp Koehn and Christof Monz. 2006. Manual and automatic evaluation of machine translation between european languages. In Proceedings on the Workshop on Statistical Machine Translation, pages 102– 121. R´emi Lebret, David Grangier, and Michael Auli. 2016. Neural text generation from structured data with application to the biography domain. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1203–1213. Wei Li, Jingjing Xu, Yancheng He, Shengli Yan, Yunfang Wu, et al. 2019. Coherent comment generation for chinese articles with a graph-to-sequence model. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4843–4852. Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard S. Zemel. 2016. Gated graph sequence neural networks. In 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings. Shuman Liu, Hongshen Chen, Zhaochun Ren, Yang Feng, Qun Liu, and Dawei Yin. 2018. Knowledge diffusion for neural dialogue generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1489–1498. Diego Marcheggiani and Laura Perez-Beltrachini. 2018. Deep graph convolutional encoders for structured data to text generation. In Proceedings of the 11th International Conference on Natural Language Generation, pages 1–9. Seungwhan Moon, Pararth Shah, Anuj Kumar, and Rajen Subba. 2019. Opendialkg: Explainable conversational reasoning with attention-based walks over knowledge graphs. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 845–854. Amit Moryossef, Yoav Goldberg, and Ido Dagan. 2019a. step-by-step: Separating planning from realization in neural data-to-text generation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2267–2277. Amit Moryossef, Yoav Goldberg, and Ido Dagan. 2019b. improving quality and efficiency in planbased neural data-to-text generation. In Proceedings of the 12th International Conference on Natural Language Generation, pages 377–382. Vinod Nair and Geoffrey E Hinton. 2010. Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th international conference on machine learning (ICML-10), pages 807–814. Eric W Noreen. 1989. Computer-intensive methods for testing hypotheses. Wiley New York. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Association for Computational Linguistics. Ratish Puduppully, Li Dong, and Mirella Lapata. 2019. Data-to-text generation with content selection and planning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6908–6915. 
Lena Reed, Shereen Oraby, and Marilyn Walker. 2018. Can neural generators for dialogue learn sentence planning and discourse structuring? INLG 2018, page 284. Ehud Reiter and Robert Dale. 2000. Building natural language generation systems. Cambridge university press. Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. 2009. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61–80. 2491 Michael Schlichtkrull, Thomas N Kipf, Peter Bloem, Rianne Van Den Berg, Ivan Titov, and Max Welling. 2018. Modeling relational data with graph convolutional networks. In European Semantic Web Conference, pages 593–607. Springer. Linfeng Song, Yue Zhang, Zhiguo Wang, and Daniel Gildea. 2018. A graph-to-sequence model for amrto-text generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1616– 1626. Amanda Stent, Rashmi Prassad, and Marilyn Walker. 2004. Trainable sentence planning for complex information presentations in spoken dialog systems. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL04), pages 79–86. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104–3112. Bayu Distiawan Trisedya, Jianzhong Qi, Rui Zhang, and Wei Wang. 2018. Gtr-lstm: A triple encoder for sentence generation from rdf data. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1627–1637. Petar Veliˇckovi´c, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Li`o, and Yoshua Bengio. 2018. Graph Attention Networks. International Conference on Learning Representations. Accepted as poster. Lili Yao, Nanyun Peng, Ralph Weischedel, Kevin Knight, Dongyan Zhao, and Rui Yan. 2019. Planand-write: Towards better automatic storytelling. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 7378–7385.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2492–2501 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2492 Enabling Language Models to Fill in the Blanks Chris Donahue Stanford University Mina Lee Stanford University {cdonahue,minalee,pliang}@cs.stanford.edu Percy Liang Stanford University Abstract We present a simple approach for text infilling, the task of predicting missing spans of text at any position in a document. While infilling could enable rich functionality especially for writing assistance tools, more attention has been devoted to language modeling—a special case of infilling where text is predicted at the end of a document. In this paper, we aim to extend the capabilities of language models (LMs) to the more general task of infilling. To this end, we train (or fine-tune) off-the-shelf LMs on sequences containing the concatenation of artificially-masked text and the text which was masked. We show that this approach, which we call infilling by language modeling, can enable LMs to infill entire sentences effectively on three different domains: short stories, scientific abstracts, and lyrics. Furthermore, we show that humans have difficulty identifying sentences infilled by our approach as machinegenerated in the domain of short stories. 1 Introduction Text infilling is the task of predicting missing spans of text which are consistent with the preceding and subsequent text.1 Systems capable of infilling have the potential to enable rich applications such as assisting humans in editing or revising text (Shih et al., 2019), connecting fragmented ideas (AI21, 2019), and restoring ancient documents (Assael et al., 2019). Rather than targeting a particular application, our goal here is to provide a general, flexible, and simple infilling framework which can convincingly infill in a variety of domains. A special case of infilling is language modeling: predicting text given preceding but not subsequent text.2 Language models are (1) capable of generat1Text infilling is a generalization of the cloze task (Taylor, 1953)—cloze historically refers to infilling individual words. 2In this paper, language modeling always refers to ordinary LMs, i.e., “unidirectional,” “autoregressive,” or “left-to-right.” She ate leftover pasta for lunch. She ate [blank] for [blank]. leftover pasta [answer] lunch [answer] Data Input Target Our Infilling Framework She ate [blank] for [blank]. She ate leftover pasta for lunch. Infilling Task Input Output Train Language Model Infilling Input 
 Target Output Figure 1: We consider the task of infilling, which takes incomplete text as input and outputs completed text. To tackle this task, our framework constructs training examples by masking random spans to generate pairs of inputs (text with blanks) and targets (answers for each blank). We then train unidirectional language models on the concatenation of each pair. Once trained, a model takes text input with blanks, predicts the answers, and then combines them to produce the output. ing remarkably coherent text (Zellers et al., 2019; See et al., 2019), (2) efficient at generating text, and (3) conceptually simple, but cannot infill effectively as they can only leverage context in a single direction (usually the past). On the other hand, strategies such as BERT (Devlin et al., 2019) and SpanBERT (Joshi et al., 2019) are able to infill using both preceding and subsequent text. However, their use of bidirectional attention limits their infilling capabilities to fixed-length spans. This is problematic as—for many applications—we may not know the length of a missing span a priori. Zhu et al. (2019) propose a method capable of infilling variable-length spans, but it uses a specialized architecture and hence cannot easily leverage large-scale pre-trained models. In this work, we present infilling by language modeling (ILM), a simple framework which en2493 ables LMs to infill variable-length spans while preserving their aforementioned benefits: generation quality, efficient sampling, and conceptual simplicity. Our framework involves a straightforward formulation of the infilling task which, as we demonstrate, can be learned effectively by existing LM architectures. As shown in Figure 1, our approach concatenates artificially-masked text with the text which was masked, and adopts a standard LM training (or fine-tuning) procedure on such examples. Once trained, infilling can be performed for a document with blanks by using the LM to generate text and then replacing the blanks with this text. In addition to its conceptual simplicity, our experiments show that ILM enables off-the-shelf LMs to infill effectively. Furthermore, we find that infilling performance improves when starting from a large-scale pre-trained LM (as opposed to training from scratch), suggesting an additional benefit of using our model-agnostic framework compared to approaches which require specialized architectures. We provide an interactive web demo of models trained under our framework. This demo can infill multiple variable-length spans with different granularities (e.g. words, n-grams, and sentences) on the domains of short stories, scientific abstracts, and song lyrics: https://chrisdonahue.com/ilm. All code, data, and trained models are available at https://github.com/chrisdonahue/ilm and also on the CodaLab platform at https: //worksheets.codalab.org/worksheets/ 0x9987b5d9cce74cf4b2a5f84b54ee447b. 2 Problem Statement The task of infilling is to take incomplete text ˜x, containing one or more missing spans, and return completed text x. Let [blank] be a placeholder for a contiguous sequence (span) of one or more missing tokens. Then, incomplete text ˜x is a sequence of tokens some of which are [blank]. In order to map ˜x to x, an infilling strategy must specify both how many and which tokens to generate for each [blank]. Note that there may be many reasonable x for a given ˜x. Hence, we are interested in learning a distribution p(x | ˜x). 
3 Infilling by Language Modeling In this section, we describe our ILM framework. We first outline a simple reparametrization of the infilling task. Then, we define a procedure for automatically generating suitable training examples which can be fed to an off-the-shelf LM. 3.1 Formulation Fedus et al. (2018) explore an infilling framework where LMs are trained on concatenations of ˜x and x, i.e., they use LMs to directly predict x given ˜x. While their approach is effective at infilling individual words, it is somewhat redundant as the model must “predict” the unmasked text in ˜x. Additionally, a model is not guaranteed to exactly reproduce the unmasked text. Instead, we make the trivial observation that it suffices to predict only the missing spans y which will replace the [blank] tokens in ˜x. We can then construct x by simply replacing [blank] tokens in ˜x with predicted spans y in a deterministic fashion. In order to handle multiple variable-length spans, we pose y as the concatenation of all missing spans separated by special [answer] tokens (one [answer] per [blank]) (Figure 1). We can thus cast infilling as learning p(y | ˜x) without loss of generality. 3.2 Training Given a corpus consisting of complete text examples, our framework first manufactures infilling examples and then trains an LM on these examples. To produce an infilling example for a given x, we first sample an ˜x from a stochastic function Mask(x) which randomly replaces some number of spans in x with [blank] tokens. Then, we concatenate together the spans which were replaced— separated by [answer] tokens—to form a training target y. Finally, we construct the complete infilling example by concatenating ˜x, [sep], and y (see Figure 2 for a complete example). We train (or fine-tune) LMs on these infilling examples using standard LM training methodology, yielding models of the form pθ(y | ˜x). Specifically, we train GPT-2 (Radford et al., 2019) off the shelf, but any LM can potentially be used. This framework has several advantages. First, it incurs almost no computational overhead compared to language modeling. Specifically, if there are k missing spans in ˜x, the concatenation of ˜x and y contains only 2k+1 more tokens than x (one [blank] and one [answer] per missing span plus one [sep]). As k is usually small (averaging around 2 per example in our experiments), sequence lengths remain similar to those encountered for the same x during language modeling. In contrast, using LMs to directly predict x from ˜x as in Fedus et al. (2018) effectively doubles the sequence length of x. 2494 This is particularly problematic when considering models like GPT-2 whose memory usage grows quadratically with sequence length. Second, our framework requires minimal change (three additional tokens) to an existing LM’s vocabulary. Finally, because the entirety of ˜x is in the “past” when predicting y, the ILM framework combines the ability to attend to incorporate context on both sides of a blank with the simplicity of decoding from LMs. 4 Experimental Setup We design our experiments to determine if training an off-the-shelf LM architecture with our ILM framework can produce effective infilling models for a variety of datasets. Specifically, we train on three datasets of different sizes and semantics (details in Appendix A): short STORIES (Mostafazadeh et al., 2016), CS paper ABSTRACTS, and song LYRICS. 4.1 Mask Function A benefit of the ILM framework is that it can be trained to infill spans corrupted by arbitrary mask functions. 
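To make the example construction of Section 3.2 concrete, the sketch below builds a training string from a document and a chosen set of spans, and shows how a generated answer string is spliced back into the blanks at inference time. The whitespace tokenization and literal special-token strings are simplifying assumptions; the actual models operate on GPT-2's subword vocabulary extended with these special tokens.

def make_infilling_example(tokens, spans):
    # tokens: the complete document x as a token list.
    # spans: non-overlapping (start, length) pairs to mask, in order.
    # Returns the training string  x~ [sep] y  of Section 3.2.
    masked, answers, prev_end = [], [], 0
    for start, length in spans:
        masked += tokens[prev_end:start] + ["[blank]"]
        answers += tokens[start:start + length] + ["[answer]"]
        prev_end = start + length
    masked += tokens[prev_end:]
    return " ".join(masked + ["[sep]"] + answers)

def infill(masked_text, generated_answers):
    # Replace each [blank] in x~ with the corresponding generated span.
    spans = [s.strip() for s in generated_answers.split("[answer]") if s.strip()]
    for span in spans:
        masked_text = masked_text.replace("[blank]", span, 1)
    return masked_text

x = "She ate leftover pasta for lunch .".split()
print(make_infilling_example(x, [(2, 2), (5, 1)]))
# -> She ate [blank] for [blank] . [sep] leftover pasta [answer] lunch [answer]
print(infill("She ate [blank] for [blank] .",
             "leftover pasta [answer] lunch [answer]"))
# -> She ate leftover pasta for lunch .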
Here, we explore a mask function which simultaneously trains models to infill different granularities of text; specifically, words, n-grams, sentences, paragraphs, and documents. By using a unique special token per granularity (e.g. [blank word]), this mask function offers users coarse but intuitive control over the length of the spans to be infilled. We configure our mask function to mask each token in a given document with around 15% probability, echoing the configuration of Devlin et al. (2019). However, instead of masking individual tokens uniformly at random, we perform a preorder traversal of the granularity hierarchy tree, randomly masking entire subtrees with 3% probability. For the datasets we consider, this results in a marginal token mask rate of about 15% (details in Appendix B). While we train to infill several different granularities, we primarily evaluate and discuss the ability of our models to infill sentences for brevity. Quantitative results of our models on other granularities can be found in Appendix D, and granularity functionality can also be explored in our web demo. 4.2 Task and Model Configurations For all experiments, we train the same architecture (GPT-2 “small”) using the same hyperparameters She ate leftover pasta for lunch. She ate [blank] for [blank]. She ate leftover pasta for lunch. [end] .lunch for leftover pasta ate She [end] She ate [blank] for [blank]. She ate leftover pasta for lunch. [end] She ate [blank] for [blank]. [sep] leftover pasta [answer] lunch [answer] Data Masked LM LM-Rev LM-All ILM Training Examples for Different Strategies Figure 2: Training examples for three baseline infilling strategies and ILM on a given artificially-masked sentence. For each strategy, we train the same architecture (GPT-2) on such examples. At both training and test time, examples are fed from left to right; anything to the left of a green target is available to the model as context when predicting the target. Precisely, LM only considers past context, and LM-Rev only considers future. LM-All considers all available context but uses long sequence lengths. Our proposed ILM considers all context while using fewer tokens. (Appendix C) while varying the infilling strategy and dataset. In addition to our proposed ILM strategy for infilling, we consider three baseline strategies: (1) language modeling (LM; “infilling” based only on past context), (2) reverse language modeling (LM-Rev; “infilling” based only on future context), and (3) language modeling based on all available context (LM-All). LM-All simply concatenates x and ˜x together as in Fedus et al. (2018). LM-All represents arguably the simplest way one could conceive of infilling with LMs, but results in long sequence lengths. Training examples for all strategies are depicted in Figure 2. For each strategy, we also vary whether training is initialized from the pre-trained GPT-2 model or from scratch. Despite discrepancies between the pre-training and our fine-tuning for most infilling strategies, all of the infilling experiments initialized from the pre-trained checkpoint performed better than their from-scratch counterparts. This indicates that ILM can effectively leverage large-scale language modeling pre-training to improve infilling performance. Henceforth, we will only discuss the models initialized from the pre-trained checkpoint, though we report quantitative performance for all models in Appendix D. 
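To make the differences between the four strategies in Figure 2 concrete, the sketch below (our own illustration, with the special tokens written as literal strings) builds each strategy's training sequence from a complete sentence, its masked version, and the answer spans:

```python
def training_sequences(x: str, x_tilde: str, y: str) -> dict:
    """Training strings for the four strategies of Figure 2 (illustrative only)."""
    # Crude whitespace-level reversal; the actual LM-Rev reverses (sub)tokens.
    reverse = lambda s: " ".join(reversed(s.split()))
    return {
        "LM":     x + " [end]",                  # past context only
        "LM-Rev": reverse(x) + " [end]",         # future context only
        "LM-All": x_tilde + " " + x + " [end]",  # all context, long sequences
        "ILM":    x_tilde + " [sep] " + y,       # all context, short sequences
    }

examples = training_sequences(
    "She ate leftover pasta for lunch.",
    "She ate [blank] for [blank].",
    "leftover pasta [answer] lunch [answer]")
for name, seq in examples.items():
    print(f"{name:7s} {seq}")
```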
For the models trained on STORIES and ABSTRACTS, we trained models to convergence using early stopping based on the validation set perplexity (PPL) of each model computed only on the masked tokens. These models took about a day to reach 2495 STO ABS LYR Length LM 18.3 27.9 27.7 1.00 LM-Rev 27.1 46.5 34.3 1.00 LM-All 15.6 22.3 21.4 1.81 ILM 15.6 22.4 22.6 1.01 Table 1: Quantitative evaluation results. We report test set perplexity (PPL) on the sentence infilling task for different model configurations on all three datasets, as well as average length of all test set examples in tokens relative to that of the original sequence (lower is better for all columns). Our proposed ILM framework achieves better PPL than both LM and LM-Rev, implying that it is able to take advantage of both past and future context. ILM achieves similar PPL to LM-All with shorter sequence lengths (hence less memory). their early stopping criteria on a single GPU. For the larger LYRICS dataset, we trained models for 2 epochs (about two days on a single GPU). 5 Quantitative Evaluation We evaluate the quantitative performance of our models on the sentence infilling task by measuring PPL on test data.3 In this setting, a sentence is selected at random and masked out, and we measure the likelihood assigned by a model to the masked sentence in the context of the rest of the document. Regardless of differences in the ordering and number of tokens that each strategy uses to represent a test example, PPL is always computed only for the span of tokens comprising the original sentence (e.g. green tokens in Figure 2). Table 1 shows that across all datasets, ILM outperforms models which see only past or future context (LM and LM-Rev respectively), implying that our proposed framework is able to take advantage of bidirectional context despite using unidirectional models. Additionally, while one might expect LMAll to outperform ILM because its training examples more closely “resemble” those of standard LMs, ILM achieves similar performance to LMAll. This indicates that GPT-2 is able to effectively learn the “syntax” of ILM examples and achieve reasonable infilling performance with shorter sequences (and hence with much less memory usage). We also observe that models trained via ILM perform similarly on the special case of language mod3Overlap-based metrics such as BLEU score (Papineni et al., 2002) are not appropriate for evaluating infilling as there are many realistic infills that have no word-level overlap with the original, e.g., “a sandwich” instead of “leftover pasta.” eling compared to the models which were trained only on language modeling (Appendix D.1). This suggests that ILM does not just repurpose LMs to infill, but rather extends their capabilities while maintaining their original functionality. 6 Human Evaluation In addition to our quantitative evaluation, we seek to evaluate the qualitative performance of ILM. To this end, we sample a story from the STORIES test set and randomly replace one of its five humanwritten sentences with a model output. Then, we task human annotators on Amazon Mechanical Turk with identifying which of the sentences in a story was machine-generated (details in Appendix E). We compare our ILM model to three baseline infilling strategies: an LM (context beyond the replaced sentence was discarded), the best model (self-attention; SA) from Zhu et al. (2019), and the pre-trained BERT (base) model (Devlin et al., 2019). All approaches except for BERT were first fine-tuned on the STORIES dataset. 
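(A note on the evaluation protocol above: restricting PPL to the target span amounts to masking the loss at every other position. The sketch below is our own illustration, assuming a GPT-2-style causal LM loaded through the HuggingFace transformers library; the "gpt2" checkpoint name and the token-index interface are placeholders, not the authors' evaluation script.)

```python
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def span_ppl(text: str, span_start: int, span_end: int) -> float:
    """PPL over token positions [span_start, span_end) of the full sequence.

    Positions outside the span get label -100, which the cross-entropy loss
    ignores, so context tokens condition the prediction but do not count
    toward the reported perplexity.
    """
    input_ids = tokenizer(text, return_tensors="pt").input_ids
    labels = input_ids.clone()
    labels[:, :span_start] = -100
    labels[:, span_end:] = -100
    with torch.no_grad():
        loss = model(input_ids, labels=labels).loss  # mean NLL over span tokens
    return math.exp(loss.item())
```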
To infill using BERT, we replace the tokens representing the original sentence with mask tokens, and then generate text by replacing mask tokens one at a time (conditioning on previously-generated tokens). While vocabulary differences make it is less useful to compare PPL for the SA and BERT baselines to our GPT-2-based strategies, we can still meaningfully compare them in this human evaluation setting. For each approach we compute a score, which we define as the percentage of examples where the annotator did not correctly identify the machinegenerated sentence. Therefore, a higher score implies a better (more natural, human-like) model. We collect 100 responses for each model and report the scores in Table 2, with qualitative examples in Figure 3 and Appendix E. Of the four strategies, ILM achieves the highest score, implying that sentences infilled by ILM are harder for humans to recognize as fake than those produced by other strategies. Somewhat surprisingly, we observed that despite only observing past context the LM model performed better than BERT and SA. BERT may have performed poorly due to the intrinsic difficulty of finding convincing infills with a precise length in tokens. SA may have performed poorly because, unlike LM and ILM, it was not initialized from a large-scaled pre-trained LM. 2496 BERT SA LM ILM Score (%) 20 29 41 45 Table 2: Human evaluation results. We use BERT (Devlin et al., 2019), the best model from Zhu et al. (2019) (SA), and our LM and ILM models to replace random sentences in five-sentence stories from the STORIES test set. Then, we task humans with identifying which sentence of the five was generated by a machine. We report the score of each model: the percentage of infilled stories where the human failed to identify the machine-generated sentence. Our ILM model achieves a higher score than all of the other models. Note that the max score is effectively 80%, as a perfect model would cause annotators to randomly choose one of the five sentences. BERT
 SA LM ILM Human favoritea ", Mary brightly said. She wasn't sure she had to go to the store. She went to check the tv. Patty knew her friends wanted pizza. She also had the place looking spotless. Example Story with Masked Sentence Patty was excited about having her friends over. She had been working hard preparing the food. [blank] All of her friends arrived and were seated at the table. Patty had a great time with her friends. Figure 3: Example of a short story in our STORIES dataset with its third sentence masked, and sentences infilled by different models. The sentences generated by BERT and SA models are off-topic, the sentence generated by LM model is irrelevant to the future context, while the ones generated by ILM and Human successfully account for both previous and future context. 7 Related Work Methodology. A number of systems have the capability to infill but have practical drawbacks. Many systems are unable to automatically determine span length, and thus, can only infill fixedlength spans (Fedus et al., 2018; Devlin et al., 2019; Yang et al., 2019; Joshi et al., 2019; Gu et al., 2019; Liu et al., 2019). Methods such as BERT present additional challenges during inference (Wang and Cho, 2019). Rudinger et al. (2015) frame narrative cloze as a generation task and employ language models, but they only consider one infill of a fixed length. Zhu et al. (2019); Shen et al. (2020) infill multiple variable-length sequences, but these approaches require the masked context to be iteratively updated and reprocessed to fill in blanks one a time. In contrast, our approach appends infilled text to the context and does not require reprocessing the entire input sequence for each blank. AI21 (2019) train an LM which can fill in the middle of a paragraph given the first and last sentences—our work generalizes to such capabilities. Task. The cloze task (Taylor, 1953) evaluates language proficiency by asking systems to fill in randomly-deleted words by examining context. Cloze has been extended in the forms of discourse (Deyes, 1984) and narrative cloze (Chambers and Jurafsky, 2008), which remove phrases and narrative events respectively. Recently, cloze has been used not only for evaluation, but also to improve text generation quality (Fedus et al., 2018) and transfer learning (Devlin et al., 2019) (under the name “masked language modeling”). Text infilling can be thought of as generalizing the cloze task from single words to spans of unknown length. Raffel et al. (2019) explore infilling as a pre-training objective to improve downstream performance on inference tasks; our work focuses on generation. Story generation. Recent work seeks to generate stories given a title and storyline (Yao et al., 2019), entities (Clark et al., 2018), premise (Fan et al., 2018), or surrounding context and rare words (Ippolito et al., 2019). Our work differs in that we aim to build systems capable of making predictions based only on text context, rather than aspects specific to stories (e.g. storyline). 8 Conclusion We presented a simple strategy for the task of infilling which leverages language models. Our approach is capable of infilling sentences which humans have difficulty recognizing as machinegenerated. Furthermore, we demonstrated that our infilling framework is effective when starting from large-scale pre-trained LMs, which may be useful in limited data settings. In future work, we plan to incorporate these features into co-creation systems which assist humans in the writing process. 
We hope that our work encourages more investigation of infilling, which may be a key missing element of current writing assistance tools. Acknowledgments This work was funded by DARPA CwC under ARO prime contract no. W911NF-15-1-0462. We thank all reviewers for their helpful comments. 2497 References AI21. 2019. HAIM: A modest step towards controllable text generation. AI21 Labs Blog. Yannis Assael, Thea Sommerschield, and Jonathan Prag. 2019. Restoring ancient text using deep learning: a case study on greek epigraphy. arXiv:1910.06262. N. Chambers and D. Jurafsky. 2008. Unsupervised learning of narrative event chains. In Human Language Technology and Association for Computational Linguistics (HLT/ACL). Elizabeth Clark, Yangfeng Ji, and Noah A Smith. 2018. Neural text generation in stories using entity representations as context. In Association for Computational Linguistics: Human Language Technologies. J. Devlin, M. Chang, K. Lee, and K. Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Association for Computational Linguistics (ACL), pages 4171– 4186. T. Deyes. 1984. Towards an authentic ‘discourse cloze’. Applied Linguistics, 5(2):128–137. A. Fan, M. Lewis, and Y. Dauphin. 2018. Hierarchical neural story generation. arXiv preprint arXiv:1805.04833. W. Fedus, I. Goodfellow, and A. M. Dai. 2018. Maskgan: Better text generation via filling in the. In International Conference on Learning Representations (ICLR). J. Gu, Q. Liu, and K. Cho. 2019. Insertion-based decoding with automatically inferred generation order. arXiv preprint arXiv:1902.01370. D. Ippolito, D. Grangier, C. Callison-Burch, and D. Eck. 2019. Unsupervised hierarchical story infilling. In NAACL Workshop on Narrative Understanding, pages 37–43. M. Joshi, D. Chen, Y. Liu, D. S. Weld, L. Zettlemoyer, and O. Levy. 2019. SpanBERT: Improving pretraining by representing and predicting spans. arXiv preprint arXiv:1907.10529. D. Liu, J. Fu, P. Liu, and J. Lv. 2019. TIGS: An inference algorithm for text infilling with gradient search. arXiv preprint arXiv:1905.10752. N. Mostafazadeh, N. Chambers, X. He, D. Parikh, D. Batra, L. Vanderwende, P. Kohli, and J. Allen. 2016. A corpus and cloze evaluation for deeper understanding of commonsense stories. In North American Association for Computational Linguistics (NAACL). Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In ACL. A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8). C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683. R. Rudinger, P. Rastogi, F. Ferraro, and B. V. Durme. 2015. Script induction as language modeling. In Empirical Methods in Natural Language Processing (EMNLP). Abigail See, Aneesh Pappu, Rohun Saxena, Akhila Yerukola, and Christopher D Manning. 2019. Do massively pretrained language models make better storytellers? arXiv:1909.10705. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909. Tianxiao Shen, Victor Quach, Regina Barzilay, and Tommi Jaakkola. 2020. Blank language models. arXiv:2002.03079. Y. Shih, W. Chang, and Y. Yang. 2019. XL-Editor: Post-editing sentences with xlnet. 
arXiv preprint arXiv:1910.10479. W. L. Taylor. 1953. “Cloze procedure”: A new tool for measuring readability. Journalism Bulletin, 30(4):415–433. A. Wang and K. Cho. 2019. BERT has a mouth, and it must speak: BERT as a Markov random field language model. arXiv preprint arXiv:1902.04094. T. Wolf, L. Debut, V. Sanh, J. Chaumond, C. Delangue, A. Moi, P. Cistac, T. Rault, R. Louf, M. Funtowicz, and J. Brew. 2019. HuggingFace’s transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771. Z. Yang, Z. Dai, Y. Yang, J. Carbonell, R. Salakhutdinov, and Q. V. Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237. L. Yao, N. Peng, R. Weischedel, K. Knight, D. Zhao, and R. Yan. 2019. Plan-and-write: Towards better automatic storytelling. In Association for the Advancement of Artificial Intelligence (AAAI). Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, and Yejin Choi. 2019. Defending against neural fake news. In NeurIPS. W. Zhu, Z. Hu, and E. Xing. 2019. Text infilling. arXiv preprint arXiv:1901.00158. 2498 A Datasets - STORIES (100K examples, 5M words) Short stories from the ROCStories dataset (Mostafazadeh et al., 2016). Each story contains a title and five sentences. - ABSTRACTS (200K examples, 30M words) Abstracts from CS papers on arXiv - LYRICS (2M examples, 60M words) Song lyrics from lyrics.com We experimented on multiple datasets to demonstrate that our framework was not custom tailored to a single domain. On the STORIES and ABSTRACTS datasets, we include metadata (story title, paper subject matter, etc.), as the first “paragraph” of the document. By providing these paragraphs (Appendix B), our infilling model implicitly learns to summarize (e.g. infill a title given a story), and do conditional generation (e.g. infill a story given a title). On the LYRICS dataset, infilling models may be especially helpful to humans; external aid in the form of rhyming dictionaries is already commonly employed in this domain. To ensure that all experiments were trained on the same data, we removed infilling examples which would have exceeded our training sequence length of 256 tokens for the model with the longest sequence length (LM-All). This removed no examples from STORIES, a small fraction of examples from LYRICS, and a substantial number of examples from ABSTRACTS. B Masking function We design a mask function which takes the entire document and selectively masks several span granularities: words, n-grams, sentences, paragraphs, and entire documents. Accordingly, models trained via ILM on this masking function offer users the ability to specify the granularity of text to infill at a particular location. This allows users to have coarse but intuitive control over infilling length, so that multiple paragraphs are not generated when the user was expecting a single word. Our masking function first constructs a tree of the training example (using the natural hierarchy of documents, paragraphs, sentences, and words). Then, using a pre-order tree traversal, each subtree is masked with 3% probability (or ignored if any of its ancestors are already masked). If the entire document (root node of the tree) is masked, then the infilling model’s job is equivalent to that of a language model. 
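A minimal sketch of this pre-order traversal is shown below (our own paraphrase of the description, not the released code); the document tree is assumed to be given as nested Python lists, and the word-level n-gram refinement described in the next paragraph is omitted:

```python
import random

MASK_PROB = 0.03  # per-subtree masking probability

def mask_tree(node, granularity="document", masked=None):
    """Pre-order traversal; a node is skipped if any ancestor is already masked.

    `node` is either a string (a word leaf) or a list of children, e.g.
    document = [paragraph, ...], paragraph = [sentence, ...], sentence = [word, ...].
    Returns (granularity, subtree) pairs selected for masking.
    """
    if masked is None:
        masked = []
    if random.random() < MASK_PROB:
        masked.append((granularity, node))  # mask whole subtree, do not recurse
        return masked
    if isinstance(node, list):
        child_gran = {"document": "paragraph",
                      "paragraph": "sentence",
                      "sentence": "word"}[granularity]
        for child in node:
            mask_tree(child, child_gran, masked)
    return masked

doc = [[["Patty", "was", "excited", "."], ["She", "prepared", "the", "food", "."]]]
print(mask_tree(doc))  # e.g. [('sentence', ['She', 'prepared', 'the', 'food', '.'])]
```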
If a word (leaf) is selected to be masked, 50% of the time we mask that individual word, otherwise we mask an n-gram of random length between 1 and min(8, # words left in the sentence) words (inclusive). Note that a word may comprise multiple tokens, as GPT-2 uses sub-word tokenization (Sennrich et al., 2015). We chose the value of 3% as, for the datasets we considered, it resulted in a marginal token mask rate of around 15%, echoing the configuration of Devlin et al. (2019). We add special tokens for each granularity to our model’s vocabulary (e.g. [blank word]), so that the user may specify which granularity they would like the infilling model to produce. This functionality can be explored in our demo: https: //chrisdonahue.com/ilm. While we focus on this specific mask function in this paper, we structured the ILM codebase to allow users to train infilling models for completely different use cases. Users need only define a new mask function which takes complete documents and outputs lists of character-level spans representing the desired spans to be masked. C Hyperparameters We use early stopping based on the PPL of the model on infilling the masked token for the validation set. We train all models using the default fine-tuning parameters specified in the transformers library (Wolf et al., 2019), except that we use a batch size of 24 and a sequence length of 256. Note that the most straightforward way of training an LM on ILM examples (Section 3.2) is to maximize the likelihood of the entire concatenated example: ˜x, [sep], and y. This trains the model to predict tokens in ˜x even though such behavior is not necessary at inference time as ˜x will always be fully-specified. Nevertheless, we found that this additional supervision improved performance when evaluating model PPL of y. Conveniently, this is also the default behavior when adapting existing LM training code for use with ILM. D Evaluation on language modeling and infilling other granularities Our quantitative evaluation (Section 5) examined the sentence infilling performance of GPT-2 initialized from the large-scale pre-trained checkpoint 2499 STO ABS LYR LM (scratch) 33.4 52.1 25.1 LM-Rev (scratch) 32.9 53.9 24.7 LM-All (scratch) 30.4 44.6 26.2 ILM (scratch) 30.8 45.3 30.6 LM 17.6 25.7 20.8 LM-Rev 25.1 36.7 23.7 LM-All 17.8 25.2 21.5 ILM 18.1 23.9 23.0 Table 3: Document infilling PPL (or language modeling) of ILM and baselines initialized either from scratch or from the pre-trained checkpoint across three datasets. Note that PPL of ILM is similar to LM, implying that our infilling strategy can reasonably maintain the ability to perform language modeling while extending the ability to infill. STO ABS LYR LM (scratch) 34.0 52.8 28.9 LM-Rev (scratch) 34.9 59.3 30.4 LM-All (scratch) 27.0 46.2 24.3 ILM (scratch) 25.5 46.0 27.5 LM 17.5 25.5 23.9 LM-Rev 26.5 39.0 29.2 LM-All 15.1 24.4 19.3 ILM 14.9 23.5 20.2 Table 4: Mixture infilling PPL of all models (a mixture of all granularities). after fine-tuning on different datasets and infilling strategies. Here, we report PPL for GPT-2 both initialized from scratch and from the pre-trained checkpoint for several other configurations: language modeling, a mixture of granularities, specific granularities, and language modeling. D.1 Language modeling In Table 3, we report PPL for “document infilling,” which is equivalent to language modeling (because ˜x is always [blank document]). 
Because of how we structured our mask function (Appendix B), 3% of infilling examples consist of the entire document masked out, which results in the ability of our ILM framework to perform standard infilling. We see that performance of ILM is similar to that of LM on this task, even though ILM sees far fewer examples of language modeling compared to LM. STO ABS LYR LM (scratch) 35.6 51.5 25.1 LM-Rev (scratch) 34.8 65.1 24.7 LM-All (scratch) 33.4 45.0 26.2 ILM (scratch) 34.3 45.3 30.6 LM 18.3 24.2 20.8 LM-Rev 26.5 42.8 23.7 LM-All 20.4 23.4 21.5 ILM 20.7 22.5 23.0 Table 5: Paragraph infilling PPL of all models. STO ABS LYR LM (scratch) 36.0 65.4 33.5 LM-Rev (scratch) 35.1 92.2 35.8 LM-All (scratch) 27.1 53.8 27.1 ILM (scratch) 26.7 51.0 31.0 LM 18.3 27.9 27.7 LM-Rev 27.1 46.5 34.3 LM-All 15.6 22.3 21.4 ILM 15.6 22.4 22.6 Table 6: Sentence infilling PPL of all models. D.2 Mixture of granularities In Table 4, we report results for a mixture of granularities. Specifically, we run the same mask function we use for training (Appendix B) on our test data and evaluate PPL on the masked spans. This reflects general infilling ability across a wide variety of granularities (and hence lengths). Unlike our other quantitative evaluations, there may be multiple variable-length spans missing from each example in this evaluation. Results are similar to that of sentence infilling. Namely, that ILM outperforms LM and LM-Rev and is similar to LM-All despite using much less memory. D.3 Individual granularities In Tables 5 to 8 we report PPL values for infilling performance on paragraphs, sentences, n-grams, and words, respectively, across the three datasets. For each granularity, we create one infilling example per document from the test set with exactly one masked span (randomly chosen from all spans of that granularity for that document). Then, we compute PPL only on the tokens which comprise the masked span, i.e., PPL is computed for all models on exactly the same set of tokens. Across all granularities, we observe that ILM outperforms 2500 STO ABS LYR LM (scratch) 36.1 62.5 34.1 LM-Rev (scratch) 36.4 89.1 36.3 LM-All (scratch) 26.4 60.1 24.3 ILM (scratch) 23.1 49.5 26.3 LM 19.2 25.5 28.2 LM-Rev 26.6 45.0 34.8 LM-All 14.5 20.5 18.6 ILM 13.8 21.5 18.8 Table 7: N-gram infilling PPL of all models. STO ABS LYR LM (scratch) 32.3 57.2 34.8 LM-Rev (scratch) 31.6 100.0 36.7 LM-All (scratch) 12.6 51.8 12.5 ILM (scratch) 9.2 37.9 12.2 LM 17.1 23.0 28.7 LM-Rev 24.1 45.0 35.1 LM-All 7.5 15.8 9.5 ILM 5.4 14.2 8.5 Table 8: Word infilling PPL of all models. LM and LM-Rev and either outperforms or is comparable with LM-All while using less memory. E Details on human evaluation For human evaluation, we sampled 100 stories from the test set of the STORIES dataset. From each story, we masked out one sentence at a time, thereby resulting in 500 stories with masked sentences. Then we used these stories as context and tasked each model with infilling the masked sentence. We compared 8 models in total. In addition to the four models reported in Section 6 (BERT, SA, LM, and ILM), we included the models which are initialized from scratch (as opposed to initialized from the large-scale pre-trained checkpoint) for exhaustive comparison. Furthermore, to filter out spam, we used a control model which always generates “This sentence was generated by a computer.” Lastly, we included the original sentence from the dataset as a reference model (Human) to sanity check the max score is around 80%. 
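Given the responses collected under this protocol (described in the next paragraph), the scores in Table 9 reduce to a simple count after spam filtering. The sketch below is our own illustration with an assumed response format; only the counting logic mirrors the paper:

```python
def score(responses, system):
    """Score = % of valid responses where the machine sentence was NOT identified."""
    # Spam filter: drop annotators who missed the control system, which always
    # generates "This sentence was generated by a computer."
    valid = [r for r in responses if r["control"]["identified"]]
    fooled = sum(1 for r in valid if not r[system]["identified"])
    return 100.0 * fooled / len(valid)

# Hypothetical responses: each maps a system to whether the annotator spotted it.
responses = [
    {"control": {"identified": True},  "ILM": {"identified": False}},
    {"control": {"identified": True},  "ILM": {"identified": True}},
    {"control": {"identified": False}, "ILM": {"identified": False}},  # spam, dropped
]
print(score(responses, "ILM"))  # 50.0
```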
Each annotator was shown 8 stories, one from each model, and was asked to identify one of the five sentences generated by machine (see Figure 4 for an example). Among the 100 collected responses, we filtered out 5 responses whose annotation for the control model was wrong. The quantitative and qualitative results can be found in Table 9 and Figure 5, respectively. All model outputs and responses of human evaluation can be found at https://github.com/chrisdonahue/ilm. Score (%) Control 0 BERT 20 SA 29 LM (scratch) 40 LM 41 ILM (scratch) 39 ILM 45 Human 78 Table 9: Human evaluation results. Identify one of the five sentences generated by machine. ○ Patty was excited about having her friends over. ○ She had been working hard preparing the food. ○ Patty knew her friends wanted pizza. ○ All of her friends arrived and were seated at the table. ○ Patty had a great time with her friends. Figure 4: Example of a task and instruction for human evaluation on Amazon Mechanical Turk. 2501 Example Story with Masked Sentence Lily always loved to read. She wondered sometimes, what it would be like to write a book? [blank] Lily did well in the course, and during it, wrote a short book. BERT SA LM ILM Human I held her hand and helped her sit. Of her, but she didn't know her. She practiced reading a lot every week. Finally, in middle school, her teacher introduced her to writing that. She decided to take a course on fiction writing. BERT SA LM ILM Human Or rather, what the next job would be now. I was going out I was going to the beach. I put on about thirty sugar cubes. The issues are getting so many people crazy. I could never catch up and each week got worse. Example Story with Masked Sentence Yesterday was Kelly's first concert. She was nervous to get on stage. [blank] Kelly was then happy. She couldn't wait to do it again. BERT
[Figure 5, remaining panel: infills by BERT, SA, LM, ILM, and Human for the story "Yesterday was Kelly's first concert. She was nervous to get on stage. [blank] Kelly was then happy. She couldn't wait to do it again."] Figure 5: Examples of sentence-level infills by different models.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2502–2515 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2502 INSET: Sentence Infilling with INter-SEntential Transformer Yichen Huang (黄 黄 黄溢 溢 溢辰 辰 辰)12∗, Yizhe Zhang1∗, Oussama Elachqar1, Yu Cheng1 1Microsoft Corporation, Redmond, Washington 98052, USA 2Center for Theoretical Physics, MIT, Cambridge, Massachusetts 02139, USA [email protected], {yizzhang, ouelachq, yu.cheng}@microsoft.com Abstract Missing sentence generation (or sentence infilling) fosters a wide range of applications in natural language generation, such as document auto-completion and meeting note expansion. This task asks the model to generate intermediate missing sentences that can syntactically and semantically bridge the surrounding context. Solving the sentence infilling task requires techniques in natural language processing ranging from understanding to discourselevel planning to generation. In this paper, we propose a framework to decouple the challenge and address these three aspects respectively, leveraging the power of existing largescale pre-trained models such as BERT and GPT-2. We empirically demonstrate the effectiveness of our model in learning a sentence representation for generation and further generating a missing sentence that fits the context. 1 Introduction Generating a span of missing tokens in a text chunk, known as “text infilling,” has attracted many attentions recently (Zhu et al., 2019; Song et al., 2019; Liu et al., 2019; Ippolito et al., 2019; Joshi et al., 2020). Here we study the related but somewhat different task of “sentence infilling.” Specifically, as illustrated in Figure 1, intermediate sentences (or chunks of text) are removed from long-form text (e.g., paragraphs, documents), and the task is to generate the missing pieces that can smoothly blend into and fit the context both syntactically and semantically. The generation can be either based only on context, or based on both context and side information such as constraint keywords. Compared with text infilling, sentence infilling requires the model to handle inter-sentential correlation and to reason about missing semantic information. Developing models for sentence infilling can potentially ∗These authors contributed equally to this work. She was extremely happy with our hotel and we had a complimentary buffet. ... The food was just phenomenal! I can’t recall what everything was called, but we rolled out of there stuffed and happy. My husband had the rib eye dumpling as an appetizer and he said it was the best dumpling he has ever had. Beautiful beachside boutique hotel with great views and modern decoration. My favorite part about this hotel is definitely the restaurant, UVA. I recently visited UVA to attend a friend’s birthday party. ... Figure 1: Sentence infilling: generating an intermediate sentence that provides a smooth semantic transition from the preceding to the following context. This example is generated by our model on the TripAdvisor dataset. The colors mark the correspondence between the generated sentence and the context. facilitate many text generation applications. 
Possible scenarios include, but are not limited to: document auto-completion by detecting and suggesting missing bridging sentences in the surrounding context; collaborative document writing by modifying and unifying different writing styles from multiple authors; meeting note expansion by extending a set of keywords (lexical constraints) to a full sentence, leveraging the surrounding context. There are many challenges associated with this long-form sentence infilling task, which is typically a one-to-many problem in that the possible outputs can be diverse. As the generated sentence should connect separate text pieces in a syntactically and semantically smooth and coherent manner, the task requires a wide range of understanding, planning, and generation techniques. Largescale pre-trained language models such as BERT (Devlin et al., 2019) and GPT-2 (Radford et al., 2019) have dramatically enhanced the understanding and generation modules. However, how to in2503 tegrate them in a holistic manner, and to analyze and establish the long-range dependence structure by high-level semantic planning is still challenging and yet to explore, as semantic appropriateness is usually subtler than syntactic appropriateness, which can be well characterized by autoregressive language models. Several works have been done in this direction. MASS (Song et al., 2019) obtains sentence representations by predicting a span of missing tokens. It can be used to generate missing text, but the missing span length needs to be pre-specified. Other related works (Liu et al., 2019; Joshi et al., 2020) also require knowledge of the span length as an input to their models, and thus are different from our work. Text infilling (Zhu et al., 2019) sequentially generates tokens for the missing part of a sentence until an end-of-blank token is generated. Its generation can be of arbitrary length. By design, all these previous approaches operate at the token level, and thus arguably focus more on lexical appropriateness than the global semantics. In this paper, we propose INter-SEntential Transformer (INSET), a novel approach to sentence infilling. Our model first produces sentence-level semantic features that capsulate the missing highlevel information. Then, grounded on the predicted semantic features, the model generates the syntactic and lexical features to embody the predicted sentence. Specifically, understanding, planning, and generation are handled by three modules in a synergistic manner: • a BERT-based encoder to map each sentence to the latent semantic space. • a sentence-level semantic planner to infer the missing information that can bridge the semantics of preceding and following context. • a GPT-based generator (decoder) to map semantic features back to the text domain. The main contributions and advantages of this work are summarized as follows: • We study the task of sentence infilling, which requires the model to handle inter-sentential correlation and to predict missing semantic information. This goes beyond text infilling (Zhu et al., 2019), which asks the model to fill in the missing part of a single sentence. • Our approach decouples understanding, planning, generation, and leverages existing largescale pre-trained understanding and generation models (BERT, GPT-2). The components of our model can be separately examined and improved with additional data. • Our model predicts a feature vector in the latent semantic space for the missing sentence and maps the vector to text. 
Thus, it takes care of semantic smoothness and appropriateness. • Our model allows the generation to be of arbitrary length, as in (Zhu et al., 2019). • Compared with directly processing text, our approach significantly reduces computation time and memory usage during training, as (after pre-computing sentence features) the sequence length is the number of sentences rather than that of tokens. 2 Related Work Pre-Trained Language Model. Language models pre-trained on a large corpus improve natural language understanding and generation through transferable contextualized word representations (Devlin et al., 2019; Lample et al., 2019) and models (Howard and Ruder, 2018). Large transformer models (Vaswani et al., 2017) like GPT-2 (Radford et al., 2019), Megatron (https://github. com/NVIDIA/Megatron-LM), and T5 (Raffel et al., 2019) can achieve state-of-the-art results without training on any particular language modeling benchmark. (Keskar et al., 2019) proposes a conditional generation model, trained to condition on control codes that govern style, content, and other task-specific properties. Different from them, our model builds sentence representations via autoencoding with a pair of BERT and GPT-2. Context-Aware Text Generation. There are some related works on context-aware text generation (Mikolov and Zweig, 2012; Tang et al., 2016; Mangrulkar et al., 2018). Most previous works on language modeling with contextual information (Wang and Cho, 2016; Wang et al., 2018; Sordoni et al., 2015b; Wen et al., 2015; Vinyals and Le, 2015) treat the preceding sentences as context. Compared with these sequential generation tasks, our task is constrained by bidirectional context, and is more challenging. Text infilling (Zhu et al., 2019) aims at filling in the missing part, given the rest of a sentence. (Liu et al., 2019) proposes an iterative inference algorithm based on gradient search for text infilling. For story infilling, (Ippolito et al., 2019) first predicts rare words in the missing span, and then generates text conditioned on these words. SpanBERT (Joshi 2504 et al., 2020) masks random contiguous spans and (pre-)trains a language model to predict tokens in the span. XL-Editor (Shih et al., 2019) adapts XLNet (Yang et al., 2019) to text infilling and other editing tasks. (Kang and Hovy, 2019) models logic connections between sentences and generates intermediate sentences grounded on inter-sentential “flow.” (Bhagavatula et al., 2020) formulates abductive commonsense reasoning as a natural language inference task to decide the appropriate reason that could explain the observation in one sentence given the background described by another sentence. (Cheng et al., 2020) proposes a text style transfer task to translate a sentence in the context of a paragraph into the desired style. These three works study generation tasks that address inter-sentential relationship, and thus may be conceptually related to our motivation. Compared with (Zhu et al., 2019; Liu et al., 2019; Ippolito et al., 2019; Joshi et al., 2020; Shih et al., 2019; Kang and Hovy, 2019; Bhagavatula et al., 2020; Cheng et al., 2020), our approach is clearly different. We fully exploit existing large-scale pretrained models BERT and GPT-2 to learn smooth sentence embeddings in the latent semantic space, and then process sentence-level information in this space. Hierarchical Text Generation. Hierarchical text generation with high-level semantic planning has been studied in many previous works. 
(Sordoni et al., 2015a) presents a hierarchical recurrent encoder-decoder architecture for context-aware query suggestion. (Zhang et al., 2019) proposes a framework to infer semantic features for response generation using self-supervised learning. Previous works have used multi-level LSTM encoders (Yang et al., 2016; Hu et al., 2020) and hierarchical autoencoders (Li et al., 2015) to learn hierarchical representations for long text. (Shen et al., 2019) uses a variational autoencoder to encode an entire paragraph into a single latent variable, from which the paragraph can be generated hierarchically. In comparison, our task is to generate intermediate sentences in the surrounding context. 3 Tasks and Methods 3.1 Task Definition The task of sentence infilling is formally defined as follows. Consider a dataset of N paragraphs {p(k)}N k=1. Each paragraph p(k) = (s(k) 1 , s(k) 2 , . . . , s(k) Mk) consists of Mk consecutive sentences. For each k, we are given a positive integer mk ≤ Mk and the context (s(k) 1 , s(k) 2 , . . . , s(k) mk−1, s(k) mk+1, . . . , s(k) Mk), but the mk’th sentence s(k) mk is missing. The task is to generate a sentence ˆs(k) mk in the missing position such that it fits the context. For simplicity and without any confusion, we drop the index k from now on (note that M and m may depend on k). The criteria for successful generation are: • The sentence ˆsm is fluent and meaningful. • Inserting the generated sentence into the context, we obtain a semantically coherent paragraph (s1, s2, . . . , sm−1, ˆsm, sm+1, . . . , sM). • ˆsm is written in the same style as contextual sentences {sj}j̸=m. Since there could be multiple semantically different sentences that fit the same context well, it is not necessary for ˆsm to be close to the ground truth sm. Rather, ˆsm is considered successful as long as it satisfies the criteria above. 3.2 INSET: Inter-Sentential Transformer Model Overview. At a high level, our model consists of two components: a (denoising) autoencoder and a sentence-level transformer. The former maps each sentence to a fixed-length feature vector in the latent semantic space, and reconstructs the sentence from the representation. The latter predicts the semantic features of the missing sentence from those of contextual sentences. We call our model INter-SEntential Transformer (INSET). Formally, let (E, D) be an autoencoder, where E (D) is the encoder (decoder) such that E ◦D and D ◦E are supposed to be identity maps. Let T be a sentence-level transformer with positional encoding P. The transformer T takes the contextual information as input and outputs a hypothetical representation of the missing sentence. Specifically, ˆsm = D T (f1 + P(1), f2 + P(2), . . . , fm−1 + P(m −1),⃗0 + P(m), fm+1 + P(m + 1), . . . , fM + P(M))[m]  , (1) where fj = Esj is the encoding of the sentence sj, ⃗0 is the zero vector representing the missing sentence, and T (· · · )[m] is output of the transformer T in the missing position m. The autoencoder and the sentence-level transformer can be trained separately. We first train the 2505 [CLS] w1 w2 [MASK] w4 · · · wl [SEP] Transformer encoder E from BERT f [SOS] w1 w2 w3 · · · wl−1 wl Transformer decoder D from GPT-2 [SOS] w1 w2 w3 w4 · · · wl [EOS] s1 s2 s3 s5 s6 s7 E E E E E E f1 f2 f3 ⃗0 f5 f6 f7 Sentence-level transformer T ˆf4 Figure 2: Model overview. Left panel: Denoising autoencoder. The encoder E takes a corrupted sentence (with each token wi for i = 1, 2, . . . 
, l masked randomly) as input and outputs a representation of the sentence. The decoder D should reconstruct the original uncorrupted sentence. The training parameters of E and D are initialized with those of BERT and GPT-2 , respectively. Right panel: Sentence-level transformer. Using the encoder E, we obtain the representation of every contextual sentence. These sentence representations are fed into a sentence-level transformer T , which outputs a representation of the missing sentence. former on individual sentences. Then, we precompute and save the feature vectors of all sentences. While training the latter, it is not necessary to load the former. This makes training more efficient. Sentence Representation Learning via Denoising Autoencoding. Large-scale pre-training approaches (e.g., BERT) lead to superior performance in many language understanding tasks related to sentence representation learning (Reimers and Gurevych, 2019). However, the features learned by BERT (or fine-tuned on downstream tasks) cannot be directly used for generation tasks, as the masked language model objective of BERT does not enforce the reconstruction of the original sentence from the extracted features. Instead of directly using BERT features, we learn sentence representations via autoencoding. This naturally integrates BERT and GPT-2, and combines sentence representation learning and generation. As shown in the left panel of Figure 2, we pad the [CLS] token at the beginning of each sentence sj. We initialize the encoder E with BERT, and extract the output fj corresponding to the [CLS] token as the embedding of sj. We initialize the decoder D with GPT-2, and feed fj as the embedding of the zeroth token. Then, we have D generate a sequence of tokens in the hope that the sequence matches sj (padded with special tokens [SOS] at the beginning and [EOS] at the end). To train the autoencoder, we use teacher forcing and minimize the negative log-likelihood loss by (fine-)tuning the parameters of E and D jointly. An autoencoder embeds sentences into vectors in the latent space. We hope that the embedding is smooth in the sense that semantically similar sentences are mapped to vectors that are close to each other. In particular, interpolation between two points in the latent space should correspond to a smooth semantic transition in the text domain. To this end, we use the following two tricks. First, we employ a denoising autoencoder, which is known to yield a smoother embedding (Vincent et al., 2008). To add noise, we randomly mask each token in sj with probability 15% by replacing the masked tokens with a special token [MASK]. During training, we use the “noisy” sj with masks as input to the encoder, and use the “clean” sj without masks to compute the loss function. Of course, one could try more sophisticated noise-adding strategies (Lewis et al., 2019). Second, we use early stopping. In our experiments, we observe that as training proceeds, the validation loss of the autoencoder keeps decreasing. In the absence of masks, presumably it would eventually decay to zero so that the autoencoder perfectly reconstructs every sentence. However, this does not necessarily imply that the embedding is smooth. On the contrary, an overtrained autoencoder often tries to remember every individual token and thus fails to achieve smoothness in the latent semantic space. 
Moreover, it can catastrophically forget some of the information in the initial pre-trained model (GPT-2) and partially lose the power of generating fluent sentences. In practice, we select a checkpoint by monitoring its validation performance on sentence interpolation. Some examples of sentence interpolation are shown in Table 1. Sentence Feature Prediction. After encoding sentences into feature vectors, we use a sentence2506 level transformer T to predict the feature vector of the missing sentence from those of contextual sentences. This is analogous to the task of predicting masked tokens for (pre-)training BERT (Devlin et al., 2019), but now it is at the sentence level. Indeed, sentence feature vectors in T correspond to token embeddings in BERT, and sentence position ID in T corresponds to position ID in BERT. We train the transformer T with the objective LSentTrans = 1 −cos(fm, T (· · · )[m]), (2) where cos(· · · ) is the cosine similarity between the ground truth sentence feature vector fm and the prediction T (· · · )[m] in Eq. (1). Note that cos(· · · ) is a good similarity measure only when its arguments are unit vectors. This is guaranteed by the technical trick of fixing the parameters of the last LayerNorm of the transformers E and T , i.e., do not compute the gradients of these parameters in backpropagation. Generating Sentences from Features. At test time, we use the decoder D to generate the missing sentence by mapping the predicted feature vector to the text domain. Note that standard generation schemes such as top-k sampling, beam search, and nucleus sampling (Holtzman et al., 2020) can be used without additional modeling effort. Computational Efficiency. Compared with vanilla GPT-2, our model can process and analyze a document containing many sentences at the discourse level with dramatically lower time and space complexity. To estimate quantitatively, suppose that a document contains Ns sentences, each of which has Nt tokens. Then, the time complexity is reduced from O(N2 s N2 t ) to O(N2 s + NsN2 t ). Moreover, sentence features can be precomputed once and then reused for every epoch or even in other tasks on the same dataset. If sentence features have been precomputed and are already directly available, the time complexity is further reduced to O(N2 s ). 3.3 Sentence Infilling with Lexical Constraints We further introduce a related task called sentence infilling with lexical constraints, which is the same as sentence infilling except that now we are given some keywords of the missing sentence as an additional input to hint the generation. The keywords are treated as soft constraints (aka priming): The generated sentence is not directly enforced to contain the exact keywords. It may contain a synonym or share some semantics with the keywords. We expect that the presence of keyword constraints makes the task more difficult rather than easier, although incorporating keywords can significantly improve the BLEU score of the generation with respect to the ground truth. Intuitively, keywords force the model to speculate the semantics of the ground truth sentence, and significantly reduce the number of possible solutions. In the absence of keywords, the model has the freedom of completing the task according to its own way of thinking. To handle keyword constraints, we introduce a new component called the constraint feature encoder to our architecture. 
It is a transformer encoder K that maps a set S of keywords to a feature vector that lives in the same latent space of sentence embeddings. We train K with knowledge distillation (Kim and Rush, 2016). The teacher model is the sentence encoder E, which maps a sentence containing the keywords in S to a feature vector. We use the cosine similarity loss between these two feature vectors to teach the student model K. For implementation details, suppose we have two keywords w1 and w2. Then, the input to K is three tokens ([CLS], w1, w2). We replace the zero vector in Eq. (1), which represents the missing sentence, with the output of K above the [CLS] token. We do not use positional encoding in K because keywords do not have order. 4 Experiments 4.1 Experimental Setup We evaluate our model on two datasets (TripAdvisor and Recipe). We have released the source code to facilitate future research (https://github. com/dreasysnail/INSET). Dataset and Preprocessing. We conduct experiments on the TripAdvisor and Recipe datasets. For the TripAdvisor dataset of hotel reviews (Wang et al., 2010), we partially follow the preprocessing in (Cho et al., 2019). Our preprocessing includes, but is not limited to: (i) discarding reviews containing non-English tokens; (ii) removing duplicate reviews so that only one copy is retained. We set the maximum number of tokens in a sentence to be 32 and the minimum number of sentences in a review to be 7 (so that the context is not too short). Any review with longer sentences or having 2507 a smaller number of sentences is discarded. We use the following strategy to mask sentences. For a paragraph consisting of M ≥7 consecutive sentences, we split it into M−6 data points, each of which has exactly 7 sentences. The j’th data point spans from the j’th to the (j + 6)’th sentence (inclusive) of the paragraph, for j = 1, 2, . . . , M −6. We mask the middle (i.e., 4th) sentence for each data point so that the masking rate is 1/7 ≈14.3%, which is close to that (15%) of BERT. After preprocessing, the size of the dataset (training, validation, test) is (1108134, 62543, 533) data points. Our strategy of always masking the middle sentence out of 7 sentences is not only the simplest but also without loss of generality. Our model is directly applicable to the situation where we randomly mask, e.g., 3 out of 20 sentences. However, the quality of human evaluation may be affected because the patience and attention of human evaluators may decrease as the context length increases. For the effectiveness of human evaluation, we use the simplest strategy to mask sentences. The Recipe dataset is obtained from (https: //commoncrawl.org), where the metadata is formatted according to Schema.org (https:// schema.org/Recipe). We use the same preprocessing as that of the TripAdvisor dataset except that instructions with less than 5 sentences are discarded. After preprocessing, the size of the dataset (training, validation, test) is (1073886, 56055, 500) data points. Recipe instructions usually describe a time-ordered procedure, and thus are ideal for testing the reasoning capability of the model. Evaluation Metrics. Following (Galley et al., 2019; Zhang et al., 2020), we perform automatic evaluation using standard machine translation metrics, including BLEU (Papineni et al., 2002), NIST (Doddington, 2002), and METEOR (Lavie and Agarwal, 2007). As a variant of BLEU, NIST weights n-gram matches by their information gain, and thus penalizes uninformative n-grams. 
We also use Entropy (Zhang et al., 2018) and Dist-n (Li et al., 2016) to evaluate lexical diversity. See (Galley et al., 2019) for more details. BLEU, NIST, and METEOR measure the similarity between the generated sentence and the ground truth. They are not ideal scores for our task because a sentence that is semantically very different from the ground truth could possibly fit the context perfectly. However, it may still be helpful to compute these commonly used scores. It is an important and challenging open problem to design an automatic score that faithfully measures the overall quality of the generation in our task. Baseline. Our baseline is the self-attention model for text infilling (Zhu et al., 2019). It is a transformer language model with novel positional encoding. The traditional approach of encoding the absolute position of each token is not directly applicable to our task because we do not know in advance the absolute positions of contextual tokens after the missing sentence. To resolve this issue, (Zhu et al., 2019) divides the text into segments. In the case of only one masked sentence, the first (third) segment consists of contextual tokens before (after) the mask, and the second corresponds to the mask. Then, each token is indexed by its segment ID and its position ID within the segment. The missing tokens are sequentially generated from these IDs and the current surrounding context. Training the baseline model on our dataset, we use the same set of hyperparameters as in the original reference except that the batch size is set to 250 (it is 400 in (Zhu et al., 2019)). This avoids out-ofmemory errors. Note that we are handling much longer sequences (usually > 100 tokens) than (Zhu et al., 2019), in which the maximum number of tokens in a sequence is only 16. The baseline model is trained for a sufficient number (30) of epochs until the validation (negative log-likelihood) loss and perplexity clearly saturate. We report the results of the checkpoint with the smallest validation loss and perplexity. Note that we observe that other checkpoints in the saturation regime behave very similarly on the test set. Keyword Extraction. In the task of sentence infilling with lexical constraints, we need to extract keywords from the masked sentence. Keyword extraction is a classical problem in information retrieval. Standard methods include, but are not limited to, tf-idf (term frequency–inverse document frequency) (Ramos, 2003). We have tried tf-idf, but it does not work well for the TripAdvisor dataset of hotel reviews. One reason is that this dataset has quite a few typos, and unfortunately tf-idf favors them because typos occur less frequently than normal words. This issue can be resolved by manually filtering out all typos. After the fix, however, we observe that the quality of extracted keywords remains unsatisfactory. We use the following strategy to extract key2508 words. We first define a list of stop words. To this end, we use the stop word list from NLTK (Bird et al., 2009) and manually add a number of words (e.g., “hotel”) that are not very informative for the particular dataset of hotel reviews. For each sentence, we select non-stop words that appear most frequently in the entire dataset. We usually select two keywords per sentence, but occasionally select one or even zero if few words remain after filtering out stop words and typos. We observe that the keywords extracted with this strategy can pivot the gist of most sentences well. Model Size and Hyperparameters. 
Our architecture has several components. The encoder E and the sentence-level transformer T have the same size as BERT BASE. The decoder D has the same size as GPT-2 (117M). In the presence of lexical constraints, the constraint feature encoder K has the same size as BERTBASE. During decoding, we use beam search with beam size 5. 4.2 Experimental Results Sentence Representation Learning. We first qualitatively evaluate the smoothness of the latentspace sentence embeddings learned via denoising autoencoding. Table 1 shows two examples of sentence interpolation on the TripAdvisor dataset. In each example, the first and last sentences are inputs by hand, and the 3 intermediate ones are interpolations generated by our (denoising) autoencoder. We observe that the interpolations not only combine words from input sentences, but are readable, meaningful, and show a smooth semantic transition from the first to the last sentence. We speculate that the power of generating fluent and semantically coherent sentence interpolations is derived from BERT and GPT-2. Inherited from these largescale pre-trained models, the latent-space sentence embedding is reasonably smooth as our sentence interpolation results show. Automatic Evaluation. Table 2 shows the BLEU, NIST, METEOR, Entropy, Dist-n scores, and the average length (number of words) of the generated sentences. For the TripAdvisor dataset, we also present results in the presence of keyword constrains. Table 2 compares the baseline (Zhu et al., 2019), our results, and the ground truth. In the absence of keyword constraints, INSET outperforms the baseline in terms of all scores on both datasets. This indicates that our results are semantically closer example 1 A The pool area was nice and sunbathing was great. The pool area was nice and staff was great. The pool area staff was nice and very helpful. Front desk staff were very helpful and friendly. B Front desk staff were very nice and helpful. example 2 A The service was attentive and we had the best food in town. The service was attentive and we had a great room with plenty of food. The room was spacious with good service and we had a queen bed. The room was very spacious with queen beds. B The room was very spacious with 2 queen beds. Table 1: Sentence interpolation. “A” and “B” are two sentences in the test set. The intermediate sentences are generated by interpolating between the latent-space representations of A and B. to the ground truth and are more diverse than the baseline. In terms of the average generation length, our results are much closer to the ground truth than the baseline is. Table 2 also presents two ablation studies. The first shows the performance decrease with less context. Recall that each data point in the TripAdvisor dataset has 6 contextual sentences (full context). We train INSET on the same set of data points but truncate the context to 4 sentences (less context). The second ablation study shows the effect of context in the presence of keywords. We compare two models. The first (INSET w/ context) is the model described in Subsection 3.3. Its generation is based on both keywords and context. The second model (INSET w/o context) is D ◦K, which directly decodes the output of the constraint feature encoder K using the decoder D. Its generation is only based on keywords but not context. We observe that the scores of the first model are higher than those of the second. Both ablation studies show that our model can make full use of context to improve the generation. Human Evaluation. 
Human Evaluation. We performed a human evaluation of our method on the TripAdvisor dataset. We used a crowd evaluation platform to compare two systems and assess their fluency, informativeness, and relevance to the surrounding context (coherence) on 500 random samples from the test set. Following recommended best practices, each sample was evaluated by five judges. We performed simple spam detection by excluding judges who were too fast or performed too poorly on a gold set. To avoid bias, we randomized the position of each system while asking judges to compare our systems (with and without keywords) with the ground truth and the text infilling baseline (Zhu et al., 2019).

Dataset / Method | NIST-2 | NIST-4 | BLEU-2 | BLEU-4 | METEOR | Ent-4 | Dist-1 | Dist-2 | Len.
Trip, without keyword constraints:
  baseline | 0.54 | 0.54 | 4.29% | 0.54% | 5.85% | 3.10 | 1.32% | 2.23% | 6.97
  INSET (full context) | 1.23 | 1.23 | 6.08% | 0.96% | 7.04% | 8.13 | 16.30% | 46.64% | 10.70
  INSET (less context) | 1.02 | 1.02 | 4.74% | 0.51% | 5.83% | 7.85 | 12.98% | 41.39% | 11.26
Trip, with keyword constraints:
  INSET (w/ context) | 3.09 | 3.15 | 20.14% | 6.57% | 16.48% | 8.34 | 22.61% | 63.60% | 11.23
  INSET (w/o context) | 3.00 | 3.04 | 19.47% | 6.07% | 16.00% | 8.16 | 20.51% | 57.41% | 11.12
  ground truth (human) | - | - | - | - | - | 8.40 | 33.96% | 79.84% | 11.36
Recipe:
  baseline | 0.67 | 0.68 | 3.91% | 0.88% | 5.23% | 3.12 | 0.37% | 0.47% | 15.32
  INSET (ours) | 1.36 | 1.37 | 7.24% | 1.33% | 7.07% | 7.99 | 20.12% | 55.13% | 9.63
  ground truth (human) | - | - | - | - | - | 8.22 | 29.21% | 74.97% | 10.55

Table 2: Automatic evaluation. "w/ context" indicates that the generation is based on both keywords and context. "w/o context" indicates that the generation is only based on keywords but not context. "Ent." and "Len." stand for Entropy and the average generation length, respectively.

system A | system B | criterion | prefer A (%) | same (%) | prefer B (%)
INSET (ours) | baseline | coherence | 54.16 | 13.76 | 32.07
INSET (ours) | baseline | fluency | 43.38 | 26.98 | 29.64
INSET (ours) | baseline | informativeness | 53.48 | 18.79 | 27.72
INSET (ours) | ground truth | coherence | 27.87 | 15.69 | 56.44
INSET (ours) | ground truth | fluency | 21.78 | 31.38 | 46.84
INSET (ours) | ground truth | informativeness | 27.49 | 21.92 | 50.59
INSET (w/ keywords, w/ context) | ground truth | coherence | 18.50 | 23.45 | 58.04
INSET (w/ keywords, w/ context) | ground truth | fluency | 17.82 | 29.78 | 52.39
INSET (w/ keywords, w/ context) | ground truth | informativeness | 20.54 | 26.13 | 53.33
INSET (w/ keywords, w/ context) | INSET (w/ keywords, w/o context) | coherence | 37.71 | 37.62 | 24.68
INSET (w/ keywords, w/ context) | INSET (w/ keywords, w/o context) | fluency | 36.16 | 37.87 | 25.97
INSET (w/ keywords, w/ context) | INSET (w/ keywords, w/o context) | informativeness | 35.93 | 39.86 | 24.21
INSET (w/ keywords, w/ context) | INSET (w/o keywords, w/ context) | coherence | 34.97 | 17.06 | 47.97
INSET (w/ keywords, w/ context) | INSET (w/o keywords, w/ context) | fluency | 29.30 | 28.04 | 42.65
INSET (w/ keywords, w/ context) | INSET (w/o keywords, w/ context) | informativeness | 31.73 | 23.24 | 45.03

Table 3: Human evaluation. "w/(w/o) keywords" and "w/(w/o) context" indicate whether the generation is based on keywords and context, respectively. All numbers are percentages.

Table 3 shows the human evaluation results. The judges strongly prefer our results (without keywords) to the baseline in all aspects: coherence, fluency, and informativeness. They also strongly prefer the ground truth to our results. Moreover, our results with keywords and context are compared with three other systems: (i) the ground truth; (ii) our results with keywords but not context; (iii) our results with context but not keywords. The second comparison shows that, in the presence of keywords, our model can use context to improve all aspects of the generation. The third comparison shows that the presence of keywords reduces the performance of our model, probably because keywords are constraints that the model must satisfy.
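The percentages in Table 3 are simple vote shares over the retained judgments. The sketch below shows one plausible way to aggregate five-judge A/B comparisons with a speed-based spam filter; the field names and the timing threshold are our assumptions, and the gold-set check described above is omitted.

```python
from collections import Counter, defaultdict

def aggregate_preferences(judgments, min_seconds=5.0):
    """Turn raw pairwise judgments into prefer-A / same / prefer-B percentages.

    `judgments` is a list of dicts such as
        {"judge": "j1", "label": "A", "seconds": 12.3},
    where "label" is one of "A", "same", "B". Judges whose average response
    time falls below `min_seconds` are dropped, a stand-in for the simple
    spam detection described above (the gold-set check is not modeled).
    """
    total_time = defaultdict(float)
    num_items = Counter()
    for j in judgments:
        total_time[j["judge"]] += j["seconds"]
        num_items[j["judge"]] += 1
    kept = {name for name in total_time
            if total_time[name] / num_items[name] >= min_seconds}

    votes = Counter(j["label"] for j in judgments if j["judge"] in kept)
    n = sum(votes.values()) or 1
    return {label: 100.0 * votes[label] / n for label in ("A", "same", "B")}

# One sample judged by five crowd workers; the last one is filtered out.
sample = [
    {"judge": "j1", "label": "A",    "seconds": 14.0},
    {"judge": "j2", "label": "A",    "seconds": 11.0},
    {"judge": "j3", "label": "same", "seconds": 9.0},
    {"judge": "j4", "label": "B",    "seconds": 13.0},
    {"judge": "j5", "label": "A",    "seconds": 1.0},   # too fast
]
print(aggregate_preferences(sample))  # {'A': 50.0, 'same': 25.0, 'B': 25.0}
```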
Generated Examples. To qualitatively demonstrate the effectiveness of our model, Table 4 shows some examples from the TripAdvisor and Recipe datasets. We observe that the baseline (Zhu et al., 2019) tends to generate generic sentences, while our results (either with or without keywords) are more informative and fit the surrounding context reasonably well. Table 5 shows examples generated by our model in the same context but with different keywords. Our model can extend keywords into a full sentence, adapting to the context. More examples generated by our model on both datasets are given in Appendix A.

Example 1 (TripAdvisor)
  preceding context: It was such a pleasure to see somthing new every night. It was not very crowded so we were able to get great seats at either the pool or the beach. The VIP sevice was great for dinner reservations and pillow service.
  following context: Enjoyed the shrimp coctail and seafood salad delivered to us while enjoying the pool. All of us would not want to stay at another resort and are planning to go back again. Enjoy and Hola! Karen and Friends, Milford, CT
  ground truth: We did bring a lot of $1 for tipping and of course the service stepped up a notch more.
  baseline: The staff was friendly and helpful.
  INSET: The buffet dinner was amazing and we had the best food in the resort.
  + keywords: $, service
  INSET (w/ keywords): Service fee for the buffet dinner was $5.00 and we paid $5.00 extra for food service.

Example 2 (TripAdvisor)
  preceding context: The walls are very thin. Since this is a family vacation type of hotel, people are up at the pool/bbq area/hallways during all hours of the night. Do not stay here if you need a quite night of sleep.
  following context: You have to take multiple elevators to go all the way to the 5th floor. My other complaint is that the hotel staff seemed a bit unprofessional. Not what I'm used to when I stay at Marriot properties.
  ground truth: Also, the elevator situation is weird.
  baseline: The rooms are very clean and well kept.
  INSET: There is only one elevator block in the hotel.
  + keywords: elevator, situation
  INSET (w/ keywords): The elevator situation is extremely frustrating.

Example 3 (Recipe)
  preceding context: After another 15 minutes or so the mixture should thicken up. The mixture will continue to thicken as it cools.
  following context: Sterilize your jars and lids and while still hot fill with the jam leaving about a 1/2 inch headspace. Place lids onto the jars and boil in a water bath with jars covered by 3 inches of water for 10 minutes.
  ground truth: Remove from the heat and stir in your amaretto.
  baseline: Add the flour mixture to the dry ingredients and mix well.
  INSET: Carefully remove the jars from hot water and keep going until a thick sauce is formed.

Table 4: Examples generated by our model and the baseline.

  preceding context: My room was a very good size. Tiled floors and woodchip painted walls. The tv did not work - so what.
  following context: Great places to eat close by and very reasonable. No air con - so summer could be sticky. My concern is the left luggage room not supervised.
  human oracle: The location is terrific beside Sevilla metro stn so only 2 to get by metro all the way to airport.
  + (walk, shopping): Walking distance to shopping mall and Circular Quay.
  + (internet, $): Internet cost $20.00 per day.

Table 5: Examples generated by our model in the same context but with different keywords. "+ (· · · )" is keywords.

5 Conclusions and Outlook

We study the task of sentence infilling, which is analogous to the masked language modeling task for (pre-)training BERT, but now it is at the sentence level. Sentence infilling requires the model to handle long-range inter-sentential correlation and to process high-level semantic information.
It is complementary to (token-level) masked language modeling, which focuses more on syntactic appropriateness and short-range correlation. We propose a framework called INSET to decouple three aspects of the task (understanding, planning, and generation) and to address them in a unified manner. We demonstrate the effectiveness of our approach using automatic and human evaluation.

Our approach can be modified or extended in several ways. (i) We use a denoising autoencoder to obtain sentence embeddings. One could instead use a variational autoencoder (Kingma and Welling, 2014); a large-scale pre-trained variational autoencoder (Li et al., 2020) could possibly improve the smoothness of the sentence embeddings. (ii) Our model predicts a feature vector for the missing sentence. This vector can be fed into, and serve as a guide to, token-level models including the baseline (Zhu et al., 2019).

Since sentence infilling is analogous to masked language modeling, we expect that it can also be used as a pre-training task. For example, in machine translation of long texts, it is often the case that sentences are translated independently of each other. This sometimes leads to incoherence or even inconsistency between the translated sentences. A post-editor that fixes such issues (Voita et al., 2019) should be able to understand inter-sentential relationships and to generate fluent sentences in the surrounding context, both of which can be learned from sentence infilling.

Note. After this paper was posted on arXiv, several related works appeared. Shen et al. (2020) propose the Blank Language Model for text infilling and other tasks. Donahue et al. (2020) train (fine-tune) a language model (GPT-2) for text and sentence infilling. Li et al. (2020) pre-train a large-scale variational autoencoder with a pair of BERT and GPT-2. Ippolito et al. (2020) use a sentence-level language model, which operates on sentence embeddings obtained from BERT, to predict story endings.

Acknowledgments

We thank Bill Dolan, Chris Quirk, and Jingjing Liu for helpful discussions and suggestions.

References

Chandra Bhagavatula, Ronan Le Bras, Chaitanya Malaviya, Keisuke Sakaguchi, Ari Holtzman, Hannah Rashkin, Doug Downey, Scott Wen-tau Yih, and Yejin Choi. 2020. Abductive commonsense reasoning. In International Conference on Learning Representations.

Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural Language Processing with Python. O'Reilly Media Inc.

Yu Cheng, Zhe Gan, Yizhe Zhang, Oussama Elachqar, Dianqi Li, and Jingjing Liu. 2020. Contextual text style transfer. arXiv:2005.00136.

Woon Sang Cho, Pengchuan Zhang, Yizhe Zhang, Xiujun Li, Michel Galley, Chris Brockett, Mengdi Wang, and Jianfeng Gao. 2019. Towards coherent and cohesive long-form text generation. In Proceedings of the First Workshop on Narrative Understanding, pages 1–11, Minneapolis, Minnesota. Association for Computational Linguistics.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.

George Doddington. 2002. Automatic evaluation of machine translation quality using n-gram co-occurrence statistics.
In Proceedings of the Second International Conference on Human Language Technology Research, HLT '02, pages 138–145, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.

Chris Donahue, Mina Lee, and Percy Liang. 2020. Enabling language models to fill in the blanks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics.

Michel Galley, Chris Brockett, Xiang Gao, Jianfeng Gao, and Bill Dolan. 2019. Grounded response generation task at DSTC7. http://workshop.colips.org/dstc7/papers/DSTC7_Task_2_overview_paper.pdf.

Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text degeneration. In International Conference on Learning Representations.

Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 328–339, Melbourne, Australia. Association for Computational Linguistics.

Junjie Hu, Yu Cheng, Zhe Gan, Jingjing Liu, Jianfeng Gao, and Graham Neubig. 2020. What makes a good story? Designing composite rewards for visual storytelling. In AAAI Conference on Artificial Intelligence.

Daphne Ippolito, David Grangier, Chris Callison-Burch, and Douglas Eck. 2019. Unsupervised hierarchical story infilling. In Proceedings of the First Workshop on Narrative Understanding, pages 37–43, Minneapolis, Minnesota. Association for Computational Linguistics.

Daphne Ippolito, David Grangier, Douglas Eck, and Chris Callison-Burch. 2020. Toward better storylines with sentence-level language models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics.

Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. 2020. SpanBERT: Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics, 8:64–77.

Dongyeop Kang and Eduard Hovy. 2019. Linguistic versus latent relations for modeling coherent flow in paragraphs. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5809–5815, Hong Kong, China. Association for Computational Linguistics.

Nitish Shirish Keskar, Bryan McCann, Lav R. Varshney, Caiming Xiong, and Richard Socher. 2019. CTRL: A conditional transformer language model for controllable generation. arXiv:1909.05858.

Yoon Kim and Alexander M. Rush. 2016. Sequence-level knowledge distillation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1317–1327, Austin, Texas. Association for Computational Linguistics.

Diederik P. Kingma and Max Welling. 2014. Auto-encoding variational Bayes. In International Conference on Learning Representations.

Guillaume Lample, Alexandre Sablayrolles, Marc'Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. 2019. Large memory layers with product keys. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 8548–8559. Curran Associates, Inc.

Alon Lavie and Abhaya Agarwal. 2007. METEOR: An automatic metric for MT evaluation with high levels of correlation with human judgments.
In Proceedings of the Second Workshop on Statistical Machine Translation, pages 228–231, Prague, Czech Republic. Association for Computational Linguistics.

Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv:1910.13461.

Chunyuan Li, Xiang Gao, Yuan Li, Xiujun Li, Baolin Peng, Yizhe Zhang, and Jianfeng Gao. 2020. Optimus: Organizing sentences via pre-trained modeling of a latent space. arXiv:2004.04092.

Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting objective function for neural conversation models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 110–119, San Diego, California. Association for Computational Linguistics.

Jiwei Li, Thang Luong, and Dan Jurafsky. 2015. A hierarchical neural autoencoder for paragraphs and documents. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1106–1115, Beijing, China. Association for Computational Linguistics.

Dayiheng Liu, Jie Fu, Pengfei Liu, and Jiancheng Lv. 2019. TIGS: An inference algorithm for text infilling with gradient search. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4146–4156, Florence, Italy. Association for Computational Linguistics.

Sourab Mangrulkar, Suhani Shrivastava, Veena Thenkanidiyoor, and Dileep Aroor Dinesh. 2018. A context-aware convolutional natural language generation model for dialogue systems. In Proceedings of the 19th Annual SIGdial Meeting on Discourse and Dialogue, pages 191–200, Melbourne, Australia. Association for Computational Linguistics.

Tomas Mikolov and Geoffrey Zweig. 2012. Context dependent recurrent neural network language model. In 2012 IEEE Spoken Language Technology Workshop (SLT), pages 234–239.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.

Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv:1910.10683.

Juan Ramos. 2003. Using TF-IDF to determine word relevance in document queries. https://www.cs.rutgers.edu/~mlittman/courses/ml03/iCML03/papers/ramos.pdf.

Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992, Hong Kong, China. Association for Computational Linguistics.
Dinghan Shen, Asli Celikyilmaz, Yizhe Zhang, Liqun Chen, Xin Wang, Jianfeng Gao, and Lawrence Carin. 2019. Towards generating long and coherent text with multi-level latent variable models. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2079–2089, Florence, Italy. Association for Computational Linguistics.

Tianxiao Shen, Victor Quach, Regina Barzilay, and Tommi Jaakkola. 2020. Blank language models. arXiv:2002.03079.

Yong-Siang Shih, Wei-Cheng Chang, and Yiming Yang. 2019. XL-Editor: Post-editing sentences with XLNet. arXiv:1910.10479.

Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie-Yan Liu. 2019. MASS: Masked sequence to sequence pre-training for language generation. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 5926–5936, Long Beach, California, USA. PMLR.

Alessandro Sordoni, Yoshua Bengio, Hossein Vahabi, Christina Lioma, Jakob Grue Simonsen, and Jian-Yun Nie. 2015a. A hierarchical recurrent encoder-decoder for generative context-aware query suggestion. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management, CIKM '15, pages 553–562, New York, NY, USA. ACM.

Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. 2015b. A neural network approach to context-sensitive generation of conversational responses. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 196–205, Denver, Colorado. Association for Computational Linguistics.

Jian Tang, Yifan Yang, Sam Carton, Ming Zhang, and Qiaozhu Mei. 2016. Context-aware natural language generation with recurrent neural networks. arXiv:1611.09900.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc.

Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. 2008. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th International Conference on Machine Learning, ICML '08, pages 1096–1103, New York, NY, USA. Association for Computing Machinery.

Oriol Vinyals and Quoc V. Le. 2015. A neural conversational model. arXiv:1506.05869.

Elena Voita, Rico Sennrich, and Ivan Titov. 2019. Context-aware monolingual repair for neural machine translation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 877–886, Hong Kong, China. Association for Computational Linguistics.

Hongning Wang, Yue Lu, and Chengxiang Zhai. 2010. Latent aspect rating analysis on review text data: A rating regression approach. In Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '10, pages 783–792, New York, NY, USA. Association for Computing Machinery.

Tian Wang and Kyunghyun Cho. 2016. Larger-context language modelling with recurrent neural network.
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1319–1329, Berlin, Germany. Association for Computational Linguistics. Wenlin Wang, Zhe Gan, Wenqi Wang, Dinghan Shen, Jiaji Huang, Wei Ping, Sanjeev Satheesh, and Lawrence Carin. 2018. Topic compositional neural language model. In Proceedings of the TwentyFirst International Conference on Artificial Intelligence and Statistics, volume 84 of Proceedings of Machine Learning Research, pages 356–365, Playa Blanca, Lanzarote, Canary Islands. PMLR. Tsung-Hsien Wen, Milica Gaˇsi´c, Dongho Kim, Nikola Mrkˇsi´c, Pei-Hao Su, David Vandyke, and Steve Young. 2015. Stochastic language generation in dialogue using recurrent neural networks with convolutional sentence reranking. In Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 275–284, Prague, Czech Republic. Association for Computational Linguistics. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d’Alch´e-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 5753– 5763. Curran Associates, Inc. Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1480–1489, San Diego, California. Association for Computational Linguistics. Yizhe Zhang, Michel Galley, Jianfeng Gao, Zhe Gan, Xiujun Li, Chris Brockett, and Bill Dolan. 2018. Generating informative and diverse conversational responses via adversarial information maximization. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems 31, pages 1810–1820. Curran Associates, Inc. Yizhe Zhang, Xiang Gao, Sungjin Lee, Chris Brockett, Michel Galley, Jianfeng Gao, and Bill Dolan. 2019. Consistent dialogue generation with self-supervised feature learning. arXiv:1903.05759. Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and Bill Dolan. 2020. DialoGPT: Large-scale generative pre-training for conversational response generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations. Association for Computational Linguistics. Wanrong Zhu, Zhiting Hu, and Eric Xing. 2019. Text infilling. arXiv:1901.00158. A Additional Generated Examples Tables 6, 7 show some additional examples generated by our model (without keywords) on the TripAdvisor and Recipe datasets, respectively. The results are semantically informative and can fit the surrounding context reasonably well. Table 8 provides additional examples to Table 5. Our model can incorporate keywords into the generated sentence in a smart way, adapting to the context. 2514 example 1 example 2 preceding context I went in October to meet with their FABULOUS wedding coordinator Summer Laetari. Their property is very beautiful, it’s extremely green and lush. Parrot Key has 4 pools. Good Location if traveling for business or you have a car! Got this hotel thru a discount travel company and paid $65.00 american a night. Excellent deal at this price. 
following context Their cottages are brand new, very clean and well appointed. If you are looking for a place to have a destination wedding I would recommend Parrot Key! My family and I have already planned another trip to visit next month. Unfortunetly the view is going to be partly blocked with yet another “Glass tower” going in. The room was spacious and clean. No tub in our room. ground truth It’s very colorful and unique. We had a terrific view from the 16th floor. INSET There is also a beach resort with lots of loungers. We had a room on the upper floor which overlooks the lobby. example 3 example 4 preceding context My family stayed here for 5 nights in August 2011. The resort is beautiful and the grounds are immaculately manicured. The kitchen is great for the family. We stayed in 2 interconnecting rooms as we are a family of 5. We started off with a bad start, as the check in was not aware that we were with 3 kids. I booked directly with them and got a confirmation via email for 2 rooms for 2 adults. following context We would just pack a cooler and head out in our rental car and explore the island. The pools at the resort were fabulous and the staff was attentive. We used the grills(kept very clean) several nights. Obviously this was not reflected in the paper work check-in had. We could only add an extra bed for an extra charge, but I refused to pay for this as I had phoned them before. The check-in lady would not bend, and we had to go for 2 rooms with 2 seperate beds. ground truth We were able to keep essentials in the room which made those early morning excursions more enjoyable. Before we arrived I called reservations to change this into 2 adults and 3 children. INSET We have plenty of kitchen utensils and the beach was a nice place to stay. When we checked in we were told that we had to request another room on the 2nd floor due to the extra charges. example 5 example 6 preceding context It was such a pleasure to see somthing new every night. It was not very crowded so we were able to get great seats at either the pool or the beach. The VIP sevice was great for dinner reservations and pillow service. My intentions were to expect the worst which made my stay there that much better than everyone elses. If everyone thought they were staying at the Hyatt, no wonder they thought so negatively about the place. I am in my late twenties and wanted a place where I could walk to local bars, restaurants, etc. following context Enjoyed the shrimp coctail and seafood salad delivered to us while enjoying the pool. All of us would not want to stay at another resort and are planning to go back again. Enjoy and Hola!Karen and FriendsMilford, CT This was the perfect place for me. As far as the accomodations, the beds were small (but so was everywhere else in Europe) and the showers were unusual. Otherwise it was worth the money for a prime time location in the heart of the night life area. ground truth We did bring a lot of $1 for tipping and of course the service stepped up a notch more. without struggling to find my way home at night. INSET The buffet dinner was amazing and we had the best food in the resort. So I had no reason to stay in the HOTEL itself. Table 6: Generated examples by our model on the TripAdvisor dataset 2515 example 1 example 2 preceding context Roll up rectangles width-wise and pinch ends to seal. Bake for 12 minutes or until the tops begin to brown. Drizzle each potato cup with 1 teaspoon browned butter. Cover muffin tin tightly with aluminium foil and place in oven. 
following context Best when served warm. For added flavor, serve with strawberry jelly. Remove from oven and turn broiler on high. Sprinkle potato rounds evenly with remaining parmesan cheese. ground truth Let cool on baking sheet. Bake for 25 minutes. INSET Cool on wire rack and remove. Bake for 20 minutes or until potatoes are tender. example 3 example 4 preceding context Preheat oven to 425 degrees Fahrenheit. Line a baking sheet with a SILPAT mat. Heat the oil in a pan at medium. Add the mushrooms and saute until tender, about 7-10 minutes. following context With a pastry cutter, cut in the coconut oil and the butter. Make a well and add in the milk 1/2 cup at a time, stirring gently with a wooden spoon. Add the reserved water and simmer at medium-high until reduced by half, about 10 minutes. Meanwhile cook the pasta as directed on the package. ground truth In a bowl, mix the flour, baking powder, baking soda and sea salt. Add shallots, garlic, thyme, salt and pepper and saute for 2 minutes. INSET In a medium bowl, mix together the flour, baking powder, sugar, salt and cinnamon. Add the garlic and sautee until fragrant, about 2 minutes. example 5 example 6 preceding context After another 15 minutes or so the mixture should thicken up. The mixture will continue to thicken as it cools. Bake the graham cracker crust for 10 minutes. Remove from oven and allow to cool to room temperature. following context Sterilize your jars and lids and while still hot fill with the jam leaving about a 1/2 inch headspace. Place lids onto the jars and boil in a water bath with jars covered by 3 inches of water for 10 minutes. Stir in the lime zest and lime juice. Stir until mixture is smooth and begins to slightly thicken. ground truth Remove from the heat and stir in your amaretto. Meanwhile, combine the egg yolks and condensed milk in a medium bowl. INSET Carefully remove the jars from hot water and keep going until a thick sauce is formed. In a medium bowl, combine the cream cheese and powdered sugar, stirring until smooth. Table 7: Generated examples by our model on the Recipe dataset preceding context Also has a safe. The hotel is in a good location, beside the City Centre and there are a nice selection of shops within the Monte Carlo. Service was very good but avoid the concierge in the morning when people are booking tours, the queues are long. following context No wi-fiin the room which is a bit annoying but they have it in the foodcourt by Starbucks and McDs. Also we were disappointed to see the $15/night resort fee was charged to our credit card after our stay. I don’t recall them mentioning this at check-in. human oracle CVs is next door and it’s 24/7 so you can buy snacks and anything else you fancy. + (breakfast, cereal) Breakfast is included with cereal, muffins and breads. + (food, expensive) Prices are expensive but food in the hotel is very cheap. Table 8: Examples generated by our model in the same context but with different keywords. “+ (· · · )” is keywords.
2020
226
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2516–2531 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2516 Improving Adversarial Text Generation by Modeling the Distant Future Ruiyi Zhang1, Changyou Chen2, Zhe Gan3, Wenlin Wang4 Dinghan Shen3, Guoyin Wang1, Zheng Wen5, Lawrence Carin1 1 Duke University 2 University at Buffalo 3 Microsoft Dynamics 365 AI 4 Citadel LLC 5 DeepMind [email protected] Abstract Auto-regressive text generation models usually focus on local fluency, and may cause inconsistent semantic meaning in long text generation. Further, automatically generating words with similar semantics is challenging, and hand-crafted linguistic rules are difficult to apply. We consider a text planning scheme and present a model-based imitation-learning approach to alleviate the aforementioned issues. Specifically, we propose a novel guider network to focus on the generative process over a longer horizon, which can assist next-word prediction and provide intermediate rewards for generator optimization. Extensive experiments demonstrate that the proposed method leads to improved performance. 1 Introduction Text generation is an important area of investigation within machine learning. Recent work has shown excellent performance on a number of tasks, by combining reinforcement learning (RL) and generative models. Example applications include image captioning (Ren et al., 2017; Rennie et al., 2016), text summarization (Li et al., 2018b; Paulus et al., 2017; Rush et al., 2015), and adversarial text generation (Guo et al., 2017; Lin et al., 2017; Yu et al., 2017; Zhang et al., 2017; Zhu et al., 2018). The sequence-to-sequence framework (Seq2Seq) (Sutskever et al., 2014) is a popular technique for text generation. However, models from such a setup are typically trained to predict the next token given previous ground-truth tokens as input, causing what is termed exposure bias (Ranzato et al., 2016). By contrast, sequence-level training with RL provides an effective means of solving this challenge, by treating text generation as a sequential decision-making problem. By directly optimizing an evaluation score (cumulative rewards) (Ranzato et al., 2016), state-of-the-art results have been obtained in many text-generation tasks (Paulus et al., 2017; Rennie et al., 2016). However, one problem in such a framework is that rewards in RL training are particularly sparse, since a scalar reward is typically only available after an entire sequence has been generated. Furthermore, the recurrent models focus more on local fluency, and may cause inconsistent semantic meanings for long text generation. For RL-based text generation, most existing works rely on a model-free framework, which has been criticized for its high variance and poor sample efficiency (Sutton and Barto, 1998). On the other hand, while model-based RL methods do not suffer from these issues, they are usually difficult to train in complex environments. Further, a learned policy is usually restricted by the capacity of an environment model. Recent developments on model-based RL (Gu et al., 2016; Kurutach et al., 2018; Nagabandi et al., 2017) combine the advantages of these two approaches, and have achieved improved performance by learning a model-free policy, assisted by an environment model. In addition, model-based RL has been employed recently to solve problems with extremely sparse rewards, with curiosity-driven methods (Pathak et al., 2017). 
In this paper, we propose a model-based imitation-learning method to overcome the aforementioned issues in text-generation tasks. Our main idea is to employ an explicit guider network to model the generation environment in the feature space of sentence tokens; the guider is used to emit intermediate rewards by matching its predicted features against the features of generated sentences. The guider network is trained to encode global structural information of training sentences, and thus is useful for guiding next-token prediction in the generative process. Within the proposed framework, to assist the guider network, we also develop a new type of self-attention mechanism to provide high-level planning-ahead information and maintain consistent semantic meaning. Our experimental results demonstrate the effectiveness of the proposed methods.

2 Background

Text Generation Model. Text generation models learn to generate a sentence $Y = (y_1, \dots, y_T)$ of length $T$, possibly conditioned on some context $X$. Here each $y_t$ is a token from vocabulary $\mathcal{A}$. Starting from the initial state $s_0$, a recurrent neural network (RNN) produces a sequence of states $(s_1, \dots, s_T)$ given an input sentence-feature representation $(e(y_1), \dots, e(y_T))$, where $e(\cdot)$ denotes a word-embedding function mapping a token to its $d$-dimensional feature representation. The states are recursively updated with a function known as the cell: $s_t = h(s_{t-1}, e(y_t))$. One typically assigns the following probability to an observation $y$ at location $t$: $p(y \mid Y_{<t}) = [\mathrm{softmax}(g(s_t))]_y$. Together, $(g, h)$ specifies a probabilistic model $\pi$, i.e.,

$\log \pi(Y) = \sum_t \log p(y_t \mid Y_{<t})$.   (1)

To train the model $\pi$, one typically uses maximum likelihood estimation (MLE), i.e., minimizes the cross-entropy loss $J_{\mathrm{MLE}}(\pi) = -\mathbb{E}[\log \pi(Y)]$. In order to generate a sentence $Y^s$ from a (trained) model, one iteratively applies the following operations:

$y^s_{t+1} \sim \mathrm{Multi}(1, \mathrm{softmax}(g(s_t)))$,   (2)
$s_t = h(s_{t-1}, e(y^s_t))$,   (3)

where $\mathrm{Multi}(1, \cdot)$ denotes one draw from a multinomial distribution.

Model-Based Imitation Learning. Text generation can be considered an RL problem with a large number of discrete actions, deterministic transitions, and deterministic terminal rewards. It can be formulated as a Markov decision process (MDP) $\mathcal{M} = \langle \mathcal{S}, \mathcal{A}, P, r, \gamma \rangle$, where $\mathcal{S}$ is the state space, $\mathcal{A}$ is the action space, $P$ is the deterministic environment dynamics, $r(s, y)$ is a reward function, and $\gamma \in (0, 1)$ is the discrete-time discount factor. The policy $\pi_\phi$, parameterized by $\phi$, maps each state $s \in \mathcal{S}$ to a probability distribution over $\mathcal{A}$. The objective is to maximize the expected reward:

$J(\pi) = \sum_{t=1}^{\infty} \mathbb{E}_{P,\pi}\left[\gamma^{t-1} \cdot r(s_t, y_t)\right]$.   (4)

In model-based imitation learning (Baram et al., 2017; Cheng et al., 2019), a model is built to make predictions for the future state $s_{t+\triangle t}$ conditioned on the current state¹, which can be used for action selection, e.g., next-token generation. This model is typically a discrete-time system, taking the current state-action pair $(s_t, y_t)$ as input and outputting an estimate of the future state $s_{t+\triangle t}$ at time $t + \triangle t$. At each step $t$, $y_t$ is chosen based on the model, and the model re-plans with the updated information from the dynamics. This control scheme is different from a standard model-based method and is referred to as model-predictive control (MPC) (Nagabandi et al., 2017). Note that in our setting, the state in RL typically corresponds to the current generated sentence $Y_{1,\dots,t}$ instead of the RNN state of the generator (decoder).
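As a concrete illustration of the sampling operations in Eqs. (2) and (3), the following minimal PyTorch sketch maps $e(\cdot)$, $h(\cdot)$, and $g(\cdot)$ to an embedding layer, an LSTM cell, and a linear projection. The layer sizes, the untrained weights, and the missing end-of-sequence handling are our simplifications, not the authors' setup.

```python
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Minimal LSTM generator following Eqs. (1)-(3): e(.) embeds tokens,
    h(.) is the recurrent cell, and g(.) projects states to vocabulary
    logits. All sizes are illustrative."""

    def __init__(self, vocab_size=100, emb_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)   # e(.)
        self.cell = nn.LSTMCell(emb_dim, hidden_dim)     # h(.)
        self.proj = nn.Linear(hidden_dim, vocab_size)    # g(.)
        self.hidden_dim = hidden_dim

    @torch.no_grad()
    def sample(self, bos_id=0, max_len=20):
        h = torch.zeros(1, self.hidden_dim)
        c = torch.zeros(1, self.hidden_dim)
        y = torch.tensor([bos_id])
        tokens = []
        for _ in range(max_len):
            h, c = self.cell(self.embed(y), (h, c))      # s_t = h(s_{t-1}, e(y_t))
            probs = torch.softmax(self.proj(h), dim=-1)  # softmax(g(s_t))
            y = torch.multinomial(probs, num_samples=1).squeeze(1)  # Eq. (2)
            tokens.append(int(y))
        return tokens

print(TinyGenerator().sample())
```

During training, the same cell and projection would instead be driven by ground-truth tokens to minimize the MLE loss $J_{\mathrm{MLE}}$ defined above.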
3 Proposed Model

The model is illustrated in Figure 1, with an autoencoder (AE) structure for sentence feature extraction and generation. The encoder is shared for sentences from both training data and generated data, as explained in detail below. Overall, text generation can be formulated as an imitation-learning problem. At each timestep $t$, the agent, also called a generator (which corresponds to the LSTM decoder), takes the current LSTM state as input, denoted as $s_t$. The policy $\pi_\phi(\cdot \mid s_t)$, parameterized by $\phi$, is a conditional generator that generates the next token (action) given $s_t$, the observation representing the current generated sentence. The objective of text generation is to maximize the total reward as in (4). We detail the components of our proposed model in the following subsections.

3.1 The Guider Network

The guider network, implemented as an RNN with LSTM units, is adopted to model environment dynamics to assist text generation. The idea is to train a guider network such that its predicted sentence features at each time step are used to assist next-word generation and to construct intermediate rewards, which in turn are used to optimize the sentence generator. Denote the guider network as $G_\psi(s^G_{t-1}, f_t)$, with parameters $\psi$ and input arguments $(s^G_{t-1}, f_t)$ at time $t$, to explicitly write out the dependency on the guider-network latent state $s^G_{t-1}$ from the previous time step. Here $f_t$ is the input to the LSTM guider, which represents the feature of the current generated sentence extracted

¹ Here $\triangle t > 1$; the model predicts future states based on the collected trajectories.

[Figure 1: Overview of the proposed model. Only the figure's text labels are recoverable from the extraction: Encoder, Decoder, Guider, CNN, MLP, softmax probability, weighted sum, assignment, and $f_t = \mathrm{Enc}(Y_t)$; the graphic itself is omitted.]
<latexit sha1_base64="1e352gWfrlvf16wMEbX2S1ZUQCQ=">AB7XicbVBNS8NAEJ3Ur 1q/qh69BIvgqSRVUG8FLx4r2A9oQ9lsNu3azW7YnQil9D948aCIV/+PN/+N2zYHbX0w8Hhvhpl5YSq4Qc/7dgpr6xubW8Xt0s7u3v5B+fCoZVSmKWtSJZTuhMQwSVrIkfBOqlmJAkFa4ej2 5nfmLacCUfcJyICEDyWNOCVqp1RORQtMvV7yqN4e7SvycVCBHo1/+6kWKZgmTSAUxput7KQYTopFTwalXmZYSuiIDFjXUkSZoLJ/Nqpe2aVyI2VtiXRnau/JyYkMWachLYzITg0y95M/M /rZhfBxMu0wyZpItFcSZcVO7sdTfimlEUY0sI1dze6tIh0YSiDahkQ/CX14lrVrVv6jW7i8r9Zs8jiKcwCmcgw9XUIc7aEATKDzCM7zCm6OcF+fd+Vi0Fpx85hj+wPn8Abq4jzM=</late xit> . . . <latexit sha1_base64="1e352gWfrlvf16w MEbX2S1ZUQCQ=">AB7XicbVBNS8NAEJ3Ur1q/qh69BIvgqSRVUG8FLx4r2A9oQ9lsNu3az W7YnQil9D948aCIV/+PN/+N2zYHbX0w8Hhvhpl5YSq4Qc/7dgpr6xubW8Xt0s7u3v5B+fCoZ VSmKWtSJZTuhMQwSVrIkfBOqlmJAkFa4ej25nfmLacCUfcJyICEDyWNOCVqp1RORQtMvV 7yqN4e7SvycVCBHo1/+6kWKZgmTSAUxput7KQYTopFTwalXmZYSuiIDFjXUkSZoLJ/Nqpe 2aVyI2VtiXRnau/JyYkMWachLYzITg0y95M/M/rZhfBxMu0wyZpItFcSZcVO7sdTfimlEUY0 sI1dze6tIh0YSiDahkQ/CX14lrVrVv6jW7i8r9Zs8jiKcwCmcgw9XUIc7aEATKDzCM7zCm6 OcF+fd+Vi0Fpx85hj+wPn8Abq4jzM=</latexit> sG t <latexit sha1_base64="Va3fPcn6Rb3yvHO OY8apDZiOsUA=">AB/HicbVDNS8MwHE39nPOruqOX4BA8jXYO9DjwoMcJ7gO2WtI03cLSp CSpUMr8V7x4UMSrf4g3/xvTrQfdfBDyeO/3Iy8vSBhV2nG+rbX1jc2t7cpOdXdv/+DQPjruK ZFKTLpYMCEHAVKEU6mpGBokKA4Y6QfT68LvPxKpqOD3OkuIF6MxpxHFSBvJt2v5KBAsV FlsLqhmvn648e2603DmgKvELUkdlOj49tcoFDiNCdeYIaWGrpNoL0dSU8zIrDpKFUkQnqIxG RrKUyUl8/Dz+CZUIYCWkO13Cu/t7IUayKeGYyRnqilr1C/M8bpjq68nLKk1QTjhcPRSmDWs CiCRhSbBmSEIS2qyQjxBEmFt+qaEtzlL6+SXrPhXjSad616u1XWUQEn4BScAxdcgja4BR 3QBRhk4Bm8gjfryXqx3q2PxeiaVe7UwB9Ynz8FgZT1</latexit> sG 1 <latexit sha1_base64="lcGuLgGfKDo+lT5 W7YvrAQHMVGI=">AB/HicbVDNS8MwHE39nPOruqOX4BA8jXYO9DjwoMcJ7gO2WtI03cLSp CSpUMr8V7x4UMSrf4g3/xvTrQfdfBDyeO/3Iy8vSBhV2nG+rbX1jc2t7cpOdXdv/+DQPjruK ZFKTLpYMCEHAVKEU6mpGBokKA4Y6QfT68LvPxKpqOD3OkuIF6MxpxHFSBvJt2v5KBAsV FlsLqhmvtw49t1p+HMAVeJW5I6KNHx7a9RKHAaE64xQ0oNXSfRXo6kpiRWXWUKpIgPEVjM jSUo5goL5+Hn8Ezo4QwEtIcruFc/b2Ro1gV8cxkjPRELXuF+J83THV05eWUJ6kmHC8eilIGtY BFEzCkmDNMkMQltRkhXiCJMLa9FU1JbjLX14lvWbDvWg071r1dqusowJOwCk4By64BG1wCz qgCzDIwDN4BW/Wk/VivVsfi9E1q9ypgT+wPn8An2CUsg=</latexit> sG 2 <latexit sha1_base64="AOaENu4cNGyJsWp Ph3RkawI7B5E=">AB/HicbVDNS8MwHE39nPOruqOX4BA8jXYO9DjwoMcJ7gO2WtI03cLSp CSpUMr8V7x4UMSrf4g3/xvTrQfdfBDyeO/3Iy8vSBhV2nG+rbX1jc2t7cpOdXdv/+DQPjruK ZFKTLpYMCEHAVKEU6mpGBokKA4Y6QfT68LvPxKpqOD3OkuIF6MxpxHFSBvJt2v5KBAsV FlsLqhmfvPhxrfrTsOZA64StyR1UKLj21+jUOA0JlxjhpQauk6ivRxJTEjs+oVSRBeIrGZ GgoRzFRXj4P4NnRglhJKQ5XMO5+nsjR7Eq4pnJGOmJWvYK8T9vmOroyspT1JNOF48FKUMag GLJmBIJcGaZYgLKnJCvESYS16atqSnCXv7xKes2Ge9Fo3rXq7VZRwWcgFNwDlxwCdrgFn RAF2CQgWfwCt6sJ+vFerc+FqNrVrlTA39gf4AoOaUsw=</latexit> f G 1 <latexit sha1_base64="G/jybduzgTnG+MY LRrvPdHTmX8w=">AB/nicbVBPS8MwHE3nvzn/VcWTl+AQPI12DvQ48KDHCW4OtlrSN3C0 qQkqTBKwa/ixYMiXv0c3vw2plsPuvkg5PHe70deXpAwqrTjfFuVldW19Y3qZm1re2d3z94/6 CmRSky6WDAh+wFShFOupqRvqJCgOGLkPJleFf/9IpKC3+lpQrwYjTiNKEbaSL59lA0Dw UI1jc0Fo9zP3Pzh2rfrTsOZAS4TtyR1UKLj21/DUOA0JlxjhpQauE6ivQxJTEjeW2YKpIgP EjMjCUo5goL5vFz+GpUIYCWkO13Cm/t7IUKyKgGYyRnqsFr1C/M8bpDq69DLKk1QTjucPRS mDWsCiCxhSbBmU0MQltRkhXiMJMLaNFYzJbiLX14mvWbDPW80b1v1dqusowqOwQk4Ay64AG 1wAzqgCzDIwDN4BW/Wk/VivVsf89GKVe4cgj+wPn8AXbCVsQ=</latexit> f G 2 <latexit sha1_base64="yL3s04Bp/VqGkJ3 0G8qCMamWTQg=">AB/nicbVBPS8MwHE3nvzn/VcWTl+AQPI12DvQ48KDHCW4OtlrSN3C0 qQkqTBKwa/ixYMiXv0c3vw2plsPuvkg5PHe70deXpAwqrTjfFuVldW19Y3qZm1re2d3z94/6 CmRSky6WDAh+wFShFOupqRvqJCgOGLkPJleFf/9IpKC3+lpQrwYjTiNKEbaSL59lA0Dw UI1jc0Fo9zPmvnDtW/XnYzA1wmbknqoETHt7+GocBpTLjGDCk1cJ1EexmSmJG8towVSRBe IJGZGAoRzFRXjaLn8NTo4QwEtIcruFM/b2RoVgVAc1kjPRYLXqF+J83SHV06WUJ6kmHM8fil 
IGtYBFzCkmDNpoYgLKnJCvEYSYS1axmSnAXv7xMes2Ge95o3rbq7VZRxUcgxNwBlxwAd rgBnRAF2CQgWfwCt6sJ+vFerc+5qMVq9w5BH9gf4AXzeVsg=</latexit> s0 <latexit sha1_base64="AqR/o2xq8Wy1Enu jwdg3IcVHSm0=">AB/HicbVDNS8MwHE39nPOruqOX4BA8jXYO9Djw4nGC+4CtlDRNt7A0K UkqlFL/FS8eFPHqH+LN/8Z060E3H4Q83v9yMsLEkaVdpxva2Nza3tnt7ZX3z84PDq2T04HS qQSkz4WTMhRgBRhlJO+pqRUSIJigNGhsH8tvSHj0QqKviDzhLixWjKaUQx0kby7UY+CQLV RabC6rCz53Ct5tOy1kArhO3Ik1QoefbX5NQ4DQmXGOGlBq7TqK9HElNMSNFfZIqkiA8R1MyN pSjmCgvX4Qv4IVRQhgJaQ7XcKH+3shRrMp4ZjJGeqZWvVL8zxunOrxcsqTVBOlw9FKYNawL IJGFJsGaZIQhLarJCPEMSYW36qpsS3NUvr5NBu+Vetdr3nWa3U9VRA2fgHFwCF1yDLrgDPd AHGTgGbyCN+vJerHerY/l6IZV7TAH1ifPxwdlQ=</latexit> sG 0 <latexit sha1_base6 4="w7+OAhR3E42BDJqreW7OKfcpCQ0=">A B/nicbVBPS8MwHE3nvzn/VcWTl+AQPI12DvQ 48KDHCW4OtlrSN3C0qQkqTBKwa/ixYMiXv0 c3vw2plsPuvkg5PHe70deXpAwqrTjfFuVldW 19Y3qZm1re2d3z94/6CmRSky6WDAh+wFShF OupqRvqJCgOGLkPJleFf/9IpKC3+lpQrw YjTiNKEbaSL59lA0DwUI1jc0FVe5nTv5w7dt 1p+HMAJeJW5I6KNHx7a9hKHAaE64xQ0oNXCf RXoakpiRvDZMFUkQnqARGRjKUyUl83i5/D UKCGMhDSHazhTf29kKFZFQDMZIz1Wi14h/ucN Uh1dehnlSaoJx/OHopRBLWDRBQypJFizqSEI S2qyQjxGEmFtGquZEtzFLy+TXrPhnjeat616 u1XWUQXH4AScARdcgDa4AR3QBRhk4Bm8gjfr yXqx3q2P+WjFKncOwR9Ynz9wOJW9</latexi t> f G t <latexit sha1_base64="zUc7C4RMykuC+qQ KuzBZqwSLz0=">AB/nicbVBPS8MwHE3nvzn/VcWTl+AQPI12DvQ48KDHCW4OtlrSN3C0 qQkqTBKwa/ixYMiXv0c3vw2plsPuvkg5PHe70deXpAwqrTjfFuVldW19Y3qZm1re2d3z94/6 CmRSky6WDAh+wFShFOupqRvqJCgOGLkPJleFf/9IpKC3+lpQrwYjTiNKEbaSL59lA0Dw UI1jc0Fo9zPdP5w7dt1p+HMAJeJW5I6KNHx7a9hKHAaE64xQ0oNXCfRXoakpiRvDZMFUkQn qARGRjKUyUl83i5/DUKCGMhDSHazhTf29kKFZFQDMZIz1Wi14h/ucNUh1dehnlSaoJx/OHop RBLWDRBQypJFizqSEIS2qyQjxGEmFtGquZEtzFLy+TXrPhnjeat616u1XWUQXH4AScARdcgD a4AR3QBRhk4Bm8gjfryXqx3q2P+WjFKncOwR9Ynz/EBZX0</latexit> f 1 <latexit sha1_base64="1Vk4/OpZ0Oh0k/+ z5+HJ6r3Nbak=">AB/HicbVDNS8MwHE39nPOruqOX4BA8jXYO9Djw4nGC+4CtlDRNt7A0K UkqlFL/FS8eFPHqH+LN/8Z060E3H4Q83v9yMsLEkaVdpxva2Nza3tnt7ZX3z84PDq2T04HS qQSkz4WTMhRgBRhlJO+pqRUSIJigNGhsH8tvSHj0QqKviDzhLixWjKaUQx0kby7UY+CQLV RabC0aFn7uFbzedlrMAXCduRZqgQs+3vyahwGlMuMYMKTV2nUR7OZKaYkaK+iRVJEF4jqZkb ChHMVFevghfwAujhDAS0hyu4UL9vZGjWJXxzGSM9EyteqX4nzdOdXTj5ZQnqSYcLx+KUga1gG UTMKSYM0yQxCW1GSFeIYkwtr0VTcluKtfXieDdsu9arXvO81up6qjBs7AObgELrgGXAHeq APMjAM3gFb9aT9WK9Wx/L0Q2r2mAP7A+fwAJrZT4</latexit> f 2 <latexit sha1_base64="v54xgqVm/nq9DyN zGrdc1Fu6OPg=">AB/HicbVDNS8MwHE39nPOruqOX4BA8jXYO9Djw4nGC+4CtlDRNt7A0K UkqlFL/FS8eFPHqH+LN/8Z060E3H4Q83v9yMsLEkaVdpxva2Nza3tnt7ZX3z84PDq2T04HS qQSkz4WTMhRgBRhlJO+pqRUSIJigNGhsH8tvSHj0QqKviDzhLixWjKaUQx0kby7UY+CQLV RabC0aFn7cL3246LWcBuE7cijRBhZ5vf01CgdOYcI0ZUmrsOon2ciQ1xYwU9UmqSILwHE3J2 FCOYqK8fBG+gBdGCWEkpDlcw4X6eyNHsSrjmckY6Zla9UrxP2+c6ujGylPUk04Xj4UpQxqAc smYEglwZplhiAsqckK8QxJhLXpq25KcFe/vE4G7Z71Wrfd5rdTlVHDZyBc3AJXHANuAO9E AfYJCBZ/AK3qwn68V6tz6WoxtWtdMAf2B9/gALMpT5</latexit> f t <latexit sha1_base64="g757F+XyuGBmKwU jfaZqyBpYFOg=">AB/HicbVDNS8MwHE39nPOruqOX4BA8jXYO9Djw4nGC+4CtlDRNt7A0K UkqlFL/FS8eFPHqH+LN/8Z060E3H4Q83v9yMsLEkaVdpxva2Nza3tnt7ZX3z84PDq2T04HS qQSkz4WTMhRgBRhlJO+pqRUSIJigNGhsH8tvSHj0QqKviDzhLixWjKaUQx0kby7UY+CQLV RabC0aFn+vCt5tOy1kArhO3Ik1QoefbX5NQ4DQmXGOGlBq7TqK9HElNMSNFfZIqkiA8R1MyN pSjmCgvX4Qv4IVRQhgJaQ7XcKH+3shRrMp4ZjJGeqZWvVL8zxunOrxcsqTVBOlw9FKYNawL IJGFJsGaZIQhLarJCPEMSYW36qpsS3NUvr5NBu+Vetdr3nWa3U9VRA2fgHFwCF1yDLrgDPd AHGTgGbyCN+vJerHerY/l6IZV7TAH1ifP298lTs=</latexit> = . . . 
<latexit sha1_base64="1e352gWfrlvf16wMEbX2S1ZUQCQ=">AB7XicbVBNS8NAEJ3Ur1q/qh69BIvgqSRVUG8FLx4r2A9oQ9lsN u3azW7YnQil9D948aCIV/+PN/+N2zYHbX0w8Hhvhpl5YSq4Qc/7dgpr6xubW8Xt0s7u3v5B+fCoZVSmKWtSJZTuhMQwSVrIkfBOqlmJAkFa4ej25nfmLacCUfcJyICEDyWNOCVqp1RORQtMvV7yqN4e7SvycVCBHo1/+6kWKZgmTSAUxput7KQYTopFTwalXmZYSuiIDFjXU kSZoLJ/Nqpe2aVyI2VtiXRnau/JyYkMWachLYzITg0y95M/M/rZhfBxMu0wyZpItFcSZcVO7sdTfimlEUY0sI1dze6tIh0YSiDahkQ/CX14lrVrVv6jW7i8r9Zs8jiKcwCmcgw9XUIc7aEATKDzCM7zCm6OcF+fd+Vi0Fpx85hj+wPn8Abq4jzM=</latexit> . . . <latexit sha1_base64="1e352gWfrlvf16w MEbX2S1ZUQCQ=">AB7XicbVBNS8NAEJ3Ur1q/qh69BIvgqSRVUG8FLx4r2A9oQ9lsNu3az W7YnQil9D948aCIV/+PN/+N2zYHbX0w8Hhvhpl5YSq4Qc/7dgpr6xubW8Xt0s7u3v5B+fCoZ VSmKWtSJZTuhMQwSVrIkfBOqlmJAkFa4ej25nfmLacCUfcJyICEDyWNOCVqp1RORQtMvV 7yqN4e7SvycVCBHo1/+6kWKZgmTSAUxput7KQYTopFTwalXmZYSuiIDFjXUkSZoLJ/Nqpe 2aVyI2VtiXRnau/JyYkMWachLYzITg0y95M/M/rZhfBxMu0wyZpItFcSZcVO7sdTfimlEUY0 sI1dze6tIh0YSiDahkQ/CX14lrVrVv6jW7i8r9Zs8jiKcwCmcgw9XUIc7aEATKDzCM7zCm6 OcF+fd+Vi0Fpx85hj+wPn8Abq4jzM=</latexit> . . . <latexit sha1_base64="1e352gWfrlvf16w MEbX2S1ZUQCQ=">AB7XicbVBNS8NAEJ3Ur1q/qh69BIvgqSRVUG8FLx4r2A9oQ9lsNu3az W7YnQil9D948aCIV/+PN/+N2zYHbX0w8Hhvhpl5YSq4Qc/7dgpr6xubW8Xt0s7u3v5B+fCoZ VSmKWtSJZTuhMQwSVrIkfBOqlmJAkFa4ej25nfmLacCUfcJyICEDyWNOCVqp1RORQtMvV 7yqN4e7SvycVCBHo1/+6kWKZgmTSAUxput7KQYTopFTwalXmZYSuiIDFjXUkSZoLJ/Nqpe 2aVyI2VtiXRnau/JyYkMWachLYzITg0y95M/M/rZhfBxMu0wyZpItFcSZcVO7sdTfimlEUY0 sI1dze6tIh0YSiDahkQ/CX14lrVrVv6jW7i8r9Zs8jiKcwCmcgw9XUIc7aEATKDzCM7zCm6 OcF+fd+Vi0Fpx85hj+wPn8Abq4jzM=</latexit> . . . <latexit sha1_base64="1e352gWfrlvf16wMEbX2S1ZUQCQ=">AB7XicbVBNS8NAEJ3Ur 1q/qh69BIvgqSRVUG8FLx4r2A9oQ9lsNu3azW7YnQil9D948aCIV/+PN/+N2zYHbX0w8Hhvhpl5YSq4Qc/7dgpr6xubW8Xt0s7u3v5B+fCoZVSmKWtSJZTuhMQwSVrIkfBOqlmJAkFa4ej2 5nfmLacCUfcJyICEDyWNOCVqp1RORQtMvV7yqN4e7SvycVCBHo1/+6kWKZgmTSAUxput7KQYTopFTwalXmZYSuiIDFjXUkSZoLJ/Nqpe2aVyI2VtiXRnau/JyYkMWachLYzITg0y95M/M /rZhfBxMu0wyZpItFcSZcVO7sdTfimlEUY0sI1dze6tIh0YSiDahkQ/CX14lrVrVv6jW7i8r9Zs8jiKcwCmcgw9XUIc7aEATKDzCM7zCm6OcF+fd+Vi0Fpx85hj+wPn8Abq4jzM=</late xit> . . . <latexit sha1_base64="1e352gWfrlvf16wMEbX2S1ZUQCQ=">AB7XicbVBNS8NAEJ3Ur1q/qh69BIvgqSRVUG8FLx4r2A9oQ9lsN u3azW7YnQil9D948aCIV/+PN/+N2zYHbX0w8Hhvhpl5YSq4Qc/7dgpr6xubW8Xt0s7u3v5B+fCoZVSmKWtSJZTuhMQwSVrIkfBOqlmJAkFa4ej25nfmLacCUfcJyICEDyWNOCVqp1RORQtMvV7yqN4e7SvycVCBHo1/+6kWKZgmTSAUxput7KQYTopFTwalXmZYSuiIDFjXU kSZoLJ/Nqpe2aVyI2VtiXRnau/JyYkMWachLYzITg0y95M/M/rZhfBxMu0wyZpItFcSZcVO7sdTfimlEUY0sI1dze6tIh0YSiDahkQ/CX14lrVrVv6jW7i8r9Zs8jiKcwCmcgw9XUIc7aEATKDzCM7zCm6OcF+fd+Vi0Fpx85hj+wPn8Abq4jzM=</latexit> . . . <latexit sha1_base64="1e352gWfrlvf16wMEbX2S1ZUQCQ=">AB7XicbVBNS8NAEJ3Ur 1q/qh69BIvgqSRVUG8FLx4r2A9oQ9lsNu3azW7YnQil9D948aCIV/+PN/+N2zYHbX0w8Hhvhpl5YSq4Qc/7dgpr6xubW8Xt0s7u3v5B+fCoZVSmKWtSJZTuhMQwSVrIkfBOqlmJAkFa4ej2 5nfmLacCUfcJyICEDyWNOCVqp1RORQtMvV7yqN4e7SvycVCBHo1/+6kWKZgmTSAUxput7KQYTopFTwalXmZYSuiIDFjXUkSZoLJ/Nqpe2aVyI2VtiXRnau/JyYkMWachLYzITg0y95M/M /rZhfBxMu0wyZpItFcSZcVO7sdTfimlEUY0sI1dze6tIh0YSiDahkQ/CX14lrVrVv6jW7i8r9Zs8jiKcwCmcgw9XUIc7aEATKDzCM7zCm6OcF+fd+Vi0Fpx85hj+wPn8Abq4jzM=</late xit> . . . 
<latexit sha1_base64="1e352gWfrlvf16wMEbX2S1ZUQCQ=">AB7XicbVBNS8NAEJ3Ur1q/qh69BIvgqSRVUG8FLx4r2A9oQ9lsN u3azW7YnQil9D948aCIV/+PN/+N2zYHbX0w8Hhvhpl5YSq4Qc/7dgpr6xubW8Xt0s7u3v5B+fCoZVSmKWtSJZTuhMQwSVrIkfBOqlmJAkFa4ej25nfmLacCUfcJyICEDyWNOCVqp1RORQtMvV7yqN4e7SvycVCBHo1/+6kWKZgmTSAUxput7KQYTopFTwalXmZYSuiIDFjXU kSZoLJ/Nqpe2aVyI2VtiXRnau/JyYkMWachLYzITg0y95M/M/rZhfBxMu0wyZpItFcSZcVO7sdTfimlEUY0sI1dze6tIh0YSiDahkQ/CX14lrVrVv6jW7i8r9Zs8jiKcwCmcgw9XUIc7aEATKDzCM7zCm6OcF+fd+Vi0Fpx85hj+wPn8Abq4jzM=</latexit> s1 <latexit sha1_base64="mQ97ri6ncSF3ALP bDOJnM391ork=">AB+nicbVC7TsMwFL0pr1JeKYwsFhUSU5UJGCrxMJYJPqQ2ihyHLe16 jiR7YCq0E9hYQAhVr6Ejb/BaTNAy5EsH51zr3x8goQzpR3n2yqtrW9sbpW3Kzu7e/sHdvWwo +JUEtomMY9lL8CKciZoWzPNaS+RFEcBp91gcpP73QcqFYvFvZ4m1IvwSLAhI1gbyber2SCIe aimkbmQmvmub9ecujMHWiVuQWpQoOXbX4MwJmlEhSYcK9V3nUR7GZaEU5nlUGqaILJBI9o3 1CBI6q8bB59hk6NEqJhLM0RGs3V3xsZjlQezkxGWI/VspeL/3n9VA+vIyJNVUkMVDw5QjHa O8BxQySYnmU0MwkcxkRWSMJSbatFUxJbjLX14lnUbdPa837i5qzeuijIcwmcgQuX0IRbaE EbCDzCM7zCm/VkvVjv1sditGQVO0fwB9bnD068k/4=</latexit> s2 <latexit sha1_base64="CnyM0TO4Pa3yors OWlXGBCImg9I=">AB+nicbVC7TsMwFL0pr1JeKYwsFhUSU5UJGCrxMJYJPqQ2ihyHLe16 jiR7YCq0E9hYQAhVr6Ejb/BaTNAy5EsH51zr3x8goQzpR3n2yqtrW9sbpW3Kzu7e/sHdvWwo +JUEtomMY9lL8CKciZoWzPNaS+RFEcBp91gcpP73QcqFYvFvZ4m1IvwSLAhI1gbyber2SCIe aimkbmQmvkN3645dWcOtErcgtSgQMu3vwZhTNKICk04VqrvOon2Miw1I5zOKoNU0QSTCR7Rv qECR1R52Tz6DJ0aJUTDWJojNJqrvzcyHKk8nJmMsB6rZS8X/P6qR5eRkTSaqpIuHhilHOk Z5DyhkhLNp4ZgIpnJisgYS0y0atiSnCXv7xKOo26e15v3F3UmtdFHWU4hM4AxcuoQm30I I2EHiEZ3iFN+vJerHerY/FaMkqdo7gD6zPH1BAk/8=</latexit> st <latexit sha1_base64="qvQzC35d0h8jYJj yd9Xz3w/gojE=">AB+nicbVC7TsMwFL0pr1JeKYwsFhUSU5UJGCrxMJYJPqQ2ihyHLe16 jiR7YCq0E9hYQAhVr6Ejb/BaTNAy5EsH51zr3x8goQzpR3n2yqtrW9sbpW3Kzu7e/sHdvWwo +JUEtomMY9lL8CKciZoWzPNaS+RFEcBp91gcpP73QcqFYvFvZ4m1IvwSLAhI1gbyber2SCIe aimkbmQmvnat2tO3ZkDrRK3IDUo0PLtr0EYkzSiQhOleq7TqK9DEvNCKezyiBVNMFkgke0b 6jAEVeNo8+Q6dGCdEwluYIjebq740MRyoPZyYjrMdq2cvF/7x+qodXsZEkmoqyOKhYcqRjl HeAwqZpETzqSGYSGayIjLGEhNt2qYEtzlL6+STqPuntcbdxe15nVRxmO4QTOwIVLaMItK ANB7hGV7hzXqyXqx362MxWrKnSP4A+vzB7RIlE=</latexit> Yt <latexit sha1_base64="nP+rW/D+HbyuJZ r9TZgq7WKwCk=">AB6nicbVBNS8NAEJ34WetX1aOXxSJ4KkV1FvBi8eK9kPaUDbTbt0s wm7E6GE/gQvHhTx6i/y5r9x2+agrQ8GHu/NMDMvSKQw6Lrfzsrq2vrGZmGruL2zu7dfOjhsm jVjDdYLGPdDqjhUijeQIGStxPNaRI3gpGN1O/9cS1EbF6wHC/YgOlAgFo2il+8ce9kplt +LOQJaJl5My5Kj3Sl/dfszSiCtkhrT8dwE/YxqFEzySbGbGp5QNqID3rFU0YgbP5udOiGnV umTMNa2FJKZ+nsio5Ex4yiwnRHFoVn0puJ/XifF8MrPhEpS5IrNF4WpJBiT6d+kLzRnKMeWUK aFvZWwIdWUoU2naEPwFl9eJs1qxTuvVO8uyrXrPI4CHMJnIEHl1CDW6hDAxgM4Ble4c2Rzo vz7nzMW1ecfOYI/sD5/AFAm42/</latexit> f G t , ˆf t+c , G (sG t−1, f t) <latexit sha1_base64="lXNFjquUu/RPdMZO5rFMK7tjZxQ=">ACY3icbVFdS8MwFE3r15wfq9M3EYJDUNTRq qC+CT7MxwluE9ZtpFm6BdO0JrfCLP2Tvnmi/DbO7BbV4IOZxzbnJzEiSCa3DdT8teWl5ZXSusFzc2t7ZLzk65qeNUdagsYjVc0A0E1yBnAQ7DlRjESBYK3g5X6st96Y0jyWTzBKWCciA8lDTgkYque8Z34Qi74eRWbDYd6Dbg37o DiRA8FesT8kMG/J4JTmf021buYnmufHM06d2vGe+7lZ3jxiPyk51TcqjspvAi8KaigadV7zofj2kaMQlUEK3bnptAJyMKOBUsL/qpZgmhL2TA2gZKEjHdySYZ5fjIMH0cxsosCXjC/u3ISKTH8xlnRGCo57Ux+Z/WTiG86WRcJikwSX 8vClOBIcbjwHGfK0ZBjAwgVHEzK6ZDogF8y1FE4I3/+RF0LyoepfVi8eryt3tNI4C2keH6Bh56BrdoQdURw1E0Ze1apUsx/q2N+yvfdrta1pzy6aKfvgB92JuW0=</latexit> st <latexit sha1_base64="qvQzC35d0h8jYJjyd9Xz3w/gojE=">AB+nicbVC7TsMwFL0pr 1JeKYwsFhUSU5UJGCrxMJYJPqQ2ihyHLe16jiR7YCq0E9hYQAhVr6Ejb/BaTNAy5EsH51zr3x8goQzpR3n2yqtrW9sbpW3Kzu7e/sHdvWwo+JUEtomMY9lL8CKciZoWzPNaS+RFEcBp91gc pP73QcqFYvFvZ4m1IvwSLAhI1gbyber2SCIeaimkbmQmvnat2tO3ZkDrRK3IDUo0PLtr0EYkzSiQhOleq7TqK9DEvNCKezyiBVNMFkgke0b6jAEVeNo8+Q6dGCdEwluYIjebq740MRyoPZy 
YjrMdq2cvF/7x+qodXsZEkmoqyOKhYcqRjlHeAwqZpETzqSGYSGayIjLGEhNt2qYEtzlL6+STqPuntcbdxe15nVRxmO4QTOwIVLaMItKANB7hGV7hzXqyXqx362MxWrKnSP4A+vzB7 RIlE=</latexit> Ot+1 = g(st) <latexit sha1_base64="l+SuEUvFhZdtPGcfLDRX6Ur4l+s=">ACFXicbVDLSgMxFM3UV 62vUZdugkWoKGWmCupCKLhxZwX7gHYMpm0Dc08SO4IZifcOvuHGhiFvBnX9j+ljY1gMh3Pu5d57vFhwBZb1Y+SWldW1/LrhY3Nre0dc3evoaJEUlankYhkyOKCR6yOnAQrBVLRgJPs KY3uBn5zUcmFY/CBxjGzAlIL+RdTgloyTVP04XCV8NA/3hu8xN4cTO8DXulWYclblw7JpFq2yNgReJPSVFNEXNb87fkSTgIVABVGqbVsxOCmRwKlgWaGTKBYTOiA91tY0JAFTjq+KsNHWv FxN5L6hYDH6t+OlARqtJ2uDAj01bw3Ev/z2gl0L52Uh3ECLKSTQd1EYIjwKCLsc8koiKEmhEqud8W0TyShoIMs6BDs+ZMXSaNSts/KlfvzYvVqGkceHaBDVEI2ukBVdItqI4oekIv6A29G8 /Gq/FhfE5Kc8a0Zx/NwPj6BfS9nqU=</latexit> Figure 1: Model overview of text generation with a guider network. Solid lines mean gradients are backpropagated in training; dash lines mean gradients are not backpropagated. CNN is the feature extractor, and MLP outputs the parameters of the Gaussian density which is compatible with the initial state of the LSTM Guider and Decoder. by an encoder network. Specifically, let the current generated sentence be Y1...t (encouraged to be the same as parts of a training sentence in training), with f t calculated as: f t = Enc(Y1...t). The initial state of the guider network is the encoded feature of a true input sentence by the same convolutional neural network (CNN), i.e., sG 0 = Enc(X), where Enc(·) denotes the encoder transformation, implemented with a CNN (Zhang et al., 2017). Importantly, the input to the guider network, at each time point, is defined by features from the entire sentence generated to that point. This provides an important “guide” to the LSTM decoder, accounting for the global properties of the generated text. Text Generation with Planning We first explain how one uses the guider network to guide next-word generation for the generator (the LSTM decoder in Figure 1). Our framework is inspired by the MPC method (Nagabandi et al., 2017), and can be regarded as a type of plan-ahead attention mechanism. Given the feature f t at time t from the current input sentence, the guider network produces a prediction Gψ(sG t−1, f t) as a future feature representation, by feeding f t into the LSTM guider. Since the training of the guider network is based on real data (detailed in the next paragraph), the predicted feature contains global-structure information of the training sentences. To utilize such information to predict the next word, we combine the predicted feature with the output of the decoder by constructing an attention-like mechanism. Specifically, we first apply a linear transformation ϕ on the predicted feature Gψ(sG t−1, f t), forming a weight vector wt ≜ϕ Gψ(sG t−1, f t)  . The weight wt is applied to the output Ot of the LSTM decoder by an element-wise multiplication operation. The result is then fed into a softmax layer to generate the next token yt. Formally, the generative process is written as: Ot = g(st−1), wt = ϕ(Gψ(sG t−1, f t)), (5) yt ∼Multi(1, softmax(Ot · wt)), (6) sG t = hG(sG t−1, f t), st = h(st−1, e(yt)) . (7) Guider Network Training Given a sentence of feature representations (f 1, f 2, . . . f T ) for a training sentence, we seek to update the guider network such that it is able to predict f t+c given f t, where c > 0 is the number of steps that are looked ahead. We implement this by forcing the predicted feature, Gψ(sG t , f t), to match both the sentence feature f t+c (first term in (8)) and the corresponding feature-changing direction (second term in (8)). 
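To make the plan-ahead decoding step concrete, the following is a minimal PyTorch-style sketch of equations (5)–(7). It assumes single-layer LSTM cells and, as a simplification, treats the guider LSTM's hidden state directly as the predicted future feature; all module and variable names (GuidedDecodingStep, decoder_cell, guider_cell, phi, out_proj) are illustrative and not taken from the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GuidedDecodingStep(nn.Module):
    """One plan-ahead decoding step, following Eqs. (5)-(7).

    Assumption: the guider LSTM's hidden state is used directly as the
    predicted future feature G_psi(s^G_{t-1}, f_t).
    """

    def __init__(self, vocab_size, feat_dim, emb_dim, hid_dim):
        super().__init__()
        self.decoder_cell = nn.LSTMCell(emb_dim, hid_dim)   # h(.)      in Eq. (7)
        self.guider_cell = nn.LSTMCell(feat_dim, hid_dim)   # h^G(.)    in Eq. (7)
        self.out_proj = nn.Linear(hid_dim, vocab_size)      # g(.)      in Eq. (5)
        self.phi = nn.Linear(hid_dim, vocab_size)           # varphi(.) in Eq. (5)
        self.embed = nn.Embedding(vocab_size, emb_dim)      # e(.)      in Eq. (7)

    def forward(self, s_prev, sG_prev, f_t):
        # Eq. (5): decoder output O_t and guider-derived weight vector w_t.
        O_t = self.out_proj(s_prev[0])
        g_h, g_c = self.guider_cell(f_t, sG_prev)
        w_t = self.phi(g_h)
        # Eq. (6): element-wise modulation of the logits, then sample y_t.
        probs = F.softmax(O_t * w_t, dim=-1)
        y_t = torch.multinomial(probs, num_samples=1).squeeze(-1)
        # Eq. (7): advance the guider state and the decoder state.
        sG_t = (g_h, g_c)
        s_t = self.decoder_cell(self.embed(y_t), s_prev)
        return y_t, s_t, sG_t
```

In this sketch the guider's weight vector multiplicatively re-scores the decoder logits before the softmax, which is one simple way to realize the element-wise modulation of $O_t$ described above.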
Guider Network Training
Given a sequence of feature representations $(f_1, f_2, \ldots, f_T)$ for a training sentence, we seek to update the guider network so that it is able to predict $f_{t+c}$ given $f_t$, where $c > 0$ is the number of steps looked ahead. We implement this by forcing the predicted feature $G_\psi(s^G_{t-1}, f_t)$ to match both the sentence feature $f_{t+c}$ (first term in (8)) and the corresponding feature-changing direction (second term in (8)). This is formalized by maximizing an objective function of the following form at time $t$:

$J^G_\psi = D_{\cos}\big(f_{t+c},\, G_\psi(s^G_{t-1}, f_t)\big) + D_{\cos}\big(f_{t+c} - f_t,\, G_\psi(s^G_{t-1}, f_t) - f_t\big),$  (8)

where $D_{\cos}(\cdot, \cdot)$ denotes the cosine similarity.² By maximizing (8), an ideal guider network should be able to predict the true next words conditioned on the current word in a sentence. As a result, the prediction is used to construct an intermediate reward, which in turn is used to update the generator (the LSTM decoder), as described further below.

3.2 Feature-Matching Rewards and Generator Optimization
As in many RL-based text-generation methods, such as SeqGAN (Yu et al., 2017) and LeakGAN (Guo et al., 2017), the generator is updated with policy-gradient methods, so collecting rewards during the generation process is critical. Though SeqGAN (Yu et al., 2017) proposed using rollout to obtain a reward for each generated word, the variance of these rewards is typically too high to be useful in practice, and the computational cost can also be prohibitive. We describe below how the proposed guider network is used to define intermediate rewards, leading to the definition of a feature-matching reward.

²We found that the cosine similarity worked better than the $\ell_2$-norm.

Feature-Matching Rewards
We first define an intermediate reward for generating a particular word. The idea is to match the ground-truth features from the CNN encoder in Figure 1 with those generated from the guider network. Equation (8) indicates that the further the generated feature is from the true feature, the smaller the reward should be. To this end, for each time $t$, we define the intermediate reward for generating the current word as

$r^g_t = \frac{1}{2c} \sum_{i=1}^{c} \big( D_{\cos}(f_t, \hat{f}_t) + D_{\cos}(f_t - f_{t-i}, \hat{f}_t - f_{t-i}) \big),$

where $\hat{f}_t = G_\psi(s^G_{t-c-1}, f_{t-c})$ is the predicted feature. Intuitively, $f_t - f_{t-i}$ measures the difference between the generated sentences in feature space; the reward is high if this difference matches the predicted feature transition $\hat{f}_t - f_{t-i}$ from the guider network. At the last step of text generation, i.e., $t = T$, the corresponding reward measures the quality of the whole generated sentence, and is therefore called a final reward. The final reward is defined differently from the intermediate reward, as discussed below for both the unconditional- and conditional-generation cases.

Note that a token generated at time $t$ influences not only the reward received at that time but also the rewards at subsequent time steps. We therefore define the cumulative reward $\sum_{i=t}^{T} \gamma^i r^g_i$, with $\gamma$ a discount factor, as the feature-matching reward. Intuitively, this encourages the generator to focus on achieving higher long-term rewards. Finally, in order to apply policy gradients to update the generator, we combine the feature-matching reward with a problem-specific final reward to form a Q-value reward. Similar to SeqGAN, the final reward is defined as the output of a discriminator that evaluates the quality of the whole generated sentence, i.e., the smaller the output, the less likely the generation is a true sentence. As a result, we combine the adversarial reward $r^f \in [0, 1]$ given by the discriminator (Yu et al., 2017) with the guider-matching rewards to define the Q-value reward as $Q_t = \big(\sum_{i=t}^{T} \gamma^i r^g_i\big) \times r^f$.
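Objective (8) and the intermediate reward $r^g_t$ reduce to sums of cosine-similarity terms, so they are straightforward to write down directly. The snippet below is a hedged sketch under that reading; the function names (guider_loss, feature_matching_reward) and the tensor layout are illustrative assumptions rather than the paper's code.

```python
import torch
import torch.nn.functional as F

def guider_loss(f_future, f_t, g_pred):
    """Negative of objective (8), averaged over the batch.

    f_future: f_{t+c};  f_t: current feature;  g_pred: G_psi(s^G_{t-1}, f_t).
    """
    j = F.cosine_similarity(f_future, g_pred, dim=-1) \
        + F.cosine_similarity(f_future - f_t, g_pred - f_t, dim=-1)
    return -j.mean()  # maximizing J^G_psi == minimizing this loss

def feature_matching_reward(feats, preds, t, c):
    """Intermediate reward r^g_t: feats[i] = f_i, preds[t] = hat{f}_t = G_psi(s^G_{t-c-1}, f_{t-c})."""
    f_t, f_hat = feats[t], preds[t]
    terms = []
    for i in range(1, c + 1):
        terms.append(
            F.cosine_similarity(f_t, f_hat, dim=-1)
            + F.cosine_similarity(f_t - feats[t - i], f_hat - feats[t - i], dim=-1)
        )
    return torch.stack(terms).mean(dim=0) / 2.0  # equals (1 / 2c) * sum of the c terms
```

Here guider_loss returns the negative of (8) so that a standard gradient-descent optimizer maximizes the objective, and feature_matching_reward averages the $c$ per-offset terms, matching the $\frac{1}{2c}$ normalization.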
Algorithm 1: Model-based Imitation Learning for Text Generation
Require: generator policy $\pi_\phi$; guider network $G_\psi$; a sequence dataset $\{X_{1\ldots T}\}$ produced by some expert policy.
1: Initialize $G_\psi$, $D_\theta$ with random weights.
2: while Imitation Learning phase do
3:   Update generator $\pi_\phi$ and guider $G_\psi$ with the MLE loss.
4: end while
5: while Reinforcement Learning phase do
6:   Generate a sequence $Y_{1\ldots T} \sim \pi_\phi$.
7:   Compute $Q_t$, and update $\pi_\phi$.
8: end while

Generator Optimization
The generator is initialized by pre-training on sentences with an autoencoder structure, based on MLE training. After that, the final Q-value reward $Q_t$ is used as the reward at each time $t$, and standard policy-gradient optimization methods are used to update the generator. Specifically, the policy gradients are

$\nabla_\phi J = \mathbb{E}_{(s_{t-1}, y_t) \sim \rho_\pi} \big[ Q_t \nabla_\phi \log p(y_t \mid s_{t-1}; \phi, \varphi) \big],$
$\nabla_\varphi J = \mathbb{E}_{(s_{t-1}, y_t) \sim \rho_\pi} \big[ Q_t \nabla_\varphi \log p(y_t \mid s_{t-1}; \phi, \varphi) \big],$

where $p(y_t \mid s_{t-1}; \phi, \varphi)$ is the probability of generating $y_t$ given $s_{t-1}$ under the generator. Algorithm 1 describes the proposed model-based imitation learning framework for text generation.

Model-based or Model-free
Text generation seeks to generate the next word (action) given the current (sub-)sentence (state). The generator is considered an agent that learns a policy to predict the next word given its current state. In previous work (Ranzato et al., 2016), a metric reward is given and the generator is trained only to maximize that metric reward by trial and error; this is model-free learning. In the proposed method, the guider network models the environment dynamics and is trained with a cosine-similarity objective between its prediction and the ground truth on real text. For generator training, the generator maximizes a reward determined jointly by the metric and the guider network, which amounts to model-free learning with model-based boosting (Gu et al., 2016). A model-predictive-control scheme is thus included in our method, where the guider network helps next-word selection at each time step.
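As a rough illustration of the reinforcement-learning phase of Algorithm 1 (lines 5–8), the sketch below forms $Q_t = \big(\sum_{i=t}^{T} \gamma^i r^g_i\big)\, r^f$ for one generated sequence and applies a REINFORCE-style policy-gradient update. The reward tensors, log-probabilities, optimizer, and the default discount value are assumed inputs for illustration, not the authors' released interface.

```python
import torch

def q_values(r_g, r_f, gamma=0.95):
    """Q_t = (sum_{i=t}^{T} gamma^i * r^g_i) * r^f for every step t of one sequence.

    r_g: 1-D tensor of feature-matching rewards; r_f: scalar adversarial reward.
    gamma is an illustrative default, not a value reported in the text.
    """
    T = r_g.shape[0]
    q = torch.zeros(T)
    running = 0.0
    for t in reversed(range(T)):
        running = (gamma ** t) * r_g[t] + running  # suffix sum of discounted rewards
        q[t] = running
    return q * r_f

def policy_gradient_step(log_probs, q, optimizer):
    """REINFORCE-style update: ascend E[Q_t * grad log p(y_t | s_{t-1})]."""
    loss = -(q.detach() * log_probs).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```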
6Y8EhNYnOhrj/t2w2n6cyAlolbkgaUaPftLz9KSBZToQnHSvVcJ9VBjq VmhNpzc8UTEZ4yHtGSpwTFWQz6JP0bFRIjRIpDlCo5n6eyPHsSqym ckY65Fa9ArxP6+X6cFkDORZpoKMn9okHGkE1T0gCImKdF8Ygmkpmsi IywxESbtmqmBHfxy8vEO21eNt3bs0brqmyjCodwBCfgwjm04Aba4AGB R3iGV3iznqwX6936mI9WrHLnAP7A+vwByqiT5A=</latexit> <latexit sha1_base64="U0gMhY ytyhJ9OiNIMQF4cMzJCYs=">AB+XicbVC9TsMwGPxS/kr5S2FksaiQ mKoEIQFbBQtjkQit1ESV47itVceJbAdUhT4KCwMgVt6EjbfBaTNAy0m WT3fJ58vTDlT2nG+rcrK6tr6RnWztrW9s7tn1/fvVZJQj2S8ER2Q6w oZ4J6mlOu6mkOA457YTj68LvPFCpWCLu9CSlQYyHg0YwdpIfbvu53 6Y8EhNYnOhrj/t2w2n6cyAlolbkgaUaPftLz9KSBZToQnHSvVcJ9VBjq VmhNpzc8UTEZ4yHtGSpwTFWQz6JP0bFRIjRIpDlCo5n6eyPHsSqym ckY65Fa9ArxP6+X6cFkDORZpoKMn9okHGkE1T0gCImKdF8Ygmkpmsi IywxESbtmqmBHfxy8vEO21eNt3bs0brqmyjCodwBCfgwjm04Aba4AGB R3iGV3iznqwX6936mI9WrHLnAP7A+vwByqiT5A=</latexit> <latexit sha1_base64="U0gMhY ytyhJ9OiNIMQF4cMzJCYs=">AB+XicbVC9TsMwGPxS/kr5S2FksaiQ mKoEIQFbBQtjkQit1ESV47itVceJbAdUhT4KCwMgVt6EjbfBaTNAy0m WT3fJ58vTDlT2nG+rcrK6tr6RnWztrW9s7tn1/fvVZJQj2S8ER2Q6w oZ4J6mlOu6mkOA457YTj68LvPFCpWCLu9CSlQYyHg0YwdpIfbvu53 6Y8EhNYnOhrj/t2w2n6cyAlolbkgaUaPftLz9KSBZToQnHSvVcJ9VBjq VmhNpzc8UTEZ4yHtGSpwTFWQz6JP0bFRIjRIpDlCo5n6eyPHsSqym ckY65Fa9ArxP6+X6cFkDORZpoKMn9okHGkE1T0gCImKdF8Ygmkpmsi IywxESbtmqmBHfxy8vEO21eNt3bs0brqmyjCodwBCfgwjm04Aba4AGB R3iGV3iznqwX6936mI9WrHLnAP7A+vwByqiT5A=</latexit> <latexit sha1_base64="X/BbP QRM1pmBhxdK1enSbL+gJw=">AB2HicbZDNSgMxFIXv1L86Vq1rN8Ei uCozbtSd4MZlBcW2qFkMnfa0ExmSO4IpfQFXLhRfDB3vo3pz0KtBwI f5yTk3pOUSloKgi+vtrW9s7tX3/cPGv7h0XGz8WSLygiMRKEK0u4RSU 1RiRJYa80yPNEYTeZ3C3y7jMaKwv9SNMS45yPtMyk4OSszrDZCtrBUm wTwjW0YK1h83OQFqLKUZNQ3Np+GJQUz7ghKRTO/UFlseRiwkfYd6h5j aeLcecs3PnpCwrjDua2NL9+WLGc2uneJu5pzG9m+2MP/L+hVl1/FM6 rIi1GL1UVYpRgVb7MxSaVCQmjrgwkg3KxNjbrg14zvOgj/brwJ0WX7p h0+BFCHUziDCwjhCm7hHjoQgYAUXuDNG3uv3vuqpq37uwEfsn7+Aaq KYoN</latexit> <latexit sha1_base64="kvu14B GDBIhRlFRIGu9Z3vMYUyg=">AB7nicbVC9TsMwGPxS/kopkLKyWFRI TFXCAmxILIxFIrRSE1WO47RWHTuyHVAV+igsDIB4HDbeBqftAC0nfL pzpbvuzjnTBvP+3ZqG5tb2zv13cZec/g0G01H7QsFKEBkVyqfow15Uz QwDaT9XFGcxp714clP5vUeqNJPi3kxzGmV4JFjKCDZWGrqtsAxjyR M9zeyB+uFs6La9jcHWif+krRhie7Q/QoTSYqMCkM41nrge7mJSqwMI5 zOGmGhaY7JBI/owFKBM6qjch59hk6tkqBUKjvCoLn6+0WJM1lszczb MZ61avE/7xBYdLqGQiLwVZPFRWnBkJKp6QAlTlBg+tQTxWxWRMZY WJsWw1bgr+68joJzjtXHf/OgzocwmcgQ8XcA230IUACDzBC7zBu/Ps vDofi7ZqzrK2I/gD5/MHbU6SgA=</latexit> <latexit sha1_base64="kvu14B GDBIhRlFRIGu9Z3vMYUyg=">AB7nicbVC9TsMwGPxS/kopkLKyWFRI TFXCAmxILIxFIrRSE1WO47RWHTuyHVAV+igsDIB4HDbeBqftAC0nfL pzpbvuzjnTBvP+3ZqG5tb2zv13cZec/g0G01H7QsFKEBkVyqfow15Uz QwDaT9XFGcxp714clP5vUeqNJPi3kxzGmV4JFjKCDZWGrqtsAxjyR M9zeyB+uFs6La9jcHWif+krRhie7Q/QoTSYqMCkM41nrge7mJSqwMI5 zOGmGhaY7JBI/owFKBM6qjch59hk6tkqBUKjvCoLn6+0WJM1lszczb MZ61avE/7xBYdLqGQiLwVZPFRWnBkJKp6QAlTlBg+tQTxWxWRMZY WJsWw1bgr+68joJzjtXHf/OgzocwmcgQ8XcA230IUACDzBC7zBu/Ps vDofi7ZqzrK2I/gD5/MHbU6SgA=</latexit> <latexit sha1_base64="SdOLOh yvhat7GhdSUzXLfg4piJ4=">AB+XicbVC9TsMwGPzCbyl/KYwsFhUS U5WwAFsFC2ORCK3URJXjOK1VJ45sB1SFPgoLAyBW3oSNt8FpM0DLSZ Pd98ny/MOFPacb6tldW19Y3N2lZ9e2d3b9uHNwrkUtCPSK4kL0QK8p ZSj3NKe9TFKchJx2w/F16XcfqFRMpHd6ktEgwcOUxYxgbaSB3fALPx Q8UpPEXKjnTwd202k5M6Bl4lakCRU6A/vLjwTJE5pqwrFSfdfJdFBgqR nhdFr3c0UzTMZ4SPuGpjihKihm0afoxCgRioU0J9Vopv7eKHCiymxmM sF6pBa9UvzP6+c6vgKlma5pimZPxTnHGmByh5QxCQlmk8MwUQykxWRE ZaYaNW3ZTgLn5mXhnrcuWe+s021dVGzU4gmM4BRfOoQ030AEPCDzC M7zCm/VkvVjv1sd8dMWqdg7hD6zPH8lok+A=</latexit> <latexit sha1_base64="U0gMhY ytyhJ9OiNIMQF4cMzJCYs=">AB+XicbVC9TsMwGPxS/kr5S2FksaiQ mKoEIQFbBQtjkQit1ESV47itVceJbAdUhT4KCwMgVt6EjbfBaTNAy0m WT3fJ58vTDlT2nG+rcrK6tr6RnWztrW9s7tn1/fvVZJQj2S8ER2Q6w 
oZ4J6mlOu6mkOA457YTj68LvPFCpWCLu9CSlQYyHg0YwdpIfbvu53 6Y8EhNYnOhrj/t2w2n6cyAlolbkgaUaPftLz9KSBZToQnHSvVcJ9VBjq VmhNpzc8UTEZ4yHtGSpwTFWQz6JP0bFRIjRIpDlCo5n6eyPHsSqym ckY65Fa9ArxP6+X6cFkDORZpoKMn9okHGkE1T0gCImKdF8Ygmkpmsi IywxESbtmqmBHfxy8vEO21eNt3bs0brqmyjCodwBCfgwjm04Aba4AGB R3iGV3iznqwX6936mI9WrHLnAP7A+vwByqiT5A=</latexit> <latexit sha1_base64="U0gMhY ytyhJ9OiNIMQF4cMzJCYs=">AB+XicbVC9TsMwGPxS/kr5S2FksaiQ mKoEIQFbBQtjkQit1ESV47itVceJbAdUhT4KCwMgVt6EjbfBaTNAy0m WT3fJ58vTDlT2nG+rcrK6tr6RnWztrW9s7tn1/fvVZJQj2S8ER2Q6w oZ4J6mlOu6mkOA457YTj68LvPFCpWCLu9CSlQYyHg0YwdpIfbvu53 6Y8EhNYnOhrj/t2w2n6cyAlolbkgaUaPftLz9KSBZToQnHSvVcJ9VBjq VmhNpzc8UTEZ4yHtGSpwTFWQz6JP0bFRIjRIpDlCo5n6eyPHsSqym ckY65Fa9ArxP6+X6cFkDORZpoKMn9okHGkE1T0gCImKdF8Ygmkpmsi IywxESbtmqmBHfxy8vEO21eNt3bs0brqmyjCodwBCfgwjm04Aba4AGB R3iGV3iznqwX6936mI9WrHLnAP7A+vwByqiT5A=</latexit> <latexit sha1_base64="U0gMhY ytyhJ9OiNIMQF4cMzJCYs=">AB+XicbVC9TsMwGPxS/kr5S2FksaiQ mKoEIQFbBQtjkQit1ESV47itVceJbAdUhT4KCwMgVt6EjbfBaTNAy0m WT3fJ58vTDlT2nG+rcrK6tr6RnWztrW9s7tn1/fvVZJQj2S8ER2Q6w oZ4J6mlOu6mkOA457YTj68LvPFCpWCLu9CSlQYyHg0YwdpIfbvu53 6Y8EhNYnOhrj/t2w2n6cyAlolbkgaUaPftLz9KSBZToQnHSvVcJ9VBjq VmhNpzc8UTEZ4yHtGSpwTFWQz6JP0bFRIjRIpDlCo5n6eyPHsSqym ckY65Fa9ArxP6+X6cFkDORZpoKMn9okHGkE1T0gCImKdF8Ygmkpmsi IywxESbtmqmBHfxy8vEO21eNt3bs0brqmyjCodwBCfgwjm04Aba4AGB R3iGV3iznqwX6936mI9WrHLnAP7A+vwByqiT5A=</latexit> <latexit sha1_base64="U0gMhY ytyhJ9OiNIMQF4cMzJCYs=">AB+XicbVC9TsMwGPxS/kr5S2FksaiQ mKoEIQFbBQtjkQit1ESV47itVceJbAdUhT4KCwMgVt6EjbfBaTNAy0m WT3fJ58vTDlT2nG+rcrK6tr6RnWztrW9s7tn1/fvVZJQj2S8ER2Q6w oZ4J6mlOu6mkOA457YTj68LvPFCpWCLu9CSlQYyHg0YwdpIfbvu53 6Y8EhNYnOhrj/t2w2n6cyAlolbkgaUaPftLz9KSBZToQnHSvVcJ9VBjq VmhNpzc8UTEZ4yHtGSpwTFWQz6JP0bFRIjRIpDlCo5n6eyPHsSqym ckY65Fa9ArxP6+X6cFkDORZpoKMn9okHGkE1T0gCImKdF8Ygmkpmsi IywxESbtmqmBHfxy8vEO21eNt3bs0brqmyjCodwBCfgwjm04Aba4AGB R3iGV3iznqwX6936mI9WrHLnAP7A+vwByqiT5A=</latexit> <latexit sha1_base64="U0gMhY ytyhJ9OiNIMQF4cMzJCYs=">AB+XicbVC9TsMwGPxS/kr5S2FksaiQ mKoEIQFbBQtjkQit1ESV47itVceJbAdUhT4KCwMgVt6EjbfBaTNAy0m WT3fJ58vTDlT2nG+rcrK6tr6RnWztrW9s7tn1/fvVZJQj2S8ER2Q6w oZ4J6mlOu6mkOA457YTj68LvPFCpWCLu9CSlQYyHg0YwdpIfbvu53 6Y8EhNYnOhrj/t2w2n6cyAlolbkgaUaPftLz9KSBZToQnHSvVcJ9VBjq VmhNpzc8UTEZ4yHtGSpwTFWQz6JP0bFRIjRIpDlCo5n6eyPHsSqym ckY65Fa9ArxP6+X6cFkDORZpoKMn9okHGkE1T0gCImKdF8Ygmkpmsi IywxESbtmqmBHfxy8vEO21eNt3bs0brqmyjCodwBCfgwjm04Aba4AGB R3iGV3iznqwX6936mI9WrHLnAP7A+vwByqiT5A=</latexit> <latexit sha1_base64="U0gMhY ytyhJ9OiNIMQF4cMzJCYs=">AB+XicbVC9TsMwGPxS/kr5S2FksaiQ mKoEIQFbBQtjkQit1ESV47itVceJbAdUhT4KCwMgVt6EjbfBaTNAy0m WT3fJ58vTDlT2nG+rcrK6tr6RnWztrW9s7tn1/fvVZJQj2S8ER2Q6w oZ4J6mlOu6mkOA457YTj68LvPFCpWCLu9CSlQYyHg0YwdpIfbvu53 6Y8EhNYnOhrj/t2w2n6cyAlolbkgaUaPftLz9KSBZToQnHSvVcJ9VBjq VmhNpzc8UTEZ4yHtGSpwTFWQz6JP0bFRIjRIpDlCo5n6eyPHsSqym ckY65Fa9ArxP6+X6cFkDORZpoKMn9okHGkE1T0gCImKdF8Ygmkpmsi IywxESbtmqmBHfxy8vEO21eNt3bs0brqmyjCodwBCfgwjm04Aba4AGB R3iGV3iznqwX6936mI9WrHLnAP7A+vwByqiT5A=</latexit> st <latexit sha1_base64="5nCjdFJ8JRBIRqG7yf DEScqxviU=">AB6XicbVBNS8NAEJ3Ur1q/qh69LBbBU0lEUG9FLx4rGltoQ9lsN+3SzSbsToQS+hO8 eFDx6j/y5r9x2+agrQ8GHu/NMDMvTKUw6LrfTmldW19o7xZ2dre2d2r7h8miTjPskYluh9RwKRT3 UaDk7VRzGoeSt8LRzdRvPXFtRKIecJzyIKYDJSLBKFrp3vSwV625dXcGsky8gtSgQLNX/er2E5bFXCG T1JiO56Y5FSjYJPKt3M8JSyER3wjqWKxtwE+ezUCTmxSp9EibalkMzU3xM5jY0Zx6HtjCkOzaI3Ff/ zOhlGl0EuVJohV2y+KMokwYRM/yZ9oTlDObaEMi3srYQNqaYMbToVG4K3+PIy8c/qV3Xv7rzWuC7SKM RHMpeHABDbiFJvjAYADP8ApvjnRenHfnY95acoqZQ/gD5/MH1yWNsg=</latexit> <latexit sha1_base64="5nCjdFJ8JRBIRqG7yf 
DEScqxviU=">AB6XicbVBNS8NAEJ3Ur1q/qh69LBbBU0lEUG9FLx4rGltoQ9lsN+3SzSbsToQS+hO8 eFDx6j/y5r9x2+agrQ8GHu/NMDMvTKUw6LrfTmldW19o7xZ2dre2d2r7h8miTjPskYluh9RwKRT3 UaDk7VRzGoeSt8LRzdRvPXFtRKIecJzyIKYDJSLBKFrp3vSwV625dXcGsky8gtSgQLNX/er2E5bFXCG T1JiO56Y5FSjYJPKt3M8JSyER3wjqWKxtwE+ezUCTmxSp9EibalkMzU3xM5jY0Zx6HtjCkOzaI3Ff/ zOhlGl0EuVJohV2y+KMokwYRM/yZ9oTlDObaEMi3srYQNqaYMbToVG4K3+PIy8c/qV3Xv7rzWuC7SKM RHMpeHABDbiFJvjAYADP8ApvjnRenHfnY95acoqZQ/gD5/MH1yWNsg=</latexit> <latexit sha1_base64="5nCjdFJ8JRBIRqG7yf DEScqxviU=">AB6XicbVBNS8NAEJ3Ur1q/qh69LBbBU0lEUG9FLx4rGltoQ9lsN+3SzSbsToQS+hO8 eFDx6j/y5r9x2+agrQ8GHu/NMDMvTKUw6LrfTmldW19o7xZ2dre2d2r7h8miTjPskYluh9RwKRT3 UaDk7VRzGoeSt8LRzdRvPXFtRKIecJzyIKYDJSLBKFrp3vSwV625dXcGsky8gtSgQLNX/er2E5bFXCG T1JiO56Y5FSjYJPKt3M8JSyER3wjqWKxtwE+ezUCTmxSp9EibalkMzU3xM5jY0Zx6HtjCkOzaI3Ff/ zOhlGl0EuVJohV2y+KMokwYRM/yZ9oTlDObaEMi3srYQNqaYMbToVG4K3+PIy8c/qV3Xv7rzWuC7SKM RHMpeHABDbiFJvjAYADP8ApvjnRenHfnY95acoqZQ/gD5/MH1yWNsg=</latexit> <latexit sha1_base64="5nCjdFJ8JRBIRqG7yf DEScqxviU=">AB6XicbVBNS8NAEJ3Ur1q/qh69LBbBU0lEUG9FLx4rGltoQ9lsN+3SzSbsToQS+hO8 eFDx6j/y5r9x2+agrQ8GHu/NMDMvTKUw6LrfTmldW19o7xZ2dre2d2r7h8miTjPskYluh9RwKRT3 UaDk7VRzGoeSt8LRzdRvPXFtRKIecJzyIKYDJSLBKFrp3vSwV625dXcGsky8gtSgQLNX/er2E5bFXCG T1JiO56Y5FSjYJPKt3M8JSyER3wjqWKxtwE+ezUCTmxSp9EibalkMzU3xM5jY0Zx6HtjCkOzaI3Ff/ zOhlGl0EuVJohV2y+KMokwYRM/yZ9oTlDObaEMi3srYQNqaYMbToVG4K3+PIy8c/qV3Xv7rzWuC7SKM RHMpeHABDbiFJvjAYADP8ApvjnRenHfnY95acoqZQ/gD5/MH1yWNsg=</latexit> sG t <latexit sha1_base64="gtaGdGsRiNSqcfeaq5f3GNU BZc0=">AB63icbVBNS8NAEJ34WetX1aOXxSJ4KokI6q3oQY8VjC20sWy2m3bpZhN2J0IJ/Q1ePKh49Q9589+4bX PQ1gcDj/dmJkXplIYdN1vZ2l5ZXVtvbR3tza3tmt7O0/mCTjPskYluhdRwKRT3UaDkrVRzGoeSN8Ph9cRvPnFt RKLucZTyIKZ9JSLBKFrJN18vOlWqm7NnYIsEq8gVSjQ6Fa+Or2EZTFXyCQ1pu25KQY51SiY5ONyJzM8pWxI+7xtq aIxN0E+PXZMjq3SI1GibSkU/X3RE5jY0ZxaDtjigMz703E/7x2htFkAuVZsgVmy2KMkwIZPSU9ozlCOLKFMC3 srYQOqKUObT9mG4M2/vEj809plzbs7q9avijRKcAhHcAIenEMdbqEBPjAQ8Ayv8OYo58V5dz5mrUtOMXMAf+B8/gAd bI5r</latexit> <latexit sha1_base64="gtaGdGsRiNSqcfeaq5f3GNU BZc0=">AB63icbVBNS8NAEJ34WetX1aOXxSJ4KokI6q3oQY8VjC20sWy2m3bpZhN2J0IJ/Q1ePKh49Q9589+4bX PQ1gcDj/dmJkXplIYdN1vZ2l5ZXVtvbR3tza3tmt7O0/mCTjPskYluhdRwKRT3UaDkrVRzGoeSN8Ph9cRvPnFt RKLucZTyIKZ9JSLBKFrJN18vOlWqm7NnYIsEq8gVSjQ6Fa+Or2EZTFXyCQ1pu25KQY51SiY5ONyJzM8pWxI+7xtq aIxN0E+PXZMjq3SI1GibSkU/X3RE5jY0ZxaDtjigMz703E/7x2htFkAuVZsgVmy2KMkwIZPSU9ozlCOLKFMC3 srYQOqKUObT9mG4M2/vEj809plzbs7q9avijRKcAhHcAIenEMdbqEBPjAQ8Ayv8OYo58V5dz5mrUtOMXMAf+B8/gAd bI5r</latexit> <latexit sha1_base64="gtaGdGsRiNSqcfeaq5f3GNU BZc0=">AB63icbVBNS8NAEJ34WetX1aOXxSJ4KokI6q3oQY8VjC20sWy2m3bpZhN2J0IJ/Q1ePKh49Q9589+4bX PQ1gcDj/dmJkXplIYdN1vZ2l5ZXVtvbR3tza3tmt7O0/mCTjPskYluhdRwKRT3UaDkrVRzGoeSN8Ph9cRvPnFt RKLucZTyIKZ9JSLBKFrJN18vOlWqm7NnYIsEq8gVSjQ6Fa+Or2EZTFXyCQ1pu25KQY51SiY5ONyJzM8pWxI+7xtq aIxN0E+PXZMjq3SI1GibSkU/X3RE5jY0ZxaDtjigMz703E/7x2htFkAuVZsgVmy2KMkwIZPSU9ozlCOLKFMC3 srYQOqKUObT9mG4M2/vEj809plzbs7q9avijRKcAhHcAIenEMdbqEBPjAQ8Ayv8OYo58V5dz5mrUtOMXMAf+B8/gAd bI5r</latexit> <latexit sha1_base64="gtaGdGsRiNSqcfeaq5f3GNU BZc0=">AB63icbVBNS8NAEJ34WetX1aOXxSJ4KokI6q3oQY8VjC20sWy2m3bpZhN2J0IJ/Q1ePKh49Q9589+4bX PQ1gcDj/dmJkXplIYdN1vZ2l5ZXVtvbR3tza3tmt7O0/mCTjPskYluhdRwKRT3UaDkrVRzGoeSN8Ph9cRvPnFt RKLucZTyIKZ9JSLBKFrJN18vOlWqm7NnYIsEq8gVSjQ6Fa+Or2EZTFXyCQ1pu25KQY51SiY5ONyJzM8pWxI+7xtq aIxN0E+PXZMjq3SI1GibSkU/X3RE5jY0ZxaDtjigMz703E/7x2htFkAuVZsgVmy2KMkwIZPSU9ozlCOLKFMC3 srYQOqKUObT9mG4M2/vEj809plzbs7q9avijRKcAhHcAIenEMdbqEBPjAQ8Ayv8OYo58V5dz5mrUtOMXMAf+B8/gAd bI5r</latexit> Current Sentence Y1...t <latexit sha1_base64="NkVogWsoqBj2N9S6woHLj 
AB83icbVBN">
Figure 2: Guided style transfer: the Guider network controls the sentiment at the higher level, and the Generator focuses on preserving content at the lower level.

4 Extension to Non-parallel Text Style Transfer

As illustrated in Figure 2, our framework naturally provides a way to perform style transfer, where the guider network plays the role of style selection and the generator focuses only on maintaining content, without considering the style. To make the guider network focus on style guidance, we assign the label l as the initial state s^G_0 of the guider network. Specifically, at each step t, we feed the current sentence representation f_t and the label l into the guider network:

    O_t = g(s_{t-1}),    w_t = φ(G_ψ(s^G_{t-1}, [f_t, l])),    (9)
    y_t ∼ Multi(1, softmax(O_t · w_t)).    (10)

For the generator, we place an adversarial regularizer on the encoded latent s_0(X) and penalize it if it contains sentiment information, by maximizing the entropy −Σ_l p(l | s_0(X)) log p(l | s_0(X)), where p is a pre-trained classifier.
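AB83icbVBN">
To make Eqs. (9)-(10) concrete, the following is a minimal NumPy sketch of a single guided decoding step. It is a hedged illustration rather than the released implementation: the generator, the guider and the transformation φ are passed in as callables, and O_t is assumed to be a matrix of per-word candidate representations, so that O_t · w_t yields one score per vocabulary entry.

```python
import numpy as np

def guided_step(generator, guider, phi, s_prev, sG_prev, f_t, label_emb, rng):
    """One decoding step following Eqs. (9)-(10); a sketch, not the released code.

    Assumed shapes: O_t is a (V, d) matrix of candidate-word representations
    from the generator and w_t is a d-dimensional guidance vector, so that
    O_t @ w_t gives one score per vocabulary word.
    """
    O_t = generator(s_prev)                       # Eq. (9): O_t = g(s_{t-1})
    guider_in = np.concatenate([f_t, label_emb])  # [f_t, l]: sentence feature + style label
    w_t = phi(guider(sG_prev, guider_in))         # Eq. (9): w_t = phi(G_psi(s^G_{t-1}, [f_t, l]))
    scores = O_t @ w_t                            # per-word scores O_t . w_t
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()                          # softmax over the vocabulary
    return rng.choice(len(probs), p=probs)        # Eq. (10): y_t ~ Multi(1, softmax(.))
```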
Intuitively, the generator proposes candidate words, represented by O_t, while the guider implicitly makes the choice through w_t based on the sentiment information. The sentiment information is thus carried by w_t, while the content of the original sentence is represented by O_t. To achieve style transfer, one feeds the original sentence X together with the target style label l to obtain the transferred sentence Y with style l. Following previous work (Hu et al., 2017; Yang et al., 2018; Cheng et al., 2020), we adopt a classifier as the discriminator and use the soft-argmax approach (Kusner and Miguel, 2016) to update the generator instead of policy gradient (Sutton and Barto, 1998).

5 Related Work

We first review related work that combines RL and GAN for text generation. As one of the most representative models in this direction, SeqGAN (Yu et al., 2017) adopts Monte-Carlo search to calculate rewards; however, this introduces high variance into policy optimization. A number of works were subsequently proposed to improve the reward-generation process. For example, RankGAN (Lin et al., 2017) replaces the reward from the GAN discriminator with a ranking-based reward; MaliGAN (Che et al., 2017) modifies the GAN objective and proposes techniques to reduce gradient variance; MaskGAN (Fedus et al., 2018) uses a filling technique to define a Q-value reward for sentence completion; RelGAN (Nie et al., 2019) uses a relational-memory-based generator to model long-distance dependencies; FMGAN (Chen et al., 2018) matches features of real and generated sentences with a feature-mover's distance inspired by optimal transport (Chen et al., 2019; Zhang et al., 2018); and LeakGAN (Guo et al., 2017) addresses the sparse-reward issue in long-text generation with hierarchical RL, utilizing information leaked from the GAN discriminator. One problem of LeakGAN is that it tends to overfit the training data, yielding generated sentences that are often not diverse. By contrast, by relying on a model-based imitation-learning approach, our method learns global-structure information, generates more diverse sentences, and can be extended to conditional text generation. Zhang et al. (2020) designed a differentiable nested Wasserstein distance for semantic matching, which can be applied for further improvement.

RL techniques can also be used in other ways for text generation (Bachman and Precup, 2015). For example, Ranzato et al. (2016) trained a Seq2Seq model by directly optimizing BLEU/ROUGE scores with the REINFORCE algorithm. To reduce the variance of vanilla REINFORCE, Bahdanau et al. (2017) adopted the actor-critic framework for sequence prediction, and Rennie et al. (2016) used a baseline obtained from a greedy decoding scheme for the REINFORCE method. Note that all these methods only obtain a reward after a whole sentence has been generated. Planning techniques from RL have also been explored to improve text generation (Gulcehre et al., 2017; Serdyuk et al., 2018), and Zhang et al. (2020) introduced a self-imitation scheme that exploits historical high-quality sentences for enhanced exploration. Compared to these related works, the proposed guider network provides both a planning mechanism and intermediate rewards.
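To make the notion of an intermediate reward concrete, the sketch below shows one generic form such a guider-based step-wise reward could take: the similarity between the feature the guider predicted several steps ahead and the feature actually reached. This is only an illustrative, hedged instantiation; the paper's actual reward is the one defined by its Eq. (5), which is not reproduced in this section.

```python
import numpy as np

def feature_matching_reward(predicted_feature, realized_feature):
    """One generic form of an intermediate, guider-based reward: the cosine
    similarity between the sentence feature the guider predicted c steps
    ahead and the feature actually obtained after generating those steps.
    Illustrative only; the paper's exact reward is given by its Eq. (5).
    """
    num = float(np.dot(predicted_feature, realized_feature))
    den = np.linalg.norm(predicted_feature) * np.linalg.norm(realized_feature) + 1e-8
    return num / den
```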
Method                           Test-BLEU-2  -3     -4     -5     Self-BLEU-2  -3     -4
SeqGAN (Yu et al., 2017)         0.820        0.604  0.361  0.211  0.807        0.577  0.278
RankGAN (Lin et al., 2017)       0.852        0.637  0.389  0.248  0.822        0.592  0.230
GSGAN (Kusner and Miguel, 2016)  0.810        0.566  0.335  0.197  0.785        0.522  0.230
TextGAN (Zhang et al., 2017)     0.910        0.728  0.484  0.306  0.806        0.548  0.217
LeakGAN (Guo et al., 2017)       0.922        0.797  0.602  0.416  0.912        0.825  0.689
MLE (Caccia et al., 2018)        0.902        0.706  0.470  0.392  0.787        0.646  0.485
GMGAN (ours)                     0.949        0.823  0.635  0.421  0.746        0.511  0.319

Table 1: Test-BLEU (↑) and Self-BLEU (↓) scores on Image COCO.

Method                           Test-BLEU-2  -3     -4     -5     Self-BLEU-2  -3     -4
SeqGAN (Yu et al., 2017)         0.630        0.354  0.164  0.087  0.728        0.411  0.139
RankGAN (Lin et al., 2017)       0.723        0.440  0.210  0.107  0.672        0.346  0.119
GSGAN (Kusner and Miguel, 2016)  0.723        0.440  0.210  0.107  0.807        0.680  0.450
TextGAN (Zhang et al., 2017)     0.777        0.529  0.305  0.161  0.806        0.662  0.448
LeakGAN (Guo et al., 2017)       0.923        0.757  0.546  0.335  0.837        0.683  0.513
MLE (Caccia et al., 2018)        0.902        0.706  0.470  0.392  0.787        0.646  0.485
GMGAN (ours)                     0.923        0.727  0.491  0.303  0.814        0.576  0.328

Table 2: Test-BLEU (↑) and Self-BLEU (↓) scores on EMNLP2017 WMT News.

6 Experiments

We test the proposed framework on unconditional and conditional text-generation tasks, and analyze the results to understand the performance gained by the guider network. We also perform an ablation study of the improvements brought by each part of the proposed method, and consider non-parallel style transfer. All experiments are conducted on a single Tesla P100 GPU and implemented with TensorFlow and Theano. Details of the datasets, the experimental setup and the model architectures are provided in the Appendix.

6.1 Implementation Details

Encoder as the feature extractor. For unconditional generation, the feature extractor that produces inputs for the guider network shares the CNN part of the encoder. We stop gradients from the guider network to the encoder CNN during training. For conditional generation, we use a pretrained feature extractor, trained in the same way as for unconditional generation.

Training procedure. As with many imitation-learning models (Bahdanau et al., 2017; Rennie et al., 2016; Sutskever et al., 2014), we first train the encoder-decoder part on the off-policy data with an MLE loss. We then use RL training to fine-tune the trained generator, adaptively transferring the training from the MLE loss to the RL loss, similar to Paulus et al. (2017) and Ranzato et al. (2016).

Initial states. We use the same initial state for both the generator and the guider network. For conditional generation, the initial state is the encoded latent code of the conditioning information, in both training and testing. For unconditional generation, the initial state is the encoded latent code of a target sentence during training and random noise at test time.

6.2 Adversarial Text Generation

We focus on adversarial text generation and compare our approach with a number of related works (Guo et al., 2017; Lin et al., 2017; Yu et al., 2017; Zhang et al., 2017; Zhu et al., 2018). In this setting, a discriminator in the GAN framework is added to the model in Figure 1 to guide the generator toward high-quality sentences. This is implemented by defining the final reward to be the output of the discriminator. All baseline experiments are implemented on the texygen platform (Zhu et al., 2018).
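As a minimal sketch of how this terminal signal can be attached to a sampled sentence (a hypothetical interface; the guider's intermediate feature-matching rewards that GMGAN adds on top are omitted here):

```python
import numpy as np

def terminal_reward(discriminator, token_ids):
    """Assign the discriminator's score for the finished sentence to the
    final generation step; earlier steps receive zero from this reward.
    `discriminator` is an assumed callable mapping token ids to the
    probability that the sentence is real."""
    rewards = np.zeros(len(token_ids), dtype=np.float32)
    rewards[-1] = float(discriminator(token_ids))
    return rewards
```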
We adopt BLEU scores, referenced against the test set (test-BLEU, where a higher value implies better quality) and against the generated samples themselves (self-BLEU, where a lower value implies better diversity) (Zhu et al., 2018), to evaluate the generated samples: test-BLEU measures the realism of generated samples, and self-BLEU measures their diversity. A good generator should achieve both a high test-BLEU score and a low self-BLEU score. In practice, we use △t = c = 4 and γ = 0.25. We call the proposed method guider-matching GAN (GMGAN) for unconditional text generation. More details of GMGAN are provided in Appendix D.

Short Text Generation: COCO Image Captions. We use the COCO Image Captions dataset, in which most sentences have a length of about 10 words. Since we consider unconditional text generation, only the image captions are used as training data. After preprocessing, we use 120,000 randomly sampled sentences as the training set and 10,000 as the test set. The BLEU scores of the different methods are listed in Table 1. We observe that GMGAN performs significantly better than the baseline models: besides achieving higher test-BLEU scores, it also generates samples with very good diversity in terms of self-BLEU. LeakGAN represents the state of the art in adversarial text generation, but its diversity is relatively poor (Zhu et al., 2018). We suspect that the high BLEU scores achieved by LeakGAN are due to mode collapse onto some good samples, which results in high self-BLEU scores. The other baselines achieve lower self-BLEU scores because they cannot generate reasonable sentences.

Long Text Generation: EMNLP2017 WMT. Following Zhu et al. (2018), we use the News section of the EMNLP2017 WMT4 dataset as our training data. The dataset consists of 646,459 words and 397,726 sentences; after preprocessing, the training set contains 5,728 words and 278,686 sentences. The BLEU scores of the different methods are provided in Table 2. Compared with the other methods, LeakGAN and GMGAN achieve comparable test-BLEU scores, demonstrating high-quality generated sentences. Again, LeakGAN tends to overfit the training data, leading to much higher (worse) self-BLEU scores, whereas GMGAN shows good diversity in long text generation with lower self-BLEU scores. The other baselines obtain both low self-BLEU and low test-BLEU scores, corresponding to more random generations.

Human Evaluation. Relying on the above metrics alone is not sufficient to evaluate the proposed method (Caccia et al., 2018). Following previous work (Guo et al., 2017), we perform human evaluation using Amazon Mechanical Turk, judging text quality based on readability and meaningfulness (whether sentences make sense) on the EMNLP2017 WMT News dataset. We ask the workers to rate each sentence with a score from 1 to 5, with 1 as the worst score and 5 as the best; the detailed criteria are listed in Table 3.

Score       Criteria
5 (Best)    It is consistent, informative, grammatically correct.
4           It is grammatically correct and makes sense.
3           It is mostly meaningful and with small grammatical error.
2           It needs some time to understand and has grammatical errors.
1 (Worst)   Meaningless, not readable.

Table 3: Human evaluation rating criteria.

Method       MLE        SeqGAN     RankGAN    GSGAN
Human score  2.45±0.14  2.57±0.15  2.91±0.17  2.48±0.14

Method       TextGAN    LeakGAN    GMGAN      Real
Human score  3.11±0.16  3.47±0.15  3.89±0.15  4.21±0.14

Table 4: Results of human evaluation with different methods on the EMNLP2017 WMT dataset.
We require all workers to be native English speakers with an approval rate higher than 90% and at least 100 completed assignments. We randomly sample 100 sentences generated by each model, and ten native English speakers on Amazon Mechanical Turk are asked to rate each sentence. The average human rating scores are shown in Table 4, indicating that GMGAN achieves higher human scores than the other methods. As examples, Table 5 shows samples generated by GMGAN and its baselines. The performance on the two datasets indicates that the sentences generated by GMGAN have higher global consistency and better readability than those of SeqGAN and LeakGAN. More generated examples are provided in the Appendix.

Ablation Study. We conduct ablation studies on long text generation to investigate the improvements brought by each part of the proposed method. We first test the benefit of using the guider network: among the methods compared, Guider is the standard MLE model augmented with the guider network. We further compare RL training with (i) only final rewards (Final), (ii) only feature-matching rewards (Stepwise), and (iii) both rewards combined, namely GMGAN. The results are shown in Table 6. We observe that the guider network plays an important role in improving performance. RL training with only the final rewards given by a discriminator typically damages generation quality, whereas the feature-matching rewards produce sentences with much better diversity thanks to their ability to drive exploration.

SeqGAN
  COCO:  (1) A person and black wooden table. (2) A closeup of a window at night.
  EMNLP: (1) She added on a page where it was made clear more old but public got said. (2) I think she’re guys in four years , and more after it played well enough.
LeakGAN
  COCO:  (1) A bathroom with a black sink and a white toilet next to a tub. (2) A man throws a Frisbee across the grass covered yard.
  EMNLP: (1) "I’m a fan of all the game, I think if that’s something that I’ve not," she said, adding that he would not be decided. (2) The UK is Google’s largest non-US market, he has added "20, before the best team is amount of fewer than one or the closest home or two years ago.
GMGAN
  COCO:  (1) Bicycles are parked near a row of large trees near a sidewalk. (2) A married couple posing in front of a piece of birthday cake.
  EMNLP: (1) "Sometimes decisions are big, but they’re easy to make," he told The Sunday Times in the New Year. (2) A BBC star has been questioned by police on suspicion of sexual assault against a 23-year-old man , it was reported last night.

Table 5: Examples of generated samples with different methods on the COCO and EMNLP datasets.

Method       MLE    Guider  Final  Stepwise  GMGAN
Test-BLEU-2  0.761  0.920   0.843  0.914     0.923
Test-BLEU-3  0.468  0.723   0.623  0.704     0.727
Test-BLEU-4  0.230  0.489   0.390  0.457     0.491
Test-BLEU-5  0.116  0.289   0.221  0.276     0.303
Self-BLEU-2  0.664  0.812   0.778  0.798     0.814
Self-BLEU-3  0.338  0.589   0.525  0.563     0.576
Self-BLEU-4  0.113  0.360   0.273  0.331     0.328

Table 6: Ablation study on EMNLP2017 WMT.

Figure 3: Guider-Matching Rewards Illustrations (panels (a) and (b) are discussed below).

Case Study of Guider-Matching Rewards. Figure 3 illustrates the feature-matching rewards during generation. Figure 3(a) shows an example of a failed generation during training, where two sentences are joined by the word ‘was’. It is grammatically wrong for the generator to select ‘was’, so the guider network gives a small reward. We can also see that the rewards become lower as the number of time steps grows, which is consistent with exposure bias.
Figure 3(b) shows a successful generation, where the rewards given by the guider are relatively high (larger than 0.5). These observations validate that: (i) exposure bias exists in MLE training; (ii) RL training with exploration can help reduce the effects of exposure bias; and (iii) the proposed feature-matching rewards provide meaningful guidance for maintaining sentence structure and fluency.

Model                                  Acc (%)  BLEU  BLEU-ref
CVAE (Shen et al., 2017)               73.9     20.7  7.8
Controllable (Hu et al., 2017)         86.7     58.4  -
BackTrans (Prabhumoye et al., 2018)    91.2     2.8   2.0
DeleteAndRetrieval (Li et al., 2018a)  88.9     36.8  14.7
Guider (Ours)                          92.7     52.1  25.4

Table 7: Non-parallel text style transfer results on the test set with human references.

6.3 Non-parallel Text-style Transfer

We test the proposed framework on the non-parallel text-style-transfer task, where the goal is to transfer a sentence in one style (e.g., positive) into a similar sentence with a different style (e.g., negative). Pair-wise information must be inferred from the training data, which makes the task more challenging. For a fair comparison, we use the same data and split as Shen et al. (2017): there are 444,000, 63,500 and 127,000 sentences with either positive or negative sentiment in the training, validation and test sets, respectively. To measure whether the original sentences (in the test set) have been transferred to the desired sentiment, we follow the setting of Shen et al. (2017) and employ a pretrained CNN classifier, which achieves an accuracy of 97.4% on the validation set, to evaluate the transferred sentences. We also report BLEU scores against the original sentences (BLEU) and against human references (BLEU-ref) (Li et al., 2018a) to evaluate the content preservation of the transferred sentences. The results are summarized in Table 7. Our proposed model exhibits higher transfer accuracy and better content preservation, indicating that the guider network provides good sentiment guidance while better preserving the content information.

From positive to negative
  Original:    all the employees are friendly and helpful .
  Transferred: all the employees are rude and unfriendly .
  Original:    i ’m so lucky to have found this place !
  Transferred: i ’m so embarrassed that i picked this place .
From negative to positive
  Original:    the service was slow .
  Transferred: the service was fast and friendly .
  Original:    i would never eat there again and would probably not stay there either .
  Transferred: i would definitely eat this place and i would recommend them .

Table 8: Generated samples of guided style transfer.

7 Conclusions

We have proposed a model-based imitation-learning framework for adversarial text generation, introducing a guider network to model the generation environment. The guider network provides a plan-ahead mechanism for next-word selection. Furthermore, the framework alleviates the sparse-reward issue, since intermediate rewards are used to optimize the generator. The proposed models are validated on both unconditional and conditional text generation, including adversarial text generation and non-parallel style transfer, and achieve improved performance in terms of generation quality and diversity on both types of task.

Acknowledgement

The authors would like to thank the anonymous reviewers for their insightful comments. The research was supported in part by DARPA, DOE, NIH, NSF and ONR.

References

Philip Bachman and Doina Precup. 2015. Data generation as sequential decision making. In NIPS.
Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, and Yoshua Bengio. 2017. An actor-critic algorithm for sequence prediction. In ICLR.

Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In ACL Workshop.

Nir Baram, Oron Anschel, and Shie Mannor. 2017. Model-based adversarial imitation learning. In ICML.

Massimo Caccia, Lucas Caccia, William Fedus, Hugo Larochelle, Joelle Pineau, and Laurent Charlin. 2018. Language GANs falling short. arXiv:1811.02549.

Tong Che, Yanran Li, Ruixiang Zhang, R. Devon Hjelm, Wenjie Li, Yangqiu Song, and Yoshua Bengio. 2017. Maximum-likelihood augmented discrete generative adversarial networks. arXiv:1702.07983.

Liqun Chen, Shuyang Dai, Chenyang Tao, Haichao Zhang, Zhe Gan, Dinghan Shen, Yizhe Zhang, Guoyin Wang, Ruiyi Zhang, and Lawrence Carin. 2018. Adversarial text generation via feature-mover's distance. In NeurIPS.

Liqun Chen, Yizhe Zhang, Ruiyi Zhang, Chenyang Tao, Zhe Gan, Haichao Zhang, Bai Li, Dinghan Shen, Changyou Chen, and Lawrence Carin. 2019. Improving sequence-to-sequence learning via optimal transport. In ICLR.

Ching-An Cheng, Xinyan Yan, Evangelos Theodorou, and Byron Boots. 2019. Accelerating imitation learning with predictive models. In AISTATS.

Pengyu Cheng, Renqiang Min, Dinghan Shen, Yizhe Zhang, Yitong Li, and Lawrence Carin. 2020. Improving disentangled text representation learning with information theoretical guidance. In ACL.

William Fedus, Ian Goodfellow, and Andrew M Dai. 2018. MaskGAN: Better text generation via filling in the _. In ICLR.

Zhe Gan, Chuang Gan, Xiaodong He, Yunchen Pu, Kenneth Tran, Jianfeng Gao, Lawrence Carin, and Li Deng. 2017. Semantic compositional networks for visual captioning. In CVPR.

Shixiang Gu, Timothy Lillicrap, Ilya Sutskever, and Sergey Levine. 2016. Continuous deep Q-learning with model-based acceleration. In ICML.

Caglar Gulcehre, Francis Dutil, Adam Trischler, and Yoshua Bengio. 2017. Plan, attend, generate: Character-level neural machine translation with planning. In NIPS.

Jiaxian Guo, Sidi Lu, Han Cai, Weinan Zhang, Yong Yu, and Jun Wang. 2017. Long text generation via adversarial training with leaked information. In AAAI.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In CVPR.

Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P Xing. 2017. Controllable text generation. In ICML.

Andrej Karpathy and Li Fei-Fei. 2015. Deep visual-semantic alignments for generating image descriptions. In CVPR.

Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. In ICLR.

Thanard Kurutach, Ignasi Clavera, Yan Duan, Aviv Tamar, and Pieter Abbeel. 2018. Model-ensemble trust-region policy optimization. In ICLR Workshop.

Matt J Kusner and Hernández-Lobato José Miguel. 2016. GANs for sequences of discrete elements with the Gumbel-softmax distribution. arXiv:1611.04051.

Juncen Li, Robin Jia, He He, and Percy Liang. 2018a. Delete, retrieve, generate: A simple approach to sentiment and style transfer. In NAACL.

Piji Li, Lidong Bing, and Wai Lam. 2018b. Actor-critic based training framework for abstractive summarization. arXiv:1803.11070.

Kevin Lin, Dianqi Li, Xiaodong He, Zhengyou Zhang, and Ming-Ting Sun. 2017. Adversarial ranking for language generation. In NIPS.

Anusha Nagabandi, Gregory Kahn, Ronald S Fearing, and Sergey Levine. 2017. Neural network dynamics for model-based deep reinforcement learning with model-free fine-tuning. In ICRA.
Weili Nie, Nina Narodytska, and Ankit Patel. 2019. RelGAN: Relational generative adversarial networks for text generation. In ICLR.

Deepak Pathak, Pulkit Agrawal, Alexei A Efros, and Trevor Darrell. 2017. Curiosity-driven exploration by self-supervised prediction. In ICML.

Romain Paulus, Caiming Xiong, and Richard Socher. 2017. A deep reinforced model for abstractive summarization. In ICLR.

Shrimai Prabhumoye, Yulia Tsvetkov, Ruslan Salakhutdinov, and Alan W Black. 2018. Style transfer through back-translation. In ACL.

Marc’Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2016. Sequence level training with recurrent neural networks. In ICLR.

Zhou Ren, Xiaoyu Wang, Ning Zhang, Xutao Lv, and Li-Jia Li. 2017. Deep reinforcement learning-based image captioning with embedding reward. In CVPR.

Steven J Rennie, Etienne Marcheret, Youssef Mroueh, Jarret Ross, and Vaibhava Goel. 2016. Self-critical sequence training for image captioning. In CVPR.

Alexander M Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. arXiv:1509.00685.

Dmitriy Serdyuk, Nan Rosemary Ke, Alessandro Sordoni, Adam Trischler, Chris Pal, and Yoshua Bengio. 2018. Twin networks: Matching the future for sequence generation. In ICLR.

Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2017. Style transfer from non-parallel text by cross-alignment. In NIPS.

Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In NIPS.

Richard S Sutton and Andrew G Barto. 1998. Reinforcement learning: An introduction. MIT Press.

Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. 2015. CIDEr: Consensus-based image description evaluation. In CVPR.

Zichao Yang, Zhiting Hu, Chris Dyer, Eric P Xing, and Taylor Berg-Kirkpatrick. 2018. Unsupervised text style transfer using language models as discriminators. In NeurIPS.

Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. 2017. SeqGAN: Sequence generative adversarial nets with policy gradient. In AAAI.

Ruiyi Zhang, Changyou Chen, Zhe Gan, Zheng Wen, Wenlin Wang, and Lawrence Carin. 2020. Nested-Wasserstein self-imitation learning for sequence generation. In AISTATS.

Ruiyi Zhang, Changyou Chen, Chunyuan Li, and Lawrence Carin. 2018. Policy optimization as Wasserstein gradient flows. In ICML.

Yizhe Zhang, Zhe Gan, Kai Fan, Zhi Chen, Ricardo Henao, Dinghan Shen, and Lawrence Carin. 2017. Adversarial feature matching for text generation. In ICML.

Yaoming Zhu, Sidi Lu, Lei Zheng, Jiaxian Guo, Weinan Zhang, Jun Wang, and Yong Yu. 2018. Texygen: A benchmarking platform for text generation models. In SIGIR.

A Additional Experiments

More Generated Samples of Text Generation. Table 13 lists more samples generated by the proposed GMGAN and its baselines. From these examples we can see that: (i) SeqGAN tends to generate short sentences whose readability and fluency are very poor; (ii) LeakGAN tends to generate very long sentences, usually longer than the original sentences, and even with good local fluency its sentences are often not semantically consistent. By contrast, our proposed GMGAN generates sentences of similar length to the original sentences, with good readability and fluency. This is also validated by the human evaluation.
Image Captioning. We conduct experiments on image captioning (Karpathy and Fei-Fei, 2015) to investigate the benefits brought by the guider network. In image captioning, instead of using a discriminator to define final rewards for a generated sentence, we adopt evaluation metrics computed against human references. The final rewards become more important here, as they contain reference (ground-truth) information, and the feature-matching rewards act as a regularizer of the final rewards. We call our model in this setting the guider-matching sequence training (GMST) model; an overview of GMST is provided in the Appendix.

We test the proposed model on the MS COCO dataset (Karpathy and Fei-Fei, 2015), which contains 123,287 images in total, each annotated with at least 5 captions. Following Karpathy's split (Karpathy and Fei-Fei, 2015), 5,000 images are used for validation and another 5,000 for testing. We report BLEU-k (k from 1 to 4), CIDEr (Vedantam et al., 2015) and METEOR (Banerjee and Lavie, 2005) scores. We consider two settings: (i) using a pre-trained 152-layer ResNet (He et al., 2016) for feature extraction, where we take the output of the 2048-way pool5 layer of ResNet-152 pretrained on ImageNet; and (ii) using semantic tags detected from the image as features (Gan et al., 2017). We use an LSTM with 512 hidden units and mini-batches of size 64. Adam (Kingma and Ba, 2014) is used for optimization, with learning rate 2 × 10^-4. We pretrain the captioning model for at most 20 epochs, then train it with reinforcement learning for 20 epochs, and test the model that performs best on the validation set.

The results are summarized in Table 9. When comparing an AutoEncoder (AE) with a variant implemented by adding a guider network (Guider), improvements are observed. We then compare the proposed GMST with SCST. The main difference between GMST and SCST is that the former employs our proposed feature-matching reward, while the latter only considers the final reward provided by the evaluation metrics. GMST achieves higher scores than SCST on its optimized metrics. The gain of GMST over SCST comes from the intermediate rewards, which help maintain semantic consistency and sentence structure, preventing the damage to language fluency caused by focusing only on the evaluation metrics. Specifically, the average length of sentences generated with a Guider is 15.7, versus 12.9 for the traditional generator.

Method         BLEU-3  BLEU-4  METEOR  CIDEr
No attention, Greedy, ResNet-152
MLE            37.2    26.5    23.1    83.9
Guider         38.0    27.3    23.9    85.4
MIXER (BLEU)   39.1    29.3    22.3    79.7
SCST (BLEU)    41.6    31.6    23.1    87.5
GMST (BLEU)    41.8    32.1    23.4    87.9
MIXER (CIDEr)  39.1    27.7    23.0    90.9
SCST (CIDEr)   41.2    30.0    24.3    98.6
GMST (CIDEr)   41.3    30.3    24.4    100.1
No attention, Greedy, Tag
MLE            39.4    28.8    24.4    91.3
Guider         39.6    29.0    24.6    92.7
MIXER (BLEU)   42.4    32.2    23.7    90.4
SCST (BLEU)    43.9    33.6    24.5    95.9
GMST (BLEU)    44.3    33.9    24.5    97.1
MIXER (CIDEr)  42.1    30.8    24.7    101.2
SCST (CIDEr)   43.6    32.1    25.4    105.5
GMST (CIDEr)   44.1    32.6    25.5    107.4

Table 9: Results for image captioning on the MS COCO dataset; the higher the better for all metrics.

Comparison with MLE. The guider network models long-term dependencies and overcomes the sparse-reward issue, inspired by model predictive control (MPC). These experiments aim to quantify the gain from incorporating MPC into imitation learning, i.e., on top of MLE training and RL fine-tuning. We provide an additional comparison with Caccia et al. (2018) and evaluate diversity and quality with BLEU scores.
We also report the F1-BLEU, which considers both diversity and quality, in Table 10.

Method                     Test-BLEU-2  -3     -4     Self-BLEU-2  -3     -4     F1-BLEU-2  -3     -4
MLE (Caccia et al., 2018)  0.902        0.706  0.470  0.787        0.646  0.485  0.345      0.472  0.491
Guider (MLE)               0.920        0.723  0.489  0.812        0.589  0.360  0.312      0.524  0.554
GMGAN (Ours)               0.923        0.727  0.491  0.814        0.576  0.328  0.310      0.537  0.567

Table 10: Additional comparison with MLE (Caccia et al., 2018).

B Discussions of the Guider Network

The guider network can be regarded as a model of the text-generation environment, namely a model of the dynamics. It takes the current s_t and a_t as input and outputs an estimate of the next state s_{t+△t} at time t + △t. In the text-generation setting, when △t = 1, we can obtain the feature representation of the current generated sentence exactly if the guider does not help with word selection; if it does, we cannot obtain this feature exactly, since the guider's prediction partly determines the next token. In practice we use △t = c = 4, which gives the guider planning ability to help with word selection and guide sentence generation.

C Experimental Setup

C.1 Adversarial Text Generation

For Image COCO, the learning rates of the generator and the guider are both 0.0002, and the maximum sequence length is 25. For WMT, the learning rates of the generator and the guider are both 0.0002, and the maximum sequence length is 50. We use c = 4, chosen from [2, 3, 4, 5, 8], and γ = 0.25, chosen from [0.1, 0.25, 0.5, 0.75, 0.99]. We use the Adam (Kingma and Ba, 2014) optimization algorithm to train the guider, generator and discriminator. For both tasks, the dimension of the LSTM state is 300 for the generator and 300 for the guider, and the word-embedding dimension is 300. The output dimension of the linear transformation connecting the guider and the generator is 600×10. The learning rate of the discriminator is 0.001.

C.2 Conditional Generation

For image captioning, the learning rates of the generator and the guider are both 0.0002, and the maximum sequence length is 25. For style transfer, the learning rates of the generator and the guider are both 0.0001, and the maximum sequence length is 15.

C.3 Network Structure of Models

The dimension of the LSTM state is 300 for the generator and 300 for the guider, and the word-embedding dimension is 300. The encoder and discriminator architectures are given in Tables 11 and 12.

(Sub-)sequence to latent features
  Input: 300 × Seq. Length sequences
  5 × 300 conv., 300 ReLU, stride 2
  5 × 1 conv., 600 ReLU, stride 2
  MLP output 600, ReLU

Table 11: Architecture of the encoder.

Sequence to a scalar value
  Input: 300 × Seq. Length sequences
  5 × 300 conv., 300 ReLU, stride 2
  5 × 1 conv., 600 ReLU, stride 2
  MLP output 1, ReLU

Table 12: Architecture of the discriminator.

D Algorithm Details

Algorithm 2: Guider Matching Generative Adversarial Network (GMGAN)
Require: generator policy πφ; discriminator Dθ; guider network Gψ; a sequence dataset S = {X_{1...T}}.
 1: Initialize Gψ, πφ, Dθ with random weights.
 2: Pretrain the generator πφ, guider Gψ and discriminator Dθ with the MLE loss.
 3: repeat
 4:   for g-steps do
 5:     Generate a sequence Y_{1...T} ∼ πφ.
 6:     Compute Q_t via Eq. (5), and update πφ with the policy gradient via Eq. (8).
 7:   end for
 8:   for d-steps do
 9:     Generate sequences from πφ.
10:     Train the discriminator Dθ.
11:   end for
12: until GMGAN converges
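Read as code, the alternating loop of Algorithm 2 can be sketched as follows. This is a hedged Python sketch with assumed helper interfaces (sampling, the step-wise returns of Eq. (5) and the policy-gradient update of Eq. (8) are abstracted behind callables), not the paper's released implementation.

```python
def train_gmgan(generator, discriminator, guider, data,
                compute_q, n_iterations=1000, g_steps=1, d_steps=5):
    """Compact sketch of Algorithm 2 (GMGAN).

    All arguments are assumed interfaces: generator.sample(),
    generator.policy_gradient_update(y, q), discriminator.update(real, fake),
    data.sample_batch(), and compute_q(y, guider, discriminator), which
    stands in for the step-wise returns of Eq. (5) used in the Eq. (8) update.
    MLE pretraining of all modules is assumed to have been done already.
    """
    for _ in range(n_iterations):
        # g-steps: sample sequences and update the generator with policy gradients
        for _ in range(g_steps):
            y = generator.sample()                   # Y_{1..T} ~ pi_phi
            q = compute_q(y, guider, discriminator)  # per-step returns Q_t
            generator.policy_gradient_update(y, q)
        # d-steps: refresh the discriminator on real vs. generated sequences
        for _ in range(d_steps):
            fake = generator.sample()
            real = data.sample_batch()
            discriminator.update(real, fake)
```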
Figure 4: Examples of image captioning on MS COCO. The captions shown are:
  Res152-SCST: a group of zebras standing in a field .
  Res152-GMST: a herd of zebras standing in a field of grass .
  Tag-SCST: a zebra and a zebra drinking water from a field of grass .
  Tag-GMST: a group of zebras drinking water in the field of grass .
  Res152-SCST: a group of people walking down a skateboard .
  Res152-GMST: a group of people standing on a street with a skateboard .
  Tag-SCST: a woman walking down a street with a skateboard .
  Tag-GMST: a black and white photo of a man riding a skateboard .
  Res152-SCST: a baby sitting next to a baby giraffe .
  Res152-GMST: a little baby sitting next to a baby holding a teddy bear .
  Tag-SCST: a black and white photo of a woman holding a teddy bear .
  Tag-GMST: a black and white photo of a man and a woman holding a teddy bear .
  Res152-SCST: a traffic light on a street with a in the .
  Res152-GMST: a traffic light on the side of a street .
  Tag-SCST: a traffic light on a street with a green .
  Tag-GMST: a red traffic light sitting on the side of a road .

Algorithm 3: Guider Matching Sequence Training (GMST)
Require: generator policy πφ; discriminator Dθ; guider network Gψ; a sequence dataset S = {Y_{1...T}} and its conditioning information I = {X}.
 1: Initialize Gψ, πφ, Dθ with random weights.
 2: Pretrain the generator πφ, guider Gψ and discriminator Dθ with the MLE loss.
 3: repeat
 4:   Generate a sequence Y_{1...T} ∼ πφ.
 5:   Compute evaluation scores based on the references.
 6:   Compute Q^s_t via Eq. (6), and update πφ with the policy gradient via Eq. (8).
 7: until GMST converges

Method Generated Examples Real Data What this group does is to take down various different websites it believes to be criminal and leading to terrorist acts . Over 1 , 600 a day have reached Greece this month , a higher rate than last July when the crisis was already in full swing . " We ’ re working through a legacy period , with legacy products that are 10 or 20 years old ," he says . ’ The first time anyone says you need help , I ’ m on the defensive , but that ’ s all that I know . Out of those who came last year , 69 per cent were men , 18 per cent were children and just 13 per cent were women . He has not played for Tottenham ’ s first team since and it is now nearly two years since he completed a full Premier League match for the club . So you have this man who seems to represent this way to live and how to be a good citizen of the world . CNN : You made that promise , but it wasn ’ t until 45 years later that you acted on it . This is a part of the population that is notorious for its lack of interest in actually showing up when the political process takes place . They picked him off three times and kept him out of the end zone in a 22 - 6 victory at Arizona in 2013 . The treatment was going to cost £ 12 , 000 , but it was worth it for the chance to be a mum . But if black political power is so important , why hasn ’ t it made more of a difference in the lives of poor black people in Baltimore such as Gray ? Local media reported the group were not looking to hurt anybody , but they would not rule out violence if police tried to remove them . The idea was that couples got six months ’ leave per child with each parent entitled to half the days each . The 55 to 43 vote was largely split down party lines and fell short of the 60 votes needed for the bill to advance . Taiwan ’ s Defence Ministry said it was " aware of the information ," and declined further immediate comment , Reuters reported . I ’ m racing against a guy who I lost a medal to - but am I ever going to get that medal back ? Others pushed back their trips , meaning flights early this week are likely to be even more packed than usual .
" In theory there ’ s a lot to like ," Clinton said , " but ’ in theory ’ isn ’ t enough . If he makes it to the next election he ’ ll lose , but the other three would have lost just as much . SeqGAN Following the few other research and asked for " based on the store to protect older , nor this . But there , nor believe that it has reached a the person to know what never - he needed . The trump administration later felt the alarm was a their doctors are given . We have been the time of single things what people do not need to get careful with too hurt after wells then . If he was waited same out the group of fewer friends a more injured work under it . It will access like the going on an " go back there and believe . Premier as well as color looking to put back on a his is . So , even though : " don ’ t want to understand it at an opportunity for our work . I was shocked , nor don ’ t know if mate , don ’ t have survived , So one point like ten years old , but a sure , nor with myself more people substantial . And if an way of shoes of crimes the processes need to run the billionaire . Now that their people had trained and people the children live an actor , nor what trump had . However , heavily she been told at about four during an innocent person . LeakGAN The country has a reputation for cheap medical costs and high - attack on a oil for more than to higher its - wage increase to increase access to the UK the UK women from the UK ’ s third nuclear in the last couple of weeks . I ’ ve been watching it through , and when the most important time it is going to be so important . I ’ m hopeful that as that process moves along , that the U . S . Attorney will share as much as far as possible . The main thing for should go in with the new contract , so the rest of the Premier League is there to grow up and be there ," she said . I think the main reason for their sudden is however , I didn ’ t get any big thing ," he says , who is the whole problem on the U . S . Supreme Court and rule had any broken . The average age of Saudi citizens is still very potential for the next year in the past year , over the last year he realised he has had his massive and family and home . " I think Ted is under a lot of people really want a " and then the opportunity to put on life for security for them to try and keep up . The new website , set to launch March 1 , but the U . S is to give up the time the case can lead to a more than three months of three months to be new home . It ’ s a pub ; though it was going to be that , but , not , but I am not the right thing to live ," she said . " I ’ m not saying method writing is the only way to get in the bedroom to get through the season and we ’ ll be over again ," he says . I ’ m not suggesting that our jobs or our love our years because I have a couple of games where I want it to be . The German government said 31 suspects were briefly detained for questioning after the New Year ’ s Eve trouble , among them not allowed to stay in the long - term . It was a punishment carried out by experts in violence , and it was hard to me he loved the man and he ’ s got off to support me in the future . " I ’ ve known him , all that just over the last two weeks and for the last 10 years , I ’ ll have one day of my life ," she said . The main idea behind my health and I think we saw in work of our country was in big fourth - up come up with a little you ’ ve ever . 
he Kings had needed scoring from the left side , too , and King has provided that since his return are the of the first three quarters of the game . It ’ s going to be a good test for us and we are on the right way to be able to get through it on every day on the year . GMGAN But it ’ s grown up a little now , and might be ready for actually putting into your house . More than a dozen Republicans and a handful of Democrats have announced they are running for their party ’ s 2016 presidential nomination , and when they were wealthy in 2010 right , what he has . And with a growing following of more than 45 , 000 people on Facebook , awareness of their work is on the rise . In all age groups , for instance , more people cited retirement as the reason for being out of the labour force , and it wasn ’ t a problem in big . I had to train really , really hard and that ’ s the advice I can give , because if you don ’ t work hard somebody else will . I am picking up two cars tomorrow and taking them down south tomorrow if all goes according to plan ," he said . The team looked into the influence of marriage on weight loss after surgery - as well as the effects of surgery on the quality of his administration and rest on the world . Two former prime ministers were set to face off in the second round of a presidential election in New Hampshire . A third more complaints were made about the accounts between April and December last year than in the whole of 2014 / 15 . United Airlines subsequently worked to get those passengers back in the air so they could get to Colorado , the airline spokesman said . Mr Brown was standing in the kitchen when he started to feel a bit cold - and he noticed the door had disappeared . She has focused instead on where she parts ways with her rival on other issues , like to have someone with a president has revealed . Once , an ex - boyfriend and I lived with her for two months after we came back from travelling . He had faced 10 years in prison on the charges but the first government have been made at the recent peak . " We weren ’ t exposed to things we didn ’ t have in the same way kids these days are ," said Obama . I have no idea what it is , but there is definitely an intelligence - a higher intelligence - at work you have you want to make sure you are going into the local community . His current club have confirmed they would be willing to listen to offers for the attacking midfielder , but we did not have the right manager - there ’ s summer to be in a big . We are in the last 16 and the target is always to win in the Champions League and will continue at the best level to be the coach . People are seeing that you can go into real estate and do really well and do something we want and if we make the right decision , and how we will be doing it is . Table 13: Generated Examples on EMNLP2017 WMT. 2530 Original: i ’m so lucky to have found this place ! Guider: i ’m so embarrassed that i picked this place . Original: awesome place , very friendly staff and the food is great ! Guider: disgusting place , horrible staff and extremely rude customer service . Original: this was my first time trying thai food and the waitress was amazing ! Guider: this was my first experience with the restaurant and we were absolutely disappointed . Original: thanks to this place ! Guider: sorry but this place is horrible . Original: the staff was warm and friendly . Guider: the staff was slow and rude . Original: great place and huge store . Guider: horrible place like ass screw . 
Original: the service is friendly and quick especially if you sit in the bar . Guider: the customer service is like ok - definitely a reason for never go back .. Original: everything is always delicious and the staff is wonderful . Guider: everything is always awful and their service is amazing . Original: best place to have lunch and or dinner . Guider: worst place i have ever eaten . Original: best restaurant in the world ! Guider: worst dining experience ever ! Original: you ’ll be back ! Guider: you ’re very disappointed ! Original: you will be well cared for here ! Guider: you will not be back to spend your money . Original: they were delicious ! Guider: they were overcooked . Original: seriously the best service i ’ve ever had . Guider: seriously the worst service i ’ve ever experienced . Original: it ’s delicious ! Guider: it ’s awful . Table 14: Sentiment transfer samples on Yelp dataset (positive →negative). 2531 Original: gross ! Guider: amazing ! Original: the place is worn out . Guider: the place is wonderful . Original: very bland taste . Guider: very fresh . Original: terrible service ! Guider: great customer service ! Original: this place totally sucks . Guider: this place is phenomenal . Original: this was bad experience from the start . Guider: the food here was amazing good . Original: very rude lady for testing my integrity . Guider: very nice atmosphere for an amazing lunch ! Original: they recently renovated rooms but should have renovated management and staff . Guider: great management and the staff is friendly and helpful . Original: this store is not a good example of sprint customer service though . Guider: this store is always good , consistent and they ’re friendly . Original: one of my least favorite ross locations . Guider: one of my favorite spots . Original: horrible in attentive staff . Guider: great front desk staff ! Original: the dining area looked like a hotel meeting room . Guider: the dining area is nice and cool . Original: never ever try to sell your car at co part ! Guider: highly recommend to everyone and recommend this spot for me ! Original: i ordered the filet mignon and it was not impressive at all . Guider: i had the lamb and it was so good . Table 15: Sentiment transfer samples on Yelp dataset (negative →positive).
2020
227
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2532–2538 July 5 - 10, 2020. © 2020 Association for Computational Linguistics 2532 Simple and Effective Retrieve-Edit-Rerank Text Generation Nabil Hossain† Marjan Ghazvininejad♠ Luke Zettlemoyer♠ †Dept. Computer Science, University of Rochester ♠Facebook AI Research [email protected], {ghazvini,lsz}@fb.com Abstract Retrieve-and-edit seq2seq methods typically retrieve an output from the training set and learn a model to edit it to produce the final output. We propose to extend this framework with a simple and effective post-generation ranking approach. Our framework (i) retrieves several potentially relevant outputs for each input, (ii) edits each candidate independently, and (iii) re-ranks the edited candidates to select the final output. We use a standard editing model with simple task-specific reranking approaches, and we show empirically that this approach outperforms existing, significantly more complex methodologies. Experiments on two machine translation (MT) datasets show new state-of-the-art results. We also achieve near state-of-the-art performance on the Gigaword summarization dataset, where our analyses show that there is significant room for performance improvement with better candidate output selection in future work. 1 Introduction Retrieve-and-edit text generation methods have received significant recent interest; editing human-authored text can potentially avoid many of the challenges that are seen while generating text from scratch, including the tendency to be overly repetitive or to degrade on longer texts (Holtzman et al., 2018, 2019). Retrieve-and-edit methods have been developed for summarization (Cao et al., 2018), machine translation (Wu et al., 2019), language modeling (Guu et al., 2018), and conversation generation (Weston et al., 2018). These methods first retrieve a single output from the training set, and then use a learned model to edit it into the final output. In this paper, we show that generation performance can be improved with a retrieve-edit-rerank approach that instead retrieves a set of outputs from the training set, edits each independently, and then re-ranks the results to produce the final output. Figure 1 shows an overview of the approach. Figure 1: Our retrieve-edit-rerank framework, generating candidate outputs with three retrieved outputs, and re-ranking ˆy2 as the best candidate post-generation. We use standard keyword-based retrieval and a simple editor, where the retrieved output is concatenated to the original input to train a Transformer-based seq2seq editing model. Our final re-ranking step is task specific, but again very simple in every case. Our goal here is not to find the best possible way to do the re-ranking. Instead, we show that gains are possible and that it helps to see what edits are made for multiple candidates before making the final decision, instead of following previous work by trying to select a single candidate before editing. We evaluate performance on the Gigaword summarization dataset (Rush et al., 2015) and on the English to Dutch (EN-NL) and the English to Hungarian (EN-HU) machine translation (MT) tasks, following Bulte and Tezcan (2019). For MT, we experimented with different re-ranking schemes but found that the original model score (log-likelihood) worked best, amounting to extended beam search within the complete retrieve-edit-rerank pipeline. 
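As an illustrative aside (not part of the paper), the three stages of this framework can be summarized by a minimal, framework-agnostic sketch; the callables `retrieve`, `edit`, and `rerank` are placeholders for the components described in Section 3.

```python
from typing import Callable, List, Tuple

def retrieve_edit_rerank(
    source: str,
    retrieve: Callable[[str, int], List[Tuple[str, str]]],  # source -> [(x', y'), ...]
    edit: Callable[[str, str], str],                         # seq2seq editor: (x, y') -> y_hat
    rerank: Callable[[List[str]], str],                      # picks the final output
    n_candidates: int = 3,
) -> str:
    """Retrieve several neighbours, edit each independently, re-rank the results."""
    neighbours = retrieve(source, n_candidates)                        # (i) retrieve
    candidates = [edit(source, y_prime) for _, y_prime in neighbours]  # (ii) edit each candidate
    return rerank(candidates)                                          # (iii) post-generation re-rank
```

Illustrative (hedged) choices for the three callables are sketched alongside the corresponding sections below.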
We improve performance by 6.5 BLEU points on EN-NL and 7.5 on EN-HU over the state-of-art Neural Fuzzy Repair system (Bulte and Tezcan, 2019). On Gigaword, we simply re-rank by returning the most common output, and we achieve up 2533 to 1.2 ROUGE improvement over the comparable Re3Sum model (Cao et al., 2018). Finally, through qualitative analysis, we find evidence that better post-generation ranking is feasible and can lead to substantial performance improvement, which emphasizes the need for future work in developing new post-generation ranking techniques. 2 Related Work Recent work has developed retrieve-and-edit approaches for many tasks, including dialogue generation (Weston et al., 2018), language modeling (Guu et al., 2018), code generation (Hashimoto et al., 2018), neural machine translation (NMT) (Gu et al., 2018; Zhang et al., 2018; Cao and Xiong, 2018) and post-editing for NMT (Hokamp, 2017; Dabre et al., 2017). Candidate ranking has served as a core part in some retrieval-based models (Ji et al., 2014; Yan et al., 2016), but these models do not edit the retrieved candidates. For machine translation, Bulte and Tezcan (2019) developed a retrieve-and-edit based LSTM model called Neural Fuzzy Repair (NFR), which they applied on two MT datasets obtained from (Steinberger et al., 2012). Using a keyword based followed by a token edit distance based retrieval method called sss+ed, they showed that concatenating the source and retrieved outputs as the input significantly boosts translation quality. NFR is trained by augmenting the source with up to 3 retrieved outputs, which are fed together into the editing model in several ways. Our approach, instead, simply edits multiple candidates separately and then re-ranks the final results. For summarization, Re3Sum (Cao et al., 2018) is an LSTM-based model developed under the retrieve-and-edit framework, and tested on the Gigaword summarization (also headline generation) task (Rush et al., 2015). Re3Sum retrieves 30 headlines from the training set using the popular information retrieval method Lucene1. Next, it learns a model to pick the single best retrieved headline, which is then edited. BiSET (Wang et al., 2019) is a retrieve-and-edit framework with more complex retrieval ranking and editing stages, which again edits only a single output. We compare our framework’s performance against those of NFR, Re3Sum, and BiSET, showing the effectiveness of post-generation ranking. 1https://lucene.apache.org/ 3 Framework Figure 1 shows our proposed retrieve-edit-rerank framework. It has three components: (i) a retrieval mechanism to extract output from the training set; (ii) a seq2seq model to generate output from the source concatenated with the retrieved output; and (iii) a post-generation ranking module to select a high quality output from a set of generated candidates. For the rest of this paper, we will use (x, y) to represent a source and target pair, (x′, y′) to denote a retrieved source and output pair from the training set, and ˆy to represent the edited/generated output. 3.1 Retrieve Given input x, the goal of the retrieve module is to find a similar training example (x′, y′). We experiment with both Lucene and sss+ed. These can be replaced with any other retrieval methods in the literature. 
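As a rough stand-in for the keyword-based retrieval step (the paper itself uses Lucene or sss+ed), a TF-IDF nearest-neighbour search over the training sources captures the same idea. This sketch is an approximation for illustration only, not the authors' retrieval code.

```python
from typing import List, Tuple
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import linear_kernel

class KeywordRetriever:
    """Toy keyword-based retriever: return the top-k training pairs (x', y')
    whose source side is most similar to the query under TF-IDF cosine
    similarity.  A stand-in for Lucene / sss+ed, not a reimplementation."""

    def __init__(self, train_sources: List[str], train_targets: List[str]):
        self.sources = train_sources
        self.targets = train_targets
        self.vectorizer = TfidfVectorizer(lowercase=True)
        self.index = self.vectorizer.fit_transform(train_sources)

    def retrieve(self, query: str, k: int = 3) -> List[Tuple[str, str]]:
        # score the query against every indexed training source
        scores = linear_kernel(self.vectorizer.transform([query]), self.index)[0]
        top = scores.argsort()[::-1][:k]
        return [(self.sources[i], self.targets[i]) for i in top]
```

Re-scoring the TF-IDF shortlist with token edit distance would bring this closer in spirit to sss+ed.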
3.2 Joint Pre-ranking and Generation Similar to Re3Sum, we design a model that can jointly learn to produce the edited output ˆy and re-rank the retrieved outputs y′, which we refer to as pre-ranking, a common practice to determine which retrieved outputs are worth editing. For editing, we use a Transformer as our seq2seq model. We provide the model a concatenated input x[SEP]y′, where [SEP] is a separator token, and we train it to produce the original target y with a standard cross entropy loss. For pre-ranking, we add a [RANK] token to the Transformer’s encoder analogous to the [CLS] token in BERT (Devlin et al., 2019). We train the model to predict the similarity between y′ and y as the output of the [RANK] token, akin to predicting a token from a different vocabulary (Ghazvininejad et al., 2019). We use a cross entropy loss based on a text similarity metric2, adding it to the Transformer’s loss function. 3.3 Post-generation Ranking For source x, given a set of N input (x concatenated with N retrieved outputs y′) and generated candidate output pairs: {(x[SEP]y′ 1; ˆy1), . . . , (x[SEP]y′ N; ˆ yN)} 2we use BLEU for MT and ROUGE-L for Gigaword. This can be any other text similarity metric. 2534 this module’s objective is to select a high quality candidate output. Ideally, we want to find: ˆy∗= arg max ˆyi similarity(ˆyi , y), 1 ≤i ≤N For post-ranking, we use simple ranking functions that work effectively. For MT, we calculate the log-likelihood score of the generated candidate outputs using our trained model (Transformer based) and we choose the candidate that gets the highest model score. For Gigaword, our ranking function simply chooses the most frequently generated output from the list of candidates. In preliminary experiments, we tried other ranking methods, but we did not see a gain compared to our simple post-ranking methods. Our goal here is not to find the best possible way to do the post-ranking, but only to show that gains are possible. In particular, running the preranker over a larger candidate list is not enough; we find that it is better to see what edits are made for multiple candidates before making the final decision. This strongly suggests that the direction is worthy of future work, to determine how to best combine the evidence from x, x′, y′ and ˆy. 4 Experiments 4.1 Datasets and Evaluation Metrics We test our proposed framework on the machine translation datasets English to Dutch (EN-NL) and English to Hungarian (EN-HU) following previous work (Bulte and Tezcan, 2019). The training, validation, and test set sizes, respectively, are 2.4M, 3000 and 3207, and both datasets have the same source English sentences. Additionally, we apply our framework on the Gigaword summarization task (Rush et al., 2015). Here, the training, validation, and test set sizes are 3.8M, 189k, and 1951 respectively. We evaluate MT performance using BLEU3 scores. For evaluation on Gigaword, we use the F1 scores for ROUGE-1, ROUGE-2, and ROUGE-L with commonly used evaluation parameters4. 4.2 Implementation Details We preprocess the data with Byte Pair Encoding (BPE) (Sennrich et al., 2016). Our model is built using the Fairseq library (Ott et al., 2019). We 3we use the multi-bleu.perl script from Moses. 4ROUGE evaluation parameters: -m -n 2 -w 1.2 follow most of the Transformer base hyperparameter configurations Vaswani et al. (2017). We use a 6-layer Transformer with 8 attention heads per layer, 512 model dimensions, 2048 hidden dimensions and shared embeddings. 
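Returning to the post-generation ranking of §3.3 above, both task-specific rules are simple enough to sketch directly; the `log_likelihood` callable below stands in for scoring a candidate with the trained editing model and is an assumed interface, not the paper's code.

```python
from collections import Counter
from typing import Callable, List

def rank_by_frequency(candidates: List[str]) -> str:
    """Gigaword-style post-ranking: pick the candidate generated most often
    across the N augmented inputs (ties broken by earliest occurrence)."""
    counts = Counter(candidates)
    return max(candidates, key=lambda c: (counts[c], -candidates.index(c)))

def rank_by_model_score(candidates: List[str],
                        log_likelihood: Callable[[str], float]) -> str:
    """MT-style post-ranking: pick the candidate scored highest by the trained
    editing model; `log_likelihood` is an assumed scoring interface."""
    return max(candidates, key=log_likelihood)
```

For MT, `log_likelihood` would return the editor's sequence log-probability for the candidate given its augmented input, which the paper reports worked best among the re-ranking schemes it tried.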
Our Transformer uses segment embeddings, with one segment for x and another for y′. For training, we use a learning rate of 5e−4, a batch size of 128k tokens, the Adam optimizer (Kingma and Ba, 2014), a dropout of 0.3, and a joined dictionary. We train our models for 200k update steps, and we calculate validation loss following each epoch to choose our final model. For test, we use a beam size of 5. 4.3 Training For MT, we use the 3 best retrieved outputs per source x to create 4 training examples: {x, x[SEP]y′ 1, x[SEP]y′ 2, x[SEP]y′ 3} This is similar to NFR, which then uses for test, the input x[SEP]y′ 1 if it exists, and only x otherwise. We use both sss+ed and Lucene to compare how retrieval impacts translation quality. For Gigaword, we train with 10 retrieved outputs as opposed to 30 retrieved by (Cao et al., 2018), and for testing we use 30 retrieved outputs. As a baseline, we also train a Transformer without retrieval. 4.4 Results The MT results in Table 1 show that for both ENNL and EN-HU, the Transformer without retrieval slightly outperforms the LSTM based NFR which includes retrieval. Replacing LSTM with Transformer in NFR (Tr + sss+ed) gives roughly a 4 point increase in BLEU. Replacing sss+ed with Lucene further increases BLEU by 2 points. Generating from x concatenated with the best pre-ranked output further improves performance, System EN-NL EN-HU LSTM 51.45 40.47 NFR 58.91 48.24 Transformer (Tr) 59.88 49.61 Tr + sss+ed (NFR equivalent) 62.86 52.74 Tr + Lucene + x [SEP] y′ 1 64.92 55.16 Tr + Lucene + pre-rank 65.20 55.36 Tr + Lucene + post-rank (ours) 65.43 55.73 Table 1: BLEU scores on the MT datasets. y′ 1 implies using the best retrieved output from Lucene. LSTM results are reported from Bulte and Tezcan (2019). 2535 System R-1 R-2 R-L LSTM (from Cao et al. (2018)) 35.01 16.55 32.42 Re3Sum 37.04 19.03 34.46 Transformer (Tr) 37.68 18.79 34.87 Tr + Luc + x [SEP] y′ 1 37.51 19.15 34.86 Tr + Luc + pre-rank 36.46 18.01 33.85 Tr + Luc + post-rank (ours) 38.23 19.58 35.60 BiSET 39.11 19.78 36.87 Table 2: ROUGE scores for Gigaword summarization. y′ 1 implies using the best retrieved output from Lucene. and the best results are obtained by post-ranking, for which we use the highest scored output according to the model. Overall, our retrieve-edit-rerank system with Transformer, Lucene, and a simple but effective post-ranking function obtains a BLEU score increase of 6.52 on EN-NL and 7.49 on ENHU over the current state of art NFR model. Results on Gigaword are shown in Table 2. The Transformer baseline obtains more than a 2 point increase in ROUGE over the LSTM baseline, and it achieves comparable performance to Re3Sum which is LSTM based and uses retrieval. While pre-ranking before editing hurts performance, with post-ranking, our model is able to outperform the Transformer baseline and Re3Sum, obtaining between 0.55-1.24 improvement in ROUGE scores. Our model comes slightly short of the retrieveand-edit based state-of-art BiSET (Wang et al., 2019). However, BiSET uses more complex preranking and editing stages which could also incorporated into our model. We leave this exploration to future work as it is largely orthogonal to postranking, which is the focus of our efforts. Overall, with retrieve-edit-rerank, our model outperforms comparable systems which use retrieveand-edit but no post-generation ranking, demonstrating that a simple post-ranking can boost the performance across two challenging tasks. 
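To make the training setup of §4.3 concrete, the sketch below builds the augmented (input, target) pairs, mapping both the bare source and each source-plus-retrieval concatenation to the same original target. The literal separator string and the function name are illustrative assumptions.

```python
from typing import List, Tuple

SEP = " [SEP] "  # separator token string; the exact surface form is an assumption

def make_training_examples(source: str, target: str,
                           retrieved_targets: List[str],
                           max_retrieved: int = 3) -> List[Tuple[str, str]]:
    """Build the augmented (input, target) pairs of Section 4.3: the bare source
    plus one concatenated input per retrieved output, i.e.
    {x, x[SEP]y'_1, ..., x[SEP]y'_k}, all mapped to the original target y."""
    inputs = [source] + [source + SEP + y_prime
                         for y_prime in retrieved_targets[:max_retrieved]]
    return [(inp, target) for inp in inputs]
```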
5 Post-ranking Analysis 5.1 Oracle Experiments We report a more detailed analysis on Gigaword, which strongly suggests performance can be further improved by using better post-ranking methods. For this purpose, we use an Oracle that has access to the gold target outputs. Using this Oracle, we find the N-best generated candidate outputs (out of 30 total generated) in terms of ROUGE-1 similarity to the target. We vary N from 1 to 30, and for each N, we randomly select one of the NFigure 2: Comparison with Oracle-based post-ranking methods in Gigaword. best Oracle-chosen outputs. The ROUGE-1 scores obtained for each N are shown in Figure 2. We also provide lower bounds which show the performance obtained with the candidate from the best N that is least similar to the target. Figure 2 shows that our post-generation ranker, which selects the most-frequent candidate output, performs better than choosing a random candidate output (N=30). We also observe that randomly choosing from one of the 1st - 26th best (out of 30) generated outputs surpasses the summarization performance achieved with our post-ranking function. Moreover, choosing any of the 12-best candidates is a feasible strategy that outperforms our ranking function. These observations suggest that many of the 30 retrieved outputs are useful for effective summary generation, and hence, there is a large room for improving by designing new post-generation ranking algorithms. Similar analysis on MT shows that a ranker that always selects the optimal of the three candidate outputs gets about 3-5 BLEU points improvement over our post-ranking based models, leaving room for further performance gains. 5.2 Examples To analyze the impact of post-ranking, we compare various outputs from our models for the Gigaword test set, as shown in Table 3. For the sample 3A, when augmenting the source with y′ 1 or the pre-ranked y′, the model simply copies the retrieved text and ignores important details from the source. However, the Transformer output indicates that most of the salient information can be obtained from the source itself. By generating multiple outputs with multiple augmented inputs and then choosing the most-frequent output, our post-ranking function helps to lessen the sensitivity of the model to certain retrieved outputs. 2536 Source jurors visited phil spector s mansion thursday to see the place where actress lana clarkson died , some of them sitting in a chair to mimic the position in which her body was found Target jurors in spector trial visit mansion where actress died Transformer phil spector jury visits scene of actress s death Ret-ID Retrieved Output Candidate Output y′ 1 jurors tour phil spector s home jurors tour phil spector s home pre-rank (y′ 15) spector jury tours scene of clarkson s death spector jury tours scene of clarkson s death post-rank (y′ 19) phil spector found guilty of #nd-degree murder jurors visit phil spector s mansion to see where actress died Example 3A. 
Source puerto rico ended water rationing for nearly half a million residents tuesday after heavy rain partly replenished a reservoir serving the san juan metropolitan area Target puerto rico ends water rationing Ret-ID Retrieved Output Candidate Output y′ 1 for second time in # years water rationed in san juan puerto rico ends water rationing pre-rank (y′ 4) water rationing resumes tuesday for ###,### puerto ricans water rationing resumes tuesday for ###,### puerto ricans post-rank (y′ 3) puerto rico just days away from water rationing if rain does n’t puerto rico ends water rationing Example 3B. Table 3: Sample outputs from the Gigaword test set. “Ret-ID” indicates which of the 30 retrieved y′ was used in the input, for example, y′ 1 and the pre-ranked y′. For the (most-frequent) post-ranked output, we show the y′ for which the generated output had the highest generation score (log-likelihood) from the model. For sample 3B, post-ranking chooses the output generated using y′ 1 which is also the actual target. However, due to a poor retrieval, pre-ranking forces the model to generate an output that largely differs from the target. We also found some examples where both the retrieve-only y′ 1 and the pre-ranked y′ were the same, and they were copied verbatim to generate the candidate output. However, several of these copied retrieved outputs were too general summaries, and since the source was ignored during generation, the generated candidate output was missing some article specific information present in the target summary. In many of these cases, simply using the source without any retrieval in the input resulted in an output more representative of the target summary, and also post-ranking helped select this better output. These examples highlight the cases where simply relying on the best retrieval or on the pre-ranking can hurt results since the output generated using only the source without any retrieval is the same as the higher quality post-ranked output. Overall, these examples demonstrate the flexibility offered by our post-ranking module. It allows the framework to choose between combinations of generations ignoring retrieval, generations using the closest retrieved output and generations using the pre-ranked output. The post-ranking function also acts like a voting scheme, helping to convey the salient information from the inputs to the output while ignoring noise in the inputs. 6 Conclusion and Future Work In this paper, we presented a retrieve-edit-rerank framework for seq2seq text generation. We used Lucene for retrieval, a Transformer model for editing, and simple task-specific post-generation ranking techniques. We applied the framework on two MT datasets and the Gigaword summarization dataset. Our results show that our simple ranking functions are effective in helping our model outperform the comparable retrieve-and-edit based methods for these datasets. By performing analysis on Gigaword, we find that there exists room to improve summarization performance with better post-ranking algorithms, a promising direction for future research. This is in line with our overall goal, which is not to find the best possible way to do the postranking, but only to show that gains are possible by editing multiple candidates and then comparing the results. Moving forward, we would like to apply this framework to other retrieve-and-edit based generation scenarios such as dialogue, conversation, and code generation. References Bram Bulte and Arda Tezcan. 2019. 
Neural fuzzy repair: Integrating fuzzy matches into neural machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1800–1809, Florence, Italy. Association for Computational Linguistics. 2537 Qian Cao and Deyi Xiong. 2018. Encoding gated translation memory into neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3042–3047, Brussels, Belgium. Association for Computational Linguistics. Ziqiang Cao, Wenjie Li, Sujian Li, and Furu Wei. 2018. Retrieve, rerank and rewrite: Soft template based neural summarization. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 152–161, Melbourne, Australia. Association for Computational Linguistics. Raj Dabre, Fabien Cromieres, and Sadao Kurohashi. 2017. Enabling multi-source neural machine translation by concatenating source sentences in multiple languages. arXiv preprint arXiv:1702.06135. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Marjan Ghazvininejad, Omer Levy, Yinhan Liu, and Luke Zettlemoyer. 2019. Mask-predict: Parallel decoding of conditional masked language models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6111– 6120, Hong Kong, China. Association for Computational Linguistics. Jiatao Gu, Yong Wang, Kyunghyun Cho, and Victor OK Li. 2018. Search engine guided neural machine translation. In Thirty-Second AAAI Conference on Artificial Intelligence. Kelvin Guu, Tatsunori B. Hashimoto, Yonatan Oren, and Percy Liang. 2018. Generating sentences by editing prototypes. Transactions of the Association for Computational Linguistics, 6:437–450. Tatsunori B Hashimoto, Kelvin Guu, Yonatan Oren, and Percy S Liang. 2018. A retrieve-and-edit framework for predicting structured outputs. In Advances in Neural Information Processing Systems, pages 10052–10062. Chris Hokamp. 2017. Ensembling factored neural machine translation models for automatic post-editing and quality estimation. In Proceedings of the Second Conference on Machine Translation, pages 647– 654. Ari Holtzman, Jan Buys, Maxwell Forbes, Antoine Bosselut, David Golub, and Yejin Choi. 2018. Learning to write with cooperative discriminators. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1638–1649, Melbourne, Australia. Association for Computational Linguistics. Ari Holtzman, Jan Buys, Maxwell Forbes, and Yejin Choi. 2019. The curious case of neural text degeneration. ArXiv, abs/1904.09751. Zongcheng Ji, Zhengdong Lu, and Hang Li. 2014. An information retrieval approach to short text conversation. arXiv preprint arXiv:1408.6988. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 48–53, Minneapolis, Minnesota. Association for Computational Linguistics. Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 379–389, Lisbon, Portugal. Association for Computational Linguistics. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715– 1725, Berlin, Germany. Association for Computational Linguistics. Ralf Steinberger, Andreas Eisele, Szymon Klocek, Spyridon Pilos, and Patrick Schl¨uter. 2012. DGTTM: A freely available translation memory in 22 languages. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC’12), pages 454–459, Istanbul, Turkey. European Language Resources Association (ELRA). Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998–6008. Kai Wang, Xiaojun Quan, and Rui Wang. 2019. BiSET: Bi-directional selective encoding with template for abstractive summarization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2153–2162, Florence, Italy. Association for Computational Linguistics. Jason Weston, Emily Dinan, and Alexander H Miller. 2018. Retrieve and refine: Improved sequence generation models for dialogue. arXiv preprint arXiv:1808.04776. 2538 Jiawei Wu, Xin Wang, and William Yang Wang. 2019. Extract and edit: An alternative to back-translation for unsupervised neural machine translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1173–1183, Minneapolis, Minnesota. Association for Computational Linguistics. Rui Yan, Yiping Song, and Hua Wu. 2016. Learning to respond with deep neural networks for retrievalbased human-computer conversation system. In Proceedings of the 39th International ACM SIGIR conference on Research and Development in Information Retrieval, pages 55–64. Jingyi Zhang, Masao Utiyama, Eiichro Sumita, Graham Neubig, and Satoshi Nakamura. 2018. Guiding neural machine translation with retrieved translation pieces. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1325– 1335, New Orleans, Louisiana. Association for Computational Linguistics.
2020
228
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2539–2556 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2539 BabyWalk: Going Farther in Vision-and-Language Navigation by Taking Baby Steps Wang Zhu∗1 Hexiang Hu∗2 Jiacheng Chen2 Zhiwei Deng3 Vihan Jain4 Eugene Ie4 Fei Sha† 2,4 1Simon Fraser University 2University of Southern California 3Princeton University 4Google Research Abstract Learning to follow instructions is of fundamental importance to autonomous agents for vision-and-language navigation (VLN). In this paper, we study how an agent can navigate long paths when learning from a corpus that consists of shorter ones. We show that existing state-of-the-art agents do not generalize well. To this end, we propose BabyWalk, a new VLN agent that is learned to navigate by decomposing long instructions into shorter ones (BabySteps) and completing them sequentially. A special design memory buffer is used by the agent to turn its past experiences into contexts for future steps. The learning process is composed of two phases. In the first phase, the agent uses imitation learning from demonstration to accomplish BabySteps. In the second phase, the agent uses curriculum-based reinforcement learning to maximize rewards on navigation tasks with increasingly longer instructions. We create two new benchmark datasets (of long navigation tasks) and use them in conjunction with existing ones to examine BabyWalk’s generalization ability. Empirical results show that BabyWalk achieves state-of-the-art results on several metrics, in particular, is able to follow long instructions better. The codes and the datasets are released on our project page https://github.com/ Sha-Lab/babywalk. 1 Introduction Autonomous agents such as household robots need to interact with the physical world in multiple modalities. As an example, in vision-and-language navigation (VLN) (Anderson et al., 2018), the agent moves around in a photo-realistic simulated environment (Chang et al., 2017) by following a sequence of natural language instructions. To infer its whereabouts so as to decide its moves, the ∗Author contributed equally †On leave from University of Southern California agent infuses its visual perception, its trajectory and the instructions (Fried et al., 2018; Anderson et al., 2018; Wang et al., 2019; Ma et al., 2019a,b). Arguably, the ability to understand and follow the instructions is one of the most crucial skills to acquire by VLN agents. Jain et al. (2019) shows that the VLN agents trained on the originally proposed dataset ROOM2ROOM (i.e. R2R thereafter) do not follow the instructions, despite having achieved high success rates of reaching the navigation goals. They proposed two remedies: a new dataset ROOM4ROOM (or R4R) that doubles the path lengths in the R2R, and a new evaluation metric Coverage weighted by Length Score (CLS) that measures more closely whether the groundtruth paths are followed. They showed optimizing the fidelity of following instructions leads to agents with desirable behavior. Moreover, the long lengths in R4R are informative in identifying agents who score higher in such fidelity measure. In this paper, we investigate another crucial aspect of following the instructions: can a VLN agent generalize to following longer instructions by learning from shorter ones? This aspect has important implication to real-world applications as collecting annotated long sequences of instructions and training on them can be costly. 
Thus, it is highly desirable to have this generalization ability. After all, it seems that humans can achieve this effortlessly1. To this end, we have created several datasets of longer navigation tasks, inspired by R4R (Jain et al., 2019). We trained VLN agents on R4R and use the agents to navigate in ROOM6ROOM (i.e., R6R) and ROOM8ROOM (i.e., R8R). We contrast to the performance of the agents which are trained on those datasets directly (“in-domain”). The results 1Anecdotally, we do not have to learn from long navigation experiences. Instead, we extrapolate from our experiences of learning to navigate in shorter distances or smaller spaces (perhaps a skill we learn when we were babies or kids). 2540 Figure 1: Performance of various VLN agents on generalizing from shorter navigation tasks to longer ones. The vertical axis is the newly proposed path-following metric SDTW (Magalhaes et al., 2019), the higher the better. BABYWALK generalizes better than other approaches across different lengths of navigation tasks. Meanwhile, it get very close to the performances of the in-domain agents (the dashed line). Please refer to the texts for details. are shown in Fig. 1. Our findings are that the agents trained on R4R (denoted by the purple and the pink solid lines) perform significantly worse than the in-domain agents (denoted the light blue dashed line). Also interestingly, when such out-of-domain agents are applied to the dataset R2R with shorter navigation tasks, they also perform significantly worse than the corresponding in-domain agent despite R4R containing many navigation paths from R2R. Note that the agent trained to optimize the aforementioned fidelity measure (RCM(fidelity)) performs better than the agent trained to reach the goal only (RCM(goal)), supporting the claim by Jain et al. (2019) that following instructions is a more meaningful objective than merely goal-reaching. Yet, the fidelity measure itself is not enough to enable the agent to transfer well to longer navigation tasks. To address these deficiencies, we propose a new approach for VLN. The agent follows a long navigation instruction by decomposing the instruction into shorter ones (“micro-instructions”, i.e., BABYSTEPs), each of which corresponds to an intermediate goal/task to be executed sequentially. To this end, the agent has three components: (a) a memory buffer that summarizes the agent’s experiences so that the agent can use them to provide the context for executing the next BABY-STEP. (b) the agent first learns from human experts in “bitesize”. Instead of trying to imitate to achieve the ground-truth paths as a whole, the agent is given the pairs of a BABY-STEP and the corresponding human expert path so that it can learn policies of actions from shorter instructions. (c) In the second stage of learning, the agent refines the policies by curriculum-based reinforcement learning, where the agent is given increasingly longer navigation tasks to achieve. In particular, this curriculum design reflects our desiderata that the agent optimized on shorter tasks should generalize well to slightly longer tasks and then much longer ones. While we do not claim that our approach faithfully simulates human learning of navigation, the design is loosely inspired by it. We name our approach BABYWALK and refer to the intermediate navigation goals in (b) as BABY-STEPs. Fig. 
1 shows that BABYWALK (the red solid line) significantly outperforms other approaches and despite being out-of-domain, it even reach the performance of in-domain agents on R6R and R8R. The effectiveness of BABYWALK also leads to an interesting twist. As mentioned before, one of the most important observations by Jain et al. (2019) is that the original VLN dataset R2R fails to reveal the difference between optimizing goalreaching (thus ignoring the instructions) and optimizing the fidelity (thus adhering to the instructions). Yet, leaving details to section 5, we have also shown that applying BABYWALK to R2R can lead to equally strong performance on generalizing from shorter instructions (i.e., R2R) to longer ones. In summary, in this paper, we have demonstrated empirically that the current VLN agents are ineffective in generalizing from learning on shorter navigation tasks to longer ones. We propose a new approach in addressing this important problem. We validate the approach with extensive benchmarks, including ablation studies to identify the effectiveness of various components in our approach. 2 Related Work Vision-and-Language Navigation (VLN) Recent works (Anderson et al., 2018; Thomason et al., 2019; Jain et al., 2019; Chen et al., 2019; Nguyen and Daumé III, 2019) extend the early works of instruction based navigation (Chen and Mooney, 2011; Kim and Mooney, 2013; Mei et al., 2016) to photo-realistic simulated environments. For instance, Anderson et al. (2018) proposed to learn a multi-modal Sequence-to-Sequence agent (Seq2Seq) by imitating expert demonstration. Fried et al. (2018) developed a method that augments the 2541 paired instruction and demonstration data using a learned speaker model, to teach the navigation agent to better understand instructions. Wang et al. (2019) further applies reinforcement learning (RL) and self-imitation learning to improve navigation agents. Ma et al. (2019a,b) designed models that track the execution progress for a sequence of instructions using soft-attention. Different from them, we focus on transferring an agent’s performances on shorter tasks to longer ones. This leads to designs and learning schemes that improve generalization across datasets. We use a memory buffer to prevent mistakes in the distant past from exerting strong influence on the present. In imitation learning stage, we solve fine-grained subtasks (BABY-STEPs) instead of asking the agent to learn the navigation trajectory as a whole. We then use curriculum-based reinforcement learning by asking the agent to follow increasingly longer instructions. Transfer and Cross-domain Adaptation There have been a large body of works in transfer learning and generalization across tasks and environments in both computer vision and reinforcement learning (Andreas et al., 2017; Oh et al., 2017; Zhu et al., 2017a,b; Sohn et al., 2018; Hu et al., 2018). Of particular relevance is the recent work on adapting VLN agents to changes in visual environments (Huang et al., 2019; Tan et al., 2019). To our best knowledge, this work is the first to focus on adapting to a simple aspect of language variability — the length of the instructions. Curriculum Learning Since proposed in (Bengio et al., 2009), curriculum learning was successfully used in a range of tasks: training robots for goal reaching (Florensa et al., 2017), visual question answering (Mao et al., 2019), image generation (Karras et al., 2018). To our best knowledge, this work is the first to apply the idea to learning in VLN. 
3 Notation and the Setup of VLN In the VLN task, the agent receives a natural language instruction X composed of a sequence of sentences. We model the agent with a Markov Decision Process (MDP) which is defined as a tuple of a state space S, an action space A, an initial state s1, a stationary transition dynamics ρ : S × A → S, a reward function r : S × A → R, and the discount factor γ for weighting future rewards. The agent acts according to a policy π : S × A → {0} ∪ R+. The state and action spaces are defined the same as in (Fried et al., 2018) (cf. § 4.4 for details). For each X, the sequence of the pairs (s, a) is called a trajectory $\mathbf{Y} = (s_1, a_1, \ldots, s_{|\mathbf{Y}|}, a_{|\mathbf{Y}|})$, where |·| denotes the length of the sequence or the size of a set. We use ˆa to denote an action taken by the agent according to its policy. Hence, ˆY denotes the agent's trajectory, while Y (or a) denotes the human expert's trajectory (or action). The agent is given training examples of (X, Y) to optimize its policy to maximize its expected rewards. In our work, we introduce additional notations in the following. We will segment a (long) instruction X into multiple shorter sequences of sentences {xm, m = 1, 2, · · · , M}, to which we refer as BABY-STEPs. Each xm is interpreted as a micro-instruction that corresponds to a trajectory by the agent ˆym and is aligned with a part of the human expert's trajectory, denoted as ym. While the alignment is not available in existing datasets for VLN, we will describe how to obtain such alignments in a later section (§ 4.3). Throughout the paper, we also use the terms "following the mth micro-instruction", "executing the BABY-STEP xm", and "completing the mth subtask" interchangeably. We use t ∈ [1, |Y|] to denote the (discrete) time steps at which the agent takes actions. Additionally, when the agent follows xm, for convenience, we sometimes use tm ∈ [1, |ˆym|] to index the time steps, instead of the "global time" $t = t_m + \sum_{i=1}^{m-1} |\hat{\mathbf{y}}_i|$. 4 Approach We describe in detail the 3 key elements in the design of our navigation agent: (i) a memory buffer for storing and recalling past experiences to provide contexts for the current navigation instruction (§ 4.1); (ii) an imitation-learning stage of navigating with short instructions to accomplish a single BABY-STEP (§ 4.2.1); (iii) a curriculum-based reinforcement learning phase where the agent learns with increasingly longer instructions (i.e. multiple BABY-STEPs) (§ 4.2.2). We describe new benchmarks created for learning and evaluation and key implementation details in § 4.3 and § 4.4 (with more details in the Appendix). 2542 Figure 2: The BABYWALK agent has a memory buffer storing its past experiences of instructions xm, and its trajectory ˆym. When a new BABY-STEP xm is presented, the agent retrieves from the memory a summary of its experiences as the history context. It takes actions conditioning on the context (as well as its state st and the previous action ˆat). Upon finishing following the instruction, the trajectory ˆym is then sent to the memory to be remembered. 4.1 The BABYWALK Agent The basic operating model of our navigation agent BABYWALK is to follow a "micro-instruction" xm (i.e., a short sequence of instructions, to which we
We assume the BABYSTEPs are given in the training and inference time – § 4.3 explains how to obtain them if not given a prior (Readers can directly move to that section and return to this part afterwards). The left of the Fig. 3 gives an example of those micro-instructions. Context The context is a summary of the past experiences of the agent, namely the previous (m− 1) mini-instructions and trajectories: ˆzm = g  fSUMMARY(x1, · · · , xm−1), fSUMMARY(ˆy1, · · · , ˆym−1)  (1) where the function g is implemented with a multilayer perceptron. The summary function fSUMMARY is explained in below. Summary To map variable-length sequences (such as the trajectory and the instructions) to a single vector, we can use various mechanisms such as LSTM. We reported an ablation study on this in § 5.3. In the following, we describe the “forgetting” one that weighs more heavily towards the most recent experiences and performs the best empirically. fSUMMARY(x1, · · · , xm−1) = m−1  i=1 αi · u(xi) (2) fSUMMARY(ˆy1, · · · , ˆym−1) = m−1  i=1 αi · v(ˆyi) (3) where the weights are normalized to 1 and inverse proportional to how far i is from m, αi ∝exp  −γ · ω(m −1 −i)  (4) γ is a hyper-parameter (we set to 1/2) and ω(·) is a monotonically nondecreasing function and we simply choose the identity function. Note that, we summarize over representations of “micro-instructions” (xm) and experiences of executing those micro-instructions ˆym. The two encoders u(·) and v(·) are described in § 4.4. They are essentially the summaries of “low-level” details, i.e., representations of a sequence of words, or a sequence of states and actions. While existing work often directly summarizes all the low-level details, we have found that the current form of “hierarchical” summarizing (i.e., first summarizing each BABY-STEP, then summarizing all previous BABY-STEPs) performs better. Policy The agent takes actions, conditioning on the context ˆzm, and the current instruction xm: ˆat ∼π (·|st, ˆat−1; u(xm), ˆzm) (5) where the policy is implemented with a LSTM with the same cross-modal attention between visual states and languages as in (Fried et al., 2018). 4.2 Learning of the BABYWALK Agent The agent learns in two phases. In the first one, imitation learning is used where the agent learns to execute BABY-STEPs accurately. In the second one, the agent learns to execute successively longer tasks from a designed curriculum. 4.2.1 Imitation Learning BABY-STEPs are shorter navigation tasks. With the mth instruction xm, the agent is asked to follow the instruction so that its trajectory matches the human expert’s ym. To assist the learning, the context is computed from the human expert trajectory up to the mth BABY-STEP (i.e., in eq. (1), ˆys are replaced with ys). We maximize the objective ℓ= M  m=1 |ym|  tm=1 log π (atm|stm, atm−1; u(xm), zm) We emphasize here each BABY-STEP is treated independently of the others in this learning regime. Each time a BABY-STEP is to be executed, we “preset” the agent in the human expert’s context 2543       "$           "  " !     # "        #"    !  "     #   #    ! !       ! !   #   !            !              Baby Walk !                        Baby Walk Baby Walk                 Figure 3: Two-phase learning by BABYWALK. (Left) An example instruction-trajectory pair from the R4R dataset is shown. The long instruction is segmented into four BABY-STEP instructions. We use those BABYSTEPs for imitation learning (§ 4.2.1) (Right) Curriculum-based RL. 
The BABYWALK agent warm-starts from the imitation learning policy, and incrementally learns to handle longer tasks by executing consecutive BABY-STEPs and getting feedback from external rewards (c.f. § 4.2.2). We illustrate two initial RL lectures using the left example. and the last visited state. We follow existing literature (Anderson et al., 2018; Fried et al., 2018) and use student-forcing based imitation learning, which uses agent’s predicted action instead of the expert action for the trajectory rollout. 4.2.2 Curriculum Reinforcement Learning We want the agent to be able to execute multiple consecutive BABY-STEPs and optimize its performance on following longer navigation instructions (instead of the cross-entropy losses from the imitation learning). However, there is a discrepancy between our goal of training the agent to cope with the uncertainty in a long instruction and the imitation learning agent’s ability in accomplishing shorter tasks given the human annotated history. Thus it is challenging to directly optimize the agent with a typical RL learning procedure, even the imitation learning might have provided a good initialization for the policy, see our ablation study in § 5.3. Inspired by the curriculum learning strategy (Bengio et al., 2009), we design an incremental learning process that the agent is presented with a curriculum of increasingly longer navigation tasks. Fig. 3 illustrates this idea with two “lectures”. Given a long navigation instruction X with M BABY-STEPs, for the kth lecture, the agent is given all the human expert’s trajectory up to but not including the (M −k + 1)th BABY-STEP, as well as the history context zM−k+1. The agent is then asked to execute the kth micro-instructions from xM−k+1 to xM using reinforcement learning to produce its trajectory that optimizes a task related R2R R4R R6R R8R Train seen instr. 14,039 233,532 89,632 94,731 Val unseen instr. 2,349 45,234 35,777 43,273 Avg instr. length 29.4 58.4 91.2 121.6 Avg # BABY-STEPs 1.8 3.6 5.6 7.4 Table 1: Datasets used for VLN learning and evaluation Figure 4: The distribution of lengths of instructions and ground-truth trajectories in our datasets. metric, for instance the fidelity metric measuring how faithful the agent follows the instructions. As we increase k from 1 to M, the agent faces the challenge of navigating longer and longer tasks with reinforcement learning. However, the agent only needs to improve its skills from its prior exposure to shorter ones. Our ablation studies show this is indeed a highly effective strategy. 4.3 New Datasets for Evaluation & Learning To our best knowledge, this is the first work studying how well VLN agents generalize to long navigation tasks. To this end, we create the following datasets in the same style as in (Jain et al., 2019). 2544 ROOM6ROOM and ROOM8ROOM We concatenate the trajectories in the training as well as the validation unseen split of the ROOM2ROOM dataset for 3 times and 4 times respectively, thus extending the lengths of navigation tasks to 6 rooms and 8 rooms. To join, the end of the former trajectory must be within 0.5 meter with the beginning of the later trajectory. Table 1 and Fig. 4 contrast the different datasets in the # of instructions, the average length (in words) of instructions and how the distributions vary. Table 1 summarizes the descriptive statistics of BABY-STEPs across all datasets used in this paper. The datasets and the segmentation/alignments are made publically available2. 
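As a concrete illustration of the curriculum in §4.2.2, the sketch below builds the successive RL "lectures" for one training example whose long instruction has been segmented into M BABY-STEPs aligned with M expert sub-trajectories. The dictionary fields are placeholder names (not the authors' data format), and the reward, e.g. CLS, would be computed against the held-out expert reference.

```python
from typing import Dict, List

def build_curriculum(baby_step_instrs: List[str],
                     expert_subtrajs: List[list],
                     max_lectures: int = 4) -> List[Dict]:
    """Build curriculum RL "lectures" for one example with M BABY-STEPs.
    In lecture k the agent is warm-started with the expert's first M-k
    sub-trajectories (its history/context) and must execute the last k
    BABY-STEPs itself, receiving a fidelity-oriented reward such as CLS."""
    M = len(baby_step_instrs)
    lectures = []
    for k in range(1, min(max_lectures, M) + 1):
        lectures.append({
            "lecture": k,
            "expert_prefix": expert_subtrajs[:M - k],              # y_1 ... y_{M-k}, given as history
            "instructions_to_execute": baby_step_instrs[M - k:],   # x_{M-k+1} ... x_M
            "expert_reference": expert_subtrajs[M - k:],           # used only to compute the reward
        })
    return lectures
```

Training then proceeds lecture by lecture, so a policy tuned on short suffixes of the task is gradually refined on longer and longer ones.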
4.4 Key Implementation Details In the following, we describe key information for research reproducibility, while the complete details are in the Appendix. States and Actions We follow (Fried et al., 2018) to set up the states as the visual features (i.e. ResNet-152 features (He et al., 2016)) from the agent-centric panoramic views in 12 headings × 3 elevations with 30 degree intervals. Likewise, we use the same panoramic action space. Identifying BABY-STEPs Our learning approach requires an agent to follow microinstructions (i.e., the BABY-STEPs). Existing datasets (Anderson et al., 2018; Jain et al., 2019; Chen et al., 2019) do not provide fine-grained segmentations of long instructions. Therefore, we use a template matching approach to aggregate consecutive sentences into BABY-STEPs. First, we extract the noun phrase using POS tagging. Then, we employs heuristic rules to chunk a long instruction into shorter segments according to punctuation and landmark phrase (i.e., words for concrete objects). We document the details in the Appendix. Aligning BABY-STEPs with Expert Trajectory Without extra annotation, we propose a method to approximately chunk original expert trajectories into sub-trajectories that align with the BABYSTEPs. This is important for imitation learning at the micro-instruction level (§ 4.2.1). Specifically, we learn a multi-label visual landmark classifier to identify concrete objects from the states along expert trajectories by using the landmark phrases 2Available at https://github.com/Sha-Lab/ babywalk extracted from the their instructions as weak supervision. For each trajectory-instruction pair, we then extract the visual landmarks of every state as well as the landmark phrases in BABY-STEP instructions. Next, we perform a dynamic programming procedure to segment the expert trajectories by aligning the visual landmarks and landmark phrases, using the confidence scores of the multi-label visual landmark classifier to form the function. Encoders and Embeddings The encoder u(·) for the (micro)instructions is a LSTM. The encoder for the trajectory y contains two separate Bi-LSTMs, one for the state st and the other for the action at. The outputs of the two Bi-LSTMs are then concatenated to form the embedding function v(·). The details of the neural network architectures (i.e. configurations as well as an illustrative figure), optimization hyper-parameters, etc. are included in the Appendix. Learning Policy with Reinforcement Learning In the second phase of learning, BABYWALK uses RL to learn a policy that maximizes the fidelity-oriented rewards (CLS) proposed by Jain et al. (2019). We use policy gradient as the optimizer (Sutton et al., 2000). Meanwhile, we set the maximum number of lectures in curriculum RL to be 4, which is studied in Section 5.3. 5 Experiments We describe the experimental setup (§ 5.1),followed by the main results in § 5.2 where we show the proposed BABYWALK agent attains competitive results on both the in-domain dataset but also generalizing to out-of-the-domain datasets with varying lengths of navigation tasks. We report results from various ablation studies in § 5.3. While we primarily focus on the ROOM4ROOM dataset, we re-analyze the original ROOM2ROOM dataset in § 5.4 and were surprised to find out the agents trained on it can generalize. 5.1 Experimental Setups. 
Datasets We conduct empirical studies on the existing datasets ROOM2ROOM and ROOM4ROOM (Anderson et al., 2018; Jain et al., 2019), and the two newly created benchmark datasets ROOM6ROOM and ROOM8ROOM, described in § 4.3. Table 1 and Fig. 4 contrast their differences. 2545 In-domain Generalization to other datasets Setting R4R →R4R R4R →R2R R4R →R6R R4R →R8R Average Metrics SR↑CLS↑SDTW↑ SR↑CLS↑SDTW↑ SR↑CLS↑SDTW↑ SR↑CLS↑SDTW↑ SR↑CLS↑SDTW↑ SEQ2SEQ 25.7 20.7 9.0 16.3 27.1 10.6 14.4 17.7 4.6 20.7 15.0 4.7 17.1 19.9 6.6 SF+ 24.9 23.6 9.2 22.5 29.5 14.8 15.5 20.4 5.2 21.6 17.2 5.0 19.9 22.4 8.3 RCM(GOAL)+ 28.7 36.3 13.2 25.9 44.2 20.2 19.3 31.8 7.3 22.8 27.6 5.1 22.7 34.5 10.9 RCM(FIDELITY)+ 24.7 39.2 13.7 29.1 34.3 18.3 20.5 38.3 7.9 20.9 34.6 6.1 23.5 35.7 10.8 REGRETFUL+⋆ 30.1 34.1 13.5 22.8 32.6 13.4 18.0 31.7 7.5 18.7 29.3 5.6 19.8 31.2 8.8 FAST+⋆ 36.2 34.0 15.5 25.1 33.9 14.2 22.1 31.5 7.7 27.7 29.6 6.3 25.0 31.7 9.4 BABYWALK 29.6 47.8 18.1 35.2 48.5 27.2 26.4 44.9 13.1 26.3 44.7 11.5 29.3 46.0 17.3 BABYWALK + 27.3 49.4 17.3 34.1 50.4 27.8 25.5 47.2 13.6 23.1 46.0 11.1 27.6 47.9 17.5 Table 2: VLN agents trained on the R4R dataset and evaluated on the unseen portion of the R4R (in-domain) and the other 3 out-of-the-domain datasets: R2R, R6R and R8R with different distributions in instruction length. The Appendix has more comparisons. (+: pre-trained with data augmentation. ⋆: reimplemented or adapted from the original authors’ public codes). Evaluation Metrics We adopt the following metrics: Success Rate (SR) that measures the average rate of the agent stopping within a specified distance near the goal location (Anderson et al., 2018), Coverage weighted by Length Score (CLS) (Jain et al., 2019) that measures the fidelity of the agent’s path to the reference, weighted by the length score, and the newly proposed Success rate weighted normalized Dynamic Time Warping (SDTW) that measures in more fine-grained details, the spatiotemporal similarity of the paths by the agent and the human expert, weighted by the success rate (Magalhaes et al., 2019). Both CLS and SDTW measure explicitly the agent’s ability to follow instructions and in particular, it was shown that SDTW corresponds to human preferences the most. We report results in other metrics in the Appendix. Agents to Compare to Whenever possible, for all agents we compare to, we either re-run, reimplement or adapt publicly available codes from their corresponding authors with their provided instructions to ensure a fair comparison. We also “sanity check” by ensuring the results from our implementation and adaptation replicate and are comparable to the reported ones in the literature. We compare our BABYWALK to the following: (1) the SEQ2SEQ agent (Anderson et al., 2018), being adapted to the panoramic state and action space used in this work; (2) the Speaker Follower (SF) agent (Fried et al., 2018); (3) the Reinforced Cross-Modal Agent (RCM) (Wang et al., 2019) that refines the SF agent using reinforcement learning with either goal-oriented reward (RCM(GOAL)) or fidelity-oriented reward (RCM(FIDELITY)); (4) the Regretful Agent (REGRETFUL) (Ma et al., 2019b) that uses a progress monitor that records visited path and a regret module that performs backtracking; (5) the Frontier Aware Search with Backtracking agent (FAST) (Ke et al., 2019) that incorporates global and local knowledge to compare partial trajectories in different lengths. The last 3 agents are reported having state-ofthe art results on the benchmark datasets. 
Except for the SEQ2SEQ agent, all other agents depend on an additional pre-training stage with data augmentation (Fried et al., 2018), which improves performance across the board. Thus, we train two BABYWALK agents: one with and the other without the data augmentation.

5.2 Main results

In-domain Generalization This is the standard evaluation scenario where a trained agent is assessed on the unseen split from the same dataset as the training data. The leftmost columns in Table 2 report the results where the training data is from R4R. The BABYWALK agents outperform all other agents when evaluated on CLS and SDTW. When evaluated on SR, FAST performs the best and the BABYWALK agents do not stand out. This is expected: agents that are trained to reach the goal do not necessarily follow instructions better. Note that RCM(FIDELITY) performs well in path-following.

Out-of-domain Generalization While our primary goal is to train agents to generalize well to longer navigation tasks, we are also curious how the agents perform on shorter navigation tasks. The right columns in Table 2 report the comparison. The BABYWALK agents outperform all other agents in all metrics except SR. In particular, on SDTW, the generalization to R6R and R8R is especially encouraging, with scores almost twice those of the second-best agent, FAST. Moreover, recalling Fig. 1, BABYWALK's generalization to R6R and R8R attains even better performance than the RCM agents that are trained in-domain. Fig. 5 provides additional evidence for the success of BABYWALK, where we contrast its performance to other agents' on following instructions of different lengths across all datasets. Clearly, the BABYWALK agent improves very noticeably on longer instructions.

Figure 5: Performance by various agents on navigation tasks of different lengths. See text for details.

Qualitative Results Fig. 6 visually contrasts several agents executing two (long) navigation tasks. BABYWALK's trajectories are similar to what human experts provide, while the other agents' are not.

5.3 Analysis

Memory Buffer is Beneficial Table 3 illustrates the importance of having a memory buffer to summarize the agent's past experiences. Without the memory (NULL), generalization to longer tasks is significantly worse. Using an LSTM to summarize is worse than using forgetting to summarize (eqs. (2,3)). Meanwhile, ablating γ of the forgetting mechanism shows that γ = 0.5 is optimal in our hyperparameter search. Note that when γ = 0, this mechanism degenerates to taking the average of the memory buffer and leads to inferior results.

Table 3: The memory buffer is beneficial to generalizing to tasks different from the ones on which the agent is trained.
Setting                               R4R → R4R            R4R → others
Metrics                           SR↑   CLS↑  SDTW↑     SR↑   CLS↑  SDTW↑
fSUMMARY = NULL                  18.9   43.1    9.9     17.1   42.3    9.6
fSUMMARY = LSTM(·)               25.8   44.0   14.4     25.7   42.1   14.3
fSUMMARY = Σi=1..m−1 αi·(·), i.e., eqs. (2,3)
  γ = 5                          27.5   46.8   15.8     26.7   44.4   14.9
  γ = 0.5                        27.3   49.4   17.3     27.6   47.9   17.5
  γ = 0.05                       27.5   47.7   16.2     26.0   45.5   15.2
  γ = 0                          26.1   46.6   15.1     25.1   44.3   14.4

Table 4: BABYWALK's performance with curriculum-based reinforcement learning (CRL), which improves over imitation learning without (IL) or with reinforcement learning (IL+RL).
Setting                               R4R → R4R            R4R → others
Metrics                           SR↑   CLS↑  SDTW↑     SR↑   CLS↑  SDTW↑
IL                               24.7   27.9   11.1     24.2   25.8   10.2
IL+RL                            25.0   45.5   13.6     25.0   43.8   14.1
IL+CRL w/ LECTURE #
  1st                            24.1   44.8   13.5     24.1   43.1   13.6
  2nd                            26.7   45.9   15.2     26.2   43.7   14.8
  3rd                            27.9   47.4   17.0     26.7   45.4   16.3
  4th                            27.3   49.4   17.3     27.6   47.9   17.5
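As a reading aid for the fSUMMARY ablation in Table 3, the sketch below shows one way the forgetting-based memory summary could be computed. The exact eqs. (2,3) of the main text are not reproduced in this excerpt, so the exponential-decay weights used here (αi ∝ exp(−γ(m−1−i)), normalized with a softmax) are an assumption; they are, however, consistent with the observation above that γ = 0 reduces to a plain average of the memory buffer.

```python
import torch

def forgetting_summary(memory, gamma=0.5):
    """Summarize a buffer of past BABY-STEP embeddings.

    memory: tensor of shape (m-1, d), one embedding per past BABY-STEP
            (e.g., the encoded trajectory v(y_i) or instruction u(x_i)).
    gamma:  forgetting rate; gamma = 0 gives a uniform average, larger
            gamma concentrates weight on the most recent entries.
    NOTE: the weight form is an assumed stand-in for eqs. (2,3).
    """
    m_minus_1, _ = memory.shape
    # distance of each past entry from the current (m-th) BABY-STEP
    dist = torch.arange(m_minus_1 - 1, -1, -1, dtype=memory.dtype)
    alpha = torch.softmax(-gamma * dist, dim=0)          # (m-1,)
    return (alpha.unsqueeze(1) * memory).sum(dim=0)      # (d,)

# Example: with gamma = 0 every past BABY-STEP gets weight 1/(m-1).
buffer = torch.randn(5, 512)
z_hat = forgetting_summary(buffer, gamma=0.0)
```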
Curriculum-based RL (CRL) is Important Table 4 establishes the value of CRL. While imitation learning (IL) provides a good warm-up for SR, significant improvements on the other two metrics come from the subsequent RL (IL+RL). Furthermore, CRL (with 4 "lectures") provides clear improvements over direct RL on the entire instruction (i.e., learning to execute all BABY-STEPs at once). Each lecture improves over the previous one, especially in terms of the SDTW metric.

Figure 6: Trajectories by human experts and VLN agents on two navigation tasks. More are in the Appendix. (Panels: HUMAN, BABYWALK, RCM, SF, SEQ2SEQ.)

Table 5: (Top) BABYWALK trained on R2R is nearly as effective as the agent trained on R4R when generalizing to longer tasks. (Bottom) BABYWALK trained on R2R adapts to R4R better than the agent trained in the reverse direction.
Eval              →R6R                  →R8R
Training     SR↑   CLS↑  SDTW↑     SR↑   CLS↑  SDTW↑
R2R         21.7   49.0   11.2     20.7   48.7    9.8
R4R         25.5   47.2   13.6     23.1   46.0   11.1
Eval              →R2R                  →R4R
Training     SR↑   CLS↑  SDTW↑     SR↑   CLS↑  SDTW↑
R2R         43.8   54.4   36.9     21.4   51.0   13.8
R4R         34.1   50.4   27.8     27.3   49.4   17.3

5.4 Revisiting ROOM2ROOM

Our experimental study has focused on using R4R as the training dataset, as it has been established that, as opposed to R2R, R4R distinguishes well between an agent that just learns to reach the goal and one that learns to follow instructions. Given the encouraging results of generalizing to longer tasks, a natural question to ask is: how well can an agent trained on R2R generalize? Results in Table 5 are interesting. As shown in the top panel, the difference in the averaged performance of generalizing to R6R and R8R is not significant. The agent trained on R4R has a small win on R6R, presumably because R4R is closer to R6R than R2R is. For the even longer tasks in R8R, the win is similar. In the bottom panel, however, it seems that R2R → R4R is stronger (incurring less loss in performance when compared to the in-domain setting R4R → R4R) than the reverse direction (i.e., comparing R4R → R2R to the in-domain R2R → R2R). This might have been caused by the noisier segmentation of long instructions into BABY-STEPs in R4R. (While R4R is composed of two navigation paths in R2R, the segmentation algorithm is not aware of the "natural" boundaries between the two paths.)

6 Discussion

There are a few future directions to pursue. First, despite the significant improvement, the gap between short and long tasks is still large and needs to be further reduced. Second, richer and more complicated variations between the learning setting and the real physical world need to be tackled. For instance, developing agents that are robust to variations in both visual appearance and instruction descriptions is an important next step.

Acknowledgments

We appreciate the feedback from the reviewers. This work is partially supported by NSF Awards IIS-1513966/1632803/1833137, CCF-1139148, DARPA Award#: FA8750-18-2-0117, DARPA-D3M - Award UCB-00009528, Google Research Awards, gifts from Facebook and Netflix, and ARO# W911NF-12-1-0241 and W911NF-15-1-0484.

References

Peter Anderson, Qi Wu, Damien Teney, Jake Bruce, Mark Johnson, Niko Sünderhauf, Ian Reid, Stephen Gould, and Anton van den Hengel. 2018. Vision-and-language navigation: Interpreting visually-grounded navigation instructions in real environments. In CVPR.

Jacob Andreas, Dan Klein, and Sergey Levine.
2017. Modular multitask reinforcement learning with policy sketches. In ICML. Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. 2009. Curriculum learning. In ICML. Angel Chang, Angela Dai, Thomas Funkhouser, Maciej Halber, Matthias Niessner, Manolis Savva, Shuran Song, Andy Zeng, and Yinda Zhang. 2017. Matterport3D: Learning from RGB-D data in indoor environments. In 3DV. David L Chen and Raymond J Mooney. 2011. Learning to interpret natural language navigation instructions from observations. In AAAI. Howard Chen, Alane Suhr, Dipendra Misra, Noah Snavely, and Yoav Artzi. 2019. Touchdown: Natural language navigation and spatial reasoning in visual street environments. In CVPR. Carlos Florensa, David Held, Markus Wulfmeier, Michael Zhang, and Pieter Abbeel. 2017. Reverse curriculum generation for reinforcement learning. In CoRL. 2548 Daniel Fried, Ronghang Hu, Volkan Cirik, Anna Rohrbach, Jacob Andreas, Louis-Philippe Morency, Taylor Berg-Kirkpatrick, Kate Saenko, Dan Klein, and Trevor Darrell. 2018. Speaker-follower models for vision-and-language navigation. In NeurIPS. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In CVPR. Matthew Honnibal and Ines Montani. 2017. spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing. To appear. Hexiang Hu, Liyu Chen, Boqing Gong, and Fei Sha. 2018. Synthesized policies for transfer and adaptation across tasks and environments. In NeurIPS. Haoshuo Huang, Vihan Jain, Harsh Mehta, Alexander Ku, Gabriel Magalhaes, Jason Baldridge, and Eugene Ie. 2019. Transferable representation learning in vision-and-language navigation. In ICCV. Vihan Jain, Gabriel Magalhaes, Alex Ku, Ashish Vaswani, Eugene Ie, and Jason Baldridge. 2019. Stay on the path: Instruction fidelity in vision-andlanguage navigation. In EMNLP. Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. 2018. Progressive growing of gans for improved quality, stability, and variation. In ICLR. Liyiming Ke, Xiujun Li, Yonatan Bisk, Ari Holtzman, Zhe Gan, Jingjing Liu, Jianfeng Gao, Yejin Choi, and Siddhartha Srinivasa. 2019. Tactical rewind: Self-correction via backtracking in visionand-language navigation. In CVPR. Joohyun Kim and Raymond Mooney. 2013. Adapting discriminative reranking to grounded language learning. In ACL. Chih-Yao Ma, Jiasen Lu, Zuxuan Wu, Ghassan AlRegib, Zsolt Kira, Richard Socher, and Caiming Xiong. 2019a. Self-monitoring navigation agent via auxiliary progress estimation. In ICLR. Chih-Yao Ma, Zuxuan Wu, Ghassan AlRegib, Caiming Xiong, and Zsolt Kira. 2019b. The regretful agent: Heuristic-aided navigation through progress estimation. In CVPR. Gabriel Magalhaes, Vihan Jain, Alexander Ku, Eugene Ie, and Jason Baldridge. 2019. Effective and general evaluation for instruction conditioned navigation using dynamic time warping. In NeurIPS ViGIL Workshop. Jiayuan Mao, Chuang Gan, Pushmeet Kohli, Joshua B Tenenbaum, and Jiajun Wu. 2019. The neurosymbolic concept learner: Interpreting scenes, words, and sentences from natural supervision. In ICLR. Hongyuan Mei, Mohit Bansal, and Matthew R Walter. 2016. Listen, attend, and walk: Neural mapping of navigational instructions to action sequences. In AAAI. Khanh Nguyen and Hal Daumé III. 2019. Help, anna! visual navigation with natural multimodal assistance via retrospective curiosity-encouraging imitation learning. In EMNLP. Junhyuk Oh, Satinder Singh, Honglak Lee, and Pushmeet Kohli. 2017. 
Zero-shot task generalization with multi-task deep reinforcement learning. In ICML. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543. Sungryull Sohn, Junhyuk Oh, and Honglak Lee. 2018. Hierarchical reinforcement learning for zeroshot generalization with subtask dependencies. In NeurIPS. Richard S Sutton, David A McAllester, Satinder P Singh, and Yishay Mansour. 2000. Policy gradient methods for reinforcement learning with function approximation. In NeurIPS. Hao Tan, Licheng Yu, and Mohit Bansal. 2019. Learning to navigate unseen environments: Back translation with environmental dropout. In EMNLP. Jesse Thomason, Michael Murray, Maya Cakmak, and Luke Zettlemoyer. 2019. Vision-and-dialog navigation. In CoRL. Xin Wang, Qiuyuan Huang, Asli Celikyilmaz, Jianfeng Gao, Dinghan Shen, Yuan-Fang Wang, William Yang Wang, and Lei Zhang. 2019. Reinforced cross-modal matching and self-supervised imitation learning for vision-language navigation. In CVPR. Yuke Zhu, Daniel Gordon, Eric Kolve, Dieter Fox, Li Fei-Fei, Abhinav Gupta, Roozbeh Mottaghi, and Ali Farhadi. 2017a. Visual semantic planning using deep successor representations. In ICCV. Yuke Zhu, Roozbeh Mottaghi, Eric Kolve, Joseph J Lim, Abhinav Gupta, Li Fei-Fei, and Ali Farhadi. 2017b. Target-driven visual navigation in indoor scenes using deep reinforcement learning. In ICRA. 2549 Appendix In this supplementary material, we provide details omitted in the main text. The content is organized as what follows: • Section A. Details on identifying BABY-STEP instructions and aligning BABY-STEPs with expert trajectories. (§ 4.3 and § 4.4 of the main text) • Section B. Implementation details of the navigation agent, reward function used in RL and optimization hyper-parameters. (§ 4.4 of the main text) • Section C. Additional experimental results, including in-domain & transfer results of different dataset trained models, sanity check of our reimplementation, and extra analysis of BABYWALK. (§ 5.1 and § 5.2 of the main text) A Details on BABY-STEP Identification and Trajectory Alignments In this section, we describe the details of how BABY-STEPs are identified in the annotated natural language instructions and how expert trajectory data are segmented to align with BABY-STEP instructions. A.1 Identify BABY-STEPs We identify the navigable BABY-STEPs from the natural language instructions of R2R, R4R, R6R and R8R, based on the following 6 steps: 1. Split sentence and chunk phrases. We split the instructions by periods. For each sentence, we perform POS tagging using the SpaCy (Honnibal and Montani, 2017) package to locate and chunk all plausible noun phrases and verb phrases. 2. Curate noun phrases. We curate noun phrases by removing the stop words (i.e., the, for, from etc.) and isolated punctuations among them and lemmatizing each word of them. The purpose is to collect a concentrated set of semantic noun phrases that contain potential visual objects. 3. Identify “landmark words”. Next, given the set of candidate visual object words, we filter out a blacklist of words that either do not correspond to any visual counterpart or are misclassified by the SpaCy package. 
The word blacklist includes: end, 18 inch, head, inside, forward, position, ground, home, face, walk, feet, way, walking, bit, veer, 've, next, stop, towards, right, direction, thing, facing, side, turn, middle, one, out, piece, left, destination, straight, enter, wait, don't, stand, back, round. We use the remaining noun phrases as the "landmark words" of the sentences. Note that this step identifies the "landmark words" for the later procedure which aligns BABY-STEPs and expert trajectories. 4. Identify verb phrases. Similarly, we use a verb blacklist to filter out verbs that require no navigational actions of the agent. The blacklist includes: make, turn, face, facing, veer. 5. Merge non-actionable sentences. We merge sentences without landmarks and verbs into the next sentence, as they are likely not actionable. 6. Merge stop sentences. Some sentences only describe the stop condition of a navigation action, using verb-noun compositions that indicate the stop condition. We detect the sentences starting with wait, stop, there, remain, you will see as sentences that only describe the stop condition and merge them into the previous sentence. Similarly, we detect sentences starting with with, facing and merge them into the next sentence. After applying the above 6 heuristic rules to the language instruction, we obtain chunks of sentences that describe the navigable BABY-STEPs of the whole task (i.e., a sequence of navigational sub-goals). A compact code sketch of these rules is given below.

A.2 Align Expert Trajectories with Identified BABY-STEPs

In the previous section, we described the algorithm for identifying BABY-STEP instructions from the original natural language instructions of the dataset. Now we describe the procedure of aligning BABY-STEPs with the expert trajectories, which segments the expert trajectories according to the BABY-STEPs to create the training data for the learning pipeline of our BABYWALK agent. Note that during training, our BABYWALK does not rely on the existence of ground-truth alignments between the (micro)instructions and BABY-STEP trajectories.

Main Idea The main idea here is to: 1) perform visual landmark classification to produce confidence scores of landmarks for each visual state s along expert trajectories; 2) use the predicted landmark scores and the "landmark words" in BABY-STEPs to guide the alignment between the expert trajectory and BABY-STEPs. To achieve this, we train a visual landmark classifier with weak supervision — trajectory-wise existence of landmark objects. Next, based on the predicted landmark confidence scores, we use dynamic programming (DP) to chunk the expert trajectory into segments and assign the segments to the BABY-STEPs.

Weakly Supervised Learning of the Landmark Classifier Given the pairs of aligned instructions and trajectories (X, Y) from the original dataset, we train a landmark classifier to detect landmarks mentioned in the instructions. We formulate it as a multi-label classification problem that asks a classifier f LDMK(st; O) to predict all the landmarks OX of the instruction X given the corresponding trajectory Y. Here, we denote the set of all possible landmarks in the entire dataset as O, and the landmarks of a specific instruction X as OX.
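The code sketch referenced above: a compact, hedged rendering of the six chunking rules of A.1. It assumes spaCy for POS tagging and copies the blacklists given in the text, but the control flow is a simplification of the released templates, not the authors' exact implementation.

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # assumed spaCy model

NOUN_BLACKLIST = {"end", "18 inch", "head", "inside", "forward", "position",
                  "ground", "home", "face", "walk", "feet", "way", "walking",
                  "bit", "veer", "'ve", "next", "stop", "towards", "right",
                  "direction", "thing", "facing", "side", "turn", "middle",
                  "one", "out", "piece", "left", "destination", "straight",
                  "enter", "wait", "don't", "stand", "back", "round"}
VERB_BLACKLIST = {"make", "turn", "face", "facing", "veer"}
STOP_PREFIXES = ("wait", "stop", "there", "remain", "you will see")
MERGE_NEXT_PREFIXES = ("with", "facing")

def landmarks_and_verbs(sentence):
    """Rules 1-4: chunk noun/verb phrases, curate them, apply the blacklists."""
    doc = nlp(sentence)
    nouns = {tok.lemma_.lower() for chunk in doc.noun_chunks
             for tok in chunk if not (tok.is_stop or tok.is_punct)}
    verbs = {tok.lemma_.lower() for tok in doc if tok.pos_ == "VERB"}
    return nouns - NOUN_BLACKLIST, verbs - VERB_BLACKLIST

def identify_baby_steps(instruction):
    """Rules 5-6: merge non-actionable sentences into the next step and
    stop-condition sentences into the previous step."""
    sentences = [s.strip() for s in instruction.split(".") if s.strip()]
    steps, pending = [], ""
    for sent in sentences:
        sent = (pending + " " + sent).strip() if pending else sent
        pending = ""
        nouns, verbs = landmarks_and_verbs(sent)
        low = sent.lower()
        if steps and low.startswith(STOP_PREFIXES):
            steps[-1] += ". " + sent      # rule 6: stop condition -> previous step
        elif (not nouns and not verbs) or low.startswith(MERGE_NEXT_PREFIXES):
            pending = sent                # rules 5/6: attach to the next sentence
        else:
            steps.append(sent)
    if pending:                           # trailing fragment, keep it as its own step
        steps.append(pending)
    return steps
```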
Concretely, to obtain f LDMK, we first train a convolutional neural network (CNN) based on the visual state features st to independently predict the existence of landmarks at every time step; we then aggregate the predictions across all time steps to get trajectory-wise logits ψ via max-pooling over all states of the trajectory:

ψ = max { f LDMK(st; O) | t = 1, . . . , |Y| }

Here f LDMK denotes the independent state-wise landmark classifier, and ψ denotes the logits before normalization for computing the landmark probabilities. For the specific details of f LDMK, we input the 6×6 panorama visual feature (i.e., the ResNet-152 feature) into a two-layer CNN (with kernel size 3, hidden dimension 128, and ReLU as the non-linearity) to produce feature activations with spatial extents, followed by a global averaging operator over the spatial dimensions and a multi-layer perceptron (2 layers with hidden dimension 512 and ReLU as the non-linearity) that outputs the state-wise logits for all visual landmarks O. We then max-pool all the state-wise logits along the trajectory and compute the loss using a trajectory-wise binary cross-entropy between the ground-truth landmark labels (of existence) and the predictions.

Aligning BABY-STEPs and Trajectories with Visual Landmarks Now, suppose we have a sequence of BABY-STEP instructions X = {xm, m = 1, . . . , M} and its expert trajectory Y = {st, t = 1, . . . , |Y|}. We can compute the averaged landmark score of the landmarks Oxm that exist in the sub-task instruction xm at a single state st:

Ψ(t, m) = 1[o ∈ Oxm]⊤ f LDMK(st; O) / |Oxm|

Here 1[o ∈ Oxm] represents the one-hot (indicator) encoding of the landmarks that exist in the BABY-STEP xm, and |Oxm| is the total number of such landmarks. We then apply dynamic programming (DP) to solve the trajectory segmentation specified by the following Bellman equation (in recursive form):

Φ(t, m) = Ψ(t, m), if m = 1
Φ(t, m) = Ψ(t, m) + max over i ∈ {1, . . . , t−1} of Φ(i, m−1), otherwise

Here, Φ(t, m) represents the maximum potential of choosing the state st as the end point of the BABY-STEP instruction xm. Solving this DP leads to a set of correspondingly segmented trajectories Y = {ym, m = 1, . . . , M}, with ym being the mth BABY-STEP sub-trajectory.

B Implementation details

B.1 Navigation Agent Configurations

Figure 7 gives an overview of the unrolled version of our full navigation agent.

Panoramic State-Action Space (Fried et al., 2018) We set up the states st as the stacked visual features of agent-centric panoramic views in 12 headings × 3 elevations with 30-degree intervals. The visual feature of each view is a concatenation of the ResNet-152 feature vector of size 2048 and the orientation feature vector of size 128 (the 4-dimensional orientation feature [sin φ; cos φ; sin ω; cos ω] is tiled 32 times). We use the same type of single-view visual feature of size 2176 as our action embeddings.

[Figure 7 diagram: instruction encoder, trajectory encoder (two Bi-LSTMs with vision attention), memory buffer, and the BABYWALK policy (LSTM with vision and text attention, an MLP, and dot-product action prediction followed by a softmax).]

Figure 7: Our network architecture at the m-th BABY-STEP sub-task. The red line represents the procedure of encoding the context variable zm via summarizing the BABY-STEP trajectories fSUMMARY(v(ŷ1), . . . , v(ŷm−1)) and the corresponding (micro)instructions fSUMMARY(u(x1), . . . , u(xm−1)) in the memory buffer. The blue line represents the procedure of encoding the (micro)instruction u(xm) of the current BABY-STEP. The purple line represents the detailed decision-making process of our BABYWALK policy (Ast denotes the set of navigable directions at st, as defined by Fried et al. (2018)).
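Referring back to the dynamic-programming segmentation of Appendix A.2 above, here is a minimal sketch of the recursion. `landmark_scores` stands for the per-state classifier outputs f LDMK(st; O) and `step_landmarks[m]` for the landmark indices Oxm, both assumed to be given; the backtracking bookkeeping is our own addition, and only the recursion itself mirrors the Bellman equation.

```python
import numpy as np

def segment_trajectory(landmark_scores, step_landmarks):
    """Split an expert trajectory into M sub-trajectories, one per BABY-STEP.

    landmark_scores: (T, |O|) array, confidence of every landmark at each state.
    step_landmarks:  list of M index lists, the landmarks mentioned in each
                     BABY-STEP instruction.
    Returns a list of M (start, end) state-index pairs (end inclusive).
    """
    T = landmark_scores.shape[0]
    M = len(step_landmarks)
    # Psi[t, m]: average score of step m's landmarks at state t
    psi = np.stack([landmark_scores[:, idx].mean(axis=1) if len(idx)
                    else np.zeros(T) for idx in step_landmarks], axis=1)

    phi = np.full((T, M), -np.inf)
    back = np.zeros((T, M), dtype=int)
    phi[:, 0] = psi[:, 0]                      # base case: the first BABY-STEP
    for m in range(1, M):
        for t in range(1, T):
            best_i = int(np.argmax(phi[:t, m - 1]))
            phi[t, m] = psi[t, m] + phi[best_i, m - 1]
            back[t, m] = best_i

    # recover the end point of each BABY-STEP by backtracking
    ends = [int(np.argmax(phi[:, M - 1]))]
    for m in range(M - 1, 0, -1):
        ends.append(back[ends[-1], m])
    ends = ends[::-1]
    starts = [0] + [e + 1 for e in ends[:-1]]
    return list(zip(starts, ends))
```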
Encoders The instruction encoder u(·) for the (micro)instructions is a single-directional LSTM with hidden size 512 and a word embedding layer of size 300 (initialized with GloVe embeddings (Pennington et al., 2014)). We use the same encoder for encoding the past experienced instructions and the currently executed instruction. The trajectory encoder v(·) contains two separate bidirectional LSTMs (Bi-LSTMs), both with hidden size 512. The first Bi-LSTM encodes the actions a_ti and outputs a hidden state for each time step ti. We then attend the hidden state to the panoramic view s_ti to get a state feature of size 2176 for each time step. The second Bi-LSTM encodes the state features. We use the trajectory encoder only for encoding the past experienced trajectories.

BABYWALK Policy The BABYWALK policy network consists of one LSTM with two attention layers and an action predictor. First, we attend the hidden state to the panoramic view st to get a state feature of size 2176. The state feature is concatenated with the previous action embedding to update the hidden state using an LSTM with hidden size 512. The updated hidden state then attends to the context variables (the outputs of u(·)). For the action predictor module, we concatenate the output of the text attention layer with the summarized past context ẑm to get an action prediction variable. We then pass the action prediction variable through a 2-layer MLP and take a dot product with the navigable action embeddings to retrieve the probability of the next action.

Model Inference At inference time, the BABYWALK policy only requires running the heuristic BABY-STEP identification on the test-time instruction. No oracle BABY-STEP trajectories are needed, as the BABYWALK agent rolls out each BABY-STEP by itself.

B.2 Details of Reward Shaping for RL

As mentioned in the main text, we learn the policy by optimizing the fidelity-oriented reward (Jain et al., 2019). We now give the complete details of this reward function. Suppose the total number of roll-out steps is T = Σi=1..M |ŷi|. Then the reward function takes the following form:

r(st, at) = 0, if t < T
r(st, at) = SR(Y, Ŷ) + CLS(Y, Ŷ), if t = T

Here, Ŷ = ŷ1 ⊕ · · · ⊕ ŷM represents the concatenation of the BABY-STEP trajectories produced by the navigation agent (we use ⊕ to denote the concatenation operation).

B.3 Optimization Hyper-parameters

For each BABY-STEP task, we set the maximal number of steps to 10 and truncate the corresponding BABY-STEP instruction length to 100. During both the imitation learning and the curriculum reinforcement learning procedures, we fix the learning rate to 1e-4. In imitation learning, the mini-batch size is set to 100. In curriculum learning, we reduce the mini-batch size as the curriculum increases to save memory. For the 1st, 2nd, 3rd, and 4th curriculum, the mini-batch size is set to 50, 32, 20, and 20, respectively. During learning, we pre-train our BABYWALK model for 50,000 iterations using imitation learning as a warm-up stage. Next, in each lecture (up to 4) of reinforcement learning (RL), we train the BABYWALK agent for an additional 10,000 iterations and select the best-performing model in terms of SDTW to resume the next lecture.
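To make the lecture schedule and the terminal reward of B.2 concrete, here is a hedged sketch of how one curriculum lecture's rollout could be organized. The `agent` methods `follow_expert` and `rollout_policy`, the episode field names, and `reward_fn` are assumed interfaces, not the authors' API; the real agent batches episodes and computes SR and CLS as defined by Anderson et al. (2018) and Jain et al. (2019).

```python
def run_lecture(agent, episode, k, reward_fn):
    """Lecture k of curriculum RL: replay the expert through the first
    M - k BABY-STEPs, then let the agent execute the last k BABY-STEPs.

    episode: dict with 'baby_steps' (list of micro-instructions) and
             'expert_subtrajs' (their aligned expert sub-trajectories).
    reward_fn(reference, predicted): terminal reward, e.g. SR + CLS as in B.2.
    """
    M = len(episode["baby_steps"])
    warmup = max(M - k, 0)

    # (1) Warm-up: feed the expert's trajectory for the first M - k BABY-STEPs,
    #     which also fills the agent's memory buffer / history context.
    for m in range(warmup):
        agent.follow_expert(episode["baby_steps"][m],
                            episode["expert_subtrajs"][m])

    # (2) RL part: the agent rolls out the remaining k micro-instructions
    #     (at most 10 steps each, per B.3).
    agent_traj = []
    for m in range(warmup, M):
        agent_traj += agent.rollout_policy(episode["baby_steps"][m],
                                           max_steps=10)

    # (3) Reward is zero at every intermediate step and SR + CLS at the end.
    #     Here the reference is taken as the expert sub-trajectories of the
    #     RL part; the paper's formula is stated for the full Y and Y-hat.
    reference = [s for sub in episode["expert_subtrajs"][warmup:] for s in sub]
    return agent_traj, reward_fn(reference, agent_traj)
```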
For executing each instruction during the RL, we sample 8 navigation episodes before performing any back-propagation. For each learning stage, we use separate Adam optimizers to optimize for all the parameters. Meanwhile, we use the L2 weight decay as the regularizer with its coefficient set to be 0.0005. In the reinforcement learning, the discounted factor γ is set to be 0.95. C Additional Experimental Results In this section, we describe a comprehensive set of evaluation metrics and then show transfer results of models trained on each dataset, with all metrics. We provide additional analysis studying the effectiveness of template based BABY-STEP identification. Finally we present additional qualitative results. Complete set of Evaluation Metrics. We adopt the following set of metrics: • Path Length (PL) is the length of the agent’s navigation path. • Navigation Error (NE) measures the distance between the goal location and final location of the agent’s path. • Success Rate (SR) that measures the average rate of the agent stopping within a specified distance near the goal location (Anderson et al., 2018) • Success weighted by Path Length (SPL) (Anderson et al., 2018) measures the success rate weighted by the inverse trajectory length, to penalize very long successful trajectory. • Coverage weighted by Length Score (CLS) (Jain et al., 2019) that measures the fidelity of the agent’s path to the reference, weighted by the length score, and the newly proposed • Normalized Dynamic Time Warping (NDTW) that measures in more fine-grained details, the spatiotemporal similarity of the paths by the agent and the human expert (Magalhaes et al., 2019). • Success rate weighted normalized Dynamic Time Warping (SDTW) that further measures the spatiotemporal similarity of the paths weighted by the success rate (Magalhaes et al., 2019). CLS, NDTW and SDTW measure explicitly the agent’s ability to follow instructions and in particular, it was shown that SDTW corresponds to human preferences the most. C.1 Sanity Check between Prior Methods and Our Re-implementation Data Splits R2R Validation Unseen Perf. Measures PL NE↓SR↑ SPL Reported Results SEQ2SEQ (Fried et al., 2018) 7.07 31.2 SF+ (Fried et al., 2018) 6.62 35.5 RCM+ (Wang et al., 2019) 14.84 5.88 42.5 REGRETFUL+⋆(Ma et al., 2019b) 5.32 50.0 41.0 FAST+⋆(Ke et al., 2019) 21.17 4.97 56.0 43.0 Re-implemented Version SEQ2SEQ 15.76 6.71 33.6 25.5 SF+ 15.55 6.52 35.8 27.6 RCM+ 11.15 6.18 42.4 38.6 REGRETFUL+⋆ 13.74 5.38 48.7 39.7 FAST+⋆ 20.45 4.97 56.6 43.7 Table 6: Sanity check of model trained on R2R and evaluated on its validation unseen split (+: pre-trained with data augmentation; ⋆:reimplemented or readapted from the original authors’ released code). As mentioned in the main text, we compare our re-implementation and originally reported results of baseline methods on the R2R datasets, as Table 6. We found that the results are mostly very similar, indicating that our re-implementation are reliable. C.2 Complete Curriculum Learning Results We present the curriculum learning results with all evaluation metrics in Table 7. C.3 Results of BABY-STEP Identification We present an additional analysis comparing different BABY-STEP identification methods. We compare our template-based BABY-STEP identification with a simple method that treat each sentence as an BABY-STEP (referred as sentence-wise), both using the complete BABYWALK model with the same training routine. 
The results are shown in the 2553 IL+ CRL w/ LECTURE # Datasets Metrics IL IL+RL 1st 2nd 3rd 4th R2R PL 22.4 12.0 11.6 13.2 10.6 9.6 NE↓ 6.8 7.1 6.8 6.8 6.7 6.6 SR↑ 28.1 29.8 29.9 33.2 32.2 34.1 SPL↑ 15.7 24.3 24.9 26.6 27.5 30.2 CLS↑ 28.9 46.2 46.6 47.2 48.1 50.4 NDTW↑30.6 43.8 42.5 41.0 47.7 50.0 SDTW↑16.5 23.2 23.1 24.3 25.7 27.8 R4R PL 43.4 22.8 23.9 25.5 21.4 19.0 NE↓ 8.4 8.6 8.5 8.4 8.0 8.2 SR↑ 24.7 25.0 24.1 26.7 27.9 27.3 SPL↑ 8.2 11.2 11.0 12.3 13.7 14.7 CLS↑ 27.9 45.5 44.8 45.9 47.4 49.4 NDTW↑24.3 34.4 32.8 33.7 38.4 39.6 SDTW↑11.1 13.6 13.5 15.2 17.0 17.3 R6R PL 68.8 35.3 37.0 40.6 33.2 28.7 NE↓ 9.4 9.5 9.4 9.4 8.9 9.2 SR↑ 22.7 23.7 21.9 23.4 24.7 25.5 SPL↑ 4.2 7.2 6.4 6.8 8.1 9.2 CLS↑ 24.4 43.0 41.8 42.3 44.2 47.2 NDTW↑17.8 28.1 26.0 26.9 30.9 32.7 SDTW↑ 7.7 10.8 9.7 11.0 12.7 13.6 R8R PL 93.1 47.5 50.0 55.3 45.2 39.9 NE↓ 10.0 10.2 10.2 10.1 9.3 10.1 SR↑ 21.9 21.4 20.4 22.1 23.1 23.1 SPL↑ 4.3 6.1 5.5 6.1 6.8 7.4 CLS↑ 24.1 42.1 41.0 41.5 43.9 46.0 NDTW↑15.5 24.6 22.9 23.8 27.7 28.2 SDTW↑ 6.4 8.3 7.9 9.2 10.5 11.1 Average PL 51.8 26.8 27.9 30.6 25.1 22.1 NE↓ 8.5 8.7 8.5 8.5 8.1 8.3 SR↑ 24.7 25.5 24.6 27.0 27.5 28.1 SPL↑ 8.6 13.1 12.9 13.9 15.1 16.5 CLS↑ 26.6 44.5 43.9 44.6 46.2 48.6 NDTW↑23.0 33.9 32.2 32.4 37.4 39.0 SDTW↑11.0 14.8 14.4 15.7 17.3 18.4 Table 7: Ablation on BABYWALK after each learning stage (trained on R4R). Table 8. Generally speaking, the template based BABY-STEP identification provides a better performance. C.4 In-domain Results of Models Trained on Instructions with Different lengths As mentioned in the main text, we display all the indomain results of navigation agents trained on R2R, R4R, R6R, R8R, respectively. The complete results of all different metrics are included in the Table 9. We note that our BABYWALK agent consistently outperforms baseline methods on each dataset. It is worth noting that on R4R, R6R and R8R datasets, RCM(GOAL)+ achieves better results in SPL. This is due to the aforementioned fact that they often Datasets Metrics Sentence-wise Template based R2R PL 10.3 9.6 NE↓ 6.8 6.6 SR↑ 28.7 34.1 SPL↑ 24.9 30.2 CLS↑ 48.3 50.4 NDTW↑ 43.6 50.0 SDTW↑ 22.4 27.8 R4R PL 20.9 19.0 NE↓ 8.2 8.2 SR↑ 26.3 27.3 SPL↑ 12.7 14.7 CLS↑ 46.4 49.4 NDTW↑ 35.5 39.6 SDTW↑ 15.9 17.3 R6R PL 32.1 28.7 NE↓ 9.0 9.2 SR↑ 22.5 25.5 SPL↑ 7.5 9.2 CLS↑ 44.2 47.2 NDTW↑ 29.3 32.7 SDTW↑ 11.1 13.6 R8R PL 42.9 39.9 NE↓ 9.8 10.1 SR↑ 21.2 23.1 SPL↑ 6.3 7.4 CLS↑ 43.2 46.0 NDTW↑ 25.5 28.2 SDTW↑ 9.3 11.1 Average PL 24.2 22.1 NE↓ 8.3 8.3 SR↑ 25.2 28.1 SPL↑ 13.8 16.5 CLS↑ 45.9 48.6 NDTW↑ 34.6 39.0 SDTW↑ 15.4 18.4 Table 8: BABYWALK Agent performances between different segmentation rules (trained on R4R). Refer to text for more details. take short-cuts to directly reach the goal, with a significantly short trajectory. As a consequence, the success rate weighted by inverse path length is high. C.5 Transfer Results of Models Trained on Instructions with Different lengths For completeness, we also include all the transfer results of navigation agents trained on R2R, R4R, R6R, R8R, respectfully. The complete results of all different metrics are included in the Table 10. According to this table, we note that models trained on R8R can achieve the best overall transfer learning performances. 
This could because of the fact that R8R trained model only needs to deal with interpo2554 Datasets Metrics SEQ2SEQ SF+ RCM(GOAL)+ RCM(FIDELITY)+ BABYWALK BABYWALK + R2R →R2R PL 15.8 15.6 11.1 10.2 10.7 10.2 NE↓ 6.7 6.5 6.2 6.2 6.2 5.9 SR↑ 33.6 35.8 42.4 42.1 42.6 43.8 SPL↑ 25.5 27.6 38.6 38.6 38.3 39.6 CLS↑ 38.5 39.8 52.7 52.6 52.9 54.4 NDTW↑39.2 41.0 51.0 50.8 53.4 55.3 SDTW↑ 24.9 27.2 33.5 34.4 35.7 36.9 R4R →R4R PL 28.5 26.1 12.3 26.4 23.8 19.0 NE↓ 8.5 8.3 7.9 8.4 7.9 8.2 SR↑ 25.7 24.9 28.7 24.7 29.6 27.3 SPL↑ 14.1 16.0 22.1 11.6 14.0 14.7 CLS↑ 20.7 23.6 36.3 39.2 47.8 49.4 NDTW↑20.6 22.7 31.3 31.3 38.1 39.6 SDTW↑ 9.0 9.2 13.2 13.7 18.1 17.3 R6R →R6R PL 34.1 43.4 11.8 28.0 28.4 27.2 NE↓ 9.5 9.6 9.2 9.4 9.4 9.3 SR↑ 18.1 17.8 18.2 20.5 21.7 22.0 SPL↑ 9.6 7.9 14.8 7.4 7.8 8.1 CLS↑ 23.4 20.3 31.6 39.0 47.1 47.4 NDTW↑19.3 17.8 25.9 25.8 32.6 33.4 SDTW↑ 6.5 5.9 7.6 9.5 11.5 11.8 R8R →R8R PL 40.0 53.0 12.4 42.3 35.6 39.1 NE↓ 9.9 10.1 10.2 10.7 9.6 9.9 SR↑ 20.2 18.6 19.7 18.2 22.3 22.0 SPL↑ 12.4 9.8 15.4 5.3 7.3 7.0 CLS↑ 19.8 16.3 25.7 37.2 46.4 46.4 NDTW↑15.8 13.5 19.4 21.6 29.6 28.3 SDTW↑ 5.1 4.4 5.8 7.6 10.4 10.1 Table 9: Indomain results. Each model is trained on the training set of R2R, R4R, R6R and R8R datasets, and evaluated on the corresponding unseen validation set (+: pre-trained with data augmentation). lating to shorter ones, rather than extrapolating to longer instructions, which is intuitively an easier direction. C.6 Additional Qualitative Results We present more qualitative result of various VLN agents as Fig 8. It seems that BABYWALK can produce trajectories that align better with the human expert trajectories. 2555 Datasets Metrics SEQ2SEQ SF+ RCM(GOAL)+ RCM(FIDELITY)+ REGRETFUL+⋆ FAST+⋆ BABYWALK BABYWALK + R2R →R4R PL 28.6 28.9 13.2 14.1 15.5 29.7 19.5 17.9 NE↓ 9.1 9.0 9.2 9.3 8.4 9.1 8.9 8.9 SR↑ 18.3 16.7 14.7 15.2 19.2 13.3 22.5 21.4 SPL↑ 7.9 7.4 8.9 8.9 10.1 7.7 12.6 11.9 CLS↑ 29.8 30.0 42.5 41.2 46.4 41.8 50.3 51.0 NDTW↑25.1 25.3 33.3 32.4 31.6 33.5 38.9 40.3 SDTW↑7.1 6.7 7.3 7.2 9.8 7.2 14.5 13.8 R2R →R6R PL 39.4 41.4 14.2 15.7 15.9 32.0 29.1 25.9 NE↓ 9.6 9.8 9.7 9.8 8.8 9.0 10.1 9.8 SR↑ 20.7 17.9 22.4 22.7 24.2 26.0 21.4 21.7 SPL↑ 11.0 9.1 17.7 18.3 16.6 16.5 7.9 8.8 CLS↑ 25.9 26.2 37.1 36.4 40.9 37.7 48.4 49.0 NDTW↑20.5 20.8 26.6 26.1 16.2 21.9 30.8 32.6 SDTW↑7.7 7.2 8.2 8.4 6.8 8.5 11.2 11.2 R2R →R8R PL 52.3 52.2 15.3 16.9 16.6 34.9 38.3 34.0 NE↓ 10.5 10.5 11.0 11.1 10.0 10.6 11.1 10.5 SR↑ 16.9 13.8 12.4 12.6 16.3 11.1 19.6 20.7 SPL↑ 6.1 5.6 7.4 7.5 7.7 6.2 6.9 7.8 CLS↑ 22.5 24.1 32.4 30.9 35.3 33.7 48.1 48.7 NDTW↑17.1 18.2 23.9 23.3 8.1 14.5 26.7 29.1 SDTW↑4.1 3.8 4.3 4.3 2.4 2.4 9.4 9.8 Average PL 40.1 40.8 14.2 15.6 16.0 32.2 29.0 25.9 NE↓ 9.7 9.8 10.0 10.1 9.1 9.6 10.0 9.7 SR↑ 18.6 16.1 16.5 16.8 19.9 16.8 21.2 21.3 SPL↑ 8.3 7.4 11.3 11.6 11.5 10.1 9.1 9.5 CLS↑ 26.1 26.8 37.3 36.2 40.9 37.7 48.9 49.6 NDTW↑20.9 21.4 27.9 27.3 18.6 23.3 32.1 34.0 SDTW↑6.3 5.9 6.6 6.6 6.3 6.0 11.7 11.6 Datasets Metrics SEQ2SEQ SF+ RCM(GOAL)+ RCM(FIDELITY)+ REGRETFUL+⋆ FAST+⋆ BABYWALK BABYWALK + R4R →R2R PL 16.2 17.4 10.2 17.7 20.0 26.5 12.1 9.6 NE↓ 7.8 7.3 7.1 6.7 7.5 7.2 6.6 6.6 SR↑ 16.3 22.5 25.9 29.1 22.8 25.1 35.2 34.1 SPL↑ 9.9 14.1 22.5 18.2 14.0 16.3 28.3 30.2 CLS↑ 27.1 29.5 44.2 34.3 32.6 33.9 48.5 50.4 NDTW↑29.3 31.8 41.1 33.5 28.5 27.9 46.5 50.0 SDTW↑10.6 14.8 20.2 18.3 13.4 14.2 27.2 27.8 R4R →R6R PL 40.8 38.5 12.8 33.0 19.9 26.6 37.0 28.7 NE↓ 9.9 9.5 9.2 9.3 9.5 8.9 8.8 9.2 SR↑ 14.4 15.5 19.3 20.5 18.0 22.1 26.4 25.5 SPL↑ 6.8 8.4 15.2 8.5 10.6 13.7 8.1 9.2 CLS↑ 17.7 20.4 
31.8 38.3 31.7 31.5 44.9 47.2 NDTW↑16.4 18.3 23.5 23.7 23.5 23.0 30.1 32.7 SDTW↑4.6 5.2 7.3 7.9 7.5 7.7 13.1 13.6 R4R →R8R PL 56.4 50.8 13.9 38.7 20.7 28.2 50.0 39.9 NE↓ 10.1 9.5 9.5 9.9 9.5 9.1 9.3 10.1 SR↑ 20.7 21.6 22.8 20.9 18.7 27.7 26.3 23.1 SPL↑ 10.4 11.8 16.9 9.0 9.2 13.7 7.2 7.4 CLS↑ 15.0 17.2 27.6 34.6 29.3 29.6 44.7 46.0 NDTW↑13.4 15.1 19.5 21.7 19.0 17.7 27.1 28.2 SDTW↑4.7 5.0 5.1 6.1 5.6 6.9 11.5 11.1 Average PL 37.8 35.6 12.3 29.8 20.2 27.1 33.0 26.1 NE↓ 9.3 8.8 8.6 8.6 8.8 8.4 8.2 8.6 SR↑ 17.1 19.9 22.7 23.5 19.8 25.0 29.3 27.6 SPL↑ 9.0 11.4 18.2 11.9 11.3 14.6 14.5 15.6 CLS↑ 19.9 22.4 34.5 35.7 31.2 31.7 46.0 47.9 NDTW↑19.7 21.7 28.0 26.3 23.7 22.9 34.6 37.0 SDTW↑6.6 8.3 10.9 10.8 8.8 9.6 17.3 17.5 (a) R2R trained model (b) R4R trained model Datasets Metrics SEQ2SEQ SF+ RCM(GOAL)+ RCM(FIDELITY)+ BABYWALK BABYWALK + R6R →R2R PL 14.5 19.4 8.1 15.5 9.4 9.2 NE↓ 7.7 7.1 7.6 7.5 6.8 6.8 SR↑ 19.3 21.9 19.6 22.6 31.3 30.6 SPL↑ 13.3 11.6 17.2 14.1 28.3 27.8 CLS↑ 32.1 26.2 43.2 34.3 49.9 50.0 NDTW↑31.9 30.8 39.7 32.4 49.5 49.4 SDTW↑ 13.1 13.3 15.3 14.3 25.9 25.4 R6R →R4R PL 25.2 33.0 11.6 25.7 18.1 17.7 NE↓ 8.7 8.6 8.5 8.4 8.4 8.2 SR↑ 24.2 22.4 23.6 25.4 24.3 24.3 SPL↑ 13.7 9.3 17.5 10.6 12.8 12.9 CLS↑ 25.8 21.4 35.8 34.8 48.6 48.6 NDTW↑22.9 20.6 29.8 26.5 39.0 39.4 SDTW↑ 9.3 7.5 10.8 11.1 15.1 15.1 R6R →R8R PL 43.0 52.8 14.2 29.9 38.3 36.8 NE↓ 9.9 9.9 9.6 9.7 10.2 10.0 SR↑ 20.1 20.3 20.3 22.4 20.8 21.0 SPL↑ 11.2 9.4 14.9 8.1 6.6 6.8 CLS↑ 20.6 18.3 27.7 38.9 45.9 46.3 NDTW↑16.3 15.2 21.9 22.2 28.4 29.3 SDTW↑ 5.6 5.0 6.4 6.8 9.6 9.9 Average PL 27.6 35.1 11.3 23.7 21.9 21.2 NE↓ 8.8 8.5 8.6 8.5 8.5 8.3 SR↑ 21.2 21.5 21.2 23.5 25.5 25.3 SPL↑ 12.7 10.1 16.5 10.9 15.9 15.8 CLS↑ 26.2 22.0 35.6 36.0 48.1 48.3 NDTW↑23.7 22.2 30.5 27.0 39.0 39.4 SDTW↑ 9.3 8.6 10.8 10.7 16.9 16.8 Datasets Metrics SEQ2SEQ SF+ RCM(GOAL)+ RCM(FIDELITY)+ BABYWALK BABYWALK + R8R →R2R PL 13.7 19.3 7.8 17.8 9.1 9.8 NE↓ 7.6 7.3 8.0 8.2 6.8 6.7 SR↑ 18.7 23.4 14.8 19.2 30.0 32.1 SPL↑ 13.3 12.9 12.9 10.6 27.0 28.2 CLS↑ 32.7 26.6 37.9 28.9 49.5 49.3 NDTW↑32.4 29.9 34.9 25.9 48.9 48.9 SDTW↑12.7 14.5 11.1 10.5 24.6 26.2 R8R →R4R PL 23.1 31.7 11.1 32.5 17.4 19.0 NE↓ 8.7 8.8 8.7 9.2 8.2 8.5 SR↑ 23.6 21.8 23.2 21.7 24.4 24.4 SPL↑ 15.1 10.5 18.2 7.4 12.6 12.5 CLS↑ 24.9 20.8 32.3 29.4 48.1 48.5 NDTW↑22.3 19.7 26.4 20.6 39.1 38.5 SDTW↑8.8 7.7 9.3 8.4 14.9 15.2 R8R →R6R PL 30.9 42.2 11.9 39.9 26.6 29.2 NE↓ 9.7 9.9 9.9 10.1 9.0 9.3 SR↑ 15.4 14.7 14.8 20.0 22.9 22.9 SPL↑ 8.6 6.7 11.6 5.3 8.4 7.9 CLS↑ 22.2 18.5 29.1 33.5 46.9 46.6 NDTW↑18.5 15.9 22.5 20.1 33.3 31.8 SDTW↑5.5 4.7 6.0 7.8 12.1 11.8 Average PL 22.6 31.1 10.3 30.1 17.7 19.3 NE↓ 8.7 8.7 8.9 9.2 8.0 8.2 SR↑ 19.2 20.0 17.6 20.3 25.8 26.5 SPL↑ 12.3 10.0 14.2 7.8 16.0 16.2 CLS↑ 26.6 22.0 33.1 30.6 48.2 48.1 NDTW↑24.4 21.8 27.9 22.2 40.4 39.7 SDTW↑9.0 9.0 8.8 8.9 17.2 17.7 (c) R6R trained model (d) R8R trained model Table 10: Transfer results of R2R, R4R, R6R, R8R trained model evaluated on their complementary unseen validation datasets (+: pre-trained with data augmentation; ⋆: reimplemented or readapted from the original authors’ released code). 2556 HUMAN BABYWALK RCM SF SEQ2SEQ Figure 8: Additional trajectories by human experts and VLN agents on two navigation tasks.
2020
229
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 253–262 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 253 Pre-train and Plug-in: Flexible Conditional Text Generation with Variational Auto-Encoders Yu Duan1∗, Canwen Xu2∗, Jiaxin Pei3∗, Jialong Han4†, Chenliang Li2‡ 1 Alibaba Group, China 2 Wuhan University, China 3 University of Michigan, United States 4 Amazon, United States 1 [email protected], 2 {xucanwen,cllee}@whu.edu.cn 3 [email protected], 4 [email protected] Abstract Conditional Text Generation has drawn much attention as a topic of Natural Language Generation (NLG) which provides the possibility for humans to control the properties of generated contents. Current conditional generation models cannot handle emerging conditions due to their joint end-to-end learning fashion. When a new condition added, these techniques require full retraining. In this paper, we present a new framework named Pre-train and Plug-in Variational Auto-Encoder (PPVAE) towards flexible conditional text generation. PPVAE decouples the text generation module from the condition representation module to allow “one-to-many” conditional generation. When a fresh condition emerges, only a lightweight network needs to be trained and works as a plug-in for PPVAE, which is efficient and desirable for real-world applications. Extensive experiments demonstrate the superiority of PPVAE against the existing alternatives with better conditionality and diversity but less training effort.1 1 Introduction Currently, neural generation techniques have powered many inspiring applications, e.g., poem generation (Yang et al., 2018), neural machine translation (NMT) (Bahdanau et al., 2015) and chatbot (Zhao et al., 2017). Conditional (also known as controllable) text generation is an important task of text generation, aiming to generate realistic text that carries a specific attribute (e.g., positive or negative sentiment). A common solution is to encode the condition into a vector representation and then integrate it with the text generation process (Kingma ∗The first three authors contribute equally to this paper. † Work done when Jialong Han was with Tencent AI Lab. ‡ Chenliang Li is the corresponding author. 1The code is available at https://github.com/ WHUIR/PPVAE. et al., 2014; Hu et al., 2017; Mirza and Osindero, 2014). These existing neural models have achieved encouraging results. However, when a new condition is added (e.g., a new topic for categorical generation), they require a full retraining or finetuning. This process is both time-consuming and computationally inefficient (Houlsby et al., 2019). Both fine-tuning and retraining are not desirable in real-world applications since the delivery (e.g., transmitting updated weights through the Internet) and client-side re-deployment (e.g., distribute updated weights to users) of large-scale weights are often difficult. Inspired by the recent success of Variational Auto-Encoder (VAE) (Kingma and Welling, 2014) based post-hoc conditional image generation strategy (Engel et al., 2018), we provide a new perspective for flexible conditional text generation. We propose Pre-train and Plug-in Variational AutoEncoder (PPVAE), which decouples the text generation module from the condition representation module. 
PPVAE is a hierarchical framework composed of two VAEs: (1) PRETRAINVAE, which derives the global latent space of text with its encoder (pre-trained global encoder) and learns to generate text based on an easily-accessible large unlabeled dataset with its decoder (pre-trained global decoder); (2) PLUGINVAE, which is a lightweight neural network that learns to transform vectors from the conditional latent space to the global latent space, and vice versa. This mapping function can be easily learned with only a few conditional training samples. In this sense, once we transform a latent variable (also known as latent code) randomly sampled from the conditional space distribution to the global space, the pre-trained global decoder is directly adopted for generation. In other words, whenever a new condition emerges, we only need to train a PLUGINVAE and directly plug it into the framework. 254 Different from the existing end-to-end neural models (Mirza and Osindero, 2014; Sohn et al., 2015; Kingma et al., 2014), PPVAE focuses on the learning of pure transformation between the continuous latent spaces, instead of the tricky discrete text generation. Once trained, PRETRAINVAE is fixed for text representation and generation under all conditions. Our proposed framework decouples the conditional space learning from the text generation, endowing PPVAE with more flexibility when handling emerging conditions. Also, training only a small conditional network for latent space transformation is much more efficient than co-training with the text generation. Additionally, we can easily increase the capability of generation using a larger corpus or deeper neural networks for text encoding and decoding. Our main contributions can be summarized as follows: (1) We propose a novel framework, PPVAE, for conditional text generation, which allows a separate training for a new condition without retraining the whole network. (2) We conduct extensive experiments and analysis to verify the effectiveness of our proposed PPVAE. Our framework achieves state-of-the-art performance on conditionality in both automatic and human evaluations. 2 Related work Boosted by the recent success of deep learning technology, Natural Language Generation (NLG) has recently become popular in the NLP community. Many great works have attempted to solve various subtasks like dialogue generation (Li et al., 2016), poetry generation (Yi et al., 2018) and story generation (Fan et al., 2018) and new techniques keep emerging (Bowman et al., 2016; Yu et al., 2017; Zhou et al., 2020). However, due to the blackbox nature of neural networks, the recent proposed generic models suffer the problem of lacking interpretability and controllability. To handle this problem and support generating plausible text with a specified condition, conditional text generation (Kikuchi et al., 2016; Ficler and Goldberg, 2017; Hu et al., 2017) has recently attracted extensive attention. Current research in this direction mainly falls into two fashions: the supervised methods and semi-supervised methods. For supervised methods, Mirza and Osindero (2014); Sohn et al. (2015) first converted the condition information to one-hot vectors, then integrated them into a generator and a discriminator. To enhance the correlation between structured conditional code and generated samples, Chen et al. (2016) adopted an extra adversarial classifier to infer the structured code from generated samples. 
Wang and Wan (2018) used multiple generators for multiple conditions and a multi-class classifier to provide training signals for the learning of generators. However, given only a limited number of conditional samples, semi-supervised methods are compulsory. To utilize the implicit conditional distribution behind the unlabeled text, Kingma et al. (2014) introduced a classifier into the VAE architecture. Hu et al. (2017) further involved two additional independent regularization terms in enhancing the disentanglement between structured code and unstructured code. Very recently, Keskar et al. (2019) used human-defined “control code” to pre-trained Language Model in an unsupervised manner. Our work falls in the category of semisupervised learning yet differs from the existing works in the following ways: (1) Our model decouples the text generation module from the condition representation module which two are tightly fused as a single one in previous studies, enabling possible exploitation for pre-trained Language Models (e.g., GPT-2 (Radford et al., 2019)). (2) Our model allows single-condition generation, which could inspire new applications like polite speech generator (Niu and Bansal, 2018) and data augmentation (Guo et al., 2018). (3) Our model can handle emerging conditions while achieving state-of-theart performance with fewer parameters and less training time. 3 Preliminaries Variational Auto-Encoder (VAE). VAE (Kingma and Ba, 2015) is widely used in continuous generation (e.g., image generation). Bowman et al. (2016) introduced VAE to NLG to solve the “one-to-many” generation problem (i.e., generating multiple feasible samples for the same input). Given a latent variable z randomly sampled from a prior distribution, VAE comprises an encoder enc(x) = qφ(z|x) and a decoder dec(z) = pθ(x|z). The encoder aims to encode input data x into latent space Z ∈Rd. The decoder is used to reconstruct the original input x, given the corresponding z. Thus, the loss function of VAE is formulated as: LVAE(x) = −Eqφ(z|x)[log pθ(x|z)] + KL(qφ(z|x)∥p(z)) (1) 255 where KL(·||·) is the Kullback-Leibler (KL) divergence, p(z) = N(0, 1) is the prior distribution. The first term ensures that VAE can distill compact variable z in latent space for reconstruction. The second term pushes posterior distribution to be close to the prior distribution, securing the mutual information between original data and the latent space (Dupont, 2018). Conditional Text Generation with VAE. Conditional text generation has drawn much attention recently. By controlling the properties of generated contents, we can apply the generative models to many real-world scenarios. We follow the problem setting in (Hu et al., 2017). Given a set of k conditions C = {c1, c2, ..., ck}, an unlabeled corpus X, and conditional text samples Y = Y1 ∪Y2 ∪...∪Yk where each Yi is a set of text samples that carries the condition ci. The goal of a VAE model is to learn a decoder pθ(ˆy|z, ci) that takes the latent variable z and the condition ci to calculate the distribution over the text samples Yi. Thus, when the condition ci and a randomly sampled latent variable z ∼p(z) specified, the model could generate realistic text samples matching the given condition. 4 Pre-train and Plug-in Variational Auto-Encoder As a basis for semi-supervised learning, a large unlabeled corpus should include diverse text which covers a vast spectrum of conditions. 
Thus, the text under each condition forms a conditional latent space, which can be mapped from a larger global latent space. Based on this, we propose a PRETRAINVAE and a PLUGINVAE to derive the global and conditional latent spaces, respectively.

4.1 Framework

PRETRAINVAE is composed of a pre-trained global encoder for text representation and a pre-trained global decoder for text generation.

PRETRAINVAE. The encoder and decoder of PRETRAINVAE are used to encode and generate text, respectively. As discussed above, PRETRAINVAE is trained on a large amount of unlabeled text to derive the global latent space Zg for the latent variable zg, where Zg ∈ Rdg and dg is the space dimension. Previous studies usually use a common VAE for text representation and generation. However, as pointed out in Bowman et al. (2016), VAE suffers from the notorious "posterior collapse" problem. To address this, we utilize the Wasserstein Auto-Encoder (WAE) (Tolstikhin et al., 2018) for PRETRAINVAE. Different from the original VAE, WAE encourages the aggregated posterior distribution to be close to the prior, which is effective in alleviating the reconstruction problem of VAE (Tolstikhin et al., 2018). Specifically, we adopt WAE-GAN, a variant of WAE which incorporates the merits of adversarial learning. During training, the encoder encg(x) = qg(zg|x) encodes the text into the latent space and the decoder decg(zg) = pg(x|zg) reconstructs the text from the latent variable zg. Thus, the loss function of PRETRAINVAE is formulated as:

LPRETRAINVAE(x) = −Eqg(zg|x)[log pg(x|zg)] + λD(Q(zg), p(zg))   (2)

where Q(zg) = ∫ qg(zg|x) p(x) dx is the aggregated posterior distribution; p(zg) is the prior normal distribution; D is the adversarial discriminator; and λ is a coefficient hyper-parameter (λ > 0).

PLUGINVAE. For each condition, we use a condition-specific PLUGINVAE to derive the conditional space. That is, PLUGINVAE is proposed to learn the transformation between the conditional and global latent space for each condition. Specifically, for each condition ci, we use a limited number of conditional samples yi and utilize the global encoder encg to encode them into vyi. Note that, normally, the encoded text samples under a single condition are not likely to be densely clustered in the global text space Zg, since the learning process of Zg is condition-independent and the unlabeled corpus contains diverse text samples. PLUGINVAE for condition ci consists of an encoder encci(vyi) = qci(zci|vyi) and a decoder decci(zci) = pci(vyi|zci). The learned condition-dependent latent space is Zci ∈ Rdc, where dc is the space dimension. Thus, PLUGINVAE is capable of mapping the samples in the global latent space to and from a denser conditional latent space (i.e., dc < dg). During training, the loss function of PLUGINVAE for a single condition is defined as:

Lsingle(vyi) = −Eqci(zci|vyi)[log pci(vyi|zci)] + |KL(qci(zci|vyi)∥p(zci)) − β|   (3)

where p(zci) is the prior normal distribution of the conditional latent space; zci is the latent variable; and vyi = encg(yi) are the encoded text samples from Yi.
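A minimal PyTorch-style sketch of the PLUGINVAE objective in Eq. (3) and of how a plug-in is used at generation time (sample zc from the prior, map it into the global space, then let the frozen pre-trained global decoder generate). The module sizes, the Gaussian encoder parameterization, and the squared-error reconstruction term (a Gaussian-likelihood stand-in for −E[log p(v|z)]) are illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn as nn

class PluginVAE(nn.Module):
    """Lightweight VAE between the global space (d_g) and a smaller
    conditional space (d_c < d_g). Sizes here are illustrative."""
    def __init__(self, d_g=128, d_c=20, hidden=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_g, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2 * d_c))   # mu, log_var
        self.dec = nn.Sequential(nn.Linear(d_c, hidden), nn.ReLU(),
                                 nn.Linear(hidden, d_g))

    def forward(self, v_y):
        mu, log_var = self.enc(v_y).chunk(2, dim=-1)
        z_c = mu + torch.randn_like(mu) * (0.5 * log_var).exp()
        return self.dec(z_c), mu, log_var

def plugin_loss(model, v_y, beta=1.0):
    """Eq. (3): reconstruction in the *global latent space* plus |KL - beta|."""
    v_hat, mu, log_var = model(v_y)
    recon = ((v_hat - v_y) ** 2).sum(dim=-1).mean()   # Gaussian decoder assumption
    kl = (-0.5 * (1 + log_var - mu.pow(2) - log_var.exp())).sum(dim=-1).mean()
    return recon + (kl - beta).abs()

@torch.no_grad()
def generate(plugin, global_decoder, n=16, d_c=20):
    """Conditional generation: z_c ~ N(0, I) -> plug-in decoder -> z_g,
    then the frozen pre-trained global decoder (dec_g) turns z_g into text."""
    z_c = torch.randn(n, d_c)
    z_g = plugin.dec(z_c)
    return global_decoder(z_g)
```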
To enhance the diversity of generated text, we introduce an extra constant term β to control the amount

[Figure: overview of the PPVAE framework — panels (a) and (b) illustrate the reconstruction training of PRETRAINVAE and PLUGINVAE, and panel (c) illustrates sampling from the conditional latent space and mapping it into the global latent space to generate conditional text.]
t+79Azfi3zhpu9DWAxcO59zLvf4CaNS2fa3UVpZXVvfKG+aW9s7u3vW/kFHxqnApI1jFouejyRhlJO2oqRXiIinxGuv74svC7t0RIGvMbNUnIEJDTkOKkdKSZx3VXEVZQLJebhKOvQx7NDc9q2rX7SngMnHmpNqsuJX7h4+vlmd9ukGM04hwhRmSsu/YiRpkSCiKGclN5UkQXiMhqSvKUcRkYNsen4Oa1oJYBgLXVzBqfp7IkORlJPI150 RUiO56BXif14/VeH5IKM8SZX+bYoTBlUMSygAEVBCs20QRhQfWtEI+QFjpxIoQnMWXl0mnUXdO641rp9q8ADOUwTGogBPgDPQBFegBdoAgw8gmfwYtwZT8ar8TZrLRnzmUPwB8b7D1TQmMc=</latexit> decci <latexit sha1_base64="x9Qs2J4s93QtIYuEKiw6N1/Ei0=">AB/nicbVDLSsNAFJ3UV42vqLhyE1oKrkpSF7osunFZwT6gKWEyuWmHTh7MTIQSAm79Atd uXCji1r1/4Eb8GydtF9p64MLhnHu59x4vYVRIy/rWSiura+sb5U19a3tnd8/YP+iIOUE2iRmMe95WACjEbQlQx6CQcegy63viy8Lu3wAWNoxs5SWAQ4mFEA0qwVJrHNUcSZkPWS/XfSBuRlya65RterWFOYysek2qw4lfuHj6+Wa3w6fkzSECJGBaib1uJHGSYS0oY5LqTCkgwGeMh9BWNcAhikE3Pz82aUnwziLmqSJp T9fdEhkMhJqGnOkMsR2LRK8T/vH4qg/NBRqMklRCR2aIgZaMzSIL06ciGQTRTDhVN1qkhHmEiVWBGCvfjyMuk06vZpvXFtV5sXaIYyOkYVdIJsdIa6Aq1UBsRlKFH9IxetDvtSXvV3matJW0+c4j+QHv/AUVXmL0=</latexit> p(zci) <latexit sha1_base64="3PDwN796q0fKag9k3F708b7RmeE=">AB8XicbZDLSsNAFIZP6q3W9Wlm6FqAglqQtdBt24rGAv2IYwmU7boZ NJmJkIMfQtunGhiFvfxl3fxuloa0/DHz8/znMOSeIOVPatqdWbmNza3snv1vY2z84PCoenzRVlEhCGyTikWwHWFHOBG1opjltx5LiMOC0FYzuZnrmUrFIvGo05h6IR4I1mcEa2M9xZUXPyM+G1/4xbJdtedC6+AsoeyWupeTqZvW/eJ3txeRJKRCE46V6jh2rL0MS80Ip+NCN1E0xmSEB7RjUO CQKi+bTzxG58bpoX4kzRMazd3fHRkOlUrDwFSGWA/VajYz/8s6ie7feBkTcaKpIuP+glHOkKz9VGPSUo0Tw1gIpmZFZEhlphoc6SCOYKzuvI6NGtV56pae3DK7i0slIczKEFHLgGF+6hDg0gIGACb/BuKevV+rA+F6U5a9lzCn9kf0AVOiTqw=</latexit> decci <latexit sha1_base64="x9Qs2J4s93QtIYuEKiw6N1/Ei0=">AB/nicbVDLSsNAFJ3UV42vqLhyE1oKrkpSF7osunFZwT6gKWEyuWmHTh7MTIQSAm79Atd uXCji1r1/4Eb8GydtF9p64MLhnHu59x4vYVRIy/rWSiura+sb5U19a3tnd8/YP+iIOUE2iRmMe95WACjEbQlQx6CQcegy63viy8Lu3wAWNoxs5SWAQ4mFEA0qwVJrHNUcSZkPWS/XfSBuRlya65RterWFOYysek2qw4lfuHj6+Wa3w6fkzSECJGBaib1uJHGSYS0oY5LqTCkgwGeMh9BWNcAhikE3Pz82aUnwziLmqSJp T9fdEhkMhJqGnOkMsR2LRK8T/vH4qg/NBRqMklRCR2aIgZaMzSIL06ciGQTRTDhVN1qkhHmEiVWBGCvfjyMuk06vZpvXFtV5sXaIYyOkYVdIJsdIa6Aq1UBsRlKFH9IxetDvtSXvV3matJW0+c4j+QHv/AUVXmL0=</latexit> Sample zci <latexit sha1_base64="cjqCroEtOxvITpXeDBKIqUV8r1I=">AB+3icbVC7TsMwFHV4lvAKZWSxWlViqpIy wFjBwlgk+pCaKHIcp7XqOJHtIEqUD+AL2FgYQIiVL+APWB/g/sYoOVIVzo6517de0+QMiqVbX8bK6tr6xubpS1ze2d3b986KHdkglM2jhiegFSBJGOWkrqhjpYKgOGCkG4wuJn73hghJE36txinxYjTgNKIYKS35VrnmKspCkv cK87PsU8L36radXsKuEycOak2K27l/uHjq+Vbn26Y4CwmXGpOw7dq8HAlFMSOF6WaSpAiP0ID0NeUoJtLp7cXsKaVEaJ0MUVnKq/J3IUSzmOA90ZIzWUi95E/M/rZyo683LK0wRjmeLoxBlcBJEDCkgmDFxpogLKi+FeIhE grHZepQ3AWX14mnUbdOak3rpxq8xzMUAJHoAKOgQNOQRNcghZoAwxuwSN4Bi9GYTwZr8brHXFmM8cgj8w3n8ApAaX4w=</latexit> z 0 ci <latexit sha1_base64="EU0qrVJAYXl0j9NoXSutaekQZc=">AB/3icbVC7TsNAEDyHVzAvByQamhNRBDSR HQoI2gog4STSLGxzudzcsr5obszUjAu+BUaChCi5Tfo+BFqLo8CEkZaTSzq90dP2VUSNP80kpLyura+V1fWNza3vHqOy2RZJxTGycsIR3fSQIozGxJZWMdFNOUOQz0vGHl2O/c0e4oEl8I0cpcSPUj2lIMZJK8oz9miMpC0jeLf T72/yo8HLs0cIzqmbdnAuEmtGqs0Tp+J+207LMz6dIMFZRGKJGRKiZ5mpdHPEJcWMFLqTCZIiPER90lM0RhERbj65v4A1pQwTLiqWMKJ+nsiR5EQo8hXnRGSAzHvjcX/vF4mw3M3p3GaSRLj6aIwY1AmcBwGDCgnWLKRIghzqm6Fe IA4wlJFpqsQrPmXF0m7UbdO641rq9q8AFOUwQE4BMfAmegCa5AC9gAgwfwBF7Aq/aoPWtv2vu0taTNZvbAH2gfP5brmMg=</latexit> decg <latexit sha1_base64="9/RuNtyzOuzMzRByHSV+BYq4Yp8=">AB7HicbZA7TsNAEIbHPEPCI0BJYxGQqCI7FBG0FAGCSeRkihar8fJKu1tbuOFk5Aw0FCNFyB C7ADeg4CNRsHgUk/NJKn/5/RjszfsKZ0o7za2srq1vbOa28oXtnd294v5BXcWpOjRmMey6ROFnAn0NMcm4lEvkcG/7gepI3higVi8WdHiXYiUhPsJBRo3lBUi7vW6x5JSdqexlcOdQqp58vb0PC9+1bvGjHcQ0jVBoyolSLdJdCcjUjPKcZxvpwoTQgekhy2DgkSoOtl02LF9apzADmNpntD21P3dkZFIqVHkm8qI6L5azCbmf1kr1eFlJ2M 
iSTUKOvsoTLmtY3uyuR0wiVTzkQFCJTOz2rRPJKHa3CdvjuAurwM9UrZPS9Xbt1S9QpmysERHMZuHABVbiBGnhAgcE9PMKTJawH69l6mZWuWPOeQ/gj6/UHfjyS6A=</latexit> ˆyi <latexit sha1_base64="o5+brFuadtlKLJ6K4B/AeEMJY=">AB/XicbVDJSgNBEO1xj XEbl5sijSHgKczEgx6DXjwmYBZIhqGnp5M06VnorhHGIXjyP7x4UMSr+Q5vfoM/YWc5aOKDgsd7Vd1Vz4sFV2BZX8bS8srq2npuI7+5tb2za+7tN1SUSMrqNBKRbHlEMcFDVgcOgrViyUjgCd b0Btdjv3nHpOJReAtpzJyA9ELe5ZSAlzsNgBLnyWtYb5Tp9Alg5d7poFq2RNgBeJPSOFyvGo9v14Mq65mfHj2gSsBCoIEq1bSsGJyMSOBVMv5woFhM6ID3W1jQkAVNONtl+iIta8XE3krp CwBP190RGAqXSwNOdAYG+mvfG4n9eO4HupZPxME6AhXT6UTcRGCI8jgL7XDIKItWEUMn1rpj2iSQUdGB5HYI9f/IiaZRL9nmpXLMLlSs0RQ4doVN0hmx0gSroBlVRHVF0j57QC3o1Hoxn4814 n7YuGbOZA/QHxscPWgeY3g=</latexit> zci <latexit sha1_base64="YuECGoO3HC8+PnKU0hLqiNnXTQ=">AB7nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mqoMei F48V7Ae0IWy203bpZhN2N0IN/RFePCji1d/jzX/jts1BWx8MPN6bYWZemAiujet+O4W19Y3NreJ2aWd3b/+gfHjU0nGqGDZLGLVCalGwSU2DTcCO4lCGoUC2+H4dua3H1FpHsHM0nQj+hQ8gFn1Fip/RkLODToFxq+4cZJV4OalAjkZQ/u r1Y5ZGKA0TVOu5ybGz6gynAmclnqpxoSyMR1i1JI9R+Nj93Ss6s0ieDWNmShszV3xMZjbSeRKHtjKgZ6WVvJv7ndVMzuPYzLpPUoGSLRYNUEBOT2e+kzxUyIyaWUKa4vZWwEVWUGZtQyYbgLb+8Slq1qndRrd1fVuo3eRxFOIFTOAcPrqAOd 9CAJjAYwzO8wpuTOC/Ou/OxaC04+cwx/IHz+QOeS4/A</latexit> yi <latexit sha1_base64="Kr9zKiAScfd9h9AH I+C+F2nCG10=">AB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mqoMeiF48V7Qe0oWy2k3bpZhN2N0 Io/QlePCji1V/kzX/jts1BWx8MPN6bYWZekAiujet+O4W19Y3NreJ2aWd3b/+gfHjU0nGqGDZL GLVCahGwSU2DTcCO4lCGgUC28H4dua3n1BpHstHkyXoR3QoecgZNVZ6yPq8X64VXcOskq8nFQg R6Nf/uoNYpZGKA0TVOu5ybGn1BlOBM4LfVSjQlYzrErqWSRqj9yfzUKTmzyoCEsbIlDZmrvycm NI6iwLbGVEz0sveTPzP6YmvPYnXCapQckWi8JUEBOT2d9kwBUyIzJLKFPc3krYiCrKjE2nZEP wl9eJa1a1buo1u4vK/WbPI4inMApnIMHV1CHO2hAExgM4Rle4c0Rzovz7nwsWgtOPnMf+B8/g BjZI3d</latexit> Zg <latexit sha1_base64=" arNq5Vu7SJy862IhHFbjbz98opA=">AB6nicbVBNS8 NAEJ34WetX1aOXxSJ4KkV9Fj04rGi/cA2lM12ki7dbM LuRilP8GLB0W8+ou8+W/ctjlo64OBx3szMwLUsG1c d1vZ2V1bX1js7BV3N7Z3dsvHRw2dZIphg2WiES1A6pRc IkNw43AdqQxoHAVjC8mfqtJ1SaJ/LBjFL0YxpJHnJGj ZXuH3tRr1R2K+4MZJl4OSlDjnqv9NXtJyLURomqNYd z02NP6bKcCZwUuxmGlPKhjTCjqWSxqj98ezUCTm1Sp+E ibIlDZmpvyfGNZ6FAe2M6ZmoBe9qfif18lMeOWPuUwz g5LNF4WZICYh079JnytkRowsoUxeythA6oMzadog3 BW3x5mTSrFe+8Ur27KNeu8zgKcAwncAYeXEINbqEODWA QwTO8wpsjnBfn3fmYt64+cwR/IHz+QMxIo28</latex it> ˜vyi <latexit sha1_base64="TcsKtXuTdRK1Z1nu8UA696K/Tng=">AB+HicbVBNS8NAEN3Ur1o/GvXoJVgETyWpgh6 LXjxWsB/QhrDZTNqlm03Y3RiyC/x4kERr/4Ub/4bt20O2vpg4PHeDPz/IRqWz726hsbG5t71R3a3v7B4d18+i4J+NUEOiSmMVi4GMJjHLoKqoYDBIBOPIZ9P3p3dzvz0BIGvNHlSXgRnjMaUgJVlryzPpIURZAPiu8PNo4ZkNu2kvYK0T pyQNVKLjmV+jICZpBFwRhqUcOnai3BwLRQmDojZKJSYTPEYhpyHIF08XhXWulcAKY6GLK2uh/p7IcSRlFvm6M8JqIle9ufifN0xVeOPmlCepAk6Wi8KUWSq25ilYARVAFMs0wURQfatFJlhgonRWNR2Cs/ryOum1ms5ls/Vw1WjflnFU0 Sk6QxfIQdeoje5RB3URQSl6Rq/ozXgyXox342PZWjHKmRP0B8bnD6NYk7k=</latexit> Zg <latexit sha1_base64="arNq5Vu7SJy862IhHFbjbz 98opA=">AB6nicbVBNS8NAEJ34WetX1aOXxSJ4KkV9Fj04rGi/cA2lM12ki7dbMLuRilP8GLB0W8+ou8+W/ ctjlo64OBx3szMwLUsG1cd1vZ2V1bX1js7BV3N7Z3dsvHRw2dZIphg2WiES1A6pRcIkNw43AdqQxoHAVjC8mfq tJ1SaJ/LBjFL0YxpJHnJGjZXuH3tRr1R2K+4MZJl4OSlDjnqv9NXtJyLURomqNYdz02NP6bKcCZwUuxmGlPKhj TCjqWSxqj98ezUCTm1Sp+EibIlDZmpvyfGNZ6FAe2M6ZmoBe9qfif18lMeOWPuUwzg5LNF4WZICYh079JnytkR owsoUxeythA6oMzadog3BW3x5mTSrFe+8Ur27KNeu8zgKcAwncAYeXEINbqEODWAQwTO8wpsjnBfn3fmYt64+ cwR/IHz+QMxIo28</latexit> Zg <latexit sha1_base64="arNq5Vu7SJy862IhHFbjbz98opA=">AB6nicbVBNS8NAEJ34WetX1aOXxSJ4KkV 9Fj04rGi/cA2lM12ki7dbMLuRilP8GLB0W8+ou8+W/ctjlo64OBx3szMwLUsG1cd1vZ2V1bX1js7BV3N7Z3dsvHRw2dZIphg2WiES1A6pRcIkNw43AdqQxoHAVjC8mfqtJ1SaJ/LBjFL0YxpJHnJGjZXuH3tRr1R2K+4MZJl4OS 
lDjnqv9NXtJyLURomqNYdz02NP6bKcCZwUuxmGlPKhjTCjqWSxqj98ezUCTm1Sp+EibIlDZmpvyfGNZ6FAe2M6ZmoBe9qfif18lMeOWPuUwzg5LNF4WZICYh079JnytkRowsoUxeythA6oMzadog3BW3x5mTSrFe+8Ur27KNeu8 zgKcAwncAYeXEINbqEODWAQwTO8wpsjnBfn3fmYt64+cwR/IHz+QMxIo28</latexit> Zg <latexit sha1_base64="arNq5Vu7SJy862IhHFbjbz98opA=">AB6nicbVBNS8NAEJ34WetX1aOXxSJ4KkV 9Fj04rGi/cA2lM12ki7dbMLuRilP8GLB0W8+ou8+W/ctjlo64OBx3szMwLUsG1cd1vZ2V1bX1js7BV3N7Z3dsvHRw2dZIphg2WiES1A6pRcIkNw43AdqQxoHAVjC8mfqtJ1SaJ/LBjFL0YxpJHnJGjZXuH3tRr1R2K+4MZJl4OS lDjnqv9NXtJyLURomqNYdz02NP6bKcCZwUuxmGlPKhjTCjqWSxqj98ezUCTm1Sp+EibIlDZmpvyfGNZ6FAe2M6ZmoBe9qfif18lMeOWPuUwzg5LNF4WZICYh079JnytkRowsoUxeythA6oMzadog3BW3x5mTSrFe+8Ur27KNeu8 zgKcAwncAYeXEINbqEODWAQwTO8wpsjnBfn3fmYt64+cwR/IHz+QMxIo28</latexit> encg <latexit sha1_base64="Li9nok9 2h2Eq6yhlwfhTUGRPrA=">AB7HicbVC7SgNBFL3rMyY+opY2g1GwCru x0DJoYxnBTQLJEmYns8mQmdlZjYQlnyDjYUitn6CP+Af2PkhWjt5FJp4 MLhnHu59w4Uwb1/10VlbX1jc2c1v5wvbO7l5x/6Cu41QR6pOYx6oZYk 05k9Q3zHDaTBTFIuS0EQ6uJ35jSJVmsbwzo4QGAvckixjBxko+laT6xRL btmdAi0Tb05K1ZOvt/dh4bvWKX60uzFJBZWGcKx1y3MTE2RYGUY4Hefbqa YJgPcoy1LJRZUB9n02DE6tUoXRbGyJQ2aqr8nMiy0HonQdgps+nrRm4j /ea3URJdBxmSGhtrtihKOTIxmiRHXaYoMXxkCSaK2VsR6WOFibH/ydsne IuRl0m9UvbOy5Vbr1S9ghlycATHcAYeXEAVbqAGPhBgcA+P8ORI58F5dl5 mrSvOfOYQ/sB5/QGNg5Ly</latexit> encg <latexit sha1_base64="Li9nok92h2Eq6yhlwfhTUGRPrA=">AB7H icbVC7SgNBFL3rMyY+opY2g1GwCrux0DJoYxnBTQLJEmYns8mQmdlZjYQlnyDjYUitn6CP+Af2PkhWjt5FJp4MLhnHu59w4Uwb1/10VlbX1jc2c1 v5wvbO7l5x/6Cu41QR6pOYx6oZYk05k9Q3zHDaTBTFIuS0EQ6uJ35jSJVmsbwzo4QGAvckixjBxko+laT6xRLbtmdAi0Tb05K1ZOvt/dh4bvWKX60u zFJBZWGcKx1y3MTE2RYGUY4HefbqaYJgPcoy1LJRZUB9n02DE6tUoXRbGyJQ2aqr8nMiy0HonQdgps+nrRm4j/ea3URJdBxmSGhtrtihKOTIxmiRH XaYoMXxkCSaK2VsR6WOFibH/ydsneIuRl0m9UvbOy5Vbr1S9ghlycATHcAYeXEAVbqAGPhBgcA+P8ORI58F5dl5mrSvOfOYQ/sB5/QGNg5Ly</latexi t> Figure 1: The whole workflow of our proposed framework. of encoded information in VAE (Dupont, 2018; Chen et al., 2018; Kim and Mnih, 2018). By setting β to an appropriate value, PLUGINVAE could extract compact conditional information without sacrificing the fluency or accuracy. Although we can already generate conditional text under a single condition by Equation 3, it is possible to even further improve the conditionality by introducing negative samples. We construct the negative samples y ′ i from Y ′ i and encode them: Y ′ i = Y −Yi v ′ yi = encg(y ′ i) (4) Thus, the loss function of PLUGINVAE with negative samples is defined as: LPLUGINVAE(vyi, v ′ yi) = Lsingle(vyi) −γ Lsingle(v ′ yi) (5) where vyi is a batch of encoded samples under condition ci, and v ′ yi is a batch of encoded negative samples; γ is a hyper-parameter balancing the positive and negative samples. For different tasks, the best setting for γ may vary. Intuitively, the larger the difference between the conditions is, the smaller γ should be. 4.2 Workflow In this section, we provide the details of training and generation procedures. As illustrated in Figure 1, the workflow is composed of three steps. Pre-train once, infer everywhere. First, as shown in Figure 1(a), using the unlabeled corpus X, we pre-train PRETRAINVAE to learn the global latent space Zg by reconstruction with Equation 2. Once pre-trained, the weights of both encg and decg are fixed. As an unsupervised VAE model, PRETRAINVAE is capable of generating diverse but unconditional text. Train it when you need it. Previous methods (Kingma et al., 2014; Hu et al., 2017) learn the joint conditional space by jointly considering all conditions. 
However, once the model is trained, it is not possible to add a new condition without a full retraining. Different from those approaches, PPVAE is totally flexible that allows adding new conditions. Shown in Figure 1(b), once a condition is added, we only need to train a PLUGINVAE specifically for this condition with Equation 3 (or Equation 5, if provided with samples of other conditions). Since PLUGINVAE is textirrelevant and only learns to map between two latent spaces, the training number of parameters is only 0.34% (see Section 6.3) of fine-tuning PRETRAINVAE or retraining other models. Additionally, although we need to train k PLUGINVAE for k conditions, the total number of trained parameters is still much smaller than existing methods (unless k > 1/0.34% ≈294, which is impossible in actual applications). Plus, we can parallel the conditional training to speed up the process easily. Plug it in and generate. Shown in Figure 1(c), once PLUGINVAE for the condition ci is trained, we can plug it into the PPVAE framework and generate text together with PRETRAINVAE. First, we randomly sample a latent variable zci from the prior distribution p(zci) = N(0, 1). Then we use PLUGINVAE’s decoder decci to map zci to the global latent space Zg and obtain z ′ ci: z ′ ci = decci(zci). (6) Since z ′ ci ∈Zg, we can directly use the global decoder decg to generate text: ˆyi = decg(z ′ ci) (7) where ˆyi is the generated text under condition ci. 5 Experimental Settings 5.1 Datasets Following the setting of (Hu et al., 2017), we mainly focus on short text generation (no longer 257 Dataset #Train #Dev #Test Avg-len Yelp 444,101 63,483 126,670 8.93 News Titles 249,043 23,949 20,000 9.85 Table 1: The statistics of Yelp and News Titles. than 15 tokens), which is easier for both automatic and human evaluations. We use Yelp (Shen et al., 2017) and News Titles (Fu et al., 2018) for experiments. Yelp is a collection of restaurant reviews. We use the pre-processed version used in (Shen et al., 2017), where two polarity sentiment labels are provided. For News Titles, we choose the titles belong to Business, Entertainment and Health categories for our experiments. Both Yelp and News Titles are datasets with relatively short text. We filter out text longer than 15 words, then choose the top 8,900 and 10,000 words as the vocabulary for Yelp and News Titles, respectively. The statistics of the two datasets are listed in Table 1. We discard the labels in the original training and validation splits. We use the original training split as the unlabeled corpus; the validation split to select the best unsupervised models, and the test split as the labeled conditional text. Based on the Yelp dataset, we define two tasks: (1) Sentiment. This task aims at generating text samples, either positive or negative. The ratio of positive/negative text in Yelp is roughly 0.6 : 0.4. We randomly sample 200 positive and 200 negative text for supervised training. (2) Length. This task aims at generating text samples with a specific length. We define (len ≤3) as short text, (len ≥12) as long text and (3 < len < 12) as medium text. We respectively sample 200 text for short, medium, and long text for supervised training. Based on the News Titles dataset, we define the categorical text generation task called Topic. This task aims at generating text samples on a certain topic. The ratio of business/health/entertainment in News Title is 0.38 : 0.15 : 0.47, which is more imbalanced than Yelp. 
We randomly sample 200 text for each category for supervised learning. 5.2 Baselines We use two semi-supervised methods, SVAE (Kingma et al., 2014) and CTRL-GEN (Hu et al., 2017) as our baselines. S-VAE incorporates a classifier to provide conditional distribution for unlabeled data. Note that S-VAE is originally proposed for image generation but adapted to text generation as a baseline by Hu et al. (2017). CTRL-GEN further exploits several regularization terms to enhance the disentanglement between the structured code and the unstructured code. For a fair comparison, both the text encoder and decoder of the two baselines are the same as PRETRAINVAE. Furthermore, the baseline methods also exploit the same unlabeled corpus X and labeled corpus Y as described in the original papers. 5.3 Models PPVAE is a model-agnostic approach, which means that both the encoders and encoders of PRETRAINVAE and PLUGINVAE can be modified to work under different settings. Here, we describe the model architecture used in our experiments. PRETRAINVAE. For the encoder, we use a onelayer Bidirectional Gated Recurrent Unit (Bi-GRU) with 256 hidden units in each direction as its encoder. Two linear Fully-Connected (FC) layers are used for re-parameteristic trick (Kingma and Welling, 2014). For the decoder, we use a Transformer (Vaswani et al., 2017) (3 layers, 8 heads). Additionally, we add extra positional embedding after each block, and the linearly transformed encoded vector is provided as input for each block (Brock et al., 2019). For a fair comparison, we use the same encoder-decoder architecture for both S-VAE and CTRL-GEN. PLUGINVAE. The encoder is a two-layer FC network of 64/32 hidden units taking input in dg dimensions with an additional linear output layer of dc units. The decoder is a two-layer FC network of 32/64 hidden units taking the latent variable in dc dimensions as input with a linear output layer of dg units. The activation function used in the FC networks is LeakyRelu (Maas et al., 2013). 5.4 Hyper-Parameters PRETRAINVAE. The size of latent space dg is set to 128. The word embedding is in 256 dimensions and randomly initialized. The output softmax matrix is tied with the embedding layer. For the adversarial classifier, we adopt two 128D hidden FC layers with LeakyRelu activation and one 1D output linear layer without bias. The balance coefficient λ is 20 for Yelp and 15 for News Titles. We train the WAE-GAN with Wasserstein Divergence (Wu et al., 2018) to smooth the training process. The coefficient k and power p of Wasserstein Divergence 258 Task Conditions Method Accuracy Log-Variance Distinct-1 Distinct-2 (↑better) (↓better) (↑better) (↑better) Sentiment {Positive, Negative} S-VAE 0.7194 -5.38 0.0198 0.2520 CTRL-GEN 0.6998 -2.78 0.0026 0.0164 PPVAE-single (ours) 0.7832 -11.12 0.0350 0.2568 PPVAE (ours) 0.8484 -11.90 0.0356 0.2627 Length {Short, Medium, Long} S-VAE 0.8598 -4.82 0.0187 0.1795 CTRL-GEN 0.3957 -1.96 0.0021 0.0146 PPVAE-single (ours) 0.9640 -6.96 0.0375 0.2549 PPVAE (ours) 0.9722 -7.64 0.0372 0.2538 Topic {Business, Health, Entmt.} S-VAE 0.6930 -2.32 0.0360 0.2162 CTRL-GEN 0.5335 -3.39 0.0107 0.0431 PPVAE-single (ours) 0.7725 -3.82 0.0497 0.3152 PPVAE (ours) 0.8024 -3.68 0.0478 0.3056 Table 2: The results of conditional text generation tasks. We use boldface and underline to indicate the best and the second-best performance. PPVAE-single indicates PPVAE with a PLUGINVAE trained under the single condition setting, as described in Section 5.5. 
We show the natural logarithm (ln) of variance, since the original scale is too small for demonstration. are set to 2 and 6, respectively. During pre-training, the batch size is set to 512. Adam (Kingma and Ba, 2015) with beta1 = 0 is used as the optimizer. The learning rate is set to 5 × 10−4. PLUGINVAE. We set the size of latent space dc = 20. γ is set to 0.1 for sentiment tasks, 0.05 for categorical tasks, and 3 × 10−3 for length tasks. The batch size is set to 128. Adam (Kingma and Ba, 2015) with beta1 = 0.5 is used as the optimizer, learning rate is 3 × 10−4 for 20K iterations. β linearly increases from 0 to 5 in first 10K iterations. 5.5 Evaluation Settings Metrics. We evaluate the results with two metrics, accuracy and diversity. For accuracy, we train a sentiment classifier and categorical classifier (Kim, 2014), which could achieve accuracy of 90% and 97% on the validation set, respectively. The accuracy of length task can be directly calculated with the word count of generated text. Plus, a model that performs well on only one condition but poorly on others is not practically useful. Thus, to measure the robustness among conditions, we calculate the variance of accuracy under all conditions in a task. For diversity, we adopt Distinct-1 and Distinct2 (Li et al., 2016) metrics. Distinct-1/Distinct-2 are the ratios of unique 1-gram/2-gram, respectively. A higher value indicates better diversity. For all tasks and models, we randomly generate 10K text for each condition by greedy decoding and report the averaged results. Single Condition Generation. In a real-world scenario, the full set of conditions is not always available. When provided only a labeled set of target text (i.e., k = 1), it is not possible to learn the joint conditional space for S-VAE and CTRL-GEN any more. However, PPVAE can deal with that by training without negative samples using Equation 3. 6 Experimental Results 6.1 Overall Comparisons Accuracy. The results of conditional text generation are listed in Table 2. On sentiment task, our model outperforms CTRL-GEN and S-VAE by 0.1486 and 0.129, respectively. On length task, the accuracy of our model exceeds 95%, dramatically outperforming S-VAE and CTRL-GEN by 0.1124 and 0.5765 on accuracy. Notably, the performance of CTRL-GEN (0.3957) is extremely low, demonstrating the limitation of its generatordiscriminator (Goodfellow et al., 2014) training process and its token-based discriminator, which is unable to discriminate text with different lengths. On topic task, our model scores higher on accuracy than S-VAE and CTRL-GEN by 0.1094 and 0.2689, respectively. On all three tasks, PPVAE-single performs slightly poorer than PPVAE with negative samples, verifying the effectiveness of negative sampling. Furthermore, our models achieve the lowest variance on all three tasks, indicating that PPVAE is robust and achieves a good balance among conditions. Diversity. Diversity is a long-lasting issue lying in the field of generative models. Recent works (Wang et al., 2017; Razavi et al., 2019) reveal the capability of the diverse content generation with 259 Task Method Fluency Conditionality Sentiment S-VAE 3.10 3.04 CTRL-GEN 3.65 3.23 PPVAE-single 3.54 3.23 PPVAE 3.30 3.29 Length S-VAE 3.64 0.8598 CTRL-GEN 2.53 0.3597 PPVAE-single 3.43 0.9640 PPVAE 3.50 0.9722 Topic S-VAE 3.31 2.78 CTRL-GEN 3.09 2.51 PPVAE-single 3.38 3.33 PPVAE 3.45 3.57 Table 3: Human evaluation results. Note that since the length task is objectively defined, we copy the accuracy results from Table 2. 
VAE-based methods. These works also conclude that VAE-based methods have better output diversity than GAN-based models. Our experimental results support this conclusion well. Particularly, CTRL-GEN suffers poor diversity, which indicates the generation of “dull text” (Li et al., 2016). Both S-VAE and PPVAE show prominently better diversity than GAN-based model, CTRL-GEN. Note that the relation between the usage of negative examples and text diversity of PPVAE is not statistically prominent (p > 0.05). 6.2 Human Evaluation We conduct human annotations as a complementary evaluation beyond automatic metrics. Specifically, eight individual judges are asked to rate over 200 conditional samples generated from each model and each condition. That is, for each model, a total of 4, 800 text samples are annotated. A judge needs to rate fluency and conditionality in the standard 1 to 5 scale. Fluency measures whether the text samples are natural and fluent as real (i.e., humanwritten) ones. Conditionality indicates whether the generated text adheres to the given condition. Shown in Table 3, PPVAE achieves the best conditionality in both automatic and human evaluations on all three tasks. Meanwhile, PPVAE retains a satisfying fluency on sentiment and length tasks and obtains the best fluency on the topic task. 6.3 Training Costs To measure the efficiency of proposed methods, we report the training time and the number of parameters of S-VAE, CTRL-GEN and PPVAE in Table 4. We train the models on a single Nvidia Method # Training Params Training Time S-VAE 6.5M 1.4h CTRL-GEN 8.5M 3.5h PRETRAINVAE 6.5M 1.2h (only once) PLUGINVAE 22K 64s Table 4: Average numbers of parameters and time costs for training. Task Method Acc. Distinct-1/2 Sentiment Fine-tuning 0.5319 0.0281 / 0.2845 PPVAE-single 0.7832 0.0350 / 0.2568 PPVAE 0.8484 0.0356 / 0.2627 Length Fine-tuning 0.9456 0.0340 / 0.2923 PPVAE-single 0.9640 0.0375 / 0.2549 PPVAE 0.9722 0.0372 / 0.2538 Table 5: The comparisons of fine-tuned PRETRAINVAE with the full PPVAE on the two tasks of Yelp dataset. β Accuracy Distinct-1 Distinct-2 0.0 1.0000 0.0001 0.0001 2.0 0.9938 0.0256 0.1629 5.0 0.9908 0.0301 0.2112 10.0 0.9875 0.0324 0.2370 Table 6: The impact of different β on long text generation task. GTX 1080 GPU and report the training time until the convergence of each model. PRETRAINVAE has the same size of S-VAE but only needs to be trained once and does not require a full retraining when a new condition added. Also, PLUGINVAE, which learns to transform between the global latent space and the conditional latent space, only has 22K parameters and can be trained within about one minute. 6.4 PLUGINVAE vs. Fine-Tuning As a natural baseline, the conditional generation can also be done by directly fine-tuning PRETRAINVAE on each condition. Shown in Table 5, despite the fact that it is not computationally efficient and saving the full weights is undesirable for industrial applications when the model is large (e.g., GPT2 (Radford et al., 2019)), both PLUGINVAE trained with and without negative samples significantly outperform a directly fine-tuned PRETRAINVAE on accuracy. 260 Task Condition Generated Examples Sentiment Positive The services are friendly, fast. Negative The egg drop soup was old and tasted like feet. Length Short Great pricing! Medium I refused to work with you and this place. Long And this made me feel uncomfortable and the prices aren’t right. 
Topic Business FDA Approves New Case of E-cigarettes Health Ebola : Virus Spreads in the US Entertainment Surprise Birthday: The Guys of the Cast of Disney Parks Table 7: Some conditional examples generated by PPVAE for qualitative analysis (cherry-picked). Generated Examples S-VAE Chinese State Media: 17 Miners Trapped Underground Huge Increases in Obamacare Premiums Are Coming Herbalife Ltd. (HLF) Probe Earns Bill Ackman Back Millions CTRL-GEN Pfizer’s Astrazeneca’s Astrazeneca Bid for Astrazeneca FDA’s New Drug to Treat Migraines Pfizer to Acquire Seragon in $42.9B PPVAE Despite Highway Crisis, Many Worries Remain on US Oil Exports Lululemon: Digital Sales Surge in 1Q Net Income, Revenue Crisis of Market: US Stocks Climb; Nike Jumps Table 8: Some generated conditional examples under condition Business (randomly sampled). Failed Examples Grammatical Eat the service! In addition, this location sucks it is. Star Wars 7 will include US production on set Conditional (Negative) I was shocked that this is what I needed. (Long) Are you actually drunk outside? (Business) Michael Jackson’s New Album ‘Xscape’ Table 9: Some failed examples (cherry-picked). 6.5 Effect of Hyper-parameter β Since β is an important hyper-parameter for PPVAE, we test β ∈{0, 2, 5, 10} on the long text generation task. From the results in Table 6, we find that β controls the balance between diversity and accuracy. Specifically, when β is too large, more diverse samples could be generated, but the accuracy may be sacrificed slightly. On the contrary, when β is too small, the accuracy could climb to a higher value, but meanwhile, the diversity drops drastically. Empirically, we find that β = 5 is an appropriate value for all tasks. 7 Case Study We select some generated conditional text of each condition in Table 7. As shown in the table, our proposed PPVAE is capable of generating realistic conditional text. Also, shown in Table 8, on topic task, we randomly select some examples from the output of each model. The output of S-VAE seems to be diverse but is poorly conditioned. CTRLGEN suffers an obvious diversity issue, which makes it repeatedly output similar text. For the error analysis, we pick some failed examples of PPVAE in Table 9. We categorize the errors into two main classes. (1) Grammatical. Grammatical problems are common in NLG. As we analyze, this kind of errors can be mitigated with a deeper encoder and decoder with even more unlabeled data for pre-training. (2) Conditional. Conditional errors are of great interest to us since they lie in our focus. We choose three typical errors and list them in Table 9. In the first sentence, “shocked” is a subtle word which may indicate either positive or negative sentiment depending on the context. Thus, with a greedy decoding strategy, it may be incorrectly decoded into the other polarity. We believe this kind of errors could be fixed with more elaborate decoding strategies (e.g., Weighted Decoding (See et al., 2019)). In the second sentence, the length is limited by the nature of an interrogative sentence. As a linguistic fact, an interrogative sentence often has fewer words than a declarative sentence. In the third sentence, we remark an overlapping problem between classes. Some topics (e.g., music album) may appear in both business and entertainment news. In some way, these samples can also be considered as correctly conditioned ones, which highlights the importance of a fine-grained human evaluation on this task. 
261 8 Conclusion In this paper, we present a novel PPVAE framework for flexible conditional text generation, which decouples the text generation module from the condition representation module. The extensive experiments demonstrate the superiority of the proposed PPVAE against the existing alternatives on conditionality and diversity while allowing new conditions to be added without a full retraining. Acknowledgments We are grateful for the insightful comments from the anonymous reviewers. We would like to especially thank Daya Guo for his help and suggestions. This research was supported by National Natural Science Foundation of China (No. 61872278). Chenliang Li is the corresponding author. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR. Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew M. Dai, Rafal J´ozefowicz, and Samy Bengio. 2016. Generating sentences from a continuous space. In CoNLL. Andrew Brock, Jeff Donahue, and Karen Simonyan. 2019. Large scale GAN training for high fidelity natural image synthesis. In ICLR. Tian Qi Chen, Xuechen Li, Roger B. Grosse, and David K. Duvenaud. 2018. Isolating sources of disentanglement in variational autoencoders. In NeurIPS. Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. 2016. Infogan: Interpretable representation learning by information maximizing generative adversarial nets. In NeurIPS. Emilien Dupont. 2018. Learning disentangled joint continuous and discrete representations. In NeurIPS. Jesse H. Engel, Matthew Hoffman, and Adam Roberts. 2018. Latent constraints: Learning to generate conditionally from unconditional generative models. In ICLR. Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hierarchical neural story generation. In ACL. Jessica Ficler and Yoav Goldberg. 2017. Controlling linguistic style aspects in neural language generation. CoRR, abs/1707.02633. Zhenxin Fu, Xiaoye Tan, Nanyun Peng, Dongyan Zhao, and Rui Yan. 2018. Style transfer in text: Exploration and evaluation. In AAAI. Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C. Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In NeurIPS. Daya Guo, Yibo Sun, Duyu Tang, Nan Duan, Jian Yin, Hong Chi, James Cao, Peng Chen, and Ming Zhou. 2018. Question generation from SQL queries improves neural semantic parsing. In EMNLP. Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzkebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for NLP. In ICML. Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P. Xing. 2017. Toward controlled generation of text. In ICML. Nitish Shirish Keskar, Bryan McCann, Lav R. Varshney, Caiming Xiong, and Richard Socher. 2019. CTRL: A conditional transformer language model for controllable generation. CoRR, abs/1909.05858. Yuta Kikuchi, Graham Neubig, Ryohei Sasano, Hiroya Takamura, and Manabu Okumura. 2016. Controlling output length in neural encoder-decoders. In EMNLP. Hyunjik Kim and Andriy Mnih. 2018. Disentangling by factorising. In ICML. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In EMNLP. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR. Diederik P. Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. 2014. Semi-supervised learning with deep generative models. 
In NeurIPS. Diederik P. Kingma and Max Welling. 2014. Autoencoding variational bayes. In ICLR. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting objective function for neural conversation models. In NAACL-HLT. Andrew L Maas, Awni Y Hannun, and Andrew Y Ng. 2013. Rectifier nonlinearities improve neural network acoustic models. In ICML. Mehdi Mirza and Simon Osindero. 2014. Conditional generative adversarial nets. CoRR, abs/1411.1784. Tong Niu and Mohit Bansal. 2018. Polite dialogue generation without parallel data. Trans. Assoc. Comput. Linguistics, 6:373–389. 262 Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Ali Razavi, A¨aron van den Oord, and Oriol Vinyals. 2019. Generating diverse high-fidelity images with VQ-VAE-2. In NeurIPS. Abigail See, Stephen Roller, Douwe Kiela, and Jason Weston. 2019. What makes a good conversation? how controllable attributes affect human judgments. In NAACL-HLT. Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi S. Jaakkola. 2017. Style transfer from non-parallel text by cross-alignment. In NeurIPS. Kihyuk Sohn, Honglak Lee, and Xinchen Yan. 2015. Learning structured output representation using deep conditional generative models. In NeurIPS. Ilya O. Tolstikhin, Olivier Bousquet, Sylvain Gelly, and Bernhard Sch¨olkopf. 2018. Wasserstein autoencoders. In ICLR. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NeurIPS. Ke Wang and Xiaojun Wan. 2018. Sentigan: Generating sentimental texts via mixture adversarial networks. In IJCAI. Liwei Wang, Alexander G. Schwing, and Svetlana Lazebnik. 2017. Diverse and accurate image description using a variational auto-encoder with an additive gaussian encoding space. In NeurIPS. Jiqing Wu, Zhiwu Huang, Janine Thoma, Dinesh Acharya, and Luc Van Gool. 2018. Wasserstein divergence for gans. In ECCV. Xiaopeng Yang, Xiaowen Lin, Shunda Suo, and Ming Li. 2018. Generating thematic chinese poetry using conditional variational autoencoders with hybrid decoders. In IJCAI. Xiaoyuan Yi, Maosong Sun, Ruoyu Li, and Zonghan Yang. 2018. Chinese poetry generation with a working memory model. In IJCAI. Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. 2017. Seqgan: Sequence generative adversarial nets with policy gradient. In AAAI. Tiancheng Zhao, Ran Zhao, and Maxine Esk´enazi. 2017. Learning discourse-level diversity for neural dialog models using conditional variational autoencoders. In ACL. Wangchunshu Zhou, Tao Ge, Ke Xu, Furu Wei, and Ming Zhou. 2020. Self-adversarial learning with comparative discrimination for text generation. In ICLR.
2020
23
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2557–2568 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2557 Cross-media Structured Common Space for Multimedia Event Extraction Manling Li1∗, Alireza Zareian2∗, Qi Zeng1, Spencer Whitehead1, Di Lu3, Heng Ji1, Shih-Fu Chang2 1University of Illinois at Urbana-Champaign, 2Columbia University 3Dataminr {manling2,hengji}@illinois.edu, {az2407,sc250}@columbia.edu Abstract We introduce a new task, MultiMedia Event Extraction (M2E2), which aims to extract events and their arguments from multimedia documents. We develop the first benchmark and collect a dataset of 245 multimedia news articles with extensively annotated events and arguments.1 We propose a novel method, Weakly Aligned Structured Embedding (WASE), that encodes structured representations of semantic information from textual and visual data into a common embedding space. The structures are aligned across modalities by employing a weakly supervised training strategy, which enables exploiting available resources without explicit cross-media annotation. Compared to unimodal state-of-the-art methods, our approach achieves 4.0% and 9.8% absolute F-score gains on text event argument role labeling and visual event extraction. Compared to stateof-the-art multimedia unstructured representations, we achieve 8.3% and 5.0% absolute Fscore gains on multimedia event extraction and argument role labeling, respectively. By utilizing images, we extract 21.4% more event mentions than traditional text-only methods. 1 Introduction Traditional event extraction methods target a single modality, such as text (Wadden et al., 2019), images (Yatskar et al., 2016) or videos (Ye et al., 2015; Caba Heilbron et al., 2015; Soomro et al., 2012). However, the practice of contemporary journalism (Stephens, 1998) distributes news via multimedia. By randomly sampling 100 multimedia news articles from the Voice of America (VOA), we find that 33% of images in the articles contain visual objects that serve as event arguments and are not mentioned in the text. Take ∗These authors contributed equally to this work. 1Our data and code are available at http://blender. cs.illinois.edu/software/m2e2 Figure 1: An example of Multimedia Event Extraction. An event mention and some event arguments (Agent and Person) are extracted from text, while the vehicle arguments can only be extracted from the image. Figure 1 as an example, we can extract the Agent and Person arguments of the Movement.Transport event from text, but can extract the Vehicle argument only from the image. Nevertheless, event extraction is independently studied in Computer Vision (CV) and Natural Language Processing (NLP), with major differences in task definition, data domain, methodology, and terminology. Motivated by the complementary and holistic nature of multimedia data, we propose MultiMedia Event Extraction (M2E2), a new task that aims to jointly extract events and arguments from multiple modalities. We construct the first benchmark and evaluation dataset for this task, which consists of 245 fully annotated news articles. We propose the first method, Weakly Aligned Structured Embedding (WASE), for extracting events and arguments from multiple modalities. Complex event structures have not been covered by existing multimedia representation methods (Wu et al., 2019b; Faghri et al., 2017; Karpathy and Fei-Fei, 2015), so we propose to learn a structured multimedia embedding space. 
More specifically, given a multimedia document, we represent each image or sentence as a graph, where each node represents an event or entity and each 2558 edge represents an argument role. The node and edge embeddings are represented in a multimedia common semantic space, as they are trained to resolve event co-reference across modalities and to match images with relevant sentences. This enables us to jointly classify events and argument roles from both modalities. A major challenge is the lack of multimedia event argument annotations, which are costly to obtain due to the annotation complexity. Therefore, we propose a weakly supervised framework, which takes advantage of annotated uni-modal corpora to separately learn visual and textual event extraction, and uses an image-caption dataset to align the modalities. We evaluate WASE on the new task of M2E2. Compared to the state-of-the-art uni-modal methods and multimedia flat representations, our method significantly outperforms on both event extraction and argument role labeling tasks in all settings. Moreover, it extracts 21.4% more event mentions than text-only baselines. The training and evaluation are done on heterogeneous data sets from multiple sources, domains and data modalities, demonstrating the scalability and transferability of the proposed model. In summary, this paper makes the following contributions: • We propose a new task, MultiMedia Event Extraction, and construct the first annotated news dataset as a benchmark to support deep analysis of cross-media events. • We develop a weakly supervised training framework, which utilizes existing singlemodal annotated corpora, and enables joint inference without cross-modal annotation. • Our proposed method, WASE, is the first to leverage structured representations and graph-based neural networks for multimedia common space embedding. 2 Task Definition 2.1 Problem Formulation Each input document consists of a set of images M = {m1, m2, . . . } and a set of sentences S = {s1, s2, . . . }. Each sentence s can be represented as a sequence of tokens s = (w1, w2, . . . ), where wi is a token from the document vocabulary W. The input also includes a set of entities T = {t1, t2, . . . } extracted from the document text. An entity is an individually unique object in the real world, such as a person, an organization, a facility, a location, a geopolitical entity, a weapon, or a vehicle. The objective of M2E2is twofold: Event Extraction: Given a multimedia document, extract a set of event mentions, where each event mention e has a type ye and is grounded on a text trigger word w or an image m or both, i.e., e = (ye, {w, m}). Note that for an event, w and m can both exist, which means the visual event mention and the textual event mention refer to the same event. For example in Figure 1, deploy indicates the same Movement.Transport event as the image. We consider the event e as text-only event if it only has textual mention w, and as image-only event if it only contains visual mention m, and as multimedia event if both w and m exist. Argument Extraction: The second task is to extract a set of arguments of event mention e. Each argument a has an argument role type ya, and is grounded on a text entity t or an image object o (represented as a bounding box), or both, a = (ya, {t, o}) . The arguments of visual and textual event mentions are merged if they refer to the same realworld event, as shown in Figure 1. 
2.2 The M2E2 Dataset We define multimedia newsworthy event types by exhaustively mapping between the event ontology in NLP community for the news domain (ACE2) and the event ontology in CV community for general domain (imSitu (Yatskar et al., 2016)). They cover the largest event training resources in each community. Table 1 shows the selected complete intersection, which contains 8 ACE types (i.e., 24% of all ACE types), mapped to 98 imSitu types (i.e., 20% of all imSitu types). We expand the ACE event role set by adding visual arguments from imSitu, such as instrument, bolded in Table 1. This set encompasses 52% ACE events in a news corpus, which indicates that the selected eight types are salient in the news domain. We reuse these existing ontologies because they enable us to train event and argument classifiers for both modalities without requiring joint multimedia event annotation as training data. 2https://catalog.ldc.upenn.edu/ldc2006T06 2559 Event Type Argument Role Movement.Transport (223|53) Agent (46|64), Artifact (179|103), Vehicle (24|51), Destination (120|0), Origin (66|0) Conflict.Attack (326|27) Attacker (192|12), Target (207|19), Instrument (37|15), Place (121|0) Conflict.Demonstrate (151|69) Entity (102|184), Police (3|26), Instrument (0|118), Place (86|25) Justice.ArrestJail (160|56) Agent (64|119), Person (147|99), Instrument (0|11), Place (43|0) Contact.PhoneWrite (33|37) Entity (33|46), Instrument (0|43), Place (8|0) Contact.Meet (127|79) Participant (119|321), Place (68|0) Life.Die (244|64) Agent (39|0), Instrument (4|2), Victim (165|155), Place (54|0) Transaction. TransferMoney (33|6) Giver (19|3), Recipient (19|5), Money (0|8) Table 1: Event types and argument roles in M2E2, with expanded ones in bold. Numbers in parentheses represent the counts of textual and visual events/arguments. We collect 108,693 multimedia news articles from the Voice of America (VOA) website 3 20062017, covering a wide range of newsworthy topics such as military, economy and health. We select 245 documents as the annotation set based on three criteria: (1) Informativeness: articles with more event mentions; (2) Illustration: articles with more images (> 4); (3) Diversity: articles that balance the event type distribution regardless of true frequency. The data statistics are shown in Table 2. Among all of these events, 192 textual event mentions and 203 visual event mentions can be aligned as 309 cross-media event mention pairs. The dataset can be divided into 1,105 text-only event mentions, 188 image-only event mentions, and 395 multimedia event mentions. Source Event Mention Argument Role sentence image textual visual textual visual 6,167 1,014 1,297 391 1,965 1,429 Table 2: M2E2 data statistics. We follow the ACE event annotation guidelines (Walker et al., 2006) for textual event and argument annotation, and design an annotation guideline 4 for multimedia events annotation. One unique challenge in multimedia event annotation is to localize visual arguments in complex scenarios, where images include a crowd of people or a group of object. It is hard to delineate 3https://www.voanews.com/ 4http://blender.cs.illinois.edu/software/ m2e2/ACL2020_M2E2_annotation.pdf Figure 2: Example of bounding boxes. each of them using a bounding box. 
To solve this problem, we define two types of bounding boxes: (1) union bounding box: for each role, we annotate the smallest bounding box covering all constituents; and (2) instance bounding box: for each role, we annotate a set of bounding boxes, where each box is the smallest region that covers an individual participant (e.g., one person in the crowd), following the VOC2011 Annotation Guidelines5. Figure 2 shows an example. Eight NLP and CV researchers complete the annotation work with two independent passes and reach an Inter-Annotator Agreement (IAA) of 81.2%. Two expert annotators perform adjudication. 3 Method 3.1 Approach Overview As shown in Figure 3, the training phase contains three tasks: text event extraction (Section 3.2), visual situation recognition (Section 3.3), and crossmedia alignment (Section 3.4). We learn a crossmedia shared encoder, a shared event classifier, and a shared argument classifier. In the testing phase (Section 3.5), given a multimedia news article, we encode the sentences and images into the structured common space, and jointly extract textual and visual events and arguments, followed by cross-modal coreference resolution. 3.2 Text Event Extraction Text Structured Representation: As shown in Figure 4, we choose Abstract Meaning Representation (AMR) (Banarescu et al., 2013) to represent text because it includes a rich set of 150 fine-grained semantic roles. To encode each text sentence, we run the CAMR parser (Wang et al., 2015b,a, 2016) to generate an AMR graph, based on the named entity recognition and partof-speech (POS) tagging results from Stanford CoreNLP (Manning et al., 2014). To represent each word w in a sentence s, we concatenate its 5http://host.robots.ox.ac.uk/pascal/VOC/ voc2011/guidelines.html 2560 For the rebels, bravado goes hand-inhand with the desperate resistance the insurgents have mounted..... trigger image entity region attend VOA Image-Caption Pairs Liana Owen [Participant] drove from Pennsylvania to attend [Contact.Meet] the rally in Manhattan with her parents [Participant]. ... ... destroying [Conflict.Attack] Item [Target]: ship Tool [Instrument]: bomb Liana Owen trigger image entity region ... ... insurgents imSitu Image Event Multimedia News resistance Contact.Meet Conflict.Attack Contact.Meet Participant Conflict.Attack Instrument Conflict.Attack Attacker Conflict.Attack Instrument Training Phase Testing Phase Cross-media Structured Common Representation Encoder Cross-media Shared Argument Classifier Conflict.Attack Alignment Cross-media Shared Event Classifier ACE Text Event Figure 3: Approach overview. During training (left), we jointly train three tasks to establish a cross-media structured embedding space. During test (right), we jointly extract events and arguments from multimedia articles. pre-trained GloVe word embedding (Pennington et al., 2014), POS embedding, entity type embedding and position embedding. We then input the word sequence to a bi-directional long short term memory (Bi-LSTM) (Graves et al., 2013) network to encode the word order and get the representation of each word w. 
Given the AMR graph, we apply a Graph Convolutional Network (GCN) (Kipf and Welling, 2016) to encode the graph contextual information following (Liu et al., 2018a): w(k+1) i = f( X j∈N(i) g(k) ij (WE(i,j)w(k) j + b(k) E(i,j))), (1) where N(i) is the neighbour nodes of wi in the AMR graph, E(i, j) is the edge type between wi and wj, gij is the gate following (Liu et al., 2018a), k represents GCN layer number, and f is the Sigmoid function. W and b denote parameters of neural layers in this paper. We take the hidden states of the last GCN layer for each word as the common-space representation wC, where C stands for the common (multimedia) embedding space. For each entity t, we obtain its representation tC by averaging the embeddings of its tokens. Event and Argument Classifier: We classify each word w into event types ye6 and classify each 6We use BIO tag schema to decide trigger word boundary, i.e., adding prefix B- to the type label to mark the beginning of a trigger, I- for inside, and O for none. entity t into argument role ya: P(ye|w) = exp WewC + be  P e′ exp (We′wC + be′), P(ya|t) = exp(Wa[tC; wC] + ba) P a′ exp(Wa′[tC; wC] + ba′). (2) We take ground truth text entity mentions as input following (Ji and Grishman, 2008) during training, and obtain testing entity mentions using a named entity extractor (Lin et al., 2019). 3.3 Image Event Extraction Image Structured Representation: To obtain image structures similar to AMR graphs, and inspired by situation recognition (Yatskar et al., 2016), we represent each image with a situation graph, that is a star-shaped graph as shown in Figure 4, where the central node is labeled as a verb v (e.g., destroying), and the neighbor nodes are arguments labeled as {(n, r)}, where n is a noun (e.g., ship) derived from WordNet synsets (Miller, 1995) to indicate the entity type, and r indicates the role (e.g., item) played by the entity in the event, based on FrameNet (Fillmore et al., 2003). We develop two methods to construct situation graphs from images and train them using the imSitu dataset (Yatskar et al., 2016) as follows. (1) Object-based Graph: Similar to extracting entities to get candidate arguments, we employ the 2561 Caption AMR Graph Attention-based Graph Image Structured Multimedia Common Space ... ... :agent :destination :item :item attack-01 protest-01 bus :ARG0 :ARG1 Bi-LSTM Context Thailand :name rally-01 :mod oppose-01 :ARG0-of person Bangkok :location support-01 pro-government Red Shirt :ARG0 :ARG0-of :mod :ARG1 attack-01 ... protest-01 bus rally-01 Bangkok :agent :destination ... Role-driven Attention GCN ... ... man car stone :ARG0 :ARG1 :location :mod throwing Thai opposition protesters [Attacker] attack [Conflict.Attack] a bus [Target] carrying progovernment Red Shirt supporters on their way to a rally at a stadium in Bangkok [Place]. AMR Parser Situation Graph Encoder GCN or Object-based Graph Figure 4: Multimedia structured common space construction. Red pixels stands for attention heatmap. most similar task in CV, object detection, and obtain the object bounding boxes detected by a Faster R-CNN (Ren et al., 2015) model trained on Open Images (Kuznetsova et al., 2018) with 600 object types ( classes).We employ a VGG-16 CNN (Simonyan and Zisserman, 2014) to extract visual features of an image m and and another VGG-16 to encode the bounding boxes {oi}. Then we apply a Multi-Layer Perceptron (MLP) to predict a verb embedding from m and another MLP to predict a noun embedding for each oi. ˆ m = MLPm(m) , ˆoi = MLPo(oi). 
We compare the predicted verb embedding to all verbs v in the imSitu taxonomy in order to classify the verb, and similarly compare each predicted noun embedding to all imSitu nouns n which results in probability distributions: P(v|m) = exp ( ˆmv) P v′ exp ( ˆmv′), P(n|oi) = exp( ˆoin) P n′ exp( ˆoin′), where v and n are word embeddings initialized with GloVE (Pennington et al., 2014). We use another MLP with one hidden layer followed by Softmax (σ) to classify role ri for each object oi: P(ri|oi) = σ MLPr( ˆoi)  . Given verb v∗and role-noun (r∗ i , n∗ i ) annotations for an image (from the imSitu corpus), we define the situation loss functions: Lv = −log P(v∗|m), Lr = −log(P(r∗ i |oi) + P(n∗ i |oi)). (2) Attention-based Graph: State-of-the-art object detection methods only cover a limited set of object types, such as 600 types defined in Open Images. Many salient objects such as bomb, stone and stretcher are not covered in these ontologies. Hence, we propose an open-vocabulary alternative to the object-based graph construction model. To this end, we construct a role-driven attention graph, where each argument node is derived by a spatially distributed attention (heatmap) conditioned on a role r. More specifically, we use a VGG-16 CNN to extract a 7×7 convolutional feature map for each image m, which can be regarded as attention keys ki for 7 × 7 local regions. Next, for each role r defined in the situation recognition ontology (e.g., agent), we build an attention query vector qr by concatenating role embedding r with the image feature m as context and apply a fully connected layer: qr = Wq[r; m] + bq. Then, we compute the dot product of each query with all keys, followed by Softmax, which forms a heatmap h on the image, i.e., hi = exp(qrki) P j∈7×7 exp(qrkj). 2562 We use the heatmap to obtain a weighted average of the feature map to represent the argument or of each role r in the visual space: or = X i himi. Similar to the object-based model, we embed or to ˆor, compare it to the imSitu noun embeddings to define a distribution, and define a classification loss function. The verb embedding ˆm and the verb prediction probability P(v|m) and loss are defined in the same way as in the object-based method. Event and Argument Classifier: We use either the object-based or attention-based formulation and pre-train it on the imSitu dataset (Yatskar et al., 2016). Then we apply a GCN to obtain the structured embedding of each node in the common space, similar to Equation 1. This yields mC and oC i . We use the same classifiers as defined in Equation 2 to classify each visual event and argument using the common space embedding: P(ye|m) = exp(WemC + be) P e′ exp(We′mC + be′), P(ya|o) = exp(Wa[oC; mC] + ba) P a′ exp(Wa′[oC; mC] + ba′). (3) 3.4 Cross-Media Joint Training In order to make the event and argument classifier shared across modalities, the image and text graph should be encoded to the same space. However, it is extremely costly to obtain the parallel text and image event annotation. Hence, we use event and argument annotations in separate modalities (i.e., ACE and imSitu datasets) to train classifiers, and simultaneously use VOA news image and caption pairs to align the two modalities. To this end, we learn to embed the nodes of each image graph close to the nodes of the corresponding caption graph, and far from those in irrelevant caption graphs. 
Since there is no ground truth alignment between the image nodes and caption nodes, we use image and caption pairs for weakly supervised training, to learn a soft alignment from each words to image objects and vice versa. αij = exp (wC i oC j ) P j′ exp (wC i oC j′), βji = exp (wC i oC j ) P i′ exp (wC i′oC j ), where wi indicates the ith word in caption sentence s and oj represents the jth object of image m. Then, we compute a weighted average of softly aligned nodes for each node in other modality, i.e., w′ i = X j αijoC j , o′ j = X i βjiwC i . (4) We define the alignment cost of the image-caption pair as the Euclidean distance between each node to its aligned representation, ⟨s, m⟩= X i ||wi −w′ i||2 2 + X j ||oj −o′ j||2 2 We use a triplet loss to pull relevant image-caption pairs close while pushing irrelevant ones apart: Lc = max(0, 1 + ⟨s, m⟩−⟨s, m−⟩), where m−is a randomly sampled negative image that does not match s. Note that in order to learn the alignment between the image and the trigger word, we treat the image as a special object when learning cross-media alignment. The common space enables the event and argument classifiers to share weights across modalities, and be trained jointly on the ACE and imSitu datasets, by minimizing the following objective functions: Le = − X w log P(ye|w) − X m log P(ye|m), La = − X t log P(ya|t) − X o log P(ya|o), All tasks are jointly optimized: L = Lv + Lr + Le + La + Lc 3.5 Cross-Media Joint Inference In the test phase, our method takes a multimedia document with sentences S = {s1, s2, . . . } and images M = {m1, m2, . . . , } as input. We first generate the structured common embedding for each sentence and each image, and then compute pairwise similarities ⟨s, m⟩. We pair each sentence s with the closest image m, and aggregate the features of each word of s with the aligned representation from m by weighted averaging: w′′ i = (1 −γ)wi + γw′ i, (5) where γ = exp(−⟨s, m⟩) and w′ i is derived from m using Equation 4. We use w′′ i to classify each 2563 Training Model Text-Only Evaluation Image-Only Evaluation Multimedia Evaluation Event Mention Argument Role Event Mention Argument Role Event Mention Argument Role P R F1 P R F1 P R F1 P R F1 P R F1 P R F1 Text JMEE 42.5 58.2 48.7 22.9 28.3 25.3 42.1 34.6 38.1 21.1 12.6 15.8 GAIL 43.4 53.5 47.9 23.6 29.2 26.1 44.0 32.4 37.3 22.7 12.8 16.4 WASET 42.3 58.4 48.2 21.4 30.1 24.9 41.2 33.1 36.7 20.1 13.0 15.7 Image WASEI att 29.7 61.9 40.1 9.1 10.2 9.6 28.3 23.0 25.4 2.9 6.1 3.8 WASEI obj 28.6 59.2 38.7 13.3 9.8 11.2 26.1 22.4 24.1 4.7 5.0 4.9 Multimedia VSE-C 33.5 47.8 39.4 16.6 24.7 19.8 30.3 48.9 26.4 5.6 6.1 5.7 33.3 48.2 39.3 11.1 14.9 12.8 Flatatt 34.2 63.2 44.4 20.1 27.1 23.1 27.1 57.3 36.7 4.3 8.9 5.8 33.9 59.8 42.2 12.9 17.6 14.9 Flatobj 38.3 57.9 46.1 21.8 26.6 24.0 26.4 55.8 35.8 9.1 6.5 7.6 34.1 56.4 42.5 16.3 15.9 16.1 WASEatt 37.6 66.8 48.1 27.5 33.2 30.1 32.3 63.4 42.8 9.7 11.1 10.3 38.2 67.1 49.1 18.6 21.6 19.9 WASEobj 42.8 61.9 50.6 23.5 30.3 26.4 43.1 59.2 49.9 14.5 10.1 11.9 43.0 62.1 50.8 19.5 18.9 19.2 Table 3: Event and argument extraction results (%). We compare three categories of baselines in three evaluation settings. The main contribution of the paper is joint training and joint inference on multimedia data (bottom right). word into an event type and to classify each entity into a role with multimedia classifiers in Equation 2. To this end, we define t′′ i similar to w′′ i but using ti and t′ i. 
Similarly, for each image m we find the closest sentence s, compute the aggregated multimedia features m′′ and o′′ i , and feed into the shared classifiers (Equation 3) to predict visual event and argument roles. Finally, we corefer the cross-media events of the same event type if the similarity ⟨s, m⟩is higher than a threshold. 4 Experiments 4.1 Evaluation Setting Evaluation Metrics We conduct evaluation on text-only, image-only, and multimedia event mentions in M2E2 dataset in Section 2.2. We adopt the traditional event extraction measures, i.e., Precision, Recall and F1. For text-only event mentions, we follow (Ji and Grishman, 2008; Li et al., 2013): a textual event mention is correct if its event type and trigger offsets match a reference trigger; and a textual event argument is correct if its event type, offsets, and role label match a reference argument. We make a similar definition for image-only event mentions: a visual event mention is correct if its event type and image match a reference visual event mention; and a visual event argument is correct if its event type, localization, and role label match a reference argument. A visual argument is correctly localized if the Intersection over Union (IoU) of the predicted bounding box with the ground truth bounding box is over 0.5. Finally, we define a multimedia event mention to be correct if its event type and trigger offsets (or the image) match the reference trigger (or the reference image). The arguments of multimedia events are either textual or visual arguments, and are evaluated accordingly. To generate bounding boxes for the attention-based model, we threshold the heatmap using the adaptive value of 0.75 ∗p, where p is the peak value of the heatmap. Then we compute the tightest bounding box that encloses all of the thresholded region. Examples are shown in Figure 7 and Figure 8. Baselines The baselines include: (1) Textonly models: We use the state-of-the-art model JMEE (Liu et al., 2018a) and GAIL (Zhang et al., 2019) for comparison. We also evaluate the effectiveness of cross media joint training by including a version of our model trained only on ACE, denoted as WASET. (2) Image-only models: Since we are the first to extract newsworthy events, and the most similar work situation recognition can not localize arguments in images, we use our model trained only on image corpus as baselines. Our visual branch has two versions, object-based and attention-based, denoted as WASEIobj and WASEIatt. (3) Multimedia models: To show the effectiveness of structured embedding, we include a baseline by removing the text and image GCNs from our model, which is denoted as Flat. The Flat baseline ignores edges and treats images and sentences as sets of vectors. We also compare to the state-of-the-art crossmedia common representation model, Contrastive Visual Semantic Embedding VSE-C (Shi et al., 2018), by training it the same way as WASE. Parameter Settings The common space dimension is 300. The dimension is 512 for image position embedding and feature map, and 50 for word position embedding, entity type embedding, and POS tag embedding. The layer of GCN is 3. 2564 4.2 Quantitative Performance As shown in Table 3, our complete methods (WASEatt and WASEobj) outperform all baselines in the three evaluation settings in terms of F1. The comparison with other multimedia models demonstrates the effectiveness of our model architecture and training strategy. 
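For concreteness, the bounding-box generation and localization check described in the evaluation setting above can be sketched as follows. This is illustrative NumPy code, not the authors' implementation, and coordinates are treated as continuous grid positions.

import numpy as np

def heatmap_to_bbox(heatmap, ratio=0.75):
    """Keep heatmap cells above the adaptive threshold ratio * peak and return the
    tightest enclosing box (x_min, y_min, x_max, y_max) in grid coordinates."""
    thresh = ratio * heatmap.max()
    ys, xs = np.where(heatmap >= thresh)
    return xs.min(), ys.min(), xs.max(), ys.max()

def iou(box_a, box_b):
    """Intersection over Union of two (x_min, y_min, x_max, y_max) boxes; a predicted
    argument is counted as correctly localized if this exceeds 0.5."""
    ix0, iy0 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix1, iy1 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(box_a) + area(box_b) - inter
    return inter / union if union > 0 else 0.0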
The advantage of structured embedding is shown by the better performance over the flat baseline. Our model outperforms its text-only and image-only variants on multimedia events, showing the inadequacy of single-modal information for complex news understanding. Furthermore, our model achieves better performance on text-only and image-only events, which demonstrates the effectiveness of the multimedia training framework in knowledge transfer between modalities. WASEobj and WASEatt are both superior to the state of the art, and each has its own advantages. WASEobj predicts more accurate bounding boxes since it is based on a Faster R-CNN pretrained on bounding box annotations, resulting in higher argument precision, while WASEatt achieves higher argument recall as it is not limited by the predefined object classes of the Faster R-CNN.

Table 4: Cross-media event coreference performance.
Model | P (%) | R (%) | F1 (%)
rule based | 10.1 | 100 | 18.2
VSE | 31.2 | 74.5 | 44.0
Flatatt | 33.1 | 73.5 | 45.6
Flatobj | 34.3 | 76.4 | 47.3
WASEatt | 39.5 | 73.5 | 51.5
WASEobj | 40.1 | 75.4 | 52.4

Furthermore, to evaluate the cross-media event coreference performance, we pair textual and visual event mentions in the same document, and calculate Precision, Recall and F1 to compare with ground truth event mention pairs.7 As shown in Table 4, WASEobj outperforms all multimedia embedding models, as well as the rule-based baseline using event type matching. This demonstrates the effectiveness of our cross-media soft alignment.

7 We do not use coreference clustering metrics because we only focus on mention-level cross-media event coreference instead of the full coreference in all documents.

4.3 Qualitative Analysis Our cross-media joint training approach successfully boosts both event extraction and argument role labeling performance. For example, in Figure 5 (a), the text-only model cannot extract the Justice.Arrest event, but the joint model can use the image as background to detect the event type. In Figure 5 (b), the image-only model detects the image as Conflict.Demonstration, but the sentences in the same document help our model not to label it as Conflict.Demonstration. Compared with multimedia flat embedding in Figure 6, WASE can learn structures such as Artifact is on top of Vehicle, and the person in the middle of Justice.Arrest is Entity instead of Agent.

Figure 5: Image helps textual event extraction, and surrounding sentence helps visual event extraction. (Example captions shown in the figure: "Iraqi security forces search [Justice.Arrest] a civilian in the city of Mosul." and "People celebrate Supreme Court ruling on Same Sex Marriage in front of the Supreme Court in Washington.")

Figure 6: Comparison with multimedia flat embedding. (Panel annotations: Flat — Event Movement.Transport, Role Artifact = none; Ours — Event Movement.Transport, Role Artifact = man. Flat — Event Justice:ArrestJail, Role Agent = man; Ours — Event Conflict.Attack, Role Entity = man.)

4.4 Remaining Challenges One of the biggest challenges in M2E2 is localizing arguments in images. Object-based models suffer from the limited object types. The attention-based method is not able to precisely localize the objects for each argument, since there is no supervision on attention extraction during training. For example, in Figure 7, the Entity argument in the Conflict.Demonstrate event is correctly predicted as troops, but its localization is incorrect because the Place argument shares similar attention. When one argument targets too many instances, attention heatmaps tend to lose focus and cover the whole image, as shown in Figure 8.
5 Related Work Text Event Extraction Text event extraction has been extensively studied for general news do2565 Entity: people Entity: troops Place: street Figure 7: Argument labeling error examples: correct entity name but wrong localization. Entity: people Place: street Entity: dissent Figure 8: Attention heatmaps lose focus due to large instance candidate number. main (Ji and Grishman, 2008; Liao and Grishman, 2011; Huang and Riloff, 2012; Li et al., 2013; Chen et al., 2015; Nguyen et al., 2016; Hong et al., 2018; Liu et al., 2018b; Chen et al., 2018; Zhang et al., 2019; Liu et al., 2018a; Wang et al., 2019; Yang et al., 2019; Wadden et al., 2019). Multimedia features has been proven to effectively improve text event extraction (Zhang et al., 2017). Visual Event Extraction “Events” in NLP usually refer to complex events that involve multiple entities in a large span of time (e.g. protest), while in CV (Chang et al., 2016; Zhang et al., 2007; Ma et al., 2017) events are less complex singleentity activities (e.g. washing dishes) or actions (e.g. jumping). Visual event ontologies focus on daily life domains, such as “dogshow” and “wedding ceremony” (Perera et al., 2012). Moreover, most efforts ignore the structure of events including arguments. There are a few methods that aim to localize the agent (Gu et al., 2018; Li et al., 2018; Duarte et al., 2018), or classify the recipient (Sigurdsson et al., 2016; Kato et al., 2018; Wu et al., 2019a) of events, but neither detects the complete set of arguments for an event. The most similar to our work is Situation Recognition (SR) (Yatskar et al., 2016; Mallya and Lazebnik, 2017) which predicts an event and multiple arguments from an input image, but does not localize the arguments. We use SR as an auxiliary task for training our visual branch, but exploit object detection and attention to enable localization of arguments. Silberer and Pinkal redefine the problem of visual argument role labeling with event types and bounding boxes as input. Different from their work, we extend the problem scope to including event identification and coreference, and further advance argument localization by proposing an attention framework which does not require bounding boxes for training nor testing. Multimedia Representation Multimedia common representation has attracted much attention recently (Toselli et al., 2007; Weegar et al., 2015; Hewitt et al., 2018; Chen et al., 2019; Liu et al., 2019; Su et al., 2019a; Sarafianos et al., 2019; Sun et al., 2019b; Tan and Bansal, 2019; Li et al., 2019a,b; Lu et al., 2019; Sun et al., 2019a; Rahman et al., 2019; Su et al., 2019b). However, previous methods focus on aligning images with their captions, or regions with words and entities, but ignore structure and semantic roles. UniVSE (Wu et al., 2019b) incorporates entity attributes and relations into cross-media alignment, but does not capture graph-level structures of images or text. 6 Conclusions and Future Work In this paper we propose a new task of multimedia event extraction and setup a new benchmark. We also develop a novel multimedia structured common space construction method to take advantage of the existing image-caption pairs and singlemodal annotated data for weakly supervised training. Experiments demonstrate its effectiveness as a new step towards semantic understanding of events in multimedia data. In the future, we aim to extend our framework to extract events from videos, and make it scalable to new event types. 
We plan to expand our annotations by including event types from other text event ontologies, as well as new event types not in existing text ontologies. We will also apply our extraction results to downstream applications including cross-media event inference, timeline generation, etc. Acknowledgement This research is based upon work supported in part by U.S. DARPA AIDA Program No. FA875018-2-0014 and U.S. DARPA KAIROS Program No. FA8750-19-2-1004. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein. 2566 References Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, pages 178–186. Fabian Caba Heilbron, Victor Escorcia, Bernard Ghanem, and Juan Carlos Niebles. 2015. Activitynet: A large-scale video benchmark for human activity understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 961–970. Xiaojun Chang, Zhigang Ma, Yi Yang, Zhiqiang Zeng, and Alexander G Hauptmann. 2016. Bilevel semantic representation analysis for multimedia event detection. IEEE transactions on cybernetics, 47(5):1180–1197. Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. 2019. Uniter: Learning universal image-text representations. arXiv preprint arXiv:1909.11740. Yubo Chen, Liheng Xu, Kang Liu, Daojian Zeng, and Jun Zhao. 2015. Event extraction via dynamic multi-pooling convolutional neural networks. In Proc. ACL-IJCNLP2015. Yubo Chen, Hang Yang, Kang Liu, Jun Zhao, and Yantao Jia. 2018. Collective event detection via a hierarchical and bias tagging networks with gated multilevel attention mechanisms. In Proc. EMNLP2018. Kevin Duarte, Yogesh Rawat, and Mubarak Shah. 2018. Videocapsulenet: A simplified network for action detection. In Advances in Neural Information Processing Systems, pages 7610–7619. Fartash Faghri, David J Fleet, Jamie Ryan Kiros, and Sanja Fidler. 2017. Vse++: Improving visualsemantic embeddings with hard negatives. Charles J Fillmore, Christopher R Johnson, and Miriam RL Petruck. 2003. Background to framenet. International journal of lexicography, 16(3):235– 250. Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. 2013. Speech recognition with deep recurrent neural networks. In 2013 IEEE international conference on acoustics, speech and signal processing, pages 6645–6649. IEEE. Chunhui Gu, Chen Sun, David A Ross, Carl Vondrick, Caroline Pantofaru, Yeqing Li, Sudheendra Vijayanarasimhan, George Toderici, Susanna Ricco, Rahul Sukthankar, et al. 2018. Ava: A video dataset of spatio-temporally localized atomic visual actions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6047– 6056. John Hewitt, Daphne Ippolito, Brendan Callahan, Reno Kriz, Derry Tanti Wijaya, and Chris Callison-Burch. 2018. Learning translations via images with a massively multilingual image dataset. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2566–2576. 
Yu Hong, Wenxuan Zhou, jingli zhang jingli, Guodong Zhou, and Qiaoming Zhu. 2018. Self-regulation: Employing a generative adversarial network to improve event detection. In Proc. ACL2018. Ruihong Huang and Ellen Riloff. 2012. Bootstrapped training of event extraction classifiers. In Proc. EACL2012. Heng Ji and Ralph Grishman. 2008. Refining event extraction through cross-document inference. In Proceedings of ACL-08: HLT, pages 254–262. Andrej Karpathy and Li Fei-Fei. 2015. Deep visualsemantic alignments for generating image descriptions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3128–3137. Keizo Kato, Yin Li, and Abhinav Gupta. 2018. Compositional learning for human object interaction. In Proceedings of the European Conference on Computer Vision (ECCV), pages 234–251. Thomas N Kipf and Max Welling. 2016. Semisupervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907. Alina Kuznetsova, Hassan Rom, Neil Alldrin, Jasper Uijlings, Ivan Krasin, Jordi Pont-Tuset, Shahab Kamali, Stefan Popov, Matteo Malloci, Tom Duerig, et al. 2018. The open images dataset v4: Unified image classification, object detection, and visual relationship detection at scale. arXiv preprint arXiv:1811.00982. Dong Li, Zhaofan Qiu, Qi Dai, Ting Yao, and Tao Mei. 2018. Recurrent tubelet proposal and recognition networks for action detection. In Proceedings of the European conference on computer vision (ECCV), pages 303–318. Gen Li, Nan Duan, Yuejian Fang, Daxin Jiang, and Ming Zhou. 2019a. Unicoder-vl: A universal encoder for vision and language by cross-modal pretraining. arXiv preprint arXiv:1908.06066. Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. 2019b. Visualbert: A simple and performant baseline for vision and language. arXiv preprint arXiv:1908.03557. Qi Li, Heng Ji, and Liang Huang. 2013. Joint event extraction via structured prediction with global features. In Proc. ACL2013. 2567 Shasha Liao and Ralph Grishman. 2011. Acquiring topic features to improve event extraction: in pre-selected and balanced collections. In Proc. RANLP2011. Ying Lin, Liyuan Liu, Heng Ji, Dong Yu, and Jiawei Han. 2019. Reliability-aware dynamic feature composition for name tagging. In Proc. The 57th Annual Meeting of the Association for Computational Linguistics (ACL2019). Chunxiao Liu, Zhendong Mao, An-An Liu, Tianzhu Zhang, Bin Wang, and Yongdong Zhang. 2019. Focus your attention: A bidirectional focal attention network for image-text matching. In Proceedings of the 27th ACM International Conference on Multimedia, pages 3–11. ACM. Xiao Liu, Zhunchen Luo, and Heyan Huang. 2018a. Jointly multiple events extraction via attentionbased graph information aggregation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1247–1256. Xiao Liu, Zhunchen Luo, and Heyan Huang. 2018b. Jointly multiple events extraction via attentionbased graph information aggregation. In Proc. EMNLP2018. Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. In H. Wallach, H. Larochelle, A. Beygelzimer, F. dAlch´e-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 13–23. Zhigang Ma, Xiaojun Chang, Zhongwen Xu, Nicu Sebe, and Alexander G Hauptmann. 2017. Joint attributes and event analysis for multimedia event detection. 
IEEE transactions on neural networks and learning systems, 29(7):2921–2930. Arun Mallya and Svetlana Lazebnik. 2017. Recurrent models for situation recognition. In Proceedings of the IEEE International Conference on Computer Vision, pages 455–463. Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Association for Computational Linguistics (ACL) System Demonstrations, pages 55–60. George A Miller. 1995. Wordnet: a lexical database for english. Communications of the ACM, 38(11):39– 41. Thien Huu Nguyen, Kyunghyun Cho, and Ralph Grishman. 2016. Joint event extraction via recurrent neural networks. In Proc. NAACL-HLT2016. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532– 1543. AG Amitha Perera, Sangmin Oh, P Megha, Tianyang Ma, Anthony Hoogs, Arash Vahdat, Kevin Cannons, Greg Mori, Scott Mccloskey, Ben Miller, et al. 2012. Trecvid 2012 genie: Multimedia event detection and recounting. In In TRECVID Workshop. Citeseer. Wasifur Rahman, Md Kamrul Hasan, Amir Zadeh, Louis-Philippe Morency, and Mohammed Ehsan Hoque. 2019. M-bert: Injecting multimodal information in the bert structure. arXiv preprint arXiv:1908.05787. Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. 2015. Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in neural information processing systems, pages 91–99. Nikolaos Sarafianos, Xiang Xu, and Ioannis A. Kakadiaris. 2019. Adversarial representation learning for text-to-image matching. In The IEEE International Conference on Computer Vision (ICCV). Haoyue Shi, Jiayuan Mao, Tete Xiao, Yuning Jiang, and Jian Sun. 2018. Learning visually-grounded semantics from contrastive adversarial samples. arXiv preprint arXiv:1806.10348. Gunnar A Sigurdsson, G¨ul Varol, Xiaolong Wang, Ali Farhadi, Ivan Laptev, and Abhinav Gupta. 2016. Hollywood in homes: Crowdsourcing data collection for activity understanding. In European Conference on Computer Vision, pages 510–526. Springer. Carina Silberer and Manfred Pinkal. 2018. Grounding semantic roles in images. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2616–2626. Karen Simonyan and Andrew Zisserman. 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. Khurram Soomro, Amir Roshan Zamir, and Mubarak Shah. 2012. Ucf101: A dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402. Mitchell Stephens. 1998. The Rise of the Image, The Fall of the Word. New York: Oxford University Press. Shupeng Su, Zhisheng Zhong, and Chao Zhang. 2019a. Deep joint-semantics reconstructing hashing for large-scale unsupervised cross-modal retrieval. In The IEEE International Conference on Computer Vision (ICCV). Weijie Su, Xizhou Zhu, Yue Cao, Bin Li, Lewei Lu, Furu Wei, and Jifeng Dai. 2019b. Vl-bert: Pretraining of generic visual-linguistic representations. arXiv preprint arXiv:1908.08530. 2568 Chen Sun, Fabien Baradel, Kevin Murphy, and Cordelia Schmid. 2019a. Contrastive bidirectional transformer for temporal representation learning. arXiv preprint arXiv:1906.05743. Chen Sun, Austin Myers, Carl Vondrick, Kevin Murphy, and Cordelia Schmid. 2019b. Videobert: A joint model for video and language representation learning. 
In The IEEE International Conference on Computer Vision (ICCV). Hao Tan and Mohit Bansal. 2019. Lxmert: Learning cross-modality encoder representations from transformers. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Alejandro H Toselli, Ver´onica Romero, and Enrique Vidal. 2007. Viterbi based alignment between text images and their transcripts. In Proceedings of the Workshop on Language Technology for Cultural Heritage Data (LaTeCH 2007)., pages 9–16. David Wadden, Ulme Wennberg, Yi Luan, and Hannaneh Hajishirzi. 2019. Entity, relation, and event extraction with contextualized span representations. arXiv preprint arXiv:1909.03546. Christopher Walker, Stephanie Strassel, Julie Medero, and Kazuaki Maeda. 2006. Ace 2005 multilingual training corpus. Linguistic Data Consortium, Philadelphia, 57. Chuan Wang, Sameer Pradhan, Xiaoman Pan, Heng Ji, and Nianwen Xue. 2016. Camr at semeval-2016 task 8: An extended transition-based amr parser. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 1173– 1178, San Diego, California. Association for Computational Linguistics. Chuan Wang, Nianwen Xue, and Sameer Pradhan. 2015a. Boosting transition-based amr parsing with refined actions and auxiliary analyzers. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 857–862, Beijing, China. Association for Computational Linguistics. Chuan Wang, Nianwen Xue, and Sameer Pradhan. 2015b. A transition-based algorithm for amr parsing. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 366–375, Denver, Colorado. Association for Computational Linguistics. Rui Wang, Deyu Zhou, and Yulan He. 2019. Open event extraction from online text using a generative adversarial network. arXiv preprint arXiv:1908.09246. Rebecka Weegar, Kalle ˚Astr¨om, and Pierre Nugues. 2015. Linking entities across images and text. In Proceedings of the Nineteenth Conference on Computational Natural Language Learning, pages 185– 193. Chao-Yuan Wu, Christoph Feichtenhofer, Haoqi Fan, Kaiming He, Philipp Krahenbuhl, and Ross Girshick. 2019a. Long-term feature banks for detailed video understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 284–293. Hank Wu, Jiayuan Mao, Yufeng Zhang, Yuning Jiang, Lei Li, Weiwei Sun, and Wei-Ying Ma. 2019b. Univse: Robust visual semantic embeddings via structured semantic representations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Sen Yang, Dawei Feng, Linbo Qiao, Zhigang Kan, and Dongsheng Li. 2019. Exploring pre-trained language models for event extraction and generation. In Proceedings of the 57th Conference of the Association for Computational Linguistics, pages 5284– 5294. Mark Yatskar, Luke Zettlemoyer, and Ali Farhadi. 2016. Situation recognition: Visual semantic role labeling for image understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5534–5542. Guangnan Ye, Yitong Li, Hongliang Xu, Dong Liu, and Shih-Fu Chang. 2015. Eventnet: A large scale structured concept library for complex event detection in video. In Proceedings of the 23rd ACM international conference on Multimedia, pages 471–480. ACM. Tongtao Zhang, Heng Ji, and Avirup Sil. 2019. 
Joint entity and event extraction with generative adversarial imitation learning. Data Intelligence Vol 1 (2): 99-120. Tongtao Zhang, Spencer Whitehead, Hanwang Zhang, Hongzhi Li, Joseph Ellis, Lifu Huang, Wei Liu, Heng Ji, and Shih-Fu Chang. 2017. Improving event extraction via multimodal integration. In Proceedings of the 25th ACM international conference on Multimedia, pages 270–278. ACM. Yifan Zhang, Changsheng Xu, Yong Rui, Jinqiao Wang, and Hanqing Lu. 2007. Semantic event extraction from basketball games using multi-modal analysis. In 2007 IEEE International Conference on Multimedia and Expo, pages 2190–2193. IEEE.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2569–2588 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2569 Learning to Segment Actions from Observation and Narration Daniel Fried‡ Jean-Baptiste Alayrac† Phil Blunsom† Chris Dyer† Stephen Clark† Aida Nematzadeh† †DeepMind, London, UK ‡Computer Science Division, UC Berkeley [email protected] {jalayrac,pblunsom,cdyer,clarkstephen,nematzadeh}@google.com Abstract We apply a generative segmental model of task structure, guided by narration, to action segmentation in video. We focus on unsupervised and weakly-supervised settings where no action labels are known during training. Despite its simplicity, our model performs competitively with previous work on a dataset of naturalistic instructional videos. Our model allows us to vary the sources of supervision used in training, and we find that both task structure and narrative language provide large benefits in segmentation quality. 1 Learning to Segment Actions Finding boundaries in a continuous stream is a crucial process for human cognition (Martin and Tversky, 2003; Zacks and Swallow, 2007; Levine et al., 2019; ¨Unal et al., 2019). To understand and remember what happens in the world around us, we need to recognize the action boundaries as they unfold and also distinguish the important actions from the insignificant ones. This process, referred to as temporal action segmentation, is also an important first step in systems that ground natural language in videos (Hendricks et al., 2017). These systems must identify which frames in a video depict actions – which amounts to distinguishing these frames from background ones – and identify which actions (e.g., boiling potatoes) each frame depicts. Despite recent advances (Miech et al., 2019; Sun et al., 2019), unsupervised action segmentation in videos remains a challenge. The recent availability of large datasets of naturalistic instructional videos provides an opportunity for modeling of action segmentation in a rich task context (Yu et al., 2014; Zhou et al., 2018; Zhukov et al., 2019; Miech et al., 2019; Tang et al., 2019); Work begun while DF was interning at DeepMind. Code is available at https://github.com/dpfried/action-segmentation. in these videos, a person teaches a specific highlevel task (e.g., making croquettes) while describing the lower-level steps involved in that task (e.g., boiling potatoes). However, the real-world nature of these datasets introduces many challenges. For example, more than 70% of the frames in one of the YouTube instructional video datasets, CrossTask (Zhukov et al., 2019), consist of background regions (e.g., the video presenter is thanking their viewers), which do not correspond to any of the steps for the video’s task. These datasets are interesting because they provide (1) narrative language that roughly corresponds to the activities demonstrated in the videos and (2) structured task scripts that define a strong signal of the order in which steps in a task are typically performed. As a result, these datasets provide an opportunity to study the extent to which task structure and language can guide action segmentation. Interestingly, young children can segment actions without any explicit supervision (Baldwin et al., 2001; Sharon and Wynn, 1998), by tapping into similar cues – action regularities and language descriptions (Levine et al., 2019). 
While previous work mostly focuses on building action segmentation models that perform well on a few metrics (Richard et al., 2018; Zhukov et al., 2019), we aim to provide insight into how various modeling choices impact action segmentation. How much do unsupervised models improve when given implicit supervision from task structure and language, and which types of supervision help most? Are discriminative or generative models better suited for the task? Does explicit structure modeling improve the quality of segmentation? To answer these questions, we compare two existing models with a generative hidden semi-Markov model, varying the degree of supervision. On a challenging and naturalistic dataset of instructional videos (Zhukov et al., 2019), we find 2570 that our model and models from past work both benefit substantially from the weak supervision provided by task structure and narrative language, even on top of rich features from state-of-the-art pretrained action and object classifiers. Our analysis also shows that: (1) Generative models tend to do better than discriminative models of the same or similar model class at learning the full range of step types, which benefits action segmentation; (2) Task structure affords strong, feature-agnostic baselines that are difficult for existing systems to surpass; (3) Reporting multiple metrics is necessary to understand each model’s effectiveness for action segmentation; we can devise feature-agnostic baselines that perform well on single metrics despite producing low-quality action segments. 2 Related Work Typical methods (Rohrbach et al., 2012; Singh et al., 2016; Xu et al., 2017; Zhao et al., 2017; Lea et al., 2017; Yeung et al., 2018; Farha and Gall, 2019) for temporal action segmentation consist of assigning action classes to intervals of videos and rely on manually-annotated supervision. Such annotation is difficult to obtain at scale. As a result, recent work has focused on training such models with less supervision: one line of work assumes that only the order of actions happening in the video is given and use this weak supervision to perform action segmentation (Bojanowski et al., 2014; Huang et al., 2016; Kuehne et al., 2017; Richard et al., 2017; Ding and Xu, 2018; Chang et al., 2019). Other approaches weaken this supervision and use only the set of actions that occur in each video (Richard et al., 2018), or are fully unsupervised (Sener and Yao, 2018; Kukleva et al., 2019). Instructional videos have gained interest over the past few years (Yu et al., 2014; Sener et al., 2015; Malmaud et al., 2015; Alayrac et al., 2016; Zhukov et al., 2019) since they enable weakly-supervised modeling: previous work most similar to ours consists of models that localize actions in narrated videos with minimal supervision (Alayrac et al., 2016; Sener et al., 2015; Elhamifar and Naing, 2019; Zhukov et al., 2019). We present a generative model of action segmentation that incorporates duration modeling, narration and ordering constraints, and can be trained in all of the above supervision conditions by maximizing the likelihood of the data; while these past works have had these individual components, they have not yet all been combined. 3 The CrossTask Dataset We use the recent CrossTask dataset (Zhukov et al., 2019) of instructional videos. To our knowledge, CrossTask is the only available dataset that has tasks from more than one domain, includes background regions, provides step annotations and naturalistic language. 
Other datasets lack one of these; e.g.they focus on one domain (Kuehne et al., 2014) or do not have natural language (Tang et al., 2019) or step annotations (Miech et al., 2019). An example instance from the dataset is shown in Figure 1, and we describe each aspect below. Tasks Each video comes from a task, e.g. make a latte, with tasks taken from the titles of selected WikiHow articles, and videos curated from YouTube search results for the task name. We focus on the primary section of the dataset, containing 2,700 videos from 18 different tasks. Steps and canonical order Each task has a set of steps: lower-level action step types, e.g., steam milk and pour milk, which are typically completed when performing the task. Step names consist of a few words, typically naming an action and an object it is applied to. The dataset also provides a canonical step order for each task: an ordering, like a script (Schank and Abelson, 1977; Chambers and Jurafsky, 2008), in which a task’s steps are typically performed. For each task, the set of step types and their canonical order were hand-constructed by the dataset creators based on section headers in the task’s WikiHow article. Annotations Each video in the primary section of the dataset is annotated with labeled temporal segments identifying where steps occur. (In the weak supervision setting, these step segment labels are used only in evaluation, and never in training.) A given step for a task can occur multiple times, or not at all, in any of the task’s videos. Steps in a video also need not occur in the task’s canonical ordering (although in practice our results show that this ordering is a helpful inductive bias for learning). Most of the frames in videos (72% over the entire corpus) are background – not contained in any step segment. Narration Videos also have narration text (transcribed by YouTube’s automatic speech recognition system) which typically consists of a mix of the 2571 Regions background pour mixture into pan flip pancake background background Video Narration "hey folks here welcome to my kitchen [...] folks my pan is nice and hot [...] just change the angle to show you [...] let cook [...] sit on towel [...] big old stack [...] Timestep Time (in s) Step background flip pancake rm pancake background Figure 1: An example video instance from the CrossTask dataset (Sec. 3). The video depicts a task, make pancakes, and is annotated with region segments, which can be either action steps (e.g., pour mixture into pan) or background regions. Videos also are temporally-aligned with transcribed narration. We learn to segment the video into these regions and label them with the action steps (or background), without access to region annotations during training. task demonstrator describing their actions and talking about unrelated topics. Although narration is temporally aligned with the video, and steps (e.g., pour milk) are sometimes mentioned, these mentions often do not occur at the same time as the step they describe (e.g., “let the milk cool before pouring it”). Zhukov et al. (2019) guide weaklysupervised training using the narration by defining a set of narration constraints for each video, which identify where in the video steps are likely to occur, using similarity between the step names and temporally-aligned narration (see Sec. 6.1). 4 Model Our generative model of the video features and labeled task segments is a first-order semi-Markov model. 
We use a semi-Markov model for the action segmentation task because it explicitly models temporal regions of the video, their duration, their probable ordering, and their features.1 It can be trained in an unsupervised way, without labeled regions, to maximize the likelihood of the features. Timesteps Our atomic unit is a one-second region of the video, which we refer to as a timestep. A video with T timesteps has feature vectors x1:T. The features xt at timestep t are derived from the video, its narration, or both, and in our work (and past work on the dataset) are produced by pre-trained neural models which summarize some non-local information in the region containing each timestep, which we describe in Sec. 6.3. Regions Our model segments a video with T timesteps into a sequence of regions, each of which consists of a consecutive number of timesteps (the region's duration). The number of regions K in a video and the duration dk of each region can vary; the only constraint is that the sum of the durations equals the video length: \sum_{k=1}^{K} d_k = T. Each region has a label rk, which is either one of the task's step labels (e.g., pour milk) or a special label BKG indicating the region is background. In our most general, unconstrained model, a given task step can occur multiple times (or not at all) as a region label in any video for the task, allowing step repetitions, dropping, and reordering. Structure We define a first-order Markov (bigram) model over these region labels:

P(r_{1:K}) = P(r_1) \prod_{k=2}^{K} P(r_k \mid r_{k-1}) \qquad (1)

with tabular conditional probabilities. While region labels are part of the dataset, they are primarily used for evaluation: we seek models that can be trained in the unsupervised and weakly-supervised conditions where labels are unavailable. This model structure, while simple, affords a dynamic program allowing efficient enumeration over both all possible segmentations of the video into regions and assignments of labels to the regions, allowing unsupervised training (Sec. 4.1). Duration Our model, following past work (Richard et al., 2018), parameterizes region durations using Poisson distributions, where each label type r has its own mean duration λr: d_k ∼ Poisson(λ_{r_k}). These durations are constrained so that they partition the video: e.g., region r2 begins at timestep d1 (after region r1), and the final region rK ends at the final timestep T. Timestep labels The region labels r1:K (step, or background) and region durations d1:K together give a sequence of timestep labels l1:T for all timesteps, where a timestep's label is equal to the label for the region it is contained in.

1 Semi-Markov models have also been shown to be successful in the similar domain of speech recognition (e.g., Pylkkonen and Kurimo, 2004).
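To make the dynamic program concrete, below is a rough NumPy/SciPy sketch of the semi-Markov forward computation that sums over all segmentations and label assignments for one video. It takes per-timestep emission log-probabilities as input, truncates durations at a maximum value for tractability, and is an illustrative simplification rather than the released implementation; all names are assumptions.

import numpy as np
from scipy.special import logsumexp
from scipy.stats import poisson

def semimarkov_log_marginal(emission_logprobs, lam, log_init, log_trans, max_dur):
    """Log marginal likelihood log P(x_{1:T}) of a semi-Markov model over region labels.
    emission_logprobs: (T, L) log p(x_t | label); lam: (L,) Poisson mean durations;
    log_init: (L,) initial label log-probs; log_trans: (L, L) label bigram log-probs."""
    T, L = emission_logprobs.shape
    # cum[t, l] = sum of emission log-probs of the first t timesteps under label l
    cum = np.vstack([np.zeros((1, L)), np.cumsum(emission_logprobs, axis=0)])
    # log Poisson duration probabilities for d = 1..max_dur, shape (max_dur, L)
    dur = np.stack([poisson.logpmf(np.arange(1, max_dur + 1), lam[l]) for l in range(L)], axis=1)
    # alpha[t, l]: log-sum over segmentations of x_{1:t} whose last region has label l
    alpha = np.full((T + 1, L), -np.inf)
    for t in range(1, T + 1):
        for l in range(L):
            scores = []
            for d in range(1, min(max_dur, t) + 1):
                region = cum[t, l] - cum[t - d, l] + dur[d - 1, l]   # emissions + duration
                prev = log_init[l] if t == d else logsumexp(alpha[t - d] + log_trans[:, l])
                scores.append(region + prev)
            alpha[t, l] = logsumexp(scores)
    return logsumexp(alpha[T])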
Labels are atomic and task-specific, e.g., the step type pour milk when it occurs in the task make a latte does not share parameters with the step add milk when it occurs in the task make pancakes.2 We use a diagonal covariance matrix Σ which is fixed to the empirical covariance of each feature dimension.3 4.1 Training In the unsupervised setting, labels l are unavailable at training (used only in evaluation). We describe training in this setting, as well as two supervised training methods which we use to analyze properties of the dataset and compare model classes. Unsupervised We train the generative model as a hidden semi-Markov model (HSMM). We optimize the model’s parameters to maximize the log marginal likelihood of the features for all video instance features x(i) in the training set: ML = N X i log P(x(i) 1:Ti) (2) Applying the semi-Markov forward algorithm (Murphy, 2002; Yu, 2010) allows us to marginalize over all possible sequences of step labels to compute the log marginal likelihood for each video as a function of the model parameters, which we optimize directly using backpropagation and minibatched gradient descent with the Adam (Kingma and Ba, 2015) optimizer.4 See Appendix A for optimization details. Generative supervised Here the labels l are observed; we train the model as a generative semiMarkov model (SMM) to maximize the log joint likelihood: J L = N X i log P(l(i) 1:Ti, x(i) 1:Ti) (3) 2We experimented with sharing steps, or step components, across tasks in initial experiments, but found that it was helpful to have task-specific structural probabilities. 3We found that using a shared diagonal covariance matrix outperformed using full or unshared covariance matrices. 4This is the same as mini-batched Expectation Maximization using gradient descent on the M-objective (Eisner, 2016). Richard et al. (2018) Zhukov et al. (2019) Ours step reordering ✓ ✓ step repetitions ✓ ✓ step duration ✓ ✓ language ✓ ✓ generative model ✓ ✓ Table 1: Characteristics of each model we compare. We maximize this likelihood over the entire training set using the closed form solution given the dataset’s sufficient statistics (per-step feature means, average durations, and step transition frequencies). Discriminative supervised To train the SMM model discriminatively in the supervised setting, we use gradient descent to maximize the log conditional likelihood: CL = N X i log P(l(i) 1:T | x(i) 1:T ) (4) 5 Benchmarks We identify five modeling choices made in recent work: imposing a fixed ordering on steps (not allowing step reordering); allowing for steps to repeat in a video; modeling the duration of steps; using the language (narrations) associated with the video; and using a discriminative/generative model. We picked the recent models of Zhukov et al. (2019) and Richard et al. (2018) since they have non-overlapping strengths (see Table 1). ORDEREDDISCRIM This work (Zhukov et al., 2019) uses a discriminative classifier which gives a probability distribution over labels at each timestep: p(lt | xt). Inference finds an assignment of steps to timesteps that maximizes P t log p(lt|xt) subject to the constraints that: all steps are predicted exactly once; steps occur in the fixed canonical ordering defined for the task; one background region occurs between each step. 
Unsupervised training of the model alternates between inferring labels using the dynamic program, and updating the classifier to maximize the probability of these inferred labels.5 ACTIONSETS This work (Richard et al., 2018) uses a generative model which has structure similar to ours, but uses dataset statistics (e.g., average video length and number of steps) to learn the 5To allow the model to predict step regions with duration longer than a single timestep, we modify this classifier to also predict a background class, and incorporate the scores of the background class into the dynamic program. 2573 structure distributions, rather than setting parameters to maximize the likelihood of the data. As in our model, region durations are modeled using a class-conditional Poisson distribution. The feature distribution is modeled using Bayesian inversion of a discriminative classifier (a multi-layer perceptron) with an estimated label prior. The structural parameters of the model (durations and class priors) are estimated using the length of each video, and the number of possible step types. As originally presented, this model depends on knowing which steps occur in a video at training time; for fair comparison, we adapt it to the same supervision conditions of Zhukov et al. (2019) by enforcing the canonical step ordering for the task during both training and evaluation. 6 Experimental Setting We compare models on the CrossTask dataset across supervision conditions. We primarily evaluate the models on action segmentation (Sec. 1). Past work on the dataset (Zhukov et al., 2019) has focused on a step recognition task, where models identify individual timesteps in videos that correspond to possible steps; for comparison, we also report performance for all models on this task. 6.1 Supervision Conditions In all settings, the task for a given video is known (and hence the possible steps), but the settings vary in the availability of other sources of supervision: step labels for each timestep in a video, and constraints from language and step ordering. Models are trained on a training set and evaluated on a separate held-out testing set, consisting of different videos (from the same tasks). Supervised Labels for all timesteps l1:T are provided for all videos in the training set. Fully unsupervised No labels for timesteps are available during training. The only supervision is the number of possible step types for each task (and, as in all settings, which task each video is from). In evaluation, the task for a given video (and hence the possible steps, but not their ordering) are known. We follow past work in this setting (Sener et al., 2015; Sener and Yao, 2018) by finding a mapping from model states to region labels that maximizes label accuracy, averaged across all videos in the task. See Appendix C for details. Weakly supervised No labels for timesteps are available, but two supervision types are used in the form of constraints (Zhukov et al., 2019): (1) Step ordering constraints: Step regions are constrained to occur in the canonical step ordering (see Sec. 3) for the task, but steps may be separated by background. We constrain the structure prior distributions p(r1) and transition distributions p(rk+1|rk) of the HSMM to enforce this ordering. For p(r1), we only allow non-zero probability for the background region, BKG, and for the first step in the task’s ordering. 
p(rk | rk−1) constrains each step type to only transition to the next step in the constrained ordering, or to BKG.6 As step ordering constraints change the parameters of the model, when we use them we enforce them during both training and testing. While this obviates most of the learned structure of the HSMM, the duration model (as well as the feature model) is still learned. (2) Narration constraints: These give regions in the video where each step type is likely to occur. Zhukov et al. (2019) obtained these using similarities between word vectors for the transcribed narration and the words in the step labels, and a dynamic program to produce constraint regions that maximize these similarities, subject to the step ordering matching the canonical task ordering. See Zhukov et al. for details. We enforce these constraints in the HSMM by penalizing the feature distributions to prevent any step labels that occur outside of one of the allowed constraint regions for that step. Following Zhukov et al., we only use these narration constraints during training.7 6.2 Evaluation We use three metrics from past work, outlined here and described in more detail in Appendix D. To evaluate action segmentation, we use two varieties of the standard label accuracy metric (Sener and Yao, 2018; Richard et al., 2018): all label accuracy, which is computed on all timesteps, including background and non-background, as well as step label accuracy: accuracy only for timesteps that occur in a non-background region (according to the ground-truth annotations). Since these two accuracy metrics are defined on individual frames, 6To enforce ordering when steps are separated by BKG, we annotate BKG labels with the preceeding step type (but all BKG labels for a task share feature and duration parameters, and are merged for evaluation). 7We also experiment with using features derived from transcribed narration in Appendix G. 2574 they penalize models if they don’t capture the full temporal extent of actions in their predicted segmentations. Our third metric is step recall, used by past work on the CrossTask dataset (Zhukov et al., 2019) to measure step recognition (defined in Sec. 6). This metric evaluates the fraction of step types which are correctly identified by a model when it is allowed to predict only one frame per step type, per video. A high step recall indicates a model can accurately identify at least one representative frame of each action type in a video. We also report three other statistics to analyze the predicted segmentations: (1) Sequence similarity: the similarity of the sequence of region labels predicted in the video to the groundtruth, using inverse Levenshtein distance normalized to be between 0 and 100. See Appendix D for more details. (2) Predicted background percentage: the percentage of timesteps for which the model predicts the background label. Models with a higher percentage than the ground truth background percentage (72%) are overpredicting background. (3) Number of segments: the number of step segments predicted in a video. Values higher than the ground truth average (7.7) indicate overly-fragmented steps. Sequence similarity and number of segments are particularly relevant for measuring the effects of structure, as they do not factor over individual timesteps (as do the all label and step label accuracies and step recall). We average values across the 18 tasks in the evaluation set (following Zhukov et al., 2019). 
6.3 Features For our features x1:T , we use the same base features as Zhukov et al. (2019), which are produced by convolutional networks pre-trained on separate activity, object, and audio classification datasets. See Appendix B for details. In our generative models, we apply PCA (following Kuehne et al., 2014 and Richard et al., 2018) to project features to 300 dimensions and decorrelate dimensions (see Appendix B for details).8 7 Results We first define several baselines based on dataset statistics (Sec. 7.1), which we will find to be strong in comparison to past work. We then analyze each 8This reduces the number of parameters that need to be learned in the emission distributions, both by reducing the dimensionality and allowing a diagonal covariance matrix. In early experiments we found PCA improved performance. B1 B2 B3 S1 S2 S3 S4 S6 S5 U1 U2 U3 U4 U5 U6 U7 0 10 20 30 40 50 0 10 20 30 40 50 60 Step Recall Step Label Accuracy Baselines Supervised: Unstructured Supervised: Structured Fully Unsupervised Ordering Constraints Narration Constraints Ordering+Narration OrderedDiscrim OrderedDiscrim SMM, discrim. HSMM+Narr+Ord SMM, gen. Figure 2: Baseline and model performance on two key metrics: step label accuracy and step recall. Points are colored according to their supervision type, and labeled with their row number from Table 2. We also label particular important models. aspect of our proposed model on the dataset in a supervised training setting (Sec. 7.2), removing some error sources of unsupervised learning and evaluating whether a given model fits the dataset (Liang and Klein, 2008). Finally, we move to our main setting, the weakly-supervised setting of past work, incrementally adding step ordering and narration constraints (see Sec. 6.1) to evaluate the degree to which each helps (Sec. 7.3). Results are given in Table 2 for models trained on the CrossTask training set of primary tasks, and evaluated on the held-out validation set. We will describe and analyze each set of results in turn. See Figure 2 for a plot of models’ performance on two key metrics, and Appendix I for example predictions. 7.1 Dataset Statistic Baselines Table 2 (top block) shows baselines that do not use video (or narration) features, but predict steps according to overall statistics of the training data. These demonstrate characteristics of the data, and the importance of using multiple metrics. Predict background (B1) Since most timesteps are background, a model that predicts background everywhere can obtain high overall label accuracy, showing the importance of also using step label accuracy as a metric for action segmentation. Sample from the training distribution (B2) For each timestep in each video, we sample a label from the empirical distribution of step and background label frequencies for the video’s task in the training data. 2575 All Label Step Label Step Sequence Predicted Num. # Model Accuracy Accuracy Recall Similarity Bkg. % Segments. Dataset Statistic Baselines (Sec. 7.1) GT Ground truth 100.0 100.0 100.0 100.0 71.9 7.7 B1 Predict background 71.9 0.0 0.0 9.0 100.0 0.0 B2 Sample from train distribution 54.6 7.2 8.3 12.8 72.4 69.5 B3 Ordered uniform 55.6 8.1 12.2 55.0 73.0 7.4 Supervised (Sec. 
7.2) Unstructured S1 Discriminative linear 71.0 36.0 31.6 30.7 73.3 27.1 S2 Discriminative MLP 75.9 30.4 27.7 41.1 82.8 13.0 S3 Gaussian mixture 69.4 40.6 31.5 33.3 68.9 23.9 Structured S4 ORDEREDDISCRIM 75.2 18.1 45.4 54.4 90.7 7.4 S5 SMM, discriminative 66.0 37.3 24.1 50.5 65.9 8.5 S6 SMM, generative 60.5 49.4 28.7 46.6 52.4 10.6 Un- and Weakly-Supervised (Sec. 7.3) Fully Unsupervised U1 HSMM (with opt. acc. assignment) 31.8 28.8 10.6 31.0 31.1 15.4 Ordering Supervision U2 ACTIONSETS 40.8 14.0 12.1 55.0 49.8 7.4 U3 ORDEREDDISCRIM (without Narr.) 69.5 0.2 2.8 55.0 97.2 7.4 U4 HSMM + Ord 55.5 8.3 7.3 55.0 70.6 7.4 Narration Supervision U5 HSMM + Narr 65.7 9.6 8.5 35.1 84.6 4.5 Ordering + Narration Supervision U6 ORDEREDDISCRIM 71.0 1.8 24.5 55.0 97.2 7.4 U7 HSMM + Narr + Ord 61.2 15.9 17.2 55.0 73.7 7.4 Table 2: Model comparison on the CrossTask validation data. We evaluate primarily using all label accuracy and step label accuracy to evaluate action segmentation, and step recall to evaluate step recognition. Ordered uniform (B3) For each video, we predict step regions in the canonical step order, separated by background regions. The length of each region is set so that all step regions in a video have equal duration, and the percentage of background timesteps is equal to the corpus average. See Uniform in Figure 3a for sample predictions. Sampling each timestep label independently from the task distribution (row B2), and using a uniform step assignment in the task’s canonical ordering with background (B3) both obtain similar step label accuracy, but the ordered uniform baseline improves substantially on the step recall metric, indicating that step ordering is a useful inductive bias for step recognition. 7.2 Full Supervision Models in the unstructured block of Table 2 are classification models applied independently to all timesteps, allowing us to compare the performance of the feature models used as components in our structured models. We find that a Gaussian mixture model (row S3), which is used as the feature model in the HSMM, obtains comparable step recall and substantially higher step label accuracy than a discriminative linear classifer (row S1) similar to the one used in Zhukov et al. (2019), which is partially explained by the discriminative classifier overpredicting the background class (comparing Predicted Background % for those two rows). Using a higher capacity discriminative classifier, a neural net with a single hidden layer (MLP), improves performance over the linear model on several metrics (row S2); however, the MLP still overpredicts background, substantially underperforming the Gaussian mixture on the step label accuracy metric. In the structured block of Table 2, we compare the full models which use step constraints (Zhukov et al., 2019) or learned transition distributions (the SMM) to model task structure. The structured models learn (or in the case of Zhukov et al., enforce) orderings over the steps, which greatly improve their sequence similarity scores when compared to the unstructured models, and decrease step fragmentation (as measured by num. segments). Figure 3a shows predictions for a typical video, demonstrating this decreased fragmentation.9 9We also perform an ablation study to understand the effect of the duration model. See Appendix F for details. 
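For reference, the ordered uniform baseline (B3) can be sketched as follows. The exact placement of background timesteps is not fully specified above, so the even split of background across the gaps around the steps is an assumption of this illustrative Python sketch.

def ordered_uniform_baseline(T, canonical_steps, bkg_frac=0.72):
    """Label T timesteps with the task's steps in canonical order, with equal step
    durations and bkg_frac of timesteps labeled background ("BKG"). The background
    is split evenly across the gaps before and between steps, with any remainder
    placed at the end; this split is one reasonable choice, not necessarily the
    paper's exact one."""
    K = len(canonical_steps)
    n_bkg = int(round(bkg_frac * T))
    step_len = (T - n_bkg) // K
    gap_len = n_bkg // (K + 1)
    labels = []
    for step in canonical_steps:
        labels.extend(["BKG"] * gap_len)
        labels.extend([step] * step_len)
    labels.extend(["BKG"] * (T - len(labels)))   # remainder goes to trailing background
    return labels

# e.g. ordered_uniform_baseline(300, ["pour milk", "steam milk", "pour espresso"])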
2576 GT Uniform GMM SMM 0 50 100 150 200 250 300 BKG pour sesame oil add onion add ham add kimchi add rice stir mixture (a) Step segmentations in the full supervision condition for a video from the make kimchi fried rice task, comparing the ground truth (GT), ordered uniform baseline (Uniform), and predictions from the Gaussian mixture (GMM) and semi-Markov (SMM) models. BKG GT Ordered Discrim HSMM+ Narr+Ord HSMM pour egg add flour pour mixture into pan add sugar flip pancake whisk mixture take pancake from pan pour milk 0 50 100 150 200 250 300 350 (b) Step segmentations in the no- or weak-supervision conditions for a video from the make pancakes task, comparing the ground truth (GT) to predictions from our model without (HSMM) and with constraint supervision (HSMM+Narr+Ord) and from Zhukov et al. (2019) (ORDEREDDISCRIM). Figure 3: Step segmentation visualizations for two sample videos in supervised (left) and unsupervised (right) conditions. The x-axes show timesteps, in seconds. See Appendix I for more visualizations. We see two trends in the supervised results: (1) Generative models obtain substantially higher step label accuracy than discriminative models of the same or similar class. This is likely due to the fact that the generative models directly parameterize the step distribution. (See Appendix E.) (2) Structured sequence modeling naturally improves performance on sequence-level metrics (sequence similarity and number of segments predicted) over the unstructured models. However, none of the learned structured models improve on the strong ordered uniform baseline (B3) which just predicts the canonical ordering of a task’s steps (interspersed with background regions). This will motivate using this canonical ordering as a constraint in unsupervised learning. Overall, the SMM models obtain strong action segmentation performance (high step label accuracy without fragmenting segments or overpredicting background). 7.3 No or Weak Supervision Here models are trained without supervision for the labels l1:T . We compare models trained without any constraints, to those that use constraints from step ordering and narration, in the Un- and Weakly Supervised block of Table 2. Example outputs are shown in Appendix I. Our generative HSMM model affords training without any constraints (row U1). This model has high step label accuracy (compared to the other unsupervised models) but low all label accuracy, and similar scores for both metrics. This hints, and other metrics confirm, that the model is not adequately distinguishing steps from background: the percentage of predicted background is very low (31%) compared to the ground truth (72%, row GT). See HSMM in Figure 3b for predictions for a typical video. These results are attributable to features within a given video (even across step types) being more similar than features of the same step type in different videos (see Appendix H for feature visualizations). The induced latent model states typically capture this inter-video diversity, rather than distinguishing steps across tasks. We next add in constraints from the canonical step ordering, which our supervised results showed to be a strong inductive bias. Unlike in the fully unsupervised setting, the HSMM model with ordering (HSMM+Ord, row U4) learns to distinguish steps from background when constrained to predict each step region once in a video, with predicted background timesteps (70.6%) close to the ground-truth (72%). 
However, performance of this model is still very low on the task metrics – comparable to or underperforming the ordered uniform baseline with background (row B3) on all metrics. This constrained step ordering setting also allows us to apply ACTIONSETS (Richard et al., 2018) and ORDEREDDISCRIM (Zhukov et al., 2019). ACTIONSETS obtains high step label accuracy, but substantially underpredicts background, as evidenced by both the all label accuracy and the low predicted background percentage. The tendency of ORDEREDDISCRIM to overpredict background, which we saw in the supervised setting (row S4), is even more pronounced in this weakly-supervised setting (row U3), resulting in scores very close to the predict-background baseline (B1).

Next, we use narration constraints (U5), which are enforced only during training time, following Zhukov et al. (2019). Narration constraints substantially improve all label accuracy (comparing U1 and U5). However, the model overpredicts background, likely because it doesn't enforce each step type to occur in a given video. Overpredicting background causes step label accuracy and step recall to decrease.

Finally, we compare the HSMM and ORDEREDDISCRIM models when using both narration constraints (in training) and ordering constraints (in training and testing) in the ordering + narration block. Both models benefit substantially from narration on all metrics when compared to using only ordering supervision, more than doubling their performance on step label accuracy and step recall (comparing U6 and U7 to U3 and U4).

Our weakly-supervised results show that: (1) Both action segmentation metrics – all label accuracy and step label accuracy – are important to evaluate whether models adequately distinguish meaningful actions from background. (2) Step constraints derived from the canonical step ordering provide a strong inductive bias for unsupervised step induction. Past work requires these constraints, and the HSMM, when trained without them, does poorly, learning to capture diversity across videos rather than to identify steps. (3) However, ordering supervision alone is not sufficient to allow these models to learn better segmentations than a simple baseline that just uses the ordering to assign labels (ordered uniform); narration is also required.

7.4 Comparison to Past Work

Finally, we compare our full model to the ORDEREDDISCRIM model of Zhukov et al. (2019) in the primary data evaluation setup from that work: averaging results over 20 random splits of the primary data (Table 3). This is a low-data setting which uses only 30 videos per task as training data in each split. Accordingly, both models have lower performance, although the relative ordering is the same: higher step label accuracy for the HSMM, and higher all label accuracy and step recall for ORDEREDDISCRIM. Although models overpredict background even more in this low-data setting, this problem is less pronounced for the HSMM: 97.4% of timesteps for ORDEREDDISCRIM are predicted background (explaining its high all label accuracy), and 87.1% for HSMM.

Model              All Label Acc.  Step Label Acc.  Step Recall
ORDEREDDISCRIM          71.3             1.2            17.9
HSMM+Narr+Ord           66.0             5.6            14.2

Table 3: Unsupervised and weakly supervised results in the cross-validation setting.

8 Discussion

We find that unsupervised action segmentation in naturalistic instructional videos is greatly aided by the inductive bias given by typical step orderings within a task, and by narrative language describing the actions being done.
While some results are more mixed (with the same supervision, different models are better on different metrics), we do observe that across settings and metrics, step ordering and narration increase performance. Our results also illustrate the importance of strong baselines: without weak supervision from step orderings and narrative language, even state-of-the-art unsupervised action segmentation models operating on rich video features underperform feature-agnostic baselines. We hope that future work will continue to evaluate broadly. While action segmentation in videos from diverse domains remains challenging – videos contain both a large variety of types of depicted actions, and high visual variety in how the actions are portrayed – we find that structured generative models provide a strong benchmark for the task due to their abilities to capture the full diversity of action types (by directly modeling distributions over action occurrences), and to benefit from weak supervision. Future work might explore methods for incorporating richer learned representations both of the diverse visual observations in videos, and the narration that describes them, into such models. Acknowledgments Thanks to Dan Klein, Andrew Zisserman, Lisa Anne Hendricks, Aishwarya Agrawal, G´abor Melis, Angeliki Lazaridou, Anna Rohrbach, Justin Chiu, Susie Young, the DeepMind language team, and the anonymous reviewers for helpful feedback on this work. DF is supported by a Google PhD Fellowship. References Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, and Sudheendra Vijayanarasimhan. 2016. YouTube-8M: A large-scale video classification benchmark. arXiv preprint arXiv:1609.08675. 2578 Jean-Baptiste Alayrac, Piotr Bojanowski, Nishant Agrawal, Josef Sivic, Ivan Laptev, and Simon Lacoste-Julien. 2016. Unsupervised learning from narrated instruction videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Dare A Baldwin, Jodie A Baird, Megan M Saylor, and M Angela Clark. 2001. Infants parse dynamic action. Child Development, 72(3):708–717. Piotr Bojanowski, R´emi Lajugie, Francis Bach, Ivan Laptev, Jean Ponce, Cordelia Schmid, and Josef Sivic. 2014. Weakly supervised action labeling in videos under ordering constraints. In Proceedings of the European Conference on Computer Vision (ECCV). Joao Carreira and Andrew Zisserman. 2017. Quo vadis, action recognition? a new model and the Kinetics dataset. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Nathanael Chambers and Dan Jurafsky. 2008. Unsupervised learning of narrative event chains. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). Chien-Yi Chang, De-An Huang, Yanan Sui, Li Fei-Fei, and Juan Carlos Niebles. 2019. D3TW: Discriminative differentiable dynamic time warping for weakly supervised action alignment and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Li Ding and Chenliang Xu. 2018. Weakly-supervised action segmentation with iterative soft boundary assignment. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Jason Eisner. 2016. Inside-outside and forwardbackward algorithms are just backprop (tutorial paper). In Proceedings of the Workshop on Structured Prediction for NLP. Ehsan Elhamifar and Zwe Naing. 2019. Unsupervised procedure learning via joint dynamic summarization. 
In Proceedings of the IEEE International Conference on Computer Vision (ICCV). Yazan Abu Farha and Jurgen Gall. 2019. MS-TCN: Multi-stage temporal convolutional network for action segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Lisa Anne Hendricks, Oliver Wang, Eli Shechtman, Josef Sivic, Trevor Darrell, and Bryan Russell. 2017. Localizing moments in video with natural language. In Proceedings of the IEEE International Conference on Computer Vision (ICCV). Shawn Hershey, Sourish Chaudhuri, Daniel PW Ellis, Jort F Gemmeke, Aren Jansen, R Channing Moore, Manoj Plakal, Devin Platt, Rif A Saurous, Bryan Seybold, et al. 2017. CNN architectures for largescale audio classification. In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). De-An Huang, Li Fei-Fei, and Juan Carlos Niebles. 2016. Connectionist temporal modeling for weakly supervised action labeling. In Proceedings of the European Conference on Computer Vision (ECCV). Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, et al. 2017. The Kinetics human action video dataset. arXiv preprint arXiv:1705.06950. Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations (ICLR). Dan Klein and Christopher D. Manning. 2002. Conditional structure versus conditional estimation in NLP models. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). Hilde Kuehne, Ali Arslan, and Thomas Serre. 2014. The language of actions: Recovering the syntax and semantics of goal-directed human activities. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Hilde Kuehne, Alexander Richard, and Juergen Gall. 2017. Weakly supervised learning of actions from transcripts. In CVIU. Harold W Kuhn. 1955. The Hungarian method for the assignment problem. Naval research logistics quarterly, 2(1-2):83–97. Anna Kukleva, Hilde Kuehne, Fadime Sener, and Jurgen Gall. 2019. Unsupervised learning of action classes with continuous temporal embedding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Colin Lea, Michael D Flynn, Rene Vidal, Austin Reiter, and Gregory D Hager. 2017. Temporal convolutional networks for action segmentation and detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Dani Levine, Daphna Buchsbaum, Kathy Hirsh-Pasek, and Roberta M Golinkoff. 2019. Finding events in a continuous world: A developmental account. Developmental Psychobiology, 61(3):376–389. Percy Liang and Dan Klein. 2008. Analyzing the errors of unsupervised learning. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). 2579 Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research (JMLR), 9(Nov):2579–2605. Jonathan Malmaud, Jonathan Huang, Vivek Rathod, Nicholas Johnston, Andrew Rabinovich, and Kevin Murphy. 2015. What’s cookin’? interpreting cooking videos using text, speech and vision. 
In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics (NAACL). Bridgette A Martin and Barbara Tversky. 2003. Segmenting ambiguous events. In Proceedings of the Annual Meeting of the Cognitive Science Society. Antoine Miech, Dimitri Zhukov, Jean-Baptiste Alayrac, Makarand Tapaswi, Ivan Laptev, and Josef Sivic. 2019. HowTo100M: Learning a text-video embedding by watching hundred million narrated video clips. In Proceedings of the IEEE International Conference on Computer Vision (ICCV). Tomas Mikolov, Edouard Grave, Piotr Bojanowski, Christian Puhrsch, and Armand Joulin. 2018. Advances in pre-training distributed word representations. In Proceedings of the International Conference on Language Resources and Evaluation (LREC). Kevin Murphy. 2002. Hidden semi-markov models. Unpublished tutorial. Janne Pylkkonen and Mikko Kurimo. 2004. Duration modeling techniques for continuous speech recognition. In Eighth International Conference on Spoken Language Processing. Alexander Richard, Hilde Kuehne, and Juergen Gall. 2017. Weakly supervised action learning with RNN based fine-to-coarse modeling. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Alexander Richard, Hilde Kuehne, and Juergen Gall. 2018. Action sets: Weakly supervised action segmentation without ordering constraints. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Marcus Rohrbach, Sikandar Amin, Mykhaylo Andriluka, and Bernt Schiele. 2012. A database for fine grained activity detection of cooking activities. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. 2015. ImageNet Large Scale Visual Recognition Challenge. IJCV, 115(3):211–252. Roger C Schank and Robert P Abelson. 1977. Scripts, plans, goals, and understanding: An inquiry into human knowledge structures. Psychology Press. Fadime Sener and Angela Yao. 2018. Unsupervised learning and segmentation of complex activities from video. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). O. Sener, A. Zamir, S. Savarese, and A. Saxena. 2015. Unsupervised semantic parsing of video collections. In Proceedings of the IEEE International Conference on Computer Vision (ICCV). Tanya Sharon and Karen Wynn. 1998. Individuation of actions from continuous motion. Psychological Science, 9(5):357–362. Karen Simonyan and Andrew Zisserman. 2015. Very deep convolutional networks for large-scale image recognition. International Conference on Learning Representations (ICLR). Bharat Singh, Tim K Marks, Michael Jones, Oncel Tuzel, and Ming Shao. 2016. A multi-stream bidirectional recurrent neural network for fine-grained action detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Chen Sun, Fabien Baradel, Kevin Murphy, and Cordelia Schmid. 2019. Contrastive bidirectional transformer for temporal representation learning. arXiv preprint arXiv:1906.05743. Yansong Tang, Dajun Ding, Yongming Rao, Yu Zheng, Danyang Zhang, Lili Zhao, Jiwen Lu, and Jie Zhou. 2019. COIN: A large-scale dataset for comprehensive instructional video analysis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Ercenur ¨Unal, Yue Ji, and Anna Papafragou. 2019. 
From event representation to linguistic meaning. Topics in Cognitive Science. Huijuan Xu, Abir Das, and Kate Saenko. 2017. R-C3D: Region convolutional 3d network for temporal activity detection. In Proceedings of the IEEE International Conference on Computer Vision (ICCV). Serena Yeung, Olga Russakovsky, Ning Jin, Mykhaylo Andriluka, Greg Mori, and Li Fei-Fei. 2018. Every moment counts: Dense detailed labeling of actions in complex videos. International Journal of Computer Vision (IJCV), 126(2-4):375–389. Shoou-I Yu, Lu Jiang, and Alexander Hauptmann. 2014. Instructional videos for unsupervised harvesting and learning of action examples. In Proceedings of the ACM International Conference on Multimedia (MM). Shun-Zheng Yu. 2010. Hidden semi-Markov models. Artificial Intelligence, 174(2):215–243. Jeffrey M Zacks and Khena M Swallow. 2007. Event segmentation. Current Directions in Psychological Science, 16(2):80–84. Yue Zhao, Yuanjun Xiong, Limin Wang, Zhirong Wu, Xiaoou Tang, and Dahua Lin. 2017. Temporal action detection with structured segment networks. In Proceedings of the IEEE International Conference on Computer Vision (ICCV). Luowei Zhou, Xu Chenliang, and Jason J. Corso. 2018. Towards automatic learning of procedures from web instructional videos. In Proceedings of the Conference on Artificial Intelligence (AAAI). Dimitri Zhukov, Jean-Baptiste Alayrac, Ramazan Gokberk Cinbis, David Fouhey, Ivan Laptev, and Josef Sivic. 2019. Cross-task weakly supervised learning from instructional videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).

A Optimization

For both training conditions of our semi-Markov models that require gradient descent (generative unsupervised and discriminative supervised), we initialize parameters randomly and use Adam (Kingma and Ba, 2015) with an initial learning rate of 5e-3 and a batch size of 5 videos, and decay the learning rate when training log likelihood does not decrease for more than one epoch.

B Features

For our features $x_{1:T}$, we use the same base features as Zhukov et al. (2019). There are three feature types: activity recognition features, produced by an I3D model (Carreira and Zisserman, 2017) trained on the Kinetics-400 dataset (Kay et al., 2017); object classification features, from a ResNet-152 (He et al., 2016) trained on ImageNet (Russakovsky et al., 2015); and audio classification features10 from the VGG model (Simonyan and Zisserman, 2015) trained by Hershey et al. (2017) on a preliminary version of the YouTube-8M dataset (Abu-El-Haija et al., 2016).11

For the generative models, which use Gaussian emission distributions, we apply PCA to the base features above to reduce the feature dimensionality and decorrelate dimensions. We perform PCA separately for features within each task and within each feature group (I3D, ResNet, and audio features), but on features from all videos within that task. We use 100 components for each feature group, which explained roughly 70-100% of the variance in the features, depending on the task and feature group. The 100-dimensional PCA representations for the I3D, ResNet, and audio features for each frame at timestep t are then concatenated to give a 300-dimensional vector for the frame, $x_t$.

10 https://github.com/tensorflow/models/tree/master/research/audioset/vggish
11 We also experiment with using features derived from transcribed narration in Appendix G.
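The per-task PCA preprocessing just described can be sketched with scikit-learn as follows; the data layout (a list of per-video dictionaries keyed by feature group) and the function name are assumptions made for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA

def preprocess_task_features(videos, n_components=100):
    """Rough sketch of the per-task feature preprocessing described above.

    `videos` is assumed to be a list of dicts, one per video in a single task,
    each mapping a feature-group name ("i3d", "resnet", "audio") to an array of
    shape (T_video, D_group).  PCA is fit per feature group on the frames of
    *all* videos in the task, and the reduced groups are concatenated per frame.
    """
    groups = ["i3d", "resnet", "audio"]
    pcas = {}
    for g in groups:
        stacked = np.concatenate([v[g] for v in videos], axis=0)   # (sum of T, D_group)
        pcas[g] = PCA(n_components=n_components).fit(stacked)

    processed = []
    for v in videos:
        reduced = [pcas[g].transform(v[g]) for g in groups]        # each (T_video, 100)
        processed.append(np.concatenate(reduced, axis=1))          # (T_video, 300)
    return processed
```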
C Unsupervised Evaluation

The HSMM model, when trained in a fully unsupervised setting, induces class labels for regions in the video; however, while these class labels are distinct, they do not correspond a priori to any of the actual region labels (which can be step types, or background) for our task. Just as with other unsupervised tasks and models (e.g., part-of-speech induction), we need a mapping from these classes to step types (and background) in order to evaluate the model's predictions. We follow the evaluation procedure of past work (Sener and Yao, 2018; Sener et al., 2015) by finding the mapping from model states to region labels that maximizes label accuracy, averaged across all videos in the task, using the Hungarian method (Kuhn, 1955). This evaluation condition is only used in the "Unsupervised" section of Table 2 (in the rows marked with optimal accuracy assignment).

D Evaluation Metrics

Label accuracy The standard metric for action segmentation (Sener and Yao, 2018; Richard et al., 2018) is timestep label accuracy or, in datasets with a large amount of background, label accuracy on non-background timesteps. The CrossTask dataset has multiple reference step labels in the ground truth for around 1% of timesteps, due to noisy region annotations that overlap slightly. We obtain a single reference label for these timesteps by taking the step that appears first in the canonical step ordering for the task. We then compute accuracy of the model predictions against these reference labels across all timesteps and all videos for a task (in the all label accuracy condition), or by filtering to those timesteps which have a step label (non-background) in the reference (to focus on the model's ability to accurately predict step labels), in the step label accuracy condition.

Step recall This metric (Zhukov et al., 2019) measures a model's ability to pick out instants for each of the possible step types for a task, if they occur in a video. The model of Zhukov et al. (2019) predicted a single frame for each step type; while our extension of their model, ORDEREDDISCRIM, and our HSMM model can predict multiple, when computing this metric we obtain a single frame for each step type to make the numbers comparable to theirs. When a model predicts multiple frames per step type, we obtain a single one by taking the one closest to the middle of the temporal extent of the predicted frames for that step type. We then apply their recall metric: first, count the number of recovered steps, i.e., step types from the true labels for the video that were identified by one of the predicted labels (have a predicted label of the same type at one of the true label's frames). These recovered step counts are summed across videos in the evaluation set for a given task, and normalized by the maximum number of possible recovered steps (the number of step types in each video, summed across videos) to produce a step recall fraction for the task.
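A rough sketch of this step recall computation follows; the representative-frame selection and normalization track the description above, while tie-breaking and other edge cases are handled in whatever way was convenient here and may differ from the original implementation.

```python
import numpy as np

def step_recall(true_labels_per_video, pred_labels_per_video, background="BKG"):
    """Sketch of the step recall metric described above.

    Both arguments are lists (one entry per video in a task) of per-timestep
    label arrays.  For each step type a model predicts, we keep only the single
    predicted frame closest to the middle of that step's predicted extent; a
    true step type counts as recovered if its representative predicted frame
    carries a true label of the same type.  Recovered counts are summed over
    videos and normalized by the number of true step types, summed over videos.
    """
    recovered, possible = 0, 0
    for true, pred in zip(true_labels_per_video, pred_labels_per_video):
        true, pred = np.asarray(true), np.asarray(pred)
        # One representative frame per predicted step type.
        rep_frame = {}
        for step in set(pred) - {background}:
            frames = np.where(pred == step)[0]
            mid = (frames[0] + frames[-1]) / 2.0
            rep_frame[step] = frames[np.argmin(np.abs(frames - mid))]
        # A true step type is recovered if its representative predicted frame
        # falls on a timestep whose true label is that same step type.
        true_steps = set(true) - {background}
        possible += len(true_steps)
        for step in true_steps:
            if step in rep_frame and true[rep_frame[step]] == step:
                recovered += 1
    return recovered / possible if possible else 0.0
```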
Sequence similarity This measures the similarity of the predicted sequence of regions in a video against the true sequence of regions. As in speech recognition, we are interested in the high-level sequence of steps recognized in a video (and wish to abstract away from noise in the boundaries of the annotated regions). We first compute the negated Levenshtein distance between the true sequence of steps and background $r_1, \ldots, r_K$ for a video and the predicted sequence $\hat{r}_1, \ldots, \hat{r}_{K'}$. The negated distances for the sequence pairs of a given video are scaled to be between 0 and 100, where 0 indicates that the Levenshtein distance is the maximum possible between two sequences of their respective lengths, and 100 corresponds to the sequences being identical. These similarities are then averaged across all videos in a task.

E Comparing Generative and Discriminative Models

We observe that the generative models tend to obtain higher performance on the action segmentation task, as measured by step label accuracy, than discriminative models of the same or similar class. We attribute this finding to two factors: first, the generative models explicitly parameterize probabilities for the steps, allowing better modeling of the full distribution of step labels. Second, the discriminative models are trained to optimize $p(l_t \mid x_t)$ for all timesteps t. We would expect this to produce better accuracies on metrics aligned with this objective (Klein and Manning, 2002) – and indeed the all-timestep accuracy is higher for the discriminative models. However, the discriminative models' high accuracy often comes at the expense of predicting background more frequently, leading to lower performance on step label accuracy.

F Duration Model Ablation

We examine the effect of the (hidden) semi-Markov model's Poisson duration model by comparing to a (hidden) Markov model (HMM in the unsupervised/weakly-supervised settings, or MM in the supervised setting). We use the model as described in Sec. 4, except that we fix all durations to be a single timestep. We then train as described in Sec. 4.1. While this does away with explicit modeling of duration, the transition distribution still allows the model to learn expected durations for each region type by implicitly parameterizing a geometric distribution over region length.

Model               All Label Acc.  Step Label Acc.  Step Recall  Seq. Sim.
Supervised
  SMM, gen.              60.5            49.4            28.7        46.6
  MM, gen.               60.1            48.6            28.2        46.8
  SMM, disc.             66.0            37.3            24.1        50.5
  MM, disc.              62.8            32.2            20.1        41.8
Weakly-Supervised
  HSMM                   31.8            28.8            10.6        31.0
  HMM                    28.8            30.8            10.3        29.9
  HSMM+Ord+Narr          61.2            15.9            17.2        55.0
  HMM+Ord+Narr           60.6            17.0            20.0        55.0

Table 4: Comparison between the semi-Markov and hidden semi-Markov models (SMM and HSMM) and the Markov and hidden Markov (MM and HMM) models, which ablate the semi-Markov models' duration model.

Results are shown in Table 4. We observe that results are overall very similar, with the exceptions that removing the duration model decreases performance substantially on all metrics in the discriminative supervised setting, and increases performance on step label accuracy and step recall in the constrained unsupervised setting (HSMM+Ord+Narr and HMM+Ord+Narr). This suggests that the HMM transition distribution is able to model region duration as well as the HSMM's explicit duration model, or that duration overall plays a small role in modeling in most settings relative to the importance of the features.

G Narration Features

The benefit of narration-derived hard constraints on labels (following past work by Zhukov et al. 2019) raises the question of how much narration would help when used to provide features for the models. We obtain narration features for each video using FastText word embeddings (Mikolov et al., 2018) for the video's time-aligned transcribed narration (see Zhukov et al.
2019 for details on this transcription), pooled within a sliding window to allow for imperfect alignment between activities mentioned in the narration and their occurrence in the video. The features for a given timestep t are produced by a weighted sum of embeddings for all the words in the transcribed narration within a 5-second window of t (i.e., from t−2 to t+2), weighted using a Hanning window12 (so that words in the center of each window are most heavily weighted for that window). We did not tune the window size, or experiment with other weighting functions. The word embeddings are pretrained on Common Crawl, and are not fine-tuned with the rest of the model parameters. Once these narration features are produced, as above, we treat them in the same way as the other feature types (activity recognition, object classification, and audio) described in Appendix B: reducing their dimensionality with PCA, and concatenating them with the other feature groups to produce the features $x_t$.

12 https://docs.scipy.org/doc/numpy/reference/generated/numpy.hanning.html

In Table 5, we show performance of key supervised and weakly-supervised models on the validation set when using these narration features in addition to activity recognition, object detection, and audio features. Narration features improve performance over the corresponding systems from Table 2 (differences are shown in parentheses) in 13 out of 15 cases, typically by 1-4%.

Model                  All Label Acc.   Step Label Acc.   Step Recall
Supervised
  Gaussian mixture       70.4 (+1.0)      43.7 (+3.1)      34.9 (+3.4)
  SMM, generative        63.3 (+2.8)      53.2 (+3.8)      32.1 (+3.4)
Weakly-Supervised
  HSMM+Ord               53.6 (-1.9)       9.5 (+1.2)       8.5 (+1.2)
  HSMM+Narr              68.9 (+3.2)       8.0 (-1.6)      12.6 (+4.1)
  HSMM+Narr+Ord          64.3 (+3.1)      17.9 (+2.0)      21.9 (+4.7)

Table 5: Performance of key supervised and weakly-supervised models on the validation data when adding narration vectors as features. Numbers in parentheses give the change from adding narration vectors to the systems from Table 2.

H Feature Visualizations

To give a sense of feature similarities both within step types and within a video, we visualize feature vectors for 20 videos randomly chosen from the change a tire task, dimensionality-reduced using t-SNE (Maaten and Hinton, 2008) so that similar feature vectors are close in the visualization. Figure 4a shows feature vectors colored by step type: we see little consistent clustering of feature vectors by step. On the other hand, we observe a great deal of similarity across step types within a video (see Figure 4b); when we color feature vectors by video, different steps from the same video are close to each other in space. These together suggest that better featurization of videos can improve action segmentation.

Figure 4: t-SNE visualization of frame feature vectors. (a) Feature vectors colored by their step label in the reference annotations. (b) Feature vectors colored by the id of the video they occur in.

I Segmentation Visualizations

In the following pages, we show example segmentations from the various systems. Figures 5 and 6 visualize predicted model segmentations for the unstructured Gaussian mixture and structured semi-Markov model in the supervised setting, in comparison to the ground truth and the ordered uniform baseline. We see that while both models typically make similar predictions in the same temporal regions of the video, the structured model produces steps that are much less fragmented. Figures 7 and 8 visualize segmentations in the unsupervised and weakly-supervised settings for the HSMM model and ORDEREDDISCRIM of Zhukov et al. (2019).
The unsupervised HSMM has difficulty distinguishing steps from background (see Appendix H), while the model trained with weak supervision from ordering and narration (HSMM+Ord+Narr) is better able to induce meaningful steps. The ORDEREDDISCRIM model, although it has been modified to allow predicting multiple timesteps per step, collapses to predicting a single label, background, nearly everywhere, which we conjecture is because the model is discriminatively trained: jointly inferring labels that are easy to predict, and the model parameters to predict them.

Figure 5: Supervised segmentations. We visualize segmentations from the validation set for a video from the task make kimchi fried rice. We show the ground truth (GT), ordered uniform baseline (Uniform), and predictions from the unstructured Gaussian mixture model (GMM) and structured semi-Markov model (SMM) trained in the supervised setting. Predictions from the unstructured model are more fragmented than predictions from the SMM. The x-axis gives the timestep in the video.

Figure 6: Supervised segmentations. We visualize segmentations from the validation set for a video from the task build simple floating shelves. We show the ground truth (GT), ordered uniform baseline (Uniform), and predictions from the unstructured Gaussian mixture model (GMM) and structured semi-Markov model (SMM) trained in the supervised setting. Predictions from the unstructured model are more fragmented than predictions from the SMM. The x-axis gives the timestep in the video.

Figure 7: Unsupervised and weakly-supervised segmentations. We visualize segmentations from the validation set for a video from the task make pancakes. We show the ground truth (GT), ordered uniform baseline (Uniform), and predictions from the hidden semi-Markov model trained without constraints (HSMM) and with constraints from narration and ordering (HSMM+Narr+Ord), and the system of Zhukov et al. The x-axis gives the timestep in the video.

Figure 8: Unsupervised and weakly-supervised segmentations. We visualize segmentations from the validation set for a video from the task grill steak. We show the ground truth (GT), ordered uniform baseline (Uniform), and predictions from the hidden semi-Markov model trained without constraints (HSMM) and with constraints from narration and ordering (HSMM+Narr+Ord), and the system of Zhukov et al. The x-axis gives the timestep in the video.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2589–2602, July 5 - 10, 2020. ©2020 Association for Computational Linguistics

Learning to execute instructions in a Minecraft dialogue

Prashant Jayannavar, Anjali Narayan-Chen, Julia Hockenmaier
University of Illinois at Urbana-Champaign
{paj3, nrynchn2, juliahmr}@illinois.edu

Abstract

The Minecraft Collaborative Building Task is a two-player game in which an Architect A instructs a Builder B to construct a target structure out of 3D blocks. We consider the task of predicting B's action sequences (block placements and removals) in a given game context, and show that capturing B's past actions as well as B's perspective leads to a significant improvement in performance on this challenging language understanding problem.

1 Introduction

There is a long-standing interest in building interactive agents that can communicate with humans about and operate within the physical world (e.g. Winograd (1971)). The goal for agents in this scenario is to not only be able to engage in rich natural language discourse with their human conversation partners, but also to ground that discourse to physical objects, and execute instructions in the real world. Traditional dialogue scenarios are either completely ungrounded (Ritter et al., 2010; Schrading et al., 2015), focus on slot-value filling tasks (Kim et al., 2016b,a; Budzianowski et al., 2018), which instead require grounding to entities in a knowledge base, or operate within static environments, such as images (Das et al., 2017) or videos (Pasunuru and Bansal, 2018). Relevant efforts in robotics have largely focused on single-shot instruction following, and are mostly constrained to simple language (Roy and Reiter, 2005; Tellex et al., 2011) with limited resources (Thomason et al., 2015; Misra et al., 2016; Chai et al., 2018).

The recently introduced Minecraft Collaborative Building Task and the corresponding Minecraft Dialogue Corpus (Narayan-Chen et al., 2019) are one attempt to bridge this gap within the simulated game world of Minecraft. In this task, an Architect (A) instructs a Builder (B) to construct a target structure out of multi-colored building blocks. The corpus consists of 509 game logs between humans that perform this task. Narayan-Chen et al. (2019) focus on generating Architect utterances. In this paper, we explore models for building an automated Builder agent.1 We focus on the subtask of predicting the Builder's block placements, and leave the back-and-forth dialogue aspect of the overall task required of a fully interactive Builder agent to future work. We define the Builder Action Prediction (BAP) task in Section 2, describe our models in Section 3, an approach to augment the training data in Section 4, and our experiments in Section 5. We analyze results and highlight challenges of the BAP task in Section 6.

2 Dataset and Task

2.1 The Minecraft Dialogue Corpus

The Minecraft Dialogue Corpus (Narayan-Chen et al., 2019) consists of 509 human-human dialogues and game logs for the Minecraft Collaborative Building Task, a two-player game in a simulated Blocks World environment between an Architect (A) and a Builder (B). A is given a target structure (Target) and has to instruct B via a text chat interface to build a copy of Target on a given build region. A and B communicate back and forth via chat throughout the game (e.g.
to resolve confusions or to correct B's mistakes), but only B can move blocks, while A observes B operating in the world. B is given access to an inventory of 120 blocks of six given colors that it can place and remove. The resulting dialogues consist mainly of A providing instructions, often involving multiple actions to be taken and grounded in the Builder's perspective, while B executes those instructions and resolves any confusion through further dialogue.

1 For models and code see http://juliahmr.cs.illinois.edu/Minecraft

Figure 1: A sample sequence of human-human game states. The game starts with an empty grid and an initial A instruction (a), which B executes in the first action sequence (b) by placing a single block. In (c), B begins to execute the next A instruction given in (b). However, A interrupts B in (c), leading to two distinct B action sequences: (b)–(c) (single block placement), and (c)–(h) (multiple placements and removals).

The task is complete when the structure built by B (Built) matches Target (allowing for translations within the horizontal plane and rotations about the vertical axis) and lies completely within the boundaries of the predefined build region. Games in this corpus are based on 150 distinct target structures, split into disjoint test, training, and development sets such that training targets do not appear during test or development.

Game logs record all utterances and B's actions (placements and removals), as well as the state of the world (i.e. the (x,y,z)-coordinates and colors of all blocks in the build region), and B's (x,y,z) position, vertical rotation (pitch) and horizontal orientation (yaw) at the points in time when an utterance was recorded or an action performed. Since there are six block colors to be placed, we distinguish seven possible types of actions A ∈ {BLUE, GREEN, ..., YELLOW, REMOVE}. B actions are 4-tuples ⟨A, x, y, z⟩ consisting of an action type and cell coordinates. A block placement is feasible as long as an adjacent grid location is occupied, while REMOVE is feasible as long as that location is currently occupied by a block. These actions do not include B's movement. B can assume any (continuous) 3D position and orientation, and the dataset records B's position and orientation for each individual action. But since there are many positions and orientations from which blocks in a cell can be placed, B's movement is secondary to the main task of constructing the target configuration.

2.2 The Builder Action Prediction Task

Narayan-Chen et al. (2019) focused on creating models that can generate A utterances, whereas we aim to develop models that can perform B's role. Although back-and-forth dialogue between the two players is a clear hallmark of this task, we leave the question of how to develop B agents that can decide when to speak and what to contribute to the conversation (either by way of chit-chat, verifications or clarification questions to A) to future work, and focus here on the subtask of predicting correct sequences of block placements and removals. Executing A instructions is B's primary role, and a crucial component of overall task completion.

Figure 1 shows an example from the Minecraft Dialogue Corpus that highlights some challenges of performing this task. A can move around freely, but remains invisible to B and views the structure from behind B when giving instructions.
As a result, A instructions frequently include spatial relations, both between pairs of blocks or substructures ("put ... on top of ..."), and relative to B's current position and perspective ("left", "right"). A also often uses higher-level descriptions involving complex shapes (e.g. "staircase", "v"). Due to the asynchronous nature of the dialogue, A often interrupts during B action sequences. A may also provide corrections and clarifications to fix B mistakes.

Producing feasible sequences of B actions requires a certain amount of planning, since blocks can only be placed in grid cells that are adjacent to other blocks or the ground, and floating structures (a common occurrence among the target structures in this corpus) can only be built if supporting blocks that are not part of the target structure are present when the floating blocks are being placed. Despite these challenges, we show below that training models that use a rich representation of the world (Section 3) on sufficient amounts of diversified data (Section 4) produces promising initial results.

To generate items for this task, we follow a similar strategy to Narayan-Chen et al. (2019), who, as a first step towards designing a fully interactive Architect, define an Architect Utterance Generation Task, where models are presented with a particular human-human game context in which a human Architect produced an utterance and are evaluated based on how well they can generate an appropriate utterance. Conversely, we define the Builder Action Prediction (BAP) Task as the task of predicting the sequence of actions (block placements and/or removals) that a human Builder performed at a particular point in a human-human game.

2.3 Evaluating Builder Action Predictions

To evaluate models for the BAP task, we compare each model's predicted action sequence $A_m$ against the corresponding action sequence $A_h$ that the human Builder performed at that point in the game. Specifically, for each pair of model and human action sequences $(A_m, A_h)$, where $A_h = \langle a_h^{(1)}, \ldots, a_h^{(k)} \rangle$ led from a world state $W_{\text{before}}$ to a world state $W_h$ and $A_m = \langle a_m^{(1)}, \ldots, a_m^{(l)} \rangle$ led from the same $W_{\text{before}}$ to $W_m$, we compute an F1 score over the net actions in $A_h$ and $A_m$, and report a micro-average over all sequences in the test (or development) data. Net actions ignore actions that were undone within the same sequence, e.g. if a block was placed and then removed. We consider any action in $A_m$ correct if the same action (involving the same grid cell and block color) occurs among the net actions in $A_h$.

There are two reasons why we evaluate net rather than all actions: first, many structures contain floating blocks which require the placement of temporary "placeholder" blocks that are later removed. Placeholders' colors are arbitrary, and there are often multiple possible locations where placeholders can be put; placeholder predictions should not be penalized, as long as they enable the correct target to be built. Second, human Builders are also prone to making small mistakes that are immediately resolved (e.g. by removing blocks that were accidentally placed). Evaluation should be robust to this noise in the ground truth sequences. The F1 metric ignores sequence information because it is either implicit in cases where it matters (e.g. building a vertical stack of blocks from the ground up), or irrelevant (e.g. building a line of blocks on the ground).
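One simple way to realize net actions and the resulting F1 is to diff the world states before and after each sequence, which makes actions that were undone within the sequence cancel out, as described above. The data structures below (dicts from grid cells to colors) and the diff-based formulation are our assumptions for illustration, not the authors' implementation.

```python
def net_actions(world_before, world_after):
    """One simple realization of 'net actions': the per-cell difference between the
    world state before and after an action sequence.  World states are assumed to be
    dicts mapping occupied (x, y, z) grid cells to block colors."""
    actions = set()
    for cell in set(world_before) | set(world_after):
        before, after = world_before.get(cell), world_after.get(cell)
        if before == after:
            continue  # unchanged cell, or an action that was undone within the sequence
        if before is not None:
            actions.add(("REMOVE",) + cell)   # a block was removed from this cell
        if after is not None:
            actions.add((after,) + cell)      # a block of color `after` was placed here
    return actions


def corpus_net_action_f1(examples):
    """Micro-averaged net-action F1.  `examples` is assumed to be a list of
    (world_before, world_human, world_model) triples of world-state dicts."""
    tp = n_pred = n_gold = 0
    for before, human, model in examples:
        gold, pred = net_actions(before, human), net_actions(before, model)
        tp += len(gold & pred)
        n_pred += len(pred)
        n_gold += len(gold)
    precision = tp / n_pred if n_pred else 0.0
    recall = tp / n_gold if n_gold else 0.0
    return (2 * precision * recall / (precision + recall)) if precision + recall else 0.0
```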
Other metrics may also be suited for this task, but obvious choices such as an edit distance between $W_m$ and $W_h$ suffer from the problem that they favor models that place fewer blocks, since incorrect placements would incur twice the cost of no placements.

However, our current definition of when an action is correct is relatively harsh, and could be relaxed in a number of ways. First, since it only considers an action correct if it matches a human action at the same grid cell, it penalizes cases where there are rotational equivalences between the built and the target structures (as may arise when the target has rotational symmetry). It also ignores any translational equivalences (which are very common at the beginning of a dialogue when the initial structure is empty, and may also need to be taken into account when the action sequence passes through an intermediate state in which all blocks have been removed). Second, looser F1 scores that evaluate actions only with regard to block locations (ignoring color) or colors (ignoring locations) might yield insight into how well models understand spatial relations, colors, or the number of blocks to be placed or removed. We leave exploring such variants to future work.

While our evaluation allows us to compare models directly and automatically against a common gold standard, it is important to keep in mind that such direct comparisons to human action sequences provide only a lower bound on performance because they are based on the assumption that a) the human executed the instructions completely and correctly, and that b) there is only one way to execute the instructions correctly. But instructions are often vague or ambiguous: "Place a red block on the ground next to the blue block" may be resolved to any of four equally correct cells adjoining that block, and ideally, the evaluation metric should score them the same. And human action sequences do not always correspond to a complete execution of the previous instruction, e.g. when B is interrupted by A or stops to ask a question:

A: now it will be a diagonal staircase with 4 steps angling towards the middle
A: if that makes sense
B puts down a red block
B: diagonal staircase with this orientation?
B puts down a red block
A: towards where the yellow blocks are pointing
B picks up 2 red blocks, puts down a red block

2.4 Related Work

There is growing interest in situated collaborative scenarios involving instruction givers/followers with one-way (Hu et al., 2019; Suhr et al., 2019) and two-way (Kim et al., 2019; Ilinykh et al., 2019) communication. Here, we compare our task to related work on instruction following, both generally and within Blocks World and Minecraft.

Instruction following: Prior approaches to instruction comprehension typically take a semantic parsing approach (Chen and Mooney, 2011; Artzi and Zettlemoyer, 2013; Andreas and Klein, 2015). Semantic parsing components enable human-robot understanding (Tellex et al., 2011; Matuszek et al., 2013); some approaches to interactive robot design combine these architectures with physical robot exploration to enable online learning (Thomason et al., 2015, 2016, 2017). The SCONE corpus (Long et al., 2016) features tasks in three domains requiring context-dependent sequential instruction understanding, in which a system is given a world containing several predefined objects and properties and has to predict the final world state by parsing instructions to intermediate logical forms.
Some papers have also applied neural action prediction models (Suhr and Artzi, 2018; Huang et al., 2019) to SCONE. More recently, Vision-and-Language Navigation (VLN; Anderson et al., 2018) and its dialog counterpart, Cooperative Vision-and-Dialog Navigation (CVDN; Thomason et al., 2019), focus on instruction following and cooperative interactions in photorealistic navigation settings.

Since our dataset does not contain any logical forms, we also cannot use semantic parsing approaches, and have to resort to neural action prediction models. However, Minecraft instructions are more challenging than the SCONE tasks because our action space is significantly larger and our utterances are more complex. Minecraft dialogues are also more complex than the sequences of instructions in SCONE because we cannot assume that actions to be executed are described in the last utterance. Minecraft dialogues are also more complex than those in CVDN, because they contain more turns, and because communication is asynchronous. Moreover, construction differs fundamentally from navigation in that construction dynamically changes the environment. While referring expressions in navigation can be safely assumed to refer to objects that exist in the world, construction instructions frequently refer to objects that need to be built by the agent. And although more recent navigation tasks require real vision, their underlying world state space (as defined by fixed viewpoints and the underlying navigation graph) is just as highly discretized. Our task does not require vision, but poses an arguably more challenging planning problem, since its action space is much larger (7623 possible actions vs. six actions in the vision-language navigation work).

Blocks World: There is renewed interest in instruction comprehension in Blocks World scenarios. Voxelurn (Wang et al., 2017) interfaces with human users and learns to understand descriptions of voxel structures of increasing complexity, but does so by mapping them down to a core programmatic language. Bisk et al. (2016a,b, 2018) build models for understanding single-shot instructions that transform one world state to another using simulated 3D blocks. Blocks are viewed from a fixed bird's-eye perspective, initialized randomly in the initial world state, and uniquely identifiable. The varying Builder perspective and lack of easily identifiable referents, along with the need to understand utterances in a dialogue context, make our task a much more challenging problem. Unlike traditional Blocks World, Minecraft allows blocks to float (requiring nonmonotonic action sequences where placement is followed by removal), or attach to any side of an existing block.

Minecraft: Combining semantic parsing with simulated human-robot interaction, Facebook CraftAssist is a dialogue-enabled framework with an associated dataset for semantic parsing of instructions in Minecraft (Gray et al., 2019; Jernite et al., 2019; Szlam et al., 2019). Their setup enables two-way human-bot interactions in which a human architect can direct an automated builder using natural language to build complex structures. To bootstrap a semantic parser, they synthetically generate (using a hand-defined grammar) and crowdsource natural language instructions paired with logical tree structures consisting of action primitives.
In addition to lacking such annotations, our work differs fundamentally in that our data is sourced from human-human dialogues; instructions are more ambiguous, dialogues have larger variety, and Builder action sequences are noisier.

Figure 2: The Builder Action Prediction model.

3 Builder Action Prediction Models

3.1 Overall architecture

Similar to e.g. the models of Suhr and Artzi (2018) for the SCONE tasks, models for the Builder Action Prediction task need to predict an appropriate, variable-length sequence of actions (block placements and removals) in a given discourse and game context and world state. All our models (Figure 2) are based on a recurrent encoder-decoder architecture (Sutskever et al., 2014; Cho et al., 2014) in which a GRU-based encoder (bottom left box) captures the game context (dialogue and action history), and a CNN-based encoder (top left box) captures the world state at each time step. The decoder (right box) predicts one action per time step, based on the game history, the world state at that time, and the last action taken. It consists of another GRU backbone over action sequences (bottom right), and a multi-class classifier that reads in the output of the GRU backbone as well as the world state encoding produced by the CNN to predict either the next action (block placement or removal) to be taken, or a special STOP token that terminates the action sequence. The world state representation gets updated and re-encoded after each predicted action. We now describe these components in more detail.

3.2 Game history encoder

Since B only knows what blocks to place after receiving an instruction from A, we can view the game history as a non-empty sequence of previous utterances (by both players), possibly interleaved with sequences of actions that were taken by B in earlier turns of the game. Our experiments examine the question of how much of this history should be given to our model, but all models examined in this paper treat the game history as a single sequence of tokens. Similar to Narayan-Chen et al. (2019), we encode the dialogue history as a sequence of tokens in which each player's utterances are contained within speaker-specific start and end tokens (⟨A⟩ ... ⟨\A⟩ or ⟨B⟩ ... ⟨\B⟩). We also represent B's prior actions naively as tokens that capture the action type (placement or removal) and block color (e.g. as "builder putdown red"). The 2 × 6 = 12 action tokens as well as the speaker tokens are encoded using 300-dimensional random vectors, while all other tokens are encoded as 300-dimensional pre-trained GloVe word embeddings (Pennington et al., 2014). The token embeddings are passed through a GRU to produce an H-dim embedding (H ∈ {200, 300}) of the dialogue history in the GRU's final hidden state.

3.3 World state encoder

The world state is the current grid configuration that is fed into the action prediction model at each time step. We first describe how we represent the raw world state, before we explain how this representation is then encoded via a CNN-based architecture.
Input: the raw world state Minecraft blocks are unit cubes that can be placed at integer-valued 2594 ⟨x, y, z⟩locations in a 3D grid; the Collaborative Building Task restricts these to a build region of size 11×9×11. Since we found it beneficial to explicitly capture empty grid cells, our baseline model represents each cell state as a 7-dim one-hot vector, yielding a 11×9×11×7 minimal world state representation encoding the presence (or absence) of blocks at any grid cell. We also found it useful to capture the relative position of each cell with respect to B’s current position and orientation, as well as which cells were affected by B’s most recent actions, and augment this model in two ways: Action history weights: Each action affects a single grid cell. Actions that follow each other often affect adjacent grid cells. We encode information about the most recent actions in our world state representation as follows: Given the chronological sequence of all actions A = a(1), a(2)...a(t−1) that took place before the t-th action to be predicted, we assign a real-valued weight α(i) to each action a(i) (where α(i) ≤α(i+1)), and include these action weights in the world state representation of the corresponding cells. We truncate the action history to the last five elements, assign integer weights 1...5 to a(t−5), ..., a(t−1) (and 0 to all a(i<t−5)), and then include these weights as a separate input feature in each cell. If a cell was affected more than once by the last five actions, we only use the weight of the most recent action. Our action weights do not distinguish between actions taken in the preceding action sequence and those in the current sequence. Perspective coordinates: B needs to understand the spatial relations in A’s instructions. Many of these relations (e.g. “left” in Figure 1) depend on B’s current position ⟨xB, yB, zB⟩and orientation (pitch φB ∈[−90, ..., +90], or vertical rotation, and yaw γB ∈[−180, ..., +180], horizontal orientation). Our models assume that spatial relations in an instruction are relative to B’s position at that time, and use that information to compute perspective coordinates. We calculate the relative perspective coordinates ⟨x′ c, y′ c, z′ c⟩of a cell c with absolute coordinates ⟨xc, yc, zc⟩by moving the frame of reference from ⟨0, 0, 0⟩to ⟨xB, yB, zB⟩, and rotating it to account for B’s yaw and pitch:2 ⟨x′ c, y′ c, z′ c⟩= P · Y · ⟨xc −xB, yc −yB, zc −zB⟩ We scale these perspective coordinates by a factor of .1 to keep their range closer to that of the cell 2P =  1 0 0 0 cos φB sin φB 0 −sin φB cos φB  and Y =  cos γB 0 −sin γB 0 1 0 sin γB 0 cos γB  state and action history weights. Our full model represents each cell as an 11dim vector (consisting of the 7-dim cell state, 1dim action history weight and 3-dim perspective coordinates), and the entire grid (which serves as input to a CNN-based encoder) as a 11×11×9×11 tensor. We refer to the grid at time step t as W (t) raw. Output: a CNN-based encoding To obtain a representation of each grid cell, we feed the raw world state tensor W (t) raw of Section 3.3 through a multi-layer CNN that embeds each grid cell conditioned on its neighborhood and recent actions (if using action history weights). The model consists of m 3d-conv layers with kernel size 3 (CNN3), stride 1 and padding 1, followed by a ReLU activation function. Between every successive pair of these layers is a 1 × 1 × 1 3d-conv layer (CNN1) with stride 1 and no padding, for dimensionality reduction purposes, again followed by ReLU. 
With $W^{(t)}_0 = W^{(t)}_{\text{raw}}$, the first $m-1$ blocks of this model can be expressed as $W^{(t)}_i = \mathrm{relu}(\mathrm{CNN}^i_1(\mathrm{relu}(\mathrm{CNN}^i_3(W^{(t)}_{i-1}))))$. The m-th 3×3×3 3d-conv layer $\mathrm{CNN}^m_3$ computes the final world state representation $W^{(t)}_m = \mathrm{relu}(\mathrm{CNN}^m_3(W^{(t)}_{m-1}))$ that is used to predict the next action.

3.4 Action Sequence Decoder

The GRU backbone The GRU backbone of the decoder captures information about the current action sequence and the game history. We initialize its hidden state with the final hidden state of the game history encoder RNN of Section 3.2. Since the tensor representation of the grid is too unwieldy to be used as input to a recurrent net, we instead compute an explicit 11-dim representation $a^{(t-1)}$ of the action taken at the last time step, consisting of three components: a 2-dim one-hot vector for the action type (placement or removal), a 6-dim one-hot vector for the block color (all zero for removals), and a 3-dim block location vector containing the absolute ⟨x, y, z⟩ coordinates of the cell where the action took place. At the start of decoding, we use a zero vector as a start token. These action vectors get passed through j dense linear layers with ReLU before being fed to the GRU.

Output: Next action prediction With seven possible actions per cell, there are 7623 possible actions (although only a small subset of these will be feasible at any point in time, a point that we will return to below). Since our models need to predict a variable-length sequence of actions, we also need a special STOP action that is not associated with a single cell, but terminates the sequence. Our action prediction classifier therefore has two sub-components: a block action prediction model and a stop prediction model. The stop prediction model returns a single element, which we append to the vector returned by the block action prediction model before feeding it through a softmax layer to return the most likely next action.

Block action scores: We use a CNN-based architecture with parameter sharing across cells to score each of the seven possible actions for every grid cell. The input to this model consists of the CNN-based world state representation $W^{(t)}_m$ (Section 3.3), as well as the decoder GRU's hidden state $h^{(t)}$, concatenated to each cell's representation in $W^{(t)}_m$ as additional channels. This model consists of $n-1$ 1×1×1 3d-conv layers, each followed by ReLU ($W'^{(t)}_i = \mathrm{relu}(\mathrm{CNN}^i_1(W'^{(t)}_{i-1}))$), with the n-th such 3d-conv layer having 7 output channels (and no ReLU): $W'^{(t)}_n = \mathrm{CNN}^n_1(W'^{(t)}_{n-1})$, which is flattened into a 7623-dim vector of action scores.

STOP score: We also need to predict when an action sequence is complete. While this decision needs access to the same information as the block action scorer, it also needs access to a (compact) global representation of the grid, since the STOP action is not cell-specific. It also needs to know the uncertainty in the block action scorer, since STOP is more likely when it is less clear which block action should be performed, and vice versa. We take the output of the penultimate layer in the block action scorer and apply max-pooling to every cell's vector representation, thus obtaining a single number for each of the 1089 cells. We concatenate these numbers into a single vector and use that as input to the STOP prediction model, which consists of l dense linear layers (with ReLU after each layer except the last), where the l-th layer has a single output $W''^{(t)}_l$, the score for STOP.
Final action prediction scores: Finally, we concatenate the block action and STOP scores and apply a softmax to obtain the final prediction a^(t):

a^(t) = arg max(softmax(vec(W'_n^(t)) ⊕ W''_l^(t))),

where ⊕ denotes concatenation.

4 Data Augmentation

The small size of the training set (3,709 examples) is a major limiting factor for training complex models. Here, we explore ways of generating synthetic data to augment the size and variety of our data. For each game log in the original training data, we generate twenty new game logs by combining the following data augmentation techniques:

Utterance paraphrases: We generate paraphrases of the utterances in the dialogue by randomly substituting tokens with any of their synonyms in the hand-engineered synonym lexicon of Narayan-Chen et al. (2019).

Color substitutions: We permute block colors by applying one of the 6! possible permutations, chosen at random, to the entire game log. These substitutions also change the language in the synthetic dialogues to reflect the updated colors.

Spatial transformations: Since the world contains no landmarks besides the build region, absolute coordinates are somewhat arbitrary. We sample one rotation from {0°, 90°, −90°, 180°} in the ground plane (affecting all ⟨x, z⟩ coordinates, plus B's yaw and position) per synthetic log, subject to the constraint that the target still fits in the build region.

5 Experiments

We evaluate our world state encoders, game history and data augmentation schemes.

Experimental Setup: Our training, test and development splits contain 3,709, 1,616, and 1,331 Builder action sequences respectively. We increase the training data to 7,418 (2x), 14,836 (4x) and 22,254 (6x) items by sampling items from the synthetic data of Section 4. The average sequence length (in the development set) is 4.3 (with a std. deviation of 4.5). Target structures in the test data do not appear in the training or development data. We train models with AdamW (Loshchilov and Hutter, 2019) and weight decay regularization with a weight decay factor of 0.1. We use a learning rate of 0.001 for the original data and a slightly lower learning rate of 0.0001 in the case of augmented data. We use a batch size of 1. During training, we use teacher forcing and minimize the sum of the cross entropy losses between each predicted and ground truth action sequence (the action sequence performed by the human). We stop training early when loss on the held-out development set has increased monotonically for ten epochs. We use greedy decoding (max. sequence length of 10) to generate action sequences, which seems to work better than beam search decoding (for fixed beam sizes between 5 and 20). We report net action F1 (Section 2.3) on the test set.

                     H1     H2     H3
BAP-base             11.8   12.4   14.6
+ action history     14.6   18.2   19.7
+ perspective        15.7   18.7   18.8

Table 1: The effect of varying game history and world state representations on test set performance.

                        2x     4x     6x
BAP-base_H3             15.6   16.1   17.0
+ action history_H3     16.9   20.0   18.4
+ perspective_H3        19.5   21.2   20.8

Table 2: The effect of data augmentation at 2x, 4x and 6x training data on test set performance.

Model Variants: The world state representation of the baseline model (BAP-base) consists of block colors at absolute ⟨x, y, z⟩ coordinates. We examine the effect of augmenting BAP-base first with action history weights, and then also with relative perspective coordinates (both described in Section 3.3). For model hyperparameters, see Appendix A.
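To make the color and spatial transforms of Section 4 concrete, a small sketch (illustrative only: utterance paraphrasing, the rewriting of color words in the dialogue, and the check that the target still fits in the build region are omitted, and the rotation and yaw conventions are our assumptions):

```python
import random

COLORS = ["red", "orange", "yellow", "green", "blue", "purple"]
ROTATIONS = {0: lambda x, z: (x, z), 90: lambda x, z: (z, -x),
             -90: lambda x, z: (-z, x), 180: lambda x, z: (-x, -z)}

def augment(blocks, yaw):
    """blocks: list of (x, y, z, color) tuples; yaw: B's horizontal orientation in degrees."""
    # Color substitution: one random permutation of the six block colors per synthetic log.
    perm = dict(zip(COLORS, random.sample(COLORS, len(COLORS))))
    # Spatial transformation: one ground-plane rotation per synthetic log,
    # affecting all (x, z) coordinates as well as B's yaw.
    angle = random.choice(list(ROTATIONS))
    rotate = ROTATIONS[angle]
    new_blocks = []
    for x, y, z, color in blocks:
        rx, rz = rotate(x, z)
        new_blocks.append((rx, y, rz, perm[color]))
    new_yaw = (yaw + angle + 180) % 360 - 180     # keep yaw in [-180, 180)
    return new_blocks, new_yaw

print(augment([(1, 1, 0, "red"), (1, 2, 0, "blue")], yaw=45))
```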
Game History We experiment with three schemes for how much game history to provide to the models: H1 includes A’s last utterance and any following B utterances. H2 includes all utterances after B’s penultimate action sequence. H3 includes all utterances after B’s penultimate action sequence interleaved with a token representation of B’s last action sequence. If A’s last utterance was a standalone instruction, H1 should be sufficient. But prior discourse is often required: A instructions may span multiple utterances and can be interrupted by back-and-forth clarification dialogues. At the same time, B’s next action sequence is often directly related to (or a continuation of) their previous actions. This motivates H2 and H3: by including utterances that sandwich B’s previous action sequence, we include additional A history and B context. Finally, to investigate the degree to which previous B actions should be represented, H3 augments H2 with explicit representations of B’s actions (as described in Section 3.2). 6 Experimental Results 6.1 Quantitative Evaluation For each cell in Tables 1 and 2, we first perform a grid search over model hyperparameters and select the best performing model on the development set, then report its performance on the test set. Table 1 shows how the different game history and world state representations affect model performance. We see that performance increases as action weights are added and as the amount of history is increased. H3 consistently performs well across all model variants. Table 2 shows how different amounts of data augmentation affect performance. We train each model variant with H3 history on 2x, 4x and 6x augmented training data. This increases BAP-baseH3’s performance from 14.6 to 17.0 (with 6x data). With action history, performance increases from 19.7 to 20.0. With perspective coordinates, performance increases from 18.8 to 21.2 (both with 4x data). Perspective coordinates, thus, help with more training data (although it is unclear why performance drops again for the more complex models at 6x). Our best model is the full BAP model with action weights, perspective coordinates, history H3 and 4x augmented data (BAPH3,4x) with an F1 of 21.2. This is significantly better than the 11.8 F1 of our baseline BAP model with history H1 and without action history weights, perspective coordinates, or data augmentation (BAP-baseH1). We also see an improvement in mean sequence length from 2.23 to 2.66, even if the latter is still much smaller than the mean gold sequence length of 4.3. Infeasible Actions and Constrained Decoding In any given world state, only a small fraction of the 7623 actions are feasible: blocks can only be placed in locations that are currently empty and adjacent to existing blocks or on the ground, and blocks can only be removed from locations that are currently occupied. Surprisingly, less than 1% of action sequences generated by any of our models contain one or more infeasible actions. We can force our models to predict only feasible actions by multiplying the output of the block action prediction model (post softmax) with a bit mask over block actions that identifies which of the possible actions are feasible in the current world state, but this does not affect the F1 scores of either the baseline model or our best model. 6.2 Qualitative Evaluation We return to the development set to illustrate different aspects of BAPH3,4x’s generated action sequences. Figures 3 and 4 provide a few examples; more examples can be found in Appendix B. 
Colors: Our model is generally able to correctly identify colors of blocks to be placed. While in many cases continuing the color from the previous 2597 Initial Generated Ground Truth A: same on the other side B: (places purple at (-2, 3, 1)) A: add one red block on top of that Figure 3: Example 1: After B places the rightmost purple block, A directs B to place another red block on top of it. This occurs after a long back-and-forth clarification dialogue in which B struggles to understand A’s instructions; but the human B now completes the intended substructure by placing two red blocks and removing the purple. The model does not have access to the preceding dialogue, but interprets the most recent instruction correctly. Generated Ground Truth Initial A: now place two blue blocks on top of the edges of the line B: (places blue at (0, 2, -3), (0, 2, -1)) A: do it one more time Figure 4: Example 2: Here, B had just placed the two blocks atop the ends of the row of 3 blocks to create a U. Now, the model can interpret “do it one more time” and extends the U upwards by placing two more blocks. action sequence is sufficient, the model is also able to switch colors as needed based on A instructions. Numbers: Our model can sometimes identify the number of blocks to be placed when instructions mention them. But with vague instructions, the model struggles, stopping early or erroneously continuing long sequences of the same color. Spatial relations: Our model usually predicts a reasonable ballpark of locations for the next action sequence. While predicting correct locations exactly is still difficult, the model is usually able to distinguish “below” from “on top of”, and places blocks in the neighborhood of the true sequence. Placements vs. removals: Finally, our model is able to both place and remove blocks somewhat appropriately based on dialogue context. For instance, corrective utterances in the history (“sorry, my mistake”) usually trigger the model to undo previous actions. However, the model sometimes goes overboard: not knowing how much of the penultimate action sequence to remove, an entire sequence of correct blocks can be erroneously erased. 7 Conclusion and Future Work In the Minecraft Collaborative Building Task, Builders must be able to comprehend complex instructions in order to achieve their primary goal of building 3D structures. To this end, we define the challenging subtask of Builder Action Prediction, tasking models with generating appropriate action sequences learned from the actions of human Builders. Our models process the game history along with a 3D representation of the evolving world to predict actions in a sequence-to-sequence fashion. We show that these models, especially when conditioned on a suitable amount of game history and trained on larger amounts of synthetically generated data, improve over naive baselines. In the future, richer representations of the dialogue history (e.g. by using BERT (Devlin et al., 2019) or of past Builder actions) combined with de-noising of the human data and perhaps more exhaustive data augmentation should produce better output sequences. For true interactivity, the Builder must be augmented with the capability to determine when and how to respond when it is too uncertain to act. And, finally, an approach like the SpeakerFollower Models of Fried et al. (2018) could be used to train our Builder model and the Architect model of Narayan-Chen et al. (2019) jointly. 
Acknowledgements We would like to thank the reviewers for their valuable comments. This work was supported by Contract W911NF-15-1-0461 with the US Defense Advanced Research Projects Agency (DARPA) Communicating with Computers Program and the Army Research Office (ARO). Approved for Public Release, Distribution Unlimited. The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government. 2598 References Peter Anderson, Qi Wu, Damien Teney, Jake Bruce, Mark Johnson, Niko S¨underhauf, Ian D. Reid, Stephen Gould, and Anton van den Hengel. 2018. Vision-and-language navigation: Interpreting visually-grounded navigation instructions in real environments. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pages 3674– 3683. IEEE Computer Society. Jacob Andreas and Dan Klein. 2015. Alignment-based compositional semantics for instruction following. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1165–1174, Lisbon, Portugal. Association for Computational Linguistics. Yoav Artzi and Luke Zettlemoyer. 2013. Weakly supervised learning of semantic parsers for mapping instructions to actions. Transactions of the Association for Computational Linguistics, 1:49–62. Yonatan Bisk, Daniel Marcu, and William Wong. 2016a. Towards a dataset for human computer communication via grounded language acquisition. In AAAI Workshop: Symbiotic Cognitive Systems. Yonatan Bisk, Kevin Shih, Yejin Choi, and Daniel Marcu. 2018. Learning interpretable spatial operations in a rich 3D Blocks World. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, pages 5028–5036. Yonatan Bisk, Deniz Yuret, and Daniel Marcu. 2016b. Natural language communication with robots. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 751–761, San Diego, California. Association for Computational Linguistics. Paweł Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, I˜nigo Casanueva, Stefan Ultes, Osman Ramadan, and Milica Gaˇsi´c. 2018. MultiWOZ - a large-scale multi-domain wizard-of-Oz dataset for task-oriented dialogue modelling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5016–5026, Brussels, Belgium. Association for Computational Linguistics. Joyce Y. Chai, Qiaozi Gao, Lanbo She, Shaohua Yang, Sari Saba-Sadiya, and Guangyue Xu. 2018. Language to action: Towards interactive task learning with physical agents. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18), pages 2–9. International Joint Conferences on Artificial Intelligence Organization. David Chen and Raymond Mooney. 2011. Learning to interpret natural language navigation instructions from observations. In Proceedings of the Twenty-Fifth AAAI Conference on Artificial Intelligence, pages 859–865. Kyunghyun Cho, Bart van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724– 1734, Doha, Qatar. Association for Computational Linguistics. Junyoung Chung, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. 2014. 
Empirical evaluation of gated recurrent neural networks on sequence modeling. In NIPS 2014 Workshop on Deep Learning, December 2014. Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, Jos´e M.F. Moura, Devi Parikh, and Dhruv Batra. 2017. Visual Dialog. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 326– 335. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Daniel Fried, Ronghang Hu, Volkan Cirik, Anna Rohrbach, Jacob Andreas, Louis-Philippe Morency, Taylor Berg-Kirkpatrick, Kate Saenko, Dan Klein, and Trevor Darrell. 2018. Speaker-follower models for vision-and-language navigation. In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, 3-8 December 2018, Montr´eal, Canada, pages 3318–3329. Xavier Glorot and Yoshua Bengio. 2010. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, volume 9 of Proceedings of Machine Learning Research, pages 249–256, Chia Laguna Resort, Sardinia, Italy. PMLR. Jonathan Gray, Kavya Srinet, Yacine Jernite, Haonan Yu, Zhuoyuan Chen, Demi Guo, Siddharth Goyal, C. Lawrence Zitnick, and Arthur Szlam. 2019. CraftAssist: A framework for dialogue-enabled interactive agents. arXiv preprint arXiv:1907.08584. Hengyuan Hu, Denis Yarats, Qucheng Gong, Yuandong Tian, and Mike Lewis. 2019. Hierarchical decision making by generating and following natural language instructions. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2599 2019, NeurIPS 2019, 8-14 December 2019, Vancouver, BC, Canada, pages 10025–10034. Hsin-Yuan Huang, Eunsol Choi, and Wen-tau Yih. 2019. FlowQA: Grasping flow in history for conversational machine comprehension. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net. Nikolai Ilinykh, Sina Zarrieß, and David Schlangen. 2019. Meet Up! A corpus of joint activity dialogues in a visual environment. In Proceedings of the 23rd Workshop on the Semantics and Pragmatics of Dialogue - Full Papers, London, United Kingdom. SEMDIAL. Yacine Jernite, Kavya Srinet, Jonathan Gray, and Arthur Szlam. 2019. CraftAssist instruction parsing: Semantic parsing for a Minecraft assistant. arXiv preprint arXiv:1905.01978. Jin-Hwa Kim, Nikita Kitaev, Xinlei Chen, Marcus Rohrbach, Byoung-Tak Zhang, Yuandong Tian, Dhruv Batra, and Devi Parikh. 2019. CoDraw: Collaborative drawing as a testbed for grounded goaldriven communication. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6495–6513, Florence, Italy. Association for Computational Linguistics. Seokhwan Kim, Luis Fernando D’Haro, Rafael E. Banchs, Jason D. Williams, and Matthew Henderson. 2016a. The fourth dialog state tracking challenge. 
In Dialogues with Social Robots - Enablements, Analyses, and Evaluation, Seventh International Workshop on Spoken Dialogue Systems, IWSDS 2016, Saariselk¨a, Finland, January 13-16, 2016, volume 427 of Lecture Notes in Electrical Engineering, pages 435–449. Springer. Seokhwan Kim, Luis Fernando D’Haro, Rafael E. Banchs, Jason D. Williams, Matthew Henderson, and Koichiro Yoshino. 2016b. The fifth dialog state tracking challenge. In 2016 IEEE Spoken Language Technology Workshop, SLT 2016, San Diego, CA, USA, December 13-16, 2016, pages 511–517. IEEE. Reginald Long, Panupong Pasupat, and Percy Liang. 2016. Simpler context-dependent logical forms via model projections. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1456– 1465, Berlin, Germany. Association for Computational Linguistics. Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net. Cynthia Matuszek, Evan Herbst, Luke Zettlemoyer, and Dieter Fox. 2013. Learning to parse natural language commands to a robot control system. In Proc. of the 13th Int’l Symposium on Experimental Robotics (ISER). Dipendra K. Misra, Jaeyong Sung, Kevin Lee, and Ashutosh Saxena. 2016. Tell me Dave: Contextsensitive grounding of natural language to manipulation instructions. The International Journal of Robotics Research, 35(1-3):281–300. Anjali Narayan-Chen, Prashant Jayannavar, and Julia Hockenmaier. 2019. Collaborative dialogue in Minecraft. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5405–5415, Florence, Italy. Association for Computational Linguistics. Ramakanth Pasunuru and Mohit Bansal. 2018. Gamebased video-context dialogue. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 125–136, Brussels, Belgium. Association for Computational Linguistics. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1532–1543, Doha, Qatar. Association for Computational Linguistics. Alan Ritter, Colin Cherry, and Bill Dolan. 2010. Unsupervised modeling of Twitter conversations. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 172–180, Los Angeles, California. Association for Computational Linguistics. Deb Roy and Ehud Reiter. 2005. Connecting language to the world. Artificial Intelligence, 167(1-2):1–12. Nicolas Schrading, Cecilia Ovesdotter Alm, Ray Ptucha, and Christopher Homan. 2015. An analysis of domestic abuse discourse on Reddit. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2577– 2583, Lisbon, Portugal. Association for Computational Linguistics. Alane Suhr and Yoav Artzi. 2018. Situated mapping of sequential instructions to actions with single-step reward observation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2072– 2082, Melbourne, Australia. Association for Computational Linguistics. Alane Suhr, Claudia Yan, Jack Schluger, Stanley Yu, Hadi Khader, Marwa Mouallem, Iris Zhang, and Yoav Artzi. 2019. Executing instructions in situated collaborative interactions. 
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2119–2130, Hong Kong, China. Association for Computational Linguistics. 2600 Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104–3112. Arthur Szlam, Jonathan Gray, Kavya Srinet, Yacine Jernite, Armand Joulin, Gabriel Synnaeve, Douwe Kiela, Haonan Yu, Zhuoyuan Chen, Siddharth Goyal, Demi Guo, Danielle Rothermel, C. Lawrence Zitnick, and Jason Weston. 2019. Why build an assistant in Minecraft? arXiv preprint arXiv:1907.09273. Stefanie Tellex, Thomas Kollar, Steven Dickerson, Matthew Walter, Ashis Banerjee, Seth Teller, and Nicholas Roy. 2011. Understanding natural language commands for robotic navigation and mobile manipulation. In Proceedings of the Twenty-Fifth AAAI Conference on Artificial Intelligence, pages 1507–1514. Jesse Thomason, Michael Murray, Maya Cakmak, and Luke Zettlemoyer. 2019. Vision-and-dialog navigation. arXiv preprint arXiv:1907.04957. Jesse Thomason, Aishwarya Padmakumar, Jivko Sinapov, Justin Hart, Peter Stone, and Raymond J. Mooney. 2017. Opportunistic active learning for grounding natural language descriptions. In Proceedings of the 1st Annual Conference on Robot Learning (CoRL-17), pages 67–76, Mountain View, California. PMLR. Jesse Thomason, Jivko Sinapov, Maxwell Svetlik, Peter Stone, and Raymond J. Mooney. 2016. Learning multi-modal grounded linguistic semantics by playing “I Spy”. In Proceedings of the 25th International Joint Conference on Artificial Intelligence (IJCAI-16), pages 3477–3483, New York City. Jesse Thomason, Shiqi Zhang, Raymond J Mooney, and Peter Stone. 2015. Learning to interpret natural language commands through human-robot dialog. In Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence (IJCAI 2015), pages 1923–1929. Sida I. Wang, Samuel Ginn, Percy Liang, and Christopher D. Manning. 2017. Naturalizing a programming language via interactive learning. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 929–938, Vancouver, Canada. Association for Computational Linguistics. Terry Winograd. 1971. Procedures as a representation for data in a computer program for understanding natural language. Technical report, MIT. Cent. Space Res. H cnntype l j BAP-baseH1 300 cnnsmall 3 1 BAPH3,4x 300 cnnsmall 4 1 Table 3: Hyperparameter values for the baseline and full BAP models. A Model Hyperparameters We use Gated Recurrent Units (GRUs) (Chung et al., 2014) for all RNN modules and use 300-dimensional pretrained GloVe word embeddings (Pennington et al., 2014). All linear layers were initialized using Xavier initialization (Glorot and Bengio, 2010). All non-linearities in the model are ReLU. All 3×3×3 3d-conv layers have stride 1 and padding 1. All 1×1×1 3d-conv layers have stride 1 and no padding. For each model, we perform a grid search over the following hyperparameters: • The size of the GRU hidden state H ∈{200, 300} • The number of 3d-conv layers and channels in the world state encoder and action sequence decoder CNNs. 
We define a 3-tuple (echannels, m, n) where echannels defines the number of output channels for the first encoder-CNN 3dconv layer (which then determines the number of output channels for subsequent encoderCNN 3d-conv layers); m is the number of 3×3×3 3d-conv layers in the world state encoder; and n is the number of 1×1×1 3d-conv layers in the action sequence decoder. We choose between 2 hyperparameter configurations: cnntype ∈ {cnnsmall = (200, 2, 3), cnnbig = (300, 3, 2)}. • The number of dense linear layers in the STOP prediction model l ∈{3, 4} • The number of dense linear layers used to embed the action vectors before being fed to the decoder’s GRU j ∈{1, 2} Table 3 shows values of these hyperparameters for our baseline and best models. B Qualitative Examples Here, we provide more examples of action sequences generated by our model, along with the initial game state context and the human B’s actions as ground truth, in order to better highlight 2601 Generated Ground Truth Initial A: the next two blocks will be off the corners of each of those, in the direction of the last yellow block. B: (places yellow at (-4, 2, 1)) B: like that, or somewhere else? A: add one more block to the end of that on your side B: (places yellow at (-4, 2, 2)) A: and do the same on the other side Figure 5: Example 3. Generated Ground Truth Initial B: is this a 2d structure? A: yes … can you make a ring using the pillar we just made? … B: (builds a ring of blue blocks, while standing on the back side of the structure) A: yup, on the middle block of the ring’s right side, can you put a blue block? Figure 6: Example 4. Generated Ground Truth Initial A: so we are going to need blue placeholders to the left and right of the base block B: (places two blue blocks on the ground, then 2 red blocks atop them) … A: do that twice more B: (places blue and red blocks) A: ok now you can get rid of the blue blocks Figure 7: Example 5. Generated Ground Truth Initial A: lets start with green A: place two blocks flat on the floor towards the middle Figure 8: Example 6. Generated Ground Truth Initial A: now towards the middle of the board place 2 more green blocks overhanging the top so that the top has a row of 3 Final Target Figure 9: Example 7. 2602 the strengths and shortcomings of the full BAP model. Examples 5, 6 and 7 also examine the net actions F1 evaluation metric in context. Example 3 can be found in Figure 5. Over the course of some back-and-forth dialogue with A, B has just built the leftmost 2 yellow blocks of the left yellow row. From here, our model interprets “do the same on the other side” as placing another 2 yellow blocks, but places them in the wrong location. The human B is able to understand that A means to place the blocks on the other end of the row-in-progress. Example 4 can be found in Figure 6. This example occurs near the end of a game. B has just finished building a 3 × 3 ring of blue blocks, while facing the structure from the back side (i.e., facing the camera in the figure). Following the description “the middle block of the ring’s right side”, our model incorrectly predicts placing a blue block adjacent to one of the middle blocks of the ring, while the human B grounds this easily. Clearly, higher-level information needed to help ground the instruction is lost in context: earlier in the dialogue history (yet still within the window of utterances in the H3 history scheme), B has clarified with A that the structure is entirely 2D, which contradicts the model’s prediction. 
Example 5 can be found in Figure 7. B has built a V using blue blocks as placeholders to support the red blocks. Our model interprets “get rid of the blue blocks” partially correctly, and removes one blue block, but does not go all the way as the human B does, who removes all existing blue blocks. While both the model’s and human B’s action sequences are correct, the model’s actions are incomplete, and it is penalized according to net actions F1. Example 6 can be found in Figure 8. This example occurs at the beginning of a game. Here, A does not specify a specific location for the green blocks to be placed, just that they should be “towards the middle.” In this instance, both our model’s prediction and the human B’s actions are valid interpretations. However, our model’s output is penalized for not predicting the exact positions of the human B’s blocks. This highlights the net actions F1 metric’s inflexibility to ambiguous scenarios. Example 7 can be found in Figure 9. This example is similar to Example 8 in that the model predicts a sequence of actions that results in a structure that is rotationally equivalent to the human B’s resulting structure. However, in this case, A’s instruction to place the green blocks “towards the middle of the board” (a suggestion our model does not follow) is extremely important in the larger context of task completion: the model’s actions would result in a final structure that cannot fit within the grid boundaries. Here, the strictness of net action F1’s exact match requirement works as intended, to our benefit.
2020
232
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2603–2614 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2603 MART: Memory-Augmented Recurrent Transformer for Coherent Video Paragraph Captioning Jie Lei1∗ Liwei Wang2 Yelong Shen3∗ Dong Yu2 Tamara L. Berg1 Mohit Bansal1 1UNC Chapel Hill 2Tencent AI Lab Seattle USA 3Microsoft Dynamics 365 AI {jielei, tlberg, mbansal}@cs.unc.edu {liweiwang, dyu}@tencent.com, {yeshe}@microsoft.com Abstract Generating multi-sentence descriptions for videos is one of the most challenging captioning tasks due to its high requirements for not only visual relevance but also discoursebased coherence across the sentences in the paragraph. Towards this goal, we propose a new approach called Memory-Augmented Recurrent Transformer (MART), which uses a memory module to augment the transformer architecture. The memory module generates a highly summarized memory state from the video segments and the sentence history so as to help better prediction of the next sentence (w.r.t. coreference and repetition aspects), thus encouraging coherent paragraph generation. Extensive experiments, human evaluations, and qualitative analyses on two popular datasets ActivityNet Captions and YouCookII show that MART generates more coherent and less repetitive paragraph captions than baseline methods, while maintaining relevance to the input video events.1 1 Introduction In video captioning, the task is to generate a natural language description capturing the content of a video. Recently, dense video captioning (Krishna et al., 2017) has emerged as an important task in this field, where systems first generate a list of temporal event segments from a video, then decode a coherent paragraph (multi-sentence) description from the generated segments. Park et al. (2019) simplifies this task as generating a coherent paragraph from a provided list of segments, removing the requirements for generating the event segments, and focusing on decoding better paragraph captions from the segments. As noted by Xiong et al. ∗Work done while Jie Lei was an intern and Yelong Shen was an employee at Tencent AI Lab. 1All code is available open-source at https://github. com/jayleicn/recurrent-transformer (2018); Park et al. (2019), generating paragraph descriptions for videos can be very challenging due to the difficulties of having relevant, less redundant, as well as coherent generated sentences. Towards this goal, Xiong et al. (2018) proposed a variant of the LSTM network (Hochreiter and Schmidhuber, 1997) that generates a new sentence conditioned on previously generated sentences by passing the LSTM hidden states throughout the entire decoding process. Park et al. (2019) further augmented the above LSTM caption generator with a set of three discriminators that score generated sentences based on defined metrics, i.e., relevance, linguistic diversity, and inter-sentence coherence. Though different, both these methods use LSTMs as the language decoder. Recently, transformers (Vaswani et al., 2017) have proven to be more effective than RNNs (e.g., LSTM (Hochreiter and Schmidhuber, 1997), GRU (Chung et al., 2014), etc.), demonstrating superior performance in many sequential modeling tasks (Vaswani et al., 2017; Zhou et al., 2018; Devlin et al., 2019; Dai et al., 2019; Yang et al., 2019). Zhou et al. 
(2018) first introduced the transformer model to the video paragraph captioning task, with a transformer captioning module decoding natural language sentences from encoded video segment representations. This transformer captioning model is essentially the same as the original transformer (Vaswani et al., 2017) for machine translation, except that it takes a video representation rather than a source sentence representation as its encoder input. However, in such design, each video segment caption is decoded individually without knowing the context (i.e., previous video segments and the captions that have already been generated), thus often leading to inconsistent and redundant sentences w.r.t. previously generated sentences (see Figure 3 for examples). Dai et al. (2019) recognize this problem as context fragmentation in 2604 the task of language modeling, where the transformers are operating on separated fixed-length segments, without any information flow across segments. Therefore, to generate more coherent video paragraphs, it is imperative to build a model that can span over multiple video segments and capture longer range dependencies. Hence, in this work, we propose the MemoryAugmented Recurrent Transformer (MART) model (see Section 3 for details), a transformer-based model that uses a shared encoder-decoder architecture augmented with an external memory module to enable the modeling of the previous history of video segments and sentences. Compared to the vanilla transformer video paragraph captioning model (Zhou et al., 2018), our first architecture change is the unified encoder-decoder design, i.e., the encoder and decoder in MART use shared transformer layers rather than separated as in Zhou et al. (2018); Vaswani et al. (2017). This unified encoderdecoder design is inspired by recent transformer language models (Devlin et al., 2019; Dai et al., 2019; Sun et al., 2019) to prevent overfitting and reduce memory usage. Additionally, the memory module works as a memory updater that updates its memory state using both the current inputs and previous memory state. The memory state can be interpreted as a container of the highly summarized video segments and caption history information. At the encoding stage, the current video segment representation is enhanced with the memory state from the previous step using cross-attention (Vaswani et al., 2017). Hence, when generating a new sentence, MART is aware of the previous contextual information and can generate paragraph captions with higher coherence and lower repetition. Transformer-XL (Dai et al., 2019) is a recently proposed transformer language model that also uses recurrence, and is able to resolve context fragmentation for language modeling (Dai et al., 2019). Different from MART that uses a highly-summarized memory to remember history information, Transformer-XL directly uses hidden states from previous segments. We modify the Transformer-XL framework for video paragraph captioning and present it as an additional comparison. We benchmark MART on two standard datasets: ActivityNet Captions (Krishna et al., 2017) and YouCookII (Zhou et al., 2017). Both automatic evaluation and human evaluation show that MART generates more satisfying results than previous LSTM-based approaches (Xiong et al., 2018; Zhou et al., 2019; Zhang et al., 2018) and transformer-based approaches (Zhou et al., 2018; Dai et al., 2019). 
In particular, MART can generate more coherent (e.g., coreference and order), less redundant paragraphs without losing paragraph accuracy (visual relevance). 2 Related Work Video Captioning Recently, video captioning has attracted much attention from both the computer vision and the natural language processing community. Methods for the task share the same intrinsic nature of taking a video as the input and outputting a language description that can best describe the content, though they differ from each other on whether a single sentence (Wang et al., 2019; Xu et al., 2016; Chen and Dolan, 2011; Pasunuru and Bansal, 2017a) or multiple sentences (Rohrbach et al., 2014; Krishna et al., 2017; Xiong et al., 2018; Zhou et al., 2018; Gella et al., 2018; Park et al., 2019) are generated for the given video. In this paper, our goal falls into the category of generating a paragraph (multiple sentences) conditioned on an input video with several pre-defined event segments. One line of work (Zhou et al., 2018, 2019) addresses the video paragraph captioning task by decoding each video event segment separately into a sentence. The final paragraph description is obtained by concatenating the generated single sentence descriptions. Though individual sentences may precisely describe the corresponding event segments, when put together the sentences often become inconsistent and redundant. Another line of works (Xiong et al., 2018; Gella et al., 2018) use the LSTM decoder’s last (word) hidden state from the previous sentence as the initial hidden state for the next sentence decoding, thus enabling information flow from previous sentences to subsequent sentences. While these methods have shown better performance than their single sentence counterpart, they are still undesirable as the sentence-level recurrence is achieved at word-level, and the context history information quickly decays due to vanishing gradients (Pascanu et al., 2013) problem. Additionally, these designs also have difficulty modeling long-term dependencies (Hochreiter et al., 2001). In comparison, the recurrence in MART resides in the sentence or segment level and is thus more robust to the aforementioned problems. AdvInf (Park 2605 et al., 2019) augments the above LSTM word-level recurrence methods with adversarial inference, using a set of separately trained discriminators to re-rank the generated sentences. The techniques in AdvInf can be viewed as an orthogonal way of generating captions with better quality. Transformers Transformer (Vaswani et al., 2017) is used as the basis of our approach. Different from RNNs (e.g., LSTM (Hochreiter and Schmidhuber, 1997), GRU (Chung et al., 2014), etc) that use recurrent structure to model long-term dependencies, transformer relies on self-attention to learn the dependencies between input words. Transformers have proven to be more efficient and powerful than RNNs, with superior performance in many sequential modeling tasks, including machine translation (Vaswani et al., 2017), language modeling/pre-training (Devlin et al., 2019; Dai et al., 2019; Yang et al., 2019) and multi-modal representation learning (Tan and Bansal, 2019; Chen et al., 2019; Sun et al., 2019). Additionally, Zhou et al. (2018) have shown that a transformer model can generate better captions than the LSTM model. However, transformer architectures are still unable to model history information well. 
This problem is identified in the task of language modeling as context fragmentation (Dai et al., 2019), i.e., each language segment is modeled individually without knowing its surrounding context, leading to inefficient optimization and inferior performance. To resolve this issue, Transformer-XL (Dai et al., 2019) introduces the idea of recurrence to the transformer language model. Specifically, the modeling of a new language segment in Transformer-XL is conditioned on hidden states from previous language segments. Experimental results show Transformer-XL has stronger language modeling capability than the non-recurrent transformer. Transformer-XL directly uses all the hidden states from the previous segment to enable recurrence. In comparison, our MART uses highly summarized memory states, making it more efficient in passing useful semantic or linguistic cues to future sentences.

3 Methods

Though our method provides a general temporal multi-modal learning framework, we focus on the video paragraph captioning task in this paper. Given a video V with several temporally ordered event segments [e_1, e_2, ..., e_T], the task is to generate a coherent paragraph consisting of multiple sentences [s_1, s_2, ..., s_T] to describe the whole video, where sentence s_t should describe the content in the segment e_t. In the following, we first describe the baseline transformer that generates sentences without a recurrent architecture, then introduce our approach, the Memory-Augmented Recurrent Transformer (MART). We also compare MART with the recently proposed Transformer-XL (Dai et al., 2019) in detail.

[Figure 1: Vanilla transformer video captioning model (Zhou et al., 2018). PE denotes Positional Encoding, TE denotes token Type Embedding.]

3.1 Background: Vanilla Transformer

We start by introducing the vanilla transformer video paragraph captioning model proposed by Zhou et al. (2018), which is an application of the original transformer (Vaswani et al., 2017) model for video paragraph captioning. An overview of the model is shown in Figure 1. The core of the architecture is the scaled dot-product attention. Given a query matrix Q ∈ R^{T_q×d_k}, a key matrix K ∈ R^{T_v×d_k} and a value matrix V ∈ R^{T_v×d_v}, the attentional output is computed as:

A(Q, K, V) = softmax(QK^T / √d_k, dim=1) V,

where softmax(·, dim=1) denotes performing softmax over the second dimension of the input. Combining h parallel scaled dot-product attention heads, we obtain multi-head attention (Vaswani et al., 2017), which we denote as MultiHeadAtt(Q, K, V).
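In code, scaled dot-product attention and its combination into h heads can be sketched as follows (illustrative; the learned per-head and output projections of a full transformer layer are omitted, and the shape handling is simplified to unbatched inputs):

```python
import torch
import torch.nn.functional as F

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / d_k ** 0.5       # (..., T_q, T_v)
    return F.softmax(scores, dim=-1) @ V                # (..., T_q, d_v)

def multi_head_attention(Q, K, V, h=12):
    """Split the hidden dimension into h heads, attend per head, re-concatenate."""
    def split(x):                                       # (T, d) -> (h, T, d // h)
        T, d = x.shape
        return x.view(T, h, d // h).transpose(0, 1)
    out = attention(split(Q), split(K), split(V))       # (h, T_q, d // h)
    return out.transpose(0, 1).reshape(Q.size(0), -1)   # (T_q, d)

Q = torch.randn(5, 768)      # e.g. 5 query positions, hidden size 768, 12 heads as in Sec. 4.2
K = V = torch.randn(20, 768)
print(multi_head_attention(Q, K, V).shape)              # torch.Size([5, 768])
```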
[Figure 2: Left: Our proposed Memory-Augmented Recurrent Transformer (MART) for video paragraph captioning. Right: Transformer-XL (Dai et al., 2019) model for video paragraph captioning. Relative PE denotes Relative Positional Encoding (Dai et al., 2019), PE denotes Positional Encoding, TE denotes token Type Embedding. SG(·) denotes stop-gradient, ⊙ denotes Hadamard product.]

The attention formulation discussed above is quite general. It can be used for various purposes, such as self-attention (Vaswani et al., 2017), where the query, key, and value matrices are all the same, and cross-attention (Vaswani et al., 2017), where the query matrix is different from the key and value matrices. In this paper, we also use multi-head attention for memory aggregation and update, as discussed later.

The vanilla transformer video paragraph captioning model has N encoder layers and N decoder layers. At the l-th encoder layer, the multi-head attention module takes the last layer's hidden states H^{l−1} as inputs and performs self-attention. The attentional outputs are then projected by a feed-forward layer. At the l-th decoder layer, the model first encodes the last decoder layer's hidden states using masked multi-head attention (masking is used to prevent the model from seeing future words (Vaswani et al., 2017)). It then uses multi-head attention, with the masked outputs as the query matrix and the hidden states H^l from the l-th encoder layer as the key and value matrices, to gather information from the encoder side. Similarly, a feed-forward layer is used to encode the sentences further. Residual connections (He et al., 2016) and layer-normalization (Ba et al., 2016) are applied for each layer, for both encoder and decoder.

3.2 Memory-Augmented Recurrent Transformer

The vanilla transformer captioning model follows the classical encoder-decoder architecture, where the encoder and decoder networks are separate. In comparison, the encoder and decoder are shared in MART, as shown in Figure 2 (left). The video and text inputs are first separately encoded and normalized. We denote the encoded video and text embeddings as H^0_video ∈ R^{T_video×d} and H^0_text ∈ R^{T_text×d}, where T_video and T_text are the lengths of video and text, respectively, and d denotes the hidden size. We then concatenate these two embeddings as input to the transformer layers: H^0 = [H^0_video; H^0_text] ∈ R^{T_c×d}, where [; ] denotes concatenation and T_c = T_video + T_text. This unified encoder-decoder design is inspired by recent works on multi-modal representation learning (Chen et al., 2019; Sun et al., 2019). We also use two trainable token type embedding vectors to indicate whether an input token is from video or text, similar to Devlin et al. (2019), where token type embeddings are added to indicate different input sequences. We ignore the video token positions and only consider the text token positions when calculating loss and generating words.

While the aforementioned vanilla transformer is a powerful method, it is less suitable for video paragraph captioning due to its inability to utilize video segment and sentence history information. Thus, given the unified encoder-decoder transformer, we augment it with an external memory module, which helps it to utilize the video segments and the corresponding caption history to generate the next sentence. An overview of the memory module is shown in Figure 2 (left).
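Before turning to the memory module in detail, the shared input sequence described above can be assembled as in the following sketch (illustrative; the separate video/text encoders and normalization layers are replaced by random placeholders, and all names are assumptions):

```python
import torch
import torch.nn as nn

d = 768                          # hidden size (Section 4.2)
T_video, T_text = 100, 20        # maximum video / text lengths (Section 4.1)

H_video = torch.randn(T_video, d)    # stands in for the encoded video embedding H^0_video
H_text = torch.randn(T_text, d)      # stands in for the encoded text embedding H^0_text

# Two trainable token type embeddings mark whether a position is video or text,
# so the shared transformer layers can tell the two modalities apart.
type_emb = nn.Embedding(2, d)
token_types = torch.cat([torch.zeros(T_video, dtype=torch.long),
                         torch.ones(T_text, dtype=torch.long)])

H0 = torch.cat([H_video, H_text], dim=0) + type_emb(token_types)
print(H0.shape)   # torch.Size([120, 768]), i.e. (T_c, d) with T_c = T_video + T_text
```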
At step t, i.e., decoding the t-th video segment, the l-th layer aggregates the information from both its intermediate hidden states \bar{H}^l_t ∈ R^{T_c×d} and the memory states M^l_{t−1} ∈ R^{T_m×d} from the last step (T_m denotes the memory state length, or equivalently the number of slots in the memory), using multi-head attention. The input query matrix of the multi-head attention is Q = \bar{H}^l_t, and the key and value matrices are K, V = [M^l_{t−1}; \bar{H}^l_t] ∈ R^{(T_m+T_c)×d}. The memory-augmented hidden states are further encoded using a feed-forward layer and then merged with the intermediate hidden states \bar{H}^l_t using a residual connection and layer norm to form the hidden state output H^l_t ∈ R^{T_c×d}. The memory state M^l_{t−1} is updated as M^l_t using the intermediate hidden states \bar{H}^l_t. This process is conducted in the Memory Updater module, illustrated in Figure 2. We summarize the procedure below:

S^l_t = MultiHeadAtt(M^l_{t−1}, \bar{H}^l_t, \bar{H}^l_t),
C^l_t = tanh(W^l_mc M^l_{t−1} + W^l_sc S^l_t + b^l_c),
Z^l_t = sigmoid(W^l_mz M^l_{t−1} + W^l_sz S^l_t + b^l_z),
M^l_t = (1 − Z^l_t) ⊙ C^l_t + Z^l_t ⊙ M^l_{t−1},

where ⊙ denotes the Hadamard product, W^l_mc, W^l_sc, W^l_mz, and W^l_sz are trainable weights, and b^l_c and b^l_z are trainable biases. C^l_t ∈ R^{T_m×d} is the internal cell state. Z^l_t ∈ R^{T_m×d} is the update gate that controls which information to retain from the previous memory state, thus reducing redundancy and maintaining coherence in the generated paragraphs. This update strategy is conceptually similar to LSTM (Hochreiter and Schmidhuber, 1997) and GRU (Chung et al., 2014). It differs in that multi-head attention is used to encode the memory state, and thus multiple memory slots are supported instead of the single one in LSTM and GRU, which gives it a higher capacity for modeling complex relations. Recent works (Sukhbaatar et al., 2015; Graves et al., 2014; Xiong et al., 2016a) introduce a memory component into neural networks, where the memory is mainly designed to memorize facts in the input context to support downstream tasks, e.g., copying (Graves et al., 2014) or question answering (Sukhbaatar et al., 2015; Xiong et al., 2016a). In comparison, the memory in MART is designed to memorize the sequence generation history to support the coherent generation of the next sequence.

3.3 Comparison with Transformer-XL

Transformer-XL (Dai et al., 2019) is a recently proposed transformer-based language model that uses a segment-level recurrence mechanism to capture long-term dependency in context. In Figure 2 (right) we show a modified version of Transformer-XL for video paragraph captioning. At step t, at its l-th layer, Transformer-XL takes as inputs the last layer's hidden states from both the current step and the last step, which we denote as H^{l−1}_t and SG(H^{l−1}_{t−1}), where SG(·) stands for stop-gradient and is used to save GPU memory and computation (Dai et al., 2019). The input query matrix of the multi-head attention is Q = H^{l−1}_t, and the key and value matrices are K, V = [SG(H^{l−1}_{t−1}); H^{l−1}_t]. Note that the multi-head attention here is integrated with relative positional encoding (Dai et al., 2019). While both models are designed to leverage long-term dependency in context, the recurrence in Transformer-XL is between H^l_t and H^{l−1}_{t−1}, which shifts one layer downwards per step. This mismatch in representation granularity may potentially be harmful to the learning process and affect the model performance. In contrast, the recurrence in MART is between \bar{H}^l_t and M^l_{t−1} (updated using \bar{H}^l_{t−1}) of the same layer.
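To make MART's side of this comparison concrete, the memory update summarized above can be sketched as follows (an illustrative simplification, not the released code: a single attention head stands in for MultiHeadAtt, and the layer index l is dropped):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryUpdater(nn.Module):
    """Sketch of the memory update M_t = (1 - Z_t) * C_t + Z_t * M_{t-1}."""

    def __init__(self, d=768):
        super().__init__()
        self.W_mc, self.W_sc = nn.Linear(d, d), nn.Linear(d, d, bias=False)
        self.W_mz, self.W_sz = nn.Linear(d, d), nn.Linear(d, d, bias=False)

    def forward(self, M_prev, H_bar):
        # M_prev: (T_m, d) previous memory state; H_bar: (T_c, d) intermediate hidden states.
        # S_t = MultiHeadAtt(M_{t-1}, H_bar, H_bar); a single head is used here for brevity.
        att = F.softmax(M_prev @ H_bar.t() / H_bar.size(-1) ** 0.5, dim=-1)
        S = att @ H_bar                                       # (T_m, d)
        C = torch.tanh(self.W_mc(M_prev) + self.W_sc(S))      # internal cell state C_t
        Z = torch.sigmoid(self.W_mz(M_prev) + self.W_sz(S))   # update gate Z_t
        return (1 - Z) * C + Z * M_prev                       # new memory state M_t

updater = MemoryUpdater(d=768)
M = torch.zeros(1, 768)                 # T_m = 1 memory slot, as in Section 4.2
M = updater(M, torch.randn(120, 768))   # intermediate hidden states for T_c = 120 positions
print(M.shape)                          # torch.Size([1, 768])
```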
Besides, Transformer-XL directly uses all the hidden states from the last step to enable recurrence, which might be less effective as less relevant and repetitive information is also passed along. In comparison, MART achieves recurrence by using memory states that are highly summarized from previous steps, which may help the model to reduce redundancy and only keep important information from previous steps. 2608 4 Experiments We conducted experiments on two popular benchmark datasets, ActivityNet Captions (Krishna et al., 2017) and YouCookII (Zhou et al., 2017). We evaluate our proposed MART and compare it with various baseline approaches. 4.1 Data and Evaluation Metrics Datasets ActivityNet Captions (Krishna et al., 2017) contains 10,009 videos in train set, 4,917 videos in val set. Each video in train has a single reference paragraph while each video in val has two reference paragraphs. Park et al. (2019) uses the same set of videos (though different segments) in val for both validation and test. To allow better evaluation of the models, we use splits provided by Zhou et al. (2019), where the original val set is split into two subsets: ae-val with 2,460 videos for validation and ae-test with 2,457 videos for test. This setup makes sure the videos used for test will not be seen in validation. YouCookII (Zhou et al., 2017) contains 1,333 training videos and 457 validation videos. Each video has a single reference paragraph. Both datasets come with temporal event segments annotated with human written natural language sentences. On average, there are 3.65 event segments for each video in ActivityNet Captions, 7.7 segments for each video in YouCookII. Data Preprocessing We use aligned appearance and optical flow features extracted at 2FPS to represent videos, provided by Zhou et al. (2018). Specifically, for appearance, 2048D feature vectors from the ‘Flatten-673’ layer in ResNet-200 (He et al., 2016) are used; for optical flow, 1024D feature vectors from the ‘global pool’ layer of BNInception (Ioffe and Szegedy, 2015) are used. Both networks are pre-trained on ActivityNet (Caba Heilbron et al., 2015) for action recognition, provided by (Xiong et al., 2016b). We truncate sequences longer than 100 for video and 20 for text and set the maximum number of video segments to 6 for ActivityNet Captions and 12 for YouCookII. Finally, we build vocabularies based on words that occur at least 5 times for ActivityNet Captions and 3 times for YouCookII. The resulting vocabulary contains 3,544 words for ActivityNet Captions and 992 words for YouCookII. Evaluation Metrics (Automatic and Human) We evaluate the captioning performance at paragraph-level, following (Park et al., 2019; Xiong et al., 2018), reporting numbers on standard metrics, including BLEU@4 (Papineni et al., 2002), METEOR (Denkowski and Lavie, 2014), CIDErD (Vedantam et al., 2015). Since these metrics mainly focus on whether the generated paragraph matches the ground-truth paragraph, they fail to evaluate the redundancy of these multi-sentence paragraphs. Thus, we follow previous works (Park et al., 2019; Xiong et al., 2018) to evaluate repetition using R@4. It measures the degree of N-gram (N=4) repetition in the descriptions. Besides the automated metrics, we also conduct human evaluations to provide additional comparisons between the methods. 
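The R@4 statistic is not spelled out here; one plausible, purely illustrative way to measure the degree of 4-gram repetition in a generated paragraph (the exact formula used by the cited works may differ) is:

```python
from collections import Counter

def repetition_at_4(paragraph: str) -> float:
    """Fraction of 4-gram occurrences in a paragraph that repeat an earlier 4-gram."""
    tokens = paragraph.lower().split()
    ngrams = [tuple(tokens[i:i + 4]) for i in range(len(tokens) - 3)]
    if not ngrams:
        return 0.0
    counts = Counter(ngrams)
    repeated = sum(c - 1 for c in counts.values() if c > 1)
    return repeated / len(ngrams)

print(repetition_at_4("he continues playing the harmonica . "
                      "he continues playing the harmonica ."))  # ~0.33
```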
We consider two aspects in human evaluation, relevance (i.e., how related is a generated paragraph caption to the content of the given video) and coherence (i.e., whether a generated paragraph caption reads fluently and is linguistically coherent over its multiple sentences). 4.2 Implementation Details MART is implemented in PyTorch (Paszke et al., 2017). We set the hidden size to 768, the number of transformer layers to 2, and the number of attention heads to 12. For positional encoding, we follow Vaswani et al. (2017) to use the fixed scheme. For memory module, we set the length of recurrent memory state to 1, i.e., Tm=1. We optimize the model following the strategy used by Devlin et al. (2019). Specifically, we use Adam (Kingma and Ba, 2014) with an initial learning rate of 1e-4, β1=0.9, β2=0.999, L2 weight decay of 0.01, and learning rate warmup over the first 5 epochs. We train the model for at most 50 epochs with early stopping using CIDEr-D and batch size 16. We use greedy decoding as we did not observe better performance using beam search. 4.3 Baselines Vanilla Transformer This model originates from the transformer (Vaswani et al., 2017), proposed by Zhou et al. (2018) (more details in Section 3.1). It takes a single video segment as input and independently generates a single sentence describing the given segment. Note that Zhou et al. (2018) also have a separate proposal generation module, but here we only focus on its captioning module. To obtain paragraph-level captions, the independently generated single sentence captions are concatenated as the output paragraph. 2609 Model Re. ActivityNet Captions (ae-test) YouCookII (val) B@4 M C R@4 ↓ B@4 M C R@4 ↓ VTransformer (2018)  9.31 15.54 21.33 7.45 7.62 15.65 32.26 7.83 Transformer-XL (2019)  10.25 14.91 21.71 8.79 6.56 14.76 26.35 6.30 Transformer-XLRG  10.07 14.58 20.34 9.37 6.63 14.74 25.93 6.03 MART  9.78 15.57 22.16 5.44 8.00 15.9 35.74 4.39 Human 0.98 1.27 Table 1: Comparison with transformer baselines on ActivityNet Captions ae-test split and YouCookII val split. Re. indicates whether sentence-level recurrence is used. We report BLEU@4 (B@4), METEOR (M), CIDEr-D (C) and Repetition (R@4). VTransformer denotes vanilla transformer. Det. Re. B@4 M C R@4 ↓ LSTM based methods MFT (2018)   10.29 14.73 19.12 17.71 HSE (2018)   9.84 13.78 18.78 13.22 LSTM based methods with detection feature GVD (2019)   11.04 15.71 21.95 8.76 GVDsup (2019)   11.30 16.41 22.94 7.04 AdvInf (2019)   10.04 16.60 20.97 5.76 Transformer based methods VTransformer (2018)   9.75 15.64 22.16 7.79 Transformer-XL (2019)   10.39 15.09 21.67 8.54 Transformer-XLRG   10.17 14.77 20.40 8.85 (Ours) MART   10.33 15.68 23.42 5.18 Human 0.98 Table 2: Comparison with baselines on ActivityNet Captions ae-val split. Det. indicates whether the model uses detection feature. Models that use detection features are shown in gray background to indicate they are not in fair comparison with the others. Re. indicates whether sentence-level recurrence is used. VTransformer denotes vanilla transformer. Transformer-XL Transformer-XL is proposed by Dai et al. (2019) for modeling long-term dependency in natural language. Here we adapt it for video paragraph captioning (more details in Section 3.3). The original design of TransformerXL stops gradients from passing between different recurrent steps to save GPU memory and computation. 
To enable a more fair comparison with our model, we implemented a version that allows gradient flow through different steps, calling this Transformer-XLRG (Transformer-XL with Recurrent Gradient). AdvInf AdvInf (Park et al., 2019) uses a set of three discriminators to do adversarial inference on a strong LSTM captioning model. The input features of the LSTM model are the concatenation of image recognition, action recognition, and object detection features. To encourage temporal coherence between consecutive sentences, the last hidden state from the previous sentence is used as input to the decoder (Xiong et al., 2018; Gella et al., 2018). The three discriminators are trained adversarially and are specifically designed to reduce repetition and encourage fluency and relevance in the generated paragraph. GVD An LSTM based model for grounded video description (Zhou et al., 2019). It uses densely detected object regions as inputs, with a grounding module that grounds generated words to the regions. Additionally, we also consider a GVD variant (GVDsup) that uses grounding supervision from Zhou et al. (2019). MFT MFT (Xiong et al., 2018) uses an LSTM model with a similar sentence-level recurrence as in AdvInf (Park et al., 2019). HSE HSE (Zhang et al., 2018) is a hierarchical model designed to learn both clip-sentence and paragraph-video correspondences. Given the learned contextualized video embedding, HSE uses a 2-layer LSTM to generate captions. For AdvInf, MFT, HSE, GVD, and GVDsup, we obtain generated sentences from the authors. We only report their performance on ActivityNet Captions ae-val split to enable a fair comparison, as (i) AdvInf, MFT and HSE have different settings as ours, where ae-test videos are included as part of their validation set; (ii) we do not have access to the ae-test predictions of GVD and GVDsup. For vanilla transformer, Transformer-XL and Transformer-XLRG, we borrow/modify the model implementations from the original authors and train them under the same settings as MART. 4.4 Results Automatic Evaluation Table 1 shows the results of MART and several transformer baseline methods. We observe stronger or comparable performance for the language metrics (B@4, M, C) for 2610 MART wins (%) VTransformer wins (%) Delta relevance 37 29.5 +7.5 coherence 42.8 26.3 +16.5 MART wins (%) Transformer-XL wins (%) Delta relevance 40.0 39.5 +0.5 coherence 39.2 36.2 +3.0 Table 3: Human evaluation on ActivityNet Captions aetest set w.r.t. relevance and coherence. Top: MART vs. vanilla transformer (VTransformer). Bottom: MART vs. Transformer-XL. both ActivityNet Captions and YouCookII datasets. For R@4, MART produces significantly better results compared to the three transformer baselines, showing its effectiveness in reducing redundancy in the generated paragraphs. Table 2 shows the comparison of MART with state-of-the-art models on ActivityNet Captions. MART achieves the best scores for both CIDEr-D and R@4 and has a comparable performance for B@4 and METEOR. Note that the best B@4 model, GVDsup (Zhou et al., 2019), and the best METEOR model, AdvInf (Park et al., 2019), both use strong detection features, and GVDsup has also used grounding supervision. Regarding the repetition score R@4, MART has the highest score. It outperforms the strong adversarial model AvdInf (Park et al., 2019) even in an unfair comparison where AdvInf uses extra detection features. 
Additionally, AdvInf has a time-consuming adversarial training and decoding process where a set of discriminator models are trained and used to re-rank candidate sentences, while MART can do much faster inference with only greedy decoding and no further post-processing. The comparisons in Table 1 and Table 2 show that MART is able to generate less redundant (thus more coherent) paragraphs while maintaining relevance to the videos. Human Evaluation In addition to the automatic metrics, we also run human evaluation on Amazon Mechanical Turk (AMT) with 200 randomly sampled videos from ActivityNet Captions ae-test split, where each video was judged by three different AMT workers. We design a set of pairwise experiments (Pasunuru and Bansal, 2017b; Park et al., 2019), where we compare two models at a time. AMT workers are instructed to choose which caption is better or the two captions are not distinguishable based on relevance and coherence, respectively. The models are anonymized, and the predictions are shuffled. In total, we have 54 work#hidden layers mem. len. Re. B@4 M C R@4 ↓ #hidden layers MART 1 1  10.42 16.01 22.87 6.70 MART 5 1  10.48 16.03 24.33 6.74 mem. len. MART 2 2  10.30 15.66 22.93 5.94 MART 2 5  10.12 15.48 22.89 6.83 recurrence MART w/o re. 2  9.91 15.83 22.78 7.56 MART 2 1  10.33 15.68 23.42 5.18 Table 4: Model ablation on ActivityNet Captions aeval split. Re. indicates whether sentence-level recurrence is used. mem. len. indicates the length of the memory state. MART w/o re. denotes a MART variant without recurrence. Top two scores are highlighted. ers participated the MART vs. vanilla transformer experiments, 47 workers participated the MART vs. Transformer-XL experiments. In Table 3 we show human evaluation results, where the scores are calculated as the percentage of workers that have voted a certain option. With its sentence-level recurrence mechanism, MART is substantially better than the vanilla transformer model for both relevance and coherence. Compared to the strong baseline approach Transformer-XL, MART has similar performance in terms of relevance, but still reasonably better performance in terms of coherence. Model Ablation We show model ablation in Table 4. MART models with recurrence have better overall performance than the variant without, suggesting the effectiveness of our recurrent memory design. We choose to use the model with 2 hidden layers and memory state length 1 as it shows a good balance between performance and computation. Qualitative Examples In Figure 3, we show paragraph captions generated by vanilla transformer, Transformer-XL, and our method MART. Compared to the two baselines, MART produces more coherent and less redundant paragraphs. In particular, we noticed that vanilla transformer often uses incoherent pronouns/person mentions, while MART and Transformer-XL is able to use suitable pronouns/person mentions across the sentences and thus improve the coherence of the paragraph. Compare with Transformer-XL, we found that the paragraphs generated by MART have much less crosssentence repetitions. We attribute MART’s success to its recurrence design - the previous memory states are highly summarized, in which redundant information is removed. While there is less redun2611 Vanilla Transformer He is sitting down in a chair. He continues playing the harmonica and ends by looking off into the distance. He continues playing the harmonica and looking off into the distance. He stops playing and looks at the camera. 
Transformer-XL A man is seen speaking to the camera while holding a harmonica. He continues playing the harmonica while looking at the camera. He continues playing the instrument and looking off into the distance. He continues playing and stops playing. MART (ours) A man is sitting down talking to the camera while holding a camera. He takes a harmonica and begins playing his harmonica. He continues playing the harmonica as he continues playing. He stops and looks at the camera. Ground-Truth A young man wearing a Cuervo black shirt stares and speaks to the camera as he sits on his chair. He puts a harmonica to his mouth and begins playing. He plays on for about a minute and is very into his song. He then puts the harmonica down and looks into the camera as the video comes to an end. Vanilla Transformer A girl is seen climbing across a set of monkey bars and leads into her climbing across a set of. He jumps off the monkey bars and lands on a bridge. Transformer-XL A young child is seen climbing across a set of monkey bars and climbing across a set of monkey bars. The boy jumps down and jumps down and jumps down. MART (ours) A girl is seen speaking to the camera and leads into her climbing across a set of monkey bars. She jumps off the bar and walks back to the camera. Ground-Truth A little girl climbs the monkey bars of a play ground. Then, the little girl jumps to the ground and extend her arms. Figure 3: Qualitative examples. Red/bold indicates pronoun errors (inappropriate use of pronouns), blue/italic indicates repetitive patterns, underline indicates content errors. Compared to baselines, our model generates more coherent, less repeated paragraphs while maintaining relevance. A girl is giving a small dog a bath. She has an orange bottle in her hand… A man on a diving board walks to the end. The man bounces on the board two times then dives into the water… A young girl is seen walking to the end of a diving board with several other people around her… A little girl stands on a diving board. Then the little girl jumps, flip and dives in the swimming pool… Figure 4: Nearest neighbors retrieved using memory states. Top row shows the query, the 3 rows below it are the top-3 nearest neighbors. dancy between sentences generated by MART, in Figure 3 (left), we noticed that repetition still exists within a single sentence, suggesting further efforts on reducing the repetition in single sentence generation. More examples are in the appendix. Memory Ablation To explore whether the learned memory state could store useful information about the videos and captions, we conducted a video retrieval experiment on ActivityNet Captions train split with 10K videos, where we extract the last step memory state in the first layer of a trained MART model for each video as its representation to perform nearest neighbor search with cosine similarity. Though not explicitly trained for the retrieval task, we observe some positive examples in the experiments. We show an example in Figure 4, the neighbors mostly show related activities. 5 Conclusion In this work, we present a new approach – MemoryAugmented Recurrent Transformer (MART) for video paragraph captioning, where we designed an auxiliary memory module to enable recurrence in transformers. Experimental results on two standard datasets show that MART has better overall performance than the baseline methods. In particular, MART can generate more coherent, less redundant paragraphs without any degradation in relevance. 
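As a side note on the memory ablation above, the nearest-neighbor retrieval is a plain cosine-similarity search over the extracted memory states. A minimal sketch, assuming the states have already been extracted from a trained model into a matrix with one row per video (the extraction step itself is not shown):

import torch
import torch.nn.functional as F

def nearest_neighbors(memory_states, query_index, k=3):
    # Indices of the top-k videos most similar to the query video,
    # measured by cosine similarity between memory-state vectors.
    states = F.normalize(memory_states, dim=-1)   # unit-normalize each row
    sims = states @ states[query_index]           # cosine similarity to the query
    sims[query_index] = float("-inf")             # exclude the query itself
    return torch.topk(sims, k).indices

# Toy usage: 10,000 videos, each represented by a 768-d memory state.
memory_states = torch.randn(10000, 768)
print(nearest_neighbors(memory_states, query_index=0, k=3))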
Acknowledgments We thank the anonymous reviewers for their helpful comments and discussions. This work was performed while Jie Lei was an intern at Tencent AI Lab, Seattle, USA. It was later partially supported by NSF Awards CAREER-1846185, 1562098, DARPA KAIROS Grant FA8750-19-21004, and ARO-YIP Award W911NF-18-1-0336. The views contained in this article are those of the authors and not of the funding agency. References Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer normalization. Advances in 2612 NeurIPS 2016 Deep Learning Symposium. Fabian Caba Heilbron, Victor Escorcia, Bernard Ghanem, and Juan Carlos Niebles. 2015. Activitynet: A large-scale video benchmark for human activity understanding. In CVPR. David L Chen and William B Dolan. 2011. Collecting highly parallel data for paraphrase evaluation. In ACL. Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. 2019. Uniter: Learning universal image-text representations. arXiv preprint arXiv:1909.11740. Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. In NIPS 2014 Workshop on Deep Learning. Zihang Dai, Zhilin Yang, Yiming Yang, William W Cohen, Jaime Carbonell, Quoc V Le, and Ruslan Salakhutdinov. 2019. Transformer-xl: Attentive language models beyond a fixed-length context. In ACL. Michael Denkowski and Alon Lavie. 2014. Meteor universal: Language specific translation evaluation for any target language. In Proceedings of the ninth workshop on statistical machine translation. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In NAACL. Spandana Gella, Mike Lewis, and Marcus Rohrbach. 2018. A dataset for telling the stories of social media videos. In EMNLP. Alex Graves, Greg Wayne, and Ivo Danihelka. 2014. Neural turing machines. arXiv preprint arXiv:1410.5401. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In CVPR. Sepp Hochreiter, Yoshua Bengio, Paolo Frasconi, J¨urgen Schmidhuber, et al. 2001. Gradient flow in recurrent nets: the difficulty of learning long-term dependencies. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Sergey Ioffe and Christian Szegedy. 2015. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. In ICLR. Ranjay Krishna, Kenji Hata, Frederic Ren, Li Fei-Fei, and Juan Carlos Niebles. 2017. Dense-captioning events in videos. In ICCV. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In ACL. Jae Sung Park, Marcus Rohrbach, Trevor Darrell, and Anna Rohrbach. 2019. Adversarial inference for multi-sentence video description. In CVPR. Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. On the difficulty of training recurrent neural networks. In ICML. Ramakanth Pasunuru and Mohit Bansal. 2017a. Multitask video captioning with video and entailment generation. In ACL. Ramakanth Pasunuru and Mohit Bansal. 2017b. Reinforced video captioning with entailment rewards. In EMNLP. Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. 
Automatic differentiation in PyTorch. In NeurIPS Autodiff Workshop. Anna Rohrbach, Marcus Rohrbach, Wei Qiu, Annemarie Friedrich, Manfred Pinkal, and Bernt Schiele. 2014. Coherent multi-sentence video description with variable level of detail. In GCPR. Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. 2015. End-to-end memory networks. In NeurIPS. Chen Sun, Austin Myers, Carl Vondrick, Kevin Murphy, and Cordelia Schmid. 2019. Videobert: A joint model for video and language representation learning. In ICCV. Hao Tan and Mohit Bansal. 2019. Lxmert: Learning cross-modality encoder representations from transformers. In EMNLP. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NeurIPS. Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. 2015. Cider: Consensus-based image description evaluation. In CVPR. Xin Wang, Jiawei Wu, Junkun Chen, Lei Li, YuanFang Wang, and William Yang Wang. 2019. Vatex: A large-scale, high-quality multilingual dataset for video-and-language research. In ICCV. Caiming Xiong, Stephen Merity, and Richard Socher. 2016a. Dynamic memory networks for visual and textual question answering. In ICML. 2613 Yilei Xiong, Bo Dai, and Dahua Lin. 2018. Move forward and tell: A progressive generator of video descriptions. In ECCV. Yuanjun Xiong, Limin Wang, Zhe Wang, Bowen Zhang, Hang Song, Wei Li, Dahua Lin, Yu Qiao, Luc Van Gool, and Xiaoou Tang. 2016b. Cuhk & ethz & siat submission to activitynet challenge 2016. arXiv preprint arXiv:1608.00797. Jun Xu, Tao Mei, Ting Yao, and Yong Rui. 2016. Msrvtt: A large video description dataset for bridging video and language. In CVPR. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In NeurIPS. Bowen Zhang, Hexiang Hu, and Fei Sha. 2018. Crossmodal and hierarchical modeling of video and text. In ECCV. Luowei Zhou, Yannis Kalantidis, Xinlei Chen, Jason J. Corso, and Marcus Rohrbach. 2019. Grounded video description. In CVPR. Luowei Zhou, Chenliang Xu, and Jason J. Corso. 2017. Towards automatic learning of procedures from web instructional videos. In AAAI. Luowei Zhou, Yingbo Zhou, Jason J Corso, Richard Socher, and Caiming Xiong. 2018. End-to-end dense video captioning with masked transformer. In CVPR. A Appendices A.1 Additional Qualitative Examples We show more caption examples in Figure 5. Overall, we see captions generated by models with sentence-level recurrence, i.e., MART and Transformer-XL, tend to be more coherent. Comparing with Transformer-XL, captions generated by MART are usually less repetitive. However, as shown in the two examples at the last row of Figure 5, all three models suffer from the content error, where the models are not able to recognize and describe the fine-grained details in the videos, e.g., gender and fine-grained objects/actions. 2614 Vanilla Transformer He continues speaking while holding the violin and showing how to play his hands. He continues playing the instrument while looking down at the camera. He continues playing the violin and then stops to speak to the camera. Transformer-XL A man is seen speaking to the camera while holding a violin. The man continues playing the instrument while moving his hands up and down. The man continues playing the instrument and ends by looking back to the camera. 
MART (ours) A man is seen speaking to the camera while holding a violin and begins playing the instrument. The man continues to play the instrument while moving his hands up and down. He continues to play and ends by moving his hands up and down. Ground-Truth A man is seen looking to the camera while holding a violin. The man then begins playing the instrument while the camera zooms in on his fingers. The man continues to play and stops to speak to the camera. Vanilla Transformer He is skateboarding down a road. He goes through the streets and goes. He is skateboarding down a road. Transformer-XL A man is riding a skateboard down a road. He is skateboarding down a road. He is skateboarding down a road. MART (ours) A man is seen riding down a road with a person walking into frame and speaking to the camera. The man continues riding down the road while looking around to the camera and showing off his movements. The man continues to ride around while looking to the camera. Ground-Truth A camera pans all around an area and leads into a man speaking to the camera. Several shots of the area are shown as well as dogs and leads into a man riding down a hill. The man rides a skateboard continuously around the area and ends by meeting up with the first man. Vanilla Transformer She continues moving around the room and leads into her speaking to the camera. She continues moving around on the step and ends by speaking to the camera. Transformer-XL A woman is standing in a gym. She begins to do a step. MART (ours) A woman is standing in a room talking. She starts working out on the equipment. Ground-Truth A woman is seen speaking to the camera and leads into her walking up and down the board. She then stands on top of the beam while speaking to the camera continuously. Vanilla Transformer Several shots are shown of people riding on the surf board and the people riding along the water. Several shots are shown of people riding around on a surf board and leads into several clips of people riding. Transformer-XL A large wave is seen followed by several shots of people riding on a surf board and riding along the. The people continue riding along the water while the camera pans around the area and leads into several more shots. MART (ours) A man is seen riding on a surfboard and surfing on the waves. The man continues surfing while the camera captures him from several angles. Ground-Truth A man is seen moving along the water on a surf board while another person watches on the side. The person continues riding around and slowing down to demonstrate how to play. Vanilla Transformer A young girl is seen climbing across a set of monkey bars. A young child is seen climbing across a set of monkey bars. A little girl is standing on a platform in a playground. Transformer-XL A young child is seen standing before a set of monkey bars and begins climbing across monkey bars. The girl then climbs back and fourth on the bars. MART (ours) A young child is seen climbing across a set of monkey bars while speaking to the camera. She then climbs down across the bars and begins swinging herself around. She continues to swing down and ends by jumping down. Ground-Truth A boy goes across the monkey bars as a lady watches and cheers him on. At the end he begins to struggle bit, but finally finished. When he is done another little boy comes and stands by him. Vanilla Transformer The man then holds up a bottle of mouthwash and talks to the camera. The man then puts lotion on her face and begins rubbing it down. 
The man then begins to blow dry her face and shows off the camera. Transformer-XL A man is seen speaking to the camera while holding up a brush. He then rubs lotion all over his face and begins brushing his face. He then puts the lotion on the face and rubs it on the wall. MART (ours) A man is seen speaking to the camera and leads into him holding up a bottle of water. The man then holds up a can and begins to shave his face. He finishes putting the paper into the mirror and smiles to the camera. Ground-Truth A girl's face is shown in front of the camera. She showed an orange bottle, read the label and squirt the orange content on her palm, showed the cream on the camera, then rub the cream all over her face. She bend down and rinse her face, when her face is visible on the camera her face is clear. Figure 5: Additional qualitative examples. Red/bold indicates pronoun errors (inappropriate use of pronouns or person mentions), blue/italic indicates repetitive patterns, underline indicates content errors. Compared to baselines, our model generates more coherent, less repeated paragraphs while maintaining relevance.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2615–2635 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 2615 What is Learned in Visually Grounded Neural Syntax Acquisition Noriyuki Kojima, Hadar Averbuch-Elor, Alexander Rush and Yoav Artzi Department of Computer Science and Cornell Tech, Cornell University {nk654,he93,arush}@cornell.edu {yoav}@cs.cornell.edu Abstract Visual features are a promising signal for learning bootstrap textual models. However, blackbox learning models make it difficult to isolate the specific contribution of visual components. In this analysis, we consider the case study of the Visually Grounded Neural Syntax Learner (Shi et al., 2019), a recent approach for learning syntax from a visual training signal. By constructing simplified versions of the model, we isolate the core factors that yield the model’s strong performance. Contrary to what the model might be capable of learning, we find significantly less expressive versions produce similar predictions and perform just as well, or even better. We also find that a simple lexical signal of noun concreteness plays the main role in the model’s predictions as opposed to more complex syntactic reasoning. 1 Introduction Language analysis within visual contexts has been studied extensively, including for instruction following (e.g., Anderson et al., 2018b; Misra et al., 2017, 2018; Blukis et al., 2018, 2019), visual question answering (e.g., Fukui et al., 2016; Hu et al., 2017; Anderson et al., 2018a), and referring expression resolution (e.g., Mao et al., 2016; Yu et al., 2016; Wang et al., 2016). While significant progress has been made on such tasks, the combination of vision and language makes it particularly difficult to identify what information is extracted from the visual context and how it contributes to the language understanding problem. Recently, Shi et al. (2019) proposed using alignments between phrases and images as a learning signal for syntax acquisition. This task has been long-studied from a text-only setting, including recently using deep learning based approaches (Shen et al., 2018a, 2019; Kim et al., 2019; Havrylov et al., 2019; Drozdov et al., 2019, inter alia). While the introduction of images provides a rich new signal for the task, it also introduces numerous challenges, such as identifying objects and analyzing scenes. In this paper, we analyze the Visually Grounded Neural Syntax Learner (VG-NSL) model of Shi et al. (2019). In contrast to the tasks commonly studied in the intersection of vision and language, the existence of an underlying syntactic formalism allows for careful study of the contribution of the visual signal. We identify the key components of the model and design several alternatives to reduce the expressivity of the model, at times, even replacing them with simple non-parameterized rules. This allows us to create several model variants, compare them with the full VG-NSL model, and visualize the information captured by the model parameters. Broadly, while we would expect a parsing model to distinguish between tokens and phrases along multiple dimensions to represent different syntactic roles, we observe that the model likely does not capture such information. Our experiments show that significantly less expressive models, which are unable to capture such distinctions, learn a similar model of parsing and perform equally and even better than the original VG-NSL model. 
Our visualizations illustrate that the model is largely focused on acquiring a notion of noun concreteness optimized for the training data, rather than identifying higher-level syntactic roles. Our code and experiment logs are available at https://github. com/lil-lab/vgnsl_analysis. 2 Background: VG-NSL VG-NSL consists of a greedy bottom-up parser made of three components: a token embedding function (φ), a phrase combination function (combine), and a decision scoring function (score). The model is trained using a reward signal computed by matching constituents and images. 2616 Algorithm 1 VG-NSL greedy bottom-up parser Input: A sentence ¯x = ⟨x1, . . . , xn⟩. Definitions: φ(·) is a token embedding function; combine(·) and score(·) are learned functions defined in Section 2. 1: C, T ←{[i, i]}n i=1 2: x[i,i] ←φ(xi) ∀i = 1, . . . , n 3: while [1, n] /∈T do 4: i, k, j = argmax [i,k],[k+1,j]∈C score(x[i,k], x[k+1,j]) 5: x[i,j] ←combine(x[i,k], x[k+1,j]) 6: T ←T ∪{[i, j]} 7: C ←(C ∪{[i, j]}) \ {[i, k], [k + 1, j]} 8: return T Given a sentence ¯x with n tokens ⟨x1, . . . , xn⟩, the VG-NSL parser (Algorithm 1) greedily constructs a parse tree by building up a set of constituent spans T , which are combined spans from a candidate set C. Parsing starts by initializing the candidate set C with all single-token spans. At each step, a score is computed for each pair of adjacent candidate spans [i, k] and [k + 1, j]. The best span [i, j] is added to T and C, and the two sub-spans are removed from C. The parser continues until the complete span [1, n] is added to T . Scoring a span [i, j] uses its span embedding x[i,j]. First, a d-dimensional embedding for each single-token span is computed using φ. At each step, the score of all potential new spans [i, j] are computed from the candidate embeddings x[i,k] and x[k+1,j]. The VG-NSL scoring function is: score(x[i,k], x[k+1,j]) = MLPs([x[i,k]; x[k+1,j]]) , where MLPs is a two-layer feed-forward network. Once the best new span is found, its span embedding is computed using a deterministic combine function. VG-NSL computes the d-dimensional embedding of the span [i, j] as the L2-normalized sum of the two combined sub-spans: combine(x[i,k], x[k+1,j]) = x[i,k] + x[k+1,j] x[i,k] + x[k+1,j] 2 . Learning the token embedding function φ and scoring model MLPs relies on a visual signal from aligned images via a reward signal derived from matching constituents and the image. The process alternates between updating the parser parameters and an external visual matching function, which is estimated by optimizing a hinge-based triplet ranking loss similar to the image-caption retrieval loss of Kiros et al. (2014). The parser parameters are estimated using a policy gradient method based on the learned visual matching function, which encourages constituents that match with the corresponding image. This visual signal is the only objective used to learn the parser parameters. After training, the images are no longer used and the parser is text-only. 3 Model Variations We consider varying the parameterization of VGNSL, i.e., φ, combine, and score, while keeping the same inference algorithm and learning procedure. Our goal is to constrain model expressivity, while studying its performance and outputs. Embedding Bottleneck We limit the information capacity of the parsing model by drastically reducing its dimensionality from d = 512 to 1 or 2. 
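Before detailing how the dimensionality is reduced, it may help to restate Algorithm 1 together with the score and combine functions of Section 2 as a compact, runnable sketch (illustrative code, not the released implementation; spans are 0-indexed here):

import torch
import torch.nn as nn
import torch.nn.functional as F

class GreedyParser(nn.Module):
    # Greedy bottom-up parser in the style of Algorithm 1.
    def __init__(self, vocab_size, dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)                   # phi
        self.score_mlp = nn.Sequential(                              # score
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def combine(self, left, right):
        # L2-normalized sum of the two sub-span embeddings.
        return F.normalize(left + right, dim=-1)

    def score(self, left, right):
        return self.score_mlp(torch.cat([left, right], dim=-1)).squeeze(-1)

    def parse(self, token_ids):
        n = len(token_ids)
        spans = {(i, i): self.embed(torch.tensor(token_ids[i])) for i in range(n)}
        candidates = [(i, i) for i in range(n)]     # C, kept in left-to-right order
        tree = set(candidates)                      # T
        while (0, n - 1) not in tree:
            # Score every pair of adjacent candidates and merge the best pair.
            pair_scores = [self.score(spans[candidates[p]], spans[candidates[p + 1]])
                           for p in range(len(candidates) - 1)]
            p = int(torch.stack(pair_scores).argmax())
            (i, k), (_, j) = candidates[p], candidates[p + 1]
            spans[(i, j)] = self.combine(spans[(i, k)], spans[(k + 1, j)])
            tree.add((i, j))
            candidates[p:p + 2] = [(i, j)]          # replace the two spans by their merge
        return tree

parser = GreedyParser(vocab_size=100)
print(sorted(parser.parse([5, 8, 13, 2])))          # constituent spans for a 4-token input

The variants below keep this parsing loop fixed and only swap out the embedding, score, and combine components.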
We reduce dimensionality by wrapping the token embedding function with a bottleneck layer φB(x) = MLPB(φ(x)), where MLPB is a twolayer feed-forward network mapping to the reduced size. This bottleneck limits the expressiveness of phrase embeddings throughout the parsing algorithm. During training, we compute both original and reduced embeddings. The original embeddings are used to compute the visual matching reward signal, whereas the reduced embeddings are used by score to determine parsing decisions. At test time, only the reduced embeddings are used. In the case of d = 1, the model is reduced to using a single criteria. The low dimensional embeddings are also easy to visualize, and to characterize the type of information learned. Simplified Scoring We experiment with simplified versions of the score function. Together with the lower-dimensional representation, this enables controlling and analyzing the type of decisions the parser is capable of. As we control the information the embeddings can capture, simplifying the scoring function makes sure it does not introduce additional expressivity. The first variation uses a weighted sum with parameters u, v: scoreWS(x[i,k], x[k+1,j]) = u·x[i,k] +v ·x[k+1,j] . This formulation allows the model to learn structural biases, such as the head-initial (HI) bias common in English (Baker, 1987). The second is a nonparameterized mean, applicable for d = 1 only: scoreM(x[i,k], x[k+1,j]) = x[i,k] + τx[k+1,j] 1 + τ , where τ is a hyper-parameter that enables upweighting the right constituent to induce a HI inductive 2617 bias. We experiment with unbiased τ = 1 (scoreM) and HI-biased τ = 20 (scoreMHI) scoring. Reduced Dimension Combine In lower dimensions, the combine function no longer produces useful outputs, i.e., in d = 1 it always gives 1 or −1. We therefore consider mean or max pooling: combineME(x[i,k], x[k+1,j]) = x[i,k] + x[k+1,j] 2 combineMX(x[i,k], x[k+1,j]) = max(x[i,k], x[k+1,j]) . The mean variant computes the representation of a new span as an equal mixture of the two subspans, while the max directly copies to the new span representation information only from one of the spans. The max function is similar to how head rules lexicalize parsers (Collins, 1996). 4 Experimental Setup We train VG-NSL and our model variants using the setup of Shi et al. (2019), including three training extensions: (a) +HI: adding a head-initial inductive bias to the training objective; (b) +FastText: the textual representations are partially initialized with pre-trained FastText (Joulin et al., 2016); and (c) IN: 1 disabling the normalization of image features. We follow the Shi et al. (2019) setup. We train all VG-NSL variants on 82,783 images and 413,915 captions from the MSCOCO (Lin et al., 2014) training set. We evaluate unsupervised constituency parsing performance using 5,000 non-overlapping held-out test captions. We use additional 5,000 non-overlapping validation captions for model selection, as well as for our analysis and visualization in Section 5. We generate binary gold-trees using Benepar (Kitaev and Klein, 2018), an off-the-shelf supervised constituency parser. We notate model variations as d, score, combine. For example, 1, sWS, cME refers to dimensionality d = 1, weighted sum scoring function (sWS), and mean pooling combine (cME). We train five models for each variation, and select the best checkpoint for each model by maximizing the parse prediction agreement on the validation captions between five models. 
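For completeness, the variant components defined in Section 3 are small enough to write out directly; a sketch under illustrative names (not the released code), for the reduced dimensionality d ∈ {1, 2}:

import torch
import torch.nn as nn

class Bottleneck(nn.Module):
    # Map the original 512-d token embedding to a reduced dimension d.
    def __init__(self, in_dim=512, d=1):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(in_dim, in_dim), nn.ReLU(),
                                 nn.Linear(in_dim, d))

    def forward(self, x):
        return self.mlp(x)

def score_weighted_sum(left, right, u, v):
    # scoreWS: learned weighted sum of the two candidate span embeddings.
    return (u * left).sum(-1) + (v * right).sum(-1)

def score_mean(left, right, tau=1.0):
    # scoreM (tau = 1) / scoreMHI (tau = 20): non-parameterized, d = 1 only.
    return (left + tau * right) / (1.0 + tau)

def combine_mean(left, right):
    return (left + right) / 2.0

def combine_max(left, right):
    return torch.maximum(left, right)

# Toy usage with d = 1.
reduced = Bottleneck(d=1)(torch.randn(512))          # 512-d embedding -> 1-d
left, right = torch.randn(1), torch.randn(1)
u, v = nn.Parameter(torch.randn(1)), nn.Parameter(torch.randn(1))
print(score_weighted_sum(left, right, u, v), score_mean(left, right, tau=20.0))
print(combine_mean(left, right), combine_max(left, right))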
The agreement is measured by the self-F1 agreement score (Williams et al., 2018). This procedure is directly adopted from Shi et al. (2019). We use the hyper-parameters from the original implementation without further tuning. 1The authors of Shi et al. (2019) suggested this ablation as particularly impactful on the learning outcome. Model NP VP PP ADJP Avg. F1 Shi2019 79.6 26.2 42.0 22.0 50.4 ± 0.3 Shi2019∗ 80.5 26.9 45.0 21.3 51.4 ± 1.1 1, sWS, cME 77.2 17.0 53.4 18.2 49.7 ± 5.9 2, sWS, cME 80.8 19.1 52.3 17.1 51.6 ± 0.6 +HI Shi2019 74.6 32.5 66.5 21.7 53.3 ± 0.2 Shi2019∗ 73.1 33.9 64.5 22.5 51.8 ± 0.3 1, sWS, cME 74.0 35.2 62.0 24.2 51.8 ± 0.4 2, sWS, cME 73.8 30.2 63.7 21.9 51.3 ± 0.1 +HI+FastText Shi2019 78.8 24.4 65.6 22.0 54.4 ± 0.3 Shi2019∗ 77.3 23.9 64.3 21.9 53.3 ± 0.1 1, sWS, cME 76.6 21.9 68.7 20.6 53.5 ± 1.4 2, sWS, cME 77.5 22.8 66.3 19.3 53.6 ± 0.2 +HI+FastText-IN Shi2019∗ 78.3 26.6 67.5 22.1 54.9 ± 0.1 1, sM, cMX 79.6 29.0 38.3 23.5 49.7 ± 0.2 1, sMHI, cMX 77.6 45.0 72.3 24.3 57.5 ± 0.1 1, sM, cME 80.0 26.9 62.2 23.2 54.3 ± 0.2 1, sMHI, cME 76.5 20.5 63.6 22.7 52.2 ± 0.3 1, sWS, cME 77.7 26.3 72.5 22.0 55.5 ± 0.1 2, sWS, cME 78.5 26.3 69.5 21.1 55.2 ± 0.1 Table 1: Test results. We report the results from Shi et al. (2019) as Shi2019 and our reproduction (Shi2019∗). We report mean F1 and standard deviation for each system and recall for four phrasal categories. Our variants are specified using a representation embedding (d ∈{1, 2}), a score function (sM: mean, sMHI: mean+HI, sWS: weighted sum), and a combine function (cMX: max, cME: mean). We evaluate using gold trees by reporting F1 scores on the ground-truth constituents and recall on several constituent categories. We report mean and standard deviation across the five models. 5 Experiments Quantitative Evaluation Table 1 shows our main results. As the table illustrates, The model variations achieve F1 scores competitive to the scores reported by Shi et al. (2019) across training setups. They achieve comparable recall on different constituent categories, and robustness to parameter initialization, quantified by self-F1, which we report in an expanded version of this table in Appendix A. The model variations closest to the original model, 1, sWS, cME and 2, sWS, cME, yield similar performance to the original model across different evaluation categories and metrics, especially in the +HI and +HI+FastText settings. Most remarkably, our simplest variants, which use 1d embeddings and a non-parameterized scoring function, are still competitive (1, sM, cME) or even outperform (1, sMHI, cMX) the original VG-NSL. Our simplified model variations largely learn the 2618 Training Setting 1, sWS, cME 2, sWS, cME U Basic Setting 72.0 77.5 87.5 +HI 78.2 80.3 91.8 +HI+FastText 80.5 83.1 92.3 +HI+FastText-IN 85.6 86.4 92.8 Table 2: Self-F1 agreement between two of our variations and the original VG-NSL model. We also report the upper bound scores (U) calculated by directly comparing two separately trained sets of five original VG-NSL models. d = 2 d = 1 Figure 1: Token embedding visualization for 2, sWS, cME (top) and 1, sWS, cME (bottom) colored by universal POS tags (Petrov et al., 2012). Appendix A includes an expanded version of this figure. same parsing model as the original. Table 2 shows self-F1 agreement by comparing constituents predicted by our models in each training setting with the original model. 
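Concretely, the agreement in Table 2 can be read as an average bracket F1 over cross-set pairs of models. A hedged sketch of the pairwise computation, with each tree represented as a set of spans (the exact span filtering of the evaluation script is not reproduced here):

from itertools import product

def span_f1(pred_spans, ref_spans):
    # Bracket F1 between two trees, each given as a set of (i, j) spans.
    overlap = len(pred_spans & ref_spans)
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred_spans), overlap / len(ref_spans)
    return 2 * precision * recall / (precision + recall)

def self_f1(set_a, set_b):
    # Average F1 over all cross-set model pairs and all captions.
    # set_a and set_b are lists of models; each model is a list of parse
    # trees (sets of spans), one per caption, in the same caption order.
    scores = []
    for trees_a, trees_b in product(set_a, set_b):
        scores += [span_f1(ta, tb) for ta, tb in zip(trees_a, trees_b)]
    return sum(scores) / len(scores)

# Toy usage: two sets of two models, a single caption each.
set_a = [[{(0, 3), (0, 1), (2, 3)}], [{(0, 3), (1, 3), (2, 3)}]]
set_b = [[{(0, 3), (0, 1), (2, 3)}], [{(0, 3), (0, 2), (0, 1)}]]
print(self_f1(set_a, set_b))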
We compute this agreement measure by training two sets of five models on the training data, and selecting checkpoints using the validation captions for each of our model variants and the original VG-NSL model. We parse the same validation captions using each model and generate ten parse trees for each caption, one for each model (i.e., five for each distinct set). We calculate self-F1 agreement between models by comparing parse trees from model variants to parse trees from the original VG-NSL. We permute all 25 (five by five) combinations of variant/VG-NSL pairs and obtain self-F1 agreement between the model variant and the original VG-NSL by averaging scores from each pair. For the upper-bound agreement calculation, we train two distinct sets of five original VG-NSL models. Our parsing model is very similar but not exactly identical: there is roughly a six points F1 agreement gap in the best case compared to the upper bound. We consider these numbers a worst-case scenario because selfF1 agreement measures on the validation data are used twice. First, for model selection to eliminate the variance of each five-model set, and second for the variant agreement analysis. Expressivity Analysis We analyze the embeddings of the two variants closest to the original Model 1, sWS, cME Turney et al. (2011) 0.73 Brysbaert et al. (2014) 0.75 Hessel et al. (2018) 0.89 Shi2019∗ 0.94 Table 3: Pearson correlation coefficient of concreteness estimates between our 1, sWS, cME variant and existing concreteness estimates, including reproduced estimates derived from VG-NSL by Shi et al. (2019). Figure 2: Noun distribution using the 1d representation from the 1, sWS, cME variant. The nouns are sorted by their representation value in increasing order from left. model, 1, sWS, cME and 2, sWS, cME, to identify the information they capture. Both behave similarly to the original VG-NSL. Figure 1 visualizes the token embedding space for these variants. Interestingly, the distribution of the 2d token embeddings seems almost linear, suggesting that the additional dimension is largely not utilized during learning, and that both have a strong preference for separating nouns from tokens belonging to other parts of speech. It seems only one core visual signal is used in the model and if this factor is captured, even a 1d model can propagate it through the tree. We hypothesize that the core visual aspect learned, which is captured even in the 1d setting, is noun concreteness. Table 3 shows that the reduced token embeddings have strong correlations with existing estimates of concreteness. Figure 2 shows the ordering of example nouns according to our 1d learned model representation. We observe that the concreteness estimated by our model correlates with nouns that are relatively easier to ground visually in MSCOCO images. For example, nouns like “giraffe” and “elephant” are considered most concrete. These nouns are relatively frequent in MSCOCO (e.g., “elephant” appears 4,633 times in the training captions) and also have a low variance in their appearances. On the other hand, nouns with high variance in images (e.g., “traveller”) or abstract nouns (e.g., “chart”, “spot”) are estimated to have low concreteness. Appendix A includes examples of concreteness. 
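The correlations in Table 3 can be reproduced in spirit with a few lines: extract the reduced 1d value assigned to each noun and correlate it with an external concreteness rating. A minimal sketch with hypothetical numbers (the noun values and ratings below are placeholders, not measured data):

from scipy.stats import pearsonr

# Hypothetical, aligned lists: the 1d embedding value the model assigns to
# each noun, and an external concreteness rating for the same noun
# (e.g., on the Brysbaert et al. (2014) 1-5 scale).
model_values = [0.92, 0.88, 0.35, 0.12, 0.75]
external_ratings = [4.9, 4.8, 3.9, 1.7, 4.9]

r, p_value = pearsonr(model_values, external_ratings)
print(f"Pearson r = {r:.2f} (p = {p_value:.3f})")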
We quantify the role of concreteness-based noun identification in VG-NSL by modifying test-time captions to replace all nouns with the most concrete token (i.e., “elephant”), measured according 2619 Training Setting Token 1, sWS, cME Shi2019∗ Basic Setting herd 49.5 ⇒36.3 51.0 ⇒47.6 Basic Setting* cat 52.4 ⇒56.9 51.0 ⇒57.2 +HI elephant 51.7 ⇒63.7 51.6 ⇒59.8 +HI+FastText motorcycle 52.9 ⇒59.9 52.9 ⇒60.7 +HI+FastText-IN elephant 55.0 ⇒62.9 54.6 ⇒60.2 Table 4: F1 scores evaluated before and after replacing nouns in captions with the most concrete token predicted by models using the 1, sWS, cME configuration. The replacement occurs during test time only as described in Section 5. In Basic Setting∗, we remove one model from 1, sWS, cME which has a significantly low F1 agreement (54.2) to the rest of four models using the 1, sWS, cME configuration. to the 1d token embeddings learned by our model. We pick the most concrete noun for each training configuration using mean ranking across token embeddings of the five models in each configuration. For example, instead of parsing the original caption "girl holding a picture," we parse "elephant holding an elephant." This uses part-of-speech information to resolve the issue where nouns with low concreteness are treated in the same manner as other part-of-speech tokens. We compare the output tree to the original gold ones for evaluation. We observe that the F1 score, averaged across the five models, significantly improves from 55.0 to 62.9 for 1, sWS, cME and from 54.6 to 60.2 for the original VG-NSL before and after our caption modification. The performance increase shows that noun identification via concreteness provides an effective parsing strategy, and further corroborates our hypothesis about what phenomena underlie the strong Shi et al. (2019) result. Table 4 includes the results for the other training settings. 6 Conclusion and Related Work We studied the VG-NSL model by introducing several significantly less expressive variants, analyzing their outputs, and showing they maintain, and even improve performance. Our analysis shows that the visual signal leads VG-NSL to rely mostly on estimates of noun concreteness, in contrast to more complex syntactic reasoning. While our model variants are very similar to the original VG-NSL, they are not completely identical, as reflected by the self-F1 scores in Table 2. Studying this type of difference between expressive models and their less expressive, restricted variants remains an important direction for future work. For example, this can be achieved by distilling the original model to the less expressive variants, and observing both the agreement between the models and their performance. In our case, this requires further development of distillation methods for the type of reinforcement learning setup VG-NSL uses, an effort that is beyond the scope of this paper. Our work is related to the recent inference procedure analysis of Dyer et al. (2019). While they study what biases a specific inference algorithm introduces to the unsupervised parsing problem, we focus on the representation induced in a grounded version of the task. Our empirical analysis is related to Htut et al. (2018), who methodologically, and successfully replicate the results of Shen et al. (2018a) to study their performance. The issues we study generalize beyond the parsing task. 
The question of what is captured by vision and language models has been studied before, including for visual question answering (Agrawal et al., 2016, 2017; Goyal et al., 2017), referring expression resolution (Cirik et al., 2018), and visual navigation (Jain et al., 2019). We ask this question in the setting of syntactic parsing, which allows to ground the analysis in the underlying formalism. Our conclusions are similar: multi-modal models often rely on simple signals, and do not exhibit the complex reasoning we would like them to acquire. Acknowledgements Special thanks to Freda Shi for code release and prompt help in re-producing the experiments of Shi et al. (2019). This work was supported by the NSF (CRII-1656998, IIS-1901030), a Google Focused Award, and the generosity of Eric and Wendy Schmidt by recommendation of the Schmidt Futures program. We thank Jack Hessel, Forrest Davis, and the anonymous reviewers for their helpful feedback. References Aishwarya Agrawal, Dhruv Batra, and Devi Parikh. 2016. Analyzing the behavior of visual question answering models. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1955–1960. Aishwarya Agrawal, Aniruddha Kembhavi, Dhruv Batra, and Devi Parikh. 2017. C-VQA: A compositional split of the visual question answering (VQA) v1.0 dataset. CoRR, abs/1704.08243. Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. 2018a. Bottom-up and top-down attention 2620 for image captioning and visual question answering. In The IEEE Conference on Computer Vision and Pattern Recognition, pages 6077–6086. Peter Anderson, Qi Wu, Damien Teney, Jake Bruce, Mark Johnson, Niko Sünderhauf, Ian Reid, Stephen Gould, and Anton van den Hengel. 2018b. Visionand-language navigation: Interpreting visuallygrounded navigation instructions in real environments. In The IEEE Conference on Computer Vision and Pattern Recognition, pages 3674–3683. Mark C. Baker. 1987. The atoms of language: The mind’s hidden rules of grammar. Basic books. Valts Blukis, Nataly Brukhim, Andrew Bennett, Ross A. Knepper, and Yoav Artzi. 2018. Following high-level navigation instructions on a simulated quadcopter with imitation learning. In Proceedings of the Robotics: Science and Systems Conference. Valts Blukis, Eyvind Niklasson, Ross A. Knepper, and Yoav Artzi. 2019. Learning to map natural language instructions to physical quadcopter control using simulated flight. In Proceedings of the Conference on Robot Learning. Marc Brysbaert, Amy Beth Warriner, and Victor Kuperman. 2014. Concreteness ratings for 40 thousand generally known english word lemmas. Behavior research methods, 46(3):904–911. Volkan Cirik, Louis-Philippe Morency, and Taylor Berg-Kirkpatrick. 2018. Visual referring expression recognition: What do systems actually learn? In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 781–787. Michael John Collins. 1996. A new statistical parser based on bigram lexical dependencies. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 184–191. Andrew Drozdov, Patrick Verga, Mohit Yadav, Mohit Iyyer, and Andrew McCallum. 2019. Unsupervised latent tree induction with deep inside-outside recursive auto-encoders. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1129–1141. 
Chris Dyer, Gábor Melis, and Phil Blunsom. 2019. A critical analysis of biased parsers in unsupervised parsing. arXiv preprint arXiv:1909.09428. Akira Fukui, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor Darrell, and Marcus Rohrbach. 2016. Multimodal compact bilinear pooling for visual question answering and visual grounding. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 457– 468. Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2017. Making the V in VQA matter: Elevating the role of image understanding in visual question answering. In The IEEE Conference on Computer Vision and Pattern Recognition, pages 6325–6334. Serhii Havrylov, Germán Kruszewski, and Armand Joulin. 2019. Cooperative learning of disjoint syntax and semantics. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1118–1128. Jack Hessel, David Mimno, and Lillian Lee. 2018. Quantifying the visual concreteness of words and topics in multimodal datasets. In Proceedings of North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2194–2205. Phu Mon Htut, Kyunghyun Cho, and Samuel Bowman. 2018. Grammar induction with neural language models: An unusual replication. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 4998–5003. Ronghang Hu, Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Kate Saenko. 2017. Learning to reason: End-to-end module networks for visual question answering. In The IEEE International Conference on Computer Vision, pages 804–813. Vihan Jain, Gabriel Magalhaes, Alexander Ku, Ashish Vaswani, Eugene Ie, and Jason Baldridge. 2019. Stay on the path: Instruction fidelity in vision-andlanguage navigation. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 1862–1872. Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Hérve Jégou, and Tomas Mikolov. 2016. Fasttext. zip: Compressing text classification models. arXiv preprint arXiv:1612.03651. Yoon Kim, Alexander M Rush, Lei Yu, Adhiguna Kuncoro, Chris Dyer, and Gábor Melis. 2019. Unsupervised recurrent neural network grammars. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1105–1117. Ryan Kiros, Ruslan Salakhutdinov, and Richard S Zemel. 2014. Unifying visual-semantic embeddings with multimodal neural language models. arXiv preprint arXiv:1411.2539. Nikita Kitaev and Dan Klein. 2018. Constituency parsing with a self-attentive encoder. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 2676–2686. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. 2014. Microsoft coco: 2621 Common objects in context. In European conference on computer vision, pages 740–755. Junhua Mao, Jonathan Huang, Alexander Toshev, Oana Camburu, Alan L. Yuille, and Kevin Murphy. 2016. Generation and comprehension of unambiguous object descriptions. In The IEEE Conference on Computer Vision and Pattern Recognition, pages 11–20. Dipendra Misra, Andrew Bennett, Valts Blukis, Eyvind Niklasson, Max Shatkhin, and Yoav Artzi. 2018. Mapping instructions to actions in 3D environments with visual goal prediction. 
In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 2667–2678. Dipendra Misra, John Langford, and Yoav Artzi. 2017. Mapping instructions and visual observations to actions with reinforcement learning. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1004–1015. Slav Petrov, Dipanjan Das, and Ryan McDonald. 2012. A universal part-of-speech tagset. In Proceedings of the Eighth International Conference on Language Resources and Evaluation, pages 2089–2096. Yikang Shen, Zhouhan Lin, Chin-Wei Huang, and Aaron Courville. 2018a. Neural language modeling by jointly learning syntax and lexicon. In Proceedings of International Conference on Learning Representations. Yikang Shen, Shawn Tan, Alessandro Sordoni, and Aaron Courville. 2019. Ordered neurons: Integrating tree structures into recurrent neural networks. In Proceedings of International Conference on Learning Representations. Haoyue Shi, Jiayuan Mao, Kevin Gimpel, and Karen Livescu. 2019. Visually grounded neural syntax acquisition. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 1842–1861. Peter D Turney, Yair Neuman, Dan Assaf, and Yohai Cohen. 2011. Literal and metaphorical sense identification through concrete and abstract context. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 680– 690. Mingzhe Wang, Mahmoud Azab, Noriyuki Kojima, Rada Mihalcea, and Jia Deng. 2016. Structured matching for phrase localization. In The European Conference on Computer Vision, pages 696–711. Adina Williams, Andrew Drozdov*, and Samuel R Bowman. 2018. Do latent tree learning models identify meaningful structure in sentences? In Transactions of the Association for Computational Linguistics, volume 6, pages 253–267. MIT Press. Licheng Yu, Patrick Poirson, Shan Yang, Alexander C Berg, and Tamara L Berg. 2016. Modeling context in referring expressions. In The European Conference on Computer Vision, pages 69–85. 2622 A Additional Results and Visualizations Table 5 is an extended version of Table 1 from Section 5. We include standard deviation for the phrasal category recall and self-F1 scores evaluated across different parameter initializations. Figure 3 is a larger version of Figure 1 from Section 5. It visualizes the token embeddings of 1, sWS, cME and 2, sWS, cME for all universal parts-of-speech categories (Petrov et al., 2012). Figures 4 and 5 show several examples visualizing our learned representations with the 1, sWS, cME variant, the 1d variant closest to the original model, as a concreteness estimate. Figure 4 shows the most concrete nouns, and Figure 5 shows the least concrete nouns. We selected nouns from the top (bottom) 5% of the data as most (least) concrete. We randomly selected image-caption pairs for these nouns. At the end of the supplementary material, we include tree visualizations, comparing gold trees with phrasal categories, trees generated by the original VG-NSL, and trees generated by our best performing, simplified 1, sMHI, cMX variant. We select the trees to highlight the difference between VG-NSL and our variant. First, we select all development trees where all five VG-NSL models agree to avoid results that are likely due to initialization differences. We do the same for our variant. Finally, we select all trees where the two sets, from VG-NSL and our variant, disagree. This process leaves us with 814 development examples, out of the original 5,000 examples. 
We display ten examples from this final set.

Model | NP | VP | PP | ADJP | Avg. F1 | Self-F1
Shi2019 | 79.6 ± 0.4 | 26.2 ± 0.4 | 42.0 ± 0.6 | 22.0 ± 0.4 | 50.4 ± 0.3 | 87.1
Shi2019∗ | 80.5 ± 1.5 | 26.9 ± 0.9 | 45.0 ± 2.9 | 21.3 ± 1.2 | 51.4 ± 1.1 | 87.3
1, sWS, cME | 77.2 ± 5.3 | 17.0 ± 5.2 | 53.4 ± 12.8 | 18.2 ± 1.0 | 49.7 ± 5.9 | 76.0
2, sWS, cME | 80.8 ± 1.1 | 19.1 ± 1.1 | 52.3 ± 3.5 | 17.1 ± 1.0 | 51.6 ± 0.6 | 88.1
+HI
Shi2019 | 74.6 ± 0.5 | 32.5 ± 1.5 | 66.5 ± 1.2 | 21.7 ± 1.1 | 53.3 ± 0.2 | 90.2
Shi2019∗ | 73.1 ± 0.3 | 33.9 ± 0.8 | 64.5 ± 0.2 | 22.5 ± 0.4 | 51.8 ± 0.3 | 91.6
1, sWS, cME | 74.0 ± 0.4 | 35.2 ± 2.0 | 62.0 ± 1.1 | 24.2 ± 0.9 | 51.8 ± 0.4 | 87.3
2, sWS, cME | 73.8 ± 0.3 | 30.2 ± 0.4 | 63.7 ± 0.3 | 21.9 ± 0.3 | 51.3 ± 0.1 | 93.3
+HI+FastText
Shi2019 | 78.8 ± 0.5 | 24.4 ± 0.9 | 65.6 ± 0.1 | 22.0 ± 0.7 | 54.4 ± 0.3 | 89.8
Shi2019∗ | 77.3 ± 0.1 | 23.9 ± 0.5 | 64.3 ± 0.3 | 21.9 ± 0.3 | 53.3 ± 0.1 | 92.2
1, sWS, cME | 76.6 ± 0.3 | 21.9 ± 2.3 | 68.7 ± 4.1 | 20.6 ± 0.9 | 53.5 ± 1.4 | 87.8
2, sWS, cME | 77.5 ± 0.2 | 22.8 ± 0.4 | 66.3 ± 0.6 | 19.3 ± 0.7 | 53.6 ± 0.2 | 93.6
+HI+FastText-IN
Shi2019∗ | 78.3 ± 0.2 | 26.6 ± 0.3 | 67.5 ± 0.5 | 22.1 ± 1.0 | 54.9 ± 0.1 | 92.6
1, sM, cMX | 79.6 ± 0.2 | 29.0 ± 0.7 | 38.3 ± 0.3 | 23.5 ± 0.6 | 49.7 ± 0.2 | 95.5
1, sMHI, cMX | 77.6 ± 0.2 | 45.0 ± 0.8 | 72.3 ± 0.2 | 24.3 ± 1.0 | 57.5 ± 0.1 | 93.4
1, sM, cME | 80.0 ± 0.2 | 26.9 ± 0.2 | 62.2 ± 0.4 | 23.2 ± 0.4 | 54.3 ± 0.2 | 95.7
1, sMHI, cME | 76.5 ± 0.1 | 20.5 ± 0.8 | 63.6 ± 0.6 | 22.7 ± 0.7 | 52.2 ± 0.3 | 94.7
1, sWS, cME | 77.7 ± 0.1 | 26.3 ± 0.4 | 72.5 ± 0.2 | 22.0 ± 0.6 | 55.5 ± 0.1 | 95.5
2, sWS, cME | 78.5 ± 0.4 | 26.3 ± 0.6 | 69.5 ± 1.2 | 21.1 ± 0.5 | 55.2 ± 0.1 | 93.7
Table 5: Test results. We report the results from Shi et al. (2019) as Shi2019 and our reproduction as Shi2019∗. We report mean F1 and standard deviation for each system and mean recall and standard deviation for four phrasal categories. Our variants are specified using a representation embedding (d ∈ {1, 2}), a score function (sM: mean, sMHI: mean+HI, sWS: weighted sum), and a combine function (cMX: max, cME: mean).

Figure 3: Token embedding visualization for 2, sWS, cME (top) and 1, sWS, cME (bottom) colored by universal POS tags (Petrov et al., 2012).

Elephant (4633 occurrences): (a) A person riding an elephant and carrying gas cylinders. (b) An elephant is in some brown grass and some trees. (c) A captive elephant stands amid the branches of a tree in his park-like enclosure. (d) Two baby gray elephant standing in front of each other. (e) The older elephant is standing next to the younger elephant.
Giraffe (5546 occurrences): (a) Two giraffe standing next to each other on a grassy field. (b) A giraffe laying down on the dirt ground. (c) A herd of giraffe standing next to each other on a field. (d) A giraffe stands beneath a tree beside a marina. (e) A giraffe rests its neck on a bunch of rocks.
Pizza (8340 occurrences): (a) A woman holding a pizza up in the air. (b) A slice of pizza sitting on top of a white plate. (c) A pizza sitting on top of a plate covered in cheese and tomatoes. (d) Three pieces of sliced pizza on a wooden surface. (e) Some boxes of frozen pizzas are in the store. (f) A pizza topped with cheese and pepperoni with veggies. (g) A large pizza is in a cardboard box.
Snowboarder (922 occurrences): (a) A snowboarder practicing his moves at a snow facility. (b) A snowboarder is coming down a hill and some trees. (c) A snowboarder rests in the snow on the snowboard. (d) A snowboarder jumps off of a hill instead of just sliding down it. (e) A snowboarder is jumping in the air with their board held to the side. (f) The snowboard is almost as big as the snowboarder.
Figure 4: Image-caption pairs corresponding to noun tokens estimated as most concrete (bottom 5%) in our 1, sWS, cME variant. We also report the number of occurrences in the MSCOCO training set.

Metal (1630 occurrences): (a) A pink piece of metal with a bolt and nut on top. (b) Wilting roses and greenery in a metal vase. (c) A couple of street signs sitting on top of a metal pole. (d) Kitchen with wooden cabinets and a metal sink. (e) A metal toilet and some tissue in a bathroom.
Palm (321 occurrences): (a) A motorcycle sits parked in palm tree lined driveway. (b) Two people in helmets on a parked motorcycle and a small palm tree to the side of them. (c) Two flat bed work trucks among palm trees. (d) A cake with palm trees, and a person on a surf board. (e) A pink cellphone and white palm pilot on a table.
Picture (5932 occurrences): (a) A blurry picture of a cat standing on a toilet. (b) Picture of a church and its tall steeple. (c) The street sign at the intersection of Broadway and 7th avenue is the star of this picture. (d) A picture of some people playing with a frisbee. (e) A little girl sitting in the middle of a restaurant and smiling for picture.
Time (1184 occurrences): (a) A time lapse photo of a skier skiing down a hill. (b) A skaterboarder getting major air over some stairs during a night time shoot. (c) The man is trying to eat three hot dogs are the same time. (d) A boy playing a WII game at Christmas time. (e) A large display of a hand holding a cell phone to tell the time.
Figure 5: Image-caption pairs corresponding to noun tokens estimated as least concrete (bottom 5%) in our 1, sWS, cME variant. We also report the number of occurrences in the MSCOCO training set.

The remaining appendix pages show, for ten example captions, the gold parse tree alongside the constituency structures produced by the original VG-NSL and by Simplified VG-NSL (1, sMHI, cMX). The ten captions are: a girl smiles as she holds a kitty cat; a woman riding a bike down a street next to a divider; a bath tub sitting next to a sink in a bathroom; a parking meter on a street with cars; bathroom with a pedestal sink and claw foot bathtub; a claw foot tub is in a large bathroom near a pedestal sink; picture of a church and its tall steeple; a giraffe laying down on the dirt ground; a yellow and blue fire hydrant on the sidewalk; a bird with red eyes perched on top of a tree branch.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2636–2649 July 5 - 10, 2020. ©2020 Association for Computational Linguistics

A Batch Normalized Inference Network Keeps the KL Vanishing Away

Qile Zhu1, Wei Bi2, Xiaojiang Liu2, Xiyao Ma1, Xiaolin Li3 and Dapeng Wu1
1University of Florida, 2Tencent AI Lab, 3AI Institute, Tongdun Technology
{valder,maxiy,dpwu}@ufl.edu {victoriabi,kieranliu}@tencent.com [email protected]
*This work was done when Qile Zhu was an intern at Tencent AI Lab. Wei Bi is the corresponding author.

Abstract
The Variational Autoencoder (VAE) is widely used as a generative model to approximate a model's posterior on latent variables by combining amortized variational inference with deep neural networks. However, when paired with strong autoregressive decoders, VAE often converges to a degenerate local optimum known as "posterior collapse". Previous approaches consider the Kullback–Leibler divergence (KL) individually for each data point. We propose to let the KL follow a distribution across the whole dataset, and show that keeping the expectation of this distribution positive is sufficient to prevent posterior collapse. We then propose Batch Normalized VAE (BN-VAE), a simple but effective approach that sets a lower bound on this expectation by regularizing the distribution of the approximate posterior's parameters. Without introducing any new model component or modifying the objective, our approach avoids posterior collapse effectively and efficiently. We further show that the proposed BN-VAE can be extended to the conditional VAE (CVAE). Empirically, our approach surpasses strong autoregressive baselines on language modeling, text classification and dialogue generation, and rivals more complex approaches while keeping almost the same training time as VAE.

1 Introduction
The Variational Autoencoder (VAE) (Kingma and Welling, 2014; Rezende et al., 2014) is one of the most popular generative frameworks for modeling complex distributions. Different from the Autoencoder (AE), VAE provides a distribution-based latent representation of the data: it encodes the input x into a probability distribution over z and reconstructs the original input using samples from z. At inference time, VAE first samples the latent variable from the prior distribution and then feeds it into the decoder to generate an instance. VAE has been successfully applied in many NLP tasks, including topic modeling (Srivastava and Sutton, 2017; Miao et al., 2016; Zhu et al., 2018), language modeling (Bowman et al., 2016), text generation (Zhao et al., 2017b) and text classification (Xu et al., 2017). An autoregressive decoder (e.g., a recurrent neural network) is a common choice for modeling text data. However, when paired with strong autoregressive decoders such as LSTMs (Hochreiter and Schmidhuber, 1997) and trained under a conventional training strategy, VAE suffers from a well-known problem named posterior collapse or KL vanishing: the decoder learns to reconstruct the data independently of the latent variable z, and the KL vanishes to 0. Many convincing solutions have been proposed to prevent posterior collapse. Among them, fixing the KL as a positive constant is an important direction (Davidson et al., 2018; Guu et al., 2018; van den Oord et al., 2017; Xu and Durrett, 2018; Tomczak and Welling, 2018; Kingma et al., 2016; Razavi et al., 2019).
Some change the Gaussian prior to other distributions, e.g., a uniform prior (van den Oord et al., 2017; Zhao et al., 2018) or a von Mises-Fisher (vMF) distribution (Davidson et al., 2018; Guu et al., 2018; Xu and Durrett, 2018). However, these approaches force the same constant KL on every data point and lose the flexibility to allow different KL values for different data points (Razavi et al., 2019). Without changing the Gaussian prior, free-bits (Kingma et al., 2016) adds a threshold (the free bits) on the KL term in the ELBO objective and stops optimizing the KL part when its value is smaller than the threshold. Chen et al. (2017) point out that the free-bits objective is non-smooth and suffers from optimization challenges. δ-VAE (Razavi et al., 2019) sets the parameters in a specific range to achieve a positive KL value for every latent dimension, which may limit the model performance. Other work analyzes this problem from the perspective of optimization (Bowman et al., 2016; Zhao et al., 2017a; Chen et al., 2017; Alemi et al., 2018). Recently, He et al. (2019) observe that the inference network lags far behind the decoder during training. They propose to add additional training loops for the inference network only. Li et al. (2019) further propose to initialize the inference network with an encoder pretrained with an AE objective and then train the VAE with free-bits. However, these two methods are much slower than the original VAE. The limitation of the constant KL and the high cost of additional training motivate us to seek an approach that allows flexible modeling for different data points while remaining as fast as the original VAE. In this paper, instead of considering the KL individually for each data point, we let it follow a distribution across the whole dataset. We demonstrate that keeping a positive expectation of the KL's distribution is sufficient to prevent posterior collapse in practice. By regularizing the distribution of the approximate posterior's parameters, a positive lower bound on this expectation can be ensured. We then propose Batch Normalized VAE (BN-VAE), a simple yet effective approach to achieve this goal, and discuss the connections between BN-VAE and previous enhanced VAE variants. We further extend BN-VAE to the conditional VAE (CVAE). Finally, experimental results demonstrate the effectiveness of our approach on real applications, including language modeling, text classification and dialogue generation. Empirically, our approach surpasses strong autoregressive baselines and is competitive with more sophisticated approaches while being substantially more efficient. Code and data are available at https://github.com/valdersoul/bn-vae.

2 Background and Related Work
In this section, we first introduce the basic background of VAE, then discuss the lagging problem (He et al., 2019). Finally, we present more related work.

2.1 VAE Background
VAE (Kingma and Welling, 2014; Rezende et al., 2014) aims to learn a generative model p(x, z) that maximizes the marginal likelihood log p(x) on a dataset. The marginal likelihood cannot be calculated directly due to an intractable integral over the latent variable z. To solve this, VAE introduces a variational distribution qφ(z|x), parameterized by a neural network, to approximate the true posterior. It then optimizes the ELBO of log p(x):

L = \mathbb{E}_{q_\phi(z|x)}[\log p_\theta(x|z)] - KL(q_\phi(z|x) \| p(z)),   (1)

where φ represents the inference network and θ denotes the decoder.
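For a diagonal Gaussian posterior, both terms of Eq. 1 are easy to compute: the reconstruction term is estimated with reparameterized samples and the KL term has the closed form given in Eq. 2 below. The following sketch illustrates the computation in plain numpy; it is not the authors' released code, and `encode` / `decode_loglik` are placeholders for the real networks.

```python
import numpy as np

def gaussian_kl(mu, logvar):
    """Closed-form KL(N(mu, sigma^2) || N(0, I)) for a diagonal Gaussian (cf. Eq. 2)."""
    return 0.5 * np.sum(mu ** 2 + np.exp(logvar) - logvar - 1.0, axis=-1)

def elbo(x, encode, decode_loglik, rng=np.random):
    """One-sample Monte Carlo estimate of the ELBO in Eq. 1."""
    mu, logvar = encode(x)                        # parameters of q_phi(z|x)
    eps = rng.standard_normal(mu.shape)
    z = mu + np.exp(0.5 * logvar) * eps           # reparameterization trick
    return decode_loglik(x, z) - gaussian_kl(mu, logvar)
```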
The first term above is the reconstruction loss, while the second is the KL between the approximate posterior and the prior. The Gaussian distribution N(0, I) is the usual choice for the prior, and the KL between the approximate posterior qφ(z|x) and the prior p(z) can be computed as:

KL = \frac{1}{2}\sum_{i=1}^{n}(\mu_i^2 + \sigma_i^2 - \log\sigma_i^2 - 1),   (2)

where µi and σi are the mean and standard deviation of the approximate posterior for the i-th latent dimension, respectively. When the decoder is autoregressive, it can recover the data independently of the latent z (Bowman et al., 2016). The optimization then encourages the approximate posterior to approach the prior, which drives the KL to zero.

2.2 The Lagging Problem
Recently, He et al. (2019) analyze posterior collapse with the Gaussian prior from the perspective of training dynamics. The collapse is a local optimum of VAE in which qφ(z|x) = pθ(z|x) = p(z) for all inputs. They further define two partial collapse states: model collapse, when pθ(z|x) = p(z), and inference collapse, when qφ(z|x) = p(z). They observe that inference collapse always happens far before model collapse due to the existence of autoregressive decoders. Different from the model posterior, the inference network lacks guidance and easily collapses to the prior at the initial stage of training, and thus posterior collapse happens. Based on this understanding, they propose to aggressively optimize the inference network. However, this approach costs much more training time than the original VAE. In our work, we also employ the Gaussian prior and thus face the same lagging problem. Yet, our proposed approach does not require additional training effort: it can effectively avoid the lagging problem (Section 3.3) while keeping almost the same training efficiency as the original VAE (Section 5.1). More details can be found in Section 3.3.

2.3 Related Work
To prevent posterior collapse, we have already mentioned much work on changing the prior in the introduction. Besides these approaches, some work modifies the original training objective directly. For example, Bowman et al. (2016) introduce an annealing strategy that gradually increases the weight of the KL from 0 to 1 during a warm-up period. β-VAE (Higgins et al., 2017) treats the KL weight as a hyperparameter to constrain the minimum value of the KL. Alemi et al. (2017), on the other hand, set a fixed KL weight to control the mutual information between z and x. Tolstikhin et al. (2018) replace the KL with the Wasserstein distance. Zhao et al. (2017a) replace the KL with maximum mean discrepancy. Fang et al. (2019) introduce sample-based representations which lead to implicit latent features with an auxiliary network. Others change the training strategy. Kim et al. (2018) address the amortization gap (Cremer et al., 2018) in VAE and propose the Semi-Amortized VAE, which refines the inference network output with additional mean-field updates. Fu et al. (2019) propose a cyclical annealing schedule, which repeats the process of increasing β multiple times. There are various other approaches to tackling posterior collapse. For example, some researchers weaken the decoder by replacing the LSTM decoder with convolutional neural networks without autoregressive modeling (Semeniuta et al., 2017; Yang et al., 2017). Chen et al. (2017) feed a lossy representation of the data to the autoregressive decoder and force z to capture the information about the original input.
Inheriting this idea, some follow-up work adds direct connections between z and x (Zhao et al., 2017b; Dieng et al., 2019). Ma et al. (2019) introduce an additional regularization to learn diverse latent representations. δ-VAE (Razavi et al., 2019) and free-bits (Kingma et al., 2016) set a minimum KL value for each latent dimension to prevent posterior collapse. Srivastava and Sutton (2017, 2018) find that training VAE with ADAM (Kingma and Ba, 2014) and a high learning rate may cause the gradients to diverge early. Their explanation for the diverging behavior lies in the exponential curvature of the gradient from the inference network that produces the variance part of the approximate posterior. They then apply batch normalization to the variance part to solve this problem. We use simple SGD without momentum to train our model. Moreover, we apply batch normalization to the mean part of the inference network to keep the expectation of the KL's distribution positive, which is different from their work. We also note that Sønderby et al. (2016) utilize batch normalization in all fully connected layers with nonlinear activation functions to improve model performance. Different from this, our approach directly applies batch normalization to the parameters of the approximate posterior, which are the output of the inference network.

3 Batch-Normalized VAE
In this section, we first derive the expectation of the KL's distribution and show that keeping this expectation positive is enough to avoid posterior collapse. We then propose our regularization method on the parameters of the approximate posterior to ensure a positive lower bound on this expectation. We further discuss the difference between our approach and previous work.

3.1 Expectation of the KL's Distribution
Given an x ∈ X, the inference network parametrizes an n-dimensional diagonal Gaussian distribution with mean µ = fµ(x) and diagonal covariance Σ = diag(fΣ(x)), where fµ and fΣ are two neural networks. In practice, the ELBO is computed through a Monte Carlo estimate over b samples. The KL in Eq. 2 is then computed over b samples from X:

KL = \frac{1}{2b}\sum_{j=1}^{b}\sum_{i=1}^{n}(\mu_{ij}^2 + \sigma_{ij}^2 - \log\sigma_{ij}^2 - 1) = \frac{1}{2}\sum_{i=1}^{n}\Big(\frac{\sum_{j=1}^{b}\mu_{ij}^2}{b} + \frac{\sum_{j=1}^{b}\sigma_{ij}^2}{b} - \frac{\sum_{j=1}^{b}\log\sigma_{ij}^2}{b} - 1\Big).   (3)

As b gets larger, this empirical value approaches the mean of the KL across the whole dataset. To make use of this observation, we assume that µi and log σi² for each latent dimension i follow a certain distribution with a fixed mean and variance across the dataset, respectively. The distribution may vary between different latent dimensions. In this way, the KL becomes a distribution over the µi's and log σi²'s. From Eq. 3, we can see that Σj µij²/b is the sample mean of µi², which converges to E[µi²] = Var[µi] + E²[µi]. Similarly, Σj σij²/b converges to E[σi²], and Σj log σij²/b to E[log σi²]. Thus, we can derive the expectation of the KL's distribution as:

\mathbb{E}[KL] = \frac{1}{2}\sum_{i=1}^{n}(\mathrm{Var}[\mu_i] + \mathbb{E}^2[\mu_i] + \mathbb{E}[\sigma_i^2] - \mathbb{E}[\log\sigma_i^2] - 1) \ge \frac{1}{2}\sum_{i=1}^{n}(\mathrm{Var}[\mu_i] + \mathbb{E}^2[\mu_i]),   (4)

where E[σi² − log σi²] ≥ 1 since the minimum of e^x − x is 1. If we can guarantee a positive lower bound on E[KL], we can effectively prevent posterior collapse. Based on Eq. 4, the lower bound depends only on the number of latent dimensions n and the mean and variance of the µi. This motivates our idea: with proper regularization on the distributions of the µi, we can ensure a positive lower bound on E[KL].
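A quick simulation makes the bound in Eq. 4 concrete. The sketch below uses arbitrary illustrative distributions for µi and log σi² (not the paper's model): averaging the closed-form KL over many samples stays above ½ Σ (Var[µi] + E²[µi]).

```python
import numpy as np

rng = np.random.default_rng(0)
n, samples = 32, 100_000                      # latent size, number of "data points"

# Illustrative per-dimension distributions for mu_i and log sigma_i^2 (assumptions).
mu_mean = rng.uniform(-0.5, 0.5, size=n)      # E[mu_i]
mu_std = rng.uniform(0.2, 1.0, size=n)        # sqrt(Var[mu_i])
mu = mu_mean + mu_std * rng.standard_normal((samples, n))
logvar = 0.3 * rng.standard_normal((samples, n))

kl = 0.5 * np.sum(mu ** 2 + np.exp(logvar) - logvar - 1.0, axis=1)   # Eq. 2 per sample
bound = 0.5 * np.sum(mu_std ** 2 + mu_mean ** 2)                     # right-hand side of Eq. 4

print(f"empirical E[KL] = {kl.mean():.3f} >= lower bound {bound:.3f}")
```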
3.2 Normalizing Parameters of the Posterior
The remaining key problem is to construct distributions of the µi that result in a positive lower bound on E[KL] in Eq. 4. Here, we propose a simple and efficient way to accomplish this: apply a fixed batch normalization to the output µi of the inference network. Batch Normalization (BN) (Ioffe and Szegedy, 2015) is a widely used regularization technique in deep learning. It normalizes the output of neurons and makes the optimization landscape significantly smoother (Santurkar et al., 2018). Different from other tasks, which apply BN in the hidden layers to obtain fast and stable training, here we leverage BN as a tool to transform µi into a distribution with a fixed mean and variance. Mathematically, the regularized µi is written as:

\hat{\mu}_i = \gamma\,\frac{\mu_i - \mu_{\mathcal{B}_i}}{\sigma_{\mathcal{B}_i}} + \beta,   (5)

where µi and µ̂i are the means of the approximate posterior before and after BN, and µBi and σBi denote the mean and standard deviation of µi, estimated (with bias) within a batch of samples for each dimension independently. γ and β are the scale and shift parameters. Instead of using a learnable γ in Eq. 5, we use a fixed BN which freezes the scale γ. In this way, the distribution of µi has mean β and variance γ². β is a learnable parameter that keeps the distribution flexible. We now derive the lower bound on E[KL] under the fixed BN. With the fixed mean β and variance γ² for µi in hand, we get a new lower bound:

\mathbb{E}[KL] \ge \frac{1}{2}\sum_{i=1}^{n}(\mathrm{Var}[\mu_i] + \mathbb{E}^2[\mu_i]) = \frac{n\,(\gamma^2 + \beta^2)}{2}.   (6)

We can therefore easily control the lower bound on E[KL] by setting γ. Algorithm 1 shows the training process.

Algorithm 1 BN-VAE training.
1: Initialize φ and θ.
2: for i = 1, 2, · · · until convergence do
3:   Sample a mini-batch x.
4:   µ, log σ² = fφ(x).
5:   µ′ = BNγ,β(µ).
6:   Sample z ∼ N(µ′, σ²) and reconstruct x from fθ(z).
7:   Compute gradients gφ,θ ← ∇φ,θ L(x; φ, θ).
8:   Update φ, θ using gφ,θ.
9: end for
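In implementation terms, the fixed-scale batch normalization of Eq. 5 and Algorithm 1 amounts to one extra layer between the encoder's mean head and the sampler. Below is a minimal PyTorch-style sketch of this idea; it is an illustration rather than the authors' released implementation (https://github.com/valdersoul/bn-vae), and the `encoder` / `decoder` modules are assumed to be provided by the user.

```python
import torch
import torch.nn as nn

class BNVAE(nn.Module):
    """Sketch of BN-VAE: batch-normalize posterior means with a frozen scale gamma."""

    def __init__(self, encoder, decoder, latent_dim, gamma=0.6):
        super().__init__()
        self.encoder, self.decoder = encoder, decoder        # assumed user-provided modules
        self.bn = nn.BatchNorm1d(latent_dim, affine=True)
        with torch.no_grad():
            self.bn.weight.fill_(gamma)                      # gamma is fixed ...
        self.bn.weight.requires_grad = False                 # ... while beta (bn.bias) stays learnable

    def forward(self, x):
        mu_raw, logvar = self.encoder(x)                     # q_phi(z|x) parameters
        mu = self.bn(mu_raw)                                 # Eq. 5: mu now has mean beta, variance gamma^2
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        recon_loglik = self.decoder(x, z)                    # log p_theta(x|z), summed over tokens
        kl = 0.5 * torch.sum(mu.pow(2) + logvar.exp() - logvar - 1.0, dim=-1)
        return -(recon_loglik - kl).mean()                   # negative ELBO to minimize
```

With γ frozen, Eq. 6 guarantees E[KL] ≥ n(γ² + β²)/2 regardless of how aggressively the decoder fits the data, which is the whole point of the method.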
3.3 Connections with Previous Approaches
Constructing a positive KL: Both free-bits (Kingma et al., 2016) and δ-VAE (Razavi et al., 2019) set a threshold on the KL value. Free-bits changes the KL term in the ELBO to a hinge-loss term, \sum_{i}^{n}\max(\lambda, KL(q_\phi(z_i|x)\|p(z_i))). Another variant applies the threshold to the entire sum rather than to each dimension. When trained with the free-bits objective, the model stops driving down the KL once it is already below λ. However, Chen et al. (2017) point out that the free-bits objective is non-smooth and suffers from optimization difficulties. Our approach does not face this problem since we use the original ELBO objective. δ-VAE sets a target rate of δ for each latent dimension by constraining the mean and variance of the approximate posterior:

\sigma_q = \sigma_q^l + (\sigma_q^u - \sigma_q^l)\,\frac{1}{1 + e^{-q_\phi(x)}},   (7)
\mu = \sqrt{2\delta + 1 + \ln(\sigma_q^2) - \sigma_q^2} + \max(0, \mu_\phi(x)),   (8)

where [\sigma_q^l, \sigma_q^u] is the feasible interval for σq, obtained by solving ln(σq²) − σq² + 2δ + 1 ≥ 0. Although δ-VAE can ensure a minimum KL value, it limits the model performance because the parameters are constrained to this interval. Our approach only constrains the distribution of µ, which is more flexible than δ-VAE. Experiments further show that our approach surpasses both free-bits and δ-VAE.

Reducing inference lag: As we focus on the setting of the conventional Gaussian prior, the lagging problem mentioned in Section 2.2 is crucial. To this point, it is instructive to analyze an alternative form of the ELBO:

L = \log p_\theta(x) - KL(q_\phi(z|x) \| p_\theta(z|x)).   (9)

Under this view, the only goal of the approximate posterior qφ(z|x) is to match the model posterior pθ(z|x). We examine the ability of our approach to reduce inference lag using the same synthetic experiment as He et al. (2019); details can be found in Section 1 of the Appendix. The synthetic experiment indicates that our regularization helps rebalance the optimization between inference and generation and finally overcomes posterior collapse. We also prefer a large γ, because a small γ pushes the approximate posterior towards the prior. More details on the synthetic experiment can be found in the Appendix.

4 Extension to CVAE
Given an observation x and its output y, CVAE (Sohn et al., 2015; Zhao et al., 2017b) models the conditional distribution p(y|x). The variational lower bound of the conditional log-likelihood is:

L = \mathbb{E}_{q_\phi(z|x,y)}[\log p_\kappa(y|x,z)] - KL(q_\phi(z|x,y) \| p_\theta(z|x)) \le \log p(y|x).   (10)

Different from VAE, the prior pθ(z|x) in CVAE is not fixed; it is also parametrized by a neural network. It is possible to apply another BN to the mean of the prior with a different γ so that the expectation of the KL becomes a constant. However, the resulting lower bound is uncontrollable because the density of µ1 + µ2 is the convolution of their densities, which is intractable.¹ To overcome this issue, we propose to constrain the prior with a fixed distribution. We achieve this by adding another KL term between the prior and a known Gaussian distribution r(z), i.e., KL(pθ(z|x)||r(z)). Instead of optimizing the ELBO in Eq. 10, we optimize a lower bound of the ELBO for CVAE:

L' = L - KL(p_\theta(z|x) \| r(z)) \le L.   (11)

The KL term in the new bound is the sum of KL(qφ(z|x, y)||pθ(z|x)) and KL(pθ(z|x)||r(z)), which can be computed as:

KL = \frac{1}{2}\sum_{i=1}^{n}\Big(\frac{\sigma_{q_i}^2 + (\mu_{q_i} - \mu_{p_i})^2}{\sigma_{p_i}^2} + \sigma_{p_i}^2 + \mu_{p_i}^2 - \log\sigma_{q_i}^2 - 1\Big),   (12)

where σq, µq and σp, µp are the parameters of qφ and pθ, respectively, and n denotes the hidden size. This KL term vanishes to 0 when and only when both qφ and pθ collapse to r(z), the standard normal distribution. As explained in Section 3.2, the KL cannot be 0 once we apply BN in qφ. We further prove that when qφ collapses to pθ, the KL term is not at a minimum (details in Section 2 of the Appendix), so KL(qφ(z|x, y)||pθ(z|x)) will not be 0. In this way, we can avoid posterior collapse in CVAE. Algorithm 2 shows the training details.

Algorithm 2 BN-CVAE training.
1: Initialize φ, θ and κ.
2: for i = 1, 2, · · · until convergence do
3:   Sample a mini-batch x, y.
4:   µq, log σq² = fφ(x, y) and µp, log σp² = fθ(x).
5:   µ′q = BNγ,β(µq).
6:   Sample z ∼ N(µ′q, σq²) and reconstruct y from fκ(z, x).
7:   Compute gradients gφ,θ,κ ← ∇φ,θ,κ L′.
8:   Update φ, θ, κ using gφ,θ,κ.
9: end for

¹We performed an empirical study of this alternative and found that the neural network can always find a small KL value in this situation.
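Both KL terms in Eq. 11 have closed forms for diagonal Gaussians, so the combined term of Eq. 12 can be computed without sampling. The sketch below sums the two analytic KLs; it is an illustration under the diagonal-Gaussian assumption rather than the paper's exact code.

```python
import torch

def bn_cvae_kl(mu_q, logvar_q, mu_p, logvar_p):
    """KL(q(z|x,y) || p(z|x)) + KL(p(z|x) || N(0, I)) for diagonal Gaussians (cf. Eqs. 11-12)."""
    var_q, var_p = logvar_q.exp(), logvar_p.exp()
    # KL between the approximate posterior and the learned conditional prior.
    kl_q_p = 0.5 * torch.sum(
        logvar_p - logvar_q + (var_q + (mu_q - mu_p).pow(2)) / var_p - 1.0, dim=-1)
    # KL between the conditional prior and the fixed reference r(z) = N(0, I).
    kl_p_r = 0.5 * torch.sum(mu_p.pow(2) + var_p - logvar_p - 1.0, dim=-1)
    return kl_q_p + kl_p_r
```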
Model | Yahoo NLL | Yahoo KL | Yahoo MI | Yahoo AU | Yelp NLL | Yelp KL | Yelp MI | Yelp AU
Without a pretrained AE encoder
CNN-VAE | ≤332.1 | 10.0 | – | – | ≤359.1 | 7.6 | – | –
LSTM-LM | 328 | – | – | – | 351.1 | – | – | –
VAE | 328.6 | 0.0 | 0.0 | 0.0 | 357.9 | 0.0 | 0.0 | 0.0
β-VAE (0.4) | 328.7 | 6.3 | 2.8 | 8.0 | 358.2 | 4.2 | 2.0 | 4.2
cyclic ∗ | 330.6 | 2.1 | 2.0 | 2.3 | 359.5 | 2.0 | 1.9 | 4.1
Skip-VAE ∗ | 328.5 | 2.3 | 1.3 | 8.1 | 357.6 | 1.9 | 1.0 | 7.4
SA-VAE | 327.2 | 5.2 | 2.7 | 9.8 | 355.9 | 2.8 | 1.7 | 8.4
Agg-VAE | 326.7 | 5.7 | 2.9 | 15.0 | 355.9 | 3.8 | 2.4 | 11.3
FB (4) | 331.0 | 4.1 | 3.8 | 3.0 | 359.2 | 4.0 | 1.9 | 32.0
FB (5) | 330.6 | 5.7 | 2.0 | 3.0 | 359.8 | 4.9 | 1.3 | 32.0
δ-VAE (0.1) ∗ | 330.7 | 3.2 | 0.0 | 0.0 | 359.8 | 3.2 | 0.0 | 0.0
vMF-VAE (13) ∗ | 327.4 | 2.0 | – | 32.0 | 357.5 | 2.0 | – | 32.0
BN-VAE (0.6) ∗ | 326.7 | 6.2 | 5.6 | 32.0 | 356.5 | 6.5 | 5.4 | 32.0
BN-VAE (0.7) ∗ | 327.4 | 8.8 | 7.4 | 32.0 | 355.9 | 9.1 | 7.4 | 32.0
With a pretrained AE encoder
cyclic ∗ | 333.1 | 25.8 | 9.1 | 32.0 | 361.5 | 20.5 | 9.3 | 32.0
FB (4) ∗ | 326.2 | 8.1 | 6.8 | 32.0 | 356.0 | 7.6 | 6.6 | 32.0
δ-VAE (0.15) ∗ | 331.0 | 5.6 | 1.1 | 11.2 | 359.4 | 5.2 | 0.5 | 5.9
vMF-VAE (13) ∗ | 328.4 | 2.0 | – | 32.0 | 357.0 | 2.0 | – | 32.0
BN-VAE (0.6) ∗ | 326.7 | 6.4 | 5.8 | 32.0 | 355.5 | 6.6 | 5.9 | 32.0
BN-VAE (0.7) ∗ | 326.5 | 9.1 | 7.6 | 32.0 | 355.7 | 9.1 | 7.5 | 32.0
Table 1: Results on Yahoo and Yelp datasets. We report mean values across 5 different random runs. ∗ indicates results from our experiments, while others are from He et al. (2019); Li et al. (2019). We only show the best performance of every model for each dataset. More results for various parameters can be found in the Appendix.

5 Experiments
5.1 VAE for Language Modeling
Setup: We test our approach on two benchmark datasets: the Yelp and Yahoo corpora (Yang et al., 2017). We use a Gaussian prior N(0, I), and the approximate posterior is a diagonal Gaussian. Following previous work (Burda et al., 2016; He et al., 2019), we report the negative log likelihood (NLL) estimated from 500 importance-weighted samples, which provides a tighter lower bound than the ELBO and carries the same information as the perplexity (PPL). Besides the NLL, we also report the KL, the mutual information (MI) Iq (Alemi et al., 2017) and the number of active units (AU) (Burda et al., 2016) in the latent space. Iq can be calculated as:

I_q = \mathbb{E}_{p_d(x)}[KL(q_\phi(z|x)\|p(z))] - KL(q_\phi(z)\|p(z)),   (13)

where pd(x) is the empirical distribution. The aggregated posterior qφ(z) = E_{pd(x)}[qφ(z|x)] and KL(qφ(z)||p(z)) can be approximated with Monte Carlo estimates. The AU is measured as Az = Cov_x(E_{z∼q(z|x)}[z]). We set the threshold to 0.01, which means that unit i is active if Az_i > 0.01.
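The MI and AU diagnostics can be estimated directly from posterior parameters computed on held-out data. The sketch below is one way to do it, assuming `mu` and `logvar` arrays of shape (N, D) exported from a trained encoder; the Monte Carlo estimator for Eq. 13 uses the usual mixture approximation of the aggregated posterior and is not code released with the paper.

```python
import numpy as np
from scipy.special import logsumexp

def diag_gaussian_logpdf(z, mu, logvar):
    # log N(z; mu, diag(exp(logvar))), summed over the last axis
    return -0.5 * np.sum(logvar + np.log(2 * np.pi) + (z - mu) ** 2 / np.exp(logvar), axis=-1)

def mutual_information(mu, logvar, rng=np.random):
    """Monte Carlo estimate of I_q in Eq. 13 from N held-out posteriors (mu, logvar: (N, D))."""
    N, D = mu.shape
    z = mu + np.exp(0.5 * logvar) * rng.standard_normal((N, D))        # one z per example
    avg_kl = np.mean(0.5 * np.sum(mu ** 2 + np.exp(logvar) - logvar - 1.0, axis=1))
    # log q(z) under the aggregated posterior, approximated by the mixture of the N posteriors
    log_qz = logsumexp(diag_gaussian_logpdf(z[:, None, :], mu[None], logvar[None]), axis=1) - np.log(N)
    log_pz = diag_gaussian_logpdf(z, np.zeros(D), np.zeros(D))
    return avg_kl - np.mean(log_qz - log_pz)                            # E[KL] - KL(q(z)||p(z))

def active_units(mu, threshold=0.01):
    """A_z = Cov_x(E_q[z]); a unit counts as active if its variance exceeds the threshold."""
    return int(np.sum(np.var(mu, axis=0) > threshold))
```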
Configurations: We use a 512-dimensional word embedding layer for both datasets. For the encoder and the decoder, a single-layer LSTM with hidden size 1024 is used. We use z to generate the initial state of the decoder, following Kim et al. (2018); He et al. (2019); Li et al. (2019). To optimize the objective, we use mini-batch SGD with 32 samples per batch. We use one NVIDIA Tesla V100 for the experiments. For all experiments, we use a linear annealing strategy that increases the KL weight from 0 to 1 over the first 10 epochs where applicable.

Compared methods: We compare our model with several strong baselines and methods that held the previous state-of-the-art performance on text modeling benchmarks.
• Baselines, including neural autoregressive models (the LSTM language model).
• Methods that weaken the decoder: CNN-VAE (Yang et al., 2017).
• Methods with a modified model structure: Skip-VAE (Dieng et al., 2019).
• Methods with a modified training objective:
  – VAE with annealing (Bowman et al., 2016).
  – β-VAE (Higgins et al., 2017).
  – Cyclic annealing (Fu et al., 2019); we use the default cyclic schedule.
• Methods with a lower bound on the KL value:
  – Free-bits (FB) (Kingma et al., 2016).
  – δ-VAE (Razavi et al., 2019).
  – vMF-VAE (Xu and Durrett, 2018).
• Methods with a modified training strategy:
  – Semi-amortized VAE (SA-VAE) (Kim et al., 2018).
  – VAE with aggressive training (Agg-VAE) (He et al., 2019).
  – FB with a pretrained inference network (AE+FB) (Fu et al., 2019).

Main results: Table 1 shows the results. We split the results into two settings, one for models with a pretrained inference network and one without. Our approach achieves the best NLL in the setting without a pretrained inference network on both datasets and is competitive in the setting with a pretrained encoder. Moreover, we observe that:
• δ-VAE does not perform well in either setting, which shows that constraining the parameters to a small interval is harmful to the model. In vMF-VAE, data points share the same KL value. Our approach is more flexible and achieves better performance.
• Although Agg-VAE and SA-VAE both perform well, they require additional updates of the inference network and cost more training effort, as validated in the next part.
• Cyclic annealing with a pretrained inference network achieves the highest KL, but it may not be a good generative model.
• Paired with a pretrained inference network, all methods except cyclic annealing boost performance to some extent. This indicates that the lagging problem (He et al., 2019) matters in VAE training. When leveraging the pretrained inference network, our approach shows the smallest performance gap compared with the other methods; in other words, our approach alleviates the lagging problem efficiently.

Training time: Table 2 shows the training time (until convergence) and the ratio relative to the basic VAE for our approach and the other three best models in Table 1. SA-VAE is about 12 times slower than our approach due to the local update for each data point. Agg-VAE is 2-4 times slower than ours because it requires additional training of the inference network. AE+FB needs to train an autoencoder before the VAE. Our approach is fast since we only add one batch normalization layer, so the training cost is almost the same as the basic VAE. More results about the training behavior can be found in Section 3 of the Appendix.

Model | Yahoo Hours | Yahoo Ratio | Yelp Hours | Yelp Ratio
VAE | 3.83 | 1.00 | 4.50 | 1.00
SA-VAE | 52.99 | 12.80 | 59.37 | 12.64
Agg-VAE | 11.76 | 2.84 | 21.44 | 4.56
AE+FB | 7.70 | 2.01 | 9.22 | 2.05
BN-VAE | 3.98 | 1.04 | 4.60 | 1.02
Table 2: Comparison of training time to convergence. We report both the absolute hours and the relative speed.

Performance on a downstream task - Text classification: The goal of VAE is to learn a good representation of the data for downstream tasks. Here, we evaluate the quality of the latent representations by training a one-layer linear classifier on the mean of the posterior distribution. We use a downsampled version of the Yelp sentiment dataset (Shen et al., 2017). Li et al. (2019) further sampled various amounts of labeled data to train the classifier; for a fair comparison, we use the same samples as Li et al. (2019). Results are shown in Table 3. Our approach achieves the best accuracy in all settings. With 10k training samples, all methods obtain good results. However, with only 100 training samples, accuracy varies a lot across methods. The text classification task shows that our approach learns a good latent representation even without a pretrained inference network.

#label | 100 | 500 | 1k | 2k | 10k
AE | 81.1 | 86.2 | 90.3 | 89.4 | 94.1
VAE | 66.1 | 82.6 | 88.4 | 89.6 | 94.5
δ-VAE | 61.8 | 61.9 | 62.6 | 62.9 | 93.8
Agg-VAE | 80.9 | 85.9 | 88.8 | 90.6 | 93.7
cyclic | 62.4 | 75.5 | 80.3 | 88.7 | 94.2
FB (9) | 79.8 | 84.4 | 88.8 | 91.12 | 94.7
AE+FB (6) | 87.6 | 90.2 | 92.0 | 93.4 | 94.9
BN-VAE (0.7) | 88.8 | 91.6 | 92.5 | 94.1 | 95.4
Table 3: Accuracy on Yelp.
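The probing protocol above is simple to reproduce: freeze the trained encoder, take the posterior mean of each sentence as its feature vector, and fit a linear classifier. A hedged sketch using scikit-learn follows; the variable names are placeholders for features exported from any of the compared models.

```python
from sklearn.linear_model import LogisticRegression

def linear_probe(train_mu, train_labels, test_mu, test_labels):
    """Train a one-layer linear classifier on posterior means and report accuracy."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(train_mu, train_labels)             # train_mu: (N_train, D) posterior means
    return clf.score(test_mu, test_labels)      # mean accuracy on the held-out split
```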
5.2 CVAE for Dialogue Generation
Setup: For dialogue generation, we test our approach in the CVAE setting. Following previous work (Zhao et al., 2017b), we use the Switchboard (SW) Corpus (Godfrey and Holliman, 1997), which contains 2400 two-sided telephone conversations. We use a bidirectional GRU with hidden size 300 to encode each utterance and a one-layer GRU with hidden size 600 to encode the previous k-1 utterances as the context. The response decoder is a one-layer GRU with hidden size 400. The latent representation z has a size of 200. We use the evaluation metrics from Zhao et al. (2017b): (1) smoothed sentence-level BLEU (Chen and Cherry, 2014); (2) cosine distance of bag-of-words embeddings, a simple method to obtain sentence embeddings. We use pretrained GloVe embeddings (Pennington et al., 2014) and denote the averaging method as A-bow and the extreme method as E-bow. Higher values indicate more plausible responses. We compare our approach with CVAE and CVAE with a bag-of-words (BOW) loss (Zhao et al., 2017b), which requires the decoder in the generation network to predict the bag of words of the response y based on z.

Automatic evaluation: Table 4 shows the results of these three approaches. From the KL values, we find that CVAE suffers from posterior collapse while CVAE (BOW) and our approach avoid it effectively. For BLEU-4, we observe the same phenomenon as previous work (Fu et al., 2019; Zhao et al., 2017b): CVAE is slightly better than the others. This is because CVAE, with its collapsed posterior, tends to repeatedly generate the most likely and safe responses. In precision, the three models do not differ much. However, CVAE (BOW) and our BN-VAE outperform CVAE in recall by a large margin. This indicates that BN-VAE can also produce diverse responses of good quality, like CVAE (BOW).

Model | CVAE | CVAE (BOW) | BN-VAE
PPL | 36.40 | 24.49 | 30.67
KL | 0.15 | 9.30 | 5.18
BLEU-4 | 10.23 | 8.56 | 8.64
A-bow Prec | 95.87 | 96.89 | 96.64
A-bow Recall | 90.93 | 93.95 | 94.43
E-bow Prec | 86.26 | 83.55 | 84.69
E-bow Recall | 77.91 | 81.13 | 81.75
Table 4: Comparison on dialogue generation.
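The bag-of-words embedding metrics reduce each sentence to a single vector (mean pooling for A-bow, per-dimension extreme pooling for E-bow) and then compare hypothesis and reference by cosine similarity. A hedged sketch of the two pooling operators and the similarity is shown below; the greedy precision/recall aggregation over multiple sampled responses follows Zhao et al. (2017b) and is omitted, and `embed` is an assumed word-to-GloVe lookup.

```python
import numpy as np

def a_bow(tokens, embed):
    """Average pooling over word embeddings."""
    vecs = np.stack([embed(w) for w in tokens])
    return vecs.mean(axis=0)

def e_bow(tokens, embed):
    """Extreme pooling: keep, per dimension, the value with the largest magnitude."""
    vecs = np.stack([embed(w) for w in tokens])
    idx = np.abs(vecs).argmax(axis=0)
    return vecs[idx, np.arange(vecs.shape[1])]

def cosine(u, v, eps=1e-8):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + eps))
```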
Human evaluation: We conduct the human evaluation by asking five annotators from a commercial annotation company to grade 200 sampled conversations on fluency, relevance and informativeness on a scale of 1-3 (see Section 4 of the Appendix for more details on the criteria). We also report the proportion of acceptable/high scores (≥2 and =3) on each metric. Table 5 shows the annotation results. Overall, our approach beats the other two compared methods in relevance and fluency with more informative responses. Our approach also has the largest proportion of responses whose scores are High. This indicates that our model can produce more meaningful and relevant responses than the other two.

Model | Fluency Avg | Fluency #Accept | Fluency #High | Relevance Avg | Relevance #Accept | Relevance #High | Informativeness Avg | Informativeness #Accept | Informativeness #High
CVAE | 2.11 (0.58) | 87% | 23% | 1.90 (0.49) | 82% | 8% | 1.39 (0.59) | 34% | 5%
CVAE (BOW) | 2.08 (0.73) | 84% | 23% | 1.86 (0.58) | 75% | 11% | 1.54 (0.65) | 46% | 8%
BN-CVAE | 2.16 (0.71) | 88% | 27% | 1.92 (0.67) | 80% | 12% | 1.54 (0.67) | 43% | 10%
Table 5: Human evaluation results. Numbers in parentheses are the corresponding variance on 200 test samples.

Case study: Table 6 shows sampled responses generated by the three methods (more can be found in the Appendix). By maintaining a reasonable KL, responses generated by our approach are more relevant to the query and more diverse than those of the other two. We test the three methods in the simplest setting of dialogue generation. Note that the focus of this work is to improve CVAE itself by avoiding its KL vanishing problem, not to chase state-of-the-art dialogue generation performance. To further improve the quality of generated responses, we can enhance our approach by incorporating knowledge such as dialogue acts (Zhao et al., 2017b), external facts (Ghazvininejad et al., 2018) and personal profiles (Zhang et al., 2018).

Topic: ETHICS IN GOVERNMENT
Context: have trouble drawing lines as to what's illegal and what's not
Target (statement): well i mean the other problem is that they're always up for
CVAE: 1. yeah 2. yeah 3. yeah
CVAE (BOW): 1. yeah 2. oh yeah they're not 3. no it's not too bad
BN-CVAE: 1. it's not a country 2. it is the same thing that's what i think is about the state is a state 3. yeah it's
Table 6: Sampled generated responses. Only the last sentence in the context is shown here.

6 Conclusions and Future Work
In this paper, we tackle the posterior collapse problem that arises when VAE is paired with autoregressive decoders. Instead of considering the KL individually, we make it follow a distribution DKL and show that keeping the expectation of DKL positive is sufficient to prevent posterior collapse. We propose Batch Normalized VAE (BN-VAE), a simple but effective approach that sets a lower bound on DKL by regularizing the approximate posterior's parameters. Our approach can also avoid the recently observed lagging problem efficiently without additional training effort. We show that our approach can easily be extended to CVAE. We test our approach on three real applications: language modeling, text classification and dialogue generation. Experiments show that our approach outperforms strong baselines and is competitive with more complex methods while being substantially faster. We use the Gaussian prior as the running example to introduce our method in this work. The key requirement for our approach to be applicable is that we can obtain a formula for the expectation of the KL. However, it is hard to obtain such a formula for some stronger or more sophisticated priors, e.g., the Dirichlet prior. For these distributions, we can approximate them with Gaussian distributions (as in Srivastava and Sutton (2017)) and batch-normalize the corresponding parameters. Further study in this direction may be interesting.

References
Alexander Alemi, Ben Poole, Ian Fischer, Joshua Dillon, Rif A Saurous, and Kevin Murphy. 2018. Fixing a broken ELBO. In ICML.
Alexander A. Alemi, Ian Fischer, Joshua V. Dillon, and Kevin Murphy. 2017. Deep variational information bottleneck. In ICLR.
Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew Dai, Rafal Jozefowicz, and Samy Bengio. 2016. Generating sentences from a continuous space. In CoNLL.
Yuri Burda, Roger B. Grosse, and Ruslan R. Salakhutdinov. 2016. Importance weighted autoencoders. In ICLR.
Boxing Chen and Colin Cherry. 2014. A systematic comparison of smoothing techniques for sentence-level BLEU.
In Proceedings of the Ninth Workshop on Statistical Machine Translation.
Xi Chen, Diederik P. Kingma, Tim Salimans, Yan Duan, Prafulla Dhariwal, John Schulman, Ilya Sutskever, and Pieter Abbeel. 2017. Variational lossy autoencoder. In ICLR.
Chris Cremer, Xuechen Li, and David Duvenaud. 2018. Inference suboptimality in variational autoencoders. In ICML.
Tim R. Davidson, Luca Falorsi, Nicola De Cao, Thomas Kipf, and Jakub M. Tomczak. 2018. Hyperspherical variational auto-encoders. In UAI.
Adji B. Dieng, Yoon Kim, Alexander M. Rush, and David M. Blei. 2019. Avoiding latent variable collapse with generative skip models. In AISTATS.
Le Fang, Chunyuan Li, Jianfeng Gao, Wen Dong, and Changyou Chen. 2019. Implicit deep latent variable models for text generation. In EMNLP-IJCNLP.
Hao Fu, Chunyuan Li, Xiaodong Liu, Jianfeng Gao, Asli Celikyilmaz, and Lawrence Carin. 2019. Cyclical annealing schedule: A simple approach to mitigating KL vanishing. In NAACL.
Marjan Ghazvininejad, Chris Brockett, Ming-Wei Chang, Bill Dolan, Jianfeng Gao, Wen-tau Yih, and Michel Galley. 2018. A knowledge-grounded neural conversation model. In AAAI.
J. Godfrey and E. Holliman. 1997. Switchboard-1 release 2: Linguistic data consortium. In SWITCHBOARD: A User's Manual.
Kelvin Guu, Tatsunori B. Hashimoto, Yonatan Oren, and Percy Liang. 2018. Generating sentences by editing prototypes. In Transactions of the Association of Computational Linguistics. MIT Press.
Junxian He, Daniel Spokoyny, Graham Neubig, and Taylor Berg-Kirkpatrick. 2019. Lagging inference networks and posterior collapse in variational autoencoders. In ICLR.
Irina Higgins, Loïc Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. 2017. beta-VAE: Learning basic visual concepts with a constrained variational framework. In ICLR.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. In Neural Computation. MIT Press.
Sergey Ioffe and Christian Szegedy. 2015. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML.
Yoon Kim, Sam Wiseman, Andrew C. Miller, David Sontag, and Alexander M. Rush. 2018. Semi-amortized variational autoencoders. In ICML.
Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. In ICLR.
Diederik P. Kingma and Max Welling. 2014. Auto-encoding variational Bayes. In ICLR.
Durk P. Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, and Max Welling. 2016. Improved variational inference with inverse autoregressive flow. In NeurIPS.
Bohan Li, Junxian He, Graham Neubig, Taylor Berg-Kirkpatrick, and Yiming Yang. 2019. A surprisingly effective fix for deep latent variable modeling of text. In EMNLP-IJCNLP.
Xuezhe Ma, Chunting Zhou, and Eduard Hovy. 2019. MAE: Mutual posterior-divergence regularization for variational autoencoders. In ICLR.
Yishu Miao, Lei Yu, and Phil Blunsom. 2016. Neural variational inference for text processing. In ICML.
Aaron van den Oord, Oriol Vinyals, and Koray Kavukcuoglu. 2017. Neural discrete representation learning. In NeurIPS.
Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In EMNLP.
Ali Razavi, Aaron van den Oord, Ben Poole, and Oriol Vinyals. 2019. Preventing posterior collapse with delta-VAEs. In ICLR.
Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. 2014. Stochastic backpropagation and approximate inference in deep generative models. In ICML.
Shibani Santurkar, Dimitris Tsipras, Andrew Ilyas, and Aleksander Madry. 2018. How does batch normalization help optimization? In NeurIPS.
Stanislau Semeniuta, Aliaksei Severyn, and Erhardt Barth. 2017. A hybrid convolutional variational autoencoder for text generation. In EMNLP.
Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2017. Style transfer from non-parallel text by cross-alignment. In NeurIPS.
Kihyuk Sohn, Honglak Lee, and Xinchen Yan. 2015. Learning structured output representation using deep conditional generative models. In NeurIPS.
Casper Kaae Sønderby, Tapani Raiko, Lars Maaløe, Søren Kaae Sønderby, and Ole Winther. 2016. Ladder variational autoencoders. In NeurIPS.
Akash Srivastava and Charles Sutton. 2017. Autoencoding variational inference for topic models. In ICLR.
Akash Srivastava and Charles Sutton. 2018. Variational inference in pachinko allocation machines. In arXiv preprint arXiv:1804.07944.
Ilya Tolstikhin, Olivier Bousquet, Sylvain Gelly, and Bernhard Schoelkopf. 2018. Wasserstein auto-encoders. In ICLR.
Jakub M. Tomczak and Max Welling. 2018. VAE with a VampPrior. In AISTATS.
Jiacheng Xu and Greg Durrett. 2018. Spherical latent spaces for stable variational autoencoders. In EMNLP.
Weidi Xu, Haoze Sun, Chao Deng, and Ying Tan. 2017. Variational autoencoder for semi-supervised text classification. In AAAI.
Zichao Yang, Zhiting Hu, Ruslan Salakhutdinov, and Taylor Berg-Kirkpatrick. 2017. Improved variational autoencoders for text modeling using dilated convolutions. In ICML.
Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Personalizing dialogue agents: I have a dog, do you have pets too? In ACL.
Shengjia Zhao, Jiaming Song, and Stefano Ermon. 2017a. InfoVAE: Information maximizing variational autoencoders. In arXiv preprint arXiv:1706.02262.
Tiancheng Zhao, Kyusong Lee, and Maxine Eskenazi. 2018. Unsupervised discrete sentence representation learning for interpretable neural dialog generation. In ACL.
Tiancheng Zhao, Ran Zhao, and Maxine Eskenazi. 2017b. Learning discourse-level diversity for neural dialog models using conditional variational autoencoders. In ACL.
Qile Zhu, Zheng Feng, and Xiaolin Li. 2018. GraphBTM: Graph enhanced autoencoded variational inference for biterm topic model. In EMNLP.

A Appendix

A.1 Experiments on Synthetic Data
We follow Agg-VAE and construct synthetic data to validate whether our approach can avoid the lagging problem. The VAE used in this synthetic task has an LSTM encoder and an LSTM decoder. We use a scalar latent variable because we need to compute µx,θ, which is approximated by discretizing pθ(z|x). To visualize the training progress, we sample 500 data points from the validation set and show them in the mean space. We plot the mean value of the approximate posterior and of the model posterior during training for the basic VAE and BN-VAE. As shown in the first column of Fig. 1, all points have zero model-posterior mean (the x-axis) at the beginning of training, which indicates that z and x are independent. For the basic VAE, points start to spread along the x-axis during training while sharing almost the same y value, since the model posterior pθ(z|x) is well learned with the help of the autoregressive decoder. However, the inference posterior qφ(z|x) lags behind pθ(z|x) and collapses to the prior in the end.
Our regularization, implemented via BN, instead pushes the inference posterior qφ(z|x) away from the prior p(z) at the initial training stage and forces qφ(z|x) to catch up with pθ(z|x) in order to minimize KL(qφ(z|x)||pθ(z|x)) in Eq. 9. As shown in the second row of Fig. 1, points spread in both directions and move towards the diagonal. We also report results for different γ with different batch sizes (32 in Fig. 1). Fig. 2 shows the training dynamics. Both settings of γ avoid posterior collapse effectively. A larger γ produces more diverse µ's, which spread along the diagonal, whereas a small γ results in a small variance for the distribution of µ, so the µ's in the bottom row stay closer to the origin (the mean of the distribution). When γ is 0, posterior collapse happens. Different batch sizes do not differ much, so 32 is a decent choice. An intuitive improvement of our method is to automatically learn a different γ for each latent dimension, which we leave for future work.

A.2 Proof in CVAE
The KL can be computed as:

KL = \frac{1}{2}\sum_{i=1}^{n}\Big(\frac{\sigma_{q_i}^2 + (\mu_{q_i} - \mu_{p_i})^2}{\sigma_{p_i}^2} + \sigma_{p_i}^2 + \mu_{p_i}^2 - \log\sigma_{q_i}^2 - 1\Big).   (14)

We need to prove that this KL does not reach its minimum when µpi equals µqi and σpi equals σqi. We take hidden size 1 as an example. The bivariate function of µpi and σpi is:

f(\mu_{p_i}, \sigma_{p_i}) = \frac{\sigma_{q_i}^2 + (\mu_{q_i} - \mu_{p_i})^2}{\sigma_{p_i}^2} + \sigma_{p_i}^2 + \mu_{p_i}^2 - \log\sigma_{q_i}^2 - 1.   (15)

By continuity, the maxima and minima of f must be stationary points of f. The partial derivatives are:

\frac{\partial f}{\partial \mu_{p_i}} = \frac{2(\mu_{p_i} - \mu_{q_i})}{\sigma_{p_i}^2} + 2\mu_{p_i},   (16)
\frac{\partial f}{\partial \sigma_{p_i}} = -\frac{2(\sigma_{q_i}^2 + (\mu_{q_i} - \mu_{p_i})^2)}{\sigma_{p_i}^3} + 2\sigma_{p_i}.   (17)

When µpi = µqi and σpi = σqi, both partial derivatives are nonzero, so this point is not a stationary point of f and hence not a minimum.

A.3 Language Modeling
We investigate the training procedure of different models. We plot the MI Iq, the DKL term in the ELBO, and the distance between the aggregated posterior and the prior, DKL(qφ(z)||p(z)). As in Eq. 13 of the main paper, DKL in the ELBO is the sum of the other two. Fig. 3 shows these three values throughout training. Although DKL is an upper bound on the mutual information, we notice that the gap is usually large. In the initial training stage, DKL increases in the basic VAE with annealing, while its MI remains small. As the annealing weight increases, this method finally suffers from posterior collapse. In contrast, our approach obtains a high MI with a small DKL value, similar to aggressive VAE. The full results on language modeling are given in Table 8.

A.4 CVAE for Dialogue Generation
Human evaluation: We evaluate the generated responses from three aspects: relevance, fluency and informativeness. The evaluation criteria are given in Table 7. We sample 200 conversations from the test set. For each conversation, we sample three generated responses from each model, 600 responses in total.

Case study: We report 4 examples generated by the three models in Table 9. CVAE (BOW) and our approach can both generate diverse responses. However, the responses from ours are more related to the context than those of the other two.

Figure 1: Visualization of 500 sampled data points from the synthetic dataset during training (iter=0, iter=200, iter=2000, convergence) for the basic VAE (top) and BN-VAE with b=32 (bottom). The x-axis is µx,θ, the approximate model posterior mean. The y-axis is µx,φ, the inference posterior mean. b is the batch size and γ is 1 in BN.

Figure 2: Visualization of our BN-VAE with different γ on the synthetic data (b=300, γ=0.3 and γ=1; iter=0, iter=54, iter=2000, convergence).
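The claim in Appendix A.2 is easy to sanity-check numerically: plugging µp = µq and σp = σq into Eqs. 16-17 gives 2µq and 2σq − 2/σq, which are generally nonzero. A small sketch with arbitrary illustrative values (not values from the paper):

```python
def grads(mu_p, sigma_p, mu_q, sigma_q):
    """Partial derivatives of f in Eq. 15 (Eqs. 16-17)."""
    df_dmu = 2.0 * (mu_p - mu_q) / sigma_p ** 2 + 2.0 * mu_p
    df_dsigma = -2.0 * (sigma_q ** 2 + (mu_q - mu_p) ** 2) / sigma_p ** 3 + 2.0 * sigma_p
    return df_dmu, df_dsigma

# Evaluate at the "q collapses to p" point mu_p = mu_q, sigma_p = sigma_q (illustrative values).
mu_q, sigma_q = 0.5, 0.8
print(grads(mu_q, sigma_q, mu_q, sigma_q))   # (1.0, -0.9): not a stationary point
```

The derivatives would vanish only when µq = 0 and σq = 1, i.e., when qφ itself equals r(z), which the batch normalization on µq rules out.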
Figure 3: Training behavior on Yelp. Left/Middle/Right: VAE/Agg-VAE/BN-VAE (all models are with annealing). Each panel plots DKL(qφ(z|x)||p(z)), Iq and DKL(qφ(z)||p(z)) against the training epoch.

Score | Fluency | Relevance | Informativeness
1 Point | 1. Hard to understand. 2. Too many syntax mistakes. | Not related to the query at all. | 1. Generic responses. 2. Repeated query.
2 Points | 1. Several syntax mistakes but still understandable. 2. Short responses, e.g., generic responses. | 1. Response and query are in the same domain/topic but are not directly related. 2. Generic responses. | Between 1 and 3.
3 Points | Only a few syntax mistakes, with a moderate length. | Closely related to the query. | 1. Creative responses. 2. Contain new information about the query.
Table 7: Human evaluation criteria.

Model | Yahoo NLL | Yahoo KL | Yahoo MI | Yahoo AU | Yelp NLL | Yelp KL | Yelp MI | Yelp AU
CNN-VAE | ≤332.1 | 10.0 | – | – | ≤359.1 | 7.6 | – | –
LSTM-LM | 328 | – | – | – | 351.1 | – | – | –
VAE | 328.6 | 0.0 | 0.0 | 0.0 | 357.9 | 0.0 | 0.0 | 0.0
β-VAE (0.2) | 332.2 | 19.1 | 3.3 | 20.4 | 360.7 | 11.7 | 3.0 | 10.0
β-VAE (0.4) | 328.7 | 6.3 | 2.8 | 8.0 | 358.2 | 4.2 | 2.0 | 4.2
β-VAE (0.6) | 328.5 | 0.3 | 0.0 | 1.0 | 357.9 | 0.2 | 0.1 | 3.8
β-VAE (0.8) | 328.8 | 0.0 | 0.0 | 0.0 | 358.1 | 0.0 | 0.0 | 0.0
cyclic ∗ | 330.6 | 2.1 | 2.0 | 2.3 | 359.5 | 2.0 | 1.9 | 4.1
Skip-VAE ∗ | 328.5 | 2.3 | 1.3 | 8.1 | 357.6 | 1.9 | 1.0 | 7.4
SA-VAE | 327.2 | 5.2 | 2.7 | 9.8 | 355.9 | 2.8 | 1.7 | 8.4
Agg-VAE | 326.7 | 5.7 | 2.9 | 15.0 | 355.9 | 3.8 | 2.4 | 11.3
FB (4) | 331.0 | 4.1 | 3.8 | 3.0 | 359.2 | 4.0 | 1.9 | 32.0
FB (5) | 330.6 | 5.7 | 2.0 | 3.0 | 359.8 | 4.9 | 1.3 | 32.0
δ-VAE (0.1) ∗ | 330.7 | 3.2 | 0.0 | 0.0 | 359.8 | 3.2 | 0.0 | 0.0
δ-VAE (0.15) ∗ | 331.6 | 4.8 | 0.0 | 0.0 | 360.4 | 4.8 | 0.0 | 0.0
δ-VAE (0.2) ∗ | 332.2 | 6.4 | 0.0 | 0.0 | 361.5 | 6.4 | 0.0 | 0.0
δ-VAE (0.25) ∗ | 333.5 | 8.0 | 0.0 | 0.0 | 362.5 | 8.0 | 0.0 | 0.0
vMF-VAE (13) ∗ | 327.4 | 2.0 | – | 32.0 | 357.5 | 2.0 | – | 32.0
vMF-VAE (16) ∗ | 328.5 | 3.0 | – | 32.0 | 367.8 | 3.0 | – | 32.0
vMF-VAE (20) ∗ | 329.4 | 4.0 | – | 32.0 | 358.0 | 4.0 | – | 32.0
vMF-VAE (23) ∗ | 328.7 | 5.0 | – | 32.0 | 357.3 | 5.0 | – | 32.0
vMF-VAE (25) ∗ | 330.1 | 6.0 | – | 32.0 | 357.8 | 6.0 | – | 32.0
vMF-VAE (30) ∗ | 329.5 | 7.0 | – | 32.0 | 357.8 | 7.0 | – | 32.0
BN-VAE (0.3) ∗ | 328.1 | 1.6 | 1.4 | 32.0 | 356.7 | 1.7 | 1.4 | 32.0
BN-VAE (0.4) ∗ | 327.7 | 2.7 | 2.2 | 32.0 | 356.2 | 3.1 | 2.5 | 32.0
BN-VAE (0.5) ∗ | 327.4 | 4.2 | 3.3 | 32.0 | 356.4 | 4.4 | 3.8 | 32.0
BN-VAE (0.6) ∗ | 326.7 | 6.2 | 5.6 | 32.0 | 356.5 | 6.5 | 5.4 | 32.0
BN-VAE (0.7) ∗ | 327.4 | 8.8 | 7.4 | 32.0 | 355.9 | 9.1 | 7.4 | 32.0
Pretrained encoder
+cyclic ∗ | 333.1 | 25.8 | 9.1 | 32.0 | 361.5 | 20.5 | 9.3 | 32.0
+FB (2) ∗ | 327.2 | 4.3 | 3.8 | 32.0 | 356.6 | 4.6 | 4.2 | 32.0
+FB (3) ∗ | 327.1 | 4.5 | 3.9 | 32.0 | 356.3 | 5.8 | 5.2 | 32.0
+FB (4) ∗ | 326.2 | 8.1 | 6.8 | 32.0 | 356.0 | 7.6 | 6.6 | 32.0
+FB (5) ∗ | 326.6 | 8.9 | 7.3 | 32.0 | 356.5 | 9.0 | 7.4 | 32.0
+FB (6) ∗ | 326.6 | 10.8 | 8.1 | 32.0 | 356.5 | 12.0 | 8.6 | 32.0
+FB (7) ∗ | 326.6 | 12.1 | 8.5 | 32.0 | 356.8 | 13.4 | 8.9 | 32.0
+FB (8) ∗ | 326.7 | 13.6 | 8.9 | 32.0 | 357.5 | 15.8 | 9.2 | 32.0
+δ-VAE (0.15) ∗ | 331.0 | 5.6 | 1.1 | 11.2 | 359.4 | 5.2 | 0.5 | 5.9
vMF-VAE (13) ∗ | 328.4 | 2.0 | – | 32.0 | 357.0 | 2.0 | – | 32.0
+BN-VAE (0.6) ∗ | 326.7 | 6.4 | 5.8 | 32.0 | 355.5 | 6.6 | 5.9 | 32.0
+BN-VAE (0.7) ∗ | 326.5 | 9.1 | 7.6 | 32.0 | 355.7 | 9.1 | 7.5 | 32.0
Table 8: Results on Yahoo and Yelp datasets. We report mean values across 5 different random runs. ∗ indicates results from our experiments, while others are from previous reports.

Table 9: Sampled generated responses. Only the last sentence in the context is shown here.

Topic: ETHICS IN GOVERNMENT
Context: have trouble drawing lines as to what's illegal and what's not
Target (statement): well i mean the other problem is that they're always up for
CVAE: 1. yeah 2. yeah 3. yeah
CVAE (BOW): 1. yeah 2. oh yeah, they're not 3. no it's not too bad
BN-CVAE: 1. it's not a country 2. it is the same thing that's what i think is about the state is a state 3. yeah it's

Topic: VACATION SPOTS
Context: well i 'll talk to you later
Target (conventional-closing): okay now do you push the buttons now
CVAE: 1. okay bye - bye 2. bye 3. okay bye - bye
CVAE (BOW): 1. okay so we'll go ahead and start 2. so i guess it depends on how much you are you 3. it's
BN-CVAE: 1. okay bye - bye 2. nice talking to you too 3. all right take care bye - bye

Topic: RECYCLING
Context: are they doing a lot of recycling out in georgia
Target (statement-non-opinion): well at my workplace they are we have places for aluminum cans and we have everybody's been unk a separate trash can for
CVAE: 1. yeah 2. yeah 3. yeah
CVAE (BOW): 1. well that's good for a while i'll tell you that you're not doing anything at ti and then you're in a small town 2. oh i know i've got a lot of trees and trees and stuff and 3. yeah it's like you know people that want to be unk and they're not going to bother you to make a mess
BN-CVAE: 1. well we do recycle newspapers 2. yes i do too 3. well we're at a point where we're going to be a landfill space

Topic: UNIVERSAL HEALTH INS
Context: some of the good obviously that nobody has to worry about health care
Target (statement-non-opinion): and i guess i'll have to help with grandchildren one of these days i hope
CVAE: 1. um - hum 2. uh - huh 3. uh - huh
CVAE (BOW): 1. okay well see we don't have any choice of any of those 2. um - hum 3. yeah
BN-CVAE: 1. well i hope that we should have a balanced budget 2. uh - huh 3. well that's a good idea