Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3037–3043, Florence, Italy, July 28 – August 2, 2019. ©2019 Association for Computational Linguistics

Look Harder: A Neural Machine Translation Model with Hard Attention

Sathish Indurthi, Insoo Chung, Sangha Kim
Samsung Research, Seoul, South Korea
{s.indurthi, insooo.chung, sangha01.kim}@samsung.com

Abstract

Soft-attention based Neural Machine Translation (NMT) models have achieved promising results on several translation tasks. These models attend to all the words in the source sequence for each target token, which makes them ineffective for long-sequence translation. In this work, we propose a hard-attention based NMT model which selects a subset of source tokens for each target token to effectively handle long-sequence translation. Due to the discrete nature of the hard-attention mechanism, we design a reinforcement learning algorithm coupled with a reward shaping strategy to train it efficiently. Experimental results show that the proposed model performs better on long sequences and thereby achieves significant BLEU score improvements on the English-German (EN-DE) and English-French (EN-FR) translation tasks compared to the soft-attention based NMT.

1 Introduction

In recent years, soft-attention based neural machine translation models (Bahdanau et al., 2015; Gehring et al., 2017; Hassan et al., 2018) have achieved state-of-the-art results on different machine translation tasks. The soft-attention mechanism computes the context (encoder-decoder attention) vector for each target token by weighting and combining all the tokens of the source sequence, which makes these models ineffective for long-sequence translation (Lawson et al., 2017). Moreover, weighting and combining all the tokens of the source sequence may not be required: a few relevant tokens are sufficient for each target token.

Different attention mechanisms have been proposed to improve the quality of the context vector. For example, Luong et al. (2015) and Yang et al. (2018) proposed a local-attention mechanism that selectively focuses on a small window of source tokens to compute the context vector. Even though local attention has improved translation quality, it is not flexible enough to focus on relevant tokens when they fall outside the specified window.

To overcome the shortcomings of the above approaches, we propose a hard-attention mechanism for a deep NMT model (Vaswani et al., 2017). The proposed model selects only a few relevant tokens across the entire source sequence for each target token to effectively handle long-sequence translation. Due to the discrete nature of the hard-attention mechanism, we design a Reinforcement Learning (RL) algorithm with a reward shaping strategy (Ng et al., 1999) to train it. The proposed hard-attention based NMT model consistently outperforms the soft-attention based NMT model (Vaswani et al., 2017), and the gap grows as the sequence length increases.

2 Background

A typical NMT model based on the encoder-decoder architecture generates a target sequence y = {y_1, ..., y_n} given a source sequence x = {x_1, ..., x_m} by modeling the conditional probability p(y|x, θ). The encoder (θ_e) computes a set of representations Z = {z_1, ..., z_m} ∈ R^{m×d} corresponding to x, and the decoder (θ_d) generates one target word at a time using the context vector computed from Z. The model is trained on a set of D parallel sequences to maximize the log-likelihood:

J_1(\theta) = \frac{1}{N} \sum_{i=1}^{D} \log p\left(y_i \mid x_i; \theta\right), \quad (1)

where θ = {θ_e, θ_d}.
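To make Eq. 1 concrete, the following is a minimal PyTorch sketch of the objective as it is usually implemented: maximizing the log-likelihood of the gold target tokens amounts to minimizing a token-level cross-entropy over the decoder outputs. The tensor shapes and padding convention are illustrative assumptions, not details taken from the paper's code.

```python
import torch
import torch.nn.functional as F

def mle_loss(logits, targets, pad_id):
    """Negative log-likelihood for Eq. 1 (up to the normalization constant).

    logits  -- (batch, tgt_len, vocab) decoder scores for each target position
    targets -- (batch, tgt_len) gold target token ids
    pad_id  -- id of the padding token, excluded from the loss
    """
    return F.cross_entropy(
        logits.view(-1, logits.size(-1)),  # flatten to (batch * tgt_len, vocab)
        targets.view(-1),                  # flatten to (batch * tgt_len,)
        ignore_index=pad_id,               # skip padded positions
    )
```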
In recent years, among all the encoder-decoder architectures for NMT, the Transformer (Vaswani et al., 2017) has achieved the best translation quality (Wu et al., 2018). The encoder and decoder blocks of the Transformer are composed of a stack of N (=6) identical layers, as shown in Figure 1a. [Figure 1: (a) Overview of the hard-attention based Transformer network. (b) Overview of the RL agent based hard attention and objective function.] Each layer in the encoder contains two sub-layers: a multi-head self-attention mechanism and a position-wise fully connected feed-forward network. Each decoder layer consists of three sub-layers; the first and third sub-layers are similar to the encoder sub-layers, and the additional second sub-layer computes the encoder-decoder attention (context) vector based on soft-attention approaches (Bahdanau et al., 2015; Gehring et al., 2017).

Here we briefly describe the soft computation of the encoder-decoder attention vector in the Transformer architecture; please refer to Vaswani et al. (2017) for the detailed architecture. For each target word ŷ_t, the second sub-layer in the decoder computes the encoder-decoder attention a_t based on the encoder representations Z. In practice, we compute the attention vectors simultaneously for all time steps by packing the ŷ_t's and z_i's into matrices. The soft encoder-decoder attention A^i for all decoding steps is computed as follows:

A^i(\hat{Y}^{i-1}, Z) = \mathrm{softmax}\!\left(\frac{\hat{Y}^{i-1} Z^{\top}}{\sqrt{d}}\right) Z, \quad \forall i \in N, \quad (2)

where d is the dimension and Ŷ^{i−1} ∈ R^{n×d} is the decoder output from the previous layer.

3 Proposed Model

Section 3.1 introduces our proposed hard-attention mechanism to compute the context vector for each target token. We train the proposed model by designing an RL algorithm with a reward shaping strategy, described in Section 3.2.

3.1 Hard Attention

Instead of computing the weighted average over all the encoder outputs as in Eq. 2, we select a subset of encoder outputs (z_i's) for the last layer (N) of the decoder using the hard-attention mechanism shown in Figure 1a. This allows us to efficiently compute the encoder-decoder attention vector for long sequences. To compute the hard attention between the last layers of the Transformer encoder and decoder blocks, we replace the second sub-layer of the decoder block's last layer with the RL agent based attention mechanism. An overview of the proposed RL agent based attention mechanism is shown in Figure 1b; it is computed as follows. First, we learn the projections Ỹ^{N−1} and Z̃ for Ŷ^{N−1} and Z as

\tilde{Y}^{N-1} = \tanh\!\left(W^{d}_{2}\left(W^{d}_{1}\hat{Y}^{N-1} + b^{d}_{1}\right) + b^{d}_{2}\right),
\tilde{Z} = \tanh\!\left(W^{e}_{2}\left(W^{e}_{1}Z + b^{e}_{1}\right) + b^{e}_{2}\right).

We then compute the attention scores S as

S(\tilde{Y}^{N-1}, \tilde{Z}) = \tilde{Y}^{N-1}\tilde{Z}^{\top}. \quad (3)

We apply the hard-attention mechanism to the attention scores S to dynamically choose multiple relevant encoder tokens for each decoding token. Given S, this mechanism generates an equal-length sequence of binary random variables β = {β_1, ..., β_m} for each target token, where β_i = 1 indicates that z_i is relevant and β_i = 0 indicates that z_i is irrelevant. The relevant tokens are sampled from a Bernoulli distribution over each β_i for all target tokens. This hard selection of encoder outputs introduces discrete latent variables, and estimating them requires RL algorithms. Hence, we design the following reinforcement learner policy for the hard attention at each decoding step t:
\pi_t(r \mid s_t, \theta_h) = \beta^{t}_{i}, \quad (4)

where β^t_i ∈ β represents the probability of an encoder output (the agent's action) being selected at time t, and s_t ∈ S is the state of the environment. The hard encoder-decoder attention Ã is then calculated as follows:

\hat{Z} = \tanh\!\left(W^{\hat{e}}_{2}\left(W^{\hat{e}}_{1}Z + b^{\hat{e}}_{1}\right) + b^{\hat{e}}_{2}\right), \quad (5)
\tilde{A} = \beta \hat{Z}. \quad (6)

Unlike the soft encoder-decoder attention A in Eq. 2, which contains a weighted average of all encoder outputs, the hard encoder-decoder attention Ã in Eq. 6 contains information from only the relevant encoder outputs at each decoding step.

3.2 Strategies for RL Training

The model parameters come from the encoder, the decoder blocks, and the reinforcement learning agent, denoted θ_e, θ_d, and θ_h respectively. θ_e and θ_d are estimated using the objective J_1 in Eq. 1 and gradient descent. However, estimating θ_h is difficult given its discrete nature. Therefore, we formulate the estimation of θ_h as a reinforcement learning problem and design a reward function over it. An overview of the proposed RL training is given in Algorithm 1.

We use the BLEU (Papineni et al., 2002) score between the predicted sequence and the target sequence as our reward function, denoted R(y′, y), where y′ is the predicted output sequence. The objective is to maximize the reward with respect to the agent's actions in Eq. 4, and is defined as

J_2(\theta_h) = \sum_{t=1}^{n} \log p(r \mid s_t, \theta_h) \, R(y', y). \quad (7)

Algorithm 1: Hard-Attention based NMT
1  Input: training examples {x_i, y_i}_{i=1}^{L}, hyperparameters such as the learning rate (α) and λ
2  Initialize model parameters θ = {θ_e, θ_h, θ_d}
3  while not done do
4      Sample k training examples
5      Compute attention scores S using Eq. 3
6      for each decoding step do
7          Compute the policy using Eq. 4 to select the relevant source sequence tokens
8          Compute the reward r_t using Eq. 8
9      end
10     Compute J(θ) = −(J_1(θ_e, θ_d) + J_2(θ_h)) using Eq. 1 and Eq. 9
11     Update the parameters θ with gradient descent: θ′ = θ − α∇J(θ)
12 end
13 Return: θ

Reward Shaping. To generate the complete target sentence, the agent needs to take an action at each target word, but only one reward is available for all of these actions. This makes RL training inefficient, since the same terminal reward is applied to all intermediate actions. To overcome this issue, we adopt the reward shaping strategy of Ng et al. (1999), which assigns a distinct reward to each intermediate action taken by the agent. The intermediate reward r_t(y′_t, y) for the agent's action at decoding step t is computed as

r_t(y'_t, y) = R(y'_{1..t}, y) - R(y'_{1..t-1}, y). \quad (8)

During training, we use the cumulative reward \sum_{t=1}^{n} r_t(y'_t, y) accumulated from decoding step t to update the agent's policy.

Entropy Bonus. We add an entropy bonus to prevent the policy from collapsing too quickly. The entropy bonus encourages the agent to take actions more unpredictably, rather than less so. The RL objective function in Eq. 7 becomes

\hat{J}_2(\theta_h) = J_2(\theta_h) + \lambda H(\pi_t(r \mid s_t, \theta_h)). \quad (9)

We approximate the gradient \nabla_{\theta_h} \hat{J}_2(\theta_h) using the REINFORCE (Williams, 1992) algorithm, which allows us to jointly train J_1(θ_e, θ_d) and \hat{J}_2(θ_h).

Table 1: Performance of various models on the EN-DE and EN-FR translation tasks (BLEU).
  Architecture            Model                              EN-DE   EN-FR
  Vaswani et al. (2017)   Transformer big                    28.40   41.00
  Wu et al. (2018)        Transformer big + sequence-loss    28.75   41.47
  Yang et al. (2018)      Transformer big + localness        28.89   n/a
  this work               Transformer big + hard-attention   29.29   42.26
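Before turning to the experiments, the sketch below roughly illustrates how the hard-attention sampling of Eqs. 4-6 and a REINFORCE update with the shaped rewards of Eq. 8 could be wired together. The paper does not state how the scores S are mapped to the selection probabilities β, so a sigmoid is assumed here; shapes and function names are likewise illustrative rather than taken from the authors' implementation.

```python
import torch
from torch.distributions import Bernoulli

def sample_hard_attention(scores, z_hat):
    """Eqs. 4-6: sample a binary mask over encoder outputs for every decoding
    step and keep information only from the selected outputs.

    scores -- (tgt_len, src_len) attention scores S from Eq. 3
    z_hat  -- (src_len, d) projected encoder outputs from Eq. 5
    """
    dist = Bernoulli(probs=torch.sigmoid(scores))  # assumption: sigmoid maps S to the beta probabilities of Eq. 4
    beta = dist.sample()                           # hard 0/1 selection (the agent's actions)
    log_prob = dist.log_prob(beta).sum(dim=-1)     # log-probability of the actions, one value per decoding step
    a_tilde = beta @ z_hat                         # Eq. 6: hard encoder-decoder attention
    return a_tilde, log_prob

def reinforce_loss(log_prob, shaped_rewards):
    """One standard REINFORCE surrogate: scale each step's log-probability by
    its shaped reward r_t (Eq. 8) and minimize the negative sum."""
    return -(log_prob * shaped_rewards).sum()
```

The entropy bonus of Eq. 9 could be added as λ times dist.entropy() summed over positions, and this surrogate would then be optimized jointly with J_1 as in Algorithm 1.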
Table 2: Number of sequences present in each group (based on sequence length) of the EN-DE and EN-FR test sets.
  Length group   1-10   11-20   21-30   31-40   41-50   51-60   ≥61
  EN-DE           469    1148     796     383     160      40     8
  EN-FR           235     765     796     596     396     165    90

[Figure 2: Performance of the Transformer with Soft Attention (TSA) and the Transformer with Hard Attention (THA) for various sequence lengths on the EN-DE and EN-FR translation tasks.]

4 Experimental Results

4.1 Datasets

We conduct experiments on the WMT 2014 English-German (EN-DE) and English-French (EN-FR) translation tasks. The approximate numbers of training pairs in the EN-DE and EN-FR datasets are 4.5M and 36M respectively; newstest2013 and newstest2014 are used as the dev and test sets. We follow preprocessing steps similar to those described in Vaswani et al. (2017) for both datasets. We encode the sentences using a word-piece vocabulary (Wu et al., 2016), and the shared source-target vocabulary size is set to 32000 tokens.

4.2 Implementation Details

We adopt the implementation of the Transformer (Vaswani et al., 2018) with the transformer big settings. All models are trained using 8 NVIDIA Tesla P40 GPUs on a single machine. The BLEU score used in the reward shaping is calculated similarly to Bahdanau et al. (2017): all the n-gram counts start from 1, and the resulting score is multiplied by the length of the target reference sentence. The beam search width (=4) is set empirically based on dev set performance, and λ in Eq. 9 is set to 1e-3 in all experiments.

4.3 Results

Models. We compare the proposed model with the soft-attention based Transformer model (Vaswani et al., 2017). To check whether the performance improvements come from the hard-attention mechanism (Eq. 4) or from the sequence reward incorporated in the objective function (Eq. 7), we also compare our work with the previously proposed sequence-loss based NMT method (Wu et al., 2018). This method is built on top of the Transformer model and trained by combining cross-entropy loss and a sequence reward (BLEU score). We further compare our model with the recently proposed Localness Self-Attention network (Yang et al., 2018), which incorporates a localness bias into the Transformer attention distribution to capture useful local context.

Main Results. Table 1 shows the performance of various models on the EN-DE and EN-FR translation tasks. These case-sensitive test set BLEU scores are obtained using the SacreBLEU toolkit (Post, 2018). The BLEU score difference between our hard-attention based Transformer model and the original soft-attention based Transformer model indicates the effectiveness of selecting a few relevant source tokens for each target token. The performance gap between our method and the sequence-loss based Transformer (Wu et al., 2018) shows that the improvements indeed come from the hard-attention mechanism. Our approach of incorporating hard attention into the decoder's top self-attention layer to select relevant tokens yielded better results than the Localness Self-Attention (Yang et al., 2018) approach of incorporating a localness bias only into the lower self-attention layers. Our model achieved 29.29 and 42.26 BLEU points on the EN-DE and EN-FR tasks respectively, surpassing the previously published models.

Analysis. To see the effect of the hard-attention mechanism on longer sequences, we group the sequences in the test set based on their length and compute the BLEU score for each group. Table 2 shows the number of sequences present in each group.
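The length-bucketed analysis just described can be reproduced with a few lines of scripting; the sketch below assumes whitespace tokenization for the length buckets and uses SacreBLEU, the same toolkit used for the reported scores (the list names are illustrative).

```python
import sacrebleu

LENGTH_BINS = [(1, 10), (11, 20), (21, 30), (31, 40), (41, 50), (51, 60), (61, float("inf"))]

def bleu_by_source_length(sources, hypotheses, references):
    """Corpus BLEU per source-length group, as in the analysis behind Table 2 and Figure 2."""
    scores = {}
    for lo, hi in LENGTH_BINS:
        # indices of test sentences whose source length falls in this bucket
        idx = [i for i, src in enumerate(sources) if lo <= len(src.split()) <= hi]
        if not idx:
            continue
        label = f"{lo}-{hi}" if hi != float("inf") else f">={lo}"
        hyps = [hypotheses[i] for i in idx]
        refs = [[references[i] for i in idx]]  # a single reference stream
        scores[label] = sacrebleu.corpus_bleu(hyps, refs).score
    return scores
```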
Figure 2 shows that Transformer with hard attention is more effective in handling the long sequences. Specifically, the performance gap between our model (THA) and the original Transformer model (TSA) grows bigger as sequences become longer. 5 Related Work Even though RL based models are difficult to train, in recent years, multiple works (Mnih et al., 2014; Choi et al., 2017; Yu et al., 2017; Narayan et al., 2018; Sathish et al., 2018; Shen et al., 2018) have shown to improve the performance of several natural language processing tasks. Also, it has been used in NMT (Edunov et al., 2018; Wu et al., 2017; Bahdanau et al., 2017) to overcome the inconsistency between the token level objective function and sequence-level evaluation metrics such as BLEU. Our approach is also related to the method proposed by Lei et al. (2016) to explain the decision of text classifier. However, here we focus on selecting a few relevant tokens from a source sequence in a translation task. Recently, several innovations are proposed on top of the Transformer model to improve performance and training speed. For example, Shaw 1https://github.com/mjpost/sacrebleu et al. (2018) incorporated relative positions and Ott et al. (2018) proposed efficient training strategies. These improvements are complementary to the proposed method. Incorporating these techniques will further improve the performance of the proposed method. 6 Conclusion In this work, we proposed a hard-attention based NMT model which focuses solely on a few relevant source sequence tokens for each target token to effectively handle long sequence translation. We train our model by designing an RL algorithm with the reward shaping strategy. Our model sets new state-of-the-art results on EN-DE and EN-FR translation tasks. References Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron C Courville, and Yoshua Bengio. 2017. An actor-critic algorithm for sequence prediction. In Proceedings of the Fifth International Conference on Learning Representations, ICLR-2017. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. Eunsol Choi, Daniel Hewlett, Jakob Uszkoreit, Illia Polosukhin, Alexandre Lacoste, and Jonathan Berant. 2017. Coarse-to-fine question answering for long documents. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 209–220. Association for Computational Linguistics. Sergey Edunov, Myle Ott, Michael Auli, David Grangier, and Marc’Aurelio Ranzato. 2018. Classical structured prediction losses for sequence to sequence learning. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 355–364. Association for Computational Linguistics. Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. 2017. Convolutional sequence to sequence learning. CoRR, abs/1705.03122. Hany Hassan, Anthony Aue, Chang Chen, Vishal Chowdhary, Jonathan Clark, Christian Federmann, Xuedong Huang, Marcin Junczys-Dowmunt, William Lewis, Mu Li, Shujie Liu, Tie-Yan Liu, Renqian Luo, Arul Menezes, Tao Qin, Frank Seide, Xu Tan, Fei Tian, Lijun Wu, Shuangzhi Wu, Yingce Xia, Dongdong Zhang, Zhirui Zhang, and Ming 3042 Zhou. 2018. Achieving human parity on automatic chinese to english news translation. CoRR, abs/1803.05567. 
Dieterich Lawson, George Tucker, Chung-Cheng Chiu, Colin Raffel, Kevin Swersky, and Navdeep Jaitly. 2017. Learning hard alignments with variational inference. CoRR, abs/1705.05524. Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2016. Rationalizing neural predictions. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 107–117, Austin, Texas. Association for Computational Linguistics. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412–1421. Association for Computational Linguistics. Volodymyr Mnih, Nicolas Heess, Alex Graves, and Koray Kavukcuoglu. 2014. Recurrent models of visual attention. In Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2, NIPS’14, pages 2204–2212, Cambridge, MA, USA. MIT Press. Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018. Ranking sentences for extractive summarization with reinforcement learning. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1747–1759. Association for Computational Linguistics. Andrew Y Ng, Daishi Harada, and Stuart J Russell. 1999. Policy invariance under reward transformations: Theory and application to reward shaping. pages 278–287. Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. 2018. Scaling neural machine translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 1–9, Belgium, Brussels. Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL ’02, pages 311–318, Stroudsburg, PA, USA. Association for Computational Linguistics. Matt Post. 2018. A call for clarity in reporting bleu scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186– 191. Association for Computational Linguistics. Indurthi Sathish, Seunghak Yu, Seohyun Back, and Heriberto Cuayahuitl. 2018. Cut to the chase: A context zoom-in network for reading comprehension. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 570–575. Association for Computational Linguistics. Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-attention with relative position representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 464–468, New Orleans, Louisiana. Association for Computational Linguistics. Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Sen Wang, and Chengqi Zhang. 2018. Reinforced self-attention network: A hybrid of hard and soft attention for sequence modeling. In Proceedings of the 27th International Joint Conference on Artificial Intelligence, IJCAI’18, pages 4345–4352. AAAI Press. Ashish Vaswani, Samy Bengio, Eugene Brevdo, Francois Chollet, Aidan Gomez, Stephan Gouws, Llion Jones, Łukasz Kaiser, Nal Kalchbrenner, Niki Parmar, Ryan Sepassi, Noam Shazeer, and Jakob Uszkoreit. 2018. Tensor2Tensor for neural machine translation. 
In Proceedings of the 13th Conference of the Association for Machine Translation in the Americas (Volume 1: Research Papers), pages 193– 199, Boston, MA. Association for Machine Translation in the Americas. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc. Ronald J. Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. Mach. Learn., 8(3-4):229–256. Lijun Wu, Fei Tian, Tao Qin, Jianhuang Lai, and TieYan Liu. 2018. A study of reinforcement learning for neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3612–3621. Association for Computational Linguistics. Lijun Wu, Li Zhao, Tao Qin, Jianhuang Lai, and TieYan Liu. 2017. Sequence prediction with unlabeled data by reward function learning. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI-17, pages 3098–3104. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant 3043 Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144. Baosong Yang, Zhaopeng Tu, Derek F. Wong, Fandong Meng, Lidia S. Chao, and Tong Zhang. 2018. Modeling localness for self-attention networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4449– 4458. Association for Computational Linguistics. Adams Wei Yu, Hongrae Lee, and Quoc Le. 2017. Learning to skim text. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1880–1890. Association for Computational Linguistics.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3044–3049 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3044 Robust Neural Machine Translation with Joint Textual and Phonetic Embedding Hairong Liu1 Mingbo Ma1,3 Liang Huang1,3 Hao Xiong2 Zhongjun He2 1Baidu Research, Sunnyvale, CA, USA 2Baidu, Inc., Beijing, China 3Oregon State University, Corvallis, OR, USA {liuhairong, mingboma, lianghuang, xionghao05, hezhongjun}@baidu.com Abstract Neural machine translation (NMT) is notoriously sensitive to noises, but noises are almost inevitable in practice. One special kind of noise is the homophone noise, where words are replaced by other words with similar pronunciations.1 We propose to improve the robustness of NMT to homophone noises by 1) jointly embedding both textual and phonetic information of source sentences, and 2) augmenting the training dataset with homophone noises. Interestingly, to achieve better translation quality and more robustness, we found that most (though not all) weights should be put on the phonetic rather than textual information. Experiments show that our method not only significantly improves the robustness of NMT to homophone noises, but also surprisingly improves the translation quality on some clean test sets. 1 Introduction Recently we witnessed tremendous progresses in the field of neural machine translation (NMT) (Sutskever et al., 2014; Cho et al., 2014; Bahdanau et al., 2014; Luong et al., 2015; Gehring et al., 2017), especially the birth of transformer network (Vaswani et al., 2017). Despite tremendous success, NMT models are very sensitive to the noises in input sentences (Belinkov and Bisk, 2017). The causes of such vulnerability are multifold, and some of them are: 1) neural networks are inherently sensitive to noises, such as adversarial examples (Goodfellow et al., 2014; Szegedy et al., 2013), 2) every input word can affect every output word generated by the decoder due to the global effects of attention, and 3) all NMT models have an input embedding layer, which is sensitive to noises in the input sentences. 1In this paper, the word “homophone” is loosely used to represent characters or words with similar pronunciations. In this paper, we focus on homophone noise, where words are replaced by other words with similar pronunciations, which is common in realworld systems. One example is speech translation (Ruiz et al., 2017; Ruiz and Federico, 2015; Ma et al., 2018), where an ASR system may output correct or almost correct phoneme sequences, but transcribe some words into their homophones. Another example is pronunciation-based input systems for non-phonetic writing systems such as Pinyin for Chinese or Katakana/Hiragana for Japanese. It is very common for a user to accidentally choose a homophone instead of the correct word. Existing NMT systems are very sensitive to homophone noises, and Table 1 illustrates such an example. The transformer model can correctly translate the clean input sentence; however, when one Mandarin character, ‘有’(yˇou), is replaced by one of its homophones, ‘又’(y`ou), the transformer generates a strange and irrelevant translation. The method proposed in this paper can generate correct results under such kind of noises, since it uses both textual and phonetic information. Since words are discrete signals, to feed them into a neural network, a common practice is to encode them into real-valued vectors through embedding. 
However, the output of the embedding layer is very sensitive to noises in the input sentences. This is because when a word a is replaced by another word b with different meanings, the embedding vector of b may be very different from the embedding vector of a, thus results in dramatic changes. To make things worse, the input embedding layer is usually the first layer of the network, and errors from this layer will propagate and be amplified in the following layers, leading to more severe errors. For homophone noises, since correct phonetic information exists, we can make use of it to make the output of the embedding layer much more robust. 3045 Clean Input 目前已发现有109人死亡, 另有57人获救 Output of Transformer at present, 109 people have been found dead and 57 have been rescued Noisy Input 目前已发现又109人死亡, 另有57人获救 Output of Transformer the hpv has been found dead so far and 57 have been saved Output of Our Method so far, 109 people have been found dead and 57 others have been rescued Table 1: The translation results on Mandarin sentences without and with homophone noises. The word ‘有’ (yˇou, “have”) in clean input is replaced by one of its homophone, ‘又’ (y`ou, “again”), to form a noisy input. This seemingly minor change completely fools the Transformer to generate something irrelvant (“hpv”). Our method, by contrast, is very robust to homophone noises thanks to the usage of phonetic information. In this paper, we propose to improve the robustness of NMT models to homophone noises by jointly embedding both textual and phonetic information. In our approach, both words and their corresponding pronunciations are embedded and then combined to feed into a neural network. This approach has the following advantages: 1. It is a simple but general approach, and easy to implement. 2. It can dramatically improve the robustness of NMT models to homophone noises. 3. It also improves translation quality on clean test sets. To further improve the robustness of NMT models to homophone noises, we use data augmentation to expand the training datasets, by randomly adding homophone noises. The experimental results clearly show that data augmentation improves the robustness of NMT models2. 2 Joint Embedding For a word a in the source language, suppose its pronunciation can be expressed by a sequence of pronunciation units, such as phonemes or syllables, denoted by Ψ(a) = {s1, . . . , sl}. Note that we use the term “word” loosely here, and in fact a may be a word or a subword, or even a character. We embed both pronunciation units and words, and both of them are learnt from scratch. For a pronunciation unit s, its embedding is denoted by π(s), and for a word a, its embedding is denoted by π(a). For a pair of a word a and its pronunciation sequence ψ(a) = {s1, . . . , sl}, we have l + 1 embedding vectors, that is, π(a), π(s1), ..., π(sl). To get a fixed length vector representation, we first 2See more information and our code at https:// phoneticmt.github.io/ merge π(s1), ..., π(sl) into a single vector by averaging, denoted by π(ψ(a)),3 then combine the word embedding and π(ψ(a)) as follows: π([a, ψ(a)]) = (1 −β) ∗π(a) + β ∗π(ψ(a)) (1) where β is a parameter. When β = 0, only textual embedding is used; while when β = 1, only phonetical embedding is used . The best balance, as demonstrated by our experiments, is a very large β close to but not 1. 3 Experiments 3.1 Models In our experiments, we use Transformer as baseline. Specifically, we use the PyTorch version (PyTorch 0.4.0) of OpenNMT. 
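Within this setup, the joint embedding of Eq. 1 can be sketched as follows; the module and tensor names are our own and not taken from the released code. Each word embedding is interpolated with the average of its pronunciation-unit embeddings:

```python
import torch
import torch.nn as nn

class JointEmbedding(nn.Module):
    """Combine word and pinyin-unit embeddings as in Eq. 1:
    (1 - beta) * pi(a) + beta * mean(pi(s_1), ..., pi(s_l))."""

    def __init__(self, vocab_size, pinyin_size, dim, beta=0.95):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, dim)
        self.pinyin_emb = nn.Embedding(pinyin_size, dim)
        self.beta = beta

    def forward(self, word_ids, pinyin_ids, pinyin_mask):
        # word_ids: (batch, len); pinyin_ids: (batch, len, max_units)
        # pinyin_mask: float mask (batch, len, max_units), 1 for real units, 0 for padding
        pi_a = self.word_emb(word_ids)                          # textual embedding pi(a)
        units = self.pinyin_emb(pinyin_ids) * pinyin_mask.unsqueeze(-1)
        n_units = pinyin_mask.sum(dim=-1, keepdim=True).clamp(min=1)
        pi_psi = units.sum(dim=2) / n_units                     # averaged phonetic embedding pi(psi(a))
        return (1 - self.beta) * pi_a + self.beta * pi_psi      # Eq. 1
```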
All models are trained with 8 GPUs, and the values of the important hyperparameters are: 6 layers, 8 attention heads, 2048 neurons in the feed-forward layer, 512 neurons in the other layers, a dropout rate of 0.1, a label smoothing rate of 0.1, and the Adam optimizer with a learning rate of 2 and NOAM decay.

3.2 Translation Tasks

We evaluate our method on Mandarin-to-English translation and report the 4-gram BLEU score (Papineni et al., 2002) as calculated by multi-bleu.perl. Pinyin is used as the pronunciation unit (Du and Way, 2017; Yang et al., 2018), and there are 404 types of pinyin syllables in total [4]. A large Mandarin lexicon is used. For words or subwords not in the lexicon, if all of their characters have pinyins, the concatenation of these characters' pinyins is used as the pinyin of the whole word or subword. Note that when there are multiple pronunciations, we randomly pick one in both training and testing. For symbols or entries without a pronunciation, we use a special pronunciation unit, ⟨unk⟩, to represent them.

[3] We tried other approaches, such as using an LSTM network to merge them; however, we did not see obvious improvements in translation quality.
[4] For simplicity, tone information is discarded.

3.3 Translation Results

For the dataset, we use an extended NIST corpus which consists of 2M sentence pairs with about 51M Mandarin words and 62M English words. We apply byte-pair encoding (BPE) (Sennrich et al., 2016) on both the Mandarin and English sides to reduce the vocabulary sizes to 18K and 10K, respectively. Sentences longer than 256 subwords or words are excluded.

[Figure 1: BLEU scores on the dev set for the baseline model (Transformer-base) and our models with different β. The x-axis is the number of iterations and the y-axis is the case-insensitive BLEU score on multiple references.]

In Figure 1, we compare the performance, measured by BLEU scores against multiple references, of the baseline model and our models with β = 0.2, 0.4, 0.6, 0.8, 0.95, and 1.0. We report results every 10000 iterations from iteration 10000 to iteration 90000. Note that our model is almost exactly the same as the baseline model, differing only in the source embeddings. In theory, when β = 0, our model is identical to the baseline model. In practice, there is a slight difference: when β = 0, the phonetic embedding parameters are still present, which affects the optimization procedure even though no gradients flow back to these parameters. When β = 1, only phonetic information is used.

There are some interesting observations from Figure 1. First, combining textual and phonetic information improves translation performance. Compared with the baseline, when β = 0.2 the BLEU score improves by 1-2 points, and when β = 0.4, 0.6, 0.8, or 0.95 it improves by 2-3 points. Second, phonetic information plays a very important role in translation. Even when β = 0.95, that is, when the weight of the phonetic embedding is 0.95 and the weight of the word embedding is only 0.05, the performance is still very good. In fact, our best BLEU score (48.91) is achieved when β = 0.95. However, the word embedding is still important: when we use only phonetic information (β = 1.0), the performance becomes worse, almost the same as the baseline (which uses only textual information).
Humans need only phonetic information to communicate with each other; this is probably because we are better at understanding context than machines and thus do not need the help of textual information.

Table 2 reports results for the baseline model and our models under different values of β. NIST 06 is used as the dev set to select the best models, and the NIST 2002, 2003, 2004, 2005 and 2008 datasets are used as test sets. There are some interesting observations. First, combining textual and phonetic information improves translation performance. This seems surprising, since no additional information is provided. Although the real reason is unknown, we suspect it is due to some kind of regularization effect from the phonetic embeddings. Second, phonetic information plays a very important role in translation. Even when β = 0.95, that is, when most of the weight is put on the phonetic embedding, the performance is still very good. In fact, our best BLEU score (48.91) is achieved when β = 0.95. However, the word embedding is still important: when we use only phonetic information (β = 1.0), the performance degrades to almost the same level as the baseline (which uses only textual information).

To understand why phonetic information helps translation, it is helpful to visualize the embeddings of the pronunciation units. We project the whole Pinyin embedding space into a 2-dimensional space using the t-SNE technique (Maaten and Hinton, 2008) and illustrate a small region of it in Figure 2. An intriguing property of the embedding is that pinyins with similar pronunciations are close to each other, such as zhen and zheng, ji and qi, mu and hu. This is very helpful since, in Mandarin, two characters with similar pronunciations will either 1) be represented by the same pinyin or 2) be represented by two pinyins with similar pronunciations.

Table 2: Translation results on the NIST Mandarin-English test sets.
  Models             NIST06 (Dev Set)   NIST02   NIST03   NIST04   NIST08
  Transformer-base        45.97          47.40    46.01    47.25    41.71
  β = 0.2                 47.14          48.63    47.82    48.63    43.77
  β = 0.4                 48.56          49.41    48.73    50.53    45.16
  β = 0.6                 48.32          48.83    48.82    49.86    44.17
  β = 0.8                 48.15          49.42    49.44    49.98    44.86
  β = 0.95                48.91          49.33    50.46    50.57    44.83
  β = 1.0                 45.6           47.04    46.42    47.65    40.27

[Figure 2: Visualization of a small region in the embedding space. Note that pinyins with similar pronunciations are close in the embedding space.]

Homophones are very common in Mandarin: in our training dataset, about 55% of Mandarin words have homophones. To test the robustness of NMT models to homophone noises, we created two noisy test sets, NoisySet1 and NoisySet2, based on the NIST06 Mandarin-English test set. The creation procedure is as follows: for each source sentence in NIST06, we scan it from left to right, and if a word has homophones, it is replaced by one of its homophones with a certain probability (10% for NoisySet1 and 20% for NoisySet2).

[Figure 3: BLEU scores on datasets without and with homophone noises. On both noisy test sets, as more weight is put on the phonetic embedding, that is, as β grows, the translation quality improves.]
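A minimal sketch of this noisy-set construction is shown below; `homophones` stands for a hypothetical mapping from a Mandarin word to its same-pronunciation alternatives, which in the paper would be derived from the pinyin lexicon.

```python
import random

def add_homophone_noise(tokens, homophones, p, rng=random):
    """Scan a tokenized source sentence left to right and, with probability p,
    replace each word that has homophones by one of them
    (p = 0.1 for NoisySet1, p = 0.2 for NoisySet2)."""
    noisy = []
    for tok in tokens:
        alternatives = homophones.get(tok, [])
        if alternatives and rng.random() < p:
            noisy.append(rng.choice(alternatives))  # swap in a homophone
        else:
            noisy.append(tok)                       # keep the original word
    return noisy
```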
In Figure 3, we compare the performance of the baseline model and our models with β = 0.2, 0.4, 0.6, 0.8, 0.95, 1.0, respectively, on NIST06 test set and the two created noisy sets. The models are chosen based on their performance (BLEU scores) on NIST06 test set. As Figure 3 shows, as β grows, which means that more weights are put on phonetic information, the performances on both noisy test sets almost steadily improve. When β = 1.0, as expected, homophone noises will not affect the results since the model is trained solely based on phonetic information. However, this is not our best choice since the performance on the clean test set gets much worse. In fact, from the perspective of robustness to homophone noises, the best choice of β is still a value smaller but close to 1, which mainly focuses on phonetic information but still utilizes some textual information. Table 3 demonstrate the effects of homophone noises on two sentences. The baseline model can translate both sentences correctly; however, when only one word (preposition) is replaced by one of its homophones, the baseline model generates incorrect, redundant and strange translations. This shows the vulnerability of the baseline model. Note that since the replaced words are prepositions, the meaning of the noisy source sentences are still very clear, and it does not affect our human’s understanding at all. For our method, we use the model with β = 0.95, and it generates reasonable translations. To further improve the robustness of NMT models, we augment the training dataset by randomly picking training pairs from training datasets, and revising the source sentences by randomly replacing some words with their homophones. We add 40% noisy sentence pairs on the original 2M sen3048 Clean Input 古巴是第一个与(yˇu)新中国建交的拉美国家 Output of Transformer cuba was the first latin american country to establish diplomatic relations with new china Noisy Input 古巴是第一个于(y´u)新中国建交的拉美国家 Output of Transformer cuba was the first latin american country to discovering the establishment of diplomatic relations between china and new Zealand Output of Our Method cuba is the first latin american country to establish diplomatic relations with new china Clean Input 他认为, 格方对(du`ı)俄方的指责是荒谬的 Output of Transformer he believes that georgia’s accusation against russia is absurd Noisy Input 他认为, 格方憝(du`ı)俄方的指责是荒谬的 Output of Transformer he believes that the accusations by the russian side villains are absurd Output of Our Method he maintained that georgia’s accusation against russia is absurd Table 3: Two examples of homophone noises on source sentences. The underscored Mandarin characters are homophones, and their corresponding Pinyin pronunciations are in the parentheses. Note that textual-only embedding is very sensitive to homophone noises, thus generates weird outputs. However, when jointly embedding both textual and phonetic information in source sentences, the model is very robust. Models Before Augmentation After Augmentation NIST06 NoisySet1 NoisySet2 NIST06 NoisySet1 NoisySet2 Transformer-base 45.97 41.33 37.11 43.94 42.61 41.33 β = 0.95 48.91 45.71 42.66 48.06 47.37 46.47 Table 4: Comparison of models trained with and without data augmentation. tence pairs in the training set, resulting in a training dataset with about 2.8M sentence pairs. In Table 4, we report the performance of baseline model and our model with β = 0.95, with and without data augmentation. Not surprisingly, data augmentation significantly improves the robustness of NMT models to homophone noises. 
However, the noises in training data seem to hurt the performance of the baseline model (from 45.97 to 43.94), and its effect on our model seems to be much smaller, probably because our model mainly uses the phonetic information. 4 Related Work Formiga and Fonollosa (2012) proposed to use a character-level translator to deal with misspelled words in the input sentences, but in general their method cannot deal with homophone noises effectively. Cheng et al. (2018) proposed to use adversarial stability training to improve the robustness of NMT systems, but their method does not specifically target homophone noises and do not use phonetic information. The effects of ASR errors on machine translation have been extensively analyzed (Ruiz et al., 2017; Ruiz and Federico, 2015). In a parallel work, Li et al. (2018) also proposed to utilize both textual and phonetic information to improve the robustness of NMT systems, but their method is different with ours in how textual and phonetic information are combined. 5 Conclusion In this paper, we propose to use both textual and phonetic information in NMT by combining them in the input embedding layer of neural networks. Such combination not only makes NMT models much more robust to homophone noises, but also improves their performance on clean datasets. Our experimental results clearly show that both textual and phonetical information are important, and the best choice is to rely mostly on phonetic information. We also augment the training dataset by adding homophone noises, and our experiments demonstrate that this is very useful in improving the robustness of NMT models to homophone noises. 3049 References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. In Proceedings of ICLR. Yonatan Belinkov and Yonatan Bisk. 2017. Synthetic and natural noise both break neural machine translation. arXiv preprint arXiv:1711.02173. Yong Cheng, Zhaopeng Tu, Fandong Meng, Junjie Zhai, and Yang Liu. 2018. Towards robust neural machine translation. In Proceedings of ACL. Kyunghyun Cho, Bart Van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. In Proceedings of EMNLP. Jinhua Du and Andy Way. 2017. Pinyin as subword unit for chinese-sourced neural machine translation. In AICS, pages 89–101. Lluis Formiga and Jos´e AR Fonollosa. 2012. Dealing with input noise in statistical machine translation. Proceedings of COLING 2012: Posters, pages 319– 328. Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. 2017. Convolutional sequence to sequence learning. In Proceedings of ICML. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680. Xiang Li, Haiyang Xue, Wei Chen, Yang Liu, Yang Feng, and Qun Liu. 2018. Improving the robustness of speech translation. arXiv preprint arXiv:1811.00728. Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attentionbased neural machine translation. In Proceedings of EMNLP. Mingbo Ma, Liang Huang, Hao Xiong, Renjie Zheng, Kaibo Liu, Baigong Zheng, Chuanqiang Zhang, Zhongjun He, Hairong Liu, Xing Li, Hua Wu, and Haifeng Wang. 2018. 
STACL: Simultaneous translation with integrated anticipation and controllable latency. arXiv preprint arXiv:1810.08398. Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. Journal of Machine Learning Research, 9:2579–2605. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of ACL, pages 311–318. Nicholas Ruiz, Mattia Antonino Di Gangi, Nicola Bertoldi, and Marcello Federico. 2017. Assessing the tolerance of neural machine translation systems against speech recognition errors. In INTERSPEECH, pages 2635–2639. Nicholas Ruiz and Marcello Federico. 2015. Phonetically-oriented word error alignment for speech recognition error analysis in speech translation. In 2015 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), pages 296–302. IEEE. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of ACL, volume 1, pages 1715–1725. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104–3112. Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2013. Intriguing properties of neural networks. In Proceedings of ICML. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Jian Yang, Shuangzhi Wu, Dongdong Zhang, Zhoujun Li, and Ming Zhou. 2018. Improved neural machine translation with chinese phonologic features. In CCF International Conference on Natural Language Processing and Chinese Computing, pages 303–315. Springer.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3050–3056 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3050 A Simple and Effective Approach to Automatic Post-Editing with Transfer Learning Gonc¸alo M. Correia Instituto de Telecomunicac¸˜oes Lisbon, Portugal [email protected] Andr´e F. T. Martins Instituto de Telecomunicac¸˜oes & Unbabel Lisbon, Portugal [email protected] Abstract Automatic post-editing (APE) seeks to automatically refine the output of a black-box machine translation (MT) system through human post-edits. APE systems are usually trained by complementing human post-edited data with large, artificial data generated through backtranslations, a time-consuming process often no easier than training a MT system from scratch. In this paper, we propose an alternative where we fine-tune pre-trained BERT models on both the encoder and decoder of an APE system, exploring several parameter sharing strategies. By only training on a dataset of 23K sentences for 3 hours on a single GPU we obtain results that are competitive with systems that were trained on 5M artificial sentences. When we add this artificial data, our method obtains state-of-the-art results. 1 Introduction The goal of automatic post-editing (APE; Simard et al., 2007) is to automatically correct the mistakes produced by a black-box machine translation (MT) system. APE is particularly appealing for rapidly customizing MT, avoiding to train new systems from scratch. Interfaces where human translators can post-edit and improve the quality of MT sentences (Alabau et al., 2014; Federico et al., 2014; Denkowski, 2015; Hokamp, 2018) are a common data source for APE models, since they provide triplets of source sentences (src), machine translation outputs (mt), and human post-edits (pe). Unfortunately, human post-edits are typically scarce. Existing APE systems circumvent this by generating artificial triplets (Junczys-Dowmunt and Grundkiewicz, 2016; Negri et al., 2018). However, this requires access to a high quality MT system, similar to (or better than) the one used in the black-box MT itself. This spoils the motivation of APE as an alternative to large-scale MT training in the first place: the time to train MT systems in order to extract these artificial triplets, combined with the time to train an APE system on the resulting large dataset, may well exceed the time to train a MT system from scratch. Meanwhile, there have been many successes of transfer learning for NLP: models such as CoVe (McCann et al., 2017), ELMo (Peters et al., 2018), OpenAI GPT (Radford et al., 2018), ULMFiT (Howard and Ruder, 2018), and BERT (Devlin et al., 2019) obtain powerful representations by training large-scale language models and use them to improve performance in many sentence-level and word-level tasks. However, a language generation task such as APE presents additional challenges. In this paper, we build upon the successes above and show that transfer learning is an effective and time-efficient strategy for APE, using a pretrained BERT model. This is an appealing strategy in practice: while large language models like BERT are expensive to train, this step is only done once and covers many languages, reducing engineering efforts substantially. 
This is in contrast with the computational and time resources that creating artificial triplets for APE needs—these triplets need to be created separately for every language pair that one wishes to train an APE system for. Current APE systems struggle to overcome the MT baseline without additional data. This baseline corresponds to leaving the MT uncorrected (“donothing” baseline).1 With only the small shared task dataset (23K triplets), our proposed strategy outperforms this baseline by −4.9 TER and +7.4 BLEU in the English-German WMT 2018 APE shared task, with 3 hours of training on a single GPU. Adding the artificial eSCAPE dataset (Negri et al., 2018) leads to a performance of 17.15 TER, a new state of the art. 1If an APE system has worse performance than this baseline, it is pointless to use it. 3051 Our main contributions are the following: • We combine the ability of BERT to handle sentence pair inputs together with its pre-trained multilingual model, to use both the src and mt in a cross-lingual encoder, that takes a multilingual sentence pair as input. • We show how pre-trained BERT models can also be used and fine-tuned as the decoder in a language generation task. • We make a thorough empirical evaluation of different ways of coupling BERT models in an APE system, comparing different options of parameter sharing, initialization, and fine-tuning. 2 Automatic Post-Editing with BERT 2.1 Automatic Post-Editing APE (Simard et al., 2007) is inspired by human post-editing, in which a translator corrects mistakes made by an MT system. APE systems are trained from triplets (src, mt, pe), containing respectively the source sentence, the machine-translated sentence, and its post-edited version. Artificial triplets. Since there is little data available (e.g WMT 2018 APE shared task has 23K triplets), most research has focused on creating artificial triplets to achieve the scale that is needed for powerful sequence-to-sequence models to outperform the MT baseline, either from “round-trip” translations (Junczys-Dowmunt and Grundkiewicz, 2016) or starting from parallel data, as in the eSCAPE corpus of Negri et al. (2018), which contains 8M synthetic triplets. Dual-Source Transformer. The current state of the art in APE uses a Transformer (Vaswani et al., 2017) with two encoders, for the src and mt, and one decoder, for pe (Junczys-Dowmunt and Grundkiewicz, 2018; Tebbifakhr et al., 2018). When concatenating human post-edited data and artificial triplets, these systems greatly improve the MT baseline. However, little successes are known using the shared task training data only. By contrast, with transfer learning, our work outperforms this baseline considerably, even without any auxiliary synthetic dataset; and, as shown in §3, it achieves state-of-the-art results by combining it with the aforementioned artificial datasets. Context Attention (Multi-Head Att.) Input Embedding Output Probabilities Add & Norm Linear Softmax Add & Norm Add & Norm Add & Norm Add & Norm Feed Forward Feed Forward N× N× Self-Attention (Multi-Head Att.) Self-Attention (Multi-Head Att.) Output Embedding Segments Positions Tokens pe1, …, peM mt1, …, mtK src1, …, srcN B, …, B B, …, B A, …, A 0, …, M-1 0, …, K-1 0, …, N-1 Figure 1: Dual-Source BERT. Dashed lines show shared parameters in our best configuration. 2.2 BERT as a Cross-Lingual Encoder Our transfer learning approach is based on the Bidirectional Encoder Representations from Transformers (BERT; Devlin et al., 2019). 
This model obtains deep bidirectional representations by training a Transformer (Vaswani et al., 2017) with a largescale dataset in a masked language modeling task where the objective is to predict missing words in a sentence. We use the BERTBASE model, which is composed of L = 12 self-attention layers, hidden size H = 768, A = 12 attention heads, and feedforward inner layer size F = 3072. In addition to the word and learned position embeddings, BERT also has segment embeddings to differentiate between a segment A and a segment B—this is useful for tasks such as natural language inference, which involve two sentences. In the case of APE, there is also a pair of input sentences (src, mt) which are in different languages. Since one of the released BERT models was jointly pre-trained on 104 languages,2 we use this multilingual BERT pre-trained model to encode the bilingual input pair of APE. Therefore, the whole encoder of our APE model is the multilingual BERT: we encode both src and 2 https://github.com/google-research/ bert/blob/master/multilingual.md 3052 mt in the same encoder and use the segment embeddings to differentiate between languages (Figure 1). We reset positional embeddings when the mt starts, since it is not a continuation of src. 2.3 BERT as a Decoder Prior work has incorporated pre-trained models in encoders, but not as decoders of sequence-tosequence models. Doing so requires a strategy for generating fluently from the pre-trained model. Note that the bidirectionality of BERT is lost, since the model cannot look at words that have not been generated yet, and it is an open question how to learn decoder-specific blocks (e.g. context attention), which are absent in the pre-trained model. One of our key contributions is to use BERT in the decoder by experimenting different strategies for initializing and sharing the self and context attention layers and the positionwise feed-forward layers. We tie together the encoder and decoder embeddings weights (word, position, and segment) along with the decoder output layer (transpose of the word embedding layer). We use the same segment embedding for the target sentence (pe) and the second sentence in the encoder (mt) since they are in the same language. The full architecture is shown in Figure 1. We experiment with the following strategies for coupling BERT pre-trained models in the decoder: • Transformer. A Transformer decoder as described in Vaswani et al. (2017) without any shared parameters, with the BERTBASE dimensions and randomly initialized weights. • Pre-trained BERT. This initializes the decoder with the pre-trained BERT model. The only component initialized randomly is the context attention (CA) layer, which is absent in BERT. Unlike in the original BERT model—which only encodes sentences—a mask in the self-attention is required to prevent the model from looking to subsequent tokens in the target sentence. • BERT initialized context attention. Instead of a random initialization, we initialize the context attention layers with the weights of the corresponding BERT self-attention layers. • Shared self-attention. Instead of just having the same initialization, the self-attentions (SA) in the encoder and decoder are tied during training. • Context attention shared with self-attention. We take a step further and tie the context attention and self attention weights—making all the attention transformation matrices (self and context) in the encoder and decoder tied. • Shared feed-forward. 
We tie the feed-forward weights (FF) between the encoder and decoder. 3 Experiments We now describe our experimental results. Our models were implemented on a fork of OpenNMTpy (Klein et al., 2017) using a Pytorch (Paszke et al., 2017) re-implementation of BERT.3 Our model’s implementation is publicly available.4 Datasets. We use the data from the WMT 2018 APE shared task (Chatterjee et al., 2018) (EnglishGerman SMT), which consists of 23,000 triplets for training, 1,000 for validation, and 2,000 for testing. In some of our experiments, we also use the eSCAPE corpus (Negri et al., 2018), which comprises about 8M sentences; when doing so, we oversample 35x the shared task data to cover 10% of the final training data. We segment words with WordPiece (Wu et al., 2016), with the same vocabulary used in the Multilingual BERT. At training time, we discard triplets with 200+ tokens in the combination of src and mt or 100+ tokens in pe. For evaluation, we use TER (Snover et al., 2006) and tokenized BLEU (Papineni et al., 2002). TER↓ BLEU↑ Transformer decoder 20.33 69.31 Pre-trained BERT 20.83 69.11 with CA ←SA 18.91 71.81 and SA ↔Encoder SA 18.44 72.25 and CA ↔SA 18.75 71.83 and FF ↔Encoder FF 19.04 71.53 Table 1: Ablation study of decoder configurations, by gradually having more shared parameters between the encoder and decoder (trained without synthetic data). ↔denotes parameter tying and ←an initialization. Training Details. We use Adam (Kingma and Ba, 2014) with a triangular learning rate schedule that increases linearly during the first 5,000 steps until 5 × 10−5 and has a linear decay afterwards. When using BERT components, we use a 3https://github.com/huggingface/ pytorch-pretrained-BERT 4https://github.com/deep-spin/ OpenNMT-APE 3053 test 2016 test 2017 test 2018 Model Train Size TER↓ BLEU↑ TER↓ BLEU↑ TER↓ BLEU↑ MT baseline (Uncorrected) 24.76 62.11 24.48 62.49 24.24 62.99 B´erard et al. (2017) 23K 22.89 — 23.08 65.57 — — Junczys-Dowmunt and Grundkiewicz (2018) 5M 18.92 70.86 19.49 69.72 — — Junczys-Dowmunt and Grundkiewicz (2018)×4 18.86 71.04 19.03 70.46 — — Tebbifakhr et al. (2018) 8M — — — — 18.62 71.04 Junczys-Dowmunt and Grundkiewicz (2018) 17.81 72.79 18.10 71.72 — — Junczys-Dowmunt and Grundkiewicz (2018)×4 17.34 73.43 17.47 72.84 18.00 72.52 Dual-Source Transformer† 23K 27.80 60.76 27.73 59.78 28.00 59.98 BERT Enc. + Transformer Dec. (Ours) 20.23 68.98 21.02 67.47 20.93 67.60 BERT Enc. + BERT Dec. (Ours) 18.88 71.61 19.03 70.66 19.34 70.41 BERT Enc. + BERT Dec. ×4 (Ours) 18.05 72.39 18.07 71.90 18.91 70.94 BERT Enc. + BERT Dec. (Ours) 8M 16.91 74.29 17.26 73.42 17.71 72.74 BERT Enc. + BERT Dec. ×4 (Ours) 16.49 74.98 16.83 73.94 17.15 73.60 Table 2: Results on the WMT 2016–18 APE shared task datasets. Our single models trained on the 23K dataset took only 3h20m to converge on a single Nvidia GeForce GTX 1080 GPU, while results for models trained on 8M triplets take approximately 2 days on the same GPU. Models marked with “×4” are ensembles of 4 models. Dual-Source Transformer† is a comparable re-implementation of Junczys-Dowmunt and Grundkiewicz (2018). ℓ2 weight decay of 0.01. We apply dropout (Srivastava et al., 2014) with pdrop = 0.1 to all layers and use label smoothing with ϵ = 0.1 (Pereyra et al., 2017). For the small data experiments, we use a batch size of 1024 tokens and save checkpoints every 1,000 steps; when using the eSCAPE corpus, we increase this to 2048 tokens and 10,000 steps. 
The checkpoints are created with the exponential moving average strategy of Junczys-Dowmunt et al. (2018) with a decay of 10−4. At test time, we select the model with best TER on the development set, and apply beam search with a beam size of 8 and average length penalty. Initialization and Parameter Sharing. Table 1 compares the different decoder strategies described in §2.3 on the WMT 2018 validation set. The best results were achieved by sharing the self-attention between encoder and decoder, and by initializing (but not sharing) the context attention with the same weights as the self-attention. Regarding the selfattention sharing, we hypothesize that its benefits are due to both encoder and decoder sharing a common language in their input (in the mt and pe sentence, respectively). Future work will investigate if this is still beneficial when the source and target languages are less similar. On the other hand, the initialization of the context attention with BERT’s selfattention weights is essential to reap the benefits of BERT representations in the decoder—without it, using BERT decreases performance when compared to a regular transformer decoder. This might be due to the fact that context attention and selfattention share the same neural block architecture (multi-head attention) and thus the context attention benefits from the pre-trained BERT’s better weight initialization. No benefit was observed from sharing the feed-forward weights. Final Results. Finally, Table 2 shows our results on the WMT 2016–18 test sets. The model named BERT Enc. + BERT Dec. corresponds to the best setting found in Table 1, while BERT Enc. + Transformer Dec. only uses BERT in the encoder. We show results for single models and ensembles of 4 independent models. Using the small shared task dataset only (23K triplets), our single BERT Enc. + BERT Dec. model surpasses the MT baseline by a large margin (−4.90 TER in test 2018). The only system we are aware to beat the MT baseline with only the shared task data is B´erard et al. (2017), which we also outperform (−4.05 TER in test 2017). With only about 3 GPU-hours and on a much smaller dataset, our model reaches a performance that is comparable to an ensemble of the best WMT 2018 system with an artificial dataset of 5M triplets (+0.02 TER in test 2016), which is much more expensive to 3054 train. With 4× ensembling, we get competitive results with systems trained on 8M triplets. When adding the eSCAPE corpus (8M triplets), performance surpasses the state of the art in all test sets. By ensembling, we improve even further, achieving a final 17.15 TER score in test 2018 (−0.85 TER than the previous state of the art). 4 Related Work In their Dual-Source Transformer model, JunczysDowmunt and Grundkiewicz (2018) also found gains by tying together encoder parameters, and the embeddings of both encoders and decoder. Our work confirms this but shows further gains by using segment embeddings and more careful sharing and initialization strategies. Sachan and Neubig (2018) explore parameter sharing between Transformer layers. However, they focus on sharing decoder parameters in a one-to-many multilingual MT system. In our work, we share parameters between the encoder and the decoder. As stated in §3, B´erard et al. (2017) also showed improved results over the MT baseline, using exclusively the shared task data. Their system outputs edit operations that decide whether to insert, keep or delete tokens from the machine translated sentence. 
Instead of relying on edit operations, our approach mitigates the small amount of data with transfer learning through BERT. Our work makes use of the recent advances in transfer learning for NLP (Peters et al., 2018; Howard and Ruder, 2018; Radford et al., 2018; Devlin et al., 2019). Pre-training these large language models has largely improved the state of the art of the GLUE benchmark (Wang et al., 2018). Particularly, our work uses the BERT pre-trained model and makes use of the representations obtained not only in the encoder but also on the decoder in a language generation task. More closely related to our work, Lample and Conneau (2019) pre-trained a BERT-like language model using parallel data, which they used to initialize the encoder and decoder for supervised and unsupervised MT systems. They also used segment embeddings (along with word and position embeddings) to differentiate between a pair of sentences in different languages. However, this is only used in one of the pre-training phases of the language model (translation language modelling) and not in the downstream task. In our work, we use segment embeddings during the downstream task itself, which is a perfect fit to the APE task. Lopes et al. (2019) used our model on the harder English-German NMT subtask to obtain better TER performance than previous state of the art. To obtain this result, the transfer learning capabilities of BERT were not enough and further engineering effort was required. Particularly, a conservativeness factor was added during beam decoding to constrain the changes the APE system can make to the mt output. Furthermore, the authors used a data weighting method to augment the importance of data samples that have lower TER. By doing this, data samples that required less post-editing effort are assigned higher weights during the training loop. Since the NMT system does very few errors on this domain this data weighting is important for the APE model to learn to do fewer corrections to the mt output. However, their approach required the creation of an artificial dataset to obtain a performance that improved the MT baseline. We leave it for future work to investigate better methods to obtain results that improve the baseline using only real post-edited data in these smaller APE datasets. 5 Conclusion and Future Work We proposed a transfer learning approach to APE using BERT pre-trained models and careful parameter sharing. We explored various ways for coupling BERT in the decoder for language generation. We found it beneficial to initialize the context attention of the decoder with BERT’s self-attention and to tie together the parameters of the self-attention layers between the encoder and decoder. Using a small dataset, our results are competitive with systems trained on a large amount of artificial data, with much faster training. By adding artificial data, we obtain a new state of the art in APE. In future work, we would like to do an extensive analysis on the capabilities of BERT and transfer learning in general for different domains and language pairs in APE. Acknowledgments This work was supported by the European Research Council (ERC StG DeepSPIN 758969), and by the Fundac¸˜ao para a Ciˆencia e Tecnologia through contracts UID/EEA/50008/2019 and CMUPERI/TIC/0046/2014 (GoLocal). We thank the anonymous reviewers for their feedback. 
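As a concrete illustration of the decoder coupling strategies of §2.3, the following minimal PyTorch-style sketch shows how the context attention can be initialized from BERT's self-attention weights and how the encoder and decoder self-attention can be tied. The `DecoderLayer` and `couple_decoder_with_bert` names are illustrative, and each encoder block is assumed to expose a compatible `self_attn` module; this is a sketch under those assumptions, not the authors' OpenNMT-py implementation.

```python
import torch.nn as nn

class DecoderLayer(nn.Module):
    """Schematic decoder block: self-attention, context attention, feed-forward.

    Layer norms, residual connections and target-side masking are omitted;
    dimensions follow BERT-base (H=768, A=12, F=3072).
    """
    def __init__(self, d_model=768, n_heads=12, d_ff=3072):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads)
        self.context_attn = nn.MultiheadAttention(d_model, n_heads)
        self.feed_forward = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))


def couple_decoder_with_bert(encoder_layers, decoder_layers,
                             init_ca_from_sa=True, share_sa=True):
    """Apply two of the Section 2.3 strategies, layer by layer.

    Each encoder block is assumed to expose a `self_attn` module whose
    parameters have the same shapes as the decoder attention modules.
    """
    for enc, dec in zip(encoder_layers, decoder_layers):
        if init_ca_from_sa:
            # "BERT initialized context attention": copy the weights once.
            dec.context_attn.load_state_dict(enc.self_attn.state_dict())
        if share_sa:
            # "Shared self-attention": reuse the very same module, so the
            # parameters stay tied throughout training.
            dec.self_attn = enc.self_attn
    return decoder_layers
```

Tying is realized here by reusing the same module object, so gradients update a single set of parameters, whereas initialization only copies weights once; the shared feed-forward variant of Table 1 would follow the same pattern.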
3055 References Vicent Alabau, Christian Buck, Michael Carl, Francisco Casacuberta, Mercedes Garc´ıa-Mart´ınez, Ulrich Germann, Jes´us Gonz´alez-Rubio, Robin Hill, Philipp Koehn, Luis Leiva, et al. 2014. CASMACAT: A Computer-assisted Translation Workbench. In Proceedings of the Demonstrations at EACL. Alexandre B´erard, Laurent Besacier, and Olivier Pietquin. 2017. LIG-CRIStAL Submission for the WMT 2017 Automatic Post-Editing Task. In Proceedings of WMT17. Rajen Chatterjee, Matteo Negri, Raphael Rubino, and Marco Turchi. 2018. Findings of the WMT 2018 Shared Task on Automatic Post-Editing. In Proceedings of WMT18. Michael Denkowski. 2015. Machine Translation for Human Translators. Ph.D. thesis, Carnegie Mellon University. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of NAACL. Marcello Federico, Nicola Bertoldi, Mauro Cettolo, Matteo Negri, Marco Turchi, Marco Trombetti, Alessandro Cattelan, Antonio Farina, Domenico Lupinetti, Andrea Martines, et al. 2014. The MateCat Tool. In Proceedings of COLING, System Demonstrations. Christopher M Hokamp. 2018. Deep Interactive Text Prediction and Quality Estimation in Translation Interfaces. Ph.D. thesis, Dublin City University. Jeremy Howard and Sebastian Ruder. 2018. Universal Language Model Fine-tuning for Text Classification. In Proceedings of ACL. Marcin Junczys-Dowmunt and Roman Grundkiewicz. 2016. Log-linear Combinations of Monolingual and Bilingual Neural Machine Translation Models for Automatic Post-Editing. In Proceedings of WMT16. Marcin Junczys-Dowmunt and Roman Grundkiewicz. 2018. MS-UEdin Submission to the WMT2018 APE Shared Task: Dual-Source Transformer for Automatic Post-Editing. In Proceedings of WMT18. Marcin Junczys-Dowmunt, Roman Grundkiewicz, Tomasz Dwojak, Hieu Hoang, Kenneth Heafield, Tom Neckermann, Frank Seide, Ulrich Germann, Alham Fikri Aji, Nikolay Bogoychev, et al. 2018. Marian: Fast Neural Machine Translation in C++. In Proceedings of ACL, System Demonstrations. Diederik P Kingma and Jimmy Ba. 2014. Adam: A Method for Stochastic Optimization. preprint arXiv:1412.6980. Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander Rush. 2017. OpenNMT: OpenSource Toolkit for Neural Machine Translation. In Proceedings of ACL 2017, System Demonstrations. Guillaume Lample and Alexis Conneau. 2019. Crosslingual Language Model Pretraining. preprint arXiv:1901.07291. Ant´onio V. Lopes, M. Amin Farajian, Gonc¸alo M. Correia, Jonay Trenous, and Andr´e F. T. Martins. 2019. Unbabel’s Submission to the WMT2019 APE Shared Task: BERT-based Encoder-Decoder for Automatic Post-Editing. In Proceedings of WMT19. Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. 2017. Learned in Translation: Contextualized Word Vectors. In Proceedings of NeurIPS. Matteo Negri, Marco Turchi, Rajen Chatterjee, and Nicola Bertoldi. 2018. eSCAPE: a Large-scale Synthetic Corpus for Automatic Post-Editing. In Proceedings of LREC. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a Method for Automatic Evaluation of Machine Translation. In Proceedings of ACL. Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in PyTorch. Proceedings of NeurIPS Autodiff Workshop. Gabriel Pereyra, George Tucker, Jan Chorowski, Łukasz Kaiser, and Geoffrey Hinton. 2017. 
Regularizing Neural Networks by Penalizing Confident Output Distributions. preprint arXiv:1701.06548. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep Contextualized Word Representations. In Proceedings of NAACL. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving Language Understanding by Generative Pre-Training. preprint. Devendra Sachan and Graham Neubig. 2018. Parameter Sharing Methods for Multilingual SelfAttentional Translation Models. In Proceedings of WMT18. Michel Simard, Nicola Ueffing, Pierre Isabelle, and Roland Kuhn. 2007. Rule-Based Translation with Statistical Phrase-Based Post-Editing. In Proceedings of WMT07. Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A Study of Translation Edit Rate with Targeted Human Annotation. In Proceedings of AMTA. 3056 Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958. Amirhossein Tebbifakhr, Ruchit Agrawal, Matteo Negri, and Marco Turchi. 2018. Multi-source Transformer with Combined Losses for Automatic PostEditing. In Proceedings of WMT18. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention Is All You Need. In Proceedings of NeurIPS. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding. In Proceedings of EMNLP Workshop BlackboxNLP. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google’s Neural Machine Translation System: Bridging the Gap between Human and Machine Translation. preprint arXiv:1609.08144.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3057–3062 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3057 Translating Translationese: A Two-Step Approach to Unsupervised Machine Translation Nima Pourdamghani♣Nada Aldarrab♠Marjan Ghazvininejad♦ Kevin Knight♥Jonathan May♠ ♣Amazon ♠USC Information Sciences Institute ♦Facebook AI Research ♥DiDi Labs [email protected] [email protected] [email protected] [email protected] [email protected] Abstract Given a rough, word-by-word gloss of a source language sentence, target language natives can uncover the latent, fully-fluent rendering of the translation. In this work we explore this intuition by breaking translation into a two step process: generating a rough gloss by means of a dictionary and then ‘translating’ the resulting pseudo-translation, or ‘Translationese’ into a fully fluent translation. We build our Translationese decoder once from a mish-mash of parallel data that has the target language in common and then can build dictionaries on demand using unsupervised techniques, resulting in rapidly generated unsupervised neural MT systems for many source languages. We apply this process to 14 test languages, obtaining better or comparable translation results on high-resource languages than previously published unsupervised MT studies, and obtaining good quality results for low-resource languages that have never been used in an unsupervised MT scenario. 1 Introduction Quality of machine translation, especially neural MT, highly depends on the amount of available parallel data. For a handful of languages, where parallel data is abundant, MT quality has reached quite good performance (Wu et al., 2016; Hassan et al., 2018). However, the quality of translation rapidly deteriorates as the amount of parallel data decreases (Koehn and Knowles, 2017). Unfortunately, many languages have close to zero parallel texts. Translating texts from these languages requires new techniques. Hermjakob et al. (2018) presented a hybrid human/machine translation tool that uses lexical translation tables to gloss a translation and relies on human language and world models to propagate glosses into fluent translations. Inspired by that work, this work investigates the following question: Can we replace the human in the loop with more technology? We provide the following two-step solution to unsupervised neural machine translation: 1. Use a bilingual dictionary to gloss the input into a pseudo-translation or ‘Translationese’. 2. Translate the Translationese into target language, using a model built in advance from various parallel data, with the source side converted into Translationese using Step 1. The notion of separating adequacy from fluency components into a pipeline of operations dates back to the early days of MT and NLP research, where the inadequacy of word-by-word MT was first observed (Yngve, 1955; Oswald, 1952). A subfield of MT research that seeks to improve fluency given disfluent but adequate first-pass translation is automatic post-editing (APE) pioneered by Knight and Chander (1994). Much of the current APE work targets correction of black-box MT systems, which are presumed to be supervised. Early approaches to unsupervised machine translation include decipherment methods (Nuhn et al., 2013; Ravi and Knight, 2011; Pourdamghani and Knight, 2017), which suffer from a huge hypothesis space. 
Recent approaches to zero-shot machine translation include pivot-based methods (Chen et al., 2017; Zheng et al., 2017; Cheng et al., 2016) and multi-lingual NMT methods (Firat et al., 2016a,b; Johnson et al., 2017; Ha et al., 2016, 2017). These systems are zero-shot for a specific source/target language pair, but need parallel data from source to a pivot or multiple other languages. More recently, totally unsupervised NMT methods are introduced that use only monolingual data for training a machine translation system. Lample et al. (2018a,c), Artetxe et al. (2018), and 3058 Yang et al. (2018) use iterative back-translation to train MT models in both directions simultaneously. Their training takes place on massive monolingual data and requires a long time to train as well as careful tuning of hyperparameters. The closest unsupervised NMT work to ours is by Kim et al. (2018). Similar to us, they break translation into glossing and correction steps. However, their correction step is trained on artificially generated noisy data aimed at simulating glossed source texts. Although this correction method helps, simulating noise caused by natural language phenomena is a hard task and needs to be tuned for every language. Previous zero-shot NMT work compensates for a lack of source/target parallel data by either using source/pivot parallel data, extremely large monolingual data, or artificially generated data. These requirements and techniques limit the methods’ applicability to real-world low-resource languages. Instead, in this paper we propose using parallel data from high-resource languages to learn ‘how to translate’ and apply the trained system to low resource settings. We use off-theshelf technologies to build word embeddings from monolingual data (Bojanowski et al., 2017) and learn a source-to-target bilingual dictionary using source and target embeddings (Lample et al., 2018b). Given a target language, we train sourceto-target dictionaries for a diverse set of highresource source languages, and use them to convert the source side of the parallel data to Translationese. We combine this parallel data and train a Translationese-to-target translator on it. Later, we can build source-to-target dictionaries on-demand, generate Translationese from source texts, and use the pre-trained system to rapidly produce machine translation for many languages without requiring a single line of source-target parallel data. We introduce the following contributions in this paper: • Following Hermjakob et al. (2018), we propose a two step pipeline for building a rapid neural MT system for many languages. The pipeline does not require parallel data or parameter fine-tuning when adapting to new source languages. • The pipeline only requires a comprehensive source to target dictionary. We show that this dictionary can be easily obtained using offthe shelf tools within a few hours. • We use this system to translate test texts from 14 languages into English. We obtain better or comparable quality translation results on high-resource languages than previously published unsupervised MT studies, and obtain good quality results for low-resource languages that have never been used in an unsupervised MT scenario. To our knowledge, this is the first unsupervised NMT work that shows good translation results on such a large number of languages. 2 Method We introduce a two-step pipeline for unsupervised machine translation. 
In the first step a source text is glossed into a pseudo-translation or Translationese, while in the second step a pre-trained model translates the Translationese into target. We introduce a fully unsupervised method for converting the source into Translationese, and we show how to train a Translationese to target system in advance and apply it to new source languages. 2.1 Building a Dictionary The first step of our proposed pipeline includes a word-by-word translation of the source texts. This requires a source/target dictionary. Manually constructed dictionaries exist for many language pairs, however cleaning these dictionaries to get a word to word lexicon is not trivial, and these dictionaries often cover a small portion of the source vocabulary, focusing on stems and specifically excluding inflected variants. In order to have a comprehensive, word to word, inflected bi-lingual dictionary we look for automatically built ones. Automatic lexical induction is an active field of research (Fung, 1995; Koehn and Knight, 2002; Haghighi et al., 2008; Lample et al., 2018b). A popular method for automatic extraction of bilingual dictionaries is through building cross-lingual word embeddings. Finding a shared word representation space between two languages enables us to calculate the distance between word embeddings of source and target, which helps us to find translation candidates for each word. We follow this approach for building the bilingual dictionaries. For a given source and target language, we start by separately training source and target word embeddings S and T, and use the method introduced by Lample et al. (2018b) to find a linear mapping W that maps the source 3059 embedding space to the target: SW = T. Lample et al. (2018b) propose an adversarial method for estimating W, where a discriminator is trained to distinguish between elements randomly sampled from WS and T, and W is trained to prevent the discriminator from making accurate classifications. Once the initial mapping matrix W is trained, a number of refinement steps is performed to improve performance over less frequent words by changing the metric of the space. We use the trained matrix W to map the source embeddings into the space of the target embeddings. Then we find the k-nearest neighbors among the target words for each source word, according to the cosine distance metric. These nearest neighbors represent our translation options for that source word. 2.2 Source to Translationese Once we have the translation options for tokens in the source vocabulary we can perform a word by word translation of the source into Translationese. However, a naive translation of each source token to its top translation option without considering the context is not the best way to go. Given different contexts, a word should be translated differently. We use a 5-gram target language model to look at different translation options for a source word and select one based on its context. This language model is trained in advance on large target monolingual data. In order to translate a source sentence into Translationese we apply a beam search with a stack size of 100 and assign a score equal to αPLM + βd(s, t) to each translation option t for a source token s, where PLM is the language model score, and d(s, t) is the cosine distance between source and target words. 
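A minimal sketch of this glossing search is given below; `options[s]` (the per-token translation candidates with their dictionary scores) and `lm_score` (a scorer for the 5-gram target language model) are hypothetical stand-ins rather than the authors' actual interfaces, and the additive use of d(s, t) simply follows the formula above.

```python
def gloss_sentence(src_tokens, options, lm_score, alpha, beta, stack_size=100):
    """Gloss a source sentence into Translationese with a beam (stack) search.

    `options[s]` maps a source token to a list of (target_word, d) pairs taken
    from the induced dictionary of Section 2.1, and `lm_score(prefix)` is
    assumed to return the 5-gram language model score of a target prefix.
    Each extension is scored with alpha * P_LM + beta * d(s, t), as above.
    """
    beams = [([], 0.0)]  # (partial gloss, cumulative score)
    for s in src_tokens:
        candidates = []
        for prefix, score in beams:
            # tokens with no dictionary entry are copied through unchanged
            for t, d in options.get(s, [(s, 0.0)]):
                new_prefix = prefix + [t]
                new_score = score + alpha * lm_score(new_prefix) + beta * d
                candidates.append((new_prefix, new_score))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:stack_size]
    return beams[0][0]  # best full gloss
```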
We set α = 0.01 and β = 0.5 2.3 Translationese to Target We train a transformer model (Vaswani et al., 2017) on parallel data from a diverse set of highresource languages to translate Translationese into a fluent target. For each language we convert the source side of the parallel data to Translationese as described in Section 2.2. Then we combine and shuffle all the Translationese/target parallel data and train the model on the result. Once the model is trained, we can apply it to the Translationese coming from any source language. We use the tensor2tensor implementation1 of the transformer model with the transformer base set of hyperparameters (6 layers, hidden layer size of 512) as our translation model. 3 Data and Parameters For all our training and test languages, we use the pre-trained word embeddings2 trained on Wikipedia data using fastText (Bojanowski et al., 2017). These embeddings are used to train bilingual dictionaries. We select English as the target language. In order to avoid biasing the trained system toward a language or a specific type of parallel data, we use diverse parallel data on a diverse set of languages to train the Translationese to English system. We use Arabic, Czech, Dutch, Finnish, French, German, Italian, Russian, and Spanish as the set of out training languages. We use roughly 2 million sentence pairs per language and limit the length of the sentences to 100 tokens. For Dutch, Finnish, and Italian we use Europarl (Koehn, 2005) for parallel data. For Arabic we use MultiUN (Tiedemann, 2012). For French we use CommonCrawl. For German we use a mix of CommonCrawl (1.7M), and NewsCommentary (300K). The numbers in parentheses show the number of sentences for each dataset. For Spanish we use CommonCrawl (1.8M), and Europarl (200K). For Russian we use Yandex (1M), CommonCrawl (800K), and NewsCommentary (200K), and finally for Czech we use a mix of ParaCrawl (1M), Europarl (640K), NewsCommentary (200K), and CommonCrawl (160K). We train one model on these nine languages and apply it to test languages not in this set. Also, to test on each of the training languages, we train a model where the parallel data for that language is excluded from the training data. In each experiment we use 3000 blind sentences randomly selected out of the combined parallel data as the development set. We use the default parameters in Lample et al. (2018b) to find the cross-lingual embedding vectors. In order to create the dictionary we limit the size of the source and target (English) vocabulary 1https://github.com/tensorflow/ tensor2tensor 2https://github.com/facebookresearch/ fastText/blob/master/pretrained-vectors. md 3060 to 100K tokens. For each source token we find 20 nearest neighbors in the target language. We use a 5-gram language model trained on 4 billion tokens of Gigaword to select between the translation options for each token. We use Moses scripts for tokenizing and lowercasing the data. We do not apply BPE (Sennrich et al., 2016) on the data. In order to be comparable to Kim et al. (2018) we split German compound words only for the newstest2016 test data. We use the CharSplit3 python package for this purpose. We use tensor2tensor’s transformer base hyperparameters to train the transformer model on a single gpu for each language. 4 Experiments We report translation results on newstest2013 for Spanish, newstest2014 for French, and newstest2016 for Czech, German, Finnish, Romanian, and Russian. 
We also report results on the first 3000 sentences of GlobalVoices20154 for Dutch, Bulgarian, Danish, Indonesian, Polish, Portuguese, and Catalan. In each experiment we report the quality of the intermediate Translationese as well as the scores for our full model. fr-en de-en ru-en ro-en Lample et al. (2018a) 14.3 13.3 Artetxe et al. (2018) 15.6 10.2 Yang et al. (2018) 15.6 14.6 Lample et al. (2018c) (transformer) 24.2 21.0 9.1 19.4 Kim et al. (2018) 16.5 17.2 Translationese 11.6 13.8 5.7 8.1 Full Model 21.0 18.7 12.0 16.3 Table 1: Comparing translation results on newstest2014 for French, and newstest2016 for Russian, German, and Romanian with previous unsupervised NMT methods. Kim et al. (2018) is the method closest to our work. We report the quality of Translationese as well as the scores for our full model. We compare our results against all the existing fully unsupervised neural machine translation 3https://github.com/dtuggener/ CharSplit 4http://opus.nlpl.eu/GlobalVoices.php methods in Table 1 and show better results on common test languages compared to all of them except Lample et al. (2018c) where, compared to their transformer model,5 we improve results for Russian, but not for other languages. The first four methods that we compare against are based on back-translation. These methods require huge monolingual data and large training time to train a model per test language. The fifth method, which is most similar to our approach (Kim et al., 2018), can be trained quickly, but still is fine tuned for each test language and performs worse than our method. Unlike the previous works, our model can be trained once and applied to any test language on demand. Besides this, these methods use language-specific tricks and development data for training their models while our system is trained totally independent of the test language. We also show acceptable BLEU scores for ten other languages for which no previous unsupervised NMT scores exist, underscoring our ability to produce new systems rapidly (Table 2). cs-en es-en fi-en nl-en bg-en Translationese 7.4 12.7 3.8 16.9 10.0 Full Model 13.7 22.2 7.2 22.0 16.8 da-en id-en pl-en pt-en ca-en Translationese 13.6 7.4 8.3 15.2 10.1 Full Model 18.5 13.7 14.8 23.1 19.8 Table 2: Translation results on ten new languages: Czech, Spanish, Finnish, Dutch, Bulgarian, Danish, Indonesian, Polish, Portuguese, and Catalan 5 Conclusion We propose a two step pipeline for building a rapid unsupervised neural machine translation system for any language. The pipeline does not require retraining the neural translation model when adapting to new source languages, which makes its application to new languages extremely fast and easy. The pipeline only requires a comprehensive source-to-target dictionary. We show how to easily obtain such a dictionary using off-the shelf tools. We use this system to translate test texts from 14 languages into English. We obtain better or comparable quality translation results on high-resource languages than previously published unsupervised 5They present better results when combining their transformer model with an unsupervised phrase-based translation model. 3061 MT studies, and obtain good quality results for ten other languages that have never been used in an unsupervised MT scenario. 
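To make the dictionary-extraction step of Section 2.1 concrete, the sketch below maps source embeddings with a learned matrix W and keeps, for each source word, the k nearest target words under cosine similarity; the variable names are illustrative, and W itself is assumed to come from an off-the-shelf aligner such as the one of Lample et al. (2018b).

```python
import numpy as np

def extract_dictionary(src_emb, tgt_emb, W, src_vocab, tgt_vocab, k=20):
    """Extract a bilingual lexicon from mapped embeddings (Section 2.1).

    `src_emb` and `tgt_emb` are row-wise embedding matrices (one vector per
    vocabulary entry) and `W` is a linear mapping of the source space into
    the target space, learned separately.
    """
    mapped = src_emb @ W                                   # SW, in the target space
    mapped = mapped / np.linalg.norm(mapped, axis=1, keepdims=True)
    tgt = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    sims = mapped @ tgt.T                                  # cosine similarities
    lexicon = {}
    for i, src_word in enumerate(src_vocab):
        nearest = np.argsort(-sims[i])[:k]                 # k nearest neighbours
        lexicon[src_word] = [(tgt_vocab[j], float(sims[i, j])) for j in nearest]
    return lexicon
```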
Acknowledgements The research is based upon the work that took place in Information Sciences Institute (ISI) which was supported by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via AFRL Contract FA8650-17-C-9116 and by the Defense Advanced Research Projects Agency (DARPA) via contract HR0011-15-C-0115. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the ODNI, IARPA, DARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. References Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2018. Unsupervised neural machine translation. In Proc. ICLR. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146. Yun Chen, Yang Liu, Yong Cheng, and Victor O.K. Li. 2017. A teacher-student framework for zeroresource neural machine translation. In Proc. ACL. Yong Cheng, Yang Liu, Qian Yang, Maosong Sun, and Wei Xu. 2016. Neural machine translation with pivot languages. arXiv preprint arXiv:1611.04928. Orhan Firat, Kyunghyun Cho, and Yoshua Bengio. 2016a. Multi-way, multilingual neural machine translation with a shared attention mechanism. In Proc. NAACL. Orhan Firat, Baskaran Sankaran, Yaser Al-Onaizan, Fatos T. Yarman Vural, and Kyunghyun Cho. 2016b. Zero-resource translation with multi-lingual neural machine translation. In Proc. EMNLP. Pascale Fung. 1995. Compiling bilingual lexicon entries from a non-parallel English-Chinese corpus. In Workshop on Very Large Corpora. Thanh-Le Ha, Jan Niehues, and Alexander Waibel. 2016. Toward multilingual neural machine translation with universal encoder and decoder. arXiv preprint arXiv:1611.04798. Thanh-Le Ha, Jan Niehues, and Alexander Waibel. 2017. Effective strategies in zero-shot neural machine translation. arXiv preprint arXiv:1711.07893. Aria Haghighi, Percy Liang, Taylor Berg-Kirkpatrick, and Dan Klein. 2008. Learning bilingual lexicons from monolingual corpora. In Proc. ACL. Hany Hassan, Anthony Aue, Chang Chen, Vishal Chowdhary, Jonathan Clark, Christian Federmann, Xuedong Huang, Marcin Junczys-Dowmunt, William Lewis, Mu Li, et al. 2018. Achieving human parity on automatic Chinese to English news translation. arXiv preprint arXiv:1803.05567. Ulf Hermjakob, Jonathan May, Michael Pust, and Kevin Knight. 2018. Translating a language you don’t know in the Chinese room. In Proc. ACL, System Demonstrations. Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Vi´egas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2017. Google’s multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics, 5:339–351. Yunsu Kim, Jiahui Geng, and Hermann Ney. 2018. Improving unsupervised word-by-word translation with language model and denoising autoencoder. In Proc. EMNLP. Kevin Knight and Ishwar Chander. 1994. Automated postediting of documents. In Proc AAAI. Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In Proc. MT summit. Philipp Koehn and Kevin Knight. 2002. 
Learning a translation lexicon from monolingual corpora. In Proc. ACL workshop on Unsupervised lexical acquisition. Philipp Koehn and Rebecca Knowles. 2017. Six challenges for neural machine translation. In Proc. ACL Workshop on Neural Machine Translation. Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018a. Unsupervised machine translation using monolingual corpora only. In Proc. ICLR. Guillaume Lample, Alexis Conneau, Marc ´Aurelio Ranzato, Ludovic Denoyer, and Herv´e J´egou. 2018b. Word translation without parallel data. In Proc. ICLR. Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018c. Phrase-based & neural unsupervised machine translation. In Proc. EMNLP. Malte Nuhn, Julian Schamper, and Hermann Ney. 2013. Beam search for solving substitution ciphers. In Proc. ACL. 3062 Victor Oswald. 1952. Word-by-word translation. In Proc. intervention `a la Conf´erence du MIT. Nima Pourdamghani and Kevin Knight. 2017. Deciphering related languages. In Proc. EMNLP. Sujith Ravi and Kevin Knight. 2011. Deciphering foreign language. In Proc. ACL. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proc. ACL. J¨org Tiedemann. 2012. Parallel data, tools and interfaces in OPUS. In Proc. Lrec. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proc. NIPS. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144. Zhen Yang, Wei Chen, Feng Wang, and Bo Xu. 2018. Unsupervised neural machine translation with weight sharing. In Proc. ACL. Victor H. Yngve. 1955. Sentence-for-sentence translation. Mechanical Translation, 2(2):29–37. Hao Zheng, Yong Cheng, and Yang Liu. 2017. Maximum expected likelihood estimation for zeroresource neural machine translation. In Proc. IJCAI.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3063–3068 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3063 Training Neural Machine Translation To Apply Terminology Constraints Georgiana Dinu Prashant Mathur Marcello Federico Yaser Al-Onaizan Amazon AI {gddinu,pramathu,marcfede,onaizan}@amazon.com Abstract This paper proposes a novel method to inject custom terminology into neural machine translation at run time. Previous works have mainly proposed modifications to the decoding algorithm in order to constrain the output to include run-time-provided target terms. While being effective, these constrained decoding methods add, however, significant computational overhead to the inference step, and, as we show in this paper, can be brittle when tested in realistic conditions. In this paper we approach the problem by training a neural MT system to learn how to use custom terminology when provided with the input. Comparative experiments show that our method is not only more effective than a state-of-the-art implementation of constrained decoding, but is also as fast as constraint-free decoding. 1 Introduction Despite the high quality reached nowadays by neural machine translation (NMT), its output is often still not adequate for many specific domains handled daily by the translation industry. While NMT has shown to benefit from the availability of in-domain parallel or monolingual data to learn domain specific terms (Farajian et al., 2018), it is not a universally applicable solution as often a domain may be too narrow and lacking in data for such bootstrapping techniques to work. For this reason, most multilingual content providers maintain terminologies for all their domains which are created by language specialists. For example, an entry such as Jaws (en) →Lo Squalo (it) would exist in order to indicate that the input Jaws is a scary movie should be translated as Lo Squalo `e un film pauroso. While translation memories can be seen as ready-to-use training data for NMT domain adaptation, terminology databases (in short term bases) are more difficult to handle and there has been significant work on proposing methods to integrate domain terminology into NMT at run time. Constrained decoding is the main approach to this problem. In short, it uses the target side of terminology entries whose source side match the input as decoding-time constraints. Constrained decoding and various improvements were addressed in Chatterjee et al. (2017), Hasler et al. (2018), Hokamp and Liu (2017) among others. Hokamp and Liu (2017) recently introduced the grid beam search (GBS) algorithm which uses a separate beam for each supplied lexical constraint. This solution however increases the run time complexity of the decoding process exponentially in the number of constraints. Post and Vilar (2018) recently suggested using a dynamic beam allocation (DBA) technique that reduces the computational overhead to a constant factor, independent from the number of constraints. In practice, results reported in Post and Vilar (2018) show that constrained decoding with DBA is effective but still causes a 3-fold increase in translation time when used with a beam size of 5. In this paper we address the problem of constrained decoding as that of learning a copy behaviour of terminology at training time. By modifying the training procedure of neural MT we are completely eliminating any computational overhead at inference time. 
Specifically, the NMT model is trained to learn how to use terminology entries when they are provided as additional input to the source sentence. Term translations are inserted as inline annotations and additional input streams (so called source factors) are used to signal the switch between running text and target terms. We present experiments on Englishto-German translation with terms extracted from two terminology dictionaries. As we do not assume terminology is available at train-time, all our 3064 Append En All0 alternates1 Stellvertreter2 shall0 be0 elected0 for0 one0 term0 Replace En All0 Stellvertreter2 shall0 be0 elected0 for0 a0 term0 De Alle Stellvertreter werden f¨ur eine Amtszeit gew¨ahlt Table 1: The two alternative ways used to generate sourcetarget training data, including target terms in the source and factors indicating source words (0), source terms (1), and target terms (2). tests are performed in a zero-shot setting, that is with unseen terminology terms. We compare our approach against the efficient implementation of constrained decoding with DBA proposed by Post and Vilar (2018). While our goal resembles that of Gu et al. (2017) (teaching NMT to use translation memories) and of Pham et al. (2018) (exploring network architectures to enforce copy behaviour), the method we propose works with a standard transformer NMT model (Vaswani et al., 2017) which is fed a hybrid input containing running text and inline annotations. This decouples the terminology functionality from the NMT architecture, which is particularly important as the state-of-theart architectures are continuously changing. 2 Model We propose an integrated approach in which the MT model learns, at training time, how to use terminology when target terms are provided in input. In particular, the model should learn to bias the translation to contain the provided terms, even if they were not observed in the training data. We augment the traditional MT input to contain a source sentence as well as a list of terminology entries that are triggered for that sentences, specifically those whose source sides match the sentence. While many different ways have been explored to augment MT input with additional information, we opt here for integrating terminology information as inline annotations in the source sentence, by either appending the target term to its source version, or by directly replacing the original term with the target one. We add an additional parallel stream to signal this “code-switching” in the source sentence. When the translation is appended this stream has three possible values: 0 for source words (default), 1 for source terms, and 2 for target terms. The two tested variants, one in which the source side of the terminology is retained and one in which it is discarded, are illustrated with an example in Table 1. 2.1 Training data creation As we do not modify the original sequence-tosequence NMT architecture, the network can learn the use of terminology from the augmentation of the training data. We hypothesize that the model will learn to use the provided terminology at training time if it holds true that when a terminology entry (ts, tt) is annotated in the source, the target side tt is present in the reference. For this reason we annotate only terminology pairs that fit this criterion. The term bases used in the experiments are quite large and annotating all matches leads to most of the sentences containing term annotations. 
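The annotation scheme of Table 1 can be sketched as follows. This simplified version assumes a single exact, non-overlapping match per term and omits the approximate matching and the broadcasting of factors to BPE units, so it is illustrative rather than a faithful reproduction of the data-creation pipeline.

```python
def annotate(tokens, term_src, term_tgt, mode="append"):
    """Inline terminology annotation with source factors (cf. Table 1).

    Returns (tokens, factors), where factors are 0 for ordinary source words,
    1 for matched source-term words and 2 for injected target-term words.
    `term_src` and `term_tgt` are token lists for one terminology entry.
    """
    out_tokens, out_factors = [], []
    i = 0
    while i < len(tokens):
        if tokens[i:i + len(term_src)] == term_src:
            if mode == "append":
                out_tokens += term_src + term_tgt
                out_factors += [1] * len(term_src) + [2] * len(term_tgt)
            else:  # "replace": drop the source side of the term
                out_tokens += term_tgt
                out_factors += [2] * len(term_tgt)
            i += len(term_src)
        else:
            out_tokens.append(tokens[i])
            out_factors.append(0)
            i += 1
    return out_tokens, out_factors

# annotate("All alternates shall be elected".split(),
#          ["alternates"], ["Stellvertreter"], mode="append")
# -> (['All', 'alternates', 'Stellvertreter', 'shall', 'be', 'elected'],
#     [0, 1, 2, 0, 0, 0])
```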
Since we want to model to perform equally well in a baseline, constraint-free condition, we limit the number of annotations by randomly ignoring some of the matches. A sentence s may contain multiple matches from a term base, but we keep the longest match in the case of overlapping source terms. Moreover, when checking for matches of a term inside a sentence, we apply approximate matching to allow for some morphological variations in the term. In our current implementation, we use a simple character sequence match, allowing for example for base word forms to be considered matches even if they are inflected or as part of compounds. 3 Experiments 3.1 Evaluation setting Parallel data and NMT architecture We test our approach on the WMT 2018 English-German news translation tasks1, by training models on Europarl and news commentary data, for a total 2.2 million sentences. The baselines use this train data as is. For the other conditions sentences containing term annotations are added amounting to approximately 10% of the original data. We limit the amount of data added (by randomly ignoring some of the matched terms) as we want the model to work equally well when there are no terms provided as input. Note that these sentences are from the original data pool and therefore no actual new data is introduced. We tokenize the corpora using Moses (Koehn et al., 2007) and perform joint source and target 1http://www.statmt.org/wmt18/translation-task.html 3065 BPE encoding (Sennrich et al., 2016) to a vocabulary of 32K tokens. We use the source factor streams described in the previous section which are broadcast from word streams to BPE streams in a trivial way. We embed the three values of this additional stream into vectors of size 16 and concatenate them to the corresponding sub-word embeddings. We train all models using a transformer network (Vaswani et al., 2017) with two encoding layers and two decoding layers, shared source and target embeddings, and use the Sockeye toolkit (Hieber et al., 2018) (see full training configuration in the Appendix). The WMT newstest 2013 development set is used to compute the stopping criterion and all models are trained for a minimum of 50 and a maximum of 100 epochs. We compare the two methods we propose, train-by-appending and train-by-replace with the constrained decoding algorithm of Post and Vilar (2018) available in Sockeye in identical conditions and using a beam size of 5. Terminology databases We extracted the English-German portions of two publicly available term bases, Wiktionary and IATE.2 In order to avoid spurious matches, we filtered out entries occurring in the top 500 most frequent English words as well as single character entries. We split the term bases into train and test lists by making sure there is no overlap on the source side. 3.2 Results We perform our evaluation on WMT newstest 2013/2017 as development (dev) and test sets respectively and use the test portions of Wiktionary and IATE to annotate the test set.3 We select the sentences in which the term is used in the reference and therefore the copy behaviour is justified. The test set extracted with the Wiktionary term base contains 727 sentences and 884 terms, while the IATE one contains 414 sentences and 452 terms. Table 2 shows the results. We report decoding speed, BLEU scores, as well as term use rates, computed as the percentage of times the term translation was generated in the output out of the total number of term annotations. 
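For concreteness, the term use rate can be computed along the following lines; the exact matching conventions behind Table 2 (tokenization, casing, approximate matches) are not specified here, so this is only an approximate sketch.

```python
def term_use_rate(hypotheses, annotations):
    """Percentage of supplied target terms that appear in the MT output.

    `hypotheses[i]` is the translation of sentence i as a string and
    `annotations[i]` is the list of target terms supplied for that sentence.
    Only surface (substring) matches are counted in this sketch.
    """
    used = total = 0
    for hyp, terms in zip(hypotheses, annotations):
        for term in terms:
            total += 1
            if term in hyp:
                used += 1
    return 100.0 * used / total if total else 0.0
```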
2Available at https://iate.europa.eu and https://www.wiktionary.org/ 3https://github.com/mtresearcher/ terminology_dataset Term use rates and decoding speed The first observation we make is that the baseline model already uses the terminology translation at a high rate of 76%. Train-by-appending settings reach a term usage rate of around 90% while train-byreplace reaches even higher usage rates (93%94%) indicating that completely eliminating the source term enforces the copy behaviour even more strongly. All these compare favourably to constrained decoding which reaches 99% on Wiktionary but only 82% on IATE.4 Second, the decoding speed of both our settings is comparable with that of the baseline, thus three times faster than the translation speed of constrained decoding (CD). This is an important difference because a three-fold increase of decoding time can hinder the use of terminology in latencycritical applications. Notice that decoding times were measured by running experiments with batch size 1 on a single GPU P3 AWS instance.5 Wikt Model Term% BLEU (∆) Time(s) Baseline 76.9 26.0 0.19 Constr. dec. 99.5 25.8 (-0.2) 0.68 Train-by-app. 90.7 26.9 (+0.9)↑ 0.19 Train-by-rep. 93.4 26.3 (+0.3) 0.19 IATE Model Term% BLEU (∆) Time(s) Baseline 76.3 25.8 0.19 Constr. dec. 82.0 25.3 (-0.5)↓ 0.68 Train-by-app. 92.9 26.0 (+0.2) 0.19 Train-by-rep. 94.5 26.0 (+0.2) 0.20 Table 2: Term usage percentage and BLEU scores of systems supplied with correct term entries, exactly matching the source and the target. We also provide the P99 latency numbers (i.e. 99% of the times the translations were completed within the given number of seconds). ↑and ↓represent significantly better and worse systems than the baseline system at a p-value < 0.05. Translation quality Surprisingly, we observe significant variance w.r.t BLEU scores. Note that the terminologies affect only a small part of a sentence and most of the times the baseline already contains the desired term, therefore high BLEU variations are impossible on this test set. Constrained decoding does not lead to any changes in BLEU, other than a decrease on IATE with a small beam size of 5. However, all train-bymodels show BLEU increases (+0.2 to +0.9), in 4We ran an additional experiment with a larger beam size of 20 and confirmed that constrained decoding can reach 99% term use on IATE, however at a drastic latency cost. 5The reason for using batch size 1 is that the CD implementation does not yet offer an optimized batched version. 3066 src Plain clothes officers from Dusseldorf’s police force managed to arrest two women and two men, aged between 50 and 61, on Thursday. constr dec Plain Kleidungsbeamte der Polizei Dusseldorf konnten am Donnerstag zwei Frauen und zwei M¨anner im Alter von 50 bis 61 Festnahme festzunehmen. train-by-app Plain Kleidungsbeamte der Polizei von Dusseldorf konnten am Donnerstag zwei Frauen und zwei M¨anner festzunehmen , die zwischen 50 und 61 Jahre alt waren. ref Zivilfahndern der Dsseldorfer Polizei gelang am Donnerstag die Festnahme von zwei Frauen und zwei Mnnern im Alter von 50 bis 61 Jahren. src The letter extends an offer to cooperate with German authorities “when the difficulties of this humanitarian situation have been resolved” . constr dec Das Schreiben erweitert ein Angebot zur Zusammenarbeit mit den deutschen Beh¨orden, “wenn die Schwierigkeiten dieser humanit¨ar gel¨ost sind”. 
train-by-app Das Schreiben erweitert ein Angebot zur Zusammenarbeit mit den deutschen Beh¨orden, “wenn die Schwierigkeiten dieser humanit¨aren Situation gel¨ost sind.” ref ”In seinem Brief macht Snowden den deutschen Beh¨orden ein Angebot der Zusammenarbeit, wenn die Schwierigkeiten rund um die humanit¨are Situation gel¨ost wurden . Table 3: Examples in which constrained decoding leads to lower translation quality due to strict enforcement of constraints. The terms are arrest →Festnahme and humanitarian →humanit¨ar (IATE terminology) Wiktionary IATE Model BLEU (∆) BLEU (∆) Baseline 25.0 25.0 Constr. dec. 24.1 (-0.9)↓ 23.7 (-1.3)↓ Train-by-app. 25.0 (0.0) 25.4 (+0.4) Train-by-rep. 24.8 (-0.2) 25.3 (+0.3) Table 4: Machine translation results of systems supplied with term entries showing exact source matches and approximate reference matches. ↓represent significantly worse system than baseline with a p-value < 0.05. particular the train-by-appending ones which have a lower terminology use rate. When examining the errors of the methods we observe cases in which constrained decoding alters the translation to accommodate a term even if a variation of that term is already in the translation as in the festzunehmen/Festnahme example of Table 3 (and sometimes even if the identical term is already used). A closer look at previous constrained decoding literature shows that most of the evaluations are performed differently than in this paper: The data sets contain only sentences for which the reference contains the term and also the baseline fails to produce it. This is an ideal setting which we believe to mimic few, if any, real world applications. We observed an additional surprisingly positive behavior with our approach which constrained decoding does not handle: in some cases, our models generate morphological variants of terminology translations provided by the term base. Following up on this we set up an additional experiment by extending the previous set to also include approximate matches on the target side (identical to the approximate match in training explained in Section 2.1). Table 4 shows these results. We observe that this test case is already more difficult for constrained decoding as well as for train-by-replace, most likely due to the removal of the original source side content. On the other hand, trainby-append still performs better than the baseline, while constrained decoding shows significant BLEU score reductions of 0.9-1.3 BLEU points. The humanitarian →humanit¨ar example in Table 3 is a representative of the errors introduced by constrained decoding in case of source matching terms whose target side needs to be inflected. 4 Conclusion While most of previous work on neural MT addressed the integration of terminology with constrained decoding, we proposed a black-box approach in which a generic neural MT architecture is directly trained to learn how to use an external terminology that is provided at run-time. We performed experiments in a zero-shot setting, showing that the copy behaviour is triggered at test time with terms that were never seen in training. In contrast to constrained decoding, we have also observed that the method exhibits flexible use of terminology as in some cases the terms are used in their provided form while other times inflection is performed. 6 To our knowledge there is no existing work that 6Luong et al. 
(2015) and SYSTRANs Pure NMT system (Crego et al., 2016) are an exception to the constrained decoding approach as they replace entities with special tags that remain unchanged during translation and are replaced in a postprocessing step. However this method also lacks flexibility, as the model will always replace the placeholder with the same phrase irrespective of grammatical context. We leave comparison to their approach to future work. 3067 has a better speed vs performance trade-off than our method in the space of constrained decoding algorithms for neural MT, which we believe makes it particularly suitable for production environments. 5 Aknowledgments The authors would like to thank Wael Hamza, Faisal Ladhak, Mona Diab and the anonymous reviewers for their advice and comments. References Rajen Chatterjee, Matteo Negri, Marco Turchi, Marcello Federico, Lucia Specia, and Fr´ed´eric Blain. 2017. Guiding neural machine translation decoding with external knowledge. In Proceedings of the Second Conference on Machine Translation, pages 157–168, Copenhagen, Denmark. Association for Computational Linguistics. Josep Maria Crego, Jungi Kim, Guillaume Klein, Anabel Rebollo, Kathy Yang, Jean Senellart, Egor Akhanov, Patrice Brunelle, Aurelien Coquard, Yongchao Deng, Satoshi Enoue, Chiyo Geiss, Joshua Johanson, Ardas Khalsa, Raoum Khiari, Byeongil Ko, Catherine Kobus, Jean Lorieux, Leidiana Martins, Dang-Chuan Nguyen, Alexandra Priori, Thomas Riccardi, Natalia Segal, Christophe Servan, Cyril Tiquet, Bo Wang, Jin Yang, Dakun Zhang, Jing Zhou, and Peter Zoldan. 2016. Systran’s pure neural machine translation systems. CoRR, abs/1610.05540. M. Amin Farajian, Nicola Bertoldi, Matteo Negri, Marco Turchi, and Marcello Federico. 2018. Evaluation of Terminology Translation in Instance-Based Neural MT Adaptation. In Proceedings of the 21st Annual Conference of the European Association for Machine Translation, pages 149–158, Alicante, Spain. European Association for Machine Translation. Jiatao Gu, Yong Wang, Kyunghyun Cho, and Victor O. K. Li. 2017. Search engine guided nonparametric neural machine translation. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, pages 5133–5140, New Orleans, Louisiana, USA. Association for the Advancement of Artificial Intelligence. Eva Hasler, Adri`a Gispert, Gonzalo Iglesias, and Bill Byrne. 2018. Neural machine translation decoding with terminology constraints. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 506–512. Association for Computational Linguistics. Felix Hieber, Tobias Domhan, Michael Denkowski, David Vilar, Artem Sokolov, Ann Clifton, and Matt Post. 2018. The sockeye neural machine translation toolkit at AMTA 2018. In Proceedings of the 13th Conference of the Association for Machine Translation in the Americas (Volume 1: Research Papers), pages 200–207. Association for Machine Translation in the Americas. Chris Hokamp and Qun Liu. 2017. Lexically constrained decoding for sequence generation using grid beam search. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 August 4, Volume 1: Long Papers, pages 1535–1546. 
Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions, pages 177–180. Association for Computational Linguistics. Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. CoRR, abs/1508.04025. Ngoc-Quan Pham, Jan Niehues, and Alexander H. Waibel. 2018. Towards one-shot learning for rareword translation with external experts. In Proceedings of the 2nd Workshop on Neural Machine Translation and Generation, NMT@ACL 2018, Melbourne, Australia, July 20, 2018, pages 100–109. Matt Post and David Vilar. 2018. Fast lexically constrained decoding with dynamic beam allocation for neural machine translation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers), pages 1314–1324. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715– 1725. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 6000–6010. 3068 NMT Sockeye train parameters encoder-config: act_type: relu attention_heads: 8 conv_config: null dropout_act: 0.1 dropout_attention: 0.1 dropout_prepost: 0.1 dtype: float32 feed_forward_num_hidden: 2048 lhuc: false max_seq_len_source: 101 max_seq_len_target: 101 model_size: 512 num_layers: 2 positional_embedding_type: fixed postprocess_sequence: dr preprocess_sequence: n use_lhuc: false decoder config: act_type: relu attention_heads: 8 conv_config: null dropout_act: 0.1 dropout_attention: 0.1 dropout_prepost: 0.1 dtype: float32 feed_forward_num_hidden: 2048 max_seq_len_source: 101 max_seq_len_target: 101 model_size: 512 num_layers: 2 positional_embedding_type: fixed postprocess_sequence: dr preprocess_sequence: n config_loss: !LossConfig label_smoothing: 0.1 name: cross-entropy normalization_type: valid vocab_size: 32302 config_embed_source: ! EmbeddingConfig dropout: 0.0 dtype: float32 factor_configs: null num_embed: 512 num_factors: 1 vocab_size: 32302 config_embed_target: ! EmbeddingConfig dropout: 0.0 dtype: float32 factor_configs: null num_embed: 512 num_factors: 1 vocab_size: 32302
2019
294
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3069–3075 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3069 Leveraging Local and Global Patterns for Self-Attention Networks Mingzhou Xu† Derek F. Wong†∗Baosong Yang† Yue Zhang‡ Lidia S. Chao† †NLP2CT Lab / Department of Computer and Information Science, University of Macau ‡School of Engineering, Westlake University nlp2ct.{mzxu,baosong}@gmail.com, {derekfw,lidiasc}@um.edu.mo, [email protected] Abstract Self-attention networks have received increasing research attention. By default, the hidden states of each word are hierarchically calculated by attending to all words in the sentence, which assembles global information. However, several studies pointed out that taking all signals into account may lead to overlooking neighboring information (e.g. phrase pattern). To address this argument, we propose a hybrid attention mechanism to dynamically leverage both of the local and global information. Specifically, our approach uses a gating scalar for integrating both sources of the information, which is also convenient for quantifying their contributions. Experiments on various neural machine translation tasks demonstrate the effectiveness of the proposed method. The extensive analyses verify that the two types of contexts are complementary to each other, and our method gives highly effective improvements in their integration. 1 Introduction Self-attention networks (SANs) (Parikh et al., 2016; Lin et al., 2017) have shown promising results for a range of NLP tasks, including machine translation (Vaswani et al., 2017), contextualized word embedding learning (Devlin et al., 2019), dependency parsing (Kitaev and Klein, 2018) and semantic role labeling (Tan et al., 2018). They learn hidden representations of a sequence by letting each word attend to all words in the sentence regardless of their distances. Such a fully connected structure endows SANs with the appealing strength of collecting the global information (Yu et al., 2018; Shen et al., 2018; Chen et al., 2018; Zhang et al., 2017a; Yang et al., 2019a). However, some recent researches observe that a fully connected SANs may overlook the important ∗Corresponding author neighboring information (Luong et al., 2015; Sperber et al., 2018; Yang et al., 2019a). They find that SANs can be empirically enhanced by restricting the attention scope to a local area. One interesting question arises: how the local and global patterns quantitatively affect the SANs. To this end, we make empirical investigations with a hybrid attention mechanism, which integrates a local and a global attentive representation via a gating scalar. Empirical results on English-to-German and Japanese-to-English tasks demonstrate the effectiveness of using both the local and global information, which are shown complementary with each other. Our conceptually simple model consistently improves the performance over existing methods with fewer parameters. The probing tasks demonstrate that the local information is beneficial to the extraction of syntactic features, integrating with the global information further improves the performance on semantic probing tasks. The quantification analysis of gating scalar also indicates that different types of words have different requirements for the local and global information. 2 Related Works Previous work has shown that modeling locality benefits SANs for certain tasks. Luong et al. 
(2015) proposed a Gaussian-based local attention with a predictable position; Sperber et al. (2018) instead applied a local method with a variable window size to an acoustic task; Yang et al. (2018) investigated the effect of a dynamic local Gaussian bias by combining these two approaches for the translation task. Different from these methods, which use a learnable local scope, Yang et al. (2019b) and Wu et al. (2019) restricted the attention area to a fixed size by borrowing the concept of convolution into SANs. Although both these methods yield considerable improvements, they to some extent discard long-distance dependencies and global information.

On the contrary, other researchers observed that global feature fusion is one of the salient advantages of SANs. Shen et al. (2018) and Yu et al. (2018) successfully employed SANs to capture global context for their downstream NLP tasks. Recent works also suggested that such contextual information can improve word sense disambiguation (Zhang et al., 2017a), dependency parsing (Choi et al., 2017) and semantic modeling (Yang et al., 2019a). To explore their respective contributions, our work integrates both the local and global information under a unified framework.

3 Hybrid Attention Mechanism

In order to quantify the contribution of the local and global patterns, we propose a hybrid attention mechanism. The model first generates the local and global representations (Section 3.1), which are then dynamically integrated into the final output using a gating scalar (Section 3.2).

3.1 Patterns in Attention

Our approach generates the local and global patterns from the same source. As illustrated in Figure 1, for a given input sentence X = {x_1, ..., x_n}, the self-attention model first linearly projects its embedding H ∈ R^{n×d} into queries Q ∈ R^{n×d}, keys K ∈ R^{n×d} and values V ∈ R^{n×d}. The i-th attention energy ξ_i is generated with a dot-product attention algorithm (Luong et al., 2015):

  ξ_i = Q_i K^T / √d ∈ R^n    (1)

Then, the energy is used to produce the local and global attention distributions.

Global Pattern: One strength of SANs is capturing global knowledge by explicitly attending to all the signals. Accordingly, we directly use the original attention distribution as the global pattern of our approach. The global representation corresponding to the i-th element is calculated as:

  Att(ξ_i, V) = softmax(ξ_i) V ∈ R^d    (2)

[Figure 1: Illustration of the hybrid attention mechanism on the example sentence "We have an indulgence from the Pope". The global pattern attends to all signals in the given sentence while the local pattern merely focuses on the neighboring information surrounding the current word "indulgence" (Q_i).]

Local Pattern: The local attention enhances the neighboring signals by restricting the attention scope to a local part surrounding the current element. Following Yang et al. (2019b), we employ a hard bias to revise the attention energy for simplification:

  B(ξ_i)_j = { ξ_{i,j},  if i − m ≤ j ≤ i + m;  −∞, otherwise }    (3)

where ξ_{i,j} denotes the energy between the i-th and j-th elements, and m is the number of adjacent signals considered on each side in the local attention.

3.2 Hybrid Attention Aggregation

To leverage the local and global information from the two patterns, we apply a gating scalar to dynamically integrate them into the final representation, which can be formally expressed as:

  Ĥ_i = (1 − g_i) · Att(ξ_i, V_i) + g_i · Att(B(ξ_i), V_i)    (4)

The gating scalar g_i is conditioned on H_i, namely:

  g_i = σ(W H_i) ∈ (0, 1)    (5)

where σ(·)
denotes the logistic sigmoid function. As seen, gating scalar offers the model a possibility to explicitly quantify the contribution of the local and global representations. 4 Experiments We evaluate the effectiveness of the proposed approach on widely used WMT 14 English-toGerman (En-De) and WAT17 Japanese-to-English (Ja-En) translation tasks. For the WAT17 benchmark, we follow (Morishita et al., 2017) to use the 3071 first two sections of WAT17 dataset as the training data, which contains 2M sentences. The Japanese sentences are segmented by the word segmentation toolkit KeTea (Neubig et al., 2011). To alleviate the problem of Out-of-Vocabulary, all the data are segmented into subword units using bytepair encoding (Sennrich et al., 2016) with 32K merge operations. We incorporate the proposed model 1 into the widely used SAN-based framework – TRANSFORMER (Vaswani et al., 2017) and following their network configuration. We refer readers to Appendix A.1 for the details of our data and experimental settings. Prior studies reveal that modeling locality in lower layers can achieve better performance (Shen et al., 2018; Yu et al., 2018; Yang et al., 2018). Therefore, we merely apply the locality model at the lowest two layers of the encoder. According to our empirical results (Section 5.2), we set the window size to 3 (i.e. m = 1). The 4-gram case-sensitive NIST BLEU score (Papineni et al., 2002) is used as the evaluation metric. 4.1 Results In this section, we give the ablation study of the proposed model and compare several existing works upon the same architecture. Effectiveness of Hybrid Attention Mechanism To make the evaluation convincing, we reproduced the reported results in Vaswani et al. (2017) on the same data as the baseline. We first investigate the effect of the local pattern without the global information. As shown in Table 1, restricting the attention scope to a local part is able to improve the performance of translation task, showing the effectiveness of localness modeling. By integrating with the global information, the hybrid models progressively improves the translation quality, confirming that the local and global information are complementary to each other. Specifically, we investigate two combination methods: one uses gating scalar, the other simply concatenates the two sources of information. Obviously, dynamically combining two types of representations using gating scalar outperforms its fixed counterpart (concatenation). It is worth noting that the additional projection layer used in the concatenation method brings additional parameters over the method which using the gating scalar. 1Our codes are released at: https://github.com/ scewiner/Leveraging Model Param. BLEU TRANSFORMER 88.0M 27.67 + NEIGHBOR +0.4M 27.90 + LOCAL H +0.4M 28.03 + LOCAL S +0.8M 28.11 + LOCAL PATTERN +0.0M 28.13 + HYBRID (Concate) +0.3M 28.15 + HYBRID (Gate) +0.0M 28.31 Table 1: Results of the re-implemented approaches and our method on En-De translation task. NEIGHBOR (Sperber et al., 2018) and LOCAL H (Luong et al., 2015) apply Gaussian biases to regularize the conventional attention distribution with a learnable window size and a predicable central position, respectively. LOCAL S (Yang et al., 2018) is the combination of these two approaches. “Param.” denotes the model size. Model En-De Ja-En TRANSFORMER 27.67 28.10 + LOCAL PATTERN 28.13 28.23 + HYBRID (Gate) 28.31⇑ 28.66⇑ Table 2: Experimental results on WMT17 En⇒De and WAT17 Ja⇒En test sets. 
“⇑”: significant over the vanilla self-attention counterpart (p < 0.05), tested by bootstrap resampling (Koehn, 2004). Comparison to Existing Approaches We reimplement and compare several existing methods (Sperber et al., 2018; Luong et al., 2015; Yang et al., 2018, 2019b) upon TRANSFORMER. Table 1 reports the results on the En-De test set. Clearly, all the models improve translation quality, reconfirming the necessity of modeling locality for SANs. By leveraging the local and global properties, our models outperform all the related works with fewer additional parameters. Performance across Languages We further conduct experiments on WAT17 Ja-En task, which is a distant language pair (Isozaki et al., 2010). As concluded in Table 2, the proposed hybrid attention mechanism consistently improves translation performance over strong TRANSFORMER baselines across language pairs, which demonstrates the universality of the proposed approach. 5 Analysis We further investigate how the local and global patterns matter SANs. In this section, we try to answer two questions: 1) which linguistic properties are exactly improved by the proposed method; 3072 Model Surf. Sync. Semc. TRANSFORMER 76.75 64.67 74.88 + LOCAL PATTERN 77.15 66.00 74.74 + HYBRID (Gate) 76.25 65.60 75.14 Table 3: Classification accuracy on 10 probing tasks of evaluating the linguistic properties. We category 10 probing tasks into three groups (“Surf.”: surface, “Sync.”: syntax and “Semc.”: semantics) following the setting in Conneau et al. (2018). For simplistic, we merely reported the average score on each group. and 2) how different representations learn the locality and globality. 5.1 Linguistic Properties Although the proposed model improves the translation performance dramatically, we still lack of understanding on which linguistic perspectives are exactly improved by the two sources of information. To this end, we follow Conneau et al. (2018) and Li et al. (2019) to conduct 10 classification tasks to study what linguistic properties are enhanced by our model. Experiment Setting These tasks are divided into three categories (Conneau et al., 2018): tasks in “Surf.” focus on the surface properties learned in the sentence embedding; “Sync.” are the tasks which designed to evaluate the capabilities of the encoder on capturing the syntactic information; and “Semc.” tasks assess the ability of a model to understanding the denotation of a sentence. For the model setting, we replace the decoder of our translation model to a MLP classifier and keep the encoder with the configuration shown in Section 4. The mean of the last encoding layer is passed to the classifier as the sentence representation. We train and examine all the model of each task on the dataset provided by Conneau et al. (2018), which contains 100k sentences for training, 10k sentences for validating and testing, respectively. To quantify the linguistic properties of the pre-trained encoders, the parameters of the encoders are fixed, while merely update those in the output layer. We set the hyper parameters of these tasks following the configuration of Conneau et al. (2018). The mini-batch size is 1k samples. The training of each model early-stops with the accuracy on the validation set. More details of the evaluation setting and accuracy in finergrained level can be found in Appendix B. Figure 2: The BLEU scores of the model with different window sizes and their associated weights for the local context. 
The axis of the histogram (BLEU) is shown in the left, and the right is the axis of the curve (weight). Obviously, the window size being 3 results in the best performance on validation set and the contribution of the local context increases along with the window size. Results of Probing Tasks As reported in Table 3, our methods outperform baseline model on both ‘‘Sync.” and “Semc.” tasks. Specifically, the local information is obviously more conducive to the “Sync.” tasks, which indicates that enhancing the local information in the lower layer could improve the ability to learn the syntactic properties (FitzGerald et al., 2015). Nevertheless, further integrating with the global information benefits to the capturing of the semantic information (Yang et al., 2019a). Moreover, the hybrid model underperforms baseline model on “Surf.’ tasks, the reason is that a model tends to forget these superficial features for capturing deeper linguistic properties (Conneau et al., 2018; Hao et al., 2019). 5.2 Analysis on Different Representations We further investigate how the local and global patterns harmonically work with different representations via reporting the average weight output by the gating scalar (Equation 5). Investigation of Window size Figure 2 depicts the results of our investigations with the different window sizes on the En-De validation set. In order to measure the reliability of the evaluation, we assess each setting via averaging the best 5 models in different training steps. As seen, the model with the window size of 3 (i.e m = 1) gets a slight improvement over the others. This is inconsistent with the previous findings (Luong et al., 2015; Yang et al., 2019b) which show that the window size being 11 leads to the best performance. One possible reason is that their models will discard the global information when assigns a small local scope. On the contrary, our hybrid model not 3073 Figure 3: Visualization of the importance of the local information on different layers. The importance is assessed by averaging the scalar factors in Equation 5 over the validation set. only utilizes the local context but also exploits the global information. Accordingly, the local pattern can attend to a smaller scope without the loss of global context. The hypothesis can be confirmed by the curve regarding to weights of the local pattern. As seen, the requirement of the local information increases with the window size. Gating Scalar across Layers As visualized in Figure 3, the requirements of the local information are reduced with the stacking of layers. This is consistent with the prior findings that the lower layers tend to learn more word- and phrase-level properties than the higher layers, while the top layers of SANs seek more global information (Peters et al., 2018; Yang et al., 2018; Devlin et al., 2019). Moreover, the local information is less than the global information even in the first layer, verifying our hypothesis that both the local and global patterns are necessary for SANs. Gating Scalar across POS We further explore how different types of words learn the local information. In response to this problem, we categorize different words in validation set using the Universal Part-of-Speech tagset.2 Figure 4 shows the averaged factors learned for different types of words at the first layer. As seen, contrary to the content words (e.g, “NOUN”,“VERB”,“ADJ”), the function words (e.g, “CONJ” and “PRON”), which have little substantive meaning, seek to more global information in the source sentence. 
However, we also find that other function words (e.g, “ADP”, “NUM”,“SYM”) pay more attention on neighboring signals. We attribute this to the fact 2Including: “SYM”-symbols, “DET”determiner, “CONJ”-conjuntion, “PRT”-partical, “PRON”-pronoun, “ADP”-adposition, “NOUN”-noun, “VERB”-verb, “ADV”adverb, “NUM”-number, “ADJ”-adjective, and “X”-others. Avg. Weight 0.48 0.49 0.50 0.51 NOUN VERB ADJ ADV CONJ PRON DET PRT ADP NUM SYMX Figure 4: The weights of the local information corresponding to different POS. Obviously, different types of representations need different requirements of the local and global information. that these function words need more local context to determine their syntactic and semantic roles in the sentence. Both these results show that different words indeed have distinct requirements of the local and global information. Therefore, modeling locality and globality in a flexible fashion is necessary for SANs on sentence modeling. 6 Conclusion In this study, we propose to integrate the local and global information for enhancing the performance of SANs. Experimental results on various machine translation tasks demonstrate the effectiveness of the proposed model. We further empirically compare the two kinds of contextual information for different types of representations and probing tasks. The extensive analyses verify that: 1) fully leveraging both of the local and global information is beneficial to generate a meaningful representation; and 2) different types of representations indeed have distinct requirements with respect to the local and global information. The proposed method gives highly effective improvements in their integration. Acknowledgements This work is supported in part by the National Natural Science Foundation of China (Grant No. 61672555), the Joint Project of Macao Science and Technology Development Fund and National Natural Science Foundation of China (Grant No. 045/2017/AFJ) and the Multi-Year Research Grant from the University of Macau (Grant No. MYRG2017-00087-FST). Yue Zhang is supported by the startup grant at Westlake University. We would like to thank the anonymous reviewers for their insightful comments. 3074 References Mia Xu Chen, Orhan Firat, Ankur Bapna, Melvin Johnson, Wolfgang Macherey, George Foster, Llion Jones, Niki Parmar, Mike Schuster, Zhifeng Chen, et al. 2018. The Best of Both Worlds: Combining Recent Advances in Neural Machine Translation. In ACL. Heeyoul Choi, Kyunghyun Cho, and Yoshua Bengio. 2017. Context-Dependent Word Representation for Neural Machine Translation. COMPUT SPEECH LANG. Alexis Conneau, German Kruszewski, Guillaume Lample, Lo¨ıc Barrault, and Marco Baroni. 2018. What You Can Cram into A Single $&!#∗Vector: Probing Sentence Embeddings for Linguistic Properties. In ACL. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. NAACL. Nicholas FitzGerald, Oscar T¨ackstr¨om, Kuzman Ganchev, and Dipanjan Das. 2015. Semantic Role Labeling with Neural Network Factors. In EMNLP. Jie Hao, Xing Wang, Baosong Yang, Longyue Wang, Jinfeng Zhang, and Zhaopeng Tu. 2019. Modeling Recurrence for Transformer. In NAACL. Hideki Isozaki, Tsutomu Hirao, Kevin Duh, Katsuhito Sudoh, and Hajime Tsukada. 2010. Automatic Evaluation of Translation Quality for Distant Language Pairs. In EMNLP. Diederik P Kingma and Jimmy Ba. 2015. Adam: A Method for Stochastic Optimization. ICLR. Nikita Kitaev and Dan Klein. 2018. 
Constituency Parsing with A Self-Attentive Encoder. In ACL. Philipp Koehn. 2004. Statistical Significance Tests for Machine Translation Evaluation. In EMNLP. Jian Li, Baosong Yang, Zi-Yi Dou, Xing Wang, Michael R. Lyu, and Zhaopeng Tu. 2019. Information Aggregation for Multi-Head Attention with Routing-by-Agreement. In NAACL. Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. 2017. A Structured Self-Aattentive Sentence Embedding. In ICLR. Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective Approaches to Attentionbased Neural Machine Translation. In EMNLP. Makoto Morishita, Jun Suzuki, and Masaaki Nagata. 2017. NTT Neural Machine Translation Systems at WAT 2017. In WAT. Graham Neubig, Yosuke Nakata, and Shinsuke Mori. 2011. Pointwise Prediction for Robust, Adaptable Japanese Morphological Analysis. In ACL. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: A Method for Automatic Evaluation of Machine Translation. In ACL. Ankur Parikh, Oscar T¨ackstr¨om, Dipanjan Das, and Jakob Uszkoreit. 2016. A Decomposable Attention Model for Natural Language Inference. In EMNLP. Matthew Peters, Mark Neumann, Luke Zettlemoyer, and Wen-tau Yih. 2018. Dissecting Contextual Word Embeddings: Architecture and Representation. In EMNLP. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural Machine Translation of Rare Words with Subword Units. In ACL. Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, and Chengqi Zhang. 2018. Bi-Directional Block SelfAttention for Fast and Memory-Efficient Sequence Modeling. In ICLR. Matthias Sperber, Jan Niehues, Graham Neubig, Sebastian St¨uker, and Alex Waibel. 2018. SelfAttentional Acoustic Models. In Interspeech. Zhixing Tan, Mingxuan Wang, Jun Xie, Yidong Chen, and Xiaodong Shi. 2018. Deep Semantic Role Labeling with Self-attention. In AAAI. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention Is All You Need. In NISP. Felix Wu, Angela Fan, Alexei Baevski, Yann N Dauphin, and Michael Auli. 2019. Pay Less Attention with Lightweight and Dynamic Convolutions. In ICLR. Baosong Yang, Jian Li, Derek Wong, Lidia S Chao, Xing Wang, and Zhaopeng Tu. 2019a. ContextAware Self-Attention Networks. In AAAI. Baosong Yang, Zhaopeng Tu, Derek F Wong, Fandong Meng, Lidia S Chao, and Tong Zhang. 2018. Modeling Localness for Self-Attention Networks. In EMNLP. Baosong Yang, Longyue Wang, Derek F. Wong, Lidia S. Chao, and Zhaopeng Tu. 2019b. Convolutional Self-Attention Networks. In NAACL. Adams Wei Yu, David Dohan, Minh-Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi, and Quoc V Le. 2018. QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension. In ICLR. Biao Zhang, Deyi Xiong, Jinsong Su, and Hong Duan. 2017a. A Context-Aware Recurrent Encoder for Neural Machine Translation. TASLP. Jiacheng Zhang, Yanzhuo Ding, Shiqi Shen, Yong Cheng, Maosong Sun, Huanbo Luan, and Yang Liu. 2017b. THUMT: An Open Source Toolkit for Neural Machine Translation. arXiv:1706.06415. 3075 Model Surf. Sync. Semc. SeLn WC TDep ToCo BShif Tense SubN ObjN SoMo CoIn TRANSFORMER 90.1 63.4 43.9 78.5 71.6 88.7 85.8 85.0 51.7 63.2 LOCAL PATTERN 91.3 63.0 44.8 78.8 74.4 88.8 86.1 84.7 51.8 62.3 HYBRID 89.9 62.6 44.9 78.4 73.5 88.5 87.0 85.4 52.1 62.8 Table 4: The classification accuracy of 10 probing tasks. We pass the representations from the last encoding layer to the classifier. 
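As a concrete reference for how numbers like those in Table 4 are obtained, the following is a minimal sketch of the probing protocol described in Section 5.1: the pre-trained encoder is frozen, each sentence is represented by the mean of the last encoder layer, and only a classifier on top is trained. The class name, the hidden size of the classifier, and the variable names are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch of the probing setup: frozen encoder, mean-pooled sentence
# representation, trainable MLP classifier on top. Names/sizes are illustrative.
import torch
import torch.nn as nn

class ProbingClassifier(nn.Module):
    def __init__(self, d_model: int = 512, n_classes: int = 2, hidden: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(d_model, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_classes))

    def forward(self, enc_states: torch.Tensor) -> torch.Tensor:
        # enc_states: (batch, seq_len, d_model) from the frozen encoder
        sent_repr = enc_states.mean(dim=1)   # mean of the last encoding layer
        return self.mlp(sent_repr)

# during probing, only the classifier parameters receive gradients, e.g.:
# for p in encoder.parameters():
#     p.requires_grad = False
```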
A Machine Translation A.1 Experimental Setting We evaluate our method on the advanced TRANSFORMER architecture (Vaswani et al., 2017) that was reproduced by the toolkit THUMT (Zhang et al., 2017b). We use the same configuration as Vaswani et al. (2017), in which the hidden size is 512, the number of encoder and decoder layer is 6, the number of head is 8 and the label smoothing is 0.1. Different to Vaswani et al. (2017), we set the L2 regularization to λ = 10−7. The training of each model was early-stopped to maximize the BLEU score on the development set. The training set is shuffled after each epoch. We use Adam (Kingma and Ba, 2015) with β1 = 0.9, β2 = 0.98 and ϵ = 10−9. The learning rate linearly warms up over the first 4,000 steps, and decreases thereafter proportionally to the inverse square root of the step number. We use a dropout rate of 0.1 on all layers. All the models are trained with each batch containing approximately 25000 source tokens and 25000 target tokens. B Probing Tasks We conduct 10 classification tasks (Conneau et al., 2018) to study what linguistic properties are enhanced by the proposed model. B.1 Tasks Description As seen in Table 4, “SeLn” is to predict the length of a given sentence. “WC” tests whether it is possible to recover information about the word from the sentence embedding. “TDep” checks whether an encoder infers the hierarchical structure of input sentences. In “ToCo” task, sentences should be classified in terms of the sequence of top constituents. “BShif” tests whether two consecutive tokens within the sentence have been inverted. “Tense” is a task for evaluating the tense of the main-clause verb. “SubN” focuses on finding out the number of the subject of the main clause. “ObjN” tests the number of the direct object of the main clause. In “SoMo”, a noun or verb of the sentence are replaced with another noun or verb and the classifier should tell whether a sentence has been modified or not. “CoIn” divides a sentence into two coordinate clauses. Half of the sentences are inverted the order of the clauses and the task is to tell whether a sentence is intact or modified. B.2 Results in Detail We investigate the performance of the proposed model on probing tasks and list the result in Table 4. As seen, TRANSFORMER which seeks more global information outperforms other models in both the “WC” and “CoIn” tasks. On the contrary, modeling locality is beneficial to “SeLn”, “ToCo”, “BShif” and “Tense” tasks. By combining these two sources of information, the model with hybrid attention aggregation gets better performance in 3 of the five “Semc.” tasks, which demonstrates that leveraging both the local and global information is able to raise the ability of SANs to learn semantic properties. Moreover, HYBRID underperforms the others in “Surf.” tasks, which means that this model is more suitable for capturing deeper linguistic properties.
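For readers who want to connect these results back to the model itself, here is a minimal single-head sketch of the hybrid attention aggregation of Section 3 (Equations 1–5). The tensor shapes, the gate projection W_g, and the function name are illustrative assumptions rather than the paper's released implementation.

```python
# Minimal single-head sketch of the hybrid attention (Eqs. 1-5): a global
# softmax over all positions, a hard-masked local softmax over a +/- m window,
# and a per-position gating scalar that mixes the two representations.
import math
import torch
import torch.nn.functional as F

def hybrid_attention(Q, K, V, H, W_g, m=1):
    # Q, K, V, H: (batch, n, d); W_g: (d, 1) gate projection (assumed shape)
    d = Q.size(-1)
    energy = Q @ K.transpose(-2, -1) / math.sqrt(d)                 # Eq. (1)

    global_repr = F.softmax(energy, dim=-1) @ V                     # Eq. (2): global pattern

    n = Q.size(1)
    idx = torch.arange(n, device=Q.device)
    local_mask = (idx[None, :] - idx[:, None]).abs() <= m           # window of +/- m positions
    local_energy = energy.masked_fill(~local_mask, float("-inf"))   # Eq. (3): hard bias
    local_repr = F.softmax(local_energy, dim=-1) @ V

    gate = torch.sigmoid(H @ W_g)                                   # Eq. (5): gating scalar in (0, 1)
    return (1 - gate) * global_repr + gate * local_repr             # Eq. (4): hybrid aggregation
```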
2019
295
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3076–3082 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3076 Sentence-Level Agreement for Neural Machine Translation Mingming Yang1∗, Rui Wang2, Kehai Chen2, Masao Utiyama2, Eiichiro Sumita2, Min Zhang1,3, and Tiejun Zhao1 1School of Computer Science and Technology, Harbin Institute of Technology, Harbin, China 2National Institute of Information and Communications Technology (NICT), Kyoto, Japan 3School of Computer Science and Technology, Soochow University, Suzhou, China [email protected], [email protected], [email protected] {wangrui, khchen, mutiyama, eiichiro.sumita}@nict.go.jp Abstract The training objective of neural machine translation (NMT) is to minimize the loss between the words in the translated sentences and those in the references. In NMT, there is a natural correspondence between the source sentence and the target sentence. However, this relationship has only been represented using the entire neural network and the training objective is computed in wordlevel. In this paper, we propose a sentencelevel agreement module to directly minimize the difference between the representation of source and target sentence. The proposed agreement module can be integrated into NMT as an additional training objective function and can also be used to enhance the representation of the source sentences. Empirical results on the NIST Chinese-to-English and WMT English-to-German tasks show the proposed agreement module can significantly improve the NMT performance. 1 Introduction Neural network based methods have been applied to several natural language processing tasks (Zhang et al., 2016; Li et al., 2018; Chen et al., 2018; Li et al., 2019; He et al., 2018). In neural machine translation (NMT), unlike conventional phrase-based statistical machine translation, an attention mechanism is adopted to help align output with input words (Bahdanau et al., 2015). It is based on the estimation of a probability distribution over all input words for each target word. However, source and target words are in different representation space, and they still have to go through a long information processing procedure that may lead to the source words are incorrectly translated into the target words. ∗Mingming Yang was an internship research fellow at NICT when conducting this work. Based on this hypothesis, Kuang et al. (2018) proposed a direct bridging model, which directly connects source and target word embeddings seeking to minimize errors in the translation. Tu et al. (2017) incorporated a reconstructor module into NMT, which reconstructs the input source sentence from the hidden layer of the output target sentence to enhance source representation. However, in previous studies, the training objective function was usually based on word-level and lacked explicit sentencelevel relationships (Zhang and Zhao, 2019). Although Transformer model (Vaswani et al., 2017) has archived state-of-the-art performance of NMT, more attention is paid to the words-level relationship via self-attention networks. Sentence-level agreement method has been applied to many natural language processing tasks. Aliguliyev (2009) used sentence similarity measure technique for automatic text summarization. Liang et al. (2010) have shown that the sentence similarity algorithm based on VSM is beneficial to address the FAQ problem. Su et al. 
(2016) presented a sentence similarity method for a spoken dialogue system to improve accuracy. Rei and Cummins (2016) proposed sentence similarity measures to improve the estimation of topical relevance. Wang et al. (2017b; 2018) used sentence similarity to select sentences with similar domains. The above methods only considered monolingual sentence-level agreement.

In human translation, a translator's primary concern is to translate a sentence according to its entire meaning rather than its word-by-word meaning. Therefore, early machine translation approaches such as example-based machine translation (Nagao, 1984; Nio et al., 2013) used similarity matching between the sentences to be translated and the sentences in bilingual corpora to extract translations. Inspired by these studies, we establish a sentence-level agreement channel directly in the deep neural network to shorten the distance between the source and target sentence-level embeddings. Specifically, our model can be effectively applied to NMT in two aspects:

• Sentence-Level Agreement as Training Objective: we use the sentence-level agreement as a part of the training objective function. In this way, we consider translation not only at the word level but also at the sentence level.

• Enhance Source Representation: since our model brings the sentence-level vector distributions of the source side and the target side closer, we can combine their sentence-level embeddings to enhance the source representation.

Experimental results on Chinese-to-English and English-to-German translation tasks demonstrate that our model is able to effectively improve the performance of NMT.

2 Neural Machine Translation

In this section, we take the Transformer architecture proposed by Vaswani et al. (2017), which is the state-of-the-art translation architecture, as the baseline system. As an encoder-to-decoder architecture, X = {x_1, x_2, ..., x_J} represents a source sentence and Y = {y_1, y_2, ..., y_I} represents a target sentence. The encoder-to-decoder model learns to estimate the conditional probability from the source sentence to the target sentence word by word:

  P(y|x; θ) = ∏_{i=1}^{I} P(y_i | y_{<i}, x; θ),    (1)

where θ is a set of model parameters and y_{<i} denotes a partial translation. Unlike other NMT architectures, the Transformer relies on self-attention layers that can operate in parallel. A single self-attention layer has two sub-layers: a multi-head self-attention layer and a feed-forward network. The feed-forward network consists of two simple fully connected networks with a ReLU activation function in between:

  FFN(x) = max(0, x W_1 + b_1) W_2 + b_2,    (2)

where W_1 and W_2 are linear transformations and b_1 and b_2 are bias terms. We define H_enc as the sentence representation of X computed by the self-attention layers in the encoder, and H_dec as the sentence representation of the words Y computed by the embedding layers in the decoder. The parameters of the Transformer are trained to minimize the following objective function on a set of training examples {(X^n, Y^n)}_{n=1}^{N}:

  L_mle = − (1/N) ∑_{n=1}^{N} ∑_{i=1}^{I_y} log P(y_i^n | y_{<i}^n, H_enc, H_dec).    (3)

3 Agreement on Source and Target Sentence

Some studies (Luong et al., 2015; Tu et al., 2016; Chen et al., 2017a,b; Kuang et al., 2018) showed that improving word alignment is beneficial to machine translation. Their idea is based on word-level agreement and makes the embeddings of source words and corresponding target words similar. In this paper, we investigate the sentence-level relationship between the source and target sentences.
We propose a sentence-level agreement method which makes the sentence-level semantics of the source and target closer. The entire architecture of the proposed method is illustrated in Figure 1.

3.1 Sentence-Level Agreement

First, we need to obtain the sentence-level representations of the source and target. Previous studies showed that the Mean operation is an effective way to represent a sentence of sequential words (Mitchell and Lapata, 2010; Mikolov et al., 2013; Le and Mikolov, 2014), especially for NMT (Wang et al., 2017a). Motivated by this, we adopt the Mean to represent the source and target sentences as shown in Figure 1(a). Let H̃_enc denote the mean of H_enc and H̃_dec the mean of H_dec. We design a Sentence Agreement Loss L_mse to measure the distance between the source and target sentence-level vectors:

  L_mse = || H̃_enc − H̃_dec ||^2.    (4)

[Figure 1: (a) Architecture of the Sentence-Level Agreement Loss; (b) Architecture of the Enhanced Source Representation.]

Finally, our goal is to improve translation by shortening this sentence-level distance. Thus, the final objective of our model is composed of two parts:

  L = L_mle + L_mse.    (5)

3.2 Enhance Source Representation

Sentence-level agreement helps make the target-side sentence representation closer to the source. Intuitively, we can also use this mechanism to strengthen the source representation and thereby improve the translation. To this end, we propose the simple and efficient architecture shown in Figure 1(b). First, we map H_enc to the target-side vector EH_enc through a simple feed-forward network T_FFN as in Eq. (2):

  EH_enc = T_FFN(H_enc).    (6)

In particular, we use a Tanh activation function instead of ReLU in the feed-forward network. The value range of Tanh is −1 to 1, which allows some information to contribute negatively. Our Enhanced Sentence Agreement Loss L_Emse measures the distance between the source and target sentence-level vectors:

  L_Emse = || ẼH_enc − H̃_dec ||^2,    (7)

where ẼH_enc is the mean of EH_enc. Le and Mikolov (2014) use concatenation to combine sentence vectors and strengthen the capacity of the representation. We use the same method to combine H_enc and ẼH_enc:

  CH_enc = Concat(H_enc, ẼH_enc).    (8)

In this way, we can enhance the source representation with a sentence-level representation that is closer to the target side. The updated translation training objective is:

  L_Emle = − (1/N) ∑_{n=1}^{N} ∑_{i=1}^{I_y} log P(y_i^n | y_{<i}^n, CH_enc, H_dec).    (9)

Thus, the final objective is as follows:

  L_E = L_Emle + L_Emse.    (10)

4 Experiments

4.1 Dataset

For Chinese-English (ZH-EN) translation, our training data consists of 1.25M Chinese-English sentence pairs extracted from LDC corpora1.
The NIST02 testset is chosen as the development set, and the NIST03, 1The corpora include LDC2002E18, LDC2003E07,LDC2003E14, Hansards portion of LDC2004T07, LDC2004T08 and LDC2005T06. 3079 # Model NIST WMT 03 04 05 06 Avg 14 Existing NMT Systems 1 EDR (Tu et al., 2017) N/A N/A 33.73 34.15 N/A N/A 2 DB (Kuang et al., 2018) 38.02 40.83 N/A N/A N/A N/A Our NMT Systems 3 Transformer(Base) 45.57 46.40 46.11 44.92 45.75 27.28 4 +lossmse 46.71† 47.23† 47.12† 45.78† 46.71 28.11† 5 +lossmse + enhanced 46.94† 47.52† 47.43† 46.04† 46.98 28.38† 6 Transformer(Big) 46.73 47.36 47.15 46.82 47.01 28.36 7 +lossmse 47.43† 47.96 47.78 47.39 47.74 28.71 8 +lossmse + enhanced 47.68† 48.13† 47.96† 47.56† 47.83 28.92† Table 1: Translation results for Chinese-English and English-German translation task. “†”: indicates statistically better than Transformer(Base/Big) (ρ < 0.01). # Model BLEU Param Speed (tokens/s) WMT14 Train Decode 1 Transformer(Base) 27.28 93.3M 9,950 150 2 +lossmse 28.11 93.3M 9,850 150 3 +lossmse + enhanced 28.38 93.9M 9,780 146 4 Transformer(Big) 28.36 274.7M 4,340 95 5 +lossmse 28.71 274.7M 4,300 95 6 +lossmse + enhanced 28.92 276.8M 4,150 88 Table 2: The efficiency analysis on English-German task. “Param” denotes the trainable parameter size of each model (M=million) and Beam=10. NIST04, NIST05, NIST06 datasets are testsets. We use the case-insensitive 4-gram NIST BLEU score as our evaluation metric (Papineni et al., 2002). The training data of English-German (EN-DE) translation is from WMT14, which consists of 4.5M sentence pairs. We use byte-pair encoding (Sennrich et al., 2016b) to segment words. The newstest2013 was used as a development set and the newstest2014 as test sets that are evaluated by SacreBLEU (Post, 2018). To efficiently train NMT models, we train each model with sentences of length up to 50 words. In this way, about 90% and 89% of ZHEN and EN-DE parallel sentences are covered in the experiments. In addition, we use byte pair encoding (Sennrich et al., 2016a) with 32K merges to segment words into sub-word units for all languages to alleviate the out-of-vocabulary problem. We evaluate the proposed approaches on our re-implemented Transformer model (Vaswani et al., 2017). We test both the Base and Big models, which differ at the dimensionality of input and output (512 vs 1024), the number of attention head (8 vs 16) and the inner-layer size (2048 vs 4096). We set 6 layers for encoder and decoder. All the models were trained on a single NVIDIA P100 GPU, which is allocated a minibatch of 4096 tokens. About 200K minibathes are trained. 4.2 Performance Table 1 shows the performances measured in terms of BLEU score. On ZH-EN task, Transformer(Base) outperforms the existing systems EDR (Tu et al., 2017) and DB (Kuang et al., 2018) by 11.5 and 6.5 BLEU points. With respect to BLEU scores, all the proposed models (Row 4-5) consistently outperform Transformer(base) by 0.96 and 1.23 BLEU points. The big models (Row 7-8) also achieve similar improvement by 0.73 and 0.82 BLEU points on a larger parameters model. These findings suggest a sentence-level agreement between source-side and target-side is helpful for NMT. Further, we use it to enhance the source representation is an effective way to improve the translation. 
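To make the objective behind these results concrete, the following is a minimal sketch of the sentence-level agreement term of Equations (4)–(5), assuming h_enc and h_dec are the encoder and decoder states described in Section 2. The tensor names and the exact reduction are illustrative assumptions, not the authors' implementation.

```python
# Sketch of the sentence-level agreement objective (Eqs. 4-5), treated here as
# a summed squared difference between mean-pooled sentence vectors.
import torch

def sentence_agreement_loss(h_enc: torch.Tensor, h_dec: torch.Tensor) -> torch.Tensor:
    # h_enc: (batch, src_len, d), h_dec: (batch, tgt_len, d)
    h_enc_mean = h_enc.mean(dim=1)   # mean-pooled source sentence vector
    h_dec_mean = h_dec.mean(dim=1)   # mean-pooled target sentence vector
    return ((h_enc_mean - h_dec_mean) ** 2).sum(dim=-1).mean()

# combined objective of Eq. (5): standard MLE term plus the agreement term
# loss = mle_loss + sentence_agreement_loss(h_enc, h_dec)
```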
In 3080 Model NIST WMT BLEU sim(‰) BLEU sim(‰) Transformer(Base) 45.75 13.7 27.28 36.2 +lossmse 46.71 19.8 28.11 48.3 +lossmse + enhanced 46.98 26.9 28.38 57.6 Transformer(Big) 47.01 13.5 28.36 41.5 +lossmse 47.74 18.3 28.71 56.4 +lossmse + enhanced 47.83 23.2 28.92 68.2 Table 3: Source-to-target sentence-level similarity analysis on Chinese-English and English-German translation task. addition, the proposed methods gain similar improvements on EN-DE task. 4.3 Efficiency Analysis In Table 2, we analyze the efficiency of the proposed methods. lossmse (Row 2 and 5) gets better translation without any added parameters, only decrease approximately 1% train speed. It shows that sentence-level agreement is helpful for translation. Compared with Row 1 and 4, lossmse + enhanced (Row 3 and 6) increases little parameters about 0.6M and 2.1M, train and decode speed drop very little. However, it has greatly improved the translation performance. In particular, by comparing Row 3 and 4, we find that our proposed methods achieve a similar performance with the Transformer(Big) and gain a faster speed with fewer parameters. It indicates that enhancing source representation with a sentence-level representation is an effective method for improving translation performance. 4.4 Sentence-Level Similarity Analysis We further study how the proposed models influenced sentence-level similarity in translation. For this, we follow the method of Lapata and Barzilay (2005) to measure sentence similarity. First, each sentence is represented by the mean of the distributed vectors of its words. Second, the similarity between source and target sentences is determined by the cosine of their means: sim = cos( eHenc, eHdec). (11) As Table 3 shows, the sentence-level similarity of the proposed method is higher than the corresponding baselines. In addition, there is a correlation between NMT performance (BLEU) and the sentence-level similarity. This indicates that the proposed method can improve the sentence-level similarity between source and target sentences and the performance of NMT. 5 Conclusion In this work, we have presented a sentence-level agreement method for NMT. Our goal is to bring the sentence representation of the source-side and the target-side closer together. At the same time, we can utilize this information to enhance source representation. Our study suggests the source-totarget sentence-level relationship is very useful for translation. In future work, we intend to apply these methods to other natural language tasks. Acknowledgments The corresponding authors are Rui Wang and Min Zhang. This work was partially conducted under the program “Promotion of Global Communications Plan: Research, Development, and Social Demonstration of Multilingual Speech Translation Technology” of the Ministry of Internal Affairs and Communications (MIC), Japan. Rui Wang was partially supported by JSPS grant-in-aid for early-career scientists (19K20354): “Unsupervised Neural Machine Translation in Universal Scenarios” and NICT tenure-track researcher startup fund “Toward Intelligent Machine Translation”. Min Zhang is partially supported by the National Natural Science Foundation of China via No. 61525205. References Ramiz M. Aliguliyev. 2009. A new sentence similarity measure and sentence based extractive technique for automatic text summarization. Expert Syst. Appl., 36(4):7764–7772. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly 3081 learning to align and translate. 
In Proceedings of the 3rd International Conference on Learning Representations, San Diego, CA. Kehai Chen, Rui Wang, Masao Utiyama, Lemao Liu, Akihiro Tamura, Eiichiro Sumita, and Tiejun Zhao. 2017a. Neural machine translation with source dependency representation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2846– 2852, Copenhagen, Denmark. Association for Computational Linguistics. Kehai Chen, Rui Wang, Masao Utiyama, Eiichiro Sumita, and Tiejun Zhao. 2017b. Context-aware smoothing for neural machine translation. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 11–20, Taipei, Taiwan. Asian Federation of Natural Language Processing. Kehai Chen, Rui Wang, Masao Utiyama, Eiichiro Sumita, and Tiejun Zhao. 2018. Syntax-directed attention for neural machine translation. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 4792–4799, New Olreans, LA. Shexia He, Zuchao Li, Hai Zhao, and Hongxiao Bai. 2018. Syntax for semantic role labeling, to be, or not to be. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2061–2071, Melbourne, Australia. Shaohui Kuang, Junhui Li, Ant´onio Branco, Weihua Luo, and Deyi Xiong. 2018. Attention focusing for neural machine translation by bridging source and target embeddings. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1767–1776. Association for Computational Linguistics. Mirella Lapata and Regina Barzilay. 2005. Automatic evaluation of text coherence: models and representations. In International Joint Conference on Artificial Intelligence, pages 1085–1090. Quoc V. Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In ICML, volume 32 of JMLR Workshop and Conference Proceedings, pages 1188–1196. JMLR.org. Zuchao Li, Jiaxun Cai, Shexia He, and Hai Zhao. 2018. Seq2seq dependency parsing. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3203–3214, Santa Fe, New Mexico, USA. Zuchao Li, Shexia He, Hai Zhao, Yiqing Zhang, Zhuosheng Zhang, Xi Zhou, and Xiang Zhou. 2019. Dependency or span, end-to-end uniform semantic role labeling. CoRR, abs/1901.05280. Xu Liang, Dongjiao Wang, and Ming Huang. 2010. Improved sentence similarity algorithm based on vsm and its application in question answering system. In 2010 IEEE International Conference on Intelligent Computing and Intelligent Systems, volume 1, pages 368–371. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attentionbased neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412–1421, Lisbon, Portugal. Association for Computational Linguistics. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. CoRR, abs/1301.3781. Jeff Mitchell and Mirella Lapata. 2010. Composition in distributional models of semantics. Cognitive science, 34:1388–1429. Makoto Nagao. 1984. A framework of a mechanical translation between japanese and english by analogy principle. In Proc. Of the International NATO Symposium on Artificial and Human Intelligence, pages 173–180, New York, NY, USA. Elsevier North-Holland, Inc. Lasguido Nio, Sakriani Sakti, Graham Neubig, Tomoki Toda, and Satoshi Nakamura. 2013. 
Combination of example-based and smt-based approaches in a chatoriented dialog system. Proc. of ICE-ID. Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186– 191. Association for Computational Linguistics. Marek Rei and Ronan Cummins. 2016. Sentence similarity measures for fine-grained estimation of topical relevance in learner essays. In Proceedings of the 11th Workshop on Innovative Use of NLP for Building Educational Applications, pages 283– 288, San Diego, CA. Association for Computational Linguistics. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Edinburgh neural machine translation systems for wmt 16. In Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers, pages 371–376, Berlin, Germany. Association for Computational Linguistics. 3082 Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715–1725, Berlin, Germany. Association for Computational Linguistics. Bo-Hao Su, Ta-Wen Kuan, Shih-Pang Tseng, JhingFa Wang, and Po-Huai Su. 2016. Improved tfidf weight method based on sentence similarity for spoken dialogue system. 2016 International Conference on Orange Technologies (ICOT), pages 36–39. Zhaopeng Tu, Yang Liu, Lifeng Shang, Xiaohua Liu, and Hang Li. 2017. Neural machine translation with reconstruction. In Proceedings of the ThirtyFirst AAAI Conference on Artificial Intelligence, February 4-9, 2017, San Francisco, California, USA., pages 3097–3103. Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. 2016. Modeling coverage for neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 76–85, Berlin, Germany. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Proceedings of Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc. Rui Wang, Andrew Finch, Masao Utiyama, and Eiichiro Sumita. 2017a. Sentence embedding for neural machine translation domain adaptation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 560–566, Vancouver, Canada. Rui Wang, Masao Utiyama, Andrew Finch, Lemao Liu, Kehai Chen, and Eiichiro Sumita. 2018. Sentence selection and weighting for neural machine translation domain adaptation. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 26(10):1727–1741. Rui Wang, Masao Utiyama, Lemao Liu, Kehai Chen, and Eiichiro Sumita. 2017b. Instance weighting for neural machine translation domain adaptation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1482–1488, Copenhagen, Denmark. 
Association for Computational Linguistics. Huan Zhang and Hai Zhao. 2019. Minimum divergence vs. maximum margin: An empirical comparison on seq2seq models. In Proceedings of the Seventh International Conference on Learning Representations, New Orleans, USA. Zhisong Zhang, Hai Zhao, and Lianhui Qin. 2016. Probabilistic graph-based dependency parsing with convolutional neural network. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1382–1392, Berlin, Germany.
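As a small companion to the similarity analysis of Section 4.4 (Equation 11), here is a hedged sketch of the mean-pooled cosine measure used there; the tensor names are illustrative.

```python
# Sketch of the sentence-level similarity of Eq. (11): cosine between the
# mean-pooled encoder and decoder states. Shapes and names are illustrative.
import torch
import torch.nn.functional as F

def sentence_similarity(h_enc: torch.Tensor, h_dec: torch.Tensor) -> torch.Tensor:
    # h_enc: (batch, src_len, d), h_dec: (batch, tgt_len, d) -> (batch,) similarities
    return F.cosine_similarity(h_enc.mean(dim=1), h_dec.mean(dim=1), dim=-1)
```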
2019
296
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3083–3089 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3083 Multilingual Unsupervised NMT using Shared Encoder and Language-Specific Decoders Sukanta Sen, Kamal Kumar Gupta, Asif Ekbal, Pushpak Bhattacharyya Department of Computer Science and Engineering Indian Institute of Technology Patna Patna, India {sukanta.pcs15,kamal.pcs17,asif,pb}@iitp.ac.in Abstract In this paper, we propose a multilingual unsupervised NMT scheme which jointly trains multiple languages with a shared encoder and multiple decoders. Our approach is based on denoising autoencoding of each language and back-translating between English and multiple non-English languages. This results in a universal encoder which can encode any language participating in training into an interlingual representation, and language-specific decoders. Our experiments using only monolingual corpora show that multilingual unsupervised model performs better than the separately trained bilingual models achieving improvement of up to 1.48 BLEU points on WMT test sets. We also observe that even if we do not train the network for all possible translation directions, the network is still able to translate in a many-to-many fashion leveraging encoder’s ability to generate interlingual representation. 1 Introduction Neural machine translation (NMT) (Kalchbrenner and Blunsom, 2013; Cho et al., 2014; Sutskever et al., 2014; Bahdanau et al., 2015) has become a dominant paradigm for machine translation achieving state-of-the-art results on publicly available benchmark datasets. An effective NMT system requires supervision of a huge amount of high-quality parallel data which is not easily available for many language pairs. In absence of such huge amount of parallel data, NMT systems tend to perform poorly (Koehn and Knowles, 2017). However, NMT without using any parallel data such as bilingual translations, bilingual dictionary or comparable translations, has recently become reality and opened up exciting opportunities for future research (Lample et al., 2018; Artetxe et al., 2018; Yang et al., 2018). It completely eliminates the need of any kind of parallel data and depends heavily on cross-lingual embeddings and iterative back-translations (Sennrich et al., 2016) between the source and target language using monolingual corpora. On the architectural point of view, the approaches combine one encoder and one (Lample et al., 2018) or two (Artetxe et al., 2018) decoders. In supervised NMT settings, combining multiple languages to jointly train an NMT system has been found to be successful in improving the performance (Dong et al., 2015; Firat et al., 2016; Johnson et al., 2017). However, to the best of our knowledge, this is the very first attempt which aims at combining multiple languages in an unsupervised NMT training. To translate between many languages using bilingual version of unsupervised NMT, we require an encoder and one (Lample et al., 2018) or two (Artetxe et al., 2018) decoders for each pair of languages. However, we may not need separate decoders depending on the source language. We can train source-independent, target-specific decoders, wherein each decoder will take an intermediate representation of a source sentence obtained from the shared encoder to translate into their corresponding language. 
Also, to translate in manyto-many direction for n languages using bilingual unsupervised NMT (Artetxe et al., 2018), we may need n autoencodings and n ∗(n −1) backtranslations in each iteration during training. In this work, we propose to combine multiple languages in an unsupervised NMT training using a shared-encoder and language-specific decoders through one source to many targets and many targets to one source translations. Our proposed approach needs only 2 ∗(n −1) back-translations in each iteration during training. Specifically, we train an NMT system, using only monolingual corpora, for 6 translation directions using 4 languages (English, French, German and Spanish) to perform translation in 12 directions. We take En3084 glish as the anchor language and map three nonEnglish languages’ embeddings into the English embedding space. We train the network to denoise all the four languages and back-translate between English and non-English languages. We evaluate on newstest13 and newstest14 using BLEU (Papineni et al., 2002) score. We find that the multilingual model outperforms the bilingual models by up to 1.48 BLEU points. We also find that the network learns to translate between the nonEnglish (French, German and Spanish) language pairs as well even though it does not explicitly see these pairs during training. To translate between a non-English language pair, no modification to the network is required at inference time. We also evaluate the performance of the non-English language pairs and achieve a maximum BLEU score of 13.92. The key contributions of our current work are as follows: (i) we propose a strategy to train multilingual unsupervised NMT for one source to many targets and many targets to one source translations; (ii) we empirically show that jointly training multiple languages improves separately trained bilingual models; and (iii) we also show that without training the network for many-to-many translations, the network can translate between all the languages participating in the training. 2 Related Work Training multiple languages using a single network is a well known approach in NMT. All the previous works in this line were carried out by using parallel data only. Dong et al. (2015) introduced one-to-many translation using a single encoder for the source language and a decoder for each target language. Firat et al. (2016) proposed multi-way multilingual NMT using multiple encoders and decoders with a single shared attention mechanism. Johnson et al. (2017) came up with a simpler but effective approach that needed only a single encoder and a single decoder, in which all the parallel data were merged into a single corpus after appending some special tokens at the beginning of each sentence. Our multilingual unsupervised translation approach is inspired by Artetxe et al. (2018). We use single encoder which is shared by all languages and a decoder for each language. 3 Background In this section, we briefly describe the basic unsupervised NMT model as proposed in Artetxe et al. (2018). 
The architecture has one shared encoder and two language-specific decoders, and uses the following two strategies to train the NMT system in an unsupervised manner:
Denoising Autoencoding: The shared encoder takes a noisy sentence (noise is introduced through random swaps of adjacent words) in a given language, initialized with cross-lingual embeddings, and encodes it into an intermediate representation; the decoder of that specific language then reconstructs the original sentence from that intermediate representation.
Back-translation: Training with denoising involves one language at a time, so on its own it is nothing more than a copying task. In order to perform actual translation without violating the constraint of using nothing but monolingual corpora, the back-translation approach is adopted to generate synthetic parallel sentences. First, for a given sentence in one language, the authors (Artetxe et al., 2018) use the system in inference mode to translate it into another language using greedy decoding. Then, the system is trained to predict the original sentence from this synthetic sentence.
4 Proposed Approach
Our proposed approach comprises two main steps: in the first step, we map multiple languages into a shared latent space through cross-lingual embedding mapping; in the second step, using this shared representation, we train NMT with only monolingual corpora with the help of a shared encoder and language-specific decoders through denoising and back-translation.
4.1 Cross-lingual Embedding
For creating cross-lingual embeddings, we follow the work of Conneau et al. (2018), which is a fully unsupervised approach to aligning monolingual word embeddings and builds on the earlier work of Mikolov et al. (2013). First, two monolingual embedding spaces X and Y are learned. Then, using adversarial training (Ganin et al., 2016), a translation matrix W is learned to map X into Y. A discriminator is trained to discriminate between WX and Y, while W is trained to prevent the discriminator from doing so by making WX and Y as similar as possible. Using W, a small bilingual dictionary of frequent words is learned. A new translation matrix W that translates between the X and Y spaces is then induced by solving the Orthogonal Procrustes problem:
W* = argmin_W ||WX − Y||_F = UV^T,  (1)
s.t. WW^T = I, where UΣV^T = SVD(Y X^T).  (2)
This step can be iterated multiple times by using the new W to extract new translation pairs. New translation pairs between the two languages are produced using cross-domain similarity local scaling (CSLS) (Conneau et al., 2018).
4.2 Multilingual Embeddings
In general, for n languages, we choose one language L1 as the anchor and map the other n − 1 languages into its embedding space. To do so, we first train monolingual word embeddings for each of the n languages. Then, one by one, we map each of the n − 1 languages' embeddings into the embedding space of L1. In our experiments, we consider 4 languages, namely English, French, Spanish and German. We create three cross-lingual embeddings for French, Spanish, and German by keeping the English embedding fixed.
4.3 Multilingual NMT Training
NMT systems are ideally trained to predict a target sentence given a source sentence. However, in the case of unsupervised NMT training, we only have monolingual corpora. In the absence of a true source-target pair, we rely on synthetic source-target pairs, with an authentic monolingual sentence on the target side and a synthetic equivalent of the target on the source side.
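As a concrete aside on the embedding-mapping step in §4.1, the closed-form Procrustes solution of Eqs. (1)–(2) can be computed in a few lines of numpy. This is a minimal sketch with our own variable names, not the MUSE implementation; X and Y are assumed to hold the dictionary word vectors as columns.

```python
import numpy as np

def procrustes(X, Y):
    """Orthogonal Procrustes solution W* = U V^T with U S V^T = SVD(Y X^T).
    X, Y: (d, n) arrays whose columns are the embeddings of the n dictionary
    pairs (source word, its translation)."""
    U, _, Vt = np.linalg.svd(Y @ X.T)
    return U @ Vt  # (d, d) mapping, satisfies W W^T = I

# In the iterative variant, this step would alternate with re-extracting a
# new dictionary via CSLS from the mapped space W @ X.
```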
Our proposed multilingual unsupervised NMT training strategy is inspired by a recent work of Artetxe et al. (2018), which has mainly two steps, viz. (i) denoising autoencoding of the sentences of source and target; and (ii) back-translation between source and target. For n languages L1, L2, ..., Ln, in each iteration, we perform denoising of n languages, back-translation from L1 to the other n −1 languages, and back-translation of n −1 languages to L1. Figure 1 shows the block-diagrammatic representation. In our experimental setting, we have 4 languages and L1 is English. In denoising autoencoding step, sentences in one language are corrupted by some random shuffle of words and the decoder is trained to predict the original sentences. In back-translation step, to train the system for a source-to-target direction, first a target sentence is translated to a source sentence using the system in inference mode (using the shared encoder and the source language decoder) to generate pseudo sourcetarget parallel sentence and then this pseudo parallel sentence is used to train the network for sourceto-target direction. Similarly for a target-to-source training, the process is analogous to the above approach. L1 Decoder L2 Decoder Ln Decoder Shared Encoder L1 L2 Ln L3 Decoder L3 ... ... L1 L2 L3 Ln ... Figure 1: Block diagrammatic view of the proposed network. The shared encoder and decoders of each language are 2-layered bidirectional GRUs. In each iteration of the training: 1. we denoise all languages (L1, L2, L3, ..., Ln); 2. back-translate from each Li to L1 as shown using red arrows; 3. back-translate from L1 to each Li as shown using blue arrows, where i ∈{2, 3, ..., n}. 5 Datasets and Experimental Setup 5.1 Datasets We use monolingual English, French, and German news corpora from WMT 20141 (Bojar et al., 2014) and Spanish from WMT 20132 (Bojar et al., 2013) for the experiments. The number of tokens for English, German, French and Spanish are 495.5, 622.6, 224.3 and 122.9 millions, respectively. For English-{French, German}, we use newstest2013 and newstest2014, and for EnglishSpanish, we use newstest2013. We do not use any parallel data to train, or development set to tune a model. We tokenize and truecase the data using Moses tokenizer3 and truecaser scripts. 1http://www.statmt.org/wmt14/translation-task.html 2http://www.statmt.org/wmt13/translation-task.html 3https://github.com/mosessmt/mosesdecoder/blob/RELEASE3.0/scripts/tokenizer/tokenizer.perl 3086 5.2 Experimental Setup Monolingual embeddings are trained using fastText4 using the skip-gram model with vector dimension of 300. For other hyperparameters, we keep default values of fastText (Bojanowski et al., 2017). After getting monolingual embedding for each language, we map every non-English embedding into the embedding space of English using the cross-lingual embedding mapping code MUSE5 by Conneau et al. (2018). For mapping, we use no bilingual data. We implement the proposed multilingual NMT architecture using PyTorch6, and is based on the implementation of Artetxe et al. (2018). The encoder and decoders are 2-layered bidirectional gated recurrent units (Cho et al., 2014). We keep the maximum sentence length to 50 tokens. For training, we keep embedding dimension of 300 and hidden dimension of 600, vocabulary size 50K, learning rate 0.0002 with Adam optimizer (Kingma and Ba, 2015). 
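Putting §4.3 and the setup above together, one joint training iteration can be sketched as follows. The three callables (add_noise, greedy_translate, mt_step) are placeholders the reader would supply; they are not part of the authors' code or of any existing library.

```python
def train_iteration(model, mono_batches, langs, anchor,
                    add_noise, greedy_translate, mt_step):
    """One iteration of the joint schedule (Sec. 4.3 / Fig. 1):
    n denoising updates plus 2*(n-1) back-translation updates.
    `add_noise`, `greedy_translate` and `mt_step` are illustrative
    placeholders, not an existing API."""
    # 1) Denoising autoencoding for every language L_i.
    for lang in langs:
        clean = mono_batches[lang]
        mt_step(model, src=add_noise(clean), tgt=clean, tgt_lang=lang)

    # 2) Back-translation between the anchor (English) and each other language.
    for lang in langs:
        if lang == anchor:
            continue
        # Train lang -> anchor: pair a real anchor batch with its synthetic
        # translation into `lang`, generated with the model in inference mode.
        pseudo = greedy_translate(model, mono_batches[anchor], tgt_lang=lang)
        mt_step(model, src=pseudo, tgt=mono_batches[anchor], tgt_lang=anchor)
        # Train anchor -> lang: the symmetric update.
        pseudo = greedy_translate(model, mono_batches[lang], tgt_lang=anchor)
        mt_step(model, src=pseudo, tgt=mono_batches[lang], tgt_lang=lang)
```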
As we do not use any development set, we run all the models (bilingual as well as multilingual) for 200k iterations keeping batch size of 50 sentences, and take the final models for evaluation. 6 Results and Analysis We train bilingual models for English↔{French, German, Spanish} as the baselines following Artetxe et al. (2018). We present the BLEU score for each translation direction using bilingual and multilingual models in Table 1. From Table 1, we observe that proposed multilingual model outperforms the separately trained bilingual models for all translation directions on both test sets with a maximum improvement of 1.48 BLEU points for for Spanish to English on newstest2013. As the parameters are shared at only encoder side and a separate decoder is used for each target language, multilingual training provides an improved performance for all the language pairs without loosing their own linguistic characteristics. Though, for one translation direction (En→Fr), the improvement on newstest2014 is only 0.12 BLEU points. The proposed method is still useful as our method shows consistent improvements over all the baseline models. In supervised multilingual NMT, specifically for one-to-many translation directions, this consistency is absent in some 4https://github.com/facebookresearch/fastText 5https://github.com/facebookresearch/MUSE 6https://pytorch.org existing works (Dong et al., 2015; Firat et al., 2016; Johnson et al., 2017). However, in this work, we find that using shared encoder with fixed cross-lingual embedding improves performance in all the translation directions. Though, it may not be fair to compare this unsupervised approach with the supervised ones, but this suggests that supervised multilingual NMT can be improved with cross-lingual embeddings. We leave it for future work. We also study the outputs produced by the different models. We find that multilingual models are better than bilingual models at lexical selection. For example, French words préparation and payons are translated as build-up and owe by bilingual model. However, the correct translations preparation and pay are generated by the multilingual model. For more examples and the quality of outputs, refer to Table 3 in Appendix A. newstest2013 newstest2014 System Base Multi ▲ Base Multi ▲ Fr→En 13.81 14.47 +0.66 14.98 15.76 +0.78 Es→En 13.97 15.45 +1.48 En→Fr 13.28 13.71 +0.43 14.57 14.69 +0.12 En→Es 14.01 14.82 +0.81 De→En 11.30 11.94 +0.64 10.48 11.21 +0.73 En→De 7.24 8.09 +0.85 6.24 6.77 +0.53 Table 1: BLEU scores on newstest2013 and newstest2014. ▲shows improvements over bilingual models. Spanish (Es) is not part of the newstest2014 test set. Base: Baseline. Multi: Multingual 6.1 Translation between Unseen Language Pairs In Table 2, we show the results of the language pairs never seen explicitly during training. During training, we only back-translate between English and non-English (Spanish, French, German) languages, but the network learns to translate between the non-English language pairs as well. For example, to translate from Spanish to French, we encode a Spanish sentence and the encoded output of the encoder is decoded by the French decoder. For evaluation, we use the newstest20137 test set for Spanish-French, Spanish-German, and French-German language pairs. From Table 2, we see translations between French and Spanish achieve very encouraging BLEU scores of 13.87 and 13.92, and pairs involving German achieve 7It is a multilingual test set. 
3087 moderate BLEU score of up to 7.40 considering the fact that the network is not trained for these pairs. For sample outputs, refer to Table 4 in Appendix A. → Es Fr De Es 13.92 4.78 Fr 13.87 4.59 De 7.40 6.78 Table 2: BLEU scores of translation between nonEnglish languages on newstest2013. Consider rows are source and columns are target. The network is not trained for these language pairs and still it is possible to translate between these pairs by using the shared encoder and language specific decoders. 6.2 Interlingual Representations Though the network is not trained for many-tomany translation direction, it is still able to translate in all directions. In multilingual training, the encoder is shared by all the languages while each language has a separate decoder. The hidden vectors generated by the shared encoder is consumed by a language-specific decoder to generate the translation in that specific language. The network learns to translate between the non-English languages as well, though the network is not trained to do so. It may happen that the encoder generates an interlingual representation from which a language-specific decoder is able to generate the translation. To see if the encoded representations share any pattern, we project them using t-SNE8 (Maaten and Hinton, 2008) for some sentences in all the four languages. From the projection as shown in Figure 6.2, we see that there are wellformed clusters, each representing a sentence in four languages. It means that for a sentence, the shared encoder generates approximately the same hidden contexts for all the four languages. 7 Conclusion In this paper, we propose a multilingual unsupervised NMT framework to jointly train multiple languages using a shared encoder and languagespecific decoders. Our approach is based on denoising autoencoding of all languages and backtranslating between English and non-English languages. Our approach shows consistent improvement over the baselines in all the translation di8https://projector.tensorflow.org English Spanish French German Figure 2: t-SNE projection of hidden vectors obtained from the shared encoder for some sentences in four languages. Each cluster indicates one sentence in four languages. Dots are the words in a sentence. Color represents the languages. rections with a maximum improvement of 1.48 BLEU points. We also observe that the network learns to translate between unseen language pairs. This is due to the ability of the shared encoder in our proposed network to generate languageindependent representation. In future, we would like to explore other languages with diverse linguistic characteristics. Acknowledgments The authors would like to thank the anonymous reviewers for their thoughtful comments. Asif Ekbal gratefully acknowledges the Young Faculty Research Fellowship (YFRF) Award supported by the Visvesvaraya PhD scheme for Electronics and IT, Ministry of Electronics and Information Technology (MeitY), Government of India, and implemented by Digital India Corporation (formerly Media Lab Asia). References Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2018. Unsupervised Neural Machine Translation. In Proceedings of the 6th International Conference on Learning Representations (ICLR 2018). Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural Machine Translation by Jointly Learning to Align and Translate. In Proceedings of the 3rd International Conference on Learning Representation (ICLR 2015). 
Piotr Bojanowski, Edouard Grave, Armand Joulin, and 3088 Tomas Mikolov. 2017. Enriching Word Vectors with Subword Information. Transactions of the Association for Computational Linguistics (TACL), 5:135– 146. Ondˇrej Bojar, Christian Buck, Chris Callison-Burch, Christian Federmann, Barry Haddow, Philipp Koehn, Christof Monz, Matt Post, Radu Soricut, and Lucia Specia. 2013. Findings of the 2013 Workshop on Statistical Machine Translation. In Proceedings of the Eighth Workshop on Statistical Machine Translation (WMT 2013), pages 1–44. Ondˇrej Bojar, Christian Buck, Christian Federmann, Barry Haddow, Philipp Koehn, Johannes Leveling, Christof Monz, Pavel Pecina, Matt Post, Herve Saint-Amand, et al. 2014. Findings of the 2014 Workshop on Statistical Machine Translation. In Proceedings of the ninth workshop on statistical machine translation (WMT 2014), pages 12–58. Kyunghyun Cho, Bart Van Merriënboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the Properties of Neural Machine Translation: Encoder-decoder Approaches. In Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, pages 103–111. Alexis Conneau, Guillaume Lample, Marc’Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. 2018. Word Translation Without Parallel Data. In Proceedings of the 6th International Conference on Learning Representations (ICLR 2018). Daxiang Dong, Hua Wu, Wei He, Dianhai Yu, and Haifeng Wang. 2015. Multi-Task Learning for Multiple Language Translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (ACLIJCNLP 2015) (Volume 1: Long Papers), pages 1723–1732. Orhan Firat, Kyunghyun Cho, and Yoshua Bengio. 2016. Multi-Way, Multilingual Neural Machine Translation with a Shared Attention Mechanism. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2016), pages 866–875. Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. 2016. Domain-Adversarial Training of Neural Networks. The Journal of Machine Learning Research, 17(1):2096–2030. Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Viégas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2017. Google’s Multilingual Neural Machine Translation System: Enabling Zero-Shot Translation. Transactions of the Association for Computational Linguistics (TACL), 5:339–351. Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent Continuous Translation Models. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing (EMNLP 2013), pages 1700–1709. Diederik P Kingma and Jimmy Ba. 2015. Adam: A Method for Stochastic Optimization. In Proceedings of the 3rd International Conference on Learning Representation (ICLR 2015). Philipp Koehn and Rebecca Knowles. 2017. Six Challenges for Neural Machine Translation. In Proceedings of the First Workshop on Neural Machine Translation, pages 28–39. Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018. Unsupervised Machine Translation using Monolingual Corpora Only. In Proceedings of the 6th International Conference on Learning Representations (ICLR 2018). Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing Data using t-SNE. 
Journal of machine learning research, 9(Nov):2579–2605. Tomas Mikolov, Quoc V Le, and Ilya Sutskever. 2013. Exploiting Similarities among Languages for Machine Translation. arXiv preprint arXiv:1309.4168. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a Method for Automatic Evaluation of Machine Translation. In Proceedings of the 40th annual meeting on association for computational linguistics (ACL 2002), pages 311–318. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving Neural Machine Translation Models with Monolingual Data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016) (Volume 1: Long Papers), pages 86–96. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to Sequence Learning with Neural Networks. In Proceedings of Advances in neural information processing systems (NIPS 2014), pages 3104–3112. Zhen Yang, Wei Chen, Feng Wang, and Bo Xu. 2018. Unsupervised Neural Machine Translation with Weight Sharing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL 2018), pages 46–55. A Sample Outputs We present sample outputs, generated by bilingual and proposed multilingual models, in Table 3. We find that multilingual models are better at lexical selection (see the underlined words in Table 3). Table 4 shows sample outputs on news2013 for unseen language pairs. 3089 Source Reference Bilingual Multilingual French→English La préparation à gérer une classe dans un contexte nord-américain, québécois. Preparation to manage a class in a North-American and Quebec context. The build-up to manage a class in a Australian, Australian. The preparation to handle a class in a Latin American context. Il va y avoir du changement dans la façon dont nous payons ces taxes. There is going to be a change in how we pay these taxes. There will be the change in the course of whom we owe these bills. There will be the change in the way we pay these taxes. German→English Auch diese Frage soll letztlich Aufschluss darüber geben, welche Voraussetzungen es für die Entstehung von Leben gibt. This question should also provide information regarding the preconditions for the origins of life. This question will also ultimately give clues about what there are for the evolution of life. This question will ultimately give clues to how there is conditions for the emergence of life. Ihm werde weiterhin vorgeworfen, unerlaubt geheime Informationen weitergegeben zu haben. He is still accused of passing on secret information without authorisation. Him will continue to be accused of stealing unlawful information. Him would continue to be accused of illegally of leaking secret information. Spanish→English Los estudiantes, por su parte, aseguran que el curso es uno de los más interesantes. Students, meanwhile, say the course is one of the most interesting around. The students, by their part, say the practice is one of the most intriguing. The students, by their part, say the course is one of the most interesting. No duda en contestar que nunca aceptaría una solicitud de una persona desconocida. He does not hesitate to reply that he would never accept a request from an unknown person. No doubt ever answering doubt it would never accept an argument an unknown person. No doubt in answer that he would never accept a request of a unknown person. Table 3: Sample outputs for bilingual and multilingual models on newstest2013 test set. 
We observe that the multilingual model is better at lexical selection. Underlined words are some examples of our observation. Source Reference Multilingual French→Spanish Les dirigeants républicains justifièrent leur politique par la nécessité de lutter contre la fraude électorale. Los dirigentes republicanos justificaron su política por la necesidad de luchar contra el fraude electoral. Los dirigentes republicanos <OOV> su política por la necesidad de luchar contra la fraude electoral. French→German Chacun sait que son livre fait partie de cet édifice. Jeder weiß , dass sein Buch Teil dieses Gebäudes ist. Jeder weiß , dass sein Buch Teil seines Gebäudes machte. German→Spanish Seine Zahlen auf Ebene der internationalen Turniere sind beeindruckend. Sus números a nivel de torneos internacionales son impresionantes. Sus cifras sobre el nivel de torneos internacionales son impresionantes. German→French Diese Einschränkungen sind nicht ohne Folgen. Ces restrictions ne sont pas sans conséquence. Ces restrictions ne sont pas sans conséquences. Spanish→German Tomemos por caso la elección directa del presidente , que ha sido un logro de la presión pública. Nehmen Sie nur einmal die direkte Wahl des Präsidenten, die ein Verdienst des öffentlichen Drucks war. Nehmen Sie über die direkte Wahl des Präsidenten, hat dies ein Erfolg ein der öffentlichen Druck. Spanish→French Las inversiones en la materia superan los 1.5 billones de dólares. Les investissements dans ce domaine dépassent les 1,5 milliards de dollars. Les investissements dans la matière dépassent les 1,5 milliards de dollars. Table 4: Sample outputs for unseen language pairs on newstest2013 test set.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3090–3097 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3090 Lattice-Based Transformer Encoder for Neural Machine Translation Fengshun Xiao1,2, Jiangtong Li2,3, Hai Zhao1,2,∗, Rui Wang4, Kehai Chen4 1Department of Computer Science and Engineering, Shanghai Jiao Tong University 2Key Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering, Shanghai Jiao Tong University, Shanghai, China 3College of Zhiyuan, Shanghai Jiao Tong University, China 4National Institute of Information and Communications Technology (NICT) {felixxiao, keep moving-lee}@sjtu.edu.cn, [email protected], {wangrui, khchen}@nict.go.jp Abstract Neural machine translation (NMT) takes deterministic sequences for source representations. However, either wordlevel or subword-level segmentations have multiple choices to split a source sequence with different word segmentors or different subword vocabulary sizes. We hypothesize that the diversity in segmentations may affect the NMT performance. To integrate different segmentations with the state-of-the-art NMT model, Transformer, we propose lattice-based encoders to explore effective word or subword representation in an automatic way during training. We propose two methods: 1) lattice positional encoding and 2) lattice-aware self-attention. These two methods can be used together and show complementary to each other to further improve translation performance. Experiment results show superiorities of lattice-based encoders in word-level and subword-level representations over conventional Transformer encoder. 1 Introduction Neural machine translation (NMT) has achieved great progress with the evolvement of model structures under an encoder-decoder framework (Sutskever et al., 2014; Bahdanau et al., 2014). Recently, the self-attention based Transformer model has achieved state-of-theart performance on multiple language pairs (Vaswani et al., 2017; Marie et al., 2018). Both representations of source and target sentences in ∗Corresponding author. This paper was partially supported by National Key Research and Development Program of China (No. 2017YFB0304100) and key projects of National Natural Science Foundation of China (No. U1836222 and No. 61733011). Rui Wang was partially supported by JSPS grant-in-aid for early-career scientists (19K20354): “Unsupervised Neural Machine Translation in Universal Scenarios” and NICT tenure-track researcher startup fund “Toward Intelligent Machine Translation”. v8 mao yi fa zhan ju fu zong cai v7 v6 v5 v4 v3 v2 v1 v0 mao-yi fa-zhan ju fu zong-cai fa-zhan-ju mao-yi-fa-zhan fu-zong-cai v0 v1 v2 v3 mao-yi-fa-zhan ju fu-zong-cai v0 v1 v2 v3 mao-yi fa-zhan-ju fu-zong-cai v0 v1 v2 v3 v4 v5 mao-yi fa-zhan ju fu zong-cai (1)Segmentaion 1 (2)Segmentaion 2 (3)Segmentation 3 (4)Lattice v8 mao yi fa zhan ju fu zong cai v7 v6 v5 v4 v3 v2 v1 v0 e0:2:mao-yi e2:4:fa-zhan e4:5:ju e5:6:fu e6:8:zong-cai e2:5:fa-zhan-ju e0:4:mao-yi-fa-zhan e5:8:fu-zong-cai c1 c2 c3 c4 c5 c6 c7 c8 Figure 1: Incorporating three different segmentation for a lattice graph. The original sentence is “mao-yifa-zhan-ju-fu-zong-cai”. In Chinese it is “贸易发展局 副总裁”. In English it means “The vice president of Trade Development Council” NMT can be factorized in character (Costa-Jussa and Fonollosa, 2016), word (Sutskever et al., 2014), or subword (Sennrich et al., 2015) level. 
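As an aside, the lattice of Figure 1 can be built mechanically by collecting the element spans of every candidate segmentation and de-duplicating them. The short sketch below is our own illustration (tokens are represented as lists of base elements), not the authors' preprocessing code.

```python
def build_lattice(segmentations):
    """Merge several segmentations of the same sentence into a lattice.
    Each segmentation is a list of tokens, and each token is a list of base
    elements (e.g. characters); an edge (i, j, token) covers elements
    c[i+1..j], i.e. lattice nodes are the gaps between elements."""
    edges = set()
    for seg in segmentations:
        pos = 0
        for token in seg:
            edges.add((pos, pos + len(token), "-".join(token)))
            pos += len(token)
    return sorted(edges)

# The three segmentations of Figure 1:
segs = [
    [["mao", "yi"], ["fa", "zhan"], ["ju"], ["fu"], ["zong", "cai"]],
    [["mao", "yi", "fa", "zhan"], ["ju"], ["fu", "zong", "cai"]],
    [["mao", "yi"], ["fa", "zhan", "ju"], ["fu", "zong", "cai"]],
]
print(build_lattice(segs))
```

Running this on the three segmentations yields the eight distinct edges of Figure 1, e.g. (0, 2, 'mao-yi'), (0, 4, 'mao-yi-fa-zhan'), (2, 5, 'fa-zhan-ju'), (5, 8, 'fu-zong-cai').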
However, only using 1-best segmentation as inputs limits NMT encoders to express source sequences sufficiently and reliably. Many East Asian languages, including Chinese are written without explicit word boundary, so that their sentences need to be segmented into words firstly (Zhao et al., 2019; Cai et al., 2017; Cai and Zhao, 2016; Zhao et al., 2013; Zhao and Kit, 2011). By different segmentors, each sentence can be segmented into multiple forms as shown in Figure 1. Even for those alphabetical languages with clear word boundary like English, there is still an issue about selecting a proper subword vocabulary size, which determines the segmentation granularities for word representation. In order to handle this problem, Morishita et al. (2018) used hierarchical subword features to represent sequence with different subword granularities. Su et al. (2017) proposed the first word-lattice based recurrent neural network 3091 (RNN) encoders which extended Gated Recurrent Units (GRUs) (Cho et al., 2014) to take in multiple sequence segmentation representations. Sperber et al. (2017) incorporated posterior scores to Tree-LSTM for building a lattice encoder in speech translation. All these existing methods serve for RNN-based NMT model, where lattices can be formulized as directed graphs and the inherent directed structure of RNN facilitates the construction of lattice. Meanwhile, the selfattention mechanism is good at learning the dependency between characters in parallel, which can partially compare and learn information from multiple segmentations (Cherry et al., 2018). Therefore, it is challenging to directly apply the lattice structure to Transformer. In this work, we explore an efficient way of integrating lattice into Transformer. Our method can not only process multiple sequences segmented in different ways to improve translation quality, but also maintain the characteristics of parallel computation in the Transformer. 2 Background 2.1 Transformer Transformer stacks self-attention and point-wise, fully connected layers for both encoders and decoders. Decoder layers also have another sublayer which performs attention over the output of the encoder. Residual connections around each layer are employed followed by layer normalization (Ba et al., 2016). To make use of the order of the sequence, Vaswani et al. (2017) proposed Positional Encodings to indicate the absolute or relative position of tokens in input sequence which are calculated as: p(j,2i) = sin(j/100002i/d) p(j,2i+1) = cos(j/100002i/d), where j is the position, i is the dimension and d is the model dimension. Then positional encodings p1:M = {p1, ..., pM} are added to the embedding of each token t1:M = {t1, ..., tM} and are propagated to higher layers via residual connections. 2.2 Self-Attention Transformer employs H attention heads to perform self-attention over a sequence individually and finally applies concatenation and linear transformation to the results from Conditions Explanation lad i < j = p < q ei:j is left adjacent to ep:q. rad p < q = i < j ei:j is right adjacent to ep:q. inc i ≤p < q ≤j ei:j includes ep:q. ind p ≤i < j ≤q ei:jis included in ep:q. its i < p < j < q or ei:j is intersected with ep:q. p < i < q < j pre i < j < p < q ei:j is preceding edge to ep:q. suc p < q < i < j ei:j is succeeding edge to ep:q. Table 1: Relations possibly satisfied by any two different edges ei:j and ep:q in the lattice. Note that two equal signs cannot stand at the same time in condition inequality for inc and ind. . 
each head, which is called multi-head attention (Vaswani et al., 2017). Every single head attention in multi-head attention is calculated in a scaled dot product form: uij = (tiW Q)(tjW K)T √ d , (1) where d is the model dimension, t1:M is the input sequence and uij are normalized by a softmax function: αij = exp(uij) PM k=1 exp(uik) , (2) and αij are used to calculate the final output hidden representations: oi = M X j=1 αij(tjW V ), (3) where o1:M is outputs and W Q,W K, and W V are learnable projections matrices for query, key, and value in a single head, respectively. 3 Models 3.1 Lattices Lattices can represent multiple segmentation sequences in a directed graph, as they merge the same subsequence of all candidate subsequences using a compact way. As shown in Figure 1, we follow Su et al. (2017) to apply different segmentator to segment an element1 sequence c1:N = {c1, c2, ..., cN} into different word or subword sequences to construct a lattice G = ⟨V, E⟩, a directed, connected, and acyclic graph, where V is node set and E is edge 1Character for word lattice and minimum subword unit in our predefined subword segmentations for subword lattice. 3092 Input Embedding Lattice sequence Inputs Lattice Positional Encoding Lattice-aware self-attention Add & Norm Add & Norm Feed Forward Hidden representations N x t1 t2 t3 t4 t5 t1 t2 t3 t4 t5 Figure 2: The architecture of lattice-based Transformer encoder. Lattice positional encoding is added to the embeddings of lattice sequence inputs. Different colors in lattice-aware self-attention indicate different relation embeddings. set, node vi ∈V denotes the gap between ci and ci+1, edge ei:j ∈E departing from vi and arrives at vj (i < j) indicates a possible word or subword unit covering subsequence ci+1:j. All the edges in the lattice G are the actual input tokens for NMT. For two different edges ei:j and ep:q, all possible relations can be enumerated as in Table 1. 3.2 Lattice-Based Encoders We place all edges E in the lattice graph into an input sequence t1:M = {t1, t2, ..., tM} for Transformer; then we modify the positional encoding to indicate the positional information of input tokens, namely all edges in the lattice graph. In addition, we propose a lattice-aware selfattention to directly represent position relationship among tokens. The overall architecture is shown in Figure 2. Lattice Positional Encoding (LPE) Original positional encoding indicates the order of the sequence in an ascending form {p1, p2, ..., pM}. We hypothesize that increasing positional encodings can indicate the order of sequential sentence. As shown in Figure 3, we scan a source sequence by element c1:N = {c1, c2, ..., cN} (for example, ci is character in Figure 3) and record their position p1:N = {p1, p2, ..., pN}. Then we use the positional encoding of the first element in lattice edge to represent current token’s position, which can ensure that every edge in each path departing from v0 and arriving at vN in lattice will v8 mao yi fa zhan ju fu zong cai v7 v6 v5 v4 v3 v2 v1 v0 mao-yi:1 fa-zhan:3 ju:5 fu:6 zong-cai:7 fa-zhan-ju:3 mao-yi-fa-zhan:1 fu-zong-cai:6 v0 v1 v2 v3 v4 v5 mao-yi:1 fa-zhan:2 ju:3 fu:4 zong-cai:5 (1)position encodings (2)LPE and LSA 1 2 3 4 5 6 7 8 rad inc inc lad pre its self lad self suc suc suc rad rad lad ind Figure 3: Lattice positional encoding pi+1 (in green) for edge ei:j in the lattice graph and the relation embeddings r in lattice-aware self-attention based on the timestep of token fa-zhan-ju (in red) and fu (in purple). 
have an increasing positional encoding order. The property mentioned above is easy to prove, since start and end points vi, vj of each edge ei:j strictly satisfy i < j and next edge ej:k will start from vj and thus get a larger positional encoding. Formally, for any input token tk, namely edge ei:j covering elements ci+1:j, positional encoding pi+1 will be used to represent its position and be added to its embedding. Lattice-aware Self-Attention (LSA) We also directly modify self-attention to a lattice-aware way which makes self-attention aware of the relations between any two different edges. We modified Equations (1) and (3) in the same way of Shaw et al. (2018) to indicate edge relation: eij = (tiW Q)(tjW K + rK ij )T √ d , (4) oi = M X j=1 αij(tjW V + rV ij), (5) where rK ij and rV ij are relation embeddings which are added to the keys and values to indicate relation between input tokens ti and tj, namely edges ep:q and ek:l in lattice graph, respectively. To facilitate parallel computation, we add an additional embedding (self) for a token when it is conducted dot-product attention with itself, so we train eight (seven in Table 1) different relation embeddings aV 1:8 and aK 1:8 as look-up table for keys and values, respectively. rK ij and rV ij can look up for aV 1:8 and aK 1:8 based on the relation between ti and tj. Figure 3 shows an example of embeddings in lattice-aware self-attentions based on the timestep of token fa-zhan-ju and fu. 3093 System Input MT05 MT02 MT03 MT04 MT06 MT08 ALL RNN PKU 31.42 34.68 33.08 35.32 31.61 23.58 31.76 CTB 31.38 34.95 32.85 35.44 31.75 23.33 31.78 MSR 29.92 34.49 32.06 35.10 31.23 23.12 31.35 Lattice-RNN Lattice 32.40 35.75 34.32 36.50 32.77 24.84 32.95 Transformer PKU 41.67 43.61 41.62 43.66 40.25 31.62 40.24 CTB 41.87 43.72 42.11 43.58 40.41 31.76 40.35 MSR 41.17 43.11 41.38 43.60 39.67 31.02 39.87 Transformer + LPE Lattice 42.37 43.71 42.67 44.43 41.14 32.09 40.93↑ Transformer + LSA 42.28 43.56 42.73 43.81 41.01 32.39 40.77↑ Transformer + LPE + LSA 42.65 44.14 42.24 44.81 41.37 32.98 41.26↑ Table 2: Evaluation of translation performance on NIST Zh-En dataset. RNN and Lattice-RNN results are from (Su et al., 2017). We highlight the highest BLEU score in bold for each set. ↑indicates statistically significant difference (p <0.01) from best baseline. Since self-attention is computed parallelly, we generate a matrix with all lattice embeddings in it for each sentence which can be easily incorporated into standard self-attention by matrix multiplication. We use different relation embeddings for different Transformer layers but share the same one between different heads in a single layer. 4 Experiments 4.1 Setup We conducted experiments on the NIST ChineseEnglish (Zh-En) and IWSLT 2016 EnglishGerman (En-De) datasets. The Zh-En corpus consists of 1.25M sentence pairs and the En-De corpus consists of 191K sentence pairs. For ZhEn task, we chose the NIST 2005 dataset as the validation set and the NIST 2002, 2003, 2004, 2006, and 2008 datasets as test sets. For EnDe task, tst2012 was used as validation set and tst2013 and tst2014 were used as test sets. For both tasks, sentence pairs with either side longer than 50 were dropped. We used the case-sensitive 4-gram NIST BLEU score (Papineni et al., 2002) as the evaluation metric and sign-test (Collins et al., 2005) for statistical significance test. For Zh-En task, we followed Su et al. 
(2017) to use the toolkit2 to train segmenters on PKU, MSR (Emerson, 2005), and CTB corpora (Xue et al., 2005), then we generated word lattices with different segmented training data. Both source and target vocabularies are limited to 30K. For En-De task, we adopted 8K, 16K and 32K 2https://nlp.stanford.edu/software/segmenter.html#Download BPE merge operations (Sennrich et al., 2015) to get different segmented sentences for building subword lattices. 16K BPE merge operations are employed on the target side. We set batch size to 1024 tokens and accumulated gradient 16 times before a backpropagation. During training, we set all dropout to 0.3 and chose the Adam optimizer (Kingma and Ba, 2014) with β1 = 0.9, β2 = 0.98 and ϵ = 10−9 for parameters tuning. During decoding, we used beam search algorithm and set the beam size to 20. All other configurations were the same with Vaswani et al. (2017). We implemented our model based on the OpenNMT (Klein et al., 2017) and trained and evaluated all models on a single NVIDIA GeForce GTX 1080 Ti GPU. 4.2 Overall Performance From Table 2, we see that our LPE and LSA models both outperform the Transformer baseline model of 0.58 and 0.42 BLEU respectively. When we combine LPE and LSA together, we get a gain of 0.91 BLEU points. Table 3 shows that our method also works well on the subword level. The base Transformer system has about 90M parameters and our LPE and LSA models introduce 0 and 6k parameters over it, respectively, which shows that our lattice approach improves Transformer with little parameter accumulation. During training, base Transformer performs about 0.714 steps per second while LPE + LSA model can process around 0.328. As lattice-based method usually seriously slows down the training, our lattice design and implementation over the Transformer only shows moderate efficiency 3094 System Input tst2012 tst2013 tst2014 RNN 16k 26.24 28.22 24.17 Transformer 8k 27.31 29.56 25.57 16k 27.35 29.02 25.12 32k 27.15 28.61 24.88 + LPE Lattice 27.34 29.48 25.88↑ + LSA 27.44 29.73↑ 25.65 + LPE + LSA 27.76 30.28↑ 26.22↑ Table 3: Evaluation of translation performance on IWSLT2016 En-De dataset. RNN results are reported from Morishita et al. (2018). ↑indicates statistically significant difference (p <0.01) from best baseline. . Systems PE PE + LSA ALL 40.54 40.90 Table 4: Translation performance (BELU score) with normal positional encodings and normal positional encodings with LSA model on NIST Zh-En dataset. reduction. 4.3 Analysis3 Effect of Lattice-Based Encoders To show the effectiveness of our method, we placed all edges in the lattice of a single sequence in a relative right order based on their first character, then we applied normal positional encodings (PE) to the lattice inputs on our base Transformer model. As shown in Table 4, our LPE and LSA method outperforms normal positional encodings by 0.39 and 0.23 BLEU respectively which shows that our methods are effective. Complementary of LPE and LSA Our LPE method allows edges in all paths in an increasing positional encoding order which seems to focus on long-range order but ignore local disorder. While our LSA method treats all preceding and succeeding edges equally which seems to address local disorder better but ignore long-range order. 
To show the complementary of these two methods, we also placed all edges of lattice in a single sequence in a relative right order based on their first character and use normal positional encodings and our LSA method; we obtained a BLEU of 40.90 which is 0.13 higher than single LSA model. From this, we can see that long-range position information is indeed beneficial to our LSA model. 3All analysis experiments conducted on NIST dataset. 5 Related Work Neural network based methods have been applied to several natural language processing tasks (Li et al., 2018; Zhang et al., 2019; Chen et al., 2018, 2017; Li et al., 2019; He et al., 2018; Zhou and Zhao, 2019), especially to NMT (Bahdanau et al., 2015; Wang et al., 2017a,b, 2018; Wang et al., 2018; Zhang et al., 2018; Zhang and Zhao, 2019). Our work is related to the source side representations for NMT. Generally, the NMT model uses the word as a basic unit for source sentences modeling. In order to obtain better source side representations and avoid OOV problems, recent research has modeled source sentences at character level (Ling et al., 2015; Costa-Jussa and Fonollosa, 2016; Yang et al., 2016; Lee et al., 2016), subword level (Sennrich et al., 2015; Kudo, 2018; Wu and Zhao, 2018) and mixed character-word level (Luong and Manning, 2016). All these methods show better translation performance than the word level model. As models mentioned above only use 1-best segmentation as inputs, lattice which can pack many different segmentations in a compact form has been widely used in statistical machine translation (SMT) (Xu et al., 2005; Dyer et al., 2008) and RNN-based NMT (Su et al., 2017; Sperber et al., 2017). To enhance the representaions of the input, lattice has also been applied in many other NLP tasks such as named entity recognition (Zhang and Yang, 2018), Chinese word segmentation (Yang et al., 2019) and part-of-speech tagging (Jiang et al., 2008; Wang et al., 2013). 6 Conclusions In this paper, we have proposed two methods to incorporate lattice representations into Transformer. Experimental results in two datasets on word-level and subword-level respectively validate the effectiveness of the proposed approaches. Different from Veliˇckovi´c et al. (2017), our work also provides an attempt to encode a simple labeled graph into Transformer and can be used in any tasks which need Transformer encoder to learn sequence representation. 3095 References Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer normalization. arXiv preprint arXiv:1607.06450. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the 3rd International Conference on Learning Representations (ICLR 2015). Deng Cai and Hai Zhao. 2016. Neural word segmentation learning for Chinese. arXiv preprint arXiv:1606.04300. Deng Cai, Hai Zhao, Zhisong Zhang, Yuan Xin, Yongjian Wu, and Feiyue Huang. 2017. Fast and accurate neural word segmentation for Chinese. arXiv preprint arXiv:1704.07047. Kehai Chen, Rui Wang, Masao Utiyama, Lemao Liu, Akihiro Tamura, Eiichiro Sumita, and Tiejun Zhao. 2017. Neural machine translation with source dependency representation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP 2017), pages 2846– 2852. 
Kehai Chen, Rui Wang, Masao Utiyama, Eiichiro Sumita, and Tiejun Zhao. 2018. Syntax-directed attention for neural machine translation. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence (AAAI 2018), pages 4792– 4799. Colin Cherry, George Foster, Ankur Bapna, Orhan Firat, and Wolfgang Macherey. 2018. Revisiting character-based neural machine translation with capacity and compression. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP 2018), pages 4295– 4305. Kyunghyun Cho, Bart Van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078. Michael Collins, Philipp Koehn, and Ivona Kuˇcerov´a. 2005. Clause restructuring for statistical machine translation. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL 2005), pages 531–540. Marta R Costa-Jussa and Jos´e AR Fonollosa. 2016. Character-based neural machine translation. arXiv preprint arXiv:1603.00810. Christopher Dyer, Smaranda Muresan, and Philip Resnik. 2008. Generalizing word lattice translation. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics (ACL 2008), pages 1012–1020. Thomas Emerson. 2005. The second international Chinese word segmentation bakeoff. In Proceedings of the fourth SIGHAN workshop on Chinese Language Processing, pages 123–133. Shexia He, Zuchao Li, Hai Zhao, and Hongxiao Bai. 2018. Syntax for semantic role labeling, to be, or not to be. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL 2018), pages 2061–2071. Wenbin Jiang, Haitao Mi, and Qun Liu. 2008. Word lattice reranking for Chinese word segmentation and part-of-speech tagging. In Proceedings of the 22nd International Conference on Computational Linguistics (COLING 2008), pages 385–392. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander M Rush. 2017. Opennmt: Open-source toolkit for neural machine translation. arXiv preprint arXiv:1701.02810. Taku Kudo. 2018. Subword regularization: Improving neural network translation models with multiple subword candidates. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL 2018), pages 66– 75. Jason Lee, Kyunghyun Cho, and Thomas Hofmann. 2016. Fully character-level neural machine translation without explicit segmentation. arXiv preprint arXiv:1610.03017. Zuchao Li, Jiaxun Cai, Shexia He, and Hai Zhao. 2018. Seq2seq dependency parsing. In Proceedings of the 27th International Conference on Computational Linguistics (COLING 2018), pages 3203–3214. Zuchao Li, Shexia He, Hai Zhao, Yiqing Zhang, Zhuosheng Zhang, Xi Zhou, and Xiang Zhou. 2019. Dependency or span, end-to-end uniform semantic role labeling. In Proceedings of the 33rd AAAI Conference on Artificial Intelligence (AAAI 2019). Wang Ling, Isabel Trancoso, Chris Dyer, and Alan W. Black. 2015. Character-based neural machine translation. arXiv preprint arXiv:1511.04586. Minh-Thang Luong and Christopher D. Manning. 2016. Achieving open vocabulary neural machine translation with hybrid word-character models. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016), pages 1054–1063. 
3096 Benjamin Marie, Rui Wang, Atsushi Fujita, Masao Utiyama, and Eiichiro Sumita. 2018. Nict’s neural and statistical machine translation systems for the wmt18 news translation task. In Proceedings of the Third Conference on Machine Translation, Volume 2: Shared Task Papers, pages 453–459. Makoto Morishita, Jun Suzuki, and Masaaki Nagata. 2018. Improving neural machine translation by incorporating hierarchical subword features. In Proceedings of the 27th International Conference on Computational Linguistics (COLING 2018), pages 618–629. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL 2002), pages 311– 318. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909. Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-attention with relative position representations. arXiv preprint arXiv:1803.02155. Matthias Sperber, Graham Neubig, Jan Niehues, and Alex Waibel. 2017. Neural lattice-to-sequence models for uncertain inputs. arXiv preprint arXiv:1704.00559. Jinsong Su, Zhixing Tan, Deyi Xiong, Rongrong Ji, Xiaodong Shi, and Yang Liu. 2017. Latticebased recurrent neural network encoders for neural machine translation. In Proceedings of the 31st AAAI Conference on Artificial Intelligence (AAAI 2017), pages 3302–3308. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of the 28th Conference on Neural Information Processing Systems (NIPS 2014), pages 3104–3112. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of the 31stst Conference on Neural Information Processing Systems (NIPS 2017), pages 5998–6008. Petar Veliˇckovi´c, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. 2017. Graph attention networks. arXiv preprint arXiv:1710.10903. Rui Wang, Andrew Finch, Masao Utiyama, and Eiichiro Sumita. 2017a. Sentence embedding for neural machine translation domain adaptation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL 2017), pages 560–566. Rui Wang, Masao Utiyama, Andrew Finch, Lemao Liu, Kehai Chen, and Eiichiro Sumita. 2018. Sentence selection and weighting for neural machine translation domain adaptation. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 26:1727–1741. Rui Wang, Masao Utiyama, Lemao Liu, Kehai Chen, and Eiichiro Sumita. 2017b. Instance weighting for neural machine translation domain adaptation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP 2017), pages 1482–1488. Rui Wang, Masao Utiyama, and Eiichiro Sumita. 2018. Dynamic sentence sampling for efficient training of neural machine translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL 2018), pages 298– 304. Zhiguo Wang, Chengqing Zong, and Nianwen Xue. 2013. A lattice-based framework for joint Chinese word segmentation, POS tagging and parsing. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (ACL 2013), pages 623–627. Yingting Wu and Hai Zhao. 2018. Finding better subword segmentation for neural machine translation. 
In The Seventeenth China National Conference on Computational Linguistics (CCL 2018), pages 53–64. Jia Xu, Evgeny Matusov, Richard Zens, and Hermann Ney. 2005. Integrated Chinese word segmentation in statistical machine translation. In International Workshop on Spoken Language Translation (IWSLT 2005). Naiwen Xue, Fei Xia, Fu-Dong Chiou, and Marta Palmer. 2005. The Penn Chinese TreeBank: Phrase structure annotation of a large corpus. Natural language engineering, 11(2):207–238. Jie Yang, Yue Zhang, and Shuailong Liang. 2019. Subword encoding in lattice LSTM for Chinese word segmentation. In Proceedings of the 17th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL 2019). Zhen Yang, Wei Chen, Feng Wang, and Bo Xu. 2016. A character-aware encoder for neural machine translation. In Proceedings of the 26th International Conference on Computational Linguistics (COLING 2016), pages 3063–3070. Huan Zhang and Hai Zhao. 2019. Minimum divergence vs. maximum margin: An empirical comparison on seq2seq models. In Proceedings of the Seventh International Conference on Learning Representations (ICLR 2019). 3097 Yue Zhang and Jie Yang. 2018. Chinese NER using lattice LSTM. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL 2018), pages 1554–1564. Zhisong Zhang, Rui Wang, Masao Utiyama, Eiichiro Sumita, and Hai Zhao. 2018. Exploring recombination for efficient decoding of neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP 2018), pages 4785–4790. Zhuosheng Zhang, Yafang Huang, and Hai Zhao. 2019. Neural-based pinyin-to-character conversion with adaptive vocabulary. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019). Hai Zhao, Deng Cai, Changning Huang, and Chunyu Kit. 2019. Chinese word segmentation: Another decade review (2007-2017). arXiv preprint arXiv:1901.06079. Hai Zhao and Chunyu Kit. 2011. Integrating unsupervised and supervised word segmentation: The role of goodness measures. Information Sciences, 181(1):163–183. Hai Zhao, Masao Utiyama, Eiichiro Sumita, and Bao-Liang Lu. 2013. An empirical study on word segmentation for Chinese machine translation. In International Conference on Intelligent Text Processing and Computational Linguistics (CICLing 2013), pages 248–263. Junru Zhou and Hai Zhao. 2019. Head-driven phrase structure grammar parsing on Penn Treebank. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019).
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3098–3112 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3098 Multi-Source Cross-Lingual Model Transfer: Learning What to Share Xilun Chen†∗ Ahmed Hassan Awadallah‡ Hany Hassan‡ Wei Wang‡ Claire Cardie† †Cornell University Ithaca, NY {xlchen,cardie}@cs.cornell.edu ‡Microsoft Research Redmond, WA {hassanam,hanyh,Wei.Wang}@microsoft.com Abstract Modern NLP applications have enjoyed a great boost utilizing neural networks models. Such deep neural models, however, are not applicable to most human languages due to the lack of annotated training data for various NLP tasks. Cross-lingual transfer learning (CLTL) is a viable method for building NLP models for a low-resource target language by leveraging labeled data from other (source) languages. In this work, we focus on the multilingual transfer setting where training data in multiple source languages is leveraged to further boost target language performance. Unlike most existing methods that rely only on language-invariant features for CLTL, our approach coherently utilizes both languageinvariant and language-specific features at instance level. Our model leverages adversarial networks to learn language-invariant features, and mixture-of-experts models to dynamically exploit the similarity between the target language and each individual source language1. This enables our model to learn effectively what to share between various languages in the multilingual setup. Moreover, when coupled with unsupervised multilingual embeddings, our model can operate in a zero-resource setting where neither target language training data nor cross-lingual resources are available. Our model achieves significant performance gains over prior art, as shown in an extensive set of experiments over multiple text classification and sequence tagging tasks including a large-scale industry dataset. 1 Introduction Recent advances in deep learning enabled a wide variety of NLP models to achieve impressive performance, thanks in part to the availability of ∗Most work was done while the first author was an intern at Microsoft Research. 1The code is available at https://github.com/ microsoft/Multilingual-Model-Transfer. large-scale annotated datasets. However, such an advantage is not available to most of the world languages since many of them lack the the labeled data necessary for training deep neural nets for a variety of NLP tasks. As it is prohibitive to obtain training data for all languages of interest, crosslingual transfer learning (CLTL) offers the possibility of learning models for a target language using annotated data from other languages (source languages) (Yarowsky et al., 2001). In this paper, we concentrate on the more challenging unsupervised CLTL setting, where no target language labeled data is used for training.2 Traditionally, most research on CLTL has been devoted to the standard bilingual transfer (BLTL) case where training data comes from a single source language. In practice, however, it is often the case that we have labeled data in a few languages, and would like to be able to utilize all of the data when transferring to other languages. Previous work (McDonald et al., 2011) indeed showed that transferring from multiple source languages could result in significant performance improvement. 
Therefore, in this work, we focus on the multi-source CLTL scenario, also known as multilingual transfer learning (MLTL), to further boost the target language performance. One straightforward method employed in CLTL is weight sharing, namely directly applying the model trained on the source language to the target after mapping both languages to a common embedding space. As shown in previous work (Chen et al., 2016), however, the distributions of the hidden feature vectors of samples from different languages extracted by the same neural net remain divergent, and hence weight sharing is not sufficient for learning a language-invariant feature space that generalizes well across languages. As such, previ2In contrast, supervised CLTL assumes the availability of annotations in the target language. 3099 ous work has explored using language-adversarial training (Chen et al., 2016; Kim et al., 2017) to extract features that are invariant with respect to the shift in language, using only (non-parallel) unlabeled texts from each language. On the other hand, in the MLTL setting, where multiple source languages exist, languageadversarial training will only use, for model transfer, the features that are common among all source languages and the target, which may be too restrictive in many cases. For example, when transferring from English, Spanish and Chinese to German, language-adversarial training will retain only features that are invariant across all four languages, which can be too sparse to be informative. Furthermore, the fact that German is more similar to English than to Chinese is neglected because the transferred model is unable to utilize features that are shared only between English and German. To address these shortcomings, we propose a new MLTL model that not only exploits languageinvariant features, but also allows the target language to dynamically and selectively leverage language-specific features through a probabilistic attention-style mixture of experts mechanism (see §3). This allows our model to learn effectively what to share between various languages. Another contribution of this paper is that, when combined with the recent unsupervised cross-lingual word embeddings (Lample et al., 2018; Chen and Cardie, 2018b), our model is able to operate in a zero-resource setting where neither task-specific target language annotations nor general-purpose cross-lingual resources (e.g. parallel corpora or machine translation (MT) systems) are available. This is an advantage over many existing CLTL works, making our model more widely applicable to many lower-resource languages. We evaluate our model on multiple MLTL tasks ranging from text classification to named entity recognition and semantic slot filling, including a real-world industry dataset. Our model beats all baseline models trained, like ours, without crosslingual resources. More strikingly, in many cases, it can match or outperform state-of-the-art models that have access to strong cross-lingual supervision (e.g. commercial MT systems). 2 Related Work The diversity of human languages is a critical challenge for natural language processing. In order to alleviate the need for obtaining annotated data for each task in each language, cross-lingual transfer learning (CLTL) has long been studied (Yarowsky et al., 2001; Bel et al., 2003, inter alia). For unsupervised CLTL in particular, where no target language training data is available, most prior research investigates the bilingual transfer setting. 
Traditionally, research focuses on resource-based methods, where general-purpose cross-lingual resources such as MT systems or parallel corpora are utilized to replace taskspecific annotated data (Wan, 2009; Prettenhofer and Stein, 2010). With the advent of deep learning, especially adversarial neural networks (Goodfellow et al., 2014; Ganin et al., 2016), progress has been made towards model-based CLTL methods. Chen et al. (2016) propose languageadversarial training that does not directly depend on parallel corpora, but instead only requires a set of bilingual word embeddings (BWEs). On the other hand, the multilingual transfer setting, although less explored, has also been studied (McDonald et al., 2011; Naseem et al., 2012; T¨ackstr¨om et al., 2013; Hajmohammadi et al., 2014; Zhang and Barzilay, 2015; Guo et al., 2016), showing improved performance compared to using labeled data from one source language as in bilingual transfer. Another important direction for CLTL is to learn cross-lingual word representations (Klementiev et al., 2012; Zou et al., 2013; Mikolov et al., 2013). Recently, there have been several notable work for learning fully unsupervised cross-lingual word embeddings, both for the bilingual (Zhang et al., 2017; Lample et al., 2018; Artetxe et al., 2018) and multilingual case (Chen and Cardie, 2018b). These efforts pave the road for performing CLTL without cross-lingual resources. Finally, a related field to MLTL is multi-source domain adaptation (Mansour et al., 2009), where most prior work relies on the learning of domaininvariant features (Zhao et al., 2018; Chen and Cardie, 2018a). Ruder et al. (2019) propose a general framework for selective sharing between domains, but their method learns static weights at the task level, while our model can dynamically select what to share at the instance level. A very recent work (Guo et al., 2018) attempts to model the relation between the target domain and each source domain. Our model combines the strengths of these methods and is able to simul3100 Shared Feature Extractor Fs MoE Private Feature Extractor Fp MoE Task-Specific Predictor C Task Label Multilingual Word Representation Input Text JC Language Discriminator D Language Label JD −λ1JD Gate Label λ2Jg Forward and backward passes when updating the parameters of Fs, Fp and C Forward and backward passes when updating the parameters of D Figure 1: An overview of the MAN-MoE model. taneously utilize both the domain-invariant and domain-specific features in a coherent way. 3 Model One commonly adopted paradigm for neural cross-lingual transfer is the shared-private model (Bousmalis et al., 2016), where the features are divided into two parts: shared (languageinvariant) features and private (language-specific) features. As mentioned before, the shared features are enforced to be language-invariant via language-adversarial training, by attempting to fool a language discriminator. Furthermore, Chen and Cardie (2018a) propose a generalized sharedprivate model for the multi-source setting, where a multinomial adversarial network (MAN) is adopted to extract common features shared by all source languages as well as the target. On the other hand, the private features are learned by separate feature extractors, one for each source language, capturing the remaining features outside the shared ones. During training, the labeled samples from a certain source language go through the corresponding private feature extractor for that particular language. 
At test time, there is no private feature extractor for the target language; only the shared features are used for cross-lingual transfer. As mentioned in §1, using only the shared features for MLTL imposes an overly strong constraint and many useful features may be wiped out by adversarial training if they are shared only between the target language and a subset of source languages. Therefore, we propose to use a mixture-of-experts (MoE) model (Shazeer et al., 2017; Gu et al., 2018) to learn the private features. The idea is to have a set of language expert networks, one per source language, each responsible for learning language-specific features for that source language during training. However, instead of hard-switching between the experts, each sample uses a convex combination of all experts, dictated by an expert gate. Thus, at test time, the trained expert gate can decide the optimal expert weights for the unseen target language based on its similarity to the source languages. Figure 1 shows an overview of our MAN-MoE model for multilingual model transfer. The boxes illustrate various components of the MAN-MoE model (§3.1), while the arrows depict the training flow (§3.2). 3.1 Model Architecture Figure 1 portrays an abstract view of the MAN-MoE model with four components: the Multilingual Word Representation, the MAN Shared Feature Extractor Fs (together with the Language Discriminator D), the MoE Private Feature Extractor Fp, and finally the MoE Predictor C. Based on the actual task (e.g. sequence tagging, text classification, sequence to sequence, etc.), different architectures may be adopted, as explained below. Multilingual Word Representation embeds words from all languages into a single semantic space so that words with similar meanings are close to each other regardless of language. In this work, we mainly rely on the MUSE embeddings (Lample et al., 2018), which are trained in a fully unsupervised manner. We map all other languages into English to obtain a multilingual embedding space. However, in certain experiments, MUSE yields 0 accuracy on one or more language pairs (Søgaard et al., 2018), in which case the VecMap embeddings (Artetxe et al., 2017) are used. It uses identical strings as supervision, which does not require parallel corpus or human annotations. We further experiment with the recent unsupervised multilingual word embeddings (Chen and Cardie, 2018b), which gives improved performance (§4.2). In addition, for tasks where morphological fea3101 BiLSTM … … … … BiLSTM BiLSTM Multilingual Word Representation Driving directions to … … … … EN Expert MLP Expert Gate ES Expert MLP ZH Expert MLP ⍺1 ⍺2 ⍺3 Private Features Gate Label λ2Jg Figure 2: The MoE Private Feature Extractor Fp with three source languages: English (EN), Spanish (ES), and Chinese (ZH). tures are important, one can add character-level word embeddings (Dos Santos and Zadrozny, 2014) that captures sub-word information. When character embeddings are used, we add a single CharCNN that is shared across all languages, and the final word representation is the concatenation of the word embedding and the char-level embedding. The CharCNN can then be trained end to end with the rest of the model. MAN Shared Feature Extractor Fs is a multinomial adversarial network (Chen and Cardie, 2018a), which is an adversarial pair of a feature extractor (e.g. LSTM or CNN) and a language discriminator D. 
D is a text classifier (Kim, 2014) that takes the shared features (extracted by Fs) of an input sequence and predicts which language it comes from. On the other hand, Fs strives to fool D so that it cannot identify the language of a sample. The hypothesis is that if D cannot recognize the language of the input, the shared features then do not contain language information and are hence language-invariant. Note that D is trained only using unlabeled texts, and can therefore be trained on all languages including the target language. MoE Private Feature Extractor Fp is a key difference from previous work, shown in Figure 2. The figure shows the Mixture-of-Experts (Shazeer et al., 2017) model with three source languages, English, Spanish, and Chinese. Fp has a shared BiLSTM at the bottom that extracts contextualized word representations for each token w in the input sentence. The LSTM hidden representation hw is then fed into the MoE module, where each source language has a separate expert network (a MLP). In addition, the expert gate G is a linear transformation that takes hw as input and outputs a softmax score αi for each expert. The final private feature vector is a mixture of all expert outputs, dictated by the expert gate weights α. During training, the expert gate is trained to predict the language of a sample using the gate loss Jg, where the expert gate output α is treated as the softmax probability of the predicted languages. In other words, the more accurate the language prediction is, the more the correct expert gets used. Therefore, Jg is used to encourage samples from a certain source language to use the correct expert, and each expert is hence learning languagespecific features for that language. As the BiLSTM is exposed to all source languages during training, the trained expert gate will be able to examine the hidden representation of a token to predict the optimal expert weights α, even for unseen target languages at test time. For instance, if a German test sample is similar to the English training samples, the trained expert gate will predict a higher α for the English expert, resulting in a heavier use of it in the final feature vector. Therefore, even for the unforeseen target language (e.g. German), Fp is able to dynamically determine what knowledge to use from each individual source language at a token level. MoE Task-Specific Predictor C is the final module that make predictions for the end task, and may take different forms depending on the task. For instance, for sequence tagging tasks, the shared and private features are first concatenated for each token, and then past through a MoE module similar to Fp (as shown in Figure 6 in the Appendix). It is straightforward to adapt C to work for other tasks. For example, for text classification, a pooling layer such as dot-product attention (Luong et al., 2015) is added at the bottom to fuse token-level features into a single sentence feature vector. C first concatenates the shared and private features to form a single feature vector for each token. It then has another MoE module that outputs a softmax probability over all labels for each token. The idea is that it may be favorable to put different weights between the language-invariant and language-specific features for different target languages. Again consider the example of English, German, Spanish and Chinese. 
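Before walking through that example, here is a minimal PyTorch-style sketch of the MoE block just described, as it is used in Fp (and, with a softmax output layer on top, in C): per-language expert MLPs, a linear expert gate over the token's hidden state, and a convex combination of expert outputs. Class, method, and variable names are ours for illustration, dimensions follow Appendix B, and this is a sketch of the described mechanism rather than the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoEBlock(nn.Module):
    """Mixture-of-experts over per-source-language expert MLPs.

    Each expert is a small MLP; a linear gate over the token's hidden state
    produces softmax weights alpha, and the output is the convex combination
    of all expert outputs (no hard switching between experts).
    """
    def __init__(self, num_experts: int, in_dim: int = 128, hidden_dim: int = 128):
        super().__init__()
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU(),
                          nn.Linear(hidden_dim, hidden_dim), nn.Tanh())
            for _ in range(num_experts)
        ])
        self.gate = nn.Linear(in_dim, num_experts)   # expert gate G

    def forward(self, h):                     # h: (batch, seq_len, in_dim)
        gate_logits = self.gate(h)            # (batch, seq_len, num_experts)
        alpha = F.softmax(gate_logits, dim=-1)
        expert_out = torch.stack([e(h) for e in self.experts], dim=-2)
        #             (batch, seq_len, num_experts, hidden_dim)
        mixed = (alpha.unsqueeze(-1) * expert_out).sum(dim=-2)
        return mixed, gate_logits             # gate_logits feed the gate loss

class MoEPrivateExtractor(nn.Module):
    """F_p: shared BiLSTM followed by the MoE block (cf. Figure 2)."""
    def __init__(self, emb_dim: int, num_src_langs: int, hidden: int = 128):
        super().__init__()
        self.bilstm = nn.LSTM(emb_dim, hidden // 2, num_layers=2,
                              bidirectional=True, batch_first=True)
        self.moe = MoEBlock(num_src_langs, in_dim=hidden, hidden_dim=hidden)

    def forward(self, word_reprs):            # (batch, seq_len, emb_dim)
        h, _ = self.bilstm(word_reprs)
        private_feats, gate_logits = self.moe(h)
        return private_feats, gate_logits
```

During training, the gate logits produced in both Fp and C are supervised with the sample's language label (the gate loss weighted by λ2), which is what pushes each expert toward features specific to its source language.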
When transferring to Chinese from the other three, the source languages are similar to each other while all being rather distant from Chinese. Therefore, the adversarially learned shared features might be more important in this case. On the other hand, when transferring to German, which is much more similar to English than to Chinese, we might want to pay more attention to the MoE private features. Therefore, we adopt an MoE module in C, which provides more flexibility than using a single MLP.3

3We also experimented with an attention mechanism between the shared and private features, or a gating mechanism to modulate each feature channel, but got sub-optimal results.

3.2 Model Training

Denote the set of all N source languages as S, where |S| = N. Denote the target language as T, and let ∆ = S ∪ T be the set of all languages. Denote the annotated corpus for a source language l ∈ S as Xl, where (x, y) ∼ Xl is a sample drawn from Xl. In addition, unlabeled data is required for all languages to facilitate the MAN training. We hence denote as Ul′ the unlabeled texts from a language l′ ∈ ∆.

The overall training flow of the various components is illustrated in Figure 1, while the training algorithm is depicted in Algorithm 1. Similar to MAN, there are two separate optimizers to train MAN-MoE, one updating the parameters of D (red arrows), while the other updates the parameters of all other modules (green arrows).

Algorithm 1 MAN-MoE Training
Require: labeled corpus X; unlabeled corpus U; hyperparameters λ1, λ2 > 0, k ∈ N
 1: repeat
 2:   ▷ D iterations
 3:   for diter = 1 to k do
 4:     lD = 0
 5:     for all l ∈ ∆ do                      ▷ For all languages
 6:       Sample a mini-batch x ∼ Ul
 7:       fs = Fs(x)                          ▷ Shared features
 8:       lD += LD(D(fs); l)                  ▷ D loss
 9:     Update D parameters using ∇lD
10:   ▷ Main iteration
11:   loss = 0
12:   for all l ∈ S do                        ▷ For all source languages
13:     Sample a mini-batch (x, y) ∼ Xl
14:     fs = Fs(x)                            ▷ Shared features
15:     fp, g1 = Fp(x)                        ▷ Private feat. & gate outputs
16:     ŷ, g2 = C(fs, fp)
17:     loss += LC(ŷ; y) + λ2(Lg(g1; l) + Lg(g2; l))
18:   for all l ∈ ∆ do                        ▷ For all languages
19:     Sample a mini-batch x ∼ Ul
20:     fs = Fs(x)                            ▷ Shared features
21:     loss += −λ1 · LD(D(fs); l)            ▷ Confuse D
22:   Update Fs, Fp, C parameters using ∇loss
23: until convergence

In Algorithm 1, LC, LD and Lg are the loss functions for the predictor C, the language discriminator D, and the expert gates in Fp and C, respectively. In practice, we adopt the NLL loss for LC for text classification, and the token-level NLL loss for sequence tagging:

\mathcal{L}_{\mathrm{NLL}}(\hat{y}; y) = -\log P(\hat{y} = y)  \qquad (1)

\mathcal{L}_{\mathrm{T\text{-}NLL}}(\hat{\mathbf{y}}; \mathbf{y}) = -\log P(\hat{\mathbf{y}} = \mathbf{y}) = -\sum_i \log P(\hat{y}_i = y_i)  \qquad (2)

where y is a scalar class label, and \mathbf{y} is a vector of token labels. LC is hence interpreted as the negative log-likelihood of predicting the correct task label. Similarly, D adopts the NLL loss in (1) for predicting the correct language of a sample. Finally, the expert gates G use the token-level NLL loss in (2), which translates to the negative log-likelihood of using the correct language expert for each token in a sample. Therefore, the objectives that C, D and G minimize are, respectively:

J_C = \sum_{l \in S} \mathbb{E}_{(x, y) \in X_l} \left[ \mathcal{L}_C\big(C(F_s(x), F_p(x)); y\big) \right]  \qquad (3)

J_D = \sum_{l \in \Delta} \mathbb{E}_{x \in U_l} \left[ \mathcal{L}_D\big(D(F_s(x)); l\big) \right]  \qquad (4)

J_G = \sum_{l \in S} \mathbb{E}_{x \in X_l} \left[ \sum_{w \in x} \mathcal{L}_G\big(G(h_w); l\big) \right]  \qquad (5)

where hw in (5) is the BiLSTM hidden representation in Fp as shown in Figure 2. In addition, note that D is trained using unlabeled corpora over all languages (∆), while the training of Fp and C (and hence G) only takes place on the source languages (S).
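To make the alternating updates concrete, the following PyTorch-style sketch implements one outer iteration of Algorithm 1, reusing the hypothetical module interfaces from the previous sketch. It assumes D returns sentence-level language logits, the task is sequence tagging with token-level labels, and gate losses are token-level cross-entropy; the `detach` during the D iterations and all names are our choices. This is an illustration of the procedure, not the released implementation.

```python
import torch
import torch.nn.functional as F

def gate_loss(gate_logits, lang_id):
    """Token-level NLL of predicting the sample's language from each token's gate."""
    flat = gate_logits.reshape(-1, gate_logits.size(-1))
    target = torch.full((flat.size(0),), lang_id, dtype=torch.long, device=flat.device)
    return F.cross_entropy(flat, target)

def train_iteration(Fs, Fp, C, D, labeled, unlabeled, opt_main, opt_d,
                    lambda1, lambda2, k):
    """One outer iteration of Algorithm 1 (sketch).

    labeled:   {lang_id: (x, y)}  mini-batches from labeled source corpora X_l
    unlabeled: {lang_id: x}       mini-batches from unlabeled corpora U_l (all languages)
    Assumed interfaces: Fs(x) -> shared feats; Fp(x) -> (private feats, gate logits);
    C(fs, fp) -> (token logits, gate logits); D(fs) -> (batch, n_langs) logits.
    """
    # ---- D iterations: train the language discriminator on shared features ----
    for _ in range(k):
        opt_d.zero_grad()
        l_d = 0.0
        for lang, x in unlabeled.items():
            logits = D(Fs(x).detach())        # detach: only D is updated here
            target = torch.full((logits.size(0),), lang, dtype=torch.long,
                                device=logits.device)
            l_d = l_d + F.cross_entropy(logits, target)
        l_d.backward()
        opt_d.step()

    # ---- Main iteration: update Fs, Fp and C ----
    opt_main.zero_grad()
    loss = 0.0
    for lang, (x, y) in labeled.items():      # labeled source languages only
        fs = Fs(x)
        fp, g1 = Fp(x)
        y_hat, g2 = C(fs, fp)
        task = F.cross_entropy(y_hat.reshape(-1, y_hat.size(-1)), y.reshape(-1))
        loss = loss + task + lambda2 * (gate_loss(g1, lang) + gate_loss(g2, lang))
    for lang, x in unlabeled.items():         # all languages, unlabeled text
        logits = D(Fs(x))
        target = torch.full((logits.size(0),), lang, dtype=torch.long,
                            device=logits.device)
        loss = loss - lambda1 * F.cross_entropy(logits, target)   # confuse D
    loss.backward()
    opt_main.step()
```

The minus sign on the discriminator term is what realizes the adversarial λ1 objective: D is trained to identify the language of the shared features, while Fs is simultaneously updated to make that prediction harder.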
Finally, the overall objective function is:

J = J_C - \lambda_1 J_D + \lambda_2 \big( J_G^{(1)} + J_G^{(2)} \big)  \qquad (6)

where J_G^{(1)} and J_G^{(2)} correspond to the two expert gates in Fp and C, respectively. More implementation details can be found in Appendix B.

4 Experiments

In this section, we present an extensive set of experiments across three datasets. The first experiment is on a real-world multilingual slot filling (sequence tagging) dataset, where the data is used in a commercial personal virtual assistant. In addition, we conduct experiments on two public academic datasets, namely the CoNLL multilingual named entity recognition (sequence tagging) dataset (Sang, 2002; Sang and Meulder, 2003) and the multilingual Amazon reviews (text classification) dataset (Prettenhofer and Stein, 2010).

Table 1: Statistics for the Multilingual Semantic Slot Filling dataset with examples from each domain.

Domain     | English #Train/#Dev/#Test | German #Train/#Dev/#Test | Spanish #Train/#Dev/#Test | Chinese #Train/#Dev/#Test | #Slot
Navigation | 311045 / 23480 / 36625    | 13356 / 1599 / 2014      | 13862 / 1497 / 1986       | 7472 / 1114 / 1173        | 8
Calendar   | 64010 / 5946 / 8260       | 8261 / 1084 / 1366       | 6706 / 926 / 1081         | 2056 / 309 / 390          | 4
Files      | 30339 / 2058 / 5355       | 3005 / 451 / 480         | 6082 / 843 / 970          | 1289 / 256 / 215          | 5

Domain     | Example
Navigation | [Driving]transportation type directions to [Walmart]place name in [New York]location.
Calendar   | Add [school meeting]title to my calendar on [Monday]start date at [noon]start time.
Files      | Search for [notes]data type with [grocery list]keyword.

4.1 Cross-Lingual Semantic Slot Filling

As shown in Table 1, we collect data for four languages: English, German, Spanish, and Chinese, over three domains: Navigation, Calendar, and Files. Each domain has a set of pre-determined slots (the slots are the same across languages), and the user utterances in each language and domain are annotated by crowd workers with the correct slots (see the examples in Table 1). We employ the standard BIO tagging scheme to formulate the slot filling problem as a sequence tagging task. For each domain and language, the data is divided into a training, a validation, and a test set, with the number of samples in each split shown in Table 1. In our experiments, we treat each domain as a separate experiment, and consider each of German, Spanish and Chinese as the target language, with the remaining three serving as source languages, which results in a total of 9 experiments.

4.1.1 Results

In Table 2, we report the performance of MAN-MoE compared to a number of baseline systems. All systems adopt the same base architecture, which is a multi-layer BiLSTM sequence tagger (İrsoy and Cardie, 2014) with a token-level MLP on top (no CRFs were used).

MT baselines employ machine translation (MT) for cross-lingual transfer. In particular, the train-on-trans(lation) method translates the entire English training set into each target language, which is in turn used to train a supervised system on that target language. On the other hand, the test-on-trans(lation) method trains an English sequence tagger, and utilizes MT to translate the test set of each target language into English in order to make predictions. In this work, we adopt the Microsoft Translator4, a strong commercial MT system. Note that for an MT system to work for sequence tagging tasks, word alignment information must be available in order to project word-level annotations across languages. This rules out many MT systems such as Google Translate since they do not provide word alignment information through their APIs.
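To illustrate why word-level alignment matters for the train-on-trans baseline, the sketch below projects BIO slot tags from a source sentence onto its translation given token alignments. The alignment format, the underscore-style tag names, and the conflict-resolution rule are our simplifying assumptions for illustration, not the exact projection procedure used with the Microsoft Translator.

```python
def project_bio_tags(src_tags, alignment, tgt_len):
    """Project BIO tags from source tokens to target tokens via word alignment.

    src_tags:  list of BIO tags for the source tokens, e.g. ["O", "O", "B-place_name"]
    alignment: list of (src_idx, tgt_idx) pairs returned by the MT system
    tgt_len:   number of tokens in the translated sentence
    Unaligned target tokens default to "O"; if several source tokens map to the
    same target token, the first non-"O" tag wins (a simplification).
    """
    tgt_tags = ["O"] * tgt_len
    for src_idx, tgt_idx in sorted(alignment, key=lambda p: p[1]):
        if tgt_tags[tgt_idx] == "O" and src_tags[src_idx] != "O":
            tgt_tags[tgt_idx] = src_tags[src_idx]
    # Re-normalize the BIO scheme: an I- tag not preceded by a tag of the same
    # slot type is turned into a B- tag.
    for i, tag in enumerate(tgt_tags):
        if tag.startswith("I-"):
            prev = tgt_tags[i - 1] if i > 0 else "O"
            if prev[2:] != tag[2:]:
                tgt_tags[i] = "B-" + tag[2:]
    return tgt_tags

# Hypothetical example: "directions to Walmart" -> a three-token translation
src = ["O", "O", "B-place_name"]
print(project_bio_tags(src, alignment=[(0, 0), (1, 1), (2, 2)], tgt_len=3))
# ['O', 'O', 'B-place_name']
```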
BWE baselines rely on Bilingual Word Embeddings (BWEs) and weight sharing for CLTL. Namely, the sequence tagger trained on the source language(s) are directly applied to the target language, in hopes that the BWEs could bridge the language gap. This simple method has been shown to yield strong results in recent work (Upadhyay et al., 2018). The MUSE (Lample et al., 2018) BWEs are used by all systems in this experiment. 1-to-1 indicates that we are only transferring from English, while 3-to-1 means the training data from all other three languages are leveraged.5 The final baseline is the MAN model (Chen and Cardie, 2018a), presented before our MAN-MoE approach. As shown in Table 2, MAN-MoE substantially outperforms all baseline systems that do not employ cross-lingual supervision on almost all domains and languages. Another interesting observation is that MAN performs strongly on Chinese while being much worse on German and Spanish compared to the BWE baseline. This corroborates our hypothesis that MAN only leverages features that are invariant across all languages for CLTL, and it learns such features better than weight sharing. Therefore, when transferring to German or Spanish, which is similar to a subset of source languages, the performance of 4https://azure.microsoft.com/en-us/services/ cognitive-services/translator-text-api/ 5MAN and MAN-MoE results are always 3-to-1. 3104 German Spanish Chinese Domain Navi. Cal. Files avg. Navi. Cal. Files avg. Navi. Cal. Files avg. Methods with cross-lingual resources MT (train-on-trans.) 59.95 63.53 38.68 54.05 64.37 59.93 67.55 63.95 60.56 66.49 61.01 62.69 MT (test-on-trans.) 54.49 51.74 55.87 54.03 52.13 58.10 55.00 55.08 54.23 22.71 64.01 46.98 Methods without cross-lingual resources BWE (1-to-1) 57.53 58.28 35.73 50.51 62.54 44.44 57.56 54.85 17.62 22.48 21.32 20.47 BWE (3-to-1) 61.03 67.66 51.30 60.00 63.74 45.10 64.47 57.77 20.91 13.70 28.47 21.03 MAN 59.07 60.24 39.35 52.89 58.86 37.90 46.75 47.84 34.45 13.53 40.63 29.54 MAN-MoE 62.73 75.13 59.19 65.68 66.57 50.21 70.91 62.56 34.18 29.36 41.70 35.08 Table 2: F1 scores on the Multilingual Semantic Slot Filling dataset. The highest performance is in bold; the highest performance within method group (with vs. without cross-lingual resources) is underlined (sic passim). German Spanish Chinese Domain Navi. Cal. Files avg Navi. Cal. Files avg Navi. Cal. Files avg MAN-MoE 62.73 75.13 59.19 65.68 66.57 50.21 70.91 62.56 34.18 29.36 41.70 35.08 - C MoE 63.42 76.68 55.68 65.26 65.50 47.51 69.67 60.89 27.71 21.75 41.77 30.41 - Fp MoE 58.33 48.85 37.35 48.18 58.99 36.67 48.39 48.02 39.61 14.64 38.08 30.78 - both MoE 59.07 60.24 39.35 52.89 58.86 37.90 46.75 47.84 34.45 13.53 40.63 29.54 - MAN 60.64 67.69 55.10 61.14 65.38 46.71 68.25 60.11 18.43 10.82 28.90 19.38 Table 3: Ablation (w.r.t. MAN-MoE) results on the Multilingual Semantic Slot Filling dataset. MAN degrades significantly. On the other hand, when Chinese serves as the target language, where all source languages are rather distant from it, MAN has its merit in extracting language-invariant features that could generalize to Chinese. With MAN-MoE, however, this trade-off between close and distant language pairs is well addressed by the combination of MAN and MoE. By utilizing both language-invariant and language-specific features for transfer, MAN-MoE outperforms all crosslingually unsupervised baselines on all languages. 
Furthermore, even when compared with the MT baseline, which has access to hundreds of millions of parallel sentences, MAN-MoE performs competitively on German and Spanish. It even significantly beats both MT systems on German as MT sometimes fails to provide accurate word alignment for German. On Chinese, where the unsupervised BWEs are much less accurate (BWE baselines only achieve 20% F1), MAN-MoE is able to greatly improve over the BWE and MAN baselines and shows promising results for zero-resource CLTL even between distant language pairs. 4.1.2 Feature Ablation In this section, we take a closer look at the various modules of MAN-MoE and their impacts on performance (Table 3). When the MoE in C is removed, moderate decrease is observed on all languages. The performance degrades the most on Chinese, suggesting that using a single MLP in C is not ideal when the target language is not similar to the sources. When removing the private MoE, the MoE in C no longer makes much sense as C only has access to the shared features, and the performance is even slightly worse than removing both MoEs. With both MoE modules removed, it reduces to the MAN model, and we see a significant drop on German and Spanish. Finally, when removing MAN while keeping MoE, where the shared features are simply learned via weight-sharing, we see a slight drop on German and Spanish, but a rather great one on Chinese. The ablation results support our hypotheses and validate the merit of MAN-MoE. 4.2 Cross-Lingual Named Entity Recognition In this section, we present experiments on the CoNLL 2002 & 2003 multilingual named entity recognition (NER) dataset (Sang, 2002; Sang and Meulder, 2003), with four languages: English, German, Spanish and Dutch. The task is also formulated as a sequence tagging problem, with four types of tags: PER, LOC, ORG, and MISC. The results are summarized in Table 4. We observe that using only word embeddings does not yield satisfactory results, since the out-ofvocabulary problem is rather severe, and morphological features such as capitalization is crucial for NER. We hence add character-level word embeddings for this task (§3.1) to capture subword fea3105 Target Language de es nl avg Methods with cross-lingual resources T¨ackstr¨om et al. (2012) 40.4 59.3 58.4 52.7 Nothman et al. (2013) 55.8 61.0 64.0 60.3 Tsai et al. (2016) 48.1 60.6 61.6 56.8 Ni et al. (2017) 58.5 65.1 65.4 63.0 Mayhew et al. (2017) 57.5 66.0 64.5 62.3 Methods without cross-lingual resources MAN-MoE 55.1 59.5 61.8 58.8 BWE+CharCNN (1-to-1) 51.5 61.0 67.3 60.0 BWE+CharCNN (3-to-1) 55.8 70.4 69.8 65.3 Xie et al. (2018)* 56.9 71.0 71.3 66.4 MAN-MoE+CharCNN 56.7 71.0 70.9 66.2 MAN-MoE+CharCNN+UMWE 56.0 73.5 72.4 67.3 * Contemporaneous work Table 4: F1 scores for the CoNLL NER dataset on German (de), Spanish (es) and Dutch (nl). tures and alleviate the OOV problem. For German, however, all nouns are capitalized, and the capitalization features learned on the other three languages would lead to poor results. Therefore, for German only, we lowercase all characters in systems that adopt CharCNN. Table 4 also shows the performance of several state-of-the-art models in the literature6. Note that most of these systems are specifically designed for the NER task, and exploit many taskspecific resources, such as multilingual gazetteers, or metadata in Freebase or Wikipedia (such as entity categories). Among these, T¨ackstr¨om et al. (2012) rely on parallel corpora to learn crosslingual word clusters that serve as features. Nothman et al. 
(2013); Tsai et al. (2016) both leverage information in external knowledge bases such as Wikipedia to learn useful features for crosslingual NER. Ni et al. (2017) employ noisy parallel corpora (aligned sentence pairs, but not always translations) and bilingual dictionaries (5k words for each language pair) for model transfer. They further add external features such as entity types learned from Wikipedia for improved performance. Finally, Mayhew et al. (2017) propose a multi-source framework that utilizes large cross-lingual lexica. Despite using none of these resources, general or task-specific, MAN-MoE nonetheless outperforms all these methods. The only exception is German, where task-specific resources remain helpful due to its unique capitalization rules and high OOV rate. 6We also experimented with the MT baselines, but it often failed to produce word alignment, resulting in many empty predictions. The MT baselines attain only a F1 score of ∼30%, and were thus excluded for comparison. en fr ja Target Lang: de 0.25 0.30 0.35 0.40 Average Gate Weights en de ja Target Lang: fr en de fr Target Lang: ja Figure 3: Average expert gate weights aggregated on a language level for the Amazon Reviews dataset. In a contemporaneous work by (Xie et al., 2018), they propose a cross-lingual NER model using Bi-LSTM-CRF that achieves similar performance compared to MAN-MoE+CharCNN. However, our architecture is not specialized to the NER task, and we did not add task-specific modules such as a CRF decoding layer, etc. Last but not least, we replace the MUSE embeddings with the recently proposed unsupervised multilingual word embeddings (Chen and Cardie, 2018b), which further boosts the performance, achieving a new state-of-the-art performance as shown in Table 4 (last row). 4.3 Cross-Lingual Text Classification on Amazon Reviews Finally, we report results on a multilingual text classification dataset (Prettenhofer and Stein, 2010). The dataset is a binary classification dataset where each review is classified into positive or negative sentiment. It has four languages: English, German, French and Japanese. As shown in Table 5, MT-BOW uses machine translation to translate the bag of words of a target sentence into the source language, while CL-SCL learns a cross-lingual feature space via structural correspondence learning (Prettenhofer and Stein, 2010). CR-RL (Xiao and Guo, 2013) learns bilingual word representations where part of the word vector is shared among languages. Bi-PV (Pham et al., 2015) extracts bilingual paragraph vector by sharing the representation between parallel documents. UMM (Xu and Wan, 2017) is a multilingual framework that could utilize parallel corpora between multiple language pairs, and pivot as needed when direct bitexts are not available for a specific source-target pair. Finally CLDFA (Xu and Yang, 2017) proposes cross-lingual distillation on parallel corpora for CLTL. 
Unlike other works listed, however, they adopt a task-specific parallel corpus (translated Amazon reviews) that are difficult to obtain in practice, making the num3106 German French Japanese Domain books dvd music avg books dvd music avg books dvd music avg Methods with general-purpose cross-lingual resources MT-BOW1 79.68 77.92 77.22 78.27 80.76 78.83 75.78 78.46 70.22 71.30 72.02 71.18 CL-SCL1 79.50 76.92 77.79 78.07 78.49 78.80 77.92 78.40 73.09 71.07 75.11 73.09 CR-RL2 79.89 77.14 77.27 78.10 78.25 74.83 78.71 77.26 71.11 73.12 74.38 72.87 Bi-PV3 79.51 78.60 82.45 80.19 84.25 79.60 80.09 81.31 71.75 75.40 75.45 74.20 UMM4 81.65 81.27 81.32 81.41 80.27 80.27 79.41 79.98 71.23 72.55 75.38 73.05 Methods with task-specific cross-lingual resources CLDFA5 83.95 83.14 79.02 82.04 83.37 82.56 83.31 83.08 77.36 80.52 76.46 78.11 Methods without cross-lingual resources BWE (1-to-1) 76.00 76.30 73.50 75.27 77.80 78.60 78.10 78.17 55.93 57.55 54.35 55.94 BWE (3-to-1) 78.35 77.45 76.70 77.50 77.95 79.25 79.95 79.05 54.78 54.20 51.30 53.43 MAN-MoE 82.40 78.80 77.15 79.45 81.10 84.25 80.90 82.08 62.78 69.10 72.60 68.16 1 Prettenhofer and Stein (2010) 2 Xiao and Guo (2013) 3 Pham et al. (2015) 4 Xu and Wan (2017) 5 Xu and Yang (2017) Table 5: Results for the Multilingual Amazon Reviews dataset. Numbers indicate binary classification accuracy. VecMap embeddings (Artetxe et al., 2017) are used for this experiment as MUSE training fails on Japanese (§3.1). bers not directly comparable to others. Among these methods, UMM is the only one that does not require direct parallel corpus between all source-target pairs. It can instead utilize pivot languages (e.g. English) to connect multiple languages. MAN-MoE, however, takes another giant leap forward to completely remove the necessity of parallel corpora while achieving similar results on German and French compared to UMM. On Japanese, the performance of MAN-MoE is again limited by the quality of BWEs. (BWE baselines are merely better than randomness.) Nevertheless, MAN-MoE remains highly effective and the performance is only a few points below most SoTA methods with cross-lingual supervision. For a better understanding of the model behavior, Figure 3 visualizes the expert weights when transferring to different languages, which corroborates our model hypothesis and the findings in §4.1.2 (see Appendix A for more details). 5 Conclusion In this paper, we propose MAN-MoE, a multilingual model transfer approach that exploits both language-invariant (shared) features and language-specific (private) features, which departs from most previous models that can only make use of shared features. Following earlier work, the shared features are learned via languageadversarial training (Chen et al., 2016). On the other hand, the private features are extracted by a mixture-of-experts (MoE) module, which is able to dynamically capture the relation between the target language and each source language on a token level. This is extremely helpful when the target language is similar to a subset of source languages, in which case traditional models that solely rely on shared features would perform poorly. Furthermore, MAN-MoE is a purely model-based transfer method, which does not require parallel data for training, enabling fully zero-resource MLTL when combined with unsupervised cross-lingual word embeddings. This makes MAN-MoE more widely applicable to lower-resourced languages. 
Our claim is supported by a wide range of experiments over multiple text classification and sequence tagging tasks, including a large-scale industry dataset. MAN-MoE significantly outperforms all cross-lingually unsupervised baselines regardless of task or language. Furthermore, even considering methods with strong cross-lingual supervision, MAN-MoE is able to match or outperform these models on closer language pairs. When transferring to distant languages such as Chinese or Japanese (from European languages), where the quality of cross-lingual word embeddings are unsatisfactory, MAN-MoE remains highly effective and substantially mitigates the performance gap introduced by cross-lingual supervision. For future work, we plan to apply MAN-MoE to more challenging languages for tasks such as syntactic parsing, where multilingual data exists (Nivre et al., 2017). Furthermore, we would like to experiment with multilingual contextualized embeddings such as the Multilingual BERT (Devlin et al., 2018). 3107 References Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2017. Learning bilingual word embeddings with (almost) no bilingual data. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 451–462. Association for Computational Linguistics. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018. A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 789–798. Association for Computational Linguistics. Nuria Bel, Cornelis H. A. Koster, and Marta Villegas. 2003. Cross-lingual text categorization. In Research and Advanced Technology for Digital Libraries, pages 126–139, Berlin, Heidelberg. Springer Berlin Heidelberg. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146. Konstantinos Bousmalis, George Trigeorgis, Nathan Silberman, Dilip Krishnan, and Dumitru Erhan. 2016. Domain separation networks. In Advances in Neural Information Processing Systems 29, pages 343–351. Curran Associates, Inc. Xilun Chen and Claire Cardie. 2018a. Multinomial adversarial networks for multi-domain text classification. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1226– 1240. Association for Computational Linguistics. Xilun Chen and Claire Cardie. 2018b. Unsupervised multilingual word embeddings. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 261–270, Brussels, Belgium. Association for Computational Linguistics. Xilun Chen, Yu Sun, Ben Athiwaratkun, Claire Cardie, and Kilian Weinberger. 2016. Adversarial deep averaging networks for cross-lingual sentiment classification. ArXiv e-prints. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. C´ıcero Nogueira Dos Santos and Bianca Zadrozny. 2014. Learning character-level representations for part-of-speech tagging. In Proceedings of the 31st International Conference on International Conference on Machine Learning - Volume 32, ICML’14, pages II–1818–II–1826. JMLR.org. 
Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, Franc¸ois Laviolette, Mario Marchand, and Victor Lempitsky. 2016. Domain-adversarial training of neural networks. Journal of Machine Learning Research, 17(1):2096–2030. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Advances in Neural Information Processing Systems 27, pages 2672–2680. Curran Associates, Inc. Jiatao Gu, Hany Hassan, Jacob Devlin, and Victor O.K. Li. 2018. Universal neural machine translation for extremely low resource languages. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 344–354. Association for Computational Linguistics. Jiang Guo, Wanxiang Che, David Yarowsky, Haifeng Wang, and Ting Liu. 2016. A representation learning framework for multi-source transfer parsing. In AAAI Conference on Artificial Intelligence. Jiang Guo, Darsh Shah, and Regina Barzilay. 2018. Multi-source domain adaptation with mixture of experts. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4694–4703, Brussels, Belgium. Association for Computational Linguistics. Mohammad Sadegh Hajmohammadi, Roliana Ibrahim, Ali Selamat, and Alireza Yousefpour. 2014. Combination of multi-view multi-source language classifiers for cross-lingual sentiment classification. In Intelligent Information and Database Systems, pages 21–30. Springer International Publishing. Ozan ˙Irsoy and Claire Cardie. 2014. Opinion mining with deep recurrent neural networks. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 720–728. Joo-Kyung Kim, Young-Bum Kim, Ruhi Sarikaya, and Eric Fosler-Lussier. 2017. Cross-lingual transfer learning for POS tagging without cross-lingual resources. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2832–2838. Association for Computational Linguistics. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746–1751. Association for Computational Linguistics. Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In International Conference on Learning Representations. 3108 Alexandre Klementiev, Ivan Titov, and Binod Bhattarai. 2012. Inducing crosslingual distributed representations of words. In Proceedings of COLING 2012, pages 1459–1474, Mumbai, India. The COLING 2012 Organizing Committee. Guillaume Lample, Alexis Conneau, Marc’Aurelio Ranzato, Ludovic Denoyer, and Herv Jgou. 2018. Word translation without parallel data. In International Conference on Learning Representations. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412–1421. Association for Computational Linguistics. Yishay Mansour, Mehryar Mohri, and Afshin Rostamizadeh. 2009. Domain adaptation with multiple sources. In Advances in Neural Information Processing Systems 21, pages 1041–1048. Curran Associates, Inc. Stephen Mayhew, Chen-Tse Tsai, and Dan Roth. 2017. Cheap translation for cross-lingual named entity recognition. 
In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2536–2545. Association for Computational Linguistics. Ryan McDonald, Slav Petrov, and Keith Hall. 2011. Multi-source transfer of delexicalized dependency parsers. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 62–72, Stroudsburg, PA, USA. Association for Computational Linguistics. Tomas Mikolov, Quoc V. Le, and Ilya Sutskever. 2013. Exploiting similarities among languages for machine translation. CoRR, abs/1309.4168. Tahira Naseem, Regina Barzilay, and Amir Globerson. 2012. Selective sharing for multilingual dependency parsing. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 629–637, Jeju Island, Korea. Association for Computational Linguistics. Jian Ni, Georgiana Dinu, and Radu Florian. 2017. Weakly supervised cross-lingual named entity recognition via effective annotation and representation projection. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1470– 1480. Association for Computational Linguistics. Joakim Nivre, ˇZeljko Agi´c, Lars Ahrenberg, Lene Antonsen, Maria Jesus Aranzabe, Masayuki Asahara, Luma Ateyah, Mohammed Attia, Aitziber Atutxa, Elena Badmaeva, Miguel Ballesteros, Esha Banerjee, Sebastian Bank, John Bauer, Kepa Bengoetxea, Riyaz Ahmad Bhat, Eckhard Bick, Cristina Bosco, Gosse Bouma, Sam Bowman, Aljoscha Burchardt, Marie Candito, Gauthier Caron, G¨uls¸en Cebiro˘glu Eryi˘git, Giuseppe G. A. Celano, Savas Cetin, Fabricio Chalub, Jinho Choi, Yongseok Cho, Silvie Cinkov´a, C¸ a˘grı C¸ ¨oltekin, Miriam Connor, Marie-Catherine de Marneffe, Valeria de Paiva, Arantza Diaz de Ilarraza, Kaja Dobrovoljc, Timothy Dozat, Kira Droganova, Marhaba Eli, Ali Elkahky, Tomaˇz Erjavec, Rich´ard Farkas, Hector Fernandez Alcalde, Jennifer Foster, Cl´audia Freitas, Katar´ına Gajdoˇsov´a, Daniel Galbraith, Marcos Garcia, Filip Ginter, Iakes Goenaga, Koldo Gojenola, Memduh G¨okırmak, Yoav Goldberg, Xavier G´omez Guinovart, Berta Gonz´ales Saavedra, Matias Grioni, Normunds Gr¯uz¯ıtis, Bruno Guillaume, Nizar Habash, Jan Hajiˇc, Jan Hajiˇc jr., Linh H`a M˜y, Kim Harris, Dag Haug, Barbora Hladk´a, Jaroslava Hlav´aˇcov´a, Petter Hohle, Radu Ion, Elena Irimia, Anders Johannsen, Fredrik Jørgensen, H¨uner Kas¸ıkara, Hiroshi Kanayama, Jenna Kanerva, Tolga Kayadelen, V´aclava Kettnerov´a, Jesse Kirchner, Natalia Kotsyba, Simon Krek, Sookyoung Kwak, Veronika Laippala, Lorenzo Lambertino, Tatiana Lando, Phng Lˆe H`ˆong, Alessandro Lenci, Saran Lertpradit, Herman Leung, Cheuk Ying Li, Josie Li, Nikola Ljubeˇsi´c, Olga Loginova, Olga Lyashevskaya, Teresa Lynn, Vivien Macketanz, Aibek Makazhanov, Michael Mandl, Christopher Manning, Ruli Manurung, C˘at˘alina M˘ar˘anduc, David Mareˇcek, Katrin Marheinecke, H´ector Mart´ınez Alonso, Andr´e Martins, Jan Maˇsek, Yuji Matsumoto, Ryan McDonald, Gustavo Mendonc¸a, Anna Missil¨a, Verginica Mititelu, Yusuke Miyao, Simonetta Montemagni, Amir More, Laura Moreno Romero, Shunsuke Mori, Bohdan Moskalevskyi, Kadri Muischnek, Nina Mustafina, Kaili M¨u¨urisep, Pinkey Nainwani, Anna Nedoluzhko, Lng Nguy˜ˆen Thi., Huy`ˆen Nguy˜ˆen Thi. 
Minh, Vitaly Nikolaev, Rattima Nitisaroj, Hanna Nurmi, Stina Ojala, Petya Osenova, Lilja Øvrelid, Elena Pascual, Marco Passarotti, CenelAugusto Perez, Guy Perrier, Slav Petrov, Jussi Piitulainen, Emily Pitler, Barbara Plank, Martin Popel, Lauma Pretkalnin¸a, Prokopis Prokopidis, Tiina Puolakainen, Sampo Pyysalo, Alexandre Rademaker, Livy Real, Siva Reddy, Georg Rehm, Larissa Rinaldi, Laura Rituma, Rudolf Rosa, Davide Rovati, Shadi Saleh, Manuela Sanguinetti, Baiba Saul¯ıte, Yanin Sawanakunanon, Sebastian Schuster, Djam´e Seddah, Wolfgang Seeker, Mojgan Seraji, Lena Shakurova, Mo Shen, Atsuko Shimada, Muh Shohibussirri, Natalia Silveira, Maria Simi, Radu Simionescu, Katalin Simk´o, M´aria ˇSimkov´a, Kiril Simov, Aaron Smith, Antonio Stella, Jana Strnadov´a, Alane Suhr, Umut Sulubacak, Zsolt Sz´ant´o, Dima Taji, Takaaki Tanaka, Trond Trosterud, Anna Trukhina, Reut Tsarfaty, Francis Tyers, Sumire Uematsu, Zdeˇnka Ureˇsov´a, Larraitz Uria, Hans Uszkoreit, Gertjan van Noord, Viktor Varga, Veronika Vincze, Jonathan North Washington, Zhuoran Yu, Zdenˇek ˇZabokrtsk´y, Daniel Zeman, and Hanzhi Zhu. 2017. Universal dependencies 2.0 CoNLL 2017 shared task development and test data. Joel Nothman, Nicky Ringland, Will Radford, Tara Murphy, and James R. Curran. 2013. Learning mul3109 tilingual named entity recognition from wikipedia. Artificial Intelligence, 194:151–175. Hieu Pham, Minh-Thang Luong, and Christopher Manning. 2015. Learning distributed representations for multilingual text sequences. In Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing, pages 88–94, Denver, Colorado. Association for Computational Linguistics. Peter Prettenhofer and Benno Stein. 2010. Crosslanguage text classification using structural correspondence learning. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1118–1127. Association for Computational Linguistics. Sebastian Ruder, Joachim Bingel, Isabelle Augenstein, and Anders Søgaard. 2019. Latent multi-task architecture learning. In AAAI Conference on Artificial Intelligence. Erik F. Tjong Kim Sang. 2002. Introduction to the conll-2002 shared task: Language-independent named entity recognition. In COLING-02: The 6th Conference on Natural Language Learning 2002 (CoNLL-2002). Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003. Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. 2017. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. In International Conference on Learning Representations. Anders Søgaard, Sebastian Ruder, and Ivan Vuli´c. 2018. On the limitations of unsupervised bilingual dictionary induction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 778– 788. Association for Computational Linguistics. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929–1958. Oscar T¨ackstr¨om, Ryan McDonald, and Joakim Nivre. 2013. Target language adaptation of discriminative transfer parsers. 
In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1061–1071, Atlanta, Georgia. Association for Computational Linguistics. Oscar T¨ackstr¨om, Ryan McDonald, and Jakob Uszkoreit. 2012. Cross-lingual word clusters for direct transfer of linguistic structure. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 477–487. Association for Computational Linguistics. Chen-Tse Tsai, Stephen Mayhew, and Dan Roth. 2016. Cross-lingual named entity recognition via wikification. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 219–228. Association for Computational Linguistics. Shyam Upadhyay, Manaal Faruqui, Gokhan Tur, Dilek Hakkani-Tur, and Larry Heck. 2018. (almost) zeroshot cross-lingual spoken language understanding. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6034–6038. Xiaojun Wan. 2009. Co-training for cross-lingual sentiment classification. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 235– 243. Association for Computational Linguistics. Min Xiao and Yuhong Guo. 2013. Semi-supervised representation learning for cross-lingual text classification. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1465–1475. Association for Computational Linguistics. Jiateng Xie, Zhilin Yang, Graham Neubig, Noah A. Smith, and Jaime Carbonell. 2018. Neural crosslingual named entity recognition with minimal resources. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 369–379. Association for Computational Linguistics. Kui Xu and Xiaojun Wan. 2017. Towards a universal sentiment classifier in multiple languages. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 511– 520, Copenhagen, Denmark. Association for Computational Linguistics. Ruochen Xu and Yiming Yang. 2017. Cross-lingual distillation for text classification. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1415–1425. Association for Computational Linguistics. D. Yarowsky, G. Ngai, and R. Wicentowski. 2001. Inducing multilingual text analysis tools via robust projection across aligned corpora. In Proceedings of the First International Conference on Human Language Technology Research. Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. 2017. Adversarial training for unsupervised bilingual lexicon induction. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 3110 pages 1959–1970, Vancouver, Canada. Association for Computational Linguistics. Yuan Zhang and Regina Barzilay. 2015. Hierarchical low-rank tensors for multilingual transfer parsing. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1857–1867, Lisbon, Portugal. Association for Computational Linguistics. Han Zhao, Shanghang Zhang, Guanhang Wu, Jos´e M. F. Moura, Joao P Costeira, and Geoffrey J Gordon. 2018. Adversarial multiple source domain adaptation. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. 
Garnett, editors, Advances in Neural Information Processing Systems 31, pages 8568–8579. Curran Associates, Inc. Will Y. Zou, Richard Socher, Daniel Cer, and Christopher D. Manning. 2013. Bilingual word embeddings for phrase-based machine translation. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1393–1398, Seattle, Washington, USA. Association for Computational Linguistics. 3111 Appendix A Visualization of Expert Gate Weights In Figure 4 and 5, we visualize the average expert gate weights for each of the three target languages in the Amazon and CoNLL datasets, respectively. For each sample, we first compute a sentencelevel aggregation by averaging over the expert gate weights of all its tokens. These sentence-level expert gate weights are then further averaged across all samples in the validation set, which forms a final language-level average expert gate weight for each target language. For the Amazon dataset, we take the combination of all three domains (books, dvd, music). The visualization further collaborates with our hypothesis that our model makes informed decisions when selecting what features to share to the target language. On the Amazon dataset, it can be seen that when transferring to German or French (from the remaining three), the Japanese expert is less utilized compared to the European languages. On the other hand, it is interesting that when transferring to Japanese, the French and English experts are used more than the German one, and the exact reason remains to be investigated. However, this phenomenon might be of less significance since the private features may not play a very important role when transferring to Japanese as the model is probably focusing more on the shared features, according to the ablation study in Section 4.1.2. In addition, on the CoNLL dataset, we observe that when transferring to German, the experts from the two more similar lanaguages, English and Dutch, are favored over the Spanish one. Similarly, when transferring to Dutch, the highly relevant German expert is heavily used, and the Spanish expert is barely used at all. Interestingly, when transferring to Spanish, the model also shows a skewed pattern in terms of expert usage, and prefers the German expert over the other two. Appendix B Implementation Details In all experiments, Adam (Kingma and Ba, 2015) is used for both optimizers (main optimizer and D optimizer), with learning rate 0.001 and weight decay 10−8. Batch size is 64 for the slot filling experiment and 16 for the NER and Amazon Reviews experiments, which is selected mainly due to memory concerns. CharCNN increases the GPU memory usage and NER hence could only λ1 λ2 k Slot Filling 0.01 1 5 CoNLL NER 0.0001 0.01 1 Amazon 0.002 0.1 1 Table 6: The hyperparameter choices for different experiments. use a batch size of 16 to fit in 12GB of GPU memory. The Amazon experiment does not employ character embeddings but the documents are much longer, and thus also using a smaller batch size. All embeddings are fixed during training. Dropout (Srivastava et al., 2014) with p = 0.5 is applied in all components. Unless otherwise mentioned, ReLU is used as non-linear activation. Bidirectional LSTM is used in the feature extractors for all experiments. In particular, Fs is a two-layer BiLSTM of hidden size 128 (64 for each direction), and Fp is a two-layer BiLSTM of hidden size 128 stacked with a MoE module (see Figure 2). 
Each expert network in the MoE module of Fp is a two-layer MLP, again of hidden size 128. The final layer in the MLP has a tanh activation instead of ReLU to match the LSTM-extracted shared features (with tanh activations). The expert gate is a linear transformation (matrix) of size 128 × N, where N is the number of source languages. On the other hand, the architecture of the task-specific predictor C depends on the task. For sequence tagging experiments, the structure of C is shown in Figure 6, where each expert in the MoE module is a token-level two-layer MLP with a softmax layer on top for making token label predictions. For text classification tasks, a dot-product attention mechanism (Luong et al., 2015) is added after the shared and private features are concatenated. It has a length-256 weight vector that attends to the feature vectors of each token and computes a softmax mixture that pools the token-level feature vectors into a single sentence-level feature vector. The rest of C remains the same for text classification.

For the language discriminator D, a CNN text classifier (Kim, 2014) is adopted in all experiments. It takes as input the shared feature vectors of each token, and employs a CNN with max-pooling to pool them into a single fixed-length feature vector, which is then fed into an MLP for classifying the language of the input sequence. The number of kernels is 200 in the CNN, while the kernel sizes are 3, 4, and 5. The MLP has one hidden layer of size 128.

The MUSE, VecMap, and UMWE embeddings are trained with the monolingual 300d fastText Wikipedia embeddings (Bojanowski et al., 2017). When character-level word embeddings are used, a CharCNN is added that takes randomly initialized character embeddings of each character in a word and passes them through a CNN with kernel number 200 and kernel sizes 3, 4, and 5. Finally, the character embeddings are max-pooled and fed into a single fully-connected layer to form a 128-dimensional character-level word embedding, which is concatenated with the pre-trained cross-lingual word embedding to form the final word representation of that word. The remaining hyperparameters such as λ1, λ2 and k (see Algorithm 1) are tuned for each individual experiment, as shown in Table 6.

Figure 4: Average expert gate weights aggregated on a language level for the Amazon dataset (one panel per target language: de, fr, ja).
Figure 5: Average expert gate weights aggregated on a language level for the CoNLL dataset (one panel per target language: de, es, nl).
Figure 6: The MoE Predictor C for Sequence Tagging.
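A minimal PyTorch-style sketch of this character-aware word representation is given below (CharCNN output concatenated with the pre-trained cross-lingual word embedding). The character embedding size and the ReLU before pooling are not specified above and are our assumptions, as are all names; this is an illustration, not the released code.

```python
import torch
import torch.nn as nn

class CharCNNWordRepr(nn.Module):
    """Word representation = [cross-lingual word embedding ; CharCNN embedding].

    The CharCNN uses 200 kernels of widths 3, 4 and 5 over randomly initialized
    character embeddings, max-pools over character positions, and projects the
    pooled vector to 128 dimensions.
    """
    def __init__(self, num_chars: int, char_emb_dim: int = 50,   # 50 is an assumption
                 word_emb_dim: int = 300, char_out_dim: int = 128):
        super().__init__()
        self.char_emb = nn.Embedding(num_chars, char_emb_dim, padding_idx=0)
        self.convs = nn.ModuleList([
            nn.Conv1d(char_emb_dim, 200, kernel_size=k, padding=k - 1)
            for k in (3, 4, 5)
        ])
        self.proj = nn.Linear(3 * 200, char_out_dim)

    def forward(self, word_embs, char_ids):
        # word_embs: (num_words, word_emb_dim)  pre-trained embeddings, kept fixed
        # char_ids:  (num_words, max_word_len)  character indices per word
        c = self.char_emb(char_ids).transpose(1, 2)   # (num_words, char_emb_dim, len)
        # ReLU before max-pooling over character positions (assumed).
        pooled = [torch.relu(conv(c)).max(dim=2).values for conv in self.convs]
        char_repr = self.proj(torch.cat(pooled, dim=1))   # (num_words, char_out_dim)
        return torch.cat([word_embs, char_repr], dim=1)   # (num_words, 300 + 128)
```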
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 22–31 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 22 Improving Multi-turn Dialogue Modelling with Utterance ReWriter Hui Su1∗, Xiaoyu Shen2∗, Rongzhi Zhang3, Fei Sun4, Pengwei Hu5 Cheng Niu1 and Jie Zhou1 1Pattern Recognition Center, Wechat AI, Tencent Inc, China 2MPI Informatics & Spoken Language Systems (LSV), Saarland Informatics Campus 3Institute of Software, University of Chinese Academy of Science 4Alibaba Group 5IBM Research, China [email protected],[email protected] Abstract Recent research has made impressive progress in single-turn dialogue modelling. In the multi-turn setting, however, current models are still far from satisfactory. One major challenge is the frequently occurred coreference and information omission in our daily conversation, making it hard for machines to understand the real intention. In this paper, we propose rewriting the human utterance as a pre-process to help multi-turn dialgoue modelling. Each utterance is first rewritten to recover all coreferred and omitted information. The next processing steps are then performed based on the rewritten utterance. To properly train the utterance rewriter, we collect a new dataset with human annotations and introduce a Transformer-based utterance rewriting architecture using the pointer network. We show the proposed architecture achieves remarkably good performance on the utterance rewriting task. The trained utterance rewriter can be easily integrated into online chatbots and brings general improvement over different domains.1 1 Introduction Dialogue systems have made dramatic progress in recent years, especially in single-turn chit-chat and FAQ matching (Shang et al., 2015; Ghazvininejad et al., 2018; Molino et al., 2018; Chen et al., 2019). Nonethless, multi-turn dialogue modelling still remains extremely challenging (Vinyals and Le, 2015; Serban et al., 2016, 2017; Shen et al., 2018a,b). The challenge is multi-sided. One most important difficulty is the frequently occurred coreference and information omission in our daily conversations, especially in pro-drop languages like Chinese or Japanese. From our preliminary study of 2,000 Chinese multi-turn con∗Both authors contributed equally. 1The code is available on https://github.com/ chin-gyou/dialogue-utterance-rewriter. Context 1 Utterance 1 Human: 梅西有多高? (Translation) Human: How tall is Messi? Utterance 2 ChatBot: 官方说法他的身高是5英尺7英寸。 ChatBot: Officially he is 5ft 7 inches. Utterance 3 Human: 他和C罗谁是最好的球员? Human: Who is the best, he or C.Ronaldo? Utterance 3′ Human: 梅西和C罗谁是最好的球员? Human: Who is the best, Messi or C.Ronaldo? Context 2 Utterance 1 Human: 你最喜欢什么电影? Human: What movie do you like most? Utterance 2 ChatBot: 泰坦尼克。 ChatBot: Titanic. Utterance 3 Human: 为什么呢? Human: Why? Utterance 3′ Human: 为什么最喜欢泰坦尼克? Human: Why do you like Titanic most? Table 1: An example of multi-turn dialogue. Each utterance 3 is rewritten into Utterance 3′. Green means coreference and blue means omission. versations, different degrees of coreference and omission exist in more than 70% of the utterances. Capturing the hidden intention beneath them requires deeper understanding of the dialogue context, which is difficult for current neural networkbased systems. Table 1 shows two typical examples in multi-turn dialogues. 
“他”(he) from Context 1 is a coreference to “梅西”(Messi) and “为什 么”(Why) from Context 2 omits the further question of “为什么最喜欢泰坦尼克”(Why do you like Tatanic most)?. Without expanding the coreference or omission to recover the full information, the chatbot has no idea how to continue the talk. To address this concern, we propose simplifying the multi-turn dialogue modelling into a singleturn problem by rewriting the current utterance. The utterance rewriter is expected to perform (1) coreference resolution and (2) information completion to recover all coreferred and omitted mentions. In the two examples from Table 1, each utterance 3 will be rewritten into utterance 3′. Afterwards, the system will generate a reply by only looking into the utterance 3′ without considering the previous turns utterance 1 and 2. This simplification shortens the length of dialogue con23 text while still maintaining necessary information needed to provide proper responses, which we believe will help ease the difficulty of multi-turn dialogue modelling. Compared with other methods like memory networks (Sukhbaatar et al., 2015) or explicit belief tracking (Mrkˇsi´c et al., 2017), the trained utterance rewriter is model-agnostic and can be easily integrated into other black-box dialogue systems. It is also more memory-efficient because the dialogue history information is reflected in a single rewritten utterance. To get supervised training data for the utterance rewriting, we construct a Chinese dialogue dataset containing 20k multi-turn dialogues. Each utterance is paired with corresponding manually annotated rewritings. We model this problem as an extractive generation problem using the Pointer Network (Vinyals et al., 2015). The rewritten utterance is generated by copying words from either the dialogue history or the current utterance based on the attention mechanism (Bahdanau et al., 2014). Inspired by the recently proposed Transformer architecture (Vaswani et al., 2017) in machine translation which can capture better intra-sentence word dependencies, we modify the Transformer architecture to include the pointer network mechanism. The resulting model outperforms the recurrent neural network (RNN) and original Transformer models, achieving an F1 score of over 0.85 for both the coreference resolution and information completion. Furthermore, we integrate our trained utterance rewriter into two online chatbot platforms and find it leads to more accurate intention detection and improves the user engagement. In summary, our contributions are: 1. We collect a high-quality annotated dataset for coreference resolution and information completion in multi-turn dialogues, which might benefit future related research. 2. We propose a highly effective Transformerbased utterance rewriter outperforming several strong baselines. 3. The trained utterance rewriter, when integrated into two real-life online chatbots, is shown to bring significant improvement over the original system. In the next section, we will first go over some related work. Afterwards, in Section 3 and 4, our collected dataset and proposed model are introduced. The experiment results and analysis are presented in Section 5. Finally, some conclusions are drawn in Section 6. 2 Related Work 2.1 Sentence Rewriting Sentence rewriting has been widely adopted in various NLP tasks. 
In machine translation, people have used it to refine the output generations from seq2seq models (Niehues et al., 2016; JunczysDowmunt and Grundkiewicz, 2017; Grangier and Auli, 2017; Gu et al., 2017). In text summarization, reediting the retrieved candidates can provide more accurate and abstractive summaries (See et al., 2017; Chen and Bansal, 2018; Cao et al., 2018). In dialogue modelling, Weston et al. (2018) applied it to rewrite outputs from a retrieval model, but they pay no attention to recovering the hidden information under the coreference and omission. Concurrent with our work, Rastogi et al. (2019) adopts a similar idea on English conversations to simplify the downstream SLU task by reformulating the original utterance. Rewriting the source input into some easy-to-process standard format has also gained significant improvements in information retrieval (Riezler and Liu, 2010), semantic parsing (Chen et al., 2016) or question answering (Abujabal et al., 2018), but most of them adopt a simple dictionary or template based rewriting strategy. For multi-turn dialogues, due to the complexity of human languages, designing suitable template-based rewriting rules would be timeconsuming. 2.2 Coreference Resolution Coreference resolution aims to link an antecedent for each possible mention. Traditional approaches often adopt a pipeline structure which first identify all pronouns and entities then run clustering algorithms (Haghighi and Klein, 2009; Lee et al., 2011; Durrett and Klein, 2013; Bj¨orkelund and Kuhn, 2014). At both stages, they rely heavily on complicated, fine-grained features. Recently, several neural coreference resolution systems (Clark and Manning, 2016a,b) utilize distributed representations to reduce human labors. Lee et al. (2017) reported state-of-the-art results with an end-to-end neural coreference resolution system. However, it requires computing the scores for all possible spans, which is computationally inefficient on online dialogue systems. The recently proposed Transformer adopted the self24 attention mechanism which could implicitly capture inter-word dependencies in an unsupervised way (Vaswani et al., 2017). However, when multiple coreferences occur, it has problems properly distinguishing them. Our proposed architecture is built upon the Transformer architecture, but perform coreference resolution in a supervised setting to help deal with ambiguous mentions. 3 Dataset To get parallel training data for the sentence rewriting, we crawled 200k candidate multi-turn conversational data from several popular Chinese social media platforms for human annotators to work on. Sensitive information is filtered beforehand for later processing. Before starting the annotation, we randomly sample 2,000 conversational data and analyze how often coreference and omission occurs in multi-turn dialogues. Table 2 lists the statistics. As can be seen, only less than 30% utterances have neither coreference nor omission and quite a few utterances have both. This further validates the importance of addressing the these situations in multi-turn dialogues. % Rate Coreference 33.5 Omission 52.4 Neither 29.7 Table 2: Proportion of utterances containing coreference and omission in multi-turn conversation In the annotation process, human annotators need to identify these two situations then rewrite the utterance to cover all hidden information. An example is shown in Table 1. 
Annotators are required to provide the rewritten utterance 3′ given the original conversation [utterance 1,2 and 3]. To ensure the annotation quality, 10% of the annotations from each annotator are daily examined by a project manager and feedbacks are provided. The annotation is considered valid only when the accuracy of examined results surpasses 95%. Apart from the accuracy examination, the project manage is also required to (1) select topics that are more likely to be talked about in daily conversations, (2) try to cover broader domains and (3) balance the proportion of different coreference and omission patterns. The whole annotation takes 4 months to finish. In the end, we get 40k highquality parallel samples. Half of them are negative samples which do not need any rewriting. The other half are positive samples where rewriting is needed. Table 3 lists the statistics. The rewritten utterance contains 10.5 tokens in average, reducing the context length by 80%. Dataset size: 40,000 Avg. length of original conversation: 48.8 Avg. length of rewritten utterance: 10.5 Table 3: Statistics of dataset. Length is counted in the unit of Chinese characters. 4 Model 4.1 Problem Formalization We denote each training sample as (H, Un →R). H = {U1, U2, . . . , Un−1} represents the dialogue history containing the first n −1 turn of utterances. Un is the nth turn of utterance, the one that needs to be rewritten. R is the rewritten utterance after recovering all corefernced and omitted information in Un. R could be identical to Un if no coreference or omission is detected (negative sample). Our goal is to learn a mapping function p(R|(H, Un)) that can automatically rewrite Un based on the history information H. The process is to first encode (H, Un) into s sequence of vectors, then decode R using the pointer network. The next section will explain the steps in order. 4.2 Encoder We unfold all tokens in (H, Un) into (w1, w2, . . . , wm). m is the number of tokens in the whole dialogue. An end-of-turn delimiter is inserted between each two turns. The unfolded sequence of tokens are then encoded with Transformer. We concatenate all tokens in (H, Un) as the input, in hope that the Transformer can learn rudimentary coreference information within them by means of the self-attention mechanism. For each token wi, the input embedding is the sum of its word embedding, position embedding and turn embedding: I(wi) = WE(wi) + PE(wi) + TE(wi) The word embedding WE(wi) and position embedding PE(wi) are the same as in normal Transformer architectures (Vaswani et al., 2017). We 25 Figure 1: Architecture of our proposed model. Green box is the Transformer encoder and pink box is the decoder. The decoder computes the probability λ at each step to decide whether to copy from the context or utterance. add an additional turn embedding TE(wi) to indicate which turn each token belongs to. Tokens from the same turn will share the same turn embedding. The input embeddings are then forwarded into L stacked encoders to get the final encoding representations. Each encoder contains a self-attention layer followed by a feedforward neural network.: E(0) = h I(w1), I(w2), . . . , I(wm) i E(l) = FNN(MultiHead(E(l−1), E(l−1), E(l−1))) FNN is the feedforward neural network and MultiHead(Q, K, V) is a multi-head attention function taking a query matrix Q, a key matrix K, and a value matrix V as inputs. Each self-attention and feedforward component comes with a residual connection and layer-normalization step, which we refer to Vaswani et al. 
(2017) for more details. The final encodings are the output from the Lth encoder E(L). 4.3 Decoder The decoder also contains L layers, each layer is composed of three sub-layers. The first sub-layer is a multi-head self-attention: Ml = MultiHead(D(l−1), D(l−1), D(l−1)) D(0) = R. The second sub-layer is encoderdecoder attention that integrates E(L) into the decoder. In our task, as H and Un serve different purposes, we use separate key-value matrix for tokens coming from the dialogue history H and those coming from Un. The encoded sequence E(L) obtained from the last section is split into E(L) H (encodings of tokens from H) and E(L) Un (encodings of tokens from Un) then processed separately. The encoder-decoder vectors are computed as follows: C(H)l = MultiHead(M(l), E(L) H , E(L) H ) C(Un)l = MultiHead(M(l), E(L) Un , E(L) Un ) The third sub-layer is a position-wise fully connected feed-forward neural network: D(l) = FNN([C(H)l ◦C(Un)l]) where ◦denotes vector concatenation. 4.4 Output Distribution In the decoding process, we hope our model could learn whether to copy words from H or Un at different steps. Therefore, we impose a soft gating weight λ to make the decision. The decoding probability is computed by combining the atten26 tion distribution from the last decoding layer: p(Rt=w|H, Un, R<t)=λ X i:(wi=w)∧(wi∈H) at,i +(1−λ) X j:(wj=w)∧(wj∈Un) a′ t,j a = Attention(M(L), E(L) Un ) a′ = Attention(M(L), E(L) H ) λ = σ w⊤ d DL t + w⊤ HC(H)L t + w⊤ U C(Un)L t  a and a′ are the attention distribution over tokens in H and Un respectively. wd, wH, and wU are parameters to be learned, σ is the sigmoid function to output a value between 0 and 1. The gating weight λ works like a sentinel to inform the decoder whether to extract information from the dialogue history H or directly copy from Un. If Un contains neither coreference nor information omission. λ would be always 1 to copy the original Un as the output. Otherwise λ becomes 0 when a coreference or omission is detected. The attention mechanism is then responsible of finding the proper coreferred or omitted information from the dialogue history. The whole model is trained endto-end by maximizing p(R|H, Un). 5 Experiments We train our model to perform the utterance rewriting task on our collected dataset. In this section, we focus on answering the following two questions: (1) How accurately our proposed model can perform coreference resolution and information completion respectively and (2) How good the trained utterance rewriter is at helping off-theshelf dialogue systems provide more appropriate responses. To answer the first question, we compare our models with several strong baselines and test them by both automatic evaluation and human judgement. For the second question, we integrate our rewriting model to two online dialogue systems and analyze how it affects the humancomputer interactions. The following section will first introduce the compared models and basic settings, then report our evaluation results. 5.1 Compared Models When choosing compared models, we are mainly curious to see (1) whether the self-attention based Transformer architecture is superior to other networks like LSTMs, (2) whether the pointer-based generator is better than pure generation-based models and (3) whether it is preferred to split the attention by a coefficient λ as in our model. With these intentions, we implement the following four types of models for comparison: 1. (L/T)-Gen: Pure generation-based model. Words are generated from a fixed vocabulary. 2. 
(L/T)-Ptr-Net: Pure pointer-based model as in Vinyals et al. (2015). Words can only be copied from the input. 3. (L/T)-Ptr-Gen: Hybrid pointer+generation model as in See et al. (2017). Words can be either copied from the input or generated from a fixed vocabulary. 4. (L/T)-Ptr-λ: Our proposed model which split the attention by a coefficient λ. (L/T) denotes the encoder-decoder structure is the LSTM or Transformer. For the first three types of models, we unfold all tokens from the dialogue as the input. No difference is made between the dialogue history and the utterance to be rewritten. 5.2 Experiment Settings Transformer-based models We set the hidden size as 512. The attention has 8 individual heads and the encoder/decoder have 6 individual stacked layers. Models are optimized with the Adam optimizer. The initial learning rate is 0.0001 and batch size is 64. All hyperparameters are tuned base on the performance on the validation data. LSTM-based Models We encode words with a single-layer bidirectional LSTM and decode with a uni-directional LSTM. We use 128-dimensional word embeddings and 256-dimensional hidden states for both the encoder and decoder.2 The batch size is set as 128. Models are trained using Adagrad with learning rate 0.15 and initial accumulator value 0.1, same as in See et al. (2017). General Setup We built our vocabulary based on character-based segmentation for Chinese scripts. For non-Chinese characters, like frequently mentioned entity names “Kobe” and “NBA”, we split them by space and keep all unique tokens which appear more than twice. The resulting vocabulary size is 5629 (4813 Chinese 2We tried increasing the dimension but find it degrades the performance. 27 BLEU-1 BLEU-2 BLEU-4 ROUGE-1 ROUGE-2 ROUGE-L EM L-Gen 65.49 55.38 38.69 65.57 48.57 66.38 47.14|80.18 L-Ptr-Gen 69.78 59.25 43.07 68.24 54.13 70.36 47.35|84.09 L-Ptr-Net 71.70 60.29 44.72 70.81 56.35 72.33 48.24|91.94 L-Ptr-λ 72.26 62.15 47.11 73.47 57.51 74.55 51.66|93.01 T-Gen 68.74 59.09 42.57 69.12 50.92 69.70 48.59|87.61 T-Ptr-Gen 70.67 62.80 45.17 73.96 53.14 72.07 49.86|89.62 T-Ptr-Net 75.10 66.89 48.11 76.10 58.51 75.54 53.30|94.71 T-Ptr-λ 77.85 68.21 52.47 78.49 60.53 77.70 55.84|98.14 Table 4: BLEU, ROUGE (F1), and EM scores on the test set. EM score is split into the results on the positive (left) and negative (right) test samples. The first half is LSTM-based models and the second half is Transformer-based. Bold denotes best results. characters and 816 other tokens), including the end-of-turn delimiter and a special UNK token for all unknown words. In the testing stage, all models decode words by beam search with beam size set to 4. 5.3 Quality of Sentence ReWriting Precision Recall F1 Lee et al. (2017) 0.82 0.78 0.80 L-Gen 0.76 0.66 0.71 L-Ptr-Gen 0.81 0.76 0.78 L-Ptr-Net 0.83 0.78 0.81 L-Ptr-λ 0.85 0.82 0.83 T-Gen 0.80 0.75 0.77 T-Ptr-Gen 0.85 0.81 0.83 T-Ptr-Net 0.88 0.87 0.88 T-Ptr-λ 0.93 0.90 0.92 Table 5: Precision, recall and F1 score of coreference resolution. First row is the current state-of-the-art coreference resolution model Accuracy of Generation We first evaluate the accuracy of generation leveraging three metrics: BLEU, ROUGE, and the exact match score(EM) (the percentage of decoded sequences that exactly match the human references). For the EM score, we report separately on the positive and negative samples to see the difference. We report BLEU-1, 2, 4 scores and the F1 scores of ROUGE-1, 2, L. The results are listed in Table 4. 
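As a point of reference, the exact-match (EM) bookkeeping described above can be sketched as follows; the token-level comparison and the per-sample positive/negative flag are our assumptions about the implementation, not the authors' evaluation script.

def exact_match_by_class(predictions, references, needs_rewrite):
    """EM computed separately on positive (rewriting needed) and
    negative (utterance copied unchanged) samples.

    predictions, references: lists of token lists (character-based
    segmentation for Chinese); needs_rewrite: list of booleans.
    """
    hits = {True: 0, False: 0}
    totals = {True: 0, False: 0}
    for pred, ref, positive in zip(predictions, references, needs_rewrite):
        totals[positive] += 1
        if pred == ref:
            hits[positive] += 1
    em_pos = 100.0 * hits[True] / max(totals[True], 1)
    em_neg = 100.0 * hits[False] / max(totals[False], 1)
    return em_pos, em_neg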
We can have several observations in response to the three questions proposed in the beginning of Section 5.1: 1. Transformer-based models lead to significant improvement compare with LSTMbased counterparts. This implies the selfattention mechanism is helpful in identifying coreferred and omitted information. More analysis on how it helps coreference resolution can be seen in the next section. 2. The generation mode does not work well in our setting since all words can be retrieved from either H or Un. Pointer-based models outperform the more complex generationbased and hybrid ones. 3. Separately processing H and Un then combine their attention with a learned λ performs better than treating the whole dialogue tokens as s single input, though the improvement is less significant compared with previous two mentions. Overall our proposed model achieves remarkably good performance, with 55.84% of its generations exactly matches the human reference on the positive samples. For negative samples, our model properly copied the the original utterances in 98.14% of the cases. It suggests our model is already able to identify the utterances that do not need rewriting. Future work should work on improving the rewriting ability on positive samples. Coreference Resolution Apart from the standard metrics for text generation, we specifically test the precision, recall and F1 score of coreference resolution on our task. A pronoun or a noun is considered as properly coreferred if the rewritten utterance contains the correct mention in the corresponding referent. The result is shown in Table 5. To compare with current state-of-the28 History U1: 你看莎士比亚吗U2: 特别喜欢罗密欧与朱丽叶 U1: 你玩英雄联盟吗U2: 是的 (Translation) U1: Do you read Shakespeare U2: I especially like Romeo and Juliet U1: Do you play League of Legends U2: Yes. Utterance U3:喜欢哪个角色 U3: 什么时候开始的 U3: Which character do you like U3: When did it start Ground Truth 你喜欢罗密欧与朱丽叶哪个角色 什么时候开始玩英雄联盟的 Which character do you like in Romeo and Juliet When did you start to play League of Legends L-Gen 你喜欢莎士比亚吗// Do you like Shakespeare 什么时候开始开始开始// When start start start L-Ptr-Gen 你喜欢罗密欧角色角色// You like Romeo character character 什么时候开始的// When did it start L-Ptr-Net 你喜欢罗密欧与朱丽叶// You like Romeo and Juliet 什么时候英雄联盟开始的// When did League of Legends start L-Ptr-λ 你喜欢罗密欧与朱丽叶角色// You like Romeo and Juliet character 什 什 什么 么 么时 时 时候 候 候开 开 开始 始 始玩 玩 玩英 英 英雄 雄 雄联 联 联盟 盟 盟的 的 的// When did you start to play League of Legends T-Gen 你喜欢罗密欧与朱丽叶// You like Romeo and Juliet 是的什么时候开始玩的// Yes When start to play T-Ptr-Gen 你喜欢罗密欧与朱丽叶哪个// Which do you like in Romeo and Juliet 什么时候开始的// When did it start T-Ptr-Net 你喜欢罗密欧与朱丽叶角色// Character you like Romeo and Juliet 英雄联盟什么时候开始玩的// League of Legends When did you start to play T-Ptr-λ 你 你 你喜 喜 喜欢 欢 欢罗 罗 罗密 密 密欧 欧 欧与 与 与朱 朱 朱丽 丽 丽叶 叶 叶哪 哪 哪个 个 个角 角 角色 色 色// Which character do you like Romeo and Juliet 什 什 什么 么 么时 时 时候 候 候开 开 开始 始 始玩 玩 玩英 英 英雄 雄 雄联 联 联盟 盟 盟的 的 的// When did you start to play League of Legends Table 6: Examples of rewritten utterances. Highlighted utterances are exactly the same as the ground truth. Figure 2: Visualization of the self-attention weights in Transformer. “他”(he) is properly aligned to “梅 西”(Messi). art models. We train the model from Lee et al. (2017) on our task and report the results on the first row. The result is quite consistent with the findings from the last section. Our final model outperforms the others by a large margin, reaching a precision score of 93% and recall score of 90%. 
It implies our model is already quite good at finding the proper coreference. Future challenges would be more about information completion. Figure 2 further provides an examples of how the Transformer can help implicitly learn the coreference resolution through the self-attention mechanism. The same example is also shown in Table 1. The pronoun “他”(he) in the utterance is properly aligned to the mention “梅西”(Messi) in the dialogue history, also partially to “球员”(player) which is the occupation of him. The implicitly learned coreference relation should be part of the reason that Transformers outperform LSTM models on the coreference resolution task. Model Recall Precision F1 Fluency L-Gen 0.65 0.70 0.67 4.31 L-Ptr-Gen 0.70 0.74 0.72 4.52 L-Ptr-Net 0.78 0.81 0.79 4.74 L-Ptr-λ 0.80 0.82 0.81 4.82 T-Gen 0.71 0.74 0.73 4.74 T-Ptr-Gen 0.77 0.81 0.79 4.85 T-Ptr-Net 0.82 0.84 0.83 4.87 T-Ptr-λ 0.85 0.87 0.86 4.90 Human 4.97 Table 7: Recall, Precision, F1 score on information completion and Human evaluation results on fluency. Information Completion Similar as coreference resolution, we evaluate the quality of information completeness separately. One omitted information is considered as properly completed if the rewritten utterance recovers the omitted words. Since it inserts new words to the original utterance, we further conduct a human evaluation to measure the fluency of rewritten utterances. We randomly sample 600 samples from our positive test set. Three participants were asked to judge whether the rewritten utterance is a fluent sentence with the score 1(not fluent)-5(fluent). The fluency score for each model is averaged over all human evaluated scores. The results are shown in Table 7. Basically the condition is similar as in Table 5. T-Ptr-λ achieves the best performance, with the F1 score of 0.86. The performance is slightly worse than coreference resolution since information omission is more implicit. Retrieving all hidden information is sometimes difficult even for humans. Moreover, the fluency of our model’s generations is very good, only slightly worse than the human reference (4.90 vs 4.97). Information completeness does not have much effects on the fluency. Exam29 Task-Oriented Chatbot Context U1: 北京天气怎么样U2: 天气晴朗,温度适宜 (Translation) U1: How is the weather in Beijing U2: The weather is fine and the temperature is suitable Utterance U3: 那穿什么衣服合适 U3: 北京天气穿什么合适 U3: Then what clothes are suitable to wear U3: What clothes are suitable for weather in Beijing Intention 生活购物 城市天气 Life Shopping City Weather Chatbot Answer 您想要购买什么类型的衣服 根据天气推荐穿一件外套 What type of clothes do you want to buy You’d better wear a coat according to the weather Chit-Chat Chatbot Context U1: 库里的三分真准啊U2: 勇士今年又是冠军 U1: Curry’s 3-pointer is really good U2: The Warriors are the champion again this year Utterance U3: 我也觉得 U3: 我也觉得勇士今年又是冠军 U3: I agree U3: I agree that the Warriors are the champion again this year Chatbot Answer 觉得什么 勇士真的厉害啊 agree what The Warriors are so strong Table 8: Examples of integrated test. Left column is the original system and right is the one with utterance rewriter. Blue words denote completed information by the utterance rewriter. Model Intention Precision CPS Original 80.77 6.3 With Rewrite 89.91 7.7 Table 9: Results of integrated testing. Intention precision for task-oriented and conversation-turns-persession (CPS) for chitchat. ples of rewritten utterances are shown in Table 6. 
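Before turning to the integration tests, the two span-level checks used in this section (proper coreference and proper completion) can be made concrete with a small sketch. Treating both checks as containment of the annotated gold span in the rewritten utterance is our simplifying assumption, since the paper does not spell out the matching procedure.

def properly_coreferred(rewritten_utt, gold_mention):
    """A pronoun or noun counts as properly coreferred if the rewritten
    utterance contains the correct mention of its referent."""
    return gold_mention in rewritten_utt

def properly_completed(rewritten_utt, omitted_words):
    """An omission counts as properly completed if the rewritten
    utterance recovers the omitted words."""
    return all(w in rewritten_utt for w in omitted_words)

# Example (from Table 1): the rewritten utterance recovers "梅西",
# so the pronoun "他" is counted as properly coreferred.
assert properly_coreferred("梅西和C罗谁是最好的球员?", "梅西")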
5.4 Integration Testing In this section, we study how the proposed utterance rewriter can be integrated into off-the-shelf online chatbots to improve the quality of generated responses. We use our best model T-Ptr-λ to rewrite each utterance based on the dialogue context. The rewritten utterance is then forwarded to the system for response generation. We apply on both a task-oriented and chitchat setting. The results are compared with the original system having no utterance rewriter. Task-oriented Our task-oriented dialogue system contains an intention classifier built on FastText(Bojanowski et al., 2017) and a set of templates that perform policy decision and slot-value filling sequentially. Intention detection is a most important component in task-oriented dialogues and its accuracy will affect all the following steps. We define 30 intention classes like weather, hotel booking and shopping. The training data contains 35,447 human annotations. With the combination of our rewriter, the intention classier is able to achieve a precision of 89.91%, outperforming the original system by over 9%. The improved intention classification further lead to better conversations. An example is shown in Table 8, a multiturn conversation about the weather. The user first asks “How is the weather in Beijing”, then follows with a further question about “Then what clothes are suitable to wear”. The original system wrongly classified the user intention as shopping since this is a common conversational pattern in shopping. In contrast, our utterance rewriter is able to recover the omitted information “under the weather in Beijing”. Based on the rewritten utterance, the classifier is able to correctly detect the intention and provide proper responses. Chitchat Our social chatbot contains two separate engines for multi-turn and single-turn dialogues. Each engine is a hybrid retrieval and generation model. In real-life applications, a user query would be simultaneously distributed to these two engines. The returned candidate responses are then reranked to provide the final response. Generally the model is already able to provide rather high-quality responses under the single-turn condition, but under multi-turn conversations, the complex context dependency makes the generation difficult. We integrate our utterance rewriter into the single-turn engine and compare with the original model by conducting the online A/B test. Specifically, we randomly split the users into two groups. One talks with the original system and the other talks with the system integrated with the utterance rewriter. All users are unconscious of the 30 details about our system. The whole test lasted one month. Table 9 shows the Conversation-turns Per Session (CPS), which is the average number of conversation-turns between the chatbot and the user in a session. The utterance rewriter increases the average CPS from 6.3 to 7.7, indicating the user is more engaged with the integrated model. Table 8 shows an example of how the utterance rewriter helps with the generation. After the rewriting, the model can better understand the dialogue is about the NBA team Warriors, but the original model feels confused and only provides a generic response. 6 Conclusion In this paper, we propose improving multi-turn dialogue modelling by imposing a separate utterance rewriter. The rewriter is trained to recover the coreferred and omitted information of user utterances. 
We collect a high-quality manually annotated dataset and designed a Transformer-pointer based architecture to train the utterance rewriter. The trained utterance rewriter performs remarkably well and, when integrated into two online chatbot applications, significantly improves the intention detection and user engagement. We hope the collected dataset and proposed model can benefit future related research. Acknowledgments We thank all anonymous reviewers and the dialogue system team of Wechat AI for valuable comments. Xiaoyu Shen is supported by IMPRS-CS fellowship. References Abdalghani Abujabal, Rishiraj Saha Roy, Mohamed Yahya, and Gerhard Weikum. 2018. Never-ending learning for open-domain question answering over knowledge bases. In Proceedings of the 2018 World Wide Web Conference on World Wide Web, pages 1053–1062. International World Wide Web Conferences Steering Committee. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Anders Bj¨orkelund and Jonas Kuhn. 2014. Learning structured perceptrons for coreference resolution with latent antecedents and non-local features. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 47–57. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146. Ziqiang Cao, Wenjie Li, Sujian Li, and Furu Wei. 2018. Retrieve, rerank and rewrite: Soft template based neural summarization. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 152–161. Bo Chen, Le Sun, Xianpei Han, and Bo An. 2016. Sentence rewriting for semantic parsing. CoRR, abs/1901.02998. Shiqian Chen, Chenliang Li, Feng Ji, Wei Zhou, and Haiqing Chen. 2019. Driven answer generation for product-related questions in e-commerce. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, pages 411– 419. ACM. Yen-Chun Chen and Mohit Bansal. 2018. Fast abstractive summarization with reinforce-selected sentence rewriting. arXiv preprint arXiv:1805.11080. Kevin Clark and Christopher D Manning. 2016a. Deep reinforcement learning for mention-ranking coreference models. arXiv preprint arXiv:1609.08667. Kevin Clark and Christopher D Manning. 2016b. Improving coreference resolution by learning entitylevel distributed representations. arXiv preprint arXiv:1606.01323. Greg Durrett and Dan Klein. 2013. Easy victories and uphill battles in coreference resolution. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1971–1982. Marjan Ghazvininejad, Chris Brockett, Ming-Wei Chang, Bill Dolan, Jianfeng Gao, Wen-tau Yih, and Michel Galley. 2018. A knowledge-grounded neural conversation model. In Thirty-Second AAAI Conference on Artificial Intelligence. David Grangier and Michael Auli. 2017. Quickedit: Editing text & translations via simple delete actions. arXiv preprint arXiv:1711.04805. Jiatao Gu, Yong Wang, Kyunghyun Cho, and Victor OK Li. 2017. Search engine guided nonparametric neural machine translation. arXiv preprint arXiv:1705.07267. Aria Haghighi and Dan Klein. 2009. Simple coreference resolution with rich syntactic and semantic features. 
In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 3-Volume 3, pages 1152–1161. Association for Computational Linguistics. 31 Marcin Junczys-Dowmunt and Roman Grundkiewicz. 2017. An exploration of neural sequence-tosequence architectures for automatic post-editing. arXiv preprint arXiv:1706.04138. Heeyoung Lee, Yves Peirsman, Angel Chang, Nathanael Chambers, Mihai Surdeanu, and Dan Jurafsky. 2011. Stanford’s multi-pass sieve coreference resolution system at the conll-2011 shared task. In Proceedings of the fifteenth conference on computational natural language learning: Shared task, pages 28–34. Association for Computational Linguistics. Kenton Lee, Luheng He, Mike Lewis, and Luke Zettlemoyer. 2017. End-to-end neural coreference resolution. arXiv preprint arXiv:1707.07045. Piero Molino, Huaixiu Zheng, and Yi-Chia Wang. 2018. Cota: Improving the speed and accuracy of customer support through ranking and deep networks. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 586–595. ACM. Nikola Mrkˇsi´c, Diarmuid ´O S´eaghdha, Tsung-Hsien Wen, Blaise Thomson, and Steve Young. 2017. Neural belief tracker: Data-driven dialogue state tracking. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1777–1788. Jan Niehues, Eunah Cho, Thanh-Le Ha, and Alex Waibel. 2016. Pre-translation for neural machine translation. arXiv preprint arXiv:1610.05243. Pushpendre Rastogi, Arpit Gupta, Tongfei Chen, and Lambert Mathias. 2019. Scaling multi-domain dialogue state tracking via query reformulation. NAACL. Stefan Riezler and Yi Liu. 2010. Query rewriting using monolingual statistical machine translation. Computational Linguistics, 36(3):569–582. Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointer-generator networks. arXiv preprint arXiv:1704.04368. Iulian V Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using generative hierarchical neural network models. AAAI. Iulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2017. A hierarchical latent variable encoder-decoder model for generating dialogues. AAAI. Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural responding machine for short-text conversation. arXiv preprint arXiv:1503.02364. Xiaoyu Shen, Hui Su, Wenjie Li, and Dietrich Klakow. 2018a. Nexus network: Connecting the preceding and the following in dialogue generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4316– 4327. Xiaoyu Shen, Hui Su, Shuzi Niu, and Vera Demberg. 2018b. Improving variational encoder-decoders in dialogue generation. In Thirty-Second AAAI Conference on Artificial Intelligence. Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. 2015. End-to-end memory networks. In Advances in neural information processing systems, pages 2440–2448. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Advances in Neural Information Processing Systems, pages 2692–2700. Oriol Vinyals and Quoc Le. 2015. A neural conversational model. 
arXiv preprint arXiv:1506.05869. Jason Weston, Emily Dinan, and Alexander Miller. 2018. Retrieve and refine: Improved sequence generation models for dialogue. In Proceedings of the 2018 EMNLP Workshop SCAI: The 2nd International Workshop on Search-Oriented Conversational AI, pages 87–92.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 316–322 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 316 You Only Need Attention to Traverse Trees Mahtab Ahmed, Muhammad Rifayat Samee and Robert E. Mercer Department of Computer Science, University of Western Ontario {mahme255, msamee, rmercer}@uwo.ca Abstract In recent NLP research, a topic of interest is universal sentence encoding, sentence representations that can be used in any supervised task. At the word sequence level, fully attention-based models suffer from two problems: a quadratic increase in memory consumption with respect to the sentence length and an inability to capture and use syntactic information. Recursive neural nets can extract very good syntactic information by traversing a tree structure. To this end, we propose Tree Transformer, a model that captures phrase level syntax for constituency trees as well as word-level dependencies for dependency trees by doing recursive traversal only with attention. Evaluation of this model on four tasks gets noteworthy results compared to the standard transformer and LSTM-based models as well as tree-structured LSTMs. Ablation studies to find whether positional information is inherently encoded in the trees and which type of attention is suitable for doing the recursive traversal are provided. 1 Introduction Following the breakthrough in NLP research with word embeddings by Mikolov et al. (2013), recent research has focused on sentence representations. Having good sentence representations can help accomplish many NLP tasks because we eventually deal with sentences, e.g., question answering, sentiment analysis, semantic similarity, and natural language inference. Most of the existing task specific sequential sentence encoders are based on recurrent neural nets such as LSTMs or GRUs (Conneau et al., 2017; Lin et al., 2017; Liu et al., 2016). All of these works follow a common paradigm: use an LSTM/GRU over the word sequence, extract contextual features at each time step, and apply some kind of pooling on top of that. However, a few works adopt some different methods. Kiros et al. (2015) propose a skip-gram-like objective function at the sentence level to obtain the sentence embeddings. Logeswaran and Lee (2018) reformulate the task of predicting the next sentence given the current one into a classification problem where instead of a decoder they use a classifier to predict the next sentence from a set of candidates. The attention mechanism adopted by most of the RNN based models require access to the hidden states at every time step (Yang et al., 2016; Kumar et al., 2016). These models are inefficient and at the same time very hard to parallelize. To overcome this, Parikh et al. (2016) propose a fully attention-based neural network which can adequately model the word dependencies and at the same time is parallelizable. Vaswani et al. (2017) adopt the multi-head version in both the encoder and decoder of their Transformer model along with positional encoding. Ahmed et al. (2017) propose a multi-branch attention framework where each branch captures a different semantic subspace and the model learns to combine them during training. Cer et al. (2018) propose an unsupervised sentence encoder by leveraging only the encoder part of the Transformer where they train on the large Stanford Natural Language Inference (SNLI) corpus and then use transfer learning on smaller task specific corpora. 
Apart from these sequential models, there has been extensive work done on the tree structure of natural language sentences. Socher et al. (2011b, 2013, 2014) propose a family of recursive neural net (RvNN) based models where a composition function is applied recursively bottom-up on children nodes to compute the parent node representation until the root is reached. Tai et al. (2015) propose two variants of sequential LSTM, child sum tree LSTM and N-ary tree LSTM. The same gating structures as in standard LSTM are used except 317 A B C G F E D (a) Tree to traverse (c) Traversal technique E D Attention Block B G F Attention Block C C' B' Attention Block A A' tanh tanh tanh Multi Head Attention Linear K K Linear PCNN PCNN α α Input output (b) Attention block used Figure 1: Attention over the tree structure the hidden and cell states of a parent are dependent only on the hidden and cell states of its children. Recently, Shen et al. (2018) propose a ParsingReading-Predict Network (PRPN) which can induce syntactic structure automatically from an unannotated corpus and can learn a better language model with that induced structure. Later, Htut et al. (2018) test this PRPN under various configurations and datasets and further verified its empirical success for neural network latent tree learning. Williams et al. (2018) also validate the effectiveness of two latent tree based models but found some issues such as being biased towards producing shallow trees, inconsistencies during negation handling, and a tendency to consider the last two words of a sentence as constituents. In this paper, we propose a novel recursive neural network architecture consisting of a decomposable attention framework in every branch. We call this model Tree Transformer as it is solely dependent on attention. In a subtree, the use of a composition function is justified by a claim of Socher et al. (2011b, 2014). In this work, we replace this composition function with an attention module. While Socher et al. (2011b, 2014) consider only the child representations for both dependency and constituency syntax trees, in this work, for dependency trees, the attention module takes both the child and parent representations as input and produces weighted attentive copies of them. For constituency trees, as the parent vector is entirely dependent on the upward propagation, the attention module works only with the child representations. Our extensive evaluation proves that our model is better or at least on par with the existing sequential (i.e., LSTM and Transformer) and tree structured (i.e., Tree LSTM and RvNN) models. 2 Proposed Model Our model is designed to address the following general problem. Given a dependency or constituency tree structure, the task is to traverse every subtree within it attentively and infer the root representation as a vector. Our idea is inspired by the RvNN models from Socher et al. (2013, 2011b, 2014) where a composition function is used to transform a set of child representations into one single parent representation. In this section, we describe how we use the attention module as a composition function to build our Tree Transformer. Figure 1 gives a sketch of our model. A dependency tree contains a word at every node. To traverse a subtree in a dependency tree, we look at both the parent and child representations (Xd in Eqn. 1). In contrast, in a constituency tree, only leaf nodes contain words. The nonterminal vectors are calculated only after traversing each subtree. 
Consequently, only the child representations (X_c in Eqn. 1) are considered.

X_d = \begin{bmatrix} p_v \\ c_{1v} \\ \vdots \\ c_{nv} \end{bmatrix}, \qquad X_c = \begin{bmatrix} c_{1v} \\ c_{2v} \\ \vdots \\ c_{nv} \end{bmatrix} \quad (1)

Here, p_v is the parent representation and the c_{iv}'s are the child representations. For both of these trees, Eqn. 2 computes the attentive transformed representation.

\tilde{P} = f(x), \quad \text{where } x \in \{X_d, X_c\} \quad (2)

Here, f is the composition function using the multi-branch attention framework (Ahmed et al., 2017). This multi-branch attention is built upon the multi-head attention framework (Vaswani et al., 2017) which further uses scaled dot-product attention (Parikh et al., 2016) as the building block. It operates on a query Q, key K and value V as follows

\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right) V \quad (3)

where d_k is the dimension of the key. As we are interested in n branches, n copies are created for each (Q, K, V), converted to a 3D tensor, and then a scaled dot-product attention is applied using

B_i = \mathrm{Attention}(Q_i W_i^{Q}, K_i W_i^{K}, V_i W_i^{V}) \quad (4)

where i \in [1, n] and the W_i's are the parameters that are learned. Note that W_i^{Q}, W_i^{K}, W_i^{V} \in \mathbb{R}^{d_m \times d_k}. Instead of having separate parameters for the transformation of leaves, internal nodes and parents (Socher et al., 2014), we keep W_i^{Q}, W_i^{K} and W_i^{V} the same for all these components. We then project each of the resultant tensors into different semantic sub-spaces and employ a residual connection (He et al., 2016; Srivastava et al., 2015) around them. Lastly, we normalize the resultant outputs using a layer normalization block (Ba et al., 2016) and apply a scaling factor κ to get the branch representation. All of these are summarized in Eqn. 5.

B_i = \mathrm{LayerNorm}(B_i W_i^{b} + B_i) \times \kappa_i \quad (5)

Here, W^{b} \in \mathbb{R}^{n \times d_v \times d_m} and \kappa \in \mathbb{R}^{n} are the parameters to be learned. Note that we choose d_k = d_q = d_v = d_m/n. Following this, we take each of these B's and apply a convolutional neural network (see Eqn. 6) consisting of two transformations on each position separately and identically with a ReLU activation (R) in between.

\mathrm{PCNN}(x) = \mathrm{Conv}(R(\mathrm{Conv}(x) + b_1)) + b_2 \quad (6)

We compute the final attentive representation of these subspace semantics by doing a linearly weighted summation (see Eqn. 7) where \alpha \in \mathbb{R}^{n} is learned as a model parameter.

\mathrm{BranchAttn}(Q, K, V) = \sum_{i=1}^{n} \alpha_i\, \mathrm{PCNN}(B_i) \quad (7)

Lastly, we employ another residual connection with the output of Eqn. 7, transform it non-linearly and perform an element-wise summation (EwS) to get the final parent representation as in Eqn. 8.

\tilde{P} = \mathrm{EwS}(\tanh((\tilde{x} + x)W + b)) \quad (8)

Here, x and \tilde{x} depict the input and output of the attention module.

3 Experiments

In this section, we present the effectiveness of our Tree Transformer model by reporting its evaluation on four NLP tasks. We present a detailed ablation study on whether positional encoding is important for trees and also demonstrate which attention module is most suitable as a composition function for the recursive architectures.

Experimental Setup: We initialize the word embedding layer weights with GloVe 300-dimensional word vectors (Pennington et al., 2014). These embedding weights are not updated during training. In the multi-head attention block, the dimension of the query, key and value matrices is set to 50 and we use 6 parallel heads on each input. The multi-branch attention block is composed of 6 position-wise convolutional layers. The number of branches is also set to 6. We use two layers of convolutional neural network as the composition function for the PCNN layer.
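To make Eqns. 3–8 concrete, the following is a minimal PyTorch sketch of the composition function applied to one subtree, where x holds the subtree's node vectors (parent plus children for a dependency subtree, children only for a constituency subtree). The handling of W^b and the residual in Eqn. 5, and the kernel counts inside the PCNN, are simplified here because the text leaves them partly open; the actual layer sizes used in the experiments are given next.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BranchComposition(nn.Module):
    """Sketch of the attention-based composition function f (Eqns. 3-8),
    applied recursively bottom-up to every subtree."""
    def __init__(self, d_model=300, n_branches=6):
        super().__init__()
        d_k = d_model // n_branches
        self.n, self.d_k = n_branches, d_k
        self.w_q = nn.ModuleList([nn.Linear(d_model, d_k, bias=False) for _ in range(n_branches)])
        self.w_k = nn.ModuleList([nn.Linear(d_model, d_k, bias=False) for _ in range(n_branches)])
        self.w_v = nn.ModuleList([nn.Linear(d_model, d_k, bias=False) for _ in range(n_branches)])
        self.w_b = nn.ModuleList([nn.Linear(d_k, d_k) for _ in range(n_branches)])
        self.norm = nn.LayerNorm(d_k)
        self.kappa = nn.Parameter(torch.ones(n_branches))
        # position-wise CNN (Eqn. 6): two 1d convolutions with ReLU in between
        self.pcnn = nn.ModuleList([
            nn.Sequential(nn.Conv1d(d_k, d_k, kernel_size=1), nn.ReLU(),
                          nn.Conv1d(d_k, d_model, kernel_size=1))
            for _ in range(n_branches)])
        self.alpha = nn.Parameter(torch.full((n_branches,), 1.0 / n_branches))
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x):                                  # x: (nodes, d_model)
        branches = []
        for i in range(self.n):
            q, k, v = self.w_q[i](x), self.w_k[i](x), self.w_v[i](x)
            attn = F.softmax(q @ k.t() / self.d_k ** 0.5, dim=-1)   # Eqn. 3
            b = attn @ v                                            # Eqn. 4
            b = self.norm(self.w_b[i](b) + b) * self.kappa[i]       # Eqn. 5 (simplified)
            b = self.pcnn[i](b.t().unsqueeze(0)).squeeze(0).t()     # Eqn. 6
            branches.append(self.alpha[i] * b)
        x_tilde = torch.stack(branches, 0).sum(0)                   # Eqn. 7
        h = torch.tanh(self.out(x_tilde + x))                       # Eqn. 8: residual + tanh
        return h.sum(dim=0)                                         # element-wise sum -> parent vector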
The first layer uses 341 1d kernels with no dropout and the second layer uses 300 1d kernels with dropout 0.1. During training, the model parameters are updated using the Adagrad algorithm (Duchi et al., 2011) with a fixed learning rate of 0.0002. We trained our model on an Nvidia GeForce GTX 1080 GPU and used PyTorch 0.4 for the implementation under the Linux environment. Datasets: Evaluation is done on four tasks: the Stanford Sentiment Treebank (SST) (Socher et al., 2011b) for sentiment analysis, Sentences Involving Compositional Knowledge (SICK) (Marelli et al., 2014) for semantic relatedness (-R) and natural language inference (-E), and the Microsoft Research Paraphrase (MSRP) corpus (Dolan et al., 2004) for paraphrase identification. The samples in the SST dataset are labelled for both the binary and the 5-class classification task. In this work we are using only the binary classification labels. The MSRP dataset is labelled with two classes. The samples in the SICK dataset are labelled for both the 3-class SICK-E classification 319 Types of Models Model SICK-E SICK-R SST MSRP (Acc.) (MSE) (Acc.) (Acc.) Tree Structured SDT-RNN (Socher et al., 2014) .3848 RAE (Socher et al., 2011a) 82.40 76.80 MV-RNN (Socher et al., 2012) 58.14 † 82.90 66.91 † RNTN (Socher et al., 2013) 59.42 † 85.40 66.91 † DT-RNN (Socher et al., 2014) 63.38 † .3822 86.60 67.51 † DT-LSTM (Tai et al., 2015) 83.11 † .2532/.2625 † 85.70/85.10 † 72.07 † CT-LSTM (Tai et al., 2015) 82.00 † .2734/.2891 † 88.00/87.27 † 70.07 † LSTM LSTM (Tai et al., 2015) 76.80 .2831 84.90 71.70 Bi-LSTM (Tai et al., 2015) 82.11 † .2736 87.50 72.70 2-layer LSTM (Tai et al., 2015) 78.54 † .2838 86.30 69.35 † 2-layer Bi-LSTM (Tai et al., 2015) 79.66 † .2762 87.20 70.40 † Infersent (Conneau et al., 2017) 84.62 .2732 86.00 74.46 Transformer USE T (Cer et al., 2018) 81.15 .5241 † 85.38 74.96 † USE T+DAN (Cer et al., 2018) 86.62 USE T+CNN (Cer et al., 2018) 86.69 Tree Transformer Dependency Tree Transformer (DTT) 82.95 .2774 83.12 70.34 Constituency Tree Transformer (CTT) 82.72 .3012 86.66 71.73 Table 1: Performance comparison of the Tree Transformer against some state-of-the-art sentence encoders. Models that we implemented are marked with †. task and the SICK-R regression task which uses real-valued labels between 1 and 5. Instead of doing a regression on SICK-R to predict the score, we are using the same setup as Tai et al. (2015) who compute a target distribution p as a function of the predicted score y given by Eqn. 9. ˜pi =      y −⌊y⌋, if i = ⌊y⌋+ 1 ⌊y⌋−y + 1, if i = ⌊y⌋ 0, otherwise (9) The SST dataset includes already generated dependency and constituency trees. As the other two datasets do not provide tree structures, we parsed each sentence using the Stanford dependency and constituency parser (Manning et al., 2014). For the sentiment classification (SST), natural language inference (SICK-E), and paraphrase identification (MSRP) tasks, accuracy, the standard evaluation metric, is used. For the semantic relatedness task (SICK-R), we are using mean squared error (MSE) as the evaluation metric. We use KL-divergence as the loss function for SICK-R to measure the distance between the predicted and target distribution. For the other three tasks, we use cross entropy as the loss function. Table 1 shows the results of the evaluation of the model on the four tasks in terms of task specific evaluation metrics. We compare our Tree Transformer against tree structured RvNNs, LSTM based, and Transformer based architectures. 
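Returning briefly to the SICK-R setup described above, the score-to-distribution mapping of Eqn. 9 and the accompanying KL-divergence loss can be sketched as follows; PyTorch is assumed, and the function names and five-class indexing are ours.

import torch
import torch.nn.functional as F

def score_to_distribution(y, n_classes=5):
    """Map a relatedness score y in [1, 5] to the sparse target
    distribution of Eqn. 9: mass y - floor(y) on class floor(y)+1 and
    mass floor(y) - y + 1 on class floor(y) (classes are 1-indexed)."""
    p = torch.zeros(n_classes)
    floor = int(y)
    p[floor - 1] = floor - y + 1
    if floor < n_classes:
        p[floor] = y - floor
    return p

def kl_loss(pred_log_probs, y):
    """KL divergence between the Eqn. 9 target and the model's
    predicted log-probabilities over the five classes."""
    target = score_to_distribution(y)
    return F.kl_div(pred_log_probs.unsqueeze(0), target.unsqueeze(0),
                    reduction="batchmean")

For example, y = 3.6 yields the target (0, 0, 0.4, 0.6, 0), so the expected class index under the target recovers the original score.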
To do a fair comparison, we implemented both variants of Tree LSTM and Transformer based architectures and some of the RvNN and LSTM based models which do not have reported results for every task. Instead of assessing on transfer performance, the evaluation is performed on each corpus separately following the standard train/test/valid split. For SICK-E, our model achieved 82.95% and 82.72% accuracy with dependency and constituency tree, respectively, which is on par with DT-LSTM (83.11%) as well as CT-LSTM (82.00%) and somewhat better than the standard Transformer (81.15%). As can be seen, all of the previous recursive architectures are somewhat inferior to the Tree Transformer results. For SICK-R, we are getting .2774 and .3012 MSE whereas the reported MSE for DT-LSTM and CT-LSTM are .2532 and .2734, respectively. However, in our implementation of those models with the same hyperparameters, we haven’t been able to reproduce the reported results. Instead we ended up getting .2625 and .2891 MSE for DTLSTM and CT-LSTM, respectively. On this task, our model is doing significantly better than the standard Transformer (.5241 MSE). On the SST dataset, our model (86.66% Acc.) is again on par with tree LSTM (87.27% Acc.) and better than Transformer (85.38% Acc.) as well as Infersent (86.00% Acc.)1. On the MSRP dataset, our dependency tree version (70.34% Acc.) is below DT-LSTM (72.07% 1The official implementation available at https: //github.com/facebookresearch/InferSent is used. Reported hyperparameters are used except LSTM hidden state, 1024d is chosen due to hardware limitations. 320 Model PE SICK-E SICK-R SST MSRP DTT On 78.58 .3383 83.03 69.01 Off 82.28 .2774 83.12 70.34 CTT On 81.83 .3088 83.96 71.73 Off 82.72 .3012 86.66 68.62 Table 2: Effect of Positional Encoding (PE). Acc.). However, for the constituency tree version, we are getting better accuracy (71.73%) than CT-LSTM (70.07%). It is to be noted that all of the sequential models, i.e., Transformer, Infersent and LSTMs, are doing better compared to the tree structured models on this paraphrase identification task. Model S/M/B SICK-E SICK-R SST MSRP DTT S 82.95 .3004 81.71 68.62 M 82.86 .2955 82.97 69.07 B 82.28 .2774 83.12 70.34 CTT S 80.17 .4657 84.58 69.35 M 79.66 .4346 83.74 70.01 B 82.72 .3012 86.32 71.73 Table 3: Effect of different attention modules as a composition function. S: single-head attention, M: multihead attention, B: multi-branch attention. Since positional encoding is a crucial part of the standard Transformer, Table 2 presents its effect on trees. In constituency trees, positional information is inherently encoded in the tree structure. However, this is not the case with dependency trees. Nonetheless, our experiments suggest that for trees, positional encoding is irrelevant information as the performance drops in all but one case. We also did an experiment to see which attention module is best suited as a composition function and report the results in Table 3. As can be seen, in almost all the cases, multi-branch attention has much better performance compared to the other two. This gain by multi-branch attention is much more significant for CTT than for DTT. Figure 2 visualizes how our CTT model puts attention on different phrases in a tree to compute the correct sentiment. Space limitations allow only portions of the tree to be visualized. 
As can be seen, the sentiment is positive (+1) at the root and the model puts more attention on the right branch as it has all of the positive words, whereas the left branch (NP) is neutral (0). The bottom three trees are the phrases which contain the positive words. The model again puts more attention on the relevant branches. The words ‘well’ and ‘sincere’ are inherently positive. In the corpus the Doug Liman the director of Bourne directs the traffic well gets a nice wintry look from his locations  absorbs us with the movie 's spycraft and uses Damon 's ability to be focused and sincere Root NP VP .21 .79 +1 0 +1 X VP .13 .87 +1 0 +1 VP .34 .66 +1 0 +1 X JJ .23 .77 +1 0 +1 well sincere us ADJP VBZ PRP ADVP Figure 2: Attentive tree visualization (CTT) word ‘us’ is tagged as positive for this sentence. 4 Conclusion In this paper, we propose Tree Transformer which successfully encodes natural language grammar trees utilizing the modules designed for the standard Transformer. We show that we can effectively use the attention module as the composition function together with grammar information instead of just bag of words and can achieve performance on par with Tree LSTMs and even better performance than the standard Transformer. Acknowledgements This research is partially funded by The Natural Sciences and Engineering Research Council of Canada (NSERC) through a Discovery Grant to Robert E. Mercer. We also acknowledge the helpful comments provided by the reviewers. References Karim Ahmed, Nitish Shirish Keskar, and Richard Socher. 2017. Weighted transformer network for machine translation. arXiv preprint arXiv:1711.02132. Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer normalization. arXiv preprint arXiv:1607.06450. Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiacob, Rhomni St John, Noah Constant, Mario Guajardo-C´espedes, Steve Yuanc, Chris Tar, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil. 2018. Universal sentence encoder. arXiv preprint arXiv:1803.11175. Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo¨ıc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 670–680. 321 Bill Dolan, Chris Quirk, and Chris Brockett. 2004. Unsupervised construction of large paraphrase corpora: Exploiting massively parallel news sources. In Proceedings of the 20th International Conference on Computational Linguistics, pages 350–356. John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121–2159. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778. Phu Mon Htut, Kyunghyun Cho, and Samuel Bowman. 2018. Grammar induction with neural language models: An unusual replication. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 371–373. Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. In Advances in Neural Information Processing Systems, pages 3294–3302. Ankit Kumar, Ozan Irsoy, Peter Ondruska, Mohit Iyyer, James Bradbury, Ishaan Gulrajani, Victor Zhong, Romain Paulus, and Richard Socher. 
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3113–3124 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3113 Unsupervised Multilingual Word Embedding with Limited Resources using Neural Language Models Takashi Wada1, Tomoharu Iwata2,3, and Yuji Matsumoto1,3 1Nara Institute of Science and Technology 2NTT Communication Science Laboratories 3RIKEN Center for Advanced Intelligence Project (AIP) 1{wada.takashi.wp7,matsu}@is.naist.jp [email protected] Abstract Recently, a variety of unsupervised methods have been proposed that map pre-trained word embeddings of different languages into the same space without any parallel data. These methods aim to find a linear transformation based on the assumption that monolingual word embeddings are approximately isomorphic between languages. However, it has been demonstrated that this assumption holds true only on specific conditions, and with limited resources, the performance of these methods decreases drastically. To overcome this problem, we propose a new unsupervised multilingual embedding method that does not rely on such assumption and performs well under resource-poor scenarios, namely when only a small amount of monolingual data (i.e., 50k sentences) are available, or when the domains of monolingual data are different across languages. Our proposed model, which we call ‘Multilingual Neural Language Models’, shares some of the network parameters among multiple languages, and encodes sentences of multiple languages into the same space. The model jointly learns word embeddings of different languages in the same space, and generates multilingual embeddings without any parallel data or pre-training. Our experiments on word alignment tasks have demonstrated that, on the low-resource condition, our model substantially outperforms existing unsupervised and even supervised methods trained with 500 bilingual pairs of words. Our model also outperforms unsupervised methods given different-domain corpora across languages. Our code is publicly available1. 1 Introduction Learning cross-lingual or multilingual word embedding has been recognised as a very important research topic in natural language processing 1https://github.com/twadada/multilingual-nlm (NLP). Its objective is to map monolingual word embeddings of different languages into a common space, and this research has been applied to many multilingual tasks such as machine translation (Zou et al., 2013) and bilingual named entity recognition (Rudramurthy et al., 2016). It also enables the transfer of knowledge from one language into another (Xiao and Guo, 2014; Adams et al., 2017). A number of supervised and unsupervised methods have been proposed that obtain crosslingual word embeddings. Both supervised and unsupervised methods aim to find such a linear transformation that maps word embeddings in a source language into a target language space. Supervised methods employ bilingual dictionaries to learn the mapping (Mikolov et al., 2013b; Xing et al., 2015; Smith et al., 2017; Artetxe et al., 2018a), while unsupervised ones utilise the similarities or distance of word embeddings spaces across different languages (Conneau et al., 2018; Zhang et al., 2017a; Xu et al., 2018; Artetxe et al., 2018b). 
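To make the mapping-based formulation concrete, the following is a minimal NumPy sketch of the orthogonal Procrustes solution shared by most of these supervised methods (it is formalised as Eq. (1) in Section 2 below); the toy matrices stand in for the embeddings of a seed dictionary and are purely illustrative.

```python
import numpy as np

def procrustes_mapping(X, Y):
    """Return the orthogonal map W minimising ||W X - Y||_F.

    X, Y: (d, n) matrices whose columns are the embeddings of n
    dictionary word pairs (source and target side, respectively).
    """
    U, _, Vt = np.linalg.svd(Y @ X.T)
    return U @ Vt  # W = U V^T, from the SVD of Y X^T

# Toy usage: map 5 source vectors of dimension 4 onto the target space.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 5))
W_true = np.linalg.qr(rng.normal(size=(4, 4)))[0]   # some orthogonal map
Y = W_true @ X                                      # noise-free "target" embeddings
W = procrustes_mapping(X, Y)
print(np.allclose(W @ X, Y))                        # should print True in this noise-free case
```

Unsupervised methods differ mainly in how they obtain the dictionary (or an initial mapping) without any bilingual supervision.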
Since the common objective of most of the supervised and unsupervised methods is to find an orthogonal linear mapping between languages, they heavily rely on the assumption that monolingual word embeddings are approximately isomorphic. However, Søgaard et al. (2018) have found that this assumption does not hold true in general, and demonstrated that it requires three specific conditions for the unsupervised method of Conneau et al. (2018) to perform well. The conditions are: Languages to align are linguistically similar; Monolingual word embeddings are trained by the same algorithms; And the domains of the monolingual corpora are similar across languages. In particular, the last condition is hard to assume when dealing with resource-poor languages, for which unsupervised methods can be 3114 beneficial in reality. To overcome the limitations of previous work, we propose a new unsupervised multilingual word embedding method called Multilingual Neural Language Model (MNLM). In what follows, we summarise our main contributions and novelty of our proposed model: Contributions • We have discovered another limitation of the existing unsupervised methods: They do not perform well under the low-resource condition, namely when only small monolingual corpora (i.e., 50k sentences) are available in source and/or target languages. We have also confirmed that word embeddings are far from being isomorphic across languages under this condition, indicating that the conventional approachs are effective only for resource-rich languages. This is a serious problem since unsupervised learning is supposed to be beneficial when dealing with low-resource languages. • We propose a new unsupervised multilingual word embedding method that overcomes the limitations of the existing methods. Our approach can successfully obtain multilingual word embeddings under the challenging conditions when only small monolingual corpora are available, or when the domains of the monolingual corpora are different across languages (we define these conditions as ‘low-resource condition’ and ‘differentdomain condition’, respectively). Novelty of Our Proposed Model Whereas the existing unsupervised methods aim to map pre-trained word embeddings between languages based on the strong assumption that monolingual word embeddings are approximately isomorphic, our method does not require such assumption or pre-trained word embeddings; instead, it learns multilingual word embeddings jointly using forward and backward LSTM language models (Mikolov et al., 2010). Our model shares the language models among multiple languages and aims to learn a common sequential structure of different languages such as a common basic word order rule (e.g., subject-verbobject). The word embeddings of each language are trained independently, but sharing the LSTM networks encourages the embeddings to be mapped into the same space, generating multilingual word embeddings. Our experiments show that our unique approach makes it possible to obtain multilingual word embeddings with limited resources. 2 Related Work Mikolov et al. (2013b) have proposed to obtain cross-lingual word representations by learning a linear mapping between two monolingual word embedding spaces. Later, Xing et al. (2015) have shown that enforcing an orthogonality constraint on the mapping improves the performance, and that offers a closed form Procrustes solution obtained from the singular value decomposition (SVD) of Y XT W ∗= arg min W ∥WX −Y ∥2 = UV T, s.t. 
UΣV T = SVD(Y XT), (1) where W is a mapping matrix and Σ is a diagonal matrix. Following this work, a variety of unsupervised methods have been proposed that obtain crosslingual representations without any bilingual supervision. Zhang et al. (2017a) have proposed an unsupervised method that obtains the linear transformation using adversarial training (Goodfellow et al., 2014): during the training, a discriminator is trained to distinguish between the mapped source embeddings and the target embeddings, while the mapping matrix is trained to fool the discriminator. Conneau et al. (2018) employ a similar approach to Zhang et al. (2017a); they acquire an initial matrix using adversarial training and refine it by solving the orthogonal Procrustes problem. Zhang et al. (2017b) and Xu et al. (2018) obtain cross-lingual representations by minimising the earth-mover’s distance and Sinkhorn distance, respectively. Artetxe et al. (2018b) propose an unsupervised self-learning method. Their method starts from roughly aligning words across languages using structural similarities of word embedding spaces, and refines the word alignment by repeating a robust self-learning method until convergence. They show that their approach is more effective than Zhang et al. (2017a) and Conneau et al. (2018) when languages to align are distant or monolingual corpora are not comparable across language. Recently, Chen and Cardie (2018) and Alaux et al. (2018) have proposed un3115 supervised multilingual word embedding methods. Their methods map word embeddings of more than two languages into a common space by capturing the inter-dependencies among multiple languages. 3 Our Model 3.1 Overview We propose a new unsupervised multilingual word embeddings method called Multilingual Neural Language Model. Fig.1 briefly illustrates our proposed model. The model consists of bidirectional language models similar to ELMo (Peters et al., 2018), and most of the parameters are shared among multiple languages. In what follows, we summaries which parameters are shared across languages or specific to each language: • Shared Parameters – − → f and ← − f : LSTM networks which perform as forward and backward language models, independently. – EBOSfwd and EBOSbkw: The embeddings of initial inputs to the forward and backward language models, respectively. – W EOS: The linear mapping for <EOS>, which is used to calculate the probability of the end of a sentence at every time-step. • Specific Parameters to Language ℓ – Eℓ: Word embeddings of language ℓ – W ℓ: Linear projection of language ℓ, which is used to calculate the probability distribution of the next word. The LSTMs −→f and ←−f are shared among multiple languages and capture a common language structure. On the other hand, the word embeddings Eℓand linear projection W ℓare specific to each language ℓ. Since different languages are encoded by the same LSTM functions, similar words across different languages should have a similar representation so that the shared LSTMs can encode them effectively. For instance, suppose our model encodes an English sentence ‘He drives a car.’ and its Spanish translation ‘El conduce un coche.’ In these sentences, each English word corresponds to each Spanish one in the same order. Therefore, these equivalent words would have Figure 1: Illustration of our proposed model Multilingual Neural Language Models. similar representations so that the shared language models can encode the English and Spanish sentences effectively. 
Although in general, each language has its different grammar rules, the shared language models are trained to roughly capture the common structure such as common basic word order rules (e.g., subject-verb-object) among different languages. Sharing <BOS> and <EOS> symbols ensures that the beginning and end of the hidden states are in the same space regardless of language, which encourages the model to obtain multilingual representations. The limitation of our model is that it is only applicable to the languages that have common word order rules such as subject-verb-object and subject-object-verb. Although this limitation may sound somewhat significant, our experiments show that our model performs well not only for closely related language pairs such as French-English but also for linguistically distant languages such as English-Finnish2 and TurkishJapanese. In fact, our experiments show that it is extremely difficult for the existing unsupervised methods as well as for our model to align very distant languages which have different word order, such as English and Japanese. 3.2 Network Structure Suppose a sentence with N words in language ℓ, ⟨wℓ 1..., wℓ N⟩. The forward and backward language models calculate the probability of a next word wℓ t 2Finnish is often considered as a non-Indo-European synthetic language, whereas English is often regarded as an Indo-European analytic language. 3116 given the previous words: p(wℓ 1..., wℓ N) = N Y t=1 p(wℓ t|wℓ 1..., wℓ t−1). (2) p(wℓ 1..., wℓ N) = N Y t=1 p(wℓ t|wℓ t+1..., wℓ N). (3) The tth hidden states hℓ t of the forward and backward language models are calculated based on the previous hidden state and word embedding, −→h ℓ t = −→f (−→h ℓ t−1, xℓ t−1), (4) ←−h ℓ t = ←−f (←−h ℓ t+1, xℓ t+1), (5) xℓ t =      EBOSfwd if t = 0 , EBOSbkw if t = N+1, Eℓ(wℓ t) otherwise, (6) where −→f (·) and ←−f (·) are the standard LSTM functions. Note that the same word embedding function Eℓis used among the forward and backward language models. The probability distribution of the upcoming word wℓ t is calculated by the forward and backward models independently based on their current hidden state: p(wℓ t|wℓ 1..., wℓ t−1) = softmax(gℓ(−→h ℓ t))), (7) p(wℓ t|wℓ t+1..., wℓ N) = softmax(gℓ(←−h ℓ t)), (8) gℓ(hℓ t) = [W ℓ(hℓ t), W EOS(hℓ t)], (9) where [x, y] means the concatenation of x and y. W EOS and W ℓare matrices with the size of (1×d) and (V ℓ× d), where d is the size of hidden state and V ℓis the vocabulary size of language ℓexcluding <EOS>. As with the word embeddings, those matrices are shared among the forward and backward language models. The proposed model is trained by maximising the log likelihood of the forward and backward directions for each language ℓ: L X l=1 Sℓ X i=1 Ni X t=1 log p(wℓ i,t|wℓ i,1...wℓ i,t−1; −→θ ) + log p(wℓ i,t|wℓ i,t+1...wℓ i,Ni; ←−θ ), where L and Sℓdenote the numbers of languages and sentences of language ℓ. −→θ and ←−θ denote the parameters for the forward and backward LSTMs −→f and ←−f , respectively. 4 Experiments 4.1 Data and Experimental Conditions We trained our model and baselines under the following two conditions: 1. low-resource condition: Only small monolingual corpora are available. 2. different-domain condition: Relatively large monolingual corpora are available but their domains are different across languages. On each condition, we conducted cross-lingual and multilingual embedding experiments, respectively. 
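Before turning to the experimental conditions, the parameter sharing of Section 3.2 can be summarised in a short PyTorch sketch. It covers only the forward language model; the class name, layer sizes, and toy usage are illustrative assumptions rather than the authors' implementation, and the backward model (with its own shared <BOS> embedding) would mirror this structure.

```python
import torch
import torch.nn as nn

class MultilingualLM(nn.Module):
    """Sketch of the shared forward language model of Section 3.2.

    The LSTM, the <BOS> input embedding and the <EOS> output row are
    shared across languages; the word embeddings E_l and the output
    projections W_l are specific to each language l.
    """
    def __init__(self, vocab_sizes, emb_dim=300, hid_dim=300):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hid_dim, num_layers=2, batch_first=True)   # shared f
        self.bos_fwd = nn.Parameter(torch.zeros(emb_dim))                       # shared E_BOS
        self.w_eos = nn.Linear(hid_dim, 1, bias=False)                          # shared W_EOS
        self.emb = nn.ModuleList([nn.Embedding(v, emb_dim) for v in vocab_sizes])            # E_l
        self.proj = nn.ModuleList([nn.Linear(hid_dim, v, bias=False) for v in vocab_sizes])  # W_l

    def forward(self, lang, words):
        # words: (batch, seq_len) token ids of language `lang`
        x = self.emb[lang](words)                              # (B, T, emb)
        bos = self.bos_fwd.expand(x.size(0), 1, -1)            # prepend the shared <BOS>
        h, _ = self.lstm(torch.cat([bos, x], dim=1))           # (B, T+1, hid)
        # concatenate language-specific and shared <EOS> scores (Eq. 9)
        logits = torch.cat([self.proj[lang](h), self.w_eos(h)], dim=-1)
        return logits                                          # softmax over V_l + 1 symbols

# Toy usage: two languages with vocabularies of 100 and 120 types.
model = MultilingualLM([100, 120])
batch = torch.randint(0, 100, (4, 7))
print(model(lang=0, words=batch).shape)                        # torch.Size([4, 8, 101])
```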
4.1.1 Cross-lingual Word Embedding In the experiments of cross-lingual embedding, we evaluated the quality of cross-lingual embeddings between seven pairs of source-target languages: {German, Spanish, French, Russian, Czech, Finnish, Japanese}-English. For the lowresource condition, we used subsets of News Crawl monolingual corpora3. We used 50k sentences for source languages, and either 50k or 1M sentences for the target language (i.e., English). This condition simulates two realistic scenarios; the case when analysing inter-dependencies between multiple minor languages, or between minor and major languages. For the different-domain condition, we added {Tamil, Turkish}-Japanese pairs and North Saami-{Finnish, English} pairs to the seven pairs described above. North Saami is one of the minor languages spoken in northern Finland, Sweden and Norway, and it is so close to Finnish that transfer learning between them is very effective in dependency parsing (Lim et al., 2018). Note that the basic word order of Tamil, Turkish and Japanese is subject-object-verb (SOV), while the one of the other languages is SVO. We used Europarl corpus (Koehn, 2005) for English, Wikipedia for Japanese, SIKOR North Saami corpus4 for North Saami, and news data for the other languages5. We extracted 1M sentences 3downloaded from http://www.statmt.org and http://wortschatz.uni-leipzig.de/en/ download 4https://dataverse.no/dataset.xhtml? persistentId=doi:10.18710/8AK7KZ 5The vocabulary sizes of Europarl and News Crawl corpora in English are significantly different (79,258 v.s. 265,368 words), indicating the major differences between these domains 3117 from these corpora except for North Saami, for which we used the whole corpus which contains 0.75M sentences. This different-domain condition also simulates the cases of analysing inter-dependencies among minor languages; large monolingual data containing up to 1M sentences may be available in each language, but it is hard to assume that their domains are similar across languages. 4.1.2 Multilingual Word Embedding We trained multilingual word embeddings among the four linguistically similar languages: German, Spanish, French, and English. We conducted experiments under the following three conditions: (a) 50k sentences in News Crawl are used for each language; (b) 50k sentences in News Crawl are used for German, Spanish, French and 1M for English; (c) 1M sentences in News Crawl are used for German, Spanish, French and 1M sentences in Europarl for English. 4.2 Evaluation In our experiment, we evaluated cross-lingual and multilingual word embeddings on the word alignment tasks. In the cross-lingual experiments, we used 1000 unique pairs of words in the dictionaries and we report p@5 in each language. That is, for each word in the 1000 source words, we extracted the 5 most similar words from the 1000 target words and checked how often the correct translation is included in them. In the multilingual experiments, we extracted 500 words aligned among English, French, Spanish and German and evaluated p@5 of ‘joint’ alignment among the four languages. That is, for each English word we extracted the 5 most similar words in French, German and Spanish independently, and evaluated how often the correct translation of the English word is included in all of the three languages. In most language pairs, these 1000 and 500 words were extracted from bilingual dictionaries published by Conneau et al. (2018) so that they did not contain any unknown words in all the training settings6. 
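The p@5 metric used throughout can be computed as in the NumPy sketch below; it assumes the gold dictionary is given as row-aligned source and target embedding matrices, which is an assumption of this illustration rather than a description of the released evaluation code.

```python
import numpy as np

def precision_at_k(src_emb, tgt_emb, k=5):
    """p@k for a word-alignment test set.

    src_emb, tgt_emb: (n, d) arrays; row i of src_emb and row i of
    tgt_emb belong to the same gold dictionary pair, so the correct
    target index for source word i is i itself.
    """
    src = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    tgt = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    sim = src @ tgt.T                                   # cosine similarities
    topk = np.argsort(-sim, axis=1)[:, :k]              # k most similar target words
    hits = (topk == np.arange(len(src))[:, None]).any(axis=1)
    return hits.mean()

# Toy usage with 1000 random "word pairs" in a 300-dimensional space.
rng = np.random.default_rng(0)
tgt = rng.normal(size=(1000, 300))
src = tgt + 0.1 * rng.normal(size=(1000, 300))          # noisy copies of the targets
print(precision_at_k(src, tgt))                         # expected to be close to 1.0
```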
For North Saami-{Finnish, English}, we used the North Saami-Finnish dictionary7 used by Lim et al. (2018) and aligned it with a FinnishEnglish dictionary published by Conneau et al. 6For {Tamil, Turkish}-Japanese, we aligned the {Tamil, Turkish}-English dictionaries with the Japanese-English dictionary. 7https://github.com/jujbob/multilingual-models (2018) to build a North Saami-English dictionary. When only 50k sentences were used both for the source and target languages, we trained all the models three times with different random seeds and calculated the average precision in both the cross-lingual and multilingual experiments. This is because unsupervised learning with small data can be unstable. 4.3 Baseline Baseline models aim to map pre-trained word embeddings of different languages into a common space. For a fair comparison to our model, we used word2vec (Mikolov et al., 2013a), that pretrain word embeddings at a token level. We used their code with the default setting8 except for the embedding size and minimum frequency, which were set the same as our model. Note that these pre-trained embeddings were used only by baseline models, not by ours. As baselines of cross-lingual word embedding methods, we chose Xu et al. (2018), Artetxe et al. (2018b), and Conneau et al. (2018) with and without normalisation. We also compared our model against (weakly) supervised cross-lingual word embedding methods (Artetxe et al., 2018a). The supervised methods exploited 500 pairs of equivalent words that are not used in the evaluation data9, and weakly supervised methods exploited pseudo bilingual pairs of words (auto seeds): the words with the same spellings among different languages were deemed as equivalent words. We trained the cross-lingual baselines and our model in each language pair. As baselines of multilingual word embedding models, we used Chen and Cardie (2018) with or without auto seeds. We also compared our model against the cross-lingual baselines. While Chen and Cardie (2018) and our model jointly train multilingual word embeddings, the crosslingual models independently map the word embeddings of German, Spanish, and French into the English embedding space. Regarding Artetxe et al. (2018b) and Artetxe et al. (2018a), we omitted the re-weighting, whitening, and normalisation processes in the multilingual experiments10. This 8The code is at https://code.google.com/ archive/p/word2vec, and the default algorithm is Continuous Bag of Words (CBOW) with its window size 5 9These 500 words were also extracted in the same way as explained in 4.2. 10To omit these processes, we used ‘–orthogonal’ option 3118 src de es fr ru cs fi ja Method data size(tgt) 50k 1M 50k 1M 50k 1M 50k 1M 50k 1M 50k 1M 50k 1M (weakly) supervised Artetxe et al. (2018b)+char 5.6 2.5 12.1 5.1 9.2 4.0 2.9 1.4 5.0 0.5 1.3 1.6 2.1 9.2 Artetxe et al. (2018a)+dict 9.6 9.7 15.0 19.7 13.3 19.5 5.7 8.0 5.5 8.0 3.8 5.0 6.1 11.2 Conneau et al. (2018)+dict 11.1 9.7 18.0 20.4 19.2 20.7 4.7 5.2 7.1 4.8 1.7 3.2 7.5 18.7 unsupervised Xu et al. (2018) 3.9 0.7 6.8 0.5 4.4 0.2 1.4 1.3 2.7 0.3 0.9 0.5 1.9 0.6 Artetxe et al. (2018b) 3.9 0.6 7.5 0.8 6.5 1.0 1.0 1.0 0.7 1.1 1.1 1.7 1.6 1.3 Conneau et al. (2018) 3.0 0.8 11.0 0.2 7.8 0.4 1.0 0.4 1.1 0.4 0.5 0.5 1.3 0.4 Conneau et al. (2018)+norm 2.1 0.7 11.3 0.7 9.2 0.3 0.7 0.3 0.6 0.2 0.6 0.3 1.7 0.5 OURS 14.2 20.8 26.1 37.5 21.8 35.3 13.6 14.1 13.8 18.8 12.7 12.4 2.3 2.3 Table 1: The precision p@5 of the cross-lingual word alignment task on the low-resource condition. 
We used 50k sentences for the source languages and either 50k or 1M sentences for the target language (English). The best scores among the (weakly) supervised or unsupervised methods are bold-faced, and the best scores of all the methods are underlined. Method src-tgt de-en es-en fr-en ru-en cs-en fi-en tr-ja ta-ja ja-en se-fise-en (weakly) supervised Artetxe et al. (2018b)+char 49.0 59.5 59.6 10.7 38.6 18.2 40.4 28.6 11.6 32.6 14.2 Artetxe et al. (2018a)+dict 35.6 49.7 49.4 38.5 38.5 28.3 25.8 46.6 24.5 42.9 20.2 Conneau et al. (2018)+dict 53.6 66.8 67.6 53.1 54.0 43.4 41.1 36.2 34.1 43.8 32.5 unsupervised Xu et al. (2018) 0.8 3.2 32.7 0.8 6.9 3.2 5.8 0.1 0.6 20.4 1.6 Artetxe et al. (2018b) 5.6 47.4 47.1 9.0 3.1 1.2 3.7 1.5 1.8 13.6 0.3 Conneau et al. (2018) 0.8 0.7 1.7 0.5 0.9 1.6 1.3 0.6 1.4 13.1 0.7 Conneau et al. (2018)+norm 0.7 2.5 0.6 0.6 0.3 0.2 0.1 1.3 0.9 23.2 0.2 OURS 26.4 54.9 54.0 22.7 26.8 19.2 18.1 10.4 1.8 37.9 18.3 Table 2: The precision p@5 of cross-lingual word alignment task on the different-domain condition. The best scores among the (weakly) supervised or unsupervised methods are bold-faced, and the best scores of all the methods are underlined. is because these processes transform both source and target word embeddings, and that makes it impossible to map word embeddings of multiple languages into a single embedding space. To implement these baselines, we used the code published by the authors11,12,13(Conneau et al., 2018; Artetxe et al., 2018b; Xu et al., 2018) 4.4 Training Settings In the cross-lingual and multilingual experiments, we trained our model among two and four lanin their code. 11https://github.com/facebookresearch/ MUSE 12https://github.com/artetxem/vecmap 13https://github.com/xrc10/ unsup-cross-lingual-embedding-transfer. guages, respectively. When the size of the source and target corpora were different, we conducted oversampling to generate the same number of mini-batches for source and target languages. We trained our model for 10 epochs with the minibatch size 64, and stopped training when the training loss saturates (i.e., when the loss decreases by less than 1% compared to the previous epoch). For each iteration, our model alternately read minibatches of each language and updated its parameters. We set the size of word embeddings as 300, and used two-layer LSTM networks for the forward and backward language models, respectively. We set the size of the hidden state as 300 and 1024 for the low-resource and different-domain conditions. Dropout (Srivastava et al., 2014) is ap3119 Method Condition (a) (b) (c) (weakly) supervised Artetxe et al. (2018b)+char 2.3 0.6 44.6 Artetxe et al. (2018a)+dict 5.6 8.6 37.2 Conneau et al. (2018)+dict 6.4 7.0 51.6 Chen and Cardie (2018)+char 5.2 2.8 53.4 unsupervised Xu et al. (2018) 1.1 0.0 0.0 Artetxe et al. (2018b) 0.9 0.0 5.2 Conneau et al. (2018) 1.3 0.0 0.0 Conneau et al. (2018)+norm 0.3 0.0 0.0 Chen and Cardie (2018) 1.0 0.0 3.0 OURS 10.4 16.2 37.0 Table 3: The precision p@5 of multilingual word alignment task on the three different conditions (a), (b), and (c) described in 4.1.2. The best scores among the (weakly) supervised or unsupervised methods are boldfaced, and the best scores of all the methods are underlined. plied to the hidden state with a rate of 0.3. We used SGD (Bottou, 2010) as an optimiser with the learning rate 1.0. 
All of the parameters of our model including word embeddings were uniformly initialised in [-0.1, 0.1], and gradient clipping (Pascanu et al., 2013) was used with the clipping value 5.0. We included those words in vocabulary that were used at least 3, 5, and 20 times for 50k, 100k-250k, and 1M sentences in News Crawl and Wikipedia. For Europarl and SIKOR North Saami corpora, we set the threshold as 10. We fed the most 15,000 frequent words to train Xu et al. (2018) and the discriminator in Conneau et al. (2018). As a preprocess, we tokenized the monolingual corpora using Moses toolkit14 for European languages and Polyglot15 for Tamil, Turkish and Japanese. We also lowercased all the corpora. 4.5 Results 4.5.1 Cross-lingual Word Embedding Table 1 illustrates the results of the cross-lingual word alignment task under the low-resource condition. The methods with ‘+char’ use character information to obtain a pseudo dictionary, and the ones with ‘+dict’ use a gold dictionary that con14https://github.com/moses-smt/ mosesDecoder 15https://polyglot.readthedocs.io/en/ latest/Tokenization.html tains 500 pairs of words. The table shows that our model substantially outperforms the unsupervised baseline models in all of the language pairs. Our model also achieves better results than supervised methods except in the Japanese-English pair, which has different word order (SOV v.s SVO). Another interesting finding is that when the size of the target corpus increases from 50k to 1M sentences, our model improves its performance whereas the performance of the unsupervised baseline models drops substantially. For instance, when the size of the target corpus increases from 50k to 1M, Conneau et al. (2018) decreases the precision in Spanish-English from 11.0 to 0.2, while our model increases the precision from 26.1 to 37.5. Table 2 shows the results on the differentdomain condition. It shows that our method achieves better results overall than the unsupervised baseline models. The extremely poor performance of Conneau et al. (2018) under this condition is compatible with the results reported by Søgaard et al. (2018). Regarding the JapaneseEnglish pair, none of the unsupervised methods including ours perform well, demonstrating that it is difficult to align languages without any supervision if the basic word order is different. Supervised methods, on the other hand, perform well in all the languages and outperform our model. This result indicates that even if domains of monolingual corpora are different across languages, the conventional approach of learning a linear transformation can be effective with (weak) bilingual supervision. Impact of Data Size To evaluate the effects of the data size on the model performances, we increased the size of both source and target corpora from 50k to 250k by 50k sentences. All of these sentences were extracted from News Crawl. Fig. 2 illustrates how p@5 changes depending on the data size. It shows that our model overall performs better than the baselines, especially among the distant language pairs such as Finnish-English. Although Artetxe et al. (2018b) report positive results on word alignment tasks between Finnish and English, our experiments show that their method requires much larger monolingual corpora such as Wikipedia on both the source and target sides to achieve good performance. 
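Returning to the procedure of Section 4.4, the oversampling and alternating-language updates can be sketched as follows. The loop reuses the MultilingualLM sketch above; the early-stopping rule, batch format, and resampling of every language up to the largest corpus are illustrative assumptions.

```python
import random
import torch
import torch.nn.functional as F

def train_alternating(model, batches_by_lang, epochs=10, lr=1.0, clip=5.0):
    """Alternating-language training loop sketched after Section 4.4.

    batches_by_lang: one list of (words, targets) mini-batches per language;
    smaller corpora are oversampled with replacement so that every language
    contributes the same number of mini-batches per epoch. `model` is assumed
    to follow the MultilingualLM sketch above, returning next-word logits of
    shape (B, T+1, V_l + 1); `targets` holds the (B, T+1) gold next-word ids.
    """
    n_batches = max(len(b) for b in batches_by_lang)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    prev = float("inf")
    for _ in range(epochs):
        total = 0.0
        resampled = [random.choices(b, k=n_batches) for b in batches_by_lang]
        for step in range(n_batches):
            for lang, batches in enumerate(resampled):          # alternate languages
                words, targets = batches[step]
                logits = model(lang, words)
                loss = F.cross_entropy(logits.flatten(0, 1), targets.flatten())
                opt.zero_grad()
                loss.backward()
                torch.nn.utils.clip_grad_norm_(model.parameters(), clip)
                opt.step()
                total += loss.item()
        if total > 0.99 * prev:                                 # <1% improvement: stop
            break
        prev = total
    return model
```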
3120 50 100 150 200 250 5 10 15 20 25 30 35 German-English 50 100 150 200 250 10 20 30 40 50 Spanish-English 50 100 150 200 250 10 20 30 40 50 French-English 50 100 150 200 250 0 5 10 15 20 25 Russian-English 50 100 150 200 250 0 5 10 15 20 25 30 35 Czech-English 50 100 150 200 250 0 5 10 15 20 25 30 Finnish-English OURS Xu et al. (2018) Artex et al. (2018b) Conneau et al. (2017) Conneau et al. (2017) + normalize Figure 2: The change in p@5 achieved by the unsupervised methods on word alignment tasks. The x-axis denotes the number of sentences (thousand) in the source and target corpora, and the y-axis denotes the average precision p@5 over three runs for each method. src-tgt lang (src) de es fr ru cs fi ja same domain 50k-50k 9.7 10.6 12.4 6.5 7.5 6.6 6.5 250k-250k 18.5 23.4 24.6 12.7 17.5 12.5 11.7 1M-1M 20.6 28.0 29.6 18.3 23.2 17.0 15.4 50k-1M 6.1 9.5 10.9 4.3 4.1 3.7 7.1 different domain 1M-1M 15.7 19.5 22.2 17.9 19.5 15.2 13.6 Table 4: The ratio (%) of the monolingual word embeddings being roughly isomorphic across a source and target language (English). Each row describes the number of sentences in source and target corpora used to train word embeddings, and each column denotes the source language. 4.5.2 Multilingual Word Embedding Table 3 describes the results under the three conditions described in 4.1.2. It shows that our model substantially outperforms the unsupervised and supervised baseline models under the lowresource conditions (a) and (b). As in the case of 4.5.1, when the size of the English corpus increases from 50k (a) to 1M (b), our model improves its performance while the unsupervised baselines perform worse. Under the differentdomain condition (c), our model also achieves much better results than the unsupervised baselines, but cannot outperform supervised methods. POS lang (src) de es fr ru cs fi ADJ 25.2 36.8 35.5 23.1 38.6 20.7 ADV 68.8 82.6 71.9 82.6 81.6 66.2 NOUN 24.5 53.8 51.1 13.2 16.4 9.2 VERB 16.1 66.7 73.6 34.4 34.9 19.7 Table 5: The ratio (%) of correctly matched POS tags using our model under the different-domain condition. For each language, the best and worst ratios among the four POS tags are bold-faced and underlined. 5 Analysis 5.1 Validation of Isomorphism Our experiments show that our model substantially outperforms both supervised and unsupervised methods under the low-resource condition. We conjecture that this large improvement is owing to our unique approach of obtaining multilingual word embeddings; unlike the conventional approach, our method does not assume that word embedding spaces are approximately isomorphic across languages. In fact, when word embeddings are trained with small data, they should contain a lot of noises and are unlikely to be isomorphic across languages. This suggests that it would be extremely difficult to learn a linear mapping across languages using the existing unsupervised methods. To verify this hypothesis, we investigated how likely monolingual word embeddings were more or less isomorphic across languages. For each pair of a language ℓand English, we sampled 10 pairs of equivalent words from a bilingual dictionary and built non-directed adjacency matrices of the nearest neighbour graphs G(ℓ) and G(en) independently. Then, we conducted an element-wise comparison of the two matrices and deemed them as roughly isomorphic if more than 80% of the elements are the same. Table 4 shows how often the graphs were roughly isomorphic over 1,000 samples. The row indicates the size of the source and target corpora. 
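A sketch of this check is given below. The paper does not specify how many neighbours the nearest-neighbour graph uses, so the sketch connects each sampled word to its single nearest neighbour; this is one plausible reading rather than the authors' exact construction.

```python
import numpy as np

def nn_adjacency(emb):
    """Non-directed adjacency matrix of the 1-nearest-neighbour graph."""
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sim = emb @ emb.T
    np.fill_diagonal(sim, -np.inf)              # exclude self-similarity
    nn = sim.argmax(axis=1)
    n = len(emb)
    adj = np.zeros((n, n), dtype=bool)
    adj[np.arange(n), nn] = True
    return adj | adj.T                          # make the graph non-directed

def roughly_isomorphic_ratio(src_emb, tgt_emb, pairs, n_samples=1000,
                             sample_size=10, threshold=0.8, seed=0):
    """Fraction of samples whose nearest-neighbour graphs agree on >threshold of entries.

    pairs: list of (src_index, tgt_index) gold dictionary entries.
    """
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_samples):
        idx = rng.choice(len(pairs), size=sample_size, replace=False)
        s = np.array([pairs[i][0] for i in idx])
        t = np.array([pairs[i][1] for i in idx])
        a, b = nn_adjacency(src_emb[s]), nn_adjacency(tgt_emb[t])
        if (a == b).mean() > threshold:
            hits += 1
    return hits / n_samples

# Toy usage: two noisy copies of the same space should give a high ratio.
rng = np.random.default_rng(1)
src = rng.normal(size=(200, 50))
tgt = src + 0.05 * rng.normal(size=(200, 50))
pairs = [(i, i) for i in range(200)]
print(roughly_isomorphic_ratio(src, tgt, pairs, n_samples=100))
```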
It clearly shows that monolingual corpora trained on small data (i.e. 50k sentences) are far from being isomorphic between any language pair, and the linguistically distant languages such as Finnish-English and Japanese-English are less isomorphic than close languages. This result clearly explains why the existing unsupervised methods do not perform well on the low-resource 3121 condition, or among distant language pairs. Another intriguing finding is that word embeddings trained with 50k and 1M sentences in a source and target languages are overall less isomorphic than those trained with 50k source and target sentences. This result explains why the performance of the unsupervised baseline methods decreases given the additional target data in 4.5.1 and 4.5.2. Our method, on the other hand, can effectively utilise the additional data to improve its performance, demonstrating its robustness under the low-resource condition. 5.2 POS tags of Matched Words To analyse the performance of our model, we checked Part-of-Speech (POS) tags of the English words used in the word alignment task and investigated what kind of words were correctly matched by our model. Since a word is given without any context in the word alignment task and it is not possible to infer its POS tag, we assigned to each word its most frequent POS tag in Brown Corpus (Kucera and Francis, 1967). For instance, since ‘damage’ is used as a noun more often than as a verb in Brown Corpus, we define its POS tag as ‘noun’. Table 5 shows p@5 of the word alignment task grouped by the four major POS tags, namely adjective, adverb, verb, and noun16. It clearly indicates that an adverb can be easily matched in every language pair. This would be because there are less adverbs than other tags in the evaluation data, and also because there are common word order rules about an adverb among all the languages: an adverb usually comes before an adjective to modify it, and when modifying a verb, it comes either before or after it. Refer to the Appendix B for the statistics regarding word order in each language. Among French, Spanish and English, the matching accuracy of a noun and verb is very high, and their word order is in fact very similar; as shown in the Appendix B, the basic word order of these languages is strictly subject-verb-object, and that makes it easy to align words among them. On the other hand, the word order between a noun and adjective is very different among these languages, explaining why the precision of matching adjectives is lower than the other tags. As for the other languages, they have more flexible word order than English and that makes it difficult to align 16When there are X nouns and Y of them are matched correctly in the alignment task, the ratio is 100Y X % words across languages. For instance, in German, Russian and Czech a subject sometimes comes after a verb, and in German and Finnish an object can come before a verb. These findings clearly indicate that our model employs sequential similarities among different languages to obtain multilingual word embeddings without any supervision. 6 Conclusion In this paper, we proposed a new unsupervised multilingual word embedding approach. Whereas conventional methods aim to map pre-trained word embeddings into a common space, ours jointly generates multilingual word embeddings by extracting a common language structure among multiple languages. 
Our experiments on word alignment tasks have demonstrated that our proposed model substantially outperforms the existing cross-lingual and multilingual unsupervised models under resource-poor conditions, namely when only small data are available or when domains of corpora are different across languages. Under the first condition, our model even outperforms supervised methods trained with 500 bilingual pairs of words. By analysing the nearest neighbour graphs of monolingual word embeddings, we have verified that word embeddings are far from being isomorphic when they are trained on small data, explaining why existing unsupervised methods did not perform well on the lowresource condition. We have also found that the performance of our model is closely related to word order rules, and our model can align words very well when they are used in a similar order across different languages. Our future work is to exploit character and subword information in our model and see how those information affect the performance in each language pair. It would be also interesting to investigate how our approach compares to the baselines given a large amount of data such as Wikipedia. 7 Acknowledgement We are grateful to all the anonymous reviewers for their insightful comments and advice. References Oliver Adams, Adam Makarucha, Graham Neubig, Steven Bird, and Trevor Cohn. 2017. Cross-lingual word embeddings for low-resource language modeling. In Proceedings of the 15th Conference of the 3122 European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 937–947. Association for Computational Linguistics. Jean Alaux, Edouard Grave, Marco Cuturi, and Armand Joulin. 2018. Unsupervised hyperalignment for multilingual word embeddings. CoRR, abs/1811.01124. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018a. Generalizing and improving bilingual word embedding mappings with a multi-step framework of linear transformations. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, pages 5012–5019. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018b. A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 789–798. Association for Computational Linguistics. L´eon Bottou. 2010. Large-scale machine learning with stochastic gradient descent. In Proceedings of the 19th International Conference on Computational Statistics (COMPSTAT’2010), pages 177– 187, Paris, France. Springer. Xilun Chen and Claire Cardie. 2018. Unsupervised multilingual word embeddings. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 261–270, Brussels, Belgium. Association for Computational Linguistics. Alexis Conneau, Guillaume Lample, Marc’Aurelio Ranzato, Ludovic Denoyer, and Herv´e J´egou. 2018. Word translation without parallel data. In International Conference on Learning Representations. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 2672–2680. Curran Associates, Inc. Philipp Koehn. 2005. Europarl: A Parallel Corpus for Statistical Machine Translation. 
In Conference Proceedings: the tenth Machine Translation Summit, pages 79–86, Phuket, Thailand. AAMT, AAMT. H. Kucera and W. N. Francis. 1967. Computational analysis of present-day American English. Brown University Press. KyungTae Lim, Niko Partanen, and Thierry Poibeau. 2018. Multilingual Dependency Parsing for LowResource Languages: Case Studies on North Saami and Komi-Zyrian. Miyazaki, Japan. ELRA. Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research, 9:2579–2605. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. In International Conference on Learning Representations (Workshop). Tomas Mikolov, Martin Karafi´at, Luk´as Burget, Jan Cernock´y, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In INTERSPEECH. Tomas Mikolov, Quoc V. Le, and Ilya Sutskever. 2013b. Exploiting similarities among languages for machine translation. CoRR, abs/1309.4168. Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. On the difficulty of training recurrent neural networks. In Proceedings of the 30th International Conference on Machine Learning, ICML 2013, Atlanta, GA, USA, 16-21 June 2013, pages 1310–1318. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. pages 2227–2237, New Orleans, Louisiana. Association for Computational Linguistics. V Rudramurthy, Mitesh M. Khapra, and Pushpak Bhattacharyya. 2016. Sharing network parameters for crosslingual named entity recognition. CoRR, abs/1607.00198. Samuel L. Smith, David H. P. Turban, Steven Hamblin, and Nils Y. Hammerla. 2017. Offline bilingual word vectors, orthogonal transformations and the inverted softmax. In International Conference on Learning Representations. Anders Søgaard, Sebastian Ruder, and Ivan Vuli´c. 2018. On the limitations of unsupervised bilingual dictionary induction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 778– 788. Association for Computational Linguistics. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929–1958. Min Xiao and Yuhong Guo. 2014. Distributed word representation learning for cross-lingual dependency parsing. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning, pages 119–129. Association for Computational Linguistics. Chao Xing, Dong Wang, Chao Liu, and Yiye Lin. 2015. Normalized word embedding and orthogonal transform for bilingual word translation. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational 3123 Linguistics: Human Language Technologies, pages 1006–1011. Association for Computational Linguistics. Ruochen Xu, Yiming Yang, Naoki Otani, and Yuexin Wu. 2018. Unsupervised cross-lingual transfer of word embedding spaces. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2465–2474. Association for Computational Linguistics. Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. 2017a. Adversarial training for unsupervised bilingual lexicon induction. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1959–1970. 
Association for Computational Linguistics. Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. 2017b. Earth mover’s distance minimization for unsupervised bilingual lexicon induction. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1934–1945. Association for Computational Linguistics. Will Y. Zou, Richard Socher, Daniel Cer, and Christopher D. Manning. 2013. Bilingual word embeddings for phrase-based machine translation. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1393–1398. Association for Computational Linguistics. dep-head(rel) en de es fr ru cs fi N-V(nsubj) 93.7 77.5 89.0 95.4 80.4 77.1 88.2 N-V(obj) 0.7 56.3 0.5 0.2 4.0 10.0 20.8 ADJ-N 98.9 99.9 30.3 30.9 99.3 93.9 100.0 ADV-ADJ 98.3 93.6 95.0 99.2 96.6 94.9 98.8 ADV-V 75.6 65.8 68.7 61.2 79.1 80.4 47.9 Table 6: The ratio (%) of a dependent being put before its head. N, V, ADJ, and ADV denote noun, verb, adjective and adverb, respectively. The dependency relation of ADJ-N is amod, and the one of ADV-ADJ and ADV-V is advmod. Refer to the download page of PUD for the definition of the dependency relations. A Visualisation Figure 3 visualises the multilingual word embeddings obtained by our model and (Chen and Cardie, 2018) under the low-resource condition. It shows the most frequent 1000 words in Spanish, French, German and English. The figure clearly shows that the word embeddings obtained by (Chen and Cardie, 2018) form some clusters based on their languages. In particular, many of the German words are mapped near the centre of 15 10 5 0 5 10 15 15 10 5 0 5 10 15 OURS Spanish French German English 10 5 0 5 10 15 15 10 5 0 5 10 Chen and Cardie (2018) Spanish French German English Figure 3: Scatter plot of multilingual word embeddings of French, English, German and Spanish obtained by our model and Chen and Cardie (2018) under the lowresource condition. The embeddings are reduced to 2D using tSNE (van der Maaten and Hinton, 2008). the figure and make a large cluster. On the other hand, the word embeddings trained by our model are not clustered by language, indicating that our model successfully maps word embeddings into a common space. B Word Order To obtain statistics about word order rules in each language, we used Parallel Universal Dependencies (PUD) treebanks17. PUD contains 1000 parallel sentences aligned among 18 languages, and those sentences are annotated morphologically and syntactically according to Google universal annotation guidelines. Since these sentences are aligned among all the languages, it is possible to compare the syntactical differences across languages. Table 6 shows the ratio of a dependent being put before its head in PUD treebanks in each language. As can be seen, the word order of ADV17available at http://universaldependencies.org/ 3124 ADJ (advmod) is very similar among all the language pairs: an adverb is put before an adverb to modify it. The order of ADV-V (advmod) is rather flexible regardless of language, indicating that an adverb can modify a verb from either left or right. These common word order rules of adverbs explain why our model successfully matched adverbs very well in every language pair. The table also indicates that the word order of N-V is very similar among English, Spanish and French and the basic word order is strictly subject-verb-object. This explains why our model performed well overall among these languages. 
However, the word order of ADJ-N is significantly different among these languages, and that would lead to the low performance of our model in matching adjectives.
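For reference, such dependent-before-head ratios can be computed from a CoNLL-U treebank along the following lines; the sketch groups by dependency relation only (collapsing the POS-pair distinctions of Table 6), and the file path in the usage comment is hypothetical.

```python
from collections import Counter

def dependent_before_head_ratio(conllu_path,
                                relations=("nsubj", "obj", "amod", "advmod")):
    """Ratio (%) of dependents preceding their heads, per dependency relation.

    Reads a CoNLL-U file (e.g., a PUD treebank); the tab-separated columns are
    ID, FORM, LEMMA, UPOS, XPOS, FEATS, HEAD, DEPREL, DEPS, MISC.
    """
    before, total = Counter(), Counter()
    with open(conllu_path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            cols = line.split("\t")
            if "-" in cols[0] or "." in cols[0]:       # skip multiword/empty tokens
                continue
            dep_id, head_id, rel = int(cols[0]), int(cols[6]), cols[7]
            rel = rel.split(":")[0]                    # e.g. nsubj:pass -> nsubj
            if head_id == 0 or rel not in relations:
                continue
            total[rel] += 1
            if dep_id < head_id:
                before[rel] += 1
    return {r: 100.0 * before[r] / total[r] for r in relations if total[r]}

# Hypothetical usage on an English PUD treebank file:
# print(dependent_before_head_ratio("en_pud-ud-test.conllu"))
```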
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3125–3135 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3125 Choosing Transfer Languages for Cross-Lingual Learning Yu-Hsiang Lin∗, Chian-Yu Chen∗, Jean Lee∗, Zirui Li∗, Yuyan Zhang∗, Mengzhou Xia, Shruti Rijhwani, Junxian He, Zhisong Zhang, Xuezhe Ma, Antonios Anastasopoulos, Patrick Littell†, Graham Neubig Language Technologies Institute, Carnegie Mellon University †National Research Council, Canada Abstract Cross-lingual transfer, where a high-resource transfer language is used to improve the accuracy of a low-resource task language, is now an invaluable tool for improving performance of natural language processing (NLP) on lowresource languages. However, given a particular task language, it is not clear which language to transfer from, and the standard strategy is to select languages based on ad hoc criteria, usually the intuition of the experimenter. Since a large number of features contribute to the success of cross-lingual transfer (including phylogenetic similarity, typological properties, lexical overlap, or size of available data), even the most enlightened experimenter rarely considers all these factors for the particular task at hand. In this paper, we consider this task of automatically selecting optimal transfer languages as a ranking problem, and build models that consider the aforementioned features to perform this prediction. In experiments on representative NLP tasks, we demonstrate that our model predicts good transfer languages much better than ad hoc baselines considering single features in isolation, and glean insights on what features are most informative for each different NLP tasks, which may inform future ad hoc selection even without use of our method.1 1 Introduction A common challenge in applying natural language processing (NLP) techniques to low-resource languages is the lack of training data in the languages in question. It has been demonstrated that through cross-lingual transfer, it is possible to leverage one or more similar high-resource languages to improve the performance on the low-resource languages in several NLP tasks, including machine ∗Equal contribution 1Code, data, and pre-trained models are available at https://github.com/neulab/langrank score(Ltf,1, Ltk) score(Ltf,2, Ltk) ... Ltf,1: Transfer Language 1 Ltk: Task Language Ltk: Task Language Transfer Learning Transfer Learning Generate Training Data Train Transfer Language Ranker Learning to Rank ... ... ... Ltf,2: Transfer Language 2 NLP Model 1 score(Ltf,1, Ltk) NLP Model 2 score(Ltf,2, Ltk) Transfer Language Ranker ... Figure 1: Workflow of learning to select the transfer languages for an NLP task: (1) train a set of NLP models with all available transfer languages and collect evaluation scores, (2) train a ranking model to predict the top transfer languages. translation (Zoph et al., 2016; Johnson et al., 2017; Nguyen and Chiang, 2017; Neubig and Hu, 2018), parsing (T¨ackstr¨om et al., 2012; Ammar et al., 2016; Ahmad et al., 2018; Ponti et al., 2018), partof-speech or morphological tagging (T¨ackstr¨om et al., 2013; Cotterell and Heigold, 2017; Malaviya et al., 2018; Plank and Agi´c, 2018), named entity recognition (Zhang et al., 2016; Mayhew et al., 2017; Xie et al., 2018), and entity linking (Tsai and Roth, 2016; Rijhwani et al., 2019). 
There are many methods for performing this transfer, including joint training (Ammar et al., 2016; Tsai and Roth, 2016; Cotterell and Heigold, 2017; Johnson et al., 2017; Malaviya et al., 2018), annotation projection (T¨ackstr¨om et al., 2012; T¨ackstr¨om et al., 2013; Zhang et al., 2016; Ponti et al., 2018; Plank and Agi´c, 2018), fine-tuning (Zoph et al., 2016; Neubig and Hu, 2018), data augmentation (Mayhew et al., 2017), or zero-shot transfer (Ahmad et al., 2018; Xie et al., 2018; Neubig and Hu, 3126 2018; Rijhwani et al., 2019). The common thread is that data in a high-resource transfer language is used to improve performance on a low-resource task language. However, determining the best transfer language for any particular task language remains an open question – the choice of transfer language has traditionally been done in a heuristic manner, often based on the intuition of the experimenter. A common method of choosing transfer languages involves selecting one that belongs to the same language family or has a small phylogenetic distance in the language family tree to the task language (Dong et al., 2015; Johnson et al., 2017; Cotterell and Heigold, 2017). However, it is not always true that all languages in a single language family share the same linguistic properties (Ahmad et al., 2018). Therefore, another strategy is to select transfer languages based on the typological properties that are relevant to the specific NLP task, such as word ordering for parsing tasks (Ammar et al., 2016; Ahmad et al., 2018). With several heuristics available for selecting a transfer language, it is unclear a priori if any single attribute of a language will be the most reliable criterion in determining whether cross-lingual learning is likely to work for a specific NLP task. Other factors, such as lexical overlap between the training datasets or size of available data in the transfer language, could also play a role in selecting an appropriate transfer language. Having an empirical principle regarding how to choose the most promising languages or corpora to transfer from has the potential to greatly reduce the time and effort required to find, obtain, and prepare corpora for a particular language pair. In this paper, we propose a framework, which we call LANGRANK, to empirically answer the question posed above: given a particular task lowresource language and NLP task, how can we determine which languages we should be performing transfer from? We consider this language prediction task as a ranking problem, where each potential transfer language is represented by a set of attributes including typological information and corpus statistics, such as word overlap and dataset size. Given a task language and a set of candidate transfer languages, the model is trained to rank the transfer languages according to the performance achieved when they are used in training a model to process the task low-resource language. These models are trained by performing a computationand resource-intensive exhaustive search through the space of potential transfer languages, but at test time they can rapidly predict optimal transfer languages, based only on a few dataset and linguistic features, which are easily obtained. In experiments, we examine cross-lingual transfer in four NLP tasks: machine translation (MT), entity linking (EL), part-of-speech (POS) tagging and dependency parsing (DEP). We train gradient boosted decision trees (GBDT; Ke et al. 
(2017)) to select the best transfer languages based on the aforementioned features. We compare our ranking models with several reasonable baselines inspired by the heuristic approaches used in previous work, and show that our ranking models significantly improve the quality of the selection of the top languages for cross lingual transfer. In addition, through an ablation study and examining the learned decisions trees, we glean insights about which features were found to be useful when choosing transfer languages for each task. This may inform future attempts for heuristic selection of transfer languages, even in the absence of direct use of LANGRANK. 2 Problem Formulation We define the task language t as the language of interest for a particular NLP task, and the transfer language a as the additional language that is used to aid in training models. Formally, during the training stage of transfer learning, we perform a model training step: Mt,a = train(⟨x(trn) t , y(trn) t ⟩, ⟨x(trn) a , y(trn) a ⟩), where x(trn) and y(trn) indicate input and output training data for each training language, and Mt,a indicates the resulting model trained on languages t and a. The actual model and training procedure will vary from task to task, and we give several disparate examples in our experiments in §5.1. The model can then be evaluated by using it to predict outputs over the test set, and evaluating the results: ˆy(tst) t,a = predict(x(tst) t ; Mt,a) ct,a = evaluate(y(tst) t , ˆy(tst) t,a ), where ct,a is the resulting test-set score achieved by using a as an transfer language. Assuming we want to get the highest possible performance on task language t, one way to do so 3127 is to exhaustively enumerate over every single potential transfer language a, train models, and evaluate the test set. In this case, the optimal transfer language for task language t can be defined as: a∗ t = argmaxact,a. However, as noted in the introduction, this bruteforce method for finding optimal transfer languages is not practical: if resources for many languages are available a priori, it is computationally expensive to train all of the models, and in many cases these resources are not-available a priori and need to be gathered from various sources before even starting experimentation. Thus, we turn to formulating our goal as a ranking task: given an NLP task, a low-resource task language t, and a list of J available high-resource transfer languages a1, a2, . . . , aJ, attempt to predict their ranking according to their expected scores ct,a1, ct,a2, . . . , ct,aJ without actually calculating the scores themselves. To learn this ranker, we need to first create training data for the ranker, which we create by doing an exhaustive sweep over a set of training task languages t1, t2, . . . , tI, which results in sets of scores {ct1,a1, . . . , ct1,aJ, }, . . . , {ctI,a1, . . . , ctI,aJ}. These scores that can be used to train a ranking system, using standard methods for learning to rank (see, e.g., Liu et al. (2009)). Specifically, these methods work by extracting features from the pair of languages ⟨ti, aj⟩: φti,aj = feat extract(ti, aj) and then using these features to predict a relative score for each pair of task and transfer languages rti,aj = rank score(φti,aj; θ) where θ are the parameters of the ranking model. These parameters θ are learned in a way such that the order of the ranking scores rti,a1, . . . , rti,aJ match as closely as possible with those of the goldstandard evaluation scores cti,a1, . . . , cti,aJ. 
Now that we have described the overall formulation of the problem, there are two main questions left: how do we define our features φ_{t_i,a_j}, and how do we learn the parameters θ of the ranking model?

3 Ranking Features

We represent each language pair/corpus by a set of features, split into two classes: dataset-dependent and dataset-independent.

3.1 Dataset-dependent Features

Dataset-dependent features are statistical features of the particular corpus used, such as dataset size and the word overlap between two corpora. Importantly, these features require the dataset to already be available for processing and thus are less conducive to use in situations where resources have not yet been acquired. Specifically, we examine the following categories:

Dataset Size: We denote the number of training examples in the transfer and task languages by s_tf and s_tk, respectively. For MT, POS and DEP, this is the number of sentences in a corpus, and for EL the dataset size is the number of named entities in a bilingual entity gazetteer. In our experiments, we also consider the ratio of the dataset sizes, s_tf/s_tk, as a feature, since we are interested in how much bigger the transfer-language corpus is than the task-language corpus.

Type-Token Ratio (TTR): The TTR of the transfer- and task-language corpora, t_tf and t_tk, respectively, is the ratio between the number of types (the number of unique words) and the number of tokens (Richards, 1987). It is a measure of lexical diversity, as a higher TTR represents higher lexical variation. We also consider the distance between the TTRs of the transfer- and task-language corpora, which may very roughly indicate their morphological similarity:

d_ttr = (1 − t_tf / t_tk)².

Transfer and task languages that have similar lexical diversity are expected to have d_ttr close to 0. The data for the entity linking task consists only of named entities, so the TTR is typically close to 1 for all languages. Therefore, we do not include TTR-related features for the EL task.

Word overlap and subword overlap: We measure the similarity between the vocabularies of the task- and transfer-language corpora by word overlap o_w and subword overlap o_sw:

o_w = |T_tf ∩ T_tk| / (|T_tf| + |T_tk|),
o_sw = |S_tf ∩ S_tk| / (|S_tf| + |S_tk|),

where T_tf and T_tk are the sets of types in the transfer- and task-language corpora, and S_tf and S_tk are their sets of subwords. The subwords are obtained by an unsupervised word segmentation algorithm (Sennrich et al., 2016; Kudo, 2018). Note that for EL, we do not consider subword overlap, and the word overlap is simply the count of the named entities that have exactly the same representations in both transfer and task languages. We also omit subword overlap in the POS and DEP tasks, as some low-resource languages do not have enough data for properly extracting subwords.

3.2 Dataset-independent Features

Dataset-independent features are measures of the similarity between a pair of languages based on phylogenetic or typological properties established by linguistic study. Specifically, we leverage six different linguistic distances queried from the URIEL Typological Database (Littell et al., 2017):

Geographic distance (d_geo): The orthodromic distance between the languages on the surface of the earth, divided by the antipodal distance, based primarily on language location descriptions in Glottolog (Hammarström et al., 2018).

Genetic distance (d_gen): The genealogical distance of the languages, derived from the hypothesized tree of language descent in Glottolog.
Inventory distance (dinv): The cosine distance between the phonological feature vectors derived from the PHOIBLE database (Moran et al., 2014), a collection of seven phonological databases. Syntactic distance (dsyn): The cosine distance between the feature vectors derived from the syntactic structures of the languages (Collins and Kayne, 2011), derived mostly from the WALS database (Dryer and Haspelmath, 2013). Phonological distance (dpho): The cosine distance between the phonological feature vectors derived from the WALS and Ethnologue databases (Lewis, 2009). Featural distance (dfea): The cosine distance between feature vectors combining all 5 features mentioned above. 4 Ranking Model Having defined our features, the next question is what type of ranking model to use and how to learn its parameters θ. As defined in §2, the problem is a standard learning-to-rank problem, so there are a myriad of possibilities for models and learning algorithms (Liu et al., 2009). We opt to use the GBDT (Ke et al., 2017) model with LambdaRank as our training method (Burges, 2010), as it has two major advantages. First, its empirical performance – it is currently one of the state-of-the-art methods for ranking, especially in settings that have few features and limited data. Second, but perhaps more interesting, is its interpretability. Decision-tree based algorithms are relatively interpretable, as it is easy to visualize the learned tree structure. One of our research goals is to understand what linguistic or statistical features of a dataset play important roles in transfer learning, so the interpretable nature of the treebased model can provide valuable insights, which we elaborate further in §6.2. 5 Experimental Settings 5.1 Testbed Tasks We investigate the performance of LANGRANK on four common NLP tasks: MT, EL, POS tagging, and DEPendency parsing. We briefly outline the settings for all four NLP tasks. Machine Translation We train a standard attention-based sequence-to-sequence model (Bahdanau et al., 2015), using the XNMT toolkit (Neubig et al., 2018). We perform training on the multilingual TED talk corpus of Qi et al. (2018), using 54 task and 54 transfer languages, always translating into English, which results in 2,862 task/transfer pairs and 54 single-source training settings. Transfer is performed by joint training over the concatenated task and transfer corpora. Entity Linking The cross-lingual EL task involves linking a named entity mention in the task language to an English knowledge base. We train two character-level LSTM encoders, which are trained to maximize the cosine similarity between parallel (i.e., linked) entities (Rijhwani et al., 2019). We use the same dataset as Rijhwani et al. (2019), which contains language-linked Wikipedia article titles from 9 low-resource task languages and 53 potential transfer languages, resulting in 477 task/transfer pairs. We perform training in a zero-shot setting, where we train on corpora only in the transfer language, and test entity linking accuracy on the task language without joint training or fine-tuning. POS Tagging We train a bi-directional LSTMCNNs-CRF model (Ma and Hovy, 2016) on word 3129 sequences without using pre-trained word embeddings. The implementation is based on the NCRF++ toolkit (Yang and Zhang, 2018). 
We perform training on the Universal Dependencies v2.2 dataset (Nivre et al., 2018), using the 26 languages that have the least training data as task languages, and 60 transfer languages,² resulting in 1,545 pairs of transfer-task languages. Transfer is performed by joint training over the concatenated task and transfer corpora if the task language has training data, and by training only with transfer corpora otherwise. The performance is measured by POS tagging accuracy on the task language.

[Footnote 2: For each language, we choose the treebank that has the least number of training instances, which results in 60 languages with training data and 11 without training data.]

Dependency Parsing For the dependency parsing task, we follow the settings of Ahmad et al. (2018) and utilize a deep biaffine attentional graph-based model (Dozat and Manning, 2016). We select 30 languages from Universal Dependencies v2.2 (Nivre et al., 2018), resulting in 870 pairs of transfer-task languages. For this task, transfer is performed in the zero-shot setting, where no task-language annotations are available in training. We rely on multilingual embeddings that are mapped into the same space with the offline method of Smith et al. (2017) and directly apply the model trained on the transfer language to the task languages. The performance is measured by LAS (Labeled Attachment Score), excluding punctuation.

5.2 Evaluation Protocol

We evaluate all our models on all NLP tasks with leave-one-out cross validation. For each cross-validation fold, we leave one language ℓ^(tst) out from the N languages we have as the test set, and train our ranking model θ_{ℓ^(tst)} using all remaining languages, {ℓ_1^(trn), ..., ℓ_{N−1}^(trn)}, as the training set. During training, each ℓ_i^(trn) is treated as the task language in turn, and the other N − 2 languages in the training set as transfer languages. We then test the learned model θ_{ℓ^(tst)} by taking ℓ^(tst) as the task language and {ℓ_1^(trn), ..., ℓ_{N−1}^(trn)} as the set of transfer languages, and predict the ranking scores {r_{ℓ^(tst),ℓ_1^(trn)}, ..., r_{ℓ^(tst),ℓ_{N−1}^(trn)}}. We repeat this process with each of the N languages as the test language ℓ^(tst), and collect N learned models.

We use Normalized Discounted Cumulative Gain (NDCG) (Järvelin and Kekäläinen, 2002) to evaluate the performance of the ranking model. The NDCG at position p is defined as:

NDCG@p = DCG@p / IDCG@p,

where the Discounted Cumulative Gain (DCG) at position p is

DCG@p = Σ_{i=1}^{p} (2^{γ_i} − 1) / log₂(i + 1).

Here γ_i is the relevance of the language ranked at position i by the model being evaluated. We keep only the top-γ_max transfer languages as our learning signal: the true best transfer language has γ = γ_max, the second-best one has γ = γ_max − 1, and so on until γ = 1, with the remaining languages below the top γ_max all sharing γ = 0. The Ideal Discounted Cumulative Gain (IDCG) uses the same formula as DCG, except that it is calculated over the gold-standard ranking. When the predicted ranking matches the "true" ranking, NDCG is equal to 1.

5.3 Method Parameters and Baselines

We use GBDT to train our LANGRANK models. For each LANGRANK model, we train an ensemble of 100 decision trees, each with 16 leaves. We use the LightGBM implementation (Ke et al., 2017) of the LambdaRank algorithm in our training. In our experiments, we set γ_max = 10 and evaluate the models by NDCG@3.
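As an illustration, the sketch below shows how such a ranker could be configured with the LightGBM package and scored with NDCG@3; the hyperparameters beyond the 100 trees and 16 leaves stated above, and the shape of the feature matrix, are assumptions for the example rather than the exact experimental setup.

```python
import numpy as np
import lightgbm as lgb

def ndcg_at_p(predicted_scores, relevance, p=3):
    """NDCG@p with the graded relevance gamma described above."""
    predicted_scores = np.asarray(predicted_scores, dtype=float)
    relevance = np.asarray(relevance, dtype=float)
    order = np.argsort(-predicted_scores)                  # model ranking, best first
    top = relevance[order][:p]
    discounts = np.log2(np.arange(2, 2 + len(top)))
    dcg = np.sum((2.0 ** top - 1) / discounts)
    ideal = np.sort(relevance)[::-1][:p]
    idcg = np.sum((2.0 ** ideal - 1) / np.log2(np.arange(2, 2 + len(ideal))))
    return dcg / idcg if idcg > 0 else 0.0

# X: feature vectors phi for each (task, transfer) pair
# y: graded relevance labels (gamma_max = 10 down to 0)
# group_sizes: number of candidate transfer languages per task language
ranker = lgb.LGBMRanker(
    objective="lambdarank",
    n_estimators=100,   # 100 decision trees
    num_leaves=16,      # 16 leaves each
)
# ranker.fit(X, y, group=group_sizes)
# print(ndcg_at_p(ranker.predict(X_test), y_test, p=3))
```

In the leave-one-out protocol above, one such ranker would be fit per held-out test language.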
The threshold of 3 was somewhat arbitrary, but based on our intuition that we would like to test whether LANGRANK can successfully recommend the best transfer language within a few tries, instead of testing its ability to accurately rank all available transfer languages. The results in Table 1 report the average NDCG@3 across all cross-validation folds. For LANGRANK (all) we include all available features in our models, while for LANGRANK (dataset) and LANGRANK (ling) we include only the subsets of dataset-dependent and dataset-independent features, respectively.

Table 1: Our LANGRANK model leads to higher average NDCG@3 over the baselines on all four tasks: machine translation (MT), entity linking (EL), part-of-speech tagging (POS) and dependency parsing (DEP).

Method                             MT    EL    POS   DEP
dataset
  word overlap o_w                 28.6  30.7  13.4  52.3
  subword overlap o_sw             29.2  –     –     –
  size ratio s_tf/s_tk              3.7   0.3   9.5  24.8
  type-token ratio d_ttr            2.5   –     7.4   6.4
ling. distance
  genetic d_gen                    24.2  50.9  14.8  32.0
  syntactic d_syn                  14.8  46.4   4.1  22.9
  featural d_fea                   10.1  47.5   5.7  13.9
  phonological d_pho                3.0   4.0   9.8  43.4
  inventory d_inv                   8.5  41.3   2.4  23.5
  geographic d_geo                 15.1  49.5  15.7  46.4
LANGRANK (all)                     51.1  63.0  28.9  65.0
LANGRANK (dataset)                 53.7  17.0  26.5  65.0
LANGRANK (URIEL)                   32.6  58.1  16.6  59.6

We consider the following baseline methods:

• Using a single dataset-dependent feature: While dataset-dependent features have not typically been used as criteria for selecting transfer languages, they are a common feature in data selection methods for cross-domain transfer (Moore and Lewis, 2010). In view of this, we include selecting the transfer languages by sorting against each single one of o_w, o_sw, and s_tf/s_tk in descending order, and sorting against d_ttr in ascending order, as baseline methods.

• Using a single linguistic distance feature: More common heuristic criteria for selecting transfer languages are to choose ones that have a small phylogenetic distance to the task language (Dong et al., 2015; Cotterell and Heigold, 2017). We therefore include selecting the transfer languages by sorting against each single one of d_gen, d_syn, d_fea, d_pho, d_inv, and d_geo in ascending order as baseline methods.

6 Results and Analysis

6.1 Main Results

The performance of predicting transfer languages for the four NLP tasks using single-feature baselines and LANGRANK is shown in Table 1. First, using LANGRANK with either all features or a subset of the features leads to substantially higher NDCG than using single-feature heuristics. Although some single-feature baselines manage to achieve high NDCG for some tasks, the predictions of LANGRANK consistently surpass the baselines on all tasks. In fact, for the MT and POS tagging tasks, the ranking quality of the best LANGRANK model is almost double that of the best single-feature baseline.

Furthermore, using dataset-dependent features on top of the linguistic distance ones enhances the quality of the LANGRANK predictions.

[Figure 2: The best evaluation score (BLEU for MT, accuracy for EL and POS, and LAS for DEP) attainable by trying out the top K transfer languages (K = 1..10) recommended by the LANGRANK models and the single-feature baselines.]

The best results for EL and POS tagging are obtained using all features, while for MT the best model is the one using dataset-only features.
The best performance on DEP parsing is achieved with both settings. LANGRANK with only dataset features outperforms the linguistics-only LANGRANK on the MT and POS tagging tasks. It is, however, severely lacking in the EL task, likely because EL datasets lack most dataset features, as discussed in the previous section; the EL data only consists of pairs of corresponding entities and not complete sentences as in the case of the other tasks' datasets. In addition, it is important to note that LANGRANK with only linguistic database information still outperforms all heuristic baselines on all tasks. This means that our model is potentially useful even before any resources for the language and task of interest have been collected, and could inform the data creation process.

Finally, from a potential user's point of view, a practical question is: if we train models on the top K transfer languages suggested by the ranking model and pick the best one, how good is the best model expected to be? If a user could obtain a good transfer model by trying out only a small number of transfer languages as suggested by our ranking model, the overhead of searching for a good transfer language is immensely reduced. Figure 2 compares the BLEU score (for MT), accuracy (for EL and POS) and LAS (for DEP) of the best transfer model attainable by using one of the top K transfer languages recommended by LANGRANK (all) and by the best single-feature baseline. We plot the ratio of the best score to that of the ground-truth best transfer model c_{t,a_t^*}, averaged over all task languages. On the MT task, the best transfer models obtained from the suggestions of our LANGRANK (all) model consistently outperform the models obtained from the best baseline. On the POS tagging task, the best transfer models obtained by our ranking model are generally comparable to those using baseline suggestions. We note that in the EL task, after looking beyond the top 3 LANGRANK predictions, the best baseline models on average seem to give much more relevant transfer language suggestions than our LANGRANK models. However, this is a case where averaging is possibly misleading. In fact, the LANGRANK model manages to select the correct top-1 language for 7 of the 9 task languages. The other two languages (Telugu and Uyghur) do not have any typologically similar languages in the small training set, and hence the learned model fails to generalize to these languages.

Table 2: Examples of predicted top-3 transfer languages (and true ranks). The languages are denoted by their ISO 639-2 language codes. The first three task languages (aze, fin, ben) are on the MT task, and the last one (tel) is on the EL task.

Task  Lang  LANGRANK   Best Dataset  Best URIEL   True Best
MT    aze              (o_w)         (d_fea)
             tur (1)    tur (1)       ara (32)     tur (1)
             fas (3)    hrv (5)       fas (3)      kor (2)
             hun (4)    ron (31)      sqi (22)     fas (3)
MT    ben              (o_w)         (d_geo)
             hun (1)    vie (3)       mya (30)     hun (1)
             tur (2)    ita (20)      hin (27)     tur (2)
             fas (4)    por (18)      mar (41)     vie (3)
EL    tel              (o_w)         (d_inv)
             amh (6)    amh (6)       pan (2)      hin (1)
             orm (40)   swa (32)      hin (1)      pan (2)
             msa (7)    jav (9)       ben (5)      mar (3)

In Table 2 we include a few representative examples of the top-3 transfer languages selected by LANGRANK and the baselines.³ In the first case (aze) LANGRANK outperforms the already strong baselines by being able to consider both dataset and linguistic features, instead of considering them in isolation.
In the second case (ben), where no baselines provide useful recommendations, LANGRANK still displays good performance; interestingly, Turkish and Hungarian proved good transfer languages for a large number of task languages (perhaps due to large data size and difficulty as tasks), and LANGRANK was able to learn to fall back to these when it found no good typological or dataset-driven matches otherwise – behavior that would have been inconceivable without empirical discovery of transfer languages. The final failure case (tel), as noted above, can be attributed to overfitting the small EL dataset, and may be remedied by either creating larger data or training LANGRANK jointly over multiple tasks.

[Footnote 3: Detailed results are in the supplementary material.]

6.2 Towards Better Educated Guesses for Choosing Transfer Languages

Our transfer language rankers are trained on a few languages for the particular tasks. It is possible that our models will not generalize well on a different set of languages or on other NLP tasks. However, generating training data for ranking with exhaustive transfer experiments on a new task or set of languages will not always be feasible. It could, therefore, be valuable to analyze the learned models and extract "rules of thumb" that can be used as educated guesses in choosing transfer languages. They might still be ad hoc, but they may prove superior to the intuition-based heuristic approaches used in previous work.

To elucidate how LANGRANK determines the best transfer languages for each task, Figure 3 shows the feature importance for each of the NLP tasks. The feature importance is defined as the number of times a feature is chosen to be the splitting feature in a node of the decision trees.

[Figure 3: Normalized feature importance for the MT, EL, POS and DEP tasks.]

For the MT task, we find that dataset statistics features are more influential than the linguistic features, especially the dataset size ratio and the word overlap. This indicates that a good transfer language for machine translation depends more on the dataset size of the transfer-language corpus and its word and subword overlap with the task-language corpus. This is confirmed by the results of the LANGRANK (dataset) model in Table 1, which achieves the best performance by only using the subset of dataset statistics features. At the same time, we note that the dataset size ratio and TTR distance, although of high importance among all features, result in very poor performance when used alone. This phenomenon may be understood by looking at an example of a small decision tree in Figure 4: a genetic distance of less than 0.4 would produce a high ranking regardless of dataset size. The dataset feature in this tree provides a smaller gain than the two typological features, although it still informs the decision.

[Figure 4: An example of the decision tree learned in the machine translation task for Galician as task language, with splits on d_gen ≤ 0.43, d_syn > 0.56, and s_tf/s_tk > 1.61.]

For POS tagging, the two most important features are dataset size and the TTR distance. On the other hand, the lack of rich dataset-dependent features for the EL task leads to the geographic and syntactic distances being most influential. There are several relatively important features for the DEP parsing task, with geographic and genetic distance standing out, as well as word overlap. These are features that also yield good scores on their own (see Table 1), but LANGRANK is able to combine them and achieve even better results.

7 Related Work

Cross-lingual transfer has been extensively used in several NLP tasks.
In Section 1, we provided a (non-exhaustive) list of examples that employ cross-lingual transfer across several tasks. Other work has performed large-scale studies on the importance of appropriately selecting a transfer language, such as Paul et al. (2009), which performed an extensive search for a “pivot language” in statistical MT, but without attempting to actually learn or predict which pivot language is best. Typologically-informed models are another vein of research that is relevant to our work. The relationship between linguistic typology and statistical modeling has been studied by Gerz et al. (2018) and Cotterell et al. (2018), with a focus on language modeling. Tsvetkov et al. (2016b) used typological information in the target language as additional input to their model for phonetic representation learning. Ammar et al. (2016) and Ahmad et al. (2018) used similar ideas for dgen ≤0.43 output: 0 dsyn > 0.56 output: 2 output: 3 stf stk > 1.61 output: 1 yes no yes no yes no Figure 4: An example of the decision tree learned in the machine translation task for Galician as task language. dependency parsing, incorporating linguisticallyinformed vectors into their models. O’Horan et al. (2016) survey typological resources available and their utility in NLP tasks. Although not for cross-lingual transfer, there has been prior work on data selection for training models. Tsvetkov et al. (2016a) and Ruder and Plank (2017) use Bayesian optimization for data selection. van der Wees et al. (2017) study the effect of data selection of neural machine translation, as well as propose a dynamic method to select relevant training data that improves translation performance. Plank and van Noord (2011) design a method to automatically select domain-relevant training data for parsing in English and Dutch. 8 Conclusion We formulate the task of selecting the optimal transfer languages for an NLP task as a ranking problem. For machine translation, entity linking, part-of-speech tagging, and dependency parsing, we train ranking models to predict the most promising transfer languages to use given a task language. We show that by taking multiple dataset statistics and language attributes into consideration, the learned ranking models recommend much better transfer languages than the ones suggested by considering only single language or dataset features. Through analyzing the learned ranking models, we also gain some insights on the types of features that are most influential in selecting transfer languages for each of the NLP tasks, which may inform future ad hoc selection even without useing our method. Acknowledgments This project was supported in part by NSF Award No. 1761548 “Discovering and Demon3133 strating Linguistic Features for Language Documentation,” and the Defense Advanced Research Projects Agency Information Innovation Office (I2O) Low Resource Languages for Emergent Incidents (LORELEI) program under Contract No. HR0011-15-C0114. The views and conclusions contained in this doc- ument are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation here on. References Wasi Uddin Ahmad, Zhisong Zhang, Xuezhe Ma, Eduard Hovy, Kai-Wei Chang, and Nanyun Peng. 2018. Near or far, wide range zero-shot crosslingual dependency parsing. arXiv preprint arXiv:1811.00570. 
Waleed Ammar, George Mulcaire, Miguel Ballesteros, Chris Dyer, and Noah Smith. 2016. Many languages, one parser. Transactions of the Association for Computational Linguistics, 4:431–444. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. ICLR 2015 (arXiv:1409.0473). Chris J.C. Burges. 2010. From RankNet to LambdaRank to LambdaMART: An overview. Technical report, Microsoft Research. Chris Collins and Richard Kayne. 2011. Syntactic structures of the world’s languages. Ryan Cotterell and Georg Heigold. 2017. Crosslingual character-level neural morphological tagging. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 748–759, Copenhagen, Denmark. Association for Computational Linguistics. Ryan Cotterell, Sebastian J. Mielke, Jason Eisner, and Brian Roark. 2018. Are all languages equally hard to language-model? In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 536–541. Association for Computational Linguistics. Daxiang Dong, Hua Wu, Wei He, Dianhai Yu, and Haifeng Wang. 2015. Multi-task learning for multiple language translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1723–1732, Beijing, China. Association for Computational Linguistics. Timothy Dozat and Christopher D Manning. 2016. Deep biaffine attention for neural dependency parsing. arXiv preprint arXiv:1611.01734. Matthew S. Dryer and Martin Haspelmath, editors. 2013. WALS Online. Max Planck Institute for Evolutionary Anthropology, Leipzig. Daniela Gerz, Ivan Vuli´c, Edoardo Maria Ponti, Roi Reichart, and Anna Korhonen. 2018. On the relation between linguistic typology and (limitations of) multilingual language modeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 316–327. Association for Computational Linguistics. Harald Hammarstr¨om, Robert Forkel, and Martin Haspelmath. 2018. Glottolog 3.3. Max Planck Institute for the Science of Human History. Kalervo J¨arvelin and Jaana Kek¨al¨ainen. 2002. Cumulated gain-based evaluation of ir techniques. ACM Transactions on Information Systems (TOIS), 20(4):422–446. Melvin Johnson, Mike Schuster, Quoc Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernand a Vi ˜A c⃝gas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2017. Google’s multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics, 5:339–351. Guolin Ke, Qi Meng, Thomas Finley, Taifeng Wang, Wei Chen, Weidong Ma, Qiwei Ye, and Tie-Yan Liu. 2017. Lightgbm: A highly efficient gradient boosting decision tree. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 3146–3154. Curran Associates, Inc. Taku Kudo. 2018. Subword regularization: Improving neural network translation models with multiple subword candidates. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 66–75. Association for Computational Linguistics. Paul M Lewis. 2009. Ethnologue: Languages of the World Sixteenth edition. 
Dallas, Texas: SIL International. Patrick Littell, David R Mortensen, Ke Lin, Katherine Kairis, Carlisle Turner, and Lori Levin. 2017. URIEL and lang2vec: Representing languages as typological, geographical, and phylogenetic vectors. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, volume 2, pages 8–14. 3134 Tie-Yan Liu et al. 2009. Learning to rank for information retrieval. Foundations and Trends R⃝in Information Retrieval, 3(3):225–331. Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional lstm-cnns-crf. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1064–1074. Chaitanya Malaviya, Matthew R. Gormley, and Graham Neubig. 2018. Neural factor graph models for cross-lingual morphological tagging. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2653–2663, Melbourne, Australia. Association for Computational Linguistics. Stephen Mayhew, Chen-Tse Tsai, and Dan Roth. 2017. Cheap translation for cross-lingual named entity recognition. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2536–2545. Robert C. Moore and William Lewis. 2010. Intelligent selection of language model training data. In Proceedings of the ACL 2010 Conference Short Papers, pages 220–224, Uppsala, Sweden. Association for Computational Linguistics. Steven Moran, Daniel McCloy, and Richard Wright, editors. 2014. PHOIBLE Online. Max Planck Institute for Evolutionary Anthropology, Leipzig. Graham Neubig and Junjie Hu. 2018. Rapid adaptation of neural machine translation to new languages. In Conference on Empirical Methods in Natural Language Processing (EMNLP), Brussels, Belgium. Graham Neubig, Matthias Sperber, Xinyi Wang, Matthieu Felix, Austin Matthews, Sarguna Padmanabhan, Ye Qi, Devendra Singh Sachan, Philip Arthur, Pierre Godard, John Hewitt, Rachid Riad, and Liming Wang. 2018. XNMT: The extensible neural machine translation toolkit. In Conference of the Association for Machine Translation in the Americas (AMTA) Open Source Software Showcase, Boston. Toan Q. Nguyen and David Chiang. 2017. Transfer learning across low-resource, related languages for neural machine translation. In Proc. IJCNLP, volume 2, pages 296–301. Joakim Nivre, Mitchell Abrams, ˇZeljko Agi´c, and et al. 2018. Universal dependencies 2.2. LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics ( ´UFAL), Faculty of Mathematics and Physics, Charles University. Helen O’Horan, Yevgeni Berzak, Ivan Vulic, Roi Reichart, and Anna Korhonen. 2016. Survey on the use of typological information in natural language processing. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 1297–1308, Osaka, Japan. The COLING 2016 Organizing Committee. Michael Paul, Hirofumi Yamamoto, Eiichiro Sumita, and Satoshi Nakamura. 2009. On the importance of pivot language selection for statistical machine translation. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Companion Volume: Short Papers, pages 221–224. Association for Computational Linguistics. Barbara Plank and ˇZeljko Agi´c. 2018. Distant supervision from disparate sources for low-resource partof-speech tagging. 
In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 614–620. Association for Computational Linguistics. Barbara Plank and Gertjan van Noord. 2011. Effective measures of domain similarity for parsing. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 1566–1576, Portland, Oregon, USA. Association for Computational Linguistics. Edoardo Maria Ponti, Roi Reichart, Anna Korhonen, and Ivan Vuli ¨A‡. 2018. Isomorphic transfer of syntactic structures in cross-lingual nlp. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1531–1542, Melbourne, Australia. Association for Computational Linguistics. Ye Qi, Devendra Sachan, Matthieu Felix, Sarguna Padmanabhan, and Graham Neubig. 2018. When and why are pre-trained word embeddings useful for neural machine translation? In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 529–535. Association for Computational Linguistics. Brian Richards. 1987. Type/token ratios: What do they really tell us? Journal of child language, 14(2):201– 209. Shruti Rijhwani, Jiateng Xie, Graham Neubig, and Jaime Carbonell. 2019. Zero-shot neural transfer for cross-lingual entity linking. In Thirty-Third AAAI Conference on Artificial Intelligence (AAAI), Honolulu, Hawaii. Sebastian Ruder and Barbara Plank. 2017. Learning to select data for transfer learning with Bayesian Optimization. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 372–382, Copenhagen, Denmark. Association for Computational Linguistics. 3135 Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715– 1725. Association for Computational Linguistics. Samuel L Smith, David HP Turban, Steven Hamblin, and Nils Y Hammerla. 2017. Offline bilingual word vectors, orthogonal transformations and the inverted softmax. arXiv preprint arXiv:1702.03859. Oscar T¨ackstr¨om, Dipanjan Das, Slav Petrov, Ryan McDonald, and Joakim Nivre. 2013. Token and type constraints for cross-lingual part-of-speech tagging. Transactions of the Association for Computational Linguistics, 1:1–12. Oscar T¨ackstr¨om, Ryan McDonald, and Jakob Uszkoreit. 2012. Cross-lingual word clusters for direct transfer of linguistic structure. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 477– 487, Montr´eal, Canada. Association for Computational Linguistics. Chen-Tse Tsai and Dan Roth. 2016. Cross-lingual wikification using multilingual embeddings. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 589–598, San Diego, California. Association for Computational Linguistics. Yulia Tsvetkov, Manaal Faruqui, Wang Ling, Brian MacWhinney, and Chris Dyer. 2016a. Learning the Curriculum with Bayesian Optimization for TaskSpecific Word Representation Learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 130–139, Berlin, Germany. 
Association for Computational Linguistics. Yulia Tsvetkov, Sunayana Sitaram, Manaal Faruqui, Guillaume Lample, Patrick Littell, David Mortensen, Alan W Black, Lori Levin, and Chris Dyer. 2016b. Polyglot neural language models: A case study in cross-lingual phonetic representation learning. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1357–1366, San Diego, California. Association for Computational Linguistics. Marlies van der Wees, Arianna Bisazza, and Christof Monz. 2017. Dynamic Data Selection for Neural Machine Translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1400–1410, Copenhagen, Denmark. Association for Computational Linguistics. Jiateng Xie, Zhilin Yang, Graham Neubig, Noah A. Smith, and Jaime Carbonell. 2018. Neural crosslingual named entity recognition with minimal resources. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 369–379. Association for Computational Linguistics. Jie Yang and Yue Zhang. 2018. Ncrf++: An opensource neural sequence labeling toolkit. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. Dongxu Zhang, Boliang Zhang, Xiaoman Pan, Xiaocheng Feng, Heng Ji, and Weiran XU. 2016. Bitext name tagging for cross-lingual entity annotation projection. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 461–470, Osaka, Japan. The COLING 2016 Organizing Committee. Barret Zoph, Deniz Yuret, Jonathan May, and Kevin Knight. 2016. Transfer learning for low-resource neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1568–1575, Austin, Texas. Association for Computational Linguistics.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3136–3145 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3136 CogNet: a Large-Scale Cognate Database Khuyagbaatar Batsuren† Gábor Bella† Fausto Giunchiglia†§ DISI, University of Trento, Trento, Italy† Jilin University, Changchun, China§ {k.batsuren; gabor.bella; fausto.giunchiglia}@unitn.it Abstract This paper introduces CogNet, a new, large-scale lexical database that provides cognates—words of common origin and meaning—across languages. The database currently contains 3.1 million cognate pairs across 338 languages using 35 writing systems. The paper also describes the automated method by which cognates were computed from publicly available wordnets, with an accuracy evaluated to 94%. Finally, statistics and early insights about the cognate data are presented, hinting at a possible future exploitation of the resource1 by various fields of lingustics. 1 Introduction Cognates are words in different languages that share a common origin and the same meaning, such as the English letter and the French lettre. Cognates and the problem of cognate identification have been extensively studied in the fields of language typology and historical linguistics, as cognates are considered useful for researching the relatedness of languages (Bhattacharya et al., 2018). Cognates are also used in computational linguistics, e.g., for lexicon extension (Wu and Yarowsky, 2018) or to improve cross-lingual NLP tasks such as machine translation or bilingual word recognition (Kondrak et al., 2003; Tsvetkov and Dyer, 2015). Despite the interest in using cognate data for research, state-of-the-art cognate databases have had limited practical uses from an applied perspective, for two reasons. Firstly, popular cognatecoded databases that are used in historical linguistics, such as ASJP (Wichmann et al., 2010), 1The CogNet resource and WikTra tool are available on http://cognet.ukc.disi.unitn.it. IELex2, or ABVD (Greenhill et al., 2008), cover only the small set of 225 Swadesh basic concepts, although with an extremely wide coverage of up to 4000 languages. Secondly, in these databases, lexical entries that belong to scripts other than Latin or Cyrillic mostly appear in phonetic transcription instead of using their actual orthographies in their original scripts. These limitations prevent such resources from being used in real-world computational tasks on written language. This paper describes CogNet, a new large-scale, high-precision, multilingual cognate database, as well as the method used to build it. Our main technical contributions are (1) a general method to detect cognates from multilingual lexical resources, with precision and recall parametrable according to usage needs; (2) a large-scale cognate database containing 3.1 million word pairs across 338 languages, generated with the method above; (3) WikTra, a multilingual transliteration dictionary and library derived from Wiktionary data; and (4) an online platform that lets users explore the resource. The paper is organised as follows. Section 2 presents the state of the art. Section 3 describes the main cognate discovery algorithm and section 4 the way various forms of evidence used by the algorithm are computed. The method is parametrised and the results are evaluated in section 5. Section 6 describes the resulting CogNet database in terms of structure and statistical insights. Finally, section 7 concludes the paper. 
2 State of the Art To our knowledge, cognates have so far been defined and explored in two fundamental ways by two distinct research communities. On the 2Indo-European Lexical Cognacy Database, http://ielex.mpi.nl/ 3137 one hand, cognate identification has been studied within linguistic typology and historical linguistics. On the other hand, computational linguists have been researching methods for cognate production. The very definition of the term ‘cognate’ varies according to the research community. In historical linguistics, cognates must have a provable etymological relationship and must be translated into each language (Bhattacharya et al., 2018). Accordingly, the English skyscraper and the German Wolkenkratzer are considered as cognates but the English song and the Japanese ソS⇣/songu/) are not. In computational linguistics, the notion of cognate is more relaxed with respect to etymology and loanwords are also considered as cognates (Kondrak et al., 2003). For our work we adopted the latter, computational point of view. In historical linguistics, cognate identification methods proceed in two main steps. First, a similarity matrix of all words is estimated by three types of similarity measures: semantic similarity, phonetic similarity, and orthographic similarity. For information on semantic similarity, special-purpose multilingual dictionaries, such as the well-known Swadesh List, are used. For orthographic similarity, string metrics (Hauer and Kondrak, 2011; St Arnaud et al., 2017) are often employed, e.g., edit distance, Dice’s coefficient, or LCSR. As these methods do not work across scripts, they are completed by phonetic similarity, exploiting transformations and sound changes across related languages (Kondrak, 2000; Jäger, 2013; Rama et al., 2017). Phonetic similarity measures, however, require phonetic transcriptions to be a priori available. More recently, historical linguists have started exploiting identified cognates to infer phylogenetic relationships across languages (Rama et al., 2018; Jäger, 2018). In computational linguistics, cognate production consists of finding for a word in a given language its cognate pair in another language. Stateof-the-art methods (Beinborn et al., 2013; Sennrich et al., 2016) have employed character-based machine translation, trained from parallel corpora, to produce cognates or transliterations. (Wu and Yarowsky, 2018) also employs similar techniques, as well as multilingual dictionaries, to produce large-scale cognate clusters for Romance and Turkic languages. Although the cognates produced in this manner are, in principle, a good source for improving certain cross-lingual tasks in NLP, the quality of the output often suffers due to not being able to handle certain linguistic phenomena properly. For example, words in languages such as Arabic or Hebrew are written without vowels and machine-produced transliterations often fail to vowelize such words (Karimi et al., 2011). The solution we propose is the use of a dictionary-based transliteration tool over machine transliteration. Our method provides new contributions for both research directions. Firstly, to our knowledge no other work on cognate generation has so far used high-quality multilingual lexical resources on a scale as large as ours, covering hundreds of languages and more than 100,000 cross-lingual concepts. Secondly, this large cross-lingual coverage could only be achieved thanks to a robust transliteration tool that is part of the contributions of our paper. 
Finally, our novel, combined use of multiple—orthographic, semantic, geographic, and etymological—sources of evidence for detecting cognates was crucial to obtain high-quality results, in terms of both precision and recall. 3 The Algorithm For our work we have adopted a computationallinguistic interpretation of the notion of cognate (Kondrak et al., 2003): two words in different languages are cognates if they have the same meaning and present a similarity in orthography, resulting from a supposed underlying etymological relationship (common ancestry or borrowing). Based on this interpretation, our algorithm is based on three main principles: (1) semantic equivalence, i.e., that the two words share a common meaning; (2) sufficient proof of etymological relatedness; and (3) the logical transitivity of the cognate relationship. The core resource for obtaining cross-lingual evidence on semantic equivalence—i.e., the sameness of word meanings—is the Universal Knowledge Core (UKC), a large multilingual lexicosemantic database (Giunchiglia et al., 2018) already used both in linguistics research as well as for practical applications (Bella et al., 2016; Giunchiglia et al., 2017; Bella et al., 2017). The UKC includes the lexicons and lexicosemantic relations for 338 languages, containing 1,717,735 words and 2,512,704 languagespecific word meanings. It was built from wordnets (Miller, 1995) and wiktionaries converted 3138 into wordnets (Bond and Foster, 2013)). As all of the resources composing the UKC were built and validated by humans(Giunchiglia et al., 2015), we consider the quality of our input data to be high enough for obtaining accurate results on cognates (Giunchiglia et al., 2017). As most wordnets map their units of meaning (synsets in WordNet terminology) to English meanings, they can effectively be interconnected into a cross-lingual lexical resource. The UKC reifies all of these mappings as supra-lingual lexical concepts (107,196 in total, excluding named entities such as Ulanbaatar). For example, if the German Fahrrad and the Italian bicicletta are mapped to the English bicycle then a single concept is created to which all three language-specific meanings (i.e., wordnet synsets) will be mapped. In terms of etymological evidence, we use both direct and indirect evidence of etymological relatedness. Direct evidence is provided by goldstandard etymological resources, such as the one we use and present in section 4.1. Such evidence, however, is relatively sparse and would not, in itself, provide high recall. We therefore also consider indirect evidence in the form of a combined orthographic–geographic relatedness: a measure of geographic proximity of languages combined with the orthographic similarity of words, involving transliteration, can provide strong clues on language contact and probable cross-lingual lexical borrowing. Finally, we exploit logical transitivity in order further to improve recall: we build on the intuition that if words wa and wb are cognates and wb and wc are cognates then wa and wc are also cognates. For example, if the German Katze is found to be a cognate of the English cat (based on direct etymological evidence) and cat is found to be a cognate of the French chat (based on orthography) then Katze and chat are also considered to be cognates). Based on these principles, we have implemented a cognate discovery algorithm as shown in algorithm 1. Its input is a single lexical concept from the UKC (the algorithm being applicable to every concept in loop). 
It builds an undirected graph where each node represents a word and each edge between two nodes represents a cognate relationship. The process starts by retrieving the lexicalisations of the input concept in all available languages and creating the corresponding word nodes in the graph (lines 2–5). All such words thus fulfil the criterion of semantic equivalence above. Then, for all different-language word pairs that express the concept (lines 6–9), we verify whether etymological evidence exists for a potential cognate relationship. The latter may either be direct evidence (EtyRel, line 10) or indirect, which we implement as a relatedness score combining orthographic similarity (OrthSim) and geographic proximity (GeoProx). We consider indirect evidence to be sufficient if this combined score is superior to an experimental threshold TF (line 12). In case either direct or indirect evidence is found, an edge between the two word nodes is created (lines 10–13). As the last step, in order to apply the principle of logical transitivity, the transitive closure of the graph is computed (line 15). In the resulting graph G+ each connected subgraph represents a group of cognate words.

Algorithm 1: Cognate Discovery Algorithm
Input:  c, a lexical concept
Input:  R, a lexical resource
Output: G+, graph of all cognates of c
 1  V, E ← ∅
 2  L ← Languages_R(c)
 3  for each language l ∈ L do
 4      for each word w ∈ Words_R(c, l) do
 5          V ← V ∪ {v = ⟨w, l⟩}
 6  for each node v1 = ⟨w1, l1⟩ ∈ V do
 7      for each node v2 = ⟨w2, l2⟩ ∈ V do
 8          if l1 = l2 then
 9              continue
10          if EtyRel(w1, l1, w2, l2) then
11              E ← E ∪ {e = ⟨v1, v2⟩}
12          else if OrthSim(w1, l1, w2, l2) + TG × GeoProx(l1, l2) > TF then
13              E ← E ∪ {e = ⟨v1, v2⟩}
14  G ← ⟨V, E⟩
15  G+ ← TransitiveClosure(G)
16  return G+
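The following is a minimal Python sketch of Algorithm 1, not the authors' implementation. The words_for_concept lookup and the ety_rel, orth_sim and geo_prox evidence functions are placeholders (the latter three are defined in the next section), and the default thresholds mirror the best configuration reported later in section 5 (TF = 0.71, TG = 0.04). The transitive closure is obtained by taking connected components of the evidence graph via a union-find structure.

```python
from itertools import combinations

def discover_cognates(concept, words_for_concept, languages,
                      ety_rel, orth_sim, geo_prox, t_g=0.04, t_f=0.71):
    """Sketch of Algorithm 1: returns groups of cognate (word, language) nodes."""
    # Lines 2-5: one node per lexicalisation of the concept.
    nodes = [(w, l) for l in languages for w in words_for_concept(concept, l)]

    # Union-find: connected components of the evidence graph give the
    # transitive closure of the cognate relation.
    parent = {v: v for v in nodes}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    def union(a, b):
        parent[find(a)] = find(b)

    # Lines 6-13: add an edge when direct or indirect evidence is found.
    for (w1, l1), (w2, l2) in combinations(nodes, 2):
        if l1 == l2:
            continue
        direct = ety_rel(w1, l1, w2, l2)
        indirect = orth_sim(w1, l1, w2, l2) + t_g * geo_prox(l1, l2) > t_f
        if direct or indirect:
            union((w1, l1), (w2, l2))

    # Lines 14-16: each connected component with more than one word
    # is a cognate group.
    groups = {}
    for v in nodes:
        groups.setdefault(find(v), []).append(v)
    return [g for g in groups.values() if len(g) > 1]
```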
In order to be able to identify cognates across writing systems, we apply transliteration to the Latin script (also known as romanization) using the WikTra tool. Orthographic similarity is thus computed as: OrthSim(w1, w2) = max{LCSSim(w1, w2), LCSSim(WikTra(w1), WikTra(w2))} (3) 3http://www1.icsi.berkeley.edu/⇠demelo/etymwn/, accessed on 10/14/2018. WikTra is a dictionary-based transliteration tool compiled from information collected from Wiktionary and developed specifically for this work by the authors4. It is Unicode-based and supports 85 languages in 35 writing systems, defining transliteration rules and codes according to international standards, as developed by the Wiktionary community (the largest community in lexicography). An illustration of the output provided by WikTra compared to three existing transliteration tools is provided in table 1. The use of WikTra with respect to existing tools is justified by a need for high-quality results that also cover complex cases of orthography, e.g., in Semitic scripts where vowels are typically omitted. In particular, Junidecode5 is a character-based transliterator, an approach that seriously limits its accuracy. The Google transliterator is dictionary-based and is therefore of higher quality, but it supports a lower number of languages and is not freely available. Finally, uroman (Hermjakob et al., 2018) is a new, high-quality, dictionary-based tool that nevertheless provides a limited support for scripts without vowels (e.g., Arabic or Hebrew), as also visible in table 1. While WikTra gains its high accuracy from human-curated Wiktionary data, it still needs to be improved for Thai and Japanese. In Thai, WikTra only works on monosyllabic words, and it needs an additional tool to recognize syllables. In Japanese, it only works with Hiragana and Katakana scripts and not with Kanji (Chinese characters). We therefore combined WikTra with the Kuromoji6 transliteration tool. 4.3 Geographic Proximity We exploit geographic information on languages in order to take into account the proximity of language speakers for the prediction of borrowing. Our hypothesis is that, even if in the last century lexical borrowing on a global scale has been faster than ever before, the effect of geographic distance is still a significant factor when applying cognate discovery to entire vocabularies. This effect is combined with orthographic similarity in line 12 of algorithm 1, in a way that geographic proximity increases the overall likelihood of word pairs being cognates, without being a necessary condition. 4https://github.com/kbatsuren/wiktra 5https://github.com/gcardone/junidecode 6https://github.com/atilika/kuromoji 3140 Table 1: Comparison with state-of-the art transliteration tools # Languages Word Uroman Junidecode Google WikTra 1 English book book book book book 2 Malayalam malayaallam mlyaallN malay¯al.a ˙m malay¯al.am. 
Table 1: Comparison with state-of-the-art transliteration tools.

#  Languages  Word  Uroman  Junidecode  Google  WikTra
1  English  book  book  book  book  book
2  Malayalam  malayaallam  mlyaallN  malay¯al.a ˙m  malay¯al.am.
3  Arabic  nwaa  nw@  nawa  naw¯atun
4  Japanese  コンピュータ  konpyuta  konpiyuta  konpy¯ut¯a  konpy¯ut¯a*
5  Thai  raachaatiraa  raachaathiraad  r¯a ch¯a thi r¯ad  raa-chaa-tí-râat b
6  Russian  moskva  moskva  moskva  moskva
7  Hindi  devanaa  devnaagrii  devanaagaree  devn¯agr¯ı
8  Bengali  baangla  baaNlaa  b¯anl¯a  bangla
9  Greek  anaute  anauteo  an¯aftéo  anauté¯o
10  Kashmiri  kampivwuttar  khampy[?]w?ttar  kampe¯ut.ar
11  Persian  armnstan  rmnstn  armanestân
12  Hebrew  yshshkr  yshshkr  yissachar  yi´s´s¯ak¯¯ar
13  Tamil  rehs  reHs  reh.s  rex
14  Ethiopic  aadise aababaa  'aadise 'aababaa  ¯ad¯ısi ¯abeba  -ädis -äbäba
15  Tibetan  kha·pa  kh-pr  kha par
16  Korean  메가폰  megapon  megapon  megapon  megapon
17  Armenian  hayiastan  hayastan  hayastan  hayastan
18  Uyghur  yeayealae  y'y'-lae  a'ile
19  Khmer  kromaaro  krmaar  krama r  krâméar
20  Telugu  amkapali  aNkpaalli  a˙nkap¯al.i  a˙nkap¯al.i
21  Odia  oddishaa  rodd'ishaa  ori´sa
22  Burmese  sannykhre  snny[?]:kh[?]e  saeehkyay  sany:hkre
* WikTra in Japanese only works with the Hiragana and Katakana scripts.
b WikTra in Thai only works with a sequence of syllables.

Our relatively simple solution considers only the languages of the input words, computing a language proximity value between 0 and 1, as follows:

GeoProx(l1, l2) = min(T_D / GeoDist(l1, l2), 1.0).    (4)

The function GeoDist(l1, l2) is an approximate 'geographic distance' between two languages l1 and l2, based on the geographical areas where the languages are spoken. The constant T_D corresponds to a minimal distance: if two languages are spoken within this distance then they have maximum geographic relatedness. T_D is set empirically, as described in section 5.2. Distances between languages are provided by the WALS resource,⁷ one of the most comprehensive language databases. WALS provides latitude and longitude coordinates for a language given as input. While a single coordinate returned for a language may in some cases be a crude approximation of linguistic coverage (e.g., Spanish is spoken both in Spain and in most countries of Latin America), even this level of precision was found to improve our evaluation results.

[Footnote 7: https://wals.info]

5 Evaluation

This section describes how CogNet was evaluated on a manually built cognate corpus and how its parameters were tuned to optimise results.

5.1 Dataset Annotation

While our initial idea was to use existing cognate datasets for evaluation, the most comprehensive databases turned out to represent cognates in their phonetic transcriptions instead of having words written in their original scripts. Such data was not usable to test our method, which performs transliteration on its own. Consequently, we created a dataset of 40 concepts with fully annotated sets of cognate groups. On average, a concept was represented in 107 languages by 129 words: 5,142 words in total for the 40 concepts. The concepts were chosen from the Swadesh basic word list and from the WordNet core concepts (Boyd-Graber et al., 2006). The lexicalizations (words) corresponding to these concepts were retrieved from the UKC. For each concept, we asked two language experts to find cognate clusters among its lexicalizations. The experts made their decisions based on online resources such as Wiktionary and the Online Etymology Dictionary.⁸
[Footnote 8: https://www.etymonline.com]
Cohen's Kappa score, inter-
Methods                              TF    TG    TD   P      R      F1
Baseline 1: LCS                      0.60  -     -    94.70  25.62  40.32
Baseline 2: Consonant                -     -     -    98.07  19.11  31.98
LCS + Geo                            0.60  0.01  1.3  94.02  27.63  42.71
LCS + Geo + EWN                      0.60  0.01  1.3  94.10  30.41  45.97
LCS + Geo + WikTra                   0.63  0.02  1.2  94.15  42.42  58.49
LCS + Geo + WikTra + EWN             0.63  0.02  1.2  94.20  44.86  60.78
LCS + Geo + Trans                    0.68  0.02  1.2  95.94  44.27  60.59
LCS + Geo + Trans + EWN              0.70  0.06  1.3  97.32  53.53  69.07
LCS + Geo + Trans + WikTra           0.72  0.06  1.2  94.14  77.59  85.07
LCS + Geo + Trans + WikTra + EWN     0.71  0.04  1.1  93.94  86.32  89.97

annotator agreement, was 95.15%. The resulting human-annotated dataset contained 5,142 words, 38,447 pairs of cognate words and 320,338 pairs of non-cognate words. We divided this dataset into two equal parts: the first 20 concepts for parameter configuration and the second 20 concepts for evaluation.

5.2 Algorithm Configuration

The goal of configuration was to optimise the algorithm with respect to three hyperparameters: the threshold of combined orthographic–geographic relatedness TF (section 3), the geographic proximity contribution parameter TG, and the minimum distance TD (section 4.3). We created a three-dimensional grid with TF = [0.0; 1.0] (the higher the value, the more similar the strings need to be to be considered cognates), TG = [0.0; 1.0] (the higher the value, the more geographic proximity is considered as evidence), and TD = [0.0; 22.0] (here, the unit of 1.0 corresponds to a distance of 1000 km, within which geographic relatedness is a constant maximum). In this grid, we computed optimal values for each parameter (in increments of 0.01) based on performance on the configuration dataset described in section 5.1. With these optimal settings, we evaluated all possible combinations of the various components of the cognate generation method, in order to understand their relative contribution to the overall score. Since our ultimate goal is to generate high-quality knowledge, we favoured precision over recall, setting our minimum precision threshold to 95% and maximizing recall with respect to this constraint. The best settings (computed on the parameter configuration dataset), as well as the corresponding precision–recall figures (computed on the evaluation dataset), are reported in table 2. Although we set the precision threshold to 95% on the configuration dataset, we obtained slightly lower precision, about 94%, on the evaluation dataset.

The optimal geographic region parameter TD varies between 1.1 and 1.3, which corresponds to a radius of 1,100–1,300 km: languages spoken within such a distance tend to share more cognates. One interesting insight from table 2 concerns the use of logical transitivity. While it is an extremely efficient component of our algorithm, in order to maintain precision it requires the relatedness threshold TF to be increased from [0.6; 0.63] to [0.68; 0.71] and the influence of geographic relatedness TG from [0.1; 0.2] to [0.2; 0.6]. This means that for transitivity to hold, both the overall relatedness criterion and the geographic proximity need to become stricter.

5.3 Evaluation Results

We evaluated the effect of the various components of our method (geographic relatedness, WikTra transliteration, Etymological WordNet, transitivity) on its overall performance.
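Before turning to the component comparison, the scoring logic configured above can be illustrated with a short sketch. The exact combination rule of algorithm 1, the LCS normalisation, the romanize stand-in for WikTra, and all function names below are assumptions for illustration, not the released implementation; the default thresholds are taken from the "LCS + Geo + WikTra" row of table 2.

def lcs_length(a, b):
    # Standard dynamic-programming longest common subsequence.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ca in enumerate(a, 1):
        for j, cb in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if ca == cb else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def lcs_sim(w1, w2):
    # Assumed normalisation: LCS length over the longer word.
    return lcs_length(w1, w2) / max(len(w1), len(w2))

def orth_sim(w1, w2, romanize):
    # Equation (3): take the better of the raw and the romanized comparison.
    return max(lcs_sim(w1, w2), lcs_sim(romanize(w1), romanize(w2)))

def geo_prox(geo_dist_km, TD=1.2):
    # Equation (4); distances are expressed in units of 1000 km.
    return 1.0 if geo_dist_km <= 0 else min(TD / (geo_dist_km / 1000.0), 1.0)

def is_cognate_candidate(w1, w2, geo_dist_km, romanize, TF=0.63, TG=0.02, TD=1.2):
    # Assumed combination: geographic proximity adds a TG-weighted bonus to
    # orthographic similarity before thresholding against TF.
    score = orth_sim(w1, w2, romanize) + TG * geo_prox(geo_dist_km, TD)
    return score >= TF

# Toy pair with an identity "romanizer" standing in for WikTra:
print(is_cognate_candidate("nacht", "night", geo_dist_km=700, romanize=lambda w: w))
# -> False under the default thresholds (score 0.62 < TF = 0.63)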
As a baseline, we used two string similarity methods often used in cognate identification (St Arnaud et al., 2017): LCS, i.e., the longest common subsequence ratio of two words (which we also use in equation 2), and Consonant, which is a heuristic method that checks if the first two consonants of the words are identical. Although the baseline Consonant method achieved the highest precision of 98.07%, its recall is the lowest, 19.11%, due to being lim3142 ited to Latin characters. Adding geographic proximity, direct etymological evidence, and transliteration to the algorithm increased recall in a consistent manner, by about 2%, 3%, and 15%, respectively, all the while maintaining precision at the same level. Computing the transitive closure, finally, had a major multiplicator effect on recall, bringing it to 86.32%. With this full setup we were able to generate 3,167,642 cognate pairs across 338 languages. In order to cross-check the quality of the output, we randomly sampled 400 cognate pairs not covered by the evaluation corpus and had them re-evaluated by the same experts. Accuracy was found to fall in the 93–97% range, very much in line with the goal of 95% we initially set in section 5.2. 6 Exploring CogNet At an accuracy of 94%, our algorithm has generated 3,167,642 cognates. They cover 567,960 words and 80,836 concepts, corresponding to 33.06% of all words and 73.52% of all concepts in the UKC: one word out of three and three concepts out of four have at least one cognate relationship. In terms of WordNet formalism, cognate relationships can be expressed as cross-lingual sense relations that connect (word, synset) pairs—reified in wordnets as senses—across languages. As not all wordnets represent senses explicitly, CogNet encodes these relationships in the following tuple form: (PWN_synset, w1, l1, w2, l2, metadata) where PWN_synset is the Princeton WordNet English synset ID representing the shared meaning of the cognate pair, w1 and w2 are the two words, l1 and l2 are their respective languages (expressed as ISO-639-3 codes), and metadata is a set of attributes describing the cognate pair, such as the type of evidence for the relationship (direct etymological or indirect). The entire CogNet resource is described and freely downloadable from the web9. While we expect CogNet to provide linguistic insights for both theoretical and applied research, we are just starting to exploit its richness. As a first result, we have developed an online tool10 for the visual exploration of cognate data (see figure 1 for an illustration). In the long term, this web tool 9http://cognet.ukc.disi.unitn.it 10http://linguarena.eu is intended for linguists both for the exploration of data and for collaborative work on extending the resource. We also carried out an initial exploration of cognate data along the axes of language, language family, and geographic distance. Figure 2 shows the number of cognates found at a given geographic distance (i.e., the distance of the speakers of the two languages, as defined in section 4.3). We observe that the vast majority of cognates is found within a distance of about 3,000km. Our interpretation of these results is that, by and large, locality is still a major influence on modern lexicons, despite the globalising effects of the last centuries. 
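As noted above, the transitive closure step is the single largest contributor to recall. A minimal sketch of that closure via union-find is given below; it deliberately omits the stricter TF and TG thresholds that the full algorithm applies before accepting transitive pairs, and the toy words are made up.

def transitive_closure(cognate_pairs):
    # cognate_pairs: iterable of (w1, w2) tuples judged cognate by the base criteria.
    # Returns the full set of pairs implied by transitivity.
    parent = {}

    def find(w):
        parent.setdefault(w, w)
        while parent[w] != w:
            parent[w] = parent[parent[w]]   # path halving
            w = parent[w]
        return w

    for a, b in cognate_pairs:
        parent[find(a)] = find(b)

    clusters = {}
    for w in parent:
        clusters.setdefault(find(w), []).append(w)

    closed = set()
    for members in clusters.values():
        for i, a in enumerate(members):
            for b in members[i + 1:]:
                closed.add((a, b))
    return closed

# Two direct pairs imply a third one:
print(transitive_closure([("night", "nacht"), ("nacht", "natt")]))
# three pairs: the two input pairs plus ("night", "natt")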
Let us note that the geographic proximity component of our algorithm alone could not have caused this distribution, as it had a relatively minor overall contribution on the results (see the geographic factor TG = 0.04 in table 2). In order to avoid biasing per-language statistics by the incompleteness of the lexicons (wordnets) used, we limited our study to the 45 languages with a vocabulary size larger than 10,000 words. As a further abstraction from lexicon size, we introduce the notion of cognate density, defined over a set of words as the ratio of words covered by at least one cognate pair of CogNet. In other words, working with cognate densities allows us to characterise the ‘cognate content’ of each language independently of the wordnet size. Cognate densities for the 45 languages studied show a wide spread between languages with the highest density (the top five language being Indonesian: 60.80%, Czech: 59.05%, Catalan: 58.66%, Malay: 57.63%, and French: 57.25%) and those with the lowest (the bottom five languages being Thai: 7.87%, Arabic: 9.01%, Persian: 9.64%, Mongolian: 10.37%, and Mandarin Chinese: 11.03%). The main factor behind high cognate density is the presence of closely related languages in our data: as Malay and Indonesian are mutually intelligible registers of the same language, the existence of separate wordnets for the two naturally results in a high proportion of shared vocabulary. Inversely, languages on the other end of the spectrum tend not to have major living languages that are closely related. Let us finally note that non-perfect transliteration and failed transliteration-based matches may also be a reason for low cognate recall for languages with very different scripts, such as Chinese, Arabic, or 3143 Figure 1: Cognate sets of the concept ‘song’, represented with different colours. It is easy to observe the effects of language families (e.g., red triangles stand for Romance languages) and geographic proximity (e.g., the higher density of orange in South-West Asia and green in Central Asia). 0 50000 100000 150000 0 5 10 15 geographic distance (1 in 1000km) #cognates Figure 2: The number of cognates according to the geographic distance of the language speakers. Thai. In order to verify these intuitions, we examined cognate densities for the 45 languages manually clustered into 16 language families (see table 3, the language name was kept for clusters of size 1). Indeed, families such as Malay, Romance, Slavic, or Indo-Aryan, well known for containing several mutually intelligible language pairs, came out on top, while families with generally fewer or mutually non-intelligible members at the bottom. The only outlier is Basque that, despite being an isolate, is close to the resource-wide average cognate density of 33%. 7 Conclusions In this paper, we have demonstrated a general method for building a cognate database using existing wordnet resources. Identifying cognates based on orthography for words written in 35 different writing systems, as opposed to phonetic data, made the problem statement novel with respect to existing research in cognate identification. Family Density Family Density Malay 59.22% Greek 22.99% Romance 53.32% Niger-Congo 18.63% Slavic 36.67% Japanese 12.16% Indo-Aryan 36.08% Sino-Tibetan 11.22% Germanic 34.10% Mongolian 10.37% Basque 32.82% Persian 9.64% Dravidian 24.79% Arabic 9.01% Finno-Ugric 24.57% Thai 7.87% Table 3: Cognate density by language family, computed over the 45 largest-vocabulary languages. 
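The cognate densities reported in table 3 can be computed directly from CogNet tuples in the format described in section 6. The sketch below is a minimal illustration; the lexicons and the single cognate pair are made up.

from collections import defaultdict

def cognate_density(vocab_by_lang, cognet_tuples):
    # vocab_by_lang: dict mapping ISO-639-3 code -> set of words in that lexicon.
    # cognet_tuples: iterable of (pwn_synset, w1, l1, w2, l2, metadata) records.
    # Returns, per language, the share of words covered by at least one cognate pair.
    covered = defaultdict(set)
    for _synset, w1, l1, w2, l2, _meta in cognet_tuples:
        covered[l1].add(w1)
        covered[l2].add(w2)
    return {
        lang: len(covered[lang] & words) / len(words)
        for lang, words in vocab_by_lang.items()
        if words
    }

# Toy example:
vocab = {"fra": {"chanson", "nuit", "eau"}, "ita": {"canzone", "notte"}}
pairs = [("song.n.01", "chanson", "fra", "canzone", "ita", {"evidence": "direct"})]
print(cognate_density(vocab, pairs))  # fra: one word in three covered, ita: one in two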
The use of a large-scale cross-lingual database and a combination of linguistic, semantic, etymological, and geographic evidence resulted in what in our knowledge is the largest cognate database both in terms of the number of concepts and of the writing systems covered. The evaluation showed that the resource has promisingly high quality, with precision and recall adjustable through the algorithm parameters. The resource has been made available online, together with a graphical webbased tool for the exploration of cognate data, our hope being to attract both linguists and computer scientists as potential users. Acknowledgments This paper was partly supported by the InteropEHRate project, co-funded by the European Union (EU) Horizon 2020 programme under grant number 826106. The first author is supported by the Cyprus Center for Algorithmic Transparency, which has received funding from the European Union’s Horizon 2020 Research and Innovation Program under Grant Agreement No. 810105. 3144 References Lisa Beinborn, Torsten Zesch, and Iryna Gurevych. 2013. Cognate production using character-based machine translation. In Proceedings of the Sixth International Joint Conference on Natural Language Processing, pages 883–891. Gabor Bella, Fausto Giunchiglia, and Fiona McNeill. 2017. Language and Domain Aware Lightweight Ontology Matching. Web Semantics: Science, Services and Agents on the World Wide Web. Gabor Bella, Alessio Zamboni, and Fausto Giunchiglia. 2016. Domain-Based Sense Disambiguation in Multilingual Structured Data. In The Diversity Workshop at the 22nd European Conference on Artificial Intelligence (ECAI 2016). Tanmoy Bhattacharya, Nancy Retzlaff, Damián E Blasi, William Croft, Michael Cysouw, Daniel Hruschka, Ian Maddieson, Lydia Müller, Eric Smith, Peter F Stadler, et al. 2018. Studying language evolution in the age of big data. Journal of Language Evolution. Francis Bond and Ryan Foster. 2013. Linking and extending an open multilingual wordnet. In ACL (1), pages 1352–1362. Jordan Boyd-Graber, Christiane Fellbaum, Daniel Osherson, and Robert Schapire. 2006. Adding dense, weighted connections to wordnet. In Proceedings of the third international WordNet conference, pages 29–36. Citeseer. Gerard De Melo. 2014. Etymological wordnet: Tracing the history of words. In LREC, pages 1148– 1154. Citeseer. Fausto Giunchiglia, Khuyagbaatar Batsuren, and Gabor Bella. 2017. Understanding and exploiting language diversity. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI-17), pages 4009–4017. Fausto Giunchiglia, Khuyagbaatar Batsuren, and Abed Alhakim Freihat. 2018. One world—seven thousand languages. In Proceedings 19th International Conference on Computational Linguistics and Intelligent Text Processing, CiCling2018, 18-24 March 2018. Fausto Giunchiglia, Mladjan Jovanovic, Mercedes Huertas-Migueláñez, and Khuyagbaatar Batsuren. 2015. Crowdsourcing a large scale multilingual lexico-semantic resource. In AAAI Conference on Human Computation and Crowdsourcing (HCOMP15). Simon J Greenhill, Robert Blust, and Russell D Gray. 2008. The austronesian basic vocabulary database: from bioinformatics to lexomics. Evolutionary Bioinformatics, 4:EBO–S893. Bradley Hauer and Grzegorz Kondrak. 2011. Clustering semantically equivalent words into cognate sets in multilingual lists. In Proceedings of 5th international joint conference on natural language processing, pages 865–873. Ulf Hermjakob, Jonathan May, and Kevin Knight. 2018. 
Out-of-the-box universal romanization tool uroman. Proceedings of ACL 2018, System Demonstrations, pages 13–18. Gerhard Jäger. 2013. Phylogenetic inference from word lists using weighted alignment with empirically determined weights. Language Dynamics and Change, 3(2):245–291. Gerhard Jäger. 2018. Global-scale phylogenetic linguistic inference from lexical resources. CoRR, abs/1802.06079. Sarvnaz Karimi, Falk Scholer, and Andrew Turpin. 2011. Machine transliteration survey. ACM Computing Surveys (CSUR), 43(3):17. Grzegorz Kondrak. 2000. A new algorithm for the alignment of phonetic sequences. In Proceedings of the 1st North American chapter of the Association for Computational Linguistics conference, pages 288–295. Association for Computational Linguistics. Grzegorz Kondrak, Daniel Marcu, and Kevin Knight. 2003. Cognates can improve statistical translation models. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology: companion volume of the Proceedings of HLT-NAACL 2003–short papers-Volume 2, pages 46–48. Association for Computational Linguistics. George A Miller. 1995. Wordnet: a lexical database for english. Communications of the ACM, 38(11):39– 41. Taraka Rama, Johann-Mattis List, Johannes Wahle, and Gerhard Jäger. 2018. Are automatic methods for cognate detection good enough for phylogenetic reconstruction in historical linguistics? In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, volume 2, pages 393–400. Taraka Rama, Johannes Wahle, Pavel Sofroniev, and Gerhard Jäger. 2017. Fast and unsupervised methods for multilingual cognate clustering. arXiv preprint arXiv:1702.04938. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, volume 1, pages 1715–1725. 3145 Adam St Arnaud, David Beck, and Grzegorz Kondrak. 2017. Identifying cognate sets across dictionaries of related languages. In Proceedings of the EMNLP 2017, pages 2519–2528. Yulia Tsvetkov and Chris Dyer. 2015. Lexicon stratification for translating out-of-vocabulary words. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), volume 2, pages 125–131. Søren Wichmann, André Müller, Viveka Velupillai, Cecil H Brown, Eric W Holman, Pamela Brown, Sebastian Sauppe, Oleg Belyaev, Matthias Urban, Zarina Molochieva, et al. 2010. The asjp database (version 13). URL: http://email. eva. mpg. de/˜ wichmann/ASJPHomePage. htm, 3. Winston Wu and David Yarowsky. 2018. Creating large-scale multilingual cognate tables. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018).
2019
302
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3146–3155 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3146 Neural Decipherment via Minimum-Cost Flow: from Ugaritic to Linear B Jiaming Luo CSAIL, MIT j [email protected] Yuan Cao Google Brain [email protected] Regina Barzilay CSAIL, MIT [email protected] Abstract In this paper we propose a novel neural approach for automatic decipherment of lost languages. To compensate for the lack of strong supervision signal, our model design is informed by patterns in language change documented in historical linguistics. The model utilizes an expressive sequence-to-sequence model to capture character-level correspondences between cognates. To effectively train the model in an unsupervised manner, we innovate the training procedure by formalizing it as a minimum-cost flow problem. When applied to the decipherment of Ugaritic, we achieve a 5.5% absolute improvement over state-of-the-art results. We also report the first automatic results in deciphering Linear B, a syllabic language related to ancient Greek, where our model correctly translates 67.3% of cognates.1 1 Introduction Decipherment is an ultimate low-resource challenge for both humans and machines. The lack of parallel data and scarce quantities of ancient text complicate the adoption of neural methods that dominate modern machine translation. Even for human experts this translation scenario proved to be onerous: a typical decipherment spans over decades and requires encyclopedic domain knowledge, prohibitive manual effort and sheer luck (Robinson, 2002). Moreover, techniques applied for the decipherment of one lost language are rarely reusable for another language. As a result, every significant human decipherment is considered to be one of a kind, “the rarest category of achievement” (Pope, 1975). Prior work has demonstrated the feasibility of automatic decipherment. Snyder et al. (2010) 1Code and all datasets are hosted in https:// github.com/j-luo93/NeuroDecipher. translated the ancient Semitic language Ugaritic into Hebrew. Since both languages are derived from the same proto-Semitic origin, the translation involved matching their alphabets at the character level and mapping cognates at the word level. The effectiveness of their approach stemmed from its ability to incorporate expansive linguistic knowledge, including expected morphological correspondences, the nature of alphabet-level alignment, etc. As with human decipherment, this approach is highly customized for a given language pair and does not generalize to other lost languages. In this paper, we introduce a neural decipherment algorithm that delivers strong performances across several languages with distinct linguistic characteristics. As in prior work, our input consists of text in a lost language and a non-parallel corpus in a known related language. The model is evaluated on the accuracy of aligning words from the lost language to their counterparts in the known language. To maintain the language-independent nature of the approach, we want to build the model around the most basic decipherment principles applicable across multiple languages. These principles are informed by known patterns in language change extensively documented in historical linguistics (Campbell, 2013). At the character level, we know that characters that originate from the same proto-language have similar distributional profiles with respect to their occurrences. 
Another important constraint at the character level is that cognate alignment is monotonic since character reorderings within cognate pairs are rare. At the vocabulary level, we want to enforce skewed mapping at the word level assuming roughly oneto-one correspondence. Finally, we want to ensure that the resulting vocabulary mapping covers a significant portion of the lost language vocabu3147 lary and can also account for the presence of words which are not cognates. Our model captures both character-level and word-level constraints in a single generative framework wherein vocabulary level alignment is a latent variable. We model cognate generation process using a character-level sequence-tosequence model which is guided towards monotonic rewriting via regularization. Distributional similarity at the character level is achieved via universal character embeddings. We enforce constraints on the vocabulary mapping via minimumcost flow formulation that controls structural sparsity and coverage on the global cognate assignment. The two components of the model – sequence-to-sequence character alignment and flow constraints – are trained jointly using an EMstyle procedure. We evaluate our algorithm on two lost languages – Ugaritic and Linear B. In the case of Ugaritic, we demonstrate improved performance of cognate identification, yielding 5.5% absolute improvement over previously published results (Snyder et al., 2010). This is achieved without assuming access to the morphological information in the known language. To demonstrate the applicability of our model to other linguistic families, we also consider decipherment of Linear B, an ancient script dating back to 1450BC. Linear B exhibits a number of significant differences from Ugaritic, most noted among them its syllabic writing system. It has not been previously deciphered by automatic means. We were able to correctly translate 67.3% of Linear B cognates into their Greek equivalents in the decipherment scenario. Finally, we demonstrate that the model achieves superior performance on cognate datasets used in previous work (BergKirkpatrick and Klein, 2013). 2 Related Work Decoding of Ciphered Texts Early work on decipherment was primarily focused on man-made ciphers, such as substitution ciphers. Most of these approaches are based on EM algorithms which are further adjusted for target decipherment scenarios. These adjustments are informed by assumptions about ciphers used to produce the data (Knight and Yamada, 1999; Knight et al., 2006; Ravi and Knight, 2011; Pourdamghani and Knight, 2017). Besides the commonly used EM algorithm, (Nuhn et al., 2013; Hauer et al., 2014; Kambhatla et al., 2018) also tackles substitution decipherment and formulate this problem as a heuristic search procedure, with guidance provided by an external language model (LM) for candidate rescoring. So far, techniques developed for man-made ciphers have not been shown successful in deciphering archaeological data. This can be attributed to the inherent complexity associated with processes behind language evolution of related languages. Nonparallel Machine Translation Advancements in distributed representations kindled exciting developments in this field, including translations at both the lexical and the sentence level. 
Lexical translation is primarily formulated as alignment of monolingual embedding spaces into a crosslingual representation using adversarial training (Conneau et al., 2017), VAE (Dou et al., 2018), CCA (Haghighi et al., 2008; Faruqui and Dyer, 2014) or mutual information (Mukherjee et al., 2018). The constructed monolingual embedding spaces are usually of high quality due to the large amount of monolingual data available. The improved quality of distributed representations has similarly strong impact on non-parallel translation systems that operate at the sentence level (Pourdamghani and Knight, 2017). In that case, access to a powerful language model can partially compensate for the lack of explicit parallel supervision. Unfortunately, these methods cannot be applied to ancient texts due to the scarcity of available data. Decoding of Ancient Texts (Snyder et al., 2010) were the first to demonstrate the feasibility of automatic decipherment of a dead language using non-parallel data. The success of their approach can be attributed to cleverly designed Bayesian model that structurally incorporated powerful linguistic constraints. This includes customized priors for alphabet matching, incorporation of morphological structure, etc. (Berg-Kirkpatrick and Klein, 2011) proposed an alternative decipherment approach based on a relatively simple model paired with sophisticated inference algorithm. While their model performed well in a noise-free scenario when matching vocabularies only contain cognates, it has not been shown successful in a full decipherment scenario. Our approach outperforms these models in both scenarios. Moreover, we have demonstrated that the 3148 same architecture deciphers two distinct ancient languages Ugaritic and Linear B. The latter result is particularly important given that Linear B is a syllabic language. 3 Approach The main challenge of the decipherment task is the lack of strong supervision signal that guides standard machine translation algorithms. Therefore, the proposed architecture has to effectively utilize known patterns in language change to guide the decipherment process. These properties are summarized below: 1. Distributional Similarity of Matching Characters: Since matching characters appear in similar places in corresponding cognates, their contexts should match. 2. Monotonic Character Mapping within Cognates: Matching cognates rarely exhibit character reordering, therefore their alignment should be order preserving. 3. Structural Sparsity of Cognate Mapping: It is well-documented in historical linguistics that cognate matches are mostly one-to-one, since both words are derived from the same protoorigin. 4. Significant Cognate Overlap Within Related Languages: We expect that the derived vocabulary mapping will have sufficient coverage for lost language cognates. 3.1 Generative framework We encapsulate these basic decipherment principles into a single generative framework. Specifically, we introduce a latent variable F = {fi,j} that represents the word-level alignment between the words in the lost language X = {xi} and those in the known language Y = {yj}. More formally, we derive the joint probability Pr(X, Y) = X F∈F Pr(F) Pr(X|F) Pr(Y|F, X) ∝ X F∈F Pr(Y|X, F) = X F∈F Y yj∈Y Pr(yj|X, F), (1) by assuming a uniform prior on both Pr(F) and Pr(X|F), and i.i.d. for every yj ∈Y. We use F to describe the set of valid values for the latent variable F, subject to the global constraints as stated in Property 3 and 4. 
More specifically, we utilize a minimum-cost flow setup to enforce these properties. The probability distribution Pr(yj|X, F) is further defined as Pr(yj|X, F) = X xi∈X fi,j · Prθ(yj|xi), (2) where the conditional probability Prθ(yj|xi) is modeled by a character-based neural network parameterized by θ, which incorporates the character-level constraints as stated in Property 1 and 2. Directly optimizing Equation (1) is infeasible since it contains a summation over all valid flows. To bypass this issue, we adopt an EMstyle iterative training regime. Specifically, the training process involves two interleaving steps. First, given the value of the flow F, the neural model is trained to optimize the likelihood function Q yj∈Y Pr(yj|X, F). Next, the flow is updated by solving a minimum-cost flow problem given the trained neural model. A detailed discussion of the training process is presented in Section 4. We now proceed to provide details on both the neural model and the minimum-flow setup. 3.2 Neural decipherment model We use a character-based sequence-to-sequence (seq2seq) model to incorporate the local constraints (Figure 1). Specifically, we integrate Property 1 by using a shared universal character embedding space and a residual connection. Furthermore, the property of monotonic rewriting is realized by a regularization term based on edit distance. We detail each component in the following paragraphs. Universal character embedding We directly require that character embeddings of the two languages reside in the same space. Specifically, we assume that any character embedding in a given language is a linear combination of universal embeddings. More formally, we use a universal embedding matrix U ∈Mnu×d, a lost language character weight matrix Wx ∈Mnx×nu and a known language character weight matrix Wy ∈Mny×nu. We use nu to denote the size of the universal character inventory, and nx, ny the number of unique 3149 ... LSTM LSTM LSTM ... LSTM LSTM LSTM ... <s> κ ς Softmax Softmax Softmax ... Attention κ ν ... ... ... Lost language Known language </s> Figure 1: Architecture of our proposed model. For simplicity, we omit lines for residual connections linking weighted sum of input embeddings and softmax. Inputs to the encoder and decoder are the lost and known languages respectively. See Sec. 3.2 for details. characters in the lost and the known languages, respectively. Embedding matrices for both languages are computed by Ex = WxU, Ey = WyU. This formulation reflects the principle underlying crosslingual embeddings such as MUSE (Conneau et al., 2017). Along a similar line, previous work has demonstrated the effectiveness of using universal word embeddings, in the context of lowresource neural machine translation (Gu et al., 2018). Residual connection Character alignment is mostly local in nature, but this fact is not reflected by how the next character is predicted by the model. Specifically, the prediction is made based on the context vector ˜h, which is a nonlinear function of the hidden states of the encoder and the decoder. As a result, ˜h captures a much wider context due to the nature of a recurrent neural network. To address this issue and directly improve the quality of character alignment, we add a residual connection from the encoder embedding layer to the decoder projection layer. Specifically, letting α be the predicted attention weights, we compute c = X i αiEx(i), ˆh = c ⊕˜h, (3) where Ex(i) is the encoder character embedding at position i, and c is the weighted character embedding. 
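As a concrete illustration, a minimal PyTorch sketch of the universal character embeddings (Ex = WxU, Ey = WyU) and of the residual combination in equation (3) is given below. The rescaling factor g anticipates the norm control described later in section 4; all dimensions and names are illustrative rather than the authors' implementation (the inventory sizes 30 and 23 mirror the Ugaritic/Hebrew symbol counts of Table 1).

import torch
import torch.nn as nn

class UniversalCharEmbedding(nn.Module):
    # Characters of both languages as linear combinations of shared universal embeddings.
    def __init__(self, n_universal, n_lost, n_known, dim):
        super().__init__()
        self.U = nn.Parameter(torch.randn(n_universal, dim))       # universal inventory
        self.Wx = nn.Parameter(torch.randn(n_lost, n_universal))   # lost-language weights
        self.Wy = nn.Parameter(torch.randn(n_known, n_universal))  # known-language weights

    def forward(self):
        Ex = self.Wx @ self.U   # lost-language character embeddings
        Ey = self.Wy @ self.U   # known-language character embeddings
        return Ex, Ey

def residual_combine(alpha, Ex_src, h_tilde, r=0.2):
    # Equation (3) plus the norm-control rescaling of section 4.
    # alpha:   (batch, src_len) attention weights at the current decoder step
    # Ex_src:  (batch, src_len, dim) encoder character embeddings of the input
    # h_tilde: (batch, hidden) attentional context vector
    c = torch.bmm(alpha.unsqueeze(1), Ex_src).squeeze(1)   # weighted character embedding
    g = torch.clamp(r * c.norm(dim=-1) / h_tilde.norm(dim=-1).clamp_min(1e-8), max=1.0)
    return torch.cat([c, g.unsqueeze(-1) * h_tilde], dim=-1)   # h_hat, fed to the softmax

# Shape check on random tensors:
emb = UniversalCharEmbedding(n_universal=50, n_lost=30, n_known=23, dim=250)
Ex, Ey = emb()
alpha = torch.softmax(torch.randn(4, 7), dim=-1)
h_hat = residual_combine(alpha, torch.randn(4, 7, 250), torch.randn(4, 250))
print(Ex.shape, Ey.shape, h_hat.shape)  # (30, 250) (23, 250) (4, 500)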
ˆh is subsequently used to predict the next character. A similar strategy has also been adopted κ ν ω σ ο ς ✔ ✖ ✔ ✔ ✔ ✖ (Linear B) (Greek) Figure 2: An example of alignment between a Linear B word and Greek word.  and  denote correct and wrong alignment positions respectively. The misalignment between E and ν incurs a deletion error; 1 and ζ incurs an insertion error. by Nguyen and Chiang (2018) to refine the quality of lexical translations in NMT. Monotonic alignment regularization We design a regularization term that guides the model towards monotonic rewriting. Specifically, we penalizes the model whenever insertions or deletions occur. More concretely, for each word in the lost language xi, we first compute the alignment probability Pr(at i|xi) over the input sequence at decoder time step t, predicted by the attention mechanism. Then we compute the expected alignment position as pt i = X k k · Pr(at i = k|xi), where k is any potential aligned position. The regularization term is subsequently defined as Ω1({pt i}) = X t (pt i −pt−1 i −1)2. (4) 3150 S T x1 y1 x2 y2 yM xN ... ... ... Figure 3: Minimum-cost flow. S, T stands for source and sink respectively; xi, yj are the ith and jth word in X and Y. Each edge is associated with a flow fij and cost ¯dij. See Sec. 3.3 for details. Note that no loss is incurred when the current alignment position immediately follows the previous position, namely pt i = pt−1 i + 1. Furthermore, we use a quadratic loss function to discourage expensive multi-character insertions and deletions. For Linear B, we modify this regularization term to accommodate the fact that it is a syllabic language and usually one linear B script corresponds to two Greek letters. Particularly, we use the following regularization term for Linear B Ω2({pt i}) = X t=1 (pt i −pt−2 i −1)2. (5) Figure 2 illustrates one alignment matrix from Linear B to Greek. In this example, the Linear B character E is supposed to be aligned with Greek characters ν and ω but only got assigned to ω, hence incurring a deletion error; 1 is supposed to be only aligned to σ and o, but assigned an extra alignment to ζ, incurring an insertion error. 3.3 Minimum-cost flow The latent variable F captures the global constraints as stated in Property 3 and 4. Specifically, F should identify a reasonable number of cognate pairs between the two languages, while meeting the requirement that word-level alignments are one-to-one. To this end, we cast the task of identifying cognate pairs as a minimum-cost flow problem (Figure 3). More concretely, we have three sets of edges in the flow setup: • fs,i: edges from the source node to the word xi in the lost language, • fj,t: edges from the word yj in the known language to the sink node, • fi,j: edges from xi to yj. Each edge has a capacity of 1, effectively enforcing the one-to-one constraint. Only the edges fi,j have associated costs. We define this cost as the expected distance between xi and yj: ¯di,j = Ey∼Pr(y|xi) d(y, yj), (6) where d(·, ·) is the edit distance function, and Pr(y|xi) is given by the neural decipherment model. We use a sampling procedure proposed by Shen et al. (2016) to compute this expected distance. To provide a reasonable coverage of the cognate pairs, we further specify the demand constraint P j fj,t = D with a given hyperparameter D. We note that the edit distance cost plays an essential role of complementing the neural model. 
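A minimal sketch of this flow step is given below, using networkx as a stand-in for the OR-Tools solver mentioned in section 5. The integer cost scaling, the top-5 candidate sparsification and the unit capacities follow the description above; node naming, the helper name and the toy costs are illustrative, and the relaxed sink capacity used for UGARITIC is omitted.

import networkx as nx

def solve_cognate_flow(cost, D, top_k=5, scale=100):
    # cost: dict {(i, j): expected edit distance d_ij} for candidate pairs (x_i, y_j).
    # D: number of cognate pairs the flow must identify (the demand constraint).
    # Returns the set of (i, j) pairs that carry one unit of flow.
    G = nx.DiGraph()
    G.add_node("S", demand=-D)   # source supplies D units
    G.add_node("T", demand=D)    # sink absorbs D units
    by_src = {}
    for (i, j), d in cost.items():
        by_src.setdefault(i, []).append((d, j))
    for i, cands in by_src.items():
        G.add_edge("S", ("x", i), capacity=1, weight=0)
        # Keep only the top_k cheapest known-language candidates for each x_i.
        for d, j in sorted(cands)[:top_k]:
            G.add_edge(("x", i), ("y", j), capacity=1,
                       weight=int(round(d * scale)))        # integer costs for the solver
            if not G.has_edge(("y", j), "T"):
                G.add_edge(("y", j), "T", capacity=1, weight=0)
    flow = nx.min_cost_flow(G)   # raises NetworkXUnfeasible if D pairs cannot be matched
    matched = set()
    for u, targets in flow.items():
        if isinstance(u, tuple) and u[0] == "x":
            matched.update((u[1], v[1]) for v, f in targets.items() if f > 0)
    return matched

# Toy run: three lost words, three known words, identify D = 2 pairs.
toy_cost = {(0, 0): 0.1, (0, 1): 2.0, (1, 1): 0.3, (2, 1): 1.5, (2, 2): 0.2}
print(solve_cognate_flow(toy_cost, D=2))   # pairs (0, 0) and (2, 2)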
Specifically, neural seq2seq models are notoriously inadequate at capturing insertions and deletions, contributing to many issues of overgeneration or undergeneration in NMT (Tu et al., 2016). These problems are only accentuated due to a lack of supervision. Using edit distance in the flow setup helps alleviate this issue, since a misstep of insertion or deletion by the neural model will still generate a string that resembles the ground truth in terms of edit distance. In other words, the edit distance based flow can still recover from the mistakes the neural model makes. 4 Training We note that with weak supervision, a powerful neural model can produce linguistically degenerate solutions. To prevent the neural model from getting stuck at an unreasonable local minimum, we make three modifications detailed in the following paragraphs. The entire training procedure is illustrated in Alg 1. Flow decay The flow solver returns sparse values – the flow values for the edges are mostly zero. It is likely that this will discard many true cognate pairs, and the neural model trained on these sparse values can be easily misled and get stuck at some suboptimal local minimum. To alleviate this issue, we apply an exponential decay to the flow values, and compute an interpolation between the new flow result and the previous one. Specifically, we update the flow at iteration τ as f(τ) i,j = γ · f(τ−1) i,j + (1 −γ) · ˜f(τ) i,j , ∀i, j, (7) 3151 Algorithm 1 Iterative training Require: X, Y: vocabularies, T: number of iterations, N: number of cognate pairs to identify. 1: f(0) i,j ← N |X|·|Y| ▷Initialize 2: for τ ←1 to T do 3: θ(τ) ←MLE-TRAIN(f(τ−1) i,j ) 4: ¯d(τ) i,j ←EDIT-DIST(xi, yj, θ(τ)) 5: ˜f(τ) i,j ←MIN-COST-FLOW( ¯d(τ) i,j ) 6: f(τ) i,j ←γ · f(τ−1) i,j + (1 −γ) · ˜f(τ) i,j 7: RESET(θ(τ)) 8: return f(T) i,j 9: function MLE-TRAIN(f(τ) i,j ) 10: θ(τ) ←arg maxθ Q yj∈Y Prθ(yj|X, F) 11: return θ(τ) where ˜f(τ) i,j is the raw output given by the flow solver, and γ is a hyperparameter. Norm control Recall that the residual connection combines a weighted character embedding c, and a context vector ˜h (Equation (3)). We observe that during training, ˜h has a much bigger norm than c, essentially defeating the purpose of improving character alignment by using a residual connection. To address this issue, we rescale ˜h so that the norm of ˜h does not exceed a certain percentage of the norm of c. More formally, given a ratio r < 1.0, we compute the residual output as ˆh = c ⊕(g · ˜h) g = min(r ∗∥c∥2 ∥˜h∥2 , 1.0) Periodic reset We re-initialize the parameters of the neural model and reset the state of the optimizer after each iteration. Empirically, we found that our neural network can easily converge to a suboptimal local minimum given a poor global word-level alignment. Resetting the model parameters periodically helps with limiting the negative effect caused by such alignments. 5 Experiments Datasets We evaluate our system on the following datasets: • UGARITIC: Decipherment from Ugaritic to Hebrew. Ugaritic is an ancient Semitic language closely related to Hebrew, which was used for the decipherment of Ugaritic. This dataset has been previously used for decipherment by Snyder et al. (2010). • Linear B: Decipherment from Linear B to Greek. Linear B is a syllabic writing system used to write Mycenaean Greek dating back to around 1450BC. 
Decipherment of a syllabic language like Linear B is significantly harder, since it employs a much bigger inventory of symbols (70 in our corpus), and the symbols that have the same consonant or vowel look nothing alike2. We extracted pairs of Linear B scripts (i.e., words) and Greek pronunciations from a compiled list of Linear B lexicon3. We process the data by removing some uncertain translations, eventually retaining 919 pairs in total. The linear B scripts are kept as it is, and we remove all diacritics in the Greek data. We also consider a subset of the Greek data to simulate an actual historical event where many linear B syllabograms were deciphered by being compared with Greek location names. On the Greek side, we retain 455 proper nouns such as locations, names of Gods or Goddesses, and personal names. The entire vocabulary of the Linear B side is kept as it is. This results in a dataset with roughly 50% unpaired words on the Linear B side. We call this subset Linear B/names. To the best of our knowledge, our experiment is the first attempt of deciphering Linear B automatically. • ROMANCE: Cognate detection between three Romance languages. It contains phonetic transcriptions of cognates in Italian, Spanish and Portuguese. This dataset has been used by Hall and Klein (2010) and Berg-Kirkpatrick and Klein (2011). Data statistics are summarized in Table 1. 2For instance, k, K and T encode “ka”, “ke” and “te”, respectively. 3https://archive.org/details/ LinearBLexicon/page/n5 3152 Dataset #Cognates #Tokens (lost/known) #Symbols (lost/known) UGARITIC 2214 7353/41263 30/23 Linear B 919 919/919 70/28 Linear B/names 455 919/455 70/28 ROMANCE 583 583/583 25/31/28 (Es/It/Pt) Table 1: Statistics of datasets used in our experiments. Systems We report numbers for the following systems: • Bayesian: the Bayesian model by Snyder et al. (2010) that automatically deciphered Ugaritic to Hebrew • Matcher: the system using combinatorial optimization, proposed by Berg-Kirkpatrick and Klein (2011). • NeuroCipher: our proposed model. We directly quote numbers from their papers for the UGARITIC and ROMANCE datasets. To facilitate direct comparison, we follow the same data processing procedure as documented in the literature. Training details Our neural model uses a biredictional-LSTM as the encoder and a singlelayer LSTM as the decoder. The dimensionality of character embeddings and the hidden size of LSTM are set to 250 for all experiments. The size of the universal character inventory is 50 for all datasets except Linear B for which we use 100. The hyperparameter for alignment regularization is set to 0.5, and the ratio r to control the norm of the context vector is set to 0.2. We use ADAM (Kingma and Ba, 2015) to optimize the neural model. To speed up the process of solving the minimum-cost flow problem, we sparsify the flow graph by only considering the top 5 candidates for every xi. γ = 0.9 is used for the flow decay on all datasets except on UGARITIC for which we use γ = 0.25. We use the OR-Tools optimization toolkit4 as the flow solver. We found it beneficial to train our model only on a randomly selected subset (10%) of the entire corpus with the same percentage of noncognates, and test it on the full dataset. It is common for the dataset UGARITIC to contain several cognates for the same Ugaritic word, and we found that 4https://github.com/google/or-tools relaxing the capacity fj,t to 3 yields a better result. 
For Linear B, similar to the finding by (BergKirkpatrick and Klein, 2013), random restarting and choosing the best model based on the objective produces substantial improvements. In scenarios where many unpaired cognates are present, we follow Haghighi et al. (2008) to gradually increase the number of cognate pairs to identify. 6 Results UGARITIC We evaluate our system in two settings. First, we test the model under the noiseless condition where only cognates pairs are included during training. This is the setting adopted by Berg-Kirkpatrick and Klein (2011). Second, we conduct experiments in the more difficult and realistic scenario where there are unpaired words in both Ugaritic and Hebrew. This is the noisy setting considered by Snyder et al. (2010). As summarized by Table 2, our system outperforms existing methods by 3.1% under the noiseless condition, and 5.5% under the noisy condition. We note that the significant improvement under the noisy condition is achieved without assuming access to any morphological information in Hebrew. In costrast, previous system Bayesian utilized an inventory of known morphemes and complete morphological segmentations in Hebrew during training. The significant gains in identifying cognate pairs suggest that our proposed model provide a strong and viable approach towards automatic decipherment. System Noiseless Noisy Matcher 90.4 Bayesian 60.4 NeuroCipher 93.5 65.9 Table 2: Cognate identification accuracy (%) for UGARITIC under noiseless and noisy conditions. The noiseless baseline result is quoted from (BergKirkpatrick and Klein, 2011), and the noisy baseline result is quoted from (Snyder et al., 2010). 3153 System Linear B Linear B/names NeuroCipher 84.7 67.3 Table 3: Cognate identification accuracy (%) for LinearB under noiseless and noisy conditions. Linear B To illustrate the applicability of our system to other linguistic families, we evaluate the model on Linear B and Linear B/names. Table 3 shows that our system reaches high accuracy at 84.7% in the noiseless LinearB corpus, and 67.3% accuracy in the more challenging and realistic LinearB-names dataset. We note that our system is able to achieve a reasonable level of accuracy with minimal change to the system. The only significant modification is the usage of a slightly different alignment regularization term (Equation (5)). We also note that this language pair is not directly applicable to both of the previous systems Bayesian and Matcher. The flexibility of the neural decipherment model is one of the major advantages of our approach. ROMANCE Finally, we report results for ROMANCE (Hall and Klein, 2010) in Table 4, as further verification of the efficacy of our system. We include the average cognate detection accuracy across all language pairs as well as the accuracies for individual pairs. Note that in this experiment the dataset does not contain unpaired words. Table 4 shows that our system improves the overall accuracy by 1.5%, mostly contributed by Es⇌It and It⇌Pt.5 System EsIt EsPt ItPt Avg Matcher 88.9 95.6 85.7 90.1 NeuroCipher 92.3 95.0 87.3 91.6 Table 4: Cognate identification accuracy (%) for ROMANCE. Avg means the average accuracy across all six language pairs. EsIt, EsPt, ItPt are average accuracy for each language pair respectively (Es=Spanish, It=Italian, Pt=Portuguese). Results for Matcher are quoted from (Berg-Kirkpatrick and Klein, 2011). Ablation study Finally, we investigate contribution of various components of the model architecture to the decipherment performance. 
Specifically, we look at the design choices directly in5We nevertheless observed a slight drop for Es⇌Pt. However, for this language pair, the absolute accuracy is already very high (≥95%). We therefore suspect that performance on this language pair is close to saturation. System UGARITIC NeuroCipher 65.9 -monotonic 0.0 -residual 0.0 -flow 8.6 Table 5: Results for the noisy setting of UGARITIC. -monotonic and -residual remove the monotonic alignment regularization and the residual connection, and -flow does not use flow or iterative training. formed by patterns in language change: In all the above cases, the reduced decipherment model fails. The first two cases reach 0% accuracy, and the third one barely reaches 10%. This illustrates the utmost importance of injecting prior linguistic knowledge into the design of modeling and training, for the success of decipherment. 7 Conclusions We proposed a novel neural decipherment approach. We design the model and training procedure following fundamental principles of decipherment from historical linguistics, which effectively guide the decipherment process without supervision signal. We use a neural sequence-tosequence model to capture character-level cognate generation process, for which the training procedure is formulated as flow to impose vocabularylevel structural sparsity. We evaluate our approach on two lost languages, Ugaritic and Linear B, from different linguistic families, and observed substantially high accuracy in cognate identification. Our approach also demonstrated significant improvement over existing work on Romance languages. Acknowledgments This research is based upon work supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via contract # FA865017-C-9116. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein. The authors are also grateful for the support of MIT Quest for Intelligence program. 3154 References Taylor Berg-Kirkpatrick and Dan Klein. 2011. Simple effective decipherment via combinatorial optimization. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 313–321. Association for Computational Linguistics. Taylor Berg-Kirkpatrick and Dan Klein. 2013. Decipherment with a million random restarts. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 874– 878. Association for Computational Linguistics. Lyle Campbell. 2013. Historical Linguistics: An Introduction. Edinburgh University Press. Alexis Conneau, Guillaume Lample, Marc’Aurelio Ranzato, Ludovic Denoyer, and Herv´e J´egou. 2017. Word translation without parallel data. arXiv preprint arXiv:1710.04087. Zi-Yi Dou, Zhi-Hao Zhou, and Shujian Huang. 2018. Unsupervised bilingual lexicon induction via latent variable models. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 621–626. Association for Computational Linguistics. Manaal Faruqui and Chris Dyer. 2014. Improving vector space word representations using multilingual correlation. 
In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 462–471. Association for Computational Linguistics. Jiatao Gu, Hany Hassan, Jacob Devlin, and Victor OK Li. 2018. Universal neural machine translation for extremely low resource languages. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pages 344–354. Aria Haghighi, Percy Liang, Taylor Berg-Kirkpatrick, and Dan Klein. 2008. Learning bilingual lexicons from monolingual corpora. In Proceedings of ACL08: HLT, pages 771–779. Association for Computational Linguistics. David Hall and Dan Klein. 2010. Finding cognate groups using phylogenies. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1030–1039. Association for Computational Linguistics. Bradley Hauer, Ryan Hayward, and Grzegorz Kondrak. 2014. Solving substitution ciphers with combined language models. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 2314–2325. Dublin City University and Association for Computational Linguistics. Nishant Kambhatla, Anahita Mansouri Bigvand, and Anoop Sarkar. 2018. Decipherment of substitution ciphers with neural language models. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 869–874. Association for Computational Linguistics. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. CoRR, abs/1412.6980. Kevin Knight, Anish Nair, Nishit Rathod, and Kenji Yamada. 2006. Unsupervised analysis for decipherment problems. In Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions, pages 499–506. Association for Computational Linguistics. Kevin Knight and Kenji Yamada. 1999. A computational approach to deciphering unknown scripts. In Unsupervised Learning in Natural Language Processing. Tanmoy Mukherjee, Makoto Yamada, and Timothy Hospedales. 2018. Learning unsupervised word translations without adversaries. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 627–632. Toan Nguyen and David Chiang. 2018. Improving lexical choice in neural machine translation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pages 334–343. Malte Nuhn, Julian Schamper, and Hermann Ney. 2013. Beam search for solving substitution ciphers. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1568–1576. Association for Computational Linguistics. Maurice Pope. 1975. The Story of Decipherment: From Egyptian Hieroglyphic to Linear B. Thames & Hudson. Nima Pourdamghani and Kevin Knight. 2017. Deciphering related languages. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2513–2518. Sujith Ravi and Kevin Knight. 2011. Deciphering foreign language. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pages 12–21. Association for Computational Linguistics. Andrew Robinson. 2002. Lost languages: the enigma of the world’s undeciphered scripts. Shiqi Shen, Yong Cheng, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. 
Minimum 3155 risk training for neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1683–1692. Benjamin Snyder, Regina Barzilay, and Kevin Knight. 2010. A statistical model for lost language decipherment. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1048–1057. Association for Computational Linguistics. Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. 2016. Modeling coverage for neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 76–85.
2019
303
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3156–3161 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3156 Cross-lingual Knowledge Graph Alignment via Graph Matching Neural Network Kun Xu1, Liwei Wang1, Mo Yu2, Yansong Feng3, Yan Song1, Zhiguo Wang4, Dong Yu1 1Tencent AI Lab 2IBM T.J. Watson Research 3Peking University 4Amazon AWS {syxu828,wlwsjtu1989,zgw.tomorrow}@gmail.com [email protected], [email protected], {clksong,dyu}@tencent.com Abstract Previous cross-lingual knowledge graph (KG) alignment studies rely on entity embeddings derived only from monolingual KG structural information, which may fail at matching entities that have different facts in two KGs. In this paper, we introduce the topic entity graph, a local sub-graph of an entity, to represent entities with their contextual information in KG. From this view, the KB-alignment task can be formulated as a graph matching problem; and we further propose a graph-attention based solution, which first matches all entities in two topic entity graphs, and then jointly model the local matching information to derive a graphlevel matching vector. Experiments show that our model outperforms previous state-of-theart methods by a large margin. 1 Introduction Multilingual knowledge graphs (KGs), such as DBpedia (Auer et al., 2007) and Yago (Suchanek et al., 2007), represent human knowledge in the structured format and have been successfully used in many natural language processing applications. These KGs encode rich monolingual knowledge but lack the cross-lingual links to bridge the language gap. Therefore, the cross-lingual KG alignment task, which automatically matches entities in a multilingual KG, is proposed to address this problem. Most recently, several entity matching based approaches (Hao et al., 2016; Chen et al., 2016; Sun et al., 2017; Wang et al., 2018) have been proposed for this task. Generally, these approaches first project entities of each KG into lowdimensional vector spaces by encoding monolingual KG facts, and then learn a similarity score function to match entities based on their vector representations. However, since some entities in different languages may have different KG KG-1 KG-2 Matched e0 e0’ Figure 1: A challenging entity matching example. facts, the information encoded in entity embeddings may be diverse across languages, making it difficult for these approaches to match these entities. Figure 1 illustrates such an example where we aim to align e0 with e′ 0, but there is only one aligned neighbor in their surrounding neighbors. In addition, these methods do not encode the entity surface form into the entity embedding, also making it difficult to match entities that have few neighbors in the KG that lacks sufficient structural information. To address these drawbacks, we propose a topic entity graph to represent the KG context information of an entity. Unlike previous methods that utilize entity embeddings to match entities, we formulate this task as a graph matching problem between the topic entity graphs. To achieve this, we propose a novel graph matching method to estimate the similarity of two graphs. Specifically, we first utilize a graph convolutional neural network (GCN) (Kipf and Welling, 2016; Hamilton et al., 2017) to encode two graphs, say G1 and G2, resulting in a list of entity embeddings for each graph. 
Then, we compare each entity in G1 (or G2) against all entities in G2 (or G1) by using an attentive-matching method, which generates cross-lingual KG-aware matching vectors for all entities in G1 and G2. Consequently, we apply another GCN to propagate the local matching information throughout the entire graph. This produces a global matching vector for each topic graph that is used for the final prediction. The 3157 GCN#1  Lebron James Miami Heat      Akron, Ohio Cleveland Cavaliers National Basketball Association Chinese Knowledge Graph English Knowledge Graph GCN#1 GCN#2 GCN#2 Graph Embedding Graph Embedding   Graph Matching Model Figure 2: A running example of our model for aligning Lebron James in the English and Chinese knowledge graph. motivation behind is that, the graph convolution could jointly encode all entity similarities, including both the topic entity and its neighbor entities, into a matching vector. Experimental results show that our model outperforms previous state-of-the-art models by a large margin. Our code and data is available at https://github. com/syxu828/Crosslingula-KG-Matching. 2 Topic Entity Graph As indicated in Wang et al. (2018), the local contextual information of an entity in the KG is important to the KG alignment task. In our model, we propose a structure, namely topic entity graph, to represent relations among the given entity (called topic entity) and its neighbors in the knowledge base. Figure 2 shows the topic graphs of Lebron James in the English and Chinese knowledge graph. In order to build the topic graph, we first collect 1-hop neighbor entities of the topic entity, resulting in a set of entities, {e1, ..., en}, which are the nodes of the graph. Then, for each entity pair (ei, ej), we add one directed edge between their corresponding nodes in the topic graph if ei and ej are directly connected through a relation, say r, in the KG. Notice that, we do not label this edge with r that ei and ej hold in the KG, but just retain r’s direction. In practice, we find this strategy significantly improves both the efficiency and performance, which we will discuss in §4. 3 Graph Matching Model Figure 2 gives an overview of our method for aligning Lebron James in the English and Chinese knowledge graph1. Specifically, we fist retrieve topic entity graphs of Lebron James from two KGs, namely G1 and G2. Then, we propose a graph matching model to estimate the probability that G1 and G2 are describing the same entity. In particular, the matching model includes the following four layers: Input Representation Layer The goal of this layer is to learn embeddings for entities that occurred in topic entity graphs by using a GCN (henceforth GCN1) (Xu et al., 2018a). Recently, GCN has been successfully applied in many NLP tasks, such as semantic parsing (Xu et al., 2018b), text representation (Zhang et al., 2018), relation extraction (Song et al., 2018) and text generation (Xu et al., 2018c). We use the following embedding generation of entity v as an example to explain the GCN algorithm: (1) We first employ a word-based LSTM to transform v’s entity name to its initial feature vector av; (2) We categorize the neighbors of v into incoming neighbors N⊢(v) and outgoing neighbors N⊣(v) according to the edge direction. (3) We leverage an aggregator to aggregate the incoming representations of v’s incoming neighbors {hk−1 u⊢, ∀u ∈N⊢(v)} into a single vector, hk N⊢(v), where k is the iteration index. This aggregator 1Lebron James is translated to 勒布朗·詹姆斯in Chinese. 
3158 feeds each neighbor’s vector to a fully-connected neural network and applies an element-wise meanpooling operation to capture different aspects of the neighbor set. (4) We concatenate v’s current incoming representation hk−1 v⊢ with the newly generated neighborhood vector hk N⊢(v) and feed the concatenated vector into a fully-connected layer to update the incoming representation of v, hk v⊢for the next iteration; (5) We update the outgoing representation of v, hk v⊣using the similar procedure as introduced in step (3) and (4) except that operating on the outgoing representations; (6) We repeat steps (3)∼(5) by K times and treat the concatenation of final incoming and outgoing representations as the final representation of v. The outputs of this layer are two sets of entity embeddings {e1 1, ..., e1 |G1|} and {e2 1, ..., e2 |G2|}. Node-Level (Local) Matching Layer In this layer, we compare each entity embedding of one topic entity graph against all entity embeddings of the other graph in both ways (from G1 to G2 and from G2 to G1), as shown in Figure 2. We propose an attentive-matching method similar to (Wang et al., 2017). Specifically, we first calculate the cosine similarities of entity e1 i in G1 with all entities {e2 j} in G2 in their representation space. αi,j = cosine(e1 i , e2 j) j ∈{1, ..., |G2|} Then, we take these similarities as the weights to calculate an attentive vector for the entire graph G2 by weighted summing all the entity embeddings of G2. ¯e1 i = P|G2| j=1 αi,j · e2 j P|G2| j=1 αi,j We calculate matching vectors for all entities in both G1 and G2 by using a multi-perspective cosine matching function fm at each matching step (See Appendix A for more details): matt i = fm(e1 i , ¯e1 i ) matt j = fm(e2 j, ¯e2 j) Graph-Level (Global) Matching Layer Intuitively, the above matching vectors (matts) capture how each entity in G1 (G2) can be matched by the topic graph in the other language. However, they are local matching states and are not sufficient to measure the global graph similarity. For example, many entities only have few neighbor entities that co-occurr in G1 and G2. For those entities, a model that exploits local matching information may have a high probability to incorrectly predict these two graphs are describing different topic entities since most entities in G1 and G2 are not close in their embedding space. To overcome this issue, we apply another GCN (henceforth GCN2) to propagate the local matching information throughout the graph. Intuitively, if each node is represented as its own matching state, by design a GCN over the graph (with a sufficient number of hops) is able to encode the global matching state between the pairs of whole graphs. We then feed these matching representations to a fully-connected neural network and apply the element-wise max and mean pooling method to generate a fixed-length graph matching representation. Prediction Layer We use a two-layer feedforward neural network to consume the fixedlength graph matching representation and apply the softmax function in the output layer. Training and Inference To train the model, we randomly construct 20 negative examples for each positive example <e1 i , e2 j> using a heuristic method. That is, we first generate rough entity embeddings for G1 and G2 by summing over the pretrained embeddings of words within each entity’s surface form; then, we select 10 closest entities to e1 i (or e2 j) in the rough embedding space to construct negative pairs with e2 j (or e1 i ). 
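To make this negative-sampling heuristic concrete, the following is a minimal NumPy sketch (assuming the pretrained cross-lingual word embeddings are available as a simple word-to-vector lookup; all function and variable names here are illustrative, not taken from the released code):

import numpy as np

def rough_entity_embedding(surface_form, word_vecs, dim=300):
    # Sum the pretrained cross-lingual embeddings of the tokens in the entity name.
    vecs = [word_vecs[w] for w in surface_form.split() if w in word_vecs]
    return np.sum(vecs, axis=0) if vecs else np.zeros(dim)

def nearest_entities(query_vec, entity_vecs, k=10):
    # Indices of the k entities closest to query_vec by cosine similarity.
    sims = entity_vecs @ query_vec / (
        np.linalg.norm(entity_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-8)
    return np.argsort(-sims)[:k]

def build_negatives(e1, e2, kg1_vecs, kg2_vecs, k=10):
    # For a positive pair (e1, e2), return 2*k negatives: k on each KG side.
    neg_src = [(c, e2) for c in nearest_entities(kg1_vecs[e1], kg1_vecs, k + 1) if c != e1][:k]
    neg_tgt = [(e1, c) for c in nearest_entities(kg2_vecs[e2], kg2_vecs, k + 1) if c != e2][:k]
    return neg_src + neg_tgt

Sampling near neighbours in this rough embedding space yields more confusable negatives than uniform random sampling, which is presumably why 20 negatives per positive pair suffice.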
During testing, given an entity in G1, we rank all entities in G2 by the descending order of matching probabilities that estimated by our model. 4 Experiments We evaluate our model on the DBP15K datasets, which were built by Sun et al. (2017). The datasets were generated by linking entities in the Chinese, Japanese and French versions of DBpedia into English version. Each dataset contains 15,000 interlanguage links connecting equivalent entities in two KGs of different languages. We use the same train/test split as previous works. We use the Adam optimizer (Kingma and Ba, 2014) to update parameters with mini-batch size 32. The learning rate is set to 0.001. The hop size K of GCN1 and GCN2 are set to 2 and 3, respectively. The 3159 Method ZH-EN EN-ZH JA-EN EN-JA FR-EN EN-FR @1 @10 @1 @10 @1 @10 @1 @10 @1 @10 @1 @10 Hao (2016) 21.27 42.77 19.52 39.36 18.92 39.97 17.80 38.44 15.38 38.84 14.61 37.25 Chen (2016) 30.83 61.41 24.78 52.42 27.86 57.45 23.72 49.92 24.41 55.55 21.26 50.60 Sun (2017) 41.18 74.46 40.15 71.05 36.25 68.50 38.37 67.27 32.39 66.68 32.97 65.91 Wang (2018) 41.25 74.38 36.49 69.94 39.91 74.46 38.42 71.81 37.29 74.49 36.77 73.06 BASELINE 59.64 72.30 57.66 70.44 67.01 79.53 62.48 77.54 83.45 91.56 81.03 90.79 NodeMatching 62.03 75.12 60.17 72.67 69.82 80.19 66.74 80.10 84.71 92.35 84.15 91.76 Ours HopGCN2 = 1 66.91 77.52 64.01 78.12 72.63 85.09 69.76 83.48 87.62 94.19 87.65 93.66 HopGCN2 = 3 67.93 78.48 65.28 79.64 73.97 87.15 71.29 84.63 89.38 95.24 88.18 94.75 HopGCN2 = 5 67.92 78.36 65.21 79.48 73.52 86.87 70.18 84.29 88.96 94.28 88.01 94.37 Table 1: Evaluation results on the datasets. non-linearity function σ is ReLU (Glorot et al., 2011) and the parameters of aggregators are randomly initialized. Since KGs are represented in different languages, we first retrieve monolingual fastText embeddings (Bojanowski et al., 2017) for each language, and apply the method proposed in Conneau et al. (2017) to align these word embeddings into a same vector space, namely, crosslingual word embeddings. We use these embeddings to initialize word representations in the first layer of GCN1. Results and Discussion. Following previous works, we used Hits@1 and Hits@10 to evaluate our model, where Hits@k measures the proportion of correctly aligned entities ranked in the top k. We implemented a baseline (referred as BASELINE in Table 1) that selects k closest G2 entities to a given G1 entity in the cross-lingual embedding space, where an entity embedding is the sum of embeddings of words within its surface form. We also report results of an ablation of our model (referred as NodeMatching in Table 1) that uses GCN1 to derive the two topic entity embeddings and then directly feeds them to the prediction layer without using matching layer. Table 1 summarizes the results of our model and existing works. We can see that even without considering any KG structural information, the BASELINE significantly outperforms previous works that mainly learn entity embeddings from the KG structure, indicating that the surface form is an important feature for the KG alignment task. Also, the NodeMatching, which additionally encodes the KG structural information into entity embeddings using GCN1, achieves better performance compared to the BASELINE. In addition, we find the graph matching method significantly outperforms all baselines, which suggests that the global context information of topic entities is important to establish their similarities. Let us first look at the impacts of hop size of GCN2 to our model. 
From Table 1, we can see that our model could benefit from increasing the hop size of GCN2 until it reaches a threshold λ. In experiments, we find the model achieves the best performance when λ = 3. To better understand on which type of entities that our model could better deal with due to introducing the graph matching layer, we analyze the entities that our model correctly predicts while NodeMatching does not. We find the graph matching layer enhances the ability of our model in handling the entities whose most neighbors in two KGs are different. For such entities, although most local matching information indicate that these two entities are irrelevant, the graph matching layer could alleviate this by propagating the most relevant local matching information throughout the graph. Recall that our proposed topic entity graph only retains the relation direction while neglecting the relation label. In experiments, we find incorporating relation labels as distinct nodes that connecting entity nodes into the topic graph hurts not only the performance but efficiency of our model. We think this is due to that (1) relation labels are represented as abstract symbols in the datasets, which provides quite limited knowledge about the relations, making it difficult for the model to learn their alignments in two KGs; (2) incorporating relation labels may significantly increase the topic entity graph size, which requires bigger hop size and running time. 5 Conclusions Previous cross-lingual knowledge graph alignment methods mainly rely on entity embeddings 3160 that derived from the monolingual KG structural information, thereby may fail at matching entities that have different facts in two KGs. To address this, we introduce the topic entity graph to represent the contextual information of an entity within the KG and view this task as a graph matching problem. For this purpose, we further propose a graph matching model which induces a graph matching vector by jointly encoding the entitywise matching information. Experimental results on the benchmark datasets show that our model significantly outperforms existing baselines. In the future, we will explore more applications of the proposed idea of attentive graph matching. For example, the metric learning based few-shot knowledge base completion (Xiong et al., 2018) can be directly formulated as a similar graph matching problem in this paper. References Sren Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary G. Ives. 2007. Dbpedia: A nucleus for a web of open data. In ISWC/ASWC. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146. Muhao Chen, Yingtao Tian, Mohan Yang, and Carlo Zaniolo. 2016. Multilingual knowledge graph embeddings for cross-lingual knowledge alignment. arXiv preprint arXiv:1611.03954. Alexis Conneau, Guillaume Lample, Marc’Aurelio Ranzato, Ludovic Denoyer, and Herv´e J´egou. 2017. Word translation without parallel data. arXiv preprint arXiv:1710.04087. Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Deep sparse rectifier neural networks. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, AISTATS 2011, Fort Lauderdale, USA, April 11-13, 2011, pages 315–323. William L Hamilton, Rex Ying, and Jure Leskovec. 2017. Inductive representation learning on large graphs. arXiv preprint arXiv:1706.02216. 
Yanchao Hao, Yuanzhe Zhang, Shizhu He, Kang Liu, and Jun Zhao. 2016. A joint embedding method for entity alignment of knowledge bases. In China Conference on Knowledge Graph and Semantic Computing, pages 3–14. Springer. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980. Thomas N Kipf and Max Welling. 2016. Semisupervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907. Linfeng Song, Yue Zhang, Zhiguo Wang, and Daniel Gildea. 2018. N-ary relation extraction using graph state lstm. arXiv preprint arXiv:1808.09101. Fabian M. Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. Yago: a core of semantic knowledge. In WWW. Zequn Sun, Wei Hu, and Chengkai Li. 2017. Cross-lingual entity alignment via joint attributepreserving embedding. In International Semantic Web Conference, pages 628–644. Springer. Zhichun Wang, Qingsong Lv, Xiaohan Lan, and Yu Zhang. 2018. Cross-lingual knowledge graph alignment via graph convolutional networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 349– 357. Zhiguo Wang, Wael Hamza, and Radu Florian. 2017. Bilateral multi-perspective matching for natural language sentences. arXiv preprint arXiv:1702.03814. Wenhan Xiong, Mo Yu, Shiyu Chang, Xiaoxiao Guo, and William Yang Wang. 2018. One-shot relational learning for knowledge graphs. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1980–1990. Kun Xu, Lingfei Wu, Zhiguo Wang, and Vadim Sheinin. 2018a. Graph2seq: Graph to sequence learning with attention-based neural networks. arXiv preprint arXiv:1804.00823. Kun Xu, Lingfei Wu, Zhiguo Wang, Mo Yu, Liwei Chen, and Vadim Sheinin. 2018b. Exploiting rich syntactic information for semantic parsing with graph-to-sequence model. arXiv preprint arXiv:1808.07624. Kun Xu, Lingfei Wu, Zhiguo Wang, Mo Yu, Liwei Chen, and Vadim Sheinin. 2018c. Sql-to-text generation with graph-to-sequence model. arXiv preprint arXiv:1809.05255. Yue Zhang, Qi Liu, and Linfeng Song. 2018. Sentencestate lstm for text representation. arXiv preprint arXiv:1805.02474. A Matching Function fm fm is a multi-perspective cosine matching function that compares two vectors m = fm(v1, v2; W ) where v1 and v2 are two d-dimensional vectors, W ∈ℜl×d is a trainable parameter with the shape l × d, l is the number of perspectives, and the returned value m is a l-dimensional vector m = 3161 [m1, ..., mk, ..., ml]. Each element mk ∈m is a matching value from the k-th perspective, and it is calculated by the cosine similarity between two weighted vectors mk = cosine(Wk ◦v1, Wk ◦v2) where ◦is the element-wise multiplication, and Wk is the k-th row of W , which controls the k-th perspective and assigns different weights to different dimensions of the d-dimensional space.
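For illustration, here is a minimal NumPy sketch of this multi-perspective cosine matching function (in the model, W is a trainable parameter; here it is simply passed in and the example values are arbitrary):

import numpy as np

def cosine(a, b, eps=1e-8):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def multi_perspective_match(v1, v2, W):
    # fm(v1, v2; W): W has shape (l, d); the k-th output is the cosine similarity
    # of v1 and v2 after element-wise re-weighting by the k-th row of W.
    return np.array([cosine(W[k] * v1, W[k] * v2) for k in range(W.shape[0])])

# Example with l = 4 perspectives over d = 8 dimensional entity embeddings.
rng = np.random.default_rng(0)
v1, v2 = rng.normal(size=8), rng.normal(size=8)
W = rng.normal(size=(4, 8))
print(multi_perspective_match(v1, v2, W))  # four matching values in [-1, 1]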
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3162–3172 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3162 Zero-Shot Cross-Lingual Abstractive Sentence Summarization through Teaching Generation and Attention Xiangyu Duan1,2∗, Mingming Yin1∗, Min Zhang1,2, Boxing Chen3, Weihua Luo3 1 Institute of Artificial Intelligence, Soochow University, Suzhou, China 2 School of Computer Science and Technology, Soochow University, Suzhou, China 3 Alibaba DAMO Academy, Hangzhou, China [email protected]; [email protected]; [email protected] {boxing.cbx,weihua.luowh}@alibaba-inc.com Abstract Abstractive Sentence Summarization (ASSUM) targets at grasping the core idea of the source sentence and presenting it as the summary. It is extensively studied using statistical models or neural models based on the large-scale monolingual source-summary parallel corpus. But there is no cross-lingual parallel corpus, whose source sentence language is different to the summary language, to directly train a cross-lingual ASSUM system. We propose to solve this zero-shot problem by using resource-rich monolingual ASSUM system to teach zero-shot cross-lingual ASSUM system on both summary word generation and attention. This teaching process is along with a back-translation process which simulates source-summary pairs. Experiments on cross-lingual ASSUM task show that our proposed method is significantly better than pipeline baselines and previous works, and greatly enhances the cross-lingual performances closer to the monolingual performances. We release the code and data at https://github.com/KelleyYin/ Cross-lingual-Summarization. 1 Introduction Abstractive Sentence Summarization (ASSUM) is a task of condensing the source sentences into the summaries based on the core meaning of the source sentences. ASSUM provides quick access to the important content of the source sentences through the informative re-written summaries. Major ASSUM explorations are monolingual based. There is an urgent demand of crosslingual ASSUM which produces summaries for people who do not speak the language the same to the source language. Unlike the monolingual ASSUM receiving extensive studies that are based on the large-scale ∗ Equal contribution. monolingual ASSUM corpus, the cross-lingual ASSUM is seldom explored due to the lack of training corpus. This zero-shot challenge drives the cross-lingual ASSUM to resort to two existing independent techniques, i.e., the monolingual ASSUM and the bilingual translation. The both techniques should be leveraged together to overcome the difficulty of data scarcity in the cross-lingual ASSUM. Regarding the techniques of the monolingual ASSUM, neural methods become dominant in this area since the creation of the large-scale ASSUM corpus (Rush et al., 2015; Nallapati et al., 2016; Hu et al., 2015). The corpus consists of huge number of source-summary pairs, and neural methods model these pairs as as a sequence-to-sequence task by encoding the source sentence into vectorized information and decoding it into the abstractive summary. Regarding the techniques of the bilingual translation, recent years witnessed the method transition from statistical machine translation (SMT) (Koehn et al., 2003) to neural machine translation (NMT). 
NMT employs the sequence-to-sequence architecture with various implementations such as RNN-based (Sutskever et al., 2014; Bahdanau et al., 2015), CNN-based (Gehring et al., 2017), and Transformer (Vaswani et al., 2017). Early works on the cross-lingual ASSUM leverage the above two techniques through using bilingual features to cooperate with the monolingual ASSUM based on the data condition that largescale monolingual ASSUM corpus is not available while large-scale translation corpora are easy to obtain. They utilize bilingual features such as phrase pairs or predicate-argument parallel structures, which are obtained from SMT systems, to achieve extractive or abstractive cross-lingual summarization (Wan, 2011; Yao et al., 2015; Zhang et al., 2016). 3163 Recently, Ayana et al. (2018) propose the first large-scale corpus-based cross-lingual ASSUM system in which the ASSUM corpus is monolingual. They generate summaries using the monolingual ASSUM system, and train the cross-lingual ASSUM based on these pseudo summaries. On the contrary, we propose in this paper to use genuine summaries paired with the generated pseudo sources to train the cross-lingual ASSUM system. We use the teacher-student framework in which the monolingual ASSUM system is taken as the teacher and the cross-lingual ASSUM system is the student. The teacher let the student to simulate both the summary word distribution and attention weights according to those of the teacher networks. In comparison to the pseudo summaries used in the work of Ayana et al. (2018), we generate pseudo sources instead and use true summaries to constitute source-summary pairs. This is motivated by the successful application of backtranslation which generates pseudo-source paired with true-target for NMT (Sennrich et al., 2016a; Lample et al., 2018). The main contributions of this paper include: • We propose teaching both summary word generation distribution and attention weights in the cross-lingual ASSUM networks by using the monolingual ASSUM networks. The distribution teacher is directly from the monolingual ASSUM, while the attention weights teacher is obtained by an attention relay mechanism. • We use a back-translation procedure that generates pseudo source sentences paired with the true summaries to build a training corpus for the cross-lingual ASSUM. This alleviates the data scarcity that no cross-lingual ASSUM corpus is available. • Extensive experimental results on two benchmark datasets show that our proposed method is able to perform better than several baselines and related works, and significantly reduce the performance gap between the crosslingual ASSUM and the monolingual ASSUM. 2 Related Work 2.1 Monolingual ASSUM There are various methods exploring the effective way to model the monolingual ASSUM process including statistical models (Banko et al., 2000; Cohn and Lapata, 2008) or neural models (Rush et al., 2015; Chopra et al., 2016; Nallapati et al., 2016). Neural models become dominant in this task since the creation of the large-scale ASSUM corpus (Rush et al., 2015; Nallapati et al., 2016; Hu et al., 2015). 
On the basis of the sequence-tosequence neural architecture, there are many further explorations such as using rich linguistic features and large vocabulary set (Nallapati et al., 2016), global training procedures on the sentence level (Ayana et al., 2016; Li et al., 2018; Edunov et al., 2018; Wang et al., 2018), topic enhancement in the summaries (Wang et al., 2018), additional selective gate networks in the encoder (Zhou et al., 2017), and facts fusion measures (Cao et al., 2018). 2.2 Zero Resource Neural Machine Translation Current state-of-the-art NMT models are effective in modeling the translation process, but they are highly dependent on the large-scale parallel corpus. When applied on zero resource language pairs such as the two languages that do not have direct parallel corpus, the NMT systems perform well below the satisfactory level. To address such problem, three NMT paradigms are explored. The first is the triangular NMT systems that add one additional resource rich language to the zero resource language pair to build a triangular translation scenario (Chen et al., 2017; Zheng et al., 2017; Cheng et al., 2017), the second is the multilingual translation system that concatenates parallel corpora of different language pairs and builds one NMT model for all (Johnson et al., 2017), the third is the unsupervised NMT systems that do not use any parallel data resources (Artetxe et al., 2018; Lample et al., 2018). Our work is closely related to the first paradigm in which source language, pivot language, and target language form a triangular translation scenario. In our setting, the target language {sentence, summary} pair and the source language sentence form the triangle in which the target language sentence functions as the pivot. We adopt the teacher-student framework that is also applied 3164 in Chen et al. (2017), but we have significant difference to them in that we generate pseudo source while they generate pseudo target, which results in different teacher-student networks. 2.3 Cross-lingual Summarization Early explorations on cross-lingual summarization mainly depend on the traditional monolingual summarization methods, and integrate bilingual parallel informations into the monolingual methods through sentence selection based on translation quality estimation (Wan et al., 2010), sentence ranking based on cross-lingual sentence similarity (Wan, 2011), or abstractive summarization based on phrase pair (Yao et al., 2015) and predicate-argument structure fusing (Zhang et al., 2016). The first cross-lingual ASSUM system based on the large-scale monolingual ASSUM corpus is proposed by Ayana et al. (2018), which is most related to our work. It is motivated by the triangular NMT systems with pseudo target in the teacher-student networks. In contrast, we use pseudo source and apply different teacher-student networks. 3 Our Approach 3.1 Overall Framework To overcome the data scarcity in the cross-lingual ASSUM, it is easy to come up with the pipeline method at the first thought. The source language sentence can be translated to the target language sentence, followed by target language summarization step to get the final target language summary. Alternatively, the source language sentence can be summarized into source language summary at first, then is translated into the target language summary. Both pipeline methods face the error propagation problem that errors in the early steps will harm the final summary quality. 
We propose a jointly optimizing framework that avoids the independent two steps in the pipeline methods. Figure 1 (a) illustrates our overall framework. We introduce a bridge between the source language sentence and the target language summary. The target language sentence functions as the bridge convenient for the information flow from the source sentence to the target summary. The overall framework mainly consists of two modules: the teacher networks and the student networks. The teacher is the monolingual ASSUM teacher src lang sentence tgt lang sentence tgt lang summary NMT student (a) teacher src lang sentence tgt lang sentence tgt lang summary NMT student (b) Figure 1: Illustration of the comparison between (a) our overall framework and (b) the framework of Ayana et al. (2018). Solid line boxes denote genuine data, while dashed line boxes denote automatically generated pseudo data. Solid line arrows denote the summarization direction, while dashed line arrows denote pseudo data generation direction. Note that the genuine data is used in the teacher of our framework, while pseudo data is used in the teacher of the framework of Ayana et al. (2018). neural networks trained on the large-scale monolingual ASSUM corpus. Note that in our framework, the teacher is strong since the utilized monolingual ASSUM corpus is genuine and no pseudo data is used in the teacher. The student is the crosslingual ASSUM networks trained to mimic the behavior of the teacher. To manifest the difference between our framework and the most related framework of Ayana et al. (2018), we depict both in Figure 1. In the framework of Ayana et al. (2018), the source language sentence is automatically translated into the target language sentence, which is automatically summarized into the target language summary. The data on both sides of the teacher networks are pseudo. This is significantly different to our framework in which the teacher networks have the strong data basis that all data on both sides of the teacher networks are genuine. When comparing the student networks, we can find that we adopt pseudo source sentence, while Ayana et al. (2018) adopt pseudo target summary. Furthermore, we also teach the student with the teacher’s attention weights via a new attention relay mechanism. 3165 3.2 Back-Translation Our framework contains a back-translation procedure which is inspired by that used in semisupervised or unsupervised machine translation (MT) (Sennrich et al., 2016a; Lample et al., 2018). In MT, the back-translation process translates unpaired target text into source text. The resulted pseudo source-target pair serves as additional training data for source-to-target translation. Our proposed back-translation procedure involves triple kinds of data. It translates the target language sentence back into the source language sentence by a third-party NMT system. The generated pseudo source is paired with the true target summary to build a training resource for the student networks. The back-translation procedure is denoted as the dashed arrow NMT in Figure 1(a). 3.3 The Teacher-Student Training Procedure We use the monolingual ASSUM system as the teacher networks, and use the cross-lingual ASSUM system as the student networks. Both the teacher and the student apply Transformer architecture which is effective for modeling sequenceto-sequence tasks such as machine translation (Vaswani et al., 2017). Two functions of the teacher are set as the learning objective for the student. 
One is the probability distribution of the summary word generation, the other is the attention weights in the attention mechanism. Given the source language text X, the target language text Y, and the target language summary YS, the training procedure for the teacher-student framework is presented in the following: Teaching The Summary Word Generation Let P(YSi|YSi−1 1 , Y) denote the teacher distribution of the summary word given summary word generation history and Y, P(YSi|YSi−1 1 , bX) denote the student distribution of the summary word given summary word generation history and bX. bX denotes the pseudo source which is generated by the back-summarization procedure. We use cross entropy loss to encourage the similarity between the two distributions: Lgen = −P(YSi|YSi−1 1 , Y)logP(YSi|YSi−1 1 , bX) (1) Through Equation (1), the cross-lingual ASSUM learns from the monolingual ASSUM about Y1 Y2 Y3 Y4 Y5 YS1 YS2 Figure 2: Illustration of the attention relay. The arrows are the attentions with the direction of decoder side word attending to encoder side words. Solid arrows are top-k or the biggest attention weights, and the dashed arrows are the left attention weights. k is 2 in the figure. how to generate summary word under appropriate distribution. Teaching The Attention Weights via Attention Relay Besides the summary word generation distribution, the attention of the monolingual ASSUM is also a valuable learning resource. But such attention only connects the encoder and the decoder of the monolingual ASSUM system, it has to be relayed to reach the other language to teach the cross-lingual ASSUM system. The attention relay mechanism is illustrated in Figure 2. The monolingual attention weights of YS attending to Y is relayed to form the teacher attention weights of YS attending to bX. In particular, Y2 and Y4 receive top-2 attention weights from YS1 in Figure 2, and Y2 receives biggest attention from bX1, Y4 receives biggest attention from bX2. Then the attention weights of YS1 attending to bX1 and bX2 are set 1/2. Other attention weights distributed over the rest words of bX are set zero. In general case, if top-k attention weights are relayed from YS to bX, then the teacher attention weights over the k words of bX are set 1/k each, and other attention weights are set zero 1. We use the Euclidean distance between the teacher attention weights and the student attention 1We also use the attention matrix of YS attending to Y multiplies the attention matrix of bX attending to Y to form the teacher attention, but we found that the teacher attention weights are evenly distributed, resulting in worse student performance. 3166 weights as the loss to encourage their consistency: Latt = sX j (Aj −¯Aj)2 (2) where Aj denotes a teacher attention weight, ¯Aj denotes a student attention weight. Note that in our work, the attention only refers to the encoder-decoder attention, not the selfattention in Transformer. Since our teacher networks and the student networks adopt Transformer architecture which contains multi-head attention, we use the average attention that averages attention weights of all heads in the same layer. 3.4 Training and Testing The training objective is to minimize the joint loss: L = λLgen + (1 −λ)Latt (3) where λ is the weight balancing Lgen and Latt. During testing, only the student networks are used to decode X into YS. 
In detail, only P(YSi|YSi−1 1 , X) participates in the beam search, the summary word generation teacher and whole Latt-related teacher-student networks are not involved in the testing process. 4 Experiments We conduct experiments on Chinese-to-English ASSUM, which takes Chinese sentence as input, and outputs English abstractive summary. We build evaluation sets for this task by manually translating English sentences of the existing English evaluation sets into Chinese inputs. To the best of our knowledge, these are the first evaluation sets on which the cross-lingual ASSUM system and the monolingual ASSUM system can be directly compared. 4.1 Datasets In our experiments, the English ASSUM system and the English-Chinese NMT system are involved. The data for training both systems are presented below. The data for training the English ASSUM system is from the annotated Gigaword corpus, and we preprocess it identically to Rush et al. (2015), which results in around 3.8M training pairs, 190K validation pairs and 1951 test pairs. In this data, the sentence-summary pairs are built by pairing the first sentence of each article with the article’s headline. Additionally, DUC-2004 is adopted as another English data set only for testing. It contains 500 documents, and each document has four human-generated reference summaries. To build the evaluation sets, English sentences of the validation and test sets of Gigaword and DUC2004 are manually translated into Chinese by graduate students of the linguistics department and our institute, who are bilingual with Chinese as the mother tongue. Specifically, in the Gigaword validation set, we randomly select 2000 sentence-summary pairs and manually translate their English sentences into Chinese. The English summaries are not translated. The Chinese sentences are segmented by the word segmentation tool Jieba2. Additionally, we also implement some baselines for comparison, some of which utilize a large corpus of Chinese short text summarization (LCSTS) (Hu et al., 2015), which is collected from the Chinese microblogging website Sina Weibo with 2.4M sentence-summary pairs for training and 725 pairs for testing. 4.2 Experimental Configuration Baseline Systems • The pipeline of translating source sentence into target sentence at first, then summarizing the target sentence into the summary. We denote this method Pipeline-TS. • The pipeline of summarizing the source sentence into the source summary, then translating the source summary into the target summary. We denote this method Pipeline-ST. • The framework of Ayana et al. (2018), which uses pseudo summary for training. We denote it Pseudo-Summary3. • The pivot system that enforcing the sourceto-pivot system and the pivot-to-target system sharing the same pivot language embedding (Cheng et al., 2017). We denote it Pivotbased. 2https://pypi.org/project/jieba/ 3We also implement the framework that uses the NMT model to teach the cross-lingual ASSUM (Ayana et al., 2018). Since it highly depends on the LCSTS data, whose style is different to our evaluation sets, it performs significantly worse. 3167 NIST02 NIST03 NIST04 NIST05 NIST08 Avg Our Transformer Cn2En 45.58 45.19 46.80 46.56 37.27 44.28 Robust Translation Cn2En (Cheng et al., 2018) 46.10 44.07 45.61 44.06 34.94 42.96 Our Transformer En2Cn 39.38 34.48 38.10 36.20 30.80 35.79 Table 1: BLEU of the NMT systems on NIST evaluation sets. Cn2En denotes Chinese-to-English translation, and En2Cn denotes the reverse direction. 
System Gigaword DUC2004 ROUGE-1 ROUGE-2 ROUGE-L ROUGE-1 ROUGE-2 ROUGE-L ABS+ (Rush et al., 2015) 29.8 11.9 27.0 28.2 8.5 23.8 Actor-Critic (Li et al., 2018) 36.1 17.4 33.5 29.4 9.8 25.9 StructuredLoss (Edunov et al., 2018) 36.7 17.9 34.3 FactAware (Cao et al., 2018) 37.3 17.7 34.2 Transformer 37.1 18.2 34.4 30.6 10.5 26.6 Transformerbpe 38.1 19.1 35.2 31.2 10.7 27.1 Table 2: Comparison on the monolingual ASSUM performances. “-” denotes that no score is reported in that work. • Translating the English sentences into the Chinese sentences, and pair these pseudo Chinese sentences with English summaries to build a training corpus for Chinese-toEnglish ASSUM. We denote it PseudoChinese. We implement it by using Transformer machine translation model to translate the English sentences, and use Transformer architecture to train a Chinese-toEnglish ASSUM system. Note that this is just the student network without being taught by a teacher network. Parameter Setup and Evaluation Metric Transformer is employed as our basis architecture4 (Vaswani et al., 2017). Six layers are stacked in both the encoder and decoder, and the dimensions of the embedding vectors and all hidden vectors are set 512. We set eight heads in the multi-head attention. The source embedding, the target embedding and the linear sublayer are shared in the teacher networks, while are not shared in the student networks. Byte-pair encoding is employed with a vocabulary of about 32k tokens on English side and Chinese side respectively (Sennrich et al., 2016b). During evaluation, we employ ROUGE (Lin, 2004) as our evaluation metric. On Gigaword, the full-length F-1 based ROUGE scores are reported. On DUC2004, the recall based ROUGE scores are reported to be consistent with previous works. NMT Performance The NMT system involved in all our experiments 4https://github.com/pytorch/fairseq is Transformer, with the same parameter setup to those of ASSUM systems. It is trained on 1.25M sentence pairs extracted from LDC corpora5, and is evaluated on NIST sets using multibleu.perl. Chinese-to-English results of caseinsensitive BLEU and English-to-Chinese results of character-based BLEU are reported in Table 1. Since there are four English references for one Chinese sentence in NIST evaluation sets, we report averaged BLEU of four English input sentences in English-to-Chinese translation. Compared to Cheng et al. (2018) on Chineseto-English translation, which targets at robust machine translation and uses the same data to ours, our Transformer significantly outperforms their work, indicating that we build a solid system for machine translation. 4.3 Experimental Results Monolingual ASSUM Performance We build a strong monolingual ASSUM system as shown in Table 2. The comparison is made between our basis architecture Transformer and previous works including state-of-the-art monolingual ASSUM systems. The work of ABS+ (Rush et al., 2015) is the pioneer work of using neural models for monolingual ASSUM. The works of Actor-Critic (Li et al., 2018) and StructuredLoss (Edunov et al., 2018) are training methods avoiding exposure bias problems in sequence-tosequence learning. The work of FactAware (Cao et al., 2018) encode factual informations such as 5The corpora include LDC2002E18, LDC2003E07, LDC2003E14, Hansards portion of LDC2004T07, LDC2004T08 and LDC2005T06. 
3168 System Gigaword DUC2004 ROUGE-1 ROUGE-2 ROUGE-L ROUGE-1 ROUGE-2 ROUGE-L Transformerbpe 38.1 19.1 35.2 31.2 10.7 27.1 Pipeline-TS 25.8 9.7 23.6 23.7 6.8 20.9 Pipeline-ST 22.0 7.0 20.9 20.9 5.3 18.3 Pseudo-Summary (Ayana et al., 2018) 21.5 6.6 19.6 19.3 4.3 17.0 Pivot-based (Cheng et al., 2017) 26.7 10.2 24.3 24.0 7.0 21.3 Pseudo-Chinese 27.9 10.9 25.6 24.4 6.6 21.4 Teaching Generation 29.6 12.1 27.3 25.6 7.9 22.7 Teaching Attention 28.1 11.4 26.0 24.3 7.4 21.7 Teaching Generation+Attention 30.1 12.2 27.7 26.0 8.0 23.1 Table 3: Comparison on the cross-lingual ASSUM performances. those extracted from openIE and dependency relations into the neural network to get factual summaries. Transformer with BPE pre-processing (denoted by Transformerbpe) performs consistently better than the related monolingual ASSUM systems. We build the cross-lingual ASSUM system basing on Transformerbpe. Cross-lingual ASSUM performance Table 3 mainly presents the results of the crosslingual ASSUM systems. The first row lists the performance of Transformerbpe, which is the monolingual ASSUM system. It sets the ceiling of the cross-lingual ASSUM performance since the cross-lingual process introduces information loss when using another language. Comparisons between the Baselines The middle part of Table 3 is about baseline systems. It shows that Pipeline-TS is significantly better than Pipeline-ST. The optimal order of the two steps in the pipeline methods should be translating source sentence at first, then summarizing the translation. The Pseudo-Summary method (Ayana et al., 2018) performs even below the PipelineST method. It indicates that using the pseudo target side is not effective for learning better crosslingual summarization model. Meanwhile, as Figure 1(b) illustrates, both source side and target side of the teacher network in the framework of Ayana et al. (2018) are pseudo, resulting in less solid data basis for training the student. The pseudo source side is generated by translating LCSTS Chinese sentences. The two baseline systems that surpass the pipeline systems are Pivot-based system and Pseudo-Chinese system. We re-implement the Pivot-based system but using Transformer instead of RNN, which is used in Cheng et al. (2017). Pseudo-Chinese system is the best baseline system indicating that pseudo source based parallel data is effective for training cross-lingual ASSUM system. Our Systems VS. the Baselines The bottom part of Table 3 lists the performances of our methods. It manifests that both teaching summary word generation and teaching attention weights are able to improve the performance over the baselines. When the summary word generation and attention weights are taught simultaneously (denoted by Teaching Generation+Attention), the performance is further improved, surpassing the best baseline by more than two points on Gigaword evaluation set and more than one point on DUC2004. Our Systems VS. the Ceiling Teaching Generation+Attention greatly reduces the gap between the cross-lingual ASSUM performance and the performance ceiling, i.e., the monolingual ASSUM performance shown in the first row. The gap is narrowed from 10.2 ROUGE-1 points to 8 ROUGE-1 points. In fact, our best method performs even better than ABS+, which is the early system for monolingual ASSUM (Rush et al., 2015). 
4.4 Experiment Analyses Hyper-Parameters λ ROUGE-1 ROUGE-2 ROUGE-L 0.1 44.8 22.0 41.7 0.3 45.1 22.3 42.0 0.5 45.0 22.2 41.9 0.7 44.9 22.2 41.8 0.9 44.8 21.8 41.7 top-k ROUGE-1 ROUGE-2 ROUGE-L 2 44.8 22.4 41.9 3 44.9 22.0 42.0 4 45.1 22.3 42.0 5 45.1 22.2 41.8 Table 4: Performances of varying hyper-parameters on the validation set. 3169 5 10 15 20 25 30 0-10 11-20 21-30 31-40 41-50 >50 ROUGE F-1 Scores Input Sentence Length Pseudo-Chinese Teaching Generation Teaching Attention Teaching Generation+Attention Figure 3: ROUGE-1 scores on different length source sentences in the Gigaword test set. There are two main hyper-parameters. One is λ in Equation (3) that balances the weights between teaching generation and teaching attention during training. The other is top-k which controls how many portion of the monolingual ASSUM attention can be relayed to the source side as illustrated in Figure 2. Table 4 presents the performance variance when the two hyper-parameters vary. It shows that the performance is best when λ is 0.3, indicating that training process is balanced towards teaching attention via attention relay. Based on the best λ of 0.3, we explore top-k ranging from 2 to 5. We can find that top-4 monolingual ASSUM attention weights achieve the best performance on the validation set. We select the best hyper-parameters according to Table 4 for testing. Layers for Attention Relay Transformer architecture used in our experiment is with six layers on both encoder and decoder. Attention relay can take place on each layer. Since each layer has eight heads for attention computation, we average the weights of all eight heads in the same layer. We study the attention relay effects on all six layers. The results in Table 5 show that relaying attention on the last layer achieves the best performance. Performances on Different Lengths We study the performance of each system on sets with different source sentence lengths. The source sentences are divided into six groups according to their lengths. Figure 3 presents the ROUGELayer ROUGE-1 ROUGE-2 ROUGE-L 1 44.7 21.8 41.6 2 44.7 22.3 41.7 3 45.0 22.0 41.8 4 44.9 22.1 41.3 5 44.9 22.1 41.9 6 45.1 22.3 42.0 Table 5: Validation set performances of using different layers for attention relay. 1 scores on the test set. The strongest baseline Pseudo-Chinese is used in this study. It shows that our methods perform better than PseudoChinese on most groups, while teaching attention is slightly worse on the group with the longest length. The sentences with length range 10-50 take up 94.2% of the whole test set. Our methods are consistently better than Pseudo-Chinese on theses sentences. Qualitative Analysis Table 6 presents some examples of the crosslingual ASSUM. The differences between our methods and the strongest baseline PseudoChinese are highlighted. It shows that more accurate summary words are produced in our systems. In contrast, Pseudo-Chinese may produce incorrect words that are even contrary to the meaning of the original sentence. 5 Conclusion In this paper, we propose a teacher-student framework together with the back-translation procedure to deal with the zero-shot challenge of cross3170 Cn-sentence 据周六报道,印度最高核专家对广岛日印度人的反核抗议不屑一顾,称激进分子应该在华 盛顿和莫斯科喊口号。 En-sentence a india ’s top nuclear expert shrugged off antinuclear protests by indians on hiroshima day , saying the activists should instead shout slogans in washington and moscow , a newspaper reported saturday . 
Ref-summary top nuclear scientist shrugs off indian antinuclear protests Psueo-Chinese india ’s top nuclear expert calls for anti-nuke demo in hiroshima Teaching-Generation india ’s top nuclear expert warns against nuclear protests Teaching-Attention india ’s top nuclear scientist defies hiroshima protest Teaching-Gener+Attn india ’s top nuclear expert defies anti-nuclear protests Cn-sentence 黎巴嫩总理拉菲克- 哈里里星期二指责英国支持以色列袭击黎巴嫩真主党游击队,同时他宣布 计划访问伦敦。 En-sentence lebanese prime minister rafic hariri accused britain on tuesday of supporting the israeli assault on hezbollah guerrillas in lebanon as he announced plans to visit london . Ref-summary hariri to visit britain which he accuses of backing israel Psueo-Chinese lebanese pm accuses britain of supporting hezbollah Teaching-Generation lebanese pm accuses britain of backing hezbollah attacks Teaching-Attention lebanese pm accuses britain of supporting hezbollah Teaching-Gener+Attn lebanese pm accuses britain of backing israel Cn-sentence 苏丹武装部队发言人今天说,政府军击退了叛军沿苏丹东部边境发动的攻击。 En-sentence government troops has repelled an attack by rebel forces along sudan ’s eastern borders , the spokesman of the sudanese armed forces said today . Ref-summary government forces repel rebel attack in eastern Psueo-Chinese sudanese government forces attack rebels in eastern sudan Teaching-Generation government troops repulse rebel attack in eastern sudan Teaching-Attention sudanese army says it foiled rebel attack on eastern border Teaching-Gener+Attn government troops repel rebel attack in eastern sudan Table 6: Examples of the cross-lingual ASSUM. lingual ASSUM, which has no parallel data for training. We use monolingual ASSUM which has large-scale training resources as the teacher, and set the cross-lingual ASSUM as the student. Two properties of the teacher are proposed to teach the student. One is the summary word generation probabilities, the other is the attention weights. We also propose attention relay mechanism to form the attention weights of the teacher. Experiments show that our method performs significantly better than several baselines, and is able to significantly reduce the performance gap between the cross-lingual ASSUM and the monolingual ASSUM over the benchmark datasets. Acknowledgments The authors would like to thank the anonymous reviewers for the helpful comments. This work was supported by National Key R&D Program of China (Grant No. 2016YFE0132100), National Natural Science Foundation of China (Grant No. 61525205, 61673289), and was also partially supported by the joint research project of Alibaba and Soochow University. References Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018. Unsupervised statistical machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 261–270. Ayana, Shi-Qi Shen, Yun Chen, Yang Cheng, Zhiyuan Liu, and Maosong Sun. 2018. Zero-shot crosslingual neural headline generation. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 26(12):2319–2327. Ayana, Shiqi Shen, Yu Zhao, Zhiyuan Liu, and Maosong Sun. 2016. Neural headline generation with sentence-wise optimization. arXiv preprint arXiv:1604.01904. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proc of International Conference on Learning Representations. Michele Banko, Vibhu O. Mittal, and Michael J. Witbrock. 2000. Headline generation based on statistical translation. 
In Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics, pages 310–317. Ziqiang Cao, Furu Wei, Wenjie Li, and Sujian Li. 2018. Faithful to the original: Fact aware neural abstractive summarization. In Thirty-Second AAAI Conference on Artificial Intelligence. Yun Chen, Yang Liu, Yong Cheng, and Victor O.K. Li. 2017. A teacher-student framework for zeroresource neural machine translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1925–1935. Yong Cheng, Zhaopeng Tu, Fandong Meng, Junjie Zhai, and Yang Liu. 2018. Towards robust neural machine translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1756– 1766. 3171 Yong Cheng, Qian Yang, Yang Liu, Maosong Sun, and Wei Xu. 2017. Joint training for pivot-based neural machine translation. In Proceedings of the TwentySixth International Joint Conference on Artificial Intelligence, pages 3974–3980. Sumit Chopra, Michael Auli, and Alexander M Rush. 2016. Abstractive sentence summarization with attentive recurrent neural networks. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 93–98. Trevor Cohn and Mirella Lapata. 2008. Sentence compression beyond word deletion. In Proceedings of the 22nd International Conference on Computational Linguistics, pages 137–144. Sergey Edunov, Myle Ott, Michael Auli, David Grangier, and Marc’Aurelio Ranzato. 2018. Classical structured prediction losses for sequence to sequence learning. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 355–364. Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. 2017. Convolutional sequence to sequence learning. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, pages 1243–1252. JMLR.org. Baotian Hu, Qingcai Chen, and Fangze Zhu. 2015. Lcsts: A large scale chinese short text summarization dataset. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1967–1972. Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, and Jeffrey Dean. 2017. Google’s multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics, pages 339–351. Philipp Koehn, Franz J Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pages 127–133. Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018. Phrase-based and neural unsupervised machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5039–5049. Piji Li, Lidong Bing, and Wai Lam. 2018. Actorcritic based training framework for abstractive summarization. arXiv:1803.11070. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text Summarization Branches Out: Proceedings of the ACL-04 Workshop, pages 74–81. Ramesh Nallapati, Bowen Zhou, Cicero Nogueira Dos Santos, Caglar Gulcehre, and Xiang Bing. 2016. Abstractive text summarization using sequence-tosequence rnns and beyond. 
In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 280–290. Alexander M Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 379–389. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86–96. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715– 1725. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104–3112. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30, pages 5998–6008. Xiaojun Wan. 2011. Using bilingual information for cross-language document summarization. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pages 1546–1555. Xiaojun Wan, Huiying Li, and Jianguo Xiao. 2010. Cross-language document summarization based on machine translation quality prediction. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 917–926. Li Wang, Junlin Yao, Yunzhe Tao, Li Zhong, Wei Liu, and Qiang Du. 2018. A reinforced topic-aware convolutional sequence-to-sequence model for abstractive text summarization. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, pages 4453–4460. Jinge Yao, Xiaojun Wan, and Jianguo Xiao. 2015. Phrase-based compressive cross-language summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 118–127. Jiajun Zhang, Yu Zhou, and Chengqing Zong. 2016. Abstractive cross-language summarization via translation model enhanced predicate argument structure 3172 fusing. IEEE/ACM Trans. Audio Speech, Lang. Process, vol. 24, no. 10. Hao Zheng, Yong Cheng, and Yang Liu. 2017. Maximum expected likelihood estimation for zeroresource neural machine translation. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, pages 4251–4257. Qingyu Zhou, Yang Nan, Furu Wei, and Zhou Ming. 2017. Selective encoding for abstractive sentence summarization. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1095– 1104.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3173–3179 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3173 Improving Low-Resource Cross-lingual Document Retrieval by Reranking with Deep Bilingual Representations Rui Zhang Caitlin Westerfield Sungrok Shim Garrett Bingham Alexander Fabbri William Hu Neha Verma Dragomir Radev Department of Computer Science, Yale University {r.zhang, dragomir.radev}@yale.edu Abstract In this paper, we propose to boost lowresource cross-lingual document retrieval performance with deep bilingual query-document representations. We match queries and documents in both source and target languages with four components, each of which is implemented as a term interaction-based deep neural network with cross-lingual word embeddings as input. By including query likelihood scores as extra features, our model effectively learns to rerank the retrieved documents by using a small number of relevance labels for low-resource language pairs. Due to the shared cross-lingual word embedding space, the model can also be directly applied to another language pair without any training label. Experimental results on the MATERIAL dataset show that our model outperforms the competitive translation-based baselines on English-Swahili, English-Tagalog, and English-Somali cross-lingual information retrieval tasks. 1 Introduction Cross-lingual relevance ranking, or Cross-Lingual Information Retrieval (CLIR), is the task of ranking foreign documents against a user query (Hull and Grefenstette, 1996; Ballesteros and Croft, 1996; Oard and Hackett, 1997; Darwish and Oard, 2003). As multilingual documents are more accessible, CLIR is increasingly more important whenever the relevant information is in other languages. Traditional CLIR systems consist of two components: machine translation and monolingual information retrieval. Based on the translation direction, it can be further categorized into the document translation and the query translation approaches (Nie, 2010). In both cases, we first solve the translation problem, and the task is transformed to the monolingual setting. However, while conceptually simple, the performance of this Document in Target Query in Target Document in Source Relevance Score Query in Source Source Target Source Source Target Target Target Source Figure 1: Cross-lingual Relevance Ranking with Bilingual Query and Document Representation. modular approach is fundamentally limited by the quality of machine translation. Recently, many deep neural IR models have shown promising results on monolingual data sets (Huang et al., 2013; Guo et al., 2016; Pang et al., 2016; Mitra et al., 2016, 2017; Xiong et al., 2017; Hui et al., 2017, 2018; McDonald et al., 2018). They learn a scoring function directly from the relevance label of query-document pairs. However, it is not clear how to use them when documents and queries are not in the same language. Furthermore, those deep neural networks need a large amount of training data. This is expensive to get for lowresource language pairs in our cross-lingual case. In this paper, we propose a cross-lingual deep relevance ranking architecture based on a bilingual view of queries and documents. As shown in Figure 1, our model first translates queries and documents and then uses four components to match them in both the source and target language. 
Each component is implemented as a deep neural network, and the final relevance score combines all components which are jointly trained given the relevance label. We implement this based on state3174 Query in Source Language Document in Target Language Cosine Similarity max and k-max Term Gating (a) Bilingual POSIT-DRMM. The colored box represents hidden states in bidirectional LSTMs. Query in Source Language Document in Target Language Cosine Similarity Conv2D max k-max IDF (b) Bilingual PACRR-DRMM. The colored box represents cross-lingual word embeddings. Bilingual PACRR is the same except it uses a single MLP at the final stage. Figure 2: Model architecture. We only show the component of the source query with the target document. of-the-art term interaction models because they enable us to make use of cross-lingual embeddings to explicitly encode terms of queries and documents even if they are in different languages. To deal with the small amount of training data, we first perform query likelihood retrieval and include the score as an extra feature in our model. In this way, the model effectively learns to rerank from a small number of relevance labels. Furthermore, since the word embeddings are aligned in the same space, our model can directly transfer to another language pair with no additional training data. We evaluate our model on the MATERIAL CLIR dataset with three language pairs including English to Swahili, English to Tagalog, and English to Somali. Experimental results demonstrate that our model outperforms other translation-based query likelihood retrieval and monolingual deep relevance ranking approaches. 2 Our Method In cross-lingual document retrieval, given a user query in the source language Q and a document in the target language D, the system computes a relevance score s(Q, D). As shown in Figure 1, our model first translates the document as ˆD or the query as ˆQ, and then it uses four separate components to match: (1) source query with target document, (2) source query with source document, (3) target query with source document, (4) target query with target document. The final relevance score combines all components: s(Q, D) = s(Q, D) + s(Q, ˆD) + s( ˆQ, ˆD) + s( ˆQ, D) To implement each component, we extend three state-of-the-art term interaction models: PACRR (Position-Aware Convolutional Recurrent Relevance Matching) proposed by Hui et al. (2017), POSIT-DRMM (POoled SImilariTy DRMM) and PACRR-DRMM proposed by McDonald et al. (2018). In term interaction models, each query term is scored to a document’s terms from the interaction encodings, and scores for different query terms are aggregated to produce the querydocument relevance score. 2.1 Bilingual POSIT-DRMM This model is illustrated in Figure 2a. We first use bidirectional LSTMs (Hochreiter and Schmidhuber, 1997) to produce the context-sensitive encoding of each query and document term. We also add residual connection to combine the pre-trained term embedding and the LSTM hidden states. For the source query and document term, we can use the pre-trained word embedding in the source language. For the target query and document term, we first align the pre-trained embedding in the target language to the source language and then use this cross-lingual word embedding as the input to LSTM. Thereafter, we produce the documentaware query term encoding by applying max pooling and k-max pooling over the cosine similarity matrix of query and document terms. 
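The document-aware query-term encoding just described can be illustrated with a short sketch. This is not the authors' implementation: it assumes query and document terms are already encoded as vectors in a shared cross-lingual space (the BiLSTM context encoding and residual connections are omitted), averaging the top-k similarities is one common reading of k-max pooling, and the MLP term scoring with term gating described next is left out.

```python
import numpy as np

def pooled_similarity_features(query_vecs, doc_vecs, k=5):
    """Max pooling and k-max pooling over the query-document cosine
    similarity matrix, one feature row per query term (illustrative names)."""
    q = query_vecs / np.linalg.norm(query_vecs, axis=1, keepdims=True)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    sim = q @ d.T                                # (query_len, doc_len) cosine matrix

    max_pool = sim.max(axis=1)                   # best-matching document term
    kmax_pool = np.sort(sim, axis=1)[:, -k:].mean(axis=1)   # average of the k best matches

    # The full model turns these per-term features into a single relevance
    # score with an MLP and a term-gating mechanism (described next).
    return np.stack([max_pool, kmax_pool], axis=1)

# Toy usage with random vectors standing in for cross-lingual embeddings.
rng = np.random.default_rng(0)
print(pooled_similarity_features(rng.normal(size=(4, 300)),
                                 rng.normal(size=(50, 300))).shape)   # (4, 2)
```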
We then use an MLP to produce term scores, and the relevance score is a weighted sum over all terms in the query with a term gating mechanism. 3175 EN->SW EN->TL EN->SO # Document 813 844 695 # Document Token (Min/Avg/Max) 34/341/1724 32/404/2501 69/370/2671 Query Set Q1 Q2 Q3 Q1 Q2 Q3 Q1 # Query 300 400 600 300 400 600 300 # Relevant Pairs 411 489 828 236 576 1018 496 Table 1: The MATERIAL dataset statistics. For SW and TL, we use the ANALYSIS document set with Q1 for training, Q2 for dev, and Q3 for test. For transfer learning to SO, we use the DEV document set with Q1. Q1 contains open queries where performers can conduct any automatic or manual exploration while Q2 and Q3 are closed queries where results must be generated with fully automatic systems with no human in the loop. 2.2 Bilingual PACRR and Bilingual PACRR-DRMM These models are shown in Figure 2b. We first align the word embeddings in the target language to the source language and build a querydocument similarity matrix that encodes the similarity between the query and document term. Depending on the query language and document language, we construct four matrices, SIMQ,D, SIMQ, ˆD, SIM ˆQ, ˆD, SIM ˆQ,D, for each of the four components. Then, we use convolutional neural networks over the similarity matrix to extract n-gram matching features. We then use maxpooling and k-max-pooling to produce the feature matrix where each row is a document-aware encoding of a query term. The final step computes the relevance score: Bilingual PACRR uses an MLP on the whole feature matrix to get the relevance score, while Bilingual PACRR-DRMM first uses an MLP on individual rows to get query term scores and then use a second layer to combine them. 3 Related Work Cross-lingual Information Retrieval. Traditional CLIR approaches include document translation and query translation, and more research efforts are on the latter (Oard and Hackett, 1997; Oard, 1998; McCarley, 1999; Franz et al., 1999). Early methods use the dictionary to translate the user query (Hull and Grefenstette, 1996; Ballesteros and Croft, 1996; Pirkola, 1998). Other methods include the single best SMT query translation (Chin et al., 2014) and the weighted SMT translation alternatives known as the probabilistic structured query (PSQ) (Darwish and Oard, 2003; Ture et al., 2012). Recently, Bai et al. (2010) and Sokolov et al. (2013) propose methods to learn the sparse query-document associations from supervised ranking signals on cross-lingual Wikipedia and patent data, respectively. Furthermore, Vuli´c and Moens (2015) and Litschko et al. (2018) use cross-lingual word embeddings to represent both queries and documents as vectors and perform IR by computing the cosine similarity. Schamoni et al. (2014) and Sasaki et al. (2018) also use an automatic process to build CLIR datasets from Wikipeida articles. Neural Learning to Rank. Most of neural learning to rank models can be categorized in two groups: representation based (Huang et al., 2013; Shen et al., 2014) and interaction based (Pang et al., 2016; Guo et al., 2016; Hui et al., 2017; Xiong et al., 2017; McDonald et al., 2018). The former builds representations of query and documents independently, and the matching is performed at the final stage. The latter explicitly encodes the interaction between terms to direct capture word-level interaction patterns. 
For example, the DRMM (Guo et al., 2016) first compares the term embeddings of each pair of terms within the query and the document and then generates fixedlength matching histograms. 4 Experiments Training and Inference. We first use the Indri1 system which uses query likelihood with Dirichlet Smoothing (Zhai and Lafferty, 2004) to preselect the documents from the collection. To build the training dataset, for each positive example in the returned list, we randomly sample one negative example from the documents returned by Indri. The model is then trained with a binary crossentropy loss. On validation or testing set, we use our prediction scores to rerank the documents returned by Indri. Extra Features. Following the previous work (Severyn and Moschitti, 2015; Mohan et al., 2017; McDonald et al., 2018), we compute the final relevance score by a linear model to combine the model output with the following set of extra fea1www.lemurproject.org/indri.php 3176 EN->SW EN->TL MAP P@20 NDCG@20 AQWV MAP P@20 NDCG@20 AQWV Query Translation and Document Translation with Indri Dictionary-Based Query Translation (DBQT) 20.93 4.86 28.65 6.50 20.01 5.42 27.01 5.93 Probabilistic Structured Query (PSQ) 27.16 5.81 36.03 12.56 35.20 8.18 44.04 19.81 Statistical MT (SMT) 26.30 5.28 34.60 13.77 37.31 8.77 46.77 21.90 Neural MT (NMT) 26.54 5.26 34.83 15.70 33.83 8.20 43.17 18.56 Deep Relevance Ranking PACRR 24.69 5.24 32.85 11.73 32.53 8.42 41.75 17.48 PACRR-DRMM 22.15 5.14 30.28 8.50 32.59 8.60 42.17 16.59 POSIT-DRMM 23.91 6.04 33.83 12.06 25.16 8.15 34.80 9.28 Deep Relevance Ranking with Extra Features in Section 4 PACRR 27.03 5.34 35.36 14.18 41.43 8.98 49.96 27.46 PACRR-DRMM 25.46 5.50 34.15 12.18 35.61 8.69 45.34 22.70 POSIT-DRMM 26.10 5.26 34.27 14.11 39.35 9.24 48.41 25.01 Ours with Extra Features in Section 4: In-Language Training Bilingual PACRR 29.64 5.75 38.27 17.87 43.02 9.63 52.27 29.12 Bilingual PACRR-DRMM 26.15 5.84 35.54 12.92 38.29 9.21 47.60 22.94 Bilingual POSIT-DRMM 30.13 6.28 39.68 18.69 43.67 9.73 52.80 29.12 Bilingual POSIT-DRMM (3-model ensemble) 31.60 6.37 41.25 20.19 45.35 9.84 54.26 31.08 Table 2: Test set result on English to Swahili and English to Tagalog. We report the TREC ad-hoc retrieval evaluation metrics (MAP, P@20, NDCG@20) and the Actual Query Weighted Value (AQWV). Train: EN->SW + EN->TL, Test: EN->SO MAP P@20 AQWV PSQ 17.52 5.45 2.35 SMT 19.04 6.12 4.62 Bilingual POSIT-DRMM 20.58 6.51 5.71 +3-model ensemble 21.25 6.68 5.89 Table 3: Zero-shot transfer learning on English to Somali test set. tures: (1) the Indri score with the language modeling approach to information retrieval. (2) the percentage of query terms with an exact match in the document, including the regular percentage and IDF weighted percentage. (3) the percentage of query term bigrams matches in the document. Cross-lingual Word Embeddings. We apply the supervised iterative Procrustes approach (Xing et al., 2015; Conneau et al., 2018) to align two pretrained mono-lingual fastText (Bojanowski et al., 2016) word embeddings using the MUSE implementation2. To build the bilingual dictionary, we use the translation pages of Wiktionary3. For Swahili, we build a training dictionary for 5301 words and a testing dictionary for 1326 words. For Tagalog, the training dictionary and testing dictionary contains 7088 and 1773 words, respectively. For Somali, the corresponding number is 7633 and 1909. 
We then learn the cross-lingual word embeddings from Swahili to English, from Tagalog 2github.com/facebookresearch/MUSE 3https://www.wiktionary.org/ to English, and from Somali to English. Therefore, all three languages are in the same word embedding space. Data Sets and Evaluation Metrics. Our experiments are evaluated on the MATERIAL4 program as summarized in Table 1. It consists of three language pairs with English queries on Swahili (EN>SW), Tagalog (EN->TL), Somali documents (EN->SO). We use the TREC ad-hoc retrieval evaluation script5 to compute Precision@20, Mean Average Precision (MAP), Normalized Discounted Cumulative Gain@20 (NDCG@20). We also report the Actual Query Weighted Value (AQWV) (NIST, 2017), a set-based metric with penalty for both missing relevant and returning irrelevant documents. We use β = 40.0 and find the best global fixed cutoff over all queries. Baselines. For traditional CLIR approaches, we use query translation and document translation with the Indri system. For query translation, we use Dictionary-Based Query Translation (DBQT) and Probabilistic Structured Query (PSQ). For document translation, we use Statistical Machine Translation (SMT) and Neural Machine Translation (NMT). For SMT, we use the moses system (Koehn et al., 2007) with word alignments using mGiza and 5-gram KenLM language model (Heafield, 2011). For NMT, we use sequence-to4www.iarpa.gov/index.php/ research-programs/material 5https://trec.nist.gov/trec_eval/ 3177 sequence model with attention (Bahdanau et al., 2015; Miceli Barone et al., 2017) implemented in Marian (Junczys-Dowmunt et al., 2018). For deep relevance ranking baselines, we investigate recent state-of-the-art models including PACRR, PACRR-DRMM, and POSIT-DRMM. These models and our methods all use an SMTbased document translation as input. Implementation Details. For POSIT-DRMM and Bilingual POSIT-DRMM, we use the k-maxpooling with k = 5 and 0.3 dropout of the BiLSTM output. For PACRR, PACRR-DRMM and their bilingual counterparts, we use convolutional filter sizes with [1,2,3], and each filter size has 32 filters. We use k = 2 in the k-max-pooling. The loss function is minimized using the Adam optimizer (Kingma and Ba, 2014) with the training batch size as 32. We monitor the MAP performance on the development set after each epoch of training to select the model which is used on the test data. 4.1 Results and Discussion Table 2 shows the result on EN->SW and EN>TL where we train and test on the same language pair. Performance of Baselines. For query translation, PSQ is better than DBQT because PSQ uses a weighted alternative to translate query terms and does not limit to the fixed translation from the dictionary as in DBQT. For document translation, we find that both SMT and NMT have a similar performance which is close to PSQ. The effectiveness of different approaches depends on the language pair (PSQ for EN->SW and SMT for EN->TL), which is a similar finding with McCarley (1999) and Franz et al. (1999). In our experiments with deep relevance ranking models, we all use SMT and PSQ because they have strong performances in both language pairs and it is fair to compare. Effect of Extra Features and Bilingual Representation. While deep relevance ranking can achieve decent performance, the extra features are critical to achieve better results. Because the extra features include the Indri score, the deep neural model essentially learns to rerank the document by effectively using a small number of training examples. 
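For concreteness, the extra features discussed above (Section 4) can be summarized in a short sketch. The helper below is purely illustrative and not the authors' code: it computes the exact-match percentage, its IDF-weighted variant, and the bigram-match percentage, and combines them linearly with the Indri score and the neural model's output; in the paper the combination weights are learned, here they are simply given.

```python
def extra_features(query_terms, doc_terms, idf, indri_score):
    """Hand-crafted features used alongside the neural relevance score
    (illustrative sketch of the three feature groups listed in Section 4)."""
    doc_set = set(doc_terms)
    matched = [t for t in query_terms if t in doc_set]

    pct_match = len(matched) / len(query_terms)                      # exact-match %
    idf_total = sum(idf.get(t, 0.0) for t in query_terms) or 1e-9
    idf_match = sum(idf.get(t, 0.0) for t in matched) / idf_total    # IDF-weighted %

    q_bigrams = list(zip(query_terms, query_terms[1:]))
    d_bigrams = set(zip(doc_terms, doc_terms[1:]))
    pct_bigram = (sum(b in d_bigrams for b in q_bigrams) / len(q_bigrams)
                  if q_bigrams else 0.0)                             # bigram-match %

    return [indri_score, pct_match, idf_match, pct_bigram]

def final_score(model_score, features, weights):
    """Linear combination of the neural score and the extra features."""
    return model_score + sum(w * f for w, f in zip(weights, features))
```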
Furthermore, our models with bilingual representations achieve better results in both language pairs, giving additional 1-3 MAP improvements over their counterparts. To compare language pairs, EN->TL has larger improvements over EN->SW. This is because EN->TL has better query translation, document translation, and query likelihood retrieval results from the baselines, and thus it enjoys more benefits from our model. We also found POSIT-DRMM works better than the other two, suggesting term-gating is useful especially when the query translation can provide more alternatives. We then perform ensembling of POSIT-DRMM to further improve the results. Zero-Shot Transfer Learning. Table 3 shows the result for a zero-shot transfer learning setting where we train on EN->SW + EN->TL and directly test on EN->SO without using any Somali relevance labels. This transfer learning delivers a 1-3 MAP improvement over PSQ and SMT. This presents a promising approach to boost performance by utilizing relevance labels from other language pairs. 5 Conclusion We propose to improve cross-lingual document retrieval by utilizing bilingual query-document interactions and learning to rerank from a small amount of training data for low-resource language pairs. By aligning word embedding spaces for multiple languages, the model can be directly applied under a zero-shot transfer setting when no training data is available for another language pair. We believe the idea of combining bilingual document representations using cross-lingual word embeddings can be generalized to other models as well. Acknowledgements We thank Petra Galuˇsˇc´akov´a, Douglas W. Oard, Efsun Kayi, Suraj Nair, Han-Chin Shing, and Joseph Barrow for their helpful discussion and feedback. This research is based upon work supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via contract # FA8650-17-C-9117. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein. 3178 References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR. Bing Bai, Jason Weston, David Grangier, Ronan Collobert, Kunihiko Sadamasa, Yanjun Qi, Olivier Chapelle, and Kilian Weinberger. 2010. Learning to rank with (a lot of) word features. Information retrieval, 13(3):291–314. Lisa Ballesteros and Bruce Croft. 1996. Dictionary methods for cross-lingual information retrieval. In International Conference on Database and Expert Systems Applications, pages 791–801. Springer. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2016. Enriching word vectors with subword information. arXiv preprint arXiv:1607.04606. Jeffrey Chin, Maureen Heymans, Alexandre Kojoukhov, Jocelyn Lin, and Hui Tan. 2014. Cross-language information retrieval. US Patent 8,799,307. Alexis Conneau, Guillaume Lample, Marc’Aurelio Ranzato, Ludovic Denoyer, and Herv´e J´egou. 2018. Word translation without parallel data. In ICLR. Kareem Darwish and Douglas W Oard. 2003. Probabilistic structured query methods. 
In Proceedings of the 26th annual international ACM SIGIR conference on Research and development in informaion retrieval, pages 338–344. ACM. Martin Franz, J Scott McCarley, and Salim Roukos. 1999. Ad hoc and multilingual information retrieval at ibm. NIST special publication SP, pages 157– 168. Jiafeng Guo, Yixing Fan, Qingyao Ai, and W Bruce Croft. 2016. A deep relevance matching model for ad-hoc retrieval. In Proceedings of the 25th ACM International on Conference on Information and Knowledge Management, pages 55–64. ACM. Kenneth Heafield. 2011. Kenlm: Faster and smaller language model queries. In Proceedings of the sixth workshop on statistical machine translation, pages 187–197. Association for Computational Linguistics. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry Heck. 2013. Learning deep structured semantic models for web search using clickthrough data. In Proceedings of the 22nd ACM international conference on Conference on information & knowledge management, pages 2333–2338. ACM. Kai Hui, Andrew Yates, Klaus Berberich, and Gerard de Melo. 2017. Pacrr: A position-aware neural ir model for relevance matching. arXiv preprint arXiv:1704.03940. Kai Hui, Andrew Yates, Klaus Berberich, and Gerard de Melo. 2018. Co-pacrr: A context-aware neural ir model for ad-hoc retrieval. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, pages 279–287. ACM. David A Hull and Gregory Grefenstette. 1996. Querying across languages: a dictionary-based approach to multilingual information retrieval. In Proceedings of the 19th annual international ACM SIGIR conference on Research and development in information retrieval, pages 49–57. ACM. Marcin Junczys-Dowmunt, Roman Grundkiewicz, Tomasz Dwojak, Hieu Hoang, Kenneth Heafield, Tom Neckermann, Frank Seide, Ulrich Germann, Alham Fikri Aji, Nikolay Bogoychev, Andr´e F. T. Martins, and Alexandra Birch. 2018. Marian: Fast neural machine translation in C++. In Proceedings of ACL 2018, System Demonstrations, pages 116– 121, Melbourne, Australia. Association for Computational Linguistics. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, et al. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th annual meeting of the association for computational linguistics companion volume proceedings of the demo and poster sessions, pages 177–180. Robert Litschko, Goran Glavaˇs, Simone Paolo Ponzetto, and Ivan Vuli´c. 2018. Unsupervised crosslingual information retrieval using monolingual data only. In SIGIR. J Scott McCarley. 1999. Should we translate the documents or the queries in cross-language information retrieval? In Proceedings of the 37th annual meeting of the Association for Computational Linguistics on Computational Linguistics, pages 208–214. Association for Computational Linguistics. Ryan McDonald, Georgios-Ioannis Brokos, and Ion Androutsopoulos. 2018. Deep relevance ranking using enhanced document-query interactions. arXiv preprint arXiv:1809.01682. Antonio Valerio Miceli Barone, Jindˇrich Helcl, Rico Sennrich, Barry Haddow, and Alexandra Birch. 2017. Deep architectures for neural machine translation. 
In Proceedings of the Second Conference on Machine Translation, Copenhagen, Denmark. Association for Computational Linguistics. 3179 Bhaskar Mitra, Fernando Diaz, and Nick Craswell. 2017. Learning to match using local and distributed representations of text for web search. In Proceedings of the 26th International Conference on World Wide Web, pages 1291–1299. International World Wide Web Conferences Steering Committee. Bhaskar Mitra, Eric Nalisnick, Nick Craswell, and Rich Caruana. 2016. A dual embedding space model for document ranking. arXiv preprint arXiv:1602.01137. Sunil Mohan, Nicolas Fiorini, Sun Kim, and Zhiyong Lu. 2017. Deep learning for biomedical information retrieval: learning textual relevance from click logs. BioNLP 2017, pages 222–231. Jian-Yun Nie. 2010. Cross-language information retrieval. Synthesis Lectures on Human Language Technologies, 3(1):1–125. NIST. 2017. The Official Original Derivation of AQWV. Douglas W Oard. 1998. A comparative study of query and document translation for cross-language information retrieval. In Conference of the Association for Machine Translation in the Americas, pages 472–483. Springer. Douglas W Oard and Paul Hackett. 1997. Document translation for cross-language text retrieval at the university of maryland. In TREC, pages 687–696. Citeseer. Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, and Xueqi Cheng. 2016. A study of matchpyramid models on ad-hoc retrieval. arXiv preprint arXiv:1606.04648. Ari Pirkola. 1998. The effects of query structure and dictionary setups in dictionary-based cross-language information retrieval. In Proceedings of the 21st annual international ACM SIGIR conference on Research and development in information retrieval, pages 55–63. ACM. Shota Sasaki, Shuo Sun, Shigehiko Schamoni, Kevin Duh, and Kentaro Inui. 2018. Cross-lingual learning-to-rank with shared representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), volume 2, pages 458–463. Shigehiko Schamoni, Felix Hieber, Artem Sokolov, and Stefan Riezler. 2014. Learning translational and knowledge-based similarities from relevance rankings for cross-language retrieval. In Proceedings of the 52 Annual Meeting of the Association for Computational Linguistics (ACL). Aliaksei Severyn and Alessandro Moschitti. 2015. Learning to rank short text pairs with convolutional deep neural networks. In Proceedings of the 38th international ACM SIGIR conference on research and development in information retrieval, pages 373– 382. ACM. Yelong Shen, Xiaodong He, Jianfeng Gao, Li Deng, and Gr´egoire Mesnil. 2014. Learning semantic representations using convolutional neural networks for web search. In Proceedings of the 23rd International Conference on World Wide Web, pages 373– 374. ACM. Artem Sokolov, Laura Jehl, Felix Hieber, and Stefan Riezler. 2013. Boosting cross-language retrieval by learning bilingual phrase associations from relevance rankings. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1688–1699. Ferhan Ture, Jimmy Lin, and Douglas Oard. 2012. Combining statistical translation techniques for cross-language information retrieval. Proceedings of COLING 2012, pages 2685–2702. Ivan Vuli´c and Marie-Francine Moens. 2015. Monolingual and cross-lingual information retrieval models based on (bilingual) word embeddings. 
In Proceedings of the 38th international ACM SIGIR conference on research and development in information retrieval, pages 363–372. ACM. Chao Xing, Dong Wang, Chao Liu, and Yiye Lin. 2015. Normalized word embedding and orthogonal transform for bilingual word translation. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1006–1011. Chenyan Xiong, Zhuyun Dai, Jamie Callan, Zhiyuan Liu, and Russell Power. 2017. End-to-end neural ad-hoc ranking with kernel pooling. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 55–64. ACM. Chengxiang Zhai and John Lafferty. 2004. A study of smoothing methods for language models applied to information retrieval. ACM Transactions on Information Systems (TOIS), 22(2):179–214.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3180–3189 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3180 Are Girls Neko or Sh¯ojo? Cross-Lingual Alignment of Non-Isomorphic Embeddings with Iterative Normalization Mozhi Zhang1 Keyulu Xu2 Ken-ichi Kawarabayashi3 Stefanie Jegelka2 Jordan Boyd-Graber1 1University of Maryland, College Park, Maryland, USA 2Massachusetts Institute of Technology, Cambridge, Massachusetts, USA 3National Institue of Informatics, Tokyo, Japan {mozhi,jbg}@umiacs.umd.edu {keyulu,stefje}@mit.edu [email protected] Abstract Cross-lingual word embeddings (CLWE) underlie many multilingual natural language processing systems, often through orthogonal transformations of pre-trained monolingual embeddings. However, orthogonal mapping only works on language pairs whose embeddings are naturally isomorphic. For nonisomorphic pairs, our method (Iterative Normalization) transforms monolingual embeddings to make orthogonal alignment easier by simultaneously enforcing that (1) individual word vectors are unit length, and (2) each language’s average vector is zero. Iterative Normalization consistently improves word translation accuracy of three CLWE methods, with the largest improvement observed on EnglishJapanese (from 2% to 44% test accuracy). 1 Orthogonal Cross-Lingual Mappings Cross-lingual word embedding (CLWE) models map words from multiple languages to a shared vector space, where words with similar meanings are close, regardless of language. CLWE is widely used in multilingual natural language processing (Klementiev et al., 2012; Guo et al., 2015; Zhang et al., 2016). Recent CLWE methods (Ruder et al., 2017; Glavas et al., 2019) independently train two monolingual embeddings on large monolingual corpora and then align them with a linear transformation. Previous work argues that these transformations should be orthogonal (Xing et al., 2015; Smith et al., 2017; Artetxe et al., 2016): for any two words, the dot product of their representations is the same as the dot product with the transformation. This preserves similarities and substructure of the original monolingual word embedding but enriches the embeddings with multilingual connections between languages. Thus, many state-of-the-art mapping-based CLWE methods impose an orthogonal constraint (Artetxe et al., 2017; Conneau et al., 2018; Alvarez-Melis and Jaakkola, 2018; Artetxe et al., 2018; Ruder et al., 2018; Alvarez-Melis et al., 2019). The success of orthogonal methods relies on the assumption that embedding spaces are isomorphic; i.e., they have the same inner-product structures across languages, but this does not hold for all languages (Søgaard et al., 2018; Fujinuma et al., 2019). For example, English and Japanese fastText vectors (Bojanowski et al., 2017) have different substructures around “girl” (Figure 1 left). As a result, orthogonal mapping fails on some languages— when Hoshen and Wolf (2018) align fastText embeddings with orthogonal mappings, they report 81% English–Spanish word translation accuracy but only 2% for the more distant English–Japanese. While recent work challenges the orthogonal assumption (Doval et al., 2018; Joulin et al., 2018; Jawanpuria et al., 2019), we focus on whether simple preprocessing techniques can improve the suitability of orthogonal models. Our iterative method normalizes monolingual embeddings to make their structures more similar (Figure 1), which improves subsequent alignment. 
Our method is motivated by two desired properties of monolingual embeddings that support orthogonal alignment: 1. Every word vector has the same length. 2. Each language’s mean has the same length. Standard preprocessing such as dimension-wise mean centering and length normalization (Artetxe et al., 2016) do not meet the two requirements at the same time. Our analysis leads to Iterative Normalization, an alternating projection algorithm that normalizes any word embedding to provably satisfy both conditions. After normalizing the monolingual embeddings, we then apply mapping-based CLWE algorithms on the transformed embeddings. 3181 彼女 kanojo “she” 少年 shōnen “boy” 少女 shōjo “girl” 猫 neko “cat” 妹 imōto “sister” 娘 musume “daughter” .98 .98 .98 .98 .98 girls woman girl teenager teenage boy .65 .62 .59 .58 .58 girls woman girl teenager teenage boy .62 .57 .55 .54 .53 女の子 onna no ko “girls” 少年 shōnen “boy” 少女 shōjo “girl” 美少女 bishōjo “pretty girl” 乙女 otome “maiden” 魔法 mahō “magic” .48 .56 .51 .51 .48 Iterative Normalization Iterative Normalization Figure 1: The most similar Japanese words for 少女(sh¯ojo “girl”) and English words for “girl”, measured by cosine similarity on Wikipedia fastText vectors, before (left) and after (right) Iterative Normalization. In the original embedding spaces, “boy” is the nearest neighbor for both languages but with a very different cosine similarity, and “cat” in English is not close to “girl”: both violate the isomorphism assumed by an orthogonal transformation for cross-lingual representations. Iterative Normalization replaces 猫(neko “cat”) with the more relevant 美少女(bish¯ojo “pretty girl”) and brings cosine similarities closer. We empirically validate our theory by combining Iterative Normalization with three mapping-based CLWE methods. Iterative Normalization improves word translation accuracy on a dictionary induction benchmark across thirty-nine language pairs. 2 Learning Orthogonal Mappings This section reviews learning orthogonal crosslingual mapping between word embeddings and, along the way, introduces our notation. We start with pre-trained word embeddings in a source language and a target language. We assume1 all embeddings are d-dimensional, and the two languages have the same vocabulary size n. Let X ∈Rd×n be the word embedding matrix for the source language, where each column xi ∈Rd is the representation of the i-th word from the source language, and let Z ∈Rd×n be the word embedding matrix for the target language. Our goal is to learn a transformation matrix W ∈Rd×d that maps the source language vectors to the target lan1Word translation benchmarks use the same assumptions. guage space. While our experiments focus on the supervised case with a seed dictionary D with translation pairs (i, j), the analysis also applies to unsupervised projection. One straightforward way to learn W is by minimizing Euclidean distances between translation pairs (Mikolov et al., 2013a). Formally, we solve: min W X (i,j)∈D ∥Wxi −zj∥2 2. (1) Xing et al. (2015) further restrict W to orthogonal transformations; i.e., W⊤W = I. The orthogonal constraint significantly improves word translation accuracy (Artetxe et al., 2016). However, this method still fails for some language pairs because word embeddings are not isomorphic across languages. To improve orthogonal alignment between non-isomorphic embedding spaces, we aim to transform monolingual embeddings in a way that helps orthogonal transformation. 
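As a point of reference for the sections that follow, the orthogonal mapping in Equation (1) has a well-known closed-form solution via the singular value decomposition (Procrustes analysis, used later in Section 5). The sketch below is illustrative rather than the authors' code; it assumes the seed dictionary is given as (source index, target index) pairs and keeps the paper's words-as-columns convention.

```python
import numpy as np

def orthogonal_procrustes(X, Z, dictionary):
    """Solve min_W sum_(i,j) ||W x_i - z_j||^2 subject to W^T W = I.
    X, Z are (d, n) embedding matrices (words as columns); the optimum is
    W = U V^T, where U S V^T is the SVD of sum_(i,j) z_j x_i^T."""
    src = [i for i, _ in dictionary]
    tgt = [j for _, j in dictionary]
    u, _, vt = np.linalg.svd(Z[:, tgt] @ X[:, src].T)
    return u @ vt

# Toy usage: random embeddings and an identity seed dictionary.
rng = np.random.default_rng(0)
X, Z = rng.normal(size=(50, 1000)), rng.normal(size=(50, 1000))
W = orthogonal_procrustes(X, Z, [(i, i) for i in range(500)])
print(np.allclose(W.T @ W, np.eye(50)))   # True: W is orthogonal
```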
3182 3 When Orthogonal Mappings Work When are two embedding spaces easily aligned? A good orthogonal mapping is more likely if word vectors have two properties: length-invariance and center-invariance. Length-Invariance. First, all word vectors should have the same, constant length. Lengthinvariance resolves inconsistencies between monolingual word embedding and cross-lingual mapping objectives (Xing et al., 2015). During training, popular word embedding algorithms (Mikolov et al., 2013b; Pennington et al., 2014; Bojanowski et al., 2017) maximize dot products between similar words, but evaluate on cosine similarity. To make things worse, the transformation matrix minimizes a third metric, Euclidean distance (Equation 1). This inconsistency is naturally resolved when the lengths of word vectors are fixed. Suppose u ∈Rd and v ∈Rd have the same length, then u⊤v ∝cos(u, v) = 1 −1 2∥u −v∥2 2. Minimizing Euclidean distance is equivalent to maximizing both dot product and cosine similarity with constant word vector lengths, thus making objectives consistent. Length-invariance also satisfies a prerequisite for bilingual orthogonal alignment: the embeddings of translation pairs should have the same length. If a source word vector xi can be aligned to its target language translation zj = Wxi with an orthogonal matrix W, then ∥zj∥2 = ∥Wxi∥2 = ∥xi∥2, (2) where the second equality follows from the orthogonality of W. Equation (2) is trivially satisfied if all vectors have the same length. In summary, lengthinvariance not only promotes consistency between monolingual word embedding and cross-lingual mapping objective but also simplifies translation pair alignment. Center-Invariance. Our second condition is that the mean vector of different languages should have the same length, which we prove is a pre-requisite for orthogonal alignment. Suppose two embedding matrices X and Z can be aligned with an orthogonal matrix W such that Z = WX. Let ¯x = 1 n Pn i=1 xi and ¯z = 1 n Pn i=1 zi be the mean vectors. Then ¯z = W¯x. Since W is orthogonal, ∥¯z∥2 = ∥W¯x∥2 = ∥¯x∥2. In other words, orthogonal mappings can only align embedding spaces with equal-magnitude centers. A stronger version of center-invariance is zeromean, where the mean vector of each language is zero. Artetxe et al. (2016) find that centering improves dictionary induction; our analysis provides an explanation. 4 Iterative Normalization We now develop Iterative Normalization, which transforms monolingual word embeddings to satisfy both length-invariance and center-invariance. Specifically, we normalize word embeddings to simultaneously have unit-length and zero-mean. Formally, we produce embedding matrix X such that ∥xi∥2 = 1 for all i, (3) and n X i=1 xi = 0. (4) Iterative Normalization transforms the embeddings to make them satisfy both constraints at the same time. Let x(0) i be the initial embedding for word i. We assume that all word embeddings are non-zero.2 For every word i, we iteratively transform each word vector xi by first making the vectors unit length, y(k) i = x(k−1) i /∥x(k−1) i ∥2, (5) and then making them mean zero, x(k) i = y(k) i −1 n n X i=1 y(k) i . (6) Equation (5) and (6) project the embedding matrix X to the set of embeddings that satisfy Equation (3) and (4). Therefore, our method is a form of alternating projection (Bauschke and Borwein, 1996), an algorithm to find a point in the intersection of two closed sets by alternatively projecting onto one of the two sets. 
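A minimal sketch of Iterative Normalization as defined by Equations (5) and (6): every vector is rescaled to unit length, the vocabulary mean is then subtracted, and the two projections are alternated for a fixed number of rounds (the experiments later use five). The code assumes non-zero vectors, as the paper does, and keeps the words-as-columns convention.

```python
import numpy as np

def iterative_normalization(X, rounds=5):
    """Alternate Eq. (5) (unit length) and Eq. (6) (zero mean) so that both
    constraints are approximately satisfied at once. X has shape (d, n),
    one column per (non-zero) word vector."""
    X = X.astype(np.float64, copy=True)
    for _ in range(rounds):
        X /= np.linalg.norm(X, axis=0, keepdims=True)   # Eq. (5): project to unit length
        X -= X.mean(axis=1, keepdims=True)              # Eq. (6): project to zero mean
    return X

rng = np.random.default_rng(0)
X = iterative_normalization(rng.normal(size=(300, 10000)) + 0.5)
print(np.abs(X.mean(axis=1)).max())       # ~0: zero mean holds exactly after the last step
print(np.linalg.norm(X, axis=0).std())    # small: lengths become nearly equal after a few rounds
```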
Alternating projection guarantees convergence in the intersection of two convex sets at a linear rate (Gubin et al., 1967; Bauschke and Borwein, 1993). Unfortunately, the unit-length constraint is non-convex, ruling out the classic convergence proof. Nonetheless, we use recent results on alternating non-convex projections (Zhu and Li, 2018) to prove Iterative Normalization’s convergence (details in Appendix A). 2For such vectors, a small perturbation is an easy fix. 3183 Method Normalization JA ZH HI TR DA DE ES Procrustes None 1.7 32.5 33.3 44.9 54.0 73.5 81.4 C+L 12.3 41.1 34.0 46.5 54.9 74.6 81.3 IN 44.3 44.2 36.7 48.7 58.4 75.5 81.5 Procrustes + refine None 1.7 32.5 33.6 46.3 56.8 74.3 81.9 C+L 13.1 42.3 34.9 48.7 59.3 75.2 82.4 IN 44.3 44.2 37.7 51.7 60.9 76.0 82.5 RCSLS None 14.6 17.1 5.0 18.3 19.2 43.6 50.5 C+L 16.1 45.1 36.2 50.7 58.3 77.5 83.6 IN 56.3 48.6 38.0 52.4 60.5 78.1 83.9 Table 1: Word translation accuracy aligning English embeddings to seven languages. We combine three normalizations—no normalization (None), mean centering and length normalization (C+L), and Iterative Normalization (IN) for five rounds—with three CLWEs: Procrustes, Procrustes with refinement (Conneau et al., 2018), and RCSLS (Joulin et al., 2018). Procrustes with C+L is equivalent to Artetxe et al. (2016). The best result for each CLWE in each column in bold. Iterative Normalization has the best accuracy of the three normalization techniques. Theorem 1. If the embeddings are non-zero after each iteration; i.e., x(k) i ̸= 0 for all i and k, then the sequence n X(k)o produced by Iterative Normalization is convergent. All embeddings in our experiments satisfy the non-zero assumption; it is violated only when all words have the same embedding. In degenerate cases, the algorithm might converge to a solution that does not meet the two requirements. Empirically, our method always satisfy both constraints. Previous approach and differences. Artetxe et al. (2016) also study he unit-length and zeromean constraints, but our work differs in two aspects. First, they motivate the zero-mean condition based on the heuristic argument that two randomly selected word types should not be semantically similar (or dissimilar) in expectation. While this statement is attractive at first blush, some word types have more synonyms than others, so we argue that word types might not be evenly distributed in the semantic space. We instead show that zero-mean is helpful because it satisfies center-invariance, a necessary condition for orthogonal mappings. Second, Artetxe et al. (2016) attempt to enforce the two constraints by a single round of dimension-wise mean centering and length normalization. Unfortunately, this often fails to meet the constraints at the same time—length normalization can change the mean, and mean centering can change vector length. In contrast, Iterative Normalization simultaneously meets both constraints and is empirically better (Table 1) on dictionary induction. 5 Dictionary Induction Experiments On a dictionary induction benchmark, we combine Iterative Normalization with three CLWE methods and show improvement in word translation accuracy across languages. 5.1 Dataset and Methods We train and evaluate CLWE on MUSE dictionaries (Conneau et al., 2018) with default split. We align English embeddings to thirty-nine target language embeddings, pre-trained on Wikipedia with fastText (Bojanowski et al., 2017). The alignment matrices are trained from dictionaries of 5,000 source words. 
We report top-1 word translation accuracy for 1,500 source words, using crossdomain similarity local scaling (Conneau et al., 2018, CSLS). We experiment with the following CLWE methods.3 Procrustes Analysis. Our first algorithm uses Procrustes analysis (Schönemann, 1966) to find the orthogonal transformation that minimizes Equation 1, the total distance between translation pairs. Post-hoc Refinement. Orthogonal mappings can be improved with refinement steps (Artetxe et al., 2017; Conneau et al., 2018). After learning an initial mapping W0 from the seed dictionary D, we build a synthetic dictionary D1 by translating each word with W0. We then use the new dictionary D1 to learn a new mapping W1 and repeat the process. 3We only report accuracy for one run, because these CLWE methods are deterministic. 3184 Relaxed CSLS Loss (RCSLS). Joulin et al. (2018) optimize CSLS scores between translation pairs instead of Equation (1). RCSLS has state-ofthe-art supervised word translation accuracies on MUSE (Glavas et al., 2019). For the ease of optimization, RCSLS does not enforce the orthogonal constraint. Nevertheless, Iterative Normalization also improves its accuracy (Table 1), showing it can help linear non-orthogonal mappings too. 5.2 Training Details We use the implementation from MUSE for Procrustes analysis and refinement (Conneau et al., 2018). We use five refinement steps. For RCSLS, we use the same hyperparameter selection strategy as Joulin et al. (2018)—we choose learning rate from {1, 10, 25, 50} and number of epochs from {10, 20} by validation. As recommended by Joulin et al. (2018), we turn off the spectral constraint. We use ten nearest neighbors when computing CSLS. 5.3 Translation Accuracy For each method, we compare three normalization strategies: (1) no normalization, (2) dimensionwise mean centering followed by length normalization (Artetxe et al., 2016), and (3) five rounds of Iterative Normalization. Table 1 shows word translation accuracies on seven selected target languages. Results on other languages are in Appendix B. As our theory predicts, Iterative Normalization increases translation accuracy for Procrustes analysis (with and without refinement) across languages. While centering and length-normalization also helps, the improvement is smaller, confirming that one round of normalization is insufficient. The largest margin is on English-Japanese, where Iterative Normalization increases test accuracy by more than 40%. Figure 1 shows an example of how Iterative Normalization makes the substructure of an English-Japanese translation pair more similar. Surprisingly, normalization is even more important for RCSLS, a CLWE method without orthogonal constraint. RCSLS combined with Iterative Normalization has state-of-the-art accuracy, but RCSLS is much worse than Procrustes analysis on unnormalized embeddings, suggesting that length-invariance and center-invariance are also helpful for learning linear non-orthogonal mappings. Dataset Before After WS-353 73.9 73.7 MC 81.2 83.9 RG 79.7 80.0 YP-130 53.3 57.6 Table 2: Correlations before and after applying Iterative Normalization on four English word similarity benchmarks: WS-353 (Finkelstein et al., 2002), MC (Miller and Charles, 1991), RG (Rubenstein and Goodenough, 1965), and YP-130 (Yang and Powers, 2006). The scores are similar, which shows that Iterative Normalization retains useful structures from the original embeddings. 
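The refinement loop just described can be sketched as follows. This is a simplified stand-in for the procedure, not the MUSE implementation: the synthetic dictionary is induced with plain nearest-neighbor cosine retrieval rather than the CSLS-based pair selection used by MUSE, no filtering of induced pairs is applied, and the Procrustes step is the standard SVD solution.

```python
import numpy as np

def procrustes(X, Z, pairs):
    """Closed-form orthogonal map fitted on (source, target) index pairs."""
    src, tgt = [i for i, _ in pairs], [j for _, j in pairs]
    u, _, vt = np.linalg.svd(Z[:, tgt] @ X[:, src].T)
    return u @ vt

def refine(X, Z, seed_pairs, steps=5):
    """Post-hoc refinement: fit W on the current dictionary, translate every
    source word to build a synthetic dictionary, refit, and repeat."""
    Zn = Z / np.linalg.norm(Z, axis=0, keepdims=True)
    W = procrustes(X, Z, list(seed_pairs))
    for _ in range(steps):
        WX = W @ X
        WXn = WX / np.linalg.norm(WX, axis=0, keepdims=True)
        nearest = (Zn.T @ WXn).argmax(axis=0)          # best target word per source word
        pairs = [(i, int(nearest[i])) for i in range(X.shape[1])]
        W = procrustes(X, Z, pairs)                    # refit on the induced dictionary
    return W
```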
5.4 Monolingual Word Similarity Many trivial solutions satisfy both lengthinvariance and center-invariance; e.g., we can map half of words to e and the rest to −e, where e is any unit-length vector. A meaningful transformation should also preserve useful structure in the original embeddings. We confirm Iterative Normalization does not hurt scores on English word similarity benchmarks (Table 2), showing that Iterative Normalization produces meaningful representations. 6 Conclusion We identify two conditions that make cross-lingual orthogonal mapping easier: length-invariance and center-invariance, and provide a simple algorithm that transforms monolingual embeddings to satisfy both conditions. Our method improves word translation accuracy of different mapping-based CLWE algorithms across languages. In the future, we will investigate whether our method helps other downstream tasks. Acknowledgments We thank the anonymous reviewers for comments. Boyd-Graber and Zhang are supported by DARPA award HR0011-15-C-0113 under subcontract to Raytheon BBN Technologies. Jegelka and Xu are supported by NSF CAREER award 1553284. Xu is also supported by a Chevron-MIT Energy Fellowship. Kawarabayashi is supported by JST ERATO JPMJER1201 and JSPS Kakenhi JP18H05291. Any opinions, findings, conclusions, or recommendations expressed here are those of the authors and do not necessarily reflect the view of the sponsors. 3185 References David Alvarez-Melis and Tommi S. Jaakkola. 2018. Gromov-wasserstein alignment of word embedding spaces. In Proceedings of Empirical Methods in Natural Language Processing. David Alvarez-Melis, Stefanie Jegelka, and Tommi S Jaakkola. 2019. Towards optimal transport with global invariances. In Proceedings of Artificial Intelligence and Statistics. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2016. Learning principled bilingual mappings of word embeddings while preserving monolingual invariance. In Proceedings of Empirical Methods in Natural Language Processing. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2017. Learning bilingual word embeddings with (almost) no bilingual data. In Proceedings of the Association for Computational Linguistics. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018. A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings. In Proceedings of the Association for Computational Linguistics. Heinz H. Bauschke and Jonathan M. Borwein. 1993. On the convergence of von Neumann’s alternating projection algorithm for two sets. Set-Valued Analysis, 1(2):185–212. Heinz H. Bauschke and Jonathan M. Borwein. 1996. On projection algorithms for solving convex feasibility problems. SIAM review, 38(3):367–426. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146. Felix E. Browder. 1967. Convergence of approximants to fixed points of nonexpansive nonlinear mappings in Banach spaces. Archive for Rational Mechanics and Analysis, 24(1):82–90. Alexis Conneau, Guillaume Lample, Marc’Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. 2018. Word translation without parallel data. In Proceedings of the International Conference on Learning Representations. Yerai Doval, Jose Camacho-Collados, Luis EspinosaAnke, and Steven Schockaert. 2018. Improving cross-lingual word embeddings by meeting in the middle. In Proceedings of Empirical Methods in Natural Language Processing. 
Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. 2002. Placing search in context: The concept revisited. ACM Transactions on information systems, 20(1):116–131. Yoshinari Fujinuma, Jordan Boyd-Graber, and Michael J. Paul. 2019. A resource-free evaluation metric for cross-lingual word embeddings based on graph modularity. In Proceedings of the Association for Computational Linguistics. Goran Glavas, Robert Litschko, Sebastian Ruder, and Ivan Vulic. 2019. How to (properly) evaluate crosslingual word embeddings: On strong baselines, comparative analyses, and some misconceptions. In Proceedings of the Association for Computational Linguistics. L.G. Gubin, B.T. Polyak, and E.V. Raik. 1967. The method of projections for finding the common point of convex sets. USSR Computational Mathematics and Mathematical Physics, 7(6):1–24. Jiang Guo, Wanxiang Che, David Yarowsky, Haifeng Wang, and Ting Liu. 2015. Cross-lingual dependency parsing based on distributed representations. In Proceedings of the Association for Computational Linguistics. Yedid Hoshen and Lior Wolf. 2018. Non-adversarial unsupervised word translation. In Proceedings of Empirical Methods in Natural Language Processing. Pratik Jawanpuria, Arjun Balgovind, Anoop Kunchukuttan, and Bamdev Mishra. 2019. Learning multilingual word embeddings in latent metric space: a geometric approach. Transactions of the Association for Computational Linguistics, 7:107–120. Armand Joulin, Piotr Bojanowski, Tomas Mikolov, Hervé Jégou, and Edouard Grave. 2018. Loss in translation: Learning bilingual word mapping with a retrieval criterion. In Proceedings of Empirical Methods in Natural Language Processing. Alexandre Klementiev, Ivan Titov, and Binod Bhattarai. 2012. Inducing crosslingual distributed representations of words. Proceedings of International Conference on Computational Linguistics. Tomas Mikolov, Quoc V. Le, and Ilya Sutskever. 2013a. Exploiting similarities among languages for machine translation. arXiv preprint arXiv:1309.4168. Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Proceedings of Advances in Neural Information Processing Systems. George A. Miller and Walter G. Charles. 1991. Contextual correlates of semantic similarity. Language and Cognitive Processes, 6(1):1–28. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of Empirical Methods in Natural Language Processing. 3186 Herbert Rubenstein and John B Goodenough. 1965. Contextual correlates of synonymy. Communications of the ACM, 8(10):627–633. Sebastian Ruder, Ryan Cotterell, Yova Kementchedjhieva, and Anders Søgaard. 2018. A discriminative latent-variable model for bilingual lexicon induction. In Proceedings of Empirical Methods in Natural Language Processing. Sebastian Ruder, Ivan Vuli´c, and Anders Søgaard. 2017. A survey of cross-lingual embedding models. arXiv preprint arXiv:1706.04902. Peter H. Schönemann. 1966. A generalized solution of the orthogonal procrustes problem. Psychometrika, 31(1):1–10. Samuel L. Smith, David H. P. Turban, Steven Hamblin, and Nils Y. Hammerla. 2017. Offline bilingual word vectors, orthogonal transformations and the inverted softmax. In Proceedings of the International Conference on Learning Representations. Anders Søgaard, Sebastian Ruder, and Ivan Vuli´c. 2018. 
On the limitations of unsupervised bilingual dictionary induction. In Proceedings of the Association for Computational Linguistics. Chao Xing, Dong Wang, Chao Liu, and Yiye Lin. 2015. Normalized word embedding and orthogonal transform for bilingual word translation. In Conference of the North American Chapter of the Association for Computational Linguistics. Dongqiang Yang and David M. Powers. 2006. Verb similarity on the taxonomy of wordnet. In International WordNet Conference. Yuan Zhang, David Gaddy, Regina Barzilay, and Tommi Jaakkola. 2016. Ten pairs to tag – multilingual POS tagging via coarse mapping between embeddings. In Conference of the North American Chapter of the Association for Computational Linguistics. Zhihui Zhu and Xiao Li. 2018. Convergence analysis of alternating nonconvex projections. arXiv preprint arXiv:1802.03889. 3187 A Proof for Theorem 1 Our convergence analysis is based on a recent result on alternating non-convex projections. Theorem 1 in the work of Zhu and Li (2018) states that the convergence of alternating projection holds even if the constraint sets are non-convex, as long as the two constraint sets satisfy the following assumption: Assumption 1. Let X and Y be any two closed semi-algebraic sets, and let {(xk, yk)} be the sequence of iterates generated by the alternating projection method (e.g., Iterative Normalization). Assume the sequence {(xk, yk)} is bounded and the sets X and Y obey the following properties: (i) three-point property of Y: there exists a nonnegative function δα : Y×Y →R with α > 0 such that for any k ≥1, we have δα yk, yk−1  ≥α∥yk −yk−1∥2 and δα yk−1, yk  +∥xk−yk∥2 2 ≤∥xk−yk−1∥2 2, (ii) local contraction property of X: there exist ϵ > 0 and β > 0 such that when ∥yk − yk−1∥2 ≤ϵ, we have ∥xk+1 −xk∥2 = ∥PX(yk) −PX(yk−1)∥2 ≤β∥yk −yk−1∥2 where PX is the projection onto X. Zhu and Li (2018) only consider sets of vectors, but our constraint are sets of matrices. For ease of exposition, we treat every embedding matrix X ∈Rd×n as a vector by concatenating the column vectors: X = [x1, x2, · · · , xn]. The l2-norm of the concatenated vector ∥X∥2 is equivalent to the Frobenius norm of the original matrix ∥X∥F . The two operations in Iterative Normalization, Equation (5) and (6), are projections onto two constraint sets, unit-length set Y =  X ∈Rd×n : ∀i, ∥xi∥2 = 1 and zero-mean set X =  X ∈Rd×n : Pn i=1 xi = 0 . To prove convergence of Iterative Normalization, we show that Y satisfies the three-point property, and X satisfies the local contraction property. Three-point property of Y. For any Y′ ∈Y and X ∈Rn×d, let Y be the projection of X onto the constraint set Y with Equation (5). The columns of Y and Y′ have the same length, so we have ∥X −Y′∥2 2 −∥X −Y∥2 2 = n X i=1 ∥xi −y′ i∥2 −∥xi −yi∥2 2 = n X i=1 2x⊤ i yi −2x⊤ i y′ i. (7) Since Y is the projection of X onto the unit-length set with Equation (5); i.e., yi = xi/∥xi∥2, we can rewrite Equation (7). ∥X −Y′∥2 2 −∥X −Y∥2 2 = n X i=1 ∥xi∥2(2y⊤ i yi −2y⊤ i y′ i). (8) All columns of Y and Y′ are unit-length. Therefore, we can further rewrite Equation (8). ∥X −Y′∥2 2 −∥X −Y∥2 2 = n X i=1 ∥xi∥2(2 −2y⊤ i y′ i) = n X i=1 ∥xi∥2∥yi −y′ i∥2 2. Let l = mini {∥xi∥2} be the minimum length of the columns in X. We have the following inequality: ∥X −Y′∥2 2 −∥X −Y∥2 2 ≥ n X i=1 l∥yi −y′ i∥2 2 = l||Y −Y′∥2 2. From our non-zero assumption, the minimum column length l is always positive. Let lk be the minimum column length of the embedding matrix X(k) after the k-th iteration. 
It follows that Y satisfies the three-point property with α = mink {lk} and δα(Y, Y′) = α∥Y −Y′∥2 2. Local contraction property of X. The zeromean constraint set X is convex and closed: if two matrices X and Y both have zero-mean, their linear interpolation λX + (1 −λ)Y must also have zeromean for any 0 < λ < 1. Projections onto convex 3188 sets in a Hilbert space are contractive (Browder, 1967), and therefore X satisfies the local contraction property with any positive ϵ and β = 1. In summary, the two constraint sets that Iterative Normalization projects onto satisfy Assumption 1. Therefore, Iterative Normalization converges following the analysis of Zhu and Li (2018). B Results on All Languages Table 3 shows word translation accuracies on all target languages. Iterative Normalization improves accuracy on all languages. 3189 Procrustes Procrustes + refine RCSLS Target None C+L IN None C+L IN None C+L IN AF 26.3 28.3 29.7 27.7 28.7 30.4 9.3 28.6 29.3 AR 36.5 37.1 37.9 36.5 37.1 37.9 18.4 40.5 41.5 BS 22.3 23.5 24.4 23.3 23.9 26.6 5.4 25.5 26.6 CA 65.9 67.6 68.9 66.5 67.6 68.9 43.0 68.9 69.5 CS 54.0 54.7 55.3 54.0 54.7 55.7 29.9 57.8 58.2 DA 54.0 54.9 58.4 56.8 59.3 60.9 19.2 58.3 60.5 DE 73.5 74.6 75.5 74.3 75.2 76.0 43.6 77.5 78.1 EL 44.0 44.9 47.5 44.6 45.9 47.9 14.0 47.1 48.5 ES 81.4 81.3 81.5 81.9 82.1 82.5 50.5 83.6 83.9 ET 31.9 34.5 36.1 31.9 35.3 36.4 8.1 37.3 39.4 FA 33.1 33.7 37.3 33.1 34.1 37.3 5.9 37.5 38.3 FI 47.6 48.5 50.9 47.6 50.1 51.1 20.9 52.3 53.3 FR 81.1 81.3 81.7 82.1 82.7 82.4 53.1 83.9 83.9 HE 40.2 43.1 43.7 40.2 43.1 43.7 13.1 49.7 50.1 HI 33.3 34.0 36.7 33.6 34.9 37.7 5.0 36.2 38.0 HR 37.0 37.8 40.2 37.6 37.8 40.2 14.5 41.1 42.6 HU 51.8 54.1 55.5 53.3 54.1 56.1 11.7 57.3 58.2 ID 65.6 65.7 67.9 67.7 68.4 70.3 24.8 68.9 70.0 IT 76.2 76.6 76.6 77.5 78.1 78.1 48.4 78.8 79.1 JA 1.7 13.1 44.3 1.7 13.1 44.3 14.6 16.1 56.3 KO 31.5 32.1 33.9 31.5 32.1 33.9 6.4 37.5 37.5 LT 22.5 22.8 23.2 22.5 22.8 23.3 7.6 23.3 23.5 LV 23.6 24.9 26.1 23.6 24.9 26.1 10.1 28.3 28.7 MS 44.0 45.4 48.9 46.5 48.3 51.1 19.9 49.1 50.2 NL 72.8 73.7 74.1 73.8 75.1 75.8 46.7 75.6 75.8 PL 58.2 60.2 60.1 58.5 60.2 60.4 39.4 62.4 62.5 PT 79.5 79.7 79.9 79.9 81.0 81.2 63.1 81.1 81.7 RO 58.1 60.5 61.8 59.9 60.5 62.5 27.1 61.9 63.3 RU 51.7 52.1 52.1 51.7 52.1 52.1 26.6 57.1 57.9 SK 38.0 39.3 40.4 38.0 39.3 41.7 13.3 41.5 42.3 SL 32.5 34.3 36.7 32.5 34.4 36.7 12.3 36.0 37.9 SQ 23.5 25.1 27.3 23.5 25.1 27.3 4.4 26.5 27.3 SV 58.7 59.6 60.7 60.9 61.2 62.6 35.6 63.8 63.9 TA 15.1 15.5 16.8 15.1 15.5 17.7 6.7 16.3 17.1 TH 22.5 23.3 22.9 22.5 23.3 22.9 9.4 23.7 23.9 TR 44.9 46.5 48.7 46.3 48.7 51.7 18.3 50.7 52.4 UK 34.8 35.9 36.3 35.5 35.9 36.5 18.8 40.7 40.8 VI 41.3 42.1 43.7 42.1 42.7 44.2 14.2 43.3 43.9 ZH 32.5 42.3 44.2 32.5 42.3 44.2 17.1 45.1 48.6 Average 44.7 46.3 48.4 45.3 47.0 49.1 21.8 49.0 50.9 Table 3: Word translation accuracy aligning English embeddings to thirty-nine languages. We combine three normalizations—no normalization (None), mean centering and length normalization (C+L), and Iterative Normalization (IN) for five rounds—with three CLWEs: Procrustes, Procrustes with refinement (Conneau et al., 2018), and RCSLS (Joulin et al., 2018). Procrustes with C+L is equivalent to Artetxe et al. (2016). The best result for each CLWE in each column in bold. Iterative Normalization has the best accuracy of the three normalization techniques.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3190–3196 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3190 MAAM: A Morphology-Aware Alignment Model for Unsupervised Bilingual Lexicon Induction Pengcheng Yang1,2, Fuli Luo2, Peng Chen2, Tianyu Liu2, Xu Sun1,2 1Deep Learning Lab, Beijing Institute of Big Data Research, Peking University 2MOE Key Lab of Computational Linguistics, School of EECS, Peking University {yang pc, luofuli, chen.peng, tianyu0421, xusun}@pku.edu.cn Abstract The task of unsupervised bilingual lexicon induction (UBLI) aims to induce word translations from monolingual corpora in two languages. Previous work has shown that morphological variation is an intractable challenge for the UBLI task, where the induced translation in failure case is usually morphologically related to the correct translation. To tackle this challenge, we propose a morphology-aware alignment model for the UBLI task. The proposed model aims to alleviate the adverse effect of morphological variation by introducing grammatical information learned by the pre-trained denoising language model. Results show that our approach can substantially outperform several state-of-the-art unsupervised systems, and even achieves competitive performance compared to supervised methods. 1 Introduction The task of unsupervised bilingual lexicon induction aims at identifying translational equivalents across two languages (Kementchedjhieva et al., 2018). It can be applied in plenty of real-scenarios, such as machine translation (Artetxe et al., 2018b), transfer learning (Zhou et al., 2016), and so on. Based on the observation that embedding spaces of different languages exhibit similar structures, a prominent approach is to align monolingual embedding spaces of two languages with a simple linear mapping (Zhang et al., 2017a; Lample et al., 2018). However, previous work (Artetxe et al., 2018a; Søgaard et al., 2018) has shown that morphological variation is an intractable challenge for the UBLI task. The induced translations in failure cases are usually morphologically related words. Due to similar semantics, these words can easily confuse the system to make the incorrect alignment. Table 1 presents three randomly selected failure examples of MUSE (Lample et al., 2018) Source word Top-3 of retrieved nearest neighbors mangez eats eat buttered suspendit suspending suspend suspended diffusant broadcasts broadcast broadcasting Table 1: Three randomly selected failure examples of MUSE on FR-EN language pair. Red words are correct translations, which are all not the nearest translations. on the FR-EN language pair, showing that all failures can be attributed to morphological variation. For instance, for the French source word “mangez”, MUSE translates it to morphologically related word “eats”, instead of the correct English translation “eat”. However, we find that additional grammatical information can help alleviate the adverse effect of morphological variation. In detail, since lexicon induction (word alignment) can be regarded as word-to-word translation, the fluency of the translated sentence can reflect the quality of word alignment. If the model can retrieve the correct translation for each word in a source sentence, the translated sentence is more likely to be fluent and grammatically correct. Considering some problems (e.g. 
word order) of the naive word-toword translation can also lead to poor fluency, we pre-train a denoising auto-encoder (DAE) to clean noise in the original translated sentence. Figure 1 visually shows an example. For the French source word “mangez”, if the model translates it to “eats” instead of the correct English translation “eat”, the denoised translated sentence “you eats meat” is grammatically unreasonable. Therefore, by considering the fluency of the denoised translated sentence, these morphologically related erroneous translations can be reasonably punished. Motivated by this, we propose a morphologyaware alignment model to alleviate the adverse effect of morphological variation by introducing ad3191 Word-to-Word Translation Denoising vous mangez da la viande you eat of the meat you eat meat Word-to-Word Translation Denoising vous mangez da la viande you eats of the meat you eats meat Wrong Translation Correct Translation Figure 1: Alleviate the adverse effect of morphological variation via grammatical information. ditional grammatical information. The proposed model consists of a learnable linear transformation W between two languages and a parameter-fixed denoising evaluator E. W is responsible for performing word-to-word translation on sentences in the source monolingual corpus. E first applies a DAE to clean noise in the original translated sentence, and then evaluates the fluency of the denoised translated sentence via a language model pre-trained on the target monolingual corpus to guide the training of W. Due to the discrete operation of word-to-word translation, we employ REINFORCE algorithm (Williams, 1992) to estimate the corresponding gradient. With the grammatical information contained in E, the adverse effect of morphological differences can be alleviated. Our main contributions are listed as follows: • We propose a morphology-aware alignment model for unsupervised bilingual lexicon induction, which aims to alleviate the adverse effect of morphological variation by introducing grammatical information learned from pre-trained language model. • Extensive experimental results show that our approach achieves better performance than several state-of-the-art unsupervised systems, and even achieves competitive performance compared to supervised methods. 2 Proposed Model We use X = {xi}n1 i=1 and Y = {yi}n2 i=1 to denote the source and target monolingual embeddings, respectively. The task aims to find a linear transformation W so that for any source word embedding x, Wx lies close to the embedding y of its translation. Figure 2 presents the sketch of our proposed morphology-aware alignment model, which consists of a learnable linear transformation W and a parameter-fixed denoising evaluator E. Word-to-Word Translation word-to-word translation Denoising Evaluator denoising auto-encoder language model ݖଵ ݖଶ ݖ௭ ڮ ݏଵ ݏଶ ݏ௠ ڮ –ଵ ݐଶ ݐ୫ ڮ Figure 2: The sketch of the proposed model. 2.1 Word-to-Word Translation The word-to-word translation is accomplished by linear transformation W. Specifically, for each word si in a source sentence s = (s1, · · · , sm), it is translated by retrieving the nearest target word ti based on cosine1 similarity. ti = argmax t cos(Wxsi, yt) (1) where xsi and yt represent the pre-trained monolingual embedding of the source word si and target word t, respectively. 2.2 Denoising Evaluator The denoising evaluator E aims to utilize learned grammar information to guild the training of W. 
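Before turning to the evaluator's internals, the retrieval step of Eq. (1) in Section 2.1 is worth making concrete. The sketch below maps each source word embedding through W and retrieves its cosine-nearest target word; the batched NumPy formulation and the variable names are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def word_to_word_translate(src_ids, W, src_emb, tgt_emb):
    """Translate a source sentence (indices into the source vocabulary) by
    retrieving, for each position, the cosine-nearest target word (Eq. 1)."""
    # Map the source embeddings through W and length-normalize both sides,
    # so that a dot product equals cosine similarity.
    mapped = src_emb[src_ids] @ W.T
    mapped /= np.linalg.norm(mapped, axis=1, keepdims=True) + 1e-8
    tgt = tgt_emb / (np.linalg.norm(tgt_emb, axis=1, keepdims=True) + 1e-8)
    # argmax over cosine similarity at every source position (other retrieval
    # criteria such as CSLS could be substituted, as noted in footnote 1).
    return np.argmax(mapped @ tgt.T, axis=1)
```

The denoising evaluator E then judges the fluency of the resulting word-by-word translation, as described next.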
It contains two crucial components: a denoising auto-encoder (DAE) and a language model. Both components are pre-trained on the target monolingual corpus and remain fixed during training. Denoising Auto-Encoder Considering some ingrained problems (e.g. word order) of the naive word-to-word translation, the original translation t can be regarded as a noisy version of the ground-truth translation. Therefore, we adopt a DAE (Vincent et al., 2008) to clean noise in t = (t1, · · · , tm) so that E can provide a more accurate supervisory signal. Here we implement the DAE as an encoder-decoder framework (Bahdanau et al., 2015). The input is the noisy version N(c) and the output is the cleaned sentence c, where c is a sentence sampled from the target monolingual corpus. Following Kim et al. (2018), we construct N(c) by designing three noises: insertion, deletion, and reordering. Readers can refer to Kim et al. (2018) for more technical explanations. 1For simplicity, we employ the cosine similarity. Readers can also adopt other retrieval methods (e.g. CSLS) to obtain better performance. 3192 Language Model For a source sentence s, if W is of high quality, the denoised translated sentence should keep fluent and grammatically correct. Otherwise, if W retrieves a morphologically related but erroneous word, the denoised translated sentence tends to be grammatically incorrect, leading to poor fluency. Therefore, a language model is used to evaluate the fluency of translation to guide the training of W. We implement the language model as an LSTM (Hochreiter and Schmidhuber, 1997) structure with weight tying. Since this part is not the focus of our work, readers can refer to Press and Wolf (2017) for the details. With the grammatical information learned by the pre-trained language model, erroneous word alignment due to morphological variation is penalized. Therefore, W is encouraged to retrieve correct word translation with appropriate morphology. 2.3 Training and Testing We encourage W to perform correct word alignment so that the denoised translated sentences are fluent and grammatically correct. Therefore, the training objective is to minimize the negative expected reward, which is formulated as follows: L(s) = −Et  R(zt)logp(t|s)  (2) where zt is the output of denoising auto-encoder with t as the input, R(zt) is the reward evaluating the fluency of zt, and p(t|s) is the probability of W outputs t by performing word-to-word translation on s. We introduce them in detail as follows. For the i-th word si in the source sentence s, the probability of W retrieving the target translation ti can be characterized by the cosine similarity of both embedding Wxsi and yti. Formally, p(ti|si) = exp  cos(Wxsi, yti)   t exp  cos(Wxsi, yt)  (3) Therefore, p(t|s) can be defined as the product of the probability corresponding to each position: p(t|s) = m  i=1 p(ti|si) (4) The reward R(zt) aims at evaluating the fluency of the denoised translated sentence zt to guide the training of W, which is defined as follows: R(zt) = exp  1 |zt| |zt| i=1 logq(zi|z<i) (5) where zi is the i-th word in zt = (z1, · · · , z|z|), z<i refers to the sequence (z1, · · · , zi−1), and q(zi|z<i) is the probability that the pre-trained language model outputs the word zi conditioned on z<i. If zt is fluent and grammatically correct, the corresponding reward R(zt) is relatively large. Therefore, the reward R(zt) can be used as feedback to guide the training of W. 
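Before the gradient estimate given below, a short sketch may help fix how these quantities fit together: the reward of Eq. (5) is an exponentiated mean token log-probability under the pre-trained language model, and the quantity being reinforced is the log-probability of the retrieved translation from Eqs. (3) and (4). The PyTorch formulation and the names (lm_log_probs, W, and so on) are assumptions for illustration, not the authors' code; the reward and baseline are treated as constants with respect to W.

```python
import torch
import torch.nn.functional as F

def fluency_reward(lm_log_probs: torch.Tensor) -> float:
    # Eq. (5): exponentiated mean per-token log-probability of the denoised
    # translation z_t under the pre-trained target-side language model.
    return float(torch.exp(lm_log_probs.mean()))

def surrogate_loss(W, src_vecs, tgt_emb, chosen_ids, reward, baseline):
    """-(R - b) * log p(t|s) for one sentence, whose gradient w.r.t. W matches
    the REINFORCE estimate in Eq. (6). src_vecs: (m, d) source embeddings;
    tgt_emb: (V, d) target embeddings; chosen_ids: (m,) retrieved target words."""
    mapped = F.normalize(src_vecs @ W.t(), dim=-1)
    sims = mapped @ F.normalize(tgt_emb, dim=-1).t()      # cosine similarities
    log_p = F.log_softmax(sims, dim=-1)                   # Eq. (3): softmax over cosines
    chosen_ids = torch.as_tensor(chosen_ids)
    positions = torch.arange(sims.size(0))
    log_p_t_given_s = log_p[positions, chosen_ids].sum()  # Eq. (4) in log space
    return -(reward - baseline) * log_p_t_given_s
```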
Since operation of word-to-word translation is discrete, we use REINFORCE algorithm (Williams, 1992) to estimate the gradient of Eq. (2) as follows: ∇WL(s) ≈−  R(zt) −b  ∇Wlog  p(t|s)  (6) where b is the baseline that is responsible for reducing the variance of gradient estimate. 3 Experiments 3.1 Experiment Settings We conduct experiments on the 300-dim fastText embeddings trained on Wikipedia. All words are lower-cased and only the frequent 200K words are used. We utilize approach in Artetxe et al. (2018a) to provide the initial linear transformation and lexicon constructed by Lample et al. (2018) is used for evaluation. Here we report accuracy with nearest neighbor retrieval based on cosine similarity. The parameters of the DAE and language model are provided in the Appendix. We set the batch size to 64 and the optimizer is SGD. The learning rate is initialized to 10−5 and it is halved after every training epoch. The unsupervised criterion proposed in Lample et al. (2018) is adopted as both a stopping criterion and a model selection criterion. 3.2 Experimental Results Table 2 presents the results of different systems, showing that our proposed model achieves the best performance on all test language pairs under unsupervised settings. In addition, our approach is able to achieve completely comparable or even better performance than supervised systems. This illustrates that the quality of word alignment can be improved by introducing grammar information from the pre-trained denoising language model. Our denoising evaluator encourages the model to retrieve the correct translation with appropriate morphological by assessing the fluency of sentences obtained by word-to-word translation. This alleviates the adverse effect of morphological variation. 3193 Methods DE-EN EN-DE ES-EN EN-ES FR-EN EN-FR IT-EN EN-IT Supervised: Mikolov et al. (2013a) 61.93 73.07 74.00 80.73 71.33 82.20 68.93 77.60 Xing et al. (2015) 67.73 69.53 77.20 78.60 76.33 78.67 72.00 73.33 Shigeto et al. (2015) 71.07 63.73 81.07 74.53 79.93 73.13 76.47 68.13 Artetxe et al. (2016) 69.13 72.13 78.27 80.07 77.73 79.20 73.60 74.47 Artetxe et al. (2017) 68.07 69.20 75.60 78.20 74.47 77.67 70.53 71.67 Unsupervised: Zhang et al. (2017a) 40.13 41.27 58.80 60.93 57.60 43.60 44.53 Zhang et al. (2017b) 55.20 70.87 71.40 64.87 65.27 Lample et al. (2018) 69.73 71.33 79.07 78.80 77.87 78.13 74.47 75.33 Xu et al. (2018) 67.00 69.33 77.80 79.53 75.47 77.93 72.60 73.47 Artetxe et al. (2018a) 72.27 73.60 81.60 80.67 80.20 80.40 76.33 77.13 Ours 73.13 74.47 82.13 81.87 81.53 81.27 77.60 78.33 Table 2: The accuracy of different methods in various language pairs. Bold indicates the best supervised and unsupervised results, respectively. “-” means that the model fails to converge and hence the result is omitted. Models EN-ES EN-FR EN-DE EN-IT Full model 81.87 81.27 74.47 78.33 w/o Evaluator 80.67 80.40 73.60 77.13 w/o DAE 81.33 80.93 74.20 77.73 Table 3: Results of ablation study. 3.3 Ablation Study Here we perform an ablation study to understand the importance of different components. Table 3 presents the performance of different ablated versions, showing that our denoising evaluator can bring stable improvements in performance. This illustrates that introducing grammatical information learned by the pre-trained denoising language model is of great help to perform accurate word alignment. 
By imposing the penalty to the retrieved morphologically related but erroneous translations, this additional grammatical information can alleviate the adverse effects of morphological variation. In addition, we can find that the DAE plays an active role in improving results. By cleaning the noise in the original translated sentence, the DAE makes the reward provided by evaluator more accurate, leading to the improvements in model performance. 3.4 The Validity of Cleaning Noise By cleaning the noise in the original word-to-word translation, the denoising auto-encoder (DAE) can benefit the evaluator E to feedback more accurate evaluation signals. Here Table 4 presents several examples output by the DAE on the FR-EN language pair. The results show that there exist some obvious grammatical errors in the naive word-toword translation. For instance, the word “to” is Input: ˆEtre adulte, c’est ˆetre seul. Noisy translation: Be adult, it’s be alone. Cleaned translation: To be an adult is to be alone. Ground truth: To be an adult is to be alone. Input: L’histoire se r´ep`ete. Noisy translation: History itself repeats. Cleaned translation: History repeats itself. Ground truth: History repeats itself. Table 4: Several examples output by the denoising auto-encoder on the FR-EN language pair. missing in the first example and the words in the second example are not organized in a grammatical order. However, our pre-trained DAE is able to correct these errors by inserting or deleting appropriate words or adjusting the word order. This intuitively demonstrates the effectiveness of our DAE in cleaning noise contained in the original translated sentence. 3.5 Case Study Table 5 lists several word translation examples on the FR-EN language pair. The results show that the baselines retrieve morphologically related but erroneous translations, while our approach is able to perform the correct word alignment. Our approach can constrain the retrieved translation to have the correct morphology by introducing grammatical information, leading to improved performance. Figure 3 presents the visualization of joint semantic space of FR-EN language pair using tSNE (Maaten and Hinton, 2008), showing that word pairs that can be translated mutually are represented by almost the same point. This intuitively reveals that our approach can capture the common linguistic regularities of different languages. 3194 Source word MUSE Vecmap Ours suspendit suspending suspend suspended diffusant broadcasts broadcast broadcasting atteint reaching reach reached Table 5: Translations of various systems on the FR-EN language pair. Red words are correct translations. 4 Related Work This paper is mainly related to the following two lines of work. Supervised cross-lingual embedding. Inspired by the isometric observation between monolingual word embeddings of two different languages, Mikolov et al. (2013b) propose to learn cross-lingual word mapping by minimizing mean squared error. Latter, Dinu and Baroni (2015) investigate the hubness problem and Faruqui and Dyer (2014) incorporates the semantics of a word in multiple languages into its embedding. Furthermore, Xing et al. (2015) propose to impose the orthogonal constraint to the linear mapping and Artetxe et al. (2016) present a series of techniques, including length normalization and mean centering, to improve bilingual results. There also exist some other representative researches. For instance, Smith et al. 
(2017) present inversesoftmax which normalizes the softmax probability over source words rather than target words and Artetxe et al. (2017) present a self-learning framework to perform iterative refinement, which is also adopted in some unsupervised settings and plays a crucial role in improving performance. Unsupervised cross-lingual embedding. The endeavors to explore unsupervised cross-lingual embedding are mainly divided into two categories. One line focuses on designing heuristics or utilizing the structural similarity of monolingual embeddings. For instance, Hoshen and Wolf (2018) present a non-adversarial method based on the principal component analysis. Both Aldarmaki et al. (2018) and Artetxe et al. (2018a) take advantage of geometric properties across languages to perform word retrieval to learn the initial word mapping. Cao and Zhao (2018) formulate this problem as point set registration to adopt a point set registration method. However, these methods usually require plenty of random restarts or additional skills to achieve satisfactory performance. Another line strives to learn unsupervised word Figure 3: Visualization of two monolingual embedding spaces (left) and aligned embedding space (right). mapping by direct distribution-matching. For example, Lample et al. (2018) and Zhang et al. (2017a) completely eliminate the need for any supervision signal by aligning the distribution of transferred embedding and target embedding with GAN. Furthermore, Zhang et al. (2017b) and Xu et al. (2018) adopt the Earth Mover’s distance and Sinkhorn distance as the optimized distance metrics, respectively. There are also some attempts on distant language pairs. For instance, Kementchedjhieva et al. (2018) generalize Procrustes analysis by projecting the two languages into a latent space and Nakashole (2018) propose to learn neighborhood sensitive mapping by training non-linear functions. As for the hubness problem, Ruder et al. (2018) propose a latent-variable model learned with Viterbi EM algorithm. Recently, Alaux et al. (2018) work on the problem of aligning more than two languages simultaneously by a formulation ensuring composable mappings. 5 Conclusion In this work, we present a morphology-aware alignment model for unsupervised bilingual lexicon induction. The proposed model is able to alleviate the adverse effect of morphological variation by introducing grammatical information learned from pre-trained denoising language model. The results show that our approach can achieve better performance than several state-of-the-art unsupervised systems, and even achieves competitive performance compared to supervised methods. Acknowledgement We thank the anonymous reviewers for their thoughtful comments. We also would like to thank Shuangzhi Wu and Dongdong Zhang for their insightful suggestions. Xu Sun is the contact author of this paper. 3195 References Jean Alaux, Edouard Grave, Marco Cuturi, and Armand Joulin. 2018. Unsupervised hyperalignment for multilingual word embeddings. arXiv preprint arXiv:1811.01124. Hanan Aldarmaki, Mahesh Mohan, and Mona T. Diab. 2018. Unsupervised word mapping using structural similarities in monolingual embeddings. TACL, 6:185–196. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2016. Learning principled bilingual mappings of word embeddings while preserving monolingual invariance. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2289–2294. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2017. 
Learning bilingual word embeddings with (almost) no bilingual data. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, Volume 1: Long Papers, pages 451–462. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018a. A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, Volume 1: Long Papers, pages 789–798. Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2018b. Unsupervised neural machine translation. In 6th International Conference on Learning Representations. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations. Hailong Cao and Tiejun Zhao. 2018. Point set registration for unsupervised bilingual lexicon induction. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, pages 3991–3997. Georgiana Dinu and Marco Baroni. 2015. Improving zero-shot learning by mitigating the hubness problem. In 3rd International Conference on Learning Representations. Manaal Faruqui and Chris Dyer. 2014. Improving vector space word representations using multilingual correlation. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 462–471. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780. Yedid Hoshen and Lior Wolf. 2018. An iterative closest point method for unsupervised word translation. arXiv preprint arXiv:1801.06126v1. Yova Kementchedjhieva, Sebastian Ruder, Ryan Cotterell, and Anders Søgaard. 2018. Generalizing procrustes analysis for better bilingual dictionary induction. In Proceedings of the 22nd Conference on Computational Natural Language Learning, pages 211–220. Yunsu Kim, Jiahui Geng, and Hermann Ney. 2018. Improving unsupervised word-by-word translation with language model and denoising autoencoder. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 862–868. Guillaume Lample, Alexis Conneau, Marc’Aurelio Ranzato, Ludovic Denoyer, and Herv´e J´egou. 2018. Word translation without parallel data. In 6th International Conference on Learning Representations. Fuli Luo, Peng Li, Jie Zhou, Pengcheng Yang, Baobao Chang, Zhifang Sui, and Xu Sun. 2019. A dual reinforcement learning framework for unsupervised text style transfer. arXiv preprint arXiv:1905.10060. Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. Journal of machine learning research, 9(Nov):2579–2605. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. Tomas Mikolov, Quoc V Le, and Ilya Sutskever. 2013b. Exploiting similarities among languages for machine translation. arXiv preprint arXiv:1309.4168. Ndapa Nakashole. 2018. NORMA: neighborhood sensitive maps for multilingual word embeddings. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 512–522. Ofir Press and Lior Wolf. 2017. Using the output embedding to improve language models. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, Volume 2: Short Papers, pages 157–163. 
Sebastian Ruder, Ryan Cotterell, Yova Kementchedjhieva, and Anders Søgaard. 2018. A discriminative latent-variable model for bilingual lexicon induction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 458–468. Yutaro Shigeto, Ikumi Suzuki, Kazuo Hara, Masashi Shimbo, and Yuji Matsumoto. 2015. Ridge regression, hubness, and zero-shot learning. In Machine Learning and Knowledge Discovery in Databases European Conference, pages 135–151. Samuel L. Smith, David H. P. Turban, Steven Hamblin, and Nils Y. Hammerla. 2017. Offline bilingual word vectors, orthogonal transformations and the inverted softmax. In 5th International Conference on Learning Representations. 3196 Anders Søgaard, Sebastian Ruder, and Ivan Vulic. 2018. On the limitations of unsupervised bilingual dictionary induction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, Volume 1: Long Papers, pages 778–788. Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. 2008. Extracting and composing robust features with denoising autoencoders. In Machine Learning, Proceedings of the Twenty-Fifth International Conference, pages 1096– 1103. Ronald J. Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. Machine Learning, 8:229–256. Chao Xing, Dong Wang, Chao Liu, and Yiye Lin. 2015. Normalized word embedding and orthogonal transform for bilingual word translation. In The 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1006–1011. Ruochen Xu, Yiming Yang, Naoki Otani, and Yuexin Wu. 2018. Unsupervised cross-lingual transfer of word embedding spaces. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2465–2474. Pengcheng Yang, Fuli Luo, Shuangzhi Wu, Jingjing Xu, Dongdong Zhang, and Xu Sun. 2018. Learning unsupervised word mapping by maximizing mean discrepancy. arXiv preprint arXiv:1811.00275. Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. 2017a. Adversarial training for unsupervised bilingual lexicon induction. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, Volume 1: Long Papers, pages 1959–1970. Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. 2017b. Earth mover’s distance minimization for unsupervised bilingual lexicon induction. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1934–1945. Yi Zhang, Jingjing Xu, Pengcheng Yang, and Xu Sun. 2018. Learning sentiment memories for sentiment modification without parallel data. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1103–1108. Xinjie Zhou, Xiaojun Wan, and Jianguo Xiao. 2016. Cross-lingual sentiment classification with bilingual document representation learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, Volume 1: Long Papers.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3197–3203 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3197 Margin-based Parallel Corpus Mining with Multilingual Sentence Embeddings Mikel Artetxe University of the Basque Country (UPV/EHU)∗ [email protected] Holger Schwenk Facebook AI Research [email protected] Abstract Machine translation is highly sensitive to the size and quality of the training data, which has led to an increasing interest in collecting and filtering large parallel corpora. In this paper, we propose a new method for this task based on multilingual sentence embeddings. In contrast to previous approaches, which rely on nearest neighbor retrieval with a hard threshold over cosine similarity, our proposed method accounts for the scale inconsistencies of this measure, considering the margin between a given sentence pair and its closest candidates instead. Our experiments show large improvements over existing methods. We outperform the best published results on the BUCC mining task and the UN reconstruction task by more than 10 F1 and 30 precision points, respectively. Filtering the EnglishGerman ParaCrawl corpus with our approach, we obtain 31.2 BLEU points on newstest2014, an improvement of more than one point over the best official filtered version. 1 Introduction While Neural Machine Translation (NTM) has obtained breakthrough improvements in standard benchmarks, it is known to be particularly sensitive to the size and quality of the training data (Koehn and Knowles, 2017; Khayrallah and Koehn, 2018). In this context, effective approaches to mine and filter parallel corpora are crucial to apply NMT in practical settings. Traditional parallel corpus mining has relied on heavily engineered systems. Early approaches were mostly based on metadata information from web crawls (Resnik, 1999; Shi et al., 2006). More recent methods focus on the textual content instead. For instance, Zipporah learns a classifier ∗This work was performed during an internship at Facebook AI Research. over bag-of-word features to distinguish between ground truth translations and synthetic noisy ones (Xu and Koehn, 2017). STACC uses seed lexical translations induced from IBM alignments, which are combined with set expansion operations to score translation candidates through the Jaccard similarity coefficient (Etchegoyhen and Azpeitia, 2016; Azpeitia et al., 2017, 2018). Many of these approaches rely on cross-lingual document retrieval (Utiyama and Isahara, 2003; Munteanu and Marcu, 2005, 2006; Abdul-Rauf and Schwenk, 2009) or machine translation (Abdul-Rauf and Schwenk, 2009; Bouamor and Sajjad, 2018). More recently, a new research line has shown promising results using multilingual sentence embeddings alone1 (Schwenk, 2018; Guo et al., 2018). These methods use an NMT inspired encoder-decoder to train sentence embeddings on existing parallel data, which are then directly applied to retrieve and filter new parallel sentences using nearest neighbor retrieval over cosine similarity with a hard threshold (Espa˜na-Bonet et al., 2017; Hassan et al., 2018; Schwenk, 2018). In this paper, we argue that this retrieval method suffers from the scale of cosine similarity not being globally consistent. As illustrated by the example in Table 1, some sentences without any correct translation have overall high cosine scores, making them rank higher than other sentences with a correct translation. 
This issue was also pointed out by Guo et al. (2018), who learn an encoder to score known translation pairs above synthetic negative examples and train a separate model to dynamically scale and shift the dot product on held out supervised data. In contrast, our 1Multilingual entence embeddings have also been used as part of a larger system, either to obtain an initial alignment that is then further filtered (Bouamor and Sajjad, 2018) or as an intermediate representation of an end-to-end classifier (Gr´egoire and Langlais, 2017). 3198 (A) Les produits agricoles sont constitu´es de th´e, de riz, de sucre, de tabac, de camphre, de fruits et de soie. 0.818 Main crops include wheat, sugar beets, potatoes, cotton, tobacco, vegetables, and fruit. 0.817 The fertile soil supports wheat, corn, barley, tobacco, sugar beet, and soybeans. 0.814 Main agricultural products include grains, cotton, oil, pigs, poultry, fruits, vegetables, and edible fungus. 0.808 The important crops grown are cotton, jowar, groundnut, rice, sunflower and cereals. (B) Mais dans le contexte actuel, nous pourrons les ignorer sans risque. 0.737 But, in view of the current situation, we can safely ignore these. 0.499 But without the living language, it risks becoming an empty shell. 0.498 While the risk to those working in ceramics is now much reduced, it can still not be ignored. 0.488 But now they have discovered they are not free to speak their minds. Table 1: Motivating example of the proposed method. We show the nearest neighbors of two French sentences on the BUCC training set along with their cosine similarities. Only the nearest neighbor of B is a correct translation, yet that of A has a higher cosine similarity. We argue that this is caused by the cosine similarity of different sentences being in different scales, making it a poor indicator of the confidence of the prediction. Our method tackles this issue by considering the margin between a given candidate and the rest of the k nearest neighbors. proposed method tackles this issue by considering the margin between the cosine of a given sentence pair and that of its respective k nearest neighbors. 2 Multilingual sentence embeddings Figure 1 shows our encoder-decoder architecture to learn multilingual sentence embeddings, which is based on Schwenk (2018). The encoder consists of a bidirectional LSTM, and our sentence embeddings are obtained by applying a max-pooling operation over its output. These embeddings are fed into an LSTM decoder in two ways: 1) they are used to initialize its hidden and cell state after a linear transformation, and 2) they are concatenated to the input embeddings at every time step. We use a shared encoder and decoder for all languages with a joint 40k BPE vocabulary learned on the concatenation of all training corpora.2 The encoder is fully language agnostic, without any explicit signal of the input or output language, whereas the decoder receives an output language ID embedding at every time step. Training minimizes the cross-entropy loss on parallel corpora, alternating over all combinations of the languages involved. We train on 4 GPUs with a total batch size of 48,000 tokens, using Adam with a learning rate of 0.001 and dropout set to 0.1. We use a single layer for both the encoder and the decoder with a hidden size of 512 and 2048, respectively, yielding 1024 dimensional sentence embeddings. The input embeddings size is set to 512, while the lan2Prior to BPE segmentation, we tokenize and lowercase the input text using standard Moses tools. 
As the only exception, we use Jieba (https://github.com/fxsjy/ jieba) for Chinese word segmentation. guage ID embeddings have 32 dimensions. After training, the decoder is discarded, and the encoder is used to map a sentence to a fixed-length vector. 3 Scoring and filtering parallel sentences The multilingual encoder can be used to mine parallel sentences by taking the nearest neighbor of each source sentence in the target side according to cosine similarity, and filtering those below a fixed threshold. While this approach has been reported to be competitive (Schwenk, 2018), we argue that it suffers from the scale of cosine similarity not being globally consistent across different sentences.3 For instance, Table 1 shows an example where an incorrectly aligned sentence pair has a larger cosine similarity than a correctly aligned one, thus making it impossible to filter it through a fixed threshold. In that case, all four nearest neighbors have equally high values. In contrast, for example B, there is a big gap between the nearest neighbor and its other candidates. As such, we argue that the margin between the similarity of a given candidate and that of its k nearest neighbors is a better indicator of the strength of the alignment.4 We next describe our scoring method inspired by this idea in Section 3.1, and discuss our candidate generation and filtering strategy in Section 3.2. 3Note that, even if cosine similarity is normalized in the (-1, 1) range, it is still susceptible to concentrate around different values. 4As a downside, this approach will penalize sentences with many paraphrases in the corpus. While possible, we argue that such cases rarely happen in practice and, even when they do, filtering them is unlikely to cause any major harm. 3199 DECODER … sent Lid BPE LSTM <s> sent Lid BPE LSTM y1 sent Lid BPE LSTM yn y1 softmax y2 softmax </s> softmax … … … BPE emb BiLSTM x1 BPE emb BiLSTM x2 BPE emb BiLSTM </s> … sent emb max pooling W ENCODER Figure 1: Architecture of our system to learn multilingual sentence embeddings. 3.1 Margin-based scoring We consider the margin between the cosine of a given candidate and the average cosine of its k nearest neighbors in both directions as follows: score(x, y) = margin(cos(x, y), X z∈NNk(x) cos(x, z) 2k + X z∈NNk(y) cos(y, z) 2k ) where NNk(x) denotes the k nearest neighbors of x in the other language excluding duplicates,5 and analogously for NNk(y). We explore the following variants of this general definition: • Absolute (margin(a, b) = a): Ignoring the average. This is equivalent to cosine similarity and thus our baseline. • Distance (margin(a, b) = a −b): Subtracting the average cosine similarity from that of the given candidate. This is proportional to the CSLS score (Conneau et al., 2018), which was originally motivated to mitigate the hubness problem on Bilingual Lexicon Induction (BLI) over cross-lingual word embeddings.6 • Ratio (margin(a, b) = a b ): The ratio between the candidate and the average cosine of its nearest neighbors in both directions. 3.2 Candidate generation and filtering When mining parallel sentences, we explore the following strategies to generate candidates: 5Unless otherwise indicated, we use k = 4. 
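As a concrete illustration of the scoring function above, the sketch below computes the score for a single candidate pair given its k nearest neighbors in each direction (k = 4, per footnote 5). The helper names and the NumPy formulation are assumptions for illustration; an actual mining run would batch the nearest-neighbor search and cosine computations over the whole corpus rather than score pairs one at a time.

```python
import numpy as np

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def margin_score(x, y, nn_x, nn_y, k=4, variant="ratio"):
    """Margin-based score of a candidate sentence-embedding pair (x, y).
    nn_x: embeddings of the k nearest target neighbors of x (duplicates excluded);
    nn_y: embeddings of the k nearest source neighbors of y."""
    a = cos(x, y)
    b = (sum(cos(x, z) for z in nn_x) + sum(cos(y, z) for z in nn_y)) / (2 * k)
    if variant == "absolute":   # plain cosine baseline
        return a
    if variant == "distance":   # CSLS-like variant
        return a - b
    return a / b                # "ratio", the best-performing variant
```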
6While our work is motivated by thresholding, which is not used in BLI, this connection points out a related problem that our approach also addresses: even when the source sentence is fixed, the potentially different scales of its target candidates might also affect their relative ranking, which ultimately causes the hubness problem. Thanks to its bidirectional nature, our proposed scoring method penalizes target sentences with overall high cosine similarities, so it can learn better alignments that account for this factor. • Forward: Each source sentence is aligned with exactly one best scoring target sentence.7 Some target sentences may be aligned with multiple source sentences or with none. • Backward: Equivalent to the forward strategy, but going in the opposite direction. • Intersection of forward and backward candidates, which discards sentences with inconsistent alignments. • Max. score: Combination of forward and backward candidates that, instead of discarding all inconsistent alignments, it selects those with the highest score. These candidates are then sorted according to their margin scores, and a threshold is applied. This can be either optimized on the development data, or adjusted to obtain the desired corpus size. 4 Experiments and results We next present our results on the BUCC mining task, UN corpus reconstruction, and machine translation over filtered ParaCrawl. All experiments use an English/French/Spanish/German multilingual encoder trained on Europarl v7 (Koehn, 2005) for 10 epochs. To cover all languages in BUCC, we use a separate English/French/Russian/Chinese model trained on the UN corpus (Ziemski et al., 2016) for 4 epochs. 4.1 BUCC mining task The shared task of the workshop on Building and Using Comparable Corpora (BUCC) is a wellestablished evaluation framework for bitext mining (Zweigenbaum et al., 2017, 2018). The task is 7For efficiency, only the k nearest neighbors over cosine similarity are considered, where the neighborhood size k is the same as that used for the margin-based scoring. 3200 Func. Retrieval EN-DE EN-FR P R F1 P R F1 Abs. (cos) Forward 78.9 75.1 77.0 82.1 74.2 77.9 Backward 79.0 73.1 75.9 77.2 72.2 74.7 Intersection 84.9 80.8 82.8 83.6 78.3 80.9 Max. score 83.1 77.2 80.1 80.9 77.5 79.2 Dist. Forward 94.8 94.1 94.4 91.1 91.8 91.4 Backward 94.8 94.1 94.4 91.5 91.4 91.4 Intersection 94.9 94.1 94.5 91.2 91.8 91.5 Max. score 94.9 94.1 94.5 91.2 91.8 91.5 Ratio Forward 95.2 94.4 94.8 92.4 91.3 91.8 Backward 95.2 94.4 94.8 92.3 91.3 91.8 Intersection 95.3 94.4 94.8 92.4 91.3 91.9 Max. score 95.3 94.4 94.8 92.4 91.3 91.9 Table 2: BUCC results (precision, recall and F1) on the training set, used to optimize the filtering threshold. to mine for parallel sentences between English and four foreign languages: German, French, Russian and Chinese. There are 150K to 1.2M sentences for each language, split into a sample, training and test set. About 2–3% of the sentences are parallel. Table 2 reports precision, recall and F1 scores on the training set.8 Our results show that multilingual sentence embeddings already achieve competitive performance using standard forward retrieval over cosine similarity, which is in line with Schwenk (2018). Both of our bidirectional retrieval strategies achieve substantial improvements over this baseline while still relying on cosine similarity, with intersection giving the best results. 
Moreover, our proposed margin-based scoring brings large improvements when using either the distance or the ratio functions, outperforming cosine similarity by more than 10 points in all cases. The best results are achieved by ratio, which outperforms distance by 0.3-0.5 points. Interestingly, the retrieval strategy has a very small effect in both cases, suggesting that the proposed scoring is more robust than cosine. Table 3 reports the results on the test set for both the Europarl and the UN model in comparison to previous work.9 Our proposed system outperforms all previous methods by a large margin, 8Note that the gold standard information was exclusively used to optimize the filtering threshold for each configuration, making results comparable across different variants. 9We use the ratio margin function with maximum score retrieval for our method. The filtering threshold was optimized to maximize the F1 score on the training set for each language pair and model. The gold-alignments of the test set are not publicly available – these scores on the test set are calculated by the organizers of the BUCC workshop. We have done one single submission. en-de en-fr en-ru en-zh Azpeitia et al. (2017) 83.7 79.5 Azpeitia et al. (2018) 85.5 81.5 81.3 77.5 Bouamor and Sajjad (2018) 76.0 Schwenk (2018) 76.9 75.8 73.8 71.6 Proposed method (Europarl) 95.6 92.9 Proposed method (UN) 92.0 92.6 Table 3: BUCC results (F1) on the test set. We use the ratio function with maximum score retrieval and the filtering threshold optimized on the training set. en-fr en-es Guo et al. (2018) 48.90 54.94 Proposed method 83.27 85.78 Table 4: Results on UN corpus reconstruction (P@1) obtaining improvements of 10-15 F1 points and showing very consistent performance across different languages, including distant ones. 4.2 UN corpus reconstruction So as to compare our method to the similarly motivated system of Guo et al. (2018), we mimic their experiment on aligning the 11.3M sentences of the UN corpus. This task does not require any filtering, so we use forward retrieval with the ratio margin function. As shown in Table 4, our system outperforms that of Guo et al. (2018) by a large margin despite using only a fraction of the training data (2M sentences from Europarl in contrast with over 400M sentences from Google’s internal data). 4.3 Filtering ParaCrawl for NMT Finally, we filter the English-German ParaCrawl corpus and evaluate NMT models trained on them. Our NMT models use fairseq’s implementation of the big transformer model (Vaswani et al., 2017), using the same configuration as Ott et al. (2018) and training for 100 epochs. Following common practice, we use newstest2013 and newstest2014 as our development and test sets, respectively, and report both tokenized and detokenized BLEU scores as computed by multi-bleu.perl and sacreBLEU. We decode with a beam size of 5 using an ensemble of the last 10 epochs. One single model is only slightly worse. Given the large size of ParaCrawl, we first preprocess it to remove all duplicated sentence pairs, 3201 ● ● ● ● ● ● ● ● ● 25.5 26.0 26.5 27.0 27.5 28.0 0 10 20 30 Sentences (millions) BLEU (detok) Figure 2: English-German Dev results (newstest2013) using different thresholds to filter ParaCrawl. #SENT BLEU tok detok BiCleaner v1.2 17.4M 30.05 29.37 Zipporah v1.2 40.5M 24.78 24.38 Proposed method 10.0M 31.19 30.53 Table 5: Results on English-German newstest2014 for different filtered versions of the ParaCrawl corpus. 
sentences for which the fastText language identification model10 predicts a different language, those with less than 3 or more than 80 tokens, or those with either an overlap of at least 50% or a ratio above 2 between the source and target tokens. This reduces the corpus size from 4.59 billion to 64.4 million sentence pairs, mostly due to deduplication. We then score each sentence pair with the ratio function, processing the entire corpus in batches of 5 million sentences, and take the top scoring entries up to the desired size. Figure 2 shows the development BLEU scores of the resulting system for different thresholds, which peaks at 10 million sentences. As shown in Table 5, this model clearly outperforms the two official filtered versions of ParaCrawl in the test set. Finally, Table 6 compares our results to previous works in the literature using different training data. In addition to our ParaCrawl system, we include an additional one combining it with all parallel data from WMT18 except CommonCrawl. As it can be seen, our system outperforms all previous systems but Edunov et al. (2018), who use a large in-domain monolingual corpus through back-translation, making both works complementary. Quite remarkably, our full system outperforms Ott et al. (2018) by nearly 2 points despite using the same configuration and training data, so 10https://fasttext.cc/docs/en/ language-identification.html DATA BLEU tok detok Wu et al. (2016) wmt 26.3 Gehring et al. (2017) wmt 26.4 Vaswani et al. (2017) wmt 28.4 Ahmed et al. (2017) wmt 28.9 Shaw et al. (2018) wmt 29.2 Ott et al. (2018) wmt 29.3 28.6 Ott et al. (2018) wmt+pc 29.8 29.3 Edunov et al. (2018) wmt+nc 35.0 33.8 Proposed method pc 31.2 30.5 wmt+pc 31.8 31.1 Table 6: Results on English-German newstest2014 in comparison to previous work. wmt for WMT parallel data (excluding ParaCrawl), pc for ParaCrawl, and nc for monolingual News Crawl with back-translation. our improvement can be attributed to a better filtering of ParaCrawl.11 5 Conclusions and future work In this paper, we propose a new method for parallel corpus mining based on multilingual sentence embeddings. We use a sequence-to-sequence architecture to train a multilingual sentence encoder on an initial parallel corpus, and a novel marginbased scoring method that overcomes the scale inconsistencies of cosine similarity. Our experiments show large improvements over previous methods. Our system obtains the best published results on the BUCC mining task, outperforming previous systems by more than 10 F1 points for all the four language pairs. In addition, our method obtains up to 85% precision at reconstructing the 11.3M sentence pairs from the UN corpus, improving over the similarly motivated method of Guo et al. (2018) by more than 30 points. Finally, we show that our improvements also carry over to downstream machine translation, as we obtain 31.2 BLEU points for EnglishGerman newstest2014 training on our filtered version of ParaCrawl, an improvement of more than one point over the best performing official release. The code of this work is freely available as part of the LASER toolkit, together with an additional single encoder which covers 93 languages.12 11To confirm this, we trained a separate model on WMT data, obtaining 29.4 tokenized BLEU. This is on par with the results reported by Ott et al. (2018) for the same data (29.3 tokenized BLEU). This shows that the difference cannot be attributed to implementation details. 
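For reference, the pre-filtering heuristics applied to ParaCrawl before margin-based scoring (deduplication, fastText language identification, length bounds, and the overlap and length-ratio checks described in Section 4.3) might look roughly as follows. Only the thresholds (3 to 80 tokens, 50% overlap, length ratio of 2) come from the description above; the model file, the whitespace tokenization, and the exact definitions of overlap and ratio are assumptions for illustration.

```python
import fasttext  # assumes the fastText Python bindings and a downloaded lid.176.bin

lid = fasttext.load_model("lid.176.bin")

def keep_pair(src: str, tgt: str, src_lang: str = "en", tgt_lang: str = "de") -> bool:
    """Return True if a sentence pair survives the heuristic pre-filters."""
    src_tok, tgt_tok = src.split(), tgt.split()
    # Length bounds: discard sentences with fewer than 3 or more than 80 tokens.
    if not (3 <= len(src_tok) <= 80 and 3 <= len(tgt_tok) <= 80):
        return False
    # Language identification must agree with the expected side.
    for text, lang in ((src, src_lang), (tgt, tgt_lang)):
        if lid.predict(text.replace("\n", " "))[0][0] != f"__label__{lang}":
            return False
    # Discard pairs whose sides overlap too much or differ too much in length.
    overlap = len(set(src_tok) & set(tgt_tok)) / max(len(set(src_tok)), 1)
    ratio = max(len(src_tok), len(tgt_tok)) / max(min(len(src_tok), len(tgt_tok)), 1)
    return overlap < 0.5 and ratio <= 2.0

# Exact duplicate pairs would be removed before these checks; the surviving pairs
# are then scored with the margin criterion and thresholded to the desired size.
```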
12https://github.com/facebookresearch/ LASER 3202 References Sadaf Abdul-Rauf and Holger Schwenk. 2009. On the Use of Comparable Corpora to Improve SMT performance. In EACL, pages 16–23. Karim Ahmed, Nitish Shirish Keskar, and Richard Socher. 2017. Weighted Transformer Network for Machine Translation. arXiv:1711.02132. Andoni Azpeitia, Thierry Etchegoyhen, and Eva Mart´ınez Garcia. 2017. Weighted Set-Theoretic Alignment of Comparable Sentences. In BUCC, pages 41–45. Andoni Azpeitia, Thierry Etchegoyhen, and Eva Mart´ınez Garcia. 2018. Extracting Parallel Sentences from Comparable Corpora with STACC Variants. In BUCC. Houda Bouamor and Hassan Sajjad. 2018. H2@BUCC18: Parallel Sentence Extraction from Comparable Corpora Using Multilingual Sentence Embeddings. In BUCC. Alexis Conneau, Guillaume Lample, Marc’Aurelio Ranzato, Ludovic Denoyer, and Herv´e J´egou. 2018. Word Translation Without Parallel Data. In ICLR. Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. Understanding Back-Translation at Scale. In EMNLP, pages 489–500. Cristina Espa˜na-Bonet, ´Ad´am Csaba Varga, Alberto Barr´on-Cede˜no, and Josef van Genabith. 2017. An Empirical Analysis of NMT-Derived Interlingual Embeddings and their Use in Parallel Sentence Identification. IEEE Journal of Selected Topics in Signal Processing, pages 1340–1348. Thierry Etchegoyhen and Andoni Azpeitia. 2016. SetTheoretic Alignment for Comparable Corpora. In ACL, pages 2009–2018. Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. 2017. Convolutional Sequence to Sequence Learning. In ICML, pages 1243–1252. Francis Gr´egoire and Philippe Langlais. 2017. BUCC 2017 Shared Task: a First Attempt Toward a Deep Learning Framework for Identifying Parallel Sentences in Comparable Corpora. In BUCC, pages 46– 50. Mandy Guo, Qinlan Shen, Yinfei Yang, Heming Ge, Daniel Cer, Gustavo Hernandez Abrego, Keith Stevens, Noah Constant, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil. 2018. Effective Parallel Corpus Mining using Bilingual Sentence Embeddings. In WMT, pages 165–176. Hany Hassan, Anthony Aue, Chang Chen, Vishal Chowdhary, Jonathan Clark, Christian Federmann, Xuedong Huang, Marcin Junczys-Dowmunt, William Lewis, Mu Li, Shujie Liu, Tie-Yan Liu, Renqian Luo, Arul Menezes, Tao Qin, Frank Seide, Xu Tan, Fei Tian, Lijun Wu, Shuangzhi Wu, Yingce Xia, Dongdong Zhang, Zhirui Zhang, and Ming Zhou. 2018. Achieving Human Parity on Automatic Chinese to English News Translation. arXiv:1803.05567. Huda Khayrallah and Philipp Koehn. 2018. On the Impact of Various Types of Noise on Neural Machine Translation. In WNMT, pages 74–83. Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In MT summit. Philipp Koehn and Rebecca Knowles. 2017. Six Challenges for Neural Machine Translation. In WNMT, pages 28–39. Dragos Stefan Munteanu and Daniel Marcu. 2005. Improving Machine Translation Performance by Exploiting Non-Parallel Corpora. Computational Linguistics, 31(4):477–504. Dragos Stefan Munteanu and Daniel Marcu. 2006. Extracting Parallel Sub-Sentential Fragments from Non-Parallel Corpora. In ACL, pages 81–88. Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. 2018. Scaling neural machine translation. In WMT, pages 1–9. Philip Resnik. 1999. Mining the Web for Bilingual Text. In ACL. Holger Schwenk. 2018. Filtering and Mining Parallel Data in a Joint Multilingual Space. In ACL, pages 228–234. Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-Attention with Relative Position Representations. 
In NAACL, pages 464–468. Lei Shi, Cheng Niu, Ming Zhou, and Jianfeng Gao. 2006. A DOM Tree Alignment Model for Mining Parallel Data from the Web. In ACL, pages 489– 496. Masao Utiyama and Hitoshi Isahara. 2003. Reliable Measures for Aligning Japanese-English News Articles and Sentences. In ACL. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS, pages 6000–6010. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google’s 3203 Neural Machine Translation System: Bridging the Gap between Human and Machine Translation. arXiv:1609.08144. Hainan Xu and Philipp Koehn. 2017. Zipporah: a Fast and Scalable Data Cleaning System for Noisy WebCrawled Parallel Corpora. In EMNLP, pages 2945– 2950. Michał Ziemski, Marcin Junczys-Dowmunt, and Bruno Pouliquen. 2016. The United Nations Parallel Corpus v1.0. In LREC. Pierre Zweigenbaum, Serge Sharoff, and Reinhard Rapp. 2017. Overview of the Second BUCC Shared Task: Spotting Parallel Sentences in Comparable Corpora. In BUCC, pages 60–67. Pierre Zweigenbaum, Serge Sharoff, and Reinhard Rapp. 2018. Overview of the Third BUCC Shared Task: Spotting Parallel Sentences in Comparable Corpora. In BUCC.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 323–330 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 323 Cross-Domain Generalization of Neural Constituency Parsers Daniel Fried∗ Nikita Kitaev∗ Dan Klein Computer Science Division University of California, Berkeley {dfried,kitaev,klein}@cs.berkeley.edu Abstract Neural parsers obtain state-of-the-art results on benchmark treebanks for constituency parsing—but to what degree do they generalize to other domains? We present three results about the generalization of neural parsers in a zero-shot setting: training on trees from one corpus and evaluating on out-of-domain corpora. First, neural and non-neural parsers generalize comparably to new domains. Second, incorporating pre-trained encoder representations into neural parsers substantially improves their performance across all domains, but does not give a larger relative improvement for out-of-domain treebanks. Finally, despite the rich input representations they learn, neural parsers still benefit from structured output prediction of output trees, yielding higher exact match accuracy and stronger generalization both to larger text spans and to out-of-domain corpora. We analyze generalization on English and Chinese corpora, and in the process obtain state-of-the-art parsing results for the Brown, Genia, and English Web treebanks. 1 Introduction Neural constituency parsers have obtained increasingly high performance when measured by F1 scores on in-domain benchmarks, such as the Wall Street Journal (WSJ) (Marcus et al., 1993) and Penn Chinese Treebank (CTB) (Xue et al., 2005). However, in order to construct systems useful for cross-domain NLP, we seek parsers that generalize well to domains other than the ones they were trained on. While classical, non-neural parsers are known to perform better in their training domains than on out-of-domain corpora, their out-ofdomain performance degrades in well-understood ways (Gildea, 2001; Petrov and Klein, 2007), and improvements in performance on in-domain ∗Equal contribution. treebanks still transfer to out-of-domain improvements (McClosky et al., 2006). Is the success of neural constituency parsers (Henderson 2004; Vinyals et al. 2015; Dyer et al. 2016; Cross and Huang 2016; Choe and Charniak 2016; Stern et al. 2017; Liu and Zhang 2017; Kitaev and Klein 2018, inter alia) similarly transferable to out-of-domain treebanks? In this work, we focus on zero-shot generalization: training parsers on a single treebank (e.g. WSJ) and evaluating on a range of broad-coverage, out-of-domain treebanks (e.g. Brown (Francis and Kuˇcera, 1979), Genia (Tateisi et al., 2005), the English Web Treebank (Petrov and McDonald, 2012)). We ask three questions about zero-shot generalization properties of state-of-the-art neural constituency parsers: First, do non-neural parsers have better out-ofdomain generalization than neural parsers? We might expect neural systems to generalize poorly because they are highly-parameterized, and may overfit to their training domain. We find that neural and non-neural parsers generalize similarly, and, encouragingly, improvements on indomain treebanks still transfer to out-of-domain. Second, does pre-training particularly improve out-of-domain performance, or does it just generally improve test accuracies? 
Neural parsers incorporate rich representations of language that can easily be pre-trained on large unlabeled corpora (Ling et al., 2015; Peters et al., 2018; Devlin et al., 2019) and improve accuracies in new domains (Joshi et al., 2018). Past work has shown that lexical supervision on an out-of-domain treebank can substantially improve parser performance (Rimell and Clark, 2009). Similarly, we might expect pre-trained language representations to give the largest improvements on out-of-domain treebanks, by providing representations of language disparate from the training domains. Surprisingly, however, we find that pre-trained representations give similar error reductions across domains. 324 Berkeley BLLIP In-Order Chart F1 ∆Err. F1 ∆Err. F1 ∆Err. F1 ∆Err. WSJ Test 90.06 +0.0% 91.48 +0.0% 91.47 +0.0% 93.27 +0.0% Brown All 84.64 +54.5% 85.89 +65.6% 85.60 +68.9% 88.04 +77.7% Genia All 79.11 +110.2% 79.63 +139.1% 80.31 +130.9% 82.68 +157.4% EWT All 77.38 +127.6% 79.91 +135.8% 79.07 +145.4% 82.22 +164.2% Table 1: Performance and relative increase in error (both given by F1) on English corpora as parsers are evaluated out-of-domain, relative to performance on the in-domain WSJ Test set. Improved performance on WSJ Test translates to improved performance out-of-domain. The two parsers with similar absolute performance on WSJ (BLLIP and In-Order) have comparable generalization out-of-domain, despite one being neural and one non-neural. Finally, how much does structured prediction help neural parsers? While neural models with rich modeling of syntactic structure have obtained strong performance on parsing (Dyer et al., 2016; Liu and Zhang, 2017) and a range of related tasks (Kuncoro et al., 2018; Hale et al., 2018), recent neural parsers obtain state-of-the-art F1 on benchmark datasets using rich input encoders without any explicit modeling of correlations in output structure (Shen et al., 2018; Kitaev and Klein, 2018). Does structural modeling still improve parsing performance even with these strong encoder representations? We find that, yes, while structured and unstructured neural models (using the same encoder representations) obtain similar F1 on in-domain datasets, the structured model typically generalizes better to longer spans and out-of-domain treebanks, and has higher exact match accuracies in all domains. 2 Experimental setup We compare the generalization of strong nonneural parsers against recent state-of-the-art neural parsers on English and Chinese corpora. Non-neural models We use publicly released code and models for the Berkeley Parser (Petrov and Klein, 2007) and BLLIP Parser (Charniak, 2000; Charniak and Johnson, 2005) for English; and ZPar (Zhang and Clark, 2011) for Chinese. Neural models We use two state-of-the-art neural models: the Chart model of Kitaev and Klein (2018), and In-Order shift-reduce model of Liu and Zhang (2017). These parsers differ in their modeling both of input sentences and output structures. The Chart model uses a self-attentive encoder over the input sentence, and does not explicitly model output structure correlations, predicting tree span labels independently conditioned on the encoded input.1 The In-Order shift-reduce model of Liu and Zhang (2017) uses a simpler LSTM-based encoding of the input sentence but a decoder that explicitly conditions on previously constructed structure of the output tree, obtaining the best performance among similarly structured models (Dyer et al., 2016; Kuncoro et al., 2017). 
The In-Order model conditions on predicted part-of-speech tags; we use tags predicted by the Stanford tagger (following the setup of Cross and Huang (2016)). At test time, we use Viterbi decoding for the Chart model and beam search with beam size 10 for the In-Order model. To control for randomness in the training procedure of the neural parsers, all scores reported in the remainder of the paper for the Chart and InOrder parsers are averaged across five copies of each model trained from separate random initializations. Corpora The English parsers are trained on the WSJ training section of the Penn Treebank. We perform in-domain evaluation of these parsers on the WSJ test section, and out-of-domain evaluation using the Brown, Genia, and English Web Treebank (EWT) corpora. For analysis and comparisons within parsers, we evaluate on the entirety of each out-of-domain treebank; for final results and comparison to past work we use the same testing splits as the past work. The Chinese parsers are trained on the training section of the Penn Chinese Treebank (CTB) v5.1 (Xue et al., 2005), consisting primarily of newswire. For out-of-domain evaluation on Chinese, we use treebank domains introduced in CTB versions 7 and 8: broadcast conversations (B. Conv), broadcast news (B. News), web discussion forums (Forums) and weblogs (Blogs). 1The only joint constraint on span predictions is to ensure they constitute a valid tree. 325 ZPar In-Order F1 ∆Err. F1 ∆Err. CTB Test 83.01 +0.0% 83.67 +0.0% B. News 77.22 +34.1% 77.83 +35.8% Forums 74.31 +51.2% 75.71 +48.7% Blogs 73.90 +53.6% 74.74 +54.7% B. Conv. 66.70 +96.0% 67.69 +97.9% Table 2: Performance on Chinese corpora and increase in error (relative to the CTB test set) as parsers are evaluated out-of-domain. The non-neural (ZPar) and neural (In-Order) parser generalize similarly. 3 How well do neural parsers generalize? Table 1 compares the generalization performance of the English parsers, both non-neural (Berkeley, BLLIP) and neural (Chart, In-Order). None of these parsers use additional data beyond the WSJ training section of the PTB: we use the version of the BLLIP parser without self-training on unlabeled data, and use the In-Order parser without external pre-trained word embeddings. Across all parsers, higher performance on the WSJ Test set corresponds to higher performance on each outof-domain corpus, showing that the findings of McClosky et al. (2006) extend to recent neural parsers. In particular, the Chart parser has highest performance in all four domains. The ∆Err. column shows the generalization gap for each parser on each corpus: the parser’s relative increase in error (with error defined by 100−F1) from the WSJ Test set (lower values are better). Improved performance on the WSJ Test set corresponds to increased generalization gaps, indicating that to some extent parser improvements on WSJ have come at the expense of out-ofdomain generalization. However, the two parsers with similar absolute performance on WSJ—the BLLIP parser and In-Order parser—have comparable generalization gaps, despite one being neural and one non-neural. Table 2 shows results for ZPar and the In-Order parser on the Chinese treebanks, with ∆Err. computed relative to the in-domain CTB Test set. As with the English parsers and treebanks, increased performance on the in-domain test set corresponds to improvements on the out-of-domain treebanks (although these differences are small enough that this result is less conclusive than for English). 
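Both uses of ∆Err. in Tables 1 through 3 reduce to the same arithmetic on F1 error (100 − F1): the relative change in error from a reference setting to a comparison setting. A small helper (the name is ours) makes the sign convention explicit.

```python
def delta_err(f1_reference: float, f1_comparison: float) -> float:
    """Relative change in F1 error (100 - F1), in percent. Positive values are
    generalization gaps (more error than the reference, e.g. an out-of-domain
    corpus vs. WSJ Test); negative values are error reductions (e.g. adding
    embeddings or BERT relative to the base parser)."""
    err_ref, err_cmp = 100.0 - f1_reference, 100.0 - f1_comparison
    return 100.0 * (err_cmp - err_ref) / err_ref

# delta_err(93.27, 88.04) ->  +77.7  (Chart parser, WSJ Test vs. Brown All; Table 1)
# delta_err(91.47, 92.13) ->   -7.7  (In-Order, base vs. +Embeddings on WSJ Test; Table 3)
```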
In addition, as with English, we observe similar generalization performance of the non-neural and neural parsers across the out-of-domain treebanks. In-Order +Embeddings +BERT F1 F1 ∆Err. F1 ∆Err. WSJ Test 91.47 92.13 -7.7% 95.71 -49.7% Brown All 85.60 86.78 -8.2% 93.53 -55.0% Genia All 80.31 81.64 -6.8% 87.75 -37.8% EWT All 79.07 80.50 -6.8% 89.27 -48.7% CTB Test 83.67 85.69 -12.4% 91.81 -49.9% B. News 77.83 81.64 -17.2% 88.41 -47.7% Forums 75.71 79.44 -15.4% 87.04 -46.6% Blogs 74.74 78.21 -13.7% 84.29 -37.8% B. Conv. 67.69 70.34 -8.2% 75.88 -25.3% Table 3: Performance of the In-Order parser, comparing using no pre-trained representations (first column), word embeddings, and BERT, on English (top) and Chinese (bottom) corpora. ∆Err. shows change in F1 error relative to the base parser (without pretraining). For both pre-training methods, error reduction is not typically greater out-of-domain than in-domain. 4 How much do pretrained representations help out-of-domain? Pre-trained word representations have been shown to increase in-domain parsing accuracies. Additionally, Joshi et al. (2018) showed that these representations (in their case, from ELMo, Peters et al. 2018) allow a parser to transfer well across domains. We analyze whether pre-trained representations provide a greater benefit in-domain or out-of-domain, by comparing relative performance improvements on in-domain and out-ofdomain treebanks when augmenting the neural parsers with pre-trained language representations. We evaluate non-contextual word embeddings produced by structured skip-gram (Ling et al., 2015), as well as the current state-of-the-art contextual representations from BERT (Devlin et al., 2019). 4.1 Word embeddings We use the same pre-trained word embeddings as the original In-Order English and Chinese parsers,2 trained on English and Chinese Gigaword (Parker et al., 2011) respectively. Table 3 compares models without (In-Order column) to models with embeddings (+Embeddings), showing that embeddings give comparable error reductions both in-domain (the WSJ Test and CTB Test rows) and out-of-domain (the other rows). 4.2 BERT For the Chart parser, we compare the base neural model (Sec. 2 and 3) to a model that uses a pre2 https://github.com/LeonCrashCode/InOrderParser 326 Chart +BERT F1 F1 ∆Err. WSJ Test 93.27 95.64 -35.2% Brown All 88.04 93.10 -42.3% Genia All 82.68 87.54 -28.1% EWT All 82.22 88.72 -36.6% Table 4: Performance of the Chart parser on English, comparing using no pretrained representations to using BERT. ∆Err. shows change in F1 error relative to the base parser. BERT does not generally provide a larger error reduction out-of-domain than in-domain. trained BERT encoder (Kitaev et al., 2019), using the publicly-released code3 to train and evaluate both models. For the In-Order parser, we introduce a novel integration of a BERT encoder with the parser’s structured tree decoder. These architectures represent the best-performing types of encoder and decoder, respectively, from past work on constituency parsing, but have not been previously combined. We replace the word embeddings and predicted part-of-speech tags in the InOrder parser’s stack and buffer representations with BERT’s contextual embeddings. See Appendix A.1 for details on the architecture. 
Code and trained models for this system are publicly available.4 Both the Chart and In-Order parsers are trained in the same way: the parameters of the BERT encoder (BERTLARGE, Uncased English or BERTBASE Chinese) are fine-tuned during training on the treebank data, along with the parameters of the parser’s decoder. See Appendix A.2 for details. Results for the In-Order parser are shown in the +BERT section of Table 3, and results for the chart parser are shown in Table 4. BERT is effective across domains, providing between 25% and 55% error reduction over the base neural parsers. However, as for word embeddings, the pre-trained BERT representations do not generally provide a larger error reduction in out-of-domain settings than in in-domain (although a possible confound is that the BERT model is fine-tuned on the relatively small amount of in-domain treebank data, along with the other parser parameters). For English, error reduction from BERT is comparable between WSJ and EWT, largest on Brown, and smallest on Genia, which may indicate a dependence on the similarity between the out-of3 https://github.com/nikitakit/self-attentive-parser 4 https://github.com/dpfried/rnng-bert F1 Exact Match Chart In-Order Chart In-Order +BERT +BERT +BERT +BERT WSJ Test 95.64 95.71 55.11 57.05 Brown All 93.10 93.54 49.23 51.98 EWT All 88.72 89.27 41.83 43.98 Genia All 87.54 87.75 17.46 18.03 CTB Test 92.14 91.81 44.42 44.94 B. News 88.21 88.41 15.91 17.29 Forums 86.72 87.04 20.00 21.95 Blogs 84.28 84.29 17.14 18.85 B. Conv. 76.35 75.88 17.24 18.99 Table 5: F1 and exact match accuracies comparing the Chart (unstructured) and In-Order (structured) parsers with BERT pretraining on English (top) and Chinese (bottom) corpora. domain treebank and the pre-training corpus.5 For Chinese, the relative error reduction from BERT is largest on the in-domain CTB Test corpus. 5 Can structure improve performance? When using BERT encoder representations, the Chart parser (with its unstructured decoder) and In-Order parser (with its conditioning on a representation of previously-constructed structure) obtain roughly comparable F1 (shown in the first two columns of Table 5), with In-Order better on seven out of nine corpora but often by slight margins. However, these aggregate F1 scores decompose along the structure of the tree, and are dominated by the short spans which make up the bulk of any treebank. Structured-conditional prediction may plausibly be most useful for predicting larger portions of the tree, measurable in exact match accuracies and in F1 on longer-length spans (containing more substructure). First, we compare the tree-level exact match accuracies of the two parsers. In the last two columns of Table 5, we see that the In-Order parser consistently achieves higher exact match than the Chart parser across domains (including the indomain WSJ and CTB Test sets), with improvements ranging from 0.5 to 2.8 percentage absolute. In fact, for several corpora (Blogs and B. Conv) the In-Order parser outperforms the Chart parser on exact match despite having the same or lower F1. This suggests that conditioning on structure in the model induces a correlation between spanlevel decisions that becomes most apparent when using a metric defined on the entire structure. 5BERT is pre-trained on books and Wikipedia; Genia consists of biomedical text. 
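For reference, the two metrics compared in Table 5 can both be computed from labelled spans. The sketch below is our simplified illustration, assuming each tree is represented as a set of (label, start, end) tuples; it ignores treebank-specific details (e.g. punctuation handling in evalb) and duplicate spans from unary chains with identical labels.

```python
def corpus_f1_and_exact_match(gold_trees, pred_trees):
    """gold_trees / pred_trees: lists of sets of (label, start, end) spans."""
    tp = gold_total = pred_total = exact = 0
    for gold, pred in zip(gold_trees, pred_trees):
        tp += len(gold & pred)          # correctly predicted labelled spans
        gold_total += len(gold)
        pred_total += len(pred)
        exact += int(gold == pred)      # whole tree reproduced exactly
    precision = tp / pred_total if pred_total else 0.0
    recall = tp / gold_total if gold_total else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return f1, exact / len(gold_trees)
```

Because exact match requires every span decision in a sentence to be correct simultaneously, it rewards globally consistent predictions in a way that span-level F1 does not.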
Figure 1: Labelled bracketing F1 versus minimum span length for the English corpora (panels: (a) WSJ Test, (b) Brown All, (c) EWT All, (d) Genia All; x-axis: minimum span length, y-axis: labelled span F1). F1 scores for the In-Order parser with BERT (orange) and the Chart parser with BERT (cyan) start to diverge for longer spans.

              prior work      Chart +BERT    In-Order +BERT
Brown Test    87.7 (C+’15)    93.16          93.66
Genia Test    79.4 (C+’15)    86.11          86.45
EWT Test      83.5 (L+’12)    89.13          89.62

Table 6: Comparison of F1 scores for neural models with BERT pretraining to past state-of-the-art results on transfer to the out-of-domain treebanks: (C+’15: Choe et al. 2015, L+’12: Le Roux et al. 2012).6 EWT scores are averaged across the 3 SANCL’12 test sets, as reported by Petrov and McDonald (2012).

Second, we compare the performance of the two parsers on longer spans of text. Figure 1 plots F1 by minimum span length for the In-Order and Chart parsers with BERT encoders on the English treebanks. Across datasets, the improvement of the In-Order parser is slight when computing F1 across all spans in the dataset (x = 0), but becomes pronounced when considering longer spans. A similar effect is not observed on the WSJ test set, which may be attributable to it containing too few long spans to show the effect. The curves start to diverge at span lengths of around 30–40 words, longer than the median length of a sentence in the WSJ (23 words).

6 Discussion

Neural parsers generalize surprisingly well, and are able to draw benefits both from pre-trained language representations and structured output prediction. These properties allow single-model parsers to surpass previous state-of-the-art systems on out-of-domain generalization (Table 6).

6Although the F1 scores obtained here are higher than the zero-shot transfer results of Joshi et al. (2018) on the Brown and Genia corpora due to the use of improved encoder (BERT) and decoder (self-attentive Chart and In-Order) models, we note the results are not directly comparable due to the use of different sections of the corpora for evaluation.

We note that these systems from prior work (Choe et al., 2015; Petrov and McDonald, 2012; Le Roux et al., 2012) use additional ensembling or self-training techniques, which have also been shown to be compatible with neural constituency parsers (Dyer et al., 2016; Choe and Charniak, 2016; Fried et al., 2017; Kitaev et al., 2019) and may provide benefits orthogonal to the pre-trained representations and structured models we analyze here. Encouragingly, parser improvements on the WSJ and CTB treebanks still transfer out-of-domain, indicating that improving results on these benchmarks may still continue to yield benefits in broader domains.

Acknowledgements

This research was supported by DARPA through the XAI program, as well as by a Tencent AI Lab fellowship to the first author. This research used the Savio computational cluster provided by the Berkeley Research Computing program at the University of California, Berkeley.
References Mart´ın Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. 2016. Tensorflow: A system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pages 265–283. Eugene Charniak. 2000. A maximum-entropy-inspired parser. In 1st Meeting of the North American Chapter of the Association for Computational Linguistics. Eugene Charniak and Mark Johnson. 2005. Coarseto-fine n-best parsing and maxent discriminative reranking. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05), pages 173–180, Ann Arbor, Michigan. Association for Computational Linguistics. 328 Do Kook Choe and Eugene Charniak. 2016. Parsing as language modeling. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2331–2336, Austin, Texas. Association for Computational Linguistics. Do Kook Choe, David McClosky, and Eugene Charniak. 2015. Syntactic parse fusion. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1360– 1366, Lisbon, Portugal. Association for Computational Linguistics. James Cross and Liang Huang. 2016. Span-based constituency parsing with a structure-label system and provably optimal dynamic oracles. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1–11. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. 2016. Recurrent neural network grammars. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 199–209, San Diego, California. Association for Computational Linguistics. Winthrop Nelson Francis and Henry Kuˇcera. 1979. Manual of information to accompany a standard corpus of present-day edited American English, for use with digital computers. Brown University, Department of Linguistics. Daniel Fried, Mitchell Stern, and Dan Klein. 2017. Improving neural parsing by disentangling model combination and reranking effects. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 161–166, Vancouver, Canada. Association for Computational Linguistics. Daniel Gildea. 2001. Corpus variation and parser performance. In Proceedings of the 2001 Conference on Empirical Methods in Natural Language Processing. John Hale, Chris Dyer, Adhiguna Kuncoro, and Jonathan Brennan. 2018. Finding syntax in human encephalography with beam search. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2727–2736, Melbourne, Australia. Association for Computational Linguistics. James Henderson. 2004. Discriminative training of a neural network statistical parser. In Proceedings of the 42nd Meeting of the Association for Computational Linguistics (ACL’04), Main Volume, pages 95–102, Barcelona, Spain. 
Vidur Joshi, Matthew Peters, and Mark Hopkins. 2018. Extending a parser to distant domains using a few dozen partially annotated examples. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1190–1199, Melbourne, Australia. Association for Computational Linguistics. Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. International Conference on Learning Representations. Nikita Kitaev, Steven Cao, and Dan Klein. 2019. Multilingual constituency parsing with self-attention and pre-training. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics. Nikita Kitaev and Dan Klein. 2018. Constituency parsing with a self-attentive encoder. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2676–2686. Association for Computational Linguistics. Adhiguna Kuncoro, Miguel Ballesteros, Lingpeng Kong, Chris Dyer, Graham Neubig, and Noah A. Smith. 2017. What do recurrent neural network grammars learn about syntax? In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 1249–1258, Valencia, Spain. Association for Computational Linguistics. Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom. 2018. LSTMs can learn syntax-sensitive dependencies well, but modeling structure makes them better. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1426–1436, Melbourne, Australia. Association for Computational Linguistics. Joseph Le Roux, Jennifer Foster, Joachim Wagner, Rasul Kaljahi, and Anton Bryl. 2012. Dcu-paris13 systems for the sancl 2012 shared task. In SANCL Shared Task. Wang Ling, Chris Dyer, Alan W Black, and Isabel Trancoso. 2015. Two/too simple adaptations of word2vec for syntax problems. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1299–1304. Association for Computational Linguistics. Jiangming Liu and Yue Zhang. 2017. In-order transition-based constituent parsing. Transactions of the Association for Computational Linguistics, 5:413–424. 329 Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330. David McClosky, Eugene Charniak, and Mark Johnson. 2006. Reranking and self-training for parser adaptation. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, pages 337–344. Association for Computational Linguistics. Graham Neubig, Chris Dyer, Yoav Goldberg, Austin Matthews, Waleed Ammar, Antonios Anastasopoulos, Miguel Ballesteros, David Chiang, Daniel Clothiaux, Trevor Cohn, Kevin Duh, Manaal Faruqui, Cynthia Gan, Dan Garrette, Yangfeng Ji, Lingpeng Kong, Adhiguna Kuncoro, Gaurav Kumar, Chaitanya Malaviya, Paul Michel, Yusuke Oda, Matthew Richardson, Naomi Saphra, Swabha Swayamdipta, and Pengcheng Yin. 2017. Dynet: The dynamic neural network toolkit. ArXiv. Robert Parker, David Graff, Junbo Kong, Ke Chen, and Kazuaki Maeda. 2011. English gigaword. Linguistic Data Consortium. 
Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237, New Orleans, Louisiana. Association for Computational Linguistics. Slav Petrov and Dan Klein. 2007. Improved inference for unlexicalized parsing. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference, pages 404–411, Rochester, New York. Association for Computational Linguistics. Slav Petrov and Ryan McDonald. 2012. Overview of the 2012 shared task on parsing the web. In Notes of the First Workshop on Syntactic Analysis of NonCanonical Language (SANCL). Laura Rimell and Stephen Clark. 2009. Porting a lexicalized-grammar parser to the biomedical domain. Journal of biomedical informatics, 42(5):852–865. Yikang Shen, Zhouhan Lin, Athul Paul Jacob, Alessandro Sordoni, Aaron Courville, and Yoshua Bengio. 2018. Straight to the tree: Constituency parsing with neural syntactic distance. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1171–1180, Melbourne, Australia. Association for Computational Linguistics. Mitchell Stern, Jacob Andreas, and Dan Klein. 2017. A minimal span-based neural constituency parser. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 818–827. Association for Computational Linguistics. Yuka Tateisi, Akane Yakushiji, Tomoko Ohta, and Jun’ichi Tsujii. 2005. Syntax annotation for the genia corpus. In Companion Volume to the Proceedings of Conference including Posters/Demos and tutorial abstracts. Oriol Vinyals, Łukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. 2015. Grammar as a foreign language. In Advances in neural information processing systems, pages 2773–2781. Naiwen Xue, Fei Xia, Fu-Dong Chiou, and Marta Palmer. 2005. The Penn Chinese Treebank: Phrase structure annotation of a large corpus. Natural language engineering, 11(2):207–238. Yue Zhang and Stephen Clark. 2011. Syntactic processing using the generalized perceptron and beam search. Computational Linguistics, 37(1). 330 A Appendix A.1 Integrating BERT into the In-Order Parser In this section we describe our integration of the BERT encoder into the In-Order parser decoder. We refer to the original In-Order (Liu and Zhang, 2017) and BERT (Devlin et al., 2019) papers for full details about the model architectures, only describing the modifications we make at the interface between the two. Code and pre-trained models for this integrated parser are publicly available.7 BERT divides each word in an input sentence into one or more subword units and produces a contextual representation for each subword unit using a self-attentive architecture (Devlin et al., 2019). Following the implementation of Kitaev et al. (2019) for the Chart parser, we take the contextual representation vector for the last subword unit in each word wi as the word’s representation, ewi, replacing the (non-contextual) word and POS tag vectors used in the original In-Order parser. We use a learned linear projection to scale ewi to a vector xi of size 128 (compare with section 4.1 of Liu and Zhang (2017)). 
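To make this interface concrete, the following sketch (using numpy; the function name, array shapes, and toy values are ours, not from the released code) takes the BERT vector of the last subword unit of each word and applies a learned projection to obtain the 128-dimensional word representation.

```python
import numpy as np

def word_representations(subword_vecs, word_to_subwords, proj):
    """subword_vecs: (num_subwords, d_bert) BERT outputs for one sentence.
    word_to_subwords: list mapping each word to the indices of its subword units.
    proj: (d_bert, 128) learned projection matrix.
    Returns x of shape (num_words, 128)."""
    last_indices = [indices[-1] for indices in word_to_subwords]  # last subword per word
    e = subword_vecs[last_indices]                                # e_{w_i}: (num_words, d_bert)
    return e @ proj                                               # x_i: (num_words, 128)

# Toy example (BERT-large has d_bert = 1024).
subword_vecs = np.random.randn(7, 1024)
word_to_subwords = [[0], [1, 2], [3], [4, 5, 6]]   # 4 words split into 7 subword units
proj = np.random.randn(1024, 128)
print(word_representations(subword_vecs, word_to_subwords, proj).shape)  # (4, 128)
```

In the full model the projection matrix is of course learned jointly with the rest of the parser rather than sampled at random.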
These contextual word representations xi enter into the In-Order parser’s decoder in two positions: the stack (representing the parse tree as constructed so far) and the buffer (representing the remainder of the sentence to be parsed). We retain the stack representation, but omit the LSTM which the original In-Order work uses to summarize the words remaining on the buffer. We instead use the representation xi as the buffer summary for the word i when i is word at the front of the buffer (the next word in the sentence to be processed). In early experiments we found that removing the LSTM summary of the buffer in this manner had no consistent effect on performance, indicating that the BERT contextual vectors already sufficiently aggregate information about the input sentence so that an additional LSTM provides no further benefit. We pass values and gradients between the DyNet (Neubig et al., 2017) implementation of the In-Order parser and the Tensorflow (Abadi et al., 2016) implementation of BERT using the Tensorflow C++ API. 7https://github.com/dpfried/rnng-bert A.2 BERT Optimization Settings We train the In-Order parser with BERT following the optimization procedure used in Kitaev et al. (2019)’s publicly-released implementation of the BERT Chart parser: training with mini-batches of size 32 using the Adam optimizer (Kingma and Ba, 2015); halving the base learning rates for Adam whenever 2 epochs of training pass without improved F1 on the development set, and using a warmup period for the BERT learning rate. For the In-Order parser, we use initial Adam learning rates of 2 × 10−5 for the BERT encoder parameters and 1 × 10−3 for the In-Order decoder parameters, β1 = 0.9, β2 = 0.999, and a BERT learning rate warmup period of 160 updates.
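A minimal sketch of this learning-rate schedule (linear warmup for the BERT encoder rate; both base rates halved after two epochs without improved development F1). The class and its interface are our own illustration under the stated hyperparameters, not part of the released implementation.

```python
class LRSchedule:
    def __init__(self, base_bert_lr=2e-5, base_decoder_lr=1e-3,
                 warmup_steps=160, patience_epochs=2):
        self.base_bert_lr = base_bert_lr
        self.base_decoder_lr = base_decoder_lr
        self.warmup_steps = warmup_steps
        self.patience_epochs = patience_epochs
        self.scale = 1.0                 # halved on development-set plateaus
        self.best_f1 = float("-inf")
        self.epochs_since_best = 0

    def rates(self, step):
        """(BERT lr, decoder lr) for a given update step (1-indexed)."""
        warmup = min(1.0, step / self.warmup_steps)   # warmup applies to BERT only
        return warmup * self.scale * self.base_bert_lr, self.scale * self.base_decoder_lr

    def on_epoch_end(self, dev_f1):
        """Halve both base rates after `patience_epochs` epochs without dev F1 improvement."""
        if dev_f1 > self.best_f1:
            self.best_f1, self.epochs_since_best = dev_f1, 0
        else:
            self.epochs_since_best += 1
            if self.epochs_since_best >= self.patience_epochs:
                self.scale *= 0.5
                self.epochs_since_best = 0
```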
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3204–3210 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3204 JW300: A Wide-Coverage Parallel Corpus for Low-Resource Languages ˇZeljko Agi´c Department of Computer Science IT University of Copenhagen, Denmark [email protected] Ivan Vuli´c PolyAI Ltd. London, United Kingdom [email protected] Abstract Viable cross-lingual transfer critically depends on the availability of parallel texts. Shortage of such resources imposes a development and evaluation bottleneck in multilingual processing. We introduce JW300, a parallel corpus of over 300 languages with around 100 thousand parallel sentences per language pair on average. In this paper, we present the resource and showcase its utility in experiments with crosslingual word embedding induction and multisource part-of-speech projection. 1 Introduction In natural language processing (NLP) the rule of thumb is that if we possess some parallel data for a low-resource target language, then we can yield feasible basic tools such as part-of-speech taggers for that language. Without such distant supervision, this task and many others remain unattainable, leaving the majority of languages in the world without basic language technology. Parallel data features a prominent role in building multilingual word representations (Ruder et al., 2017), annotation projection for parts-of-speech and syntactic dependencies (Das and Petrov, 2011; Tiedemann, 2014) and naturally machine translation. The shortage of parallel data in turn creates a bottleneck in cross-lingual processing: without parallel sentences, we cannot yield usable models, nor can we robustly evaluate them, if even just approximately (cf. Agi´c et al. 2017). This absence has over the recent years materialized the proxy fallacy, whereby intended low-resource methods are tested by proxy, exclusively on resource-rich languages, because of the absence of test data or the lack of effort to produce it for approximate evaluation. We seek to alleviate these issues by a significant new addition to the limited pool of parallel texts for low-resource languages. Figure 1: Our dataset JW300 in comparison to other massive parallel text collections with respect to multilingual breadth and volume of parallel sentences. The y-axis depicts the mean number of parallel sentences per language pair. Contributions. A massive collection of parallel texts for over 300 diverse languages is our main contribution to facilitate multilingual NLP. The dataset is freely available for all non-commercial use.1 We also show how simple techniques over our data yield competitive results in building crosslingual word embeddings and annotation projection for part-of-speech tagger induction. 2 Dataset JW300 spans across 343 languages, and comprises a total of 1,335,376 articles, with a bit over 109 million sentences, and 1.48 billion tokens. Sources and structure. The data is a complete crawl of all the publications from the website jw.org. A vast majority of texts come from the magazines Awake! and Watchtower. While the texts do stem from a religious society, they cover an immense range of topics. The multilingual articles are mainly translations from a source in English. The dataset is organized by language and by article. Articles carry unique identifiers which 1http://zeljkoagic.github.io/jw300/ 3205 span across the languages: all translations of the same article carry the same identifier number. 
This way we denote “parallel articles” as the base of all further processing. Curation. All articles are converted from their HTML source into plain text format, one sentence per line, and tokenized. We also preserve the original formatting. We apply Polyglot (Al-Rfou, 2015) for sentence splitting and tokenization. For languages uncovered by Polyglot, we use its built-in language identifier to select the closest fit. Roughly 40% of all articles were split using a “neighbor language” tokenizer. Such broad strokes are necessary when dealing with massively multilingual datasets with low-resource languages where not even the basic processing is available, cf. Agi´c et al. (2016) who used only whitespace tokenization. For all language pairs, and for all article pairs carrying the same identifier number, we perform sentence alignment using the aligner Yasa (Lamraoui and Langlais, 2013) with default settings. This way we align more than 50 thousand language pairs with over 90 thousand parallel sentences per language pair on average (see Table 1). The basic statistics of JW300 in Table 1 reveal a small number of outliers with up to 2.5 million sentences like English, French, and Italian which are all rich in resources. However, the long tail of lowresource languages typically still offers between 50-100 thousand sentences. Comparison. With its balance between multilingual breadth and monolingual depth, JW300 fills an important gap in cross-lingual resources: it comprises a multitude of low-resource languages while still offering ample sentences for each individual language, and parallel sentences for language pairs. To illustrate, for JW300 the breadth × depth ratio is 1.2x larger than for OPUS (Tiedemann, 2012), 2x larger than for the full Bible, and even 3x that of New Testament (see Figure 1). JW300 still does come with its own caveats. The crucial one is surely bias: For example, could we indiscriminately use JW300 to train complex machine learning systems that further propagate the attitude of jw.org towards gender differences? From another viewpoint, however, should we rather train part-of-speech taggers through multi-source annotation projection from Watchtower articles on one side, or OPUS Ubuntu menu localizations or Bible psalms on the other side? languages covered 343 language datasets 417 aligned pairs of languages 54,376 µ σ articles 3,202.34 ± 5,946.68 sentences 261,573.37 ± 464,343.05 tokens 3,544,039.82 ± 7,472,321.78 alignments 92,111.61 ± 176,563.25 Table 1: Basic statistics for the JW300 corpus: counts of articles, sentences, words, and alignments, as well as an illustration of their distributions. Counts are reported for languages with at least one non-empty alignment to another language. Some languages have multiple datasets, e.g. different scripts, sign language. Moreover, the ideological bias of JW300 is fairly well-defined. In that sense, while bias may invalidate the use of our corpus in some application areas, we argue that a wide-coverage collection of parallel data with known bias may in fact be valuable for research on bias in NLP (Bolukbasi et al., 2016; Caliskan et al., 2017; Dev and Phillips, 2019; Gonen and Goldberg, 2019), especially in multilingual settings (Lauscher and Glavaˇs, 2019).2 JW300 excels in low-resource language coverage. For example, OPUS offers over 100 million English-German parallel sentences, and JW300 only 2.1 million. 
However, in another example, for Afrikaans-Croatian the counts are 300 thousand in OPUS and 990 thousand in JW300, and moreover, the OPUS data for this language pair contains only Linux localizations. Availability. Our dataset is freely available for all non-commercial use. The exact terms of use are provided by the copyright holder; see https:// www.jw.org/en/terms-of-use/. For all practical purposes their custom terms of use are very closely aligned with the more well-known CC2We acknowledge the anonymous area chair who contributed this valuable argument as part of their meta-review. 3206 EN ET HR MR MT EN – 0.280 0.254 0.0 0.001 ET 0.314 – 0.302 0.001 0.0 HR 0.269 0.334 – 0.002 0.0 MR 0.094 0.144 0.112 – 0.001 MT 0.131 0.206 0.164 0.141 – Table 2: BLI results (MRR scores) on a small subset of JW300 language pairs. The scores with the best-performing unsupervised cross-lingual word embedding model (Artetxe et al., 2018) are in gray cells over the main diagonal; the scores with a simple supervised method (Smith et al., 2017) are below the main diagonal. Better performance for each pair in bold. BY-NC-SA license.3 3 Experiments 3.1 Cross-lingual word embedding induction A recent trend in cross-lingual word embedding induction are fully unsupervised projection-based methods that learn on the basis of monolingual data only (Conneau et al., 2018; Alvarez-Melis and Jaakkola, 2018; Chen and Cardie, 2018, inter alia). The main idea is to construct a seed bilingual dictionary in an unsupervised fashion relying on adversarial training (Conneau et al., 2018), monolingual similarity distributions (Artetxe et al., 2018) or PCA projection similarities (Hoshen and Wolf, 2018), and then learn (gradually refined) projections of two monolingual embedding spaces into a shared cross-lingual space (by also iteratively refining the seed dictionary). Such models hold promise to support crosslingual representation learning for resource-poor language pairs. However, besides their problems with training divergence (Søgaard et al., 2018), a recent empirical study (Glavaˇs et al., 2019) has demonstrated that even most robust projectionbased unsupervised models cannot match the performance of projection-based methods which require only 1K-5K seed translation pairs. The largescale JW300 corpus offers such supervision (i.e., seed translation pairs) for a large number of language pairs. In other words, instead of resorting to fully unsupervised models for the language pairs included in JW300, we can use seed bilingual dictionaries from the parallel data to learn the projections. Based on the findings from Glavaˇs et al. (2019), we compare the most effective and the most robust unsupervised method of Artetxe et al. (2018) 3https://creativecommons.org/licenses/ by-nc-sa/4.0/ to a simple supervised method (Smith et al., 2017) in the bilingual lexicon induction task (BLI).4 For the demonstration purposes, we work with all pairs from the following language set: English (EN), Estonian (ET), Croatian (HR), Marathi (MR), and Maltese (MT). Our seed bilingual dictionaries are extracted from the JW300 corpora by taking the most probable target translation for each source word from IBM1-based word translation tables. Following prior work, we use the 5K most frequent translation pairs from training, while the next 2K pairs are used for testing. We use 300-dim monolingual fastText embeddings pretrained on Wikipedia for all languages (Bojanowski et al., 2017),5 but the same trends are observed with other monolingual embeddings. 
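To make the evaluation protocol concrete, the sketch below learns an orthogonal source-to-target mapping from the seed pairs in the style of Smith et al. (2017) (an SVD-based Procrustes solution) and scores test pairs with MRR under plain cosine nearest-neighbour retrieval; this is our simplified illustration and omits their inverted-softmax retrieval, so it is not the exact setup used in the paper.

```python
import numpy as np

def fit_orthogonal_map(X_src, Y_tgt):
    """Orthogonal map W (Procrustes): rows of X_src @ W approximate Y_tgt.
    X_src, Y_tgt: (num_seed_pairs, dim) embeddings of the seed translation pairs."""
    U, _, Vt = np.linalg.svd(X_src.T @ Y_tgt)
    return U @ Vt

def mrr(X_src_test, tgt_vocab_vecs, gold_indices, W):
    """Mean reciprocal rank of the gold target word under cosine retrieval."""
    mapped = X_src_test @ W
    mapped /= np.linalg.norm(mapped, axis=1, keepdims=True)
    tgt = tgt_vocab_vecs / np.linalg.norm(tgt_vocab_vecs, axis=1, keepdims=True)
    sims = mapped @ tgt.T                                   # (num_test, target vocab size)
    gold_sims = sims[np.arange(len(gold_indices)), gold_indices][:, None]
    ranks = (sims > gold_sims).sum(axis=1) + 1              # 1 = gold ranked first
    return float(np.mean(1.0 / ranks))
```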
The results in terms of Mean Reciprocal Rank (MRR) are summarized in Table 2. The BLI results are straightforward to interpret: for all experimental runs a simple supervised model with its supervision extracted from the JW300 corpus outperforms its unsupervised competition, further confirming the findings of Glavaˇs et al. (2019). The unsupervised model is even unable to converge for most language pairs, yielding extremely low MRR scores. The scores on another test set (Conneau et al., 2018) for EN-ET and EN-HR also favour the supervised model: 0.342 vs. 0.313 on EN-ET, and 0.289 vs. 0.261 on EN-HR. In sum, these preliminary experiments indicate the potential of JW300 in guiding cross-lingual representation learning. 3.2 Part-of-speech projection Massively parallel data has proven most useful in inducing basic NLP models such as part-of-speech taggers. The formative work by Yarowsky et al. (2001) has inspired many influential works in projecting sequential labels from multiple source languages (Das and Petrov, 2011; T¨ackstr¨om et al., 2013), as well as projecting more complex annotations such as syntactic and semantic dependencies (Hwa et al., 2005; Pad´o and Lapata, 2009; Agi´c et al., 2016). Here we implement an experiment with projecting parts of speech from multiple sources to multiple targets following the line of work by Agi´c et al. (2015) and subsequently Plank et al. (2018), to showcase our corpus. 4We expect even better performance with recently developed more sophisticated supervised methods such as RCSLS proposed by Joulin et al. (2018), see Glavaˇs et al. (2019). 5https://fasttext.cc/docs/en/ english-vectors.html 3207 Setup. We work with a large collection of multilingual sentences, where each sentence is a graph G = (V, A). Its vertices V are sentence words for all involved languages, while its edges A are alignments between these words. One sentence t is declared as target sentence and indexed as i = 0, while the remaining n sentences are sources: Target words are then vertices vt ∈V0, while the vertices vs ∈Vi, 1 ≤i ≤n are the source words. The word alignments a(vs, vt) ∈A are also word aligner confidences: a(vs, vt) ∈(0, 1). The graph is thus bipartite between the target words V0 and all the source words Vi, i > 0. The source sentences are tagged for parts of speech and thus each source word vs packs a label distribution p(l|vs) of tagger confidences across parts of speech l ∈L. On top of this parallel dataset, we implement the best practices in annotation projection of sequential labels from multiple sources with low-resource target languages in mind: – Word alignments are obtained from an IBM1 model Efmaral ( ¨Ostling and Tiedemann, 2016) as Agi´c et al. (2016) show that simpler alignment models favor low-resource languages. Thus we acquire all a(vs, vt) ∈A. – Source sentences are tagged for parts of speech by a state-of-the-art neural tagger with default settings (Plank et al., 2016). That way all source words attain a tag distribution p(l|vs). – Source tags are projected through the word alignments and accumulated at the target ends: BALLOT(l|vt) = X vs∈Vs p(l|vs)a(vs, vt). The part-of-speech tag for each target word vt is finally decoded through simple weighted majority voting: LABEL(vt) = arg max l BALLOT(l|vt). – The sentences are further filtered so as to remove noisy instances. The model by Plank et al. (2018) is used, whereby for training we select only the top 10 thousand target sentences ranked by mean word alignment coverage ct: ct = 1 n n X i=1 ci,t. 
Mean coverage $c_t$ is defined through individual source-target coverages, for all $i > 0$:

$c_{i,t} = \frac{|\{v_t : \exists v_s,\ v_s \in V_i,\ a(v_s, v_t) \in A\}|}{|V_t|}.$

                   BIBLE   DSDS   JW300 PROJ
Bulgarian (BG)     77.7    83.9   82.7
Croatian (HR)      67.1    78.0   77.7
Czech (CS)         73.3    86.8   82.5
Danish (DA)        79.0    84.5   84.8
English (EN)       73.0    85.7   80.3
French (FR)        76.6    88.7   84.9
German (DE)        80.2    84.1   83.3
Greek (EL)         52.3    81.1   76.1
Hindi (HI)         67.6    63.1   73.4
Hungarian (HU)     72.0    77.3   76.3
Italian (IT)       76.9    92.1   85.2
Norwegian (NO)     76.7    86.2   83.1
Persian (FA)       59.6    43.6   66.6
Polish (PL)        75.1    84.4   83.2
Portuguese (PT)    83.8    89.4   86.9
Spanish (ES)       81.4    91.7   87.0
Swedish (SV)       75.2    83.1   79.7
µ                  73.4    81.4   80.8

Table 3: Accuracy of part-of-speech taggers induced by projection from multiple sources of JW300, in comparison to projections from the Bible by Agić et al. (2015) and the DSDS system by Plank et al. (2018) which learns from multiple sources of weak supervision including annotation projection.

We also remove all sentences under 3 and over 100 tokens. Finally, the target language taggers are trained on these 10 thousand filtered projections and evaluated on held-out test data. We use the same part-of-speech tagger by Plank et al. (2016) for the target languages as we did for the source languages.

Baselines and data. In this experiment we compare three distantly supervised systems:
– the bare-bones BIBLE annotation projection by Agić et al. (2015),
– a state-of-the-art system DSDS by Plank et al. (2018) which combines annotation projection, type supervision with Wiktionary and UniMorph (Kirov et al., 2018), word embeddings, and subword representations, and finally
– JW300 PROJ which is our own multi-source projection with JW300 data as defined above.

The training data is Universal Dependencies version 2.3 (Nivre et al., 2018). The test data amounts to 17 languages at the intersection of the three systems and comes from Plank and Agić (2018). All tags are converted to the tagset of Petrov et al. (2011) for comparability.
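The projection-and-voting step from the Setup above can be written compactly. The sketch below implements BALLOT and LABEL for one target sentence; the data structures and toy values are our own simplification.

```python
from collections import defaultdict

def project_tags(target_len, sources):
    """Weighted majority voting over projected POS tags for one target sentence.

    sources: list of tagged source sentences; each source sentence is a list over
        its tokens of (tag_distribution, alignments), where tag_distribution maps
        a tag l to the tagger confidence p(l | v_s) and alignments maps a target
        position to the aligner confidence a(v_s, v_t).
    Returns one tag per target token (None if no source token aligns to it).
    """
    ballot = [defaultdict(float) for _ in range(target_len)]
    for source in sources:
        for tag_dist, alignments in source:
            for t_pos, align_conf in alignments.items():
                for tag, tag_conf in tag_dist.items():
                    ballot[t_pos][tag] += tag_conf * align_conf   # BALLOT(l | v_t)
    return [max(b, key=b.get) if b else None for b in ballot]     # LABEL(v_t)

# Toy example: one source sentence with two tokens aligned to a two-token target.
src = [({"NOUN": 0.9, "VERB": 0.1}, {0: 0.8}),
       ({"VERB": 0.7, "NOUN": 0.3}, {1: 0.6})]
print(project_tags(2, [src]))  # ['NOUN', 'VERB']
```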
OPUS is a collection that covers large datasets such as Europarl (Koehn, 2005), OpenSubtitles (Lison and Tiedemann, 2016), along with many others. OPUS also contains a smaller snapshot of Tatoeba, whose original collection hosts 337 languages and 22,427 (±106,815) sentences on average.6 Moving from OPUS and Tatoeba towards greater linguistic breadth, there are several publicly available Bible datasets, most notably those by Mayer and Cysouw (2014) and Christodouloupoulos and Steedman (2015). The Bible datasets are typically aligned by verse and not by sentence, because verse identifiers are assigned by humans, with absolute accuracy. However, a verse sometimes comprises several sentences, or alternatively just parts of one sentence, thus in effect replacing one type of alignment noise with another. Our results strongly favor JW300 for part-of-speech projection. 6https://tatoeba.org/eng/stats/ sentences_by_language Prior to our work, Agi´c et al. (2016) have also collected a smaller dataset from jw.org to produce cross-lingual dependency parsers with multisource projection. Their dataset covers 135 languages with a mean of 115,856 sentences per language (±34,898), but with sentence alignments only within a group of 27 languages. Our contribution JW300 strikes a balance between multilingual and intra-language coverage that will greatly facilitate future research in largescale cross-lingual processing. Our work is entirely complementary to related efforts in bringing forth massively multilingual resources. 5 Conclusion We introduced JW300, a large collection of parallel texts that spans over more than 300 languages, and offers 54 thousand pairs of alignments, each with roughly 100 thousand parallel sentences on average. We posit that the dataset would prove immensely useful for a wide variety of research in cross-lingual processing. JW300 is freely available for all noncommercial use as per terms of the data owner. Our two experiments show that even with simple models JW300 offers top performance in crosslingual word embedding induction and multilingual projection for part-of-speech tagging, where we reach or even surpass more advanced models. Acknowledgements The authors acknowledge the NVIDIA Corporation for supporting their work. We are also thankful to the anonymous reviewers and area chairs for their incisive reviews. References ˇZeljko Agi´c, Dirk Hovy, and Anders Søgaard. 2015. If all you have is a bit of the Bible: Learning POS taggers for truly low-resource languages. In Proceedings of ACL, pages 268–272. ˇZeljko Agi´c, Anders Johannsen, Barbara Plank, H´ector Mart´ınez Alonso, Natalie Schluter, and Anders Søgaard. 2016. Multilingual projection for parsing truly low-resource languages. Transactions of the Association for Computational Linguistics, 4:301–312. ˇZeljko Agi´c, Barbara Plank, and Anders Søgaard. 2017. Cross-lingual tagger evaluation without test data. In Proceedings of EACL, pages 248–253. 3209 Rami Al-Rfou. 2015. Polyglot: A Massive Multilingual Natural Language Processing Pipeline. Ph.D. thesis, Stony Brook University. David Alvarez-Melis and Tommi Jaakkola. 2018. Gromov-Wasserstein alignment of word embedding spaces. In Proceedings of EMNLP, pages 1881– 1890. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018. A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings. In Proceedings of ACL, pages 789–798. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. 
Transactions of the Association for Computational Linguistics, 5:135–146. Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, and Adam Kalai. 2016. Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. In Proceedings of NIPS, pages 4356–4364. Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183–186. Xilun Chen and Claire Cardie. 2018. Unsupervised multilingual word embeddings. In Proceedings of EMNLP, pages 261–270. Christos Christodouloupoulos and Mark Steedman. 2015. A massively parallel corpus: The bible in 100 languages. Language Resources and Evaluation, 49(2):375–395. Alexis Conneau, Guillaume Lample, Marc’Aurelio Ranzato, Ludovic Denoyer, and Herv´e J´egou. 2018. Word translation without parallel data. In Proceedings of ICLR. Dipanjan Das and Slav Petrov. 2011. Unsupervised part-of-speech tagging with bilingual graph-based projections. In Proceedings of ACL, pages 600–609. Sunipa Dev and Jeff Phillips. 2019. Attenuating bias in word vectors. In Proceedings of AISTATS. Goran Glavaˇs, Robert Litschko, Sebastian Ruder, and Ivan Vuli´c. 2019. How to (properly) evaluate crosslingual word embeddings: On strong baselines, comparative analyses, and some misconceptions. CoRR, abs/1902.00508. Hila Gonen and Yoav Goldberg. 2019. Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them. In Proceedings of NAACL-HLT, pages 609–614. Yedid Hoshen and Lior Wolf. 2018. Non-adversarial unsupervised word translation. In Proceedings of EMNLP, pages 469–478. Rebecca Hwa, Philip Resnik, Amy Weinberg, Clara Cabezas, and Okan Kolak. 2005. Bootstrapping parsers via syntactic projection across parallel texts. Natural Language Engineering, 11(3):311–325. Armand Joulin, Piotr Bojanowski, Tomas Mikolov, Herv´e J´egou, and Edouard Grave. 2018. Loss in translation: Learning bilingual word mapping with a retrieval criterion. In Proceedings of EMNLP, pages 2979–2984. Christo Kirov, Ryan Cotterell, John Sylak-Glassman, G´eraldine Walther, Ekaterina Vylomova, Patrick Xia, Manaal Faruqui, Sebastian Mielke, Arya D McCarthy, Sandra K¨ubler, et al. 2018. Unimorph 2.0: Universal morphology. arXiv preprint arXiv:1810.11101. Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In Proceedings of MT Summit, volume 5, pages 79–86. Fethi Lamraoui and Philippe Langlais. 2013. Yet another fast, robust and open source sentence aligner: Time to reconsider sentence alignment? In Proceedings of the XIV Machine Translation Summit. Anne Lauscher and Goran Glavaˇs. 2019. Are we consistently biased? multidimensional analysis of biases in distributional word vectors. In Proceedings of *SEM, pages 85–91. Pierre Lison and J¨org Tiedemann. 2016. OpenSubtitles2016: extracting large parallel corpora from movie and tv subtitles. In Proceedings of LREC. Thomas Mayer and Michael Cysouw. 2014. Creating a massively parallel bible corpus. In Proceedings of LREC. Joakim Nivre et al. 2018. Universal Dependencies 2.3. LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics ( ´UFAL), Faculty of Mathematics and Physics, Charles University. Robert ¨Ostling and J¨org Tiedemann. 2016. Efficient word alignment with Markov Chain Monte Carlo. Prague Bulletin of Mathematical Linguistics, 106:125–146. Sebastian Pad´o and Mirella Lapata. 2009. Crosslingual annotation projection for semantic roles. 
Journal of Artificial Intelligence Research, 36:307– 340. Slav Petrov, Dipanjan Das, and Ryan McDonald. 2011. A universal part-of-speech tagset. arXiv preprint arXiv:1104.2086. Barbara Plank and ˇZeljko Agi´c. 2018. Distant supervision from disparate sources for low-resource part-ofspeech tagging. In Proceedings of EMNLP, pages 614–620. 3210 Barbara Plank, Sigrid Klerke, and ˇZeljko Agi´c. 2018. The best of both worlds: Lexical resources to improve low-resource part-of-speech tagging. arXiv preprint arXiv:1811.08757. Barbara Plank, Anders Søgaard, and Yoav Goldberg. 2016. Multilingual part-of-speech tagging with bidirectional long short-term memory models and auxiliary loss. In Proceedings of ACL, pages 412–418. Sebastian Ruder, Ivan Vuli´c, and Anders Søgaard. 2017. A survey of cross-lingual word embedding models. arXiv preprint arXiv:1706.04902. Samuel L. Smith, David H.P. Turban, Steven Hamblin, and Nils Y. Hammerla. 2017. Offline bilingual word vectors, orthogonal transformations and the inverted softmax. In Proceedings of ICLR. Anders Søgaard, Sebastian Ruder, and Ivan Vuli´c. 2018. On the limitations of unsupervised bilingual dictionary induction. In Proceedings of ACL, pages 778–788. Oscar T¨ackstr¨om, Dipanjan Das, Slav Petrov, Ryan McDonald, and Joakim Nivre. 2013. Token and type constraints for cross-lingual part-of-speech tagging. Transactions of the Association for Computational Linguistics, 1:1–12. J¨org Tiedemann. 2012. Parallel data, tools and interfaces in OPUS. In Proceedings of LREC, pages 2214–2218. J¨org Tiedemann. 2014. Rediscovering annotation projection for cross-lingual parser induction. In Proceedings of COLING, pages 1854–1864. David Yarowsky, Grace Ngai, and Richard Wicentowski. 2001. Inducing multilingual text analysis tools via robust projection across aligned corpora. In Proceedings of NAACL-HLT, pages 1–8.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3211–3223 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3211 Cross-Lingual Syntactic Transfer through Unsupervised Adaptation of Invertible Projections Junxian He1, Zhisong Zhang1, Taylor Berg-Kirkpatrick2, and Graham Neubig1 1Language Technologies Institute, Carnegie Mellon University 2Department of Computer Science and Engineering, University of California San Diego {junxianh,zhisongz,gneubig}@cs.cmu.edu, [email protected] Abstract Cross-lingual transfer is an effective way to build syntactic analysis tools in low-resource languages. However, transfer is difficult when transferring to typologically distant languages, especially when neither annotated target data nor parallel corpora are available. In this paper, we focus on methods for cross-lingual transfer to distant languages and propose to learn a generative model with a structured prior that utilizes labeled source data and unlabeled target data jointly. The parameters of source model and target model are softly shared through a regularized log likelihood objective. An invertible projection is employed to learn a new interlingual latent embedding space that compensates for imperfect crosslingual word embedding input. We evaluate our method on two syntactic tasks: part-ofspeech (POS) tagging and dependency parsing. On the Universal Dependency Treebanks, we use English as the only source corpus and transfer to a wide range of target languages. On the 10 languages in this dataset that are distant from English, our method yields an average of 5.2% absolute improvement on POS tagging and 8.3% absolute improvement on dependency parsing over a direct transfer method using state-of-the-art discriminative models.1 1 Introduction Current top performing systems on syntactic analysis tasks such as part-of-speech (POS) tagging and dependency parsing rely heavily on largescale annotated data (Huang et al., 2015; Dozat and Manning, 2017; Ma et al., 2018). However, because creating syntactic treebanks is an expensive and time consuming task, annotated data is scarce for many languages. Prior work has 1Code is available at https://github.com/jxhe/ cross-lingual-struct-flow. 0.4 0.5 0.6 0.7 0.8 language distance to English 30 40 50 60 70 80 90 POS Tagging Accuracy% nl sv da no es fr pt it bg he hr hi tr ko id ja zh fa ar nearby languages distant languages 0.4 0.5 0.6 0.7 0.8 language distance to English 30 40 50 60 70 80 90 Dependency Parsing UAS% nl sv da no es fr pt itbg he hr hi tr ko id ja zh fa ar nearby languages distant languages Figure 1: Left: POS tagging transfer accuracy of the Bidirectional LSTM-CRF model, Right: Dependency parsing transfer UAS of the “SelfAtt-Graph” model (Ahmad et al., 2019). These models are trained on the labeled English corpus and directly evaluated on different target languages. The x-axis represents language distance to English (details in Section 2.1). Both models take pre-trained cross-lingual word embeddings as input. The parsing model also uses gold universal POS tags. 
demonstrated the efficacy of cross-lingual learning methods (Guo et al., 2015; Tiedemann, 2015; Guo et al., 2016; Zhang et al., 2016; Ammar et al., 2016; Ahmad et al., 2019; Schuster et al., 2019), which transfer models between different languages through the use of shared features such as cross-lingual word embeddings (Smith et al., 2017; Conneau et al., 2018) or universal part-ofspeech tags (Petrov et al., 2012). In the case of zero-shot transfer (i.e. with no target-side supervision), a common practice is to train a strong supervised system on the source language and directly apply it to the target language over these shared embedding or POS spaces. This method has demonstrated promising results, particularly for transfer of models to closely related target languages (Ahmad et al., 2019; Schuster et al., 2019). However, this direct transfer approach often produces poor performance when transferring to more distant languages that are less similar to the source. For example, in Figure 1 we show 3212 the results of direct transfer of POS taggers and dependency parsers trained on only English and evaluated on 20 target languages using pretrained cross-lingual word embeddings, where the x-axis shows the linguistic distance from English calculated according to the URIEL linguistic database (Littell et al., 2017) (more details in Section 2). As we can see, these systems suffer from a large performance drop when applied to distant languages. The reasons are two-fold: (1) Cross-lingual word embeddings of distant language pairs are often poorly aligned with current methods that make strong assumptions of orthogonality of embedding spaces (Smith et al., 2017; Conneau et al., 2018). (2) Divergent syntactic characteristics make the model trained on the source language non-ideal, even if the crosslingual word embeddings are of high quality. In this paper we take a drastically different approach from most previous work: instead of directly transferring a discriminative model trained only on labeled data in another language, we use a generative model that can be trained in an supervised fashion on labeled data in another language, but also perform unsupervised training to directly maximize likelihood of the target language. This makes it possible to specifically adapt to the language that we would like to analyze, both with respect to the cross-lingual word embeddings and the syntactic parameters of the model itself. Specifically, our approach builds on two previous works. We follow a training strategy similar to Zhang et al. (2016), who have previously demonstrated that it is possible to do this sort of crosslingual unsupervised adaptation, although limited to the sort of linear projections that we argue are too simple for mapping between embeddings in distant languages. To relax this limitation, we follow He et al. (2018) who, in the context of fully unsupervised learning, propose a method using invertible projections (which is also called flow) to learn more expressive transformation functions while nonetheless maintaining the ability to train in an unsupervised manner to maximize likelihood. We learn this structured flow model (detailed in Section 3.1) on both labeled source data and unlabeled target data through a soft parameter sharing scheme. 
We describe how to apply this method to two syntactic analysis tasks: POS tagging with a hidden Markov model (HMM) prior and dependency parsing with a dependency model Language Category Language Names Distant Chinese (zh, 0.86), Persian (fa, 0.86), Arabic (ar, 0.86), Japanese (ja, 0.71), Indonesian (id, 0.71), Korean (ko, 0.69), Turkish (tr, 0.62), Hindi (hi, 0.61), Croatian (hr, 0.59), Hebrew (he, 0.57) Nearby Bulgarian (bg, 0.50), Italian (it, 0.50), Portuguese (pt, 0.48), French (fr, 0.46), Spanish (es, 0.46), Norwegian (no, 0.45) Danish (da, 0.41), Swedish (sv, 0.40) Dutch (nl, 0.37), German (de, 0.36) Table 1: 20 selected target languages. Numbers in the parenthesis denote the distances to English. with valence (DMV; Klein and Manning (2004)) prior (Section 4.3). We evaluate our method on Universal Dependencies Treebanks (v2.2) (Nivre et al., 2018), where English is used as the only labeled source data. 10 distant languages and 10 nearby languages are selected as the target without labels. On 10 distant transfer cases – which we focus on in this paper – our approach achieves an average of 5.2% absolute improvement on POS tagging and 8.3% absolute improvement on dependency parsing over strong discriminative baselines. We also analyze the performance difference between different systems as a function of language distance, and provide preliminary guidance on when to use generative models for cross-lingual transfer. 2 Difficulties of Cross-Lingual Transfer on Distant Languages In this section, we demonstrate the difficulties involved in performing cross-lingual transfer to distant languages. Specficially, we investigate the direct transfer performance as a function of language distances by training a high-performing system on English and then apply it to target languages. We first introduce the measurement of language distances and selection of 20 target languages, then study the transfer performance change on POS tagging and dependency parsing tasks. 2.1 Language Distance To quantify language distances, we make use of the URIEL (Littell et al., 2017) database,2 which represents over 8,000 languages as informationrich typological, phylogenetic, and geographical vectors. These vectors are sourced and predicted 2http://www.cs.cmu.edu/˜dmortens/ uriel.html 3213 from a variety of linguistic resources such as WALS (Dryer, 2013), PHOIBLE (Moran et al., 2014), Ethnologue (Lewis et al., 2015), and Glottolog (Hammarstrm et al., 2015). Based on these vectors, this database provides ready-to-use distance statistics between any pair of languages included in the database in terms of various metrics including genetic distance, geographical distance, syntactic distance, phonological distance, and phonetic inventory distance. These distances are represented by values between 0 and 1. Since phonological and inventory distances mainly characterize intra-word phonetic/phonological features that have less effect on word-level language composition rules, we remove these two and take the average of genetic, geographic, and syntactic distances as our distance measure. We rank all languages in Universal Dependencies (UD) Treebanks (v2.2) (Nivre et al., 2018) according to their distances to English, with the distant ones on the top. Then we select 10 languages from the top that represent the distant language group, and 10 languages from the bottom that represent the nearby language group. 
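As a concrete illustration of this measure, the averaging and ranking step amounts to the following sketch; the per-metric distances would be read from the URIEL database, and the numbers below are placeholders rather than actual database values.

```python
def english_distance(genetic, geographic, syntactic):
    """Average of the three URIEL distances used as the overall language distance."""
    return (genetic + geographic + syntactic) / 3.0

# Placeholder per-metric distances to English (not actual URIEL values).
uriel = {"zh": (0.95, 0.85, 0.78), "es": (0.40, 0.45, 0.53)}
ranked = sorted(uriel, key=lambda lang: english_distance(*uriel[lang]), reverse=True)
print(ranked)  # most distant language first, e.g. ['zh', 'es']
```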
The selected languages are required to meet two conditions: (1) at least 1,000 unlabeled training sentences are present in the treebank, since a reasonably large amount of unlabeled data is needed to study the effect of unsupervised adaptation, and (2) an offline pre-trained word embedding alignment matrix is available (following Ahmad et al. (2019), we use the offline pre-trained alignment matrices from https://github.com/Babylonpartners/fastText_multilingual, which cover 78 languages and also allow comparison with their numbers in Section 4.3). The 20 selected target languages are shown in Table 1, which contains distant languages like Persian and Arabic, but also closely related languages like Spanish and French. Detailed statistics of the selected languages and the corresponding treebanks can be found in Appendix A.

Table 1: 20 selected target languages. Numbers in parentheses denote the distances to English.
Distant: Chinese (zh, 0.86), Persian (fa, 0.86), Arabic (ar, 0.86), Japanese (ja, 0.71), Indonesian (id, 0.71), Korean (ko, 0.69), Turkish (tr, 0.62), Hindi (hi, 0.61), Croatian (hr, 0.59), Hebrew (he, 0.57)
Nearby: Bulgarian (bg, 0.50), Italian (it, 0.50), Portuguese (pt, 0.48), French (fr, 0.46), Spanish (es, 0.46), Norwegian (no, 0.45), Danish (da, 0.41), Swedish (sv, 0.40), Dutch (nl, 0.37), German (de, 0.36)

2.2 Observations

In the direct transfer experiments, we use the pre-trained cross-lingual fastText word embeddings (Bojanowski et al., 2017), aligned with the method of Smith et al. (2017). These embeddings are fixed during training; otherwise the alignment would be broken. We employ a bidirectional LSTM-CRF model (Huang et al., 2015) for POS tagging, using the NCRF++ toolkit (Yang and Zhang, 2018), and use the "SelfAtt-Graph" model (Ahmad et al., 2019) for dependency parsing (we use an implementation and English source model checkpoint identical to the original paper). Following Ahmad et al. (2019), for dependency parsing gold POS tags are also used to learn POS tag embeddings as universal features. We train the systems on English and directly evaluate them on the target languages. Results are shown in Figure 1. While these systems achieve quite accurate results on closely related languages, we observe large performance drops on both tasks as the distance to English increases. These results motivate our proposed approach, which aims to close this gap by directly adapting to the target language through unsupervised learning over unlabeled text.

3 Proposed Method

In this section, we first introduce the unsupervised monolingual models presented in He et al. (2018), which we refer to as structured flow models, and then propose our approach that extends the structured flow models to cross-lingual settings.

3.1 Unsupervised Training of Structured Flow Models

The structured flow generative model, proposed by He et al. (2018), is a state-of-the-art technique for inducing syntactic structure in a monolingual setting without supervision. This model cascades a structured generative prior p_syntax(z; θ) with a conditional Gaussian emission and an invertible neural network f_φ to generate pretrained word embeddings x = f_φ(e), which correspond to the words in the training sentences; z represents latent syntax variables that are not observed during training. The structured prior defines a probability over syntactic structures, and can be a Markov prior to induce POS tags or a DMV prior (Klein and Manning, 2004) to induce dependency structures. Notably, the model side-steps discrete words and instead uses pre-trained word embeddings as observations, which allows it to be directly employed in the cross-lingual transfer setting by using cross-lingual word embeddings as the observations. A graphical illustration of this model is shown in Figure 2.

[Figure 2: Graphical representation of the structured flow model. Discrete syntactic variables are denoted z, the latent embedding variable e, and the observed pretrained word embeddings x; f_φ is the invertible projection function. The generative steps labeled in the figure are z ~ Syntactic Prior, e_i ~ N(µ_{z_i}, Σ_{z_i}), and x_i = f_φ(e_i).]

Given a sentence of length l, we denote z = \{z_k\}_{k=1}^{K} as the set of discrete latent variables from the structured prior, e = \{e_i\}_{i=1}^{l} as the latent embeddings, and x = \{x_i\}_{i=1}^{l} as the observed word embeddings. Note that the number of latent syntax variables K is no smaller than the sentence length l, and we assume x_i is generated (indirectly) conditioned on z_i for notational simplicity. The model is trained by maximizing the following marginal data likelihood:

p_{us}(x) = \sum_{z} \Big[ p_{syntax}(z; \theta) \cdot \prod_{i=1}^{l} p_{\eta}\big(f_{\phi}^{-1}(x_i) \mid z_i\big) \, \Big|\det \frac{\partial f_{\phi}^{-1}}{\partial x_i}\Big| \Big].   (1)

Here p_η(· | z_i) is defined to be a conditional Gaussian distribution that emits the latent embedding e, and the projection function f_φ maps the latent embedding e to the observed embedding x.
The term ∂f_φ^{-1}/∂x_i is the Jacobian matrix of the function f_φ^{-1} at x_i, and |det ∂f_φ^{-1}/∂x_i| denotes the absolute value of its determinant. To understand the intuition behind Eq. 1, first denote the log likelihood over the latent embedding e as log p_gaus(·); then the log of Eq. 1 can be equivalently rewritten as:

\log p_{us}(x) = \log p_{gaus}\big(f_{\phi}^{-1}(x)\big) + \sum_{i=1}^{l} \log \Big|\det \frac{\partial f_{\phi}^{-1}}{\partial x_i}\Big|.   (2)

Eq. 2 shows that f_φ^{-1}(x) inversely projects x to a new latent embedding space, on which the unsupervised training objective is simply the Gaussian log likelihood with an additional Jacobian regularization term. The Jacobian regularization term accounts for the volume expansion or contraction behavior of the projection; maximizing it can thus be thought of as preventing information loss (it encourages volume expansion and prevents the latent embedding from collapsing to a (nearly) single point). This projection scheme can flexibly transform the embedding space to fit the task at hand, but still avoids trivial solutions by preserving information. While f_φ^{-1}(x) can be any invertible function, He et al. (2018) use a version of the NICE architecture (Dinh et al., 2014) to construct f_φ^{-1}, which has the advantage that the determinant term is constantly equal to one. This structured flow model allows for exact marginal data likelihood computation and exact inference, using dynamic programs to marginalize out z. More details about this model can be found in He et al. (2018).

3.2 Supervised Training of Structured Flow Models

While He et al. (2018) train the structured flow model in an unsupervised fashion, this model can also be trained with supervised data when z is observed. Supervised training is required for cross-lingual transfer, where we train a model on the high-resource source language. The supervised objective can be written as:

p_{s}(z, x) = p_{syntax}(z; \theta) \cdot \prod_{i=1}^{l} p_{\eta}\big(f_{\phi}^{-1}(x_i) \mid z_i\big) \, \Big|\det \frac{\partial f_{\phi}^{-1}}{\partial x_i}\Big|.   (3)

3.3 Multilingual Training through Parameter Sharing

In this paper, we focus on the zero-shot cross-lingual transfer setting where the syntactic structure z is observed for the source language but unavailable for the target languages. Eq. 2 is an unsupervised objective which is optimized on the target languages, and Eq. 3 is optimized on the source language. To establish connections between the source and target languages, we employ two instances of the structured flow model – a source model and a target model – and share parameters between them. The source model uses the supervised objective, Eq. 3, the target model uses the unsupervised objective, Eq. 2, and both are optimized jointly. Instead of tying their parameters in a hard way, we share them softly through an L2 regularizer that encourages similarity. We use subscript p to denote variables of the source model and q to denote variables of the target model. Together, our joint training objective is:

L(\theta_{\{p,q\}}, \eta_{\{p,q\}}, \phi_{\{p,q\}}) = \log p_{s}(x_p) + \log p_{us}(x_q) - \frac{\beta_1}{2}\|\theta_p - \theta_q\|^2 - \frac{\beta_2}{2}\|\eta_p - \eta_q\|^2 - \frac{\beta_3}{2}\|\phi_p - \phi_q\|^2,   (4)

where β = {β1, β2, β3} are regularization parameters. Introducing hyperparameters is a concern because in the zero-shot transfer setting we have no annotated data to select them for each target language; in experiments, however, we found it unnecessary to tune β for each target language separately, and it is possible to use the same β within the same language category (i.e. distant or nearby).
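Equations 1–4 all hinge on the invertible projection f_φ and its Jacobian term. To make Eq. 2 concrete, the sketch below implements an additive (NICE-style) coupling layer and evaluates the log likelihood of the inversely projected embeddings; with additive couplings the log-determinant term is zero, matching the property noted above. This is a minimal PyTorch illustration, not the authors' implementation: the layer sizes are assumptions, and a standard normal stands in for the structured syntactic prior that the real model marginalizes with dynamic programs.

```python
import math
import torch
import torch.nn as nn

class AdditiveCoupling(nn.Module):
    """One NICE-style coupling layer: y1 = x1, y2 = x2 + m(x1).
    The transform is invertible and its Jacobian determinant is 1.
    (Real NICE stacks several layers and alternates which half is transformed.)"""
    def __init__(self, dim, hidden=150):
        super().__init__()
        self.half = dim // 2
        self.m = nn.Sequential(
            nn.Linear(self.half, hidden), nn.ReLU(),
            nn.Linear(hidden, dim - self.half),
        )

    def inverse(self, x):
        # f_phi^{-1}: map observed embeddings back to the latent space.
        x1, x2 = x[:, :self.half], x[:, self.half:]
        return torch.cat([x1, x2 - self.m(x1)], dim=-1)

def log_p_us(x, couplings):
    """Eq. 2 with a standard-normal stand-in for the structured prior:
    log p(x) = log N(f^{-1}(x); 0, I) + sum_i log|det J|  (the det term is 0 for NICE)."""
    e = x
    for layer in couplings:
        e = layer.inverse(e)
    # Gaussian log density per dimension, summed over all tokens.
    log_gauss = (-0.5 * (e ** 2 + math.log(2 * math.pi))).sum()
    log_det = 0.0  # additive couplings contribute nothing
    return log_gauss + log_det

# Usage: a "sentence" of 5 tokens with 300-dimensional embeddings.
x = torch.randn(5, 300)
flow = [AdditiveCoupling(300) for _ in range(8)]
print(log_p_us(x, flow))
```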
Under the parameter sharing scheme, the projected latent embedding space e can be understood as a new interlingual embedding space from which we learn the syntactic structures. The expressivity of the flow model used in learning this latent embedding space is expected to compensate for the imperfect orthogonality between the two embedding spaces. Further, jointly training both models with Eq. 4 is more expensive than typical cross-lingual transfer setups – it would require re-training both models for each language pair. To improve efficiency and memory utilization, in practice we use a simple pipelined approach: (1) we pre-train parameters for the source model only once, in isolation; (2) we use these parameters to initialize each target model, and regularize all target parameters towards this initializer via the L2 terms in Eq. 4. In this way, we only need to store the pre-trained parameters of a single source model, and target-side fine-tuning converges much faster than training each pair from scratch. This training approximation has been used before in Zhang et al. (2016).

4 Experiments

In this section, we first describe the dataset and experimental setup, and then report the cross-lingual transfer results of POS tagging and dependency parsing on distant target languages. Lastly, we include an analysis of the different systems.

4.1 Experimental Setup

For both the POS tagging and dependency parsing tasks, we run experiments on the Universal Dependency Treebanks (v2.2) (Nivre et al., 2018). Specifically, we train the proposed model on the English corpus with annotated data and fine-tune it on target languages in an unsupervised way. In the rest of the paper we use Flow-FT to refer to our proposed method. We use the aligned cross-lingual word embeddings described in Section 2.2 as the observations of our model. To compare with Ahmad et al. (2019), on the dependency parsing task we also use universal gold POS tags to index tag embeddings as part of the observations. Specifically, the tag embeddings are concatenated with word embeddings to form x; tag embeddings are updated when training on the source language and fixed at the fine-tuning stage. We implement the structured flow model based on the public code from He et al. (2018) (https://github.com/jxhe/struct-learning-with-flow), which contains models with a Markov prior for POS tagging and a DMV prior for dependency parsing. Detailed hyperparameters can be found in Appendix B. Both the source model and the target model are optimized with Adam (Kingma and Ba, 2014). Training on the English source corpus is run 5 times with different random restarts for all models, and the source model with the best English test accuracy is selected to perform transfer. We compare our method with a direct transfer approach based on the state-of-the-art discriminative models described in Section 2.2. The pre-trained cross-lingual word embeddings for all models are frozen, since fine-tuning them would break the multilingual alignment. In addition, to demonstrate the efficacy of unsupervised adaptation, we also include direct transfer results of our model without fine-tuning, which we denote as Flow-Fix. On the POS tagging task we reimplement the generative baseline of Zhang et al. (2016) that employs a linear projection (Linear-FT). We present results on 20 target languages in the "distant languages" and "nearby languages" categories to analyze the differences between the systems and the scenarios to which each is applicable.
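Before turning to task-specific results, the following sketch shows the pipelined transfer procedure from Section 3.3 in schematic PyTorch-style code: the source model is pre-trained once, each target model is initialized from it, and target parameters are pulled back towards the initializer as in Eq. 4. The method name unsupervised_log_likelihood and the name-based parameter grouping are placeholders for exposition, not functions or conventions from the released code.

```python
import copy
import torch

def l2_penalty(model, reference, betas):
    """Soft parameter sharing: sum_g (beta_g / 2) * ||param_g - param_g^src||^2."""
    penalty = 0.0
    for (name, p), p_ref in zip(model.named_parameters(), reference):
        # Placeholder heuristic mapping parameters to the theta/eta/phi groups of Eq. 4.
        group = "phi" if "coupling" in name else ("eta" if "gauss" in name else "theta")
        penalty = penalty + 0.5 * betas[group] * (p - p_ref).pow(2).sum()
    return penalty

def transfer(source_model, target_corpora, betas, epochs=10, lr=1e-3):
    # Step 1 (done once elsewhere): source_model is assumed pre-trained on labeled English.
    frozen_src = [p.detach().clone() for p in source_model.parameters()]

    adapted = {}
    for lang, unlabeled_batches in target_corpora.items():
        # Step 2: initialize the target model from the source parameters.
        target_model = copy.deepcopy(source_model)
        opt = torch.optim.Adam(target_model.parameters(), lr=lr)
        for _ in range(epochs):
            for batch in unlabeled_batches:
                # Unsupervised objective (Eq. 2) plus the L2 pull towards the initializer (Eq. 4).
                loss = -target_model.unsupervised_log_likelihood(batch)  # placeholder method
                loss = loss + l2_penalty(target_model, frozen_src, betas)
                opt.zero_grad()
                loss.backward()
                opt.step()
        adapted[lang] = target_model
    return adapted
```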
4.2 Part-Of-Speech Tagging

Setup. Our method aims to predict coarse universal POS tags, as fine-grained tags are language-dependent. The discriminative baseline with the NCRF++ toolkit (Yang and Zhang, 2018) achieves a supervised test accuracy on English of 94.02%, which is competitive (rank 12) on the CoNLL 2018 Shared Task scoreboard that uses the same dataset (for reference, see the "en ewt" treebank results at http://universaldependencies.org/conll18/results-upos.html). The regularization parameters β in all generative models are tuned on the Arabic development data (we choose Arabic simply because it is first in alphabetical order) and kept the same for all target languages. Our running setting is β1 = 0, β2 = 500, β3 = 80. Unsupervised fine-tuning is run for 10 epochs.

Table 2: POS tagging accuracy results (%). Numbers next to language names are their distances to English. Supervised accuracy on English (*) is included for reference. LSTM-CRF is the discriminative baseline; Flow-Fix, Flow-FT, and Linear-FT are generative models.

Lang        LSTM-CRF  Flow-Fix  Flow-FT  Linear-FT
Distant Languages
zh (0.86)   33.31     35.24     43.44    35.95
fa (0.86)   61.74     55.32     64.47    34.35
ar (0.86)   56.41     49.70     64.00    38.95
ja (0.71)   26.37     25.09     38.37    12.49
id (0.71)   72.21     63.73     73.51    57.56
ko (0.69)   42.57     39.56     41.76    18.30
tr (0.62)   58.74     43.17     60.08    22.79
hi (0.61)   55.85     47.18     64.75    38.04
hr (0.59)   63.23     50.57     57.90    56.53
he (0.57)   48.90     47.97     62.69    48.17
AVG         51.93     45.75     57.10    36.31
Nearby Languages
bg (0.50)   74.55     62.18     64.69    66.71
it (0.50)   77.75     69.93     80.99    73.55
pt (0.48)   74.68     65.08     72.65    72.54
fr (0.46)   73.33     64.15     69.78    66.63
es (0.46)   76.07     65.77     77.19    72.86
no (0.45)   69.30     58.98     62.05    62.38
da (0.41)   79.33     62.42     68.68    67.31
sv (0.40)   76.70     58.91     66.34    61.82
nl (0.37)   80.15     66.52     68.74    66.08
de (0.36)   68.75     57.91     59.97    56.16
AVG         75.06     63.19     69.11    66.60
en*         94.02     87.03     –        84.69

Results. We show our results in Table 2, where unsupervised fine-tuning achieves considerable and consistent performance improvements over the Flow-Fix baseline in both language categories. When compared with the discriminative LSTM-CRF baseline, our approach outperforms it on 8 out of 10 distant languages, with an average of 5.2% absolute improvement. Unsurprisingly, however, it also underperforms the expressive LSTM-CRF on 8 out of 10 nearby languages. The reasons for this are two-fold. First, the flexible LSTM-CRF model is better able to fit the source English corpus than our method (94.02% vs 87.03% accuracy), and thus it is also capable of fitting similar input when transferring. Second, unsupervised adaptation helps less when transferring to nearby languages (5.9% improvement over Flow-Fix versus 11.3% on distant languages); we posit that this is because a large portion of linguistic knowledge is shared between similar languages and the cross-lingual word embeddings are of better quality in this case, so unsupervised adaptation becomes less necessary. While the Linear-FT baseline on nearby languages is comparable to our method, its performance on distant languages is much worse, which confirms the importance of the invertible projection, especially when language typologies are divergent.

4.3 Dependency Parsing

Setup. In preliminary parsing experiments we found that transferring to the "nearby languages" group is likely to suffer from catastrophic forgetting (McCloskey and Cohen, 1989) and thus requires stronger regularization towards the source model. This also makes sense intuitively, since nearby languages should prefer the source model more than distant languages do.
Therefore, we use two different sets of regularization parameters for nearby and distant languages, respectively. Specifically, β for the "distant languages" group is set to β1 = β2 = β3 = 0.1, tuned on the Arabic development set, and for the "nearby languages" group β is set to β1 = β2 = β3 = 1, tuned on the Spanish development set. Unsupervised adaptation is performed on sentences shorter than 40 tokens due to memory constraints (reducing the batch size can address this memory issue, but greatly increases the training time), but we test on sentences of all lengths. We run unsupervised fine-tuning for 5 epochs, and evaluate using unlabeled attachment score (UAS) with punctuation excluded.

Results. We show our results in Table 3. While unsupervised fine-tuning improves performance on the distant languages, it has only a minimal effect on nearby languages, which is consistent with our observations in the POS tagging experiment and implies that unsupervised adaptation helps more for distant transfer. Similar to the POS tagging results, our method outperforms the state-of-the-art "SelfAtt-Graph" model on 8 out of 10 distant languages, with an average of 8.3% absolute improvement, but the strong discriminative baseline performs better when transferring to nearby languages. Note that the supervised performance of our method on English is poor. This is mainly because the DMV prior is too simple and limits the capacity of the model. While the model still achieves good performance on distant transfer, incorporating more complex DMV variants (Jiang et al., 2016) might lead to further improvement.

Table 3: Dependency parsing UAS (%) on sentences of all lengths. Numbers next to language names are their distances to English. Supervised accuracy on English (*) is included for reference. SelfAtt-Graph is the discriminative baseline; Flow-Fix and Flow-FT are generative models.

Lang        SelfAtt-Graph  Flow-Fix  Flow-FT
Distant Languages
zh (0.86)   42.48          35.72     37.26
fa (0.86)   37.10          37.58     63.20
ar (0.86)   38.12          32.14     55.44
ja (0.71)   28.18          19.03     43.75
id (0.71)   49.20          46.74     64.20
ko (0.69)   34.48          34.76     37.03
tr (0.62)   35.08          34.76     36.05
hi (0.61)   35.50          29.20     33.17
hr (0.59)   61.91          59.57     65.31
he (0.57)   55.29          51.35     64.80
AVG         41.73          38.09     50.02
Nearby Languages
bg (0.50)   79.40          73.52     73.57
it (0.50)   80.80          68.84     70.68
pt (0.48)   76.61          66.61     66.61
fr (0.46)   77.87          65.92     67.66
es (0.46)   74.49          63.10     64.28
no (0.45)   80.80          65.48     65.29
da (0.41)   76.64          61.64     61.08
sv (0.40)   80.98          66.22     64.43
nl (0.37)   68.55          61.59     61.72
de (0.36)   71.34          70.10     69.52
AVG         76.75          66.30     66.48
en*         91.82          67.80     –

Analysis on Dependency Relations. We further perform a breakdown analysis over dependency relations to see how unsupervised adaptation helps the model learn new dependency rules. We select three typical distant languages with different orders of Subject, Object and Verb (Dryer, 2013): Arabic (Modern Standard, VSO), Indonesian (SVO) and Japanese (SOV). We investigate the unlabeled accuracy (recall) on the gold dependency labels, focusing on four typical dependency relations: case (case marking), nmod (nominal modifier), obj (object) and nsubj (nominal subject). The first two are "nominal dependents" (modifiers of nouns) and the other two are the main nominal "core arguments" (arguments of the predicate). Although the proportions vary across languages, these four types are representative relations and account for 25% to 40% of relation tokens among all 37 UD dependency types. We compare our fine-tuned model with the baseline "SelfAtt-Graph" model and our basic model without fine-tuning.
[Figure 3: Results (UAS%) on typical dependency relations for Arabic, Indonesian, and Japanese. "Baseline" denotes the "SelfAtt-Graph" model and "Direct-Transfer" denotes our source model without fine-tuning. The number in parentheses after each dependency label indicates its relative frequency: Arabic – case (15.3%), nmod (22.1%), obj (3.6%), nsubj (6.7%); Indonesian – case (9.7%), nmod (3.5%), obj (4.9%), nsubj (8.1%); Japanese – case (21.8%), nmod (5.9%), obj (2.6%), nsubj (3.9%).]

As shown in Figure 3, although our direct transfer model obtains results similar to the baseline, the fine-tuning method brings large improvements on most of these dependency relations. Among the three languages, Japanese benefits from our tuning method the most, probably because its word order is quite different from English and the baseline may overfit to the English order. For example, in Japanese almost all "case" relations are head-first and "obj" relations are modifier-first, patterns that are exactly opposite to those in English, which serves as our source language. As a result, direct transfer models fail on most of these relations since they only learn the patterns of English. With our fine-tuning on unlabeled data, the model becomes more familiar with the unusual word-order patterns and predicts more correct attachment decisions (improvements of around 0.4 in recall). In Arabic and Indonesian, although not as pronounced as in Japanese, the improvements are still consistent, especially on the core-argument relations.

4.4 When to Use Generative Models?

In the unsupervised cross-lingual transfer setting, it is hard to find a system that achieves state-of-the-art results on all languages. As reflected by our experiments, there is a tradeoff between fitting the source language and generalizing to the target language – the flexibility of discriminative models results in overfitting and poor performance when transferred to distant languages. Unfortunately, the world has only a limited number of high-resource languages and many more low-resource ones, and the low-resource languages are mostly distant from the high-resource ones. This means that distant transfer is a practical challenge we face when dealing with low-resource languages. Next we try to give preliminary guidance about which system should be used in specific transfer scenarios. As discussed in Section 2.1, there are different types of distance metrics. Here we aim to measure how significantly each distance feature correlates with the performance difference between our method and the discriminative baseline. We have five input distance features: geographic, genetic, syntactic, inventory, and phonological. Specifically, we fit a generalized linear model (GLM) on the accuracy difference and the five distance features of all 20 target languages, and then perform a hypothesis test to compute a p-value that reflects the significance of each feature (we use the GLM toolkit in the H2O Python module).
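The following sketch shows this kind of significance analysis with statsmodels as a stand-in for the H2O GLM toolkit mentioned above; the arrays are random placeholders, not the actual accuracy differences or URIEL feature values.

```python
import numpy as np
import statsmodels.api as sm

# Rows: the 20 target languages. Columns: geographic, genetic, syntactic,
# inventory, phonological distances to English (placeholder values).
X = np.random.rand(20, 5)
feature_names = ["geographic", "genetic", "syntactic", "inventory", "phonological"]

# Response: accuracy of Flow-FT minus accuracy of the discriminative baseline
# on each target language (placeholder values).
y = np.random.randn(20)

# Gaussian GLM (equivalent to ordinary least squares here) with an intercept.
model = sm.GLM(y, sm.add_constant(X), family=sm.families.Gaussian())
result = model.fit()

# p-values for each coefficient; small values indicate a feature that is
# significantly associated with the performance difference.
for name, p in zip(["const"] + feature_names, result.pvalues):
    print(f"{name:>12s}: p = {p:.3f}")
```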
Results are shown in Table 4, from which we can conclude that the genetic distance feature is significantly correlated with POS tagging performance, while the geographic distance feature is significantly correlated with dependency parsing performance. As assumed before, inventory and phonological distances do not have much influence on the transfer. Interestingly, syntactic distance is not a significant term for either task; we posit that this is because transfer performance is affected by both cross-lingual word embedding quality and linguistic features, so genetic/geographic distance might be a better indicator overall. The results suggest that our method might be more suitable than the discriminative approach for genetically distant transfer on POS tagging and for geographically distant transfer on parsing.

Table 4: p-values of the different distance features for the POS tagging and dependency parsing tasks. A lower p-value indicates a stronger association between the feature and the response, which is the difference between our method and the discriminative baselines.

Feature        POS tagging   Dependency Parsing
Geographic     0.465         0.013
Genetic        0.007         0.531
Syntactic      0.716         0.231
Inventory      0.982         0.453
Phonological   0.502         0.669

4.5 Effect of Multilingual-BERT

So far the analysis and experiments in this paper have focused on non-contextualized fastText word embeddings. We note that, concurrently to this work, Wu and Dredze (2019) found that the recently released multilingual BERT (mBERT; Devlin et al. (2019)) achieves impressive performance on various cross-lingual transfer tasks. To study the effect of contextualized mBERT word embeddings on our proposed method, we report the average POS tagging and dependency parsing results in Table 5, while detailed numbers for each language are included in Appendix C. In the mBERT experiments, all settings and hyperparameters are the same as in Section 4.2 and Section 4.3, but the aligned fastText embeddings are replaced with mBERT embeddings (we use the multilingual cased BERT base model released at https://github.com/google-research/bert). We also include the average results with fastText embeddings for comparison.

Table 5: Average POS tagging accuracy (%) and dependency parsing UAS (%), comparing mBERT and fastText. "Disc" denotes the discriminative baselines.

                 Tagging              Parsing
emb        Disc     Flow-FT     Disc     Flow-FT
Distant Languages
fastText   51.93    57.10       41.73    50.02
mBERT      60.24    66.56       51.86    50.11
Nearby Languages
fastText   75.06    69.11       76.75    66.48
mBERT      82.17    85.48       83.41    67.70

On the POS tagging task all models greatly benefit from the mBERT embeddings, especially our method on nearby languages, where mBERT outperforms fastText by an average of 16 absolute points. Moreover, unsupervised adaptation still considerably improves over the Flow-Fix baseline, and surpasses the LSTM-CRF baseline on 9 out of 10 distant languages with an average 6% absolute performance boost. In contrast to the fastText setting, where our method underperforms the discriminative baseline on the nearby language group, with mBERT embeddings our method also beats the discriminative baseline on 7 out of 10 nearby languages with an average 3% absolute improvement. A major limitation of our method lies in its strong independence assumptions, which result in a failure to model long-range context. We posit that contextualized word embeddings like mBERT compensate for exactly this drawback by incorporating context information into the observed word embeddings, so that our method is able to outperform the discriminative baseline on both distant and nearby language groups.
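Since our model only requires fixed word-level embeddings as observations, swapping fastText for mBERT amounts to extracting contextualized vectors for each token. The sketch below shows one plausible way to do this with the HuggingFace transformers library, averaging WordPiece vectors into word-level vectors; it illustrates the idea rather than reproducing the exact extraction procedure used in our experiments.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModel.from_pretrained("bert-base-multilingual-cased")
model.eval()

def word_embeddings(words):
    """Return one fixed vector per word by averaging its WordPiece vectors."""
    enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]   # (num_pieces, 768)
    vectors = []
    for idx in range(len(words)):
        # word_ids() maps each WordPiece position to its source word (None for [CLS]/[SEP]).
        piece_positions = [i for i, w in enumerate(enc.word_ids()) if w == idx]
        vectors.append(hidden[piece_positions].mean(dim=0))
    return torch.stack(vectors)                      # (num_words, 768)

# Usage: embeddings for one target-language sentence, already split into words.
x = word_embeddings(["これ", "は", "例", "です"])
print(x.shape)
```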
On the dependency parsing task, however, our method does not show a significant improvement from mBERT, while mBERT greatly helps the discriminative baseline. Therefore, although our method still outperforms the discriminative baseline on four very distant languages, the baseline demonstrates superior performance on the other languages when using mBERT. Interestingly, we find that the performance of the flow-based models with mBERT is similar to their performance with fastText word embeddings. Based on this, better generative models for unsupervised dependency parsing that can take advantage of contextualized embeddings seem a promising direction for future work.

5 Related Work

Cross-lingual transfer learning has been widely studied to help induce syntactic structures in low-resource languages (McDonald et al., 2011; Täckström et al., 2013a; Agić et al., 2014; Tiedemann, 2015; Kim et al., 2017; Schuster et al., 2019; Ahmad et al., 2019). When no target annotations are available, unsupervised cross-lingual transfer can be performed by directly applying a pre-trained source model to the target language (Guo et al., 2015; Schuster et al., 2019; Ahmad et al., 2019). The challenge of the direct transfer method lies in the different linguistic rules of the source and distant target languages. Utilizing multiple sources of resources can mitigate this issue and has been actively studied in the past years (Cohen et al., 2011; Naseem et al., 2012; Täckström et al., 2013b; Zhang and Barzilay, 2015; Aufrant et al., 2015; Ammar et al., 2016; Wang and Eisner, 2018, 2019). Other approaches that try to overcome the lack of annotations include annotation projection using bitext supervision or bilingual lexicons (Hwa et al., 2005; Smith and Eisner, 2009; Wisniewski et al., 2014) and source data point selection (Søgaard, 2011; Täckström et al., 2013b). Learning from both labeled source data and unlabeled target data has also been explored before: Cohen et al. (2011) learn a generative target-language parser as a linear interpolation of multiple source-language parameter sets; Naseem et al. (2012) and Täckström et al. (2013b) rely on additional language typological features to guide selective model parameter sharing in a multi-source transfer setting; and Wang and Eisner (2018, 2019) extract linguistic features from target languages by training a feature extractor on multiple source languages.

6 Conclusion

In this work, we focus on transfer to distant languages for POS tagging and dependency parsing, and propose to learn a structured flow model in a cross-lingual setting. By learning a new latent embedding space as well as language-specific knowledge from unlabeled target data, our method proves effective at transferring to distant languages.

Acknowledgements

This research was supported by NSF Award No. 1761548 "Discovering and Demonstrating Linguistic Features for Language Documentation."

References

Željko Agić, Jörg Tiedemann, Kaja Dobrovoljc, Simon Krek, Danijela Merkler, and Sara Može. 2014. Cross-lingual dependency parsing of related languages with rich morphosyntactic tagsets. In EMNLP 2014 Workshop on Language Technology for Closely Related Languages and Language Variants.

Wasi Uddin Ahmad, Zhisong Zhang, Xuezhe Ma, Eduard Hovy, Kai-Wei Chang, and Nanyun Peng. 2019. On difficulties of cross-lingual transfer with order differences: A case study on dependency parsing. In Proceedings of NAACL.
Waleed Ammar, George Mulcaire, Miguel Ballesteros, Chris Dyer, and Noah A Smith. 2016. Many languages, one parser. Transactions of the Association for Computational Linguistics.

Lauriane Aufrant, Guillaume Wisniewski, and François Yvon. 2015. Zero-resource dependency parsing: Boosting delexicalized cross-lingual transfer with linguistic knowledge. In Proceedings of COLING.

Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics.

Shay B Cohen, Dipanjan Das, and Noah A Smith. 2011. Unsupervised structure prediction with non-parallel multilingual guidance. In Proceedings of EMNLP.

Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. 2018. Word translation without parallel data. In Proceedings of ICLR.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL.

Laurent Dinh, David Krueger, and Yoshua Bengio. 2014. NICE: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516.

Timothy Dozat and Christopher D Manning. 2017. Deep biaffine attention for neural dependency parsing. In Proceedings of ICLR.

Matthew S. Dryer. 2013. Order of subject, object and verb. In Matthew S. Dryer and Martin Haspelmath, editors, The World Atlas of Language Structures Online. Max Planck Institute for Evolutionary Anthropology, Leipzig.

Jiang Guo, Wanxiang Che, David Yarowsky, Haifeng Wang, and Ting Liu. 2015. Cross-lingual dependency parsing based on distributed representations. In Proceedings of ACL.

Jiang Guo, Wanxiang Che, David Yarowsky, Haifeng Wang, and Ting Liu. 2016. A representation learning framework for multi-source transfer parsing. In Proceedings of AAAI.

Harald Hammarström, Robert Forkel, Martin Haspelmath, and Sebastian Bank. 2015. Glottolog 2.6. Max Planck Institute for the Science of Human History.

Junxian He, Graham Neubig, and Taylor Berg-Kirkpatrick. 2018. Unsupervised learning of syntactic structure with invertible neural projections. In Proceedings of EMNLP.

Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF models for sequence tagging. arXiv preprint arXiv:1508.01991.

Rebecca Hwa, Philip Resnik, Amy Weinberg, Clara Cabezas, and Okan Kolak. 2005. Bootstrapping parsers via syntactic projection across parallel texts. Natural Language Engineering, 11(3):311–325.

Yong Jiang, Wenjuan Han, and Kewei Tu. 2016. Unsupervised neural dependency parsing. In Proceedings of EMNLP.

Joo-Kyung Kim, Young-Bum Kim, Ruhi Sarikaya, and Eric Fosler-Lussier. 2017. Cross-lingual transfer learning for POS tagging without cross-lingual resources. In Proceedings of EMNLP.

Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.

Dan Klein and Christopher D Manning. 2004. Corpus-based induction of syntactic structure: Models of dependency and constituency. In Proceedings of ACL.

M. Paul Lewis, Gary F. Simons, and Charles D. Fennig, editors. 2015. Ethnologue: Languages of the World, Eighteenth edition. SIL International, Dallas, Texas.

Patrick Littell, David R Mortensen, Ke Lin, Katherine Kairis, Carlisle Turner, and Lori Levin. 2017. URIEL and lang2vec: Representing languages as typological, geographical, and phylogenetic vectors. In Proceedings of EACL.

Xuezhe Ma, Zecong Hu, Jingzhou Liu, Nanyun Peng, Graham Neubig, and Eduard Hovy. 2018. Stack-pointer networks for dependency parsing. In Proceedings of ACL.
Michael McCloskey and Neal J Cohen. 1989. Catastrophic interference in connectionist networks: The sequential learning problem. In Psychology of Learning and Motivation, volume 24, pages 109–165. Elsevier.

Ryan McDonald, Slav Petrov, and Keith Hall. 2011. Multi-source transfer of delexicalized dependency parsers. In Proceedings of EMNLP.

Steven Moran, Daniel McCloy, and Richard Wright, editors. 2014. PHOIBLE Online. Max Planck Institute for Evolutionary Anthropology, Leipzig.

Tahira Naseem, Regina Barzilay, and Amir Globerson. 2012. Selective sharing for multilingual dependency parsing. In Proceedings of ACL.

Joakim Nivre, Mitchell Abrams, Željko Agić, and et al. 2018. Universal Dependencies 2.2. LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics (ÚFAL), Faculty of Mathematics and Physics, Charles University.

Slav Petrov, Dipanjan Das, and Ryan McDonald. 2012. A universal part-of-speech tagset. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC-2012).

Tal Schuster, Ori Ram, Regina Barzilay, and Amir Globerson. 2019. Cross-lingual alignment of contextual word embeddings, with applications to zero-shot dependency parsing. In Proceedings of NAACL.

David A Smith and Jason Eisner. 2009. Parser adaptation and projection with quasi-synchronous grammar features. In Proceedings of EMNLP.

Samuel L Smith, David HP Turban, Steven Hamblin, and Nils Y Hammerla. 2017. Offline bilingual word vectors, orthogonal transformations and the inverted softmax. In Proceedings of ICLR.

Anders Søgaard. 2011. Data point selection for cross-language adaptation of dependency parsers. In Proceedings of ACL.

Oscar Täckström, Dipanjan Das, Slav Petrov, Ryan McDonald, and Joakim Nivre. 2013a. Token and type constraints for cross-lingual part-of-speech tagging. Transactions of the Association for Computational Linguistics, 1:1–12.

Oscar Täckström, Ryan McDonald, and Joakim Nivre. 2013b. Target language adaptation of discriminative transfer parsers. In Proceedings of NAACL-HLT.

Jörg Tiedemann. 2015. Cross-lingual dependency parsing with universal dependencies and predicted POS labels. In Proceedings of the Third International Conference on Dependency Linguistics (Depling 2015).

Dingquan Wang and Jason Eisner. 2018. Synthetic data made to order: The case of parsing. In Proceedings of EMNLP.

Dingquan Wang and Jason Eisner. 2019. Surface statistics of an unknown language indicate how to parse it. Transactions of the Association for Computational Linguistics.

Guillaume Wisniewski, Nicolas Pécheux, Souhir Gahbiche-Braham, and François Yvon. 2014. Cross-lingual part-of-speech tagging through ambiguous learning. In Proceedings of EMNLP.

Shijie Wu and Mark Dredze. 2019. Beto, Bentz, Becas: The surprising cross-lingual effectiveness of BERT. arXiv preprint arXiv:1904.09077.

Jie Yang and Yue Zhang. 2018. NCRF++: An open-source neural sequence labeling toolkit. In Proceedings of ACL.

Yuan Zhang and Regina Barzilay. 2015. Hierarchical low-rank tensors for multilingual transfer parsing. In Proceedings of EMNLP.

Yuan Zhang, David Gaddy, Regina Barzilay, and Tommi Jaakkola. 2016. Ten pairs to tag – multilingual POS tagging via coarse mapping between embeddings. In Proceedings of NAACL-HLT.

A Details of UD Treebanks
Chinese (zh) 0.86 GSD train 3997 dev 500 test 500 Persian (fa) 0.86 Seraji train 4798 dev 599 test 600 Arabic (ar) 0.86 PADT train 6075 dev 909 test 680 Japanese (ja) 0.71 GSD train 7164 dev 511 test 557 Indonesian (id) 0.71 GSD train 4477 dev 559 test 557 Korean (ko) 0.69 GSD, Kaist train 27410 dev 3016 test 3276 Turkish (tr) 0.62 IMST train 3685 dev 975 test 975 Hindi (hi) 0.61 HDTB train 13304 dev 1659 test 1684 Croatian (hr) 0.59 SET train 6983 dev 849 test 1057 Hebrew (he) 0.57 HTB train 5241 dev 484 test 491 Bulgarian (bg) 0.50 BTB train 8907 dev 1115 test 1116 Italian (it) 0.50 ISDT train 13121 dev 564 test 482 Portuguese (pt) 0.48 Bosque, GSD train 17993 dev 1770 test 1681 French (fr) 0.46 GSD train 14554 dev 1478 test 416 Spanish (es) 0.46 GSD, AnCora train 28492 dev 3054 test 2147 Norwegian (no) 0.45 Bokmaal, Nynorsk train 29870 dev 4300 test 3450 Danish (da) 0.41 DDT train 4383 dev 564 test 565 Swedish (sv) 0.40 Talbanken train 4303 dev 504 test 1219 Dutch (nl) 0.37 Alpino, LassySmall train 18058 dev 1394 test 1472 German (de) 0.36 GSD train 13814 dev 799 test 977 English (en) – EWT train 12543 dev 2002 test 2077 Table 6: Statistics of the UD Treebanks that we used. We list the statistics of the UD Treebanks that we used in the following two tables. The left one lists the distance (to English) languages and the right one lists the similar (to English) languages. B Model Hyperparameters We use the same architecture as in He et al. (2018) for the invertible projection function fφ which is the NICE architecture (Dinh et al., 2014). It contains 8 coupling layers. The coupling function in each coupling layer is a rectified network with an input layer, one hidden layer, and linear output units. The number of hidden units is set to the same as the number of input units, which is 150 in our case. POS tagger is trained with batch size 32, while dependency parser is trained with batch size 16. C Full Results with mBERT Here we report in Table 7 the full results on all languages with mBERT.12 12The results of our discriminative baselines are different from the ones reported in Wu and Dredze (2019) because they do not use additional encoders on top of the pretrained mBERT word embeddings, while we keep the models unchanged here for direct comparison with fastText embeddings. On some languages our version produces better results and sometimes their version is superior. 
3223 POS Tagging Dependency Parsing Lang LSTM-CRF Flow-Fix Flow-FT SelfAtt-Graph Flow-Fix Flow-FT Distant Languages zh (0.86) 59.63 53.61 65.84 48.78 35.73 35.64 fa (0.86) 57.63 56.18 68.55 51.47 37.99 63.18 ar (0.86) 53.50 48.92 67.33 50.91 32.13 56.85 ja (0.71) 46.81 40.98 46.06 40.08 19.23 43.55 id (0.71) 74.95 70.95 78.72 57.94 47.00 64.35 ko (0.69) 50.74 47.99 54.07 39.42 34.67 37.02 tr (0.62) 60.08 54.69 61.16 42.80 34.88 37.06 hi (0.61) 58.86 53.16 68.39 48.44 29.15 33.17 hr (0.59) 74.98 66.35 78.61 73.63 59.68 65.27 he (0.57) 65.24 57.27 76.83 65.11 51.39 65.03 AVG (mBERT) 60.24 55.01 66.56 51.86 38.19 50.11 AVG (fastText) 51.93 45.75 57.10 41.73 38.09 50.02 Nearby Languages bg (0.50) 82.36 74.56 80.68 86.32 73.65 74.06 it (0.50) 76.70 66.02 87.88 86.71 69.09 71.59 pt (0.48) 83.45 80.83 86.49 83.75 66.67 69.56 fr (0.46) 79.22 74.21 87.21 86.64 66.08 69.14 es (0.46) 77.68 72.28 84.50 81.74 63.18 66.46 no (0.45) 85.29 80.69 83.96 85.01 65.47 66.08 da (0.41) 85.57 81.90 86.79 82.22 61.61 62.15 sv (0.41) 86.39 81.27 86.31 85.33 66.04 64.51 nl (0.40) 83.67 78.88 85.05 77.32 61.70 63.24 de (0.37) 81.37 78.97 85.96 79.03 70.19 70.19 AVG (mBERT) 82.17 76.96 85.48 83.41 66.37 67.70 AVG (fastText) 75.06 63.19 69.11 76.75 66.30 66.48 en∗ 95.13 91.22 – 92.84 67.76 – Table 7: POS tagging accuracy (%) and dependency parsing UAS (%) results when using mBERT as the aligned embeddings. Numbers next to languages names are their distances to English. Supervised accuracy on English (∗) is included for reference.
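As a concrete illustration of the projection function described in Appendix B, the following is a minimal numpy sketch of a single NICE additive coupling layer with a one-hidden-layer rectified network and linear output units. The 150 hidden units follow the description above, while the even split of dimensions, the weight initialization, and all names are illustrative assumptions rather than the authors' implementation; stacking eight such layers, alternating which half is left unchanged, gives a model of the kind referred to in Appendix B.

```python
import numpy as np

class AdditiveCouplingLayer:
    """One NICE-style additive coupling layer (sketch, not the paper's code).

    Splits the input into two halves; the first half passes through unchanged
    and parameterizes an additive shift of the second half via a one-hidden-layer
    ReLU network with linear outputs.
    """

    def __init__(self, dim=150, hidden=150, seed=0):
        rng = np.random.default_rng(seed)
        self.d1 = dim // 2            # size of the unchanged part (illustrative split)
        self.d2 = dim - self.d1       # size of the shifted part
        self.W1 = rng.normal(0.0, 0.01, (self.d1, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.01, (hidden, self.d2))
        self.b2 = np.zeros(self.d2)

    def _m(self, x1):
        # Rectified network: input layer -> one hidden ReLU layer -> linear output.
        h = np.maximum(0.0, x1 @ self.W1 + self.b1)
        return h @ self.W2 + self.b2

    def forward(self, x):
        x1, x2 = x[..., :self.d1], x[..., self.d1:]
        y2 = x2 + self._m(x1)         # volume-preserving additive shift
        return np.concatenate([x1, y2], axis=-1)

    def inverse(self, y):
        y1, y2 = y[..., :self.d1], y[..., self.d1:]
        x2 = y2 - self._m(y1)         # exactly invertible by construction
        return np.concatenate([y1, x2], axis=-1)


if __name__ == "__main__":
    layer = AdditiveCouplingLayer()
    x = np.random.default_rng(1).normal(size=(4, 150))
    assert np.allclose(layer.inverse(layer.forward(x)), x)  # invertibility check
```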
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3224–3230 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3224 Unsupervised Joint Training of Bilingual Word Embeddings Benjamin Marie Atsushi Fujita National Institute of Information and Communications Technology 3-5 Hikaridai, Seika-cho, Soraku-gun, Kyoto, 619-0289, Japan {bmarie, atsushi.fujita}@nict.go.jp Abstract State-of-the-art methods for unsupervised bilingual word embeddings (BWE) train a mapping function that maps pre-trained monolingual word embeddings into a bilingual space. Despite its remarkable results, unsupervised mapping is also well-known to be limited by the dissimilarity between the original word embedding spaces to be mapped. In this work, we propose a new approach that trains unsupervised BWE jointly on synthetic parallel data generated through unsupervised machine translation. We demonstrate that existing algorithms that jointly train BWE are very robust to noisy training data and show that unsupervised BWE jointly trained significantly outperform unsupervised mapped BWE in several cross-lingual NLP tasks. 1 Introduction Bilingual word embeddings (BWE) represent the vocabulary of two languages in one common continuous vector space. They are known to be useful in a wide range of cross-lingual NLP tasks. The most prevalent methods for training BWE are so-called mapping methods (Mikolov et al., 2013a): word embeddings for two languages are separately trained on respective monolingual data and then mapped into one common embedding space. The mapping function is usually trained using a small bilingual lexicon for supervision. Recently, unsupervised mapping for BWE (Artetxe et al., 2018a; Lample et al., 2018a), i.e., trained without using any manually created bilingual resources, has been shown to reach a performance comparable to supervised BWE in several crosslingual NLP tasks. Unsupervised BWE are trained with a three-step approach. First, word embeddings are roughly mapped into an initial BWE space, for instance using adversarial training or an heuristic mapping. Then, using the initial BWE, a small synthetic bilingual lexicon is induced. Finally, a new BWE, which is expected to be better than the initial BWE, is learned from the induced lexicon through a pseudo-supervision with some supervised mapping method. The last two steps can be repeated to refine the BWE. In spite of their success, unsupervised mapping methods are inherently limited by the dissimilarity between the original word embedding spaces to be mapped. The feasibility of aligning two embedding spaces relies on the assumption that they are isomorphic. However, Søgaard et al. (2018) showed that these spaces are, in general, far from being isomorphic, and thus they result in suboptimal or degenerated unsupervised mappings. On the other hand, supervised methods that jointly train BWE from scratch (Upadhyay et al., 2016), on parallel or comparable corpora, do not have such limits since no pre-existing embedding spaces and no mapping function are involved. These methods jointly train BWE by exploiting bilingual and monolingual contexts of words, materialized by sentence or document pairs, to learn a single BWE space. However, they require large bilingual resources for training. To the best of our knowledge, joint training of BWE has never been explored for unsupervised scenarios. In this paper, we propose unsupervised joint training of BWE. 
Our method is an extension of previous work on unsupervised BWE: we propose to generate, without supervision, synthetic parallel sentences that can be directly exploited to jointly train BWE with existing algorithms. We empirically show that this method learns better BWE for several cross-lingual NLP tasks. 2 Pseudo-supervised joint training On the strong assumption that existing algorithms for joint training of BWE are robust enough even 3225 with very noisy parallel training data, we formulate the following research question: Do synthetic sentence pairs supply useful bilingual contextual information for learning better BWE? 2.1 Bilingual skipgram Previous work on joint training of BWE hypothesizes that exploiting both monolingual and bilingual contextual information yields better word embeddings, monolingually and bilingually. Among several existing algorithms for joint training of BWE, in this work, we use bilingual skipgram (BIVEC) (Luong et al., 2015), which has been shown to outperform other methods in several NLP tasks (Upadhyay et al., 2016). BIVEC uses the skipgram algorithm (Mikolov et al., 2013b) to learn the word embeddings for each language and exploits word alignments obtained for parallel data in order to make the embeddings cross-lingual. Given a pair of sentences, S1 in some language L1 and S2 in another language L2, a word wi in S1 is replaced with its aligned word a(wi) in S2, so that the L1 context can also be used for learning the embedding of the L2 word. BIVEC has been shown to be robust to noisy word alignments (Luong et al., 2015), which is a significant advantage of this method in our scenario using synthetic parallel data. 2.2 Training on synthetic parallel data For an unsupervised training of BWE, the training data must also be generated in an unsupervised way. To this end, we chose unsupervised machine translation (MT). Recent work has shown significant progress in unsupervised MT (Artetxe et al., 2018b; Lample et al., 2018b) with generated translations of a reasonable quality. Both statistical (SMT) and neural MT (NMT) have been adapted to the unsupervised scenario. We chose unsupervised SMT (USMT) to generate synthetic parallel data since it generates better translations than unsupervised NMT (Lample et al., 2018b). Given an initial BWE, for instance learned with unsupervised mapping methods, our method works as follows (see also Figure 1). First, a USMT is trained from monolingual data. We collect a set of phrases made of up to L tokens, using word2phrase,1 for each of the source and target 1https://code.google.com/archive/p/ word2vec/ Unsupervised training for initial BWE Pseudo-supervised mapping for BWE Unsupervised machine translation Pseudo-supervised joint training for BWE Unsupervised Joint Training (this work) Iterative refinement Synthetic parallel sentences Synthetic bilingual lexicon Iterative refinement Jointly trained BWE Mapped BWE Unsupervised Mapping (Artetxe et al., 2018; Lample et al., 2018) Figure 1: Our joint training framework is on top of existing unsupervised mapping methods. languages. As phrases, we also consider all the token types in each corpus. In our phrase table, each L1 phrase is paired with its k most probable translations in L2 determined based on a score computed from the given BWE.2 The phrase table and a language model trained on the L2 monolingual data compose the initial USMT. Then, the USMT is iteratively refined in the following manner. 
• Synthetic parallel data are generated by translating monolingual data using the USMT. Both L1-to-L2 and L2-to-L1 translations can be considered (Artetxe et al., 2019). • A new phrase table is trained on the synthetic parallel data to form a new USMT. Finally, on the synthetic parallel data generated by our USMT after N refinement steps, we jointly train new BWE as described in Section 2.1. Although this approach can efficiently generate parallel data of a reasonable quality, as shown in Figure 1, it heavily relies on the feasibility of mapping the word embeddings learned for L1 and L2 in the same space and used for the initial USMT. If the mapping fails, we cannot expect USMT to generate useful data for jointly training BWE. Conversely, if the mapping succeeds, we can generate data with bilingual contexts that may be useful to jointly train BWE. More importantly, we use USMT assuming that BIVEC is robust enough to learn from very noisy parallel data. Our intuition comes from the fact 2See for instance Equation 3 in Lample et al. (2018b). 3226 that SMT generates less diverse translations, with a significantly different word frequency distribution than in translations naturally produced by humans. SMT is limited by the vocabulary of its phrase table and will favor the generation of frequent n-grams thanks to its language model. Same words appear more frequently in similar contexts, facilitating the training of word embeddings and compensating, to some extent, for the noisiness of the translations. In Appendix A, we provide results of our preliminary experiments supporting this assumption. 3 Experiments Are BWE unsupervisedly and jointly trained on noisy synthetic data better than unsupervised mapped BWE? To answer this question, we conducted experiments in three different tasks with three language pairs: English–German (en-de), English–French (en-fr), and English–Indonesian (en-id). 3.1 Settings for training BWE We trained monolingual word embeddings with fastText (Bojanowski et al., 2017)3 separately on English (239M lines), German (237M lines), and French (38M lines) News Crawl corpora provided by WMT4 for en-de and en-fr. For enid, we used English (100M lines) and Indonesian (77M lines) Common Crawl corpora.5 We then mapped the word embeddings into a BWE space using VECMAP,6 one of the best and most robust methods for unsupervised mapping (Glavas et al., 2019). The resulting BWE were used as baselines in our evaluation tasks and also to bootstrap our USMT system. Our initial USMT systems were induced with the following configuration. Maximum phrase length was set to six (L = 6). To make our experiments reasonably fast, we selected the 300k most frequent phrases referring to each monolingual corpus, and retained 300-best target phrases for each source phrase (k = 300). 4-gram language models were trained with lmplz (Heafield et al., 2013). Then, USMT systems were refined 3https://github.com/facebookresearch/ fastText 4http://www.statmt.org/wmt19/ 5http://commoncrawl.org 6https://github.com/artetxem/vecmap four times (N = 4) and used to generate synthetic parallel data by translating 10M sentences randomly sampled from the monolingual data. Finally, on the synthetic parallel data, we trained new BWE using BIVEC7 with the parameters used in Upadhyay et al. (2016) and with word alignments determined by fast align (Dyer et al., 2013).8 We performed contrastive experiments for some of our tasks with a simple method proposed by Levy et al. 
(2017), denoted SENTID,9 with its default parameters for training BWE. SENTID does not optimize a joint objective but as for BIVEC we trained it on the synthetic parallel data and learned directly from scratch a single BWE space. SENTID does not require word alignments, but instead simply exploits sentence pair IDs as a bilingual signal associated with each word and train BWE by applying skipgram on a word/sentenceID matrix. All the methods for training word embeddings were trained with 512 dimensions and their -min-count parameter set to 5. Note that in all our experiments, we filtered the vocabulary so that all BWE spaces have the same vocabulary when compared. 3.2 Task 1: Bilingual lexicon induction Bilingual lexicon induction (BLI) is by far the most popular evaluation task for BWE used by previous work in spite of its limits (Glavas et al., 2019). In contrast to previous work, we used much larger test sets10 for each language pair. Table 1 reports on accuracy in retrieving a correct translation with CSLS (Lample et al., 2018a) for each source word of the test sets. For all the tasks, BIVEC and SENTID achieved better accuracy than VECMAP. This supports our assumption that even noisy synthetic parallel data can provide useful bilingual contexts for training BWE. The largest improvements were observed for enid, with a gain of more than 10 points. Interestingly, BIVEC and SENTID performed similarly, pointing out that word alignments are not necessary in our scenario. The accuracy was higher when synthetic parallel data did not contain syn7https://github.com/lmthang/bivec 8https://github.com/clab/fast_align 9https://bitbucket.org/omerlevy/xling_ embeddings 10https://github.com/facebookresearch/ MUSE 3227 Method Data en→de de→en en→fr fr→en en→id id→en src-tgt VECMAP all-all 42.4 59.0 67.7 70.0 58.9 59.5 BIVEC 10M-0 45.8 59.2 73.9 71.3 70.4 69.7 SENTID 10M-0 45.8 60.1 74.4 71.8 69.8 69.2 BIVEC 0-10M 43.7 63.4 72.0 74.3 67.3 72.3 SENTID 0-10M 43.5 63.5 72.6 74.8 67.5 73.4 BIVEC 10M-10M 44.9 54.9 73.9 73.8 69.5 72.1 SENTID 10M-10M 45.4 62.1 74.2 74.0 69.4 73.0 Coverage ratio 15.1 14.7 24.8 26.9 27.8 25.4 Table 1: Accuracy in BLI for different BWE. The “Data” column indicates the number of sentences in the monolingual data used to train BWE: e.g., “0” means that the data of the corresponding language has been generated by USMT. For the last two rows, 20M synthetic sentence pairs have been used: 10M generated by L1→L2 and 10M generated by L2→L1 USMT systems. The last row indicate coverage ratio for each test set by the BWE. Best scores in each translation direction is presented in bold. USMT Data en→de en→fr en→id src-tgt Acc. BLEU Acc. BLEU Acc. BLEU Step 0 10M-0 47.1 (12.1) 74.1 (17.0) 65.2 (13.6) Step 4 10M-0 46.4 (18.8) 75.6 (25.3) 69.4 (24.5) Step 0 0-10M 43.8 (16.0) 72.8 (18.6) 64.6 (17.7) Step 4 0-10M 44.0 (23.4) 73.5 (26.7) 66.4 (29.1) Coverage ratio 14.3 23.1 23.0 Table 2: Accuracy in BLI using BWE learned with BIVEC on synthetic parallel sentences generated either by step 0 or step 4 of USMT. BLEU scores of the USMT systems that generated the data were evaluated on the test sets presented in Section 3.3. thetic English (“10M-0” for “en→∗” and “0-10M” for “∗→en”). Using the concatenation of the synthetic data generated by L1→L2 and L2→L1 (last two rows of the table) slightly underperformed the best configuration despite the use of twice more training data. This is presumably due to the presence of sentences of two very different natures, synthetic and original, in the same language. 
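The accuracies in Table 1 are obtained by retrieving, for each source word of the test set, its nearest target word under CSLS rather than plain cosine similarity. As a reference point, here is a minimal numpy sketch of CSLS retrieval over two embedding matrices; the neighbourhood size k = 10 and all names are illustrative assumptions, not the paper's exact evaluation setup.

```python
import numpy as np

def l2_normalize(M):
    # Row-normalize so that dot products are cosine similarities.
    return M / np.linalg.norm(M, axis=1, keepdims=True)

def csls_retrieve(src_emb, tgt_emb, k=10):
    """For each source row, return the index of the CSLS-best target row.

    CSLS(x, y) = 2*cos(x, y) - r_tgt(x) - r_src(y), where r_tgt(x) is the mean
    cosine of x with its k nearest target neighbours and r_src(y) the mean
    cosine of y with its k nearest source neighbours.
    """
    S = l2_normalize(src_emb) @ l2_normalize(tgt_emb).T   # cosine similarity matrix
    r_tgt = np.sort(S, axis=1)[:, -k:].mean(axis=1)       # shape (n_src,)
    r_src = np.sort(S, axis=0)[-k:, :].mean(axis=0)       # shape (n_tgt,)
    csls = 2.0 * S - r_tgt[:, None] - r_src[None, :]
    return csls.argmax(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    src, tgt = rng.normal(size=(50, 16)), rng.normal(size=(80, 16))
    print(csls_retrieve(src, tgt, k=10)[:5])
```

The penalty terms r_tgt and r_src down-weight "hub" words that are close to many queries, which is the usual motivation for preferring CSLS over plain nearest-neighbour retrieval.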
To evaluate the robustness of BIVEC, we compared the performance to those obtained with noisier synthetic data generated by the initial USMT (without refinement). As shown in Table 2, we observed comparable results, especially for en→de and en→fr, confirming that this approach is very robust to noisy training data. Although BIVEC and SENTID used a sub-part of the monolingual data used by VECMAP, their vocabulary size can be larger. This unintuitive observation comes from the use of USMT to generate synthetic data: L1 words not covered by the phrase table are directly copied in the translations. As a result, such L1 words are introduced into the L2 vocabulary even if they do not appear in the L2 monolingual data used to train VECMAP, artifically increasing the coverage ratio11 of the lexicon. This side-effect of our method is especially useful for instance for named entities that should be kept as is. Since such words in L1 and their copies in L2 cooccur frequently in synthetic data, their embeddings are similar. Obviously, this sideeffect is interesting only for close languages and may introduce numerous unwanted L1 words in the L2 space. See Appendix B for some more analyses. 3.3 Task 2: Machine translation In the phrase table induction for USMT, both the geometry of the space (when retrieving the kclosest translations for a given source phrase) and the embeddings themselves (when computing cosine similarity for the translation probability) play an important role. Better BWE should lead to bet11As a definition for coverage, we chose the one implemented in VECMAP: the percentage of source words in a test bilingual lexicon that are in the vocabulary of the source word embeddings and that are paired with at least one target word that is in the vocabulary of the target word embeddings. 3228 Method en→de en→fr en→id VECMAP 12.1 17.0 13.6 BIVEC 12.7 17.3 15.9 SENTID 12.8 17.3 15.8 Table 3: BLEU scores of USMT at step 0 with a phrase table induced using different BWE. ter phrase tables and consequently translations of better quality. We thus regard USMT as an extrinsic evaluation task for BWE. Table 3 shows BLEU scores for our USMT at step 0 on en-de Newstest2016, en-fr Newstest2014 of WMT, and en-id ALT (Riza et al., 2016)12 test sets. We observed from 0.3 (BIVEC, en→fr) to 2.5 (BIVEC, en→id) BLEU points of improvements over USMT using VECMAP. Again, BIVEC and SENTID performed similarly. However, note that here USMT is merely an evaluation task: the improvement observed at step 0 are practically useless for USMT, since we can often gain much larger improvements through refinement as described in Section 2.2. Consequently, we assume that perfoming more iterations, i.e., retraining BWE on synthetic parallel data generated by an USMT system initialized from unsupervised joint BWE, will not improve either translation quality or BWE quality. 3.4 Task 3: Monolingual word analogy In the literature, VECMAP and BIVEC BWE have been shown to perform as well as, or better than, word embeddings trained exclusively on monolingual data in monolingual tasks. Since we use significantly less and noisier data for training BIVEC than VECMAP, we assume that this observation may not hold in our configuration. We tested our assumption with the English word analogy task of Mikolov et al. (2013b) by comparing VECMAP and BIVEC English word embeddings, with several different sets of en-fr synthetic parallel data for training BIVEC. 
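Table 4 below reports accuracy on this analogy evaluation. For reference, here is a minimal sketch of the conventional 3CosAdd prediction rule ("a is to b as c is to ?"), excluding the three question words from the candidates; the toy vocabulary and vectors are fabricated for illustration and do not reflect the paper's vocabulary or filtering.

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def analogy_3cosadd(emb, vocab, a, b, c):
    """Answer 'a is to b as c is to ?' via argmax_w cos(w, b - a + c).

    emb maps word -> vector; vocab lists candidate words; the three question
    words are excluded from the candidates, as is standard for this task.
    """
    query = unit(emb[b]) - unit(emb[a]) + unit(emb[c])
    candidates = [w for w in vocab if w not in (a, b, c)]
    scores = [float(unit(emb[w]) @ unit(query)) for w in candidates]
    return candidates[int(np.argmax(scores))]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    words = ["king", "queen", "man", "woman", "apple", "banana"]
    emb = {w: rng.normal(size=16) for w in words}
    # Plant the expected offset so the toy example has a clear answer.
    emb["queen"] = unit(emb["king"]) - unit(emb["man"]) + unit(emb["woman"])
    print(analogy_3cosadd(emb, words, "man", "king", "woman"))  # -> "queen"
```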
As shown in Table 4, BIVEC led to significantly lower accuracy than VECMAP, especially for the configuration trained on synthetic English (generated from French) with a gap of 32.2 points. We also observed a lower accuracy when using original English, presumably due to the use of much smaller data than for training VECMAP. However, 12http://www2.nict.go.jp/astrec-att/ member/mutiyama/ALT/ Method English data Accuracy VECMAP 239M (en) 77.8 BIVEC 10M (en→fr) 65.7 10M (fr→en) 45.6 10M (fr→en) + 10M (en→fr) 62.3 fastText 239M (en) 79.1 10M (en→fr) 64.6 10M (fr→en) 45.1 10M (fr→en) + 10M (en→fr) 61.2 Table 4: Results on the English word analogy task using the English word embeddings. when training monolingual word embeddings using fastText on the same English data used for training BIVEC, we observed that fastText underperforms BIVEC. This confirms that BIVEC can take advantage of noisy but bilingual contexts to monolingually improve word embeddings. 4 Conclusion and future work We show in several cross-lingual NLP tasks that unsupervised joint BWE achieved better results than unsupervised mapped BWE. Our experiments also highlight the robustness of joint training that can take advantage of bilingual contexts even from very noisy synthetic parallel data. Since our approach works on top of unsupervised mapping for BWE and uses synthetic data generated by unsupervised MT, it will directly benefit from any future advances in these two types of techniques. Our approach has, however, a higher computational cost due to the need of generating synthetic parallel data, while generating more data would also improve the vocabulary coverage. As a future work, we would like to study, for training BWE, the impact of the use of synthetic parallel data generated by unsupervised NMT, or of a different nature, such as translation pairs extracted from monolingual corpora without supervision. Such translation pairs are, in general, more fluent but potentially much less accurate. Acknowledgments We would like to thank the reviewers for their useful comments and suggestions. A part of this work was conducted under the program “Promotion of Global Communications Plan: Research, Development, and Social Demonstration of Multilingual Speech Translation Technology” of the Ministry of Internal Affairs and Communications (MIC), Japan. 3229 References Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018a. A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 789–798, Melbourne, Australia. Association for Computational Linguistics. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018b. Unsupervised statistical machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3632–3642. Association for Computational Linguistics. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2019. An effective approach to unsupervised machine translation. CoRR, abs/1902.01313. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146. Chris Dyer, Victor Chahuneau, and Noah A. Smith. 2013. A simple, fast, and effective reparameterization of IBM Model 2. 
In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 644–648, Atlanta, Georgia. Association for Computational Linguistics. Goran Glavas, Robert Litschko, Sebastian Ruder, and Ivan Vuli´c. 2019. How to (properly) evaluate cross-lingual word embeddings: On strong baselines, comparative analyses, and some misconceptions. CoRR, abs/1902.00508. Kenneth Heafield, Ivan Pouzyrevsky, Jonathan H. Clark, and Philipp Koehn. 2013. Scalable modified Kneser-Ney language model estimation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 690–696, Sofia, Bulgaria. Guillaume Lample, Alexis Conneau, Marc’Aurelio Ranzato, Ludovic Denoyer, and Herv Jgou. 2018a. Word translation without parallel data. In International Conference on Learning Representations. Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018b. Phrase-based & neural unsupervised machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5039–5049. Association for Computational Linguistics. Omer Levy, Anders Søgaard, and Yoav Goldberg. 2017. A strong baseline for learning cross-lingual word embeddings from sentence alignments. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 765–774, Valencia, Spain. Association for Computational Linguistics. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Bilingual word representations with monolingual quality in mind. In Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing, pages 151–159, Denver, Colorado. Association for Computational Linguistics. Tomas Mikolov, Quoc V. Le, and Ilya Sutskever. 2013a. Exploiting similarities among languages for machine translation. CoRR, abs/1309.4168. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Proceedings of the 26th International Conference on Neural Information Processing Systems - Volume 2, NIPS’13, pages 3111–3119, USA. Curran Associates Inc. Hammam Riza, Michael Purwoadi, Gunarso, Teduh Uliniansyah, Aw Ai Ti, Sharifah Mahani Aljunied, Luong Chi Mai, Vu Tat Thang, Nguyen Phuong Thai, Vichet Chea, Rapid Sun, Sethserey Sam, Sopheap Seng, Khin Mar Soe, Khin Thandar Nwet, Masao Utiyama, and Chenchen Ding. 2016. Introduction of the Asian Language Treebank. In Proceedings of the 2016 Conference of the Oriental Chapter of International Committee for Coordination and Standardization of Speech Databases and Assessment Technique (O-COCOSDA), pages 1–6, Bali, Indonesia. Anders Søgaard, Sebastian Ruder, and Ivan Vuli´c. 2018. On the limitations of unsupervised bilingual dictionary induction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 778– 788, Melbourne, Australia. Association for Computational Linguistics. Shyam Upadhyay, Manaal Faruqui, Chris Dyer, and Dan Roth. 2016. Cross-lingual models of word embeddings: An empirical comparison. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1661–1670, Berlin, Germany. Association for Computational Linguistics. 
Data en→de en→fr Europarl 52.5 75.9 Synthetic Europarl 50.4 74.7 Table 5: Accuracy of BWE jointly trained on the original and on the synthetic version of Europarl in bilingual lexicon induction tasks. Presented results are for the same vocabulary. 3230 Method Data en→de de→en en→fr fr→en en→id id→en src tgt Cov. Acc. Cov. Acc. Cov. Acc. Cov. Acc. Cov. Acc. Cov. Acc. VECMAP all all 27.0 24.6 26.2 36.8 34.1 55.2 35.9 54.8 42.8 39.8 41.2 40.8 BIVEC 10M 0 24.3 60.6 22.9 70.0 32.4 74.3 33.1 73.5 35.6 73.3 32.6 74.6 SENTID 10M 0 24.3 60.5 22.9 70.5 32.4 75.0 33.1 73.8 35.6 72.9 32.6 74.3 BIVEC 0 10M 17.4 42.2 17.5 58.1 28.2 70.3 31.5 70.0 36.8 67.6 35.9 71.9 SENTID 0 10M 17.4 42.1 17.5 58.3 28.2 70.0 31.5 70.5 36.8 67.7 35.9 73.3 BIVEC 10M 10M 27.3 57.0 26.3 62.2 37.3 70.2 39.1 70.0 46.1 69.8 44.6 73.0 SENTID 10M 10M 27.3 57.3 26.3 66.7 37.3 70.8 39.1 70.5 46.1 70.1 44.6 74.6 Table 6: Results in BLI of VECMAP, BIVEC, and SENTID BWE, on the “full” Muse bilingual lexicons, without filtering the vocabulary. In other words, the compared BWE do not have the same vocabulary. The coverage is given by the VECMAP’s evaluation script. A Preliminary experiment To empirically test our assumption on the robustness of BIVEC to noisiness of training data, we performed a preliminary experiment. First, we trained a low-quality SMT systems for en→de an d en→fr on small parallel corpora.13 Then, a synthetic version of Europarl is compiled by coupling the English side of Europarl parallel corpora and its German and French translations generated by the SMT systems. Finally, with BIVEC, we obtained two types of BWE respectively from the original and the synthetic Europarl, and evaluated them in bilingual lexicon induction (BLI) tasks on the test sets used in Section 3.2. Results are presented in Table 5. Despite the poor performance of our SMT systems, BWE learned from the synthetic Europarl were only slightly less accurate for BLI than the BWE learned from the original Europarl. This result supports our assumption that BIVEC can exploit noisy synthetic data produced by SMT. B Bilingual lexicon induction: coverage statistics To show how the vocabulary coverage varies between BWE spaces, and to evaluate their impact on the accuracy in BLI, we report in Table 6 the coverage and the accuracy in BLI for all the BWE evaluated without restricting their vocabulary to be the same. Note that, because of the differences in coverage, accuracy of joint BWE cannot directly be compared with VECMAP BWE. 13We used the News Commentary corpora provided by WMT for en→de and en→fr to train SMT systems performing at 15.4 and 20.1 BLEU points on Newstest2016 en-de and Newstest2014 en-fr, respectively.
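The coverage figures reported in Tables 1, 2, and 6 follow the definition quoted in footnote 11. A minimal sketch of that computation is given below; representing the test lexicon as (source, target) pairs and the vocabularies as sets is an assumption made for illustration, not the evaluation script itself.

```python
def coverage_ratio(test_lexicon, src_vocab, tgt_vocab):
    """Share of source test words that are in the source vocabulary and are
    paired with at least one gold target word in the target vocabulary.

    test_lexicon: iterable of (source_word, target_word) gold pairs.
    """
    gold = {}
    for src, tgt in test_lexicon:
        gold.setdefault(src, set()).add(tgt)
    covered = [
        s for s, targets in gold.items()
        if s in src_vocab and targets & tgt_vocab
    ]
    return len(covered) / len(gold)

if __name__ == "__main__":
    lexicon = [("chat", "cat"), ("chat", "chat"), ("chien", "dog"), ("oiseau", "bird")]
    src_vocab = {"chat", "chien"}
    tgt_vocab = {"cat", "dog", "bird"}
    print(coverage_ratio(lexicon, src_vocab, tgt_vocab))  # 2 of 3 source words -> 0.666...
```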
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3231–3241 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3231 3232 embeddings (Nickel and Kiela, 2017, 2018) as follows: Roller et al. (2018) showed that Hearst patterns can provide important constraints for hypernymy extraction from distributional contexts. However, it is also well-known that Hearst patterns suffer from missing and incorrect extractions, as words must co-occur in exactly the right pattern to be detected successfully. For this reason, we first extract potential is-a relationships from a corpus using Hearst patterns and build a directed weighted graph from these extractions. We then embed this Hearst Graph in hyperbolic space to infer missing hypernymy relations and remove wrong extractions. By using hyperbolic space for the embedding, we can exploit the following important advantages: Consistency Hyperbolic entailment cones (Ganea et al., 2018) allow us to enforce transitivity of is-a-relations in the entire embedding space. This improves the taxonomic consistency of the model, as it enforces that (x, is-a, z) if (x, is-a, y) and (y, is-a, z). To improve optimization properties, we also propose a new method to compute hyperbolic entailment cones in the Lorentz model of hyperbolic space. Efficiency Hyperbolic space allows for very low dimensional embeddings of graphs with latent hierarchies and heavy-tailed degree distributions. For embedding large Hearst graphs – which exhibit both properties (e.g., see Figure 2) – this is an important advantage. In our experiments, we will show that hyperbolic embeddings allow us to decrease the embedding dimension by over an order of magnitude while outperforming SVD-based methods. Interpretability In hyperbolic embeddings, similarity is captured via distance while the generality of terms is captured through their norms. This makes it easy to interpret the embeddings with regard to their hierarchical structure and allows us to get additional insights, e.g., about a term’s degree of generality. Figure 1 shows an example of a two-dimensional embedding of the Hearst graph that we use in our experiments. Although we will use higher dimensionalities for our final embedding, the visualization serves as a good illustration of the hierarchical structure that is obtained through the embedding. 2 Related Work Hypernym detection Detecting is-a-relations from text is a long-standing task in natural language processing. A popular approach is to exploit highprecision lexico-syntactic patterns as first proposed by Hearst (1992). These patterns may be predefined or learned automatically (Snow et al., 2005; Shwartz et al., 2016; Nakashole et al., 2012). However, it is well known that such pattern-based methods suffer significantly from missing extractions as terms must occur in exactly the right configuration to be detected (Shwartz et al., 2016; Roller et al., 2018). Recent works improve coverage by leveraging search engines (Kozareva and Hovy, 2010) or by exploiting web-scale corpora (Seitner et al., 2016); but also come with precision trade-offs. To overcome the sparse extractions of patternbased methods, focus has recently shifted to distributional approaches which provide rich representations of lexical meaning. These methods alleviate the sparsity issue but also require specialized similarity measures to distinguish different lexical relationships. 
To date, most measures are inspired by the Distributional Inclusion Hypothesis (DIH; Geffet and Dagan 2005) which hypothesizes that for a subsumption relation (cat, is-a, mammal) the subordinate term (cat) should appear in a subset of the contexts in which the superior term (mammal) occurs. Unsupervised methods for hypernymy detection based on distributional approaches include WeedsPrec (Weeds et al., 2004), invCL (Lenci and Benotto, 2012), SLQS (Santus et al., 2014), and DIVE (Chang et al., 2018). Distributional representations that are based on positional or dependency-based contexts may also capture crude Hearst-pattern-like features (Levy et al., 2015; Roller and Erk, 2016). Shwartz et al. (2017) showed that such contexts plays an important role for the success of distributional methods. CamachoCollados et al. (2018) proposed a new shared task for hypernym retrieval from text corpora. Recently, Roller et al. (2018) performed a systematic study of unsupervised distributional and pattern-based approaches for hypernym detection. Their results showed that pattern-based methods are able to outperform DIH-based methods on several challenging hypernymy benchmarks. Key aspects to good performance were the extraction of patterns from large text corpora and using embedding methods to overcome the sparsity issue. Our work builds on these findings by replacing their 3233 1 2 3 4 5 0 1 2 3 4 5 Rank (log scale) Frequency (log scale) Figure 2: Frequency distribution of words appearing in the Hearst pattern corpus (on a log-log scale). embeddings with ones with a natural hierarchical structure. Taxonomy induction Although detecting hypernymy relationships is an important and difficult task, these systems alone do not produce rich taxonomic graph structures (Camacho-Collados, 2017), and complete taxonomy induction may be seen as a parallel and complementary task. Many works in this area consider a taxonomic graph as the starting point, and consider a variety of methods for growing or discovering areas of the graph. For example, Snow et al. (2006) train a classifier to predict the likelihood of an edge in WordNet, and suggest new undiscovered edges, while Kozareva and Hovy (2010) propose an algorithm which repeatedly crawls for new edges using a web search engine and an initial seed taxonomy. Cimiano et al. (2005) considered learning ontologies using Formal Concept Analysis. Similar works consider noisy graphs discovered from Hearst patterns, and provide algorithms for pruning edges until a strict hierarchy remains (Velardi et al., 2005; Kozareva and Hovy, 2010; Velardi et al., 2013). Maedche and Staab (2001) proposed a method to learn ontologies in a Semantic Web context. Embeddings Recently, works have proposed a variety of graph embedding techniques for representing and recovering hierarchical structure. Order-embeddings (Vendrov et al., 2016) represent text and images with embeddings where the ordering over individual dimensions forms a partially ordered set. Hyperbolic embeddings represent words in hyperbolic manifolds such as the Poincar´e ball and may be viewed as a continuous analogue to tree-like structures (Nickel and Kiela, 2017, 2018). Recently, Tifrea et al. (2018) also proposed an extension of GLOVE (Pennington et al., Pattern X which is a (example | class | kind | . . . ) of Y X (and | or) (any | some) other Y X which is called Y X is JJS (most)? Y X a special case of Y X is an Y that X is a !(member | part | given) Y !(features | properties) Y such as X1, X2, . . . 
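As a toy illustration of this construction, the sketch below matches two heavily simplified stand-ins for Hearst patterns ("Y such as X" and "X and other Y") with regular expressions over raw lowercase text and accumulates the extractions into the weight function w(u, v) of a directed graph. The actual pipeline lemmatizes, POS-tags, and matches noun phrases with CoreNLP, so the regexes and corpus here are purely illustrative.

```python
import re
from collections import Counter

# Simplified stand-ins for two Hearst patterns; each regex yields a pair (x, y)
# with the convention that x is-a y.
PATTERNS = [
    (re.compile(r"(\w+) such as (\w+)"), lambda m: (m.group(2), m.group(1))),   # "Y such as X"
    (re.compile(r"(\w+) and other (\w+)"), lambda m: (m.group(1), m.group(2))), # "X and other Y"
]

def build_hearst_graph(sentences):
    """Return a Counter mapping (hyponym, hypernym) -> extraction count w(u, v)."""
    w = Counter()
    for sent in sentences:
        sent = sent.lower()
        for regex, to_pair in PATTERNS:
            for match in regex.finditer(sent):
                w[to_pair(match)] += 1
    return w

if __name__ == "__main__":
    corpus = [
        "Animals such as cats are popular pets.",
        "He studies cats and other animals.",
        "Vehicles such as cars need fuel.",
    ]
    graph = build_hearst_graph(corpus)
    for (x, y), count in graph.items():
        print(f"({x}, is-a, {y}) extracted {count} time(s)")
```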
(Unlike | like) (most | all | any | other) Y, X Y including X1, X2, . . . Table 1: Hearst patterns used in this study. Patterns are lemmatized, but listed as inflected for clarity. 2014) to hyperbolic space. In addition, works have considered how distributional co-occurrences may be used to augment order-embeddings (Li et al., 2018) and hyperbolic embeddings (Dhingra et al., 2018). Further methods have focused on the often complex overlapping structure of word classes, and induced hierarchies using box-lattice structures (Vilnis et al., 2018) and Gaussian word embeddings (Athiwaratkun and Wilson, 2018). Compared to many of the purely graph-based works, these methods generally require supervision of hierarchical structure, and cannot learn taxonomies using only unstructured noisy data. 3 Methods In the following, we discuss our method for unsupervised learning of concept hierarchies. We first discuss the extraction and construction of the Hearst graph, followed by a description of the Hyperbolic Embeddings. 3.1 Hearst Graph The main idea introduced by Hearst (1992) is to exploit certain lexico-syntactic patterns to detect is-a relationships in natural language. For instance, patterns like “NPy such as NPx” or “NPx and other NPy” often indicate a hypernymy relationship (u, is-a, v). By treating unique noun phrases as nodes in a large, directed graph, we may construct a Hearst Graph using only unstructured text and very limited prior knowledge in the form of patterns. Table 1 lists the only patterns that we use in this work. Formally, let E = {(u, v)}N i=1 denote the set of is-a relationships that have been extracted from a text corpus. Furthermore, let w(u, v) denote how often we have extracted the relationship (u, is-a, v). We then represent the extracted patterns as a weighted directed graph G = (V, E, w) 3234 3235 3236 let Θ = {vi}M i=1 be the set of embeddings. To find an embedding that minimizes the overall energy, we then solve the optimization problem ˆΘ = arg min Θ∈Hn X u,v ∈V L(u, v) (4) where L(u, v) = ( E(u, v) if (u, v) ∈E max(0, γ −E(u, v)) otherwise is the max-margin loss as used in (Ganea et al., 2018; Vendrov et al., 2016). The goal of Equation (4) is to find a joint embedding of all terms that best explains the observed Hearst patterns. To solve Equation (4), we follow Nickel and Kiela (2018) and perform stochastic optimization via Riemannian SGD (RSGD; Bonnabel 2013). In RSGD, updates to the parameters v are computed via vt+1 = expvt(−η grad f(vt)) (5) where grad f(vt) denotes the Riemannian gradient and η denotes the learning rate. In Equation 5, the Riemannian gradient of f at v is computed via grad f(vt) = projvt g−1 ℓ∇f(v)  where ∇f(v) denotes the Euclidean gradient of f and where projv(x) = v + ⟨v, x⟩Lv g−1 ℓ(v) = diag([−1, 1, . . . , 1]) denote the projection from the ambient space Rn+1 onto the tangent space TvLn and the inverse of the metric tensor, respectively. Finally, the exponential map for Ln is computed via expv(x) = cosh(∥x∥L)v + sinh(∥x∥L) x ∥x∥L where ∥v∥L = p ⟨v, v⟩L and v ∈TxLn. As suggested by Nickel and Kiela (2018), we initialize the embeddings close to the origin of Ln by sampling from the uniform distribution U(−0.001, 0.001) and by setting v0 to p 1 + ||v′||2, what ensures that the sampled points are located on the surface of the hyperboloid. 
4 Experiments To evaluate the efficacy of our method, we evaluate on several commonly-used hypernymy benchmarks (as described in (Roller et al., 2018)) as well as in a reconstruction setting (as described in (Nickel and Kiela, 2017)). Following Roller et al. (2018), we compare to the following methods for unsupervised hypernymy detection: Pattern-Based Models Let E = {(x, y)}N i=1 be the set of Hearst patterns in our corpus, w(x, y) be the count of how many times (x, y) occurs in E, and W = P (x,y)∈E w(x, y). We then consider the following pattern-based methods: Count Model (p) This model simply outputs the count, or equivalently, the extraction probabilities of Hearst patterns, i.e., p(x, y) = w(x, y) W PPMI Model (ppmi) To correct for skewed occurrence probabilities, the PPMI model predicts hypernymy relations based on the Positive Pointwise Mutual Information over the Hearst pattern corpus. Let p−(x) = Σ(x,y)∈Ew(x, y)/W and p+(x) = Σ(y,x)∈Ew(y, x)/W, then: ppmi(x, y) = max  0, log p(x, y) p−(x)p+(y)  SVD Count (sp) To account for missing relations, we also compare against low-rank embeddings of the Hearst corpus using Singular Value Decomposition (SVD). Specifically, let X ∈RMxM, such that Xij = w(i, j)/W and UΣV ⊤be the singular value decomposition of X, then: sp(x, y) = u⊤ x Σrvy SVD PPMI (spmi) We also evaluate against the SVD of the PPMI matrix, which is identical to sp(i, j), with the exception that Xij = ppmi(i, j), instead of p(i, j). Roller et al. (2018) showed that this method provides state-of-the-art results for unsupervised hypernymy detection. Hyperbolic Embeddings (HypeCones) We embed the Hearst graph into hyperbolic space as described in Section 3.2. At evaluation time, we predict the likelihood using the model energy E(u, v). Distributional Models The distributional models in our evaluation are based on the DIH, i.e., the idea that contexts in which a narrow term x (ex: cat) may appear should be a subset of the contexts in which a broader term y (ex: animal) may appear. 3237 Detection (AP) Direction (Acc.) Graded (ρ) BLESS EVAL LEDS SHWARTZ WBLESS BLESS WBLESS BIBLESS HYPERLEX Cosine .12 .29 .71 .31 .53 .00 .54 .52 .14 WeedsPrec .19 .39 .87 .43 .68 .63 .59 .45 .43 invCL .18 .37 .89 .38 .66 .64 .60 .47 .43 SLQS .15 .35 .60 .38 .69 .75 .67 .51 .16 p(x, y) .49 .38 .71 .29 .74 .46 .69 .62 .62 ppmi(x, y) .45 .36 .70 .28 .72 .46 .68 .61 .60 sp(x, y) .66 .45 .81 .41 .91 .96 .84 .80 .51 spmi(x, y) .76 .48 .84 .44 .96 .96 .87 .85 .53 HypeCones .81 .50 .89 .50 .98 .94 .90 .87 .59 Table 2: Experimental results comparing distributional and pattern-based methods in all settings. WeedsPrec The first distributional model we consider is WeedsPrec (Weeds et al., 2004), which captures the features of x which are included in the set of more general term’s features, y: WeedsPrec(x, y) = Pn i=1 xi · ✶yi>0 Pn i=1 xi invCL Lenci and Benotto (2012), introduce the idea of distributional exclusion by also measuring the degree to which the broader term contains contexts not used by the narrower term. The degree of inclusion is denoted as: CL(x, y) = Pn i=1 min(xi, yi) Pn i=1 xi To measure the inclusion of x and y and the noninclusion of y in x, invCL is then computed as invCL(x, y) = p CL(x, y) · (1 −CL(y, x)) SLQS The SLQS model is based on the informativeness hypothesis (Santus et al., 2014; Shwartz et al., 2017), i.e., the idea that general words appear mostly in uninformative contexts, as measured by entropy. 
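The definition of SLQS is completed just below; for the two inclusion-based measures already defined, WeedsPrec and invCL, a minimal numpy sketch is given here. The toy context-count vectors are fabricated for illustration only.

```python
import numpy as np

def weeds_prec(x, y):
    """WeedsPrec(x, y): share of x's context mass falling on contexts of y."""
    return np.sum(x * (y > 0)) / np.sum(x)

def cl(x, y):
    """CL(x, y): degree of inclusion of x's contexts in y's contexts."""
    return np.sum(np.minimum(x, y)) / np.sum(x)

def inv_cl(x, y):
    """invCL(x, y) = sqrt(CL(x, y) * (1 - CL(y, x)))."""
    return np.sqrt(cl(x, y) * (1.0 - cl(y, x)))

if __name__ == "__main__":
    # Toy context-count vectors over six contexts: 'cat' vs. the broader 'animal'.
    cat    = np.array([4.0, 3.0, 0.0, 2.0, 0.0, 0.0])
    animal = np.array([2.0, 1.0, 3.0, 2.0, 4.0, 1.0])
    print("WeedsPrec(cat, animal) =", weeds_prec(cat, animal))  # 1.0 here
    print("invCL(cat, animal)     =", inv_cl(cat, animal))      # higher than the reverse
    print("invCL(animal, cat)     =", inv_cl(animal, cat))
```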
SLQS depends on the median entropy of a term’s top k contexts: Ex = mediank i=1[H(ci)] where H(ci) is the Shannon entropy of context ci across all terms. SLQS is then defined as: SLQS(x, y) = 1 −Ex/Ey Corpora and Preprocessing We construct our Hearst graph using the same data, patterns, and procedure as described in (Roller et al., 2018): Hearst patterns are extracted from the concatenation of GigaWord and Wikipedia. The corpus is tokenized, lemmatized, and POS-tagged using CoreNLP 3.8.0 (Manning et al., 2014). The full set of Hearst patterns is provided in Table 1. These include prototypical Hearst patterns, like “animals [such as] big cats”, as well as broader patterns like “New Year [is the most important] holiday.” Noun phrases were allowed to match limited modifiers, and produced additional hits for the head of the noun phrase. The final corpus contains circa 4.5M matched pairs, 431K unique pairs, and 243K unique terms. Hypernymy Tasks We consider three distinct subtasks for evaluating the performance of these models for hypernymy prediction: • Detection: Given a pair of words (u, v), determine if v is a hypernym of u. • Direction: Given a pair (u, v), determine if u is more general than v or vise versa. • Graded Entailment: Given a pair of words (u, v), determine the degree to which u is a v. For detection, we evaluate all models on five commonly-used benchmark datasets: BLESS (Baroni and Lenci, 2011), LEDS (Baroni et al., 2012), EVAL (Santus et al., 2015), SHWARTZ (Shwartz et al., 2016), and WBLESS (Weeds et al., 2014), In addition to positive hypernymy relations, these datasets include negative samples in the form of random pairs, co-hyponymy, antonymy, meronymy, and adjectival relations. For directionality and graded entailment, we also use BIBLESS (Kiela et al., 2015) and HYPERLEX (Vulic et al., 2016). We refer to Roller et al. (2018) for an in-depth discussion of these datasets. For all models, we use the identical text corpus and tune hyperparameters on the validation sets. 3238 Animals Plants Vehicles All Missing Transitive All Missing Transitive All Missing Transitive p(x, y) 350.18 512.28 455.27 271.38 393.98 363.73 43.12 82.57 66.10 ppmi(x, y) 350.47 512.28 455.38 271.40 393.98 363.76 43.20 82.57 66.16 sp(x, y) 56.56 77.10 11.22 43.40 64.70 17.88 9.19 26.98 14.84 spmi(x, y) 58.40 102.56 12.37 40.61 71.81 14.80 9.62 17.96 3.03 HypeCones 25.33 37.60 4.37 17.00 31.53 6.36 5.12 10.28 2.74 ∆% 56.6 51.2 61.1 58.1 51.3 57.0 44.3 42.8 9.6 Table 3: Reconstruction of Animals, Plants, and Vehicles subtrees in WORDNET. Table 2 shows the results for all tasks. It can be seen that our proposed approach provides substantial gains on the detection and directionality tasks and, overall, achieves state of the art results on seven of nine benchmarks. In addition, our method clearly outperforms other embeddingbased approaches on HYPERLEX, although it can not fully match the count-based methods. As Roller et al. (2018) noted, this might be an artifact of the evaluation metric, as count-based methods benefit from their sparse-predictions in this setting. Our method achieves also strong performance when compared to Poincar´e GLOVE on the task of hypernymy prediction. While Tifrea et al. (2018) report Spearman’s rho ρ = 0.421 on HYPERLEX and accuracy ACC = 0.790 on WBLESS, our method achieves ρ = 0.59 (HYPERLEX) and ACC = 0.909 (WBLESS). This illustrates the importance of the distributional constraints that are provided by Hearst patterns. An additional benefit is the efficiency of our embeddings. 
For all tasks, we have used a 20dimensional embedding for HYPECONES, while the best results for SVD-based methods have been achieved with 300 dimensions. This reduction in parameters by over an order of magnitude clearly highlights the efficiency of hyperbolic embeddings for representing hierarchical structures. Reconstruction In the following, we compare embedding and pattern-based methods on the task of reconstructing an entire subtree of WORDNET, i.e., the animals, plants, and vehicles taxonomies, as proposed by Kozareva and Hovy (2010). In addition to predicting the existence of single hypernymy relations, this allows us to evaluate the performance of these models for inferring full taxonomies and to perform an ablation for the prediction of missing and transitive relations. We follow previous work (Bordes et al., 2013; Nickel and Kiela, 2017) and report for each observed relation (u, v) in WORDNET, its score ranked against the score of the ground-truth negative edges. In Table 3, All refers to the ranking of all edges in the subtree, Missing to edges that are not included in the Hearst graph G, Transitive to missing transitive edges in G (i.e. for all edges {(x, z) : (x, y), (y, z) ∈E ∧(x, z) /∈E}). It can be seen that our method clearly outperforms the SVD and count-based models with a relative improvement of typically over 40% over the best non-hyperbolic model. Furthermore, our ablation shows that HYPECONES improves the consistency of the embedding due to its transitivity property. For instance, in our Hearst Graph the relation (male horse, is-a, equine) is missing. However, since we correctly model that (male horse, is-a, horse) and (horse, is-a, equine), by transitivity, we also infer (male horse, is-a, equine), which SVD fails to do. 5 Conclusion In this work, we have proposed a new approach for inferring concept hierarchies from large text corpora. For this purpose, we combine Hearst patterns with hyperbolic embeddings which allows us to set appropriate constraints on the distributional contexts and to improve the consistency in the embedding space. By computing a joint embedding of all terms that best explains the extracted Hearst patterns, we can then exploit these properties for improved hypernymy prediction. The natural hierarchical structure of hyperbolic space allows us also to learn very efficient embeddings that reduce the required dimensionality substantially over SVD-based methods. To improve optimization, we have furthermore proposed a new method to compute entailment cones in the Lorentz model of hyperbolic space. Experimentally, we show that our embeddings achieve state-of-the-art performance on a variety of commonly-used hypernymy benchmarks. 3239 References Rolf Apweiler, Amos Bairoch, Cathy H Wu, Winona C Barker, Brigitte Boeckmann, Serenella Ferro, Elisabeth Gasteiger, Hongzhan Huang, Rodrigo Lopez, Michele Magrane, et al. 2004. Uniprot: the universal protein knowledgebase. Nucleic acids research, 32(suppl 1):D115–D119. Michael Ashburner, Catherine A Ball, Judith A Blake, David Botstein, Heather Butler, J Michael Cherry, Allan P Davis, Kara Dolinski, Selina S Dwight, Janan T Eppig, et al. 2000. Gene ontology: tool for the unification of biology. Nature genetics, 25(1):25. Ben Athiwaratkun and Andrew Gordon Wilson. 2018. Hierarchical density order embeddings. In Proceedings of the International Conference on Learning Representations. S¨oren Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary Ives. 2007. 
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3242–3252 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3242 Is Word Segmentation Necessary for Deep Learning of Chinese Representations? Yuxian Meng∗♠, Xiaoya Li∗♠, Xiaofei Sun♠, Qinghong Han♠ Arianna Yuan♠,♥, and Jiwei Li♠,♣ ♣School of Information, Renmin University of China ♥Computer Science Department, Stanford University ♠Shannon.AI { yuxian meng, xiaoya li, xiaofei sun, qinghong han , jiwei li}@shannonai.com [email protected] Abstract Segmenting a chunk of text into words is usually the first step of processing Chinese text, but its necessity has rarely been explored. In this paper, we ask the fundamental question of whether Chinese word segmentation (CWS) is necessary for deep learning-based Chinese Natural Language Processing. We benchmark neural word-based models which rely on word segmentation against neural char-based models which do not involve word segmentation in four end-to-end NLP benchmark tasks: language modeling, machine translation, sentence matching/paraphrase and text classification. Through direct comparisons between these two types of models, we find that charbased models consistently outperform wordbased models. Based on these observations, we conduct comprehensive experiments to study why wordbased models underperform char-based models in these deep learning-based NLP tasks. We show that it is because word-based models are more vulnerable to data sparsity and the presence of out-of-vocabulary (OOV) words, and thus more prone to overfitting. We hope this paper could encourage researchers in the community to rethink the necessity of word segmentation in deep learning-based Chinese Natural Language Processing. 1 1 Introduction There is a key difference between English (or more broadly, languages that use some form of the Latin alphabet) and Chinese (or other languages that do not have obvious word delimiters such as Korean and Japanese) : words in English can be easily recognized since the space token is a good approximation of a word divider, whereas no word divider 1Yuxian Meng and Xiaoya Li contributed equally to this paper. is present between words in written Chinese sentences. This gives rise to the task of Chinese Word Segmentation (CWS) (Zhang et al., 2003; Peng et al., 2004; Huang and Zhao, 2007; Zhao et al., 2006; Zheng et al., 2013; Zhou et al., 2017; Yang et al., 2017, 2018). In the context of deep learning, the segmented words are usually treated as the basic units for operations (we call these models the word-based models for the rest of this paper). Each segmented word is associated with a fixed-length vector representation, which will be processed by deep learning models in the same way as how English words are processed. Word-based models come with a few fundamental disadvantages, as will be discussed below. Firstly, word data sparsity inevitably leads to overfitting and the ubiquity of OOV words limits the model’s learning capacity. Particularly, Zipf’s law applies to most languages including Chinese. Frequencies of many Chinese words are extremely small, making the model impossible to fully learn their semantics. Let us take the widely used Chinese Treebank dataset (CTB) as an example (Xia, 2000). Using Jieba,2 the most widely-used opensourced Chinese word segmentation system, to segment the CTB, we end up with a dataset consisting of 615,194 words with 50,266 distinct words. 
Among the 50,266 distinct words, 24,458 words appear only once, amounting to 48.7% of the total vocabulary, yet they only take up 4.0% of the entire corpus. If we increase the frequency bar to 4, we get 38,889 words appearing less or equal to 4 times, which contribute to 77.4% of the total vocabulary but only 10.1% of the entire corpus. Statistics are given in Table 1. This shows that the word-based data is very sparse. The data sparsity issue is likely to induce overfitting, since more words means a larger number of parameters. In addition, since it 2https://github.com/fxsjy/jieba 3243 bar # distinct prop of vocab prop of corpus ∞ 50,266 100% 100% 4 38,889 77.4% 10.1% 1 24,458 48.7% 4.0% Table 1: Word statistics of Chinese TreeBank. Corpora Yao Ming reaches the final CTB 姚明 进入 总决赛 PKU 姚 明 进入 总 决赛 Table 2: CTB and PKU have different segmentation criteria (Chen et al., 2017c). is unrealistic to maintain a huge word-vector table, many words are treated as OOVs, which may further constrain the model’s learning capability. Secondly, the state-of-the-art word segmentation performance is far from perfect, the errors of which would bias downstream NLP tasks. Particularly, CWS is a relatively hard and complicated task, primarily because word boundary of Chinese words is usually quite vague. As discussed in Chen et al. (2017c), different linguistic perspectives have different criteria for CWS (Chen et al., 2017c). As shown in Table 1, in the two most widely adopted CWS datasets PKU (Yu et al., 2001) and CTB (Xia, 2000), the same sentence is segmented differently. Thirdly, if we ask the fundamental problem of how much benefit word segmentation may provide, it is all about how much additional semantic information is present in a labeled CWS dataset. After all, the fundamental difference between wordbased models and char-based models is whether teaching signals from the CWS labeled dataset are utilized. Unfortunately, the answer to this question remains unclear. For example. in machine translation we usually have millions of training examples. The labeled CWS dataset is relatively small (68k sentences for CTB and 21k for PKU), and the domain is relatively narrow. It is not clear that CWS dataset is sure to introduce a performance boost. Before neural network models became popular, there were discussions on whether CWS is necessary and how much improvement it can bring about. In information retrieval(IR), Foo and Li (2004) discussed CWS’s effect on IR systems and revealed that segmentation approach has an effect on IR effectiveness as long as the SAME segmentation method is used for query and document, and that CWS does not always work better than models without segmentation. In cases where CWS does lead to better performance, the gap between word-based models and char-based models can be closed if bigrams of characters are used in charbased models. In the phrase-based machine translation, Xu et al. (2004) reported that CWS only showed non-significant improvements over models without word segmentation. Zhao et al. (2013) found that segmentation itself does not guarantee better MT performance and it is not key to MT improvement. For text classification, Liu et al. (2007) compared a na¨ıve character bigram model with word-based models, and concluded that CWS is not necessary for text classification. Outside the literature of computational linguistics, there have been discussions in the field of cognitive science. 
Based on eye movement data, Tsai and McConkie (2003) found that fixations of Chinese readers do not land more frequently on the centers of Chinese words, suggesting that characters, rather than words, should be the basic units of Chinese reading comprehension. Consistent with this view, Bai et al. (2008) found that Chinese readers read unspaced text as fast as word spaced text. In this paper, we ask the fundamental question of whether word segmentation is necessary for deep learning-based Chinese natural language processing. We first benchmark word-based models against char-based models (those do not involve Chinese word segmentation). We run apples-toapples comparison between these two types of models on four NLP tasks: language modeling, document classification, machine translation and sentence matching. We observe that char-based models consistently outperform word-based model. We also compare char-based models with wordchar hybrid models (Yin et al., 2016; Dong et al., 2016; Yu et al., 2017), and observe that char-based models perform better or at least as good as the hybrid model, indicating that char-based models already encode sufficient semantic information. It is also crucial to understand the inadequacy of word-based models. To this end, we perform comprehensive analyses on the behavior of wordbased models and char-based models. We identify the major factor contributing to the disadvantage of word-based models, i.e., data sparsity, which in turn leads to overfitting, prevelance of OOV words, and weak domain transfer ability. Instead of making a conclusive (and arrogant) argument that Chinese word segmentation is not necessary, we hope this paper could foster more discussions and explorations on the necessity of the long-existing task of CWS in the community, alongside with its underlying mechanisms. 3244 2 Related Work Since the First International Chinese Word Segmentation Bakeoff in 2003 (Sproat and Emerson, 2003) , a lot of effort has been made on Chinese word segmentation. Most of the models in the early years are based on a dictionary, which is pre-defined and thus independent of the Chinese text to be segmented. The simplest but remarkably robust model is the maximum matching model (Jurafsky and Martin, 2014). The simplest version of it is the left-to-right maximum matching model (maxmatch). Starting with the beginning of a string, maxmatch chooses the longest word in the dictionary that matches the current position, and advances to the end of the matched word in the string. Different models are proposed based on different segmentation criteria (Huang and Zhao, 2007). With the rise of statistical machine learning methods, the task of CWS is formalized as a tagging task, i.e., assigning a BEMS label to each character of a string that indicates whether the character is the start of a word(Begin), the end of a word(End), inside a word (Middel) or a single word(Single). Traditional sequence labeling models such as HMM, MEMM and CRF are widely used (Lafferty et al., 2001; Peng et al., 2004; Zhao et al., 2006; Carpenter, 2006). . Neural CWS Models such as RNNs, LSTMs (Hochreiter and Schmidhuber, 1997) and CNNs (Krizhevsky et al., 2012; Kim, 2014) not only provide a more flexible way to incorporate context semantics into tagging models but also relieve researchers from the massive work of feature engineering. 
Neural models for the CWS task have become very popular these years (Chen et al., 2015b,a; Cai and Zhao, 2016; Yao and Huang, 2016; Chen et al., 2017b; Zhang et al., 2016; Chen et al., 2017c; Yang et al., 2017; Cai et al., 2017; Zhang et al., 2017). Neural representations can be used either as a set of CRF features or as input to the decision layer. 3 Experimental Results In this section, we evaluate the effect of word segmentation in deep learning-based Chinese NLP in four tasks, language modeling, machine translation, text classification and sentence matching/paraphrase. To enforce apples-to-apples comparison, for both the word-based model and the char-based model, we use grid search to tune all model dimension ppl word 512 199.9 char 512 193.0 word 2048 182.1 char 2048 170.9 hybrid (word+char) 1024+1024 175.7 hybrid (word+char) 2048+1024 177.1 hybrid (word+char) 2048+2048 176.2 hybrid (char only) 2048 171.6 Table 3: Language modeling perplexities in different models. important hyper-parameters such as learning rate, batch size, dropout rate, etc. 3.1 Language Modeling We evaluate the two types of models on Chinese Tree-Bank 6.0 (CTB6). We followed the standard protocol, by which the dataset was split into 80%, 10%, 10% for training, validation and test. The task is formalized as predicting the upcoming word given previous context representations. The text is segmented using Jieba.3 An upcoming word is predicted given the previous context representation. For different settings, context representations are obtained using the char-based model and the wordbased model. LSTMs are used to encode characters and words. Results are given in Table 3. In both settings, the char-based model significantly outperforms the word-based model. In addition to Jieba, we also used the Stanford CWS package (Monroe et al., 2014) and the LTP package (Che et al., 2010), which resulted in similar findings. It is also interesting to see results from the hybrid model (Yin et al., 2016; Dong et al., 2016; Yu et al., 2017), which associates each word with a representation and each char with a representation. A word representation is obtained by combining the vector of its constituent word and vectors of the remaining characters. Since a Chinese word can contain an arbitrary number of characters, CNNs are applied to the combination of characters vectors (Kim et al., 2016) to keep the dimensionality of the output representation invariant. We use hybrid (word+char) to denote the standard hybrid model that uses both char vectors and word vectors. For comparing purposes, we also implement a pseudo-hybrid model, denoted by hybrid (char only), in which we do use a word segmentor to segment the texts, but word representations 3https://github.com/fxsjy/jieba 3245 are obtained only using embeddings of their constituent characters. We tune hyper-parameters such as vector dimensionality, learning rate and batch size for all models. Results are given in Table 3. As can be seen, the char-based model not only outperforms the word-based model, but also the hybrid (word+char) model by a large margin. The hybrid (word+char) model outperforms the word-based model. This means that characters already encode all the semantic information needed and adding word embeddings would backfire. The hybrid (char only) model performs similarly to the char-based model, suggesting that word segmentation does not provide any additional information. 
It outperforms the word-based model, which can be explained by that the hybrid (char only) model computes word representations only based on characters, and thus do not suffer from the data sparsity issue, OOV issue and the overfitting issue of the word-based model. In conclusion, for the language modeling task on CTB, word segmentation does not provide any additional performance boost, and including word embeddings worsen the result. 3.2 Machine Translation In our experiments on machine translation, we use the standard Ch-En setting. The training set consists of 1.25M sentence pairs extracted from the LDC corpora.4 The validation set is from NIST 2002 and the models are evaluated on NIST 2003, 2004, 2005, 2006 and 2008. We followed exactly the common setup in Ma et al. (2018); Chen et al. (2017a); Li et al. (2017); Zhang et al. (2018), which use top 30,000 English words and 27,500 Chinese words. For the char-based model, vocab size is set to 4,500. We report results in both the Ch-En and the En-Ch settings. Regarding the implementation, we compare char-based models with word-based models under the standard framework of SEQ2SEQ +attention (Sutskever et al., 2014; Luong et al., 2015). The current state-of-the-art model is from Ma et al. (2018), which uses both the sentences (seq2seq) and the bag-of-words as targets in the training stage. We simply change the word-level encoding in Ma et al. (2018) to char-level encoding. For En-Ch translation, we use the same dataset to train and test both models. As in Ma et al. (2018), the dimensionality for word vectors and char vectors is set to 512. 4LDC2002E18, LDC2003E07, LDC2003E14, Hansards portion of LDC2004T07, LDC2004T08 and LDC2005T06. Results for Ch-En are shown in Table 4. As can be seen, for the vanilla SEQ2SEQ +attention model, the char-based model outperforms the word-based model across all datasets, yielding an average performance boost of +0.83. The same pattern applies to the bag-of-words framework in Ma et al. (2018). When changing the word-based model to the charbased model, we are able to obtain a performance boost of +0.63. As far as we are concerned, this is the best result on this 1.25M Ch-En dataset. Results for En-Ch are presented in Table 5. As can be seen, the char-based model outperforms the word-based model by a huge margin (+3.13), and this margin is greater than the improvement in the Ch-En translation task. This is because in Ch-En translation, the difference between word-based and char-based models is only present in the source encoding stage, whereas in En-Ch translation it is present in both the source encoding and the target decoding stage. Another major reason that contributes to the inferior performance of the wordbased model is the UNK word at decoding time, We also implemented the BPE subword model (Sennrich et al., 2016b,a) on the Chinese target side. The BPE model achieves a performance of 41.44 for the Seq2Seq+attn setting and 44.35 for bag-ofwords, significantly outperforming the word-based model, but still underperforming the char-based model by about 0.8-0.9 in BLEU. We conclude that for Chinese, generating characters has the advantage over generating words in deep learning decoding. 3.3 Sentence Matching/Paraphrase There are two Chinese datasets similar to the Stanford Natural Language Inference (SNLI) Corpus (Bowman et al., 2015): BQ and LCQMC, in which we need to assign a label to a pair of sentences depending on whether they share similar meanings. 
For the BQ dataset (Chen et al., 2018), it contains 120,000 Chinese sentence pairs, and each pair is associated with a label indicating whether the two sentences are of equivalent semantic meanings. The dataset is deliberately constructed so that sentences in some pairs may have significant word overlap but complete different meanings, while others are the other way around. For LCQMC (Liu et al., 2018), it aims at identifying whether two sentences have the same intention. This task is similar to but not exactly the same as the paraphrase detection task in BQ: two sentences can have different meanings but share the same intention. For exam3246 TestSet Mixed RNN Bi-Tree-LSTM PKI Seq2Seq Seq2Seq Seq2Seq (word) Seq2Seq (char) +Attn (word) +Attn (char) +Attn+BOW +Attn+BOW MT-02 36.57 36.10 39.77 35.67 36.82 (+1.15) 37.70 40.14 (+0.37) MT-03 34.90 35.64 33.64 35.30 36.27 (+0.97) 38.91 40.29 (+1.38) MT-04 38.60 36.63 36.48 37.23 37.93 (+0.70) 40.02 40.45 (+0.43) MT-05 35.50 34.35 33.08 33.54 34.69 (+1.15) 36.82 36.96 (+0.14) MT-06 35.60 30.57 32.90 35.04 35.22 (+0.18) 35.93 36.79 (+0.86) MT-08 – – 24.63 26.89 27.27 (+0.38) 27.61 28.23 (+0.62) Average – – 32.51 33.94 34.77 (+0.83) 36.51 37.14 (+0.63) Table 4: Results of different models on the Ch-En machine translation task. Results of Mixed RNN (Li et al., 2017), Bi-Tree-LSTM (Chen et al., 2017a) and PKI (Zhang et al., 2018) are copied from the original papers. TestSet Seq2Seq Seq2Seq Seq2Seq Seq2Seq (char) +Attn (word) +Attn (char) +Attn+BOW +Attn+BOW MT-02 42.57 44.09 (+1.52) 43.42 46.78 (+3.36) MT-03 40.88 44.57 (+3.69) 43.92 47.44 (+3.52) MT-04 40.98 44.73 (+3.75) 43.35 47.29 (+3.94) MT-05 40.87 42.50 (+1.63) 42.63 44.73 (+2.10) MT-06 39.33 42.88 (+3.55) 43.31 46.66 (+3.35) MT-08 33.52 35.36 (+1.84) 35.65 38.12 (+2.47) Average 39.69 42.36 (+2.67) 42.04 45.17 (+3.13) Table 5: Results on the En-Ch machine translation task. ple, the meanings of ”My phone is lost” and ”I need a new phone” are different, but their intentions are the same: buying a new phone. Each pair of sentences in the BQ and the LCQMC dataset is associated with a binary label indicating whether the two sentences share the same intention, and the task can be formalized as predicting this binary label. To predict correct labels, a model needs to handle the semantics of the subunits of a sentence, which makes the task very appropriate for examining the capability of semantic models. We compare char-based models with word-based models. For the word-based models, texts are segmented using Jieba. The SOTA results on these two datasets is achieved by the bilateral multiperspective matching model (BiMPM) (Wang et al., 2017). We use the standard settings proposed by BiMPM, i.e. 200d word/char embeddings, which are randomly initialized. Results are shown in Table 6. As can be seen, the char-based model significantly outperforms the word-based model by a huge margin, +1.34 on the LCQMC dataset and +2.90 on the BQ set. For this paraphrase detection task, the model needs to handle the interactions between sub-units of a sentence. We conclude that the char-based model is significantly better in this respect. 3.4 Text Classification For text classification, we use the currently widely used benchmarks including: • ChinaNews: Chinese news articles split into 7 news categories. • Ifeng: First paragraphs of Chinese news articles from 2006-2016. The dataset consists of 5 news categories; • JD Full: product reviews in Chinese crawled from JD.com. 
The reviews are used to predict customers’ ratings (1 to 5 stars), making the task a five-class classification problem. • JD binary: the same product reviews from JD.com. We label 1, 2-star reviews as “negative reviews” and 4 and 5-star reviews as “positive reviews” (3-star reviews are ignored), making the task a binary-classification problem. • Dianping: Chinese restaurant reviews crawled from the online review website Dazhong Dianping (similar to Yelp). We collapse the 1, 2 and 3-star reviews to “negative reviews” and 4 and 5-star reviews to “positive reviews”. The datasets were first introduced in Zhang and LeCun (2017). We trained the word-based version and the char-based version of bi-directional LSTM models to solve this task. Results are shown in Table 7. As can be seen, the only dataset that the char-based model underperforms the word-based model is the chinanews dataset, but the difference is quite small (0.05). On all the other datasets, the char-based model significantly outperforms the word-based model. Domain Adaptation Ability (Daum´e III, 2007; 3247 Dataset description char valid word valid char test word test LCQMC 238.7K/8.8K/12.5K 84.70 83.48 84.43 (+1.34) 83.09 BQ 100K/10K/10K 82.59 79.63 82.19 (+2.90) 79.29 Table 6: Results on the LCQMC and BQ corpus. Dataset description char valid word valid char test word test chinanews 1260K/140K/112K 91.81 91.82 91.80 91.85 (+0.05) dianping 1800K/200K/500K 78.80 78.47 78.76 (+0.36) 78.40 ifeng 720K/80K/50K 86.04 84.89 85.95 (+1.09) 84.86 jd binary 3600K/400K/360K 92.07 91.82 92.05 (+0.16) 91.89 jd full 2700K/300K/250K 54.29 53.60 54.18 (+0.81) 53.37 Table 7: Results on the validation and the test set for text classification. train dianping test jd model acc proportion of sen containing OOV word-based 81.28% 11.79% char-based 83.33% 0.56% train jd test dianping model acc proportion of sen containing OOV word-based 67.32% 7.10% char-based 67.93% 46.85% Table 8: Domain adaptation of the word-based model and the char-based model Jiang, 2008; Zhuang et al., 2010) refers to the ability of extending a model learned from one data distribution (the source domain) for a different (but related) data distribution (the target domain). Because of the data sparsity issue, we hypothesize that char-based models have greater domain adaptation ability than word-based models. We test our hypothesis on different sentiment analysis datasets. We train the word-based model and the char-based model on Dianping (2M restaurant reviews) and test the two models on jd binary (0.25M product reviews), as denoted by train dianping test jd. We also train models on jd binary and test them on Dianping, as denoted by train jd test dianping). Results are given in Table 8. As expected, the char-based model has more domain adaptation ability and performs better than the word-based model on both settings. The OOV issue is especially serious for the word-based model. In the train dianping test jd setting, 11.79% of the test sentences contain OOVs for the word-based model, whereas this number is only 0.56% for the char-based model. Similar observation holds for the train jd test dianping setting. 4 Analysis In this section, we aim at understanding why wordbased models underperform char-based models. We acknowledge that it is impossible to thoroughly inspect the inner mechanism of word-based models, but we try our best to identify major factors explaining the inferiority of word-based models. 
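Before turning to the individual factors, note that the vocabulary and OOV statistics discussed above (e.g., the frequency-bar statistics in Table 1 and the OOV-sentence proportions in Table 8) are easy to reproduce in spirit. The sketch below is a minimal, self-contained illustration with toy sentences — not the paper's evaluation code — showing how a frequency-thresholded vocabulary and the proportion of test sentences containing OOVs can be computed under word-level versus character-level tokenization (in the paper's setting, word tokens come from a segmenter such as Jieba, and character tokens are simply the characters of each sentence).

```python
from collections import Counter

def build_vocab(token_lists, min_freq=1):
    """Keep tokens appearing at least min_freq times; everything else is treated as OOV."""
    counts = Counter(tok for sent in token_lists for tok in sent)
    return {tok for tok, c in counts.items() if c >= min_freq}

def oov_sentence_rate(vocab, token_lists):
    """Proportion of sentences containing at least one out-of-vocabulary token."""
    has_oov = [any(tok not in vocab for tok in sent) for sent in token_lists]
    return sum(has_oov) / max(len(has_oov), 1)

# Toy corpora standing in for the training and test sets; the example tokens are
# taken from the sentences used elsewhere in this paper.
train_word = [["姚明", "进入", "总决赛"], ["利息", "费用", "是", "多少"]]
test_word  = [["下个月", "还款", "要", "扣", "多少", "利息"]]
train_char = [list("".join(sent)) for sent in train_word]
test_char  = [list("".join(sent)) for sent in test_word]

for name, train, test in [("word", train_word, test_word),
                          ("char", train_char, test_char)]:
    vocab = build_vocab(train, min_freq=1)
    print(f"{name}-level: |V| = {len(vocab)}, "
          f"OOV-sentence rate = {oov_sentence_rate(vocab, test):.2%}")
```

With real corpora of the sizes reported above, the word-level vocabulary grows far faster than the character-level one, which is exactly the sparsity gap the following analysis examines.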
4.1 Data Sparsity A common method to avoid vocabulary size getting too big is to set a frequency threshold, and use a special UNK token to denote all words whose frequency is below the threshold. The value of the frequency threshold is closely related to the vocabulary size, and consequently the number of parameters. Figure 2 shows the correlation between the vocabulary size and the frequency threshold, along with the correlation between model performances and the frequency threshold. For both the charbased model and the word-based model, using all words/chars (threshold set to 0) leads to bad results. The explanation is intuitive: it is hard to learn the semantics of infrequent words/characters. For the char-based model, the best performance is obtained when character frequency threshold is set to 5, resulting in a vocabulary size of 1,432 and a medium character frequency of 72. For the word-based model, the best performance is obtained when word frequency threshold is set to 50, in which case the vocabulary size is 1,355 and the medium word frequency is 83. As can be seen, the vocabulary size and the medium word frequency for the best word-based model is similar to those of the best char-based model. This means, for a given dataset, in order to learn the word/char semantics well, the model needs to have enough exposure to each word/character, the amount of which is ap3248 Figure 1: Effects of dropout rates on the char-based model and the word-based model. Figure 2: Effects of data sparsity on the char-based model and the word-based model.  again  repay  how much  interest hold  Interest expense  is how much  ? ? Word-based Semantic Matching Char-based Semantic Matching               ?  next month Interest expense is how much ? next month again repay , hold how much interest Figure 3: Semantic matching between two Chinese sentences with char-based models and word-based models. proximately the same across different models. For the word-based model, this requirement is particularly hard to meet due to its sparseness. 4.2 Out-of-Vocabulary Words One possible explanation for the inferiority of the word-based model is that it contains too many OOVs. If so, we should be able to narrow or even close the gap between word-based models and charbased models by decreasing the number of OOVs. As discussed in Section 4.2, setting the frequency threshold low to avoid OOVs will hinder the performance because it worsen the data sparsity issue. We thus use an alternative strategy: for different word-frequency thresholds, we remove sentences that contain word OOVs from all of the training, validation and test sets. Figure 4 shows vocabulary sizes of the training set and accuracies plotted 3249 Figure 4: Effects of removing training instances containing OOV words. against word frequency threshold. As can be seen, the gap between the two types of models is gradually narrowed as we increase the word-frequency threshold. It is also interesting that the curve for the char-based model goes up slightly at the beginning and then goes down steadily. It is because the OOV issue is not severe for the char-based model and thus does not affect the performance much. However, as we remove more and more training examples, the shrinking training dataset creates a bigger problem. By contrast, for the word-based model, the performance keeps increasing even when the frequency threshold is set to 50, meaning that the positive influence of removing some OOVs outweighs the negative influence of eliminating some training data. 
In conclusion, the word-based model suffers from the OOV issue. This issue can be alleviated by reducing the number of OOVs in the datasets. 4.3 Overfitting The data sparsity issue leads to the fact that wordbased models have more parameters to learn, and thus are more prone to overfitting. We conducted experiments on the BQ dataset (Chen et al., 2018) and the results validate this point (Figure 1). To achieve the best results, a larger dropout rate is needed for the word-based model (0.5) than the char-based model (0.3). This means overfitting is a severe issue for the word-based model. We also observe that curves with different dropout rates are closer together in word-based models than in charbased models, which means the dropout technique is not enough to resolve the overfitting issue. the char-based model without dropout already achieves better performance (80.82) than the word-based model with the optimal dropout rate (80.65). 4.4 Visualization The BQ semantic matching task aims at deciding whether two sentences have the same intention. Figure 3 tangibly shows why the char-based model outperforms the word-based model. The heatmap denotes the attention matching values between tokens of two two sentences, computed by the BiPMP model (Wang et al., 2017). The input two sentences are: (1) 利息费用是多少(how much is the interest expense), with segmented text being 利息费 用(interest expense) 是(is) 多少(how much) and (2) 下一个月还款要扣多少利息(how much interest do I have to pay if I repay the bill next month), with segmented text being 下个月(next month) 还款(repay), 扣(hold) 多少(how much) 利息(interest). For word-based semantic matching, since 利息费用(interest expense) is treated as a single word, it fails to be mapped to 利息(interest). This is not the case with the char-based model since the same character in the two sentences are more easily mapped. 5 Conclusion In this paper, we ask the fundamental question of whether word segmentation is necessary for deep learning of Chinese representations. We benchmark such word-based models against char-based models in four end-to-end NLP tasks, and enforce apples-to-apples comparisons as much as possible. We observe that char-based models consistently outperform word-based models. Building upon these findings, we show that word-based models’ inferiority is due to the sparseness of word distributions, which leads to more out-of-vocabulary words, overfitting and lack of domain generalization ability. We hope this paper will foster more discussions on the necessity of the long-existing task of CWS in the community. 3250 References Xuejun Bai, Guoli Yan, Simon P Liversedge, Chuanli Zang, and Keith Rayner. 2008. Reading spaced and unspaced chinese text: Evidence from eye movements. Journal of Experimental Psychology: Human Perception and Performance, 34(5):1277. Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large annotated corpus for learning natural language inference. EMNLP. Deng Cai and Hai Zhao. 2016. Neural word segmentation learning for chinese. ACL. Deng Cai, Hai Zhao, Zhisong Zhang, Yuan Xin, Yongjian Wu, and Feiyue Huang. 2017. Fast and accurate neural word segmentation for chinese. ACL. Bob Carpenter. 2006. Character language models for chinese word segmentation and named entity recognition. In Proceedings of the Fifth SIGHAN Workshop on Chinese Language Processing, pages 169– 172. Wanxiang Che, Zhenghua Li, and Ting Liu. 2010. Ltp: A chinese language technology platform. 
In Proceedings of the 23rd International Conference on Computational Linguistics: Demonstrations, pages 13–16. Association for Computational Linguistics. Huadong Chen, Shujian Huang, David Chiang, and Jiajun Chen. 2017a. Improved neural machine translation with a syntax-aware encoder and decoder. ACL. Jing Chen, Qingcai Chen, Xin Liu, Haijun Yang, Daohe Lu, and Buzhou Tang. 2018. The bq corpus: A large-scale domain-specific chinese corpus for sentence semantic equivalence identification. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4946–4951. Xinchi Chen, Xipeng Qiu, and Xuanjing Huang. 2017b. A feature-enriched neural model for joint chinese word segmentation and part-of-speech tagging. IJCAI. Xinchi Chen, Xipeng Qiu, Chenxi Zhu, and Xuanjing Huang. 2015a. Gated recursive neural network for chinese word segmentation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), volume 1, pages 1744–1753. Xinchi Chen, Xipeng Qiu, Chenxi Zhu, Pengfei Liu, and Xuanjing Huang. 2015b. Long short-term memory neural networks for chinese word segmentation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1197–1206. Xinchi Chen, Zhan Shi, Xipeng Qiu, and Xuanjing Huang. 2017c. Adversarial multi-criteria learning for chinese word segmentation. ACL. Hal Daum´e III. 2007. Frustratingly easy domain adaptation. ACL. Chuanhai Dong, Jiajun Zhang, Chengqing Zong, Masanori Hattori, and Hui Di. 2016. Characterbased lstm-crf with radical-level features for chinese named entity recognition. In Natural Language Understanding and Intelligent Applications, pages 239– 250. Springer. Schubert Foo and Hui Li. 2004. Chinese word segmentation and its effect on information retrieval. Information processing & management, 40(1):161–190. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Changning Huang and Hai Zhao. 2007. Chinese word segmentation: A decade review. Journal of Chinese Information Processing, 21(3):8–20. Jing Jiang. 2008. Domain adaptation in natural language processing. Technical report. Dan Jurafsky and James H Martin. 2014. Speech and language processing, volume 3. Pearson London. Yoon Kim. 2014. Convolutional neural networks for sentence classification. EMNLP. Yoon Kim, Yacine Jernite, David Sontag, and Alexander M Rush. 2016. Character-aware neural language models. In AAAI, pages 2741–2749. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. 2012. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105. John Lafferty, Andrew McCallum, and Fernando CN Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. Junhui Li, Deyi Xiong, Zhaopeng Tu, Muhua Zhu, Min Zhang, and Guodong Zhou. 2017. Modeling source syntax for neural machine translation. arXiv preprint arXiv:1705.01020. Wei Liu, Ben Allison, David Guthrie, and Louise Guthrie. 2007. Chinese text classification without automatic word segmentation. In Sixth International Conference on Advanced Language Processing and Web Information Technology (ALPIT 2007), pages 45–50. IEEE. Xin Liu, Qingcai Chen, Chong Deng, Huajun Zeng, Jing Chen, Dongfang Li, and Buzhou Tang. 2018. Lcqmc: A large-scale chinese question matching corpus. 
In Proceedings of the 27th International Conference on Computational Linguistics, pages 1952–1962. Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attentionbased neural machine translation. ACL. 3251 Shuming Ma, Xu Sun, Yizhong Wang, and Junyang Lin. 2018. Bag-of-words as target for neural machine translation. arXiv preprint arXiv:1805.04871. Will Monroe, Spence Green, and Christopher D Manning. 2014. Word segmentation of informal arabic with domain adaptation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 206–211. Fuchun Peng, Fangfang Feng, and Andrew McCallum. 2004. Chinese segmentation and new word detection using conditional random fields. In Proceedings of the 20th international conference on Computational Linguistics, page 562. Association for Computational Linguistics. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Edinburgh neural machine translation systems for wmt 16. WMT. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. ACL. Richard Sproat and Thomas Emerson. 2003. The first international chinese word segmentation bakeoff. In Proceedings of the second SIGHAN workshop on Chinese language processing-Volume 17, pages 133– 143. Association for Computational Linguistics. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104–3112. Jie-Li Tsai and George W McConkie. 2003. Where do chinese readers send their eyes? In The Mind’s Eye, pages 159–176. Elsevier. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Zhiguo Wang, Wael Hamza, and Radu Florian. 2017. Bilateral multi-perspective matching for natural language sentences. IJCAI. Fei Xia. 2000. The part-of-speech tagging guidelines for the penn chinese treebank (3.0). IRCS Technical Reports Series, page 38. Jia Xu, Richard Zens, and Hermann Ney. 2004. Do we need chinese word segmentation for statistical machine translation? In Proceedings of the Third SIGHAN Workshop on Chinese Language Processing. Jie Yang, Yue Zhang, and Fei Dong. 2017. Neural word segmentation with rich pretraining. ACL. Jie Yang, Yue Zhang, and Shuailong Liang. 2018. Subword encoding in lattice lstm for chinese word segmentation. arXiv preprint arXiv:1810.12594. Yushi Yao and Zheng Huang. 2016. Bi-directional lstm recurrent neural network for chinese word segmentation. In International Conference on Neural Information Processing, pages 345–353. Springer. Rongchao Yin, Quan Wang, Peng Li, Rui Li, and Bin Wang. 2016. Multi-granularity chinese word embedding. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 981–986. Jinxing Yu, Xun Jian, Hao Xin, and Yangqiu Song. 2017. Joint embeddings of chinese words, characters, and fine-grained subcharacter components. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 286–291. Shiwen Yu, Jianming Lu, Xuefeng Zhu, Huiming Duan, Shiyong Kang, Honglin Sun, Hui Wang, Qiang Zhao, and Weidong Zhan. 2001. Processing norms of modern chinese corpus. Technical report, Technical report. Hua-Ping Zhang, Hong-Kui Yu, De-Yi Xiong, and Qun Liu. 2003. 
Hhmm-based chinese lexical analyzer ictclas. In Proceedings of the second SIGHAN workshop on Chinese language processing-Volume 17, pages 184–187. Association for Computational Linguistics. Jiacheng Zhang, Yang Liu, Huanbo Luan, Jingfang Xu, and Maosong Sun. 2018. Prior knowledge integration for neural machine translation using posterior regularization. arXiv preprint arXiv:1811.01100. Meishan Zhang, Guohong Fu, and Nan Yu. 2017. Segmenting chinese microtext: Joint informal-word detection and segmentation with neural networks. In IJCAI, pages 4228–4234. Meishan Zhang, Yue Zhang, and Guohong Fu. 2016. Transition-based neural word segmentation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 421–431. Xiang Zhang and Yann LeCun. 2017. Which encoding is the best for text classification in chinese, english, japanese and korean? arXiv preprint arXiv:1708.02657. Hai Zhao, Chang-Ning Huang, and Mu Li. 2006. An improved chinese word segmentation system with conditional random field. In Proceedings of the Fifth SIGHAN Workshop on Chinese Language Processing, pages 162–165. Hai Zhao, Masao Utiyama, Eiichiro Sumita, and BaoLiang Lu. 2013. An empirical study on word segmentation for chinese machine translation. In International Conference on Intelligent Text Processing and Computational Linguistics, pages 248–263. Springer. 3252 Xiaoqing Zheng, Hanyang Chen, and Tianyu Xu. 2013. Deep learning for chinese word segmentation and pos tagging. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 647–657. Hao Zhou, Zhenting Yu, Yue Zhang, Shujian Huang, XIN-YU DAI, and Jiajun Chen. 2017. Word-context character embeddings for chinese word segmentation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 760–766. Fuzhen Zhuang, Ping Luo, Hui Xiong, Yuhong Xiong, Qing He, and Zhongzhi Shi. 2010. Cross-domain learning from multiple sources: A consensus regularization perspective. IEEE Transactions on Knowledge and Data Engineering, 22(12):1664–1678.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3253–3262 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3253 Towards Understanding Linear Word Analogies Kawin Ethayarajh, David Duvenaud†, Graeme Hirst University of Toronto †Vector Institute {kawin, duvenaud, gh}@cs.toronto.edu Abstract A surprising property of word vectors is that word analogies can often be solved with vector arithmetic. However, it is unclear why arithmetic operators correspond to non-linear embedding models such as skip-gram with negative sampling (SGNS). We provide a formal explanation of this phenomenon without making the strong assumptions that past theories have made about the vector space and word distribution. Our theory has several implications. Past work has conjectured that linear substructures exist in vector spaces because relations can be represented as ratios; we prove that this holds for SGNS. We provide novel justification for the addition of SGNS word vectors by showing that it automatically downweights the more frequent word, as weighting schemes do ad hoc. Lastly, we offer an information theoretic interpretation of Euclidean distance in vector spaces, justifying its use in capturing word dissimilarity. 1 Introduction Distributed representations of words are a cornerstone of current methods in natural language processing. Word embeddings, also known as word vectors, can be generated by a variety of models, all of which share Firth’s philosophy (1957) that the meaning of a word is defined by “the company it keeps”. The simplest such models obtain word vectors by constructing a low-rank approximation of a matrix containing a co-occurrence statistic (Landauer and Dumais, 1997; Rohde et al., 2006). In contrast, neural network models (Bengio et al., 2003; Mikolov et al., 2013b) learn word embeddings by trying to predict words using the contexts they appear in, or vice-versa. A surprising property of word vectors learned via neural networks is that word analogies can often be solved with vector arithmetic. For example, ‘king is to ? as man is to woman’ can be solved by finding the closest vector to ⃗ king−⃗ man+ ⃗ woman, which should be ⃗ queen. It is unclear why arithmetic operators can effectively compose embeddings generated by non-linear models such as skip-gram with negative sampling (SGNS). There have been two attempts to rigorously explain this phenomenon, but both have made strong assumptions about either the embedding space or the word distribution. The paraphrase model (Gittens et al., 2017) hinges on words having a uniform distribution rather than the typical Zipf distribution, which the authors themselves acknowledge is unrealistic. The latent variable model (Arora et al., 2016) assumes that word vectors are known a priori and generated by randomly scaling vectors sampled from the unit sphere. In this paper, we explain why – and under what conditions – word analogies can be solved with vector arithmetic, without making the strong assumptions past work has. We focus on GloVe and SGNS because they implicitly factorize a wordcontext matrix containing a co-occurrence statistic (Levy and Goldberg, 2014), which allows us to interpret the inner product of a word and context vector. We begin by formalizing word analogies as functions that transform one word vector into another. 
When this transformation is simply the addition of a displacement vector – as is the case when using vector arithmetic – we call the analogy a linear analogy. Central to our theory is the expression PMI(x,y) + log p(x,y), which we call the co-occurrence shifted pointwise mutual information (csPMI) of (x,y). We prove that in both SGNS and GloVe spaces without reconstruction error (i.e., when the factorized word-context matrix can be perfectly reconstructed), a linear analogy holds over a set of ordered word pairs iff csPMI(x,y) is the same for every word pair, csPMI(x1,x2) = csPMI(y1,y2) 3254 for any two word pairs, and the row vectors of x1,x2,y1,y2 in the factorized matrix are coplanar. By then framing vector addition as a kind of word analogy, we offer several new insights: 1. Past work has often cited the Pennington et al. (2014) conjecture as an intuitive explanation of why vector arithmetic works for analogy solving. The conjecture is that an analogy of the form a is to b as x is to y holds iff p(w|a)/p(w|b) ≈p(w|x)/p(w|y) for every word w in the vocabulary. While this is sensible, it is not based on any theoretical derivation or empirical support. We provide a formal proof that this is indeed true. 2. Consider two words x,y and their sum ⃗z = ⃗x +⃗y in an SGNS embedding space with no reconstruction error. If z were in the vocabulary, the similarity between z and x (as measured by the csPMI) would be the log probability of y shifted by a model-specific constant. This implies that the addition of two words automatically down-weights the more frequent word. Since many weighting schemes are based on the idea that more frequent words should be down-weighted ad hoc (Arora et al., 2017), the fact that this is done automatically provides novel justification for using addition to compose words. 3. Consider any two words x,y in an SGNS or GloVe embedding space with no reconstruction error. The squared Euclidean distance between ⃗x and ⃗y is a decreasing linear function of csPMI(x,y). In other words, the more similar two words are (as measured by csPMI) the smaller the distance between their vectors. Although this is intuitive, it is also the first rigorous explanation of why the Euclidean distance in embedding space is a good proxy for word dissimilarity. Although our main theorem only concerns embedding spaces with no reconstruction error, we also explain why, in practice, linear word analogies hold in embedding spaces with some noise. We conduct experiments that support the few assumptions we make and show that the transformations represented by various word analogies correspond to different csPMI values. Without making the strong assumptions of past theories, we thus offer a formal explanation of why, and when, word analogies can be solved with vector arithmetic. 2 Related Work PMI Pointwise mutual information (PMI) captures how much more frequently x,y co-occur than by chance (Church and Hanks, 1990): PMI(x,y) = log p(x,y) p(x)p(y) (1) Word Embeddings Word embeddings are distributed representations in a low-dimensional continuous space. Also called word vectors, they capture semantic and syntactic properties of words, even allowing relationships to be expressed arithmetically (Mikolov et al., 2013b). 
Word vectors are generally obtained in two ways: (a) from neural networks that learn representations by predicting co-occurrence patterns in the training corpus (Bengio et al., 2003; Mikolov et al., 2013b; Collobert and Weston, 2008); (b) from low-rank approximations of word-context matrices containing a co-occurrence statistic (Landauer and Dumais, 1997; Levy and Goldberg, 2014). SGNS The objective of skip-gram with negative sampling (SGNS) is to maximize the probability of observed word-context pairs and to minimize the probability of k randomly sampled negative examples. For an observed word-context pair (w,c), the objective would be logσ(⃗w ·⃗c) + k · Ec′∼Pn [log(−⃗w·⃗c′)], where c′ is the negative context, randomly sampled from a scaled distribution Pn. Though no co-occurrence statistics are explicitly calculated, Levy and Goldberg (2014) proved that SGNS is in fact implicitly factorizing a wordcontext PMI matrix shifted by −logk. Latent Variable Model The latent variable model (Arora et al., 2016) was the first attempt at rigorously explaining why word analogies can be solved arithmetically. It is a generative model that assumes that word vectors are generated by the random walk of a “discourse” vector on the unit sphere. Gittens et al.’s criticism of this proof is that it assumes that word vectors are known a priori and generated by randomly scaling vectors uniformly sampled from the unit sphere (or having properties consistent with this sampling procedure). The theory also relies on word vectors being uniformly distributed (isotropic) in embedding space; however, experiments by Mimno and Thompson (2017) have found that this generally does not hold in practice, at least for SGNS. 3255 Paraphrase Model The paraphrase model (Gittens et al., 2017) was the only other attempt to formally explain why word analogies can be solved arithmetically. It proposes that any set of context words C = {c1,...,cm} is semantically equivalent to a single word c if p(w|c1,...,cm) = p(w|c). One problem with this is that the number of possible context sets far exceeds the vocabulary size, precluding a one-to-one mapping; the authors circumvent this problem by replacing exact equality with the minimization of KL divergence. Assuming that the words have a uniform distribution, the paraphrase of C can then be written as an unweighted sum of its context vectors. However, this uniformity assumption is unrealistic – word frequencies obey a Zipf distribution, which is Pareto (Piantadosi, 2014). A later attempt at using paraphrases (Allen and Hospedales, 2019) completely ignores the effect of negative sampling in SGNS’ factorization. Neither work provides any empirical evidence in support of the paraphrase model. 3 The Structure of Word Analogies 3.1 Formalizing Analogies A word analogy is a statement of the form “a is to b as x is to y”, which we will write as (a,b)::(x,y). It asserts that a and x can be transformed in the same way to get b and y respectively, and that b and y can be inversely transformed to get a and x. A word analogy can hold over an arbitrary number of ordered pairs: e.g., “Berlin is to Germany as Paris is to France as Ottawa is to Canada ...”. The elements in each pair are not necessarily in the same space – for example, the transformation for (king,roi)::(queen,reine) is English-to-French translation. For (king,queen)::(man,woman), the canonical analogy in the literature, the transformation corresponds to changing the gender. 
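As a concrete illustration of the vector-arithmetic procedure referred to throughout — finding the vector closest to king − man + woman, often called 3CosAdd in the literature — the following minimal NumPy sketch shows the mechanics. The tiny embedding matrix here is random and purely illustrative; with real SGNS or GloVe vectors, the returned word for this query would ideally be "queen".

```python
import numpy as np

# Synthetic, purely illustrative embeddings (rows are unit-normalized word vectors).
vocab = ["king", "queen", "man", "woman", "apple"]
W = np.random.default_rng(0).normal(size=(len(vocab), 50))
W /= np.linalg.norm(W, axis=1, keepdims=True)
idx = {w: i for i, w in enumerate(vocab)}

def analogy(a, b, x, exclude_inputs=True):
    """Return argmax_y cos(y, b - a + x), i.e. solve 'a is to b as x is to ?'."""
    target = W[idx[b]] - W[idx[a]] + W[idx[x]]
    target /= np.linalg.norm(target)
    scores = W @ target                      # cosine similarity, since rows are unit norm
    if exclude_inputs:                       # never return one of the query words
        scores[[idx[a], idx[b], idx[x]]] = -np.inf
    return vocab[int(np.argmax(scores))]

print(analogy("man", "woman", "king"))       # ideally "queen" with real embeddings
```

Excluding the three query words from the candidate set is standard practice, since the nearest neighbour of the composed vector is otherwise often one of the inputs themselves.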
Therefore, to formalize the definition of an analogy, we will refer to it as a transformation. Definition 1 An analogy f is an invertible transformation that holds over a set of ordered pairs S iff ∀(x,y) ∈S, f(x) = y∧f −1(y) = x. The word embedding literature (Mikolov et al., 2013b; Pennington et al., 2014) has focused on a very specific type of transformation, the addition of a displacement vector. For example, for (king,queen)::(man,woman), the transformation would be ⃗ king + ( ⃗ woman −⃗ man) = ⃗ queen, where the displacement vector is expressed as the difference ( ⃗ woman−⃗ man). To make a distinction with our general class of analogies in Definition 1, we will refer to these as linear analogies. Definition 2 A linear analogy f is an invertible transformation of the form⃗x 7→⃗x+⃗r. f holds over a set of ordered pairs S iff ∀(x,y) ∈S,⃗x+⃗r =⃗y. Definition 3 Let W be an SGNS or GloVe word embedding space and C its corresponding context space. Let k denote the number of negative samples, Xx,y the frequency, and bx,by the learned biases for GloVe. If there is no reconstruction error, for any words x,y with⃗x,⃗y ∈W and ⃗xc,⃗yc ∈C: SGNS : ⟨⃗x,⃗yc⟩= PMI(x,y)−logk GloVe : ⟨⃗x,⃗yc⟩= logXx,y −bx −by (2) SGNS and GloVe generate two vectors for each word in the vocabulary: a context vector, for when it is a context word, and a word vector, for when it is a target word. Context vectors are generally discarded after training. The SGNS identity in (2) is from Levy and Goldberg (2014), who proved that SGNS is implicitly factorizing the shifted wordcontext PMI matrix. The GloVe identity is simply the local objective for a word pair (Pennington et al., 2014). Since the matrix being factorized in both models is symmetric, ⟨⃗x,⃗yc⟩= ⟨⃗xc,⃗y⟩. Definition 4 The co-occurrence shifted PMI of a word pair (x,y) is PMI(x,y)+log p(x,y). Definition 5 Let M denote the word-context matrix that is implicitly factorized by GloVe or SGNS. If there is no reconstruction error, any four words {a,b,x,y} are contextually coplanar iff rank     Ma,· −My,· Mb,· −My,· Mx,· −My,·    ≤2 (3) For example, for SGNS, the first row of this matrix would be (PMI(a,·) −logk) −(PMI(y,·) − logk) = log[p(·|a)/p(·|y)]. This condition can be trivially derived from the fact that any four vectors ⃗a,⃗b,⃗x,⃗y in a d-dimensional space (for d ≥3) are coplanar iff rank(W ∗) ≤2, where W ∗=   ⃗aT −⃗yT ⃗bT −⃗yT ⃗xT −⃗yT   (4) Given that the vocabulary size is much greater than the dimensionality d, and assuming that the 3256 context matrix C is full rank, rank(W ∗CT) = rank(W ∗). The product W ∗CT is the matrix in (3); each of its three rows is the difference between two rows of M (e.g., Ma,·−My,·). Thus we can translate coplanarity in the embedding space to the coplanarity of M’s row vectors. Co-occurrence Shifted PMI Theorem Let W be an SGNS or GloVe word embedding space with no reconstruction error and S be a set of ordered word pairs such that ∀(x,y) ∈S,⃗x,⃗y ∈W and |S|> 1. A linear analogy f holds over S iff ∃γ ∈R, ∀(x,y) ∈S,csPMI(x,y) = γ and for any two word pairs (x1,y1),(x2,y2) ∈S, the four words are contextually coplanar and csPMI(x1,x2) = csPMI(y1,y2). In sections 3.2 to 3.4 of this paper, we prove the csPMI Theorem. In section 3.5, we explain why, in practice, perfect reconstruction is not needed to solve word analogies using vector arithmetic. In section 4, we explore what the csPMI Theorem implies about vector addition and Euclidean distance in embedding spaces. 
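Since csPMI is simply a corpus statistic, the quantity in Definition 4 can be estimated directly from co-occurrence counts. The sketch below is a minimal illustration with toy counts (and, for simplicity, marginals estimated from the same symmetric co-occurrence table as the joint probability); it is not the estimation procedure used in this paper's experiments.

```python
from math import log

def cspmi(cooc, x, y):
    """csPMI(x, y) = PMI(x, y) + log p(x, y) (Definition 4), from symmetric co-occurrence counts.

    For simplicity, the marginals p(x) and p(y) are estimated from the same
    co-occurrence table as p(x, y); a full implementation would use unigram counts.
    """
    total = sum(cooc.values())
    p_xy = cooc[frozenset((x, y))] / total
    p_x = sum(c for pair, c in cooc.items() if x in pair) / total
    p_y = sum(c for pair, c in cooc.items() if y in pair) / total
    return log(p_xy / (p_x * p_y)) + log(p_xy)

# Toy counts; in practice they come from sliding a context window over a corpus.
cooc = {frozenset(("king", "queen")): 8,
        frozenset(("king", "man")): 5,
        frozenset(("queen", "woman")): 5,
        frozenset(("man", "woman")): 2}
print(f"csPMI(king, queen) = {cspmi(cooc, 'king', 'queen'):.3f}")
```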
3.2 Analogies as Parallelograms Lemma 1 A linear analogy f holds over a set of ordered word pairs S iff ∃γ ′ ∈R,∀(x,y) ∈ S,2⟨⃗x,⃗y⟩−∥⃗x∥2 2−∥⃗y∥2 2= γ ′ and for any two pairs (x1,y1),(x2,y2) ∈S, words x1,x2,y1,y2 are coplanar and 2⟨⃗x1,⃗x2⟩−∥⃗x1∥2 2−∥⃗x2∥2 2= 2⟨⃗y1,⃗y2⟩− ∥⃗y1∥2 2−∥⃗y2∥2 2 . f holds over every subset {(x1,y1),(x2,y2)} ⊂ S iff it holds over S. We start by noting that by Definition 2, f holds over {(x1,y1),(x2,y2)} iff: ⃗x1 +⃗r =⃗y1 ∧⃗x2 +⃗r =⃗y2 (5) By rearranging (5), we know that⃗x2 −⃗y2 =⃗x1 −⃗y1 and⃗x2−⃗x1 =⃗y2−⃗y1. Put another way, x1,y1,x2,y2 form a quadrilateral in vector space whose opposite sides are parallel and equal in length. By definition, this quadrilateral is then a parallelogram. In fact, this is often how word analogies are visualized in the literature (see Figure 1). To prove the first part of Lemma 1, we let γ ′ = −∥⃗r∥2 2. A quadrilateral is a parallelogram iff each pair of opposite sides is equal in length. For every possible subset, ⃗r = (⃗y1 −⃗x1) = (⃗y2 −⃗x2). This implies that ∀(x,y) ∈S, γ ′ = −∥⃗y−⃗x∥2 2= 2⟨⃗x,⃗y⟩−∥⃗x∥2 2−∥⃗y∥2 2 (6) However, this condition is only necessary and not sufficient for the parallelogram to hold. The other man king queen woman royal royal female female Figure 1: The parallelogram structure of the linear analogy (king,queen)::(man,woman). A linear analogy transforms the first element in an ordered word pair by adding a displacement vector to it. Arrows indicate the directions of the semantic relations. pair of opposite sides, which do not correspond to⃗r, are equal in length iff −∥⃗x1 −⃗x2∥2 2= −∥⃗y1 − ⃗y2∥2 2 ⇐⇒2⟨⃗x1,⃗x2⟩−∥⃗x1∥2 2−∥⃗x2∥2 2= 2⟨⃗y1,⃗y2⟩− ∥⃗y1∥2 2−∥⃗y2∥2 2, as stated in Lemma 1. Note that the sides that do not equal⃗r do not necessarily have a fixed length across different subsets of S. Although points defining a parallelogram are necessarily coplanar, in higher dimensional embedding spaces, it is possible for ∥⃗x1 −⃗x2∥= ∥⃗y1 −⃗y2∥and ∥⃗y1 −⃗x1∥= ∥⃗y2 −⃗x2∥to be satisfied without the points necessarily defining a parallelogram. Therefore, we must also require that x1,y1,x2,y2 be coplanar. However, we do not need the word embeddings themselves to verify coplanarity; when there is no reconstruction error, we can express it as a constraint over M, the matrix that is implicitly factorized by the embedding model (see Definition 5). 3.3 Analogies in the Context Space Lemma 2 A linear analogy f :⃗x 7→⃗x +⃗r holds over a set of ordered pairs S in an SGNS or GloVe word embedding space W with no reconstruction error iff ∃λ ∈R,g : ⃗xc 7→⃗xc +λ⃗r holds over S in the corresponding context space C. In other words, an analogy f that holds over S in the word space has a corresponding analogy g that holds over S in the context space. The displacement vector of g is simply the displacement vector of f scaled by some λ ∈R. To prove this, we begin with (5) and any word w in the vocabulary: ⃗x2 −⃗y2 = ⃗x1 −⃗y1 ⇐⇒⟨⃗wc,(⃗x2 −⃗y2)−(⃗x1 −⃗y1)⟩= 0 ⇐⇒⟨⃗w,(⃗x2c −⃗y2c)−(⃗x1c −⃗y1c)⟩= 0 ⇐⇒⃗x2c −⃗y2c = ⃗x1c −⃗y1c (7) Note that we can rewrite the second equation as the third because the matrices being factorized in 3257 (2) are symmetric and there is no reconstruction error. We can simplify from the second-last step because not all word vectors lie in the same hyperplane, implying that (⃗x2c −⃗y2c)−(⃗x1c −⃗y1c) =⃗0. Thus a linear analogy with displacement vector (⃗y1 −⃗x1) holds over S in the word embedding space iff an analogy with displacement vector (⃗y1c −⃗x1c) holds over S in the context space. 
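The geometric conditions above are straightforward to check numerically for a candidate analogy set. The sketch below is a toy illustration rather than the authors' procedure: given four word vectors for (a,b)::(x,y), it tests the two equal-opposite-sides conditions of Lemma 1 together with the rank-based coplanarity condition of (4).

```python
import numpy as np

def is_parallelogram(a, b, x, y, tol=1e-6):
    """Check the conditions of Lemma 1 for the candidate analogy (a,b)::(x,y):
    both pairs of opposite sides equal in length, and the four points coplanar
    (rank of the stacked difference matrix at most 2, as in (4))."""
    side_r     = abs(np.sum((b - a) ** 2) - np.sum((y - x) ** 2)) < tol  # sides along the displacement
    side_other = abs(np.sum((x - a) ** 2) - np.sum((y - b) ** 2)) < tol  # the other pair of opposite sides
    coplanar = np.linalg.matrix_rank(np.stack([a - y, b - y, x - y]), tol=1e-4) <= 2
    return side_r and side_other and coplanar

# toy usage: an exact parallelogram in 5 dimensions
rng = np.random.default_rng(0)
a, r, s = rng.normal(size=5), rng.normal(size=5), rng.normal(size=5)
b, x, y = a + r, a + s, a + s + r
print(is_parallelogram(a, b, x, y))   # True
```

By Lemma 2, whenever these conditions hold among the word vectors, the analogous parallelogram also appears among the corresponding context vectors.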
This is supported by empirical findings that word and context spaces perform equally well on word analogy tasks (Pennington et al., 2014). Since there is an analogous parallelogram structure formed by x1,y1,x2,y2 in the context space, there is some linear map from ⃗w 7→⃗wc for each word w ∈S. The real matrix A describing this linear map is symmetric: ⟨⃗x,⃗yc⟩=⃗xTA⃗y = (AT⃗x)T⃗y = ⟨⃗xc,⃗y⟩for any (x,y) ∈S. This implies that C = AW, since ⟨⃗w,⃗xc⟩= ⟨⃗wc,⃗x⟩for any word w. Since A is a real symmetric matrix, by the finite-dimensional spectral theorem, there is an orthonormal basis of W consisting of eigenvectors of A. If A had distinct eigenvalues, then the relative geometry of the word embeddings would not be preserved by the transformation, in which case it would be possible for two words x,y to satisfy ⟨⃗x,⃗yc⟩̸= ⟨⃗xc,⃗y⟩. This would be a contradiction, given that the factorized word-context matrix is symmetric. Therefore, the relative geometry is only preserved when A has non-distinct eigenvalues. Because A’s eigenvectors are a basis for W and all have the same eigenvalue λ, all word vectors lie in the same eigenspace: ∃λ ∈R,∀⃗w ∈ W, ⃗wc = A⃗w = λ⃗w. Experiments on embedding isotropy in past work (Mimno and Thompson, 2017) provide some empirical support of this result. 3.4 Proof of the csPMI Theorem From Lemma 1, we know that if a linear analogy f holds over a set of ordered pairs S, then ∃γ ′ ∈ R,∀(x,y) ∈S,2⟨⃗x,⃗y⟩−∥⃗x∥2 2−∥⃗y∥2 2= γ ′. Because there is no reconstruction error, by Lemma 2, we can rewrite the inner product of two word vectors in terms of the inner product of a word and context vector. Using the SGNS identity in (2), we can then rewrite (6): γ ′ = 2⟨⃗x,⃗y⟩−∥⃗x∥2 2−∥⃗y∥2 2 = (1/λ)⟨⃗x−⃗y,⃗yc −⃗xc⟩ λγ ′ = 2 PMI(x,y)−PMI(x,x)−PMI(y,y) = csPMI(x,y)−log p(x|x)p(y|y) (8) We get the same equation using the GloVe identity in (2), since the learned bias terms bx,by cancel out. Note that p(x|x) ̸= 1 because p(x|x) is the probability that the word x will appear in the context window when the target word is also x, which is not guaranteed. For log p(x|x)p(y|y) to not be undefined, every word in S must appear in its own context at least once in the training corpus. However, depending on the size of the corpus and the context window, this may not necessarily occur. For this reason, we assume that p(w,w), the probability that a word co-occurs with itself, follows the Zipf distribution of p(w) scaled by some constant ρ ∈(0,1). We find this assumption to be justified, since the Pearson correlation between p(w) and non-zero p(w,w) is 0.825 for uniformly randomly sampled words in Wikipedia. We can therefore treat log p(x|x)p(y|y) ∀(x,y) ∈S as a constant α ∈R−. Rewriting (8), we get λγ ′ +α = csPMI(x,y) (9) The second identity in Lemma 1 can be expanded analogously, implying that f holds over a set of ordered pairs S iff (9) holds for every pair (x,y) ∈ S and csPMI(x1,x2) = csPMI(y1,y2) for any two pairs (x1,y1),(x2,y2) ∈S with contextually coplanar words. In section 5, we provide empirical support of this finding by showing that there is a moderately strong correlation (Pearson’s r > 0.50) between csPMI(x,y) and γ ′, in both normalized and unnormalized SGNS embedding spaces. 3.5 Robustness to Noise In practice, linear word analogies hold in embedding spaces even when there is non-zero reconstruction error. 
There are three reasons for this: the definition of vector equality is looser in practice, the number of word pairs in an analogy set is small relative to vocabulary size, and analogies mostly hold over frequent word pairs, which are associated with less variance in reconstruction error. For one, in practice, an analogy task (a,?)::(x,y) is solved by finding the most similar word vector to ⃗a + (⃗y −⃗x), where dissimilarity is defined in terms of Euclidean or cosine distance and ⃗a,⃗x,⃗y are excluded as possible answers (Mikolov et al., 2013b). The correct solution to a word analogy can be found even when that solution is not exact. This also means that the solution does not need to lie exactly on the plane defined 3258 by ⃗a,⃗x,⃗y. Although the csPMI Theorem assumes no reconstruction error for all word pairs, if we ignore the coplanarity constraint in Definition 5, only |S|2+2|S| word pairs need to have no reconstruction error for f to hold exactly over S. This number is far smaller than the size of the factorized word-context matrix. Lastly, in practice, linear word analogies mostly hold over frequent word pairs, which are associated with less variance in reconstruction error. More specifically, for a word pair (x,y), the variance of the noise εx,y = Mx,y −⟨⃗x,⃗yc⟩is a strictly decreasing function of its frequency Xx,y. This is because the cost of deviating from the optimal value is higher for more frequent word pairs: this is implicit in the SGNS objective (Levy and Goldberg, 2014) and explicit in GloVe objective (Pennington et al., 2014). We also show that this holds empirically in section 5. Assuming εx,y ∼ N(0,h(Xx,y)), where δ is the Dirac delta distribution: lim Xx,y→∞h(Xx,y) = 0 =⇒lim Xx,y→∞N(0,h(Xx,y)) = δ =⇒lim Xx,y→∞εx,y = 0 (10) As the frequency increases, the probability that the noise is close to zero increases. Although word pairs do not have an infinitely large frequency, as long as the frequency of each word pair is sufficiently large, the noise will likely be small enough for a linear analogy to hold over them in practice. Our experiments in section 5 bear this out: analogies involving countries and their capitals, which have a median word pair frequency of 3436.5 in Wikipedia, can be solved with 95.4% accuracy; analogies involving countries and their currency, which have a median frequency of just 19, can only be solved with 9.2% accuracy. A possible benefit of h mapping lower frequencies to larger variances is that it reduces the probability that a linear analogy f will hold over rare word pairs. One way of interpreting this is that h essentially filters out the word pairs for which there is insufficient evidence, even if the conditions in the csPMI Theorem are satisfied. This would explain why reducing the dimensionality of word vectors – up to a point – actually improves performance on word analogy tasks (Yin and Shen, 2018). Representations with the optimal dimensionality have enough noise to preclude spurious analogies that satisfy the csPMI Theorem, but not so much noise that non-spurious analogies (e.g., (king,queen)::(man,woman)) are also precluded. 4 Vector Addition as a Word Analogy 4.1 Formalizing Addition Corollary 1 Let ⃗z =⃗x +⃗y be the sum of words x,y in an SGNS word embedding space with no reconstruction error. If z were a word in the vocabulary, where δ is a model-specific constant, csPMI(x,z) = log p(y)+δ. 
To frame the addition of two words x,y as an analogy, we need to define a set of ordered pairs S such that a linear analogy holds over S iff ⃗x + ⃗y =⃗z. To this end, consider the set {(x,z),(/0,y)}, where z is a placeholder for the composition of x and y and the null word /0 maps to ⃗0 for a given embedding space. From Definition 2: (⃗x+⃗r =⃗z)∧(⃗/0+⃗r =⃗y) ⇐⇒⃗z−⃗x =⃗y−⃗/0 ⇐⇒⃗x+⃗y =⃗z (11) Even though /0 is not in the vocabulary, we can map it to ⃗0 because its presence does not affect any other word vector. To understand why, consider the shifted word-context PMI matrix M that does not have /0, and the matrix M′ that does, of which M is a submatrix. Where W and C are the word and context matrices, WCT = M ⇐⇒ [W ⃗0][C⃗0]T = M′. Even if the null word does not exist for a given corpus, the embeddings we would get by training on a corpus that did have the null word would otherwise be identical. An inner product with the zero vector is always 0, so we can infer from the SGNS identity in (2) that PMI(/0,·)−logk = 0 for every word in the vocabulary. The vectors⃗x,⃗y,⃗z,⃗/0 are all coplanar, and we know from the csPMI Theorem that if a linear analogy holds over {(x,z),(/0,y)}, then PMI(x,z)+log p(x,z) =2 PMI(/0,y)+log p(y)+log p(/0) =log p(y)+δ where δ = logk2 +log p(/0) (12) Thus the csPMI of the sum and one word is equal to the log probability of the other word shifted by a model-specific constant. If we assume, as in section 3.5, that the noise is normally distributed, then 3259 even without the assumption of zero reconstruction error, the csPMI of the sum and one word is on average equal to the log probability of the other word shifted by a constant. We cannot repeat this derivation with GloVe because it is unclear what the optimal values of the learned biases would be, even with perfect reconstruction. 4.2 Automatically Weighting Words Corollary 2 In an SGNS word embedding space, on average, the sum of two words has more in common with the rarer word, where commonality is measured by csPMI. For two words x,y, assume without loss of generality that p(x) > p(y). By (12): p(x) > p(y) ⇐⇒log p(x)+δ > log p(y)+δ ⇐⇒csPMI(z,y) > csPMI(z,x) (13) Therefore addition automatically down-weights the more frequent word. For example, if the vectors for x = ‘the’ and y = ‘apple’ were added to create a vector for z = ‘the apple’, we would expect csPMI(‘the apple’, ‘apple’) > csPMI(‘the apple’, ‘the’); being a stopword, ‘the’ would on average be heavily down-weighted. While the rarer word is not always the more informative one, weighting schemes like inverse document frequency (IDF) (Robertson, 2004) and unsupervised smoothed inverse frequency (uSIF) (Ethayarajh, 2018) are all based on the principle that more frequent words should be down-weighted because they are typically less informative. The fact that addition automatically down-weights the more frequent word thus provides novel justification for using addition to compose words. 4.3 Interpreting Euclidean Distance Corollary 3 ∃λ ∈R+,α ∈R−such that for any two words x and y in an SGNS or GloVe embedding space with no reconstruction error, λ ∥⃗x − ⃗y∥2 2= −csPMI(x,y)+α. From (9), we know that for some λ,α,γ ′ ∈R, csPMI(x,y) = λγ ′ + α, where γ ′ = −∥⃗x −⃗y∥2 2. Rearranging this identity, we get ∥⃗x−⃗y∥2 2 = −γ ′ = (−1/λ)(csPMI(x,y)−α) λ∥⃗x−⃗y∥2 2 = −csPMI(x,y)+α (14) Thus the squared Euclidean distance between two word vectors is simply a linear function of the negative csPMI. Since csPMI(x,y) ∈(−∞,0] and ∥⃗x−⃗y∥2 2 is non-negative, λ is positive. 
This identity is intuitive: the more similar two words are (as measured by csPMI), the smaller the distance between their word embeddings. In section 5, we provide empirical evidence of this, showing that there is a moderately strong positive correlation (Pearson’s r > 0.50) between −csPMI(x,y) and ∥⃗x −⃗y∥2 2, in both normalized and unnormalized SGNS embedding spaces. 4.4 Are Relations Ratios? Pennington et al. (2014) conjectured that linear relationships in the embedding space – which we call displacements – correspond to ratios of the form p(w|x)/p(w|y), where (x,y) is a pair of words such that ⃗y −⃗x is the displacement and w is some word in the vocabulary. This claim has since been repeated in other work (Arora et al., 2016). For example, according to this conjecture, the analogy (king,queen)::(man,woman) holds iff for every word w in the vocabulary p(w|king) p(w|queen) ≈ p(w|man) p(w|woman) (15) However, as noted earlier, this idea was neither derived from empirical results nor rigorous theory, and there has been no work to suggest that it would hold for models other than GloVe, which was designed around it. We now prove this conjecture for SGNS using the csPMI Theorem. Pennington et al. Conjecture Let S be a set of ordered word pairs (x,y) with vectors in an embedding space. A linear word analogy holds over S iff ∀(x1,y1),(x2,y2) ∈S, p(w|x1)/p(w|y1) ≈ p(w|x2)/p(w|y2) for every word w in the vocabulary. Assuming there is no reconstruction error, we replace approximate equality with exact equality and rewrite the identity for SGNS using (2): p(w|x1) p(w|y1) = p(w|x2) p(w|y2) ⇐⇒PMI(w,x1)−PMI(w,y1) = PMI(w,x2)−PMI(w,y2) ⇐⇒⟨⃗wc,⃗x1⟩−⟨⃗wc,⃗y1⟩= ⟨⃗wc,⃗x2⟩−⟨⃗wc,⃗y2⟩ ⇐⇒⟨⃗wc,(⃗x1 −⃗y1)−(⃗x2 −⃗y2)⟩= 0 (16) The same equation appears in the derivation in (7). This holds iff ⃗x1 −⃗y1 = ⃗x2 −⃗y2 (i.e., iff, by Definition 2, an analogy holds over {(x1,y1),(x2,y2)}) 3260 Figure 2: The noise distribution for an SGNS embedding model (i.e., ⟨⃗x,⃗yc⟩−[PMI(x,y)−logk]) at various frequencies. The noise is normally distributed and the variance decreases as the frequency increases. or if ⃗wc is orthogonal to non-zero (⃗x1 −⃗y1)−(⃗x2 − ⃗y2). Even if the context vector of some word is orthogonal to the difference between the relation vectors, not all are – as noted in section 3.4, not all word or context vectors lie in the same hyperplane in embedding space. Therefore, a linear word analogy holds over {(x1,y1),(x2,y2)} iff for every word w, p(w|x1)/p(w|y1) = p(w|x2)/p(w|y2). If this applies to every (x1,y1),(x2,y2) ∈S, as stated in the conjecture, then the same analogy holds over S. 5 Experiments Measuring Noise We uniformly sample word pairs in Wikipedia and estimate the noise (i.e., ⟨⃗x,⃗yc⟩−[PMI(x,y) −logk]) using SGNS vectors trained on the same corpus. As seen in Figure 2, the noise has an approximately zero-centered Gaussian distribution and the variance of the noise is lower at higher frequencies, supporting our assumptions in section 3.5. As previously mentioned, this is partly why linear word analogies are robust to noise: in practice, they typically hold over very frequent word pairs, and at high frequencies, the amount of noise is often negligible. Estimating csPMI The csPMI Theorem implies that if an analogy holds exactly over a set of word pairs when there is no reconstruction error, then each word pair has the same csPMI value. In Table 1, we provide the mean csPMI values for various analogies in Mikolov et al. 
(2013a) over the set of word pairs for which they should hold (e.g., {(Paris, France), (Berlin, Germany)} for capitalFigure 3: The negative csPMI for a word pair against the squared Euclidean distance between its SGNS word vectors. There is a positive correlation (Pearson’s r = 0.502); the more similar two words are, the smaller the Euclidean distance between their vectors. In the normalized SGNS word space, the correlation is just as strong (Pearson’s r = 0.514). world). We also provide the accuracy of the vector arithmetic solutions for each analogy, found by minimizing cosine distance over the 100K most frequent words in the vocabulary. As expected, when the variance in csPMI is lower, solutions to word analogies are more accurate: the Pearson correlation between accuracy and csPMI variance is −0.70 and statistically significant at the 1% level. This is because an analogy is more likely to hold over a set of word pairs when the displacement vectors are identical, and thus when the csPMI values are identical. Similar analogies, such as capital-world and capital-common-countries, also have similar mean csPMI values – our theory implies this, since similar analogies have similar displacement vectors. As the csPMI changes, the type of analogy gradually changes from geography (capital-world, cityin-state) to verb tense (gram5-present-participle, gram7-past-tense) to adjectives (gram2-opposite, gram4-superlative). We do not witness a similar gradation with the mean PMI, implying that analogies correspond uniquely to csPMI but not PMI. Euclidean Distance Because the sum of two word vectors is not in the vocabulary, we cannot calculate co-occurrence statistics involving the sum, precluding us from testing Corollaries 1 and 2. We test Corollary 3 by uniformly sampling word pairs and plotting, in Figure 3, the negative csPMI against the squared Euclidean distance between the SGNS word vectors. As ex3261 Analogy Mean csPMI Mean PMI Median Word Pair Frequency csPMI Variance Accuracy capital-world −9.294 6.103 980.0 0.496 0.932 capital-common-countries −9.818 4.339 3436.5 0.345 0.954 city-in-state −10.127 4.003 4483.0 2.979 0.744 gram6-nationality-adjective −10.691 3.733 3147.0 1.651 0.918 family −11.163 4.111 1855.0 2.897 0.836 gram8-plural −11.787 4.208 342.5 0.590 0.877 gram5-present-participle −14.530 2.416 334.0 2.969 0.663 gram9-plural-verbs −14.688 2.409 180.0 2.140 0.740 gram7-past-tense −14.840 1.006 444.0 1.022 0.651 gram3-comparative −15.111 1.894 194.5 1.160 0.872 gram2-opposite −15.630 2.897 49.0 3.003 0.554 gram4-superlative −15.632 2.015 100.5 2.693 0.757 currency −15.900 3.025 19.0 4.008 0.092 gram1-adjective-to-adverb −17.497 1.113 46.0 1.991 0.500 Table 1: The mean csPMI for analogies in Mikolov et al. (2013a) over the word pairs for which they should hold (e.g., (Paris, France) for capital-world). Similar analogies have a similar mean csPMI and arithmetic solutions are less accurate when the csPMI variance is higher (Pearson’s r = −0.70). The type of analogy gradually changes with the csPMI, from geography (capital-world) to verb tense (gram7-past-tense) to adjectives (gram2-opposite). pected, there is a moderately strong positive correlation (Pearson’s r = 0.502): the more similar two words are (as measured by csPMI), the smaller the Euclidean distance between them in embedding space. The correlation is just as strong in the normalized SGNS word space, where Pearson’s r = 0.514. 
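The Figure 3 experiment amounts to only a few lines of code once word vectors and a csPMI estimator are available. In the sketch below, `sampled_pairs`, `sgns_vectors` and the `cspmi` estimator are hypothetical stand-ins for the sampled Wikipedia pairs, the trained SGNS vectors and a count-based estimator of Definition 4; it illustrates the Corollary 3 check rather than reproducing the authors' exact setup.

```python
import numpy as np
from scipy.stats import pearsonr

def cspmi_distance_correlation(pairs, vectors, cspmi_fn):
    """Correlate -csPMI(x, y) with the squared Euclidean distance between
    word vectors over a sample of word pairs (the Corollary 3 / Figure 3 check)."""
    neg_cspmi, sq_dist = [], []
    for x, y in pairs:
        if x not in vectors or y not in vectors:
            continue
        neg_cspmi.append(-cspmi_fn(x, y))
        sq_dist.append(float(np.sum((vectors[x] - vectors[y]) ** 2)))
    r, p_value = pearsonr(neg_cspmi, sq_dist)
    return r, p_value

# hypothetical usage:
# r, p = cspmi_distance_correlation(sampled_pairs, sgns_vectors,
#                                   lambda x, y: cspmi(x, y, pair_counts))
```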
As mentioned earlier, our assumption in section 3.4 that p(w,w) ∝p(w) is justified because there is a strong positive correlation between the two (Pearson’s r = 0.825). Unsolvability The csPMI Theorem reveals two reasons why an analogy may be unsolvable in a given embedding space: polysemy and corpus bias. Consider senses {x1,...,xM} of a polysemous word x. Assuming perfect reconstruction, a linear analogy f whose displacement has csPMI γ does not hold over (x,y) if γ ̸= PMI(x,y)+log p(x,y) = log[p(x1|y)+...+ p(xM|y)] p(y|x). The Theorem applies over all the senses of x, even if only a particular sense is relevant to the analogy. For example, while (open,closed)::(high,low) makes intuitive sense, it is unlikely to hold in practice, given that all four words are highly polysemous. Even if (a,b)::(x,y) is intuitive, there is also no guarantee that csPMI(a,b) ≈csPMI(x,y) and csPMI(a,x) ≈csPMI(b,y) for a given training corpus. The less frequent a word pair, the more sensitive its csPMI to even small changes in frequency. Infrequent word pairs are also associated with more reconstruction error (see section 3.5), making it even more unlikely that the analogy will hold in practice. This is why the accuracy for the currency analogy is so low (see Table 1) – in Wikipedia, currencies and their country co-occur with a median frequency of only 19. 6 Conclusion In this paper, we explained why word analogies can be solved using vector arithmetic. We proved that an analogy holds in an SGNS or GloVe embedding space with no reconstruction error iff the co-occurrence shifted PMI is the same for every word pair and across any two word pairs, provided the row vectors of those words in the factorized word-context matrix are coplanar. This had three implications. First, we provided a formal proof of the Pennington et al. (2014) conjecture, the intuitive explanation of this phenomenon. Second, we provided novel justification for the addition of SGNS word vectors by showing that it automatically down-weights the more frequent word, as weighting schemes do ad hoc. Third, we provided the first rigorous explanation of why the Euclidean distance between word vectors is a good proxy for word dissimilarity. Most importantly, we provided empirical support of our theory and avoided making the strong assumptions in past work, making our theory a much more tenable explanation. Acknowledgments We thank Omer Levy, Yoav Goldberg, and the anonymous reviewers for their insightful comments. We thank the Natural Sciences and Engineering Research Council of Canada (NSERC) for their financial support. 3262 References Carl Allen and Timothy Hospedales. 2019. Analogies explained: Towards understanding word embeddings. arXiv preprint arXiv:1901.09813. Sanjeev Arora, Yuanzhi Li, Yingyu Liang, Tengyu Ma, and Andrej Risteski. 2016. A latent variable model approach to PMI-based word embeddings. Transactions of the Association for Computational Linguistics, 4:385–399. Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2017. A simple but tough-to-beat baseline for sentence embeddings. In International Conference on Learning Representations. Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic language model. Journal of Machine Learning Research, 3(Feb):1137–1155. Kenneth Ward Church and Patrick Hanks. 1990. Word association norms, mutual information, and lexicography. Computational Linguistics, 16(1):22–29. Ronan Collobert and Jason Weston. 2008. 
A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th International Conference on Machine Learning, pages 160–167. ACM. Kawin Ethayarajh. 2018. Unsupervised random walk sentence embeddings: A strong but simple baseline. In Proceedings of The Third Workshop on Representation Learning for NLP, pages 91–100. John R Firth. 1957. A synopsis of linguistic theory, 1930-1955. Studies in linguistic analysis. Alex Gittens, Dimitris Achlioptas, and Michael W Mahoney. 2017. Skip-gram – Zipf + uniform = vector additivity. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 69–76. Thomas K Landauer and Susan T Dumais. 1997. A solution to Plato’s problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge. Psychological review, 104(2):211. Omer Levy and Yoav Goldberg. 2014. Neural word embedding as implicit matrix factorization. In Advances in Neural Information Processing Systems, pages 2177–2185. Tomas Mikolov, Kai Chen, Greg S Corrado, and Jeff Dean. 2013a. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111–3119. David Mimno and Laure Thompson. 2017. The strange geometry of skip-gram with negative sampling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2873–2878. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543. Steven T Piantadosi. 2014. Zipf’s word frequency law in natural language: A critical review and future directions. Psychonomic Bulletin & Review, 21(5):1112–1130. Stephen Robertson. 2004. Understanding inverse document frequency: on theoretical arguments for IDF. Journal of Documentation, 60(5):503–520. Douglas LT Rohde, Laura M Gonnerman, and David C Plaut. 2006. An improved model of semantic similarity based on lexical co-occurrence. Communications of the ACM, 8(627-633):116. Zi Yin and Yuanyuan Shen. 2018. On the dimensionality of word embedding. In Advances in Neural Information Processing Systems, pages 894–905.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3263–3274 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3263 On the Compositionality Prediction of Noun Phrases using Poincar´e Embeddings Abhik Jana†, Dmitry Puzyrev‡, Alexander Panchenko⋆, §, Pawan Goyal†, Chris Biemann§, and Animesh Mukherjee† †Indian Institute of Technology Kharagpur, Kharagpur, India ‡National Research University Higher School of Economics, Moscow, Russia ⋆Skolkovo Institute of Science and Technology, Moscow, Russia §Universit¨at Hamburg, Hamburg, Germany [email protected], {pawang,animeshm}@cse.iitkgp.ac.in [email protected] {panchenko,biemann}@informatik.uni-hamburg.de Abstract The compositionality degree of multiword expressions indicates to what extent the meaning of a phrase can be derived from the meaning of its constituents and their grammatical relations. Prediction of (non)-compositionality is a task that has been frequently addressed with distributional semantic models. We introduce a novel technique to blend hierarchical information with distributional information for predicting compositionality. In particular, we use hypernymy information of the multiword and its constituents encoded in the form of the recently introduced Poincar´e embeddings in addition to the distributional information to detect compositionality for noun phrases. Using a weighted average of the distributional similarity and a Poincar´e similarity function, we obtain consistent and substantial, statistically significant improvement across three gold standard datasets over state-of-the-art models based on distributional information only. Unlike traditional approaches that solely use an unsupervised setting, we have also framed the problem as a supervised task, obtaining comparable improvements. Further, we publicly release our Poincar´e embeddings, which are trained on the output of handcrafted lexical-syntactic patterns on a large corpus. 1 Introduction An important challenge in Natural Language Processing is to represent words, phrases, and larger spans in a way that reflects their meaning. Compositionality is one of the strongest assumptions in semantics, stating that the meaning of larger units can be derived from their smaller parts and their contextual relation. However, for idiomatic phrases, this assumption does not hold true as the meaning of the whole phrase may not be related to their parts in a straightforward fashion. The meaning of the phrases like ‘data format’, ‘head teacher’, ‘green tree’ can easily be understood from the constituent words whereas the semantics of the idiomatic phrases like ‘couch potato’, ‘rat race’, ‘nut case’ are non-compositional, i.e., refer to a different meaning than their parts suggest. In this work, we address compositionality prediction, which is the task of assigning a numerical score to a phrase indicating the extent to which the meaning of the phrase can be derived from the meanings of its constituent words. To motivate its importance, e.g., in machine translation, noncompositional phrases must be translated as a unit; in word sense disambiguation, assigning one of the constituent word’s senses to the whole phrase should be avoided for idiomatic phrases; semantic parsing also requires to correctly identify complex predicates and their arguments in this way. 
A significant amount of effort has gone into operationalizing dense-vector distributional semantic models (DSMs) of different flavors such as count-based models (Baldwin et al. (2003); Venkatapathy and Joshi (2005); McCarthy et al. (2007)), word embeddings based on word2vec (both CBOW and SkipGram) and similar (Reddy et al. (2011); Salehi et al. (2014); Cordeiro et al. (2016, 2019)), and multi-sense skip-gram models for compositionality prediction (Salehi et al., 2015). All these attempts are based on the hypothesis that the composition of the representation of constituent words will be closer to the representation of the entire phrase in case of compositional phrases as compared to the non-compositional ones (Choueka, 1988). Observing that the distributional information 3264 alone is not enough for precise compositionality prediction, we propose to utilize hypernymy information, hypothesizing that, for compositional phrases, the hypernym of the whole phrase is semantically closer to the hypernyms of one of the constituent words (head words) as compared to the non-compositional phrases. For example, ‘art school’ and ‘school’ have one common hypernym ‘educational institution’ whereas ‘hot dog’ has no common hypernym with ‘hot’ or ‘dog’, apart from very abstract concepts such as ‘physical entity’. Of course, this only holds for noun phrases, where taxonomic relations between nouns apply. To represent hypernymy information we use Poincar´e embeddings (Nickel and Kiela, 2017) for learning hierarchical representations of symbolic data by embedding them into a hyperbolic space. To this end, we extract hyponym-hypernym pairs by applying well-known lexical-syntactic patterns proposed by Hearst (1992) on a large corpus and train Poincar´e embeddings on a list of hyponymhypernym pairs. Relying on two types of representations, i.e., dense vectors in the Euclidean space and the novel hyperbolic Poincar´e embeddings, we interpolate their similarity predictions in a novel compositionality score metric that takes both distributional and hypernymy information into account. We evaluate our proposed metric on three well-accepted English datasets, i.e., Reddy (Reddy et al., 2011), Reddy++ (Ramisch et al., 2016) and Farahmand (Farahmand et al., 2015), demonstrating a performance boost when including hyperbolic embeddings by 2-4% absolute points across all datasets. In particular, our work contains the three following contributions: 1. We devise a straightforward and efficient approach for combining distributional and hypernymy information for the task of noun phrase compositionality prediction. As far as we are aware, this is the first application of Poincar´e embeddings to this task. 2. We demonstrate consistent and significant improvements on benchmark datasets in unsupervised and supervised settings. 3. We publicly release our Poincar´e embeddings trained on pattern extractions on a very large corpus. 2 Related Work Some of the initial efforts on compositionality prediction were undertaken by Baldwin et al. (2003), who use LSA to calculate the similarity between a phrase and its components, whereas Venkatapathy and Joshi (2005) extend this idea with collocation features (e.g., phrase frequency, point-wise mutual information). Researchers also tried to identify non-compositionality in verb-noun phrases using syntax (Cook et al., 2007) and selectional preferences (McCarthy et al., 2007). 
Attempts to examine the possibility to derive the semantics of a compound or multiword expression from its parts have been researched extensively (McCarthy et al., 2003; Mitchell and Lapata, 2008; Tratz and Hovy, 2010). Reddy et al. (2011) define a compositionality score and use different vector operations to estimate the semantic distance between a phrase and its individual components. Some of the investigations are made for compositionality detection using representation learning of word embeddings (Socher et al., 2012; Salehi et al., 2015). Salehi et al. (2014) also show that distributional similarity over multiple languages can help in improving the quality of compositionality prediction. In a recent attempt, Yazdani et al. (2015) tries to learn semantic composition and finds that complex functions such as polynomial projection and neural networks can model semantic composition more effectively than the commonly used additive and multiplicative functions. Kiela and Clark (2013) detect non-compositionality using concepts of mutual information. Lioma et al. (2015) replace the context vectors with language models and compute their Kullback–Leibler divergence to approximate their semantic distance. In another stream, researchers have also attempted to classify idiomatic vs. non-idiomatic expressions in different languages considering the context of the expressions (Flor and Klebanov, 2018; Bizzoni et al., 2018; Peng et al., 2018), see also a respective shared task (Biemann and Giesbrecht, 2011). In one of the recent attempts, Cordeiro et al. (2016) conduct an analysis of several DSMs (word2vec, GloVe, PPMI) with variations of hyper-parameters and produce the state-of-the-art results in the compositionality prediction task, which is extended further for different languages by Cordeiro et al. (2019). We take their work as our baseline and carry forward our investigation to improve the state-of-the-art performance by introducing the 3265 hyponymy-hypernymy information in the form of Poincar´e embeddings. Le et al. (2019) and Aly et al. (2019) also showed usefulness the use of Poincar´e embeddings: in their case for inducing taxonomies from the text. In both works, hyperbolic embeddings are trained using relations harvested using Hearst patterns, like in our work. The usefulness of hyperbolic embeddings was also shown beyond text processing: Khrulkov et al. (2019) successfully applied them for hierarchical relations in image classification tasks. 3 Methodology Our aim is to produce a compositionality score for a given two-word noun phrase w1w2. As per our hypothesis, the proposed compositionality score metric has two components: one component takes care of the extent of the distributional similarity between the phrase and the composition of constituent words. The second component captures hypernymy-based similarity obtained through Poincar´e embeddings (Nickel and Kiela, 2017). The rationale behind this is that replacing a word with its hypernym should yield phrases with similar meaning for compositional cases, dissimilar phrases otherwise (e.g., a ‘red herring’ is not similar to ‘red fish’). Distributional component: For the first component, we follow the scheme prescribed by Cordeiro et al. (2016), relying on the state-of-the-art DSM model and the score metric (ScoreD) proposed in that work. 
The metric ScoreD is defined as, ScoreD(w1w2) = cos(v(w1w2), v(w1 + w2)), (1) where v(w1 + w2) = v(w1) ∥v(w1)∥+ v(w2) ∥v(w2)∥, (2) and v(w) is the vector representation of w obtained from the DSM, ||.|| is the L2-norm. For the composition of two component word vectors, we use the additive model, which is well-accepted in the literature (Mitchell and Lapata, 2010). Hypernymy component: For the second component, we prepare Poincar´e embeddings. The Poincar´e embedding as introduced by Nickel and Kiela (2017) is a very recent approach to learn hierarchical representations of symbolic data by embedding them into the hyperbolic space. The underlying hyperbolic geometry helps to learn parsimonious representations of symbolic data by simultaneously capturing hierarchy and similarity. As per this proposed Poincar´e ball model, let βd = {x ∈R : ∥x∥< 1} (3) be the open d-dimensional unit ball, where ∥.∥denotes the Euclidean norm. The list of hyponym-hypernym pairs was obtained by applying lexical-syntactic patterns described by Hearst (1992) on the corpus prepared by Panchenko et al. (2016). This corpus is a concatenation of the English Wikipedia (2016 dump), Gigaword (Parker et al., 2009), ukWaC (Ferraresi et al., 2008) and English news corpora from the Leipzig Corpora Collection (Goldhahn et al., 2012). The lexical-syntactic patterns proposed by Hearst (1992) and further extended and implemented in the form of FSTs by Panchenko et al. (2012)1 for extracting (noisy) hyponym-hypernym pairs are given as follows – (i) such NP as NP, NP[,] and/or NP; (ii) NP such as NP, NP[,] and/or NP; (iii) NP, NP [,] or other NP; (iv) NP, NP [,] and other NP; (v) NP, including NP, NP [,] and/or NP; (vi) NP, especially NP, NP [,] and/or NP. Pattern extraction on the corpus yields a list of 27.6 million hyponym-hypernym pairs along with the frequency of their occurrence in the corpus. We normalize the frequency of each hyponymhypernym pair by dividing it by the logarithm of the global frequency of the hypernym in the list, which realizes a TF-IDF (Sparck Jones, 1972) weighting, to downrank noisy extractions with frequent pattern-extracted ‘hypernyms’ such as ‘problem, issue, bit’. Further, we sort the list of hyponym-hypernym pairs with respect to their the normalized frequency. As the Poincar´e embedding method takes as input a list of hyponym-hypernym pairs, we first prepare a list by adding top k pairs (based on normalized frequency) where the noun phrases or component words present in the gold-standard dataset exist as hyponym or hypernym. Note that we embed noun phrases as extracted by the patterns as units, i.e. a term like “educational institution” will get its own embedding if it appears in the pattern extractions as an NP. This list is quite sparse and therefore the hyperbolic space is 1https://zenodo.org/record/3234817 3266 not rich enough to produce good results (see Section 5). In order to circumvent this problem, we further populate the above list by appending the top m percent pairs from the complete sorted list of hyponym-hypernym pairs we prepared earlier. Next, we use this expanded list as input to prepare Poincar´e embeddings. 
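The construction of the Poincar´e training list described above can be summarised in a short sketch. The helper below is an illustration under stated assumptions, not the released pipeline: `pair_freq` is a hypothetical dictionary of Hearst-extracted (hyponym, hypernym) pairs with raw frequencies, `targets` is the set of gold-standard phrases and constituent words, the global hypernym frequency is taken as the summed pair frequency, and the +1 inside the logarithm is only a guard against log 1 = 0.

```python
import math
from collections import Counter, defaultdict

def build_poincare_training_pairs(pair_freq, targets, k=5, m=0.10):
    """Build the hyponym-hypernym training list: (i) weight each pair by
    freq / log(global hypernym freq), (ii) keep the top-k weighted pairs per
    target word/phrase, (iii) append the top m fraction of the full list."""
    hyper_freq = Counter()
    for (hypo, hyper), f in pair_freq.items():
        hyper_freq[hyper] += f
    weighted = {p: f / math.log(hyper_freq[p[1]] + 1) for p, f in pair_freq.items()}
    ranked = sorted(weighted, key=weighted.get, reverse=True)

    per_target = defaultdict(list)
    for pair in ranked:
        for w in pair:
            if w in targets and len(per_target[w]) < k:
                per_target[w].append(pair)
    train = {p for pairs in per_target.values() for p in pairs}
    train.update(ranked[: int(m * len(ranked))])   # add the top m% of all pairs
    return list(train)

# One possible trainer for the resulting relation list is gensim's PoincareModel
# (the paper does not state which implementation was used):
# from gensim.models.poincare import PoincareModel
# model = PoincareModel(build_poincare_training_pairs(pair_freq, targets), size=50, negative=2)
# model.train(epochs=50)
```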
Hyperparameters for training Poincar´e model: For both the unsupervised and the supervised setup we maintain the following settings for the training of the Poincar´e model unless otherwise stated: vector dimensionality d = 50, number of negative samples = 2, learning rate = 0.1, coefficient used for L2-regularization while training = 1, and number of epochs to use for burn-in initialization = 10. 3.1 Unsupervised Setup The Poincar´e distance between points x, y ∈βd is defined in the following way: d(x, y) = arcosh  1 + 2 ||x −y||2 (1 −∥x∥2)(1 −∥y∥2)  . (4) Poincar´e similarity score ScoreP is derived from the Poincar´e distance as ScoreP (x, y) = 1 1 + d(x, y). (5) Let w1w2 be the noun phrase for which we compute the compositionality score. Further let Hw1w2 be the set of top k hypernyms of the phrase w1w2 and Hw1, Hw2 be the set of top k hypernyms of the constituent words w1 and w2, respectively. Our proposed compositionality score metric Score(w1w2) is defined as follows: Score(w1w2) = (1 −α)ScoreD(w1w2)+ α max a∈Hw1w2 b∈Hw1w2 c∈Hw2w2 (ScoreP (v(a), v(b) + v(c))), (6) where v(w) indicates the vector representation of the word w and α is used to set the relative weight of the two components. 3.2 Supervised Setup We explore the utility of hierarchical information encoded in Poincar´e embeddings for the task of compositionality prediction in a supervised setup as well. As our aim is to predict a compositionality score, we employ several regression techniques like Support Vector Regression (Drucker et al., 1997), Kernel Ridge Regression (Vovk, 2013), kNearest Neighbours Regression (Altman, 1992), Partial Least Squares Regression (PLS) (Abdi, 2007) etc. We randomly split the full dataset into a 75% training set and a 25% test set, and experiment on 25 such random splits. For each split, we plugin the concatenation of the vector representation of the noun phrase as well as the component words. The supervised predicted score is ScoreS(w1w2) = (1 −α)·ScoreDS(w1w2)+ α·ScorePS(w1w2), (7) where ScoreDS(w1w2) is the predicted score when we plugin the vectors from DSMs into the regression model and ScorePS(w1w2) is the predicted score when Poincar´e embeddings are used as input. Thus, ScoreS indicates the weighted (weight = α) mixed prediction score from the supervised model. We measure the performance of our supervised model for each of the 25 random splits and report the mean and standard deviation of the performance metric. 3.3 Hyperparameters of the Model Apart from the hyperparameters used to train the Poincar´e model, our proposed model has three hyperparameters: k, m and α. k indicates the number of top hypernyms or hyponyms per target word to be used for training the Poincar´e model. Since only considering hyponym-hypernym pairs containing target words does not lead to sufficient training samples for the Poincar´e model, we add top m% hyponym-hypernym pairs extracted by using Hearst pattern to the training set. Note that we consider the top hyponym-hypernym pairs on the basis of normalized frequency. α indicates the relative weight between Poincar´e similarity and distributional similarity. We have optimized these three hyperparameters by grid search. 4 Evaluation 4.1 Datasets To evaluate our proposed models (both supervised and unsupervised) we use three gold standard datasets for English on compositionality detection and describe them in the following. 
3267 Reddy (RD): This dataset contains compositionality judgments for 90 compounds in a scale of literality from 0 (idiomatic) to 5 (compositional), obtained by averaging crowdsourced judgments on these pairs (Reddy et al., 2011). For evaluation, we use only the global compositionality score, ignoring individual word judgments. Reddy++ (RD++): This is a recently introduced resource created for evaluation (Ramisch et al., 2016) that extends the Reddy dataset with an additional 90 English nominal compounds, amounting to a total of 180 nominal compounds. Consistent with RD, the scores range from 0 (idiomatic) to 5 (compositional) and are annotated through Mechanical Turk and averaged over the annotators. The additional 90 entries are adjective-noun pairs, balanced with respect to compositionality. Farahmand (FD): This dataset contains 1042 English compounds extracted from Wikipedia with binary non-compositionality judgments by four experts (Farahmand et al., 2015). In evaluations we use the sum of all the judgments to have a single numeral compositionality score, ranging from 0 (compositional) to 4 (idiomatic). We optimize our method on subsets of the datasets for pairs and constituents with available Poincar´e embeddings in order to measure the direct impact of our method, which comprises 79, 146 and 780 datapoints for the three sets RD-R, RD++-R and FD-R, respectively. We subsequently report scores on the full datasets RD-F (90), RD++-F (180) and FD-F (1042) for the sake of fair comparison to previous works. In cases where no Poincar´e embeddings are available, we use the fallback strategy of only relying on the distributional model, i.e. ScoreDS. For the supervised setup, we experiment on the FD dataset (on the reduced version and the full version) since for the other two datasets, the number of instances are not enough for supervision. 4.2 Baselines We use the recent work by Cordeiro et al. (2016) as the baseline, where authors apply several distributional semantic models and their variants by tuning hyperparameters like the dimension of vectors, the window-size during training and others. We resort to PPMI-SVD, two variants of word2vec (CBOW and SkipGram) and GloVe as our baselines. We use these models as provided, with the vector dimension size of 750 (PPMI-SVD, W2V) and 500 (GloVe)2. PPMI-SVD baseline: For each word, its neighboring nouns and verbs in a symmetric sliding window of w words in both directions, using a linear decay weighting scheme with respect to its distance d to the target (Levy et al., 2015) are extracted. The representation of a word is a vector containing the positive pointwise mutual information (PPMI) association scores between the word and its contexts. Note that, for each target word, contexts that appear less than 1000 times are discarded. The Dissect toolkit (Dinu et al., 2013) is then used in order to build a PPMI matrix and its dimensionality is reduced using singular value decomposition (SVD) to factorize the matrix. word2vec baseline: This DSM is prepared using the well-known word2vec (Mikolov et al., 2013) in both variants CBOW (W2V-CBOW) and Skip-Gram (W2V-SG), using default configurations except for the following: no hierarchical softmax; negative sampling of 25; frequent-word downsampling weight of 10−6; runs 15 training iterations; minimum word count threshold of 5. GloVe baseline: The count-based DSM of Pennington et al. (2014), implementing a factorization of the co-occurrence count matrix is used for the task. 
The configurations are the default ones, except for the following: internal cutoff parameter xmax = 75; builds co-occurrence matrix in 15 iterations; minimum word count threshold of 5. Other baseline models proposed by Reddy et al. (2011), Salehi et al. (2014), Salehi et al. (2015) report results only on Reddy dataset (since the other two datasets have been introduced later) whereas Yazdani et al. (2015) perform their evaluation only on the Farahmand dataset for their supervised model. In addition, this supervised approach requires an additional resource of ∼70k known noun phrases from Wikipedia for training. However, Cordeiro et al. (2016) compare their best models with all these baseline models and show that their models outperform across all the respective datasets. Hence we execute all our evaluations by considering only the best models proposed by Cordeiro et al. (2016) as our baselines. 2These pre-trained DSMs were provided by Cordeiro et al. (2016); on re-computation we get slightly different results than those reported in their paper. 3268 4.3 Evaluation Setup Quantitative evaluation is usually done by comparing model outcomes against the gold standard datasets. For all the three datasets (RD-R, RD++-R, FD-R), we report Spearman’s rank correlation (ρ) between the scores provided by the humans and the compositionality score obtained from the models. Note that for the nominal compounds in FD-R dataset, higher human scores indicate a higher degree of idiomaticity, which is opposite to the scoring in the RD-R and RD++-R datasets. We therefore always report the absolute correlation values (|ρ|) for all the datasets. 5 Experimental Results In this section, we report the results obtained from the baseline models and the unsupervised and supervised variants of our model. 5.1 Unsupervised Baseline Results We compare the performance of the baseline models (Cordeiro et al., 2016) and Poincar´e embeddings as a single signal on the reduced version of the three gold standard datasets: RD-R (79 instances), RD++-R (146 instances), FD-R (780 instances) in order to closely examine the influence of Poincar´e embeddings. Table 1 shows the performance for all the baselines in terms of Spearman’s rank correlation ρ. We observe that W2V-CBOW model produces the best performance across all the three datasets and W2V-SG achieves the second-best performance. As noted in the table, the Poincar´e embeddings on their own perform worse than all the other baselines. Further, since our final model is based on an interpolation between Poincar´e embeddings and W2VCBOW, we also attempted interpolation between other four baseline models, but the best results were always close to the better of the two models, and are not reported here. Base. Model RD-R RD++-R FD-R W2V-CBOW 0.8045 0.6964 0.3405 W2V-SG 0.8034 0.6963 0.3396 GloVe 0.7604 0.6487 0.2620 PPMI-SVD 0.7484 0.6468 0.2428 Poincar´e 0.6023 0.4765 0.2007 Table 1: Baseline (Cordeiro et al., 2016) results on the reduced version of three gold-standard datasets ordered in decreasing overall performance along with the results of using only Poincar´e embedding. 5.2 Results of Proposed Unsupervised Model We report the effect of tuning hyper-parameters introduced in Section 3, e.g. k, m, or α. Fixed k neighbours: We start by fixing k = 5 and obtain the correlations by varying m and α. The results are presented in Table 2. We experiment with values of m ranging from 0 to 10 and report results for m = 0, 1, 5, 10. 
Note that here m = 0 indicates the case where we use the Poincar´e embeddings of the target word’s top k hypernyms and hyponyms only with no additional highly frequent hyponym-hypernym pairs. Values of m > 10 degrade the quality, as too many noisy pattern extractions would be used in training. Key observations: For certain values of α we obtain considerable improvements over the baseline Spearman’s correlation when introducing Poincar´e embeddings. The addition of top hyponym-hypernym pairs (i.e., m > 0) improves the performance of the model. Finally, note that for m > 0, α = 0.4 generally produces better results across the three datasets. m(%) α RD-R RD++-R FD-R 0.2 0.8160 0.7102 0.3536 0 0.4 0.8117 0.7012 0.3532 0.6 0.7844 0.6581 0.3278 0.2 0.8274 0.7155 0.3482 1 0.4 0.8391 0.7165 0.3373 0.6 0.8136 0.6817 0.3036 0.2 0.8362 0.7268 0.3501 5 0.4 0.8578 0.7389 0.3432 0.6 0.8467 0.7279 0.3126 0.2 0.8346 0.7250 0.3513 10 0.4 0.8421 0.7461 0.3469 0.6 0.8299 0.7372 0.3204 Table 2: Effect of the introduction of the Poincar´e embeddings for varying values of m and α. Here W2VCBOW is used as distributional model. MODEL-DP with W2V-CBOW α RD-R RD++-R FD-R 0.2 0.8265 0.7177 0.3594 0.4 0.8324 0.7321 0.3646 0.6 0.8082 0.7077 0.3450 MODEL-DP with W2V-SG α RD-R RD++-R FD-R 0.2 0.8244 0.7215 0.3603 0.4 0.8330 0.7337 0.3673 0.6 0.8152 0.7101 0.3461 Table 3: Performance of MODEL-DP using W2VCBOW as well as W2V-SG as distributional models: Effect of removal of top 1% hypernym-hyponym pairs from the top 10% pairs (k = 5). 3269 Effect of the top m pairs: Since the extraction of the hypernyms from the corpus is completely unsupervised and based on handcrafted lexical-syntactic patterns, we investigate whether the most frequent hyponym-hypernym pairs are affecting the quality of Poincar´e embeddings, having noted many erroneous extractions for very frequent pairs. We fix the value of m = 10, but drop the most frequent 1% hyponym-hypernym pairs and retrain the Poincar´e model with the rest of the pairs. We call this variant MODEL-DP. The upper half of Table 3 shows the performance of this model while using W2V-CBOW as the distributional models (k = 5, which was the optimal k also in this setting). We compare the result of MODEL-DP for α = 0.4 with Table 2, row corresponding to m = 10%, α = 0.4. k α RD-R RD++-R FD-R 0.2 0.8269 0.7228 0.3563 3 0.4 0.8275 0.7382 0.3557 0.6 0.8089 0.7188 0.3278 0.2 0.8265 0.7177 0.3594 5 0.4 0.8324 0.7321 0.3646 0.6 0.8082 0.7077 0.3450 0.2 0.8123 0.7103 0.3534 10 0.4 0.8168 0.7248 0.3589 0.6 0.7700 0.6957 0.3484 Table 4: Results obtained for MODEL-DP (m = 10, top 1% hypernym-hyponym pairs removed) by varying the values of k. Key observations: We mainly observe that discarding the most frequent 1% hyponym-hypernym pairs improves the results for the largest dataset FD-R considerably while making the results from the other two datasets a little worse. We also produce results on MODEL-DP by varying the value of k. We try with k = 3, 5, 10, the results of which is presented in Table 4. Clearly, k = 5 gives the best performance. If we consider very few hypernyms per target word, it results in lack of sufficient information for the Poincar´e model, while training with too many hypernyms per target word dilutes the useful hierarchy information because it adds noise. Other DSM models: We use W2V-CBOW as the DSM for MODEL-DP. 
Keeping all the other parameters of MODEL-DP the same (i.e., m = 10, k = 5, α = 0.4) we replace the DSM by the W2V-SG vectors, which was performing the second best among the baselines. We are interested in observing whether the Poincar´e embeddings also benefit other DSM models as well. Key observations: The performance of this variant of our model is presented in the lower half of Table 3. We indeed observe the same effect of the Poincar´e embeddings improving the overall performance by 3-4% on all datasets. Other hyperparameters: In a series of experiments that we do not report in detail for brevity, we could make the following observations: For our task, the vector dimensionality of Poincar´e embeddings of d = 50 shows better results than higher or lower values, as tested with d ∈{20, 100}. Similarly, we tried with several vector dimensions of DSMs with d ∈50, 100, 300 but 750 gives the best performance for the best models reported by Cordeiro et al. (2016) and our model in the unsupervised setup. We further tried varying the relative weight of single word vectors for the sum in Equation 1, which did not have positive effects. Performance for reduced dataset Model RD-R RD++-R FD-R W2V-CBOW 0.8045 0.6964 0.3405 MODEL-DP 0.8324 0.7321 0.3646 Performance for full dataset Model RD-F RD++-F FD-F W2V-CBOW 0.7867 0.7022 0.2688 MODEL-DP 0.8095 0.7302 0.2958 Table 5: Performance of our model (MODEL-DP) and most competitive baseline (W2V-CBOW) for both the reduced datasets and the whole datasets (using the fallback strategy). Fallback strategy to encompass the whole dataset: In all the above experiments we consider the reduced version of the three goldstandard datasets due to lack of the Poincar´e embeddings for certain target words. We suggest a fallback strategy to incorporate the target words that do not have Poincar´e embeddings. In cases where the Poincar´e embeddings are not present, we fall back to the distributional similarity score. In cases, where the Poincar´e embeddings are available we use the combined score as discussed in Section 3. Note that, the distributions of distributional similarity scores and proposed combined scores are significantly different (according to the z-test (Fisher, 1932)). Therefore while falling back to the distributional similarity scores we scale up the scores by the proportion of normalized means of the two distributions. 3270 Key observations: The results for this fall back strategy is noted in the lower half of Table 5. We observe that for all three datasets we perform significantly better than the baselines. To be consistent with the literature, we compare our performance even with the supervised model proposed by Yazdani et al. (2015) for the FD-F dataset. For this dataset, the supervised model proposed by the authors produces a Spearman’s rank correlation (ρ) of 0.41 whereas the unsupervised MODEL-DP produces 0.29. However, our supervised approach, as we shall see later, beats this number reported by Yazdani et al. (2015) by a considerable margin. Significance test: From the extensive evaluation of our model by tuning several hyper-parameters, we obtain MODEL-DP (Table 3), which gives the best performance for all the three datasets outperforming the baselines (Table 1). We perform Wilcoxon’s sign-rank test (Rey and Neuh¨auser, 2011) for all the three datasets separately. 
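The complete scoring procedure, including this fallback, can be sketched compactly. In the code below, `dsm` and `poin` are hypothetical dictionaries of distributional and Poincar´e vectors, `hypernyms` maps each term to its top-k hypernyms (assumed to be present in the Poincar´e vocabulary), `scale` stands for the ratio of normalised means used in the fallback, the maximisation in Equation 6 is read as a ranging over hypernyms of the phrase and b, c over hypernyms of the two constituents, and the projection of v(b)+v(c) back into the unit ball is our assumption, since the paper does not say how sums that leave the ball are handled.

```python
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    """Poincare distance on the open unit ball (Equation 4)."""
    num = 2.0 * float(np.sum((u - v) ** 2))
    den = (1.0 - float(np.sum(u ** 2))) * (1.0 - float(np.sum(v ** 2)))
    return float(np.arccosh(1.0 + num / max(den, eps)))

def score_p(u, v):
    """Poincare similarity (Equation 5)."""
    return 1.0 / (1.0 + poincare_distance(u, v))

def score_d(dsm, phrase, w1, w2):
    """Distributional score (Equations 1-2): cosine between the phrase vector
    and the sum of the L2-normalised constituent vectors."""
    a = dsm[phrase]
    b = dsm[w1] / np.linalg.norm(dsm[w1]) + dsm[w2] / np.linalg.norm(dsm[w2])
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def to_ball(x, margin=1e-3):
    """Project a point back into the open unit ball if a vector sum leaves it
    (an assumption not stated in the paper)."""
    n = np.linalg.norm(x)
    return x if n < 1.0 - margin else x * (1.0 - margin) / n

def compositionality(phrase, w1, w2, dsm, poin, hypernyms, alpha=0.4, scale=1.0):
    """Combined score (Equation 6), falling back to the rescaled ScoreD when
    Poincare information is missing for the phrase or its constituents."""
    sd = score_d(dsm, phrase, w1, w2)
    if any(t not in hypernyms or t not in poin for t in (phrase, w1, w2)):
        return scale * sd          # fallback; `scale` = ratio of normalised means
    best = max(
        score_p(poin[a], to_ball(poin[b] + poin[c]))
        for a in hypernyms[phrase]   # hypernyms of the whole phrase
        for b in hypernyms[w1]       # hypernyms of the first constituent
        for c in hypernyms[w2]       # hypernyms of the second constituent
    )
    return (1 - alpha) * sd + alpha * best
```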
We obtain p < 0.05 while comparing MODEL-DP and the best baseline model (W2V-CBOW) indicating that the difference between their compositionality predictions is statistically significant. Error analysis: We investigate the erroneous cases for which the annotators give a high compositionality score while our model produces a very low compositional score, e.g. ‘area director’, ‘discussion page’, and ‘emergency transportation’. We observe that the number of hypernyms extracted for these target noun phrases is very low (1 or 2), which leads to a less informative hierarchical representation in the Poincar´e model; this is either caused by a low frequency of terms overall, or by a low occurrence in hypernym pattern contexts. We also analyzed the non-compositional cases for which the annotators give a low compositionality score but our model produces a high score, e.g. ‘hard disk’, ‘hard drive’ and ‘soft drink’. In these cases even though they are non-compositional, the hypernyms of the noun phrases match with the hypernyms of the head constituent words. For example, ‘hard disk’ and ‘disk’ have the same hypernym ‘storage device’; similarly ‘soft drink’ and ‘drink’ have ‘product’; ‘hard drive’ and ‘drive’ have ‘device’. Thus, these non-compositional cases are different from entirely opaque expressions like ‘couch potato’, ‘hot dog’ where none of the hypernyms of the noun phrases match with the hypernyms of any of the constituent words. CatModel RD-RL RD++-RL FD-RL W2V-CBOW 0.8111 0.7256 0.4198 MODEL-DP-L 0.8223 0.7451 0.4179 MODEL-DP 0.8288 0.7592 0.4790 Table 6: Comparisons of the results produced by MODEL-DP-L from lexical resources vs. MODEL-DP along with the baselines for the reduced dataset. egorizing the non-compositional words based on the above observation and dealing with such cases is left for future work. Training using lexical resources: We further investigated the use of hyponym-hypernym pairs extracted from lexical resources like WordNet (Miller, 1995) or ConceptNet (Speer et al., 2017) for training the Poincar´e model. Even though the quality of the hyponym-hypernym pairs from lexical resources is better compared to the pairs extracted using Hearst patterns, the coverage of target words is very low. Therefore, for a fair comparison, we prepare a reduced version of the three gold standard datasets (RD-RL, RD++RL, FD-RL), where all the target words are present in lexical resources as well as hyponym-hypernym pairs extracted using Hearst patterns. RD-RL, RD++-RL, and FD-RL contain 74, 131, 380 target words, respectively. MODEL-DP-L uses the same compositionality score metric as MODEL-DP but in the case of MODEL-DP-L, the Poincar´e embedding is learned using the hyponym-hypernym pairs extracted only from WordNet and ConceptNet combined. The results are presented in Table 6. We see that even though MODEL-DP-L performs better than the baselines for two of the datasets, MODEL-DP gives the best result. We attribute this to the relative sparsity of lexical resources, which are seemingly not sufficient for training reliable Poincar´e embeddings. 5.3 Results of Proposed Supervised Model For the supervised setup we present our results on the reduced FD-R dataset (780 instances) and the full Farhamand FD-F dataset (1042 instances). We do not use the other two datasets for the supervised setup since the number of instances in both these datasets are too small to produce a reasonable training-test split required for supervision. 
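For the supervised setup just introduced, the evaluation protocol detailed in the following paragraphs (random 75/25 splits, regression on embedding-derived features, Spearman correlation between predictions and gold scores) can be sketched as follows. The regressors and their hyperparameters are illustrative scikit-learn choices, and the feature matrix X is assumed to be built from the embeddings as in the paper's Section 3.2.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.kernel_ridge import KernelRidge
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import ShuffleSplit

def evaluate_supervised(X, y, model_factory, n_splits=25, test_size=0.25, seed=0):
    """Mean and std of |Spearman rho| over random 75/25 splits."""
    rhos = []
    splitter = ShuffleSplit(n_splits=n_splits, test_size=test_size, random_state=seed)
    for train_idx, test_idx in splitter.split(X):
        model = model_factory()
        model.fit(X[train_idx], y[train_idx])
        pred = np.ravel(model.predict(X[test_idx]))   # PLS returns a 2-D array
        rho, _ = spearmanr(pred, y[test_idx])
        rhos.append(abs(rho))
    return np.mean(rhos), np.std(rhos)

# Hypothetical usage with the two best-performing regressor families in Table 7:
# kr  = lambda: KernelRidge(kernel="rbf", alpha=1.0)
# pls = lambda: PLSRegression(n_components=10)
# mu, sigma = evaluate_supervised(X, y, kr)
```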
As discussed in Section 3.2, we use various regression models; 75% of the dataset is used for training and the remaining 25% is used for testing; we experiment on 25 such random splits and 3271 FD-R Kernel Regression PLS Regression µ(|ρ|) σ(|ρ|) µ(|ρ|) σ(|ρ|) CBOW-S (750) 0.4017 0.0599 0.3972 0.0590 α MODEL-DP-S 0.2 0.4294 0.0591 0.4078 0.0566 0.4 0.4347 0.0563 0.4096 0.0525 0.6 0.4221 0.0540 0.3959 0.0497 CBOW-S (50) 0.4339 0.0570 0.4227 0.0584 α MODEL-DP-S, CBOW vectors of dim. 50 0.2 0.4487 0.0547 0.4361 0.0561 0.4 0.4520 0.0528 0.4372 0.0518 0.6 0.4410 0.0510 0.4196 0.0491 FD-F Kernel Regression PLS Regression µ(|ρ|) σ(|ρ|) µ(|ρ|) σ(|ρ|) CBOW-S (750) 0.3822 0.0471 0.3910 0.0434 α MODEL-DP-S 0.2 0.4030 0.0446 0.3984 0.0450 0.4 0.4083 0.0425 0.3941 0.0459 0.6 0.3986 0.0418 0.3747 0.0471 CBOW-S (50) 0.4212 0.0502 0.4201 0.0470 α MODEL-DP-S, CBOW vectors of dim. 50 0.2 0.4329 0.0500 0.4270 0.0467 0.4 0.4340 0.0488 0.4211 0.0469 0.6 0.4213 0.0478 0.3943 0.0499 Table 7: Mean (µ) and Standard Deviation (σ) of Spearman’s rank correlation (ρ) of the supervised approach for FD-R and FD-F datasets over 25 random splits. We compare best baseline model (CBOW - 750 and 50 dimension) and our model (MODEL-DP-S) using both 750 and 50 dimension of CBOW vectors. report mean and standard deviation of Spearman’s rank correlation (ρ). Among all the regression models (respective to the best choice of the hyperparameters), Kernel Ridge regression gives the best performance while PLS regression is the second best for both the FD-R and FD-F dataset. We compare the performance of the best baseline supervised model (CBOW-S) where only ScoreDS from Equation 7 is used as the predicted score with our proposed supervised model (MODEL-DPS) where ScoreS from Equation 7 is used as the predicted score. The performance of these two best regression models for the baseline and our model (for α = 0.4)3 are noted in Table 7. In the same table, we also report the results of the evaluation on FD-F dataset using a fallback strategy for the supervised setup: here, we use a 50-dimensional zero vector of the target word or compound for 3α = 0.4 produces the best results per grid search. which the Poincar´e embedding is absent. We observe that for both the datasets (reduced and full) our approach outperforms the baseline results by a large margin. As discussed earlier, the CBOW vectors used for experiments consist of 750 dimensions. Since the number of data points in the training set is small, we also experiment with CBOW vector dimension of 50 (MODEL-DPS50) in the supervised setup to avoid overfitting due to a large number of parameters. The results presented in Table 7 show that with the reduced number of dimensions, our model yields even better results and outperforms the correlations 0.41 and 0.34 reported respectively in (Yazdani et al., 2015) and (Cordeiro et al., 2016). 6 Conclusion In this paper, we present a novel straightforward method for estimating degrees of compositionality in noun phrases. The method is mixing hypernymy and distributional information of the noun phrases and their constituent words. To encode hypernymy information, we use Poincar´e embeddings, which – to the best of our knowledge – are used for the first time to accomplish the task of compositionality prediction. 
While these hyperbolic embeddings trained on hypernym pattern extractions are not a good signal on their own for this task, we observe that mixing distributional and hypernymy information via Euclidean and hyperbolic embeddings helps to substantially and significantly improve the performance of compositionality prediction, outperforming previous state-ofthe-art models. Our pretrained embeddings and the source codes are publicly available.4 Two directions for future work are (i) to extend our approach to other languages by using multilingual resources or translation data; and (ii) to explore various compositionality functions to combine the words’ representation on the basis of their grammatical function within a phrase. Acknowledgments We acknowledge the support of the DFG under the “JOIN-T” (BI 1544/4) and “ACQuA” (BI 1544/7) projects, Humboldt Foundation for providing scholarship as well as the DAAD and the Indian Department of Science and Technology via a DAAD-DST PPP grant. 4https://github.com/uhh-lt/poincare 3272 References Herv´e Abdi. 2007. Partial least squares regression. Encyclopedia of measurement and statistics, 2:740– 744. Naomi S. Altman. 1992. An introduction to kernel and nearest-neighbor nonparametric regression. The American Statistician, 46(3):175–185. Rami Aly, Alexander Ossa, Arne K¨ohn, Chris Biemann, and Alexander Panchenko. 2019. Every child should have parents: a taxonomy refinement algorithm based on hyperbolic term embeddings. In Proceedings of the 57th Annual Meeting of the Association of Computational Linguistics (Volume 2: Short Papers), Florence, Italy. Timothy Baldwin, Colin Bannard, Takaaki Tanaka, and Dominic Widdows. 2003. An empirical model of multiword expression decomposability. In Proceedings of the ACL 2003 workshop on Multiword expressions: analysis, acquisition and treatment, pages 89–96, Sapporo, Japan. Chris Biemann and Eugenie Giesbrecht. 2011. Distributional semantics and compositionality 2011: Shared task description and results. In Proceedings of the Workshop on Distributional Semantics and Compositionality, pages 21–28, Portland, OR, USA. Yuri Bizzoni, Marco S. G. Senaldi, and Alessandro Lenci. 2018. Finding the neural net: Deep-learning idiom type identification from distributional vectors. Italian Journal of Computational Linguistics, 4(1):27–41. Yaacov Choueka. 1988. Looking for needles in a haystack or locating interesting collocational expressions in large textual databases. In RIAO 88:(Recherche d’Information Assist´ee par Ordinateur). Conference, pages 609–623, Cambridge, MA, USA. Paul Cook, Afsaneh Fazly, and Suzanne Stevenson. 2007. Pulling their weight: Exploiting syntactic forms for the automatic identification of idiomatic expressions in context. In Proceedings of the workshop on a broader perspective on multiword expressions, pages 41–48, Prague, Czech Republic. Silvio Cordeiro, Carlos Ramisch, Marco Idiart, and Aline Villavicencio. 2016. Predicting the compositionality of nominal compounds: Giving word embeddings a hard time. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1986–1997, Berlin, Germany. Silvio Cordeiro, Aline Villavicencio, Marco Idiart, and Carlos Ramisch. 2019. Unsupervised compositionality prediction of nominal compounds. Computational Linguistics, 45(1):1–57. Georgiana Dinu, Nghia The Pham, and Marco Baroni. 2013. Dissect - distributional semantics composition toolkit. 
In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 31–36, Sofia, Bulgaria. Harris Drucker, Christopher J. C. Burges, Linda Kaufman, Alex J. Smola, and Vladimir Vapnik. 1997. Support vector regression machines. In Advances in Neural Information Processing Systems 9, pages 155–161, Denver, CO, USA. Meghdad Farahmand, Aaron Smith, and Joakim Nivre. 2015. A multiword expression data set: Annotating non-compositionality and conventionalization for English noun compounds. In Proceedings of the 11th Workshop on Multiword Expressions, pages 29–33, Denver, CO, USA. Adriano Ferraresi, Eros Zanchetta, Marco Baroni, and Silvia Bernardini. 2008. Introducing and evaluating ukWaC, a very large web-derived corpus of English. In Proceedings of the 4th Web as Corpus Workshop (WAC-4) Can we beat Google, pages 47–54, Marrakech, Morocco. Ronald A. Fisher. 1932. Statistical methods for research workers. Oliver and Boyd, Edinburgh. Michael Flor and Beata Beigman Klebanov. 2018. Catching idiomatic expressions in EFL essays. In Proceedings of the Workshop on Figurative Language Processing, pages 34–44, New Orleans, LA, USA. Dirk Goldhahn, Thomas Eckart, and Uwe Quasthoff. 2012. Building large monolingual dictionaries at the Leipzig Corpora Collection: From 100 to 200 languages. In Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC’12), pages 759–765, Istanbul, Turkey. Marti A. Hearst. 1992. Automatic acquisition of hyponyms from large text corpora. In Proceedings of the 14th Conference on Computational Linguistics - Volume 2, COLING ’92, pages 539–545, Nantes, France. Valentin Khrulkov, Leyla Mirvakhabova, Evgeniya Ustinova, Ivan Oseledets, and Victor Lempitsky. 2019. Hyperbolic image embeddings. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA. Douwe Kiela and Stephen Clark. 2013. Detecting compositionality of multi-word expressions using nearest neighbours in vector space models. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1427–1432, Seattle, WA, USA. Matt Le, Stephen Roller, Laetitia Papaxanthos, Douwe Kiela, and Maximilian Nickel. 2019. Inferring concept hierarchies from text corpora via hyperbolic embeddings. arXiv preprint arXiv:1902.00913. 3273 Omer Levy, Yoav Goldberg, and Ido Dagan. 2015. Improving distributional similarity with lessons learned from word embeddings. Transactions of the Association for Computational Linguistics, 3:211–225. Christina Lioma, Jakob G. Simonsen, Birger Larsen, and Niels D. Hansen. 2015. Non-compositional term dependence for information retrieval. In Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 595–604, Santiago, Chile. Diana McCarthy, Bill Keller, and John Carroll. 2003. Detecting a continuum of compositionality in phrasal verbs. In Proceedings of the ACL 2003 Workshop on Multiword Expressions: Analysis, Acquisition and Treatment, pages 73–80, Sapporo, Japan. Diana McCarthy, Sriram Venkatapathy, and Aravind K. Joshi. 2007. Detecting compositionality of verbobject combinations using selectional preferences. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 369–379, Prague, Czech Republic. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. 2013. 
Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119, Stateline, NV, USA. George A. Miller. 1995. WordNet: A lexical database for English. Communications of the ACM, 38(11):39–41. Jeff Mitchell and Mirella Lapata. 2008. Vector-based models of semantic composition. In Proceedings of Annual Meeting of the Association for Computational Linguistics (ACL-08: HLT), pages 236–244, Columbus, OH, USA. Jeff Mitchell and Mirella Lapata. 2010. Composition in distributional models of semantics. Cognitive science, 34(8):1388–1429. Maximillian Nickel and Douwe Kiela. 2017. Poincar´e embeddings for learning hierarchical representations. In Advances in Neural Information Processing Systems 30, pages 6338–6347, Long Tail Beach, CA, USA. Alexander Panchenko, Stefano Faralli, Eugen Ruppert, Steffen Remus, Hubert Naets, C´edrick Fairon, Simone P. Ponzetto, and Chris Biemann. 2016. TAXI at SemEval-2016 Task 13: a taxonomy induction method based on lexico-syntactic patterns, substrings and focused crawling. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 1320–1327, San Diego, CA, USA. Alexander Panchenko, Olga Morozova, and Hubert Naets. 2012. A semantic similarity measure based on lexico-syntactic patterns. In KONVENS, pages 174–178, Vienna, Austria. Robert Parker, David Graff, Junbo Kong, Ke Chen, and Kazuaki Maeda. 2009. English gigaword forth edition. In Linguistic Data Consortium, Philadelphia, PA, USA. Jing Peng, Katsiaryna Aharodnik, and Anna Feldman. 2018. A distributional semantics model for idiom detection - the case of english and russian. In Proceedings of the 10th International Conference on Agents and Artificial Intelligence, ICAART 2018, Volume 2, pages 675–682, Funchal, Madeira, Portugal. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, Doha, Qatar. Carlos Ramisch, Silvio Cordeiro, Leonardo Zilio, Marco Idiart, and Aline Villavicencio. 2016. How naked is the naked truth? a multilingual lexicon of nominal compound compositionality. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 156–161, Berlin, Germany. Siva Reddy, Diana McCarthy, and Suresh Manandhar. 2011. An empirical study on compositionality in compound nouns. In Proceedings of the 5th International Joint Conference on Natural Language Processing, pages 210–218, Chiang Mai, Thailand. Denise Rey and Markus Neuh¨auser. 2011. Wilcoxonsigned-rank test. In Miodrag Lovric, editor, International Encyclopedia of Statistical Science, pages 1658–1659. Springer, Berlin, Heidelberg. Bahar Salehi, Paul Cook, and Timothy Baldwin. 2014. Using distributional similarity of multi-way translations to predict multiword expression compositionality. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 472–481, Gothenburg, Sweden. Bahar Salehi, Paul Cook, and Timothy Baldwin. 2015. A word embedding approach to predicting the compositionality of multiword expressions. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 977–983, Denver, CO, USA. Richard Socher, Brody Huval, Christopher D. Manning, and Andrew Y. 
Ng. 2012. Semantic compositionality through recursive matrix-vector spaces. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and 3274 Computational Natural Language Learning, pages 1201–1211, Jeju Island, Korea. Karen Sparck Jones. 1972. A statistical interpretation of term specificity and its application in retrieval. Journal of documentation, 28(1):11–21. Robyn Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of general knowledge. In Proceedings of the 31st AAAI Conference on Artificial Intelligence, pages 4444–4451, San Francisco, CA, USA. Stephen Tratz and Eduard Hovy. 2010. ISI: Automatic classification of relations between nominals using a maximum entropy classifier. In Proceedings of the 5th International Workshop on Semantic Evaluation, pages 222–225, Uppsala, Sweden. Sriram Venkatapathy and Aravind K. Joshi. 2005. Measuring the relative compositionality of verbnoun (V-N) collocations by integrating features. In Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing, pages 899–906, Vancouver, BC, Canada. Vladimir Vovk. 2013. Kernel ridge regression. In Empirical Inference: Festschrift in Honor of Vladimir N. Vapnik, pages 105–116. Springer. Majid Yazdani, Meghdad Farahmand, and James Henderson. 2015. Learning semantic composition to detect non-compositionality of multiword expressions. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1733–1742, Lisbon, Portugal.
2019
316
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3275–3285 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3275 Robust Representation Learning of Biomedical Names Minh C. Phan Aixin Sun Yi Tay Nanyang Technological University, Singapore [email protected]; [email protected]; [email protected] Abstract Biomedical concepts are often mentioned in medical documents under different name variations (synonyms). This mismatch between surface forms is problematic, resulting in difficulties pertaining to learning effective representations. Consequently, this has tremendous implications such as rendering downstream applications inefficacious and/or potentially unreliable. This paper proposes a new framework for learning robust representations of biomedical names and terms. The idea behind our approach is to consider and encode contextual meaning, conceptual meaning, and the similarity between synonyms during the representation learning process. Via extensive experiments, we show that our proposed method outperforms other baselines on a battery of retrieval, similarity and relatedness benchmarks. Moreover, our proposed method is also able to compute meaningful representations for unseen names, resulting in high practical utility in real-world applications. 1 Introduction Representation learning of words (Mikolov et al., 2013; Pennington et al., 2014), and/or sentences (Kiros et al., 2015; Hill et al., 2016; Logeswaran and Lee, 2018) forms the bedrock of many modern NLP applications. These techniques, largely relying on context information, have a huge impact on downstream applications. To this end, learning effective and useful representations has been a highly fruitful area of research. Biomedical names1, however, are different from standard words and sentences. These names have both contextual and conceptual meanings. Contextual meaning reflects the contexts where the names appear, and it is specifically granted to each 1Biomedical names refer to surface forms that represent biomedical concepts. They can be official names in biomedical vocabularies or unofficial names mentioned in text. Concept (CUI) and their names Source C0343047: leiner’s disease, complement component 5 deficiency, c5d, complement 5 dysfunction, infantile seborrheic dermatitis, erythroderma desquamativum. UMLS C0154832: coats’ disease, abnormal retinal vascular development, unilateral retinal telangiectasis, coats telangiectasis NCBIDisease C0019168: hepatitis b virus surface antigen, hepatitis-b surface antigen, hbs ag, hbsag, hepatitis b surface antigen BC5CDRChemical Table 1: Example of biomedical concepts and their names taken from one vocabulary (UMLS (Li et al., 2016)) and two annotated datasets (NCBIDisease (Do˘gan et al., 2014) and BC5CDR-ChemicalChemical (Li et al., 2016)). The concepts are listed by concept unique identifiers (CUI) defined in UMLS. name. Names of a broad and popular concept often have slightly different contextual meanings. On the other hand, conceptual meaning maps to the definitions/contexts of the names’ associated concepts, i.e., CUIs as shown in Table 1. As such, names of the same concepts share the common conceptual meanings, although they can own different contextual information. As illustrated in Table 1, biomedical concepts appear in the text under various names. 
Representations of the names are also expected to be well clustered in their distributional space, i.e., names of the same concepts are close to each other and distant from those of other concepts. Learning such conceptually grounded representations is highly desired for a wide range of applications, e.g., synonym retrieval/discovery, biomedical name normalization, and query expansion. For the first time, we investigate the problem of biomedical name embedding. Our goal is to derive meaningful and robust representations for biomedical names from their surface forms. Unfortunately, this task is not trivial since two names 3276 can be strongly related but not necessarily belong to the same concept (e.g., ‘complement component 5 deficiency’ and ‘complement component 5’). Furthermore, names of a concept can be completely different regarding their surface forms (e.g., ‘leiner’s disease’ and ‘c5d’). As such, we establish the key desiderata for learning robust representations. First, the output representations need to be both conceptually and contextually meaningful. Second, name representations that belong to the same concepts should be similar to each other, i.e., conceptual grounding. To this end, our proposed encoding framework incorporates three new objectives, namely context, concept, and synonym-based objectives. We formulate the representation learning process as a synonym prediction task, with context and conceptual losses acting as regularizers, preventing two synonyms from collapsing into semantically meaningless representations. As illustrated in Figure 1, synonym-based objective enforces similar representations between synonymous names, while concept-based objective pulls the name’s representations closer to its concept’s centroid. On the other hand, context-based objective aims to minimize the difference between the derived representation and its specific contextual representation. More concretely, our approach adopts a recurrent sequence encoding model to extract the semantics of biomedical names, and to learn the alternative naming of biomedical concepts. Our approach does not need any additional annotations on biomedical text. To be specific, we do not need the biomedical names to be pre-annotated in the text. Instead, we utilize available synonym sets in a metathesaurus vocabulary (e.g., UMLS), as the only additional resource for training. Our main contributions in this work are summarized as follows. For the first time, we investigate the problem of biomedical name embedding and its applications. We pay attention to the similarity between semantically related names as well as the names of the same concept. Furthermore, we define and distinguish three aspects constituting to quality of biomedical name representations. We propose a novel encoding framework that considers all these aspects in the representation learning. Finally, we evaluate the proposed encoder in biomedical synonym retrieval, name normalization, and semantic similarity and relatedness benchmarks. In most of these experiments, our Context(𝒔) Synonyms’ Concept(𝒔) 𝓛𝒅𝒆𝒇 𝓛𝒄𝒕𝒙 𝓛syn s Figure 1: Illustration of three aspects, which are associated to three training objectives, for computing representation of biomedical name s. Intuitively, the representation is supposed to be similar to its synonym’s as well as its conceptual and contextual representations. model significantly outperforms other baselines. 
2 Related Work

Our problem setting of name embedding is different from recent works in biomedical word embeddings (Chiu et al., 2016; Wang et al., 2018) and concept embeddings (Beam et al., 2018; Cai et al., 2018). Our goal is to derive a meaningful representation for a sequence of words that likely represents a concept. This setting is also orthogonal to works that only focus on estimating the matching between names (Li et al., 2017; Liu et al., 2018). There are several options to encode variable-length names/phrases into fixed-sized vector representations. Existing approaches range from phrase-level extensions of word embeddings and compositions of pre-trained word representations to sequence encoding neural networks.

Contextual Word Embeddings. We revisit the skip-gram model (Mikolov et al., 2013), one of the most popular context-based embedding approaches. The model computes the representations for both the target word $w_t$ and the context word $w_c$ by maximizing the following log-likelihood:

$\mathcal{L}_W = \sum_{w_t,\, w_c \in C_{w_t}} \log p(w_c \mid w_t)$  (1)

The probability of observing $w_c$ in the local context of $w_t$ is defined as follows:

$p(w_c \mid w_t) = \frac{\exp(v_{w_c}^{\top} u_{w_t})}{\sum_{w \in W} \exp(v_{w}^{\top} u_{w_t})}$

where $u_w$ and $v_w$ are the 'input' and 'output' vector representations of $w$. In this work, we refer to the input representations as contextual representations of words, or in short, word embeddings. The skip-gram model is extensible to names (or phrases) by treating them as special tokens:

$\mathcal{L}_S = \sum_{w_t,\, w_c \in C_{w_t}} \log p(w_c \mid w_t) + \sum_{s,\, w_c \in C_{s}} \log p(w_c \mid s)$  (2)

where $s$ is a special name token. Training of this model results in word and name embeddings.

Average of Contextual Word Embeddings. Another simple and effective method to compute name embeddings is taking the average of their constituent word embeddings. Since the words in a biomedical name are usually descriptive of its meaning, this simple baseline is expected to produce quality representations. FastText (Bojanowski et al., 2017) leverages this idea by considering character n-grams instead of words. Therefore, the model can derive representations for names that contain unseen words. The effectiveness of simple compositions such as taking the average or power mean has also been verified in phrase and sentence embeddings (Wieting et al., 2016; Arora et al., 2017; Rücklé et al., 2018).

Sequence Encoding Models. Sequence encoding models aim to capture more sophisticated semantics of character and word sequences. These models range from multilayer feed-forward networks (Iyyer et al., 2015) to convolutional (Kalchbrenner et al., 2014), recursive and recurrent neural networks (Socher et al., 2011; Tai et al., 2015). They also differ by the types of supervision used in training. Context-based sentence encoders (Kiros et al., 2015; Hill et al., 2016; Logeswaran and Lee, 2018) are based on the distributional hypothesis. The training utilizes sentences and their contexts (surrounding sentences), which can be extracted from an unlabeled corpus. Similar to contextual word embeddings, the derived sentence embeddings are expected to carry the contextual information. However, this contextual information does not fully reflect the paraphrastic characteristic, i.e., semantically similar sentences do not necessarily have identical meanings. These embeddings, therefore, are not favorable in applications that demand strong synonym identification.
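A small sketch of the averaging baseline described above (the SGW configuration used later in the experiments), assuming pre-trained skip-gram vectors loaded with gensim; the file path and the handling of out-of-vocabulary words are illustrative.

```python
import numpy as np
from gensim.models import KeyedVectors

def average_name_embedding(name, wv):
    """SG_W-style name representation: mean of the constituent word vectors.
    Words missing from the vocabulary are skipped; a zero vector is returned
    if nothing is covered."""
    vecs = [wv[w] for w in name.lower().split() if w in wv]
    if not vecs:
        return np.zeros(wv.vector_size, dtype=np.float32)
    return np.mean(vecs, axis=0)

# Hypothetical usage with pre-trained PubMed vectors (path is illustrative):
# wv = KeyedVectors.load("pubmed_skipgram_200d.kv")
# v = average_name_embedding("hepatitis b surface antigen", wv)
```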
In contrast, supervised or semi-supervised representation learning requires an annotated corpus, such as paraphrastic sentences or natural language inference data (Conneau et al., 2017; Wieting and Gimpel, 2017; Clark et al., 2018; Subramanian et al., 2018; Cer et al., 2018). However, most of these works focus on learning representations for sentences. The closest work to our problem setting is (Wieting et al., 2015). In this model, the authors utilize pairs of paraphrastic phrases as training data, e.g., 'does not exceed' and 'is no more than'. To prevent the trained model from overfitting, the authors introduce regularization terms applied to the encoder's parameters as well as to the difference between the initial and trainable word embeddings. Their evaluation, however, only considers the paraphrastic similarity of phrases.

Discussion. Our proposed encoder is based on BiLSTM (Graves and Schmidhuber, 2005), although it can be replaced by another sequence encoding model as mentioned above. Our approach utilizes synonym sets in UMLS to learn name representations, while also enforcing the learned representations to be similar to their contextual and conceptual representations. The idea is related to word vector specialization (retrofitting) (Faruqui et al., 2015; Mrkšić et al., 2017; Vulić et al., 2018). The difference is that we focus on learning representations for multi-word concept names, hence the contextual and conceptual constraints are essential, in addition to the synonymous similarity. In contrast, most retrofitting approaches mainly aim to improve word representations. These models map initial word embeddings into a new vector space that satisfies the synonymous similarity desiderata, while also constraining the new representations to be similar to the initial ones. Since the initial word representations can be assumed to encode both contextual and conceptual information of the words, these retrofitting approaches can be viewed as special cases of our proposed encoding framework.

3 Biomedical Name Encoder

For ease of presentation, we use three generic terms, $u_w$, $u_s$ and $u_c$, to denote pre-trained word, name and concept embeddings, respectively. These embeddings will be used as inputs in our encoding framework. Note that there are several options to calculate these embeddings and our encoder can be adapted to different calculation results. Before going into details, we present an extension of skip-gram, which will serve as a baseline. Furthermore, the outputs of this baseline will be used as pre-trained embeddings in one of the framework's configurations.

[Figure 2: Our proposed biomedical name encoding framework. The main encoder (BNE) is based on a two-level BiLSTM to capture both character- and word-level information of an input name. BNE parameters are learned by considering three training objectives. The synonym-based objective $\mathcal{L}_{syn}$ enforces similar representations between two synonymous names ($s$ and $s'$). The concept-based objective $\mathcal{L}_{def}$ and the context-based objective $\mathcal{L}_{ctx}$ apply similarity constraints on representations of names ($s$ or $s'$, which are interchangeable) and their conceptual and contextual representations ($g(c)$ and $q(x)$, respectively). Details about the $g(c)$ and $q(x)$ calculations are discussed in Section 3.2.]
3.1 Skip-gram with Context and Concept

The skip-gram model described by Equation 2 uses context words to calculate embeddings for names. Apart from the context words, we also consider the name's conceptual information in this new baseline. We leverage two sources of conceptual information: the words in a name, and the name's associated concept. We assume that names containing similar words tend to have similar meanings. Furthermore, names of the same concept will also share a common meaning. We introduce a new token type for concepts. The concept embeddings are trained in a similar way as name embeddings. Specifically, for this baseline, we utilize a pre-annotated corpus where names appearing in the training text are labeled with their associated concepts. We convert the annotated texts into sequences of word, name, and concept tokens to be used as inputs to the skip-gram model. For example, consider a pseudo sentence that has 4 words and contains a bigram name: wl w1 w2 wr. We map the annotated name w1 w2 to a name token si, and its annotated concept is denoted by ci. We create two sequences of tokens corresponding to this original sentence:

• wl, si, ci, w1, w2, wr
• wl, w1, w2, si, ci, wr

The name and concept tokens are placed on the left and right sides of the annotated name to avoid being biased toward any single side. These token sequences are subsequently fed as inputs to the skip-gram baseline (the training details are presented in Section 4). Outputs of this baseline are word, name and concept embeddings.

3.2 Biomedical Name Encoder with Context, Concept, and Synonym

Our proposed framework is illustrated in Figure 2. The encoder unit is based on BiLSTM to aggregate information from both the character and word levels. The encoded representations are constrained by three objectives, namely synonym-, context-, and concept-based objectives. The model utilizes synonym sets in UMLS as training data. We denote all the synonym sets as $U = \{S_c\}$, where $S_c$ includes all names of concept $c$, i.e., $S_c = \{s_i\}$.

Biomedical Name Encoder (BNE). The encoder extracts a fixed-sized representation for a given name (or surface form) $s$. We use one BiLSTM unit with last-pooling to encode the character-level information of each word. The representation is then concatenated with the pre-trained word embedding to form a word-level representation. Another BiLSTM unit with max-pooling is used to aggregate the semantics from the sequence of word representations. Finally, the aggregated representation is passed through a linear transformation. Mathematically, the encoding function is expressed as follows:

$h_{w_i} = [u_{w_i} \oplus \mathrm{last}(\mathrm{BiLSTM}(t_{i,1}, \dots, t_{i,m}))]$
$h_s = \max(\mathrm{BiLSTM}(h_{w_1}, \dots, h_{w_n}))$
$f(s) = W h_s + b$

where $u_{w_i}$ represents the pre-trained word embedding of word $w_i$ in name $s$, $t_{i,j}$ is a trainable character embedding in $w_i$, $\oplus$ denotes vector concatenation, and $W$ and $b$ are parameters of the last transformation. Next, we detail the three objectives used to train the encoder.

Synonym-based Similarity. Representations of names that belong to the same concept should be similar to each other. We formulate this objective using the following loss function:

$\mathcal{L}_{syn} = \sum_{(s, s') \in S_c \times S_c} d(f(s), f(s'))$  (3)

where $d(\cdot, \cdot)$ is a function that measures the difference between two representations. As mentioned in the introduction, training the encoder using only this synonym-based objective will lead to biased representations. Specifically, the encoder will be trained to act like a hash function, which performs well on determining whether two names are synonyms of each other.
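A minimal PyTorch-style sketch of the encoder f(s) defined above: a character-level BiLSTM per word, concatenation with a frozen pre-trained word vector, a word-level BiLSTM with max-pooling, and a final linear projection. Padding and masking are omitted and the character-level "last-pooling" is simplified, so this is an approximation of the architecture rather than the authors' implementation; dimensions mirror the values reported later in the experiments.

```python
import torch
import torch.nn as nn

class BNE(nn.Module):
    """Minimal sketch of the two-level name encoder f(s)."""

    def __init__(self, n_chars, word_emb, char_dim=50, hidden=200, out_dim=200):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim, padding_idx=0)
        # Pre-trained word embeddings are kept fixed (non-trainable).
        self.word_emb = nn.Embedding.from_pretrained(word_emb, freeze=True)
        self.char_lstm = nn.LSTM(char_dim, hidden, batch_first=True,
                                 bidirectional=True)
        self.word_lstm = nn.LSTM(word_emb.size(1) + 2 * hidden, hidden,
                                 batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden, out_dim)

    def forward(self, char_ids, word_ids):
        # char_ids: (batch, n_words, n_chars); word_ids: (batch, n_words)
        b, n_words, n_chars = char_ids.size()
        chars = self.char_emb(char_ids.view(b * n_words, n_chars))
        char_out, _ = self.char_lstm(chars)
        char_repr = char_out[:, -1, :].view(b, n_words, -1)   # simplified last-pooling

        h_w = torch.cat([self.word_emb(word_ids), char_repr], dim=-1)
        word_out, _ = self.word_lstm(h_w)
        h_s, _ = word_out.max(dim=1)                           # max-pooling over words
        return self.proj(h_s)                                  # f(s)
```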
Trained with only this synonym-based objective, however, the encoder likely loses the semantics of names. As a remedy, we further introduce concept- and context-based objectives to regularize the representations.

Conceptual Meaningfulness. Representations of biomedical names should be similar to those of their associated concepts. This objective complements the synonym-based objective introduced earlier: in addition to shifting synonymous embeddings close to each other, training now also pulls them near their concept's centroid. The objective is expressed as:

$\mathcal{L}_{def} = \sum_{c,\, s \in S_c} d(f(s), g(c))$  (4)

where $g(c)$ returns a vector that encodes the conceptual information of the corresponding concept $c$. There are several options for this representation. It can be a mapping to pre-trained concept embeddings learned from a large corpus, i.e., $g(c) = u_c$. Another option is taking a composition (e.g., the average) of all its name embeddings (see Table 1), i.e., $g(c) = \frac{1}{|S_c|} \sum_{s \in S_c} u_s$. Furthermore, when a definition of the concept is available, $g(c)$ can be modeled as another encoding function that extracts the conceptual meaning from the definition.

Contextual Meaningfulness. Each name representation should accommodate the specific contextual information owned by the name, formulated as:

$\mathcal{L}_{ctx} = \sum_{s,\, x \in X_s} d(f(s), q(x))$  (5)

where $X_s$ represents all local contexts of name $s$, and $q(x)$ returns the contextual representation of local context $x$. A straightforward way to model $X_s$ is using the local context words of $s$. However, this modeling is computationally expensive since the training will need to iterate through all the context words of the name. Alternatively, the contextual information can be modeled using a 1-hop approximation of the name's local contexts, which is mapped to the name's contextual representation, i.e., $X_s = \{s\}$ and $q(x) = q(s) = u_s$. We also consider another approximation where the contextual representation is further approximated by its pre-trained word embeddings, i.e., $q(s) = \frac{1}{|T(s)|} \sum_{w \in T(s)} u_w$, where $T(s)$ represents the words in name $s$. Intuitively, in these two approximations, we assume that the pre-trained name or word embeddings carry local contextual information since they are trained by context-based approaches (see Section 2).

Combined Loss Function. The final loss function combines all the introduced losses:

$\mathcal{L}_{BNE} = \mathcal{L}_{syn} + \mathcal{L}_{def} + \mathcal{L}_{ctx}$  (6)

For simplicity, we ignore weighting factors that control the contribution of each loss. However, applying and fine-tuning these factors will shift the encoding results more toward either semantic similarity or synonym-based similarity.

Choices of g(c) and q(x). Several options to calculate the conceptual and contextual representations are discussed earlier. Note that the two representations should be placed in the same distributional space. As such, the implicit relations between them are encoded in, and can be decoded from, their representations. For efficiency, we model the local contexts $X_s$ using contextual information encoded in the name itself, i.e., $X_s = \{s\}$ and $q(x) = q(s)$. To this end, we focus on studying two combinations of $g(c)$ and $q(s)$:

• Option 1: Both $g(c)$ and $q(s)$ directly map to the pre-trained concept and name embeddings, respectively, i.e., $g(c) = u_c$ and $q(s) = u_s$. These embeddings are the outputs of our proposed extension of the skip-gram model (see Section 3.1). This option requires an annotated biomedical corpus.
• Option 2: The contextual presentation q(s) is approximated by the average of pretrained words embeddings, i.e., q(s) = 1 |T (s)| P w∈Ts uw; and g(c) is the average of all contextual presentations associated to the 3280 concept, i.e., g(c) = 1 |Sc| P s∈Sc q(s). These computations only require pre-trained word embeddings, and a dictionary of names and concepts, e.g., UMLS. Distance Function and Optimization. Distance function d can be Euclidean distance or Kullback-Leibler divergence. Alternatively, the optimization can be modeled as binary classification, motivated by its efficiency and effectiveness (Conneau et al., 2017; Wieting and Gimpel, 2017; Logeswaran and Lee, 2018). Another benefit of using classification is to align the encoded BNE vectors to the pre-trained word, name, and concept embeddings. The pre-trained embeddings are derived by skip-gram with negative sampling (Mikolov et al., 2013), which is also formulated as classification. In a similar way, we adopt logistic loss with dot product classifier for all the objectives. For example, the updated loss function for Lsyn is rewritten as follows: ℓ(f(s′)⊺f(s)) + X ¯s∈Ns ℓ(−f(¯s)⊺f(s)) where ℓis the logistic loss function ℓ: x 7→ log(1 + e−x). Negative name ¯s is sampled from a mini-batch during optimization, similar to (Wieting et al., 2015). In a similar way, the loss functions Ldef and Lctx are also updated accordingly. 4 Experiments We first detail the implementations of baselines and the proposed BNE model. We then evaluates all the models with 4 different tasks in retrieval, embedding similarity and relatedness benchmarks. Skip-gram Baselines. We consider three variants of skip-gram (with negative sampling). SGW obtains word embeddings by training the very basic skip-gram model (see Equation 1). To get the representation for a name, we simply take the average of its associated word embeddings. SGS is another variant that considers names as special tokens. The model obtains embeddings for word and names concurrently (see Equation 2). SGS training requires input text to be segmented into names and regular words. SGS.C is our proposed extension of skip-gram model. As introduced in Section 3.1, this baseline requires an annotated corpus where the names are labeled with their associated concepts. Training of Skip-gram Baselines. We use PubMed corpus, which consists of 29 million biomedical abstracts, to train SGW . For SGS and SGS.C, we further utilize the annotations provided in Pubtator (Wei et al., 2013). The annotations (names and their associated concepts) come with five categories: disease, chemical, gene, species, and mutation. We use annotations of the two popular classes: disease and chemical. In preprocessing, text is tokenized and lowercased. Words that appear less than 3 times are ignored. We use spaCy library for this parsing. In total, our vocabulary contains approximately 3 millions words, 700 thousand names, and 85 thousand CUIs. We use Gensim library to train all the skip-gram baselines. The embedding dimension is 200, and the context window size is 6. Negative sampling is used with the number of negatives set to 5. Biomedical Named Encoder (BNE). We set the character embedding dimension to 50, and initialize their values randomly. We use 200 dimensions for the outputted name embeddings. The hidden states’ dimensions for both character and wordlevel BiLSTM are 200. We use Adam optimizer with the learning rate of 0.001, and gradient clipping threshold set to 5.0. Training batch size is 64. 
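A sketch of the combined training objective described above, using the logistic loss with dot-product scores and negatives drawn from the mini-batch. Sampling several negatives per example and omitting loss weights are simplifications of the setup; the tensors q_s and g_c are assumed to be precomputed (under Option 2, averages of pre-trained word vectors and their per-concept means).

```python
import torch
import torch.nn.functional as F

def logistic_pair_loss(anchor, positive, negatives):
    """l(x) = log(1 + exp(-x)) on dot products: positive pairs are pushed to
    high scores, in-batch negatives to low scores (softplus(-x) equals l(x))."""
    pos = F.softplus(-(anchor * positive).sum(-1))
    neg = F.softplus((anchor.unsqueeze(1) * negatives).sum(-1)).mean(-1)
    return (pos + neg).mean()

def bne_loss(f_s, f_syn, q_s, g_c, negatives):
    """Combined objective L_syn + L_def + L_ctx for one mini-batch.
    f_s, f_syn: encoder outputs for a name and one of its synonyms.
    q_s, g_c:   contextual / conceptual target vectors.
    negatives:  (batch, n_neg, dim) tensor sampled from the same batch."""
    l_syn = logistic_pair_loss(f_s, f_syn, negatives)
    l_def = logistic_pair_loss(f_s, g_c, negatives)
    l_ctx = logistic_pair_loss(f_s, q_s, negatives)
    return l_syn + l_def + l_ctx
```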
Dropout with a rate of 0.5 is used to regularize the model. Average performance on the validation sets of the biomedical name normalization experiment (see Section 4.3) is used as the criterion to stop model training.

Training of BNE. Our proposed model is trained using only the synonym sets in UMLS (we use the 2018AA version released in May 2018), i.e., $U = \{S_c\}$. We limit the synonyms to those of disease concepts that exist in the CTD's MEDIC disease vocabulary (Davis et al., 2014). We intentionally leave the chemical concepts out for out-domain evaluation. As a result, approximately 16 thousand synonym sets (associated with that number of disease concepts) are collected for training. These synonym sets include 156 thousand disease names in total. In each training batch, one positive and one negative pair are sampled separately for each loss. The pre-trained word (or name/concept) embeddings are taken from the skip-gram baselines as described before. We denote the two configurations, associated with Options 1 and 2 (see Section 3.2), as BNE + SGS.C and BNE + SGW, respectively. Next, we present the evaluations of these models.

[Figure 3: t-SNE visualization of 254 name embeddings under SGW, SGS.C, BNE + SGW and BNE + SGS.C. The names belong to 10 disease concepts (cardiotoxicity, hypertrophic cardiomyopathy, endotoxemia, ischemic colitis, hematologic diseases, parkinson disease, lead poisoning, pseudotumor cerebri, paranoid disorders, rheumatic diseases); five of them (hypertrophic cardiomyopathy, ischemic colitis, parkinson disease, pseudotumor cerebri, rheumatic diseases) do not appear in the training data. It can be observed that BNE projects names of the same concept close to each other. The model also retains closeness between names of related concepts, such as 'parkinson disease' and 'paranoid disorders'.]

[Figure 4: Mean coverage at k: the average ratio of correct synonyms found among the k nearest neighbors, estimated by cosine similarity of name embeddings, for (a) diseases (in-domain) and (b) chemicals (out-domain). Note that names in these disease and chemical test sets are not seen in the training data.]

4.1 Closeness Analysis of Synonymous Embeddings

We propose a measure to estimate the closeness between name embeddings of the same concept. For each name, we consider its k most similar names estimated by cosine similarity of their embeddings. We define coverage at k as the ratio of correct synonyms that are found among the k nearest neighbors. We report the average score over all query names as mean coverage at k.

We create two test sets for this experiment, one for disease names and one for chemical names. Given the CTD's MEDIC disease vocabulary, we randomly select 1000 concepts and all their corresponding names in UMLS. In this experiment, we exclude these 1000 concepts from the synonym sets used to train the BNE encoder. Furthermore, to ensure the quality of the selected names, we only consider the ones that appear in the high-quality biomedical phrases collected by Kim et al. (2018). Similarly, we create another test set for chemical names. This chemical set is used to evaluate out-domain performance since our model is trained using only disease synonyms.

As shown in Figure 4, BNE outperforms other embedding baselines that do not consider the synonym-based objective.
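A sketch of the mean coverage at k measure just defined, computed directly from a matrix of name embeddings and their concept labels; handling of ties and of k larger than the candidate pool is left out for brevity.

```python
import numpy as np

def mean_coverage_at_k(embeddings, concept_ids, k=16):
    """Average, over all query names, of the fraction of a query's true
    synonyms found among its k nearest neighbors (cosine similarity).
    embeddings: (n_names, dim) array; concept_ids: length-n_names labels."""
    X = embeddings / (np.linalg.norm(embeddings, axis=1, keepdims=True) + 1e-12)
    sims = X @ X.T
    np.fill_diagonal(sims, -np.inf)          # a name is not its own neighbor
    concept_ids = np.asarray(concept_ids)

    coverages = []
    for i in range(len(X)):
        synonyms = np.where(concept_ids == concept_ids[i])[0]
        synonyms = synonyms[synonyms != i]
        if len(synonyms) == 0:
            continue
        top_k = np.argpartition(-sims[i], k)[:k]
        coverages.append(len(np.intersect1d(top_k, synonyms)) / len(synonyms))
    return float(np.mean(coverages))
```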
More importantly, the model also generalizes well to out-domain data (chemical names). Furthermore, among the skipgram baselines, the context-based name embedding model (SGS) is worse than the average word embedding baseline (SGW). The result again indicates that words in biomedical names are more indicative about their conceptual identities. The embedding plots in Figure 3 further illustrate the effectiveness of our encoder in enhancing the similarity between synonymous representations. By investigating name embeddings of an unseen concept ‘pseudotumor cerebri’, we observe that BNE is robust to the morphology of biomedical names, such as ‘benign hypertension intracranial’ and ‘ benign intracran hypt’. The model is also aware of word importance in long names such as ‘intracranial pressure increased (benign)’. Moreover, since BNE is trained using synonym sets, the encoder is equipped with knowledge about alternative expressions of biomedical terms, e.g., ‘intracranial hypertension’ and ‘intracranial increased pressure’. The knowledge can be used to infer quality representations for new synonyms. However, similar to skip-gram baselines, BNE faces serious challenges if the names are unpopular and contain words that do not reflect their conceptual meanings. For example, for this ‘pseudotumor cerebri’ concept, the name “Nonne’s syndrome”4 is distant from its concept cluster (see the red square locating near the blue plus signs in Figure 3). 4Dr. Max Nonne coined the name ‘pseudotumor cerebri’ in 1904. 3282 Models NCBI (Disease) BC5CDR (Disease) BC5CDR (Chemical) Jaccard 0.424 0.410 0.607 SGW 0.499 0.494 0.598 SGW + WMD 0.532 0.526 0.637 SGS 0.487 0.472 0.623 SGS.C 0.531 0.510 0.628 BNE + SGW 0.695 0.718 0.664 BNE + SGS.C 0.713 0.734 0.672 Table 2: Mean average precision (MAP) performance on the synonym retrieval task. The best and second best results are in boldface and underlined, respectively. 4.2 Synonym Retrieval We evaluate the embeddings in synonym retrieval application: given a biomedical mention (or name), retrieving all its synonyms from a controlled vocabulary by ranking. We use NCBIDisease (Do˘gan et al., 2014) and BC5CDR (Li et al., 2016) datasets in this evaluation. NCBIDisease contains disease mentions extracted from PubMed abstracts, while BC5CDR contains both disease and chemical mentions. These mentions are used as queries in this synonym retrieval task. Note that, different from the closeness evaluation, a disease name may or may not appear in the synonym sets used to train BNE encoder. On the other hand, chemical queries are completely unseen during the model training. For each query, we retrieve a list of potentially associated concepts. A concept is retrieved if one of its names is similar to the query (estimated by BM25 score). We collect all names of the top-20 retrieved concepts as a synonym candidate set. Cosine similarity is then used to rank the candidates. We also evaluate the results with Jaccard and Word’s Mover Distance (WMD) (Kusner et al., 2015) measures. As shown in Table 2, SGW +WMD outperforms Jaccard baseline (in MAP score), mainly because of its ability to capture semantic matching. However, both baselines are non-parametric. In contrast, BNE+SGW learns additional knowledge about the synonym matching by using synonyms sets in UMLS as training data. Although the model is trained on only disease names, it also generalizes well to chemical names. 
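For the synonym retrieval evaluation described above, once the top-20 candidate concepts (and their names) have been retrieved with BM25, the cosine re-ranking and per-query average precision can be sketched as follows; the candidate generation step is assumed to happen upstream and is not shown.

```python
import numpy as np

def average_precision(ranked_cuis, gold_cui):
    """AP for one query: relevant items are ranked candidate names whose
    concept matches the query's gold concept."""
    hits, precisions = 0, []
    for rank, cui in enumerate(ranked_cuis, start=1):
        if cui == gold_cui:
            hits += 1
            precisions.append(hits / rank)
    return float(np.mean(precisions)) if precisions else 0.0

def rerank_and_score(query_vec, cand_vecs, cand_cuis, gold_cui):
    """Re-rank BM25 candidates by cosine similarity to the query mention,
    then compute AP.  cand_vecs: (n_cand, dim); cand_cuis: their concepts."""
    q = query_vec / (np.linalg.norm(query_vec) + 1e-12)
    C = cand_vecs / (np.linalg.norm(cand_vecs, axis=1, keepdims=True) + 1e-12)
    order = np.argsort(-(C @ q))
    return average_precision([cand_cuis[i] for i in order], gold_cui)

# MAP is the mean of rerank_and_score over all query mentions.
```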
Furthermore, comparing between the two configurations of BNE, both BNE+SGW and BNE+SGSC models yield comparable performances. However, BNE+SGW is simpler since it does not require pre-trained name and concept embeddings. Models NCBI (Di) BC5CDR (Di) BC5CDR (Ch) Jaccard 0.843 0.772 0.935 SGW 0.800 0.725 0.771 SGW + WMD 0.779 0.731 0.919 SGS 0.815 0.790 0.929 SGS.C 0.838 0.811 0.929 BNE + SGW 0.854 0.829 0.930 BNE + SGS.C 0.857 0.829 0.934 Wieting et al. (2015) 0.822 0.813 0.930 D’Souza and Ng (2015) 0.847 0.841 Leaman and Lu (2016) 0.877† 0.889† 0.941 Wright et al. (2019) 0.878† 0.880† BNE + SGW + XM 0.873 0.905 0.954 BNE + SGS.C + XM 0.877 0.906 0.958 Table 3: Name normalization accuracy on disease (Di) and chemical (Ch) datasets. The last row group includes the results of supervised models that utilize training annotations in each specific dataset. XM denotes the use of ‘exact match’ rule to assign the corresponding concept to a mention if the mention is found in the training data. † indicates the results reported by Wright et al. (2019). 4.3 Biomedical Name Normalization Biomedical name normalization (a.k.a., biomedical concept linking) aims to map each biomedical mention appearing in text to its associated concept in a dictionary. We use NCBI-Disease and BC5CDR datasets in this evaluation. Similar to previous works, we use Ab3P (Sohn et al., 2008) to resolve local abbreviations. Composite mentions (such as ‘pineal and retinal tumors’) are split into separate mentions (‘pineal tumors’ and ‘retinal tumors’) using simple patterns as described in (D’Souza and Ng, 2015). For each mention, we find the concept CUI (in UMLS) that has the most similar name. The selected CUI is then mapped to its associated MeSH or OMIM ID in the CTD dictionary for evaluation. We only consider mentions whose associated concepts exist in the CTD dictionary and report the accuracy aggregated from all mentions in test set. Apart from existing baselines, we also re-implement compositional paraphrase model, proposed by Wieting et al. (2015). The difference is that we use word-level BiLSTM instead of recursive neural network. Furthermore, L2 regularizations with the weights of 10−3 and 10−4 are applied on the BiLSTM’s parameters and the difference between the trainable and initial word embeddings, respectively. Different from the lexical (Jaccard) and semantic matching (WMD and SGW) baselines, BNE ob3283 tains high scores in both accuracy and rankingbased (MAP) metrics (see Tables 2, and 3). The result indicates that BNE has encoded both lexical and semantic information of names into their embeddings. Table 3 also includes performances of other state-of-the-art baselines in biomedical name normalization, such as sieve-based (D’Souza and Ng, 2015), supervised semantic indexing (Leaman and Lu, 2016), and coherence-based neural network Wright et al. (2019) approaches. Note that all these baselines require human annotated labels, and the models are specifically tuned for each dataset. On the other hand, BNE utilizes only the existing synonym sets in UMLS for training. When the dataset-specific annotations are utilized, even the simple exact matching rule can boost the performance of our model to surpass other baselines (see the last two rows in Table 3). 4.4 Semantic Similarity and Relatedness We evaluate the correlation between embedding cosine similarity and human judgments, regarding semantic similarity and relatedness. 
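Returning briefly to the normalization procedure of Section 4.3, the per-mention decision rule can be sketched as below: each mention is mapped to the concept of its most similar dictionary name by cosine similarity, with an optional exact-match (XM) lookup against the training annotations applied first. Abbreviation resolution and composite-mention splitting are assumed to have been done beforehand, and the argument names are hypothetical.

```python
import numpy as np

def normalize_mention(mention, mention_vec, name_vecs, name_cuis, train_lookup=None):
    """Map a mention to a concept identifier (CUI).

    name_vecs: (n_names, dim) embeddings of all dictionary names.
    name_cuis: their associated concept identifiers.
    train_lookup: optional dict of lowercased training mentions -> CUI (XM rule)."""
    key = mention.lower()
    if train_lookup and key in train_lookup:          # exact-match rule
        return train_lookup[key]

    q = mention_vec / (np.linalg.norm(mention_vec) + 1e-12)
    N = name_vecs / (np.linalg.norm(name_vecs, axis=1, keepdims=True) + 1e-12)
    best = int(np.argmax(N @ q))
    return name_cuis[best]
```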
Different from previous evaluations, this experiment aims to evaluate the conceptual similarity and relatedness. We use two biomedical datasets: MayoSRS (Pakhomov et al., 2011) and UMNSRS (Pakhomov et al., 2016). The former contains multi-word name pairs of related concepts, e.g., ‘morning stiffness’ (C0457086) and ’rheumatoid arthriits’ (C0003873). The latter contains only single-word name pairs and is spitted into similarity and relatedness partitions. For example, a pair with high similarity score are ‘weakness’ (C1883552) and ‘paresis’ (C0030552). For these two datasets, the names in each pair comes from different concepts, hence they do not appear in the synonym pairs used to train our encoder. Furthermore, the coverage of pre-trained word embeddings in baselines such as SGW are 100% and 97% for UMNSRS and MayoSRS, respectively. Table 4 shows that BNE models perform especially well on the multi-word relatedness test set (MayoSRS). Conceptual information has been utilized by these models to enrich the name representations. On the other hand, when the training is performed solely on the synonym pairs (only use Lsyn), the trained model is overfitted to the training task and do not generalize to other test cases. SGW is still a strong baseline in these benchmarks. Other skip-gram and fastText embedModels UMNSRS (sim) UMNSRS (rel) MayoSRS (rel) SGW 0.645 0.584 0.518 Pakhomov et al. (2016) 0.620 0.580 Chen et al. (2018) 0.630 0.575 0.501 Beam et al. (2018) 0.411 0.334 0.427 SGS 0.614 0.566 0.516 SGS.C 0.654 0.592 0.557 BNE + SGW 0.606 0.580 0.626 BNE + SGS.C 0.637 0.593 0.602 BNE + SGS.C (Lsyn) 0.496 0.445 0.564 Wieting et al. (2015) 0.639 0.565 0.595 Table 4: Spearman’s rank correlation coefficient between cosine similarly scores of name embeddings and human judgments, reported on semantic similarity (sim) and relatedness (rel) benchmarks. dings (Pakhomov et al., 2016; Chen et al., 2018), which are trained on a similar corpus, do not achieve better results. Beam et al. (2018) use a SVD-based word2vec model (Levy et al., 2015) to compute embeddings for biomedical concepts. Although the embeddings are trained on a much larger multimodal medical data, their results are lower than other baselines. Further investigation reveals that many concepts in the test sets do not exist in their pre-trained concept embeddings. 5 Conclusion By learning to encode names of the same concepts into similar representations, while preserving their conceptual and contextual meanings, our encoder is able to extract meaningful representations for unseen names. The core unit of our encoder (in this work) is BiLSTM. Alternatively, sequence encoding models such as GRU, CNN, transformer, or even encoders with contextualized word embeddings like BERT (Devlin et al., 2018), or ELMo (Peters et al., 2018) can be used to replace this BiLSTM, however, with additional computation cost. We also discuss different ways of representing the contextual and conceptual information in our framework. In implementation, we use the simple aggregation of pre-trained embeddings. The experiment results show that this approach is both efficient and effective. Acknowledgments We thank the anonymous reviewers for their insightful suggestions. This work was supported by Data Science & Artificial Intelligence Research Centre, NTU Singapore. 3284 References Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2017. A simple but tough-to-beat baseline for sentence embeddings. In ICLR. 
Andrew L Beam, Benjamin Kompa, Inbar Fried, Nathan P Palmer, Xu Shi, Tianxi Cai, and Isaac S Kohane. 2018. Clinical concept embeddings learned from massive sources of medical data. CoRR, abs/1804.01486. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. TACL, 5:135–146. Xiangrui Cai, Jinyang Gao, Kee Yuan Ngiam, Beng Chin Ooi, Ying Zhang, and Xiaojie Yuan. 2018. Medical concept embedding with time-aware attention. In IJCAI, pages 3984–3990. Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, et al. 2018. Universal sentence encoder. CoRR, abs/1803.11175. Qingyu Chen, Yifan Peng, and Zhiyong Lu. 2018. Biosentvec: creating sentence embeddings for biomedical texts. CoRR, abs/1810.09302. Billy Chiu, Gamal Crichton, Anna Korhonen, and Sampo Pyysalo. 2016. How to train good word embeddings for biomedical nlp. In BioNLP, pages 166–174. Kevin Clark, Minh-Thang Luong, Christopher D Manning, and Quoc Le. 2018. Semi-supervised sequence modeling with cross-view training. In EMNLP, pages 1914–1925. Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo¨ıc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In EMNLP, pages 670–680. Allan Peter Davis, Cynthia J Grondin, Kelley LennonHopkins, Cynthia Saraceni-Richards, Daniela Sciaky, Benjamin L King, Thomas C Wiegers, and Carolyn J Mattingly. 2014. The comparative toxicogenomics database’s 10th year anniversary: update 2015. Nucleic acids research, 43(D1):D914– D920. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805. Rezarta Islamaj Do˘gan, Robert Leaman, and Zhiyong Lu. 2014. Ncbi disease corpus: a resource for disease name recognition and concept normalization. Journal of biomedical informatics, 47:1–10. Jennifer D’Souza and Vincent Ng. 2015. Sieve-based entity linking for the biomedical domain. In ACL — IJCNLP, volume 2, pages 297–302. Manaal Faruqui, Jesse Dodge, Sujay Kumar Jauhar, Chris Dyer, Eduard Hovy, and Noah A Smith. 2015. Retrofitting word vectors to semantic lexicons. In NAACL-HLT, pages 1606–1615. Alex Graves and J¨urgen Schmidhuber. 2005. Framewise phoneme classification with bidirectional lstm and other neural network architectures. Neural Networks, 18(5-6):602–610. Felix Hill, Kyunghyun Cho, and Anna Korhonen. 2016. Learning distributed representations of sentences from unlabelled data. In NAACL-HLT, pages 1367– 1377. Mohit Iyyer, Varun Manjunatha, Jordan Boyd-Graber, and Hal Daum´e III. 2015. Deep unordered composition rivals syntactic methods for text classification. In ACL — IJCNLP, volume 1, pages 1681–1691. Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. A convolutional neural network for modelling sentences. In ACL, volume 1, pages 655– 665. Sun Kim, Lana Yeganova, Donald C Comeau, W John Wilbur, and Zhiyong Lu. 2018. Pubmed phrases, an open set of coherent phrases for searching biomedical literature. Scientific data, 5:180104. Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. In NeurIPS, pages 3294–3302. Matt Kusner, Yu Sun, Nicholas Kolkin, and Kilian Weinberger. 2015. From word embeddings to document distances. In ICML, pages 957–966. Robert Leaman and Zhiyong Lu. 
2016. Taggerone: joint named entity recognition and normalization with semi-markov models. Bioinformatics, 32(18):2839–2846. Omer Levy, Yoav Goldberg, and Ido Dagan. 2015. Improving distributional similarity with lessons learned from word embeddings. TACL, 3:211–225. Haodi Li, Qingcai Chen, Buzhou Tang, Xiaolong Wang, Hua Xu, Baohua Wang, and Dong Huang. 2017. Cnn-based ranking for biomedical entity normalization. BMC bioinformatics, 18(11):385. Jiao Li, Yueping Sun, Robin J Johnson, Daniela Sciaky, Chih-Hsuan Wei, Robert Leaman, Allan Peter Davis, Carolyn J Mattingly, Thomas C Wiegers, and Zhiyong Lu. 2016. Biocreative v cdr task corpus: a resource for chemical disease relation extraction. Database, 2016. 3285 Miaofeng Liu, Jialong Han, Haisong Zhang, and Yan Song. 2018. Domain adaptation for disease phrase matching with adversarial networks. In BioNLP, pages 137–141. Lajanugen Logeswaran and Honglak Lee. 2018. An efficient framework for learning sentence representations. CoRR, abs/1803.02893. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In NeurIPS, pages 3111–3119. Nikola Mrkˇsi´c, Ivan Vuli´c, Diarmuid ´O S´eaghdha, Ira Leviant, Roi Reichart, Milica Gaˇsi´c, Anna Korhonen, and Steve Young. 2017. Semantic specialization of distributional word vector spaces using monolingual and cross-lingual constraints. TACL, pages 309–324. Serguei VS Pakhomov, Greg Finley, Reed McEwan, Yan Wang, and Genevieve B Melton. 2016. Corpus domain effects on distributional semantic modeling of medical terms. Bioinformatics, 32(23):3635– 3644. Serguei VS Pakhomov, Ted Pedersen, Bridget McInnes, Genevieve B Melton, Alexander Ruggieri, and Christopher G Chute. 2011. Towards a framework for developing semantic relatedness reference standards. Journal of biomedical informatics, 44(2):251–265. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In EMNLP, pages 1532–1543. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In NAACL-HLT, volume 1, pages 2227– 2237. Andreas R¨uckl´e, Steffen Eger, Maxime Peyrard, and Iryna Gurevych. 2018. Concatenated p-mean word embeddings as universal cross-lingual sentence representations. CoRR, abs/1803.01400. Richard Socher, Eric H Huang, Jeffrey Pennin, Christopher D Manning, and Andrew Y Ng. 2011. Dynamic pooling and unfolding recursive autoencoders for paraphrase detection. In NeurIPS, pages 801– 809. Sunghwan Sohn, Donald C Comeau, Won Kim, and W John Wilbur. 2008. Abbreviation definition identification based on automatic precision estimates. BMC bioinformatics, 9(1):402. Sandeep Subramanian, Adam Trischler, Yoshua Bengio, and Christopher J Pal. 2018. Learning general purpose distributed sentence representations via large scale multi-task learning. In ICLR. Kai Sheng Tai, Richard Socher, and Christopher D Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. In ACL — IJCNLP, volume 1, pages 1556– 1566. Ivan Vuli´c, Goran Glavaˇs, Nikola Mrkˇsi´c, and Anna Korhonen. 2018. Post-specialisation: Retrofitting vectors of words unseen in lexical resources. In NAACL-HLT, pages 516–527. Yanshan Wang, Sijia Liu, Naveed Afzal, Majid Rastegar-Mojarad, Liwei Wang, Feichen Shen, Paul Kingsbury, and Hongfang Liu. 2018. 
A comparison of word embeddings for the biomedical natural language processing. Journal of biomedical informatics, 87:12–20. Chih-Hsuan Wei, Hung-Yu Kao, and Zhiyong Lu. 2013. Pubtator: a web-based text mining tool for assisting biocuration. Nucleic acids research, 41(W1):W518–W522. John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2015. From paraphrase database to compositional paraphrase model and back. TACL, 3:345– 358. John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2016. Towards universal paraphrastic sentence embeddings. In ICLR. John Wieting and Kevin Gimpel. 2017. Revisiting recurrent networks for paraphrastic sentence embeddings. In ACL, pages 2078–2088. Dustin Wright, Yannis Katsis, Raghav Mehta, and Chun-Nan Hsu. 2019. Normco: Deep disease normalization for biomedical knowledge base construction. AKBC. ‘
2019
317
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3286–3296 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3286 Relational Word Embeddings Jose Camacho-Collados Luis Espinosa-Anke Steven Schockaert School of Computer Science and Informatics Cardiff University, United Kingdom {camachocolladosj,espinosa-ankel,schockaerts1}@cardiff.ac.uk Abstract While word embeddings have been shown to implicitly encode various forms of attributional knowledge, the extent to which they capture relational information is far more limited. In previous work, this limitation has been addressed by incorporating relational knowledge from external knowledge bases when learning the word embedding. Such strategies may not be optimal, however, as they are limited by the coverage of available resources and conflate similarity with other forms of relatedness. As an alternative, in this paper we propose to encode relational knowledge in a separate word embedding, which is aimed to be complementary to a given standard word embedding. This relational word embedding is still learned from co-occurrence statistics, and can thus be used even when no external knowledge base is available. Our analysis shows that relational word vectors do indeed capture information that is complementary to what is encoded in standard word embeddings. 1 Introduction Word embeddings are paramount to the success of current natural language processing (NLP) methods. Apart from the fact that they provide a convenient mechanism for encoding textual information in neural network models, their importance mainly stems from the remarkable amount of linguistic and semantic information that they capture. For instance, the vector representation of the word Paris implicitly encodes that this word is a noun, and more specifically a capital city, and that it describes a location in France. This information arises because word embeddings are learned from co-occurrence counts, and properties such as being a capital city are reflected in such statistics. However, the extent to which relational knowledge (e.g. Trump was the successor of Obama) can be learned in this way is limited. Previous work has addressed this by incorporating external knowledge graphs (Xu et al., 2014; Celikyilmaz et al., 2015) or relations extracted from text (Chen et al., 2016). However, the success of such approaches depends on the amount of available relational knowledge. Moreover, they only consider well-defined discrete relation types (e.g. is the capital of, or is a part of), whereas the appeal of vector space representations largely comes from their ability to capture subtle aspects of meaning that go beyond what can be expressed symbolically. For instance, the relationship between popcorn and cinema is intuitively clear, but it is more subtle than the assertion that “popcorn is located at cinema”, which is how ConceptNet (Speer et al., 2017), for example, encodes this relationship1. In fact, regardless of how a word embedding is learned, if its primary aim is to capture similarity, there are inherent limitations on the kinds of relations they can capture. For instance, such word embeddings can only encode similarity preserving relations (i.e. similar entities have to be related to similar entities) and it is often difficult to encode that w is in a particular relationship while preventing the inference that words with similar vectors to w are also in this relationship; e.g. Bouraoui et al. 
(2018) found that both (Berlin,Germany) and (Moscow,Germany) were predicted to be instances of the capital-of relation due to the similarity of the word vectors for Berlin and Moscow. Furthermore, while the ability to capture word analogies (e.g. king-man+woman≈queen) emerged as a successful illustration of how word embeddings can encode some types of relational information (Mikolov et al., 2013b), the generalization of this interesting property has proven to be less successful than initially anticipated (Levy et al., 2014; 1http://conceptnet.io/c/en/popcorn 3287 Linzen, 2016; Rogers et al., 2017). This suggests that relational information has to be encoded separately from standard similaritycentric word embeddings. One appealing strategy is to represent relational information by learning, for each pair of related words, a vector that encodes how the words are related. This strategy was first adopted by Turney (2005), and has recently been revisited by a number of authors (Washio and Kato, 2018a; Jameel et al., 2018; Espinosa Anke and Schockaert, 2018; Washio and Kato, 2018b; Joshi et al., 2019). However, in many applications, word vectors are easier to deal with than vector representations of word pairs. The research question we consider in this paper is whether it is possible to learn word vectors that capture relational information. Our aim is for such relational word vectors to be complementary to standard word vectors. To make relational information available to NLP models, it then suffices to use a standard architecture and replace normal word vectors by concatenations of standard and relational word vectors. In particular, we show that such relational word vectors can be learned directly from a given set of relation vectors. 2 Related Work Relation Vectors. A number of approaches have been proposed that are aimed at learning relation vectors for a given set of word pairs (a,b), based on sentences in which these word pairs co-occur. For instance, Turney (2005) introduced a method called Latent Relational Analysis (LRA), which relies on first identifying a set of sufficiently frequent lexical patterns and then constructs a matrix which encodes for each considered word pair (a,b) how frequently each pattern P appears in between a and b in sentences that contain both words. Relation vectors are then obtained using singular value decomposition. More recently, Jameel et al. (2018) proposed an approach inspired by the GloVe word embedding model (Pennington et al., 2014) to learn relation vectors based on cooccurrence statistics between the target word pair (a, b) and other words. Along similar lines, Espinosa Anke and Schockaert (2018) learn relation vectors based on the distribution of words occurring in sentences that contain a and b by averaging the word vectors of these co-occurring words. Then, a conditional autoencoder is used to obtain lower-dimensional relation vectors. Taking a slightly different approach, Washio and Kato (2018a) train a neural network to predict dependency paths from a given word pair. Their approach uses standard word vectors as input, hence relational information is encoded implicitly in the weights of the neural network, rather than as relation vectors (although the output of this neural network, for a given word pair, can still be seen as a relation vector). An advantage of this approach, compared to methods that explicitly construct relation vectors, is that evidence obtained for one word is essentially shared with similar words (i.e. 
words whose standard word vector is similar). Among others, this means that their approach can in principle model relational knowledge for word pairs that never co-occur in the same sentence. A related approach, presented in (Washio and Kato, 2018b), uses lexical patterns, as in the LRA method, and trains a neural network to predict vector encodings of these patterns from two given word vectors. In this case, the word vectors are updated together with the neural network and an LSTM to encode the patterns. Finally, similar approach is taken by the Pair2Vec method proposed in (Joshi et al., 2019), where the focus is on learning relation vectors that can be used for cross-sentence attention mechanisms in tasks such as question answering and textual entailment. Despite the fact that such methods learn word vectors from which relation vectors can be predicted, it is unclear to what extent these word vectors themselves capture relational knowledge. In particular, the aforementioned methods have thus far only been evaluated in settings that rely on the predicted relation vectors. Since these predictions are made by relatively sophisticated neural network architectures, it is possible that most of the relational knowledge is still captured in the weights of these networks, rather than in the word vectors. Another problem with these existing approaches is that they are computationally very expensive to train; e.g. the Pair2Vec model is reported to need 7-10 days of training on unspecified hardware2. In contrast, the approach we propose in this paper is computationally much simpler, while resulting in relational word vectors that encode relational information more accurately than those of the Pair2Vec model in lexical semantics tasks, as we will see in Section 5. Knowledge-Enhanced Word Embeddings. Sev2github.com/mandarjoshi90/pair2vec 3288 eral authors have tried to improve word embeddings by incorporating external knowledge bases. For example, some authors have proposed models which combine the loss function of a word embedding model, to ensure that word vectors are predictive of their context words, with the loss function of a knowledge graph embedding model, to encourage the word vectors to additionally be predictive of a given set of relational facts (Xu et al., 2014; Celikyilmaz et al., 2015; Chen et al., 2016). Other authors have used knowledge bases in a more restricted way, by taking the fact that two words are linked to each other in a given knowledge graph as evidence that their word vectors should be similar (Faruqui et al., 2015; Speer et al., 2017). Finally, there has also been work that uses lexicons to learn word embeddings which are specialized towards certain types of lexical knowledge, such as hypernymy (Nguyen et al., 2017; Vulic and Mrksic, 2018), antonymy (Liu et al., 2015; Ono et al., 2015) or a combination of various linguistic constraints (Mrkˇsi´c et al., 2017). Our method differs in two important ways from these existing approaches. First, rather than relying on an external knowledge base, or other forms of supervision, as in e.g. (Chen et al., 2016), our method is completely unsupervised, as our only input consists of a text corpus. Second, whereas existing work has focused on methods for improving word embeddings, our aim is to learn vector representations that are complementary to standard word embeddings. 3 Model Description We aim to learn representations that are complementary to standard word vectors and are specialized towards relational knowledge. 
To differentiate them from standard word vectors, they will be referred to as relational word vectors. We write ew for the relational word vector representation of w. The main idea of our method is to first learn, for each pair of closely related words w and v, a relation vector rwv that captures how these words are related, which we discuss in Section 3.1. In Section 3.2 we then explain how we learn relational word vectors from these relation vectors. 3.1 Unsupervised Relation Vector Learning Our goal here is to learn relation vectors for closely related words. For both the selection of the vocabulary and the method to learn relation vectors we mainly follow the initialization method of Camacho-Collados et al. (2019, RELATIVEinit) except for an important difference explained below regarding the symmetry of the relations. Other relation embedding methods could be used as well, e.g., (Jameel et al., 2018; Washio and Kato, 2018b; Espinosa Anke and Schockaert, 2018; Joshi et al., 2019), but this method has the advantage of being highly efficient. In the following we describe this procedure for learning relation vectors: we first explain how a set of potentially related word pairs is selected, and then focus on how relation vectors rwv for these word pairs can be learned. Selecting Related Word Pairs. Starting from a vocabulary V containing the words of interest (e.g. all sufficiently frequent words), as a first step we need to choose a set R ⊆V × V of potentially related words. For each of the word pairs in R we will then learn a relation vector, as explained below. To select this set R, we only consider word pairs that co-occur in the same sentence in a given reference corpus. For all such word pairs, we then compute their strength of relatedness following Levy et al. (2015a) by using a smoothed version of pointwise mutual information (PMI), where we use 0.5 as exponent factor. In particular, for each word w ∈V, the set R contains all sufficiently frequently co-occurring pairs (w, v) for which v is within the top-100 most closely related words to w, according to the following score: PMI0.5(u, v) = log nwv · s∗∗ nw∗· sv∗  (1) where nwv is the harmonically weighted3 number of times the words w and v occur in the same sentence within a distance of at most 10 words, and: nw∗= X u∈V nwu; sv∗= n0.5 v∗; s∗∗= X u∈V su∗ This smoothed variant of PMI has the advantage of being less biased towards infrequent (and thus typically less informative) words. Learning Relation Vectors. In this paper, we will rely on word vector averaging for learning relation vectors, which has the advantage of being much faster than other existing approaches, and thus allows us to consider a higher number of word pairs (or a larger corpus) within a fixed 3A co-occurrence in which there are k words in between w and v then receives a weight of 1 k+1. 3289 time budget. Word vector averaging has moreover proven surprisingly effective for learning relation vectors (Weston et al., 2013; Hashimoto et al., 2015; Fan et al., 2015; Espinosa Anke and Schockaert, 2018), as well as in related tasks such as sentence embedding (Wieting et al., 2016). Specifically, to construct the relation vector rwv capturing the relationship between the words w and v we proceed as follows. First, we compute a bag of words representation {(w1, f1), ..., (wn, fn)}, where fi is the number of times the word wi occurs in between the words w and v in any given sentence in the corpus. 
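The pair-selection step described above can be sketched as follows. This is a minimal illustration of Equation 1 that assumes the harmonically weighted sentence co-occurrence counts have already been collected from the corpus; the dictionary-based layout and the names are illustrative choices, not our actual implementation.

```python
import math

def select_related_pairs(cooc, top_k=100, alpha=0.5):
    """cooc[w][v]: harmonically weighted count of sentence co-occurrences of
    w and v (a co-occurrence with k intervening words is weighted 1/(k+1)).
    Insufficiently frequent pairs are assumed to be filtered out beforehand.
    Returns, for each word w, its top_k neighbours ranked by PMI^alpha (Eq. 1)."""
    n_w = {w: sum(counts.values()) for w, counts in cooc.items()}   # n_{w*}
    s_v = {w: n ** alpha for w, n in n_w.items()}                   # s_{v*} = n_{v*}^0.5
    s_all = sum(s_v.values())                                       # s_{**}
    related = {}
    for w, counts in cooc.items():
        scores = {v: math.log((n_wv * s_all) / (n_w[w] * s_v[v]))
                  for v, n_wv in counts.items() if n_wv > 0}
        related[w] = sorted(scores, key=scores.get, reverse=True)[:top_k]
    return related
```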
The relation vector rwv is then essentially computed as a weighted average: rwv = norm n X i=1 fi · wi ! (2) where we write wi for the vector representation of wi in some given pre-trained word embedding, and norm(v) = v ∥v∥. In contrast to other approaches, we do not differentiate between sentences where w occurs before v and sentences where v occurs before w. This means that our relation vectors are symmetric in the sense that rwv = rvw. This has the advantage of alleviating sparsity issues. While the directionality of many relations is important, the direction can often be recovered from other information we have about the words w and v. For instance, knowing that w and v are in a capital-of relationship, it is trivial to derive that “v is the capital of w”, rather than the other way around, if we also know that w is a country. 3.2 Learning Relational Word Vectors The relation vectors rwv capture relational information about the word pairs in R. The relational word vectors will be induced from these relation vectors by encoding the requirement that ew and ev should be predictive of rwv, for each (w, v) ∈R. To this end, we use a simple neural network with one hidden layer,4 whose input is given by (ew + ev) ⊕(ew ⊙ev), where we write ⊕for vector concatenation and ⊙for the component-wise multiplication. Note that the input needs to be symmetric, given that our relation 4More complex architectures could be used, e.g., (Joshi et al., 2019), but in this case we decided to use a simple architecture as the main aim of this paper is to encode all relational information into the word vectors, not in the network itself. Figure 1: Relational word embedding architecture. At the bottom of the figure, the input layer is constructed from the relational word embeddings ew and ev, which are the vectors to be learnt. As shown at the top, we aim to predict the target relation vector rwv. vectors are symmetric, which makes the vector addition and component-wise multiplication two straightforward encoding choices. Figure 1 depicts an overview of the architecture of our model. The network is defined as follows: iwv = (ew + ev) ⊕(ew ⊙ev) hwv = f(Xiwv + a) owv = f(Yhwv + b) (3) for some activation function f. We train this network to predict the relation vector rwv, by minimizing the following loss: L = X (w,v)∈R  owv −rwv 2 (4) The relational word vectors ew can be initialized using standard word embeddings trained on the same corpus. 4 Experimental Setting In what follows, we detail the resources and training details that we used to obtain the relational word vectors. Corpus and Word Embeddings. We followed the setting of Joshi et al. (2019) and used the English Wikipedia5 as input corpus. Multiwords (e.g. Manchester United) were grouped together as a 5Tokenized and lowercased dump of January 2018. 3290 single token by following the same approach described in Mikolov et al. (2013a). As word embeddings, we used 300-dimensional FastText vectors (Bojanowski et al., 2017) trained on Wikipedia with standard hyperparameters. These embeddings are used as input to construct the relation vectors rwv (see Section 3.1),6 which are in turn used to learn relational word embeddings ew (see Section 3.2). The FastText vectors are additionally used as our baseline word embedding model. Word pair vocabulary. As our core vocabulary V, we selected the 100, 000 most frequent words from Wikipedia. To construct the set of word pairs R, for each word from V, we selected the 100 most closely related words (cf. 
Section 3.1), considering only consider word pairs that co-occur at least 25 times in the same sentence throughout the Wikipedia corpus. This process yielded relation vectors for 974,250 word pairs. Training. To learn our relational word embeddings we use the model described in Section 3.2. The embedding layer is initialized with the standard FastText 300-dimensional vectors trained on Wikipedia. The method was implemented in PyTorch, employing standard hyperparameters, using ReLU as the non-linear activation function f (Equation 3). The hidden layer of the model was fixed to the same dimensionality as the embedding layer (i.e. 600). The stopping criterion was decided based on a small development set, by setting aside 1% of the relation vectors. Code to reproduce our experiments, as well as pre-trained models and details of the implementation such as other network hyperparameters, are available at https://github.com/pedrada88/rwe. 5 Experimental Results A natural way to assess the quality of word vectors is to test them in lexical semantics tasks. However, it should be noted that relational word vectors behave differently from standard word vectors, and we should not expect the relational word vectors to be meaningful in unsupervised tasks such as semantic relatedness (Turney and Pantel, 2010). In particular, note that a high similarity between ew and ev should mean that relationships which hold for w have a high probability of holding for v as well. Words which are related, but not syn6We based our implementation to learn relation vectors on the code available at https://github.com/ pedrada88/relative onymous, may thus have very dissimilar relational word vectors. Therefore, we test our proposed models on a number of different supervised tasks for which accurately capturing relational information is crucial to improve performance. Comparison systems. Standard FastText vectors, which were used to construct the relation vectors, are used as our main baseline. In addition, we also compare with the word embeddings that were learned by the Pair2Vec system7 (see Section 2). We furthermore report the results of two methods which leverage knowledge bases to enrich FastText word embeddings: Retrofitting (Faruqui et al., 2015) and Attract-Repel (Mrkˇsi´c et al., 2017). Retrofitting exploits semantic relations from a knowledge base to re-arrange word vectors of related words such that they become closer to each other, whereas Attract-Repel makes use of different linguistic constraints to move word vectors closer together or further apart depending on the constraint. For Retrofitting we make use of WordNet (Fellbaum, 1998) as the input knowledge base, while for Attract-Repel we use the default configuration with all constraints from PPDB (Pavlick et al., 2015), WordNet and BabelNet (Navigli and Ponzetto, 2012). All comparison systems are 300-dimensional and trained on the same Wikipedia corpus. 5.1 Relation Classification Given a pre-defined set of relation types and a pair of words, the relation classification task consists in selecting the relation type that best describes the relationship between the two words. As test sets we used DiffVec (Vylomova et al., 2016) and BLESS8 (Baroni and Lenci, 2011). The DiffVec dataset includes 12,458 word pairs, covering fifteen relation types including hypernymy, causepurpose or verb-noun derivations. 
On the other hand, BLESS includes semantic relations such as hypernymy, meronymy, and co-hyponymy.9 BLESS includes a train-test partition, with 13,258 and 6,629 word pairs, respectively. This task is treated as a multi-class classification problem As a baseline model (Diff), we consider the usual representation of word pairs in terms of their vector differences (Fu et al., 2014; Roller et al., 7We used the pre-trained model of its official repository. 8http://clic.cimec.unitn.it/distsem 9Note that both datasets exhibit overlap in a number of relations as some instances from DiffVec were taken from BLESS. 3291 Encoding Model Reference DiffVec BLESS Acc. F1 Prec. Rec. Acc. F1 Prec. Rec. Mult+Avg RWE (This paper) 85.3 64.2 65.1 64.5 94.3 92.8 93.0 92.6 Pair2Vec (Joshi et al., 2019) 85.0 64.0 65.0 64.5 91.2 89.3 88.9 89.7 FastText (Bojanowski et al., 2017) 84.2 61.4 62.6 61.9 92.8 90.4 90.7 90.2 Retrofitting† (Faruqui et al., 2015) 86.1* 64.6* 66.6* 64.5* 90.6 88.3 88.1 88.6 Attract-Repel† (Mrkˇsi´c et al., 2017) 86.0* 64.6* 66.0* 65.2* 91.2 89.0 88.8 89.3 Mult+Conc Pair2Vec (Joshi et al., 2019) 84.8 64.1 65.7 64.4 90.9 88.8 88.6 89.1 FastText (Bojanowski et al., 2017) 84.3 61.3 62.4 61.8 92.9 90.6 90.8 90.4 Diff (only) FastText (Bojanowski et al., 2017) 81.9 57.3 59.3 57.8 88.5 85.4 85.7 85.4 Table 1: Accuracy and macro-averaged F-Measure, precision and recall on BLESS and DiffVec. Models marked with † use external resources. The results with * indicate that WordNet was used for both the development of the model and the construction of the dataset. All models concatenate their encoded representations with the baseline vector difference of standard FastText word embeddings. 2014; Weeds et al., 2014), using FastText word embeddings. Since our goal is to show the complementarity of relational word embeddings with standard word vectors, for our method we concatenate the difference wj −wi with the vectors ei + ej and ei · ej (referred to as the Mult+Avg setting; our method is referred to as RWE). We use a similar representation for the other methods, simply replacing the relational word vectors by the corresponding vectors (but keeping the FastText vector difference). We also consider a variant in which the FastText vector difference is concatenated with wi + wj and wi · wj, which offers a more direct comparison with the other methods. This goes in line with recent works that have shown how adding complementary features on top of the vector differences, e.g. multiplicative features (Vu and Shwartz, 2018), help improve the performance. Finally, for completeness, we also include variants where the average ei + ej is replaced by the concatenation ei ⊕ej (referred to as Mult+Conc), which is the encoding considered in Joshi et al. (2019). For these experiments we train a linear SVM classifier directly on the word pair encoding, performing a 10-fold cross-validation in the case of DiffVec, and using the train-test splits of BLESS. Results Table 1 shows the results of our relational word vectors, the standard FastText embeddings and other baselines on the two relation classification datasets (i.e. BLESS and DiffVec). Our model consistently outperforms the FastText embeddings baseline and comparison systems, with the only exception being the precision score for DiffVec. Despite being completely unsupervised, it is also surprising that our model manages to outperform the knowledge-enhanced embeddings of Retrofitting and Attract-Repel in the BLESS dataset. 
For DiffVec, let us recall that both these approaches have the unfair advantage of having had WordNet as source knowledge base, used both to construct the test set and to enhance the word embeddings. In general, the improvement of RWE over standard word embeddings suggests that our vectors capture relations in a way that is compatible to standard word vectors (which will be further discussed in Section 6.2). 5.2 Lexical Feature Modelling Standard word embedding models tend to capture semantic similarity rather well (Baroni et al., 2014; Levy et al., 2015a). However, even though other kinds of lexical properties may also be encoded (Gupta et al., 2015), they are not explicitly modeled. Based on the hypothesis that relational word embeddings should allow us to model such properties in a more consistent and transparent fashion, we select the well-known McRae Feature Norms benchmark (McRae et al., 2005) as testbed. This dataset10 is composed of 541 words (or concepts), each of them associated with one or more features. For example, ‘a bear is an animal’, or ‘a bowl is round’. As for the specifics of our evaluation, given that some features are only associated with a few words, we follow the setting of Rubinstein et al. (2015) and consider the eight features with the largest number of associated words. We carry out this evaluation by treating the task as a multi-class classification problem, where the labels are the word features. As in the previous task, we use a linear SVM classifier and perform 3-fold cross-validation. For each input word, the 10Downloaded from https://sites.google.com/ site/kenmcraelab/norms-data 3292 Model McRae Feature Norms QVEC Overall metal is small is large animal is edible wood is round is long RWE 55.2 73.6 46.7 45.9 89.2 61.5 38.5 39.0 46.8 55.4 Pair2Vec 55.0 71.9 49.2 43.3 88.9 68.3 37.7 35.0 45.5 52.7 Retrofitting† 50.6 72.3 44.0 39.1 90.6 75.7 15.4 22.9 44.4 56.8* Attract-Repel† 50.4 73.2 44.4 33.3 88.9 71.8 31.1 24.2 35.9 55.9* FastText 54.6 72.7 48.4 45.2 87.5 63.2 33.3 39.0 47.8 54.6 Table 2: Results on the McRae feature norms dataset (Macro F-Score) and QVEC (correlation score). Models marked with † use external resources. The results with * indicate that WordNet was used for both the development of the model and the construction of the dataset. word embedding of the corresponding feature is fed to the classifier concatenated with its baseline FastText embedding. Given that the McRae Feature Norms benchmark is focused on nouns, we complement this experiment with a specific evaluation on verbs. To this end, we use the verb set of QVEC11 (Tsvetkov et al., 2015), a dataset specifically aimed at measuring the degree to which word vectors capture semantic properties which has shown to strongly correlate with performance in downstream tasks such as text categorization and sentiment analysis. QVEC was proposed as an intrinsic evaluation benchmark for estimating the quality of word vectors, and in particular whether (and how much) they predict lexical properties, such as words belonging to one of the fifteen verb supersenses contained in WordNet (Miller, 1995). As is customary in the literature, we compute Pearson correlation with respect to these predefined semantic properties, and measure how well a given set of word vectors is able to predict them, with higher being better. 
For this task we compare the 300-dimensional word embeddings of all models (without concatenating them with standard word embeddings), as the evaluation measure only assures a fair comparison for word embedding models of the same dimensionality. Results Table 2 shows the results on the McRae Feature Norms dataset12 and QVEC. In the case of the McRae Feature Norms dataset, our relational word embeddings achieve the best overall results, although there is some variation for the individual features. These results suggest that attributional information is encoded well in our relational word embeddings. Interestingly, our results also suggest that Retrofitting and Attract-Repel, 11https://github.com/ytsvetko/qvec 12Both metal and wood correspond to made of relations. which use pairs of related words during training, may be too na¨ıve to capture the complex relationships proposed in these benchmarks. In fact, they perform considerably lower than the baseline FastText model. On the other hand, Pair2Vec, which we recall is the most similar to our model, yields slightly better results than the FastText baseline, but still worse than our relational word embedding model. This is especially remarkable considering its much lower computational cost. As far as the QVEC results are concerned, our method is only outperformed by Retrofitting and Attract-Repel. Nevertheless, the difference is minimal, which is surprising given that these methods leverage the same WordNet resource which is used for the evaluation. 6 Analysis To complement the evaluation of our relational word vectors on lexical semantics tasks, in this section we provide a qualitative analysis of their intrinsic properties. 6.1 Word Embeddings: Nearest Neighbours First, we provide an analysis based on the nearest neighbours of selected words in the vector space. Table 4 shows nearest neighbours of our relational word vectors and the standard FastText embeddings.13 The table shows that our model captures some subtle properties, which are not normally encoded in knowledge bases. For example, geometric shapes are clustered together around the sphere vector, unlike in FastText, where more loosely related words such as “dimension” are found. This trend can easily be observed as well in the philology and assassination cases. In the bottom row, we show cases where relational information is somewhat confused with col13Recall from Section 4 that both were trained on Wikipedia with the same dimensionality, i.e., 300. 3293 SPHERE PHILOLOGY ASSASSINATION DIVERSITY RWE FastText RWE FastText RWE FastText RWE FastText rectangle spheres metaphysics philological riot assassinate connectedness cultural diversity conic spherical pedagogy philologist premeditate attempt openness diverse hexagon dimension docent literature bombing attempts creativity genetic diversity INTERSECT BEHAVIOUR CAPABILITY EXECUTE RWE FastText RWE FastText RWE FastText RWE FastText tracks intersection aggressive behaviour refueling capabilities murder execution northbound bisect detrimental behavioural miniaturize capable interrogation executed northwesterly intersectional distasteful misbehaviour positioning survivability incarcerate summarily executed Table 3: Nearest neighbours for selected words in our relational word embeddings (RWE) and FastText embeddings locationality, leading to undesired clusters, such as intersect being close in the space with “tracks”, or behaviour with “aggressive” or “detrimental”. 
These examples thus point towards a clear direction for future work, in terms of explicitly differentiating collocations from other relationships. 6.2 Word Relation Encoding Unsupervised learning of analogies has proven to be one of the strongest selling points of word embedding research. Simple vector arithmetic, or pairwise similarities (Levy et al., 2014), can be used to capture a surprisingly high number of semantic and syntactic relations. We are thus interested in exploring semantic clusters as they emerge when encoding relations using our relational word vectors. Recall from Section 3.2 that relations are encoded using addition and pointwise multiplication of word vectors. Table 4 shows, for a small number of selected word pairs, the top nearest neighbors that were unique to our 300-dimensional relational word vectors. Specifically, these pairs were not found among the top 50 nearest neighbors for the FastText word vectors of the same dimensionality, using the standard vector difference encoding. Similarly, we also show the top nearest neighbors that were unique to the FastText word vector difference encoding. As can be observed, our relational word embeddings can capture interesting relationships which go beyond what is purely captured by similarity. For instance, for the pair “innocent-naive” our model includes similar relations such as vainselfish, honest-hearted or cruel-selfish as nearest neighbours, compared with the nearest neighbours of standard FastText embeddings which are harder to interpret. Interestingly, even though not explicitly encoded in our model, the table shows some examples that highlight one property that arises often, which is the ability of our model to capture cohyponyms as relations, e.g., wrist-knee and angerdespair as nearest neighbours of “shoulder-ankle” and “shock-grief”, respectively. Finally, one last advantage that we highlight is the fact that our model seems to perform implicit disambiguation by balancing a word’s meaning with its paired word. For example, the “oct-feb” relation vector correctly brings together other month abbreviations in our space, whereas in the FastText model, its closest neighbour is ‘doppler-wheels’, a relation which is clearly related to another sense of oct, namely its use as an acronym to refer to ‘optical coherence tomography’ (a type of x-ray procedure that uses the doppler effect principle). 6.3 Lexical Memorization One of the main problems of word embedding models performing lexical inference (e.g. hypernymy) is lexical memorization. Levy et al. (2015b) found that the high performance of supervised distributional models in hypernymy detection tasks was due to a memorization in the training set of what they refer to as prototypical hypernyms. These prototypical hypernyms are general categories which are likely to be hypernyms (as occurring frequently in the training set) regardless of the hyponym. For instance, these models could equally predict the pairs dog-animal and screenanimal as hyponym-hypernym pairs. To measure the extent to which our model is prone to this problem we perform a controlled experiment on the lexical split of the HyperLex dataset (Vuli´c et al., 2017). This lexical split does not contain any word overlap between training and test, and therefore constitutes a reliable setting to measure the generalization capability of embedding models in a controlled setting (Shwartz et al., 2016). In HyperLex, each pair is provided by a score which measures the strength of the hypernymy relation. 
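As an illustration of how such graded scores can be predicted from our representations, the sketch below encodes each pair with the Mult+Avg scheme of Section 5.1 (the FastText vector difference concatenated with the sum and component-wise product of the relational word vectors) and fits a regressor on the training split. The SVR with a linear kernel and the dictionary-based interface are illustrative assumptions rather than the exact configuration used in our experiments.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr
from sklearn.svm import SVR

def encode_pair(w, v, ft, rwe):
    """Mult+Avg encoding: FastText difference concatenated with the sum and
    component-wise product of the relational word vectors of w and v."""
    ew, ev = rwe[w], rwe[v]
    return np.concatenate([ft[v] - ft[w], ew + ev, ew * ev])

def graded_entailment_scores(train, test, ft, rwe):
    """train/test: lists of ((w, v), gold_score); ft, rwe: dicts word -> vector."""
    X_tr = np.stack([encode_pair(w, v, ft, rwe) for (w, v), _ in train])
    y_tr = np.array([score for _, score in train])
    X_te = np.stack([encode_pair(w, v, ft, rwe) for (w, v), _ in test])
    y_te = np.array([score for _, score in test])
    pred = SVR(kernel="linear").fit(X_tr, y_tr).predict(X_te)  # illustrative regressor
    return pearsonr(pred, y_te)[0], spearmanr(pred, y_te)[0]
```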
3294 INNOCENT-NAIVE POLES-SWEDES SHOULDER-ANKLE RWE FastText RWE FastText RWE FastText vain-selfish murder-young lithuanians-germans polish-swedish wrist-knee oblique-ligament honest-hearted imprisonment-term germans-lithuanians poland-sweden thigh-knee pick-ankle injury cruel-selfish conspiracy-minded russians-lithuanians czechoslovakia-sweden neck-knee suffer-ankle injury SHOCK-GRIEF STRENGTHEN-TROPICAL CYCLONE OCT-FEB RWE FastText RWE FastText RWE FastText anger-despair overcome-sorrow intensify-tropical cyclone name-tropical cyclones aug-nov doppler-wheels anger-sorrow overcome-despair weaken-tropical storm bias-tropical cyclones sep-nov scanner-read anger-sadness moment-sadness intensify-tropical storm scheme-tropical cyclones nov-sep ultrasound-baby Table 4: Three nearest neighbours for selected word pairs using our relational word vector’s relation encoding (RWE) and the standard vector difference encoding of FastText word embeddings. In each column only the word pairs which were on the top 50 NNs of the given model but not in the other are listed. Relations which include one word from the original pair were not considered. Encoding Model r ρ Mult+Avg RWE 38.8 38.4 Pair2Vec 28.3 26.5 FastText 37.2 35.8 Retrofitting† 29.5* 28.9* Attract-Repel† 29.7* 28.9* Mult+Conc Pair2Vec 29.8 30.0 FastText 35.7 33.3 Diff (only) FastText 29.9 30.1 Table 5: Pearson (r) and Spearman (ρ) correlation on a subset of the HyperLex lexical split. Models marked with † use external resources. All models concatenate their encoded representations with the baseline vector difference of standard FastText word embeddings. For these experiments we considered the same experimental setting as described in Section 4. In this case we only considered the portion of the HyperLex training and test sets covered in our vocabulary14 and used an SVM regression model over the word-based encoded representations. Table 5 shows the results for this experiment. Even though the results are low overall (noting e.g. that results for the random split are in some cases above 50% as reported in the literature), our model can clearly generalize better than other models. Interestingly, methods such as Retrofitting and Attract-Repel perform worse than the FastText vectors. This can be attributed to the fact that these models have been mainly tuned towards similarity, which is a feature that loses relevance in this setting. Likewise, the relation-based embeddings of Pair2Vec do not help, probably due to the high-capacity of their model, which makes the word embeddings less informative. 14Recall from Section 4 that this vocabulary is shared by all comparison systems. 7 Conclusions We have introduced the notion of relational word vectors, and presented an unsupervised method for learning such representations. Parting ways from previous approaches where relational information was either encoded in terms of relation vectors (which are highly expressive but can be more difficult to use in applications), represented by transforming standard word vectors (which capture relational information only in a limited way), or by taking advantage of external knowledge repositories, we proposed to learn an unsupervised word embedding model that is tailored specifically towards modelling relations. Our model is intended to capture knowledge which is complementary to that of standard similarity-centric embeddings, and can thus be used in combination. 
We tested the complementarity of our relational word vectors with standard FastText word embeddings on several lexical semantic tasks, capturing different levels of relational knowledge. The evaluation indicates that our proposed method indeed results in representations that capture relational knowledge in a more nuanced way. For future work, we would be interested in further exploring the behavior of neural architectures for NLP tasks which intuitively would benefit from having access to relational information, e.g., text classification (Espinosa Anke and Schockaert, 2018; Camacho-Collados et al., 2019) and other language understanding tasks such as natural language inference or reading comprehension, in the line of Joshi et al. (2019). Acknowledgments. Jose Camacho-Collados and Steven Schockaert were supported by ERC Starting Grant 637277. 3295 References Marco Baroni, Georgiana Dinu, and Germ´an Kruszewski. 2014. Don’t count, predict! a systematic comparison of context-counting vs. context-predicting semantic vectors. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 238–247. Marco Baroni and Alessandro Lenci. 2011. How we blessed distributional semantic evaluation. In Proc. GEMS Workshop, pages 1–10. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5. Zied Bouraoui, Shoaib Jameel, and Steven Schockaert. 2018. Relation induction in word embeddings revisited. In Proceedings of COLING, pages 1627–1637. Jose Camacho-Collados, Luis Espinosa-Anke, Shoaib Jameel, and Steven Schockaert. 2019. A latent variable model for learning distributional relation vectors. In Proceedings of IJCAI. Asli Celikyilmaz, Dilek Hakkani-Tr, Panupong Pasupat, and Ruhi Sarikaya. 2015. Enriching word embeddings using knowledge graph for semantic tagging in conversational dialog systems. In AAAI Spring Symposium. Jiaqiang Chen, Niket Tandon, Charles Darwis Hariman, and Gerard de Melo. 2016. Webbrain: Joint neural learning of large-scale commonsense knowledge. In Proceedings of ISWC, pages 102–118. Luis Espinosa Anke and Steven Schockaert. 2018. SeVeN: Augmenting word embeddings with unsupervised relation vectors. In Proceedings of COLING, pages 2653–2665. Miao Fan, Kai Cao, Yifan He, and Ralph Grishman. 2015. Jointly embedding relations and mentions for knowledge population. In Proceedings of RANLP, pages 186–191. Manaal Faruqui, Jesse Dodge, Sujay Kumar Jauhar, Chris Dyer, Eduard H. Hovy, and Noah A. Smith. 2015. Retrofitting word vectors to semantic lexicons. In Proceedings of NAACL, pages 1606–1615. Christiane Fellbaum, editor. 1998. WordNet: An Electronic Database. MIT Press, Cambridge, MA. Ruiji Fu, Jiang Guo, Bing Qin, Wanxiang Che, Haifeng Wang, and Ting Liu. 2014. Learning semantic hierarchies via word embeddings. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1199–1209. Abhijeet Gupta, Gemma Boleda, Marco Baroni, and Sebastian Pad´o. 2015. Distributional vectors encode referential attributes. In Proceedings of EMNLP, pages 12–21. Kazuma Hashimoto, Pontus Stenetorp, Makoto Miwa, and Yoshimasa Tsuruoka. 2015. Task-oriented learning of word embeddings for semantic relation classification. In Proceedings of CoNLL, pages 268–278. Shoaib Jameel, Zied Bouraoui, and Steven Schockaert. 2018. Unsupervised learning of distributional relation vectors. In Proceedings of ACL, pages 23–33. 
Mandar Joshi, Eunsol Choi, Omer Levy, Daniel S Weld, and Luke Zettlemoyer. 2019. pair2vec: Compositional word-pair embeddings for cross-sentence inference. In Proceedings of NAACL. Omer Levy, Yoav Goldberg, and Ido Dagan. 2015a. Improving distributional similarity with lessons learned from word embeddings. Transactions of the Association for Computational Linguistics, 3:211– 225. Omer Levy, Yoav Goldberg, and Israel Ramat-Gan. 2014. Linguistic regularities in sparse and explicit word representations. In Proceedings of CoNLL, pages 171–180. Omer Levy, Steffen Remus, Chris Biemann, Ido Dagan, and Israel Ramat-Gan. 2015b. Do supervised distributional methods really learn lexical inference relations? In Proceedings of NAACL. Tal Linzen. 2016. Issues in evaluating semantic spaces using word analogies. In Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP, pages 13–18. Quan Liu, Hui Jiang, Si Wei, Zhen-Hua Ling, and Yu Hu. 2015. Learning semantic word embeddings based on ordinal knowledge constraints. In Proceedings of ACL, pages 1501–1511. Ken McRae, George S Cree, Mark S Seidenberg, and Chris McNorgan. 2005. Semantic feature production norms for a large set of living and nonliving things. Behavior research methods, 37(4):547–559. Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013a. Distributed representations of words and phrases and their compositionality. In Proceedings of NIPS, pages 3111– 3119. Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013b. Linguistic regularities in continuous space word representations. In Proceedings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 746–751. George A Miller. 1995. WordNet: A lexical database for English. Communications of the ACM, 38(11):39–41. Nikola Mrkˇsi´c, Ivan Vuli´c, Diarmuid ´O S´eaghdha, Ira Leviant, Roi Reichart, Milica Gaˇsi´c, Anna Korhonen, and Steve Young. 2017. Semantic specialization of distributional word vector spaces using 3296 monolingual and cross-lingual constraints. Transactions of the Association of Computational Linguistics, 5(1):309–324. Roberto Navigli and Simone Paolo Ponzetto. 2012. Babelnet: The automatic construction, evaluation and application of a wide-coverage multilingual semantic network. Artificial Intelligence, 193:217– 250. Kim Anh Nguyen, Maximilian K¨oper, Sabine Schulte im Walde, and Ngoc Thang Vu. 2017. Hierarchical embeddings for hypernymy detection and directionality. In Proceedings of EMNLP, pages 233–243. Masataka Ono, Makoto Miwa, and Yutaka Sasaki. 2015. Word embedding-based antonym detection using thesauri and distributional information. In Proceedings of NAACL-HLT, pages 984–989. Ellie Pavlick, Pushpendre Rastogi, Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2015. Ppdb 2.0: Better paraphrase ranking, finegrained entailment relations, word embeddings, and style classification. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 425–430. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of EMNLP, pages 1532–1543. Anna Rogers, Aleksandr Drozd, and Bofang Li. 2017. The (too many) problems of analogical reasoning with word vectors. 
In Proceedings of the 6th Joint Conference on Lexical and Computational Semantics (* SEM 2017), pages 135–148. Stephen Roller, Katrin Erk, and Gemma Boleda. 2014. Inclusive yet selective: Supervised distributional hypernymy detection. In Proceedings of COLING, pages 1025–1036. Dana Rubinstein, EffiLevi, Roy Schwartz, and Ari Rappoport. 2015. How well do distributional models capture different types of semantic knowledge? In Proceedings of ACL, pages 726–730. Vered Shwartz, Yoav Goldberg, and Ido Dagan. 2016. Improving hypernymy detection with an integrated path-based and distributional method. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 2389–2398. Robyn Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of general knowledge. In Proceedings of AAAI, pages 4444–4451. Yulia Tsvetkov, Manaal Faruqui, Wang Ling, Guillaume Lample, and Chris Dyer. 2015. Evaluation of word vector representations by subspace alignment. In Proceedings of EMNLP, pages 2049–2054. Peter D. Turney. 2005. Measuring semantic similarity by latent relational analysis. In Proceedings of IJCAI, pages 1136–1141. Peter D. Turney and P. Pantel. 2010. From frequency to meaning: Vector space models of semantics. Journal of Artificial Intelligence Research, 37:141–188. Tu Vu and Vered Shwartz. 2018. Integrating multiplicative features into supervised distributional methods for lexical entailment. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics, pages 160–166. Ivan Vuli´c, Daniela Gerz, Douwe Kiela, Felix Hill, and Anna Korhonen. 2017. Hyperlex: A large-scale evaluation of graded lexical entailment. Computational Linguistics, 43(4):781–835. Ivan Vulic and Nikola Mrksic. 2018. Specialising word vectors for lexical entailment. In Proceedings of NAACL-HLT, pages 1134–1145. Ekaterina Vylomova, Laura Rimell, Trevor Cohn, and Timothy Baldwin. 2016. Take and took, gaggle and goose, book and read: Evaluating the utility of vector differences for lexical relation learning. In Proceedings of ACL, pages 1671–1682. Koki Washio and Tsuneaki Kato. 2018a. Filling missing paths: Modeling co-occurrences of word pairs and dependency paths for recognizing lexical semantic relations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1123–1133. Koki Washio and Tsuneaki Kato. 2018b. Neural latent relational analysis to capture lexical semantic relations in a vector space. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 594–600. Julie Weeds, Daoud Clarke, Jeremy Reffin, David Weir, and Bill Keller. 2014. Learning to distinguish hypernyms and co-hyponyms. In Proceedings of the 25th International Conference on Computational Linguistics, pages 2249–2259. Jason Weston, Antoine Bordes, Oksana Yakhnenko, and Nicolas Usunier. 2013. Connecting language and knowledge bases with embedding models for relation extraction. In Proceedings of EMNLP, pages 1366–1371. John Wieting, Mohit Bansal, Kevin Gimple, and Karen Livescu. 2016. Towards universal paraphrastic sentence embeddings. In Proceedings of ICLR. C. Xu, Y. Bai, J. Bian, B. Gao, G. Wang, X. Liu, and T.-Y. Liu. 2014. RC-NET: A general framework for incorporating knowledge into word representations. In Proc. CIKM, pages 1219–1228.
2019
318
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3297–3307 Florence, Italy, July 28 - August 2, 2019. ©2019 Association for Computational Linguistics 3297 Unraveling Antonym’s Word Vectors through a Siamese-like Network Mathias Etcheverry Republic University Montevideo, Uruguay [email protected] Dina Wonsever Republic University Montevideo, Uruguay [email protected] Abstract Discriminating antonyms and synonyms is an important NLP task whose difficulty lies in the fact that both antonyms and synonyms carry similar distributional information. Consequently, pairs of antonyms and synonyms may have similar word vectors. We present a siamese-network-inspired approach to unravel antonymy and synonymy from word vectors. The model consists of a two-phase training of the same base network: a pre-training phase as a siamese model supervised by synonyms, and a training phase on antonyms through a siamese-like model that supports the antitransitivity present in antonymy. The approach makes use of the claim that the antonyms of a given word tend to be synonyms of each other. We show that our approach outperforms distributional and pattern-based approaches, relying on a simple feed-forward network as the base network of the training phases. 1 Introduction Antonymy and synonymy are lexical relations that are crucial in language semantics. Antonymy is the relation between opposite words (e.g. big-small) and synonymy refers to words with similar meaning (e.g. bug-insect). Detecting them automatically is a challenging NLP task that can benefit many others, such as textual entailment (Haghighi et al., 2005; Snow et al., 2006), machine translation (Bar and Dershowitz, 2010) and abstractive summarization (Khatri et al., 2018). Hand-crafted lexical databases, such as WordNet (Miller, 1995), have been built and maintained for use in NLP and other fields, containing antonyms, synonyms and other lexical semantic relations. However, their construction and maintenance take considerable human effort, and it is difficult to achieve broad coverage. Detecting antonyms automatically, relying on existing resources such as text, dictionaries and lexical databases, is an active NLP research area. In the last decade, the use of and research on word vectors have increased rapidly. Word vectors rely on word co-occurrence information in a large corpus. The key idea behind word vectors is the distributional hypothesis, which can be expressed as "words that are similar in meaning tend to occur in similar contexts" (Sahlgren, 2008; Rubenstein and Goodenough, 1965). A variety of methods have been developed to train word vectors, such as skip-gram (Mikolov et al., 2013), GloVe (Pennington et al., 2014), FastText (Joulin et al., 2016) and ELMo (Peters et al., 2018). Word vectors are widely used in NLP; for example, a well-known use is in supervised learning, where word relatedness expands the effective coverage of the training data. A main problem in discriminating antonymy automatically in an unsupervised distributional setting is that oppositeness is not easily distinguishable in terms of context distributions. In fact, pairs of antonyms are very similar in meaning: antonyms are usable in the same contexts but lead to opposite meanings. Antonymy is said to exhibit the paradox of simultaneous similarity and difference (Cruse, 1986), because antonyms are similar in almost every dimension of meaning except the one in which they are opposite.
The paradox of simultaneous similarity and difference is notorious in word space models. The contexts of a word and its antonyms contexts usually are similar and therefore they have close vector representations1. 1In fact, word space models may give similar representations to a broader range of related words, such as synonyms and hyponyms. Note the difference between the terms word similarity and word relatedness. While word similarity refers to similar words (synonyms), the concept of word relatedness 3298 Due to this paradox, word space models seem not suitable for antonymy detection. Then, a commonly used resource is the path of words connecting the joint occurrence of two candidate words (Nguyen et al., 2017). Path based approaches take profit of the fact that antonyms cooccur in the same context more than expected by chance (Scheible et al., 2013; Miller and Charles, 1991), so it is possible to obtain a significant amount of patterns. In this paper, we claim that vector space models, despite giving close representations for synonyms and antonyms, contain subtle differences that allow to discriminate antonymy. In order to stick out those differences we propose a method based on a neural network model that takes account of algebraic properties of synonymy and antonymy. The model formulation is based on the transitivity of synonymy and the antitransitivity of antonymy, on the symmetry of both relations and on the reflexivity and irreflexivity of synonymy and antonymy, respectively. Moreover, the model exploits the property that two antonyms of the same word tend to be synonyms (Edmundson, 1967) (Figure 1). We use these properties to define a model based on siamese networks and a training strategy through antonyms and synonyms. Figure 1: The antonyms of a same word tend to be synonyms. We show that the presented approach gives surprisingly good results, even in comparison to models that use external information, such as dependency parsing, part-of-speech tagging or path patterns from a corpus. The introduced model is a way to learn any kind of antitransitive relations between distributed vectors. Antitransitivity may be suitable, for instance, to represent the relation of being adversary (Bonato et al., 2017). A different application of the presented approach could be in includes other semantic fields, like antonyms, hypernyms, cohyponyms and specific relations (e.g. dog-bone). social networks in order to find out possible unknown enemies relying on a given set of known enmity and friendship links. The rest of the paper is structured as follows: In Section 2 we present the previous work on antonymy detection. In Section 3 we describe the proposed approach. We start with some algebraic principles of synonymy and antonymy on which our approach relies. Then we describe siamese networks and how the learned transformation tends to induce an equivalence relationship, suitable for synonyms. In Section 3.3, we comment the unsuitability of siamese networks to deal with an antitransitive relationship like antonymy and we propose a variation of the original siamese network to do so. We refer to this network as a parasiamese network. Then, we argue that the same base network of a parasiamese model for antonymy can be pre-trained minimizing a siamese scheme on synonyms. Section 4 details the dataset, word vectors and the random search strategy carried out to find out an adequate hyperparameter configuration. In Section 5 we present the results and the behavior of the model. 
Finally, Section 6 contains the conclusion of this paper. 2 Related Work Antonymy detection, and antonymy and synonymy discrimination, have been treated principally by two approaches: distributional and pattern-based. Distributional approaches refer to the use of word vectors or word’s distributional information. Pattern-based are those that rely on patterns of joint occurrences of pair of words (such as ”from X to Y”) to detect antonymy. Due to the direction of this work, we will not extend on pathbased approaches and we will give the most attention in this section to distributional approaches. As we commented before, at first glance word vectors seem not suitable to discriminate antonymy from synonymy because pairs of antonyms correspond to similar vectors. Many research studies and experiments have focused on the construction of vector representations that deem antonymy. Scheible et al. (2013) showed that the context distributions of adjectives allow to discriminate antonyms and synonyms if only words from certain classes are considered as context in the vector space mode. Hill et al. (2014) found that word vectors from machine translation models outperform 3299 those learned from monolingual models in word similarity. They suggest that vectors from machine translation models should be used on tasks that require word similarity information, while vectors from monolingual models are more suitable for word relatedness. Santus et al. (2014) proposed APAnt, an unsupervised method based on average precision of contexts intersections of two words, to discriminate antonymy from synonymy. Symmetric patterns in corpus (e.g. X and Y) were used by Schwartz et al. (2015) to build word vectors and they showed that the patterns can be chosen so that the resulting vectors consider antonyms as dissimilar. Ono et al. (2015) proposed an approach to train word vectors to detect antonymy using antonymy and synonymy information from a thesauri as supervised data. A main difference between their approach and ours is that they did not rely on pre-trained vectors. They used distributional information jointly with the supervised information to train vectors through a model based on skip-gram. Also, Nguyen et al. (2016) integrated synonymy and antonymy information into the skip-gram model to predict word similarity and distinguish synonyms and antonyms. More recently, Nguyen et al. (2017) distinguish antonyms and synonyms using lexico-syntactic patterns jointly with the supervised word vectors from Nguyen et al. (2016). To finish, (Vuli´c, 2018) obtain great performance injecting lexical contrast into word embeddings by terms of their ATTRACT-REPEL strategy. 3 Method In this section we describe the proposed approach to discriminate antonymy and synonymy. It consists on a siamese networks inspired approach to magnify the subtle differences on antonyms that distinguish them from synonyms. 3.1 Algebra of synonymy and antonymy In order to define and substantiate our approach we introduce an axiomatic characterization of antonymy and synonymy based on the work done by Edmundson (1967). Precisely, synonymy and antonymy are modeled as relations and a set of axioms is proposed. These axioms, as we are going to show, are essential to formulate our approach. At first glance, synonymy and antonymy can be seen as binary relations between words. 
However, based on empirical results2 Edmundson defined synonymy and antonymy as ternary relations in order to consider the multiple senses of the words, as follows: xSiy ≡x synonym of y according to sense i xAiy ≡x antonym of y according to sense i Note that the senses of the words are represented in the relationship rather than in the words themselves. Each i (and therefor Si and Ai) reflects a particular configuration of the senses of the words in the vocabulary, considering a unique sense for each word. Firstly, synonymy is considered a reflexive, symmetric and transitive relationship. This is expressed by the following axioms: ∀i∀x(xSix) (1) ∀i∀x∀y(xSiy =⇒ySix) (2) ∀i∀x∀y∀z(xSiy ∧ySiz =⇒xSiz) (3) Si is an equivalence relation for each fixed i and therefor it splits the set of words into equivalence classes. In the next section we show that this is suitable for siamese networks. Antonymy is also a symmetric relation but it is irreflexive and antitransitive: ∀i∀x¬(xAix) (4) ∀i∀x∀y(xAiy =⇒yAix) (5) ∀i∀x∀y∀z(xAiy ∧yAiz =⇒¬xAiz) (6) So far, synonymy and antonymy are described separately. The following two axioms involve both relationships: ∀i∀x∀y∀z(xAiy ∧yAiz =⇒xSiz) (7) ∀i∀x∀y∀z(xAiy ∧ySiz =⇒xAiz) (8) Axiom 7 is a refined version of the antitransitive property (axiom 6)3. Assuming that two words cannot be synonyms and antonyms simultaneously, it is direct to prove that axiom 7 implies axiom 6. We include axiom 6 for clarification purpose. 2Precisely, analyzing graphs of several sets of synonyms treated as a binary relation and the adjacency matrices associated with these graphs. 3In fact, the term antitransitivity is used in Edmundson article to refer axiom 7 instead of axiom 6 3300 The right-identity, axiom 8, says that synonyms of an antonym of a word are also antonyms. Consequently, antonymy relation can be extended to operate between synonymy equivalence classes. To introduce our model and the considered task setting, we simplify this definition enforcing a binary relation. We consider: xRy ⇐⇒∃i(xRiy), where R and Ri are S or A and Si or Ai, respectively. This simplification encapsulates the multiple senses of the words and therefore it is suitable for word embeddings. However, the presented axioms may not be completely fulfilled under this simplification. 3.2 Synonymy and Siamese Networks A siamese network is a model that receives two inputs an returns an output. A base neural network is applied to each input and the both outputs are measured using a vector distance function (see Figure 2). Usually, siamese networks are trained using a contrastive loss function. The complete model can be interpreted as a trainable distance function on complex data, like images, sound, or text. Siamese networks have been used in a variety of tasks such as sentence similarity (Chi and Zhang, 2018), palmprint recognition (Zhong et al., 2018) and object tracking (Bertinetto et al., 2016), among many others. Figure 2: A siamese network model. Consider a vocabulary V of words where we want to discriminate synonyms and a given word vector set for that vocabulary of dimension n. Then consider a neural network Fθ : IRn →IRn with weights θ and the following contrastive loss function L = X (x,y)∈P d(Fθ(x), Fθ(y))+ X (x′,y′)∈N max{0, α −d(Fθ(x′), Fθ(y′))}, where d : IRn × IRn →IR+ is a vector distance function (e.g. d = ||x −y||2), α is the threshold for the negative examples, and P and N are positive and negative example pairs, respectively. 
So P is a set of pairs of synonyms and N a set of pair of words that are not synonyms. We consider that each pair is already composed by the word vector of each word, this is convenient to simplify the notation. This model can be trained using a backpropagation based technique and the output vectors closer than a given threshold are classified as related. It can be proved that the relation induced by a siamese network is reflexive and symmetric. Transitivity is a little more tricky. It is assured to be satisfied when the sum of the distances of the antecedent related pairs is below the threshold and, in every case, the distance of the transitive pairs is below the double of the threshold. Therefore, a siamese network is a reasonable approach for supervised synonymy detection. 3.3 Antonymy and Antitransitivity While a siamese network seems a reasonable choice for supervised synonym detection, antonymy presents a really different scenario. Consider Fθ∗as the base neural network in a siamese scheme and suppose that it is trained and working perfectly to discriminate pairs of antonyms. Consider also three words w1, w2, w3 such that w1 is antonym of w2 and w2 is antonym of w3, then Fθ∗(w1) = Fθ∗(w2), Fθ∗(w2) = Fθ∗(w3) hence, Fθ∗(w1) = Fθ∗(w3) an therefore, w1 and w3 would be recognized as antonyms, violating axiom 6. A siamese network induces a transitive relationship but antonymy is actually antitransitive. To model an antitransitive relation, we propose the following variation of the siamese network. Let’s consider Fθ and the model diagrammed in figure 3. It consists of a model that consumes two vectors with the same dimension and applies a base neural network once to one input and twice 3301 to the other. The idea behind this scheme is that if two word are antonyms then the base network applied once in one word vector and twice in the other word vector, will return close vectors. It can be interpreted as one application of the base network takes to a representation of the equivalence class of the synonymy relation and the second application to a representation its opposite class in terms of antonymy. Figure 3: The proposed parasiamese network to discriminate antonymy. Assume that Fθ∗is trained and behaves perfectly on data according to the following loss function: Lant = X (x,y)∈P d(Fθ(x), Fθ(Fθ(y)))+ X (x′,y′)∈N max{0, α −d(Fθ(x′), Fθ(Fθ(y′)))}, where P and N are positive and negative example pairs, respectively; α is the threshold for the negative examples, and d a distance function as in siamese network. Then, it can be seen that the relation induced fulfills the antitransitivity property if Fθ∗(w) ̸= Fθ∗(Fθ∗(w)), which is expected since antonymy is an antireflexive relation. Symmetry is not forced by definition but can be included in the loss function or by data, adding the reversed version of each pair in the dataset. The latter is the alternative chosen in this work. 3.4 Relaxed Loss Function In order to classify a pair of words we rely in a threshold ρ. If the candidate pair obtains a distance (between its transformed vectors) below ρ, then it is classified as positive, otherwise as negative. So, it is not necessary to minimize the distance to 0 to classify it correctly. We propose to change the positive part of the contrastive loss function by X (x,y)∈P max(d(Fθ(x), Fθ(Fθ(y))) −ρν, 0) where ν is a factor in [0, 1] that states the importance given to ρ, the rest of the terms remains the same as in the previous section. If ν = 0 then the original loss function is recovered. 
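As a concrete illustration, the following is a minimal PyTorch sketch of the relaxed parasiamese loss just described. It is not the authors' implementation (the paper uses Keras); `base_net` stands for the base network F, and the default values of alpha, rho and nu are placeholders that follow the notation above.

```python
import torch

def euclidean(a, b):
    # Euclidean distance between batches of transformed word vectors
    return torch.norm(a - b, dim=-1)

def relaxed_parasiamese_loss(base_net, pos_x, pos_y, neg_x, neg_y,
                             alpha=3.0, rho=2.0, nu=0.5):
    """Relaxed contrastive loss: positives are antonym pairs, negatives are not."""
    # antonym pairs: apply F once to one word vector and twice to the other
    d_pos = euclidean(base_net(pos_x), base_net(base_net(pos_y)))
    # relaxed positive term: distances already below rho * nu are not penalised
    pos_term = torch.clamp(d_pos - rho * nu, min=0.0).sum()
    # negative pairs: push the distance above the margin alpha
    d_neg = euclidean(base_net(neg_x), base_net(base_net(neg_y)))
    neg_term = torch.clamp(alpha - d_neg, min=0.0).sum()
    return pos_term + neg_term
```

Both terms backpropagate through the same base network, and in the paper symmetry is obtained by also adding the reversed version of each training pair rather than by modifying the loss.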
We consider ν = 1/2 and we experimentally observe an improvement in results when this relaxed loss function is used. 3.5 Pre-training using synonyms Consider Fθ∗trained and perfectly working to detect pairs of antonyms using the parasiamese scheme presented in the previous section. Now, lets consider the word vectors w1, w2 and w3 such that w1 is antonym of w2 and w2 is antonym of w3. According to the parasiamese loss function we have that, Fθ∗(w1) = Fθ∗(Fθ∗(w2)), Fθ∗(Fθ∗(w2)) = Fθ∗(w3). This implies that Fθ∗(w1) = Fθ∗(w3), suggesting to F the role of a siamese network. On the other hand, using axiom 7 we have that w1 and w3 tend to be synonyms, which, as we previously show, fits fine for siamese networks. Using this result, we propose to pre-train Fθ, minimizing a siamese network on synonymy data as in Section 3.2, and then perform the parasiamese training to detect antonyms as described in Section 3.3. We use the same antonymy/synonymy dataset to pre-train and train the parasiamese network and we experimentally observe that this pre-training phase improves the performance of the parasiamese model. 4 Experiments In this section we describe the setup details of the experiments performed using the presented approach. Here we give the complete information to reproduce the experiments performed. We describe the dataset, word vectors set used and the random search strategy used for the hyperparameter configuration. 3302 Model Adjective Verb Noun P R F1 P R F1 P R F1 Baseline (concat) 0.691 0.664 0.677 0.756 0.641 0.694 0.764 0.716 0.739 AntSynNet 0.763 0.807 0.784 0.743 0.815 0.777 0.816 0.898 0.855 Parasiam (regular loss) 0.735 0.804 0.768 0.815 0.894 0.853 0.786 0.857 0.820 Parasiam (no pre-train) 0.764 0.848 0.804 0.825 0.892 0.857 0.787 0.849 0.817 Parasiam (ElMo) 0.838 0.844 0.841 0.830 0.910 0.869 0.802 0.855 0.827 Parasiam (FastText) 0.855 0.857 0.856 0.864 0.921 0.891 0.837 0.859 0.848 Table 1: Performance of our approach and the baseline models. AntSynNet corresponds to the work presented by Nguyen et al. (2017) and Baseline (concat) to a feed forward network on vectors concatenation. The third row refers to the parasiamese model without including the relaxed loss function, and the fourth to the model without performing the pre-training stage. Third and fourth row results were carried out using FastText vectors. Fifth and sixth rows show the results of the complete model (i.e. using pre-training and the relaxed loss function) on ElMo and FastText vectors, respectively. 4.1 Antonymy Dataset To perform our experiments we use the dataset created by Nguyen et al. (2017). This dataset contains a large amount of pairs of antonyms and synonyms grouped according to its word class (noun, adjective and verb). This dataset was built using pairs extracted by Nguyen et al. (2016) from WordNet and Wordnik4 to induce patterns through a corpus. Then, the induced patterns were used to extract new pairs, filtering those that match less than five patterns. Finally, the dataset was balanced to contain the same number of antonyms and synonyms, and split into train, validation and test. The number of pairs contained on each partition of each word class is showed in Table 2. Train Val Test Adjective 5562 398 1986 Verb 2534 182 908 Noun 2836 206 1020 Table 2: Nguyen et al. (2017) number of word pairs of each partition in the dataset. 4.2 Pre-trained word vectors For the experimental setting we consider pretrained general purpose word vectors. We avoid out-of-vocabulary terms using character based approaches. 
The following publicly available resources were considered: • FastText (Joulin et al., 2016) vectors trained on English Wikipedia dump 5. We use default 4http://www.wordnik.com 5http://mattmahoney.net/dc/enwik9.zip hyper-parameters and vectors dimension is 300. • ElMo (Peters et al., 2018) vectors for English from Che et al. (2018) 6. We use the first layer of ElMo that gives representations for decontextualized words. In the case of FastText we compute 300 dimensional vectors for each word in the dataset. In the case of ElMo embeddings, the pre-trained model was already defined to generate representations of 1024 dimensions. 4.3 Base Network Structure The base network transforms each word vector into a representative form synonymy and antonymy. Any differentiable function that inputs and outputs vectors of the same dimension of the word embeddings space can be used as base network. In this work we consider layered fully connected networks with ReLU as activation function. The presented model involves tens of hyperparameters and some of them with many options. We use random search to find a good hyperparameter configuration, since it may lead to a better and more efficient solution in comparison to grid or manual search (Bergstra and Bengio, 2012). This improvement is given by the fact that some hyper-parameters do not really matter and grid or manual search would consume time exploring each combination of them (for each combination of the rest), while random search does not exhaustively explore irrelevant parts of the hyperparameters space. 6http://vectors.nlpl.eu/repository/11/ 144.zip 3303 We perform random search sampling models according to the following considerations: • 2,3,4 and 5 layers uniformly chosen • for each hidden layer (if any) we sample its size from a Gaussian distribution with µ = d/2 and σ = d/5, where d is the dimension of the word vectors.7 • dropout with 1/2 of probability to be activated or not and dropout probability is given by a Gaussian distribution (µ = 0.25 and σ = 0.1). • prediction (or positive) threshold and contrastive loss threshold uniformly chosen between {2.5, 2, 1.5, 1, 0.5, 0.2} and {3, 5, 10}, respectively. • batch size uniformly chosen from {32, 64, 128} • we choose between SGD and Adam with equal probability with a learning rate chosen from {0.01, 0.001, 0.0001} • the patience for the early stopping was sampled uniformly from {3, 4, 7, 9} We initialize the weights of the network using Glorot uniform function (Glorot and Bengio, 2010). We stop the training using early stopping and we checkout the best model in the whole run against the validation set. For the implementation we use Keras (Chollet et al., 2015). After analyzing the results of 200 sampled hyperparameter configurations using the FastText vectors we found that an adequate hyperparameters setting is a four layered network of input dimensions [300, 227, 109, 300] on its layer from input to output, without dropout, and ReLu activation function for every neuron. For training, a batch size is 64, an acceptance threshold of 2.0 and of 3.0 for the negative part of the contrastive loss. The optimizer method is SGD with a learning rate of 0.01 and a patience of 5 for the early stopping. This training setup was used in both phases: pre-training and training. For the experiments with ElMo embeddings we uniquely adjust hidden layers sizes, probably ElMo results may improve by a dedicated hyperparameter search. 7The dimension of input and output layers is d by model definition. 
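To make the search procedure concrete, here is a minimal Python sketch of one way to sample configurations under the distributions listed above; the function name and dictionary keys are illustrative and are not taken from the released code.

```python
import random

def sample_config(d=300):
    """Sample one hyperparameter configuration following Section 4.3."""
    n_layers = random.choice([2, 3, 4, 5])
    # input and output layers are fixed to the word-vector dimension d;
    # hidden sizes are drawn from a Gaussian with mu = d/2, sigma = d/5
    hidden = [max(1, int(random.gauss(d / 2, d / 5))) for _ in range(n_layers - 2)]
    return {
        "layer_sizes": [d] + hidden + [d],
        # dropout active with probability 1/2, rate ~ N(0.25, 0.1)
        "dropout": max(0.0, random.gauss(0.25, 0.1)) if random.random() < 0.5 else 0.0,
        "accept_threshold": random.choice([2.5, 2.0, 1.5, 1.0, 0.5, 0.2]),
        "negative_margin": random.choice([3, 5, 10]),
        "batch_size": random.choice([32, 64, 128]),
        "optimizer": random.choice(["sgd", "adam"]),
        "learning_rate": random.choice([0.01, 0.001, 0.0001]),
        "patience": random.choice([3, 4, 7, 9]),
    }

configs = [sample_config() for _ in range(200)]  # 200 configurations, as in the paper
```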
5 Results In this section we discuss the results obtained with the presented approach and we analyze the model behavior through the outputs of the base network in siamese and parasiamese schemes. We include two baselines with different motivations for comparison purpose. We analyze the model outputs for related and unrelated pairs (i.e. pairs that are not synonyms or antonyms). In the end of this section, we analyze the output of the base network. 5.1 Baselines We consider two baselines to compare our experiments. The first baseline is a feed forward network classifier that consumes the concatenation of the embeddings of each word in the candidate pair. This baseline compares the performance boost of the proposed model against a conventional supervised classification scheme using neural networks. For this baseline we consider the FastText vectors to feed a four layered network with layer dimensions of [600, 400, 200, 1] from input to output and ReLu as activation function. This model was trained through binary cross-entropy loss and SGD with a learning rate of 0, 01. The second baseline we consider for comparison is AntSynNet (Nguyen et al., 2017), a patternbased approach that encodes the paths connecting the joint occurrences of each candidate pairs using a LSTM. It relies on additional information, such as, part-of-speech, lemma, syntax and dependency trees. 5.2 Antonymy and Synonymy discrimination We evaluate our model in the antonymysynonymy discrimination task proposed by Nguyen et al. (2017). However, the task here is faced from a different point of view. In this work we are interested in showing that word vectors contains what is needed to distinguish between antonyms and synonyms, instead of resolving the general task using any available resource. For that reason we do not try to improve the performance adding more information to the model, such as, paths. It is a supervised approach that discriminate antonymy and synonymy using only word vectors as features. The obtained results are reported in Table 1. The first baseline is included to compare the performance of our model with a word vector concatenation classification. We also report results 3304 with and without pre-training to show the performance gain that pre-training contributes. Notice that, in contrast to AntSynNet, no path-based information is considered in our approach. 5.3 Siamese and parasiamese outputs In this section we show the outputs of the siamese and parasiamese networks on word pairs chosen from the validation set (see Table 3). Word1 Word2 Cos Siam Psiam cold warm 0.327 3.051 1.01 cold hot 0.367 3.576 0.801 raw hot 0.467 5.398 1.424 peace war 0.448 6.167 0.367 stupid clever 0.406 7.635 1.192 stretch contract 0.526 4.351 0.354 love hate 0.378 2.771 0.153 reject take 0.528 2.341 0.66 close harsh 0.544 4.11 0.682 small large 0.178 4.076 0.302 auntie uncle 0.351 1.721 0.561 day night 0.366 1.098 0.803 sloping vertical 0.331 0.254 2.687 hot cool 0.253 1.753 2.239 invisible visible 0.137 1.816 2.11 change mutate 0.593 0.853 4.879 ample large 0.398 0.115 4.05 flee depart 0.499 0.682 3.958 cure heal 0.332 0.646 5.769 elegant classy 0.48 0.964 5.917 herald hail 0.507 0.752 5.826 live exist 0.527 0.126 4.737 agitate disturb 0.282 0.627 4.683 chop divide 0.657 1.56 4.085 sturdy hardy 0.333 1.894 3.493 stout robust 0.541 1.903 4.361 scatter dot 0.477 2.05 1.402 fizzle fail 0.437 3.706 3.464 see discern 0.501 3.878 3.724 Table 3: A word sampling and their vector cosine distances, siamese and parasiamese (Psiam) outputs. 
The upper and lower parts correspond to pairs in the validation data as antonyms and synonyms, respectively. The threshold for acceptance is 2.0. It can be observed in the obtained results, in general, a suitable behavior of the model. We also include the cosine distance to compare and show that it is unable to distinguish between antonyms and synonyms. It is interesting to notice, for instance, in the upper part of the table, that corresponds to antonyms, the difference in outputs between the pairs cold-warm and cold-hot. It may be interpreted as that cold-hot are more antonyms than cold-warm, which seems adequate. Below the dashed line of each part we include some failure cases. 5.3.1 Non-related pairs The task setting considered for this work only uses synonyms and antonyms for training. It is interesting to notice that in this work the behavior of the model with unrelated pairs is learned from related pairs and word embeddings, without considering any unrelated pairs during training. We show in Table 4 the outputs given by siamese and parasiamese networks on unrelated pairs. Word1 Word2 Cos Siam Psiam see disturb 0.579 3.803 3.866 flee cure 0.534 3.189 3.764 man wolf 0.507 4.017 2.649 ascend speak 0.63 1.927 2.776 safe adverse 0.475 4.57 0.625 change mature 0.511 1.118 3.602 cold night 0.48 1.134 1.98 cold day 0.574 3.727 0.483 warm night 0.462 0.773 0.658 warm day 0.52 0.733 1.601 Table 4: Unrelated pairs and its word vectors cosine distance, siamese model output and parasiamese (Psiam) output. The obtained results show that the model is not capable to detect unrelated pairs correctly. In fact, the model seems to learn a broader relation. For example, the words safe and adverse are predicted as antonyms and although they are not antonyms, they have some oppositeness. Similarly, the combinations of cold and warm with day and night also seems to be coherent since the day tends to be warmer than the night and the night tends to be colder than the day. In the upper part of the table we include unrelated pairs that were correctly predicted as unrelated and below the dashed line we include failure cases on unrelated pairs. 5.3.2 Base Network Output In this section we analyze the learned base network. In Figure 4 we show a 2D visualization of the original and the transformed word embeddings. The sample of words was chosen from 3305 Figure 4: 2D visualization of the original (left) and the transformed (right) words vectors. Related words are colored in red and light blue to facilitate the visualization of how the model split antonymy from original embeddings. the validation set and t-SNE (Maaten and Hinton, 2008) was used for the dimensionality reduction. It can be observed that in the original space antonyms tend to be close and when the base network is applied the space seems to be split into two parts, corresponding to each pole of antonymy. We also consider the resulting space from applying the transformation twice to the original word vector space, which is similar to the result of applying it only once. This behavior is coherent with the parasiamese network definition. To conclude this section, we show the closest words (in the vocabulary) of the words natural and unnatural, in the original and the transformed spaces, sorted by distance (Table 5). Note how some opposite words appear close in the original space, while in the transformed space the nearest words does not seem to be opposite to the word in question. 
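As an illustration of how the scores in Tables 3 and 4 can be obtained at prediction time, the sketch below computes the siamese and parasiamese distances for a candidate pair from a trained base network; the names are placeholders, and the threshold of 2.0 mirrors the acceptance threshold reported above.

```python
import torch

def pair_scores(base_net, v1, v2):
    f1, f2 = base_net(v1), base_net(v2)
    siam = torch.norm(f1 - f2).item()             # siamese output (synonymy-style score)
    psiam = torch.norm(f1 - base_net(f2)).item()  # parasiamese output (antonymy-style score)
    return siam, psiam

def predict_antonym(base_net, v1, v2, threshold=2.0):
    # a pair is predicted as antonyms when the parasiamese distance falls
    # below the acceptance threshold; only one direction is scored here,
    # symmetry being handled during training by adding reversed pairs
    return pair_scores(base_net, v1, v2)[1] < threshold
```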
6 Conclusion We presented a supervised approach to distinguish antonyms and synonyms using pre-trained word embeddings. The proposed method is based on algebraic properties of synonyms and antonyms, principally in the transitivity of synonymy and the antitransitivity of antonymy. We proposed a new siamese inspired model to deal with antitransitivity, the parasiamese network. In addition, we proposed to pre-train this network, relying on the claim that two antonyms of the same word tend word neighborhood natural O naturals, nonnatural, naturalness, unnatural, naturalmotion, connatural, sobrenatural T pop, morning, simpleness, pee, public, cardia, liveness unnatural O nonnatural, unnaturalness, connatural, unnaturally, naturalness, natural, sobrenatural T shumen, simpsons, untroubledness, hither, random, bewitched, diarrhetic Table 5: Nearest word vectors in the original (O) and the transformed (T) spaces. to be synonyms, through a siamese network; and a relaxed version of the contrastive loss function. We evaluated our approach using a publicly available dataset and word vectors, obtaining encouraging results. References Kfir Bar and Nachum Dershowitz. 2010. Using synonyms for arabic-to-english example-based translation. In Proceedings of the Ninth Conference of the Association for Machine Translation in the Americas (AMTA 9). 3306 James Bergstra and Yoshua Bengio. 2012. Random search for hyper-parameter optimization. Journal of Machine Learning Research, 13(Feb):281–305. Luca Bertinetto, Jack Valmadre, Jo˜ao F. Henriques, Andrea Vedaldi, and Philip H. S. Torr. 2016. Fullyconvolutional siamese networks for object tracking. CoRR, abs/1606.09549. Anthony Bonato, Ewa Infeld, Hari Pokhrel, and Paweł Prałat. 2017. Common adversaries form alliances: modelling complex networks via anti-transitivity. In International Workshop on Algorithms and Models for the Web-Graph, pages 16–26. Springer. Wanxiang Che, Yijia Liu, Yuxuan Wang, Bo Zheng, and Ting Liu. 2018. Towards better UD parsing: Deep contextualized word embeddings, ensemble, and treebank concatenation. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 55–64, Brussels, Belgium. Association for Computational Linguistics. Ziming Chi and Bingyan Zhang. 2018. A sentence similarity estimation method based on improved siamese network. Journal of Intelligent Learning Systems and Applications, 10(04):121. Franc¸ois Chollet et al. 2015. Keras. https:// keras.io. David Alan Cruse. 1986. Lexical semantics. Cambridge university press. H. P. Edmundson. 1967. Axiomatic characterization of synonymy and antonymy. In COLING 1967 Volume 1: Conference Internationale Sur Le Traitement Automatique Des Langues. Xavier Glorot and Yoshua Bengio. 2010. Understanding the difficulty of training deep feedforward neural networks. In AISTATS, volume 9 of JMLR Proceedings, pages 249–256. JMLR.org. Aria Haghighi, Andrew Ng, and Christopher Manning. 2005. Robust textual inference via graph matching. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing. Felix Hill, Kyunghyun Cho, S´ebastien Jean, Coline Devin, and Yoshua Bengio. 2014. Embedding word similarity with neural machine translation. CoRR, abs/1412.6448. Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, H´erve J´egou, and Tomas Mikolov. 2016. Fasttext.zip: Compressing text classification models. arXiv preprint arXiv:1612.03651. Chandra Khatri, Gyanit Singh, and Nish Parikh. 
2018. Abstractive and extractive text summarization using document context vector and recurrent neural networks. CoRR, abs/1807.08000. Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. Journal of machine learning research, 9(Nov):2579–2605. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. CoRR, abs/1301.3781. George A Miller. 1995. Wordnet: a lexical database for english. Communications of the ACM, 38(11):39– 41. George A Miller and Walter G Charles. 1991. Contextual correlates of semantic similarity. Language and cognitive processes, 6(1):1–28. Kim Anh Nguyen, Sabine Schulte im Walde, and Ngoc Thang Vu. 2017. Distinguishing antonyms and synonyms in a pattern-based neural network. CoRR, abs/1701.02962. Kim Anh Nguyen, Sabine Schulte im Walde, and Ngoc Thang Vu. 2016. Integrating distributional lexical contrast into word embeddings for antonym-synonym distinction. arXiv preprint arXiv:1605.07766. Masataka Ono, Makoto Miwa, and Yutaka Sasaki. 2015. Word embedding-based antonym detection using thesauri and distributional information. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 984–989. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1532–1543. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proc. of NAACL. Herbert Rubenstein and John B Goodenough. 1965. Contextual correlates of synonymy. Communications of the ACM, 8(10):627–633. Magnus Sahlgren. 2008. The distributional hypothesis. Italian Journal of Disability Studies, 20:33–53. Enrico Santus, Qin Lu, Chu-Ren Huang, et al. 2014. Taking antonymy mask off in vector space. In Proceedings of the 28th Pacific Asia Conference on Language, Information and Computing. Silke Scheible, Sabine Schulte Im Walde, and Sylvia Springorum. 2013. Uncovering distributional differences between synonyms and antonyms in a word space model. In Proceedings of the Sixth International Joint Conference on Natural Language Processing, pages 489–497. 3307 Roy Schwartz, Roi Reichart, and Ari Rappoport. 2015. Symmetric pattern based word embeddings for improved word similarity prediction. In Proceedings of the nineteenth conference on computational natural language learning, pages 258–267. Rion Snow, Lucy Vanderwende, and Arul Menezes. 2006. Effectively using syntax for recognizing false entailment. In Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, pages 33–40. Association for Computational Linguistics. Ivan Vuli´c. 2018. Injecting lexical contrast into word vectors by guiding vector space specialisation. In Proceedings of The Third Workshop on Representation Learning for NLP, pages 137–143. Dexing Zhong, Yuan Yang, and Xuefeng Du. 2018. Palmprint recognition using siamese network. In Chinese Conference on Biometric Recognition, pages 48–55. Springer.
2019
319
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 331–335 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 331 Adaptive Attention Span in Transformers Sainbayar Sukhbaatar Edouard Grave Piotr Bojanowski Armand Joulin Facebook AI Research {sainbar,egrave,bojanowski,ajoulin}@fb.com Abstract We propose a novel self-attention mechanism that can learn its optimal attention span. This allows us to extend significantly the maximum context size used in Transformer, while maintaining control over their memory footprint and computational time. We show the effectiveness of our approach on the task of character level language modeling, where we achieve state-of-the-art performances on text8 and enwiki8 by using a maximum context of 8k characters. 1 Introduction Language models are at the core of many NLP applications, like machine translation or dialogue. Recently, much progress has been made by a new neural network called Transformer (Vaswani et al., 2017). Part of its success is due to its ability to capture long term dependencies. This is achieved by taking long sequences as inputs and explicitly compute the relations between every token via a mechanism called the “self-attention” layer (AlRfou et al., 2019). While this layer allows for information to propagate across long distances, it has a computational and memory cost that scales quadratically with the size of the input sequence. As a consequence, Transformers hardly scale to sequences of more than a thousand tokens. This is particularly problematic in the case of character level language modeling where dependencies are often spread over a few thousands time steps. In this work, we propose an alternative to the self-attention layer to reduce the computational burden of a Transformer. Our layer learns its optimal context size, resulting in a network where each attention layer gathers information on their own context. In practice, we observe that this leads to Transformer with small context in the lowlevel layers and very large ones for the last layers. With this modification, we are able to scale input sequences to more than 8k tokens with no loss of performance, nor additional computational or memory cost. We validate our approach on the task of character level language modeling where we reach state-of-the-art performances while reducing the number of FLOPS. The code to reproduce our results is publicly available1. 2 Approach 2.1 Sequential transformer network Language modeling is the problem of assigning a probability to a sequence of tokens (w1, . . . , wT ): P(w1, . . . , wT ) = T Y t=1 P(wt | wt−1, . . . , w1). Recent progress was made with a new autoregressive model called Sequential Transformer (Vaswani et al., 2017). A Transformer is made of a sequence of layers that are composed of a block of parallel self-attention layers followed by a feedforward network. We refer to Vaswani et al. (2017) for the details on the structure. In this paper, we make a couple of modifications to the Transformer model: we use the relative position embeddings of Shaw et al. (2018) and the caching mechanism of Dai et al. (2019) to speed up the train and test time. Self-attention layer. A core mechanism of a transformer network is the self-attention layer, which consists of multiple attention heads working in parallel. Each attention head applies the attention mechanism of Bahdanau et al. (2015) to its own input. 
Given a token t in a sequence, the head 1https://github.com/facebookresearch/ adaptive-span 332 −100 −80 −60 −40 −20 Context 0.000 0.002 0.004 Attention Head A Head B Figure 1: Attention patterns of two different heads of a standard Transformer. The two patterns are qualitatively different: Head A utilizes recent steps, while Head B has uniform attention over the context. first computes similarities with its past, i.e., any token r in the span [t −S, t): str = x⊤ t W⊤ q (Wkxr + pt−r) , (1) where Wk and Wq are the “key” and “query” matrices, and pt−r is the relative position embedding. The attention weights are then obtained by applying a softmax function on these similarities: atr = exp (str) Pt−1 q=t−S exp (stq) , (2) Finally, the head outputs a vector yt by taking the average of the past representations weighted by their attention weights: yt = t−1 X r=t−S atrWvxr, (3) where Wv is called the “value” matrix. Outputs from different heads are then concatenated together and multiplied by an output matrix Wo before feeding to the next layer. Similar to the memory access mechanisms of Sukhbaatar et al. (2015), it pulls information from the past to update the current token representation. Repeating this mechanism in consecutive layers allows for information to flow over long distances. However, for each input token, each attention head scales linearly in memory and time in the context size, or attention span. There are typically 12 layers with 8 heads each that processes 512 tokens simultaneously. This drastically limits the maximum attention span used in Transformers. 2.2 Adaptive attention span Each attention head of a Transformer shares the same attention span S. This assumes that every head requires the same span to form its representation. As shown in Figure 1, this assumption does not hold in the context of character level language x mz(x) 1 z z + R Figure 2: The soft mask as a function of the distance. modeling: some heads (e.g., Head A) focus on the recent history, while others take information from the whole available context (e.g., Head B). In this section, we propose to learn the attention span of each head independently to reduce their computational and memory cost. For each head, we add a masking function to control for the span of the attention. A masking function is a non-increasing function that maps a distance to a value in [0, 1]. We take the following soft masking function mz parametrized by a real value z in [0, S]: mz(x) = min  max  1 R (R + z −x) , 0  , 1  , where R is a hyper-parameter that controls its softness. This soft masking function is inspired by Jernite et al. (2017). In Figure 2, we show the shape of this piecewise function as a function of the distance. The attention weights from Eq. 2 are then computed on the masked span, i.e., atr = mz(t −r) exp (str) t−1 P q=t−S mz(t −q) exp (stq) . We add a ℓ1 penalization on the parameters zi for each attention head i of the model to the loss function: L = −log P(w1, . . . , wT ) + λ M X i zi, where λ > 0 is the regularization hyperparameter, and M is the number of heads in each layer. Our formulation is differentiable in the parameters zi and we learn them jointly with the rest of the model. Dynamic attention span. As an extension, we consider a dynamic computation approach (Graves, 2016) where the attention span dynamically change based on the current input (Luong et al., 2015; Shu and Nakayama, 2017). 
At a time step t, the span parameter zt of 333 an attention head is then a function of the input parametrized by a vector v and a scalar b, i.e., zt = Sσ(vT xt + b). We penalize zt in the same way as before and learn the parameters v, b jointly with the rest of the parameters. 3 Experiments In this section, we evaluate the impact of our adaptive attention mechanism in the experimental setting of Al-Rfou et al. (2019) for character level language modeling. Dataset. We use the text8 and enwik8 datasets of Mahoney (2011). The both dataset have 100M tokens. We report bit per character (bpc) on dev and test set. Implementation details. We experiment with two sizes of models. Our small models have 12 layers and a hidden size of dh = 512, except for the feedforward ReLU layers, which have 2048 units. The large models have 24 layers with a hidden size of dh = 768, and a ReLU size of 4096. All models have 8 attention heads in each layer. Token and position embedding parameters are initialized from N(0, 1), and the projection matrices W{q,k,v,o} are initialized from U(−1/√dh, 1/√dh). A single set of position embeddings pt is shared across all the heads. In adaptive-span models, we reprameterized the span parameter z by z = Sz′, where z′ ∈[0, 1] is initialized to 0. In dynamic-span models, the bias term b is initialized −4 to make initial spans small. We set the hyperparameters λ = 2 × 10−6 and R = 32 for the both type of models, except λ is reduced to 0.5 × 10−6 when S = 8192 because z was not growing longer than 4000. We use Adagrad with a batch size of 64 and fixed learning rate of 0.07 and 32k warm-up steps. Our warm-up strategy differs from Vaswani et al. (2017): we linearly increase learning rate from zero to the final learning rate. Gradients of each module are clipped at 0.03 for better stability. At train time, we use a block of 512 consecutive characters and compute the loss and gradient for each of those 512 characters. In small models, we apply dropout with a rate of 0.3 to the attention and the feedforward ReLU activations. We train small models for 600K steps (900K steps when S = 8192), which takes about 2 ∼3 days on 8 V100 GPUs depending on the attention span limit. Large models are trained with a dropout rate of 0.4 until the validation performance stopped improving (250K steps for text8 and 150K steps for enwik8), and then further trained for 20K steps with a learning rate divided by 10. Results. In Table 1, we compare our sequential Transformer with the adaptive spans (“AdaptiveSpan”) of Sec. 2.2 to models of Al-Rfou et al. (2019) and Dai et al. (2019). For small models, our model outperforms the other Transformers by 0.07 bcp while significantly reducing the memory usage for large attention span. Interestingly, even with a limit on span sets to 8192, the average span is only 314. Similar results are obtained on enwik8 as shown in Table 2, where the adaptive-span model outperformed similar sized models with a significantly smaller average span. Our large models achieved state-of-the-art performances on both datasets with fewer parameters and FLOPS. In Figure 3, we compare the fixed and adaptive span small Transformers as we increase the attention span limit S. The performance of both models improve as the limit increase (see Figure 3(left)), but the adaptive-span model benefits more from longer span. 
As shown on the Figure 3(center), a Transformer with adaptive spans controls its average spans, leading to reduction of up to 70% in the number of FLOPS for the inference with large spans (see Figure 3(right)). Impact on the attention span. In Figure 4, we show the final attention spans of every attention heads of our small adaptive-span model with S = 4096. Even though all the span sizes are initialized to the same value, we see large varieties in their final values. We can see that the lowest 5 layers have the smallest possible attention span, which is R = 32 of the masking function. This indicates that lower layers in a Transformer model do not really require a long attention span in this particular task. In contrast, few attention heads in the higher layers have very long spans, exceeding several thousand. Although there is a general tendency of higher layers having longer attention spans, it is not a simple monotonic function of the layer height. Impact on the number of FLOPS. Having a smaller attention span has a direct impact on the total number of FLOPS necessary for computing one-step prediction. In a standard fixed-span 334 Model #layers Avg. span #Params #FLOPS dev test Small models T12 (Al-Rfou et al., 2019) 12 512 44M 22G 1.18 Adaptive-Span (S = 8192) 12 314 38M 42M 1.05 1.11 Large models T64 (Al-Rfou et al., 2019) 64 512 235M 120G 1.06 1.13 T-XL (Dai et al., 2019) 24 3800 277M 438M 1.08 Adaptive-Span (S = 8192) 24 245 209M 179M 1.01 1.07 Table 1: Character level language modeling on text8. We report bpc for the dev and test sets, as well as, the number of parameters, the average attention spans and total number of FLOPS (an estimate of the number of FLOPS necessary for computing one step prediction). 256 1024 4096 Span limit (S) 1.06 1.08 1.10 Dev. (bpc) Fixed Adaptive 256 1024 4096 Span limit (S) 0 2500 5000 7500 Average span 256 1024 4096 Span limit (S) 0.0 0.5 1.0 1.5 FLOPS ×108 Figure 3: Left: validation performances improve as the attention span limit S increase (we did not train a fixedspan model with S = 8192 due to memory limitation). Center: average attention span of trained models. Learning attention spans significantly reduces the average attention span. Right: the number of FLOPS during inference time grows almost linearly with S for the fixed span models. The adaptive-span models do not have this growth in #FLOPS because they have a very small attention span on average. Model #layers #Params #FLOPS dev / test Small models T12 12 44M 22G - / 1.11 T-XL 12 41M 64M - / 1.06 Adaptive 12 39M 41M 1.04 / 1.02 Large models T64 64 235M 120G - / 1.06 T-XL 18 88M 329M - / 1.03 T-XL 24 277M 438M - / 0.99 Adaptive 24 209M 181M 1.00 / 0.98 Table 2: Results on enwik8. The span limit is S = 8192 for the adaptive-span models. model, the total number of FLOPS is mostly controlled by the feed-forward layer (accounting for 62% of FLOPS when S = 256). However, as the span increase, the attention layer dominates the computation (82% of FLOPS when S = 8192), making it hard to scale to longer sequences. In contrast, the learning of an attention span keeps computation at a relatively constant level even as 1 2 3 4 5 6 7 8 9 10 11 12 Layers 101 102 103 Attention span Figure 4: Adaptive spans (in log-scale) of every attention heads in a 12-layer model with span limit S = 4096. Few attention heads require long attention spans. S increase as shown in Figure 3(right). The memory usage is also dominated by the attention layer as the attention span increase. 
Thus, reducing the average span will also reduce the memory usage. However, because all heads in a single layer attend to common state vectors, the maximum span within each layer will determine the memory usage. The same is true for the number of FLOPS if all heads of a layer are computed together, as often done for better efficiency. In practice, the largest fixed-span model that can fit in memory for training had a span of S = 2048 (batches had to be split when S = 4096), and 335 ov e r l ook s t he pa r k and i t s nume r ou s 100 200 Average span Figure 5: Example of average dynamic attention span as a function of the input sequence. The span is averaged over the layers and heads. Model Avg. span dev Adaptive (S = 1024) 123 1.08 Dynamic (S = 1024) 149 1.08 Table 3: Comparison between adaptive and dynamic attention span on text8. it took about 550ms per batch. In contrast, an adaptive-span model with a 4 times longer span of S = 8192 fit in memory and took about similar time per batch. Dynamic span. In Table 3, we show the adaptive and dynamic spans achieved the same performance with comparable average spans on text8. Figure 5 shows how the average dynamic span adapts to the input sequence. The span increases at the beginning of words and in the middle of composed words, e.g., to predict the “l” in “overlook”. 4 Conclusion In this work, we present a novel self-attention layer with an adaptive span. This mechanism allows for models with longer context, and thus with the capability to catch longer dependencies. We have shown the importantce of this feature in the context of character level modeling where information is spread over great distances. References Rami Al-Rfou, Dokook Choe, Noah Constant, Mandy Guo, and Llion Jones. 2019. Character-level language modeling with deeper self-attention. In Proceedings of the 33rd AAAI Conference on Artificial Intelligence. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR. Zihang Dai, Zhilin Yang, Yiming Yang, Jaime G. Carbonell, Quoc V. Le, and Ruslan Salakhutdinov. 2019. Transformer-xl: Attentive language models beyond a fixed-length context. CoRR, abs/1901.02860. Alex Graves. 2016. Adaptive computation time for recurrent neural networks. CoRR, abs/1603.08983. Yacine Jernite, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Variable computation in recurrent neural networks. In 5th International Conference on Learning Representations, ICLR. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP. Matt Mahoney. 2011. Large text compression benchmark. URL: http://www. mattmahoney. net/text/text. html. Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-attention with relative position representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT. Raphael Shu and Hideki Nakayama. 2017. An empirical study of adequate vision span for attention-based neural machine translation. In Proceedings of the First Workshop on Neural Machine Translation. Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. 2015. End-to-end memory networks. In Advances in Neural Information Processing Systems 28. 
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30.
2019
32
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3308–3318 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3308 Incorporating Syntactic and Semantic Information in Word Embeddings using Graph Convolutional Networks Shikhar Vashishth1 Manik Bhandari1∗ Prateek Yadav2∗ Piyush Rai3 Chiranjib Bhattacharyya1 Partha Talukdar1 1Indian Institute of Science 2Microsoft Research, 3IIT Kanpur {shikhar,manikb,chiru,ppt}@iisc.ac.in [email protected], [email protected] Abstract Word embeddings have been widely adopted across several NLP applications. Most existing word embedding methods utilize sequential context of a word to learn its embedding. While there have been some attempts at utilizing syntactic context of a word, such methods result in an explosion of the vocabulary size. In this paper, we overcome this problem by proposing SynGCN, a flexible Graph Convolution based method for learning word embeddings. SynGCN utilizes the dependency context of a word without increasing the vocabulary size. Word embeddings learned by SynGCN outperform existing methods on various intrinsic and extrinsic tasks and provide an advantage when used with ELMo. We also propose SemGCN, an effective framework for incorporating diverse semantic knowledge for further enhancing learned word representations. We make the source code of both models available to encourage reproducible research. 1 Introduction Representing words as real-valued vectors is an effective and widely adopted technique in NLP. Such representations capture properties of words based on their usage and allow them to generalize across tasks. Meaningful word embeddings have been shown to improve performance on several relevant tasks, such as named entity recognition (NER) (Bengio et al., 2013), parsing (Socher et al., 2013), and part-of-speech (POS) tagging (Ma and Hovy, 2016). Using word embeddings for initializing Deep Neural Networks has also been found to be quite useful (Collobert et al., 2011; Johnson et al., 2017; Strubell et al., 2018). Most popular methods for learning word embeddings are based on the distributional hypothesis, which utilizes the co-occurrence statistics ∗Contributed equally to the work. from sequential context of words for learning word representations (Mikolov et al., 2013a; Pennington et al., 2014). More recently, this approach has been extended to include syntactic contexts (Levy and Goldberg, 2014) derived from dependency parse of text. Higher order dependencies have also been exploited by Komninos and Manandhar (2016); Li et al. (2018). Syntax-based embeddings encode functional similarity (in-place substitutable words) rather than topical similarity (topically related words) which provides an advantage on specific tasks like question classification (Komninos and Manandhar, 2016). However, current approaches incorporate syntactic context by concatenating words with their dependency relations. For instance, in Figure 1 scientists_subj, water_obj, and mars_nmod needs to be included as a part of vocabulary for utilizing the dependency context of discover. This severely expands the vocabulary, thus limiting the scalability of models on large corpora. For instance, in Levy and Goldberg (2014) and Komninos and Manandhar (2016), the context vocabulary explodes to around 1.3 million for learning embeddings of 220k words. 
Incorporating relevant signals from semantic knowledge sources such as WordNet (Miller, 1995), FrameNet (Baker et al., 1998), and Paraphrase Database (PPDB) (Pavlick et al., 2015) has been shown to improve the quality of word embeddings. Recent works utilize these by incorporating them in a neural language modeling objective function (Yu and Dredze, 2014; Alsuhaibani et al., 2018), or as a post-processing step (Faruqui et al., 2014; Mrkši´c et al., 2016). Although existing approaches improve the quality of word embeddings, they require explicit modification for handling different types of semantic information. Recently proposed Graph Convolutional Networks (GCN) (Defferrard et al., 2016; Kipf and 3309 Scientists discover water on Context Embedding GCN Embedding SynGCN (Sentence-level) Mars obj water apple chair Output Layer WTarget Sentence nmod subj case Figure 1: Overview of SynGCN: SynGCN employs Graph Convolution Network for utilizing dependency context for learning word embeddings. For each word in vocabulary, the model learns its representation by aiming to predict each word based on its dependency context encoded using GCNs. Please refer Section 5 for more details. Welling, 2016) have been found to be useful for encoding structural information in graphs. Even though GCNs have been successfully employed for several NLP tasks such as machine translation (Bastings et al., 2017), semantic role labeling (Marcheggiani and Titov, 2017), document dating (Vashishth et al., 2018a) and text classification (Yao et al., 2018), they have so far not been used for learning word embeddings, especially leveraging cues such as syntactic and semantic information. GCNs provide flexibility to represent diverse syntactic and semantic relationships between words all within one framework, without requiring relation-specific special handling as in previous methods. Recognizing these benefits, we make the following contributions in this paper. 1. We propose SynGCN, a Graph Convolution based method for learning word embeddings. Unlike previous methods, SynGCN utilizes syntactic context for learning word representations without increasing vocabulary size. 2. We also present SemGCN, a framework for incorporating diverse semantic knowledge (e.g., synonymy, antonymy, hyponymy, etc.) in learned word embeddings, without requiring relation-specific special handling as in previous methods. 3. Through experiments on multiple intrinsic and extrinsic tasks, we demonstrate that our proposed methods obtain substantial improvement over state-of-the-art approaches, and also yield an advantage when used in conjunction with methods such as ELMo (Peters et al., 2018). The source code of both the methods has been made available at http://github.com/ malllabiisc/WordGCN. 2 Related Work Word Embeddings: Recently, there has been much interest in learning meaningful word representations such as neural language modeling (Bengio et al., 2003) based continuous-bag-of-words (CBOW) and skip-gram (SG) models (Mikolov et al., 2013a). This is further extended by Pennington et al. (2014) which learns embeddings by factorizing word co-occurrence matrix to leverage global statistical information. Other formulations for learning word embeddings include multitask learning (Collobert et al., 2011) and ranking frameworks (Ji et al., 2015). Syntax-based Embeddings: Dependency parse context based word embeddings is first introduced by Levy and Goldberg (2014). 
They allow encoding syntactic relationships between words and show improvements on tasks where functional similarity is more relevant than topical similarity. The inclusion of syntactic context is further enhanced through second-order (Komninos and Manandhar, 2016) and multi-order (Li et al., 2018) dependencies. However, in all these existing approaches, the word vocabulary is severely expanded for incorporating syntactic relationships.
Incorporating Semantic Knowledge Sources: Semantic relationships such as synonymy, antonymy, hypernymy, etc. from several semantic sources have been utilized for improving the quality of word representations. Existing methods either exploit them jointly (Xu et al., 2014; Kiela et al., 2015; Alsuhaibani et al., 2018) or as a post-processing step (Faruqui et al., 2014; Mrkšić et al., 2016). SemGCN falls under the latter category and is more effective at incorporating semantic constraints (Sections 9.2 and 9.3).
Graph Convolutional Networks: In this paper, we use the first-order formulation of GCNs via a layer-wise propagation rule as proposed by Kipf and Welling (2016). Recently, some variants of GCNs have also been proposed (Yadav et al., 2019; Vashishth et al., 2019). A detailed description of GCNs and their applications can be found in Bronstein et al. (2017). In NLP, GCNs have been utilized for semantic role labeling (Marcheggiani and Titov, 2017), machine translation (Bastings et al., 2017), and relation extraction (Vashishth et al., 2018b). Recently, Yao et al. (2018) used GCNs for text classification by jointly embedding words and documents. However, their learned embeddings are task-specific, whereas in our work we aim to learn task-agnostic word representations.
3 Background: Graph Convolutional Networks
In this section, we provide a brief overview of Graph Convolutional Networks (GCNs) (Defferrard et al., 2016; Kipf and Welling, 2016) and their extension to directed labeled graphs.
3.1 GCN on Directed Labeled Graphs
Let $\mathcal{G} = (\mathcal{V}, \mathcal{E}, \mathcal{X})$ be a directed graph where $\mathcal{V}$ is the set of nodes ($|\mathcal{V}| = n$), $\mathcal{E}$ is the edge set, and $\mathcal{X} \in \mathbb{R}^{n \times d}$ denotes the $d$-dimensional input node features. An edge from node $u$ to $v$ with label $l_{uv}$ is denoted by $(u, v, l_{uv})$. As the information need not always propagate only along the direction of the edge, following Marcheggiani and Titov (2017), we include inverse edges $(v, u, l_{uv}^{-1})$ in $\mathcal{E}$. The embedding $h_v^{k+1} \in \mathbb{R}^d$ of a node $v$ after $k$ GCN layers is given as follows:
$$h_v^{k+1} = f\Big( \sum_{u \in \mathcal{N}^{+}(v)} \big( W_{l_{uv}}^{k} h_u^{k} + b_{l_{uv}}^{k} \big) \Big)$$
Here, $W_{l_{uv}}^{k} \in \mathbb{R}^{d \times d}$ and $b_{l_{uv}}^{k} \in \mathbb{R}^{d}$ are label-specific model parameters, $\mathcal{N}^{+}(v) = \mathcal{N}(v) \cup \{v\}$ is the set of immediate neighbors of $v$ (including $v$ itself), and $h_u^{k} \in \mathbb{R}^{d}$ is the hidden representation of node $u$ after $k-1$ layers.
Edge Label Gating Mechanism: In real-world graphs, some of the edges might be erroneous or irrelevant for the downstream task. This is predominant in automatically constructed graphs such as the dependency parse of text. To address this issue, we employ edge-wise gating (Marcheggiani and Titov, 2017) in GCNs. For each node $v$, we calculate a relevance score $g_{l_{uv}}^{k} \in \mathbb{R}$ for all the edges in which $v$ participates. The score is computed independently for each layer as shown below:
$$g_{l_{uv}}^{k} = \sigma\big( \hat{W}_{l_{uv}}^{k} h_u^{k} + \hat{b}_{l_{uv}}^{k} \big)$$
Here, $\hat{W}_{l_{uv}}^{k} \in \mathbb{R}^{1 \times d}$ and $\hat{b}_{l_{uv}}^{k} \in \mathbb{R}$ are trainable parameters and $\sigma(\cdot)$ is the sigmoid function. The updated GCN propagation rule for the $k$th layer can be written as shown below.
$$h_v^{k+1} = f\Big( \sum_{u \in \mathcal{N}^{+}(v)} g_{l_{uv}}^{k} \times \big( W_{l_{uv}}^{k} h_u^{k} + b_{l_{uv}}^{k} \big) \Big) \qquad (1)$$
4 Methods Overview
The task of learning word representations in an unsupervised setting can be formulated as follows: given a text corpus, the aim is to learn a $d$-dimensional embedding for each word in the vocabulary. Most of the distributional hypothesis based approaches (Mikolov et al., 2013b; Pennington et al., 2014) only utilize sequential context for each word in the corpus. However, this becomes suboptimal when the relevant context words lie beyond the window size. For instance, in Figure 1, a relevant context word discover for Mars is missed if the chosen window size is less than 3. On the contrary, a large window size might allow irrelevant words to influence word embeddings negatively. Using dependency-based context helps to alleviate this problem. However, all existing syntactic context based methods (Levy and Goldberg, 2014; Komninos and Manandhar, 2016; Li et al., 2018) severely expand vocabulary size (as discussed in Section 1), which limits their scalability to a large corpus. To eliminate this drawback, we propose SynGCN, which employs Graph Convolution Networks (Defferrard et al., 2016; Kipf and Welling, 2016) to better encode syntactic information in embeddings. We prefer GCNs over other graph encoding architectures such as Tree LSTM (Tai et al., 2015) as GCNs do not restrict graphs to be trees and have been found to be more effective at capturing global information (Zhang et al., 2018). Moreover, they give a substantial speedup as they do not involve recursive operations which are difficult to parallelize. The overall architecture is shown in Figure 1; for more details, refer to Section 5.
Enriching word embeddings with semantic knowledge helps to improve their quality for several NLP tasks (Faruqui et al., 2014; Mrkšić et al., 2016). Existing approaches are either incapable of utilizing these diverse relations or need to be explicitly modeled for exploiting them. In this paper, we propose SemGCN, which automatically learns to utilize multiple semantic constraints by modeling them as different edge types. SemGCN can be used as a post-processing method similar to Faruqui et al. (2014); Mrkšić et al. (2016). We describe it in more detail in Section 6.
5 SynGCN
In this section, we provide a detailed description of our proposed method, SynGCN. Following Mikolov et al. (2013b); Levy and Goldberg (2014); Komninos and Manandhar (2016), we separately define target and context embeddings for each word in the vocabulary as parameters in the model. For a given sentence $s = (w_1, w_2, \ldots, w_n)$, we first extract its dependency parse graph $\mathcal{G}_s = (\mathcal{V}_s, \mathcal{E}_s)$ using the Stanford CoreNLP parser (Manning et al., 2014). Here, $\mathcal{V}_s = \{w_1, w_2, \ldots, w_n\}$ and $\mathcal{E}_s$ denotes the labeled directed dependency edges of the form $(w_i, w_j, l_{ij})$, where $l_{ij}$ is the dependency relation of $w_i$ to $w_j$.
Similar to the continuous-bag-of-words (CBOW) model of Mikolov et al. (2013b), which defines the context of a word $w_i$ as $\mathcal{C}_{w_i} = \{w_{i+j} : -c \le j \le c,\, j \ne 0\}$ for a window of size $c$, we define the context as its neighbors in $\mathcal{G}_s$, i.e., $\mathcal{C}_{w_i} = \mathcal{N}(w_i)$. Now, unlike CBOW, which takes the sum of the context embeddings of words in $\mathcal{C}_{w_i}$ to predict $w_i$, we apply a directed Graph Convolution Network (as defined in Section 3) on $\mathcal{G}_s$ with the context embeddings of words in $s$ as input features. Thus, for each word $w_i$ in $s$, we obtain a representation $h_i^{k+1}$ after $k$ layers of GCN using Equation 1, which we reproduce below for ease of readability (with one exception as described below).
$$h_i^{k+1} = f\Big( \sum_{j \in \mathcal{N}(i)} g_{l_{ij}}^{k} \times \big( W_{l_{ij}}^{k} h_j^{k} + b_{l_{ij}}^{k} \big) \Big)$$
Please note that, unlike in Equation 1, we use $\mathcal{N}(i)$ instead of $\mathcal{N}^{+}(i)$ in SynGCN, i.e., we do not include self-loops in $\mathcal{G}_s$. This helps to avoid overfitting to the initial embeddings, which is undesirable in the case of SynGCN as it uses random initialization. We note that a similar strategy was followed by Mikolov et al. (2013b). Furthermore, to handle erroneous edges in the automatically constructed dependency parse graph, we perform edge-wise gating (Section 3.1) to give importance to relevant edges and suppress the noisy ones. The embeddings obtained are then used to calculate the loss as described in Section 7.
Contrary to the standard word-embedding approaches (Mikolov et al., 2013b; Pennington et al., 2014), which rely on sequential context, SynGCN utilizes syntactic context to learn more meaningful word representations. We validate this in Section 9.1. Note that the word vocabulary remains unchanged during the entire learning process; this makes SynGCN more scalable compared to existing approaches.
Theorem 1. SynGCN is a generalization of the continuous-bag-of-words (CBOW) model.
Proof. The reduction can be obtained as follows. For a given sentence $s$, take the neighborhood of each word $w_i$ in $\mathcal{G}_s$ as its sequential context, i.e., $\mathcal{N}(w_i) = \{w_{i+j} : -c \le j \le c,\, j \ne 0\}$ for all $w_i \in s$. Now, if the number of GCN layers is restricted to 1 and the activation function is taken as the identity ($f(x) = x$), then Equation 1 reduces to
$$h_i = \sum_{-c \le j \le c,\, j \ne 0} g_{l_{ij}} \times \big( W_{l_{ij}} h_j + b_{l_{ij}} \big).$$
Finally, $W_{l_{ij}}$ and $b_{l_{ij}}$ can be fixed to an identity matrix ($I$) and a zero vector ($0$), respectively, and the edge-wise gating ($g_{l_{ij}}$) can be set to 1. This gives
$$h_i = \sum_{-c \le j \le c,\, j \ne 0} (I \cdot h_j + 0) = \sum_{-c \le j \le c,\, j \ne 0} h_j,$$
which is the hidden layer equation of the CBOW model.
6 SemGCN
In this section, we propose another Graph Convolution based framework, SemGCN, for incorporating semantic knowledge in pre-trained word embeddings. Most of the existing approaches, like Faruqui et al. (2014); Mrkšić et al. (2016), are restricted to handling symmetric relations like synonymy and antonymy. On the other hand, although the recently proposed method of Alsuhaibani et al. (2018) is capable of handling asymmetric information, it still requires a manually defined relation strength function, which can be labor-intensive and suboptimal.
SemGCN is capable of incorporating both symmetric as well as asymmetric information jointly. Unlike SynGCN, SemGCN operates on a corpus-level directed labeled graph with words as nodes and edges representing semantic relationships among them from different sources. For instance, in Figure 2, semantic relations such as hyponymy, hypernymy and synonymy are represented together in a single graph. Symmetric information is handled by including a directed edge in both directions. Given the corpus-level graph $\mathcal{G}$, the training procedure is similar to that of SynGCN, i.e., predict the word $w$ based on its neighbors in $\mathcal{G}$. Inspired by Faruqui et al. (2014), we preserve the semantics encoded in pre-trained embeddings by initializing both target and context embeddings with the given word representations and keeping the target embeddings fixed during training. SemGCN uses Equation 1 to update node embeddings. Please note that in this case $\mathcal{N}^{+}(v)$ is used as the neighborhood definition to preserve the initial learned representation of the words.
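To make the gated propagation rule concrete, the following is a minimal NumPy sketch of one label-specific GCN layer with edge-wise gating, in the spirit of Equation 1. It is not the released WordGCN implementation; the function and variable names (gated_gcn_layer, W_hat, the "self" label used for self-loops) are hypothetical, and the parameter dictionaries are assumed to contain an entry for every edge label, including the inverse labels and "self".

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gated_gcn_layer(h, edges, W, b, W_hat, b_hat, activation=np.tanh,
                    include_self_loops=True):
    """One gated GCN propagation step (cf. Equation 1).

    h      : (n, d) array of node representations h^k
    edges  : list of (u, v, label) directed edges (inverse edges already added)
    W, b   : dicts mapping label -> (d, d) matrix and (d,) bias
    W_hat, b_hat : dicts mapping label -> (d,) gate weights and scalar gate bias
    """
    n, d = h.shape
    out = np.zeros((n, d))
    if include_self_loops:  # N+(v) = N(v) U {v}; SynGCN uses N(v) and would skip this
        edges = edges + [(v, v, "self") for v in range(n)]
    for (u, v, lab) in edges:
        msg = W[lab] @ h[u] + b[lab]                           # label-specific message
        gate = sigmoid(float(W_hat[lab] @ h[u]) + b_hat[lab])  # edge relevance score
        out[v] += gate * msg                                   # gated aggregation at node v
    return activation(out)
```

A SynGCN-style layer would call this with include_self_loops=False on the dependency graph of each sentence, using the context embeddings of its words as h; a SemGCN-style layer would keep the self-loops and operate on the corpus-level semantic graph.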
7 Training Details
Given the GCN representation ($h_t$) of a word ($w_t$), the training objective of SynGCN and SemGCN is to predict the target word given its neighbors in the graph. Formally, for each method we maximize the following objective:
$$E = \sum_{t=1}^{|V|} \log P(w_t \mid w_1^t, w_2^t, \ldots, w_{N_t}^t)$$
where $w_t$ is the target word and $w_1^t, w_2^t, \ldots, w_{N_t}^t$ are its neighbors in the graph. (We also experimented with a joint SynGCN and SemGCN model, but our preliminary experiments gave suboptimal performance compared to the sequential model. This can be attributed to the fact that the syntactic information is orders of magnitude greater than the available semantic information; hence, the semantic constraints are not effectively utilized. We leave the analysis of the joint model as future work.)
Figure 2: Overview of SemGCN, our proposed Graph Convolution based framework for incorporating diverse semantic information in learned embeddings. Double-headed edges denote two edges in both directions. Please refer to Section 6 for more details.
The probability $P(w_t \mid w_1^t, w_2^t, \ldots, w_{N_t}^t)$ is calculated using the softmax function, defined as
$$P(w_t \mid w_1^t, w_2^t, \ldots, w_{N_t}^t) = \frac{\exp(v_{w_t}^{T} h_t)}{\sum_{i=1}^{|V|} \exp(v_{w_i}^{T} h_t)}.$$
Hence, $E$ reduces to
$$E = \sum_{t=1}^{|V|} \Big( v_{w_t}^{T} h_t - \log \sum_{i=1}^{|V|} \exp(v_{w_i}^{T} h_t) \Big), \qquad (2)$$
where $h_t$ is the GCN representation of the target word $w_t$ and $v_{w_t}$ is its target embedding. The second term in Equation 2 is computationally expensive as the summation needs to be taken over the entire vocabulary. This can be overcome using several approximations like noise-contrastive estimation (Gutmann and Hyvärinen, 2010) and hierarchical softmax (Morin and Bengio, 2005). In our methods, we use negative sampling as used by Mikolov et al. (2013b).
8 Experimental Setup
8.1 Dataset and Training
In our experiments, we use the Wikipedia corpus (https://dumps.wikimedia.org/enwiki/20180301/) for training the models. After discarding too long and too short sentences, we get an average sentence length of nearly 20 words. The corpus consists of 57 million sentences with 1.1 billion tokens and 1 billion syntactic dependencies.
8.2 Baselines
For evaluating SynGCN (Section 5), we compare against the following baselines:
• Word2vec is the continuous-bag-of-words model originally proposed by Mikolov et al. (2013b).
• GloVe (Pennington et al., 2014), a log-bilinear regression model which leverages the global co-occurrence statistics of the corpus.
• Deps (Levy and Goldberg, 2014) is a modification of the skip-gram model which uses dependency context in place of sequential context.
• EXT (Komninos and Manandhar, 2016) is an extension of Deps which utilizes second-order dependency context features.
The SemGCN (Section 6) model is evaluated against the following methods:
• Retro-fit (Faruqui et al., 2014) is a post-processing procedure which uses similarity constraints from semantic knowledge sources.
• Counter-fit (Mrkšić et al., 2016), a method for injecting both antonym and synonym constraints into word embeddings.
• JointReps (Alsuhaibani et al., 2018), a joint word representation learning method which simultaneously utilizes the corpus and KB.
8.3 Evaluation method
To evaluate the effectiveness of our proposed methods, we compare them against the baselines on the following intrinsic and extrinsic tasks:
• Intrinsic Tasks: Word Similarity is the task of evaluating closeness between semantically similar words.
Following Komninos and Manandhar (2016); Pennington et al. (2014), we evaluate on Simlex999 (Hill et al., 2015), WS353 (Finkelstein et al., 2001), and RW (Luong et al., 2013) datasets. Concept Categorization involves grouping nominal concepts into natural categories. For instance, tiger and elephant should belong to mammal class. In our experiments, we evalute on AP (Almuhareb, 2006), Battig (Baroni and Lenci, 2010), BLESS (Baroni and Lenci, 2011), ESSLI (Baroni et al., 2008) datasets. Word Analogy task is to predict word b2, given three words a1, a2, and b1, such that the relation 3Details of hyperparameters are in supplementary. b1 : b2 is same as the relation a1 : a2. We compare methods on MSR (Mikolov et al., 2013c) and SemEval-2012 (Jurgens et al., 2012). • Extrinsic Tasks: Named Entity Recognition (NER) is the task of locating and classifying entity mentions into categories like person, organization etc. We use Lee et al. (2018)’s model on CoNLL-2003 dataset (Tjong Kim Sang and De Meulder, 2003) for evaluation. Question Answering in Stanford Question Answering Dataset (SQuAD) (Rajpurkar et al., 2016) involves identifying answer to a question as a segment of text from a given passage. Following Peters et al. (2018), we evaluate using Clark and Gardner (2018)’s model. Part-of-speech (POS) tagging aims at associating with each word, a unique tag describing its syntactic role. For evaluating word embeddings, we use Lee et al. (2018)’s model on Penn Treebank POS dataset (Marcus et al., 1994). Co-reference Resolution (Coref) involves identifying all expressions that refer to the same entity in the text. To inspect the effect of embeddings, we use Lee et al. (2018)’s model on CoNLL2012 shared task dataset (Pradhan et al., 2012). 9 Results In this section, we attempt to answer the following questions. Q1. Does SynGCN learn better word embeddings than existing approaches? (Section 9.1) Q2. Does SemGCN effectively handle diverse semantic information as compared to other methods? (Section 9.2) Q3. How does SemGCN perform compared to other methods when provided with the same semantic constraints? (Section 9.3) Q4. Does dependency context based embedding encode complementary information compared to ELMo? (Section 9.4) 9.1 SynGCN Evaluation The evaluation results on intrinsic tasks – word similarity, concept categorization, and analogy – are summarized in Table 1. We report Spearman correlation for word similarity and analogy tasks and cluster purity for concept categorization task. Overall, we find that SynGCN, our proposed method, outperforms all the existing word embed3314 Word Similarity Concept Categorization Word Analogy Method WS353S WS353R SimLex999 RW AP Battig BLESS ESSLI SemEval2012 MSR Word2vec 71.4 52.6 38.0 30.0 63.2 43.3 77.8 63.0 18.9 44.0 GloVe 69.2 53.4 36.7 29.6 58.0 41.3 80.0 59.3 18.7 45.8 Deps 65.7 36.2 39.6 33.0 61.8 41.7 65.9 55.6 22.9 40.3 EXT 69.6 44.9 43.2 18.6 52.6 35.0 65.2 66.7 21.8 18.8 SynGCN 73.2 45.7 45.5 33.7 69.3 45.2 85.2 70.4 23.4 52.8 Table 1: SynGCN Intrinsic Evaluation: Performance on word similarity (Spearman correlation), concept categorization (cluster purity), and word analogy (Spearman correlation). Overall, SynGCN outperforms other existing approaches in 9 out of 10 settings. Please refer to Section 9.1 for more details. 
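The analogy accuracies reported above are obtained with a vector-offset style evaluation; since the paper does not reproduce its scoring script, the following is only an illustrative sketch of the standard 3CosAdd procedure (Mikolov et al., 2013a), assuming a row-normalized embedding matrix E and a matching word list vocab (both hypothetical names).

```python
import numpy as np

def solve_analogy(a1, a2, b1, E, vocab):
    """Predict b2 such that a1 : a2 :: b1 : b2, via 3CosAdd."""
    idx = {w: i for i, w in enumerate(vocab)}
    query = E[idx[a2]] - E[idx[a1]] + E[idx[b1]]
    query = query / np.linalg.norm(query)
    scores = E @ query              # cosine similarity, since rows of E are unit norm
    for w in (a1, a2, b1):          # usual convention: exclude the three query words
        scores[idx[w]] = -np.inf
    return vocab[int(np.argmax(scores))]

# Accuracy is then the fraction of test quadruples for which the prediction equals b2.
```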
Method POS SQuAD NER Coref Word2vec 95.0±0.1 78.5±0.3 89.0±0.2 65.1±0.3 GloVe 94.6±0.1 78.2±0.2 89.1±0.1 64.9±0.2 Deps 95.0±0.1 77.8±0.3 88.6±0.3 64.8±0.1 EXT 94.9±0.2 79.6±0.1 88.0±0.1 64.8±0.1 SynGCN 95.4±0.1 79.6±0.2 89.5±0.1 65.8±0.1 Table 2: SynGCN Extrinsic Evaluation: Comparison on parts-of-speech tagging (POS), question answering (SQuAD), named entity recognition (NER), and coreference resolution (Coref). SynGCN performs comparable or outperforms all existing approaches on all tasks. Refer Section 9.1 for details. ding approaches in 9 out of 10 settings. The inferior performance of SynGCN and other dependency context based methods on WS353R dataset compared to sequential context based methods is consistent with the observation reported in Levy and Goldberg (2014); Komninos and Manandhar (2016). This is because the syntactic context based embeddings capture functional similarity rather than topical similarity (as discussed in Section 1). On average, we obtain around 1.5%, 5.7% and 7.5% absolute increase in performance on word similarity, concept categorization and analogy tasks compared to the best performing baseline. The results demonstrate that the learned embeddings from SynGCN more effectively capture semantic and syntactic properties of words. We also evaluate the performance of different word embedding approaches on the downstream tasks as defined in Section 8.3. The experimental results are summarized in Table 2. Overall, we find that SynGCN either outperforms or performs comparably to other methods on all four tasks. Compared to the sequential context based methods, dependency based methods perform superior at question answering task as they effectively encode syntactic information. This is consistent with the observation of Peters et al. (2018). Method POS SQuAD NER Coref X = SynGCN 95.4±0.1 79.6±0.2 89.5±0.1 65.8±0.1 Retro-fit (X,1) 94.8±0.1 79.6±0.1 88.8±0.1 66.0±0.2 Counter-fit (X,2) 94.7±0.1 79.8±0.1 88.3±0.3 65.7±0.3 JointReps (X,4) 95.4±0.1 79.4±0.3 89.1±0.3 65.6±0.1 SemGCN (X,4) 95.5±0.1 80.4±0.1 89.5±0.1 66.1±0.1 Table 3: SemGCN Extrinsic Evaluation: Comparison of different methods for incorporating diverse semantic constraints in SynGCN embeddings on all extrinsic tasks. Refer Section 9.3 of paper for details. 9.2 Evaluation with Diverse Semantic Information In this section, we compare SemGCN against the methods listed in Section 8.2 for incorporating diverse semantic information in pre-trained embeddings. We use hypernym, hyponym, and antonym relations from WordNet, and synonym relations from PPDB as semantic information. For each method, we provide the semantic information that it can utilize, e.g., Retro-fit can only make use of synonym relation4. In our results, M(X, R) denotes the fine-tuned embeddings obtained using method M while taking X as initialization embeddings. R denotes the types of semantic information used as defined below. • R=1: Only synonym information. • R=2: Synonym and antonym information. • R=4: Synonym, antonym, hypernym and hyponym information. For instance, Counter-fit (GloVe, 2) represents GloVe embeddings fine-tuned by Counter-fit using synonym and antonym information. Similar to Section 9.1, the evaluation is performed on the three intrinsic tasks. Due to space limitations, we report results on one representative dataset per task. The results are summarized 4Experimental results controlling for semantic information are provided in Section 9.3. 
3315 Init Embeddings (=X) Word2vec GloVe Deps EXT SynGCN Datasets WS353 AP MSR WS353 AP MSR WS353 AP MSR WS353 AP MSR WS353 AP MSR Performance of X 63.0 63.2 44.0 58.0 60.4 45.8 55.6 64.2 40.3 59.3 53.5 18.8 61.7 69.3 52.8 Retro-fit (X,1) 63.4 67.8 46.7 58.5 61.1 47.2 54.8 64.7 41.0 61.6 55.1 40.5 61.2 67.1 51.4 Counter-fit (X,2) 60.3 62.9 31.4 53.7 62.5 29.6 46.9 60.4 33.4 52.0 54.4 35.8 55.2 66.4 31.7 JointReps (X,4) 60.9 61.1 28.5 59.2 55.5 37.6 54.8 58.7 38.0 58.8 54.8 20.6 60.9 68.2 24.9 SemGCN (X,4) 64.8 67.8 36.8 63.3 63.2 44.1 62.3 69.3 41.1 62.9 67.1 52.1 65.3 69.3 54.4 Table 4: SemGCN Intrinsic Evaluation: Evaluation of different methods for incorporating diverse semantic constraints initialized using various pre-trained embeddings (X). M(X, R) denotes the fine-tuned embeddings using method M taking X as initialization embeddings. R denotes the type of semantic relations used as defined in Section 9.2. SemGCN outperforms other methods in 13 our of 15 settings. SemGCN with SynGCN gives the best performance across all tasks (highlighted using · ). Please refer Section 9.2 for details. F1 score X = SynGCN Retro-fit (X,1) Counter-fit (X,1) JointReps (X,1) SemGCN (X,1) 79.0 79.6 80.2 80.8 Figure 3: Comparison of different methods when provided with the same semantic information (synonym) for fine tuning SynGCN embeddings. Results denote the F1-score on SQuAD dataset. SemGCN gives considerable improvement in performance. Please refer Section 9.3 for details. in Table 4. We find that in 13 out of 15 settings, SemGCN outperforms other methods. Overall, we observe that SemGCN, when initialized with SynGCN, gives the best performance on all the tasks (highlighted by · in Table 4). For comparing performance on the extrinsic tasks, we first fine-tune SynGCN embeddings using different methods for incorporating semantic information. The embeddings obtained by this process are then evaluated on extrinsic tasks, as in Section 9.1. The results are shown in Table 3. We use the same M(X, R) notation to represent methods as in Section 9.2. We observe that while the other methods do not always consistently give improvement over the baseline SynGCN, SemGCN is able to improve upon SynGCN in all settings (better or comparable). Overall, we observe that SynGCN along with SemGCN is the most suitable method for incorporating both syntactic and semantic information. Method POS SQuAD NER Coref ELMo (E) 96.1±0.1 81.8±0.2 90.3±0.3 67.8±0.1 E+SemGCN(SynGCN, 4) 96.2±0.1 82.4±0.1 90.9±0.1 68.3±0.1 Table 5: Comparison of ELMo with SynGCN and SemGCN embeddings on multiple extrinsic tasks. For each task, models use a linear combination of the provided embeddings whose weights are learned. Results show that our proposed methods encode complementary information which is not captured by ELMo. 9.3 Evaluation with Same Semantic Information In this section, we compare SemGCN against other baselines when provided with the same semantic information: synonyms from PPDB. Similar to Section 9.2, we compare both on intrinsic and extrinsic tasks with different initializations. The evaluation results of fine-tuned SynGCN embeddings by different methods on SQuAD are shown in the Figure 3. The remaining results are included in the supplementary (Table S1 and S2). We observe that compared to other methods, SemGCN is most effective at incorporating semantic constraints across all the initializations and outperforms others at both intrinsic and extrinsic tasks. 
9.4 Comparison with ELMo Recently, ELMo (Peters et al., 2018) has been proposed which fine-tunes word embedding based on sentential context. In this section, we evaluate SynGCN and SemGCN when given along with pre-trained ELMo embeddings. The results are reported in Table 5. The results show that dependency context based embeddings encode complementary information which is not captured by ELMo as it only relies on sequential context. Hence, our proposed methods serves as an effective combination with ELMo. 3316 10 Conclusion In this paper, we have proposed SynGCN, a graph convolution based approach which utilizes syntactic context for learning word representations. SynGCN overcomes the problem of vocabulary explosion and outperforms state-of-the-art word embedding approaches on several intrinsic and extrinsic tasks. We also propose SemGCN, a framework for jointly incorporating diverse semantic information in pre-trained word embeddings. The combination of SynGCN and SemGCN gives the best overall performance. We make the source code of both models available to encourage reproducible research. Acknowledgements We thank the anonymous reviewers for their constructive comments. This work is supported in part by the Ministry of Human Resource Development (Government of India) and Google PhD Fellowship. References Abdulrahman Almuhareb. 2006. Attributes in lexical acquisition. Mohammed Alsuhaibani, Danushka Bollegala, Takanori Maehara, and Ken-ichi Kawarabayashi. 2018. Jointly learning word embeddings using a corpus and a knowledge base. PLOS ONE, 13(3):1–26. Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The berkeley framenet project. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics, ACL ’98, pages 86– 90. Marco Baroni, Stefan Evert, and Alessandro Lenci. 2008. Esslli 2008 workshop on distributional lexical semantics. Association for Logic, Language and Information. Marco Baroni and Alessandro Lenci. 2010. Distributional memory: A general framework for corpusbased semantics. Comput. Linguist., 36(4):673–721. Marco Baroni and Alessandro Lenci. 2011. How we blessed distributional semantic evaluation. In Proceedings of the GEMS 2011 Workshop on GEometrical Models of Natural Language Semantics, GEMS ’11, pages 1–10, Stroudsburg, PA, USA. Association for Computational Linguistics. Joost Bastings, Ivan Titov, Wilker Aziz, Diego Marcheggiani, and Khalil Simaan. 2017. Graph convolutional encoders for syntax-aware neural machine translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1957–1967. Association for Computational Linguistics. Y. Bengio, A. Courville, and P. Vincent. 2013. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1798–1828. Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A neural probabilistic language model. J. Mach. Learn. Res., 3:1137–1155. M. M. Bronstein, J. Bruna, Y. LeCun, A. Szlam, and P. Vandergheynst. 2017. Geometric deep learning: Going beyond euclidean data. IEEE Signal Processing Magazine, 34(4):18–42. Christopher Clark and Matt Gardner. 2018. Simple and effective multi-paragraph reading comprehension. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 845–855. Association for Computational Linguistics. 
Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. J. Mach. Learn. Res., 12:2493–2537. Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. 2016. Convolutional neural networks on graphs with fast localized spectral filtering. CoRR, abs/1606.09375. Manaal Faruqui, Jesse Dodge, Sujay Kumar Jauhar, Chris Dyer, Eduard H. Hovy, and Noah A. Smith. 2014. Retrofitting word vectors to semantic lexicons. CoRR, abs/1411.4166. Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. 2001. Placing search in context: The concept revisited. In Proceedings of the 10th International Conference on World Wide Web, WWW ’01, pages 406–414, New York, NY, USA. ACM. Michael Gutmann and Aapo Hyvärinen. 2010. Noisecontrastive estimation: A new estimation principle for unnormalized statistical models. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, volume 9 of Proceedings of Machine Learning Research, pages 297– 304, Chia Laguna Resort, Sardinia, Italy. PMLR. Felix Hill, Roi Reichart, and Anna Korhonen. 2015. Simlex-999: Evaluating semantic models with genuine similarity estimation. Comput. Linguist., 41(4):665–695. Shihao Ji, Hyokun Yun, Pinar Yanardag, Shin Matsushima, and S. V. N. Vishwanathan. 2015. Wordrank: Learning word embeddings via robust ranking. CoRR, abs/1506.02761. 3317 Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Viégas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2017. Google’s multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics, 5:339–351. David A. Jurgens, Peter D. Turney, Saif M. Mohammad, and Keith J. Holyoak. 2012. Semeval-2012 task 2: Measuring degrees of relational similarity. In Proceedings of the First Joint Conference on Lexical and Computational Semantics - Volume 1: Proceedings of the Main Conference and the Shared Task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation, SemEval ’12, pages 356–364, Stroudsburg, PA, USA. Association for Computational Linguistics. Douwe Kiela, Felix Hill, and Stephen Clark. 2015. Specializing word embeddings for similarity or relatedness. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2044–2048. Association for Computational Linguistics. Thomas N. Kipf and Max Welling. 2016. Semisupervised classification with graph convolutional networks. CoRR, abs/1609.02907. Alexandros Komninos and Suresh Manandhar. 2016. Dependency based embeddings for sentence classification tasks. Kenton Lee, Luheng He, and Luke Zettlemoyer. 2018. Higher-order coreference resolution with coarse-tofine inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 687–692. Association for Computational Linguistics. Omer Levy and Yoav Goldberg. 2014. Dependencybased word embeddings. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 302–308. Association for Computational Linguistics. Chen Li, Jianxin Li, Yangqiu Song, and Ziwei Lin. 2018. Training and evaluating improved dependency-based word embeddings. In AAAI. 
Minh-Thang Luong, Richard Socher, and Christopher D. Manning. 2013. Better word representations with recursive neural networks for morphology. In CoNLL, Sofia, Bulgaria. Xuezhe Ma and Eduard H. Hovy. 2016. End-to-end sequence labeling via bi-directional lstm-cnns-crf. CoRR, abs/1603.01354. Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Association for Computational Linguistics (ACL) System Demonstrations, pages 55–60. Diego Marcheggiani and Ivan Titov. 2017. Encoding sentences with graph convolutional networks for semantic role labeling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1506–1515. Association for Computational Linguistics. Mitchell Marcus, Grace Kim, Mary Ann Marcinkiewicz, Robert MacIntyre, Ann Bies, Mark Ferguson, Karen Katz, and Britta Schasberger. 1994. The penn treebank: Annotating predicate argument structure. In Proceedings of the Workshop on Human Language Technology, HLT ’94, pages 114–119, Stroudsburg, PA, USA. Association for Computational Linguistics. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Distributed representations of words and phrases and their compositionality. In Proceedings of the 26th International Conference on Neural Information Processing Systems - Volume 2, NIPS’13, pages 3111–3119, USA. Curran Associates Inc. Tomas Mikolov, Scott Wen-tau Yih, and Geoffrey Zweig. 2013b. Linguistic regularities in continuous space word representations. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT2013). Association for Computational Linguistics. Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013c. Linguistic regularities in continuous space word representations. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 746–751. Association for Computational Linguistics. George A. Miller. 1995. Wordnet: A lexical database for english. Commun. ACM, 38(11):39–41. Frederic Morin and Yoshua Bengio. 2005. Hierarchical probabilistic neural network language model. In Proceedings of the Tenth International Workshop on Artificial Intelligence and Statistics, AISTATS 2005, Bridgetown, Barbados, January 6-8, 2005. Nikola Mrkši´c, Diarmuid Ó Séaghdha, Blaise Thomson, Milica Gaši´c, Lina M. Rojas-Barahona, PeiHao Su, David Vandyke, Tsung-Hsien Wen, and Steve Young. 2016. Counter-fitting word vectors to linguistic constraints. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 142–148. Association for Computational Linguistics. 3318 Ellie Pavlick, Pushpendre Rastogi, Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2015. Ppdb 2.0: Better paraphrase ranking, finegrained entailment relations, word embeddings, and style classification. In In Proceedings of the Association for Computational Linguistics (ACL-2015), pages 425–430. Association for Computational Linguistics. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532– 1543. Matthew E. 
Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proc. of NAACL. Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. Conll2012 shared task: Modeling multilingual unrestricted coreference in ontonotes. In Joint Conference on EMNLP and CoNLL - Shared Task, CoNLL ’12, pages 1–40, Stroudsburg, PA, USA. Association for Computational Linguistics. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392. Association for Computational Linguistics. Richard Socher, John Bauer, Christopher D. Manning, and Andrew Y. Ng. 2013. Parsing With Compositional Vector Grammars. In ACL. Emma Strubell, Patrick Verga, Daniel Andor, David Weiss, and Andrew McCallum. 2018. Linguistically-informed self-attention for semantic role labeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5027–5038. Association for Computational Linguistics. Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1556–1566. Association for Computational Linguistics. Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003 - Volume 4, CONLL ’03, pages 142–147, Stroudsburg, PA, USA. Association for Computational Linguistics. Shikhar Vashishth, Shib Sankar Dasgupta, Swayambhu Nath Ray, and Partha Talukdar. 2018a. Dating documents using graph convolution networks. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1605– 1615. Association for Computational Linguistics. Shikhar Vashishth, Rishabh Joshi, Sai Suman Prayaga, Chiranjib Bhattacharyya, and Partha Talukdar. 2018b. Reside: Improving distantly-supervised neural relation extraction using side information. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1257–1266. Association for Computational Linguistics. Shikhar Vashishth, Prateek Yadav, Manik Bhandari, and Partha Talukdar. 2019. Confidence-based graph convolutional networks for semi-supervised learning. In Proceedings of Machine Learning Research, volume 89 of Proceedings of Machine Learning Research, pages 1792–1801. PMLR. Chang Xu, Yalong Bai, Jiang Bian, Bin Gao, Gang Wang, Xiaoguang Liu, and Tie-Yan Liu. 2014. Rcnet: A general framework for incorporating knowledge into word representations. Prateek Yadav, Madhav Nimishakavi, Naganand Yadati, Shikhar Vashishth, Arun Rajkumar, and Partha Talukdar. 2019. Lovasz convolutional networks. In Proceedings of Machine Learning Research, volume 89 of Proceedings of Machine Learning Research, pages 1978–1987. PMLR. Liang Yao, Chengsheng Mao, and Yuan Luo. 2018. Graph Convolutional Networks for Text Classification. ArXiv e-prints, page arXiv:1809.05679. Mo Yu and Mark Dredze. 2014. Improving lexical embeddings with semantic knowledge. 
In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 545–550. Association for Computational Linguistics. Yuhao Zhang, Peng Qi, and Christopher D. Manning. 2018. Graph convolution over pruned dependency trees improves relation extraction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2205–2215. Association for Computational Linguistics.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3319–3328 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3319 Word and Document Embedding with vMF-Mixture Priors on Context Word Vectors Shoaib Jameel Medway School of Computing University of Kent [email protected] Steven Schockaert School of Computer Science and Informatics Cardiff University [email protected] Abstract Word embedding models typically learn two types of vectors: target word vectors and context word vectors. These vectors are normally learned such that they are predictive of some word co-occurrence statistic, but they are otherwise unconstrained. However, the words from a given language can be organized in various natural groupings, such as syntactic word classes (e.g. nouns, adjectives, verbs) and semantic themes (e.g. sports, politics, sentiment). Our hypothesis in this paper is that embedding models can be improved by explicitly imposing a cluster structure on the set of context word vectors. To this end, our model relies on the assumption that context word vectors are drawn from a mixture of von MisesFisher (vMF) distributions, where the parameters of this mixture distribution are jointly optimized with the word vectors. We show that this results in word vectors which are qualitatively different from those obtained with existing word embedding models. We furthermore show that our embedding model can also be used to learn high-quality document representations. 1 Introduction Word embedding models are aimed at learning vector representations of word meaning (Mikolov et al., 2013b; Pennington et al., 2014; Bojanowski et al., 2017). These representations are primarily learned from co-occurrence statistics, where two words are represented by similar vectors if they tend to occur in similar linguistic contexts. Most models, such as Skip-gram (Mikolov et al., 2013b) and GloVe (Pennington et al., 2014) learn two different vector representations w and ˜w for each word w, which we will refer to as the target word vector and the context word vector respectively. Apart from the constraint that wi · ˜ wj should reflect how often words wi and wj co-occur, these vectors are typically unconstrained. As was shown in (Mu et al., 2018), after performing a particular linear transformation, the angular distribution of the word vectors that are obtained by standard models is essentially uniform. This isotropy property is convenient for studying word embeddings from a theoretical point of view (Arora et al., 2016), but it sits at odds with fact that words can be organised in various natural groupings. For instance, we might perhaps expect that words from the same part-of-speech class should be clustered together in the word embedding. Similarly, we might expect that organising word vectors in clusters that represent semantic themes would also be beneficial. In fact, a number of approaches have already been proposed that use external knowledge for imposing such a cluster structure, capturing the intuition that words which belong to the same category should be represented by similar vectors (Xu et al., 2014; Guo et al., 2015; Hu et al., 2015; Li et al., 2016c) or be located in a low-dimensional subspace (Jameel and Schockaert, 2016). 
Such models tend to outperform standard word embedding models, but it is unclear whether this is only because they can take advantage of external knowledge, or whether imposing a cluster structure on the word vectors is itself also inherently useful. In this paper, we propose a word embedding model which explicitly aims to learn context vectors that are organised in clusters. Note that unlike the aforementioned works, our method does not rely on any external knowledge. We simply impose the requirement that context word vectors should be clustered, without prescribing how these clusters should be defined. To this end, we extend the GloVe model by imposing a prior on the context word vectors. This prior takes the form of a mixture of von Mises-Fisher (vMF) distributions, which is a natural choice for modelling clusters in 3320 directional data (Banerjee et al., 2005). We show that this results in word vectors that are qualitatively different from those obtained using existing models, significantly outperforming them in syntax-oriented evaluations. Moreover, we show that the same model can be used for learning document embeddings, simply by viewing the words that appear in a given document as context words. We show that the vMF distributions in that case correspond to semantically coherent topics, and that the resulting document vectors outperform those obtained with existing topic modelling strategies. 2 Related Work A large number of works have proposed techniques for improving word embeddings based on external lexical knowledge. Many of these approaches are focused on external knowledge about word similarity (Yu and Dredze, 2014; Faruqui et al., 2015; Mrksic et al., 2016), although some approaches for incorporating categorical knowledge have been studied as well, as already mentioned in the introduction. What is different about our approach is that we do not rely on any external knowledge. We essentially impose the constraint that some category structure has to exist, without specifying what these categories look like. The view that the words which occur in a given document collection have a natural cluster structure is central to topic models such as Latent Dirichlet Allocation (LDA) (Blei et al., 2003) and its non-parametric counterpart called Hierarchical Dirichlet Processes (HDP) (Teh et al., 2005), which automatically discovers the number of latent topics based on the characteristics of the data. In recent years, several approaches that combine the intuitions underlying topic models with word embeddings have been proposed. For example, in (Das et al., 2015) it was proposed to replace the usual representation of topics as multinomial distributions over words by Gaussian distributions over a pre-trained word embedding, while (Batmanghelich et al., 2016) and (Li et al., 2016b) used von Mises-Fisher distributions for this purpose. Note that documents are still modelled as multinomial distributions of topics in these models. In (He et al., 2017) the opposite approach is taken: documents and topics are represented as vectors, with the aim of modelling topic correlations in an efficient way, while each topic is represented as a multinomial distribution over words. In this paper, we take a different approach for learning document vectors, by not considering any documentspecific topic distribution. This allows us to represent document vectors and (context) word vectors in the same space and, as we will see, leads to improved empirical results. 
Apart from using pre-trained word embeddings for improving topic representations, a number of approaches have also been proposed that use topic models for learning word vectors. For example, Liu et al. (2015b) first use the standard LDA model to learn a latent topic assignment for each word occurrence. These assignments are then used to learn vector representations of words and topics. Some extensions of this model have been proposed which jointly learn the topic-specific word vectors and the latent topic assignment (Li et al., 2016a; Shi et al., 2017). The main motivation for these works is to learn topic-specific word representations. They are thus similar in spirit to multi-prototype word embeddings, which aim to learn sense-specific word vectors (Neelakantan et al., 2014). Our method is clearly different from these works, as our focus is on learning standard word vectors (as well as document vectors).
Regarding word embeddings more generally, the attention has recently shifted towards contextualized word embeddings based on neural language models (Peters et al., 2018). Such contextualized word embeddings serve a broadly similar purpose to the aforementioned topic-specific word vectors, but with far better empirical performance. Despite their recent popularity, however, it is worth emphasizing that state-of-the-art methods such as ELMo (Peters et al., 2018) rely on a concatenation of the output vectors of a neural language model with standard word vectors. For this reason, among others, the problem of learning standard word vectors remains an important research topic.
3 Model Description
The GloVe model (Pennington et al., 2014) learns for each word $w$ a target word vector $\mathbf{w}$ and a context word vector $\tilde{\mathbf{w}}$ by minimizing the following objective:
$$\sum_{\substack{i,j \\ x_{ij} \neq 0}} f(x_{ij}) \big( \mathbf{w}_i \cdot \tilde{\mathbf{w}}_j + b_i + \tilde{b}_j - \log x_{ij} \big)^2$$
where $x_{ij}$ is the number of times $w_i$ and $w_j$ co-occur in the given corpus, $b_i$ and $\tilde{b}_j$ are bias terms, and $f(x_{ij})$ is a weighting function aimed at reducing the impact of sparse co-occurrence counts. It is easy to see that this objective is equivalent to maximizing the following likelihood function:
$$P(\mathcal{D} \mid \Omega) \propto \prod_{\substack{i,j \\ x_{ij} \neq 0}} \mathcal{N}(\log x_{ij};\, \mu_{ij},\, \sigma^2)^{f(x_{ij})}$$
where $\sigma^2 > 0$ can be chosen arbitrarily, $\mathcal{N}$ denotes the Normal distribution, and
$$\mu_{ij} = \mathbf{w}_i \cdot \tilde{\mathbf{w}}_j + b_i + \tilde{b}_j.$$
Furthermore, $\mathcal{D}$ denotes the given corpus and $\Omega$ refers to the set of parameters learned by the word embedding model, i.e. the word vectors $\mathbf{w}_i$ and $\tilde{\mathbf{w}}_j$ and the bias terms.
The advantage of this probabilistic formulation is that it allows us to introduce priors on the parameters of the model. This strategy was recently used in the WeMAP model (Jameel et al., 2019) to replace the constant variance $\sigma^2$ by a variance $\sigma_j^2$ that depends on the context word. In this paper, however, we will use priors on the parameters of the word embedding model itself. Specifically, we will impose a prior on the context word vectors $\tilde{\mathbf{w}}$, i.e. we will maximize:
$$\prod_{\substack{i,j \\ x_{ij} \neq 0}} \mathcal{N}(\log x_{ij};\, \mu_{ij},\, \sigma^2)^{f(x_{ij})} \cdot \prod_i P(\tilde{\mathbf{w}}_i)$$
Essentially, we want the prior $P(\tilde{\mathbf{w}}_i)$ to model the assumption that context word vectors are clustered. To this end, we use a mixture of von Mises-Fisher distributions. To describe this distribution, we begin with a von Mises-Fisher (vMF) distribution (Mardia and Jupp, 2009; Hornik and Grün, 2014), which is a distribution over unit vectors in $\mathbb{R}^d$ that depends on a parameter $\theta \in \mathbb{R}^d$, where $d$ will denote the dimensionality of the word vectors.
The vMF density for $x \in S^d$ (with $S^d$ the $d$-dimensional unit hypersphere) is given by:
$$\mathrm{vmf}(x \mid \theta) = \frac{e^{\theta^{\mathsf{T}} x}}{{}_0F_1\big(;\, d/2;\, \frac{\|\theta\|^2}{4}\big)}$$
where the denominator is given by
$${}_0F_1(;\, p;\, q) = \sum_{n=0}^{\infty} \frac{\Gamma(p)}{\Gamma(p+n)} \frac{q^n}{n!}$$
which is commonly known as the confluent hypergeometric function. Note, however, that we will not need to evaluate this denominator, as it simply acts as a scaling factor. The normalized vector $\frac{\theta}{\|\theta\|}$, for $\theta \neq 0$, is the mean direction of the distribution, while $\|\theta\|$ is known as the concentration parameter. To estimate the parameter $\theta$ from a given set of samples, we can use maximum likelihood (Hornik and Grün, 2014).
A finite mixture of vMFs, which we denote as movMF, is a distribution on the unit hypersphere of the following form ($x \in S^d$):
$$h(x \mid \Theta) = \sum_{k=1}^{K} \psi_k\, \mathrm{vmf}(x \mid \theta_k)$$
where $K$ is the number of mixture components, $\psi_k \geq 0$ for each $k$, $\sum_k \psi_k = 1$, and $\Theta = (\theta_1, \ldots, \theta_K)$. The parameters of this movMF distribution can be computed using the Expectation-Maximization (EM) algorithm (Banerjee et al., 2005; Hornik and Grün, 2014).
Note that movMF is a distribution on unit vectors, whereas context word vectors should not be normalized. We therefore define the prior on context word vectors as follows:
$$P(\tilde{\mathbf{w}}) \propto h\Big( \frac{\tilde{\mathbf{w}}}{\|\tilde{\mathbf{w}}\|} \,\Big|\, \Theta \Big)$$
Furthermore, we use L2 regularization to constrain the norm $\|\tilde{\mathbf{w}}\|$. We will refer to our model as CvMF. In the experiments, following Jameel et al. (2019), we will also consider a variant of our model in which we use a context-word specific variance $\sigma_j^2$. In that case, we maximize the following:
$$\prod_{\substack{i,j \\ x_{ij} \neq 0}} \mathcal{N}(\log x_{ij};\, \mu_{ij},\, \sigma_j^2) \cdot \prod_i P(\tilde{\mathbf{w}}_i) \cdot \prod_j P(\sigma_j^2)$$
where $P(\sigma_j^2)$ is modelled as an inverse-gamma distribution (NIG). Note that in this variant we do not use the weighting function $f(x_{ij})$, as this was found to be unnecessary when using a context-word specific variance $\sigma_j^2$ in Jameel et al. (2019). We will refer to this variant as CvMF(NIG).
Document embedding. The model described above can also be used to learn document embeddings. To this end, the target word vectors are simply replaced by document vectors and the counts
As evaluation tasks, we use standard word analogy and similarity benchmarks. Analogy. Table 1 shows word analogy results for three datasets. First, we show results for the Google analogy dataset (Mikolov et al., 2013a) which is available from the GloVe project2 and covers a mix of semantic and syntactic relations. These results are shown separately in Table 1 as Gsem and Gsyn respectively. Second, we consider the Microsoft syntactic word analogy dataset3, which only covers syntactic relations and is referred to as MSR. Finally, we show results for the 1https://bit.ly/313U2ml 2https://github.com/stanfordnlp/GloVe 3https://aclweb.org/aclwiki/Analogy (State of the art) 50 300 1,000 3,000 0.58 0.6 0.62 0.64 0.66 0.68 vMF Mixtures Accuracy GSem GSyn Figure 1: Accuracy vs number of vMF mixtures on the Google word analogy dataset for our model. BATS analogy dataset4, which covers four categories of relations: inflectional morphology (IM), derivational morphology (DM), encyclopedic semantics (ES) and lexicographic semantics (LS). The results in Table 1 clearly show that our model behaves substantially differently from the baselines: for the syntactic/morphological relationships (Gsyn, MSR, IM, DM), our model outperforms the baselines in a very substantial way. On the other hand, for the remaining, semanticallyoriented categories, the performance is less strong, with particularly weak results for Gsem. For ES and IS, it needs to be emphasized that the results are weak for all models, which is partially due to a relatively high number of out-of-vocabulary words. In Figure 1 we show the impact of the number of mixture components K on the performance for Gsem and Gsyn (for the NIG variant). This shows that the under-performance on Gsem is not due to the choice of K. Among others, we can also see that a relatively high number of mixture components is needed to achieve the best results. Word similarity. The word similarity results are shown in Table 2, where we have considered the same datasets as Jameel et al. (2019). In the table, we refer to EN-RW-Stanford as Stanf, EN-SIMLEX-999 as LEX, SimVerb3500 as Verb, EN-MTurk771 as Tr771, EN-MTurk287 as Tr287, EN-MENTR3K as TR3k, the RareWords dataset as RW, and the recently introduced Card-660 rare words dataset (Pilehvar et al., 2018) denoted as CA-660. Note that we have removed multi-word expressions from the RW-660 dataset and consider only unigrams, which reduces the size of 4http://vecto.space/projects/BATS/ 3323 Models Gsem GSyn MSR IM DM ES LS GloVe 78.85 62.81 53.04 55.21 14.82 10.56 0.881 SG 71.58 60.50 51.71 55.45 13.48 08.78 0.671 CBOW 64.81 47.39 45.33 50.58 10.11 07.02 0.764 WeMAP 83.52 63.08 55.08 56.03 14.95 10.62 0.903 CvMF 63.22 67.41 63.21 65.94 17.46 9.380 1.100 CvMF(NIG) 64.14 67.55 63.55 65.95 17.49 9.410 1.210 Table 1: Word analogy accuracy results on different datasets. Models MC30 TR3k Tr287 Tr771 RG65 Stanf LEX Verb143 WS353 YP130 Verb RW CA-660 GloVe 0.739 0.746 0.648 0.651 0.752 0.473 0.347 0.308 0.675 0.582 0.184 0.422 0.301 SG 0.741 0.742 0.651 0.653 0.757 0.470 0.356 0.289 0.662 0.565 0.195 0.470 0.206 CBOW 0.727 0.615 0.637 0.555 0.639 0.419 0.279 0.307 0.618 0.227 0.168 0.419 0.219 WeMAP 0.769 0.752 0.657 0.659 0.779 0.472 0.361 0.303 0.684 0.593 0.196 0.480 0.301 CvMF 0.707 0.703 0.642 0.652 0.746 0.419 0.353 0.250 0.601 0.465 0.226 0.519 0.394 CvMF(NIG) 0.708 0.703 0.642 0.652 0.747 0.419 0.354 0.250 0.604 0.467 0.226 0.519 0.395 Table 2: Word similarity results on some benchmark datasets (Spearman’s Rho). 
this dataset to 484 records. In most of these datasets, our model does not outperform the baselines, which is to be expected given the conclusion from the analogy task that our model seems specialized towards capturing morphological and syntactic features. Interestingly, however, in the RW and CA-660 datasets, which focus on rare words, our model performs clearly better than the baselines. Intuitively, we may indeed expect that the use of a prior on the context words acts as a form of smoothing, which can improve the representation of rare words. Qualitative analysis. To better understand how our model differs from standard word embeddings, Table 3 shows the ten nearest neighbors (Al-Rfou et al., 2013) for a number of words according to our CvMF(NIG) model and according to the GloVe model. What can clearly be seen is that our model favors words that are of the same kind. For instance, the top 5 neighbours of fastest are all speed-related adjectives. As another example, the top 7 neighbors of red are colors. To further explore the impact of our model on rare words, Table 4 shows the nearest neighbors for some low-frequency terms. These examples clearly suggest that our model captures the meaning of these words in a better way than the GloVe model. For example, the top neighbors of casio are highly relevant terms such as notebook and compute, whereas the neighbors obtained with the GloVe model seem largely unrelated. For comparison, Table 5 shows the nearest neighbors of some high-frequency terms. In these case we can see that the GloVe model obtains the best results, as e.g. moreover is found as a neighbor of neural for our model, and indeed is found as a neighbor of clouds. This supports the results from the similarity benchmarks that our model performs better than standard methods at modelling rare words but worse at modelling frequent words. Finally, Table 6 shows the effect that our model can have on ambiguous words, where due to the use of the prior, a different dominant sense is found. 4.2 Document Embedding Results To evaluate the document embeddings, we focus on two downstream applications: categorization and document retrieval. As an intrinsic evaluation, we also evaluate the semantic coherence of the topics identified by our model. Document Categorization. We have evaluated our document embeddings on four standard document classification benchmarks: 1) 20 Newsgroups (20NG)5, 2) OHSUMED-23 (OHS)6, 3) TechTC-300 (TechTC)7, and 4) Reuters-21578 (Reu)8. 
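For reference, the nearest-neighbour lists in Tables 3–6 above can be reproduced with a simple cosine ranking over the learned vectors; the sketch below is illustrative only, with E (embedding matrix) and vocab (word list) as hypothetical names.

```python
import numpy as np

def nearest_neighbours(word, E, vocab, k=10):
    """Return the k words whose vectors are closest to `word` by cosine similarity."""
    idx = {w: i for i, w in enumerate(vocab)}
    E_norm = E / np.linalg.norm(E, axis=1, keepdims=True)   # unit-normalize rows
    sims = E_norm @ E_norm[idx[word]]                        # cosine with the query word
    order = np.argsort(-sims)
    return [vocab[i] for i in order if vocab[i] != word][:k]
```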
As baselines, we consider the following approaches: 1) TF-IDF weighted bag-ofwords representation, 2) LDA9, 3) HDP10, 4) the 5http://qwone.com/ jason/20Newsgroups/ 6https://www.mat.unical.it/OlexSuite/Datasets/ SampleDataSets-download.htm 7http://techtc.cs.technion.ac.il/techtc300/techtc300.html 8https://archive.ics.uci.edu/ml/datasets/reuters21578+text+categorization+collection 9https://radimrehurek.com/gensim/models/ldamodel.html 10https://github.com/blei-lab/hdp 3324 fastest india red attackers cession summer Our GloVe Our GloVe Our GloVe Our GloVe Our GloVe Our GloVe slowest fifth pakistan indian blue blue assailants assailants ceding ceding winter winter quickest second lanka mumbai yellow white attacker besiegers annexation ceded autumn olympics slower sixth nepal pakistan white yellow townspeople pursuers annexing reaffirmation spring autumn faster slowest indian pradesh black which insurgents fortunately cede abrogation year spring fast ever bangladesh subcontinent green called policemen looters expropriation stipulating fall in surpassing quickest asia karnataka pink bright retaliation attacker continuance californios months beginning next third delhi bengal gray pink rioters accomplices ceded renegotiation in next surpassed respectively sri bangalore well green terrorists captors incorporation expropriation also months best tenth thailand asia the purple perpetrators strongpoints ironically zapatistas time during slow first china delhi with black whereupon whereupon dismantling annexation beginning year Table 3: Nearest neighbors for selected words. incisions unveil promissory batgirl casio Our GloVe Our GloVe Our GloVe Our GloVe Our GloVe incision incision unveiling unveils issuance estoppel catwoman huntress notebook <unk> indentations embellishment utilise devise curiously scribbled nightwing zatanna compute nightlifepartner punctures preferably introduce unveiling wherein untraceable supergirl clayface practicality vgnvcm scalpel notches invent <unk> handwritten evidencing batman superwoman utilizing counterstrike creases oftentimes expose finalise ostensibly gifting nemesis gcpd add graphing abrasions utilising publicize solidify purpotedly discordant abandon supergirl furthermore mkii lacerations lastly anticipating rediscover omnious renegotiation protege riddler utilising kajimitsuo extractions silhouettes unravelling embellish phony repossession unbeknownst woman utilizing reconditioned liposuction discreetly uncover reexamine proposing waiving reappears fight likewise bivort apertures purposefully inaugrate memorializing ironically abrogation cyborg first anticipating spellbinder Table 4: Nearest neighbors for low-frequency words. neural clouds Our GloVe Our GloVe neuronal neuronal cloud cumulonimbus brain cortical shadows cloud cortical correlates mist obscured perceptual neurons darkness mist physiological plasticity heavens shadows signaling neuroplasticity echoes aerosols furthermore computation indeed sky moreover circuitry furthermore fog cellular spiking fog swirling circuitry mechanisms lastly halos Table 5: Nearest neighbors for high-frequency words. amazon apple Our GloVe Our GloVe amazonian itunes cherry iigs forest kindle apples iphone brazil emusic peach macintosh rain nightlifepartner pear itunes green astore red ipad trees cdbaby sweet ipod wildlife guianas healthy ios preserve likewise doctor microsoft water aforementioned fruit garbageband rains ebay edible phone Table 6: Nearest neighbors for ambiguous words. 
von Mises-Fisher clustering model (movMF)11, 5) Gaussian LDA (GLDA)12 and 6) Spherical HDP 11https://cran.r-project.org/web/packages/movMF/index.html 12https://github.com/rajarshd/Gaussian LDA Models 20NG OHS TechTC Reu TF-IDF 0.852 0.632 0.306 0.319 LDA 0.859 0.629 0.305 0.323 HDP 0.862 0.627 0.304 0.339 movMF 0.809 0.610 0.302 0.336 GLDA 0.862 0.629 0.305 0.352 sHDP 0.863 0.631 0.304 0.353 GloVe 0.852 0.629 0.301 0.315 WeMAP 0.855 0.630 0.306 0.345 SG 0.853 0.631 0.304 0.341 CBOW 0.823 0.629 0.297 0.339 CvMF 0.871 0.633 0.305 0.362 CvMF(NIG) 0.871 0.633 0.305 0.363 Table 7: Document classification results (F1). (sHDP)1314, 7) GloVe15 (Pennington et al., 2014), 8) WeMAP (Jameel et al., 2019), 9) Skipgram (SG) and Continuous Bag-of-Words16 (Mikolov et al., 2013b) models. In the case of the word embedding models, we create document vectors in the same way as we do for our model, by simply replacing the role of target word vectors with document word vectors. In all the datasets, we removed punctuation and 13https://github.com/Ardavans/sHDP 14We do not compare with the method proposed in (Li et al., 2016b) because its implementation is not available. Moreover the sHDP method, which was published around the same time, is very similar in spirit, but the latter uses a nonparametric HDP topic model. 15https://github.com/stanfordnlp/GloVe 16https://github.com/facebookresearch/fastText 3325 non-ASCII characters. We then segmented the sentences using Perl. In all models, parameters were tuned based on a development dataset. To this end, we randomly split our dataset into 60% training, 20% development and 20% testing. We report the results in terms of F1 score on the test set, using the Perf tool17. The trained document vectors were used as input to a linear SVM classifier whose trade-off parameter C was tuned from a pool of {10, 50, 100}, which is a common setting in document classification tasks. Note that our experimental setup is inherently different from those setups where a word embedding model is evaluated on the text classification task using deep neural networks, as our focus is on methods that learn document vectors in an unsupervised way. We have therefore adopted a setting where document vectors are used as the input to an SVM classifier. In our model, we have set the number of word embeddings iterations to 50. The parameters of the vMF mixture model were re-computed after every 5 word embedding iterations. We tuned the dimensionality of the embedding from the pool {100, 150, 200} and the number of vMF mixture components from the pool {200, 500, 800}. We used the default document topic priors and word topic priors in the LDA and the HDP topic models. For the LDA model, we tuned the number of topics from the pool {50, 80, 100} and the number of iterations of the sampler was set to 1000. We also verified in initial experiments that having a larger number of topics than 100 did not allow for better performance on the development data. The number of vMF mixtures of the comparative method, movMF, was tuned from the pool {200, 500, 800}. For GLDA, as in the original paper, we have used word vectors that were pre-trained using Skipgram on the English Wikipedia. We have tuned the word vectors size and number of topics from a pool of {100, 150, 200} and {50, 80, 100} respectively. The number of iterations of the sampler was again set to 1000. We have used same pre-trained word embeddings for sHDP, where again the number of dimensions was automatically tuned. 
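The categorization protocol described above can be summarised in a short sketch: a 60/20/20 split, a linear SVM whose trade-off parameter C is tuned on the development set from {10, 50, 100}, and F1 on the test set. This is an illustration only, not the paper's pipeline: the document vectors below are random placeholders standing in for the vectors produced by the various models, and the macro-averaged F1 and the retraining on train+dev are assumptions (the paper reports F1 via the Perf tool without these details).

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.metrics import f1_score

def tune_and_evaluate(doc_vectors, labels, c_grid=(10, 50, 100), seed=0):
    """60/20/20 split; choose C on the development set by F1, report test F1."""
    X_train, X_rest, y_train, y_rest = train_test_split(
        doc_vectors, labels, train_size=0.6, random_state=seed, stratify=labels)
    X_dev, X_test, y_dev, y_test = train_test_split(
        X_rest, y_rest, test_size=0.5, random_state=seed, stratify=y_rest)

    best_c, best_f1 = c_grid[0], -1.0
    for c in c_grid:
        clf = LinearSVC(C=c, max_iter=10000).fit(X_train, y_train)
        f1 = f1_score(y_dev, clf.predict(X_dev), average="macro")
        if f1 > best_f1:
            best_c, best_f1 = c, f1

    # Retrain on train+dev with the selected C (an assumption; the paper does
    # not say whether the development data is folded back in).
    clf = LinearSVC(C=best_c, max_iter=10000).fit(
        np.vstack([X_train, X_dev]), np.concatenate([y_train, y_dev]))
    return f1_score(y_test, clf.predict(X_test), average="macro")

# Toy input: 200 random 100-dimensional "document vectors" with 4 classes.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 100)), rng.integers(0, 4, size=200)
print(tune_and_evaluate(X, y))
```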
Table 7 summarizes our document classification results. It can be seen that our model outperforms all baselines, except for the TechTC dataset, where the results are very close. Among the baselines, sHDP achieves the best performance. Interest17http://osmot.cs.cornell.edu/kddcup/software.html Models WT2G HARD AQUT OHS TF-IDF 0.288 0.335 0.419 0.432 LDA 0.291 0.346 0.447 0.461 HDP 0.301 0.333 0.436 0.455 movMF 0.255 0.311 0.421 0.432 GLDA 0.301 0.351 0.447 0.462 sHDP 0.301 0.334 0.437 0.452 GloVe 0.301 0.333 0.436 0.459 WeMAP 0.302 0.362 0.447 0.465 SG 0.301 0.345 0.447 0.461 CBOW 0.299 0.323 0.441 0.459 CvMF 0.305 0.361 0.449 0.469 CvMF(NIG) 0.306 0.363 0.450 0.471 Table 8: Document retrieval learning experiments (NDCG@10). ingly, this model also uses von Mishes-Fisher mixtures, but relies on a pre-trained word embedding. Document Retrieval. Next we describe our document retrieval experiments. Specifically, we consider this problem as a learning-to-rank (LTR) task and use standard information retrieval (IR) tools to present our evaluation results. We have used the following standard IR benchmark datasets: 1) WT2G18 along with standard relevance assessments and topics (401 450), 2) TREC HARD (denoted as HARD)19, 3) AQUAINT-2 (AQUT)20 where we considered only the document-level relevance assessments, and 4) LETOR OHSUMED (OHS)21, which consists of 45 features along with query-document pairs with relevance judgments in five folds. We have obtained the raw documents and queries22 of this dataset, from which we can learn the document representations. As baselines, we have considered the following methods: 1) TF-IDF, 2) LDA (Blei et al., 2003), 3) HDP (Teh et al., 2005), 4) movMF (Banerjee et al., 2005), 5) sHDP (Batmanghelich et al., 2016), 6) GloVe (Pennington et al., 2014), 7) WeMAP (Jameel et al., 2019), 8) Skip-gram, and 9) CBOW word embedding models (Mikolov et al., 2013b). We have adopted the same preprocessing strategy as for the categorization task, with the exception of OHSUMED, for which suitable LTR features are already given. For all other datasets we 18http://ir.dcs.gla.ac.uk/test collections/access to data.html 19https://trec.nist.gov/data/hard.html 20https://catalog.ldc.upenn.edu/LDC2008T25 21https://www.microsoft.com/enus/download/details.aspx?id=52482 22http://ir.dcs.gla.ac.uk/resources/test collections/ 3326 used the Terrier LTR framework23 to generate the six standard LTR document features as described in (Jameel et al., 2015). The document vectors were then concatenated with these six features24. To perform the actual retrieval experiment, we used RankLib25 with a listwise RankNet (Burges et al., 2005) model26. Our results are reported in terms of NDCG@10, which is a common evaluation metric for this setting. Our training strategy is mostly the same as for the document categorization experiments, although for some parameters, such as the number of topics and vMF mixture components, we used larger values, which is a reflection of the fact that the collections used in this experiment are substantially larger and tend to be more diverse (Wei and Croft, 2006). In particular, the word vector lengths were chosen from a pool of {150, 200, 300} and the vMF mixtures from a pool of {300, 1000, 3000}. In the LDA model, we selected the number of topics from a pool of {100, 150, 200}. For GLDA we have used the same pool for the number of topics. 
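Table 8 below reports NDCG@10. The experiments obtain it from RankLib, but the metric itself is easy to state; the following sketch uses the common (2^rel − 1) gain formulation and may differ in small details from the toolkit's implementation.

```python
import numpy as np

def dcg_at_k(relevances, k=10):
    """Discounted cumulative gain with the (2^rel - 1) gain function."""
    rel = np.asarray(relevances, dtype=float)[:k]
    discounts = np.log2(np.arange(2, rel.size + 2))
    return float(np.sum((2 ** rel - 1) / discounts))

def ndcg_at_k(relevances, k=10):
    """NDCG@k: DCG of the ranking divided by the DCG of the ideal ranking."""
    ideal_dcg = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal_dcg if ideal_dcg > 0 else 0.0

# Relevance labels of the retrieved documents, in the order ranked by a model.
print(ndcg_at_k([3, 2, 3, 0, 1, 2, 0, 0, 1, 0]))  # roughly 0.95 for this toy ranking
```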
All our results are reported for five-fold cross validation, where the parameters of the LTR model were automatically tuned, which is a common LTR experimental setting (Liu et al., 2015a). The results are presented in Table 8, showing that our model is able to consistently outperform all methods. Among the baselines, our NIG variant achieves the best performance in this case, which is remarkable as this is also a word embedding model. Word Coherence. In traditional topic models such as LDA, the topics are typically labelled by the k words that have the highest probability in the topic. These words tend to reflect semantically coherent themes, which is an important reason for the popularity of topic models. Accordingly, measuring the coherence of the top-k words that are identified by a given topic model, for each topic, is a common evaluation measure (Shi et al., 2017). Using the configurations that performed best on the tuning data in the document categorization task above, we used Gensim27 ( ˇReh˚uˇrek 23http://terrier.org/docs/v4.0/learning.html 24Note that in OHS the document vectors were concatenated with 45 LTR features. 25https://sourceforge.net/p/lemur/wiki/RankLib/ 26Note that in principle any LTR model for IR could be used. 27radimrehurek.com/gensim/models/coherencemodel.html Models 20NG OHS TechTC Reu TF-IDF 0.323 0.288 0.391 0.209 LDA 0.453 0.355 0.455 0.221 HDP 0.444 0.321 0.451 0.221 movMF 0.331 0.223 0.422 0.212 GLDA 0.466 0.356 0.455 0.234 sHDP 0.453 0.356 0.455 0.236 GloVe 0.455 0.352 0.453 0.221 WeMAP 0.456 0.354 0.454 0.223 SG 0.453 0.355 0.453 0.221 CBOW 0.432 0.344 0.421 0.220 CvMF 0.492 0.356 0.455 0.239 CvMF(NIG) 0.492 0.356 0.455 0.236 Table 9: Word coherence results in c v computed using Gensim. and Sojka, 2010) to compute the coherence of the top-20 words using the c v metric (R¨oder et al., 2015). For our model, GDLA and sHDP, the mixture components that were learned were consided as topics for this experiment. For GloVe, WeMAP, SG, TF-IDF, and CBOW, we used the von Mises-Fisher (vMF) soft clustering model (Banerjee et al., 2005) to determine the cluster memberships of the context words. For the TFIDF results, we instead used hard vMF clustering (Hornik and Gr¨un, 2014), as the movMF results are based on TF-IDF features as well. We tuned the number of clusters using the tuning data. The top-20 words after applying the clustering model were then output based on the distance from the cluster centroid. The results are shown in Table 9, showing that the word clusters defined by our mixture components are more semantically coherent than the topics obtained by the other methods. 5 Conclusions In this paper, we analyzed the effect of adding a prior to the GloVe word embedding model, encoding the intuition that words can be organized in various natural groupings. Somewhat surprisingly, perhaps, this leads to a word embedding model which behaves substantially differently from existing methods. Most notably, our model substantially outperforms standard word embedding models in analogy tasks that focus on syntactic/morphological relations, although this comes at the cost of lower performance in semantically oriented tasks such as measuring word similarity. We also found that the model performs better than 3327 standard word embedding models when it comes to modelling rare words. Word embedding models can also be used to learn document embeddings, by replacing word-word co-occurrences by document-word cooccurrences. 
This allowed us to compare our model with existing approaches that use von Mises-Fisher distributions for document modelling. In contrast to our method, these models are based on topic models (e.g. they typically model documents as a multinomial distribution over topics). Surprisingly, we found that the document representations learned by our model outperform these topic modelling-based approaches, even those that rely on pre-trained word embeddings and thus have an added advantage, considering that our model in this setting is only learned from the (often relatively small) given document collection. This finding puts into question the value of document-level topic distributions, which are used by many document embedding methods (being inspired by topic models such as LDA). Acknowledgments Steven Schockaert is supported by ERC Starting Grant 637277. References Rami Al-Rfou, Bryan Perozzi, and Steven Skiena. 2013. Polyglot: Distributed word representations for multilingual nlp. In Proceedings of the Seventeenth Conference on Computational Natural Language Learning, pages 183–192, Sofia, Bulgaria. Association for Computational Linguistics. Sanjeev Arora, Yuanzhi Li, Yingyu Liang, Tengyu Ma, and Andrej Risteski. 2016. A latent variable model approach to pmi-based word embeddings. Transactions of the Association for Computational Linguistics, 4:385–399. Arindam Banerjee, Inderjit S Dhillon, Joydeep Ghosh, and Suvrit Sra. 2005. Clustering on the unit hypersphere using von mises-fisher distributions. Journal of Machine Learning Research, 6:1345–1382. Kayhan Batmanghelich, Ardavan Saeedi, Karthik Narasimhan, and Sam Gershman. 2016. Nonparametric spherical topic modeling with word embeddings. In Proceedings ACL, volume 2016, pages 537–542. David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent Dirichlet allocation. Journal of machine Learning research, 3:993–1022. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146. Christopher Burges, Tal Shaked, Erin Renshaw, Ari Lazier, Matt Deeds, Nicole Hamilton, and Gregory N Hullender. 2005. Learning to rank using gradient descent. In Proceedings of the 22nd International Conference on Machine learning (ICML-05), pages 89–96. Rajarshi Das, Manzil Zaheer, and Chris Dyer. 2015. Gaussian LDA for topic models with word embeddings. In Proceedings ACL, pages 795–804. Manaal Faruqui, Jesse Dodge, Sujay Kumar Jauhar, Chris Dyer, Eduard H. Hovy, and Noah A. Smith. 2015. Retrofitting word vectors to semantic lexicons. In Proceedings of NAACL, pages 1606–1615. S. Guo, Q. Wang, B. Wang, L. Wang, and L. Guo. 2015. Semantically smooth knowledge graph embedding. In Proceedings ACL, pages 84–94. Junxian He, Zhiting Hu, Taylor Berg-Kirkpatrick, Ying Huang, and Eric P Xing. 2017. Efficient correlated topic modeling with topic embedding. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 225–233. Kurt Hornik and Bettina Gr¨un. 2014. movMF: An R package for fitting mixtures of von mises-fisher distributions. Journal of Statistical Software, 58(10):1– 31. Zhiting Hu, Poyao Huang, Yuntian Deng, Yingkai Gao, and Eric P. Xing. 2015. Entity hierarchy embedding. In ACL, pages 1292–1300. Shoaib Jameel, Zihao Fu, Bei Shi, Wai Lam, and Steven Schockaert. 2019. Word embedding as maximum a posteriori estimation. In Proceedings of the AAAI Conference on Artificial Intelligence. 
Shoaib Jameel, Wai Lam, and Lidong Bing. 2015. Supervised topic models with word order structure for document classification and retrieval learning. Information Retrieval Journal, 18(4):283–330. Shoaib Jameel and Steven Schockaert. 2016. Entity embeddings with conceptual subspaces as a basis for plausible reasoning. In Proceedings of ECAI, pages 1353–1361. Shaohua Li, Tat-Seng Chua, Jun Zhu, and Chunyan Miao. 2016a. Generative topic embedding: a continuous representation of documents. In Proceedings ACL. Ximing Li, Jinjin Chi, Changchun Li, Jihong Ouyang, and Bo Fu. 2016b. Integrating topic modeling with word embeddings by mixtures of vmfs. In Proceedings of the 26th International Conference on Computational Linguistics, pages 151–160. 3328 Yuezhang Li, Ronghuo Zheng, Tian Tian, Zhiting Hu, Rahul Iyer, and Katia P. Sycara. 2016c. Joint embedding of hierarchical categories and entities for concept categorization and dataless classification. In Proceedings COLING, pages 2678–2688. Quan Liu, Hui Jiang, Si Wei, Zhen-Hua Ling, and Yu Hu. 2015a. Learning semantic word embeddings based on ordinal knowledge constraints. In Proceedings of ACL, pages 1501–1511. Yang Liu, Zhiyuan Liu, Tat-Seng Chua, and Maosong Sun. 2015b. Topical word embeddings. In Proceedings AAAI, pages 2418–2424. Kanti V Mardia and Peter E Jupp. 2009. Directional statistics, volume 494. John Wiley & Sons. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Proceedings of NIPS, pages 3111– 3119. Nikola Mrksic, Diarmuid ´O S´eaghdha, Blaise Thomson, Milica Gasic, Lina Maria Rojas-Barahona, PeiHao Su, David Vandyke, Tsung-Hsien Wen, and Steve J. Young. 2016. Counter-fitting word vectors to linguistic constraints. In Proceedings NAACLHLT, pages 142–148. Jiaqi Mu, Suma Bhat, and Pramod Viswanath. 2018. All-but-the-top: Simple and effective postprocessing for word representations. In Proceedings ICLR. Arvind Neelakantan, Jeevan Shankar, Alexandre Passos, and Andrew McCallum. 2014. Efficient nonparametric estimation of multiple embeddings per word in vector space. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1059–1069. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of EMNLP, pages 1532–1543. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of NAACL-HLT, pages 2227–2237. Mohammad Taher Pilehvar, Dimitri Kartsaklis, Victor Prokhorov, and Nigel Collier. 2018. Card-660: Cambridge rare word dataset-a reliable benchmark for infrequent word representation models. arXiv preprint arXiv:1808.09308. Radim ˇReh˚uˇrek and Petr Sojka. 2010. Software Framework for Topic Modelling with Large Corpora. In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pages 45–50, Valletta, Malta. ELRA. http://is.muni.cz/ publication/884893/en. Michael R¨oder, Andreas Both, and Alexander Hinneburg. 2015. Exploring the space of topic coherence measures. In Proceedings of the eighth ACM international conference on Web search and data mining, pages 399–408. ACM. 
Bei Shi, Wai Lam, Shoaib Jameel, Steven Schockaert, and Kwun Ping Lai. 2017. Jointly learning word embeddings and latent topics. In Proceedings SIGIR, pages 375–384. Yee W Teh, Michael I Jordan, Matthew J Beal, and David M Blei. 2005. Sharing clusters among related groups: Hierarchical dirichlet processes. In Advances in neural information processing systems, pages 1385–1392. Xing Wei and W Bruce Croft. 2006. Lda-based document models for ad-hoc retrieval. In Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval, pages 178–185. ACM. C. Xu, Y. Bai, J. Bian, B. Gao, G. Wang, X. Liu, and T.-Y. Liu. 2014. RC-NET: A general framework for incorporating knowledge into word representations. In Proc. CIKM, pages 1219–1228. Mo Yu and Mark Dredze. 2014. Improving lexical embeddings with semantic knowledge. In Proceedings ACL, pages 545–550.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3329–3334 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3329 Delta Embedding Learning Xiao Zhang∗Ji Wu∗Dejing Dou† ∗Department of Electronic Engineering, Tsinghua University †Department of Computer and Information Science, University of Oregon [email protected] wuji [email protected] [email protected] Abstract Unsupervised word embeddings have become a popular approach of word representation in NLP tasks. However there are limitations to the semantics represented by unsupervised embeddings, and inadequate fine-tuning of embeddings can lead to suboptimal performance. We propose a novel learning technique called Delta Embedding Learning, which can be applied to general NLP tasks to improve performance by optimized tuning of the word embeddings. A structured regularization is applied to the embeddings to ensure they are tuned in an incremental way. As a result, the tuned word embeddings become better word representations by absorbing semantic information from supervision without “forgetting.” We apply the method to various NLP tasks and see a consistent improvement in performance. Evaluation also confirms the tuned word embeddings have better semantic properties. 1 Introduction Unsupervised word embeddings have become the basis for word representation in NLP tasks. Models such as skip-gram (Mikolov et al., 2013a) and Glove (Pennington et al., 2014) capture the statistics of a large corpus and have good properties that corresponds to the semantics of words (Mikolov et al., 2013b). However there are certain problems with unsupervised word embeddings, such as the difficulty in modeling some fine-grained word semantics. For example words in the same category but with different polarities are often confused because those words share common statistics in the corpus (Faruqui et al., 2015; Mrkˇsi´c et al., 2016). In supervised NLP tasks, these unsupervised word embeddings are often used in one of two ways: keeping fixed or using as initialization (finetuning). The decision is made based on the amount of available training data in order to avoid overfitting. Nonetheless, underfitting with keeping fixed and certain degrees of overfitting with fine-tuning is inevitable. Because this all or none optimization of the word embeddings lacks control over the learning process, the embeddings are not trained to an optimal point, which can result in suboptimal task performance, as we will show later. In this paper, we propose delta embedding learning, a novel method that aims to address the above problems together: using regularization to find the optimal fine-tuning of word embeddings. Better task performance can be reached with properly optimized embeddings. At the same time, the regularized fine-tuning effectively combines semantics from supervised learning and unsupervised learning, which addresses some limitations in unsupervised embeddings and improves the quality of embeddings. Unlike retrofitting (Yu and Dredze, 2014; Faruqui et al., 2015), which learns directly from lexical resources, our method provides a way to learn word semantics from supervised NLP tasks. Embeddings usually become task-specific and lose its generality when trained along with a model to maximize a task objective. 
Approaches that try to learn reusable embeddings from NLP tasks include multi-task learning, where one predicts context words and external labels at the same time (Tang et al., 2014), and specially designed gradient descent algorithms for fine-tuning (Yang and Mao, 2015). Our method learns reusable supervised embeddings by fine-tuning an unsupervised embedding on a supervised task with a simple modification. The method also makes it possible to examine and interpret the learned semantics. The rest of the paper is organized as follows. Section 2 introduces the delta embedding learning method. Section 3 applies the method to NLP tasks, and the learned embeddings are evaluated and analyzed in Section 4.

[Figure 1: Delta embedding learning in a supervised NLP task. Solid line: forward model computation. Dashed line: learning of delta embeddings through back propagation. Diagram labels: unsupervised embedding, delta embeddings, attention, model (e.g. LSTM, CNN), dataset, task objective.]

2 Methodology

2.1 Delta embedding learning

The aim of the method is to combine the benefits of unsupervised learning and supervised learning to learn better word embeddings. An unsupervised word embedding like skip-gram, trained on a large corpus (like Wikipedia), gives good-quality word representations. We use such an embedding w_unsup as a starting point and learn a delta embedding w_Δ on top of it:

w = w_{unsup} + w_{\Delta}.    (1)

The unsupervised embedding w_unsup is fixed to preserve good properties of the embedding space and the word semantics learned from the large corpus. The delta embedding w_Δ is used to capture discriminative word semantics from supervised NLP tasks and is trained together with a model for the supervised task. In order to learn only useful word semantics rather than task-specific peculiarities that result from fitting (or overfitting) a specific task, we impose an L_{2,1} loss, a kind of structured regularization, on w_Δ:

loss = loss_{task} + c \sum_{i=1}^{n} \Big( \sum_{j=1}^{d} w_{\Delta ij}^{2} \Big)^{1/2}    (2)

The regularization loss is added as an extra term to the loss of the supervised task. The effect of the L_{2,1} loss on w_Δ has a straightforward interpretation: to minimize the total moving distance of word vectors in embedding space while reaching optimal task performance. The L2 part of the regularization keeps the change of each word vector small, so that it does not lose its original semantics. The L1 part of the regularization induces sparsity on the delta embeddings, so that only a small number of words get non-zero delta embeddings while the majority of words are kept intact. The combined effect is selective fine-tuning with moderation: the delta embedding captures only significant word semantics that is contained in the training data of the task but absent from the unsupervised embedding.

2.2 Task formulation

Delta embedding learning is a general method that can in principle be applied to any task or model that uses embeddings. Figure 1 illustrates how the method is applied. The combination of the delta embedding and the unsupervised embedding is provided to a model as input. The delta embedding is updated with the model while optimizing the loss function in (2). The model is trained to maximize task performance, and the produced delta embedding, when combined with the unsupervised embedding, becomes an improved word representation in its own right.

3 Experiments on NLP tasks

We conduct experiments on several different NLP tasks to illustrate the effect of delta embedding learning on task performance.
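Before turning to the experimental setup, the formulation above can be made concrete with a minimal PyTorch sketch (our illustration, not the authors' implementation): a frozen pre-trained matrix, a trainable delta matrix initialised to zero, and the row-wise L_{2,1} penalty of Eq. (2) added to the task loss.

```python
import torch
import torch.nn as nn

class DeltaEmbedding(nn.Module):
    """w = w_unsup + w_delta, with w_unsup frozen and w_delta trained."""

    def __init__(self, pretrained: torch.Tensor):
        super().__init__()
        # Frozen unsupervised embedding (e.g. GloVe), stored as a buffer.
        self.register_buffer("w_unsup", pretrained.clone())
        # Trainable delta embedding, initialised to zero.
        self.w_delta = nn.Parameter(torch.zeros_like(pretrained))

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        return self.w_unsup[token_ids] + self.w_delta[token_ids]

    def l21_penalty(self) -> torch.Tensor:
        # Sum of the L2 norms of the per-word delta vectors, as in Eq. (2).
        return self.w_delta.norm(p=2, dim=1).sum()

# Usage inside a training step (task_loss comes from the downstream model):
#     loss = task_loss + c * emb.l21_penalty()
emb = DeltaEmbedding(torch.randn(10000, 100))   # vocabulary of 10k, dimension 100
vectors = emb(torch.tensor([[1, 5, 42]]))       # a batch of token ids
print(vectors.shape, emb.l21_penalty().item())
```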
3.1 Experimental setup Sentiment analysis We performed experiments on two sentiment analysis datasets: rt-polarity (binary) (Pang and Lee, 2005) and Kaggle movie review (KMR, 5 class) (Socher et al., 2013). For rtpolarity, we used a CNN model as in (Kim, 2014). For KMR an LSTM-based model is used. Reading comprehension We used the Stanford Question Answering Dataset (SQuAD, v1.1) (Rajpurkar et al., 2016) and the Bi-directional Attention Flow (BiDAF) (Seo et al., 2016) model. The original hyperparameters are used, except that character-level embedding is turned off to help clearly illustrate the effect of word embeddings. Language inference The MultiNLI (Williams et al., 2018) and SNLI (Bowman et al., 2015) datasets are used for evaluation of the natural language inference task. We use the ESIM model, 3331 Regularization coefficient rt-polarity KMR SQuAD EM F1 MultiNLI Genre M Mis-M SNLI 0 (finetune) 78.61 68.43 64.29 74.35 69.3 61.2 62.1 57.2 ∞(fixed) 76.66 66.72 67.94 77.33 69.5 62.6 64.0 59.7 10−3 76.17 67.01 68.08 77.56 69.5 63.0 63.6 60.2 10−4 79.30 67.97 68.45 78.12 71.5 63.4 64.3 60.6 10−5 78.71 68.96 66.48 76.31 70.9 63.6 63.8 59.7 Table 1: Performance of different embedding training methods on various NLP tasks. Numbers represent model accuracy (in percentage) on each task , except for SQuAD a strong baseline in (Williams et al., 2018). As MultiNLI is a large dataset, we use a subset (“fiction” genre) for training to simulate a moderate data setting, and use development set and SNLI for testing. Common setup For all the experiments, we used Glove embeddings pre-trained on Wikipedia and Gigaword corpus1 as they are publicly available and frequently used in NLP literature. Dimensions of word embeddings in all models are set to 100. 3.2 Results The task performance of models with different embedding learning choices is reported in Table 1. All initialized with unsupervised pre-trained embeddings, comparison is made between finetuning, keeping fixed and tuning with delta embeddings. For delta embeddings, there is one hyperparameter c that controls the strength of regularization. We empirically experiment in the range of [10−5, 10−3]. In all the tasks delta embedding learning outperforms conventional methods of using embedding. As embeddings is the only variable, it shows delta embedding learning learns better quality embeddings that results in better task performance. Roughly two kinds of scenarios exist in these tasks. For easier tasks like sentiment analysis underfitting is obvious when keeping embeddings fixed. Harder tasks like reading comprehension on the other hand clearly suffer from overfitting. In both situations delta embeddings managed to balance between underfitting and overfitting with a more optimal tuning. For the hyper-parameter choice of regularization coefficient c, we found it fairly insensitive to tasks, with c = 10−4 achieving the best performance in most tasks. 1https://nlp.stanford.edu/projects/glove/ The results indicate that delta embedding learning does not require the decision to fix the embedding or not in an NLP task, as delta embedding learning always harvests the best from unsupervised embeddings and supervised fine-tuning, regardless of the amount of labeled data. 4 Embedding evaluation To validate the hypothesis that better performance is the result of better embeddings, we examine the properties of embeddings tuned with delta embedding learning. 
Word embedding from the BiDAF model is extracted after training on SQuAD, and is compared with the original Glove embedding. The motivation of investigating embeddings trained on SQuAD is because reading comprehension is a comprehensive language understanding task that involves a rather wide spectrum of word semantics. Training on SQuAD tunes a number of word embeddings which results in non-trivial changes of embedding properties on the whole vocabulary level, which we can validate with embedding evaluation tests. As for simpler tasks like sentiment analysis, we observe that they tune fewer words and the effects are less visible. 4.1 QVEC QVEC (Tsvetkov et al., 2015) is a comprehensive evaluation of the quality of word embeddings by aligning with linguistic features. We calculated the QVEC score of learned embeddings (Table 2). Embedding QVEC score Relative gain Glove 0.37536 finetune 0.37267 −2.7 · 10−3 delta@10−3 0.37536 3.0 · 10−6 delta@10−4 0.37543 7.5 · 10−5 delta@10−5 0.37332 −2.0 · 10−3 Table 2: QVEC scores of learned embeddings Using the original Glove embedding as reference, unconstrained finetune decreases the QVEC 3332 Correlation Glove finetune delta @10−4 ∆ WS-353 0.555 0.545 0.563 + WS-353-SIM 0.657 0.659 0.667 + WS-353-REL 0.495 0.485 0.506 + MC-30 0.776 0.764 0.783 + RG-65 0.741 0.736 0.740 Rare-Word 0.391 0.377 0.392 + MEN 0.702 0.703 0.703 + MTurk-287 0.632 0.625 0.635 + MTurk-771 0.574 0.577 0.576 + YP-130 0.460 0.475 0.467 + SimLex-999 0.292 0.304 0.295 + Verb-143 0.302 0.305 0.315 + SimVerb-3500 0.169 0.176 0.171 + Table 3: Evaluation of embedding by word pair similarity ranking. score, because the embedding overfits to the task, and some of the semantic information in the original embedding is lost. Delta embedding learning (c = 10−4) achieves the best task performance while also slightly increases the QVEC score. The change in score is somewhat marginal, but can be regarded as a sanity check: delta embedding learning does not lower the quality of the original embedding (in other words, it does not suffer from catastrophic forget). Also, as the QVEC score is strongly related to downstream task performance, it also means that delta-tuned embedding is no less general and universal than the original unsupervised embedding. 4.2 Word similarity Word similarity is a common approach for examining semantics captured by embeddings. We used the tool in (Faruqui and Dyer, 2014) to evaluate on 13 word similarity datasets. Showed in Table 3, delta embedding trained with c = 10−4 has the best performance in over half of the benchmarks. When compared to the original Glove embedding, unconstrained fine-tuned embedding gets better at some datasets while worse at others, indicating that naive finetuning learns some semantic information from the task while “forgetting” some others. Delta embedding learning however, achieves better performance than Glove embedding in all but one datasets (negligible decrease on RG-65, see the last column of Table 3). This shows that delta embedding learning effectively learns new semantics from a supervised task and adds it to the original embedding in a non-destructive way. The quality of embedding is improved. 
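The scores in Table 3 are Spearman rank correlations between the human ratings of each benchmark and the cosine similarities of the corresponding word vectors. A minimal sketch of how such a score is computed is given below; the paper itself uses the wordvectors.org tool of Faruqui and Dyer (2014), so this is only a reference implementation of the metric.

```python
import numpy as np
from scipy.stats import spearmanr

def cosine(u, v):
    # Cosine similarity between two vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def similarity_correlation(embeddings, pairs):
    """Spearman's rho between human ratings and embedding cosine similarities.

    embeddings: dict word -> vector; pairs: (word1, word2, human_score).
    Pairs with out-of-vocabulary words are skipped.
    """
    gold, predicted = [], []
    for w1, w2, score in pairs:
        if w1 in embeddings and w2 in embeddings:
            gold.append(score)
            predicted.append(cosine(embeddings[w1], embeddings[w2]))
    rho, _ = spearmanr(gold, predicted)
    return rho

# Toy example; a real run loads the tuned vectors and a benchmark file
# such as SimLex-999 or WS-353.
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=100)
       for w in ["car", "automobile", "coast", "shore", "noon", "string"]}
pairs = [("car", "automobile", 9.0), ("coast", "shore", 8.5), ("noon", "string", 0.5)]
print(similarity_correlation(emb, pairs))
```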
Sentiment Analysis neither still unexpected nor bore lacking worst suffers usual moving works interesting tv fun smart Reading Comprehension why another what along called whose call which also this if not occupation whom but he because into Language Inference not the even I nothing because that you it as anything only was if want forget well be so from does in certain could Table 4: Words with the largest norm of delta embedding in different tasks 4.3 Interpreting word semantics learning The formulation of delta embeddings makes it possible to help analyze word semantics learned in a supervised task, regardless of the model used. To answer the question “What is learned in the task?”, the norm of delta embeddings can be used to identify which word has a significant newly learned component. In Table 4, for instance, words with a sentiment like “bore” and “fun” are mostly learned in sentiment analysis tasks. In reading comprehension, question words like “what” and “why” are the first to be learned , after that are words helping to locate possible answers like “called,” “another,” and “also.” Nearest neighbors of word “not”2 Before training (+) good always clearly definitely well able (-) nothing yet none After training (+) sure (-) nothing yet none bad lack unable nobody less impossible unfortunately Not rarely Table 5: The position shift of word “not” in embedding space The semantics learned in a word can be represented by its shift of position in the embedding space (which is the delta embedding). We found the semantics learned are often discriminative features. Use the word “not” as an example, after training it clearly gains a component representing negativity, and differentiates positive and negative words much better (Table 5). These discriminative semantics are sometimes absent or only weakly present in co-occurrence statistics, but play a crucial role in the understanding of text in NLP tasks. 2only showing words with a polarity 3333 5 Conclusion We proposed delta embedding learning, a supervised embedding learning method that not only improves performance in NLP tasks, but also learns better universal word embeddings by letting the embedding “grow” under supervision. Because delta embedding learning is an incremental process, it is possible to learn from a sequence of tasks, essentially “continuous learning” (Parisi et al., 2018) of word semantics. It is an interesting future work and will make learning word embeddings more like human learning a language. Acknowledgments This research is partially supported by the National Key Research and Development Program of China (No.2018YFC0116800) and the NSF grant CNS-1747798 to the IUCRC Center for Big Learning. References Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632–642. Association for Computational Linguistics. Manaal Faruqui, Jesse Dodge, Sujay Kumar Jauhar, Chris Dyer, Eduard Hovy, and Noah A. Smith. 2015. Retrofitting word vectors to semantic lexicons. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1606–1615. Association for Computational Linguistics. Manaal Faruqui and Chris Dyer. 2014. Community evaluation and exchange of word vectors at wordvectors.org. 
In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 19–24. Association for Computational Linguistics. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746–1751. Association for Computational Linguistics. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. In International Conference on Learning Representations 2013. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119. Nikola Mrkˇsi´c, Diarmuid ´O S´eaghdha, Blaise Thomson, Milica Gaˇsi´c, Lina M. Rojas-Barahona, PeiHao Su, David Vandyke, Tsung-Hsien Wen, and Steve Young. 2016. Counter-fitting word vectors to linguistic constraints. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 142–148. Association for Computational Linguistics. Bo Pang and Lillian Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05), pages 115–124. Association for Computational Linguistics. German Parisi, Ronald Kemker, Jose L. Part, Christopher Kanan, and Stefan Wermter. 2018. Continual lifelong learning with neural networks: A review. Neural Networks. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543. Association for Computational Linguistics. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392. Association for Computational Linguistics. Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional attention flow for machine comprehension. In International Conference on Learning Representations 2017. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631–1642. Association for Computational Linguistics. Duyu Tang, Furu Wei, Nan Yang, Ming Zhou, Ting Liu, and Bing Qin. 2014. Learning sentimentspecific word embedding for twitter sentiment classification. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1555–1565. Association for Computational Linguistics. 3334 Yulia Tsvetkov, Manaal Faruqui, Wang Ling, Guillaume Lample, and Chris Dyer. 2015. Evaluation of word vector representations by subspace alignment. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2049–2054. Association for Computational Linguistics. Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. 
A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122. Association for Computational Linguistics. Xuefeng Yang and Kezhi Mao. 2015. Supervised fine tuning for word embedding with integrated knowledge. arXiv preprint arXiv:1505.07931. Mo Yu and Mark Dredze. 2014. Improving lexical embeddings with semantic knowledge. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 545–550. Association for Computational Linguistics.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3335–3341 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3335 Annotation and Automatic Classification of Aspectual Categories Markus Egg Helena Prepens Humboldt-Universität zu Berlin Berlin, Germany {markus.egg,helena.prepens,will.roberts}@hu-berlin.de Will Roberts Abstract We present the first annotated resource for the aspectual classification of German verb tokens in their clausal context. We use aspectual features compatible with the plurality of aspectual classifications in previous work and treat aspectual ambiguity systematically. We evaluate our corpus by using it to train supervised classifiers to automatically assign aspectual categories to verbs in context, permitting favourable comparisons to previous work. 1 Introduction The universal linguistic category of aspect describes how a verb or a verbal projection (including sentences, ‘predicates’ for short) characterises the temporal course of a state of affairs or ‘eventuality’. Such information is relevant for tasks that extract temporal information from texts, such as information extraction, question answering, and document summarisation (Costa and Branco, 2012). Further tasks in which aspectual information plays a crucial role include computational semantic analysis (Caselli and Quochi, 2007), zoning (analysing the argumentative and rhetorical structure of texts; Baiamonte et al. 2016), and the analysis of specific textual elements, e.g., captions (Alikhani and Stone, 2019). Aspect must also be considered in event annotation (Pustejovsky et al., 2010; Bittar et al., 2011; Caselli et al., 2011). Aspect is a universal semantic category; thus, the same aspectual patterns reappear across languages. We created the first resource of German verbs annotated for aspectual class in context. We use aspectual features compatible with various different previously published aspectual classifications, and model the pervasive phenomenon of aspectual ambiguity. We evaluate the resource by using it in supervised aspectual classifiers for verbs in context. 2 Aspectual classes and ambiguity Aspectual classes are established by feature dichotomies (Vendler, 1967; Moens and Steedman, 1988; Egg, 2005). First, stative predicates describe purely static situations (e.g., be happy or love); dynamic ones introduce eventualities with development (e.g., continuous change of place in move). Dynamic predicates can be either unbounded (introduce eventualities without inherent boundaries, e.g., move or play the piano), or bounded (e.g., run a mile or build a house). Bounded predicates (also called ‘telic’) have four subgroups that are crossclassified by the features change - no change and punctual - extended: The first pair distinguishes predicates that express an explicit change of state (e.g., leave as change from being present to being away) from predicates that do not (e.g., play a sonata).1 The second pair distinguishes e.g. the no-change predicates cough and play a sonata or the change predicates explode and build a house. The punctual - extended distinction is gradual (while the others are binary). This will tend to aggravate both the annotation and the automatic classification of aspect. These features define six aspectual classes: Only dynamic predicates can be bounded or not, and only bounded predicates can be extended or punctual, and introduce an explicit change of state or none. 
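These constraints can be made concrete with a small sketch (our illustration, not part of the annotation tooling described later): treating each feature as a boolean that is only defined where the hierarchy allows it yields exactly the six classes.

```python
from itertools import product

def aspect_class(stative, bounded=None, change=None, punctual=None):
    """Map feature values to one of the six aspectual classes.

    Only dynamic predicates take a value for 'bounded'; only bounded
    predicates take values for 'change' and 'punctual'.
    """
    if stative:
        assert bounded is None and change is None and punctual is None
        return "stative"
    if not bounded:
        assert change is None and punctual is None
        return "unbounded"
    duration = "punctual" if punctual else "extended"
    result = "change" if change else "no change"
    return f"{duration}/{result}"

# Enumerating the feasible feature combinations yields exactly six classes.
classes = {aspect_class(stative=True)} | {aspect_class(False, False)} | {
    aspect_class(False, True, ch, pu) for ch, pu in product([True, False], repeat=2)
}
print(sorted(classes))
```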
Such aspectual properties are sometimes called ‘lexical aspect’ or ‘aktionsart’ to distinguish them from ‘morphological aspect’, e.g., the progressive or perfective/imperfective markers in Slavic languages. Also, the aspectual class of a verb may be influenced obligatorily by an argument, in particular, by an ‘incremental theme’ (Dowty, 1991; Krifka, 1This pair appears as ‘culminated’ and ‘non-culminated’ in Siegel and McKeown (2000) and as ‘±conseq[uence]’ in Moens and Steedman (1988); in the latter it partitions dynamic predicates. 3336 1992) like in eat an apple (bounded) vs. eat apples (unbounded), or by arguments that specify the path inherent in movement verbs, compare run a mile (bounded) vs. run some laps (unbounded).2 Our corpus contains a substantial number of these cases. Their classification in our corpus reflects the aspectual influence of these arguments. Operators like the progressive and specific kinds of adverbials may exert an aspectual influence on the predicates which they take as arguments. For instance, durative adverbials map unbounded predicates onto extended no-change predicates, and the progressive maps dynamic predicates of all kinds onto stative ones. Consequently, the aspectual class of a full clause or sentence may differ from the one of its main verb (plus its arguments); thus, annotating aspect at the clause or sentence level differs from our annotation task. The aspectual value of a predicate can also be modified in order to fit aspectual selection restrictions of an operator, which is known as aspectual coercion (Moens and Steedman, 1988). For instance, if plötzlich ‘suddenly’, which requires a punctual argument, is combined with an unbounded predicate like laufen ‘walk’, this induces an inchoative reinterpretation of the verb in the sense of ‘begin to walk’. Our annotation records the aspectual class of the argument before any coercion. Classifying verbs aspectually must be able to handle the (often systematic) aspectual ambiguity on the token level (5% of the tokens in our corpus), including (1) and (2).3 Ambiguity can arise in that a token has no value for a feature, e.g., abtrennen ‘detach’ in (1) for ‘punctual-extended’, because duration is unclear: (1) wenn der Kunde die Karte abtrennt ‘when the client detaches the card’ Other cases have two distinct readings, e.g., many verbs in the semantic field of communication have a stative and a change-of-state reading. E.g., in (2), zeigen ‘show’ can indicate a stative property (‘be more successful’) or a change of state (‘obtain better results’): (2) diese Firmen zeigen bessere Ergebnisse ‘these companies show better results’ 2Incrementality is given a wider definition in Tenny (1992), which goes beyond the phenomena relevant for our annotation initiative. 3Croft et al. (2016) also emphasise the importance of aspectual ambiguity in their work on aspectual annotation. Systematic ambiguity furthermore emerges for so-called ‘degree achievements’ like den Weg kehren ‘sweep the path’ (Kennedy and Levin, 2008), which systematically have an unbounded reading (continuous development, here, towards cleanliness) and an extended change reading (here, crossing a threshold of cleanliness). We found many instances of these in our corpus as well. The great level of detail of our classification is novel and addresses the problem that—beyond distinguishing stative predicates—previous work on aspectual classification disagrees widely. Our classification is related to previous ones in Table 1. 
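Examples (1) and (2) show the two forms token-level ambiguity can take: a feature without a value, and two full readings. A hypothetical record structure for such annotations (our sketch; the actual annotation format is described in Section 4.1) could look as follows.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Reading:
    """One aspectual reading of a verb token; None marks an unknown value."""
    stative: Optional[bool]
    bounded: Optional[bool] = None
    change: Optional[bool] = None
    punctual: Optional[bool] = None

@dataclass
class TokenAnnotation:
    """A verb token in its clausal context, with one or more readings."""
    lemma: str
    clause: str
    readings: List[Reading]

# (1): 'abtrennen' is bounded and change-of-state, but duration is unclear.
ex1 = TokenAnnotation("abtrennen", "wenn der Kunde die Karte abtrennt",
                      [Reading(stative=False, bounded=True, change=True, punctual=None)])

# (2): 'zeigen' has a stative reading and a change-of-state reading.
ex2 = TokenAnnotation("zeigen", "diese Firmen zeigen bessere Ergebnisse",
                      [Reading(stative=True),
                       Reading(stative=False, bounded=True, change=True)])

print(len(ex1.readings), len(ex2.readings))
```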
It can easily be transformed into other classifications by collapsing classes. E.g., uniting the ‘unbounded’ and the ‘extended/no change’ class yields Moens and Steedman’s system; ignoring the ‘change/no change’ feature returns Vendler’s classes. In this way, our classification is not tied to the limitations imposed by specific aspectual theories. This flexibility is an advantage over preceding annotation initiatives, which typically presuppose a specific aspectual classification. This flexibility also means that our classification lends itself to tasks of different granularity. As we will show in Section 5, it can be used for coarse two-way distinctions, e.g., between stative and nonstative predicates, as well as for very fine-grained classification tasks. 3 Related work Siegel and McKeown (2000) annotated the main verbs of 1,478 and 615 parsed clauses from medical discharge summaries and novels, respectively, with the classes of Moens and Steedman (1988). These classes are determined lexically (and may be influenced by obligatorily aspectually relevant arguments as discussed above), which they call ‘fundamental aspectual class’. Each verb instance is assigned a single aspectual class, which neglects aspectual ambiguity on the token level. They trained supervised classifiers, using ‘linguistic indicators’ for aspectual classes as features, e.g., the perfect, the progressive or durative adverbials like for two hours. Co-occurrences of these indicators with the verbs were counted in large parsed corpora (supersets of the annotated corpora). For the first corpus, they distinguished stative vs. dynamic verbs with 93.9% accuracy. The second corpus was used for distinguishing ‘culmination’ 3337 Our classes Vendler (1967) Moens and Steedman (1988) Egg (2005) stative state state stative predicate unbounded activity process process predicate extended/ no change accomplishment intergressive predicate dynamic bounded extended/ change culminated process change predicate punctual/ no change achievement point intergressive predicate punctual/ change culmination change predicate Table 1: Comparison of aspectual classes in previous work and our features. and ‘non-culmination’4 with up to 74% accuracy. Friedrich and Palmer (2014) took into account aspectual ambiguity of verb tokens. In their first corpus of 6,161 clauses (from MASC, Ide et al. 2008), verb tokens are classified as stative, dynamic, or ambiguous. A second set of 2,667 clauses from the Brown Corpus focused on verbs that are ambiguous for stativity, by sampling sentences containing 20 frequent verbs that had both stative and dynamic senses, and annotating as before. They trained classifiers on these data, using the type-based indicators of Siegel and McKeown (2000), and vectors from a word space model. To characterise individual verb tokens, they included contextual features like POS tag, tense, voice, and WordNet classes of the verb arguments. Experiment 1 tested performance on verbs during training. The classifier was trained on the first data set, using 10-fold cross validation. Accuracy reached 84.1%, but no feature set statistically outperformed the naïve strategy of memorising each verb’s most likely aspect class. Experiment 2 tested the classifier on unseen verbs, by stratifying the cross validation folds by verb lemma. 
Falk and Martin (2016) annotated 1,200 French verb tokens, modelling aspectual ambiguity directly in their aspectual classification; this is based on Vendler classes but adds four ambiguity classes, e.g., for verbs ambiguous between ‘state’ and ‘activity’ like penser ‘think’. Also, there is a class of change-of-state verbs unspecified for punctuality, and two classes of degree achievements (with and without preference for the change reading). We see two problems for their approach. First, aspectual ambiguity is a property of individual verbs, hence, no additional classes are needed. Second, their classification is not general enough, e.g., for zeigen ‘show’, which can be stative or change of state. Since we can handle aspectual ambiguity of verbs, we can replicate their classification (up to 4In their adaption of Moens and Steedman’s terms, this emerges as ±conseq, i.e., change verbs vs. the union of unbounded and no-change verbs in our terms. the two classes of degree achievements). Falk and Martin train a classifier on their annotation, which reaches 67% accuracy on a three-way split between unbounded and change-of-state verbs, and those that fall in between the two groups. Other resources target aspectual classification at the sentence or clause level. Mathew and Katz (2009) annotate 1,816 Penn Treebank sentences with dynamic verbs as episodic (describing actual eventualities) or habitual (referring to habits, a subclass of stative predicates). Friedrich and Pinkal (2015) annotate 10,355 Wikipedia clauses as stative (and non-habitual), episodic, or habitual. Zarcone and Lenci (2008) annotate 3,129 sentences of the Italian Syntactic-Semantic Treebank (Montemagni et al., 2003) for Vendler’s (1967) classes. The corpora of Palmer et al. (2007) (6,065 clauses from Brown Corpus and MUC6 dataset) and Friedrich et al. (2016) (45,331 clauses from Brown Corpus, MASC, and Wikipedia) annotate clauses for ‘situation entities’, which include, but go beyond aspectual classes. 4 The resource 4.1 Annotation We compiled a corpus of German verb tokens in their clausal contexts from the SdeWaC corpus (Faaß and Eckart, 2013) and parsed them with mate-tools (Bohnet et al., 2013); the aspectual annotation used our six-fold classification. The corpus has three parts. Part A (3000 clauses) is based on a verb sample balanced for verb frequency. We took 60 verbs drawn at random for annotation, 20 each from the classes with high (65 verbs with counts of > 105 in SdeWaC), medium (602 verbs, counts > 104), and low frequency (ca. 2100 verbs, counts > 103). For each of these 60 verbs, we drew 50 sentences with that verb from SdeWaC. Part B (900 clauses) repeated this procedure without using a verb sample, including 300 sentences each with verbs of high, medium, and low frequency. Part C (300 clauses) has 150 sen3338 tences with punctual (e.g., cough) and 150 sentences with extended no-change verbs (e.g., run a mile), as these are systematically under-represented in the other two parts. Our annotation tool allowed only feasible combinations of the aspectual features. Annotation guidelines explained the aspectual features and provided tests for assigning values to them. E.g., stative predicates like glücklich sein ‘be happy’ do not combine with adverbials expressing intentionality: (3) *Max ist freiwillig glücklich. ‘Max is voluntarily happy.’ Similar tests guide the annotation of the other three feature pairs, e.g., only unbounded predicates combine with durative adverbials. 
The guidelines also explain the phenomenon of obligatory aspectual influence by verbal arguments. The annotation paid consideration to metaphorical usages; however, our anecdotal experience suggests that verbal metaphor tends to preserve aspectual class. Disagreements between annotators were subsequently adjudicated. We annotate aspectual ambiguity on the token level; categories are tagged ‘unknown’ when a verb has no value for a specific feature, as in (1). Cases like (2) get two separate full annotations.

4.2 Annotator agreement

We evaluated inter-annotator agreement after training the annotators and having them annotate ca. 2,200 clauses. Both annotators annotated 248 unseen clauses; nine of these were excluded as invalid. Table 2 shows agreement on the remaining 239 clauses before adjudication. Its first four rows display agreement on specific categories. ‘Class’ lists agreement on our six-way aspectual classification; the last line shows agreement without the problematic punctual-extended category. In Landis and Koch’s (1977) terms, agreement on the first three feature pairs is substantial, and fair on the fourth. Agreement on the overall classification is moderate, rising to substantial without the extended-punctual distinction. The results confirm our scepticism about the usefulness of this distinction: deciding whether a predicate is punctual or extended has frequently proven to be extremely hard. Agreement on the stative/dynamic feature is in line with Friedrich and Palmer (2014): for the annotation of their first corpus by two annotators, Cohen’s κ was 0.7; for their second corpus, 0.6.

  Stative               0.746
  Bounded               0.735
  Change of state       0.758
  Extended              0.292
  Class                 0.548
  Class w/o extended    0.651
  Table 2: Agreement (κ) on the aspectual class annotation.

5 Evaluation

To test the validity and utility of our annotated corpus, we trained supervised classifiers on the dataset. The fine granularity of our classification allows us to define several tasks. We use a logistic regression classifier with L2 regularisation (λ^-1 = 2.78) and employ sentence-level features derived from the automatic parse of the clause: the verb lemma; POS; tense; use of the passive; a word embedding for the verb [5]; a bag of words to represent the sentence context; the lemmas of the verb’s grammatical dependants; the GermaNet (Hamp and Feldweg, 1997) semantic class for the verb and its subject and object; the adverb modifying the verb, if available; and the subcategorisation frame of the verb, given by the rule-based classifier of Roberts et al. (2014). Training and testing use 10-fold cross validation.

[5] The embedding was built using word2vec on the lemmatised SdeWaC with the parameters recommended by Baroni et al. (2014): 400-dimensional CBOW vectors, window size 5, subsampling with t = 1e-5, negative sampling with 10 samples.

Table 3 shows accuracies and baselines, which always predict the training set’s most frequent label.

  Task                          Baseline   Classifier
  6-way                         0.295      0.712
  4-way                         0.445      0.785
  Vendlerian                    0.368      0.730
  Stativity                     0.719      0.877
  Stative/Unbounded - Change    0.443      0.817
  Culmination                   0.618      0.856
  Table 3: Classifier accuracy on aspect labelling tasks.

The first classifier predicts the full 6-way classification of the annotation. To handle aspectual ambiguity, each verb instance maps to an ambiguity class consisting of one or more aspectual class labels. The distribution of ambiguity classes is long-tailed, and we discard data points with labels less frequent than a threshold set to 10.
In the case of the 6-way classifier, this removes 40 data points, and results in 10 ambiguity classes in total. The second and the third classifier test our expectation that our resource is useful for less finegrained aspectual classifications, too. The second classifier disregards the punctual-extended feature (collapsing the two change and the two nonchange classes), i.e., follows Egg’s (2005) classification. 18 data points are dropped, leaving 7 possible labels. The third classifier disregards the change/no-change distinction, corresponding to Vendler’s (1967) classes. 26 data points are dropped, resulting in 7 possible labels. These three models achieve similar error rate reductions over the baseline of about 60%. The 4way classifier, which ignores the extended-punctual distinction, outperforms the Vendlerian classifier, which includes it; this suggests that the extendedpunctual distinction is more difficult to identify and to model. The following three classifiers are motivated by classifications in prior work. The fourth one (‘Stativity’) predicts whether a token is stative (1077), dynamic (2915), or ambiguous in context (60). This corresponds to Experiment 1 of Friedrich and Palmer (2014, p.520). Their baseline of 0.725 and their classifier accuracy of 0.841 are both similar to our results. We can also replicate their Experiment 2 by stratifying the cross validation folds by verb lemma, showing the performance of the classifier on unseen verbs. Our accuracy here is 0.811, almost identical to their reported 0.819. The fifth classifier approximates the classification task of Falk and Martin (2016), distinguishing ‘atelic’ (1707, our stative and unbounded verbs), ‘telic’ (1794, our change of state verbs), and ‘variable telicity’ (551, our no-change verbs, plus verbs that are ambiguous between the other two categories). Our results exceed theirs (0.675 accuracy with a 0.484 baseline). The sixth classifier predicts whether a verb token is ‘culminated’ or ‘non-culminated’, corresponding to the task of Experiment 2 of Siegel and McKeown (2000, Table 16, p.618). Culminated verbs (1834) are our change verbs, and non-culminated verbs (1077), the union of our unbounded and no-change verbs; 59 verbs are ambiguous in context. Siegel and McKeown report a baseline of 0.633, similar to ours, and their classifier achieves 0.740, which we outperform. These experiments support several conclusions. First, we have shown our resource can be used to build machine learning classifiers of high quality, speaking to the validity of our corpus. While we can only draw indirect comparisons to previous work in English and French, the accuracies achieved by our classifiers suggest that we go beyond the state of the art in our work. Second, our resource has proven to be very flexible in that it can be broken down in different ways to capture different aspectual distinctions, which is very welcome considering the wide range of aspectual classifications. Finally, the better performance of the 4-way classifier compared to the Venderian classifier, combined with the κ value for the extended-punctual distinction (Table 2) seems to indicate that both machines and human annotators find it hard to judge the length of time of a reported event. As hypothesised, this distinction has proved to be the most difficult of our four aspectual features; this finding accords with Zarcone and Lenci (2008), who report that durativity is the hardest aspectual feature to classify. 
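To make the experimental pipeline of this section more tangible, here is a rough sketch of how the coarser tasks can be derived from the fine-grained labels and fed to a regularised logistic regression with 10-fold cross-validation. It is not the code behind Table 3: the label strings, dummy feature dictionaries, and random data are ours, the real feature extraction (lemma, POS, tense, GermaNet classes, subcategorisation frame, etc.) is abstracted away, and we read the reported λ^-1 = 2.78 as scikit-learn's C parameter, which is an assumption.

```python
# Sketch only (not the code behind Table 3): collapsing six-way aspectual labels
# into coarser tasks and training a regularised logistic regression with 10-fold CV.
# Labels, features, and data below are invented for illustration.
import random
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

SIX_WAY = ["stative", "unbounded", "extended/no-change", "extended/change",
           "punctual/no-change", "punctual/change"]

def to_four_way(label):
    # Drop the extended/punctual distinction (the 4-way task above).
    return label.replace("extended/", "").replace("punctual/", "")

def to_stativity(label):
    # Two-way stative vs. dynamic task.
    return "stative" if label == "stative" else "dynamic"

random.seed(0)
X_dicts = [{"lemma": random.choice(["zeigen", "laufen", "husten", "denken"]),
            "tense": random.choice(["pres", "past"]),
            "passive": random.choice([False, True])} for _ in range(300)]
y_six = [random.choice(SIX_WAY) for _ in X_dicts]

clf = make_pipeline(
    DictVectorizer(),
    LogisticRegression(penalty="l2", C=2.78, max_iter=1000),  # C read as 1/lambda
)

for task, collapse in [("6-way", lambda y: y),
                       ("4-way", to_four_way),
                       ("stativity", to_stativity)]:
    y = [collapse(label) for label in y_six]
    print(task, cross_val_score(clf, X_dicts, y, cv=10).mean())
```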
6 Conclusion and future work We present the first aspectually annotated resource for German verb tokens. We report substantial interannotator agreement, and validate our resource by training automatic aspectual classifiers, permitting favourable comparisons to prior work. The annotated corpus, the source code for the annotation tool, and the annotation guidelines are available at https://github.com/wroberts/ annotator. Future work will offer a more principled account of aspectual classification for specific verb classes, among them speech act and communication verbs (e.g., promise or call) that occur frequently in corpora but have hitherto been neglected in aspectual analyses. On a more general scale, we envisage examining the interplay of verb class (e.g., the classes of Levin 1993), verb sense, and aspectual class, with the purpose of estimating the influence of the sentential context on the aspectual value of the predicate. We also intend to develop a more principled treatment for the aspectual classification of metaphors, which are frequent in other corpora. 3340 References Malihe Alikhani and Matthew Stone. 2019. “Caption” as a coherence relation: evidence and implications. In Second Workshop on Shortcomings in Vision and Language (SiVL). Daniela Baiamonte, Tommaso Caselli, and Irina Prodanof. 2016. Annotating content zones in news articles. In Proceedings of Third Italian Conference on Computational Linguistics (CLiC-it 2016). Marco Baroni, Georgiana Dinu, and Germán Kruszewski. 2014. Don’t count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 238–247, Baltimore, MD. Association for Computational Linguistics. André Bittar, Pascal Amsili, Pascal Denis, and Laurence Danlos. 2011. French TimeBank: An ISOTimeML annotated reference corpus. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 130–134, Portland, Oregon, USA. Association for Computational Linguistics. Bernd Bohnet, Joakim Nivre, Igor Boguslavsky, Richárd Farkas, Filip Ginter, and Jan Hajiˇc. 2013. Joint morphological and syntactic analysis for richly inflected languages. Transactions of the Association for Computational Linguistics, 1:415–428. Tommaso Caselli, Valentina Bartalesi Lenzi, Rachele Sprugnoli, Emanuele Pianta, and Irina Prodanof. 2011. Annotating events, temporal expressions and relations in Italian: The It-Timeml experience for the Ita-TimeBank. In Proceedings of the 5th Linguistic Annotation Workshop, pages 143–151, Portland, Oregon, USA. Association for Computational Linguistics. Tommaso Caselli and Valeria Quochi. 2007. Inferring the semantics of temporal prepositions in italian. In Proceedings of the Fourth ACL-SIGSEM Workshop on Prepositions, pages 38–44. Association for Computational Linguistics. Francisco Costa and António Branco. 2012. Aspectual type and temporal relation classification. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 266–275. Association for Computational Linguistics. William Croft, Pavlina Peskova, and Michael Regan. 2016. Annotation of causal and aspectual structure of events in RED: A preliminary report. In Proceedings of the Fourth Workshop on Events, pages 8–17, San Diego, CA. Association for Computational Linguistics. David Dowty. 1991. 
Thematic proto-roles and argument selection. Language, 67:547–619. Markus Egg. 2005. Flexible semantics for reinterpretation phenomena. CSLI Publications, Stanford. Gertrud Faaß and Kerstin Eckart. 2013. SdeWaC - A corpus of parsable sentences from the Web. In Language processing and knowledge in the Web, pages 61–68. Springer, Berlin, Heidelberg. Ingrid Falk and Fabienne Martin. 2016. Automatic identification of aspectual classes across verbal readings. In Proceedings of the Fifth Joint Conference on Lexical and Computational Semantics, pages 12– 22, Berlin, Germany. Association for Computational Linguistics. Annemarie Friedrich and Alexis Palmer. 2014. Automatic prediction of aspectual class of verbs in context. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 517–523, Baltimore, MD. Association for Computational Linguistics. Annemarie Friedrich, Alexis Palmer, and Manfred Pinkal. 2016. Situation entity types: Automatic classification of clause-level aspect. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1757–1768, Berlin, Germany. Association for Computational Linguistics. Annemarie Friedrich and Manfred Pinkal. 2015. Automatic recognition of habituals: A three-way classification of clausal aspect. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2471–2481, Lisbon, Portugal. Association for Computational Linguistics. Birgit Hamp and Helmut Feldweg. 1997. GermaNet: A lexical-semantic net for German. In Proceedings of ACL Workshop on Automatic Information Extraction and Building of Lexical Semantic Resources for NLP Applications, pages 9–15. Nancy Ide, Collin Baker, Christiane Fellbaum, Charles Fillmore, and Rebecca Passonneau. 2008. Masc: The manually annotated sub-corpus of American English. In 6th International Conference on Language Resources and Evaluation, LREC 2008, pages 2455– 2460. European Language Resources Association (ELRA). Chris Kennedy and Beth Levin. 2008. Measure of change: The adjectival core of degree achievements. In Louise McNally and Chris Kennedy, editors, Adjectives and adverbs: Syntax, semantics and discourse, pages 156–182. Oxford University Press, Oxford. Manfred Krifka. 1992. Thematic roles as links between nominal reference and temporal constitution. In Ivan Sag and Anna Sabolcsi, editors, Lexical matters, pages 29–53. CSLI, Stanford. 3341 J. Richard Landis and Gary G. Koch. 1977. The measurement of observer agreement for categorical data. Biometrics, 33(1):159–174. Beth Levin. 1993. English verb classes and their alternation: A preliminary investigation. University of Chicago Press, Chicago. Thomas Mathew and Graham Katz. 2009. Supervised categorization for habitual versus episodic sentences. In Sixth Midwest Computational Lingustics Colloquium, Bloomington. Indiana University. Marc Moens and Mark Steedman. 1988. Temporal ontology and temporal reference. Computational Linguistics, 14(2):15–28. Simonetta Montemagni, Francesco Barsotti, Marco Battista, Nicoletta Calzolari, Ornella Corazzari, Alessandro Lenci, Antonio Zampolli, Francesca Fanciulli, Maria Massetani, Remo Raffaelli, et al. 2003. Building the Italian syntactic-semantic treebank. In Treebanks, pages 189–210. Springer. Alexis Palmer, Elias Ponvert, Jason Baldridge, and Carlota Smith. 2007. A sequencing model for situation entity classification. 
In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 896–903, Prague, Czech Republic. Association for Computational Linguistics. James Pustejovsky, Kiyong Lee, Harry Bunt, and Laurent Romary. 2010. ISO-TimeML: An international standard for semantic annotation. In Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC’10), Valletta, Malta. European Language Resources Association (ELRA). Will Roberts, Markus Egg, and Valia Kordoni. 2014. Subcategorisation acquisition from raw text for a free word-order language. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 298– 307, Gothenburg, Sweden. Association for Computational Linguistics. Eric V. Siegel and Kathleen R. McKeown. 2000. Learning methods to combine linguistic indicators: Improving aspectual classification and revealing linguistic insights. Computational Linguistics, 26(4):595–628. Carol Tenny. 1992. The aspectual interface hypothesis. In Ivan Sag and Anna Sabolcsi, editors, Lexical matters, pages 1–27. CSLI, Stanford. Zeno Vendler. 1967. Verbs and times. In Z. Vendler, editor, Linguistics in philosophy, pages 97–121. Cornell University Press, New York. Alessandra Zarcone and Alessandro Lenci. 2008. Computational models for event type classification in context. In Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC’08), pages 1232–1238, Marrakech, Morocco. European Language Resources Association (ELRA).
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3342–3348 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3342 Putting words in context: LSTM language models and lexical ambiguity Laura Aina Kristina Gulordava Gemma Boleda Universitat Pompeu Fabra Barcelona, Spain {firstname.lastname}@upf.edu Abstract In neural network models of language, words are commonly represented using contextinvariant representations (word embeddings) which are then put in context in the hidden layers. Since words are often ambiguous, representing the contextually relevant information is not trivial. We investigate how an LSTM language model deals with lexical ambiguity in English, designing a method to probe its hidden representations for lexical and contextual information about words. We find that both types of information are represented to a large extent, but also that there is room for improvement for contextual information. 1 Introduction In language, a word can contribute a very different meaning to an utterance depending on the context, a phenomenon known as lexical ambiguity (Cruse, 1986; Small et al., 2013). This variation is pervasive and involves both morphosyntactic and semantic aspects. For instance, in the examples in Table 1, show is used as a verb in Ex. (1), and as a noun in Ex. (2-3), in a paradigmatic case of morphosyntactic ambiguity in English. Instead, the difference between Ex. (2) and (3) is semantic in nature, with show denoting a TV program and an exhibition, respectively. Semantic ambiguity covers a broad spectrum of phenomena, ranging from quite distinct word senses (e.g. mouse as animal or computer device) to more subtle lexical modulation (e.g. visit a city / an aunt / a doctor; Cruse, 1986). This paper investigates how deep learning models of language, and in particular Long ShortTerm Memory Networks (LSTMs) trained on Language Modeling, deal with lexical ambiguity.1 In neural network models of language, words in a sentence are commonly represented through 1Code at: https://github.com/amore-upf/ LSTM_ambiguity word-level representations that do not change across contexts, that is, “static” word embeddings. These are then passed to further processing layers, such as the hidden layers in a recurrent neural network (RNN). Akin to classic distributional semantics (Erk, 2012), word embeddings are formed as an abstraction over the various uses of words in the training data. For this reason, they are apt to represent context-invariant information about a word —its lexical information— but not the contribution of a word in a particular context —its contextual information (Erk, 2010). Indeed, word embeddings subsume information relative to various senses of a word (e.g., mouse is close to words from both the animal and computer domain; Camacho-Collados and Pilehvar, 2018). Classic distributional semantics attempted to do composition to account for contextual effects, but it was in general unable to go beyond short phrases (Baroni, 2013); newer-generation neural network models have supposed a big step forward, as they can natively do composition (Westera and Boleda, 2019). In particular, the hidden layer activations in an RNN can be seen as putting words in context, as they combine the word embedding with information coming from the context (the adjacent hidden states). 
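As a toy illustration of this setup (ours, not the model studied in this paper): the embedding lookup below returns the very same vector for the ambiguous word show in two different sentences, while the LSTM hidden state at that position differs, because it mixes the embedding with the preceding context; the tiny vocabulary and dimensions are arbitrary.

```python
# Toy illustration (not the paper's model): a static embedding is identical
# across contexts, while an LSTM hidden state at the same word differs.
import torch
import torch.nn as nn

torch.manual_seed(0)
vocab = {"<pad>": 0, "they": 1, "watch": 2, "the": 3, "show": 4, "me": 5, "respect": 6}
emb = nn.Embedding(len(vocab), 8)
lstm = nn.LSTM(input_size=8, hidden_size=8, batch_first=True)

s1 = torch.tensor([[vocab["they"], vocab["watch"], vocab["the"], vocab["show"]]])
s2 = torch.tensor([[vocab["they"], vocab["show"], vocab["me"], vocab["respect"]]])

e1, e2 = emb(s1), emb(s2)
h1, _ = lstm(e1)
h2, _ = lstm(e2)

# Same static vector for "show" in both sentences...
print(torch.allclose(e1[0, 3], e2[0, 1]))   # True
# ...but different contextualized hidden states at the word's position.
print(torch.allclose(h1[0, 3], h2[0, 1]))   # False
```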
The empirical success of RNN models, and in particular LSTM architectures, at fundamental tasks like Language Modeling (Jozefowicz et al., 2015) suggests that they are indeed capturing relevant contextual properties. Moreover, contextualized representations derived from such models have been shown to be very informative as input for lexical disambiguation tasks (e.g. Melamud et al., 2016; Peters et al., 2018). We here present a method to probe the extent to which the hidden layers of an LSTM language trained on English data represent lexical and contextual information about words, in order to investigate how the model copes with lexical ambiguity. 3343 Examples LexSub w NN s NN w&s NN (1) . . . I clapped her shoulder to show I was not laughing at her. . . demonstrate, display, indicate, prove, clarify demonstrate, exhibit, indicate, offer, reveal indicate, demonstrate, suggest, prove, indicate, demonstrate, prove, ensure, suggest (2) . . . The show [...] revolutionized the way America cooks and eats... program, series, broadcast, presentation demonstrate, exhibit, indicate, offer, reveal series, program, production, miniseries, trilogy series, program, production, broadcast (3) . . . The inauguration of Dubai Internet City coincides with the opening of an annual IT show in Dubai.... exhibition, conference, convention, demonstration demonstrate, exhibit, indicate, offer, reveal conference, event, convention, symposium, exhibition conference, event, exhibition, symposium, convention Table 1: Examples from the LexSub dataset (Kremer et al., 2014) and nearest neighbors for target representations. Our work follows a recent strand of research that purport to identify what linguistic properties deep learning models are able to capture (Linzen et al., 2016; Adi et al., 2017; Gulordava et al., 2018; Conneau et al., 2018; Hupkes et al., 2018, a.o.). We train diagnostic models on the tasks of retrieving the embedding of a word and a representation of its contextual meaning, respectively —the latter obtained from a Lexical Substitution dataset (Kremer et al., 2014). Our results suggest that LSTM language models heavily rely on the lexical information in the word embeddings, at the expense of contextually relevant information. Although further analysis is necessary, this suggests that there is still much room for improvement to account for contextual meanings. Finally, we show that the hidden states used to predict a word – as opposed to those that receive it as input – display a bias towards contextual information. 2 Method Language model. As our base model, we employ a word-level bidirectional LSTM (Schuster and Paliwal, 1997; Hochreiter and Schmidhuber, 1997) language model (henceforth, LM) with three hidden layers. Each input word at timestep t is represented through its word embedding wt; this is fed to both a forward and a backward stacked LSTMs, which process the sequence leftto-right and right-to-left, respectively (Eqs. (1-2) describe the forward LSTM). To predict the word at t, we obtain output weights by summing the activations of the last hidden layers of the forward and backward LSTMs at timesteps t−1 and t+1, respectively, and applying a linear transformation followed by softmax (Eq. 3, where L is the number of hidden layers). Thus, a word is predicted using both its left and right context jointly, akin to the context2vec architecture (Melamud et al., 2016) but differently from, e.g., the BiLSTM architecture used for ELMo (Peters et al., 2018). 
h^1_t = \mathrm{LSTM}^1(w_t, h^1_{t-1})   (1)
h^i_t = \mathrm{LSTM}^i(h^{i-1}_t, h^i_{t-1})   (2)
o_t = \mathrm{softmax}(f(\overrightarrow{h}^L_{t-1} + \overleftarrow{h}^L_{t+1}))   (3)

We train the LM on the concatenation of English text data from a Wikipedia dump [2], the British National Corpus (Leech, 1992), and the UkWaC corpus (Ferraresi et al., 2008) [3]. More details about the training setup are specified in Appendix A.1. The model achieves satisfying performances on test data (perplexity: 18.06). For our analyses, we deploy the trained LM on a text sequence and extract the following activations of each hidden layer; Eq. (4) and Fig. 1.

\{\overrightarrow{h}^i_t \mid i \leq L\} \cup \{\overleftarrow{h}^i_t \mid i \leq L\}   (4)
h^i_t = [\overrightarrow{h}^i_t ; \overleftarrow{h}^i_t]   (5)
h^i_{t\pm1} = [\overrightarrow{h}^i_{t-1} ; \overleftarrow{h}^i_{t+1}]   (6)

[2] From 2018/01/03, https://dumps.wikimedia.org/enwiki/
[3] 50M tokens from each corpus, in total 150M (train/valid/test: 80/10/10%); vocabulary size: 50K.

[Figure 1: Language model and extracted representations. The different shades across layers reflect the different performances in the probe tasks (darker = higher).]

At timestep t, for each layer, we concatenate the forward and backward hidden states; Eq. (5). We refer to these vectors as current hidden states. As they are obtained processing the word at t as input and combining it with information from the context, we can expect them to capture the relevant contribution of such word (e.g., in Fig. 1 the mouse-as-animal sense). As a comparison, we also extract activations obtained by processing the text sequence up to t − 1 and t + 1 in the forward and backward LSTM, respectively, hence excluding the word at t. We concatenate the forward and backward states of each layer; Eq. (6). While these activations do not receive the word at t as input, they are relevant because they are used to predict that word as output. We refer to them as predictive hidden states. These may capture some aspects of the word (e.g., in Fig. 1, that it is a noun and denotes something animate), but are likely to be less accurate than the current states, since they do not observe the actual word.

Probe tasks. We aim to assess to what extent the hidden states in the LM carry over the lexical and context-invariant information in the input word embedding, and how much they instead represent the contextual meaning of the word. To this end, we rely on vector representations of lexical and contextual word information. As for the former, we can directly use the word embeddings of the LM (w); it is instead more challenging to find a representation of the contextual meaning. Our solution is to use Lexical Substitution data (McCarthy and Navigli, 2009) and, in particular, the large dataset by Kremer et al., 2014 (henceforth, LexSub; see Table 1). In this dataset, words in context (up to 3 sentences) are annotated with a set of paraphrases given by human subjects. Since contextual substitutes reflect differences among uses of a word (for instance, demonstrate paraphrases show in a context like Ex. (1), but not in Ex. (2)), this type of data is often used as an evaluation benchmark for contextual representations of words (e.g., Erk and Padó, 2008; Melamud et al., 2016; Garí Soler et al., 2019). We leverage LexSub to build proxies for ground-truth representations of the contextual meaning of words.
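A simplified sketch of this extraction step may be useful before moving on to the probe targets. The code below is our re-implementation of Eqs. (1)-(6) in PyTorch, not the authors' implementation; the layer sizes (600/600/300) and the 300-dimensional embeddings follow Appendix A.1, while the toy input and batch handling are arbitrary.

```python
# Sketch of extracting "current" and "predictive" states (Eqs. 4-6); a simplified
# re-implementation, not the authors' code. Layer sizes follow the appendix.
import torch
import torch.nn as nn

emb_dim, sizes, vocab_size = 300, [600, 600, 300], 50_000
embedding = nn.Embedding(vocab_size, emb_dim)

def make_stack():
    layers, in_dim = nn.ModuleList(), emb_dim
    for h in sizes:
        layers.append(nn.LSTM(in_dim, h, batch_first=True))
        in_dim = h
    return layers

fwd_stack, bwd_stack = make_stack(), make_stack()

def run(stack, x):
    """Return the list of per-layer output sequences for input embeddings x."""
    outputs = []
    for lstm in stack:
        x, _ = lstm(x)
        outputs.append(x)
    return outputs

tokens = torch.randint(0, vocab_size, (1, 12))   # toy batch of one sentence
e = embedding(tokens)
fwd = run(fwd_stack, e)                          # left-to-right pass
# Right-to-left pass: flip the input in time, run the stack, and flip the
# outputs back so position t holds the state after reading tokens t..end.
bwd = [o.flip(1) for o in run(bwd_stack, e.flip(1))]

t = 5  # target position (must have a left and a right neighbour)
current = [torch.cat([f[:, t], b[:, t]], dim=-1)                  # Eq. (5)
           for f, b in zip(fwd, bwd)]
predictive = [torch.cat([f[:, t - 1], b[:, t + 1]], dim=-1)       # Eq. (6)
              for f, b in zip(fwd, bwd)]
print([c.shape for c in current], [p.shape for p in predictive])
```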
We define two types of representations, inspired by previous work that proposed simple vector operations to combine word representations (Mitchell and Lapata, 2010; Thater et al., 2011, a.o.): the average embedding of the substitute words (henceforth, s), and the average embedding of the union of the substitute words and the target word (w&s). As Table 1 qualitatively shows, the resulting representations tend to be close to the substitute words and reflect the contextual nuance conveyed by the word; in the case of w&s, they also retain a strong similarity to the embedding of the target word.4 We frame our analyses as supervised probe tasks: a diagnostic model learns to “retrieve” word representations out of the hidden states; the rate of success of the model is taken to measure the amount of information relevant to the task that its input contains. Given current or predictive states as inputs, we define three diagnostic tasks: - WORD: predict w - SUB: predict s - WORD&SUB: predict w&s The WORD task is related to the probe tasks introduced in Adi et al. (2017) and Conneau et al. (2018), which, given a hidden state, require to predict the words that a sentence encoder has processed as input. Note that, while these authors predict words by their discrete index, we are predicting the complete multi-dimensional embedding of the word. Our test quantifies not only whether the model is tracking the identity of the input word, but also how much of its information it retains. We train distinct probe models for each task and type of input (i; e.g., current hidden state at layer 1). A model consists of a non-linear transformation from an input vector i (extracted from the LM) to a vector with the dimensionality of the word embeddings (Eq. 7, where ˆr is one of ˆw, ˆs, ˆ w&s for WORD, SUB, and WORD&SUB tasks, respectively). The models are trained through maxmargin loss, optimizing the cosine similarity between ˆr and the target representation against the similarities between ˆr and 5 negative samples (details in Appendix A.2). ˆr = tanh(W i + b) (7) 4These vectors are close to the related word embedding (0.45 and 0.66 mean cosine, see Table 2, row wt), but also different from it: on average, s and w&s share 17 and 25% of the top-10 neighbors with w, respectively (statistics from training data, excluding the word itself from neighbors). 3345 input WORD SUB WORD&SUB wt 1 .45 (±.14) .66 (±.09) avgctxt .35 (±.10) .16 (±.11) .24 (±.12) h1 t .84 (±.2) .61 (±.14) .71 (±.11) h2 t .74 (±.12) .60 (±.13) .69 (±.11) h3 t .64 (±.12) .58 (±.13) .65 (±.11) h1 t±1 .25 (±.16) .36 (±.16) .38 (±.16) h2 t±1 .27 (±.16) .39 (±.16) .41 (±.16) h3 t±1 .29 (±.15) .41 (±.16) .43 (±.16) Table 2: Results of probe tasks for current (hi t) and predictive (hi t±1) hidden states. We adapt the LexSub data to our setup as follows. Since substitutes are provided in their lemmatized form, we only consider datapoints where the word form is identical to the lemma so as to exclude effects due to morphosyntax (e.g., asking the models to recover play when they observe played).5 We require that at least 5 substitutes per datapoint are in the LM vocabulary to ensure quality in the target representations. LexSub data come with a validation/test split; since we need training data, we create a new random partitioning into train/valid/test (70/10/20%, with no overlapping contexts among splits). The final data consist of 4.7K/0.7K/1.3K datapoints for train/valid/test. 3 Results The results of the probe tasks on test data are presented in Table 2. 
We report the mean and standard deviation of the cosine similarity between the output representations (ˆw, ˆs, ˆ w&s) and the target ones (w, s, w&s). This evaluates the degree to which the word representations can be retrieved from the hidden states. For comparison, we also report the cosine scores between the targets and two baseline representations: the word embedding itself and the average of word embeddings of a 10word window around the target word (avgctxt).6 Overall, the models do better than these unsupervised baselines, with exceptions.7 Current hidden states. Both lexical and contextual representations can be retrieved from the current hidden states (hi t) to a large extent (cosines 5We also exclude substitutes that are multi-word expressions and the datapoints involving words that are part of a compound (e.g., fast in fast-growing). 6We exclude out-of-vocabulary words and punctuation. 7The first cell is 1 as it involves the same representation. 0.0 0.5 Cosine (w, s) 0.0 0.5 Cosine sub task Figure 2: Similarity of lexical and contextual vector (w - s) vs. similarity of target and prediction in SUB for h1 t. .58-.84), but retrieving the former is much easier than the latter (.64-.84 vs. .58-71). This suggests that the information in the word embedding is better represented in the hidden states than the contextually relevant one. In all three tasks, performance degrades closer to the output layer (from h1 t to h3 t ), but the effect is more pronounced for the WORD task (84/.74/.64). Word embeddings are part of the input to the hidden state, and the transformation learned for this task can be seen as a decoder in an auto-encoder, reconstructing the original input; the further the hidden layer is from the input, the more complex the function is to reverse-engineer. Crucially, the high performance at reconstructing the word embedding suggests that lexical information is retained in the hidden layers, possibly including also contextually irrelevant information (e.g., in Ex. (4) in Table 3 ˆw is close to verbs, even if share is here a noun). Contextual information (s and w&s) seems to be more stable across processing layers, although overall less present (cf. lower results). Table 3 reports one example where the learned model displays relevant contextual aspects (Ex. (4), share) and one where it does not (Ex. (5), studio). Qualitative analysis shows that morphosyntactic ambiguity (e.g., share as a noun vs. verb) is more easily discriminated, while semantic distinctions pose more challenges (e.g., studio as a room vs. company). This is not surprising, since the former tends to correlate with clearer contextual cues. Furthermore, we find that the more the contextual representation is aligned to the lexical one, the easier it is to retrieve the former from the hidden states (e.g., correlation cos(w, s) - cos(ˆs, s), for h1 t : Pearson’s ρ = .62∗∗∗; Fig. 2): that is, it is harder to resolve lexical ambiguity when the contextual meaning is less represented in the word embedding (e.g., less frequent uses). This suggests that the LM heavily relies on the informa3346 Context LexSub WORD: ˆw NN SUB: ˆs NN WORD&SUB: wˆ&s NN (4) ... The financial-services company will pay 0.82 share for each Williams share ... stock, dividend, interest, stake, unit stake, owe, discuss, coincide, reside portion, amount, percentage, fraction stake, percentage, portion, spend, proportion (5) ... Sony’s effort to hire producers Jon Peters and Peter Guber to run the studio... 
business, company, facility, film, lot lab, troupe, classroom, apartment, booth room, gallery, troupe, journal, house room, troupe, lab, audience, department (6) ... I had [...] told her that we needed other company than our own ... friend, acquaintance, visitor, accompaniment, associate retailer, trader, firm, maker, supplier firm, corporation, organisation, conglomerate, retailer corporation, firm, conglomerate, retailer, organisation Table 3: Examples with nearest neighbours of the representations predicted in the first current hidden layer. tion in the word embedding, making it challenging to diverge from it when contextually relevant (see Ex. (6) in Table 3). Current vs. predictive hidden states. The predictive hidden states are obtained without observing the target word; hence, recovering word information is considerably harder than for current states. Indeed, we observe worse results in this condition (e.g., below avgctxt in the WORD task); we also observe two patterns that are opposite to those observed for current states, which shed light on how LSTM LMs track word information. For predictive states, results improve closer to the output (from layer 1 to 3; they instead degrade for current states). We link this to the double objective that a LM has when it comes to word information: to integrate a word passed as input, and to predict one as output. Our results suggest that the hidden states keep track of information for both words, but lower layers focus more on the processing of the input and higher ones on the predictive aspect (see Fig. 1). This is in line with previous work showing that activations close to the output tend to be task-specific (Liu et al., 2019). Moreover, from predictive states, it is easier to retrieve contextual than lexical representations (.41/.43 vs. .29; the opposite was true for current states). Our hypothesis is that this is due to a combination of two factors. On the one hand, predictive states are based solely on contextual information, which highlights only certain aspects of a word; for instance, the context of Ex. (2) in Table 1 clearly signals that a noun is expected, and the predictive states in a LM should be sensitive to this kind of cue, as it affects the probability distribution over words. On the other hand, lexical representations are underspecified; for instance, the word embedding for show abstracts over both verbal and nominal uses of the word. Thus, it makes sense that the predictive state does not capture contextually irrelevant aspects of the word embedding, unlike the current state (note however that, as stated above, the overall performance of the current state is better, because it has access to the word actually produced). 4 Future work We introduced a method to study how deep learning models of language deal with lexical ambiguity. Though we focused on LSTM LMs for English, this method can be applied to other architectures, objective tasks, and languages; possibilities to explore in future work. We also plan to carry out further analyses aimed at individuating factors that challenge the resolution of lexical ambiguity (e.g., morphosyntactic vs. semantic ambiguity, frequency of a word or sense, figurative uses), as well as clarifying the interaction between prediction and processing of words within neural LMs. 
Acknowledgements This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 715154), and from the Ram´on y Cajal programme (grant RYC-2015-18907). We gratefully acknowledge the support of NVIDIA Corporation with the donation of GPUs used for this research, and the computer resources at CTE-POWER and the technical support provided by Barcelona Supercomputing Center (RES-FI-2018-3-0034). This paper reflects the authors’ view only, and the EU is not responsible for any use that may be made of the information it contains. 3347 References Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. 2017. Fine-grained analysis of sentence embeddings using auxiliary prediction tasks. In Proceedings of 5th ICLR International Conference on Learning Representations. Marco Baroni. 2013. Composition in distributional semantics. Language and Linguistics Compass, 7(10):511–522. Jose Camacho-Collados and Taher Pilehvar. 2018. From word to sense embeddings: A survey on vector representations of meaning. Journal of Artificial Intelligence, 63(1):743–788. Alexis Conneau, Germ´an Kruszewski, Guillaume Lample, Lo¨ıc Barrault, and Marco Baroni. 2018. What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pages 2126–2136. Alan Cruse. 1986. Lexical semantics. Cambridge University Press. Katrin Erk. 2010. What is word meaning, really? (and how can distributional models help us describe it?). In Proceedings of the 2010 Workshop on Geometrical Models of Natural Language Semantics, pages 17–26. Katrin Erk. 2012. Vector space models of word meaning and phrase meaning: A survey. Language and Linguistics Compass, 6(10):635–653. Katrin Erk and Sebastian Pad´o. 2008. A structured vector space model for word meaning in context. In Proceedings of the EMNLP Conference on Empirical Methods in Natural Language Processing, pages 897–906. Adriano Ferraresi, Eros Zanchetta, Marco Baroni, and Silvia Bernardini. 2008. Introducing and evaluating ukWac, a very large web-derived corpus of English. In Proceedings of the 4th Web as Corpus Workshop (WAC-4) Can we beat Google, pages 47–54. Aina Gar´ı Soler, Anne Cocos, Marianna Apidianaki, and Chris Callison-Burch. 2019. A comparison of context-sensitive models for lexical substitution. In Proceedings of the 13th International Conference on Computational Semantics (IWCS). Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, and Marco Baroni. 2018. Colorless green recurrent networks dream hierarchically. In Proceedings of the 2018 NAACL-HLT Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1195–1205. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Dieuwke Hupkes, Sara Veldhoen, and Willem Zuidema. 2018. Visualisation and ’diagnostic classifiers’ reveal how recurrent and recursive neural networks process hierarchical structure. Journal of Artificial Intelligence Research, 61:907–926. Rafal Jozefowicz, Wojciech Zaremba, and Ilya Sutskever. 2015. An empirical exploration of recurrent network architectures. In International Conference on Machine Learning, pages 2342–2350. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. 
Gerhard Kremer, Katrin Erk, Sebastian Pad´o, and Stefan Thater. 2014. What substitutes tell us-analysis of an” all-words” lexical substitution corpus. In Proceedings of the 14th EACL Conference of the European Chapter of the Association for Computational Linguistics, pages 540–549. Geoffrey Neil Leech. 1992. 100 million words of English: the British National corpus (BNC). Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of LSTMs to learn syntax-sensitive dependencies. Transactions of the Association for Computational Linguistics, 4:521– 535. Nelson F Liu, Matt Gardner, Yonatan Belinkov, Matthew Peters, and Noah A Smith. 2019. Linguistic knowledge and transferability of contextual representations. In Proceedings of the 2019 NAACLHLT Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Diana McCarthy and Roberto Navigli. 2009. The English lexical substitution task. Language resources and evaluation, 43(2):139–159. Oren Melamud, Jacob Goldberger, and Ido Dagan. 2016. context2vec: Learning generic context embedding with bidirectional lstm. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, pages 51–61. Jeff Mitchell and Mirella Lapata. 2010. Composition in distributional models of semantics. Cognitive science, 34(8):1388–1429. Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in PyTorch. In NIPS Autodiff Workshop. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 NAACL-HLT Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2227–2237. 3348 Mike Schuster and Kuldip K Paliwal. 1997. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11):2673–2681. Steven L Small, Garrison W Cottrell, and Michael K Tanenhaus. 2013. Lexical Ambiguity Resolution: Perspective from Psycholinguistics, Neuropsychology and Artificial Intelligence. Elsevier. Stefan Thater, Hagen F¨urstenau, and Manfred Pinkal. 2011. Word meaning in context: A simple and effective vector model. In Proceedings of 5th International Joint Conference on Natural Language Processing, pages 1134–1143. Matthijs Westera and Gemma Boleda. 2019. Don’t blame distributional semantics if it can’t do entailment. In Proceedings of the 13th International Conference on Computational Semantics (IWCS), pages 120–133. A Appendix A.1 Language model The hidden layers are of sizes 600/600/300 respectively, while the word embeddings are of size 300. The language model was trained optimizing the log-likelihood of a target word given its surrounding context, with stochastic gradient descent for 20 epochs with decaying learning rate using Adam optimiser (Kingma and Ba, 2014). The initial learning rate was 0.0005 for batch size of 32. Dropout was set to 0.2 and applied to the input embedding, and the outputs of the LSTM layers. At training time, the text data is fed to the model in sequences of 100 tokens. A.2 Diagnostic models We train separate models for each combination of task and input type. 
Each model consists of a linear transformation and a tanh non-linearity, trained using Cosine Embedding Loss (PyTorch 0.4, Paszke et al., 2017) and the Adam optimiser, with early stopping based on validation loss. We carried out hyperparameter search based on validation loss for each of the model types in order to set batch size and initial learning rate. We report the final settings for each combination of input and task in Table 4.

  input        WORD           SUB            WORD&SUB
  h^1_t        16, 5×10^-5    32, 1×10^-4    32, 5×10^-5
  h^2_t        16, 5×10^-5    64, 5×10^-4    64, 5×10^-4
  h^3_t        16, 5×10^-5    128, 5×10^-4   16, 5×10^-5
  h^1_{t±1}    128, 1×10^-3   128, 1×10^-3   128, 5×10^-4
  h^2_{t±1}    16, 1×10^-4    64, 5×10^-4    16, 5×10^-4
  h^3_{t±1}    128, 1×10^-3   16, 1×10^-4    128, 5×10^-4
  Table 4: Hyperparameter settings in the diagnostic models (batch size, initial learning rate).

At training time, for each positive target word, we obtain 5 negative targets by sampling words from the frequency quartile of the positive target (frequency is computed on the training corpus of the language model). We always exclude the target word, as well as the substitute words in the SUB and WORD&SUB conditions, from the negative samples. Given the input vector, we maximize the margin of the resulting output vector \hat{r} to the embeddings of the negative samples (i = −1), and minimize the distance of the output vector to the target representation of the positive instance (i = 1; Eq. 8).

L(\hat{r}, r, i) = \begin{cases} 1 - \cos(\hat{r}, r) & \text{if } i = 1 \\ \max(0, \cos(\hat{r}, r) - \mathrm{margin}) & \text{if } i = -1 \end{cases}   (8)

At each training epoch, new negative instances are sampled, and the data is shuffled.
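The description above translates into only a few lines of PyTorch; the sketch below is our reconstruction, not the released code. It assumes the probe reads the first-layer current state (2 × 600 = 1200 dimensions) and predicts a 300-dimensional target vector; the margin value, the random toy tensors, and the pre-sampled negatives are placeholders.

```python
# Reconstruction sketch of the diagnostic probe (Eq. 7) and its training loss
# (Eq. 8); not the released code. Dimensions assume the first-layer current
# state (2 x 600 = 1200-d) and 300-d targets; margin and toy data are placeholders.
import torch
import torch.nn as nn

class Probe(nn.Module):
    def __init__(self, in_dim, emb_dim=300):
        super().__init__()
        self.linear = nn.Linear(in_dim, emb_dim)

    def forward(self, hidden_state):
        # Eq. (7): r_hat = tanh(W i + b)
        return torch.tanh(self.linear(hidden_state))

probe = Probe(in_dim=1200)
# PyTorch's CosineEmbeddingLoss matches the case-wise form of Eq. (8):
# 1 - cos(r_hat, r) for label 1, max(0, cos(r_hat, r) - margin) for label -1.
loss_fn = nn.CosineEmbeddingLoss(margin=0.5)          # margin is a placeholder
opt = torch.optim.Adam(probe.parameters(), lr=5e-5)   # batch 16, lr as in Table 4

batch, n_neg = 16, 5
hidden = torch.randn(batch, 1200)      # current hidden states (toy values)
target = torch.randn(batch, 300)       # gold vectors: w, s, or w&s
negs = torch.randn(batch, n_neg, 300)  # 5 sampled negative targets per instance

r_hat = probe(hidden)
loss = loss_fn(r_hat, target, torch.ones(batch)) + \
       loss_fn(r_hat.repeat_interleave(n_neg, dim=0),
               negs.reshape(-1, 300), -torch.ones(batch * n_neg))
opt.zero_grad()
loss.backward()
opt.step()
print(float(loss))
```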
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3349–3355 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3349 Making Fast Graph-based Algorithms with Graph Metric Embeddings Andrey Kutuzov†, Mohammad Dorgham‡, Oleksiy Oliynyk‡, Chris Biemann‡, and Alexander Panchenko⋆, ‡ †Language Technology Group, University of Oslo, Oslo, Norway ‡Language Technology Group, Universit¨at Hamburg, Hamburg, Germany ⋆Skolkovo Institute of Science and Technology, Moscow, Russia Abstract The computation of distance measures between nodes in graphs is inefficient and does not scale to large graphs. We explore dense vector representations as an effective way to approximate the same information: we introduce a simple yet efficient and effective approach for learning graph embeddings. Instead of directly operating on the graph structure, our method takes structural measures of pairwise node similarities into account and learns dense node representations reflecting user-defined graph distance measures, such as e.g. the shortest path distance or distance measures that take information beyond the graph structure into account. We demonstrate a speed-up of several orders of magnitude when predicting word similarity by vector operations on our embeddings as opposed to directly computing the respective path-based measures, while outperforming various other graph embeddings on semantic similarity and word sense disambiguation tasks and show evaluations on the WordNet graph and two knowledge base graphs. When operating on large graphs, such as transportation networks, social networks, or lexical resources, the need for estimating similarities between nodes arises. For many domain-specific applications, custom graph node similarity measures sim : V × V →R have been defined on pairs of nodes V of a graph G = (V, E). Examples include travel time, communities, or semantic distances for knowledge-based word sense disambiguation on WordNet (Miller, 1995). For instance, the similarity sij between the cup.n.01 and mug.n.01 synsets in the WordNet is 1 4 according to the inverted shortest path distance as these two nodes are connected by the undirected path cup → container ←vessel ←drinking vessel ←mug. In recent years, a large variety of such node similarity measures have been described, many of which are based on the notion of a random walk (Fouss et al., 2007; Pilehvar and Navigli, 2015; Lebichot et al., 2018). As given by the structure of the problem, most such measures are defined as traversals of edges E of the graph, which makes their computation prohibitively inefficient. To this end, we propose the path2vec model1, which solves this problem by decoupling development and use of graph-based measures, and – in contrast to purely walk-based embeddings – is trainable to reflect custom node similarity measures. We represent nodes in a graph with dense embeddings that are good in approximating such custom, e.g. application-specific, pairwise node similarity measures. Similarity computations in a vector space are several orders of magnitude faster than computations directly operating on the graph. First, effectiveness of our model is shown intrinsically by learning metric embeddings for three types of graphs (WordNet, FreeBase, and DBPedia), based on several similarity measures. 
Second, in an extrinsic evaluation on the Word Sense Disambiguation (WSD) task (Navigli, 2009) we replace several original measures with their vectorized counterparts in a known graph-based WSD algorithm by Sinha and Mihalcea (2007), reaching comparable levels of performance with the graph-based algorithms while maintaining computational gains. The main contribution of this paper is the demonstration of the effectiveness and efficiency of the path2vec node embedding method (Kutuzov et al., 2019). This method learns dense vector embeddings of nodes V based on a user-defined custom similarity measure sim, e.g. the shortest path distance or any other similarity measure. While our method is able to closely approximate quite different similarity measures as we show 1https://github.com/uhh-lt/path2vec 3350 on WordNet-based measures and therefore can be used in lieu of these measures in NLP components and applications, our main point is the increase of speed in the similarity computation of nodes, which gains up to 4 orders of magnitude with respect to the original graph-based algorithms. 1 Graph Metric Embeddings Model Definition of the Model Path2vec learns embeddings of the graph nodes {vi, vj} ∈V such that the dot products between pairs of the respective vectors (vi · vj) are close to the user-defined similarities between the nodes sij. In addition, the model reinforces the similarities vi · vn and vj · vm between the nodes vi and vj and all their respective adjacent nodes {vn : ∃(vi, vn) ∈E} and {vm : ∃(vj, vm) ∈E} to preserve local structure of the graph. The model preserves both global and local relations between nodes by minimizing P (vi,vj)∈B((v⊤ i vj −sij)2 −α(v⊤ i vn + v⊤ j vm)), where sij = sim(vi, vj) is the value of a ‘gold’ similarity measure between a pair of nodes vi and vj, vi and vj are the embeddings of the first and the second node, B is a training batch, α is a regularization coefficient. The second term (vi · vn + vj · vm) in the objective function is a regularizer that aids the model to simultaneously maximize the similarity between adjacent nodes while learning the similarity between the two target nodes (one adjacent node is randomly sampled for each target node). We use negative sampling to form a training batch B adding p negative samples (sij = 0) for each real (sij > 0) training instance: each real node (synset) pair (vi, vj) with ‘gold’ similarity sij is accompanied with p ‘negative’ node pairs (vi, vk) and (vj, vl) with zero similarities, where vk and vl are randomly sampled nodes from V . Embeddings are initialized randomly and trained using the Adam optimizer (Kingma and Ba, 2015) with early stopping. Once the model is trained, the computation of node similarities is approximated with the dot product of the learned node vectors, making the computations efficient: ˆsij = vi · vj. Relation to Similar Models Our model bears resemblance to the Skip-gram model (Mikolov et al., 2013), where the vector dot product vi · ˜vj of vectors of pairs of words (vi, vj) from a training corpus is optimized to a high score close to 1 for observed samples, while the dot products of negative samples are optimized towards 0. In the Skip-gram model, the target is to minimize the log likelihood of the conditional probabilities of context words wj given current words wi: L = −P (vi,vj)∈Bp log σ(vi · ˜vj) − P (vi,vj)∈Bn log σ(−vi · ˜vj), where Bp is the batch of positive training samples, Bn is the batch of the generated negative samples, and σ is the sigmoid function. 
At this, Skip-gram uses only local information, never creating the full co-occurrence count matrix. In our path2vec model, the target dot product values sij are not binary, but can take arbitrary values in the [0...1] range, as given by the custom distance metric. Further, we use only a single embedding matrix with vector representations of the graph nodes, not needing to distinguish target and context. Another related model is Global Vectors (GloVe) (Pennington et al., 2014), which learns co-occurrence probabilities in a given corpus. The objective function to be minimized in GloVe model is L = P (vi,vj)∈B f(sij)(vi · ˜vj −log sij + bi + bj)2, where sij counts the co-occurrences of words vi and vj, bi and bj are additional biases for each word, and f(sij) is a weighting function handling rare co-occurrences. Like the Skip-gram, GloVe also uses two embedding matrices, but it relies only on global information, pre-aggregating global word co-occurrence counts. Computing Training Similarities In general case, our model requires computing pairwise node similarities sij for training between all pairs of nodes in the input graph G. This step could be computationally expensive, but it is done only once to make computing of similarities fast. Besides, for some metrics, effective algorithms exist that compute all pairwise similarities at once, e.g. Johnson (1977) algorithm for computing shortest paths distances with the worst-case performance of O(|V |2 log |V | + |V ||E|). As the input training dataset also grows quadratically in |V |, training time for large graphs can be slow. To address this issue, we found it useful to prune the input training set so that each node vi ∈V has only k ∈[50; 200] most similar nodes. Such pruning does not lead to loss of effectiveness. 2 Computational Efficiency Experimental Setting In this section, we compare efficiency of our method as compared to the original graph based similarity metrics. We 3351 30 40 23.4 0.713 0.007 0.007 0.007 Computation time, sec. Leacock-Chodorow (WordNet) Wu-Palmer (WordNet) Shortest paths (WordNet) FSE embeddings Leacock-Chodorow (path2vec) Wu-Palmer (path2vec) Shortest paths (path2vec) 0 10 20 30 40 30 6.7 0.713 0.007 0.007 Computation time, sec. Leacock-Chodorow (NLTK) Wu-Palmer (NLTK) FSE embeddings Leacock-Chodorow (path2vec) Wu-Palmer (path2vec) 0 10 20 30 Figure 1: Similarity computation: graph vs vectors. trained the model on a graph of 82,115 noun synsets from WordNet. Using NLTK (Bird et al., 2009) we computed the following metrics: (1) Leacock-Chodorow similarities (LCH) based on the shortest path between two synsets in the WordNet hypernym/hyponym taxonomy and its maximum depth; (2) inverted shortest path distance (ShP); (3) Wu-Palmer similarities (WuP) based on the depth of the two nodes in the taxonomy and the depth of their most specific ancestor node. For instance, for LCH this procedure took about 30 hours on an Intel Xeon [email protected] CPU using 10 threads. We pruned similarities to the first 50 most similar ‘neighbors’ of each synset and trained path2vec on this dataset. Discussion of Results Figure 1 presents computation times for pairwise similarities between one synset and all other 82,115 WordNet noun synsets. We compare running times of calculating two original graph-based metrics to Hamming distance between 128D FSE binary embeddings (Subercaze et al., 2015) and to dot product between their dense vectorized 300D counterparts (using CPU). 
Using float vectors (path2vec) is 4 orders of magnitude faster than operating directly on graphs, and 2 orders faster than Hamming distance. The dot product computation is much faster as compared to shortest path computation (and other complex walks) on a large graph. Also, lowdimensional vector representations of nodes take much less space than the pairwise similarities between all the nodes. The time complexity of calculating the shortest path between graph nodes (as in ShP or LCH) is in the best case linear in the number of nodes and edges. Calculating Hamming distance between binary strings is linear in the sum of string lengths, which are equivalent of vector sizes (Hamming, 1950). At the same time, the complexity of calculating dot product between float vectors is linear in the vector size and is easily parallelized. LCH ShP WuP LCH ShP WuP WordNet 100 100 100 51.3 51.3 47.4 path2vec 93.5 95.2 93.1 53.2 55.5 55.5 TransR 77.6 77.6 72.5 38.6 node2vec 75.9 75.9 78.7 46.2 DeepWalk 86.8 86.8 85.0 53.3 FSE 90.0 90.0 89.0 55.6 Table 1: Spearman correlations with WordNet similarities (left) and human judgments (right) ×100. 3 Evaluation on Semantic Similarity Experimental Setting We use noun pairs from the SimLex999 dataset (Hill et al., 2015), measuring Spearman rank correlation between ‘gold’ WordNet distances for these pairs and the vector distances produced by the graph embedding models (trained on WordNet) to see how well the models fit the training objective. We also test the plausibility of the model’s output to human judgments. For this, we use human-annotated similarities from the same SimLex999. Some SimLex999 lemmas can be mapped to more than one WordNet synset. We chose the synset pair with the highest dot product between the embeddings from the corresponding model. Baselines Our model is compared against five baselines: raw WordNet similarities by respective measures; DeepWalk (Perozzi et al., 2014); node2vec (Grover and Leskovec, 2016); FSE (Subercaze et al., 2015); TransR (Lin et al., 2015). DeepWalk, node2vec, and TransR models were trained on the same WordNet graph. We used all 82,115 noun synsets as vertices and hypernym/hyponym relations between them as edges. During the training of DeepWalk and node2vec models, we tested different values for the number of random walks (in the range from 10 to 100), and the vector size (100 to 600). For DeepWalk, we additionally experimented with the window size (5 to 100). All other hyperparameters were left at default values. FSE embeddings of the WordNet noun synsets were provided to us by the authors, and consist of 128-bit vectors. Discussion of Results The left part of Table 1 shows results with the WordNet similarity scores used as gold standard. Path2vec outperforms other graph embeddings, achieving high correlations with WordNet similarities. This shows that our model efficiently approximates different graph measures. The right part of Table 1 shows results 3352 100 200 300 400 500 600 Vector size 0.35 0.40 0.45 0.50 0.55 Spearman's correlation WordNet graph (noun synsets) path2vec Deepwalk node2vec TransR WordNet 100 200 300 400 500 600 Vector size 0.0 0.2 0.4 0.6 Pearson's correlation Freebase graph (FB15k-237) 100 200 300 400 500 600 Vector size 0.4 0.5 0.6 0.7 0.8 Pearson's correlation DBpedia graph (DB100k) Figure 2: Evaluation on different graphs on SimLex999 (left) and shortest path distance (middle, right). for the correlations with human judgments (SimLex999). 
We report the results for the best models for each method, all of them (except FSE) using vector size 300 for comparability. Figure 2 (left) compares path2vec to the baselines, as measured by the correlations with SimLex999 human judgments. The WordNet line denotes the correlation of WordNet similarities with SimLex999 scores. For the path2vec models, there is a tendency to improve the performance when the vector size is increased (horizontal axis), until a plateau is reached beyond 600. Note that node2vec fluctuates, yielding low scores for 200 dimensions. The reported best DeepWalk models were trained with the 10 walks and window size 70. The reported best node2vec models were trained with 25 walks. Interestingly, path2vec and DeepWalk models consistently outperform the raw WordNet. 4 Evaluation inside a WSD Algorithm Experimental Setting To showcase how our approach can be be used inside a graph-based algorithm, we employ word sense disambiguation (WSD) task, reproducing the approach of (Sinha and Mihalcea, 2007). We replace graph similarities with the dot product between node embeddings and study how it influences the WSD performance. The WSD algorithm starts with building a graph where the nodes are the WordNet synsets of the words in the input sentence. The nodes are then connected by edges weighted with the similarity values between the synset pairs. The final step is selecting the most likely sense for each word based on the weighted in-degree centrality score for each synset. Discussion of Results Table 2 presents the WSD micro-F1 scores using raw WordNet similarities, 300D path2vec, DeepWalk and node2vec models, and the 128D FSE model. We evaluate on the following all-words English WSD test sets: Model Senseval2 Senseval3 SemEval-15 Random sense 0.381 0.312 0.393 Graph-based vs vector-based measures LCH (WordNet) 0.547↓0.000 0.494↓0.000 0.550↓0.000 LCH (path2vec) 0.527↓0.020 0.472↓0.022 0.536↓0.014 ShP (WordNet) 0.548↓0.000 0.495↓0.000 0.550↓0.000 ShP (path2vec) 0.534↓0.014 0.489↓0.006 0.563↑0.013 WuP (WordNet) 0.547↓0.000 0.487↓0.000 0.542↓0.000 WuP (path2vec) 0.543↓0.004 0.489↑0.002 0.545↑0.003 Various baseline graph embeddings trained on WordNet TransR 0.540 0.466 0.536 node2vec 0.503 0.467 0.489 DeepWalk 0.528 0.476 0.552 FSE 0.536 0.476 0.523 Table 2: F1 scores of a graph-based WSD algorithm on WordNet versus its vectorized counterparts. Senseval-2 (Palmer et al., 2001), Senseval-3 (Mihalcea et al., 2004), and SemEval-15 Task 13 (Moro and Navigli, 2015). The raw WordNet similarities have a small edge over their vector approximations in the majority of the cases yet the path2vec models consistently closely follow them while outperforming other graph embedding baselines: We indicate the differences with respect to the original with a subscript number. 5 Evaluation on Knowledge Base Graphs 5.1 Experimental Settings To show the utility of our model besides the WordNet graph, we also applied it to two graphs derived from knowledge bases (KBs). More specifically, we base our experiments on two publicly available standard samples from these two resources: the FB15k-237 (Toutanova and Chen, 2015) dataset contains 14,951 entities/nodes and is derived from Freebase (Bollacker et al., 2008); the DB100k (Ding et al., 2018) dataset contains 99,604 entities/nodes and is derived from DBPe3353 dia (Auer et al., 2007). It is important to note that both datasets were used to evaluate approaches that learn knowledge graph embeddings, e.g. 
(Lin et al., 2015; Xie et al., 2016; Joulin et al., 2017) on the task on knowledge base completion (KBC), to predict missing KB edges/relations between nodes/entities. The specificity of our model is that it learns a given graph similarity metric, which is not provided in these datasets. Therefore, we use only the graphs from these datasets, computing the shortest path distances between all pairs of nodes using the algorithm of Johnson (1977). Instead of the KBC task, we evaluate on the task of predicting node similarity, here using the shortest path distance. We generate a random sample of node pairs for testing from the set of all node pairs (these pairs are excluded from training). The test set contains an equal number of paths of length 1-7 (in total 1050 pairs each, 150 pairs per path length). 5.2 Discussion of Results Figure 2 (middle and right) shows evaluation results on the knowledge base graphs. Path2vec is able to better approximate the target graph metric than the standard graph embedding models. As dimensionality of the embeddings increases, the model more closely approximates the target metric, but the performance drop for the models with a low number of dimensions is not drastic, allowing more effective computations while maintaining a reasonable efficiency level. Regarding the competitors, DeepWalk comes closest to the performance of our approach, but does not seem to make use of the additional dimensions when training on larger vector sizes; on the DBPedia dataset, this issue is shared between all baselines, where correlation to the true path lengths decreases as representation length increases. 6 Related Work Representation learning on graphs received much attention recently in various research communities, see Hamilton et al. (2017a) for a thorough survey on the existing methods. All of them (including ours) are based on the idea of projecting graph nodes into a latent space with a much lower dimensionality than the number of nodes. Existing approaches to graph embeddings use either factorization of the graph adjacency matrix (Cao et al., 2015; Ou et al., 2016) or random walks over the graph as in Deepwalk (Perozzi et al., 2014) and node2vec (Grover and Leskovec, 2016). A different approach is taken by Subercaze et al. (2015), who directly embed the WordNet tree graph into Hamming hypercube binary representations. Their ‘Fast similarity embedding’ (FSE) model provides a quick way of calculating semantic similarities based on WordNet. The FSE embeddings are not differentiable though, considerably limiting their use in deep neural architectures. TransR (Lin et al., 2015) extends TransH (Wang et al., 2014) and is based on the idea that an entity may have a few aspects and different relations are focused on them. So the same entities can be close or far from each other depending on the type of the relation. TransR projects entity vectors into a relation specific space, and learns embeddings via translation between projected entities. We compare our path2vec model to these approaches, yet we did not compare to the models like GraphSAGE embeddings (Hamilton et al., 2017b) and Graph Convolutional Networks (Schlichtkrull et al., 2018) as they use node features which are absent in our setup. 7 Conclusion Structured knowledge contained in language networks is useful for NLP applications but is difficult to use directly in neural architectures. We proposed a way to train embeddings that directly represent a graph-based similarity measure structure. 
Our model, path2vec, relies on both global and local information from the graph and is simple, effective, and computationally efficient. We demonstrated that our approach generalizes well across graphs (WordNet, Freebase, and DBpedia). Besides, we integrated it into a graph-based WSD algorithm, showing that its vectorized counterpart yields comparable F1 scores on three datasets. Path2vec enables a speed-up of up to four orders of magnitude for the computation of graph distances as compared to ‘direct’ graph measures. Thus, our model is simple and general, hence it may be applied to any graph together with a node distance measure to speed up algorithms that employ graph distances. Acknowledgments This was supported by the DFG under “JOIN-T” (BI 1544/4) and “ACQuA” (BI 1544/7) projects. 3354 References S¨oren Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary Ives. 2007. DBpedia: A nucleus for a web of open data. In The Semantic Web: Proceedings of the 6th International Semantic Web Conference, 2nd Asian Semantic Web Conference, ISWC 2007 + ASWC 2007, pages 722–735, Busan, South Korea. Springer. Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural language processing with Python: analyzing text with the natural language toolkit. O’Reilly Media, Inc. Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD international conference on Management of data, pages 1247–1250, Vancouver, BC, Canada. ACM. Shaosheng Cao, Wei Lu, and Qiongkai Xu. 2015. GraRep: Learning graph representations with global structural information. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management, pages 891–900, Melbourne, Australia. ACM. Boyang Ding, Quan Wang, Bin Wang, and Li Guo. 2018. Improving knowledge graph embedding using simple constraints. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 110–121, Melbourne, Australia. Association for Computational Linguistics. Francois Fouss, Alain Pirotte, Jean-Michel Renders, and Marco Saerens. 2007. Random-walk computation of similarities between nodes of a graph with application to collaborative recommendation. IEEE Transactions on knowledge and data engineering, 19(3):355–369. Aditya Grover and Jure Leskovec. 2016. Node2vec: Scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining, pages 855–864, San Francisco, CA, USA. ACM. William Hamilton, Rex Ying, and Jure Leskovec. 2017a. Representation learning on graphs: Methods and applications. IEEE Data Engineering Bulletin, 40(3):52–74. William Hamilton, Zhitao Ying, and Jure Leskovec. 2017b. Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems, pages 1024–1034, Long Beach, CA, USA. Richard Hamming. 1950. Error detecting and error correcting codes. Bell System technical journal, 29(2):147–160. Felix Hill, Roi Reichart, and Anna Korhonen. 2015. SimLex-999: Evaluating Semantic Models With (Genuine) Similarity Estimation. Computational Linguistics, 41(4):665–695. Donald B. Johnson. 1977. Efficient algorithms for shortest paths in sparse networks. Journal of the ACM (JACM), 24(1):1–13. Armand Joulin, Edouard Grave, Piotr Bojanowski, Maximilian Nickel, and Tomas Mikolov. 2017. 
Fast linear model for knowledge graph embeddings. arXiv preprint arXiv:1710.10881. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations (ICLR), San Diego, CA, USA. Andrey Kutuzov, Mohammad Dorgham, Oleksiy Oliynyk, Chris Biemann, and Alexander Panchenko. 2019. Learning graph embeddings from WordNetbased similarity measures. In Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (*SEM 2019), pages 125–135, Minneapolis, MN, USA. Association for Computational Linguistics. Bertrand Lebichot, Guillaume Guex, Ilkka Kivim¨aki, and Marco Saerens. 2018. A constrained randomized shortest-paths framework for optimal exploration. arXiv preprint arXiv:1807.04551. Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. 2015. Learning entity and relation embeddings for knowledge graph completion. In Proceedings of the 29th AAAI Conference on Artificial Intelligence, pages 2181–2187, Austin, TX, USA. AAAI Press. Rada Mihalcea, Timothy Chklovski, and Adam Kilgarriff. 2004. The Senseval-3 English lexical sample task. In Senseval-3: Third International Workshop on the Evaluation of Systems for the Semantic Analysis of Text, pages 25–28, Barcelona, Spain. Association for Computational Linguistics. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems 26, pages 3111–3119, Lake Tahoe, NV, USA. Curran Associates, Inc. George A. Miller. 1995. WordNet: A lexical database for English. Communications of the ACM, 38(11):39–41. Andrea Moro and Roberto Navigli. 2015. Semeval2015 task 13: Multilingual all-words sense disambiguation and entity linking. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 288–297. Association for Computational Linguistics. 3355 Roberto Navigli. 2009. Word sense disambiguation: A survey. ACM Computing Surveys (CSUR), 41(2):10. Mingdong Ou, Peng Cui, Jian Pei, Ziwei Zhang, and Wenwu Zhu. 2016. Asymmetric transitivity preserving graph embedding. In Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining, pages 1105–1114, San Francisco, CA, USA. ACM. Martha Palmer, Christiane Fellbaum, Scott Cotton, Lauren Delfs, and Hoa Trang Dang. 2001. English tasks: All-words and verb lexical sample. In Proceedings of SENSEVAL-2 Second International Workshop on Evaluating Word Sense Disambiguation Systems, pages 21–24, Toulouse, France. Association for Computational Linguistics. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, Doha, Qatar. Association for Computational Linguistics. Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. 2014. DeepWalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 701–710, New York, NY, USA. ACM. Mohammad T. Pilehvar and Roberto Navigli. 2015. From senses to texts: An all-in-one graph-based approach for measuring semantic similarity. Artificial Intelligence, 228:95–128. Michael Schlichtkrull, Thomas N. Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. 2018. 
Modeling relational data with graph convolutional networks. In Proceedings of the European Semantic Web Conference 2018: The Semantic Web, pages 593–607, Heraklion, Greece. Springer. Ravi Sinha and Rada Mihalcea. 2007. Unsupervised graph-based word sense disambiguation using measures of word semantic similarity. In International Conference on Semantic Computing (ICSC), pages 363–369, Irvine, CA, USA. IEEE. Julien Subercaze, Christophe Gravier, and Fr´ed´erique Laforest. 2015. On metric embedding for boosting semantic similarity computations. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 8–14, Beijing, China. Association for Computational Linguistics. Kristina Toutanova and Danqi Chen. 2015. Observed versus latent features for knowledge base and text inference. In Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compositionality, pages 57–66, Beijing, China. Association for Computational Linguistics. Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge graph embedding by translating on hyperplanes. In Proceedings of the 28th AAAI Conference on Artificial Intelligence, pages 1112–1119, Qu´ebec City, QC, Canada. AAAI Press. Ruobing Xie, Zhiyuan Liu, Jia Jia, Huanbo Luan, and Maosong Sun. 2016. Representation learning of knowledge graphs with entity descriptions. In Proceedings of the 30th AAAI Conference on Artificial Intelligence, pages 2659–2665, Phoenix, AZ, USA. AAAI Press.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3356–3361 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3356 Embedding Imputation with Grounded Language Information Ziyi Yang1, Chenguang Zhu2, Vin Sachidananda3, and Eric Darve1, 4 1Department of Mechanical Engineering, Stanford University 2Microsoft Speech and Dialogue Research Group 3Department of Electrical Engineering, Stanford University 4Institute for Computational and Mathematical Engineering, Stanford University {ziyi.yang, vsachi, darve}@stanford.edu, [email protected] Abstract Due to the ubiquitous use of embeddings as input representations for a wide range of natural language tasks, imputation of embeddings for rare and unseen words is a critical problem in language processing. Embedding imputation involves learning representations for rare or unseen words during the training of an embedding model, often in a post-hoc manner. In this paper, we propose an approach for embedding imputation which uses grounded information in the form of a knowledge graph. This is in contrast to existing approaches which typically make use of vector space properties or subword information. We propose an online method to construct a graph from grounded information and design an algorithm to map from the resulting graphical structure to the space of the pre-trained embeddings. Finally, we evaluate our approach on a range of rare and unseen word tasks across various domains and show that our model can learn better representations. For example, on the Card-660 task our method improves Pearson’s and Spearman’s correlation coefficients upon the stateof-the-art by 11% and 17.8% respectively using GloVe embeddings. 1 Introduction Word embeddings (Mikolov et al., 2013; Pennington et al., 2014) are used pervasively in deep learning for natural language processing. However, due to fixed vocabulary constraints in existing approaches to training word embeddings, it is difficult to learn representations for words which are rare or unseen during training. This is commonly referred to as the out-of-vocabulary (OOV) word problem. In the original embedding implementations, a special OOV token is typically reserved for such words. However, this rudimentary approach often detriments the performance of downstream tasks which contain numerous rare or unseen words. Recent works have proposed subword approaches (Zhao et al., 2018; Sennrich et al., 2015), which construct embeddings through the composition of characters or sentence pieces for OOV words. Vector space properties are also utilized to learn embeddings with small amounts of data (Bahdanau et al., 2017; Herbelot and Baroni, 2017). In this paper, we propose a novel approach, knowledge-graph-to-vector (KG2Vec), for the OOV word problem. KG2Vec makes use of the grounded language information in the form of a knowledge graph. Grounded information has been extensively used in various NLP tasks to represent real-world knowledge (Niles and Pease, 2003; Gruber, 1993; Guarino, 1998; de Bruijn et al., 2006; Paulheim, 2017) . In particular, early question answering systems used expert-crafted ontologies in order to endow these systems with common knowledge (Harabagiu et al., 2005; Xu et al., 2016). Additionally, lexical-semantic ontologies, such as WordNet, have been used to provide semantic relations between words in a wide variety of language processing and inference tasks (Morris and Hirst, 1991; Ovchinnikova et al., 2010). 
Grounded language information has been observed to augment model performance on a wide variety of natural language processing and understanding tasks (He et al., 2017; Choi et al., 2018). In these settings, a model is able to provide better generalization by using relational information from a knowledge graph or knowledge base in addition to the standard set of training examples. Additionally, outputs from models with grounded approaches have been observed to be more factually consistent and logically sound (Bordes et al., 2014) compared with outputs from models without grounding information. By foregoing the usage of vector space or subword information, KG2Vec is able to capture se3357 mantic meanings of words directly from the graphical structure in grounded knowledge using recent advances in network representation learning. Furthermore, KG2Vec leverages the most updated information from comprehensive knowledge bases (Wikipedia & Wiktionary). Therefore, KG2Vec can be applied to training embeddings of newly emerging OOV words. In summary, our contributions are three-fold: 1. An approach to constructing graphical representations of entities in a knowledge base in an unsupervised manner. 2. Methods for mapping entities from a graphical representation to the space in which a pretrained embedding lies. 3. Experimentation on rare and unseen word datasets and a new state-of-art performance on Card-660 dataset. 2 Related Work 2.1 Graph Neural Networks Graph neural networks (GNN) are an emerging deep learning approach for representation learning of graphical data (Xu et al., 2018; Kipf and Welling, 2016). GNNs can learn a representation vector hv for each node in the network by leveraging the graphical structure and node features fv. Node embeddings are generated by recursively aggregating each node’s neighborhood information and features. At the t-th iteration, the information aggregation is defined as: ht v = Mt(ht−1 v , {ht−1 u }u∈N(v)) (1) where ht v is the representation for v at the t-th iteration, Mt is an iteration-specific message aggregation function parametrized by a neural network and N(v) is the set of neighbors of node v. One simple form of Mt is mean neighborhood aggregation: ht v = ReLU( X u∈N(v) W tht−1 u |N(v)| + Btht−1 v ) (2) where W t and Bt are trainable matrices. Typically, h0 v is initialized as fv. The final node representation is usually a function of hT v from the last iteration T, such as an identity function or a transformation function (Ying et al., 2018). 2.2 The OOV word problem The out-of-vocabulary (OOV) word problem has been present in word embedding models since their inception (Mikolov et al., 2013; Pennington et al., 2014). Due to space and training data constraints, words which are either infrequent or do not appear in the training corpus can lack representations at the time of inference. Numerous methods have been proposed to tackle the OOV word problem with a small amount of training data. Deep learning based approaches (Bahdanau et al., 2017) and vector-space based methods (Herbelot and Baroni, 2017) can improve the rare word representations on various semantic similarity tasks. One downside to these approaches is that they require small amounts of training data for words whose embeddings are being imputed and, as a result, can have difficulties representing words for which training samples do not exist. Sub-word level representations have been studied in the context of the OOV word problem. Pinter et al. 
(2017) uses the RNN’s hidden state of the last sub-word in a word to produce representations. Zhao et al. (2018) proposes using characterlevel decomposition to produce embeddings for OOV words. 3 Model We propose the knowledge-graph-to-vector (KG2Vec) model for building OOV word representations from knowledge base information. KG2Vec starts with building a knowledge graph K with nodes consisting of pre-trained words and OOV words. It then utilizes a graph convolutional network (GNN) to map graph nodes to lowdimensional embeddings. The GNN is trained to minimize the Euclidean distance between the node embeddings to pre-trained word embeddings in the dictionary such as GloVe (Pennington et al., 2014) and ConceptNet Numberbatch (Speer et al., 2017). Finally, the GNN is used to generate embeddings for OOV words. 3.1 Build the Knowledge Graph In a knowledge graph K, each node v represents a word wv. The nodes (words) in the graph are chosen as follows. We count the frequency of occurrences for English words from the Wikipedia English dataset (with 3B tokens). The 2000 words with the highest frequencies of occurrence are skipped to diminish the effect of stop words. Among the words left, we choose the |V ′| words with the highest frequencies of occurrence. All 3358 OOV words for which we would like to impute embeddings are also added to the graph as nodes. For each node, we obtain its grounded information from two sources: (I) the words’ summary, defined as the first paragraph of the Wikipedia page when this word is searched; (II) the word’s definition in Wiktionary. We choose Wikipedia and Wiktionary over other knowledge bases because they are comprehensive, well-maintained and up-to-date. Here is an example of the grounded information for the word Brexit. • Wikipedia page summary: Brexit, a portmanteau of “British” and “exit”, is the impending withdrawal of the United Kingdom (UK) from the European Union (EU). It follows the referendum of 23 June 2016 when 51.9 per cent of voters chose to leave the EU... • Wiktionary definition: Brexit (Britain, politics) The withdrawal of the United Kingdom from the European Union. All the words in the Wikipedia summary and the Wiktionary definition form the grounded language information of this word wv, defined as Dv. Specifically, Dv is the concatenation of wv’s Wikipedia summary and the Wiktionary definition. An undirected edge evu exists between node v and u if the Jaccard coefficient |Dv∩Du| |Dv∪Du| > η, where η is a pre-defined threshold and chosen to be 0.5 empirically in the experiments. The edge evu is then assigned with a weight svu = |Dv∩Du| |Dv∪Du|. We also compute a feature vector fu as the mean of pre-trained embeddings of words in Dv. Finally, the obtained knowledge graph K = (V, E) has a feature vector fv for each node v ∈V . 3.2 Graph Neural Network The nodes in the graph are mapped to lowdimensional embeddings via graph convolutional neural network (GCN) (Kipf and Welling, 2016). It follows that, at the t-th neighborhood aggregation, the node embedding ht v for node v is modelled as: ht v = ReLU(W t X u∈S(v) svuht−1 u C + bt) (3) where S(v) = N(v) ∪{v}, and the normalization constant C = 1 + P u∈N(v) svu. W t and bt are trainable parameters. The node embeddings are initialized as the feature vector fv, i.e. h0 v = fv. At the final iteration T, the generated node embeddings {hT v } are computed without the ReLU function. 
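A minimal sketch of the propagation rule in Eq. (3), that is, a weighted mean over each node's neighbourhood (including a self-loop) followed by a linear map and a ReLU, is given below. The dense adjacency format, tensor sizes and class name are illustrative assumptions rather than the authors' implementation.

```python
# Sketch of one KG2Vec-style propagation step (Eq. 3): a weighted mean over a
# node's neighbours and itself, followed by a linear map and ReLU.
# `adj` is a dense (|V|, |V|) matrix of Jaccard edge weights with 1.0 on the
# diagonal for the self-loop.
import torch
import torch.nn as nn

class WeightedGCNLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(dim, dim)      # W^t and b^t of Eq. (3)

    def forward(self, h, adj):
        norm = adj.sum(dim=1, keepdim=True)    # C = 1 + sum_u s_vu per node
        agg = (adj @ h) / norm                 # weighted mean over S(v)
        return torch.relu(self.linear(agg))

# h0 = node features f_v (mean of pre-trained embeddings of the words in D_v)
h0 = torch.randn(1000, 300)                    # toy sizes
adj = torch.eye(1000)                          # toy graph: self-loops only
layer = WeightedGCNLayer(300)
h1 = layer(h0, adj)
```

As noted above, the final iteration T would omit the ReLU so that the outputs are free to match arbitrary pre-trained vectors.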
The loss function of the GNN model is the mean square error between the pre-trained word vectors and generated embedding hT v for all words in the graph which are part of the model’s vocabulary (e.g. GloVe). During inference, OOV words are assigned embeddings computed by the GNN. 4 Experiments To evaluate our method’s ability to impute embeddings, we conduct experiments on the following rare and unseen word similarity tasks. 4.1 Card-660: Cambridge Rare Word Dataset Card-660 (Pilehvar et al., 2018) is a word-word similarity task with 660 example pairs involving uncommon words and provides a benchmark for rare word representation models. Card-660 has a inter-annotator agreement (IAA) measure of 0.90, which is significantly higher than previous datasets for rare word representation. Additionally, Card-660 contains examples from a disparate set of domains such as technology, popular culture and medicine. 4.2 Stanford Rare Word (RW) Similarity The Stanford Rare Word (RW) Similarity Benchmark (Luong et al., 2013) is a word-word semantic similarity task including 2034 word pairs and tests the ability of representation learning methods to capture the semantics of infrequent words. Due to the probabilistic underpinnings of word embeddings, where distances between two words’ representations are approximately proportional to their co-occurrence probability in a corpus, the authors found that rare words often have more noisy representations due to having fewer training samples. Although RW has a relatively low IAA measure of 0.41, the benchmark has been well-studied in previous literature. 4.3 Results Experiment results, measured by Pearson’s and Spearman’s correlation, on the Card-660 and Stanford rare words datasets are shown in table 1. The Wikipedia pages and Wiktionary definitions used in the following experiments are snapshots from Feb 16th, 2019. We compare KG2Vec to other embedding imputation models, including Mimick (Pinter et al., 2017), Definition centroid (Herbelot and Baroni, 2017), Definition LSTM (Bah3359 Model Missed words Missed pairs Pearson r Spearman ρ RW CARD RW CARD RW CARD RW CARD ConceptNet Numberbatch 5% 37% 10% 53% 53.0 36.0 53.7 24.7 + Mimick 0% 0% 0% 0% 56.0 34.2 57.6 35.6 + Definition centroid 0% 29% 0% 43% 59.1 42.9 60.3 33.8 + Definition LSTM 0% 25% 0% 39% 58.6 41.8 59.4 31.7 + SemLand 0% 29% 0% 43% 60.5 43.4 61.7 34.3 + BoS 0% 0% 0% 0% 60.0 49.2 61.7 47.6 + Node features 0.02% 7% 0.04% 12% 58.4 54.0 59.7 51.4 + KG2Vec 0.02% 7% 0.04% 12% 58.6 56.9 60.1 54.3 GloVe Common Crawl 1% 29% 2% 44% 44.0 33.0 45.1 27.3 + Mimick 0% 0% 0% 0% 44.7 23.9 45.6 29.5 + Definition centroid 0% 21% 0% 35% 43.5 35.2 45.1 31.7 + Definition LSTM 0% 20% 0% 33% 24.0 23.0 22.9 19.6 + SemLand 0% 21% 0% 35% 44.3 39.5 45.8 33.8 + BoS 0% 0% 0% 0% 44.9 31.5 46.0 35.3 + Node features 0.05% 0.4% 0.01% 0.7% 43.8 36.0 45.0 37.4 + KG2Vec 0.05% 0.4% 0.01% 0.7% 44.6 50.5 45.8 51.6 Table 1: Performance of OOV models on Stanford Rare Word Similarity and Card-660 datasets. Two word dictionaries are used: ConceptNet and GloVe. The overall best are underlined for each column, and the best results for each type of word dictionary are in bold. We run the BoS experiments with the default hyper-parameters from Zhao et al. (2018). Performances of other baseline models are collected from Pilehvar et al. (2018). danau et al., 2017), SemLand (Pilehvar and Collier, 2017) and BoS (Zhao et al., 2018). 
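The imputation objective described at the start of this passage, a mean-squared error between the network's outputs and the pre-trained vectors of in-vocabulary nodes, can be sketched as follows. The Linear layer merely stands in for the stacked GCN of the previous sketch, and the optimizer, learning rate and toy sizes are assumptions.

```python
# Toy sketch of the KG2Vec training objective: minimise MSE between the
# network's node embeddings and pre-trained vectors (e.g. GloVe) for the
# in-vocabulary nodes, then read off the outputs for OOV nodes.
import torch
import torch.nn.functional as F

features = torch.randn(1000, 300)        # node features f_v
gold = torch.randn(800, 300)             # pre-trained vectors for nodes 0..799
model = torch.nn.Linear(300, 300)        # stand-in for the T-layer GCN
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(200):
    out = model(features)
    loss = F.mse_loss(out[:800], gold)   # supervised only on in-vocabulary nodes
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

oov_vectors = model(features)[800:].detach()   # imputed embeddings for OOV nodes
```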
During evaluation, zero vectors are assigned to missing words and word-word similarity is computed as the inner product of the corresponding embeddings. In KG2Vec, the number of iterations T = 3 for GCN, and the number of nodes with pretrained word vectors |V ′| = 9000. We test on two types of pre-trained word vectors GloVe (Common crawl, cased 300d) and ConceptNet Numberbatch (300d). KG2Vec shows competitive performance in all test cases. On Card-660 dataset KG2Vec achieves state-of-the-art results by a significant margin. When using ConceptNet embeddings, KG2Vec results in improvements of 7.7% and 6.7% on Pearson’s and Spearman’s correlation coefficients, respectively, when compared to prior state-of-the-art performance (BoS). When using GloVe embeddings, KG2Vec improves upon SemLand by 11% and 17.8% on Pearson’s and Spearman’s correlation coefficients. Considering the fact that Card-660 contains a significant amount of recent OOV words (e.g. “Brexit”), this improvement indicates that KG2Vec’s can leverage upto-date information from knowledge bases. Additionally, this shows that GNNs can effectively cover OOV words and precisely model their semantic meanings. On Stanford Rare Word dataset, KG2Vec is comparable with other state-of-the-art models, suggesting its robustness across various test schemes. Note that the graph used in KG2Vec has a much smaller size compared with knowledge graphs used in SemLand, the WordNet, which has 155,327 words. To fairly evaluate KG2Vec, we include a baseline model that assigns the node feature fv as the final word representations for word wv if wv is not in the pre-trained dictionary. The results are denoted as “Node features” in table 1. In all test cases, KG2Vec improves by a large margin upon this baseline. For example, using GloVe on the Card-660 dataset, KG2Vec’s achieves a performance increase of 14.5% and 14.2% respectively for Pearson’s and Spearman’s coefficients over Node features. This observation suggests that the information aggregation by GNN is critical for embedding imputation and semantic inference. It also indicates that learning from the knowledge graph and its language information is an effective way to parse the semantic meaning of a rare word. 5 Discussion Application on Entity Relations Knowledge Base. Many public knowledge bases consist of 3360 relational data in a tuple format: (entity1, entity2, relation), where entities can be considered as the “nodes” in the graph and relations define the edges. Note that there are different kinds of relations and therefore edges in the graph have different types or labels. To impute the embeddings for entities in such scenario, one can conveniently adapt KG2Vec following Schlichtkrull et al. (2018) by learning different transformations for different types of edges. Adaption to New Vocabularies and Information. Considering the fast growth of vocabularies in the current era, the ability to perform online learning and quick adaptation for embedding imputations is a desired property. One can combine KG2Vec with meta-learning, e.g., MAML in Finn et al. (2017), such that the resulting model can quickly learn the embeddings of newly added nodes (words), or updated node features. 6 Conclusion and Future Work In this paper, we introduce KG2Vec, a graph neural network based approach for embedding imputation of OOV words which makes use of grounded language information. 
Using publicly available information sources like Wikipedia and Wiktionary, KG2Vec can effectively impute embeddings for rare or unseen words. Experimental results show that KG2Vec achieves state-ofthe-art results on the Card-660 dataset. Future research directions include a theoretical explanation of KG2Vec and applications to downstream NLP tasks. Acknowledgments We would like to thank the anonymous reviewers for their valuable feedback. References Dzmitry Bahdanau, Tom Bosc, Stanislaw Jastrzebski, Edward Grefenstette, Pascal Vincent, and Yoshua Bengio. 2017. Learning to compute word embeddings on the fly. CoRR, abs/1706.00286. Antoine Bordes, Sumit Chopra, and Jason Weston. 2014. Question answering with subgraph embeddings. CoRR, abs/1406.3676. Jos de Bruijn, Marc Ehrig, Cristina Feier, Francisco MartnsRecuerda, francois Scharffe, and Moritz Weiten. 2006. Ontology Mediation, Merging, and Aligning, pages 95 – 113. Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wentau Yih, Yejin Choi, Percy Liang, and Luke Zettlemoyer. 2018. Quac : Question answering in context. CoRR, abs/1808.07036. Chelsea Finn, Pieter Abbeel, and Sergey Levine. 2017. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1126–1135. JMLR. org. Thomas R. Gruber. 1993. A translation approach to portable ontology specifications. Knowl. Acquis., 5(2):199–220. N. Guarino. 1998. Formal Ontology in Information Systems: Proceedings of the 1st International Conference June 6-8, 1998, Trento, Italy, 1st edition. IOS Press, Amsterdam, The Netherlands, The Netherlands. Sanda M. Harabagiu, Dan I. Moldovan, Christine Clark, Mitchell Bowden, Andrew Hickl, and Patrick Wang. 2005. Employing two question answering systems in trec 2005. In TREC. He He, Anusha Balakrishnan, Mihail Eric, and Percy Liang. 2017. Learning symmetric collaborative dialogue agents with dynamic knowledge graph embeddings. CoRR, abs/1704.07130. Aur´elie Herbelot and Marco Baroni. 2017. High-risk learning: acquiring new word vectors from tiny data. CoRR, abs/1707.06556. Thomas N Kipf and Max Welling. 2016. Semisupervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907. Thang Luong, Richard Socher, and Christopher Manning. 2013. Better word representations with recursive neural networks for morphology. In Proceedings of the Seventeenth Conference on Computational Natural Language Learning, pages 104–113. Association for Computational Linguistics. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. In Proceedings of the 26th International Conference on Neural Information Processing Systems Volume 2, NIPS’13, pages 3111–3119, USA. Curran Associates Inc. Jane Morris and Graeme Hirst. 1991. Lexical cohesion computed by thesaural relations as an indicator of the structure of text. Comput. Linguist., 17(1):21– 48. Ian Niles and Adam Pease. 2003. Linking lexicons and ontologies: Mapping wordnet to the suggested upper merged ontology. In Proceedings of the 2003 International Conference on Information and Knowledge Engineering (IKE 03), Las Vegas, pages 412–416. 3361 Ekaterina Ovchinnikova, Laure Vieu, Alessandro Oltramari, Stefano Borgo, and Theodore Alexandrov. 2010. Data-driven and ontological analysis of framenet for natural language reasoning. 
In Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC’10), Valletta, Malta. European Language Resources Association (ELRA). Heiko Paulheim. 2017. Knowledge graph refinement: A survey of approaches and evaluation methods. Semantic Web, 8(3):489–508. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In In EMNLP. Mohammad Taher Pilehvar and Nigel Collier. 2017. Inducing embeddings for rare and unseen words by leveraging lexical resources. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 388–393. Association for Computational Linguistics. Mohammad Taher Pilehvar, Dimitri Kartsaklis, Victor Prokhorov, and Nigel Collier. 2018. Card-660: Cambridge Rare Word Dataset – a reliable benchmark for infrequent word representation models. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium. Yuval Pinter, Robert Guthrie, and Jacob Eisenstein. 2017. Mimicking word embeddings using subword rnns. CoRR, abs/1707.06961. Michael Schlichtkrull, Thomas N Kipf, Peter Bloem, Rianne Van Den Berg, Ivan Titov, and Max Welling. 2018. Modeling relational data with graph convolutional networks. In European Semantic Web Conference, pages 593–607. Springer. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. CoRR, abs/1508.07909. Robert Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of general knowledge. In Thirty-First AAAI Conference on Artificial Intelligence. Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. 2018. How powerful are graph neural networks? arXiv preprint arXiv:1810.00826. Kun Xu, Yansong Feng, Songfang Huang, and Dongyan Zhao. 2016. Hybrid question answering over knowledge base and free text. In COLING. Zhitao Ying, Jiaxuan You, Christopher Morris, Xiang Ren, Will Hamilton, and Jure Leskovec. 2018. Hierarchical graph representation learning with differentiable pooling. In Advances in Neural Information Processing Systems, pages 4805–4815. Jinman Zhao, Sidharth Mudgal, and Yingyu Liang. 2018. Generalizing word embeddings using bag of subwords. CoRR, abs/1809.04259.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3362–3367 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3362 The Effectiveness of Simple Hybrid Systems for Hypernym Discovery William Held and Nizar Habash Computational Approaches to Modeling Language Lab New York University Abu Dhabi, UAE {wbh230,nizar.habash}@nyu.edu Abstract Hypernymy modeling has largely been separated according to two paradigms, patternbased methods and distributional methods. However, recent works utilizing a mix of these strategies have yielded state-of-the-art results. This paper evaluates the contribution of both paradigms to hybrid success by evaluating the benefits of hybrid treatment of baseline models from each paradigm. Even with a simple methodology for each individual system, utilizing a hybrid approach establishes new stateof-the-art results on two domain-specific English hypernym discovery tasks and outperforms all non-hybrid approaches in a general English hypernym discovery task. 1 Introduction Discovering word-level hierarchies has long been an important step in constructing language taxonomies. The most important of these hierarchical relationships is hypernymy or the ISArelationship, i.e. ‘chihuahua’ is a ‘dog’, which forms the backbone of word-level taxonomies, most notably WordNet (Fellbaum, 1998). Early works on the modeling of this relationship focused on the practical task of discovering new instances of the hypernymy relationship given a vocabulary and an existing resource with labeled data about hypernymy, described as hypernym discovery by Camacho-Collados (2017). For the purposes of discovery, Hearst (1992) developed a landmark set of lexico-syntactic patterns which indicated hypernymy. There have been many follow-ups on this concept of identifying and utilizing patterns to identify hypernym pairs (Caraballo, 1999; Mann, 2002; Snow et al., 2005, 2006). However, by restricting the sentences of interest to only those which match patterns, even very large datasets with very loose pattern matching will often return small co-occurrence numbers, especially for more indirectly connected hypernym pairs. To tackle the sparsity of pattern-based approaches, recent focus has turned to distributional models of hypernymy. Distributional models are attractive since they use signals drawn from every sentence of training data. Distributional approaches have focused on discovering spatial properties of embedding space which capture the hypernymy relationship (Kotlerman et al., 2010; Yamane et al., 2016; Shwartz et al., 2017; Nickel and Kiela, 2017; Vulic and Mrksic, 2017). The performance of distributional approaches in hypernymy detection shows promise to create a more broad picture of the hypernymy relationship space. Recently, hybrid models of hypernymy, in both discovery and detection, have surpassed the performance of either paradigm individually. Similarly, the current state-of-the-art in hypernymy detection was set by a classifier which integrated information from both pattern data and distributional word embeddings (Shwartz et al., 2016). In hypernym discovery, where purely distributional methods have struggled, a model which utilized a hybrid approach of patterns and distributional representations far and away led the results of a recent SemEval Task (Camacho-Collados et al., 2018; Bernier-Colborne and Barriere, 2018). 
In this paper, we study the benefits of hybrid strategies of hypernymy via a hybrid of extremely simple models of pattern-based and distributional hypernym discovery. We evaluate this model on the English sub-tasks of SemEval 2018 Task 9 for Hypernym Discovery. Overall, our results show that these paradigms have an almost directly complementary effect even when individual models are simple, a result which we support using the degrees of hypernymy each paradigm captures effectively. 3363 2 Pattern-Based Model In order to make our pattern-based approach return a reasonable number of candidate hypernyms, we apply two separate methods to increase the number of candidate hypernyms presented by the pattern based model. Extended Pattern Use First, we utilize a set of 47 extended Hearst Patterns as collected in Seitner et al. (2016). Additionally, we consider ngram terms from our vocabulary to inherently contain a pattern co-occurrence with their sub-terms, e.g., nuclear physics →hyponym physics. In English, this construction is common and accounts for a high number of “co-occurrences” between hyponyms and hypernyms. All input sentences are tested by regular expression representations of these 47 patterns, yielding a table of candidates for the hypernymy relationship, in the form of xhypo, yhyper, and the number of times the pairs co-occurred in any of the extended Hearst Patterns. This stage is fully unsupervised but aims to extract lexico-syntactic information which indicates direct hypernymy. This raw co-occurrence table can be used to discover hypernym terms, with hypernym candidates scored based on their raw counts. Hearst Matrix Singular Value Decomposition While this raw co-occurrence table can be used to discover candidate hypernyms, it still suffers from a high amount of sparsity even for terms which occur in patterns. Roller et al. (2018) showed exactly that performing singular value decomposition on co-occurrence tables can yield recall improvements, oftentimes outperforming state-ofthe-art distributional methods for the hypernym detection task. To modify this method for the hypernym discovery task, we simply sort all vocabulary terms that occur in Hearst Patterns according to the following metric: sp(x, y) = U T x ΣrVy where U, Σ, and V are taken from the singular value decomposition of the Hearst Pattern cooccurrence matrix and Ux, Vy are the row vector and the column vector for the hyponym and the hypernym respectively. Then, a similarity cutoff is tuned to maximize the F1 score of our predicted hypernyms on any labeled data that we have. For words which never occur in any patterns, we still lack the ability to generate any reasonable candidates which causes this approach to still suffer from low total recall due to query terms never seen in patterns. 3 Distributional Modeling with Hypernyms from Nearest Neighbor For our distributional methodology, we choose the simplest possible supervised approach to hypernym discovery - a single nearest neighbor approach - in which the hypernyms for each query term are transferred from their nearest neighbor in the training data. This approach is motivated by the work of Snow et al. (2006) where linking a new hyponym to a similar known hyponym was shown to effectively encode an enormous amount of signal about correct hypernyms. Our method is as follows. Suppose we have a training set H consisting of a number of hyponyms and their corresponding hypernyms. 
H : {Hypoi : Hyper1 i ...Hyperj i } For a given query term Q, we find the nearest neighbor Hyponn from the training set by the cosine similarity of vector representations of the words. The hypernyms of the nearest neighbor are then sorted by descending frequency in the training set, such that the words which served as hypernyms to more known terms in the training data come first. This sorting metric serves as a heuristic of the generality of the hypernyms of the nearest neighbor. Since the nearest neighbor is unlikely to be an exact synonym, it is more likely that the query and its nearest neighbor share more general hypernyms, those that would appear at a lower depth in a taxonomy, than they are to share extremely specific hypernyms. Additionally, a similarity cutoff point is trained on tuning data, such that if there is no nearest neighbor with greater similarity the cutoff point, the nearest neighbors strategy simply returns the most frequent hypernyms from the entire training set. Contrasting to the Hearst Patterns, our distributional method instead tries to provide as many reasonable guesses to hypernyms as possible unless the nearest neighbor is very far away. Embedding Methodology Details In theory, any word embedding model can be used for this 3364 General Medical Music Model Variant MAP MRR P@5 MAP MRR P@5 MAP MRR P@5 Count Hearst Patterns 4.60 11.70 4.10 14.99 43.18 13.70 7.65 26.14 6.76 SVD Hearst Patterns 6.19 15.12 5.65 15.80 47.01 14.53 8.89 29.63 8.05 Hypernyms of Nearest Neighbor 9.85 24.56 8.76 29.57 48.18 34.10 38.65 72.77 38.45 Hybrid of Raw Count & NN 14.82 32.61 13.80 35.29 63.59 38.73 28.22 61.26 30.67 Hybrid of SVD & NN 15.97 34.07 15.00 37.85 64.47 40.19 54.62 77.24 55.08 Table 1: Comparison of model variants on all three sub-tasks of SemEval 2018 Task 9. nearest neighbor task as it does not explicitly take advantage of any particular features of a particular word embedding. However, in practice, we found that the FastText (Bojanowski et al., 2016) algorithm is preferable since even out of vocabulary query words are able to be given reasonable embeddings due to the meaningful embeddings of sub-strings that FastText provides. This guarantees that the nearest neighbor approach always gives some form of candidate hypernyms, even for words which are out of vocabulary or word n-grams which don’t have specific embeddings. For the purposes of evaluation, we used 100dimensional embeddings with common n-grams joined together. 4 Hybrid Approach Ultimately, we combine the methods in order to capture the valuable elements of each. While pattern-based approaches suffer from sparseness, they do tend to generate high precision results when available. Conversely, the nearest neighbor approach almost always generates a fair number of candidate hypernyms but suffers from low precision unless the nearest neighbor is an exact synonym. Therefore, we propose the following ordering rule for candidate hypernyms. When the pattern-based approach yields results, we rank them as first. Then, the hypernyms of the nearest neighbor are added until our total desired number of candidates is reached. Since this is a supervised setting, we tune a cutoff similarity value for the pattern-based approaches as described in Section 2. 5 Experiments & Results We evaluate our model on SemEval 2018 Task 9, the only existing benchmark for the hypernym discovery task. Specifically, we focus on the 3 English sub-tasks: general English, Medical literature, and Music literature. 
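Before turning to the results, the nearest-neighbour transfer and the hybrid ordering rule described above can be condensed into a short sketch. This is illustrative only; the data structures, the tuned cutoffs and the function name are assumptions rather than the shared-task submission code.

```python
# Sketch of the hybrid candidate ranking: pattern-based candidates (if any pass
# the tuned score cutoff) come first, followed by the hypernyms of the query's
# nearest training hyponym, sorted by their frequency in the training set.
import numpy as np

def rank_candidates(query_vec, query_pattern_scores, train_vecs, train_hypernyms,
                    hypernym_freq, pattern_cutoff, sim_cutoff, k=15):
    # 1) Pattern-based candidates above the tuned SVD-score cutoff.
    ranked = [w for w, s in sorted(query_pattern_scores.items(),
                                   key=lambda x: -x[1]) if s >= pattern_cutoff]

    # 2) Nearest training hyponym by cosine similarity.
    sims = train_vecs @ query_vec / (
        np.linalg.norm(train_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-9)
    nn_idx = int(np.argmax(sims))
    if sims[nn_idx] >= sim_cutoff:
        neighbours = sorted(train_hypernyms[nn_idx], key=lambda h: -hypernym_freq[h])
    else:
        # No confident neighbour: back off to the most frequent hypernyms overall.
        neighbours = sorted(hypernym_freq, key=lambda h: -hypernym_freq[h])

    for h in neighbours:
        if h not in ranked:
            ranked.append(h)
        if len(ranked) >= k:
            break
    return ranked[:k]
```

The two thresholds correspond to the Hearst-score and similarity cutoffs tuned on the trial data.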
Each task comes with a separate corpus of unlabeled text data, training and trial data of hyponyms labeled with their complete list of hypernyms, and a vocabulary of valid hypernyms. The final results of a model are tested on a dataset of equal size to the training data. Further details can be found in Camacho-Collados et al. (2018)’s paper describing the tasks and their respective data. For each sub-task, we only use the data from the specific sub-task we are evaluating. The provided trial dataset is used to tune our cutoff points for Hearst Pattern frequency and minimum similarity for the nearest neighbor hypernyms approach. For each query word, we propose 15 candidates ranked as described in Section 4. Our initial experiments, shown in Table 1, compare all variations of our described systems on all tasks. The hybrid models consistently outperform the individual independent models by a significant margin, except for the Music task where the raw count method seems to negatively impact the hypernyms of nearest neighbor approach. The fact that our simple combination of these two models yields improved results is a positive indication that they each contribute separate signals to the model. For the three English sub-tasks, we evaluate our model using the evaluation script from the SemEval task, and compare our results on Mean Average Precision, Mean Reciprocal Rank, and Precision at 5, the metrics primarily discussed in the original task. We compare our system to the CRIM (Bernier-Colborne and Barriere, 2018), 300-sparsans (Berend et al., 2018), vanilla Taxoembed (Espinosa-Anke et al., 2016), and most frequent hypernym systems. The first two were the only systems to achieve state-of-the-art results on the above metrics for the three English subtasks, while the latter two represent the best baselines from the shared task. The comparison against these models on all three sub-tasks can be found in Table 2. 3365 General Medical Music Model MAP MRR P@5 MAP MRR P@5 MAP MRR P@5 Hybrid of SVD & NN(Our Model) 15.97 34.07 15.00 37.85 64.47 40.19 54.62 77.24 55.08 CRIM (Bernier-Colborne and Barriere, 2018) 19.78 36.10 19.03 34.05 54.64 36.77 40.97 60.93 41.31 vTE∗(Espinosa-Anke et al., 2016) 10.60 23.83 9.91 18.84 41.07 20.71 12.99 39.36 12.41 300-sparsans (Berend et al., 2018) 8.95 19.44 8.63 20.75 40.60 21.43 29.54 46.43 28.86 Most Frequent Hypernym∗ 8.77 21.39 7.81 28.93 35.80 34.20 33.32 51.48 35.76 Table 2: Results for all three English sub-tasks in SemEval 2018 Task 9. Baselines are marked with *, state-ofthe-art is marked in bold. Our simple hybrid model outperforms all systems in the competition on the general English hypernym discovery task except for CRIM. In the general task, we find it worth noting that there is a significant performance gap between our hybrid approach and all non-hybrid models, despite the simplicity of our model. The state-of-the-art model, CRIM, is also a hybrid model, but it makes much more robust use of the larger training set provided in the general English sub-task. Perhaps more surprising, our model yields new state-of-the-art results for the Music and Medical sub-domain tasks. As these approaches are both much smaller tasks with around 1/3rd of the training data, we see that our model is able to make effective use out of smaller datasets as well. 6 Analysis In our goal of evaluating hybrid models in isolation, we quantitatively analyze why these paradigms are beneficial in concert and manually analyze where these models fail and perform well. 
Hypernymy Distance Analysis In order to explain the high degree of compatibility the hybrid model highlights, we explore the idea that each model is modeling not only signal in support of the same hypernyms but tends to model wholly different types of hypernymy. In Section 3, we discussed the intuitions behind our sorting method that optimizes the nearest neighbor to rank general hypernyms first, as these are more likely to also apply to the query term. By contrast, the Hearst Patterns are more likely to occur when the query and hypernym are directly related. Our intuitions about the type of information captured by each model state that nearest neighbors should effectively yield higher portions of the taxonomy, while Hearst Patterns will link direct hypernyms. In Table 3, we support this by calculating the average length of the shortest path between the hyponym and the proposed hypernym for each model. The metric is not dramatic but it clearly separates the two approaches. Correctly predicted hypernyms from the nearest neighbor approach lie on average around one step further away on Wordnet from their query words than our correctly predicted hypernyms from the Hearst Patterns.1 Manual Error Analysis In order to more fully understand the contributions of each model to our results, we perform manual error analysis on a randomly selected subset of the test data. 100 examples were selected from the General sub-task and 50 examples each were taken from the Music and Medical sub-tasks. Results are in Table 4. Within these examples, each candidate is labeled with which system yielded the answer. We also categorize certain types of error into their own class. Overall, Hearst Pattern candidates account for 20% of all candidates and have a precision of 18%. Nearest Neighbor candidates are 80% of all candidates and also have a precision of 18%. The full Hearst Pattern precision numbers are 10%, 38% and 68% for the General, Music and Medical subtasks, respectively. The Nearest Neighbor precision numbers are 9%, 20%and 29%, respectively. In all datasets, Hearst Patterns alone almost never capture all hypernyms, but especially in special topic fields they show high precision results, as projected by previous work. In the general subtask, Hearst Patterns struggle more, generally when the query term is a term that is used in versions of the patterns that do not translate well to actual hypernymy, e.g., consumption→bad pattern hyponym factor from the phrase 1This distance is calculated when both terms exist in WordNet. For terms which lie in the other knowledge graphs used to construct the SemEval Task 9 dataset, we don’t calculate a distance. 3366 Model All Predictions Correct Predictions Raw Hearst Patterns 6.33 3.64 SVD Hearst Patterns 6.33 3.63 Nearest Neighbor 7.54 4.81 Table 3: Average length of shortest path between predicted hypernyms and their input hyponyms. Correct Incorrect Near Miss Gold Error Dataset HP NN HP NN HP NN HP NN General (100) 24 58 384 893 33 39 25 39 Music (50) 30 188 50 468 7 0 5 2 Medical (50) 21 141 8 558 2 0 3 0 Table 4: Error analysis of all all candidate hypernyms in a random sub-sample (number of queries in parentheses). ”Consumption is a factor in...”. Altogether, our best answers largely come from either very similar nearest neighbors, or hybrid instances where the Hearst Patterns capture a few specific hypernyms and rough hypernyms are captured by the nearest neighbor. 
We view the latter instances as ideal, since they depend neither on a perfect nearest neighbor nor on patterns capturing indirect hypernyms. For example, the query Fudan University has three gold hypernyms {university, school, educational institution}, university and school are returned by the Hearst Patterns and educational institution is returned by the nearest neighbor. In the general sub-task, the selection of a bad nearest neighbor when no Hearst Patterns are found is the source of a large number of major failures. Qualitative analysis generally shows that this occurs when the embedding for a rarely used query word must rely on its sub-string embedding from fastText, leading to a very incorrect nearest neighbor that still has high confidence, e.g., Queen Elizabeth→bad nearest neighborElizabeth Einstein. In the more specific sub-tasks, this type of error is less common as the domain is constrained in scope, making wildly incorrect nearest neighbors less common. However, in these more specific tasks, outliers with no strong nearest neighbor are much more frequent as the number of low confidence nearest neighbors increases in these tasks. In these cases, the model defaults to giving the most frequent hypernyms from training since the confidence cutoffs of neither Hearst Patterns nor the nearest neighbor similarity are met. We also separate out two interesting categories of error: gold errors (occurring 2.5% of the time) and near misses (occurring 2.9% of the time). These categories have similar properties and generally form within specific queries. The prior occurs primarily when a different sense is captured than the sense in the gold data itself, e.g., cereal→gold false negative{crop, grain, snack, foodstuff, carbohydrate}. The latter occurs primarily when an incorrect, but close, family of hypernyms is obtained from the data, e.g., microscope→near miss candidates{technology, facility, observer, measuring device}. 7 Conclusions and Future Work We studied the impact of utilizing a hybrid of pattern-based and distributional models for hypernym discovery by hybridizing simple models from each paradigm. Our results show that hybrid models of even simple systems are able to perform surprisingly well, consistently outperforming more robust single strategy models. Interestingly, a manual error analysis and metrics taken from WordNet suggests each paradigm models different types of hypernymy. We conclude that further work in hypernym discovery should utilize signals taken from both historical paradigms of hypernymy modeling, not only to improve confidence in answers but also to capture both direct and indirect hypernym relationships. References G´abor Berend, M´arton Makrai, and Peter F¨oldi´ak. 2018. 300-sparsans at semeval-2018 task 9: Hypernymy as interaction of sparse attributes. pages 928–934. Gabriel Bernier-Colborne and Caroline Barriere. 2018. Crim at semeval-2018 task 9: A hybrid approach 3367 to hypernym discovery. In Proceedings of The 12th International Workshop on Semantic Evaluation, pages 725–731. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2016. Enriching word vectors with subword information. arXiv preprint arXiv:1607.04606. Jose Camacho-Collados. 2017. Why we have switched from building full-fledged taxonomies to simply detecting hypernymy relations. arXiv preprint arXiv:1703.04178. Jose Camacho-Collados, Claudio Delli Bovi, Luis Espinosa Anke, Sergio Oramas, Tommaso Pasini, Enrico Santus, Vered Shwartz, Roberto Navigli, and Horacio Saggion. 2018. 
Semeval-2018 task 9: Hypernym discovery. In Proceedings of The 12th International Workshop on Semantic Evaluation, pages 712–724. Sharon A Caraballo. 1999. Automatic construction of a hypernym-labeled noun hierarchy from text. In Proceedings of the 37th annual meeting of the Association for Computational Linguistics. Luis Espinosa-Anke, Jose Camacho-Collados, Claudio Delli Bovi, and Horacio Saggion. 2016. Supervised distributional hypernym discovery via domain adaptation. In Conference on Empirical Methods in Natural Language Processing; 2016 Nov 1-5; Austin, TX. Red Hook (NY): ACL; 2016. p. 424-35. ACL (Association for Computational Linguistics). Christiane Fellbaum. 1998. Wordnet: An electronic lexical database and some of its applications. Marti A Hearst. 1992. Automatic acquisition of hyponyms from large text corpora. In Proceedings of the 14th conference on Computational linguisticsVolume 2, pages 539–545. Association for Computational Linguistics. Lili Kotlerman, Ido Dagan, Idan Szpektor, and Maayan Zhitomirsky-Geffet. 2010. Directional distributional similarity for lexical inference. Natural Language Engineering, 16(4):359–389. Gideon S Mann. 2002. Fine-grained proper noun ontologies for question answering. In Proceedings of the 2002 workshop on Building and using semantic networks-Volume 11, pages 1–7. Association for Computational Linguistics. Maximillian Nickel and Douwe Kiela. 2017. Poincar´e embeddings for learning hierarchical representations. In Advances in Neural Information Processing Systems, pages 6341–6350. Stephen Roller, Douwe Kiela, and Maximilian Nickel. 2018. Hearst patterns revisited: Automatic hypernym detection from large text corpora. CoRR, abs/1806.03191. Julian Seitner, Christian Bizer, Kai Eckert, Stefano Faralli, Robert Meusel, Heiko Paulheim, and Simone Paolo Ponzetto. 2016. A large database of hypernymy relations extracted from the web. Vered Shwartz, Yoav Goldberg, and Ido Dagan. 2016. Improving hypernymy detection with an integrated path-based and distributional method. arXiv preprint arXiv:1603.06076. Vered Shwartz, Enrico Santus, and Dominik Schlechtweg. 2017. Hypernyms under siege: Linguistically-motivated artillery for hypernymy detection. pages 65–75. Rion Snow, Daniel Jurafsky, and Andrew Y Ng. 2005. Learning syntactic patterns for automatic hypernym discovery. In Advances in neural information processing systems, pages 1297–1304. Rion Snow, Daniel Jurafsky, and Andrew Y Ng. 2006. Semantic taxonomy induction from heterogenous evidence. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, pages 801–808. Association for Computational Linguistics. Ivan Vulic and Nikola Mrksic. 2017. Specialising word vectors for lexical entailment. CoRR, abs/1710.06371. Josuke Yamane, Tomoya Takatani, Hitoshi Yamada, Makoto Miwa, and Yutaka Sasaki. 2016. Distributional hypernym generation by jointly learning clusters and projections. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 1871– 1879.
2019
327
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3368–3373 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3368 BERT-based Lexical Substitution Wangchunshu Zhou1 ∗ Tao Ge2 Ke Xu1 Furu Wei2 Ming Zhou2 1Beihang University, Beijing, China 2Microsoft Research Asia, Beijing, China [email protected], [email protected] {tage, fuwei, mingzhou}@microsoft.com Abstract Previous studies on lexical substitution tend to obtain substitute candidates by finding the target word’s synonyms from lexical resources (e.g., WordNet) and then rank the candidates based on its contexts. These approaches have two limitations: (1) They are likely to overlook good substitute candidates that are not the synonyms of the target words in the lexical resources; (2) They fail to take into account the substitution’s influence on the global context of the sentence. To address these issues, we propose an end-toend BERT-based lexical substitution approach which can propose and validate substitute candidates without using any annotated data or manually curated resources. Our approach first applies dropout to the target word’s embedding for partially masking the word, allowing BERT to take balanced consideration of the target word’s semantics and contexts for proposing substitute candidates, and then validates the candidates based on their substitution’s influence on the global contextualized representation of the sentence. Experiments show our approach performs well in both proposing and ranking substitute candidates, achieving the state-of-the-art results in both LS07 and LS14 benchmarks. 1 Introduction Lexical substitution (McCarthy and Navigli, 2007) aims to replace a target word in a sentence with a substitute word without changing the meaning of the sentence, which is useful for many Natural Language Processing (NLP) tasks like text simplification and paraphrase generation. One main challenge in this task is proposing substitutes that not only are semantically consistent with the original target word and fits in the ∗This work was done during the first author’s internship at Microsoft Research Asia. Sentence The wine he sent to me as my birthday gift is too strong to drink. WordNet hard, solid, s7ff, firm BERT (keep target word) stronger, strongly, hard, much BERT (mask target word) hot, thick, sweet, much BERT (embedding dropout) tough, powerful, potent, hard Sentence The wine he sent to me as my birthday gift is too strong to drink. The wine he sent to me as my birthday gift is too hot (0.81) to drink. (0.86) The wine he sent to me as my birthday gift is too tough (0.91) to drink. (0.92) The wine he sent to me as my birthday gift is too powerful (0.91) to drink. (0.93) (a) (b) Figure 1: (a) WordNet and original BERT cannot propose the valid substitute powerful in their top-K results but applying target word embedding dropout enables BERT to propose it; (b) Undesirable substitutes (e.g., hot, tough) tend to change the contextualized representation of the sentence more than good substitutes (e.g., powerful). The numbers after the words are the cosine similarity of the words’ contextualized vector to the original target words; while the numbers after the sentence are the similarity of the sentence’s contextualized representation before and after the substitution, defined in Eq (2). context but also preserve the sentence’s meaning. 
Most previous approaches to this challenge first obtain substitute candidates by picking synonyms from manually curated lexical resources as candidates, and then rank them based on their appropriateness in context, or instead ranking all words in the vocabulary to avoid the usage of lexical resources. For example, knowledge-based lexical substitution systems (Yuret, 2007; Hassan et al., 2007) use pre-defined rules to score substitute candidates; vector space modeling approach (Erk and Pad´o, 2008; Dinu and Lapata, 2010; Thater et al., 2010; Apidianaki, 2016) uses distributional sparse vector representations based on the syntactic context; substitute vector approach (Yuret, 2012; Melamud et al., 2015b) comprises the potential fillers for the target word slot in that context; word/context embedding similarity approach (Melamud et al., 2015a; Roller and Erk, 3369 2016; Melamud et al., 2016) uses the similarity of word embeddings to rank substitute words; and supervised learning approaches (Biemann, 2013; Szarvas et al., 2013a,b; Hintz and Biemann, 2016) uses delexicalized features to rank substitute candidates. Although these approaches work well in some cases, they have two key limitations: (1) they rely heavily on lexical resources. While the resources can offer synonyms for substitution, they are not perfect and they are likely to overlook some good candidates, as Figure 1(a) shows. (2) most previous approaches only measure the substitution candidates’ fitness given the context but they do not consider whether the substitution changes the sentence’s meaning. Take Figure 1(b) as an example, although tough may fit in the context as well as powerful, it changes the contextualized representation of the sentence more than powerful. Therefore, it is not so good as powerful for the substitution. To address the above issues, we propose a novel BERT-based lexical substitution approach, motivated by that BERT (Devlin et al., 2018) not only can predict the distribution of a masked target word conditioned on its bi-directional contexts but also can measure two sentences’ contextualized representation’s similarity. To propose substitute candidates for a target word in a sentence, we introduce a novel embedding dropout mechanism to partially mask the target word and use BERT to predict the word at the position. Compared to fully masking or keeping the target word, partially masking with embedding dropout allows BERT to take a balanced consideration of target word’s semantics and its contexts, helping avoid generating substitute candidates that are either semantically inconsistent with the target word or unfit in the contexts, as Figure 1(a) shows. To validate a substitute candidate, we propose to evaluate a candidate’s fitness based on the substitution’s influence on the contextualized representation of the sentence, which avoids selecting a substitute that changes the sentence’s meaning much, as Figure 1(b) illustrates. We conduct experiments on the official LS07 and LS14 benchmarks. The results show that our approach substantially outperforms previous approaches in both proposing and validating substitute candidates, achieving new stateof-the-art results in both datasets. The contributions of our paper are as follows: • We propose a BERT-based end-to-end lexiunmask dropout mask strong strong [MASK] … is too strong to drink. Figure 2: Unmasking, masking and partially masking the target word through target embedding dropout. 
cal substitution approach without relying on any annotated data or external linguistic resources.
• Based on BERT, we introduce target word embedding dropout to help substitute candidate proposal, and a substitute candidate validation method based on the substitution's influence on the global contexts.
• Our approach largely advances the state-of-the-art results of lexical substitution on both the LS07 and LS14 benchmarks.

2 BERT-based Lexical Substitution

BERT (Bidirectional Encoder Representations from Transformers) (Devlin et al., 2018) is a bidirectional transformer encoder (Vaswani et al., 2017) trained with the objectives of masked language modeling and next-sentence prediction, which has proven effective in various NLP tasks. In this section, we present how to effectively leverage BERT for lexical substitution.

2.1 Substitute Candidate Proposal

As BERT is a bi-directional language model trained by masking the target word, it can be used to propose a substitute candidate to reconstruct the sentence. In practice, however, if we mask the target word and let BERT predict the word at that position, BERT is very likely to generate candidates that are semantically different from the original target word although they fit in the context; on the other hand, if we do not mask the target word, approximately 99.99% of the predicted probability distribution falls onto the original target word, making it unreliable to choose alternative candidates from the remaining 0.01% probability space, as Figure 1 shows. As a trade-off between these two extreme cases, we propose to apply embedding dropout to partially mask the target word. It forces a portion of the dimensions of the target word's input embedding to zero, as illustrated in Figure 2. In this way, BERT only receives vague information from the target word and thus has to consider other contexts to reconstruct the sentence, which improves substitute candidate proposal, as Figure 1(a) shows. Formally, for the target word x_k to be replaced in the sentence x = (x_1, \cdots, x_k, \cdots, x_L), we define s_p(x'_k \mid x, k) as the proposal score for choosing x'_k as the substitution for x_k:

s_p(x'_k \mid x, k) = \log \frac{P(x'_k \mid \tilde{x}, k)}{1 - P(x_k \mid \tilde{x}, k)}    (1)

where P(\cdot \mid \tilde{x}, k) is the distribution over words at the kth position predicted by BERT given \tilde{x}, and \tilde{x} is the same as x except that the word at its kth position is partially masked with embedding dropout. The denominator is the probability that the prediction is not x_k, normalizing P(x'_k \mid \tilde{x}, k) against all the words in the vocabulary excluding x_k.

2.2 Substitute Candidate Validation

After we propose substitute candidates, we need to validate them because not all proposed candidates are appropriate. As Figure 1(b) shows, a proposed candidate (e.g., tough) may change the sentence's meaning. To avoid such cases, we propose to evaluate a candidate's fitness by comparing the sentence's contextualized representation before and after the substitution. Specifically, for a word x_i, we use the concatenation of its representations in the top four layers of BERT as its contextualized representation. We denote the sentence after the substitution as x' = (x_1, \cdots, x'_k, \cdots, x_L).
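As a concrete illustration of the proposal step in Section 2.1, the Eq (1) score can be sketched as follows. The sketch assumes the HuggingFace transformers implementation of BERT; the function and variable names are ours, the target word is assumed to map to a single word piece at position k, and the use of standard (inverted) dropout is our reading of the partial masking rather than a claim about the authors' exact code.

```python
# Hypothetical sketch of the Eq (1) proposal score with target-embedding dropout.
import torch
import torch.nn.functional as F
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-large-uncased")
model = BertForMaskedLM.from_pretrained("bert-large-uncased").eval()

@torch.no_grad()
def proposal_scores(sentence, k, dropout_p=0.3):
    """Return the Eq (1) score for every vocabulary word as a 1-D tensor."""
    enc = tokenizer(sentence, return_tensors="pt")
    ids = enc["input_ids"]
    embeds = model.get_input_embeddings()(ids).clone()        # [1, L, d]
    # partially mask the target word's input embedding
    embeds[0, k] = F.dropout(embeds[0, k], p=dropout_p, training=True)
    logits = model(inputs_embeds=embeds,
                   attention_mask=enc["attention_mask"]).logits
    probs = logits[0, k].softmax(dim=-1)                      # P(. | x~, k)
    target_id = ids[0, k]
    scores = probs.log() - torch.log1p(-probs[target_id])     # log P(x'_k) - log(1 - P(x_k))
    scores[target_id] = float("-inf")                         # exclude the original word
    return scores

# top-50 substitute candidates for the word piece at position k
scores = proposal_scores("The wine he sent me is too strong to drink.", k=8)
candidates = tokenizer.convert_ids_to_tokens(scores.topk(50).indices.tolist())
```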
The validation score for the substitution of x'_k is defined in Eq (2):

s_v(x'_k \mid x, k) = \mathrm{SIM}(x, x'; k)    (2)

where \mathrm{SIM}(x, x'; k) is BERT's contextualized representation similarity of x and x', which is defined as follows:

\mathrm{SIM}(x, x'; k) = \sum_{i=1}^{L} w_{i,k} \times \Lambda\big(h(x_i \mid x), h(x'_i \mid x')\big)

where h(x_i \mid x) is BERT's contextualized representation of the ith token in the sentence x and \Lambda(a, b) is the cosine similarity of vectors a and b. w_{i,k} is the average self-attention score over all heads in all layers from the ith token to the kth position in x, which is used for weighting each position based on its semantic dependency on x_k. In this way, we can use s_v(x'_k \mid x, k) to measure the influence of the substitution x_k \rightarrow x'_k on the semantics of the sentence. Undesirable substitute candidates like hot and tough in Figure 1(b) will receive a lower s_v and thus fail in ranking, while appropriate candidates like powerful will have a high s_v and be preferred. In practice, we consider both the proposal score s_p in Eq (1) and the validation score s_v in Eq (2) for the overall recommendation of a candidate:

s(x'_k \mid x, k) = s_v(x'_k \mid x, k) + \alpha \times s_p(x'_k \mid x, k)    (3)

where \alpha is the weight for the proposal score.

3 Experiments

3.1 Experimental Setting

We evaluate our approach on the SemEval 2007 dataset (McCarthy and Navigli, 2007) (denoted as LS07) and the CoinCo dataset (Kremer et al., 2014) (denoted as LS14), which are the most widely used benchmark datasets for lexical substitution evaluation. LS07 consists of 201 target word types, each of which has 10 instances in different contexts (i.e., sentences); LS14 provides the same kind of data as LS07 but is much larger, with 4,255 target word types in over 15K sentences. We use the official evaluation metrics best, best-mode, oot, and oot-mode of the SemEval 2007 task, as well as Precision@1. Among them, best, best-mode and Precision@1 evaluate the quality of the best predictions, while oot (out-of-ten) and oot-mode evaluate the coverage of the gold substitutes in the 10-best predictions. We use the uncased BERT-large model of Devlin et al. (2018) in our experiments. We use the LS07 trial set as our development set for tuning the hyperparameters of our model. Empirically, we set the dropout ratio of the target word's embedding to 0.3 and the weight \alpha in Eq (3) to 0.01. For each test instance, we propose 50 candidates using the approach in Section 2.1 and validate and rank them by Eq (3). As the embedding dropout introduces randomness into the final results, we repeat our experiments 5 times and report average scores with standard deviations.

3.2 Experimental Results

Table 1 shows the results of our approach as well as the state-of-the-art approaches on the LS07 and LS14 benchmarks.
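Before turning to the numbers, Eq (2) and Eq (3) can be made similarly concrete. The sketch below reuses the model, tokenizer, and proposal_scores function from the previous sketch; extracting hidden states and attention maps via output_hidden_states/output_attentions is an assumption about the library rather than the authors' code, and it assumes the substitute is a single word piece so the two sentences align position by position.

```python
# Hedged sketch of the Eq (2) validation score and the Eq (3) overall score.
@torch.no_grad()
def encode(sentence):
    enc = tokenizer(sentence, return_tensors="pt")
    out = model(**enc, output_hidden_states=True, output_attentions=True)
    # contextualised representation: concatenation of the top four layers
    reps = torch.cat(out.hidden_states[-4:], dim=-1)[0]        # [L, 4d]
    # self-attention averaged over all layers and heads: attn[i, j] = i -> j
    attn = torch.stack(out.attentions).mean(dim=(0, 2))[0]     # [L, L]
    return reps, attn

def validation_score(sentence, substituted, k):
    reps_x, attn_x = encode(sentence)
    reps_s, _ = encode(substituted)           # assumes identical tokenised length
    weights = attn_x[:, k]                    # w_{i,k}: attention from token i to k
    cos = F.cosine_similarity(reps_x, reps_s, dim=-1)
    return float((weights * cos).sum())

def overall_score(sentence, substituted, k, alpha=0.01):
    sp = proposal_scores(sentence, k)         # Eq (1) scores over the vocabulary
    cand_id = tokenizer(substituted, return_tensors="pt")["input_ids"][0, k]
    return validation_score(sentence, substituted, k) + alpha * float(sp[cand_id])
```

With \alpha = 0.01 and dropout ratio 0.3 as in Section 3.1, ranking the top-50 proposed candidates by overall_score mirrors the procedure used in the experiments.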
Our approach substantially outperforms all previous approaches in both 3371 Method Resource best best-m oot oot-m P@1 LS07 our approach None 20.3±0.02 34.2±0.02 55.4±0.03 68.4±0.02 51.1±0.02 substitute vector (Melamud et al., 2015b) None 12.7 21.7 36.4 52.0 balAddCos (Melamud et al., 2015a) None 8.1 13.4 27.4 39.1 13.4 transfer learning (Hintz and Biemann, 2016) WordNet 17.2 48.8 supervised learning (Szarvas et al., 2013b) WordNet 15.9 48.8 40.8 KU (knowledge-based) (Yuret, 2007) multiple resources 12.9 20.7 46.2 61.3 UNT (knowledge-based) (Hassan et al., 2007) multiple resources 12.8 20.7 49.2 66.3 LS14 our approach None 14.5±0.01 33.9±0.02 45.9±0.02 69.9±0.02 56.3±0.01 substitute vector (Melamud et al., 2015b) None 8.1 17.4 26.7 46.2 balAddCos (Melamud et al., 2015a) None 5.6 11.9 20.0 33.3 11.8 Table 1: Results on LS07 and LS14. For all the metrics, the higher, the better. For substitution vector and balAddCos, they use all the words in the vocabulary as the substitution candidates. Method best best-m oot oot-m P@1 LS07 our approach 20.3 34.2 55.4 68.4 51.1 - w/o sp (Keep) 18.9 32.6 51.7 63.5 48.6 - w/o sp (Mask) 16.2 27.5 46.4 57.9 43.3 - w/o sp (WordNet) 15.9 27.1 45.9 57.1 42.8 - w/o sv 12.1 20.2 40.8 56.9 13.1 BERT (Keep) 9.2 16.3 37.3 52.2 9.2 BERT (Mask) 8.6 14.2 33.2 48.9 5.7 LS14 our approach 14.5 33.9 45.9 69.9 56.3 - w/o sp (Keep) 13.7 31.4 41.3 63.5 53.1 - w/o sp (Mask) 11.3 26.7 36.2 59.1 47.1 - w/o sp (WordNet) 11.0 26.3 35.9 58.7 46.3 - w/o sv 9.1 19.7 33.5 56.9 14.3 BERT (Keep) 8.3 17.2 31.1 54.4 11.2 BERT (Mask) 7.6 15.4 38.5 51.3 7.6 Table 2: Ablation study results of our approach. BERT (Keep/Mask) are the baselines that uses BERT unmasking/masking the target word to propose candidates and rank by the proposal scores. Remember that our approach is a linear combination of proposal score sp and validation score sv, as in Eq (3). In the baselines “w/o sp”, we alternatively use BERT (Keep), BERT (Mask) or WordNet to propose candidates. benchmarks, even those trained through supervised learning with external resources (Szarvas et al., 2013b), in terms of all the five metrics. Though our approach introduces randomness due to the embedding dropout, no large fluctuation is observed in our results. For understanding the improvement, we conduct an ablation test and show the result in Table 2. According to Table 2, we observe that the original BERT cannot perform as well as the previous state-of-the-art approaches by its own. Applying embedding dropout to BERT improves the model, allowing it to achieve 13.1% and 14.3% P@1 in LS07 and LS14 respectively. When we further add our candidate valuation method in Section 2.2 to validate the candidates, its performance is significantly improved. Furthermore, it is clear that our substitute candidate proposal method is much betMethod LS07 LS14 our approach 60.5 57.6 - w/o sv 55.3 52.2 - w/o sp 58.3 54.8 context2vec (Melamud et al., 2016) 56.0 47.9 substitute vector (Melamud et al., 2015b) 55.1 50.2 addcos (Melamud et al., 2015a) 52.9 48.3 PIC (Roller and Erk, 2016) 52.4 48.3 vector space modeling (Kremer et al., 2014) 52.5 47.8 transfer learning (Hintz and Biemann, 2016) 51.9 supervised learning (Kremer et al., 2014) 55.0 BERT (word similarity) 55.2 52.1 Table 3: GAP scores in the substitute ranking subtask. Note that for the baseline w/o sp, we do not need to propose candidates using BERT like Table 2 since candidates are given in advance in the ranking subtask. 
BERT (word similarity) ranks candidates by the cosine similarity of BERT contextualized representations of the original target word and a substitute candidate. We do not compare to Apidianaki (2016) as it only evaluates on a sample of the test data in a different setting. ter than WordNet for candidate proposal when we compare our approach to the -w/o sp (WordNet) baseline where candidates are obtained by WordNet and validated by our validation approach. Also, we evaluate our approach in the substitute ranking subtask of LS07 and LS14. In the ranking subtask, a system does not need to propose candidates by itself; instead, the substitute candidates for each test instance are given in advance, either from lexical resources (e.g. wordnet) or pooled substitutes. Following prior work, we use GAP score (Kishida, 2005) for evaluation in the subtask, which is a variant of MAP (Mean Average Precision). According to Table 3, we observe that both our proposal score sp and validation score sv contribute to the improvement, allowing our approach to outperform previous stateof-the-art approaches, even with the same substitute candidates. 3372 By comparing our approach without sp to the BERT baseline approach BERT (word similarity) in Table 3, we confirm that the comparison of sentence-level contextualized representations before and after the substitution is more effective and reliable than the word-level comparison for lexical substitution. This is because some changes in sentence’s meaning after the substitution can be better captured by the sentence-level analysis, just as the example in Figure 1(b) illustrates. 4 Conclusion In our work, we propose an end-to-end lexical substitution approach based on BERT, which can propose and validate substitute candidates without using any annotated data and manually curated resources. Experiments in LS07 and LS14 benchmark datasets show that our proposed embedding dropout for partially masking the target word is helpful for BERT to propose substitute candidates, and that analyzing a sentence’s contextualized representation before and after the substitution can largely improve the results of lexical substitution. Acknowledgments We thank the anonymous reviewers for their valuable comments. References Marianna Apidianaki. 2016. Vector-space models for ppdb paraphrase ranking in context. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2028–2034. Chris Biemann. 2013. Creating a system for lexical substitutions from scratch using crowdsourcing. Language Resources and Evaluation, 47(1):97–122. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Georgiana Dinu and Mirella Lapata. 2010. Measuring distributional similarity in context. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 1162–1172. Association for Computational Linguistics. Katrin Erk and Sebastian Pad´o. 2008. A structured vector space model for word meaning in context. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 897– 906. Association for Computational Linguistics. Samer Hassan, Andras Csomai, Carmen Banea, Ravi Sinha, and Rada Mihalcea. 2007. Unt: Subfinder: Combining knowledge sources for automatic lexical substitution. In Proceedings of the 4th International Workshop on Semantic Evaluations, pages 410–413. 
Association for Computational Linguistics. Gerold Hintz and Chris Biemann. 2016. Language transfer learning for supervised lexical substitution. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 118–129. Kazuaki Kishida. 2005. Property of average precision and its generalization: An examination of evaluation indicator for information retrieval experiments. Gerhard Kremer, Katrin Erk, Sebastian Pad´o, and Stefan Thater. 2014. What substitutes tell us-analysis of an” all-words” lexical substitution corpus. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 540–549. Diana McCarthy and Roberto Navigli. 2007. Semeval2007 task 10: English lexical substitution task. In Proceedings of the 4th International Workshop on Semantic Evaluations, SemEval ’07, pages 48–53, Stroudsburg, PA, USA. Association for Computational Linguistics. Oren Melamud, Ido Dagan, and Jacob Goldberger. 2015a. Modeling word meaning in context with substitute vectors. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 472–482. Oren Melamud, Jacob Goldberger, and Ido Dagan. 2016. context2vec: Learning generic context embedding with bidirectional lstm. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 51–61. Oren Melamud, Omer Levy, and Ido Dagan. 2015b. A simple word embedding model for lexical substitution. In Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing, pages 1–7. Stephen Roller and Katrin Erk. 2016. Pic a different word: A simple model for lexical substitution in context. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1121–1126. Gy¨orgy Szarvas, Chris Biemann, and Iryna Gurevych. 2013a. Supervised all-words lexical substitution using delexicalized features. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1131–1141. 3373 Gy¨orgy Szarvas, R´obert Busa-Fekete, and Eyke H¨ullermeier. 2013b. Learning to rank lexical substitutions. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1926–1932. Stefan Thater, Hagen F¨urstenau, and Manfred Pinkal. 2010. Contextualizing semantic representations using syntactically enriched vector models. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 948–957. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998–6008. Deniz Yuret. 2007. Ku: Word sense disambiguation by substitution. In Proceedings of the 4th International Workshop on Semantic Evaluations, SemEval ’07, pages 207–213, Stroudsburg, PA, USA. Association for Computational Linguistics. Deniz Yuret. 2012. Fastsubs: An efficient and exact procedure for finding the most likely lexical substitutes based on an n-gram language model. IEEE Signal Processing Letters, 19(11):725–728.
2019
328
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3374–3380 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3374 Exploring Numeracy in Word Embeddings Aakanksha Naik∗, Abhilasha Ravichander∗, Carolyn Rose, Eduard Hovy Language Technologies Institute, Carnegie Mellon University {anaik, aravicha, cprose, ehovy}@cs.cmu.edu Abstract Word embeddings are now pervasive across NLP subfields as the de-facto method of forming text representataions. In this work, we show that existing embedding models are inadequate at constructing representations that capture salient aspects of mathematical meaning for numbers, which is important for language understanding. Numbers are ubiquitous and frequently appear in text. Inspired by cognitive studies on how humans perceive numbers, we develop an analysis framework to test how well word embeddings capture two essential properties of numbers: magnitude (e.g. 3<4) and numeration (e.g. 3=three). Our experiments reveal that most models capture an approximate notion of magnitude, but are inadequate at capturing numeration. We hope that our observations provide a starting point for the development of methods which better capture numeracy in NLP systems. 1 Introduction Word embeddings operationalize the distributional hypothesis, where a word is characterized by “the company it keeps” (Harris, 1954; Firth, 1957), and have been shown to capture semantic regularities in vector space (Mikolov et al., 2013c). They have been a driving force in NLP in recent years, and enjoy widespread use in a variety of semantic tasks (Rumelhart et al.; Mikolov et al., 2013a,b; Collobert and Weston, 2008; Glorot et al., 2011; Turney and Pantel, 2010; Turney, 2013). However, to what extent do these word representations capture numeric properties? Numbers often need to be dealt with precisely, and understanding the meaning of text also requires a careful understanding of the quantities involved. They have been identified to play an important role in textual entailment, a benchmark natural language ∗*The first two authors contributed equally to this work. understanding task. Marneffe et al. (2008) extract pairs of contradictions that occur naturally on Wikipedia and Google News, and find that as many as 29% of contradictions arise due to numeric discrepancies. They also identify that on many Recognizing Textual Entailment (RTE) datasets, 8.8% of contradictory pairs feature numeric contradictions. Naik et al. (2018) find that model inability to do numerical reasoning causes 4% of errors made by state-of-the-art models in Natural Language Inference. Spithourakis and Riedel (2018) emphasize the importance of numeracy in language modeling. Yet, numbers are often forgotten and even masked in NLP applications (Mitchell and Lapata, 2009). In several domains such as economics, finance and scientific articles numbers can play a crucial role in text. Take for example a recent news headline, Met Office: Global Warming could exceed 1.5 C within five years Ideally, the text representation we use should be able to capture that global warming can exceed 1.5 C, not 100 C. Magnitude is an essential aspect of a number’s meaning1 (Dehaene et al., 1998; Whalen et al., 1999; Cantlon and Brannon, 2006; Gross, 2011; Cutini and Bonato, 2012; Agrillo et al., 2012; Feigenson et al., 2004) . Systems should also be able to draw valid inferences irrespective of whether the text uses “five” or “5”. 
This requires an understanding of symbolic representations used to record numbers in text. Such representation systems are called numeration systems, and individual symbols within the system 1Prior work has shown that humans, as well as several species of animals share analogue systems that represent “quantities” or “magnitudes” associated with numbers(Dehaene et al., 1998) 3375 are called numerals2. Systems must handle numeration, i.e. associations between distinct symbols used for the same number under different systems (3=three). In this work, we examine the extent to which word embeddings are capable of representing numeracy attributes, asking the question - if pretrained word embeddings are utilized for representing text across NLP tasks, what can they represent about numbers? Our framework formulates triples of numbers to probe word embeddings on their ability to represent magnitude, and their robustness to differences in numeration. We hope this analysis highlights limitations of current pretrained word embeddings at capturing numeracy, and will motivate future research to develop more careful treatments of quantities in text. 2 Analysis Framework We construct an analysis framework to evaluate embeddings on their ability to capture magnitude and numeration. Numbers follow a well-defined ordering, under a mathematical system, which holds independent of textual context (e.g.: 0 < 1 < 2...). This ordering is established by magnitude (Izard and Dehaene, 2008; Russell, 2009) and is consistent across numeration systems. Therefore, an embedding representation that captures magnitude and numeration precisely should maintain this ordering across numeration systems in the embedding space. We evaluate this ability by constructing contrastive tests (Zhu et al., 2018). A contrastive test for a property p is defined as a triple (x, x+, x−) such that x is closer to x+ than x−under p. If embeddings capture p, x will be closer to x+ than x−in the embedding space, indicating that the embedding method passes the test. We propose three categories of tests, which differ in the choice of x−3: 1. OVA (One-vs-All): Define x−= {y|y ∈ X −x, y ̸= x+}. A model must identify x to be closer to x+ than all x−. 2. SC(Strict Contrastive): Choose x−to be the second-closest to x after x+ under p. 3. BC (Broad Contrastive): Choose x−to be the furthest from x under property p. 2Several cultures have developed numeration systems (Zhang and Norman, 1995). In this work, we restrict our scope to Arabic and English numeration systems (e.g. Arabic-2, English: two). 3x+ is chosen to be the token closest to x under p. Model #English #Arabic GloVe-6B-*D 120 (0.03%) 19409 (4.85%) GloVe-42B-300D 239 (0.01%) 108839 (5.68%) GloVe-840B-300D 532 (0.02%) 109353 (4.98%) FastText-Wiki 374 (0.04%) 25549 (2.56%) FastText-CC 592 (0.03%) 59386 (2.97%) SkipGram-BoW 114 (0.06%) 2401 (1.31%) SkipGram-Dep 111 (0.06%) 2416 (1.39%) GloVe-Num 1117 (0.02%) 318109 (4.4%) GloVe-All 973 (0.01%) 189598 (2.8%) FastText-Num 1117 (0.02%) 317627 (4.4%) FastText-All 973 (0.01%) 189366 (2.8%) Word2Vec-Num 486 (0.02%) 67908 (2.7%) Word2Vec-All 434 (0.01%) 37164 (1.2%) Table 1: Proportion of English and Arabic numerals containing representations in different models. Though embeddings are retrained on the same corpus, preprocessing choices (eg:lowercasing, filtering low frequency words etc.) result in different vocabularies OVA requires that x+ must be the closest vector to x in the embedding space. 
High performance on this test would indicate that the property is captured almost precisely. SC relaxes strictness by only requiring x+ to be closer than the secondclosest token under property p. Finally BC is the least strict of the three. Models can succeed on BC if they manage to capture even an approximate notion of p. We use this framework to construct three categories of contrastive tests for both magnitude and numeration. Example tests for magnitude are shown below4: 1. OVA-MAG: (3, 4, x), such that x = {y|y ∈ X −{3}, y ̸= 4} 2. SC-MAG: (3, 4, 5) 3. BC-MAG: (3, 4, 1000000) Similarly for numeration, 1. OVA-NUM: (3, three, x), such that x = {y|y ∈Y, y ̸= three} 2. SC-NUM: (3, three, four) 3. BC-NUM: (3, three, billion) 3 Representations We evaluate the following embedding methods: Skipgram (Mikolov et al., 2013a): Feedforward network trained to predict words within a fixed window surrounding the current word, with hidden weights used as embeddings. We evaluate with window sizes in {2, 5}, dependency 4Note that we consider 2 and 4 equidistant from 3, so examples like (3,2,4) are removed. 3376 Model OVA-MAG SC-MAG BC-MAG Random 0.04 49.82 49.34 GloVe-6B-50D 7.70 55.62 82.48 GloVe-6B-100D 10.27 57.83 82.83 GloVe-6B-200D 15.88 62.21 83.94 GloVe-6B-300D 18.41 62.92 83.98 GloVe-42B-300D 5.18 55.58 91.86 GloVe-840B-300D 11.06 55.40 88.54 FastText-Wiki 13.94 59.96 96.15 FastText-CC 7.83 53.89 85.40 SkipGram-2 7.12 55.49 95.84 SkipGram-5 8.85 55.40 96.42 SkipGram-Dep 3.32 51.99 94.60 Table 2: Performance (% accuracy) of various embedding models on magnitude tests. We also report the performance of a random embedding baseline. parse-based context (Levy and Goldberg, 2014) GloVe (Pennington et al., 2014): Embeddings generated by training log-bilinear models to predict global word co-occurrence statistics. We evaluate variants with #tokens in {6B, 42B, 840B}; dimensionality in {50, 100, 200, 300} FastText (Bojanowski et al., 2017): Extended Skipgram model representing words as character n-grams to incorporate sub-word information. We evaluate Wikipedia and Common Crawl variants. 3.1 Retrained Word Vectors We retrain all models on GigaWord5 and English Wikipedia6, under the setting: window size=5; dimensionality=100. To evaluate whether having more occurrences of numerals in the training data correlates with better representations, we train two variants for each model: one on sentences containing numbers (56M in total; 1.5B tokens) (Num), and another on 56M sentences (1.5B tokens) subsampled without constraints (All). 4 Experiments How many numerals have representations? Table 1 shows the proportion of English7 and Arabic numerals in each. Overall, numerals make up less than 5% vocabulary in all models. Despite 5We use the fourth edition: https://catalog.ldc. upenn.edu/LDC2009T13. 6We use the May 1, 2019 dump from https:// dumps.wikimedia.org/backup-index.html. 7To detect English numerals, we use word2number: https://pypi.org/project/word2number/. this, all variants contain representations for sufficient numerals to allow us to apply our framework. For off-the-shelf variants, we construct 2260 OVA-MAG, SC-MAG and BC-MAG tests. For numeration, we construct separate tests for each model, as there are few common numerals. Further statistics about number of tests for each model are reported in table 3. For retrained embeddings, we construct 31860 OVA-MAG, SC-MAG and BC-MAG tests, 130 OVA-NUM and SC-NUM tests, and 136 BC-NUM tests8. 
4.1 Evaluating Off-The-Shelf Embeddings Tables 2 and 3 present the performance of offthe-shelf embeddings on magnitude and numeration tests respectively. We use cosine similarity9 as the distance metric. High performance on BCMAG indicates that all models capture an approximate notion of magnitude, distinguishing between very large and very small numbers. We speculate this might be because numbers from different magnitude classes often appear in different contexts (See §5.1). As tests become stricter, model performance drops massively. Models perform nearly 10x worse on OVA-MAG as compared to BC-MAG. This suggests model are unable to capture magnitude precisely. Across models, SkipGram variants and FastText-Wiki perform best on BC-MAG. However, GloVe outperforms all others on OVA-MAG and SC-MAG. On numeration tests, models fare much worse. With the exception of GloVe models on BC-NUM, no model significantly outperforms a random baseline. 4.2 Evaluating Retrained Embeddings Table 4 presents the performance of retrained embeddings and a random embedding baseline on magnitude and numeration tests. There is no significant difference in performance between Num and All variants, suggesting that seeing more numerals during training does not necessarily result in better representations. Results follow similar trends as off-the-shelf embeddings. All models capture an approximate notion of magnitude (high performance on BC-MAG), but do not capture numeration. Across models, FastText variants fare 8Since all embeddings are trained on the same corpus and share the same vocabulary, there are enough common English numerals to construct a single set of numeration tests. 9We experiment with Euclidean distance, and observe similar results (Appendix A and B). 3377 OVA-NUM SC-NUM BC-NUM Model #Tests Rand Emb #Tests Rand Emb #Tests Rand Emb GloVe-6B-50D 117 0.00 0.85 117 49.57 52.99 117 50.43 79.49 GloVe-6B-100D 117 0.00 0.85 117 52.99 47.86 117 57.26 81.20 GloVe-6B-200D 117 1.71 0.85 117 48.72 57.26 117 42.74 78.63 GloVe-6B-300D 117 1.71 0.00 117 50.43 58.97 117 54.70 88.89 GloVe-42B-300D 226 0.44 0.44 226 52.21 51.33 226 53.98 10.18 GloVe-840B-300D 515 0.19 0.19 515 49.90 50.68 515 49.71 81.94 FastText-Wiki 360 0.28 0.28 360 50.00 49.72 360 56.67 41.67 FastText-CC 572 0.00 0.52 572 46.85 51.22 572 41.26 44.76 SkipGram-2 112 0.00 0.00 112 51.79 48.21 112 49.11 49.11 SkipGram-5 112 0.00 0.89 112 52.68 51.79 112 50.89 14.29 SkipGram-Dep 109 0.92 1.83 109 53.21 48.62 109 52.29 31.19 Table 3: Performance (% accuracy) of various embedding models on numeration tests. Since we construct a separate set of tests per model, we report the performance of a random embedding model for each set (Rand). Bolded numbers highlight cases where performance is higher than both random embedding and random choice. Note that random choice performance for OVA-NUM is 1 #T ests. Model Magnitude Numeration OVA-MAG SC-MAG BC-MAG OVA-NUM SC-NUM BC-NUM random 0.00 49.62 49.71 2.31 47.69 53.68 GloVe-Num 0.01 49.47 72.76 0.00 50.00 19.85 GloVe-All 0.01 49.08 74.02 0.00 46.15 19.85 FastText-Num 0.09 51.05 96.69 1.54 54.62 58.09 FastText-All 0.09 51.16 97.90 0.00 46.92 61.03 Word2Vec-Num 0.02 50.12 93.55 0.77 44.62 33.82 Word2Vec-All 0.02 49.37 94.20 0.00 54.62 34.56 Table 4: Performance (% accuracy) of various (retrained) embedding models on magnitude and numeration tests. best. 
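Concretely, each contrastive test reduces to a handful of cosine comparisons between embedding vectors. A minimal sketch, assuming embeddings are available as a token-to-vector mapping (the function names are ours):

```python
# Minimal sketch (ours) of scoring contrastive tests with cosine similarity.
# `emb` is any mapping from token strings to numpy vectors, e.g. loaded GloVe.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def passes(emb, x, x_plus, x_minus):
    """x_minus is a single token for SC/BC tests or an iterable for OVA."""
    negatives = [x_minus] if isinstance(x_minus, str) else list(x_minus)
    pos_sim = cosine(emb[x], emb[x_plus])
    return all(pos_sim > cosine(emb[x], emb[neg]) for neg in negatives)

def accuracy(emb, tests):
    """tests: iterable of (x, x_plus, x_minus) triples; returns % passed."""
    results = [passes(emb, *t) for t in tests]
    return 100.0 * sum(results) / len(results)

# e.g. SC-MAG: passes(emb, "3", "4", "5"); BC-NUM: passes(emb, "3", "three", "billion")
```

Swapping the cosine function for Euclidean distance (with the inequality reversed) yields the variant reported in the appendices.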
5 Discussion 5.1 Performance on Magnitude Tests Tables 2 and 4 show that most models do not capture magnitude precisely (low performance on OVA-MAG; SC-MAG), but seem to learn an approximate notion of magnitude (high performance on BC-MAG) 10. To examine the difference in contexts that separates numbers of vastly varying magnitudes, we sample 1 million sentences containing numbers from English Wikipedia and GigaWord and compute pointwise mutual information (PMI), defined as PMI (number, class) = log p(number,class) p(number,·)p(·,class) 10Cognitive studies show that human babies initially start recognizing numbers by approximation and their ability to identify numbers precisely improves over their lifespan (Halberda et al., 2012). (Moyer and Landauer, 1967) were the first to observe that humans took longer to distinguish between closer numbers (eg: 8 and 9) than numbers which were further away in distance (eg: 2 and 9). This finding has since been replicated several times (Dehaene, 2011). In our framework, models find it harder to distinguish between closer numbers (SC-MAG) than distant numbers (BC-MAG)- however the differences here likely arise from different contexts in which numbers of vastly varying magnitudes are used. between the contexts of primitive numbers (numbers 1-10) and large numbers (>500, >1000, >3000, >10000, >100000) as shown in Table 5. We consider the word immediately following the number as context, since it appears in the context of the number across embedding methods, regardless of sliding window size. We apply add-100 smoothing to identify contexts with maximum discriminatory power. We observe in table 5 that terms separating primitives from larger numbers fall into categories such as days in a month, which are less than 31, or percentages which are <= 100. In comparison, contexts of larger numbers include terms like ‘election’, ‘census’ and ‘world’. As we move beyond numbers that are likely to be dates (>3000), we observe terms such as ’ZIP’ occurring with ZIP codes in text, ‘block’ occurring in contexts such as ‘house in 9600 block of Washington Boulevard, ‘Refugees’ which appears in contexts such as ‘relocate about 125,000 refugees away from the border’. We observe that different contexts characterize classes of numbers, and speculate that this may allow embeddings to distinguish between numbers that appear consistently in vastly different contexts 3378 Primitives λ = 500 Primitives λ = 1000 Primitives λ = 3000 Primitives λ = 10000 Primitives λ = 100000 Wiki % Summer % Summer % BC % Exchange % Elected July Census million Census million RPM million HD million Ontario January Film July Film July BCE July Departs May Owner April World January World January Inhabitants January Delhi July Spinneys September Election April Election May Hollywood May Raxaul January Thana GW percent index percent World percent DOWN percent novos percent novos p.m GMT million GMT million Composite million ZIP million Tel a.m World billion Olympic billion block billion University billion NDI trillion Olympic p.m Olympics p.m LAS points UP points Refugees billion Olympics years season years UP p.m Old p.m Eritrean Table 5: Top 5 nouns by PMI(word, class) for primives and large numbers (numbers > λ), in 1 million sentences drawn from Wikipedia (wiki) and GigaWord (GW) respectively. leading to good performance on BC-MAG. 
5.2 Recovering magnitude information from nearest neighbours Model performance on SC-MAG and BC-MAG indicates whether ordering relationships between a number,its closest, second-closest, and furthest numbers are maintained. However, infinite numbers exist, making it infeasible to construct contrastive tests to check ordering relationships between all triples. To mitigate this, we experiment with a paradigm that performs regression with a number’s nearest neighbors to predict its magnitude. If magnitude can be recovered from the structure of the embedding space, this provides evidence that magnitude ordering relations are maintained to some extent. For this experiment, we divide the set of 2260 numbers common across offthe-shelf variants11 into training (80%) and test (20%) sets and run a kNN (k-nearest neighbor) regressor model to predict magnitude. R2 scores for are shown in table 6. Most models show reasonably high R2 scores, indicating that some ordering relationships must be maintained, helping embeddings capture approximate notions of magnitude. While this property of current embedding models is interesting, their failure to capture precise magnitude is an important issue. Word embeddings are used for semantic tasks such as natural language inference or reading comprehension, wherein models might need to reason more precisely about numbers. 6 Conclusion Current NLP systems rely heavily on word embeddings. In this work we demonstrate that three 11We do this to compare results across all models. Retrained variants contain embeddings for all 2260 numbers. Model R2 Score GloVe-6B-50D 0.53 GloVe-6B-100D 0.75 GloVe-6B-200D 0.67 GloVe-6B-300D 0.62 GloVe-42B-300D 0.44 GloVe-840B-300D 0.83 FastText-Wiki 0.71 FastText-CC 0.56 SkipGram-2 0.67 SkipGram-5 0.76 SkipGram-Dep 0.41 GloVe-Num 0.12 GloVe-All 0.30 FastText-Num 0.73 FastText-All 0.47 Word2Vec-Num 0.68 Word2Vec-All 0.65 Table 6: Results of kNN Regression Test for Magnitude popular embedding models are inadequate at dealing precisely with numbers, in two aspects: magnitude and numeration. We hope this work will promote a more careful treatment of language, and serve a cautionary purpose against using word embeddings in downstream tasks without recognizing their limitations. This work also raises important questions about other categories of word-like tokens that need to be treated like special cases. We hope the community will carefully consider that distributed word representations cannot be relied upon in all scenarios. 7 Acknowledgements This work has partially been supported by the National Science Foundation under Grant No. CNS 13-30596. The authors would like to thank Thomas Manzini, Shruti Rijhwani and Siddharth Dalmia for helpful discussions and reviews while drafting this paper. 3379 References Christian Agrillo, Laura Piffer, Angelo Bisazza, and Brian Butterworth. 2012. Evidence for two numerical systems that are similar in humans and guppies. PloS one, 7(2):e31923. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146. Jessica F Cantlon and Elizabeth M Brannon. 2006. Shared system for ordering small and large numbers in monkeys and humans. Psychological science, 17(5):401–406. Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. 
In Proceedings of the 25th international conference on Machine learning, pages 160–167. ACM. Simone Cutini and Mario Bonato. 2012. Subitizing and visual short-term memory in human and non-human species: a common shared system? Number without language: comparative psychology and the evolution of numerical cognition, 129. Stanislas Dehaene. 2011. The number sense: How the mind creates mathematics. OUP USA. Stanislas Dehaene, Ghislaine Dehaene-Lambertz, and Laurent Cohen. 1998. Abstract representations of numbers in the animal and human brain. Trends in neurosciences, 21(8):355–361. Lisa Feigenson, Stanislas Dehaene, and Elizabeth Spelke. 2004. Core systems of number. Trends in cognitive sciences, 8(7):307–314. J. Firth. 1957. A synopsis of linguistic theory 19301955. In Studies in Linguistic Analysis. Philological Society, Oxford. Reprinted in Palmer, F. (ed. 1968) Selected Papers of J. R. Firth, Longman, Harlow. Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Domain adaptation for large-scale sentiment classification: A deep learning approach. In Proceedings of the 28th international conference on machine learning (ICML-11), pages 513–520. Hans J Gross. 2011. To bee or not to bee, this is the question the inborn numerical competence of humans and honeybees: The inborn numerical competence of humans and honeybees. Communicative & integrative biology, 4(5):594–597. Justin Halberda, Ryan Ly, Jeremy B Wilmer, Daniel Q Naiman, and Laura Germine. 2012. Number sense across the lifespan as revealed by a massive internetbased sample. Proceedings of the National Academy of Sciences, 109(28):11116–11120. Zellig S Harris. 1954. Distributional structure. Word, 10(2-3):146–162. V´eronique Izard and Stanislas Dehaene. 2008. Calibrating the mental number line. Cognition, 106(3):1221–1247. Omer Levy and Yoav Goldberg. 2014. Dependencybased word embeddings. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 302–308, Baltimore, Maryland. Association for Computational Linguistics. Marie-Catherine Marneffe, Anna N Rafferty, and Christopher D Manning. 2008. Finding contradictions in text. Proceedings of ACL-08: HLT, pages 1039–1047. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119. Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013c. Linguistic regularities in continuous space word representations. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 746–751, Atlanta, Georgia. Association for Computational Linguistics. Jeff Mitchell and Mirella Lapata. 2009. Language models based on semantic composition. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 1-Volume 1, pages 430–439. Association for Computational Linguistics. Robert S Moyer and Thomas K Landauer. 1967. Time required for judgements of numerical inequality. Nature, 215(5109):1519. Aakanksha Naik, Abhilasha Ravichander, Norman Sadeh, Carolyn Rose, and Graham Neubig. 2018. Stress test evaluation for natural language inference. 
In Proceedings of the 27th International Conference on Computational Linguistics, pages 2340–2353, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, Doha, Qatar. Association for Computational Linguistics. David E Rumelhart, Geoffrey E Hinton, Ronald J Williams, et al. Learning representations by backpropagating errors. Cognitive modeling, 5(3):1. 3380 Bertrand Russell. 2009. Principles of mathematics. Routledge. Georgios P Spithourakis and Sebastian Riedel. 2018. Numeracy for language models: Evaluating and improving their ability to predict numbers. arXiv preprint arXiv:1805.08154. Peter D Turney. 2013. Distributional semantics beyond words: Supervised learning of analogy and paraphrase. Transactions of the Association for Computational Linguistics, 1:353–366. Peter D Turney and Patrick Pantel. 2010. From frequency to meaning: Vector space models of semantics. Journal of artificial intelligence research, 37:141–188. John Whalen, Charles R Gallistel, and Rochel Gelman. 1999. Nonverbal counting in humans: The psychophysics of number representation. Psychological Science, 10(2):130–137. Jiajie Zhang and Donald A Norman. 1995. A representational analysis of numeration systems. Cognition, 57(3):271–295. Xunjie Zhu, Tingfeng Li, and Gerard de Melo. 2018. Exploring semantic properties of sentence embeddings. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 632–637, Melbourne, Australia. Association for Computational Linguistics. A Magnitude Tests with Euclidean Distance Table 7 describes the performance of word embedding models on magnitude tests with Euclidean distance. Model OVA-MAG SC-MAG BC-MAG Random 0.04 49.82 49.34 GloVe-6B-50D 7.7 54.87 79.78 GloVe-6B-100D 10.27 57.12 78.5 GloVe-6B-200D 15.88 58.72 80.09 GloVe-6B-300D 18.41 60.44 79.82 GloVe-42B-300D 5.18 55.27 55.09 GloVe-840B-300D 11.06 55.49 98.23 SkipGram-2 8.85 55.35 96.37 SkipGram-5 7.12 55.44 95.8 SkipGram-Dep 3.32 51.95 94.56 FastText-CC 7.83 54.07 91.28 FastText-Wiki 13.94 59.34 98.19 Table 7: Performance (% accuracy) of embedding models on magnitude tests with Euclidean distance B Numeration Tests with Euclidean Distance Tables 8 and 9 describe the performance of word embedding models on numeration tests with Euclidean distance. SC-NUM Model #Tests Rand Emb GloVe-6B-50D 117 49.57 52.14 GloVe-6B-100D 117 52.99 51.28 GloVe-6B-200D 117 48.72 52.65 GloVe-6B-300D 117 50.43 56.89 GloVe-42B-300D 226 52.21 52.65 GloVe-840B-300D 515 49.90 56.89 FastText-Wiki 360 50.00 49.72 FastText-CC 572 46.85 49.72 SkipGram-2 112 51.79 48.21 SkipGram-5 112 52.68 51.79 SkipGram-Dep 109 53.21 48.62 Table 8: Performance (% accuracy) of embedding models on SC-NUM BC-NUM Model #Tests Rand Emb GloVe-6B-50D 117 50.43 99.15 GloVe-6B-100D 117 57.26 100.0 GloVe-6B-200D 117 42.74 2.21 GloVe-6B-300D 117 54.70 87.57 GloVe-42B-300D 226 53.98 2.21 GloVe-840B-300D 515 49.71 87.57 FastText-Wiki 360 56.67 98.89 FastText-CC 572 41.26 98.89 SkipGram-2 112 49.11 49.11 SkipGram-5 112 50.89 14.29 SkipGram-Dep 109 52.29 31.19 Table 9: Performance (% accuracy) of embedding models on BC-NUM
2019
329
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 336–345 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 336 Neural News Recommendation with Long- and Short-term User Representations Mingxiao An1,*, Fangzhao Wu2, Chuhan Wu3, Kun Zhang1, Zheng Liu2, Xing Xie2 1University of Science and Technology of China, Hefei 230026, China 2Microsoft Research Asia, Beijing 100080, China 3Department of Electronic Engineering, Tsinghua University, Beijing 100084, China {anmx,zhkun}@mail.ustc.edu.cn, [email protected] [email protected], {zhengliu,xingx}@microsoft.com Abstract Personalized news recommendation is important to help users find their interested news and improve reading experience. A key problem in news recommendation is learning accurate user representations to capture their interests. Users usually have both long-term preferences and short-term interests. However, existing news recommendation methods usually learn single representations of users, which may be insufficient. In this paper, we propose a neural news recommendation approach which can learn both long- and short-term user representations. The core of our approach is a news encoder and a user encoder. In the news encoder, we learn representations of news from their titles and topic categories, and use attention network to select important words. In the user encoder, we propose to learn long-term user representations from the embeddings of their IDs. In addition, we propose to learn short-term user representations from their recently browsed news via GRU network. Besides, we propose two methods to combine long-term and short-term user representations. The first one is using the long-term user representation to initialize the hidden state of the GRU network in short-term user representation. The second one is concatenating both long- and short-term user representations as a unified user vector. Extensive experiments on a real-world dataset show our approach can effectively improve the performance of neural news recommendation. 1 Introduction Online news platforms such as MSN News1 and Google News2 which aggregate news from various sources and distribute them to users have gained *This work was done when the first author was an intern in Microsoft Research Asia. 1https://www.msn.com/news 2https://news.google.com/ 2017 NBA Championship Celebration From Warriors Rami Malek Wins the 2019 Oscar Oklahoma City Thunder vs. Golden State Warriors Bohemian Rhapsody Is HighestGrossing Musician Biopic Ever 𝑡𝑡1 𝑡𝑡𝑖𝑖 𝑡𝑡𝑖𝑖+1 𝑡𝑡𝑗𝑗 … … Figure 1: An illustrative example of long-term and short-term interests in news reading. huge popularity and attracted hundreds of millions of users (Das et al., 2007; Wang et al., 2018). However, massive news are generated everyday, making it impossible for users to read through all news (Lian et al., 2018). Thus, personalized news recommendation is very important for online news platforms to help users find their interested contents and alleviate information overload (Lavie et al., 2010; Zheng et al., 2018). Learning accurate user representations is critical for news recommendation (Okura et al., 2017). Existing news recommendation methods usually learn a single representation for each user (Okura et al., 2017; Lian et al., 2018; Wu et al., 2019). For example, Okura et al. 
(2017) proposed to learn representations of news using denoising autoencoder and learn representations of users from their browsed news using GRU network (Cho et al., 2014). However, it is very difficult for RNN networks such as GRU to capture the entire information of very long news browsing history. Wang et al. (2018) proposed to learn the representations of news using knowledge-aware convolutional neural network (CNN), and learn the representations of users from their browsed news based on the similarities between the candidate news and the browsed news. However, this method needs to store the entire browsing history of each user in the online news recommendation stage, which may bring huge challenge to the storage and may cause heavy latency. 337 Our work is motivated by the observation that the interests of online users in news are very diverse. Some user interests may last for a long time and are consistent for the same user (Li et al., 2014). For example, as shown in Fig. 1, if a user is a fan of “Golden State Warriors”, this user may tend to read many basketball news about this NBA team for several years. We call this kind of user preferences as long-term interest. In addition, many user interests may evolve with time and may be triggered by specific contexts or temporal demands. For example, in Fig. 1, the browsing of the news on movie “Bohemian Rhapsody” causes the user reading several related news such as “Rami Malek Wins the 2019 Oscar” since “Rami Malek” is an important actor in this movie, although this user may never read news about “Rami Malek” before. We call this kind of user interests as shortterm interest. Thus, both long-term and shortterm user interests are important for personalized news recommendation, and distinguishing longterm user interests from short-term ones may help learn more accurate user representations. In this paper, we propose a neural news recommendation approach with both long- and shortterm user representations (LSTUR). Our approach contains two major components, i.e., a news encoder and a user encoder. The news encoder is used to learn representations of news articles from their titles and topic categories. We apply attention mechanism to the news encoder to learn informative news representations by selecting important words. The user encoder consists of two modules, i.e., a long-term user representation (LTUR) module and a short-term user representation (STUR) module. In STUR, we use a GRU network to learn short-term representations of users from their recently browsing news. In LTUR, we learn the long-term representations of users from the embeddings of their IDs. In addition, we propose two methods to combine the short-term and longterm user representations. The first one is using the long-term user representations to initialize the hidden state of GRU network in the STUR model. The second one is concatenating the long-tern and short-term user representations as a unified user vector. We conducted extensive experiments on a real-world dataset. The experimental results show our approach can effectively improve the performance of news recommendation and consistently outperform many baseline methods. 2 Related Works Personalized news recommendation is an important task in natural language processing field and has wide applications (Zheng et al., 2018). It is critical for news recommendation methods to learn accurate news and user representations (Wang et al., 2018). 
Many conventional news recommendation methods rely on manual feature engineering to build news and user representations (Phelan et al., 2009; Liu et al., 2010; Li et al., 2010; Son et al., 2013; Li et al., 2014; Bansal et al., 2015; Lian et al., 2018). For example, Liu et al. (2010) proposed to use the topic categories and interests features predicted by a Bayesian model to represent news, and use the click distribution features of news categories to represent users. Li et al. (2014) used a Latent Dirichlet Allocation (LDA) (Blei et al., 2003) model to generate topic distribution features as the news representations. They represented a session by using the topic distribution of browsed news in this session, and the representations of users were built from their session representations weighted by the time. However, these methods heavily rely on manual feature engineering, which needs massive domain knowledge to craft. In addition, the contexts and orders of words in news are not incorporated, which are important for understanding the semantic meanings of news and learning representations of news and users. In recent years, several deep learning methods were proposed for personalized news recommendation (Wang et al., 2018; Okura et al., 2017; Zheng et al., 2018). For example, Okura et al. (2017) proposed to learn representations of news from news bodies using denoising autoencoder, and learn representations of users from the representations of their browsed news using a GRU network. Wang et al. (2018) proposed to learn representations of news from their titles via a knowledge-aware CNN network, and learn representations of users from the representations of their browsed news articles weighted by their similarities with the candidate news. Wu et al. (2019) proposed to learn news and user representations with personalized word- and news-level attention networks, which exploits the embedding of user ID to generate the query vector for the attentions. However, these methods usually learn a single representation vector for each user, and cannot distinguish the long-term preferences and short-term interests of users in reading news. Thus, the user 338 representations learned in these methods may be insufficient for news recommendation. Different from these methods, our approach can learn both long-term and short-term user representations in a unified framework to capture the diverse interests of users for personalized neural new commendation. Extensive experiments on the real-world dataset validate the effectiveness of our approach and the advantage over many baseline methods. 3 Our Approach In this section, we present our neural news recommendation approach with long- and short-term user representations (LSTUR). Our approach contains two major components, i.e., a news encoder to learn representations of news and a user encoder to learn representations of users. Next, we introduce each component in detail. 3.1 News Encoder The news encoder is used to learn representations of news from their titles, topic and subtopic categories. The architecture of the news encoder in our approach is illustrated in Fig. 2. There are two sub-modules in the news encoder, i.e., a title encoder and a topic encoder. The title encoder is used to learn news representations from titles. There are three layers in the title encoder. The first layer is word embedding, which is used to convert a news title from a word sequence into a sequence of dense semantic vectors. 
Denote the word sequence in a news title t as t = [w1, w2, . . . , wN], where N is the length of this title. It is transformed into [w1, w2, . . . , wN] via a word embedding matrix. The second layer in title encoder is a convolutional neural network (CNN) (LeCun et al., 2015). Local contexts are very useful for understanding the semantic meaning of news titles. For example, in the news title “Next season of super bowl games”, the local contexts of “bowl” such as “super” and “games” are very important for inferring that it belongs to a sports event name. Thus, we apply a CNN network to learn contextual word representations by capturing the local context information. Denote the contextual representation of wi as ci, which is computed as follows: ci = ReLU(C × w[i−M:i+M] + b), (1) where w[i−M:i+M] is the concatenation of the embeddings of words between position i −M and 𝒗 𝑎ଵ 𝑎ே Padding Padding Word Embedding 𝒘𝟏 𝒘𝟐 𝒘𝑵ି𝟏 𝒘𝑵 𝒄𝟏 𝒄𝟐 𝒄𝑵ି𝟏 𝒄𝑵 𝑤ଵ 𝑤ଶ 𝑤ேିଵ 𝑤ே 𝒆𝒕 𝑎ଶ 𝑎ேିଵ Topic Embedding Subtopic Embedding News Topic News Subtopic News Title 𝒆𝒗 𝒆𝒔𝒗 ⨁ ⨁ 𝒆 Figure 2: The framework of the news encoder. i + M. C and b are the parameters of the convolutional filters in CNN, and M is the window size. The third layer is an attention network (Bahdanau et al., 2015). Different words in the same news title may have different informativeness for representing news. For instance, in the news title “The best NBA moments in 2018”, the word “NBA” is very informative for representing this news since it is an important indication of sports news, while the word “2018” is less informative. Thus, we employ a word-level attention network to select important words in news titles to learn more informative news representations. The attention weight αi of the i-th word is formulated as follows: ai = tanh(v × ci + vb), αi = exp(ai) PN j=1 exp(aj) , (2) where v and vb are the trainable parameters. The final representation of a news title t is the summation of its contextual word representations weighted by their attention weights as follows: et = N X i=1 αici. (3) The topic encoder module is used to learn news representations from its topics and subtopics. On many online news platforms such as MSN news, news articles are usually labeled with a topic category (e.g., “Sports”) and a subtopic category (e.g., “Football NFL”) to help target user interests. The topic and subtopic categories of news are also informative for learning representations of news and users. They can reveal the general and detailed topics of the news, and reflect the preferences of users. For example, if a user browsed many news articles with the “Sports” topic category, then we 339 User Click History 𝒆ଵ GRU News Encoder GRU GRU … … News Encoder News Encoder 𝒖௟ 𝒆ଶ 𝒆௞ 𝒆௫ 𝒖 𝑐ଵ 𝑐ଶ 𝑐௞ 𝑐௫ Dot Product User Embedding News Encoder Candidate News Score (a) LSTUR-ini. 𝒆ଵ GRU News Encoder GRU GRU … … News Encoder News Encoder 𝒖௟ 𝒆ଶ 𝒆௞ 𝒆௫ 𝒖௦ 𝑐ଵ 𝑐ଶ 𝑐௞ 𝑐௫ User Click History Dot Product User Embedding News Encoder Candidate News Concatenation Score ⨁ (b) LSTUR-con. Figure 3: The two frameworks of our LSTUR approach. can infer this user is probably interested in sports, and it may be effective to recommend candidate news in the “Sports” topic category to this user. To incorporate the topic and subtopic information into news representation, we propose to learn the representations of topics and subtopics from the embeddings of their IDs, as shown in Fig. 2. Denote ev and esv as the representations of topic and subtopic. 
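Before the title and topic representations are combined below, it may help to see the title encoder of Equations (1)-(3) written out. The following is a minimal NumPy sketch, not the authors' released code: the dimensions, initialisation and function names are illustrative assumptions. The topic and subtopic representations are plain embedding-table lookups by category ID and are therefore omitted.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def title_encoder(word_emb, C, b, v, v_b, M=1):
    """Sketch of the CNN + word-level attention title encoder (Eqs. 1-3).

    word_emb : (N, d) embeddings of the N title words.
    C, b     : convolutional filters (f, (2M+1)*d) and bias (f,)   -> Eq. (1)
    v, v_b   : attention vector (f,) and scalar bias               -> Eq. (2)
    Returns the attended title representation e_t of size (f,)     -> Eq. (3)
    """
    N, d = word_emb.shape
    padded = np.vstack([np.zeros((M, d)), word_emb, np.zeros((M, d))])
    # Eq. (1): contextual representation c_i from a (2M+1)-word window
    c = np.stack([relu(C @ padded[i:i + 2 * M + 1].reshape(-1) + b)
                  for i in range(N)])
    # Eq. (2): additive word-level attention weights alpha_i
    a = np.tanh(c @ v + v_b)
    alpha = np.exp(a - a.max())
    alpha /= alpha.sum()
    # Eq. (3): weighted sum of the contextual word representations
    return alpha @ c

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    N, d, f, M = 6, 200, 300, 1   # title length, embedding size, filters, half-window
    e_t = title_encoder(rng.normal(size=(N, d)),
                        0.01 * rng.normal(size=(f, (2 * M + 1) * d)),
                        np.zeros(f),
                        0.01 * rng.normal(size=f), 0.0, M)
    print(e_t.shape)              # (300,)
```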
The final representation of a news article is the concatenation of the representations of its title, topic and subtopic, i.e., e = [et, ev, esv]. 3.2 User Encoder The user encoder is used to learn representations of users from the history of their browsed news. It contains two modules, i.e., a short-term user representation model (STUR) to capture user’s temporal interests, and a long-term user representation model (LTUR) to capture user’s consistent preferences. Next, we introduce them in detail. 3.2.1 Short-Term User Representation Online users may have dynamic short-term interests in reading news articles, which may be influenced by specific contexts or temporal information demands. For example, if a user just reads a news article about “Mission: Impossible 6 – Fallout”, and she may want to know more about the actor “Tom Cruise” in this movie and click news articles related to “Tom Cruise”, although she is not his fan and may never read his news before. We propose to learn the short-term representations of users from their recent browsing history to capture their temporal interests, and use gated recurrent networks (GRU) (Cho et al., 2014) network to capture the sequential news reading patterns (Okura et al., 2017). Denote news browsing sequence from a user sorted by timestamp in ascending order as C = {c1, c2, . . . , ck}, where k is the length of this sequence. We apply the news encoder to obtain the representations of these browsed articles, denoted as {e1, e2, . . . , ek}. The short-term user representation is computed as follows: rt = σ(W r[ht−1, et]), zt = σ(W z[ht−1, et]), ˜ht = tanh(W ˜h[rt ⊙ht−1, et]), ht = zt ⊙ht + (1 −zt) ⊙˜ht, (4) where σ is the sigmoid function, ⊙is the itemwise product, W r, W z and W ˜h are the parameters of the GRU network. The short-term user representation is the last hidden state of the GRU network, i.e., us = hk. 3.2.2 Long-Term User Representations Besides the temporal interests, online users may also have long-term interests in reading news. For example, a basketball fan may tend to browse many sports news related to NBA in several years. Thus, we propose to learn long-term representations of users to capture their consistent preferences. In our approach the long-term user representations are learned from the embeddings of the user IDs, which are randomly initialized and finetuned during model training. Denote u as the ID of a user and W u as the look-up table for long-term user representation, the long-term user representation of this user is ul = W u[u]. 3.2.3 Long- and Short-Term User Representation In this section, we introduce two methods to combine the long-term and short-term user presentations for unified user representation, which are shown in Fig. 3. 340 The first method is using the long-term user representation to initialize the hidden state of the GRU network in the short-term user representation model, as shown in Fig. 3a. We denote this method as LSTUR-ini. We use the last hidden state of the GRU network as the final user representation. The second method is concatenating the long-term user representation with the short-term user representation as the final user representation, as shown in Fig. 3b. We denote this method as LSTUR-con. 3.3 Model Training For online news recommendation services where user and news representations can be computed in advance, the scoring function should be as simple as possible to reduce latency. 
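To make the user encoder of Sections 3.2.1-3.2.3 concrete before turning to training, here is a minimal NumPy sketch of the GRU of Eq. (4), the ID-based long-term representation, and the two fusion variants LSTUR-ini and LSTUR-con. Dimensions, initialisation and function names are illustrative assumptions rather than the released implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(h, e, Wr, Wz, Wh):
    """One GRU step of Eq. (4); the input is the concatenation [h_{t-1}, e_t]."""
    x = np.concatenate([h, e])
    r = sigmoid(Wr @ x)
    z = sigmoid(Wz @ x)
    h_tilde = np.tanh(Wh @ np.concatenate([r * h, e]))
    return z * h + (1.0 - z) * h_tilde

def user_encoder(browsed_news, user_id, Wu, Wr, Wz, Wh, mode="ini"):
    """browsed_news: (k, d_news) representations of recently browsed news,
    oldest first.  Wu: (n_users, d_u) long-term user embedding table."""
    u_l = Wu[user_id]                        # long-term user representation
    d_u = u_l.shape[0]
    # LSTUR-ini: the long-term vector initialises the GRU hidden state
    h = u_l.copy() if mode == "ini" else np.zeros(d_u)
    for e in browsed_news:                   # short-term representation via GRU
        h = gru_step(h, e, Wr, Wz, Wh)
    if mode == "ini":
        return h                             # final hidden state is the user vector
    return np.concatenate([u_l, h])          # LSTUR-con: concatenation [u_l; u_s]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    k, d_news, d_u, n_users = 5, 400, 400, 1000
    Wr, Wz, Wh = (0.01 * rng.normal(size=(d_u, d_u + d_news)) for _ in range(3))
    Wu = 0.01 * rng.normal(size=(n_users, d_u))
    browsed = rng.normal(size=(k, d_news))
    print(user_encoder(browsed, 42, Wu, Wr, Wz, Wh, "ini").shape)   # (400,)
    print(user_encoder(browsed, 42, Wu, Wr, Wz, Wh, "con").shape)   # (800,)
```

Note that LSTUR-ini requires the long-term embedding size to match the GRU hidden size, while LSTUR-con doubles the dimensionality of the final user vector.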
Motivated by (Okura et al., 2017), we use the simple dot production to compute the news click probability score. Denote the representation of a user u as u and the representation of a candidate news article ex as ex, the probability score s(u, cx) of this user clicking this news is computed as s(u, cx) = u⊤ex. Motivated by (Huang et al., 2013) and (Zhai et al., 2016), we propose to use the negative sampling technique for model training. For each news browsed by a user (regarded as a positive sample), we randomly sample K news articles from the same impression which are not clicked by this user as negative samples. Our model will jointly predict the click probability scores of the positive news and the K negative news. In this way, the news click prediction problem is reformulated as a pseudo K + 1-way classification task. We minimize the summation of the negative log-likelihood of all positive samples during training, which can be formulated as follows: − P X i=1 log exp(s(u, cp i )) exp(s(u, cp i )) + PK k=1 exp(s(u, cn i,k)) , (5) where P is the number of positive training samples, and cn i,k is the k-th negative sample in the same session with the i-th positive sample. Since not all users can be incorporated in news recommendation model training (e.g., the new coming users), it is not appropriate to assume all users have long-term representations in our models in the prediction stage. In order to handle this problem, in the model training stage, we randomly mask the long-term representations of users with a certain probability p. When we mask the longterm representations, all the dimensions are set to zero. Thus, the long-term user representation in our LSTUR approach can be reformulated as: ul = M · W u[u], M ∼B(1, 1 −p), (6) where B is Bernoulli distribution, and M is a random variable that subject to B(1, 1−p). We find in experiments that this trick for model training can improve the performance of our approach. 4 Experiments 4.1 Dataset and Experimental Settings Since there is no off-the-shelf dataset for news recommendation, we built one by ourselves through collecting logs from MSN News3 in four weeks from December 23rd, 2018 to January 19th, 2019. We used the logs in the first three weeks for model training, and those in the last week for test. We also randomly sampled 10% of logs from the training set as the validation data. For each sample, we collected the browsing history in last 7 days to learn short-term user representations. The detailed dataset statistics are summarized in Table 1. # of users 25,000 # of users in training set 22,938 # of news 38,501 Avg. # of words per title 9.98 # of imprs 393,191 # of positive samples 492,185 NP ratio4 18.74 # of negative samples 9,224,537 Table 1: Statistics of the dataset in our experiments. In our experiments, we used the pretrained GloVe embedding5 (Pennington et al., 2014) as the initialization of word embeddings. The word embedding dimension is 200. The number of filters in CNN network is 300, and the window size of the filters in CNN network is set to 3. We applied dropout (Srivastava et al., 2014) to each layer in our approach to mitigate overfitting. The dropout rate is 0.2. The default value of long-term user representation masking probability p for model training is 0.5. We used Adam (Kingma and Ba, 2014) to optimize the model, and the learning rate was 0.01. The batch size is set to 400. The number of negative samples for each positive sample is 4. 
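As a quick illustration of the training objective above, the following sketch combines the dot-product click score, the pseudo (K+1)-way classification loss of Eq. (5) for one positive sample, and the random masking of the long-term user vector of Eq. (6). The values K = 4 and p = 0.5 mirror the settings listed above; everything else is an illustrative assumption.

```python
import numpy as np

def click_score(u, e_news):
    """Dot-product click score s(u, c) = u^T e."""
    return u @ e_news

def train_loss_one_sample(u, e_pos, e_negs):
    """Negative log-likelihood of the positive news in the pseudo
    (K+1)-way classification of Eq. (5)."""
    scores = np.array([click_score(u, e_pos)] +
                      [click_score(u, e_neg) for e_neg in e_negs])
    scores -= scores.max()                    # for numerical stability
    return -np.log(np.exp(scores[0]) / np.exp(scores).sum())

def masked_long_term(Wu, user_id, p=0.5, rng=None):
    """Eq. (6): zero out the whole long-term vector with probability p
    during training, so the model also handles users unseen at test time."""
    rng = rng if rng is not None else np.random.default_rng()
    mask = rng.binomial(1, 1.0 - p)           # M ~ B(1, 1 - p)
    return mask * Wu[user_id]

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    d, K, p = 400, 4, 0.5                     # negatives per positive and mask rate
    u = rng.normal(size=d)
    e_pos = rng.normal(size=d)
    e_negs = rng.normal(size=(K, d))          # sampled from the same impression
    print("loss:", float(train_loss_one_sample(u, e_pos, e_negs)))
    Wu = 0.01 * rng.normal(size=(1000, d))
    print("masked u_l:", masked_long_term(Wu, 7, p, rng)[:3])
```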
These hyper-parameters were all selected according to the results on validation set. We used 3https://www.msn.com/en-us/news 4The ratio of the negative sample number to the positive sample number. 5http://nlp.stanford.edu/data/glove.840B.300d.zip 341 impression-based ranking metrics to evaluate the performance, including area under the ROC curve (AUC), mean reciprocal rank (MRR), and normalized discounted cumulative gain (nDCG). We repeated each experiment for 10 times independently, and reported the average results with 0.95 confidence probability. 4.2 Performance Evaluation We evaluate the performance of our approach by comparing it with several baseline methods, including: • LibFM (Rendle, 2012), a state-of-the-art matrix factorization method which is widely used in recommendation. In our experiments, the user features are the concatenation of TF-IDF features extracted from the browsed news titles, and the normalized count features from the topics and subtopics of the browsed news. The features for news consists of TFIDF features from its title, and one-hot vectors of its topic and subtopic. The input to LibFM is the concatenation of user features and features of candidate news. • DeepFM (Guo et al., 2017), a widely used method that combines factorization machines and deep neural networks. We use the same features as LibFM. • Wide & Deep (Cheng et al., 2016), another deep learning based recommendation method that combines a wide channel and a deep channel. Again, the same features with LibFM are used for both channels. • DSSM (Huang et al., 2013), deep structured semantic model. The inputs are hashed words via character trigram, where all the browsed news titles are merged as query document. • CNN (Kim, 2014), using CNN with max pooling to learn news representations from the titles of browsed news by keeping the most salient features. • DKN (Wang et al., 2018), a deep news recommendation model which contains CNN and candidate-aware attention on the news browsing histories. • GRU (Okura et al., 2017), learning news representations by a denoising autoencoder and user representations by a GRU network. The results of comparing different methods are summarized in Table 2. We have obtained observations from Table 2. First, the news recommendation methods (e.g. CNN, DKN and LSTUR) which use neural networks to learn news and user representations can significantly outperform the methods using manual feature engineering (e.g. LibFM, DeepFM, Wide & Deep, and DSSM). This is probably because handcrafted features are usually not optimal, and neural networks can capture both global and local semantic contexts in news, which are useful for learning more accurate news and user representations for news recommendation. Second, our LSTUR approach outperforms all baseline methods compared here, including deep learning models such as CNN, GRU and DKN. Our LSTUR approach can capture both the long-term preferences and short-term interests to capture the complex and diverse user interests in news reading, while the baseline methods only learn a single representation for each user, which is insufficient. In addition, our LSTUR approach uses attention mechanism in the news encoder to select important words, which can help learn more informative news representations. Third, our proposed two methods to learn longand short-term user representations, i.e., LSTURini and LSTUR-con, can achieve comparable performance and both outperform baseline methods, which validate the effectiveness of these methods. 
In addtion, the performance of LSTUR-con is more stable than LSTUR-ini, which indicates that using the concatenation of both short-term and long-term user representations is capable of retaining all the information. We also conducted experiments to explore the performance of combining both LSTUR-con and LSTUR-ini in the same model, but the performance improvement is very limited, implying that each of them can fully capture the long- and short-term user interests for news recommendation. 4.3 Effectiveness of Long- and Short-Term User Representation In this section, we conducted several experiments to explore the effectiveness of our approach in learning both long-term and short-term user representations. We compare the performance of our LSTUR methods with the long-term user representation model LTUR and the short-term user rep342 Methods AUC MRR nDCG@5 nDCG@10 LibFM 56.52 ± 1.31 25.53 ± 0.81 26.66 ± 1.04 34.72 ± 0.95 DeepFM 58.13 ± 1.69 27.01 ± 0.20 28.37 ± 0.57 36.78 ± 0.62 Wide & Deep 58.07 ± 0.55 27.07 ± 0.37 28.51 ± 0.45 36.93 ± 0.43 DSSM 58.43 ± 0.58 27.25 ± 0.49 28.31 ± 0.60 36.91 ± 0.54 CNN 61.13 ± 0.77 29.44 ± 0.73 31.44 ± 0.87 39.51 ± 0.74 DKN 61.25 ± 0.78 29.47 ± 0.64 31.54 ± 0.79 39.59 ± 0.67 GRU 62.69 ± 0.16 30.24 ± 0.13 32.56 ± 0.17 40.55 ± 0.13 LSTUR-con 63.47 ± 0.10 30.94 ± 0.14 33.43 ± 0.13 41.34 ± 0.13 LSTUR-ini 63.56 ± 0.42 30.98 ± 0.32 33.45 ± 0.39 41.37 ± 0.36 Table 2: The performance of different methods on news recommendation. Figure 4: The effectiveness of incorporating long-tern user representations (LTUR) and short-term user representations (STUR). Figure 5: The comparisons of different methods in learning short-term user representations from recently browsed news articles. resentation model STUR. The results are summarized in Fig. 4. From the results we find both LTUR and STUR are useful for news recommendation, and the STUR model can outperform the LTUR model. According to the statistics in Table 1, the longterm representations of many users in test data are unavailable, which leads to relative weak performance of LTUR on these users. In addition, combining STUR and LTUR using our two longand short-term user representation methods, i.e., LSTUR-ini and LSTUR-con, can effectively improve the performance. This result validates that incorporating both long-term and short-term user representations is useful to capture the diverse user interests more accurately and is beneficial for news recommendation. 4.4 Effectiveness of News Encoders in STUR In our STUR model, GRU is used to learn shortterm user representations from the recent browsing news. We explore the effectiveness of GRU in encoding news by replacing it with several other encoders, including: 1) Average: using the average of all the news representations in recent browsing history; 2) Attention: the summation of news representations weighted by their attention weights; 3) LSTM (Hochreiter and Schmidhuber, 1997), replacing GRU with LSTM. The results are summarized in Fig. 5. According to Fig. 5, the sequence-based encoders (e.g., GRU, LSTM) outperform the Average and Attention based encoders. This is probably because the sequence-based encoders can capture the sequential new reading patterns to learn short-term representations of users, which is difficult for Average and Attention based encoders. In addition, GRU achieves better performance than LSTM. This may be because GRU contains fewer parameters and has lower risk of overfitting . Thus, we select GRU as the news encoder in STUR. 
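For reference, the "Average" and "Attention" alternatives compared above can be sketched as follows. The paper does not spell out these baselines, so the additive-attention form and the parameter names are our assumptions; both variants pool the browsed-news representations produced by the news encoder.

```python
import numpy as np

def average_user_encoder(browsed_news):
    """'Average' variant: the mean of the browsed-news representations."""
    return browsed_news.mean(axis=0)

def attention_user_encoder(browsed_news, q, q_b=0.0):
    """'Attention' variant: additive attention pooling over the browsed news,
    analogous to the word-level attention of Eq. (2)."""
    a = np.tanh(browsed_news @ q + q_b)
    w = np.exp(a - a.max())
    w /= w.sum()
    return w @ browsed_news

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    browsed = rng.normal(size=(5, 400))       # five recently browsed news vectors
    q = 0.01 * rng.normal(size=400)
    print(average_user_encoder(browsed).shape)       # (400,)
    print(attention_user_encoder(browsed, q).shape)  # (400,)
```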
4.5 Effectiveness of News Title Encoders In this section, we conduct experiments to compare different news title encoders. In our approach, the news encoder is a combination of CNN network and an attention network (denoted as CNN+Att). We compare it with several variants, i.e., CNN, LSTM, and LSTM with attention (LSTM+Att), to validate the effectiveness of our 343 (a) AUC (b) nDCG@10 Figure 6: The comparisons of different methods in learning news title representations and the effectiveness of attention machenism in selecting important words. (a) AUC (b) nDCG@10 Figure 7: The effectiveness of incorporating news topic and subtopic information for news recommendation. approach. The results are summarized in Fig. 6. According to Fig. 6, using attention mechanism in both encoders based on CNN and LSTM can achieve better performance. This is probably because the attention network can select important words, which can learn more informative news representations. In addition, encoders using CNN outperform those using LSTM. This may be because local contexts in news titles are more important for learning news representations. 4.5.1 Effectiveness of News Topic In this section, we conduct experiments to validate the effectiveness of incorporating topic and subtopic of news in the news encoder. We compare the performance of our approach with its variants without topic and/or subtopics. The results are shown in Fig. 7. According to Fig. 7, incorporating either topics or subtopics can effectively improve the performance of our approach. In addition, the news encoder with subtopics outperforms the news encoder with topics. This is probably because subtopics can provide more fine-grained topic information which is more helpful for news recommendation. Thus, the model with subtopics can achieve better news recommendation performance. Moreover, combining topics and subtopics can further improve the performance of our approach. These results validate the effectiveness of our approach in exploiting topic information for news recommendation. 4.5.2 Influence of Masking Probability In this section, we explore the influence of the probability p in Eq. (6) for randomly masking long-term user representation in model training. We vary the value of p from 0.0 to 0.9 with a step of 0.1 for both LSTUR-ini and LSTUR-con. The results are summarized in Fig. 8. According to Fig. 8, the results of LSTUR-ini and LSTUR-con have similar patterns. The performance of both methods improves when p increases from 0. When p is too small, the model will tend to overfit on the LTUR, since LTUR has many parameters. Thus, the performance is not optimal. However, when p is too large, the performance of both methods starts to decline. This may be be344 0.0 0.3 0.6 0.9 63.0 64.0 AUC MRR nDCG@5 nDCG@10 40.5 41.5 0.0 0.3 0.6 0.9 30.0 31.0 32.0 33.0 34.0 Mask probability p (a) LSTUR-ini. 0.0 0.3 0.6 0.9 63.0 64.0 AUC MRR nDCG@5 nDCG@10 40.5 41.5 0.0 0.3 0.6 0.9 30.0 31.0 32.0 33.0 34.0 Mask probability p (b) LSTUR-con. Figure 8: The influence of mask probability p on the performance of our approach. 2019 CES Highlights : Innovations in Enviro-Sensing for Robocars California dries off after storm batter state for days 15 Recipes Inspired By Vintage Movies Texas State Rep . Dennis Bonnen Elected As House Speaker Should You Buy American Express Stock After Earnings ? How Meghan Markle Has Changed Prince Harry Considerably Figure 9: Visualization of the word-level attentions. 
cause the useful information in LTUR cannot be effectively incorporated. Thus, the performance is also not optimal. A moderate choice on p (e.g., 0.5) is most appropriate for both LSTUR-ini and LSTUR-con methods, which can properly balance the learning of LTUR and STUR. 5 Visualization of Attention Weights In this section, we visually explore the effectiveness of the word-level attention network in the news encoder. The attention weights in several example news titles are shown in Fig. 9. From the results, we find our approach can effectively recognize important words to learn more informative news representations. For example, the words “Enviro-Sensing” and “Robocars” in the first news title are assigned high attention weights because these words are indications of news on technologies, while the words “2019” and “for” are assigned low attention weights by our approach since they are less informative. These results validate the effectiveness of the attention network in the news encoder. 6 Conclusion In this paper, we propose a neural news recommendation approach which can learn both longand short-term user representations. The core of our model is a news encoder and a user encoder. In the news encoder, we learn representations of news from their titles and topic categories, and use an attention network to highlight important words for informative representation learning. In the user encoder, we propose to learn long-term representations of users from the embeddings of their IDs. In addition, we learn short-term representations of users from their recently browsed news via a GRU network. Besides, we propose two methods to fuse long- and short-term user representations, i.e., using long-term user representation to initialize the hidden state of the GRU network in short-term user representation, or concatenating both longand short-term user representations as a unified user vector. Extensive experiments on a real-world dataset collected from MSN news show our approach can effecitively improve the performance of news recommendation. Acknowledgement The authors would like to thank Microsoft News for providing technical support and data in the experiments, and Jiun-Hung Chen (Microsoft News) and Ying Qiao (Microsoft News) for their support and discussions. We also want to thank Jianqiang Huang for his help in the experiments. 345 References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR. Trapit Bansal, Mrinal Das, and Chiranjib Bhattacharyya. 2015. Content driven user profiling for comment-worthy recommendations of news and blog articles. In RecSys, pages 195–202. David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent dirichlet allocation. Journal of Machine Learning Research, 3(Jan):993–1022. Heng-Tze Cheng, Mustafa Ispir, Rohan Anil, Zakaria Haque, Lichan Hong, Vihan Jain, Xiaobing Liu, Hemal Shah, Levent Koc, Jeremiah Harmsen, Tal Shaked, Tushar Chandra, Hrishi Aradhye, Glen Anderson, Greg Corrado, and Wei Chai. 2016. Wide & deep learning for recommender systems. In DLRS, pages 7–10. Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder–decoder for statistical machine translation. In EMNLP, pages 1724–1734. Abhinandan S. Das, Mayur Datar, Ashutosh Garg, and Shyam Rajaram. 2007. Google news personalization: scalable online collaborative filtering. In WWW, pages 271–280. 
Huifeng Guo, Ruiming TANG, Yunming Ye, Zhenguo Li, and Xiuqiang He. 2017. DeepFM: A factorization-machine based neural network for CTR prediction. In IJCAI, pages 1725–1731. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780. Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry Heck. 2013. Learning deep structured semantic models for web search using clickthrough data. In CIKM, pages 2333–2338. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In EMNLP, pages 1746– 1751. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Talia Lavie, Michal Sela, Ilit Oppenheim, Ohad Inbar, and Joachim Meyer. 2010. User attitudes towards news content personalization. International Journal of Human-Computer Studies, 68(8):483–495. Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. 2015. Deep learning. Nature, 521(7553):436–444. Lei Li, Li Zheng, Fan Yang, and Tao Li. 2014. Modeling and broadening temporal user interest in personalized news recommendation. Expert Systems with Applications, 41(7):3168–3177. Lihong Li, Wei Chu, John Langford, and Robert E. Schapire. 2010. A contextual-bandit approach to personalized news article recommendation. In WWW, pages 661–670. Jianxun Lian, Fuzheng Zhang, Xing Xie, and Guangzhong Sun. 2018. Towards better representation learning for personalized news recommendation: a multi-channel deep fusion approach. In IJCAI, pages 3805–3811. Jiahui Liu, Peter Dolan, and Elin Rønby Pedersen. 2010. Personalized news recommendation based on click behavior. In IUI, pages 31–40. Shumpei Okura, Yukihiro Tagami, Shingo Ono, and Akira Tajima. 2017. Embedding-based news recommendation for millions of users. In KDD, pages 1933–1942. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In EMNLP, pages 1532–1543. Owen Phelan, Kevin McCarthy, and Barry Smyth. 2009. Using twitter to recommend real-time topical news. In RecSys, pages 385–388. Steffen Rendle. 2012. Factorization machines with libFM. ACM Transactions on Intelligent Systems and Technology, 3(3):1–22. Jeong-Woo Son, A-Yeong Kim, and Seong-Bae Park. 2013. A location-based news article recommendation with explicit localized semantic analysis. In SIGIR, pages 293–302. Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958. Hongwei Wang, Fuzheng Zhang, Xing Xie, and Minyi Guo. 2018. DKN: Deep knowledge-aware network for news recommendation. In WWW, pages 1835– 1844. Chuhan Wu, Fangzhao Wu, Mingxiao An, Jianqiang Huang, Yongfeng Huang, and Xing Xie. 2019. NPA: Neural news recommendation with personalized attention. In KDD. Shuangfei Zhai, Keng hao Chang, Ruofei Zhang, and Zhongfei Mark Zhang. 2016. Deepintent: Learning attentions for online advertising with recurrent neural networks. In KDD, pages 1295–1304. Guanjie Zheng, Fuzheng Zhang, Zihan Zheng, Yang Xiang, Nicholas Jing Yuan, Xing Xie, and Zhenhui Li. 2018. DRN: A deep reinforcement learning framework for news recommendation. In WWW, pages 167–176.
2019
33
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3381–3392 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3381 HIGHRES: Highlight-based Reference-less Evaluation of Summarization Hardy1 Shashi Narayan2∗ Andreas Vlachos1,3 1Department of Computer Science, University of Sheffield 2Google Research 3Department of Computer Science and Technology, University of Cambridge [email protected], [email protected], [email protected] Abstract There has been substantial progress in summarization research enabled by the availability of novel, often large-scale, datasets and recent advances on neural network-based approaches. However, manual evaluation of the system generated summaries is inconsistent due to the difficulty the task poses to human non-expert readers. To address this issue, we propose a novel approach for manual evaluation, HIGHlight-based Reference-less Evaluation of Summarization (HIGHRES), in which summaries are assessed by multiple annotators against the source document via manually highlighted salient content in the latter. Thus summary assessment on the source document by human judges is facilitated, while the highlights can be used for evaluating multiple systems. To validate our approach we employ crowd-workers to augment with highlights a recently proposed dataset and compare two state-of-the-art systems. We demonstrate that HIGHRES improves inter-annotator agreement in comparison to using the source document directly, while they help emphasize differences among systems that would be ignored under other evaluation approaches.1 1 Introduction Research in automatic summarization has made headway over the years with single document summarization as the front-runner due to the availability of large datasets (Sandhaus, 2008; Hermann et al., 2015; Narayan et al., 2018b) which has enabled the development of novel methods, many of them employing recent advances in neural networks (See et al., 2017; Narayan et al., 2018c; Pasunuru and Bansal, 2018, inter alia). ∗The work was primarily done while Shashi was still at School of Informatics, University of Edinburgh. 1Our dataset and code are available at https:// github.com/sheffieldnlp/highres Figure 1: Highlight-based evaluation of a summary. Annotators to evaluate a summary (bottom) against the highlighted source document (top) presented with a heat map marking the salient content in the document; the darker the colour, the more annotators deemed the highlighted text salient. Measuring progress in summarization is difficult, as the task has as input a source document consisting of multiple sentences and methods need to generate a shorter text that expresses the salient information of the source fluently and succinctly. Thus there can be multiple equally good summaries for the same source document as not all salient information can fit in a given summary length, while even extractive methods that select complete sentences are not guaranteed to produce a coherent summary overall. The most consistently used evaluation approach is comparison of the summaries produces against reference summaries via automatic measures such as ROUGE (Lin, 2004) and its variants. However, 3382 automatic measures are unlikely to be sufficient to measure performance in summarization (Schluter, 2017), also known for other tasks in which the goal is to generate natural language (Novikova et al., 2017). 
Furthermore, the datasets typically considered have a single reference summary, as obtaining multiple ones increases dataset creation cost, thus evaluation against them is likely to exhibit reference bias (Louis and Nenkova, 2013; Fomicheva and Specia, 2016), penalizing summaries containing salient content different from the reference. For the above reasons manual evaluation is considered necessary for measuring progress in summarization. However, the intrinsic difficulty of the task has led to research without manual evaluation or only fluency being assessed manually. Those that conduct manual assessment of the content, typically use a single reference summary, either directly (Celikyilmaz et al., 2018; Tan et al., 2017) or through questions (Narayan et al., 2018b,c) and thus are also likely to exhibit reference bias. In this paper we propose a novel approach for manual evaluation, HIGHlight-based Referenceless Evaluation of document Summarization (HIGHRES), in which a summary is assessed against the source document via manually highlighted salient content in the latter (see Figure 1 for an example). Our approach avoids reference bias, as the multiple highlights obtained help consider more content than what is contained in a single reference. The highlights are not dependent on the summaries being evaluated but only on the source documents, thus they are reusable across studies, and they can be crowd-sourced more effectively than actual summaries. Furthermore, we propose to evaluate the clarity of a summary separately from its fluency, as they are different dimensions. Finally, HIGHRES provides absolute instead of ranked evaluation, thus the assessment of a system can be conducted and interpreted without reference to other systems. To validate our proposed approach we use the recently introduced eXtreme SUMmarization dataset (XSUM, Narayan et al., 2018b) to evaluate two state-of-the-art abstractive summarization methods, Pointer Generator Networks (See et al., 2017) and Topic-aware Convolutional Networks (Narayan et al., 2018b), using crowd-sourcing for both highlight annotation and quality judgments. We demonstrate that HIGHRES improves interannotator agreement in comparison to using the source document directly, while they help emphasize differences among systems that would be ignored under other evaluation approaches, including reference-based evaluation. Furthermore, we show that the clarity metric from the DUC (Dang, 2005) must be measured separately from “fluency”, as judgments for them had low correlation. Finally, we make the highlighted XSUM dataset, codebase to replicate the crowd-sourcing experiments and all other materials produced in our study publicly available. 2 Literature Review In recent years, summarization literature has investigated different means of conducting manual evaluation. We study a sample of 26 recent papers from major ACL conferences and outline the trends of manual evaluation in summarization in Table 1. From 26 papers, 11 papers (e.g., See et al., 2017; Kedzie et al., 2018; Cao et al., 2018) did not conduct any manual evaluation. Following the Document Understanding Conference (DUC, Dang, 2005), a majority of work has focused on evaluating the content and the linguistic quality of summaries (Nenkova, 2005). However, there seems to be a lack of consensus on how a summary should be evaluated: (i) Should it be evaluated relative to other summaries or standalone in absolute terms? 
and (ii) What would be a good source of comparison: the input document or the reference summary? The disagreements on these issues result in authors evaluating their summaries often (11 out of 26 papers) using automatic measures such as ROUGE (Lin, 2004) despite of its limitations (Schluter, 2017). In what follows, we discuss previously proposed approaches along three axes: evaluation metrics, relative vs. absolute, and the choice of reference. Evaluation Metrics Despite differences in the exact definitions, the majority (e.g., Hsu et al., 2018; Celikyilmaz et al., 2018; Narayan et al., 2018b; Chen and Bansal, 2018; Peyrard and Gurevych, 2018) agree on both or either one of two broad quality definitions: coverage determines how much of the salient content of the source document is captured in the summary, and informativeness, how much of the content captured in the summary is salient with regards to the original document. These measures correspond to “recall” and “precision” metrics respectively in Table 1, notions that are commonly used 3383 Systems No Manual Eval Pyramid QA Correctness Fluency Clarity Recall Precision Absolute Relative With Reference With Document With Ref. & Doc. See et al. (2017) ✓ Lin et al. (2018) ✓ Cohan et al. (2018) ✓ Liao et al. (2018) ✓ Kedzie et al. (2018) ✓ Amplayo et al. (2018) ✓ Jadhav and Rajan (2018) ✓ Li et al. (2018a) ✓ Pasunuru and Bansal (2018) ✓ Cao et al. (2018) ✓ Sakaue et al. (2018) ✓ Celikyilmaz et al. (2018) ✓ ✓ ✓ ✓ ✓ ✓ Chen and Bansal (2018) ✓ ✓ ✓ ✓ ✓ Guo et al. (2018) ✓ ✓ ✓ ✓ Hardy and Vlachos (2018) ✓ ✓ Hsu et al. (2018) ✓ ✓ ✓ ✓ ✓ Krishna and Srinivasan (2018) ✓ ✓ ✓ Kry´sci´nski et al. (2018) ✓ ✓ ✓ ✓ Li et al. (2018b) ✓ ✓ Narayan et al. (2018a) ✓ ✓ ✓ Narayan et al. (2018b) ✓ ✓ ✓ ✓ ✓ ✓ ✓ Narayan et al. (2018c) ✓ ✓ ✓ ✓ ✓ ✓ ✓ Peyrard and Gurevych (2018) ✓ ✓ ✓ ✓ ShafieiBavani et al. (2018) ✓ Song et al. (2018) ✓ ✓ ✓ ✓ ✓ Yang et al. (2017) ✓ ✓ ✓ HIGHRES (ours) ✓ ✓ ✓ ✓ ✓ ✓ Table 1: Overview of manual evaluations conducted in recent summarization systems. We categorize them in four dimensions: the first columns presents papers that do not report on human evaluation; the second column identifies matrices used for evaluating content (“Pyramid”, “QA”, “Correctness”, “Recall” and “Precision”) and quality (“Clarity”, “Fluency”) of summaries; the third column focuses if the system ranking reported by humans on content evaluation were “Absolute” or “Relative”; and finally, the fourth column evaluates if summaries were evaluated against the input document (“With Document”), the reference summary (“With Reference”) or both (“With Ref. & Doc.”). in information retrieval and information extraction literature. Clarke and Lapata (2010) proposed a question-answering based approach to improve the agreement among human evaluations for the quality of summary content, which was recently employed by Narayan et al. (2018b) and Narayan et al. (2018c) (QA in Table 1). In this approach, questions were created first from the reference summary and then the system summaries were judged with regards to whether they enabled humans to answer those questions correctly. ShafieiBavani et al. (2018), on the other hand, used the “Pyramid” method (Nenkova and Passonneau, 2004) which requires summaries to be annotated by experts for salient information. 
A similar evaluation approach is the factoids analysis by Teufel and Van Halteren (2004) which evaluates the system summary against factoids, a representation based on atomic units of information, that are extracted from multiple gold summaries. However, as in the case of the “Pyramid” method, extracting factoids requires experts annotators. Finally, a small number of work evaluates the ”Correctness” (Chen and Bansal, 2018; Li et al., 2018b; Chen and Bansal, 2018) of the summary, similar to fact checking (Vlachos and Riedel, 2014), which can be a challenging task in its own right. The linguistic quality of a summary encompasses many different qualities such as fluency, grammatically, readability, formatting, naturalness and coherence. Most recent work uses a single human judgment to capture all linguistic qualities of the summary (Hsu et al., 2018; Kry´sci´nski et al., 2018; Narayan et al., 2018b; Song et al., 2018; Guo et al., 2018); we group them under “Fluency” in Table 1 with an exception of “Clarity” which 3384 was evaluated in the DUC evaluation campaigns (Dang, 2005). The “Clarity” metric puts emphasis in easy identification of noun and pronoun phrases in the summary which is a different dimension than “Fluency”, as a summary may be fluent but difficult to be understood due to poor clarity. Absolute vs Relative Summary Ranking. In relative assessment of summarization, annotators are shown two or more summaries and are asked to rank them according to the dimension at question (Yang et al., 2017; Chen and Bansal, 2018; Narayan et al., 2018a; Guo et al., 2018; Krishna and Srinivasan, 2018). The relative assessment is often done using the paired comparison (Thurstone, 1994) or the best-worst scaling (Woodworth and G, 1991; Louviere et al., 2015), to improve inter-annotator agreement. On the other hand, absolute assessment of summarization (Li et al., 2018b; Song et al., 2018; Kry´sci´nski et al., 2018; Hsu et al., 2018; Hardy and Vlachos, 2018) is often done using the Likert rating scale (Likert, 1932) where a summary is assessed on a numerical scale. Absolute assessment was also employed in combination with the question answering approach for content evaluation (Narayan et al., 2018b; Mendes et al., 2019). Both approaches, relative ranking and absolute assessment, have been investigated extensively in Machine Translation (Bojar et al., 2016, 2017). Absolute assessment correlates highly with the relative assessment without the bias introduced by having a simultaneous assessment of several models (Bojar et al., 2011). Choice of Reference. The most convenient way to evaluate a system summary is to assess it against the reference summary (Celikyilmaz et al., 2018; Yang et al., 2017; Peyrard and Gurevych, 2018), as this typically requires less effort than reading the source document. The question answering approach of Narayan et al. (2018b,c) also falls in this category, as the questions were written using the reference summary. However, summarization datasets are limited to a single reference summary per document (Sandhaus, 2008; Hermann et al., 2015; Grusky et al., 2018; Narayan et al., 2018b) thus evaluations using them is prone to reference bias (Louis and Nenkova, 2013), also a known issue in machine translation evaluation (Fomicheva and Specia, 2016). A circumvention for this issue is to evaluate it against the source document (Song et al., 2018; Narayan et al., 2018a; Hsu et al., 2018; Kry´sci´nski et al., 2018), asking judges to assess the summary after reading the source document. 
However this requires more effort and is known to lead to low inter-annotator agreement (Nenkova and Passonneau, 2004). 3 HIGHRES Our novel highlight-based reference-less evaluation does not suffer from reference bias as a summary is assessed against the source document with manually highlighted salient content. These highlights are crowd-sourced effectively without the need of expert annotators as required by the Pyramid method (Nenkova and Passonneau, 2004) or to generate reference summaries. Our approach improves over the “Correctness” or “Fluency” only measure for summarization by taking salience into account. Finally, the assessment of summaries against the document with highlighted pertinent content facilitates an absolute evaluation of summaries with high inter-annotator agreement. Our evaluation framework comprises three main components: document highlight annotation, highlight-based content evaluation, and clarity and fluency evaluation. The second component, which evaluates the notions of “Precision” and “Recall” requires the highlights from the first one to be conducted. However, the highlight annotation needs to happen only once per document, and it can be reused to evaluate many system summaries, unlike the Pyramid approach (Nenkova and Passonneau, 2004) that requires additional expert annotation for every system summary being evaluated. The third component is independent of the others and can be run in isolation. In all components we employ crowd-workers as human judges, and implement appropriate sanity checking mechanisms to ensure good quality judgements. Finally, we present an extended version of ROUGE (Lin, 2004) that utilizes the highlights to evaluate system summaries against the document; this demonstrates another use of the highlights for summarization evaluation. 3.1 Highlight Annotation In this part, we ask human judges to read the source document and then highlight words or phrases that are considered salient. Each judge is allowed to highlight parts of the text at any granu3385 larity, from single words to complete sentences or even paragraphs. However we enforce a limit in the number of words to K that can be highlighted in total by a judge in a document, corresponding to the length of the summary expected. By employing multiple judges per document who are restricted in the amount of text that can be highlighted we expect to have a more diverse and focused highlight from multiple judges which cover different viewpoints of the article. To ensure that each highlight is reliable, we performed a sanity check at the end of the task where we ask the judges to answer a True/False question based on the article. We rejected all annotations that failed to correctly answer the sanity check question. 3.2 Highlight-based Content Evaluation In this component, we present human judges a document that has been highlighted using heatmap coloring and a summary to assess. We ask our judges to assess the summary for (i) ‘All important information is present in the summary’ and (ii) ‘Only important information is in the summary.’ The first one is the recall (content coverage) measure and the second, the precision (informativeness) measure. All the ratings were collected on a 1-100 Likert scale (Likert, 1932). Figure 2 shows the content evaluation user interface where salient parts of the document are highlighted. As with the highlight annotation, we performed the same form of sanity check to the one in the highlight annotation task. 
3.3 Clarity and Fluency Evaluation In this part, we give the judges only the summary and ask them to rate it on clarity and fluency. For clarity, each judge is asked whether the summary is easy to be understood, i.e. there should be no difficulties in identifying the referents of the noun phrases (every noun/place/event should be wellspecified) or understanding the meaning of the sentence. For fluency, each judge is asked whether the summary sounds natural and has no grammatical problems. While fluency is often evaluated in recent work, clarity, while first introduced in DUC evaluations, has recently been ignored in manual evaluation, despite that it captures a different dimension of summarization quality. To ensure that the judgments for clarity and fluency are not affected by each other (poor fluency can affect clarity, but a summary can have perfect fluency but low clarity), we evaluate each metric separately. We ask the judges to evaluate multiple summaries per task with each dimension in its own screen. For sanity checking, we insert three artificial summaries of different quality (good, mediocre and bad summaries). The good summary is the unedited one, while the others are generated from sentences randomly sampled from the source document. For the mediocre summary, some words are edited to introduce some grammatical or syntactic errors while for the bad summary, the words are further scrambled. We reject judgements that failed to pass this criteria: bad < mediocre < good. 3.4 Highlight-based ROUGE Evaluation Our Highlight-based ROUGE (we refer to it as HROUGE) formulation is similar to the original ROUGE with the difference that the n-grams are weighted by the number of times they were highlighted. One benefit of HROUGE is that it introduces saliency into the calculation without being reference-based as in ROUGE. Implicitly HROUGE considers multiple summaries as the highlights are obtained from multiple workers. Given a document D as a sequence of m tokens {w1, . . . , wm}, annotated with N highlights, we define the weight βn g ∈[0, 1] for an n-gram g as: βn g = m−(n−1) X i=1 "Pi+n−1 j=i NumH(wj) N n # wi:i+n−1==g m−(n−1) X i=1 [1]wi:i+n−1==g where, [x]y is an indicator function which returns x if y is true and 0, otherwise. NumH(wj) = PN k=1 len(Hk) K [1]wj∈Hk is a function which returns the number of times word wj is highlighted by the annotators out of N times weighted by the lengths of their highlights; Hk is the highlighted text by the k-th annotator and K is the maximum allowed length of the highlighted text (see Section 3.1). NumH(wj) gives less importance to annotators with highlights with few words. In principle, if an n-gram is highlighted by every crowd-worker and the length of the highlight of each crowd-worker is K, the n-gram g will have a maximum weight of βn g = 1. The HROUGE scores for a summary S can then 3386 Figure 2: The UI for content evaluation with highlight. Judges are given an article with important words highlighted using heat map. Judges can also remove less important highlight color by sliding the scroller at the left of the page. At the right of the page judges give the recall and precision assessment by sliding the scroller from 1 to 100 based on the given summary quality. 
be defined as:

\[
\mathrm{HR}^{n}_{rec} = \frac{\sum_{g \in \text{n-gram}(S)} \beta^{n}_{g}\,\mathrm{count}(g, D \cap S)}{\sum_{g \in \text{n-gram}(D)} \beta^{n}_{g}\,\mathrm{count}(g, D)},
\qquad
\mathrm{HR}^{n}_{pre} = \frac{\sum_{g \in \text{n-gram}(S)} \beta^{n}_{g}\,\mathrm{count}(g, D \cap S)}{\sum_{g \in \text{n-gram}(S)} \mathrm{count}(g, S)}.
\]

HRn rec and HRn pre are the HROUGE recall and precision scores; count(g, X) is the maximum number of times the n-gram g occurs in the text X. The weight in the denominator of HRn pre is uniform (βn g = 1) for all g, because if we weighted according to the highlights, words in the summary that are not highlighted in the original document would be ignored. This would result in HRn pre not penalizing summaries for containing words that are likely to be irrelevant, as they do not appear in the highlights of the document. It is important to note that HROUGE has a limitation: it penalizes abstractive summaries that do not reuse words from the original document. This is similar to ROUGE penalizing summaries for not reusing words from the reference summaries; however, the highlights allow us to implicitly consider multiple references without having to actually obtain them.

4 Summarization Dataset and Models

We use the extreme summarization dataset (XSUM, Narayan et al., 2018b)2, which comprises BBC articles paired with their single-sentence summaries, provided by the journalists writing the articles. Summaries in the XSUM dataset contain a larger number of novel n-grams than those in other popular datasets such as CNN/DailyMail (Hermann et al., 2015) or NY Times (Sandhaus, 2008), which makes it suitable for our experiments: the more abstractive nature of the summaries renders automatic methods such as ROUGE, which rely on string matching, less accurate, and thus calls for human evaluation for more accurate system comparisons. Following Narayan et al. (2018b), we did not use the whole test set, but sampled 50 articles from it for our highlight-based evaluation. We assessed summaries from two state-of-the-art abstractive summarization systems using our highlight-based evaluation: (i) the Pointer-Generator model (PTGEN) introduced by See et al. (2017), an RNN-based abstractive system which is able to copy words from the source text, and (ii) the Topic-aware Convolutional Sequence to Sequence model (TCONVS2S) introduced by Narayan et al. (2018b), an abstractive model which is conditioned on the article's topics and based entirely on Convolutional Neural Networks. We used the pre-trained models3 provided by the authors to obtain summaries from both systems for the documents in our test set.

2https://github.com/EdinburghNLP/XSum

5 Experiments and Results

All of our experiments are done using the Amazon Mechanical Turk platform. We develop three types of Human Intelligence Tasks (HITs): highlight annotation, highlight-based content evaluation, and fluency and clarity evaluation. In addition, we elicited human judgments for content evaluation in two more ways: we assessed system summaries against the original document (without highlights) and against the reference summary. The latter two experiments are intended as the comparison for our proposed highlight-based content evaluation.

5.1 Highlight Annotation

We collected highlight annotations from 10 different participants for each of the 50 articles. For each annotation, we set K, the maximum number of words to highlight, to 30. Our choice reflects the average length (24 words) of reference summaries in the XSUM dataset.
To facilitate the annotation of BBC news articles with highlights, we asked our participants to adapt the 5W1H (Who, What, When, Where, Why and How) principle (Robertson, 1946) that is a common practice in journalism. The participants however were not obliged to follow this principle and were free to highlight content as they deem fit. The resulting annotation exhibits a substantial amount of variance, confirming the intuition that different participants are not expected to agree entirely on what is salient in a document. On average, the union of the highlights from 10 annotators covered 38.21% per article and 33.77% of the highlights occurred at the second half of the article. This shows that the judges did not focus only on the beginning of the documents but annotated all across the document. Using Fleiss Kappa (Fleiss, 1971) on the binary labels provided by each judge on each word (highlighted or not) we obtained an average agreement 3Both models were trained using the standard crossentropy loss to maximize the likelihood of the reference summary given the document. Model Highlight Non HighReference -based light-based -based Prec Rec Prec Rec Prec Rec TCONVS2S 57.42 49.95 52.55 41.04 46.75 36.45 PTGEN 50.94 44.41 48.57 39.21 44.24 38.24 Reference 67.90 56.83 66.01 52.45 — — Table 2: Results of content evaluation of summaries against documents with highlights, documents without highlights and reference summaries. Model Highlight-based Non Highlight-based Prec Rec Prec Rec TCONVS2S 0.67 0.80 0.75 0.83 PTGEN 0.73 0.86 0.73 0.90 Reference 0.49 0.63 0.48 0.67 Table 3: Coefficient of variation (lower is better) for evaluating summaries against documents with and without highlights. of 0.19 for the 50 articles considered. The low agreement score does not indicate a poor annotation process necessarily; we argue that this is primarily due to the annotators having different opinions on which parts of an article are salient. The article with the highest agreement (0.32) has more focused highlights, whereas the article with the lowest agreement (0.04) has highlights spread all over (both articles can be seen in the supplementary materials). Interestingly, the reference summary on the highest agreement article appears to be more informative of its content when the annotator agreement is high; the reference summary on the lowest agreement article is more indicative, i.e., it does not contain any informative content from the article but only to inform the reader about the article’s topic and scope. These results confirm that the annotation behaviour originates from the nature of the document and the summary it requires, and validates our highlight annotation setup. 5.2 Content Evaluation of Summaries We assessed the summaries against (i) documents with highlights (Highlight-based), (ii) original documents without highlights (Non Highlightbased) and (iii) reference summaries (Referencebased). For each setup, we collected judgments from 3 different participants for each model summary. Table 2 and 3 presents our results. Both the highlight-based and non-highlight based assessment of summaries agree on the ranking among TCONVS2S, PTGEN and Reference. Perhaps unsurprisingly human-authored 3388 summaries were considered best, whereas, TCONVS2S was ranked 2nd, followed by PTGEN. 
However, the performance difference in TCONVS2S and PTGEN is greatly amplified when they are evaluated against document with highlights (6.48 and 5.54 Precision and Recall points) compared to when evaluated against the original documents (3.98 and 1.83 Precision and Recall points). The performance difference is lowest when they are evaluated against the reference summary (2.51 and -1.79 Precision and Recall points). The superiority of TCONVS2S is expected; TCONVS2S is better than PTGEN for recognizing pertinent content and generating informative summaries due to its ability to represent high-level document knowledge in terms of topics and long-range dependencies (Narayan et al., 2018b). We further measured the agreement among the judges using the coefficient of variation (Everitt, 2006) from the aggregated results. It is defined as the ratio between the sample standard deviation and sample mean. It is a scale-free metric, i.e. its results are comparable across measurements of different magnitude. Since, our sample size is small (3 judgements per summary), we use the unbiased version (Sokal and Rohlf, 1995) as cv = (1+ 1 4n) σ ¯x, where σ is the standard deviation, n is the number of sample, and ¯x is the mean. We found that the highlight-based assessment in general has lower variation among judges than the non-highlight based or reference-based assessment. The assessment of TCONVS2S summaries achieves 0.67 and 0.80 of Precision and Recall cv points which are 0.08 and 0.03 points below when they are assessed against documents with no highlights, respectively. We see a similar pattern in Recall on the assessment of PTGEN summaries. Our results demonstrate that the highlightbased assessment of abstractive systems improve agreement among judges compared to when they are assessed against the documents without highlights or the reference summaries. The assessment of human-authored summaries does not seem to follow this trend, we report a mixed results (0.49 vs 0.48 for precision and 0.63 vs 0.67 for recall) when they are evaluated with and without the highlights. Model Fluency Clarity TCONVS2S 69.51 67.19 PTGEN 55.24 52.49 Reference 77.03 75.83 Table 4: Mean ”Fluency” and ”Clarity” scores for TCONVS2S , PTGEN and Reference summaries. All the ratings were collected on a 1-100 Likert scale. Model Unigram Bigram Prec Rec Prec Rec ROUGE (Original document) TCONVS2S 77.17 4.20 26.12 1.21 PTGEN 77.09 4.99 28.75 1.64 Reference 73.65 4.42 22.42 1.17 HROUGE (Highlights from the document) TCONVS2S 7.94 5.42 3.30 2.11 PTGEN 7.90 6.46 3.37 2.64 Reference 7.31 5.73 2.39 1.84 Table 5: HROUGE-1 (unigram) and HROUGE-2 (bigram) precision, and recall scores for TCONVS2S , PTGEN and Reference summaries. 5.3 Clarity and Fluency Evaluation Table 4 shows the results of our fluency and clarity evaluations. Similar to our highlight-based content evaluation, human-authored summaries were considered best, whereas TCONVS2S was ranked 2nd followed by PTGEN, on both measures. The Pearson correlation between fluency and clarity evaluation is 0.68 which shows a weak correlation; it confirms our hypothesis that the ”clarity” captures different aspects from ”fluency” and they should not be combined as it is commonly done. 5.4 Highlight-based ROUGE Evaluation Table 5 presents our HROUGE results assessing TCONVS2S , PTGEN and Reference summaries with the highlights. To compare, we also report ROUGE results assessing these summaries against the original document without highlights. 
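For reference, the unbiased coefficient of variation defined above is only a few lines of code; the sketch below is ours and assumes the sample standard deviation is computed with the n − 1 denominator.

```python
import numpy as np

def unbiased_cv(ratings):
    """Coefficient of variation with the (1 + 1/(4n)) small-sample correction."""
    x = np.asarray(ratings, dtype=float)
    n = x.size
    return (1.0 + 1.0 / (4.0 * n)) * x.std(ddof=1) / x.mean()

# e.g. three hypothetical judgements for one summary
print(unbiased_cv([70, 85, 90]))
```

With n = 3 judgements per summary, the correction factor is 1 + 1/12 ≈ 1.08.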
In the latter case, HROUGE becomes the standard ROUGE metric with βn g = 1 for all n-grams g. Both ROUGE and HROUGE favour method of copying content from the original document and penalizes abstractive methods, thus it is not surprising that PTGEN is superior to TCONVS2S, as the former has an explicit copy mechanism. The fact that PTGEN is better in terms of HROUGE is also an evidence that the copying done by PTGEN selects salient content, thus confirming that the copying mechanism works as intended. When comparing the reference summaries against the original documents, both ROUGE and HROUGE confirm that the reference summaries are rather 3389 Figure 3: Highlighted article, reference summary, and summaries generated by TCONVS2S and PTGEN. Words in red in the system summaries are highlighted in the article but do not appear in the reference. abstractive as reported by Narayan et al. (2018b), and they in fact score below the system summaries. Recall scores are very low in all cases which is expected, since the 10 highlights obtained per document or the documents themselves, taken together, are much longer than any of the summaries. 6 Qualitative Analysis HIGHRES eliminates reference bias. The example presented in Figure 3 demonstrates how our highlight-based evaluation eliminates reference bias in summarization evaluation. The summaries generated by TCONVS2S and PTGEN are able to capture the essence of the document, however, there are phrases in these summaries that do not occur in the reference summary. A referencebased evaluation would fail to give a reasonable score to these system summaries. The HIGHRES however, would enable the judges to better evaluate the summaries without any reference bias. Fluency vs Clarity. Example in Table 6 shows disagreements between fluency and clarity scores for different summaries of the same article. From the example, we can see that the TCONVS2S summary is fluent but is not easily understood in the context of ‘the duration of resignation’, while the PTGEN summary has word duplication which lower the fluency and also lacking clarity due to several unclear words. Model Summary Text Fluency Clarity TCONVS2S dick advocaat has resigned as sunderland manager until the end of the season . 92.80 44.33 PTGEN sunderland have appointed former sunderland boss dick advocaat as manager at the end of the season to sign a new deal . 41.33 6.00 Table 6: TCONVS2S and PTGEN showing a disagreement between fluency and clarity scores. We italicized words that are not clear in the summaries. 7 Conclusion and Future Work In this paper we introduced the HIGHlightbased Reference-less Evaluation Summarization (HIGHRES) framework for manual evaluation. The proposed framework avoids reference bias and provides absolute instead of ranked evaluation of the systems. Our experiments show that HIGHRES lowers the variability of the judges’ content assessment, while helping expose the differences between systems. We also showed that by evaluating clarity we are able to capture a different dimension of summarization quality that is not captured by the commonly used fluency. We believe that our highlight-based evaluation is an ideal setup of abstractive summarization for three reasons: (i) highlights can be crowd sourced effectively without expert annotations, (ii) it avoids reference bias and (iii) it is not limited by n-gram overlap. In future work, we would like to extend our framework to other variants of summarization e.g. multi-document. 
Also, we will explore ways of automating parts of the process, e.g. the highlight annotation. Finally, the highlights could also be used as training signal, as it offers content saliency information at a finer level than the single reference typically used. Acknowledgments Hardy would like to thank the Indonesian government that has sponsored his studies through the Indonesia Endowment Fund for Education (LPDP). Shashi Narayan and Andreas Vlachos were supported by the EU H2020 SUMMA project (grant agreement number 688139). The latter is also supported by the EPSRC grant eNeMILP (EP/R021643/1). 3390 References Reinald Kim Amplayo, Seonjae Lim, and Seung-Won Hwang. 2018. Entity Commonsense Representation for Neural Abstractive Summarization. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 697–707. Ondˇrej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Shujian Huang, Matthias Huck, Philipp Koehn, Qun Liu, Varvara Logacheva, Christof Monz, Matteo Negri, Matt Post, Raphael Rubino, Lucia Specia, and Marco Turchi. 2017. Findings of the 2017 Conference on Machine Translation (WMT17). In Proceedings of the Second Conference on Machine Translation, pages 169–214. Ondˇrej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, Varvara Logacheva, Christof Monz, Matteo Negri, Aurelie Neveol, Mariana Neves, Martin Popel, Matt Post, Raphael Rubino, Carolina Scarton, Lucia Specia, Marco Turchi, Karin Verspoor, and Marcos Zampieri. 2016. Findings of the 2016 Conference on Machine Translation. In Proceedings of the First Conference on Machine Translation, volume 2, pages 131–198. Ondˇrej Bojar, Miloˇs Ercegovˇcevi´c, Martin Popel, and Omar Zaidan. 2011. A Grain of Salt for the WMT Manual Evaluation. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 1–11. Ziqiang Cao, Wenjie Li, Sujian Li, and Furu Wei. 2018. Retrieve, Rerank and Rewrite: Soft Template Based Neural Summarization. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 152–161. Asli Celikyilmaz, Antoine Bosselut, Xiaodong He, and Yejin Choi. 2018. Deep communicating agents for abstractive summarization. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1662–1675. Yen-Chun Chen and Mohit Bansal. 2018. Fast Abstractive Summarization with Reinforce-Selected Sentence Rewriting. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 675–686. James Clarke and Mirella Lapata. 2010. Discourse constraints for document compression. Computational Linguistics, 36(3):411–441. Arman Cohan, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Seokhwan Kim, Walter Chang, and Nazli Goharian. 2018. A Discourse-Aware Attention Model for Abstractive Summarization of Long Documents. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)., pages 615–621. Hoa Trang Dang. 2005. Overview of DUC 2005. In Proceedings of the Document Understanding Conference, volume 2005, pages 1–12. Brian S Everitt. 2006. 
The Cambridge Dictionary of Statistics. Cambridge University Press. Joseph L. Fleiss. 1971. Measuring Nominal Scale Agreement Among Many Raters. Psychological bulletin, 76(5):378–382. Marina Fomicheva and Lucia Specia. 2016. Reference bias in monolingual machine translation evaluation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 77–82. Max Grusky, Mor Naaman, and Yoav Artzi. 2018. Newsroom: A Dataset of 1.3 Million Summaries with Diverse Extractive Strategies. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 708–719. Association for Computational Linguistics. Han Guo, Ramakanth Pasunuru, and Mohit Bansal. 2018. Soft Layer-Specific Multi-Task Summarization with Entailment and Question Generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 687–697. Hardy and Andreas Vlachos. 2018. Guided Neural Language Generation for Abstractive Summarization using Abstract Meaning Representation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 768–773. Karl Moritz Hermann, Tom´aˇs Koˇcisk´y, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching Machines to Read and Comprehend. In Neural Information Processing Systems, pages 1–14. Wan-Ting Hsu, Chieh-Kai Lin, Ming-Ying Lee, Kerui Min, Jing Tang, and Min Sun. 2018. A Unified Model for Extractive and Abstractive Summarization using Inconsistency Loss. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1. Aishwarya Jadhav and Vaibhav Rajan. 2018. Extractive Summarization with SWAP-NET: Sentences and Words from Alternating Pointer Networks. In 3391 Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 142–151. Chris Kedzie, Kathleen McKeown, and Hal Daum´e III. 2018. Content Selection in Deep Learning Models of Summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1818–1828, Brussels, Belgium. Association for Computational Linguistics. Kundan Krishna and Balaji Vasan Srinivasan. 2018. Generating Topic-Oriented Summaries Using Neural Attention. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1697–1705. Wojciech Kry´sci´nski, Romain Paulus, Caiming Xiong, and Richard Socher. 2018. Improving Abstraction in Text Summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1808–1817. Chenliang Li, Weiran Xu, Si Li, and Sheng Gao. 2018a. Guiding Generation for Abstractive Text Summarization Based on Key Information Guide Network. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), volume 2, pages 55–60. Haoran Li, Junnan Zhu, Jiajun Zhang, and Chengqing Zong. 2018b. Ensure the Correctness of the Summary: Incorporate Entailment Knowledge into Abstractive Sentence Summarization. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1430–1441. 
Kexin Liao, Logan Lebanoff, and Fei Liu. 2018. Abstract Meaning Representation for Multi-Document Summarization. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1178–1190. Rensis Likert. 1932. A technique for the measurement of attitudes. Archives of psychology. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Proceedings of the workshop on Text Summarization Branches Out, Post-Conference Workshop of ACL 2004, 1, pages 25–26. Junyang Lin, Shuming Ma, and Qi Su. 2018. Global Encoding for Abstractive Summarization. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 163–169. Annie Louis and Ani Nenkova. 2013. Automatically Assessing Machine Summary Content Without a Gold Standard. Computational Linguistics, 39(2):267–300. Jordan J Louviere, Terry N Flynn, Anthony Alfred Fred, and John Marley. 2015. Best-worst scaling: Theory, methods and applications. Cambridge University Press. Afonso Mendes, Shashi Narayan, Sebasti˜ao Miranda, Zita Marinho, Andr´e F. T. Martins, and Shay B. Cohen. 2019. Jointly extracting and compressing documents with summary state representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Minneapolis, US. Shashi Narayan, Ronald Cardenas, Nikos Papasarantopoulos, Shay B Cohen, Mirella Lapata, Jiangsheng Yu, and Yi Chang. 2018a. Document Modeling with External Attention for Sentence Extraction. In Proceedings of the 56st Annual Meeting of the Association for Computational Linguistics, pages 2020– 2030, Melbourne, Australia. Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018b. Don’t Give Me the Details, Just the Summary! Topic-aware Convolutional Neural Networks for Extreme Summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1797–1807. Shashi Narayan, Shay B Cohen, and Mirella Lapata. 2018c. Ranking Sentences for Extractive Summarization with Reinforcement Learning. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pages 1747–1759, Stroudsburg, PA, USA. Association for Computational Linguistics. Ani Nenkova. 2005. Automatic Text Summarization of Newswire: Lessons Learned from the Document Understanding Conference. In Proceedings of the 20th National Conference on Artificial Intelligence Volume 3, pages 1436–1441. Ani Nenkova and Rebecca Passonneau. 2004. Evaluating Content Selection in Summarization: The Pyramid Method. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics: HLT-NAACL 2004, pages 145–152. Jekaterina Novikova, Ondˇrej Duˇsek, Amanda Cercas Curry, and Verena Rieser. 2017. Why We Need New Evaluation Metrics for {NLG}. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2241–2252. Ramakanth Pasunuru and Mohit Bansal. 2018. MultiReward Reinforced Summarization with Saliency and Entailment. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 646–653. 3392 Maxime Peyrard and Iryna Gurevych. 2018. 
Objective Function Learning to Match Human Judgements for Optimization-Based Summarization. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 654–660. D. W. Robertson. 1946. A Note on the Classical Origin of ” Circumstances ” in the Medieval Confessional. Studies in Philology, 43(1):6–14. Shinsaku Sakaue, Tsutomu Hirao, Masaaki Nishino, and Masaaki Nagata. 2018. Provable Fast Greedy Compressive Summarization with Any Monotone Submodular Function. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1737–1746. Evan Sandhaus. 2008. The New York Times Annotated Corpus. Natalie Schluter. 2017. The limits of automatic summarisation according to ROUGE. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, volume 2, pages 41–45. Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointergenerator networks. In Proceedings of the 55th Annual Meeting of the ACL, pages 1073–1083. Elaheh ShafieiBavani, Mohammad Ebrahimi, Raymond Wong, and Fang Chen. 2018. Summarization Evaluation in the Absence of Human Model Summaries Using the Compositionality of Word Embeddings. In Proceedings of the 27th International Conference on Computational Linguistics, pages 905– 914. R R Sokal and F J Rohlf. 1995. Biometry. 3rd ed. WH Freeman and Company. Kaiqiang Song, Lin Zhao, and Fei Liu. 2018. Structure-Infused Copy Mechanisms for Abstractive Summarization. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1717–1729. Jiwei Tan, Xiaojun Wan, and Jianguo Xiao. 2017. Abstractive Document Summarization with a GraphBased Attentional Neural Model. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1171–1181, Stroudsburg, PA, USA. Association for Computational Linguistics. Simone Teufel and Hans Van Halteren. 2004. Evaluating information content by factoid analysis: Human annotation and stability. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 419–426. L L Thurstone. 1994. A Law of Comparative Judgment. Psychological review, 101(2):255–270. Andreas Vlachos and Sebastian Riedel. 2014. Fact Checking: Task definition and dataset construction. In Proceedings of the ACL 2014 Workshop on Language Technologies and Computational Social Science, pages 18–22, Baltimore, MD, USA. Association for Computational Linguistics. Jordan J Louviere Woodworth and George G. 1991. Best-worst scaling: A model for the largest difference judgments. University of Alberta: Working Paper. Yinfei Yang, Forrest Sheng Bao, and Ani Nenkova. 2017. Detecting (Un)Important Content for SingleDocument News Summarization. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, volume 2, pages 707–712.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3393–3402 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3393 EditNTS: An Neural Programmer-Interpreter Model for Sentence Simplification through Explicit Editing Yue Dong1, Zichao Li2, Mehdi Rezagholizadeh2, Jackie Chi Kit Cheung1 1MILA, McGill [email protected], [email protected] 2Huawei Noah’s Ark Lab {li.zichao, mehdi.rezagholizadeh}@huawei.com Abstract We present the first sentence simplification model that learns explicit edit operations (ADD, DELETE, and KEEP) via a neural programmer-interpreter approach. Most current neural sentence simplification systems are variants of sequence-to-sequence models adopted from machine translation. These methods learn to simplify sentences as a byproduct of the fact that they are trained on complex-simple sentence pairs. By contrast, our neural programmer-interpreter is directly trained to predict explicit edit operations on targeted parts of the input sentence, resembling the way that humans might perform simplification and revision. Our model outperforms previous state-of-the-art neural sentence simplification models (without external knowledge) by large margins on three benchmark text simplification corpora in terms of SARI (+0.95 WikiLarge, +1.89 WikiSmall, +1.41 Newsela), and is judged by humans to produce overall better and simpler output sentences1. 1 Introduction Sentence simplification aims to reduce the reading complexity of a sentence while preserving its meaning. Simplification systems can benefit populations with limited literacy skills (Watanabe et al., 2009), such as children, second language speakers and individuals with language impairments including dyslexia (Rello et al., 2013), aphasia (Carroll et al., 1999) and autism (Evans et al., 2014). Inspired by the success of machine translation, many text simplification (TS) systems treat sentence simplification as a monolingual translation task, in which complex-simple sentence pairs 1Link to our code and data can be found here https: //github.com/yuedongP/EditNTS. are presented to the models as source-target pairs (Zhang and Lapata, 2017). Two major machine translation (MT) approaches are adapted into TS systems, each with its advantages: statistical machine translation (SMT)-based models (Zhu et al., 2010; Wubben et al., 2012; Narayan and Gardent, 2014; Xu et al., 2016) can easily integrate human-curated features into the model, while neural machine translation (NMT)-based models (Nisioi et al., 2017; Zhang and Lapata, 2017; Vu et al., 2018) can operate in an end-to-end fashion by extracting features automatically. Nevertheless, MTbased models must learn the simplifying operations that are embedded in the parallel complexsimple sentences implicitly. These operations are relatively infrequent, as a large part of the original complex sentence usually remains unchanged in the simplification process (Zhang et al., 2017). This leads to MT-based models that often produce outputs that are identical to the inputs (Zhao et al., 2018), which is also confirmed in our experiments. We instead propose a novel end-to-end Neural Programmer-Interpreter (Reed and de Freitas, 2016) that learns to explicitly generate edit operations in a sequential fashion, resembling the way that a human editor might perform simplifications on sentences. 
Our proposed framework consists of a programmer and an interpreter that operate alternately at each time step: the programmer predicts a simplifying edit operation (program) such as ADD, DELETE, or KEEP; the interpreter executes the edit operation while maintaining a context and an edit pointer to assist the programmer for further decisions. Table 1 shows sample runs of our model. Intuitively, our model learns to skip words that do not need to be modified by predicting KEEP, so it can focus on simplifying the parts that actually require changes. An analogy can be drawn to residual connections popular in deep neural archi3394 WikiLarge Source in 2005 , meissner became the second american woman to land the triple axel jump in national competition . Output meissner was the second american woman to land the triple axel jump . Program DEL DEL DEL KEEP ADD(was) DEL KEEP KEEP KEEP KEEP KEEP KEEP KEEP KEEP KEEP KEEP DEL DEL DEL KEEP Reference she is the second american woman and the sixth woman worldwide to do a triple axel jump . WikiSmall Source theodoros “ thodoris ” zagorakis -lrb- , born october 27 , 1971 in lyd -lrb- a village near the city of kavala -rrb- , is a retired greek footballer and was the captain of the greece national football team that won the 2004 uefa european football championship . Output zagorakis -lrb- born october 27 , 1971 is a former greek football player . Program DEL DEL DEL DEL KEEP KEEP DEL KEEP KEEP KEEP KEEP KEEP DEL DEL DEL DEL DEL DEL DEL DEL DEL DEL DEL DEL KEEP KEEP ADD(former) DEL KEEP ADD(football) ADD(player) DEL DEL ... DEL KEEP Reference theodoros zagorakis -lrb- born 27 october , 1971 -rrb- is a former football player . Newsela Source schools and parent groups try to help reduce costs for low-income students who demonstrate a desire to play sports , she said . Output schools and parent groups try to help pay for low-income students . Program KEEP KEEP KEEP KEEP KEEP KEEP KEEP ADD(pay) DEL DEL KEEP KEEP KEEP DEL DEL DEL DEL DEL DEL DEL DEL DEL DEL KEEP Reference clark said that schools do sometimes lower fees for students who do n’t have enough money . Table 1: Example outputs of EditNTS taken from the validation set of three text simplification benchmarks. Given a complex source sentence, our trained model predicts a sequence of edit tokens (EditNTS programs) that executes into a sequence of simplified tokens (EditNTS output). tectures for image recognition, which give models the flexibility to directly copy parameters from previous layers if they are not the focus of the visual signal (He et al., 2016). In addition, the edit operations generated by our model are easier to interpret than the black-box MT-based seq2seq systems: by looking at our model’s generated programs, we can trace the simplification operations used to transform complex sentences to simple ones. Moreover, our model offers control over the ratio of simplification operations. By simply changing the loss weights on edit operations, our model can prioritize different simplification operations for different sentence simplification tasks (e.g., compression or lexical replacement). The idea of learning sentence simplification through edit operations was attempted by AlvaManchego et al. (2017). They were mainly focused on creating better-aligned simplification edit labels (“silver” labels) and showed that a simple sequence labelling model (BiLSTM) fails to predict these silver simplification labels. 
We speculate that the limited success of their proposed model is due to the facts that the model relies on an external system and assumes the edit operations are independent of each other. We address these two problems by 1) using variants of Levenshtein distances to create edit labels that do not require external tools to execute; 2) using an interpreter to execute the programs and summarize the partial output sequence immediately before making the next edit decision. Our interpreter also acts as a language model to regularize the operations that would lead to ungrammatical outputs, as a programmer alone will output edit labels with little consideration of context and grammar. In addition, our model is completely end-to-end and does not require any extra modules. Our contributions are two-fold: 1) we propose to model the edit operations explicitly for sentence simplification in an end-to-end fashion, rather than relying on MT-based models to learn the simplification mappings implicitly, which often generates outputs by blindly repeating the source sentences; 2) we design an NPI-based model that simulates the editing process by a programmer and an interpreter, which outperforms the state-of-the-art neural MT-based TS models by large margins in terms of SARI and is judged by humans as simpler and overall better. 2 Related Work MT-based Sentence Simplification SMT-based models and NMT-based models have been the main approaches for sentence simplification. They rely on learning simplification rewrites implic3395 itly from complex-simple sentence pairs. For SMT-based models, Zhu et al. (2010) adopt a tree-based SMT model for sentence simplification; Woodsend and Lapata (2011) propose a quasi-synchronous grammar and use integer linear programming to score the simplification rules; Wubben et al. (2012) employ a phrase-based MT model to obtain candidates and re-rank them based on the dissimilarity to the complex sentence; Narayan and Gardent (2014) develop a hybrid model that performs sentence splitting and deletion first and then re-rank the outputs similar to Wubben et al. (2012); Xu et al. (2016) propose SBMT-SARI, a syntax-based machine translation framework that uses an external knowledge base to encourage simplification. On the other side, many NMT-based models have also been proposed for sentence simplification: Nisioi et al. (2017) employ vanilla recurrent neural networks (RNNs) on text simplification; Zhang and Lapata (2017) propose to use reinforcement learning methods on RNNs to optimize a specific-designed reward based on simplicity, fluency and relevancy; Vu et al. (2018) incorporate memory-augmented neural networks for sentence simplification; Zhao et al. (2018) integrate the transformer architecture and PPDB rules to guide the simplification learning; Sulem et al. (2018b) combine neural MT models with sentence splitting modules for sentence simplification. Edit-based Sentence Simplification The only previous work on sentence simplification by explicitly predicting simplification operations is by Alva-Manchego et al. (2017). Alva-Manchego et al. (2017) use MASSAlign (Paetzold et al., 2017) to obtain ‘silver’ labels for simplification edits and employ a BiLSTM to sequentially predict three of their silver labels—KEEP, REPLACE and DELETE. Essentially, their labelling model is a non-autoregressive classifier with three classes, where a downstream module (Paetzold and Specia, 2017) is required for applying the REPLACE operation and providing the replacement word. 
We instead propose an end-toend neural programmer-interpreter model for sentence simplification, which does not rely on external simplification rules nor alignment tools2. 2Our model can be combined with these external knowledge base and alignment tools for further performance improvements. Neural Programmer-Interpreter Models The neural programmer-interpreter (NPI) was first proposed by Reed and de Freitas (2016) as a machine learning model that learns to execute programs given their execution traces. Their experiments demonstrate success for 21 tasks including performing addition and bubble sort. It was adopted by Ling et al. (2017) to solve algebraic word problems and by B´erard et al. (2017); Vu and Haffari (2018) to perform automatic post-editing on machine translation outputs. We instead design our NPI model to take monolingual complex input sentences and learn to perform simplification operations on them. 3 Model Conventional sequence-to-sequence learning models map a sequence x = x1, . . . , x|x| to another one y = y1, . . . , y|y|, where elements of x and y are drawn from a vocabulary of size V , by modeling the conditional distribution P(yt|y1:t−1, x) directly. Our proposed model, EditNTS, tackles sentence simplification in a different paradigm by learning the simplification operations explicitly. An overview of our model is shown in Figure 1. 3.1 EditNTS Model EditNTS frames the simplification process as executing a sequence of edit operations on complex tokens monotonically. We define the edit operations as {ADD(W), KEEP, DELETE, STOP}. Similar to the sequence-to-sequence learning models, we assume a fixed-sized vocabulary of V words that can be added. Therefore, the number of prediction candidates of the programmer is V + 3 after including KEEP, DELETE, and STOP. To solve the out-of-vocabulary (OOV) problem, conventional Seq2Seq models utilize a copy mechanism (Gu et al., 2016) that selects a word from source (complex) sentence directly with a trainable pointer. In contrast, EditNTS has the ability to copy OOV words into the simplified sentences by directly learning to predict KEEP on them in complex sentences. We argue that our method has advantage over a copy mechanism in two ways: 1) our method does not need extra parameters for copying; 2) a copy mechanism may lead to the model copying blindly rather than performing simplifications. We detail other constraints on the edit opera3396 Figure 1: Our model contains two parts: the programmer and the interpreter. At time step t, the programmer predicts an edit operation zt on the complex word xkt by considering the interpreter-generated words y1:jt−1, programmer-generated edit labels z1:t−1, and a context vector ct obtained by attending over all words in the complex sentence. The interpreter executes the edit operation zt to generate the simplified token yjt and provides the interpreter context y1:jt to the programmer for the next decision. tions in Section 3.2. It turns out that the sequence of edit operations z constructed by Section 3.2 is deterministic given x and y (an example of of z can be seen in Table 2). Consequently, EditNTS can learn to simplify by modelling the conditional distribution P(z|x) with a programmer, an interpreter and an edit pointer: P(z|x) = z Y t=1 P(zt|y1:jt−1, z1:t−1, xkt, x). (1) Complex sentence x = x1, . . . x|x| [’the’, ’line’, ’between’, ’combat’, ’is’, ’getting’, ’blurry’] Simple sentence y = y1, . . . y|y| [’war’, ’is’, ’changing’] Supervised programs z = z1, . . . 
, z_{|z|} [ADD('war'), DEL, DEL, DEL, DEL, KEEP, ADD('changing'), DEL, DEL]

Table 2: Given the source sentence x and the target sentence y, our label creation algorithm (Section 3.2) generates a deterministic program sequence z for training.

At time step t, the programmer decides an edit operation z_t on the word x_{k_t}, which is assigned by the edit pointer, based on the following contexts: 1) the summary of the partially edited text y_{1:j_{t-1}}, 2) the previously generated edit operations z_{1:t-1}, and 3) the complex input sentence x. The interpreter then executes the edit operation z_t into a simplified token y_{j_t} and updates the interpreter context based on y_{1:j_t} to help the programmer at the next time step. The model is trained to maximize Equation 1, where z is the expert edit sequence created in Section 3.2. We detail the components and functions of the programmer and the interpreter hereafter.

Programmer. The programmer employs an encoder-decoder structure to generate programs, i.e., sequences of edit operations z. An encoder transforms the input sentence x = x_1, ..., x_{|x|} into a sequence of latent representations h_i^{enc}. We additionally utilize the part-of-speech (POS) tags g = g_1, ..., g_{|x|} to inject the syntactic information of sentences into the latent representations. The specific transformation process is:

$$h_i^{enc} = \mathrm{LSTM}^{enc}([e_1(x_i), e_2(g_i)]) \quad (2)$$

where e_1(·) and e_2(·) are both look-up tables. The decoder is trained to predict the next edit label z_t (Eq. 3), given the vector representation h_{k_t}^{enc} for the word x_{k_t} that currently needs to be edited (Eq. 2), the vector representation h_t^{edit} of previously generated edit labels z_{1:t-1} (Eq. 4), the source context vector c_t (Eq. 5), and the vector representation of the words previously generated by the interpreter y_{1:j_{t-1}} (Eq. 6):

$$P_{edit} = \mathrm{softmax}(V' \tanh(V h_t^{edit})) \quad (3)$$

$$h_t^{edit} = \mathrm{LSTM}^{edit}([h_{k_t}^{enc}, c_t, h_{t-1}^{edit}, h_{t-1}^{int}]) \quad (4)$$

$$c_t = \sum_{j=1}^{|x|} \alpha_{tj} h_j, \quad \alpha_{tj} = \mathrm{softmax}(h_{k_t}, h_j) \quad (5)$$

Note that there are three attentions involved in the computation of the programmer: 1) the soft attention over all complex tokens to form a context c_t; 2) k_t: the hard attention over complex input tokens for the edit pointer, which determines the index position of the current word that needs to be edited at time t. We force k_t to be the number of KEEP and DELETE operations previously predicted by the programmer up to time t; 3) j_{t-1}: the hard attention over simple tokens for training (this attention is used to speed up the training), which is the number of KEEP and ADD(W) operations in the reference gold labels up to time t − 1. During inference, the model no longer needs this attention and instead incrementally obtains y_{1:j_{t-1}} based on its predictions.

Interpreter. The interpreter contains two parts: 1) a parameter-free executor exec(z_t, x_{k_t}) that applies the predicted edit operation z_t on the word x_{k_t}, resulting in a new word y_{j_t}. The specific execution rules for the operations are as follows: execute KEEP/DELETE to keep/delete the word and move the edit pointer to the next word; execute ADD(W) to add a new word W while the edit pointer stays on the same word; and execute STOP to terminate the edit process; 2) an LSTM interpreter (Eq. 6) that summarizes the partial output sequence of words produced by the executor so far. The output of the LSTM interpreter is given to the programmer in order to generate the next edit decision:

$$h_t^{int} = \mathrm{LSTM}^{int}([h_{t-1}^{int}, y_{j_{t-1}}]) \quad (6)$$

3.2 Edit Label Construction

Unlike neural seq2seq models, our model requires expert programs for training.
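Before turning to how these expert programs are constructed, the executor rules of Section 3.1 can be made concrete with a minimal, framework-free sketch. The encoding of operations as strings and ('ADD', word) tuples is our own, and the trailing KEEP-padding mirrors the behaviour described in Section 3.2 below.

```python
def execute_program(src_tokens, program):
    """Minimal sketch of the parameter-free executor.

    program: a list of edit operations, encoded here as the strings
    "KEEP", "DEL"/"DELETE", "STOP", or the tuple ("ADD", word).
    """
    output, pointer = [], 0
    for op in program:
        if op == "STOP":
            break
        if isinstance(op, tuple) and op[0] == "ADD":
            output.append(op[1])                  # emit W; pointer stays put
        elif op == "KEEP":
            output.append(src_tokens[pointer])    # copy the source word
            pointer += 1
        elif op in ("DEL", "DELETE"):
            pointer += 1                          # skip the source word
    # pad with KEEP if STOP (or the end of the program) comes too early
    output.extend(src_tokens[pointer:])
    return output
```

On the Table 2 example, executing [ADD('war'), DEL, DEL, DEL, DEL, KEEP, ADD('changing'), DEL, DEL] on ['the', 'line', 'between', 'combat', 'is', 'getting', 'blurry'] returns ['war', 'is', 'changing'].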
We construct these expert edit sequences from complex sentences to simple ones by computing the shortest edit paths using a dynamic programming algorithm similar to computing Levenshtein distances without substitutions. When multiple paths with the same edit distance exist, we further prioritizes the path that ADD before DELETE. By doing so, we can generate a unique edit path from a complex sentence to a simple one, reducing the noise and variance that the model would face 3. Table 2 demonstrates an example of the created edit label path and Table 3 shows the counts of the created edit labels 3We tried other way of labelling, such as 1) preferring DELETE to ADD; 2) deciding randomly when there is a tie; 3) including REPLACE as an operation. However, models trained with these labelling methods do not give good results from our empirical studies. on the training sets of the three text simplification corpora. KEEP DELETE ADD STOP WikiLarge 2,781,648 3,847,848 2,082,184 246,768 WikiSmall 1,356,170 780,482 399,826 88,028 Newsela 1,042,640 1,401,331 439,110 94,208 Table 3: Counts of the edit labels constructed by our label edits algorithm on three dataset (identical complexsimple sentence pairs are removed). As can be seen from Table 3, our edit labels are very imbalanced, especially on DELETE. We resolve this by two approaches during training: 1) we associate the inverse of edit label frequencies as the weights to calculate the loss; 2) the model only executes DELETE when there is an explicit DELETE prediction. Thus, if the system outputs STOP before finish editing the whole complex sequence, our system will automatically pad KEEP until the end of the sentence, ensuring the system outputs remain conservative with respect to the complex sequences. 4 Experiments 4.1 Dataset Three benchmark text simplification datasets are used in our experiments. WikiSmall contains automatically aligned complex-simple sentence pairs from standard to simple English Wikipedia (Zhu et al., 2010). We use the standard splits of 88,837/205/100 provided by Zhang and Lapata (2017) as train/dev/test sets. WikiLarge (Zhang and Lapata, 2017) is the largest TS corpus with 296,402/2000/359 complex-simple sentence pairs for training/validating/testing, constructed by merging previously created simplification corpora (Zhu et al., 2010; Woodsend and Lapata, 2011; Kauchak, 2013). In addition to the automatically aligned references, Xu et al. (2016) created eight more human-written simplified references for each complex sentence in the development/test set of WikiLarge. The third dataset is Newsela (Xu et al., 2015), which consists of 1130 news articles. Each article is rewritten by professional editors four times for children at different grade levels (0-4 from complex to simple). We use the standard splits provided by Zhang and Lapata (2017), which contains 94,208/1129/1076 sentence pairs for train/dev/test. Table 4 provides other statistics on these three benchmark training 3398 sets. Vocabulary size Sentence length comp simp comp simp WikiLarge 201,841 168,962 25.17 18.51 WikiSmall 113,368 93,835 24.26 20.33 Newsela 41,066 30,193 25.94 15.89 Table 4: Statistics on the vocabulary sizes and the average sentence lengths of the complex and simplified sentences in the three text simplification training sets. 
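The label construction of Section 3.2 can be sketched as a small dynamic program. The code below is our own reconstruction under the stated tie-breaking (ADD preferred over DELETE, KEEP taken whenever the two tokens match and keeping them still lies on a shortest path), not the released implementation.

```python
def make_edit_labels(complex_toks, simple_toks):
    """Shortest KEEP/ADD/DELETE edit path (no substitutions), ADD before DELETE."""
    m, n = len(complex_toks), len(simple_toks)
    # cost[i][j]: minimum number of ADD/DELETE edits to turn
    # complex_toks[i:] into simple_toks[j:] (KEEP is free).
    cost = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m, -1, -1):
        for j in range(n, -1, -1):
            if i == m and j == n:
                continue
            best = float("inf")
            if i < m and j < n and complex_toks[i] == simple_toks[j]:
                best = cost[i + 1][j + 1]                  # KEEP
            if j < n:
                best = min(best, 1 + cost[i][j + 1])       # ADD(simple_toks[j])
            if i < m:
                best = min(best, 1 + cost[i + 1][j])       # DELETE
            cost[i][j] = best

    labels, i, j = [], 0, 0
    while i < m or j < n:
        if (i < m and j < n and complex_toks[i] == simple_toks[j]
                and cost[i][j] == cost[i + 1][j + 1]):
            labels.append("KEEP")
            i, j = i + 1, j + 1
        elif j < n and cost[i][j] == 1 + cost[i][j + 1]:   # prefer ADD on ties
            labels.append(("ADD", simple_toks[j]))
            j += 1
        else:
            labels.append("DEL")
            i += 1
    return labels + ["STOP"]
```

On the Table 2 example this produces [ADD('war'), DEL, DEL, DEL, DEL, KEEP, ADD('changing'), DEL, DEL, STOP], matching the supervised program shown there apart from the terminating STOP.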
4.2 Baselines We compare against three state-of-the-art SMTbased TS systems: PBMT-R (Wubben et al., 2012) where the phrase-based MT system’s outputs are re-ranked; 2) Hybrid (Narayan and Gardent, 2014) where syntactic transformation such as sentence splits and deletions are performed before re-rank; 3) SBMT-SARI (Xu et al., 2016), a syntax-based MT framework with external simplification rules. We also compare against four stateof-the-art NMT-based TS systems: vanilla RNNbased model NTS (Nisioi et al., 2017), memoryaugmented neural networks NSELSTM (Vu et al., 2018), deep reinforcement learning-based neural network DRESS and DRESS-LS (Zhang and Lapata, 2017), and DMASS+DCSS (Zhao et al., 2018) that integrates the transformer model with external simplification rules. In addition, we compare our NPI-based EditNTS with the BiLSTM sequence labelling model (Alva-Manchego et al., 2017) that are trained on our edit labels4, we call it Seq-Label model. 4.3 Evaluation We report two widely used sentence simplification metrics in the literature: SARI (Xu et al., 2016) and FKGL (Kincaid et al., 1975). FKGL (Kincaid et al., 1975) measures the readability of the system output (lower FKGL implies simpler output) and SARI (Xu et al., 2016) evaluates the system output by comparing it against the source and reference sentences. Earlier work also used BLEU as a metric, but recent work has found that it does not reflect simplification (Xu et al., 2016) and is in fact negatively correlated with simplicity (Sulem et al., 2018a). Systems with high BLEU scores are thus 4We made a good faith reimplementation of their model and trained it with our created edit labels. We cannot directly compare with their results because their model is not available and their results are not obtained from standard splits. biased towards copying the complex sentence as a whole, while SARI avoids this by computing the arithmetic mean of the N-gram (N ∈{1, 2, 3, 4}) F1-scores of three rewrite operations: add, delete, and keep. We also report the F1-scores of these three operations. In addition, we report the percentage of unchanged sentences that are directly copied from the source sentences. We treat SARI as the most important measurement in our study, as Xu et al. (2016) demonstrated that SARI has the highest correlation with human judgments in sentence simplification tasks. In addition to automatic evaluations, we also report human evaluations5 of our system outputs compared to the best MT-based systems, external knowledge-based systems, and Seq-Label by three human judges6 with a five-point Likert scale. The volunteers are asked to rate simplifications on three dimensions: 1) fluency (is the output grammatical?), 2) adequacy (how much meaning from the original sentence is preserved?), and 3) simplicity (is the output simper than the original sentence?). 4.4 Training Details We used the same hyperparameters across the three datasets. We initialized the word and edit operation embeddings with 100-dimensional GloVe vectors (Pennington et al., 2014) and the part-ofspeech tag 7 embeddings with 30 dimensions. The number of hidden units was set to 200 for the encoder, the edit LSTM, and the LSTM interpreter. During training, we regularized the encoder with a dropout rate of 0.3 (Srivastava et al., 2014). For optimization, we used Adam (Kingma and Ba, 2014) with a learning rate 0.001 and weight decay of 10−6. The gradient was clipped to 1 (Pascanu et al., 2013). 
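The optimization settings above, together with the inverse-frequency label weights from Section 3.2 and the batch size of 64 reported below, can be put together in a short PyTorch sketch. This is a hedged illustration, not the released training code: the tiny LSTM and linear head stand in for the EditNTS architecture, the class weights are shown only for the four coarse label types of Table 3 (the real output space is the V + 3 edit labels), and the random batch is a placeholder.

```python
import torch
from torch import nn

# Inverse-frequency loss weights, illustrated with the WikiLarge counts
# from Table 3 for KEEP, DELETE, ADD and STOP.
counts = torch.tensor([2_781_648.0, 3_847_848.0, 2_082_184.0, 246_768.0])
criterion = nn.CrossEntropyLoss(weight=1.0 / counts)

# Stand-in encoder: 100-dim word + 30-dim POS embeddings, 200 hidden units.
encoder = nn.LSTM(input_size=130, hidden_size=200, batch_first=True)
head = nn.Linear(200, 4)             # toy classifier over the 4 coarse labels
dropout = nn.Dropout(p=0.3)          # we assume dropout is applied to encoder inputs
params = list(encoder.parameters()) + list(head.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3, weight_decay=1e-6)

# One illustrative update on random data (batch of 64, sentence length 25).
x = torch.randn(64, 25, 130)
targets = torch.randint(0, 4, (64, 25))
hidden, _ = encoder(dropout(x))
logits = head(hidden)                                  # (64, 25, 4)
loss = criterion(logits.reshape(-1, 4), targets.reshape(-1))
loss.backward()
torch.nn.utils.clip_grad_norm_(params, max_norm=1.0)   # gradient clipped to 1
optimizer.step()
```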
We used a vocabulary size of 30K and the remaining words were replaced with UNK. In our main experiment, we used the inverse 5The outputs of PBMT-R, Hybrid, SBMT-SARI and DRESS are publicly available and we are grateful to Sanqiang Zhao for providing their system’s outputs. 6Three volunteers (one native English Speaker and two non-native fluent English speakers) are participated in our human evaluation, as one of the goal of our system is to make the text easier to understand for non-native English speakers. The volunteers are given complex setences and different system outputs in random order, and are asked to rate from one to five (the higher the better) in terms of simplicity, fluency, and adequacy. 7We used the NLTK toolkit with the default Penn Treebank Tag set to obtain the part-of-speech tags; there are 45 possible POS-tags (36 standard tags and 7 special symbols) in total. 3399 of the edit label frequencies as the loss weights, aiming to balance the classes. Batch size across all datasets was 64. 5 Results WikiLarge SARI Edit F1 of SARI FKGL % unc. add del keep Reference 8.88 15.88 MT-based TS Models PBMT-R 38.56 5.73 36.93 73.02 8.33 10.58 Hybrid 31.40 1.84 45.48 46.87 4.57 36.21 NTS 35.66 2.99 28.96 75.02 8.42 43.45 NSELSTM 36.88 - DRESS 37.08 2.94 43.15 65.15 6.59 22.28 DRESS-LS 37.27 2.81 42.22 66.77 6.62 27.02 Edit Labelling-based TS Models Seq-Label 37.08 2.94 43.20 65.10 5.35 19.22 EditNTS 38.22 3.36 39.15 72.13 7.30 10.86 Models that use external knowledge base SBMT-SARI 39.96 5.96 41.42 72.52 7.29 9.47 DMASS+DCSS 40.45 5.72 42.23 73.41 7.79 6.69 (a) WikiLarge WikiSmall SARI Edit F1 of SARI FKGL % unc. add del keep Reference 8.86 3.00 MT-based TS Models PBMT-R 15.97 6.75 28.50 12.67 11.42 14.00 Hybrid 30.46 16.53 59.60 15.25 9.20 4.00 NTS 13.61 2.08 26.21 12.53 11.35 36.00 NSELSTM 29.75 DRESS 27.48 2.86 65.94 13.64 7.48 11.00 DRESS-LS 27.24 3.75 64.27 13.71 7.55 13.00 Edit Labelling-based TS Models Seq-Label 30.50 2.72 76.31 12.46 9.38 9.00 EditNTS 32.35 2.24 81.30 13.54 5.47 0.00 (b) WikiSmall Newsela SARI Edit F1 of SARI FKGL %unc. add delete keep Reference 3.20 0.00 MT-based TS Models PBMT-R 15.77 3.07 38.34 5.90 7.59 5.85 Hybrid 30.00 1.16 83.23 5.62 4.01 3.34 NTS 24.12 2.73 62.66 6.98 5.11 16.25 NSELSTM 29.58 DRESS 27.37 3.08 71.61 7.43 4.11 11.98 DRESS-LS 26.63 3.21 69.28 7.40 4.20 15.51 Edit Labelling-based TS Models Seq-Label 29.53 1.40 80.25 6.94 5.45 15.97 EditNTS 31.41 1.84 85.36 7.04 3.40 4.27 (c) Newsela Table 5: Automatic Evaluation Results on three benchmarks. We report corpus level FKGL, SARI and edit F1 scores (add,keep,delete). In addition, we report the percentage of unchanged sentences (%unc.) in the system outputs when compared to the source sentences. Table 5 summarizes the results of our automatic evaluations. In terms of readability, our system obtains lower (= better) FKGL compared to other MT-based systems, which indicates our system’s output is easier to understand. In terms of the percentage of unchanged sentences, one can see that MT-based models have much higher rates of unchanged sentences than the reference. Thus, the models learned a safe but undesirable strategy of copying the sources sentences directly. By contrast, our model learns to edit the sentences and has a lower rate of keeping the source sentences unchanged. 
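The "% unc." column discussed above is straightforward to reproduce. A sketch, where treating "unchanged" as an exact match after whitespace normalization is our own interpretation:

```python
def pct_unchanged(sources, outputs):
    """Percentage of system outputs identical to their source sentences."""
    pairs = list(zip(sources, outputs))
    same = sum(" ".join(s.split()) == " ".join(o.split()) for s, o in pairs)
    return 100.0 * same / len(pairs)
```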
In term of SARI, the edit labelling-based models Seq-Label and EditNTS achieve better or comparable results with respect to state-of-the-art MTbased models, demonstrating the promise of learning edit labels for text simplification. Compared to Seq-Label, our model achieves a large improvement of (+1.14,+1.85,+1.88 SARI) on WikiLarge, Newsela, and WikiSmall. We believe this improvement is mainly from the interpreter in EditNTS, as it provides the proper context to the programmer for making edit decisions (more ablation studies in section 5.1). On Newsela and WikiSmall, our model significantly outperforms stateof-the-art TS models by a large margin (+1.89, +1.41 SARI), showing that EditNTS learns simplification better on smaller datasets with respect to MT-based simplification models. On WikiLarge, our model outperforms the best NMT-based system DRESS-LS by a large margin of +0.95 SARI and achieves comparable performance to the best SMT-based model PBMT-R. While the overall SARI are similar between EditNTS and PBMT-R, the two models prefer different strategies: EditNTS performs extensive DELETE while PBMT-R is in favour of performing lexical substitution and simplification. On WikiLarge, two models SBMT-SARI and DMASS+DCSS reported higher SARI scores as they employ external knowledge base PPDB for word replacement. These external rules can provide reliable guidance about which words to modify, resulting in higher add/keep F1 scores (Table 5-a). On the contrary, our model is inclined to generate shorter sentences, which leads to high F1 scores on delete operations 8. Nevertheless, our model is preferred by human judges than SBMT8As the full outputs of NSELSTM are not available, we cannot compute the edit F1 scores and FKGL for this system. 3400 WikiLarge Newsela WikiSmall F A S avg. F A S avg. F A S avg. Reference 4.39 4.11 2.62 3.71 4.40 2.74 3.79 3.64 4.48 4.03 2.99 3.83 PBMT-R 4.38 4.05 2.28 3.57 3.76 3.44 2.28 3.16 4.32 4.28 1.53 3.38 Hybrid 3.41 3.01 3.31 3.24 3.62 2.88 2.97 3.16 3.76 3.87 2.12 3.25 SBMT-SARI 4.25 3.96 2.61 3.61 DRESS 4.63 4.01 3.07 3.90 4.16 3.08 3.00 3.41 4.61 3.64 3.62 3.96 DMASS+DCSS 4.39 3.97 2.80 3.72 seq-label 3.91 4.11 2.97 3.66 3.45 3.22 2.09 2.92 3.83 3.9 2.01 3.25 EditNTS 4.76 4.45 3.18 4.13 4.34 3.13 3.16 3.54 4.31 3.34 4.26 3.97 Table 6: Mean ratings for Fluency (F), Adequacy (A), Simplicity (S), and the Average score (avg.) by human judges on the three benchmark test sets. 50 sentences are rated on WikiLarge, 30 sentences are rated on WikiSmall and Newsela. Aside from comparing system outputs, we also include human ratings for the gold standard reference as an upper bound. SARI and DMASS+DCSS in terms of all the measurements (Table 6), indicating the effectiveness of our model on correctly performing deleting operations while maintaining fluent and adequate outputs. Moreover, our model can be easily integrated with these external PPTB simplification rules for word replacement by adding a new edit label “replacement” for further improvements. The results of our human evaluations are presented in Table 6. As can be seen, our model outperforms MT-based models on Fluency, Simplicity, and Average overall ratings. Despite our system EditNTS is inclined to perform more delete operations, human judges rate our system as adequate. In addition, our model performs significantly better than Seq-Label in terms of Fluency, indicating the importance of adding an interpreter to 1) summarize the partial edited outputs and 2) regularize the programmer as a language model. 
Interestingly, similar to the human evaluation results in Zhang and Lapata (2017), judges often prefer system outputs than the gold references. Controllable Generation: In addition to the state-of-the-art performance, EditNTS has the flexibility to prioritize different edit operations. Note that NMT-based systems do not have this feature at all, as the sentence length of their systems’ output is not controllable and are purely depends on the training data. Table 7 shows that by simply changing the loss weights on different edit labels, we can control the length of system’s outputs, how much words it copies from the original sentences and how much novel words the system adds. 5.1 Ablation Studies In the ablation studies, we aim to investigate the effectiveness of each component in our model. We add:keep:delete ratio Avg. len % copied % novel 10:1:1 (add rewarded) 25.21 53.52 56.28 1:10:1 (keep rewarded) 21.52 84.22 12.81 1:1:10 (delete rewarded) 15.83 57.36 16.72 Table 7: Results on Newsela by controlling the edit label ratios. We increase the loss weight on ADD,KEEP,DELETE ten times respectively. The three rows show the systems’ output statistics on the average output sentence length (Avg. len), the average percentage of tokens that are copied from the input (% copied), and the average percentage of novel tokens that are added with respect to the input sentence (% novel). compare the full model with its variants where POS tags removed, interpreter removed, context removed. As shown in Table 8, the interpreter is a critical part to guarantee the performance of the sequence-labelling model, while POS tags and attention provide further performance gains. Newsela SARI Edit F1 of SARI add delete keep EditNTS 31.41 1.84 85.36 7.04 −POS tags 31.27 1.46 85.34 7.00 −attn-context 30.95 1.54 84.26 7.05 −Interpreter 30.13 1.70 81.70 7.01 Table 8: Performance on Newsela after removing different components in EditNTS. 6 Conclusion We propose an NPI-based model for sentence simplification, where edit-labels are predicted by the programmer and then executed into simplified tokens by the interpreter. Our model outperforms previous state-of-the-art machine translation-based TS models in most of the au3401 tomatic evaluation metrics and human ratings, demonstrating the effectiveness of learning edit operations explicitly for sentence simplification. Compared to the black-box MT-based systems, our model is more interpretable by providing generated edit operation traces, and more controllable with the ability to prioritize different simplification operations. Acknowledgments The research was supported in part by Huawei Noah’s Ark Lab (Montreal Research Centre), Natural Sciences and Engineering Research Council of Canada (NSERC) and Canadian Institute For Advanced Research (CIFAR). We thank Sanqiang Zhao and Xin Jiang for sharing their pearls of wisdom, Xingxing Zhang for providing the datasets and three anonymous reviewers for giving their insights and comments. References Fernando Alva-Manchego, Joachim Bingel, Gustavo Paetzold, Carolina Scarton, and Lucia Specia. 2017. Learning how to simplify from explicit labeling of complex-simplified text pairs. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), volume 1, pages 295–305. Alexandre B´erard, Laurent Besacier, and Olivier Pietquin. 2017. Lig-cristal submission for the wmt 2017 automatic post-editing task. In Proceedings of the Second Conference on Machine Translation, pages 623–629. 
John Carroll, Guido Minnen, Darren Pearce, Yvonne Canning, Siobhan Devlin, and John Tait. 1999. Simplifying text for language-impaired readers. In Ninth Conference of the European Chapter of the Association for Computational Linguistics. Richard Evans, Constantin Orasan, and Iustin Dornescu. 2014. An evaluation of syntactic simplification rules for people with autism. In Proceedings of the 3rd Workshop on Predicting and Improving Text Readability for Target Reader Populations (PITR), pages 131–140. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor OK Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. arXiv preprint arXiv:1603.06393. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770– 778. David Kauchak. 2013. Improving text simplification language modeling using unsimplified text data. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (volume 1: Long papers), volume 1, pages 1537–1546. J Peter Kincaid, Robert P Fishburne Jr, Richard L Rogers, and Brad S Chissom. 1975. Derivation of new readability formulas (automated readability index, fog count and flesch reading ease formula) for navy enlisted personnel. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. 2017. Program induction by rationale generation: Learning to solve and explain algebraic word problems. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 158–167. Shashi Narayan and Claire Gardent. 2014. Hybrid simplification using deep semantics and machine translation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 435–445. Sergiu Nisioi, Sanja ˇStajner, Simone Paolo Ponzetto, and Liviu P Dinu. 2017. Exploring neural text simplification models. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 85–91. Gustavo Paetzold, Fernando Alva-Manchego, and Lucia Specia. 2017. Massalign: Alignment and annotation of comparable documents. Proceedings of the IJCNLP 2017, System Demonstrations, pages 1–4. Gustavo Paetzold and Lucia Specia. 2017. Lexical simplification with neural ranking. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, volume 2, pages 34–40. Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. On the difficulty of training recurrent neural networks. In International conference on machine learning, pages 1310–1318. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543. Scott Reed and Nando de Freitas. 2016. Neural programmer-interpreters. In Proceedings of International Conference on Learning Representations (ICLR). Luz Rello, Clara Bayarri, Azuki G´orriz, Ricardo Baeza-Yates, Saurabh Gupta, Gaurang Kanvinde, Horacio Saggion, Stefan Bott, Roberto Carlini, and 3402 Vasile Topac. 2013. Dyswebxia 2.0!: more accessible text for people with dyslexia. 
In Proceedings of the 10th International Cross-Disciplinary Conference on Web Accessibility, page 25. Citeseer. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958. Elior Sulem, Omri Abend, and Ari Rappoport. 2018a. Bleu is not suitable for the evaluation of text simplification. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 738–744. Elior Sulem, Omri Abend, and Ari Rappoport. 2018b. Simple and effective text simplification using semantic and neural methods. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 162–173. Thuy-Trang Vu and Gholamreza Haffari. 2018. Automatic post-editing of machine translation: A neural programmer-interpreter approach. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3048–3053. Tu Vu, Baotian Hu, Tsendsuren Munkhdalai, and Hong Yu. 2018. Sentence simplification with memoryaugmented neural networks. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), volume 2, pages 79–85. Willian Massami Watanabe, Arnaldo Candido Junior, Vin´ıcius Rodriguez Uzˆeda, Renata Pontin de Mattos Fortes, Thiago Alexandre Salgueiro Pardo, and Sandra Maria Alu´ısio. 2009. Facilita: reading assistance for low-literacy readers. In Proceedings of the 27th ACM International Conference on Design of Communication, pages 29–36. ACM. Kristian Woodsend and Mirella Lapata. 2011. Learning to simplify sentences with quasi-synchronous grammar and integer programming. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 409–420. Association for Computational Linguistics. Sander Wubben, Antal Van Den Bosch, and Emiel Krahmer. 2012. Sentence simplification by monolingual machine translation. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers-Volume 1, pages 1015–1024. Association for Computational Linguistics. Wei Xu, Chris Callison-Burch, and Courtney Napoles. 2015. Problems in current text simplification research: New data can help. Transactions of the Association for Computational Linguistics, 3:283–297. Wei Xu, Courtney Napoles, Ellie Pavlick, Quanze Chen, and Chris Callison-Burch. 2016. Optimizing statistical machine translation for text simplification. Transactions of the Association for Computational Linguistics, 4:401–415. Xingxing Zhang and Mirella Lapata. 2017. Sentence simplification with deep reinforcement learning. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 584–594. Yaoyuan Zhang, Zhenxu Ye, Yansong Feng, Dongyan Zhao, and Rui Yan. 2017. A constrained sequenceto-sequence neural model for sentence simplification. arXiv preprint arXiv:1704.02312. Sanqiang Zhao, Rui Meng, Daqing He, Andi Saptono, and Bambang Parmanto. 2018. Integrating transformer and paraphrase rules for sentence simplification. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3164–3173. Zhemin Zhu, Delphine Bernhard, and Iryna Gurevych. 2010. A monolingual tree-based translation model for sentence simplification. 
In Proceedings of the 23rd International Conference on Computational Linguistics, pages 1353–1361. Association for Computational Linguistics.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3403–3414 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3403 Decomposable Neural Paraphrase Generation Zichao Li, Xin Jiang, Lifeng Shang, Qun Liu Huawei Noah’s Ark Lab {li.zichao, jiang.xin, shang.lifeng, qun.liu}@huawei.com Abstract Paraphrasing exists at different granularity levels, such as lexical level, phrasal level and sentential level. This paper presents Decomposable Neural Paraphrase Generator (DNPG), a Transformer-based model that can learn and generate paraphrases of a sentence at different levels of granularity in a disentangled way. Specifically, the model is composed of multiple encoders and decoders with different structures, each of which corresponds to a specific granularity. The empirical study shows that the decomposition mechanism of DNPG makes paraphrase generation more interpretable and controllable. Based on DNPG, we further develop an unsupervised domain adaptation method for paraphrase generation. Experimental results show that the proposed model achieves competitive in-domain performance compared to the state-of-the-art neural models, and significantly better performance when adapting to a new domain. 1 Introduction Paraphrases are texts that convey the same meaning using different wording. Paraphrase generation is an important technique in natural language processing (NLP), which can be applied in various downstream tasks such as information retrieval, semantic parsing, and dialogue systems. Neural sequence-to-sequence (Seq2Seq) models have demonstrated the superior performances on generating paraphrases given a sentence (Prakash et al., 2016; Cao et al., 2017; Li et al., 2018; Ma et al., 2018). All of the existing works learn to paraphrase by mapping a sequence to another, with each word processed and generated in a uniform way. This work is motivated by a commonly observed phenomenon that the paraphrase of a sentence is usually composed of multiple paraphrasing patterns at different levels of granularity, e.g., from the lexical to phrasal to sentential levels. For instance, the following pair of paraphrases contains both the phrase-level and the sentence-level patterns. what is the reason of $x →what makes $x happen world war II →the second world war Specifically, the blue part is the sentence-level pattern, which can be expressed as a pair of sentence templates, where $x can be any fragment of text. The green part is the phrase-level pattern, which is a pair of phrases. Table 1 shows more examples of paraphrase pairs sampled from WikiAnswers corpus 1 and Quora question pairs 2. We can see that the sentence-level paraphrases are more general and abstractive, while the word/phrase-level paraphrases are relatively diverse and domain-specific. Moreover, we notice that in many cases, paraphrasing can be decoupled, i.e., the word-level and phrase-level patterns are mostly independent of the sentence-level paraphrase patterns. To address this phenomenon in paraphrase generation, we propose Decomposable Neural Paraphrase Generator (DNPG). Specifically, the DNPG consists of a separator, multiple encoders and decoders, and an aggregator. The separator first partitions an input sentence into segments belonging to different granularities, which are then processed by multiple granularity-specific encoders and decoders in parallel. Finally the aggregator combines the outputs from all the decoders to produce a paraphrase of the input. 
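The decoupling that this architecture is designed to capture can be illustrated with a deliberately naive sketch that composes one sentence-level template pair with a phrase-level substitution, mirroring the example above. This is toy code only; DNPG learns such patterns from data rather than applying hand-written rules.

```python
import re

# Toy pattern inventory taken from the running example above.
sentence_pattern = (r"what is the reason of (.+)", r"what makes \1 happen")
phrase_table = {"world war ii": "the second world war"}

def toy_paraphrase(sentence: str) -> str:
    # Rewrite the sentence-level template first, ...
    out = re.sub(sentence_pattern[0], sentence_pattern[1], sentence.lower())
    # ... then rewrite phrase-level fragments independently of the template.
    for src, tgt in phrase_table.items():
        out = out.replace(src, tgt)
    return out

print(toy_paraphrase("What is the reason of World War II"))
# -> "what makes the second world war happen"
```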
We explore three advantages of the DNPG: 1http://knowitall.cs.washington.edu/paralex/ 2https://www.kaggle.com/c/quora-question-pairs 3404 Table 1: Examples of paraphrase pairs in WikiAnswers and Quora datasets. We manually labeled the sentences with the blue italic words being sentence-level and the green underlined words being phrase-level. What is the population of New York? How many people is there in NYC? Who wrote the Winnie the Pooh books? Who is the author of winnie the pooh? What is the best phone to buy below 15k? Which are best mobile phones to buy under 15000? How can I be a good geologist? What should I do to be a great geologist? How do I reword a sentence to avoid plagiarism? How can I paraphrase my essay and avoid plagiarism? Interpretable In contrast to the existing Seq2Seq models, we show that DNPG can automatically learn the paraphrasing transformation separately at lexical/phrasal and sentential levels. Besides generating a paraphrase given a sentence, it can meanwhile interpret its prediction by extracting the associated paraphrase patterns at different levels, similar to the examples shown above. Controllable The model allows the user to control the generation process precisely. By employing DNPG, the user can specify the part of the sentence being fixed while the rest being rephrased at a particular level. Domain-adaptable In this work, we assume that high-level paraphrase patterns are more likely to be shared across different domains. With all the levels coupled together, it is difficult for conventional Seq2Seq models to well adapt to a new domain. The DNPG model, however, can conduct paraphrase at abstractive (sentential) level individually, and thus be more capable of performing well in domain adaptation. Concretely, we develop a method for the DNPG to adapt to a new domain with only non-parallel data. We verify the DNPG model on two large-scale paraphrasing datasets and show that it can generate paraphrases in a more controllable and interpretable way while preserving the quality. Furthermore, experiments on domain adaptation show that DNPG performs significantly better than the state-of-the-art methods. The technical contribution of this work is of three-fold: 1. We propose a novel Seq2Seq model that decomposes the paraphrase generation into learning paraphrase patterns at different granularity levels separately. 2. We demonstrate that the model achieves more interpretable and controllable generation of paraphrases. 3. Based on the proposed model, we develop a simple yet effective method for unsupervised domain adaptation. 2 Decomposable Neural Paraphrase Generator This section explains the framework of the proposed DNPG model. We first give an overview of the model design and then elaborate each component in detail. 2.1 Model Overview Figure 1: Model Architecture. As illustrated in Figure 1, DNPG consists of four components: a separator, multi-granularity encoders and decoders (denoted as m-encoder and m-decoder respectively), and an aggregator. The m-encoder and m-decoder are composed of multiple independent encoders and decoders, with each corresponding to a specific level of granularity. Given an input sentence of words X = [x1, . . . , xL] with length L, the separator first determines the granularity label for each word, denoted as Z = [z1, . . . , zL]. After that, the input sentence X together with its associated labels Z are fed into m-encoder in parallel and summarized as Uz = m-encoderz(X, Z), (1) where the subscript z denotes the granularity level. 
At the decoding stage, each decoder can individually predict the probability of generating the next 3405 word yt as Pz(yt|y1:t−1, X) = m-decoderz(Uz, y1:t−1). (2) Finally, the aggregator combines the outputs of all the decoders and make the final prediction of the next word: P(yt|y1:t−1, X) = X zt Pzt(yt|y1:t−1, X)P(zt|y1:t−1, X). (3) Here P(zt|y1:t−1, X) is computed as the probability of being at the granularity level zt, and Pzt(yt|y1:t−1, X) is given by the decoder m-decoderzt at level zt. The choice of the encoder and decoder modules of DNPG can be quite flexible, for instance longshort term memory networks (LSTM) Hochreiter and Schmidhuber (1997) or convolutional neural network (CNN) (LeCun et al., 1998). In this work, the m-encoder and m-decoder are built based on the Transformer model (Vaswani et al., 2017). Besides, we employ LSTM networks to build the separator and aggregator modules. Without loss of generality, we consider two levels of granularity in our experiments, that is, z = 0 for the lexical/phrasal level and z = 1 for the sentential level. 2.2 Separator For each word xl in the sentence, we assign a latent variable zl indicating its potential granularity level for paraphrasing. This can be simply formulated as a sequence labeling process. In this work we employ the stacked LSTMs to compute the distribution of the latent variables recursively: hl = BiLSTM([xl; hl−1, hl+1]) gl = LSTM([hl, zl−1; gl−1]) P(zl|X) = GS(Wggl, τ) (4) where hl and gl represent the hidden states in the LSTMs and GS(·, τ) denotes the Gumbel-Softmax function (Jang et al., 2016). The reason of using Gumbel-Softmax is to make the model differentiable, and meanwhile produce the approximately discrete level for each token. τ is the temperature controlling the closeness of z towards 0 or 1. 2.3 Multi-granularity encoder and decoder We employ the Transformer architecture for the encoders and decoders in DNPG. Specifically, the phrase-level Transformer is composed of m-encoder0 and m-decoder0, which is responsible for capturing the local paraphrasing patterns. The sentence-level Transformer is composed of m-encoder1 and m-decoder1, which aims to learn the high-level paraphrasing transformations. Based on the Transformer design in Vaswani et al. (2017), each encoder or decoder is composed of positional encoding, stacked multihead attention, layer normalization, and feedforward neural networks. The multi-head attention in the encoders contains self-attention while the one in the decoders contains both self-attention and context-attention. We refer readers to the original paper for details of each component. In order to better decouple paraphrases at different granularity levels, we introduce three inductive biases to the modules by varying the model capacity and configurations in the positional encoding and multi-head attention modules. We detail them hereafter. Positional Encoding: We adopt the same variant of the positional encoding method in Vaswani et al. (2017), that is, the sinusoidal function: PE(pos, 2d) = sin(p/100002d/D) PE(pos, 2d + 1) = cos(p/100002d/D) (5) For phrase-level Transformer, we use the original position, i.e., p := pos. For the sentence-level Transformer, in order to make the positional encoding insensitive to the lengths of the phraselevel fragment, we set: p = pos X i=1 P(zi = 1) (6) Multi-head Attention: We modify the selfattention mechanism in the encoders and decoders by setting a different receptive field for each granularity, as illustrated in Figure 2. 
Specifically, for the phrase-level model, we restrict each position in the encoder and decoder to attend only the adjacent n words (n = 3), so as to mainly capture the local composition. As for the sentence-level model, we allow the self-attention to cover the entire sentence, but only those words labeled as sentence-level (i.e., zl = 1) are visible. In this manner, the model will focus on learning the sentence structures while ignoring the low-level details. To do so, we re-normalize the original atten3406 tion weights αt,l as α ′ t,l = P(zl = 1)αt,l PL l=1 P(zl = 1)αt,l . (7) We also restrict the decoder at z level only access the position l : zl = z at encoder in the same way. Figure 2: Attention: phrase-level self-attention (upper) and sentence-level self-attention (lower). Model Capacity: We choose a larger capacity for the phrase-level Transformer over the sentence-level Transformer. The intuition behind is that lexical/phrasal paraphrases generally contain more long-tail expressions than the sentential ones. In addition, the phrase-level Transformer is equipped with the copying mechanism (Gu et al., 2016). Thus, the probability of generating the target word yt by the m-decoder0 is: Pz=0(yt|y1:t−1, X) =(1 −ρt)Pgen(yt|y1:t−1, X) + ρtPcopy(yt|y1:t−1, X) (8) where ρt is the copying probability, which is jointly learned with the model. Table 2 summarizes the specifications of the Transformer models for each granularity. Table 2: Model Specifications. Phrase-level model Sentence-level model Receptive field Local Global Word Visibility {xl}L l=1 {xl}l:zl=1 #Dimension 300 150 #Heads 6 3 Copy mechanism Yes No 2.4 Aggregator Each Transformer model works independently until generating the final paraphrases. The prediction of the token at t-th position is determined by the Figure 3: Aggregator. aggregator, which combines the outputs from the m-decoders. More precisely, the aggregator first decides the probability of the next word being at each granularity. The previous word yt−1 and the context vectors c0 and c1 given by m-decoder0 and m-decoder1, are fed into a LSTM to make the prediction: vt = LSTM([Wc[c0; c1; yt−1]; vt−1]) P(zt|y1:t−1, X) = GS(Wvvt, τ), (9) where vt is the hidden state of the LSTM. Then, jointly with the probabilities computed by mdecoders, we can make the final prediction of the next word via Eq (3). 2.5 Learning of Separator and Aggregator The proposed model can be trained end-to-end by maximizing the conditional probability (3). However, learning from scratch may not be informative for the separator and aggregator to disentangle the paraphrase patterns in an optimal way. Thus we induce weak supervision to guide the training of the model. We construct the supervision based on a heuristic that long-tail expressions contain more rare words. To this end, we first use the word alignment model (Och and Ney, 2003) to establish the links between the words in the sentence pairs from the paraphrase corpus. Then we assign the label z∗= 0 (phrase-level) to n (randomly sampled from {1, 2, 3}) pairs of aligned phrases that contain most rare words. The rest of the words are labeled as z∗= 1 (sentence-level). We train the model with explicit supervision at 3407 the beginning, with the following loss function: L = T X t=1 log P(yt|y1:t−1, X)+ λ( L X l=1 log P(z∗ l |X) + T X t=1 log P(z∗ t |y1:t−1, X)) (10) where λ is the hyper-parameter controlling the weight of the explicit supervision. In experiments, we decrease λ gradually from 1 to nearly 0. 
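A compact PyTorch-style sketch of how the two granularity-specific decoders are combined at prediction time (Eq. 3) and how the weak granularity supervision enters the objective (Eq. 10) is given below. Tensor bookkeeping over time steps is omitted and the variable names are ours, not taken from the released code.

```python
import torch
import torch.nn.functional as F

def next_word_logprob(logp_phrase, logp_sent, p_gate):
    """Eq. (3) as a two-way mixture:
    P(y_t|.) = P(z_t=0) * P_0(y_t|.) + P(z_t=1) * P_1(y_t|.).
    logp_phrase, logp_sent: [batch, vocab] log-probs from the two m-decoders;
    p_gate: [batch, 2] granularity probabilities from the aggregator."""
    mix = p_gate[:, :1] * logp_phrase.exp() + p_gate[:, 1:] * logp_sent.exp()
    return torch.log(mix + 1e-12)

def training_loss(word_logprob, y, sep_logits, z_src, agg_logits, z_tgt, lam):
    """Sketch of Eq. (10): word-level NLL plus weakly supervised granularity
    labels for the separator (source side) and the aggregator (target side)."""
    nll = F.nll_loss(word_logprob, y)
    sep = F.cross_entropy(sep_logits, z_src)
    agg = F.cross_entropy(agg_logits, z_tgt)
    return nll + lam * (sep + agg)
```

As noted above, λ is annealed from 1 towards nearly 0, so the explicit label terms mainly guide the early phase of training.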
3 Applications and Experimental Results We verify the proposed DNPG model for paraphrase generation in three aspects: interpretability, controllability and domain adaptability. We conduct experiments on WikiAnswers paraphrase corpus (?) and Quora duplicate question pairs, both of which are questions data. While the Quora dataset is labeled by human annotators, the WikiAnswers corpus is collected in a automatic way, and hence it is much noisier. There are more than 2 million pairs of sentences on WikiAnswers corpus. To make the application setting more similar to realworld applications, and more challenging for domain adaptation, we use a randomly sampled subset for training. The detailed statistics are shown in Table 3. Table 3: Statistics of the paraphrase datasets. WikiAnswers Quora Training 500K 100K Validation 6K 4K Test 20K 20K 3.1 Implementation and Training Details As the words in the WikiAnswers are all stemmed and lower case, we do the same pre-processing on Quora dataset. For both datasets, we truncate all the sentences longer than 20 words. For the models with copy mechanism, we maintain a vocabulary of size 8K. For the other baseline models besides vanilla Transformer, we include all the words in the training sets into vocabulary to ensure that the improvement of our models does not come from solving the out-of-vocabulary issue. For a fair comparison, we use the Transformer model with similar number of parameters with our model. Specifically, it is with 3 layers, model size of 450 dimensions, and attention with 9 heads. We use early stopping to prevent the problem of over-fitting. We train the DNPG with Adam optimizer (Kingma and Ba, 2014). We set the learning rate as 5e −4, τ as 1 and λ as 1 at first, and then decrease them to 1e −4, 0.9 and 1e −2 after 3 epochs. We set the hyper-parameters of models and optimization in all other baseline models to remain the same in their original works. We implement our model with PyTorch (Paszke et al., 2017). 3.2 Interpretable Paraphrase Generation First, we evaluate our model quantitatively in terms of automatic metrics such as BLEU (Papineni et al., 2002), ROUGE (Lin, 2004), which have been widely used in previous works on paraphrase generation. In addition, we include iBLEU (Sun and Zhou, 2012), which penalizes repeating the source sentence in its paraphrase. We use the same hyper-parameter in their original work. We compare DNPG with four existing neural-based models: ResidualLSTM (Prakash et al., 2016), VAE-SVG-eq (Gupta et al., 2017), pointer-generator (See et al., 2017) and the Transformer (Vaswani et al., 2017), the latter two of which have been reported as the state-of-the-art models in Li et al. (2018) and Wang et al. (2018) respectively. For a fair comparison, we also include a Transformer model with copy mechanism. Table 4 shows the performances of the models, indicating that DNPG achieves competitive performance in terms of all the automatic metrics among all the models. In particular, the DNPG has similar performance with the vanilla Transformer model on Quora dataset, while significantly performs better on WikiAnswers. The reason maybe that the DNPG is more robust to the noise, since it can process the paraphrase in an abstractive way. It also validates our assumption that paraphrasing can be decomposed in terms of granularity. When the training data of high quality is available, the transformer-based models significantly outperforms the LSTM-based models. 
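Among the reported metrics, iBLEU is the least standard; a minimal sketch of its usual formulation (Sun and Zhou, 2012) follows. The balancing weight and the use of NLTK's smoothed sentence-level BLEU are assumptions made for illustration, so the original setting should be consulted before comparing numbers.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def ibleu(candidate: str, references: list, source: str, alpha: float = 0.9) -> float:
    """iBLEU = alpha * BLEU(candidate, references) - (1 - alpha) * BLEU(candidate, source):
    it rewards overlap with the references while penalising copying of the source."""
    smooth = SmoothingFunction().method1
    cand = candidate.split()
    bleu_ref = sentence_bleu([r.split() for r in references], cand,
                             smoothing_function=smooth)
    bleu_src = sentence_bleu([source.split()], cand,
                             smoothing_function=smooth)
    return alpha * bleu_ref - (1 - alpha) * bleu_src
```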
Besides the quantitative performance, we demonstrate the interpretability of DNPG. Given an input sentence, the model can not only generate its paraphrase but also predict the granularity level of each word. By using the predicted granularity levels and the context attentions in the Transformer, we are able to extract the phrasal and sentential paraphrase patterns from the model. Specifically, we extract the sentential templates ¯X 3408 Table 4: In-domain performance of paraphrase generation. Quora WikiAnswers Models BLEU iBLEU ROUGE-1 ROUGE-2 BLEU iBLEU ROUGE-1 ROUGE-2 ResidualLSTM 17.57 12.67 59.22 32.40 27.36 22.94 48.52 18.71 VAE-SVG-eq 20.04 15.17 59.98 33.30 32.98 26.35 50.93 19.11 Pointer-generator 22.65 16.79 61.96 36.07 39.36 31.98 57.19 25.38 Transformer 21.73 16.25 60.25 33.45 33.01 27.70 51.85 20.70 Transformer+Copy 24.77 17.98 63.34 37.31 37.88 31.43 55.88 23.37 DNPG (ours) 25.03 18.01 63.73 37.75 41.64 34.15 57.32 25.88 Table 5: Performance of paraphrase generation on domain adaptation (source →target). WikiAnswers→Quora Quora→WikiAnswers Models BLEU iBLEU ROUGE-1 ROUGE-2 BLEU iBLEU ROUGE-1 ROUGE-2 Pointer-generator 6.96 5.04 41.89 12.77 27.94 21.87 53.99 20.85 Transformer+Copy 8.15 6.17 44.89 14.79 29.22 23.25 53.33 21.02 DNPG (ours) 10.00 7.38 47.53 18.89 31.84 24.22 54.87 22.27 Shallow fusion 7.95 6.04 44.87 14.79 29.76 22.57 53.54 20.68 MTL 6.37 4.90 37.64 11.83 23.65 18.34 48.19 17.53 MTL+Copy 9.83 7.22 47.08 19.03 30.78 21.87 54.10 21.08 Adapted DNPG (ours) 16.98 10.39 56.01 28.61 35.12 25.60 56.17 23.65 of X (or ¯Y of Y ) by substituting each fragment of words at the phrasal level by a unique placeholder such as $x. The extraction process is denoted as ¯X = T (X, Z) = [¯x1, . . . , ¯x¯L], where the element ¯x¯l is either a placeholder or a word labeled as sentence-level. Through the attention weights, we ensure that the pair of aligned fragment share the same placeholder in { ¯X, ¯Y }. The whole generation and alignment process is detailed in Appendix A. Each pair of fragments sharing the same placeholder are extracted as the phrasal paraphrase patterns. Table 6 gives examples of the generated paraphrases and the corresponding extracted templates. For instance, the model learns a sentential paraphrasing pattern: ¯X: what is $x’s $y →¯Y : what is the $y of $x, which is a common rewriting pattern applicable in general practice. The results clearly demonstrate the ability of DNPG to decompose the patterns at different levels, making its behaviors more interpretable. 3.3 Controllable Paraphrase Generation The design of the DNPG model allows the user to control the generating process more precisely. Thanks to the decomposable mechanisms, it is flexible for the model to conduct either sentential paraphrasing or phrasal paraphrasing individually. Furthermore, instead of using the learned separator, the user can manually specify the granularity labels of the input sentence and then choose the following paraphrasing strategies. Sentential paraphrasing is performed by restricting the phrase-level decoder (m-decoder0) to copying from the input at the decoding stage, i.e., keeping the copy probability ρt = 1. To ensure that the phrasal parts are well preserved, we replace each phrase-level fragment by a unique placeholder and recover it after decoding. Phrasal paraphrasing is performed with sentence template being fixed. For each phrase-level fragment, paraphrase is generated by m-decoder0 only and the generation stopped at t : zt = 1. 
Once the beam search of size B finished, there are B paraphrase candidates ˆYb. We pick up the one with the best accuracy and readability. Specifically, we re-rank them by P( ˆYb|X, Z) calculated by the full model of DNPG. Given a sentence, we manually label different segments of words as phrase-level, and employ the model to conduct sentential and phrasal paraphrasing individually. With the manual labels, the model automatically selects different paraphrase patterns for generation. Table 7 shows examples of the generated results by different paraphrasing strategies. As demonstrated by the examples, DNPG is flexible enough to generate paraphrase given different sentence templates and phrases. Controllable generation is useful in downstream applications, for instance, data augmentation in the task-oriented dialogue system. Suppose we have the user utterance book a flight from New York to London and want to produce more 3409 Table 6: Examples of the generated paraphrases and extracted patterns at each granularity level by DNPG. Input Sentence Generate Paraphrase Sentential Paraphrase Patterns Phrasal Paraphrase Patterns what is the technique for prevent suicide? how can you prevent suicide? what is the technique for $x →how can you $x what is the second easiest island? what is the 2nd easiest island? second easiest island →2nd easiest island what is rihanna brother’s name? what is the name of rihanna’s brother? what is $x’s $y →what is the $y of $x rihanna brother →rihanna’s brother do anyone see the relation between greek god and hindu god? what is the relationship between the greek god and hindu god? do anyone see the $x between $y →what is the $x between the $y relation →relationship Table 7: Examples of controllable generation of paraphrase. The words with underline are labeled as phrase-level and the ones in italic form are at sentencelevel. The strategy All is referred as the fully automatic generation. Input sentence & labels Strategy Generated Paraphrase what is the value of a 1961 us cent? All what is the 1961 nickel ’s value? what is the value of a 1961 us cent? Phrase what is the price of a 1961 nickel? what is the value of a 1961 us cent? Sentence what is the 1961 us cent ’s value? what is the value of a 1961 us cent? Phrase what is the value of a 1961 nickel? what is the value of a 1961 us cent? Sentence how much is a 1961 us cent worth? what should i do to avoid sleep in class? All how do i prevent sleep in class? what should i do to avoid sleep in class? Phrase what should i do to prevent sleep in class? what should i do to avoid sleep in class? Sentence how do i avoid sleep in class? what should i do to avoid sleep in class? Phrase what should i do to avoid fall sleep during class? what should i do to avoid sleep in class? Sentence what should i do if i don’t want to sleep in class? utterances with the same intent. With the DNPG, we can conduct sentential paraphrasing and keep the slot values fixed, e.g. buy an airline ticket to London from New York. 3.4 Unsupervised Domain Adaptation Existing studies on paraphrase generation mainly focus on the in-domain setting with a large-scale parallel corpus for training. In practice, there is always a need to apply the model in a new domain, where no parallel data is available. We formulate it as an unsupervised domain adaptation problem for paraphrase generation. 
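The placeholder bookkeeping behind the sentential-only strategy just described can be sketched as follows; it is also the operation reused when paraphrasing at the sentential level only in a new domain, as described next. Here paraphrase_sentence_level stands in for the copy-constrained DNPG decoder; it is an assumed callback, not a function of the released code.

```python
def sentential_only(sentence, labels, paraphrase_sentence_level):
    """Freeze phrase-level fragments (z_l = 0) behind placeholders, let the
    sentence-level model rewrite the template, then restore the placeholders."""
    tokens, frozen, out, idx = sentence.split(), {}, [], 0
    for tok, z in zip(tokens, labels):
        if z == 0:
            if out and out[-1] in frozen:        # extend the current fragment
                frozen[out[-1]] += " " + tok
            else:                                # open a new placeholder
                ph = f"$x{idx}"
                idx += 1
                frozen[ph] = tok
                out.append(ph)
        else:
            out.append(tok)
    template = paraphrase_sentence_level(" ".join(out))
    for ph in sorted(frozen, key=len, reverse=True):   # longer names first ($x10 before $x1)
        template = template.replace(ph, frozen[ph])
    return template
```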
Based on the observation that the sentence templates generated by DNPG tend to be more general and domaininsensitive, we consider directly performing the sentential paraphrase in the target domain as a solution. However, the language model of the source and target domains may differ, we therefore finetune the separator of DNPG so that it can identify the granularity of the sentence in the target domain more accurately. Specifically, to adapt the separator Psep(Z|X) to the target domain, we employ a reinforcement learning (RL) approach by maximizing the accumulative reward: Rseparator = EPsep(Z|X) L X l=1 rl(z1:l, X). (11) We define the reward functions based on the principle that the source and target domain share the similar sentence templates. We first train a neural language model, specifically LSTM, with the sentence templates in the source domain, with the conditional probability denoted as PLM(¯x¯l|¯x1:¯l−1). In the target domain, the template language model is employed as a reward function for separator. Formally we define the reward rl at position l as: rl(z1:l, X) = αPLM(¯x¯l|¯x1:¯l−1), (12) where the template ¯x1:¯l = T (X, z1:l) is extracted in the way as detailed in Section 3.2. And α is a scaling factor that penalizes the long fragment labeled as phrase-level, since more informative sentence templates are preferred. With the reward, the separator is further tuned with the policy gradient method (Williams, 1992; Sutton et al., 2000). To bridge the gap between training and testing of the Transformer models in different domain, we finetune the DNPG model on the sentential paraphrase patterns extracted in source domain. Since only the unlabeled data in the target domain is needed to fine-tune separator, the domain adaptation can be done incrementally. An overview of the complete training process is illustrated in Figure 4. We refer the model fine-tuned in this way as Adapted DNPG. We evaluate the performance of the original DNPG and the Adapted DNPG in two settings of domain transfer: 1) Quora dataset as the source domain and WikiAnswers as the target domain, denoted as Quora→WikiAnswers, and 2) in reverse as WikiAnswers→Quora. For the baseline models, in addition to the pointer-generator network and the Transformer model with copy mechanism (denoted as Transformer+Copy), we use 3410 Figure 4: Left: Training of language model in the source domain; Right: RL training of separator in the target domain. the shallow fusion (Gulcehre et al., 2015) and the multi-task learning (MTL) (Domhan and Hieber, 2017) that harness the non-parallel data in the target domain for adaptation. For fair comparisons, we use the Transformer+Copy as the base model for shallow fusion and implement a variant of MTL with copy mechanism (denoted as MTL+Copy). Table 5 shows performances of the models in two settings. DNPG achieves better performance over the pointer-generator and Transformer-based model, and has the competitive performance with MTL+Copy, which accesses target domain for training. With a fine-tuned separator, Adapted DNPG outperforms other models significantly on Quora→WikiAnswers. When it comes to WikiAnswers→Quora, where domain adaptation is more challenging since the source domain is noisy, the margin is much larger. The main reason is that the original meaning can be preserved well when the paraphrasing is conducted at the sentential level only. For an intuitive illustration, We show examples of the generated paraphrases from Adapted DNPG and MTL+Copy in Table 10 in Appendix. 
It is shown that the sentential paraphrasing is an efficient way to reuse the general paraphrase patterns and meanwhile avoid mistakes on rephrasing domain-specific phrases. However, it is at the expense of the diversity of the generated paraphrases. We leave this problem for future work. To further verify the improvement of Adapted DNPG, we conduct the human evaluation on the WikiAnswers→Quora setting. We have six human assessors to evaluate 120 groups of paraphrase candidates given the input sentences. Each group consists of the output paraphrases from Table 8: Human Evaluation in WikiAnswers→Quora Models Mean Rank Agreement MTL+Copy 3.22 0.446 Naive DNPG 3.13 0.323 Adapted DNPG 1.79 0.383 Reference 1.48 0.338 MTL+Copy, DNPG and Adapted DNPG as well as the reference. The evaluators are asked to rank the candidates from 1 (best) to 4 (worst) by their readability, accuracy and surface dissimilarity to the input sentence. The detailed evaluation guide can be found in Appendix B. Table 8 shows the mean rank and inter-annotator agreement (Cohen’s kappa) of each model. Adapted DNPG again significantly outperforms MTL+Copy by a large margin (p-value < 0.01). The performance of the original DNPG and MTL+Copy has no significant difference (p-value = 0.18). All of the interannotator agreement is regarded as fair or above. 3.5 Ablation Studies and Discussion We quantify the performance gain of each inductive bias we incorporated in the DNPG model. Specifically, we compare the DNPG with three variants: one with vanilla attention modules, one with vanilla positional encoding and the one uses vanilla softmax. We train them with the training set of WikiAnswers and test in the validation set of Quora. The results are shown in Table 9, which shows that each inductive bias has a positive contribution. It further proves that the decomposition mechanism allows the model to capture more abstractive and domain-invariant patterns. We also note that there is a large drop without the constraints on multi-head attention, which is a core part of the decomposition mechanism. We investigate the effect of the weak supervision for separator and aggregator by setting λ as 0. Though there is not a significant drop on quantitative performance, we observe that the model struggles to extract meaningful paraphrase patterns. It means that explicit supervision for separator and aggregator can make a difference and it does not need to be optimal. It opens a door to incorporate symbolic knowledge, such as regular expression of sentence templates, human written paraphrase patterns, and phrase dictionary, into the neural network. Through training in a parallel corpus, DNPG can generalize the symbolic rules. 3411 Table 9: Ablation Study in WikiAnswers→Quora Model Variants BLEU iBLEU DNPG 9.84 7.40 w/ Vanilla Multi-Head Attention 7.65 5.86 w/ Vanilla Positional Encoding 9.46 7.08 w/ Vanilla Softmax 9.30 7.01 4 Related Work 4.1 Neural Paraphrase Generation Most of the existing neural methods of paraphrase generation focus on improving the indomain quality of generated paraphrases. Prakash et al. (2016) and Ma et al. (2018) adjust the network architecture for larger capacity. Cao et al. (2017) and Wang et al. (2018) utilize external resources, in other words, phrase dictionary and semantic annotations. Li et al. (2018) reinforce the paraphrase generator by a learnt reward function. Although achieving state-of-the-art performances, none of the above work considers the paraphrase patterns at different levels of granularity. 
Moreover, their models can generate the paraphrase in a neither interpretable nor a fine-grained controllable way. In Iyyer et al. (2018)’s work, the model is trained to produce a paraphrase of the sentence with a given syntax. In this work, we consider automatically learning controllable and interpretable paraphrasing operations from the corpus. This is also the first work to consider scalable unsupervised domain adaptation for neural paraphrase generation. 4.2 Controllable and Interpretable Text Generation There is extensive attention on controllable neural sequence generation and its interpretation. A line of research is based on variational auto-encoder (VAE), which captures the implicit (Gupta et al., 2017; Li et al., 2017) or explicit information (Hu et al., 2017; Liao et al., 2018) via latent representations. Another approach is to integrate probabilistic graphical model, e.g., hidden semi-Markov model (HSMM) into neural network (Wiseman et al., 2018; Dai et al., 2016). In these works, neural templates are learned as a sequential composition of segments controlled by the latent states, and be used for language modeling and data-totext generation. Unfortunately, it is non-trivial to adapt this approach to the Seq2Seq learning framework to extract templates from both the source and the target sequence. 4.3 Domain Adaptation for Seq2Seq Learning Domain adaptation for neural paraphrase generation is under-explored. To our best knowledge, Su and Yan (2017)’s work is the only one on this topic. They utilize the pre-trained word embedding and include all the words in both domains to vocabulary, which is tough to scale. Meanwhile, we notice that there is a considerable amount of work on domain adaptation for neural machine translation, another classic sequence-to-sequence learning task. However, most of them require parallel data in the target domain (Wang et al., 2017a,b). In this work, we consider unsupervised domain adaptation, which is more challenging, and there are only two works that are applicable. Gulcehre et al. (2015) use the language model trained in the target domain to guide the beam search. Domhan and Hieber (2017) optimize two stacked decoders jointly by learning language model in the target domain and learning to translate in the source domain. In this work, we utilize the similarity of sentence templates in the source and target domains. Thanks to the decomposition of multi-grained paraphrasing patterns, DNPG can fast adapt to a new domain without any parallel data. 5 Conclusion In this paper, we have proposed a neural paraphrase generation model, which is equipped with a decomposition mechanism. We construct such mechanisms by latent variables associated with each word, and a couple of Transformer models with various inductive biases to focus on paraphrase patterns at different levels of granularity. We further propose a fast and incremental method for unsupervised domain adaptation. The quantitative experiment results show that our model has competitive in-domain performance compared to the state-of-the-art models, and outperforms significantly upon other baselines in domain adaptation. The qualitative experiments demonstrate that the generation of our model is interpretable and controllable. In the future, we plan to investigate more efficient methods of unsupervised domain adaptation with decomposition mechanism on other NLP tasks. 3412 References Ziqiang Cao, Chuwei Luo, Wenjie Li, and Sujian Li. 2017. Joint copying and restricted generation for paraphrase. 
In Thirty-First AAAI Conference on Artificial Intelligence. Hanjun Dai, Bo Dai, Yan-Ming Zhang, Shuang Li, and Le Song. 2016. Recurrent hidden semi-markov model. In International Conference on Learning Representations. Tobias Domhan and Felix Hieber. 2017. Using targetside monolingual data for neural machine translation through multi-task learning. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1500–1505. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor OK Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1631–1640. Caglar Gulcehre, Orhan Firat, Kelvin Xu, Kyunghyun Cho, Loic Barrault, Huei-Chi Lin, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2015. On using monolingual corpora in neural machine translation. arXiv preprint arXiv:1503.03535. Ankush Gupta, Arvind Agarwal, Prawaan Singh, and Piyush Rai. 2017. A deep generative framework for paraphrase generation. arXiv preprint arXiv:1709.05074. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P Xing. 2017. Toward controlled generation of text. arXiv preprint arXiv:1703.00955. Mohit Iyyer, John Wieting, Kevin Gimpel, and Luke Zettlemoyer. 2018. Adversarial example generation with syntactically controlled paraphrase networks. arXiv preprint arXiv:1804.06059. Eric Jang, Shixiang Gu, and Ben Poole. 2016. Categorical reparameterization with gumbel-softmax. arXiv preprint arXiv:1611.01144. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Yann LeCun, L´eon Bottou, Yoshua Bengio, Patrick Haffner, et al. 1998. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324. Piji Li, Wai Lam, Lidong Bing, and Zihao Wang. 2017. Deep recurrent generative decoder for abstractive text summarization. arXiv preprint arXiv:1708.00625. Zichao Li, Xin Jiang, Lifeng Shang, and Hang Li. 2018. Paraphrase generation with deep reinforcement learning. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3865–3878. Yi Liao, Lidong Bing, Piji Li, Shuming Shi, Wai Lam, and Tong Zhang. 2018. Quase: Sequence editing under quantifiable guidance. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3855–3864. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. Text Summarization Branches Out. Shuming Ma, Xu Sun, Wei Li, Sujian Li, Wenjie Li, and Xuancheng Ren. 2018. Query and output: Generating words by querying distributed word representations for paraphrase generation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pages 196–206. Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19–51. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Association for Computational Linguistics. 
Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in pytorch. In International Conference on Learning Representations. Aaditya Prakash, Sadid A Hasan, Kathy Lee, Vivek Datla, Ashequl Qadir, Joey Liu, and Oladimeji Farri. 2016. Neural paraphrase generation with stacked residual lstm networks. arXiv preprint arXiv:1610.03098. Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointergenerator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073– 1083. Yu Su and Xifeng Yan. 2017. Cross-domain semantic parsing via paraphrasing. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1235–1246. Hong Sun and Ming Zhou. 2012. Joint learning of a dual smt system for paraphrase generation. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Short PapersVolume 2, pages 38–42. Association for Computational Linguistics. 3413 Richard S Sutton, David A McAllester, Satinder P Singh, and Yishay Mansour. 2000. Policy gradient methods for reinforcement learning with function approximation. In Advances in neural information processing systems, pages 1057–1063. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Rui Wang, Andrew Finch, Masao Utiyama, and Eiichiro Sumita. 2017a. Sentence embedding for neural machine translation domain adaptation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 560–566. Rui Wang, Masao Utiyama, Lemao Liu, Kehai Chen, and Eiichiro Sumita. 2017b. Instance weighting for neural machine translation domain adaptation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1482–1488. Su Wang, Rahul Gupta, Nancy Chang, and Jason Baldridge. 2018. A task in a suit and a tie: paraphrase generation with semantic augmentation. arXiv preprint arXiv:1811.00119. Ronald J Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229–256. Sam Wiseman, Stuart Shieber, and Alexander Rush. 2018. Learning neural templates for text generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3174–3187. A Algorithm for extracting templates Algorithm 1 ExtractSentParaPattern INPUT: X, Y , Zx, Zy, α′, V OUTPUT: ¯X, ¯Y 1: procedure EXTRACT ¯X 2: L ←|X|; 3: ¯X ←[ ]; 4: c ←1; 5: p ←[ ]; 6: for l := 1 to L do 7: if zx l = 0 then 8: if l = 1 or ¯Xl−1 /∈V then 9: ¯X.add(Vc); 10: p.add([ ]); 11: c ←c + 1; 12: else 13: p[c].add(l); 14: else 15: ¯X.add(Xl); 16: procedure EXTRACT ¯Y 17: T ←|Y |; 18: ¯Y ←[ ]; 19: for t := 1 to T do 20: if zy t = 0 then 21: c ←arg max c 1 |p[c]| P|p[c]| l=1 α′ p[c][l],t; 22: if t = 1 or ¯Yt−1 ̸= Vc then 23: ¯Y .add(Vc); 24: else 25: ¯Y .add(Yt); End 3414 Table 10: Examples of the generated paraphrases and extracted patterns at each granularity level by DNPG. Input Sentence Extracted Source Templates Generated Sentential Paraphrase Generated Paraphrase Generated by MTL+Copy is there any verify angel investor on quora? 
is there any $x on $y how many $x on $y how many verify angel investor on quora? is there any verify on quora? how much salary do iit professor get? how much salary do $x get how much money do $x make how much money do iit professor make? how much do professor UNK? which is the best mobile below 15000? which is the $x the $x the best mobile below 15000 ? what mobile 15000? how many time should i have bath? how many $x number of $x number of time should i have bath? how do you have bath? who is the best hollywood actor? who is the $x what is $x name what is the best hollywood actor name? who is the best actor? how do you change a key ignition in a 1988 chevy celebrity? how do you $x what is the best way to $x what is the best way to change a key ignition in a 1988 chevy celebrity? how do you change a 1988 in a 1988 chevy? why do company issue bonus share ? why do $x what is the purpose of the $x what is the purpose of the company issue bonus share? how do company issue bonus share? under which condition do the hiv virus survive? under which condition do the $x which condition is best for the $x which condition is best for the hiv virus survive? which do the hiv virus survive? use of monggo seed ? $x of $y what is the $x of $y what is the use of monggo seed? ? how do you eat potato salad? how do you $x is there any way to $x is there any way to eat potato salad? how do you eat potato salad? who is the most important person in yours life? who is the $x in $y who is $y ’s $x who is yours life ’s most important person? what is the most important person in yours life? what is the easiest business to start? what is $x what is $x in the world what is the easiest business to start in the world? what is business? B Evaluation Guide Please evaluate the paraphrase with respect to three criterions: readability, accuracy, and diversity, which are arranged by importance. Specifically, the criterions of paraphrase quality from bad to good are listed in detailed as following: 1. Non-readable. The generated paraphrase does not make sense and is not humangenerated text. Please note that readable is not equivalent to grammatical correct. That is, considered there are non-English speaker, a readable paraphrase can have grammar mistakes. 2. Readable but is not accurate. The answer to the paraphrased question is not helpful to the owner of the original question. For instance, how can i study c++ →what be c++. Here are some examples of accurate paraphrase: (a) how can i learn c++ →what be the best way to learn c++ (b) can i learn c++ in a easy way →be learn c++ hard (c) do you have some suggestion of well design app →what be some well design app name (d) be study hard →how study hard 3. Accurate but with trivial paraphrasing. Just remove or add some stop words. For instance, why can trump win the president election →why can trump win president election 4. Novel paraphrasing. More or loss, there is information loss of a non-trivial paraphrase. Thus, again, determine whether the paraphrase is equivalent to the original question from the perspective of question owner. Furthermore, it is not necessary for a non-trivial paraphrase contains rare paraphrasing pattern. For instance, maybe there is lot of paraphrase with the pattern what be $name → some interesting facts about $name. But it can still be considered as non-trivial paraphrase. There are some other things to be noted: 1. There maybe special token, that is, [UNK] in the generated paraphrase. A generated paraphrase with [UNK] should generally have higher rank. 
2. The same paraphrase should have same ranking. Otherwise, please try your best to distinguish the quality of paraphrase. 3. Please do Google search first when you see some strange word or phrase for better evaluation. 4. Please note that all the words are stemmed and lower case. Just assume all the words are in their right form. For instance, what be you suggestion of some english movie is equivalent to What are your suggestions of some English movies.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3415–3427 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3415 Transforming Complex Sentences into a Semantic Hierarchy Christina Niklaus13, Matthias Cetto1, Andr´e Freitas2, and Siegfried Handschuh13 1 University of St.Gallen {christina.niklaus, matthias.cetto, siegfried.handschuh}@unisg.ch 2 University of Manchester [email protected] 3 University of Passau {christina.niklaus, siegfried.handschuh}@uni-passau.de Abstract We present an approach for recursively splitting and rephrasing complex English sentences into a novel semantic hierarchy of simplified sentences, with each of them presenting a more regular structure that may facilitate a wide variety of artificial intelligence tasks, such as machine translation (MT) or information extraction (IE). Using a set of hand-crafted transformation rules, input sentences are recursively transformed into a twolayered hierarchical representation in the form of core sentences and accompanying contexts that are linked via rhetorical relations. In this way, the semantic relationship of the decomposed constituents is preserved in the output, maintaining its interpretability for downstream applications. Both a thorough manual analysis and automatic evaluation across three datasets from two different domains demonstrate that the proposed syntactic simplification approach outperforms the state of the art in structural text simplification. Moreover, an extrinsic evaluation shows that when applying our framework as a preprocessing step the performance of state-of-the-art Open IE systems can be improved by up to 346% in precision and 52% in recall. To enable reproducible research, all code is provided online. 1 Introduction Text Simplification (TS) is defined as the process of reducing the linguistic complexity of natural language (NL) text by utilizing a more readily accessible vocabulary and sentence structure. Its goal is to improve the readability of a text, making information easier to comprehend for people with reduced literacy, such as non-native speakers (Paetzold and Specia, 2016), aphasics (Carroll et al., 1998), dyslexics (Rello et al., 2013) or deaf persons (Inui et al., 2003). However, not only human readers may benefit from TS. Previous work has established that applying TS as a preprocessing step can improve the performance of a variety of natural language processing (NLP) tasks, such as Open IE (Saha and Mausam, 2018; Cetto et al., 2018), MT (ˇStajner and Popovic, 2016, 2018), Relation Extraction (Miwa et al., 2010), Semantic Role Labeling (Vickrey and Koller, 2008), Text Summarization (Siddharthan et al., 2004; Bouayad-Agha et al., 2009), Question Generation (Heilman and Smith, 2010; Bernhard et al., 2012), or Parsing (Chandrasekar et al., 1996; Jonnalagadda et al., 2009). Linguistic complexity stems from the use of either a difficult vocabulary or sentence structure. Therefore, TS is classified into two categories: lexical simplification and syntactic simplification. Through substituting a difficult word or phrase with a more comprehensible synonym, the former primarily addresses a human audience. Most NLP systems, on the contrary, derive greater benefit from syntactic simplification, which focuses on identifying grammatical complexities in a sentence and converting these structures into simpler ones, using a set of text-to-text rewriting operations. 
Sentence splitting plays a major role here: it divides a sentence into several shorter components, with each of them presenting a simpler and more regular structure that is easier to process for downstream applications. Many different methods for addressing the task of TS have been presented so far. As noted in ˇStajner and Glavaˇs (2017), data-driven approaches outperform rule-based systems in the area of lexical simplification (Glavaˇs and ˇStajner, 2015; Paetzold and Specia, 2016; Nisioi et al., 2017; Zhang and Lapata, 2017). In contrast, the state-of-the-art syntactic simplification approaches are rule-based (Siddharthan and Mandya, 2014; Ferr´es et al., 2016; Saggion et al., 2015), providing more grammatical output and covering a wider range of syn3416 tactic transformation operations, however, at the cost of being very conservative, often to the extent of not making any changes at all. Acknowledging that existing TS corpora (Zhu et al., 2010; Coster and Kauchak, 2011; Xu et al., 2015) are inappropriate for learning to decompose sentences into shorter, syntactically simplified components, as they contain only a small number of split examples, Narayan et al. (2017) lately compiled the first TS dataset that explicitly addresses the task of sentence splitting. Using this corpus, several encoderdecoder models (Bahdanau et al., 2014) are proposed for breaking down a complex source into a set of sentences with a simplified structure. Aharoni and Goldberg (2018) further explore this idea, augmenting the presented neural models with a copy mechanism (Gu et al., 2016; See et al., 2017). Figure 1: Example of the output that is generated by our proposed TS approach. A complex input sentence is transformed into a semantic hierarchy of simplified sentences in the form of minimal, self-contained propositions that are linked via rhetorical relations. In contrast to above-mentioned end-to-end neural approaches, we followed a more systematic approach. First, we performed an in-depth study of the literature on syntactic sentence simplification, followed by a thorough linguistic analysis of the syntactic phenomena that need to be tackled in the sentence splitting task. Next, we materialized our findings into a small set of 35 hand-crafted transformation rules that decompose sentences with a complex linguistic structure into shorter constituents that present a simpler and grammatically sound structure, leveraging downstream semantic applications whose predictive quality deteriorates with sentence length and complexity. One of our major goals was to overcome the conservatism exhibited by state-of-the-art syntactic TS approaches, i.e. their tendency to retain the input sentence rather than transforming it. For this purpose, we decompose each source sentence into minimal semantic units and turn them into self-contained propositions. In that way, we provide a fine-grained output that is easy to process for subsequently applied NLP tools. Another major drawback of the structural TS approaches described so far is that they do not preserve the semantic links between the individual split components, resulting in a set of incoherent utterances. Consequently, important contextual information is lost, impeding the interpretability of the output for downstream semantic tasks. To prevent this, we establish a contextual hierarchy between the split components and identify the semantic relationship that holds between them. An example of the resulting output is displayed in Figure 1. 
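The two-layered output illustrated in Figure 1 can be thought of as a small tree-like data structure. The following rendering is hypothetical: the class names and the example sentences are ours and do not correspond to the framework's actual API.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SimplifiedSentence:
    text: str                                    # a minimal, self-contained proposition
    contexts: List["ContextLink"] = field(default_factory=list)

@dataclass
class ContextLink:
    relation: str                                # rhetorical relation, e.g. "Elaboration"
    context: SimplifiedSentence                  # sentence carrying background information

# Hypothetical example: one core sentence with an attached context.
core = SimplifiedSentence("The company reported a loss.")
core.contexts.append(
    ContextLink("Cause", SimplifiedSentence("Demand for its products fell."))
)
```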
2 Related Work To date, three main classes of techniques for syntactic TS with a focus on the task of sentence splitting have been proposed. The first uses a set of syntax-based hand-crafted transformation rules to perform structural simplification operations, while the second exploits machine learning (ML) techniques where the model learns simplification rewrites automatically from examples of aligned complex source and simplified target sentences. In addition, approaches based on the idea of decomposing a sentence into its main semantic constituents using a semantic parser were described. 2.1 Syntax-driven Rule-based Approaches The line of work on structural TS starts with Chandrasekar et al. (1996), who manually defines a set of rules to detect points where sentences may be split, such as relative pronouns or conjunctions, based on chunking and dependency parse representations. Siddharthan (2002) presents a pipelined architecture for a simplification framework that extracts a variety of clausal and phrasal components from a source sentence and transforms them into stand-alone sentences using a set of hand-written grammar rules based on shallow syntactic features. More recently, Siddharthan and Mandya (2014) propose RegenT, a hybrid TS approach that combines an extensive set of 136 hand-written gram3417 mar rules defined over dependency tree structures for tackling 7 types of linguistic constructs with a much larger set of automatically acquired rules for lexical simplification. Taking a similar approach, Ferr´es et al. (2016) describe a linguistically-motivated rule-based TS approach called YATS, which relies on part-of-speech tags and syntactic dependency information to simplify a similar set of linguistic constructs, using a set of only 76 hand-crafted transformation patterns in total. These two state-of-the-art rule-based structural TS approaches primarily target reader populations with reading difficulties, such as people suffering from dyslexia, aphasia or deafness. According to Siddharthan (2014), those groups most notably benefit from splitting long sentences that contain clausal constructions. Consequently, simplifying clausal components is the main focus of the proposed TS systems of this category. Finally, ˇStajner and Glavaˇs (2017) present LEXEV and EVLEX, which combine a syntactic simplification approach that uses an even smaller set of 11 hand-written rules to perform sentence splitting and deletion of irrelevant sentences or sentence parts with an unsupervised lexical simplifier based on word embeddings (Glavaˇs and ˇStajner, 2015). 2.2 Approaches based on Semantic Parsing While the TS approaches described above are based on syntactic information, there are a variety of methods that use semantic structures for sentence splitting. These include the work of Narayan and Gardent (2014) and Narayan and Gardent (2016), who propose a framework that takes semantically-shared elements as the basis for splitting and rephrasing a sentence. It first generates a semantic representation of the input to identify splitting points in the sentence. In a second step, the split components are then rephrased by completing them with missing elements in order to reconstruct grammatically sound sentences. Lately, with DSS, Sulem et al. (2018c) describe another semantic-based structural simplification framework that follows a similar approach. 2.3 Data-driven Approaches More recently, data-driven approaches for the task of sentence splitting emerged. Narayan et al. 
(2017) propose a set of sequence-to-sequence models trained on the WebSplit corpus, a dataset of over one million tuples that map a single complex sentence to a sequence of structurally simplified sentences. Aharoni and Goldberg (2018) further explore this idea, augmenting the presented neural models with a copy mechanism. Though outperforming the models used in Narayan et al. (2017), they still perform poorly compared to previous state-of-the-art rule-based syntactic simplification approaches. In addition, Botha et al. (2018) observed that the sentences from the WebSplit corpus contain fairly unnatural linguistic expressions using only a small vocabulary. To overcome this limitation, they present a scalable, languageagnostic method for mining training data from Wikipedia edit histories, providing a rich and varied vocabulary over naturally expressed sentences and their extracted splits. When training the best-performing model of Aharoni and Goldberg (2018) on this new split-and-rephrase dataset, they achieve a strong improvement over prior best results from Aharoni and Goldberg (2018). However, due to the uniform use of a single split per source sentence in the training set, each input sentence is broken down into two output sentences only. Consequently, the resulting simplified sentences are still comparatively long and complex. 3 Recursive Sentence Splitting We present DISSIM, a recursive sentence splitting approach that creates a semantic hierarchy of simplified sentences.1 The goal of our approach is to generate an intermediate representation that presents a simple and more regular structure which is easier to process for downstream semantic applications and may support a faster generalization in ML tasks. For this purpose, we cover a wider range of syntactic constructs (10 in total) than state-of-the-art rule-based syntactic frameworks. In particular, our approach is not limited to breaking up clausal components, but also splits and rephrases a variety of phrasal elements, resulting in a much more fine-grained output where each proposition represents a minimal semantic unit that is typically composed of a simple subject-predicate-object structure. Though tackling a larger set of linguistic constructs, our framework operates on a much smaller set of only 35 manually defined rules as compared to existing syntax-driven rule-based approaches. 1The source code of our framework is available under https://github.com/Lambda-3/ DiscourseSimplification. 3418 With the help of the transformation patterns that we specified, source sentences that present a complex linguistic form are transformed into clean, compact structures by disembedding clausal and phrasal components that contain only supplementary information. These elements are then transformed into independent sentences. In that way, the source sentence is reduced to its key information (“core sentence”) and augmented with a number of associated contextual sentences that disclose additional information about it, resulting in a novel hierarchical representation in the form of core sentences and accompanying contexts. Moreover, we identify the rhetorical relations by which core sentences and their associated contexts are connected in order to preserve their semantic relationship. The resulting representation of the source text, which we will call a “discourse tree” in the following, can then be used to facilitate a variety of artificial intelligence tasks, such as text summarization, MT, IE or opinion mining, among other. 
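To make the intended two-layered representation concrete, the following minimal Python sketch models it as a flat list of simplified sentences carrying a context layer and typed rhetorical links. This is purely illustrative: the class names, fields and 0-based indices are conventions of this sketch, not those of the released Java implementation, and the example anticipates the house/plantation sentence analysed later in Table 6.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class SimplifiedSentence:
    text: str                  # a minimal, self-contained proposition
    context_layer: int         # 0 = core information, >= 1 = contextual information
    # outgoing semantic links: (rhetorical relation, index of the target sentence)
    links: List[Tuple[str, int]] = field(default_factory=list)

@dataclass
class DiscourseTreeOutput:
    source: str
    sentences: List[SimplifiedSentence]

    def core(self) -> List[str]:
        """Key information of the input: all layer-0 sentences."""
        return [s.text for s in self.sentences if s.context_layer == 0]

out = DiscourseTreeOutput(
    source="The house was once part of a plantation and it was the home of "
           "Josiah Henson, a slave.",
    sentences=[
        SimplifiedSentence("The house was once part of a plantation.", 0, [("List", 1)]),
        SimplifiedSentence("It was the home of Josiah Henson.", 0,
                           [("List", 0), ("Elaboration", 2)]),
        SimplifiedSentence("Josiah Henson was a slave.", 1),
    ],
)
print(out.core())
# ['The house was once part of a plantation.', 'It was the home of Josiah Henson.']
```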
3.1 Transformation Stage The structural TS framework that we propose takes a sentence as input and performs a recursive transformation stage that is based upon 35 handcrafted grammar rules. Each rule defines how to split up and rephrase the input into structurally simplified sentences (subtask 1), establish a contextual hierarchy between the split components (subtask 2) and identify the semantic relationship that holds between those elements (subtask 3). The transformation patterns are based on syntactic and lexical features that can be derived from a sentence’s phrase structure. They were heuristically determined in a rule engineering process whose main goal was to provide a best-effort set of patterns, targeting the challenge of being applied in a recursive fashion and to overcome biased or incorrectly structured parse trees. We empirically determined a fixed execution order of the rules by examining which sequence achieved the best simplification results in a manual qualitative analysis conducted on a development test set of 100 randomly sampled Wikipedia sentences. The grammar rules are applied recursively in a top-down fashion on the source sentence, until no more simplification pattern matches. In that way, the input is turned into a discourse tree, consisting of a set of hierarchically ordered and semantically interconnected sentences that present a simplified syntax. Table 2 displays some examples of our transformation patterns,2 which are specified in terms of Tregex patterns.3 CLAUSAL/PHRASAL TYPE # RULES Clausal disembedding 1 Coordinate clauses 1 2 Adverbial clauses 6 3a Relative clauses (non-defining) 8 3b Relative clauses (defining) 5 4 Reported speech 4 Phrasal disembedding 5 Coordinate verb phrases (VPs) 1 6 Coordinate noun phrases (NPs) 2 7a Appositions (non-restrictive) 1 7b Appositions (restrictive) 1 8 Prepositional phrases (PPs) 3 9 Adjectival and adverbial phrases 2 10 Lead NPs 1 Total 35 Table 1: Linguistic constructs addressed by DISSIM. Subtask 1: Sentence Splitting and Rephrasing. Each transformation rule takes a sentence’s phrasal parse tree4 as input and encodes a pattern that, in case of a match, will extract textual parts from the tree. The decomposed text spans, as well as the remaining text span are then transformed into new stand-alone sentences. In order to ensure that the resulting simplified output is grammatically sound, some of the extracted text spans are combined with their corresponding referents from the main sentence or appended to a simple phrase (e.g. “This is”). In that way, the simplification rules encode both the splitting points and rephrasing procedure for reconstructing proper sentences. Both coordinate and subordinate clauses, as well as various types of phrasal elements are addressed by our TS approach. Table 1 provides an overview of the linguistic constructs that are tackled, including the number of transformation patterns that were specified for the respective syntactic phenomenon. For a better understanding of the splitting and rephrasing procedure, Figure 2 visualizes the application of the first grammar rule that matches the given input sentence. The upper part of the box represents the complex input, which is matched against the simplification pattern. The lower part 2For reproducibility purposes, the complete set of transformation patterns is available under https://github. com/Lambda-3/DiscourseSimplification/ tree/master/supplemental_material. 3See Levy and Andrew (2006) for details on the rule syntax. 
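The recursive, top-down control flow described above can be summarized in a few lines. The sketch below is a schematic rendering under our own simplifying assumptions: rules are plain Python callables operating on strings rather than Tregex patterns over parse trees, and the toy "although" rule merely stands in for the real SUBORDINATIONPREEXTRACTOR.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple, Union

# A rule inspects a sentence (in practice, its phrasal parse tree) and, on a match,
# returns (remaining_core_part, extracted_part, rhetorical_relation, extracted_is_context).
Rule = Callable[[str], Optional[Tuple[str, str, str, bool]]]

@dataclass
class Node:                        # inner node of the discourse tree
    relation: str                  # e.g. "Contrast", "Elaboration", "List"
    core: "DiscourseTree"
    attached: "DiscourseTree"
    attached_is_context: bool      # subordination -> context, coordination -> core

DiscourseTree = Union[Node, str]   # leaves are fully simplified sentences

def simplify(sentence: str, rules: List[Rule]) -> DiscourseTree:
    """Recursively apply the ordered rule set until no pattern matches."""
    for rule in rules:                            # fixed, empirically chosen order
        match = rule(sentence)
        if match is not None:
            core, extracted, relation, is_context = match
            return Node(relation,
                        core=simplify(core, rules),        # top-down recursion
                        attached=simplify(extracted, rules),
                        attached_is_context=is_context)
    return sentence                               # leaf: no more simplification possible

# Toy rule for pre-posed "Although ..." clauses, standing in for a real Tregex-based rule:
def although_rule(sentence: str):
    if sentence.startswith("Although ") and ", " in sentence:
        clause, rest = sentence[len("Although "):].split(", ", 1)
        return rest[0].upper() + rest[1:], clause[0].upper() + clause[1:] + ".", "Contrast", True
    return None

print(simplify("Although the Treasury will announce details on Monday, "
               "the funding will be delayed.", [although_rule]))
```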
4generated by Stanford’s pre-trained lexicalized parser (Socher et al., 2013) 3419 RULE TREGEX PATTERN EXTRACTED SENTENCE SharedNPPostCoordinationExtractor (for coordinate verb phrases) ROOT <<: (S < (NP $.. (VP < +(VP) (VP > VP $.. VP )))) NP + VP . SubordinationPreExtractor (for adverbial clauses with pre-posed subordinative clauses) ROOT <<: (S < (SBAR < ( S < (NP $.. VP) ) $.. (NP $.. VP))) S < (NP $.. VP) . Table 2: A selection of transformation rule patterns. A boxed pattern represents the part that is extracted from the input sentence. An underlined pattern designates its referent. A pattern in bold will be deleted from the remaining part of the input. then depicts the result of the transformation operation. Example: SUBORDINATIONPREEXTRACTOR Input: “Although the Treasury will announce details of the November refunding on Monday, the funding will be delayed if Congress and President Bush fail to increase the Treasury’s borrowing capacity.” Matched Pattern: ROOT S . . VP will be delayed if ... borrowing capacity NP the funding , , SBAR S VP will announce details ... on Monday NP the Treasury IN Although Extraction: (3) “although” →Contrast (1) The funding will be delayed if Congress and President Bush fail to increase the Treasury’s borrowing capacity. (1) The Treasury will announce details of the November refunding on Monday. (2) context (2) core Figure 2: (Subtask 1) The source sentence is split up and rephrased into a set of syntactically simplified sentences. (Subtask 2) Then, the split sentences are connected with information about their constituency type to establish a contextual hierarchy between them. (Subtask 3) Finally, by identifying and classifying the rhetorical relations that hold between the simplified sentences, their semantic relationship is restored which can be used to inform downstream applications. Subtask 2: Constituency Type Classification. Each split will create two or more sentences with a simplified syntax. In order to establish a contextual hierarchy between them, we connect them with information about their constituency type. According to Fay (1990), clauses can be related to one another in two ways: First, there are parallel clauses that are linked by coordinating conjunctions, and second, clauses may be embedded inside another, introduced by subordinating conjunctions. The same applies to phrasal elements. Since the latter commonly express minor information, we denote them context sentences. In contrast, the former are of equal status and typically depict the key information contained in the input. Therefore, they are called core sentences in our approach. To differentiate between those two types of constituents, the transformation patterns encode a simple syntax-based approach where subordinate clauses and phrasal elements are classified as context sentences, while coordinate clauses/phrases are labelled as core.5 Subtask 3: Rhetorical Relation Identification. Finally, we aim to determine intra-sentential semantic relationships in order to restore semantic relations between the disembedded components. For this purpose, we identify and classify the rhetorical relations that hold between the simplified sentences, making use of both syntactic and lexical features which are encoded in the transformation patterns. While syntactic features are manifested in the phrasal composition of a sentence’s parse tree, lexical features are extracted from the parse tree in the form of cue phrases. 
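To give an impression of what matching a pattern such as those in Table 2 against a phrasal parse tree involves, the sketch below hand-codes the core structural condition of the SUBORDINATIONPREEXTRACTOR (an S whose initial SBAR contains a full clause, followed by a main NP and VP) on an NLTK constituency tree. It is a simplified stand-in for the actual Tregex engine and does not cover the full operator semantics of the patterns in Table 2.

```python
from nltk import Tree

def has_np_then_vp(s: Tree) -> bool:
    """True if the node has an NP child followed (not necessarily adjacent) by a VP child."""
    labels = [child.label() for child in s if isinstance(child, Tree)]
    return "NP" in labels and "VP" in labels[labels.index("NP") + 1:]

def match_subordination_pre(root: Tree):
    """Find an S whose pre-posed SBAR contains a full clause, followed by a main NP..VP."""
    for s in root.subtrees(lambda t: t.label() == "S"):
        kids = [c for c in s if isinstance(c, Tree)]
        if kids and kids[0].label() == "SBAR" and has_np_then_vp(s):
            embedded = [c for c in kids[0] if isinstance(c, Tree) and c.label() == "S"]
            if embedded and has_np_then_vp(embedded[0]):
                return kids[0], s          # (extracted subordinate clause, matrix clause)
    return None

parse = Tree.fromstring(
    "(ROOT (S (SBAR (IN Although) (S (NP (DT the) (NN Treasury)) "
    "(VP (MD will) (VP (VB announce) (NP (NNS details)))))) "
    "(NP (DT the) (NN funding)) (VP (MD will) (VP (VB be) (VP (VBN delayed))))))")
print(match_subordination_pre(parse) is not None)   # True: the pattern fires on this sentence
```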
The determination of potential cue words and their positions in specific syntactic environments is based on the work of Knott and Dale (1994). The extracted cue phrases are then used to infer the type of rhetorical relation. For this task we utilize a predefined list of rhetorical cue words adapted from the work of Taboada and Das (2013), which assigns them to the relation that they most likely trigger. For example, the transformation rule in Figure 2 spec5This approach roughly relates to the concept of nuclearity in Rhetorical Structure Theory (RST) (Mann and Thompson, 1988), which specifies each text span as either a nucleus or a satellite. The nucleus span embodies the central piece of information, whereas the role of the satellite is to further specify the nucleus. 3420 ifies that “although” is the cue word here, which is mapped to a “Contrast” relationship. 3.2 Final Discourse Tree The leaf nodes resulting from the first simplification pass are recursively simplified in a topdown approach. When no more transformation rule matches, the algorithm stops. The final discourse tree for the example sentence of Figure 2 is shown in Figure 3. Subordination Contrast Subordination Condition Coordination List Subordination Elaboration Bush is President. Bush fails to increase the Treasury’s borrowing capacity. core context Congress fails to increase the Treasury’s borrowing capacity. core core The funding will be delayed. core context Subordination Temporal This is on Monday. The Treasury will announce details of the November refunding. core context context core Figure 3: Final discourse tree of the example sentence. 4 Experimental Setup To compare the performance of our TS approach with state-of-the-art syntactic simplification systems, we evaluated DISSIM with respect to the sentence splitting task (subtask 1). The evaluation of the rhetorical structures (subtasks 2 and 3) will be subject of future work. Corpora. We conducted experiments on three commonly used simplification corpora from two different domains. The first dataset we used was Wikilarge, which consists of 359 sentences from the PWKP corpus (Xu et al., 2016). Moreover, to demonstrate domain independence, we compared the output generated by our TS approach with that of the various baseline systems on the Newsela corpus (Xu et al., 2015), which is composed of 1077 sentences from newswire articles. In addition, we assessed the performance of our simplification system using the 5000 test sentences from the WikiSplit benchmark (Botha et al., 2018), which was mined from Wikipedia edit histories. Baselines. We compared our DISSIM approach against several state-of-the-art baseline systems that have a strong focus on syntactic transformations through explicitly modeling splitting operations. For Wikilarge, these include (i) DSS; (ii) SENTS (Sulem et al., 2018c), which is an extension of DSS that runs the split sentences through the NTS system (Nisioi et al., 2017); (iii) HYBRID (Narayan and Gardent, 2014); (iv) YATS; and (v) RegenT. In addition, we report evaluation scores for the complex input sentences, which allows for a better judgment of system conservatism, and the corresponding simple reference sentences. With respect to the Newsela dataset, we considered the same baseline systems, with the exceptions of DSS and SENTS, whose outputs were not available. Finally, regarding the WikiSplit corpus, we restricted the comparison to the best-performing system in Botha et al. 
(2018), Copy512, which is a sequence-to-sequence neural model augmented with a copy mechanism and trained over the WikiSplit dataset. Automatic Evaluation. The automatic metrics that were calculated in the evaluation procedure comprise a number of basic statistics, including (i) the average sentence length of the simplified sentences in terms of the average number of tokens per output sentence (#T/S); (ii) the average number of simplified output sentences per complex input (#S/C); (iii) the percentage of sentences that are copied from the source without performing any simplification operation (%SAME), serving as an indicator for system conservatism; and (iv) the averaged Levenshtein distance from the input (LDSC), which provides further evidence for a system’s conservatism. Furthermore, in accordance with prior work on TS, we report average BLEU (Papineni et al., 2002) and SARI (Xu et al., 2016) scores for the rephrasings of each system.6 Finally, we computed the SAMSA and SAMSAabl score of each system, which are the first metrics that explicitly target syntactic aspects of TS (Sulem et al., 2018b). Manual Analysis. Human evaluation is carried out on a subset of 50 randomly sampled sentences per corpus by 2 non-native, but fluent English speakers who rated each input-output pair according to three parameters: grammaticality (G), meaning preservation (M) and structural simplicity (S) (see Section A of the appendix). In order to get further insights into the quality of our implemented simplification patterns, we performed an extensive qualitative analysis of the 35 hand-crafted transformation rules, comprising a 6For the computation of the BLEU and SARI scores we used the implementation of Nisioi et al. (2017) which is available under https://github.com/senisioi/ NeuralTextSimplification. 3421 manual recall-based analysis of the simplification patterns, and a detailed error analysis. Usefulness. Since the DISSIM framework that we propose is aimed at serving downstream semantic applications, we measure if an improvement in the performance of NLP tools is achieved when using our TS approach as a preprocessing step. For this purpose, we chose the task of Open IE (Banko et al., 2007) and determine whether such systems benefit from the sentence splitting approach presented in this work. 5 Results and Discussion Automatic Evaluation. The upper part of Table 3 reports the results that were achieved on the 359 sentences from the Wikilarge corpus, using a set of automatic metrics. Transforming each sentence of the dataset, our DISSIM approach reaches the highest splitting rate among the TS systems under consideration, together with HYBRID, DSS and SENTS. With 2.82 split sentences per input on average, our framework outputs by a large margin the highest number of structurally simplified sentences per source. Moreover, consisting of 11.01 tokens on average, the DISSIM approach returns the shortest sentences of all systems. The relatively high word-based Levenshtein distance of 11.90 confirms previous findings. With regard to SARI, our DISSIM framework (35.05) again outperforms the baseline systems. However, it is among the systems with the lowest BLEU score (63.03). Though, Sulem et al. 
(2018a) recently demonstrated that BLEU is inappropriate for the evaluation of TS approaches when sentence splitting is involved, since it negatively correlates with structural simplicity, thus penalizing sentences that present a simplified syntax, and presents no correlation with the grammaticality and meaning preservation dimensions. For this reason, we only report these scores for the sake of completeness and to match past work. According to Sulem et al. (2018b), the recently proposed SAMSA and SAMSAabl scores are better suited for the evaluation of the sentence splitting task. With a score of 0.67, the DISSIM framework shows the best performance for SAMSA, while its score of 0.84 for SAMSAabl is just below the one obtained by the RegenT system (0.85).7 7According to Sulem et al. (2018b), SAMSA highly correlates with human judgments for S and G, while SAMSAabl The results on the Newsela dataset, depicted in the middle part of Table 3, support our findings on the Wikilarge corpus, indicating that our TS approach can be applied in a domain independent manner. The lower part of Table 3 illustrates the numbers achieved on the WikiSplit dataset. Though the Copy512 system beats our approach in terms of BLEU and SARI, the remaining scores are clearly in favour of the DISSIM system. Manual Analysis. The results of the human evaluation are displayed in Table 4. The interannotator agreement was calculated using Cohen’s κ, resulting in rates of 0.72 (G), 0.74 (M) and 0.60 (S). The assigned scores demonstrate that our DISSIM approach outperforms all other TS systems in the S dimension. With a score of 1.30 on the Wikilarge sample sentences, it is far ahead of the baseline approaches, with HYBRID (0.86) coming closest. However, this system receives the lowest scores for G and M. RegenT obtains the highest score for G (4.64), while YATS is the best-performing approach in terms of M (4.60). However, with a rate of only 0.22, it achieves a low score for S, indicating that the high score in the M dimension is due to the conservative approach taken by YATS, resulting in only a small number of simplification operations. This explanation also holds true for RegenT’s high mark for G. Still, our DISSIM approach follows closely, with a score of 4.50 for M and 4.36 for G, suggesting that it obtains its goal of returning finegrained simplified sentences that achieve a high level of grammaticality and preserve the meaning of the input. Considering the average scores of all systems under consideration, our approach is the best-performing system (3.39), followed by RegenT (3.16). The human evaluation ratings on the Newsela and WikiSplit sentences show similar results, again supporting the domain independence of our proposed approach. The results of the recall-based qualitative analysis of the transformation patterns, together with the findings of the error analysis are illustrated in Section B of the appendix in Tables 9 and 10. Concerning the quality of the implemented simplification rules, the percentage of sentences that were correctly split was approaching 100% for coordinate and adverbial clauses, and exceeded 80% on average. achieves the highest correlation for M. 
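The surface statistics reported in Table 3 (#T/S, #S/C, %SAME and the word-based Levenshtein distance LDSC) are straightforward to reproduce from system outputs. The sketch below computes them for a list of (complex input, list of output sentences) pairs; it is a minimal re-implementation under stated assumptions (%SAME is counted as exact copies, and LDSC is taken between the input and the concatenation of the outputs), not the evaluation script used in our experiments.

```python
def word_levenshtein(a, b):
    """Standard dynamic-programming edit distance over word lists."""
    prev = list(range(len(b) + 1))
    for i, wa in enumerate(a, 1):
        curr = [i]
        for j, wb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (wa != wb)))   # substitution / match
        prev = curr
    return prev[-1]

def surface_stats(pairs):
    """pairs: list of (complex_sentence, [simplified_sentence, ...])."""
    n_out = sum(len(outs) for _, outs in pairs)
    tokens_per_sentence = sum(len(s.split()) for _, outs in pairs for s in outs) / n_out
    sentences_per_complex = n_out / len(pairs)
    same = 100 * sum(outs == [src] for src, outs in pairs) / len(pairs)   # exact copies
    ld = sum(word_levenshtein(src.split(), " ".join(outs).split())
             for src, outs in pairs) / len(pairs)
    return {"#T/S": tokens_per_sentence, "#S/C": sentences_per_complex,
            "%SAME": same, "LD_SC": ld}

print(surface_stats([("The house was once part of a plantation and it was the home .",
                      ["The house was once part of a plantation .",
                       "It was the home ."])]))
```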
3422 #T/S #S/C %SAME LDSC BLEU SARI SAMSA SAMSAabl 359 test sentences from the Wikilarge corpus Complex 22.06 1.03 100 0.00 94.25 32.53 0.59 0.96 Simple reference 20.19 1.14 0.00 7.14 99.48 43.09 0.48 0.78 DISSIM 11.01 2.82 0.00 11.90 63.03 35.05 0.67 0.84 DSS 12.91 1.87 0.00 8.14 74.42 34.32 0.64 0.75 SENTS 14.17 1.09 0.00 13.79 54.37 29.76 0.40 0.58 HYBRID 13.44 1.03 0.00 13.04 48.97 26.19 0.47 0.76 YATS 18.83 1.40 18.66 4.44 73.07 33.03 0.56 0.80 RegenT 18.20 1.45 41.50 3.77 82.49 32.41 0.61 0.85 1077 test sentences from the Newsela corpus Complex 23.34 1.01 100 0.00 20.91 9.84 0.49 0.96 Simple reference 12.81 1.01 0.00 16.25 100 91.13 0.25 0.46 DISSIM 11.20 2.96 0.00 13.00 14.54 49.00 0.57 0.84 HYBRID 12.49 1.02 0.00 13.46 14.42 40.34 0.38 0.74 YATS 18.71 1.42 16.16 5.03 17.51 36.88 0.50 0.83 RegenT 16.74 1.61 33.33 5.03 18.96 32.83 0.55 0.85 5000 test sentences from the WikiSplit corpus Complex 32.01 1.10 100 0.00 74.28 29.91 0.37 0.95 Simple reference 18.14 2.08 0.00 7.48 100 94.71 0.49 0.75 DISSIM 11.91 4.09 0.76 19.10 51.96 39.33 0.54 0.84 Copy512 16.55 2.08 13.30 2.39 76.42 61.51 0.51 0.78 Table 3: Automatic evaluation results. G M S avg. Wikilarge test set Simple reference 4.70 4.56 -0.2 3.02 DISSIM 4.36 4.50 1.30 3.39 DSS 3.44 3.68 0.06 2.39 SENTS 3.48 2.70 -0.18 2.00 HYBRID 3.16 2.60 0.86 2.21 YATS 4.40 4.60 0.22 3.07 RegenT 4.64 4.56 0.28 3.16 Newsela test set Simple reference 4.92 2.94 0.46 2.77 DISSIM 4.44 4.60 1.38 3.47 HYBRID 2.97 2.35 0.93 2.08 YATS 4.26 4.42 0.32 3.00 RegenT 4.54 4.70 0.62 3.29 WikiSplit test set Simple reference 4.72 4.32 0.44 3.16 DISSIM 4.36 4.36 1.66 3.46 Copy512 4.72 4.72 0.92 3.45 Table 4: Human evaluation ratings on a random sample of 50 sentences from each dataset. Figure 4: Performance of state-of-the-art Open IE systems with (solid lines) and without (dashed lines) sentence splitting as a preprocessing step. System Precision Recall AUC Stanford Open IE + 346% + 52% + 597% REVERB + 28% + 40% + 57% OLLIE + 38% + 8% + 20% ClausIE + 50% - 20% + 15% OpenIE-4 + 20% - 1% + 3% Table 5: Improvements when using DISSIM as a preprocessing step. Usefulness. To investigate whether our proposed structural TS approach is able to improve the performance of downstream NLP tasks, we compare the performance of a number of state-of-the-art Open IE systems, including ClausIE (Del Corro and Gemulla, 2013), OpenIE4 (Mausam, 2016), REVERB (Fader et al., 2011), OLLIE (Mausam et al., 2012) and Stanford Open IE (Angeli et al., 2015), when directly operating on the raw input data with their performance when our DISSIM framework is applied as a preprocessing step. For this purpose, we made use of the Open IE benchmark framework proposed in Stanovsky and Dagan (2016).8 The results are displayed in Figure 4. The resulting improvements in overall precision, recall and area under the curve (AUC) are listed in Table 5. The numbers show that when using our DISSIM framework, all systems under consideration gain in AUC. The highest improvement in AUC was achieved by Stanford Open IE, yielding a 597% increase over the output produced when acting as a stand-alone system. AUC scores of REVERB and OLLIE improve by 57% and 20%. While REVERB primarily profits from a boost in recall (+40%), ClausIE, OLLIE and OpenIE-4 mainly improve in precision (+50%, +38% and +20%). 6 Comparative Analysis In the following, we compare our TS framework with state-of-the-art rule-based syntactic TS approaches and discuss the strengths and weaknesses of each system. Sentence Splitting. 
Table 6 compares the output generated by the TS systems RegenT and YATS 8In Cetto et al. (2018), we further present the performance of our system using the matching function that was originally described in Stanovsky and Dagan (2016), which uses a more fine-grained metric for the comparison of relational phrases and arguments. 3423 on a sample sentence. As can be seen, RegenT and YATS break down the input into a sequence of sentences that present its message in a way that is easy to digest for human readers. However, the sentences are still rather long and present an irregular structure that mixes multiple semantically unrelated propositions, potentially causing problems for downstream tasks. On the contrary, our fairly aggressive simplification strategy that splits a source sentence into a large set of very short sentences9 is rather inapt for a human audience and may in fact even hinder reading comprehension. Though, we were able to demonstrate that the transformation process we propose can improve the performance of downstream NLP applications. SYSTEM OUTPUT Input The house was once part of a plantation and it was the home of Josiah Henson, a slave who escaped to Canada in 1830 and wrote the story of his life. RegenT The house was once part of a plantation. And it was the home of Josiah Henson, a slave. This slave escaped to Canada in 1830 and wrote the story of his life. YATS The house was once part of a plantation. And it was the home of Josiah Henson. Josiah Henson was a slave who escaped to Canada in 1830 and wrote the story of his life. DISSIM #1 0 The house was once part of a plantation. L:LIST #2 #2 0 It was the home of Josiah Henson. L:ELABORATION #3 L:LIST #1 #3 1 Josiah Henson was a slave. L:ELABORATION #4 L:ELABORATION #6 #4 2 This slave escaped to Canada. L:TEMPORAL #5 L:LIST #6 #5 3 This was in 1830. #6 2 This slave wrote the story of his life. L:LIST #4 Table 6: Simplification example (from Newsela). SYSTEM OUTPUT Input “The amabassador’s arrival has not been announced and he flew in complete secrecy,” the official said. LEXEV, EVLEX He arrived in complete secrecy. DISSIM #1 0 The ambassador’s arrival has not been announced. L:LIST #2 L:ATTRIBUTION #3 #2 0 He flew in complete secrecy. L:LIST #1 L:ATTRIBUTION #3 #3 1 This was what the official said. Table 7: Example (ˇStajner and Glavaˇs, 2017). 9In the output generated by DISSIM, contextual sentences are linked to their referring sentences and semantically classified by rhetorical relations. The number indicates the sentences’ context layer cl. Sentences with cl = 0 carry the core information of the source, whereas sentences with a cl≥1 provide contextual information about a sentence with a context layer of cl-1. Text Coherence. The vast majority of syntactic simplification approaches do not take into account discourse-level aspects, producing a disconnected sequence of simplified sentences which results in a loss of cohesion that makes the text harder to interpret (Siddharthan, 2014). However, two notable exceptions have to be mentioned. Siddharthan (2006) was the first to use discourse-aware cues in one of RegenT’s predecessor systems, with the goal of generating a coherent output, e.g. by choosing appropriate determiners (“This slave” in Table 6). However, as opposed to our approach, where a semantic relationship is established for each output sentence, only a comparatively low number of sentences is linked by such cue words in Siddharthan (2006)’s framework (and its successors). 
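For downstream consumers, the annotated output notation shown for DISSIM in Table 6 can be read mechanically into linked records. The following sketch parses one such line under the assumption that the fields are laid out exactly as printed there (sentence id, context layer, sentence text, then zero or more typed links); the tool's actual serialization format may differ.

```python
import re

# One output line, as printed in Table 6, looks like:
#   "#3 1 Josiah Henson was a slave. L:ELABORATION #4 L:ELABORATION #6"
LINE = re.compile(r"#(?P<id>\d+)\s+(?P<layer>\d+)\s+(?P<rest>.*)")
LINK = re.compile(r"L:(?P<rel>[A-Z_]+)\s+#(?P<target>\d+)")

def parse_dissim_line(line: str) -> dict:
    m = LINE.match(line.strip())
    if m is None:
        raise ValueError(f"unexpected line format: {line!r}")
    rest = m.group("rest")
    links = [(l.group("rel"), int(l.group("target"))) for l in LINK.finditer(rest)]
    text = LINK.sub("", rest).strip()          # sentence text without the link markers
    return {"id": int(m.group("id")),
            "context_layer": int(m.group("layer")),
            "text": text,
            "links": links}

example = "#3 1 Josiah Henson was a slave. L:ELABORATION #4 L:ELABORATION #6"
print(parse_dissim_line(example))
# {'id': 3, 'context_layer': 1, 'text': 'Josiah Henson was a slave.',
#  'links': [('ELABORATION', 4), ('ELABORATION', 6)]}
```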
EVLEX and LEXEV also operate on the discourse level. They are semantically motivated, eliminating irrelevant information from the input by maintaining only those parts of the input that belong to factual event mentions. Our approach, on the contrary, aims to preserve the full informational content of a source sentence, as illustrated in Table 7. By distinguishing core from contextual information, we are still able to extract only the key information given in the input. 7 Conclusion We presented a recursive sentence splitting approach that transforms structurally complex sentences into a novel hierarchical representation in the form of core sentences and accompanying contexts that are semantically linked by rhetorical relations. In a comparative analysis, we demonstrated that our TS approach achieves the highest scores on all three simplification corpora with regard to SAMSA (0.67, 0.57, 0.54), and comes no later than a close second in terms of SAMSAabl (0.84, 0.84, 0.84), two recently proposed metrics targeted at automatically measuring the syntactic complexity of sentences. These findings are supported by the other scores of the automatic evaluation, as well as the manual analysis. In addition, the extrinsic evaluation that was carried out based on the task of Open IE verified that downstream semantic applications profit from making use of our proposed structural TS approach as a preprocessing step. In the future, we plan to investigate the constituency type classification and rhetorical relation identification steps and port this approach to languages other than English. 3424 References Roee Aharoni and Yoav Goldberg. 2018. Split and rephrase: Better evaluation and stronger baselines. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 719–724. Association for Computational Linguistics. Gabor Angeli, Melvin Jose Johnson Premkumar, and Christopher D. Manning. 2015. Leveraging linguistic structure for open domain information extraction. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 344–354, Beijing, China. Association for Computational Linguistics. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Michele Banko, Michael J. Cafarella, Stephen Soderland, Matt Broadhead, and Oren Etzioni. 2007. Open information extraction from the web. In Proceedings of the 20th International Joint Conference on Artifical Intelligence, pages 2670–2676, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc. Delphine Bernhard, Louis De Viron, V´eronique Moriceau, and Xavier Tannier. 2012. Question generation for french: collating parsers and paraphrasing questions. Dialogue & Discourse, 3(2):43–74. Jan A. Botha, Manaal Faruqui, John Alex, Jason Baldridge, and Dipanjan Das. 2018. Learning to split and rephrase from wikipedia edit history. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 732–737. Association for Computational Linguistics. Nadjet Bouayad-Agha, Gerard Casamayor, Gabriela Ferraro, Simon Mille, Vanesa Vidal, and Leo Wanner. 2009. Improving the comprehension of legal documentation: the case of patent claims. In Proceedings of the 12th International Conference on Artificial Intelligence and Law, pages 78–87. ACM. 
John Carroll, Guido Minnen, Yvonne Canning, Siobhan Devlin, and John Tait. 1998. Practical simplification of english newspaper text to assist aphasic readers. In Proceedings of the AAAI-98 Workshop on Integrating Artificial Intelligence and Assistive Technology, pages 7–10. Matthias Cetto, Christina Niklaus, Andr´e Freitas, and Siegfried Handschuh. 2018. Graphene: Semantically-linked propositions in open information extraction. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2300–2311. Association for Computational Linguistics. R. Chandrasekar, Christine Doran, and B. Srinivas. 1996. Motivations and methods for text simplification. In Proceedings of the 16th Conference on Computational Linguistics - Volume 2, COLING ’96, pages 1041–1044, Stroudsburg, PA, USA. Association for Computational Linguistics. William Coster and David Kauchak. 2011. Simple english wikipedia: A new text simplification task. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: Short Papers - Volume 2, HLT ’11, pages 665–669, Stroudsburg, PA, USA. Association for Computational Linguistics. Luciano Del Corro and Rainer Gemulla. 2013. Clausie: Clause-based open information extraction. In Proceedings of the 22Nd International Conference on World Wide Web, pages 355–366, New York, NY, USA. ACM. Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying relations for open information extraction. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1535–1545, Edinburgh, Scotland, UK. Association for Computational Linguistics. Richard Fay, editor. 1990. Collins Cobuild English Grammar. Collins. Daniel Ferr´es, Montserrat Marimon, Horacio Saggion, and Ahmed AbuRa’ed. 2016. Yats: Yet another text simplifier. In Natural Language Processing and Information Systems, pages 335–342, Cham. Springer International Publishing. Goran Glavaˇs and Sanja ˇStajner. 2015. Simplifying lexical simplification: Do we need simplified corpora? In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 63–68. Association for Computational Linguistics. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1631–1640. Association for Computational Linguistics. Michael Heilman and Noah A Smith. 2010. Extracting simplified statements for factual question generation. In Proceedings of QG2010: The Third Workshop on Question Generation, volume 11. Kentaro Inui, Atsushi Fujita, Tetsuro Takahashi, Ryu Iida, and Tomoya Iwakura. 2003. Text simplification for reading assistance: A project note. In Proceedings of the Second International Workshop on Paraphrasing - Volume 16, PARAPHRASE ’03, pages 9–16, Stroudsburg, PA, USA. Association for Computational Linguistics. 3425 Siddhartha Jonnalagadda, Luis Tari, J¨org Hakenberg, Chitta Baral, and Graciela Gonzalez. 2009. Towards effective sentence simplification for automatic processing of biomedical text. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Companion Volume: Short Papers, pages 177–180. 
Association for Computational Linguistics. Alistair Knott and Robert Dale. 1994. Using linguistic phenomena to motivate a set of coherence relations. Discourse processes, 18(1):35–62. Roger Levy and Galen Andrew. 2006. Tregex and tsurgeon: tools for querying and manipulating tree data structures. In Proceedings of the fifth international conference on Language Resources and Evaluation, pages 2231–2234. William C Mann and Sandra A Thompson. 1988. Rhetorical structure theory: Toward a functional theory of text organization. Text-Interdisciplinary Journal for the Study of Discourse, 8(3):243–281. Mausam. 2016. Open information extraction systems and downstream applications. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, IJCAI 2016, New York, NY, USA, 9-15 July 2016, pages 4074–4077. Mausam, Michael Schmitz, Stephen Soderland, Robert Bart, and Oren Etzioni. 2012. Open language learning for information extraction. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 523–534, Jeju Island, Korea. Association for Computational Linguistics. Makoto Miwa, Rune Sætre, Yusuke Miyao, and Jun’ichi Tsujii. 2010. Entity-focused sentence simplification for relation extraction. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010), pages 788–796. Coling 2010 Organizing Committee. Shashi Narayan and Claire Gardent. 2014. Hybrid simplification using deep semantics and machine translation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 435–445. Shashi Narayan and Claire Gardent. 2016. Unsupervised sentence simplification using deep semantics. In Proceedings of the 9th International Natural Language Generation conference, pages 111–120. Association for Computational Linguistics. Shashi Narayan, Claire Gardent, Shay B. Cohen, and Anastasia Shimorina. 2017. Split and rephrase. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 606–616. Association for Computational Linguistics. Sergiu Nisioi, Sanja ˇStajner, Simone Paolo Ponzetto, and Liviu P Dinu. 2017. Exploring neural text simplification models. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 85–91. Gustavo H. Paetzold and Lucia Specia. 2016. Unsupervised lexical simplification for non-native speakers. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, AAAI’16, pages 3761–3767. AAAI Press. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Association for Computational Linguistics. Luz Rello, Ricardo Baeza-Yates, and Horacio Saggion. 2013. The impact of lexical simplification by verbal paraphrases for people with and without dyslexia. In International Conference on Intelligent Text Processing and Computational Linguistics, pages 501– 512. Springer. Horacio Saggion, Sanja ˇStajner, Stefan Bott, Simon Mille, Luz Rello, and Biljana Drndarevic. 2015. Making it simplext: Implementation and evaluation of a text simplification system for spanish. ACM Trans. Access. Comput., 6(4):14:1–14:36. Swarnadeep Saha and Mausam. 2018. 
Open information extraction from conjunctive sentences. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2288–2299. Association for Computational Linguistics. Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointergenerator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073– 1083. Association for Computational Linguistics. Advaith Siddharthan. 2002. An architecture for a text simplification system. In Language Engineering Conference, 2002. Proceedings, pages 64–71. IEEE. Advaith Siddharthan. 2006. Syntactic simplification and text cohesion. Research on Language and Computation, 4(1):77–109. Advaith Siddharthan. 2014. A survey of research on text simplification. ITL-International Journal of Applied Linguistics, 165(2):259–298. Advaith Siddharthan and Angrosh Mandya. 2014. Hybrid text simplification using synchronous dependency grammars with hand-written and automatically harvested rules. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 722–731. Association for Computational Linguistics. 3426 Advaith Siddharthan, Ani Nenkova, and Kathleen McKeown. 2004. Syntactic simplification for improving content selection in multi-document summarization. In Proceedings of the 20th international conference on Computational Linguistics, page 896. Association for Computational Linguistics. Richard Socher, John Bauer, Christopher D. Manning, and Andrew Y. Ng. 2013. Parsing With Compositional Vector Grammars. In ACL. Sanja ˇStajner and Goran Glavaˇs. 2017. Leveraging event-based semantics for automated text simplification. Expert systems with applications, 82:383–395. Sanja ˇStajner and Maja Popovic. 2016. Can text simplification help machine translation? In Proceedings of the 19th Annual Conference of the European Association for Machine Translation, pages 230– 242. Sanja ˇStajner and Maja Popovic. 2018. Improving machine translation of english relative clauses with automatic text simplification. In Proceedings of the First Workshop on Automatic Text Adaptation (ATA). Gabriel Stanovsky and Ido Dagan. 2016. Creating a large benchmark for open information extraction. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP), page (to appear), Austin, Texas. Association for Computational Linguistics. Elior Sulem, Omri Abend, and Ari Rappoport. 2018a. Bleu is not suitable for the evaluation of text simplification. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 738–744. Association for Computational Linguistics. Elior Sulem, Omri Abend, and Ari Rappoport. 2018b. Semantic structural evaluation for text simplification. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 685–696. Association for Computational Linguistics. Elior Sulem, Omri Abend, and Ari Rappoport. 2018c. Simple and effective text simplification using semantic and neural methods. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 162–173. Association for Computational Linguistics. Maite Taboada and Debopam Das. 2013. Annotation upon annotation: Adding signalling information to a corpus of discourse relations. D&D, 4(2):249–281. 
David Vickrey and Daphne Koller. 2008. Sentence simplification for semantic role labeling. In Proceedings of ACL-08: HLT, pages 344–352. Association for Computational Linguistics. Wei Xu, Chris Callison-Burch, and Courtney Napoles. 2015. Problems in current text simplification research: New data can help. Transactions of the Association for Computational Linguistics, 3:283–297. Wei Xu, Courtney Napoles, Ellie Pavlick, Quanze Chen, and Chris Callison-Burch. 2016. Optimizing statistical machine translation for text simplification. Transactions of the Association for Computational Linguistics, 4:401–415. Xingxing Zhang and Mirella Lapata. 2017. Sentence simplification with deep reinforcement learning. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 584–594. Association for Computational Linguistics. Zhemin Zhu, Delphine Bernhard, and Iryna Gurevych. 2010. A monolingual tree-based translation model for sentence simplification. In Proceedings of the 23rd International Conference on Computational Linguistics, pages 1353–1361. Association for Computational Linguistics. A Annotation Guidelines for the Manual Evaluation Table 8 lists the questions for the human annotation. Since the focus of our work is on structural rather than lexical simplification, we follow the approach taken in Sulem et al. (2018c) in terms of SIMPLICITY and restrict our analysis to the syntactic complexity of the resulting sentences, which is measured on a scale that ranges from -2 to 2 in accordance with Nisioi et al. (2017), while neglecting the lexical simplicity of the output sentences. Regarding the GRAMMATICALITY and MEANING PRESERVATION dimensions, we adopted the guidelines from ˇStajner and Glavaˇs (2017), with some minor deviations to better reflect our goal of simplifying the structure of the input sentences, while retaining their full informational content. PARAM. QUESTION SCALE G Is the output fluent and grammatical? 1 to 5 M Does the output preserve the meaning of the input? 1 to 5 S Is the output simpler than the input, ignoring the complexity of the words? -2 to 2 Table 8: Questions for the human annotation. 3427 B Qualitative Analysis of the Transformation Patterns and Error Analysis Tables 9 and 10 show the results of the recallbased qualitative analysis of the transformation patterns, together with the findings of the error analysis. These analyses were carried out on a dataset which we compiled.10 It consists of 100 Wikipedia sentences per syntactic phenomenon tackled by our TS approach. In the construction of this corpus we ensured that the collected sentences exhibit a great syntactic variability to allow for a reliable predication about the coverage and accuracy of the specified simplification rules. Note that we do not consider the rules for disembedding adjectival/adverbial phrases and lead NPs, since an examination of the frequency distribution of the syntactic constructs tackled by our approach over the Wikilarge, Newsela and WikiSplit test sentences has shown that these types of constructs occur relatively rarely. freq. %fired %correct trans. Clausal disembedding Coordinate clauses 113 93.8% 99.1% Adverbial clauses 113 84.1% 96.8% Relative clauses (non-def.) 
108 88.9% 70.8% Relative clauses (defining) 103 86.4% 75.3% Reported speech 112 82.1% 75.0% Phrasal disembedding Coordinate VPs 109 85.3% 89.2% Coordinate NPs 115 48.7% 82.1% Appositions (non-restrictive) 107 86.0% 83.7% Appositions (restrictive) 122 87.7% 72.0% PPs 163 68.1% 75.7% Total 1165 81.1% 82.0% Table 9: Recall-based qualitative analysis of the transformation rule patterns. This table presents the results of a manual analysis of the performance of the handcrafted simplification patterns. The first column lists the syntactic phenomena under consideration, the second column indicates its frequency in the dataset, the third column displays the percentage of the grammar fired, and the fourth column reveals the percentage of sentences where the transformation operation results in a correct split. 10The dataset is available under https://github. com/Lambda-3/DiscourseSimplification/ tree/master/supplemental_material. Err. 1 Err. 2 Err. 3 Err. 4 Err. 5 Err. 6 Clausal disembedding Coordinate clauses 1 0 0 0 0 0 Adverbial clauses 1 1 0 1 0 0 Relative clauses (non-def.) 5 8 0 0 14 1 Relative clauses (defining) 8 8 2 0 5 1 Reported speech 5 1 13 1 2 1 Phrasal disembedding Coordinate VPs 4 3 2 1 0 0 Coordinate NPs 3 3 0 3 1 0 Appositions (nonrestrictive) 0 5 3 0 7 0 Appositions (restrictive) 1 21 3 0 0 0 PPs 3 11 4 6 4 0 Total 31 61 27 12 33 3 (19%) (37%) (16%) (7%) (20%) (2%) Table 10: Error analysis. This table shows the results of the error analysis conducted on the same dataset. Six types of errors were identified (Error 1: additional parts; Error 2: missing parts; Error 3: morphological errors; Error 4: wrong split point; Error 5: wrong referent; Error 6: wrong order of the syntactic elements).
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3428–3448 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3428 Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference R. Thomas McCoy,1 Ellie Pavlick,2 & Tal Linzen1 1Department of Cognitive Science, Johns Hopkins University 2Department of Computer Science, Brown University [email protected], ellie [email protected], [email protected] Abstract A machine learning system can score well on a given test set by relying on heuristics that are effective for frequent example types but break down in more challenging cases. We study this issue within natural language inference (NLI), the task of determining whether one sentence entails another. We hypothesize that statistical NLI models may adopt three fallible syntactic heuristics: the lexical overlap heuristic, the subsequence heuristic, and the constituent heuristic. To determine whether models have adopted these heuristics, we introduce a controlled evaluation set called HANS (Heuristic Analysis for NLI Systems), which contains many examples where the heuristics fail. We find that models trained on MNLI, including BERT, a state-of-the-art model, perform very poorly on HANS, suggesting that they have indeed adopted these heuristics. We conclude that there is substantial room for improvement in NLI systems, and that the HANS dataset can motivate and measure progress in this area. 1 Introduction Neural networks excel at learning the statistical patterns in a training set and applying them to test cases drawn from the same distribution as the training examples. This strength can also be a weakness: statistical learners such as standard neural network architectures are prone to adopting shallow heuristics that succeed for the majority of training examples, instead of learning the underlying generalizations that they are intended to capture. If such heuristics often yield correct outputs, the loss function provides little incentive for the model to learn to generalize to more challenging cases as a human performing the task would. This issue has been documented across domains in artificial intelligence. In computer vision, for example, neural networks trained to recognize objects are misled by contextual heuristics: a network that is able to recognize monkeys in a typical context with high accuracy may nevertheless label a monkey holding a guitar as a human, since in the training set guitars tend to co-occur with humans but not monkeys (Wang et al., 2018). Similar heuristics arise in visual question answering systems (Agrawal et al., 2016). The current paper addresses this issue in the domain of natural language inference (NLI), the task of determining whether a premise sentence entails (i.e., implies the truth of) a hypothesis sentence (Condoravdi et al., 2003; Dagan et al., 2006; Bowman et al., 2015). As in other domains, neural NLI models have been shown to learn shallow heuristics, in this case based on the presence of specific words (Naik et al., 2018; Sanchez et al., 2018). For example, a model might assign a label of contradiction to any input containing the word not, since not often appears in the examples of contradiction in standard NLI training sets. The focus of our work is on heuristics that are based on superficial syntactic properties. Consider the following sentence pair, which has the target label entailment: (1) Premise: The judge was paid by the actor. 
Hypothesis: The actor paid the judge. An NLI system that labels this example correctly might do so not by reasoning about the meanings of these sentences, but rather by assuming that the premise entails any hypothesis whose words all appear in the premise (Dasgupta et al., 2018; Naik et al., 2018). Crucially, if the model is using this heuristic, it will predict entailment for (2) as well, even though that label is incorrect in this case: (2) Premise: The actor was paid by the judge. Hypothesis: The actor paid the judge. 3429 Heuristic Definition Example Lexical overlap Assume that a premise entails all hypotheses constructed from words in the premise The doctor was paid by the actor. −−−−−→ WRONG The doctor paid the actor. Subsequence Assume that a premise entails all of its contiguous subsequences. The doctor near the actor danced. −−−−−→ WRONG The actor danced. Constituent Assume that a premise entails all complete subtrees in its parse tree. If the artist slept, the actor ran. −−−−−→ WRONG The artist slept. Table 1: The heuristics targeted by the HANS dataset, along with examples of incorrect entailment predictions that these heuristics would lead to. We introduce a new evaluation set called HANS (Heuristic Analysis for NLI Systems), designed to diagnose the use of such fallible structural heuristics.1 We target three heuristics, defined in Table 1. While these heuristics often yield correct labels, they are not valid inference strategies because they fail on many examples. We design our dataset around such examples, so that models that employ these heuristics are guaranteed to fail on particular subsets of the dataset, rather than simply show lower overall accuracy. We evaluate four popular NLI models, including BERT, a state-of-the-art model (Devlin et al., 2019), on the HANS dataset. All models performed substantially below chance on this dataset, barely exceeding 0% accuracy in most cases. We conclude that their behavior is consistent with the hypothesis that they have adopted these heuristics. Contributions: This paper has three main contributions. First, we introduce the HANS dataset, an NLI evaluation set that tests specific hypotheses about invalid heuristics that NLI models are likely to learn. Second, we use this dataset to illuminate interpretable shortcomings in state-of-the-art models trained on MNLI (Williams et al., 2018b); these shortcoming may arise from inappropriate model inductive biases, from insufficient signal provided by training datasets, or both. Third, we show that these shortcomings can be made less severe by augmenting a model’s training set with the types of examples present in HANS. These results indicate that there is substantial room for improvement for current NLI models and datasets, and that HANS can serve as a tool for motivating and measuring progress in this area. 1GitHub repository with data and code: https:// github.com/tommccoy1/hans 2 Syntactic Heuristics We focus on three heuristics: the lexical overlap heuristic, the subsequence heuristic, and the constituent heuristic, all defined in Table 1. These heuristics form a hierarchy: the constituent heuristic is a special case of the subsequence heuristic, which in turn is a special case of the lexical overlap heuristic. Table 2 in the next page gives examples where each heuristic succeeds and fails. 
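Whether a premise–hypothesis pair falls under each heuristic in Table 1 can be checked mechanically. The sketch below implements the three checks, with the constituent check operating on an NLTK constituency parse of the premise; it reflects our reading of the definitions in Table 1 and is illustrative code, not part of the released HANS materials.

```python
from nltk import Tree

def toks(s: str):
    return s.lower().replace(".", " ").replace(",", " ").split()

def lexical_overlap(premise: str, hypothesis: str) -> bool:
    """All hypothesis words appear somewhere in the premise (word order ignored)."""
    return set(toks(hypothesis)) <= set(toks(premise))

def subsequence(premise: str, hypothesis: str) -> bool:
    """The hypothesis is a contiguous span of the premise."""
    p, h = toks(premise), toks(hypothesis)
    return any(p[i:i + len(h)] == h for i in range(len(p) - len(h) + 1))

def constituent(premise_parse: Tree, hypothesis: str) -> bool:
    """The hypothesis is the yield of a complete subtree of the premise's parse."""
    h = toks(hypothesis)
    return any([w.lower() for w in sub.leaves() if w not in {",", "."}] == h
               for sub in premise_parse.subtrees())

# All three pairs below are non-entailment cases on which the heuristic still fires:
print(lexical_overlap("The actor was paid by the judge.", "The actor paid the judge."))  # True
print(subsequence("The doctor near the actor danced.", "The actor danced."))             # True
parse = Tree.fromstring("(ROOT (S (SBAR (IN If) (S (NP (DT the) (NN artist)) (VP (VBD slept)))) "
                        "(, ,) (NP (DT the) (NN actor)) (VP (VBD ran))))")
print(constituent(parse, "The artist slept."))                                           # True
```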
There are two reasons why we expect these heuristics to be adopted by a statistical learner trained on standard NLI training datasets such as SNLI (Bowman et al., 2015) or MNLI (Williams et al., 2018b). First, the MNLI training set contains far more examples that support the heuristics than examples that contradict them:2

Heuristic | Supporting cases | Contradicting cases
Lexical overlap | 2,158 | 261
Subsequence | 1,274 | 72
Constituent | 1,004 | 58

Even the 261 contradicting cases in MNLI may not provide strong evidence against the heuristics. For example, 133 of these cases contain negation in the premise but not the hypothesis, as in (3). Instead of using these cases to overrule the lexical overlap heuristic, a model might account for them by learning to assume that the label is contradiction whenever there is negation in the premise but not the hypothesis (McCoy and Linzen, 2019):

(3) a. I don't care. ↛ I care.
b. This is not a contradiction. ↛ This is a contradiction.

2 In this table, the lexical overlap counts include the subsequence counts, which include the constituent counts.

Heuristic | Premise | Hypothesis | Label
Lexical overlap heuristic | The banker near the judge saw the actor. | The banker saw the actor. | E
 | The lawyer was advised by the actor. | The actor advised the lawyer. | E
 | The doctors visited the lawyer. | The lawyer visited the doctors. | N
 | The judge by the actor stopped the banker. | The banker stopped the actor. | N
Subsequence heuristic | The artist and the student called the judge. | The student called the judge. | E
 | Angry tourists helped the lawyer. | Tourists helped the lawyer. | E
 | The judges heard the actors resigned. | The judges heard the actors. | N
 | The senator near the lawyer danced. | The lawyer danced. | N
Constituent heuristic | Before the actor slept, the senator ran. | The actor slept. | E
 | The lawyer knew that the judges shouted. | The judges shouted. | E
 | If the actor slept, the judge saw the artist. | The actor slept. | N
 | The lawyers resigned, or the artist slept. | The artist slept. | N

Table 2: Examples of sentences used to test the three heuristics. The label column shows the correct label for the sentence pair; E stands for entailment and N stands for non-entailment. A model relying on the heuristics would label all examples as entailment (incorrectly for those marked as N).

There are some examples in MNLI that contradict the heuristics in ways that are not easily explained away by other heuristics; see Appendix A for examples. However, such cases are likely too rare to discourage a model from learning these heuristics. MNLI contains data from multiple genres, so we conjecture that the scarcity of contradicting examples is not just a property of one genre, but rather a general property of NLI data generated in the crowdsourcing approach used for MNLI. We thus hypothesize that any crowdsourced NLI dataset would make our syntactic heuristics attractive to statistical learners without strong linguistic priors.

The second reason we might expect current NLI models to adopt these heuristics is that their input representations may make them susceptible to these heuristics. The lexical overlap heuristic disregards the order of the words in the sentence and considers only their identity, so it is likely to be adopted by bag-of-words NLI models (e.g., Parikh et al. 2016). The subsequence heuristic considers linearly adjacent chunks of words, so one might expect it to be adopted by standard RNNs, which process sentences in linear order.
Finally, the constituent heuristic appeals to components of the parse tree, so one might expect to see it adopted by tree-based NLI models (Bowman et al., 2016). 3 Dataset Construction For each heuristic, we generated five templates for examples that support the heuristic and five templates for examples that contradict it. Below is one template for the subsequence heuristic; see Appendix B for a full list of templates. (4) The N1 P the N2 V. ↛The N2 V. The lawyer by the actor ran. ↛The actor ran. We generated 1,000 examples from each template, for a total of 10,000 examples per heuristic. Some heuristics are special cases of others, but we made sure that the examples for one heuristic did not also fall under a more narrowly defined heuristic. That is, for lexical overlap cases, the hypothesis was not a subsequence or constituent of the premise; for subsequence cases, the hypothesis was not a constituent of the premise. 3.1 Dataset Controls Plausibility: One advantage of generating examples from templates—instead of, e.g., modifying naturally-occurring examples—is that we can ensure the plausibility of all generated sentences. For example, we do not generate cases such as The student read the book ↛The book read the student, which could ostensibly be solved using a hypothesis-plausibility heuristic. To achieve this, we drew our core vocabulary from Ettinger et al. (2018), where every noun was a plausible subject of every verb or a plausible object of every transitive verb. Some templates required expanding this core vocabulary; in those cases, we manually curated the additions to ensure plausibility. 3431 Selectional criteria: Some of our example types depend on the availability of lexically-specific verb frames. For example, (5) requires awareness of the fact that believed can take a clause (the lawyer saw the officer) as its complement: (5) The doctor believed the lawyer saw the officer. ↛The doctor believed the lawyer. It is arguably unfair to expect a model to understand this example if it had only ever encountered believe with a noun phrase object (e.g., I believed the man). To control for this issue, we only chose verbs that appeared at least 50 times in the MNLI training set in all relevant frames. 4 Experimental Setup Since HANS is designed to probe for structural heuristics, we selected three models that exemplify popular strategies for representing the input sentence: DA, a bag-of-words model; ESIM, which uses a sequential structure; and SPINN, which uses a syntactic parse tree. In addition to these three models, we included BERT, a stateof-the-art model for MNLI. The following paragraphs provide more details on these models. DA: The Decomposable Attention model (DA; Parikh et al., 2016) uses a form of attention to align words in the premise and hypothesis and to make predictions based on the aggregation of this alignment. It uses no word order information and can thus be viewed as a bag-of-words model. ESIM: The Enhanced Sequential Inference Model (ESIM; Chen et al., 2017) uses a modified bidirectional LSTM to encode sentences. We use the variant with a sequential encoder, rather than the tree-based Hybrid Inference Model (HIM). SPINN: The Stack-augmented ParserInterpreter Neural Network (SPINN; Bowman et al., 2016) is tree-based: it encodes sentences by combining phrases based on a syntactic parse. We use the SPINN-PI-NT variant, which takes a parse tree as an input (rather than learning to parse). 
For MNLI, we used the parses provided in the MNLI release; for HANS, we used parse templates that we created based on parses from the Stanford PCFG Parser 3.5.2 (Klein and Manning, 2003), the same parser used to parse MNLI. Based on manual inspection, this parser generally provided correct parses for HANS examples.

BERT: The Bidirectional Encoder Representations from Transformers model (BERT; Devlin et al., 2019) is a Transformer model that uses attention, rather than recurrence, to process sentences. We use the bert-base-uncased pretrained model and fine-tune it on MNLI.

Implementation and evaluation: For DA and ESIM, we used the implementations from AllenNLP (Gardner et al., 2017). For SPINN3 and BERT,4 we used code from the GitHub repositories for the papers introducing those models. We trained all models on MNLI. MNLI uses three labels (entailment, contradiction, and neutral). We chose to annotate HANS with two labels only (entailment and non-entailment) because the distinction between contradiction and neutral was often unclear for our cases.5 For evaluating a model on HANS, we took the highest-scoring label out of entailment, contradiction, and neutral; we then translated contradiction or neutral labels to non-entailment. An alternate approach would have been to add the contradiction and neutral scores to determine a score for non-entailment; we found little difference between these approaches, since the models almost always assigned more than 50% of the label probability to a single label.6

3 https://github.com/stanfordnlp/spinn; we used the NYU fork at https://github.com/nyu-mll/spinn.
4 https://github.com/google-research/bert
5 For example, with The actor was helped by the judge ↛ The actor helped the judge, it is possible that the actor did help the judge, pointing to a label of neutral; yet the premise does pragmatically imply that the actor did not help the judge, meaning that this pair could also fit the non-strict definition of contradiction used in NLI annotation.
6 We also tried training the models on MNLI with neutral and contradiction collapsed into non-entailment; this gave similar results as collapsing after training (Appendix D).

Figure 1: (a) Accuracy on the MNLI test set. (b) Accuracies on six sub-components of the HANS evaluation set (for DA, ESIM, SPINN, and BERT); each sub-component is defined by its correct label and the heuristic it addresses. The dashed lines indicate chance performance. All models behaved as we would expect them to if they had adopted the heuristics targeted by HANS. That is, they nearly always predicted entailment for the examples in HANS, leading to near-perfect accuracy when the true label is entailment, and near-zero accuracy when the true label is non-entailment.

5 Results

All models achieved high scores on the MNLI test set (Figure 1a), replicating the accuracies found in past work (DA: Gururangan et al. 2018; ESIM: Williams et al. 2018b; SPINN: Williams et al. 2018a; BERT: Devlin et al. 2019). On the HANS dataset, all models almost always assigned the correct label in the cases where the label is entailment, i.e., where the correct answer is in line with the hypothesized heuristics. However, they all performed poorly—with accuracies less than 10% in most cases, when chance is 50%—on the cases where the heuristics make incorrect predictions
(Figure 1b). Thus, despite their high scores on the MNLI test set, all four models behaved in a way consistent with the use of the heuristics targeted in HANS, and not with the correct rules of inference. Comparison of models: Both DA and ESIM had near-zero performance across all three heuristics. These models might therefore make no distinction between the three heuristics, but instead treat them all as the same phenomenon, i.e. lexical overlap. Indeed, for DA, this must be the case, as this model does not have access to word order; ESIM does in theory have access to word order information but does not appear to use it here. SPINN had the best performance on the subsequence cases. This might be due to the treebased nature of its input: since the subsequences targeted in these cases were explicitly chosen not to be constituents, they do not form cohesive units in SPINN’s input in the way they do for sequential models. SPINN also outperformed DA and ESIM on the constituent cases, suggesting that SPINN’s tree-based representations moderately helped it learn how specific constituents contribute to the overall sentence. Finally, SPINN did worse than the other models on constituent cases where the correct answer is entailment. This moderately greater balance between accuracy on entailment and non-entailment cases further indicates that SPINN is less likely than the other models to assume that constituents of the premise are entailed; this harms its performance in cases where that assumption happens to lead to the correct answer. BERT did slightly worse than SPINN on the subsequence cases, but performed noticeably less poorly than all other models at both the constituent and lexical overlap cases (though it was still far below chance). Its performance particularly stood out for the lexical overlap cases, suggesting that some of BERT’s success at MNLI may be due to a greater tendency to incorporate word order information compared to other models. Analysis of particular example types: In the cases where a model’s performance on a heuristic was perceptibly above zero, accuracy was not evenly spread across subcases (for case-by-case results, see Appendix C). For example, within the lexical overlap cases, BERT achieved 39% accuracy on conjunction (e.g., The actor and the doctor saw the artist ↛The actor saw the doctor) but 0% accuracy on subject/object swap (The judge called the lawyer ↛The lawyer called the judge). Within the constituent heuristic cases, BERT achieved 49% accuracy at determining that a clause embedded under if and other conditional words is not entailed (If the doctor resigned, the lawyer danced ↛The doctor resigned), but 0% accuracy at identifying that the clause outside of the conditional clause is also not entailed (If the doctor resigned, the lawyer danced ↛The lawyer danced). 6 Discussion Independence of heuristics: Though each heuristic is most closely related to one class of model (e.g., the constituent heuristic is related to tree-based models), all models failed on cases illustrating all three heuristics. This finding is unsurprising since these heuristics are closely related 3433 to each other, meaning that an NLI model may adopt all of them, even the ones not specifically targeting that class of model. For example, the subsequence and constituent heuristics are special cases of the lexical overlap heuristic, so all models can fail on cases illustrating all heuristics, because all models have access to individual words. 
Though the heuristics form a hierarchy—the constituent heuristic is a subcase of the subsequence heuristic, which is a subcase of the lexical overlap heuristic—this hierarchy does not necessarily predict the performance of our models. For example, BERT performed worse on the subsequence heuristic than on the constituent heuristic, even though the constituent heuristic is a special case of the subsequence heuristic. Such behavior has two possible causes. First, it could be due to the specific cases we chose for each heuristic: the cases chosen for the subsequence heuristic may be inherently more challenging than the cases chosen for the constituent heuristic, even though the constituent heuristic as a whole is a subset of the subsequence one. Alternately, it is possible for a model to adopt a more general heuristic (e.g., the subsequence heuristic) but to make an exception for some special cases (e.g., the cases to which the constituent heuristic could apply). Do the heuristics arise from the architecture or the training set? The behavior of a trained model depends on both the training set and the model’s architecture. The models’ poor results on HANS could therefore arise from architectural limitations, from insufficient signal in the MNLI training set, or from both. The fact that SPINN did markedly better at the constituent and subsequence cases than ESIM and DA, even though the three models were trained on the same dataset, suggests that MNLI does contain some signal that can counteract the appeal of the syntactic heuristics tested by HANS. SPINN’s structural inductive biases allow it to leverage this signal, but the other models’ biases do not. Other sources of evidence suggest that the models’ failure is due in large part to insufficient signal from the MNLI training set, rather than the models’ representational capacities alone. The BERT model we used (bert-base-uncased) was found by Goldberg (2019) to achieve strong results in syntactic tasks such as subject-verb agreement prediction, a task that minimally requires a distinction between the subject and direct object of a sentence (Linzen et al., 2016; Gulordava et al., 2018; Marvin and Linzen, 2018). Despite this evidence that BERT has access to relevant syntactic information, its accuracy was 0% on the subject-object swap cases (e.g., The doctor saw the lawyer ↛ The lawyer saw the doctor). We believe it is unlikely that our fine-tuning step on MNLI, a much smaller corpus than the corpus BERT was trained on, substantially changed the model’s representational capabilities. Even though the model most likely had access to information about subjects and objects, then, MNLI did not make it clear how that information applies to inference. Supporting this conclusion, McCoy et al. (2019) found little evidence of compositional structure in the InferSent model, which was trained on SNLI, even though the same model type (an RNN) did learn clear compositional structure when trained on tasks that underscored the need for such structure. These results further suggest that the models’ poor compositional behavior arises more because of the training set than because of model architecture. 
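The training-signal explanation can also be checked directly against the data: the counts reported in Section 2 amount to asking, for each heuristic, how often following it yields the right answer on the training set. The sketch below is ours, not part of the paper's code release; it assumes MNLI's JSON-lines format (sentence1, sentence2, gold_label fields) and a naive whitespace-level overlap check, so its exact counts would differ from those reported in Section 2.

```python
# Sketch (ours, not from the HANS release) of tallying how often the lexical
# overlap heuristic is supported vs. contradicted in an MNLI-style training
# file. Assumes JSON-lines records with sentence1/sentence2/gold_label fields
# and uses naive whitespace tokenization, so the counts are only approximate.
import json

def overlap_fires(premise: str, hypothesis: str) -> bool:
    premise_words = set(premise.lower().replace(".", " ").replace(",", " ").split())
    return all(word in premise_words
               for word in hypothesis.lower().replace(".", " ").replace(",", " ").split())

def count_heuristic_evidence(path: str):
    supporting = contradicting = 0
    with open(path) as f:
        for line in f:
            example = json.loads(line)
            if overlap_fires(example["sentence1"], example["sentence2"]):
                if example["gold_label"] == "entailment":
                    supporting += 1      # the heuristic gives the right answer
                elif example["gold_label"] in ("neutral", "contradiction"):
                    contradicting += 1   # the heuristic gives the wrong answer
    return supporting, contradicting

# Example usage (path is illustrative):
# print(count_heuristic_evidence("multinli_1.0_train.jsonl"))
```

Analogous counts for the subsequence and constituent heuristics would additionally require a contiguous-substring check and access to parses, respectively.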
Finally, our BERT-based model differed from the other models in that it was pretrained on a massive amount of data on a masking task and a next-sentence classification task, followed by fine-tuning on MNLI, while the other models were only trained on MNLI; we therefore cannot rule out the possibility that BERT's comparative success at HANS was due to the greater amount of data it has encountered rather than any architectural features.

Is the dataset too difficult? To assess the difficulty of our dataset, we obtained human judgments on a subset of HANS from 95 participants on Amazon Mechanical Turk as well as 3 expert annotators (linguists who were unfamiliar with HANS: 2 graduate students and 1 postdoctoral researcher). The average accuracy was 76% for Mechanical Turk participants and 97% for expert annotators; further details are in Appendix F.

Our Mechanical Turk results contrast with those of Nangia and Bowman (2019), who report an accuracy of 92% in the same population on examples from MNLI; this indicates that HANS is indeed more challenging for humans than MNLI is. The difficulty of some of our examples is in line with past psycholinguistic work in which humans have been shown to incorrectly answer comprehension questions for some of our subsequence subcases. For example, in an experiment in which participants read the sentence As Jerry played the violin gathered dust in the attic, some participants answered yes to the question Did Jerry play the violin? (Christianson et al., 2001).

Crucially, although Mechanical Turk annotators found HANS to be harder overall than MNLI, their accuracy was similar whether the correct answer was entailment (75% accuracy) or non-entailment (77% accuracy). The contrast between the balance in the human errors across labels and the stark imbalance in the models' errors (Figure 1b) indicates that human errors are unlikely to be driven by the heuristics targeted in the current work.

7 Augmenting the training data with HANS-like examples

The failure of the models we tested raises the question of what it would take to do well on HANS. One possibility is that a different type of model would perform better. For example, a model based on hand-coded rules might handle HANS well. However, since most models we tested are in theory capable of handling HANS's examples but failed to do so when trained on MNLI, it is likely that performance could also be improved by training the same architectures on a dataset in which these heuristics are less successful.

To test that hypothesis, we retrained each model on the MNLI training set augmented with a dataset structured exactly like HANS (i.e., using the same thirty subcases) but containing no specific examples that appeared in HANS. Our additions comprised 30,000 examples, roughly 8% of the size of the original MNLI training set (392,702 examples). In general, the models trained on the augmented MNLI performed very well on HANS (Figure 2); the one exception was that the DA model performed poorly on subcases for which a bag-of-words representation was inadequate.7 This experiment is only an initial exploration and leaves open many questions about the conditions under which a model will successfully avoid a heuristic; for example, how many contradicting examples are required? At the same time, these results do suggest that, to prevent a model from learning a heuristic, one viable approach is to use a training set that does not support this heuristic.
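As a concrete illustration of this augmentation setup, the following sketch generates HANS-like examples from templates in the style of Section 3 and Appendix B and writes them out so they can be mixed into the MNLI training data. It is ours rather than the released generation code: the template syntax, the tiny vocabulary, the file and field names, and the two-way label scheme are all illustrative assumptions (in particular, mapping non-entailment onto MNLI's neutral and contradiction labels is left out here).

```python
# Illustrative sketch (not the released HANS generation scripts) of building
# HANS-like augmentation examples from templates and writing them out in an
# MNLI-style JSON-lines format. Vocabulary, templates, labels, and file names
# are placeholders for illustration only.
import json
import random

NOUNS = ["doctor", "lawyer", "actor", "judge", "senator"]
VERBS = ["saw", "called", "helped", "advised"]

# Each template: (premise pattern, hypothesis pattern, label)
TEMPLATES = [
    # Lexical overlap, non-entailment: subject/object swap
    ("the {n1} {v} the {n2} .", "the {n2} {v} the {n1} .", "non-entailment"),
    # Subsequence, entailment: PP on the object
    ("the {n1} {v} the {n2} near the {n3} .", "the {n1} {v} the {n2} .", "entailment"),
]

def generate(n_per_template: int, seed: int = 0):
    rng = random.Random(seed)
    examples = []
    for premise_t, hypothesis_t, label in TEMPLATES:
        for _ in range(n_per_template):
            n1, n2, n3 = rng.sample(NOUNS, 3)
            slots = {"n1": n1, "n2": n2, "n3": n3, "v": rng.choice(VERBS)}
            examples.append({
                "sentence1": premise_t.format(**slots),
                "sentence2": hypothesis_t.format(**slots),
                "gold_label": label,
            })
    return examples

# Write the generated examples so they can be concatenated with the MNLI
# training file before fine-tuning.
if __name__ == "__main__":
    with open("hans_like_augmentation.jsonl", "w") as out:
        for ex in generate(n_per_template=1000):
            out.write(json.dumps(ex) + "\n")
```

In the experiments above, the augmentation set was structured like HANS itself, drawing on the same thirty subcases but containing none of the evaluation items.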
7 The effect on MNLI test set performance was less clear; the augmentation with HANS-like examples improved MNLI test set performance for BERT (84.4% vs. 84.1%) and ESIM (77.6% vs. 77.3%) but hurt performance for DA (66.0% vs. 72.4%) and SPINN (63.9% vs. 67.0%).

Figure 2: HANS accuracies for models trained on MNLI plus examples of all 30 categories in HANS. (Accuracies for DA, ESIM, SPINN, and BERT, broken down by heuristic and by whether the correct label is entailed or non-entailed.)

Transfer across HANS subcases: The positive results of the HANS-like augmentation experiment are compatible with the possibility that the models simply memorized the templates that made up HANS's thirty subcases. To address this, we retrained our models on MNLI augmented with subsets of the HANS cases (withholding some cases; see Appendix E for details), then tested the models on the withheld cases. The results of one of the transfer experiments, using BERT, are shown in Table 3. There were some successful cases of transfer; e.g., BERT performed well on the withheld categories with sentence-initial adverbs, regardless of whether the correct label was non-entailment or entailment. Such successes suggest that BERT is able to learn from some specific subcases that it should rule out the broader heuristics; in this case, the non-withheld cases plausibly informed BERT not to indiscriminately follow the constituent heuristic, encouraging it to instead base its judgments on the specific adverbs in question (e.g., certainly vs. probably). However, the models did not always transfer successfully; e.g., BERT had 0% accuracy on entailed passive examples when such examples were withheld, likely because the training set still included many non-entailed passive examples, meaning that BERT may have learned to assume that all sentences with passive premises are cases of non-entailment. Thus, though the models do seem to be able to rule out the broadest versions of the heuristics and transfer that knowledge to some new cases, they may still fall back to the heuristics for other cases. For further results involving withheld categories, see Appendix E.

Withheld category | Example
Lexical overlap: Conjunctions (↛) | The doctor saw the author and the tourist. ↛ The author saw the tourist.
Lexical overlap: Passives (→) | The authors were helped by the actor. → The actor helped the authors.
Subsequence: NP/Z (↛) | Before the actor moved the doctor arrived. ↛ The actor moved the doctor.
Subsequence: PP on object (→) | The authors saw the judges by the doctor. → The authors saw the judges.
Constituent: Adverbs (↛) | Probably the artists helped the authors. ↛ The artists helped the authors.
Constituent: Adverbs (→) | Certainly the lawyers shouted. → The lawyers shouted.
[Results shown in the original as bar plots (0-100%) comparing MNLI and MNLI+ for each category.]

Table 3: Accuracies for BERT fine-tuned on basic MNLI and on MNLI+, which is MNLI augmented with most HANS categories except withholding the categories in this table. The two lexical overlap cases shown here are adversarial in that MNLI+ contains cases superficially similar to them but with opposite labels (namely, the Conjunctions (→) and Passives (↛) cases from Table 4 in the Appendix). The remaining cases in this table are not adversarial in this way.

Transfer to an external dataset: Finally, we tested models on the comp same short and comp same long datasets from Dasgupta et al.
(2018), which consist of lexical overlap cases: (6) the famous and arrogant cat is not more nasty than the dog with glasses in a white dress. ↛ the dog with glasses in a white dress is not more nasty than the famous and arrogant cat. This dataset differs from HANS in at least three important ways: it is based on a phenomenon not present in HANS (namely, comparatives); it uses a different vocabulary from HANS; and many of its sentences are semantically implausible. We used this dataset to test both BERT finetuned on MNLI, and BERT fine-tuned on MNLI augmented with HANS-like examples. The augmentation improved performance modestly for the long examples and dramatically for the short examples, suggesting that training with HANS-like examples has benefits that extend beyond HANS.8 8We hypothesize that HANS helps more with short examples because most HANS sentences are short. Short Long Entailed Non−entailed MNLI MNLI+ MNLI MNLI+ 0% 25% 50% 75% 100% 0% 25% 50% 75% 100% Accuracy Figure 3: Results on the lexical overlap cases from Dasgupta et al. (2018) for BERT fine-tuned on MNLI or on MNLI augmented with HANS-like examples. 8 Related Work 8.1 Analyzing trained models This project relates to an extensive body of research on exposing and understanding weaknesses in models’ learned behavior and representations. In the NLI literature, Poliak et al. (2018b) and Gururangan et al. (2018) show that, due to biases in NLI datasets, it is possible to achieve far better than chance accuracy on those datasets by only looking at the hypothesis. Other recent works address possible ways in which NLI models might use fallible heuristics, focusing on semantic phenomena, such as lexical inferences (Glockner et al., 2018) or quantifiers (Geiger et al., 2018), or biases based on specific words (Sanchez et al., 2018). Our work focuses instead on structural phenomena, following the proof-of-concept work done by Dasgupta et al. (2018). Our focus on using NLI to address how models capture structure follows some older work about using NLI for the evaluation of parsers (Rimell and Clark, 2010; Mehdad et al., 2010). NLI has been used to investigate many other types of linguistic information besides syntactic structure (Poliak et al., 2018a; White et al., 2017). Outside NLI, multiple projects have used classification tasks to understand what linguistic and/or structural information is present in vector encodings of sentences (e.g., Adi et al., 2017; Ettinger et al., 2018; Conneau et al., 2018). We instead choose the behavioral approach of using task performance on critical cases. Unlike the classification approach, this approach is agnostic to model structure; our dataset could be used to evaluate a symbolic NLI system just as easily as a neural one, whereas typical classification approaches only work for models with vector representations. 3436 8.2 Structural heuristics Similar to our lexical overlap heuristic, Dasgupta et al. (2018), Nie et al. (2018), and Kim et al. (2018) also tested NLI models on specific phenomena where word order matters; we use a larger set of phenomena to study a more general notion of lexical overlap that is less dependent on the properties of a single phenomenon, such as passives. Naik et al. (2018) also find evidence that NLI models use a lexical overlap heuristic, but our approach is substantially different from theirs.9 This work builds on our pilot study in McCoy and Linzen (2019), which studied one of the subcases of the subsequence heuristic. 
Several of our subsequence subcases are inspired by psycholinguistics research (Bever, 1970; Frazier and Rayner, 1982; Tabor et al., 2004); these works have aims similar to ours but are concerned with the representations used by humans rather than neural networks. Finally, all of our constituent heuristic subcases depend on the implicational behavior of specific words. Several past works (Pavlick and CallisonBurch, 2016; Rudinger et al., 2018; White et al., 2018; White and Rawlins, 2018) have studied such behavior for verbs (e.g., He knows it is raining entails It is raining, while He believes it is raining does not). We extend that approach by including other types of words with specific implicational behavior, namely conjunctions (and, or), prepositions that take clausal arguments (if, because), and adverbs (definitely, supposedly). MacCartney and Manning (2009) also discuss the implicational behavior of these various types of words within NLI. 8.3 Generalization Our work suggests that test sets drawn from the same distribution as the training set may be inadequate for assessing whether a model has learned to perform the intended task. Instead, it is also necessary to evaluate on a generalization set that departs from the training distribution. McCoy et al. (2018) found a similar result for the task of question formation; different architectures that all succeeded on the test set failed on the generalization set in different ways, showing that the test set alone was not sufficient to determine what the models had 9Naik et al. (2018) diagnose the lexical overlap heuristic by appending and true is true to existing MNLI hypotheses, which decreases lexical overlap but does not change the sentence pair’s label. We instead generate new sentence pairs for which the words in the hypothesis all appear in the premise. learned. This effect can arise not just from different architectures but also from different initializations of the same architecture (Weber et al., 2018). 9 Conclusions Statistical learners such as neural networks closely track the statistical regularities in their training sets. This process makes them vulnerable to adopting heuristics that are valid for frequent cases but fail on less frequent ones. We have investigated three such heuristics that we hypothesize NLI models are likely to learn. To evaluate whether NLI models do behave consistently with these heuristics, we have introduced the HANS dataset, on which models using these heuristics are guaranteed to fail. We find that four existing NLI models perform very poorly on HANS, suggesting that their high accuracies on NLI test sets may be due to the exploitation of invalid heuristics rather than deeper understanding of language. However, these models performed significantly better on both HANS and on a separate structure-dependent dataset when their training data was augmented with HANS-like examples. Overall, our results indicate that, despite the impressive accuracies of state-of-the-art models on standard evaluations, there is still much progress to be made and that targeted, challenging datasets, such as HANS, are important for determining whether models are learning what they are intended to learn. 
Acknowledgments We are grateful to Adam Poliak, Benjamin Van Durme, Samuel Bowman, the members of the JSALT General-Purpose Sentence Representation Learning team, and the members of the Johns Hopkins Computation and Psycholinguistics Lab for helpful comments, and to Brian Leonard for assistance with the Mechanical Turk experiment. Any errors remain our own. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. 1746891 and the 2018 Jelinek Summer Workshop on Speech and Language Technology (JSALT). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation or the JSALT workshop. 3437 References Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. 2017. Fine-grained analysis of sentence embeddings using auxiliary prediction tasks. In International Conference on Learning Representations. Aishwarya Agrawal, Dhruv Batra, and Devi Parikh. 2016. Analyzing the behavior of visual question answering models. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1955–1960. Association for Computational Linguistics. Thomas G. Bever. 1970. The cognitive basis for linguistic structures. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632–642, Lisbon, Portugal. Association for Computational Linguistics. Samuel R. Bowman, Jon Gauthier, Abhinav Rastogi, Raghav Gupta, Christopher D. Manning, and Christopher Potts. 2016. A fast unified model for parsing and sentence understanding. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1466–1477. Association for Computational Linguistics. Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2017. Enhanced LSTM for natural language inference. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1657–1668. Association for Computational Linguistics. Kiel Christianson, Andrew Hollingworth, John F Halliwell, and Fernanda Ferreira. 2001. Thematic roles assigned along the garden path linger. Cognitive Psychology, 42(4):368–407. Cleo Condoravdi, Dick Crouch, Valeria de Paiva, Reinhard Stolle, and Daniel G. Bobrow. 2003. Entailment, intensionality and text understanding. In Proceedings of the HLT-NAACL 2003 Workshop on Text Meaning. Alexis Conneau, Germ´an Kruszewski, Guillaume Lample, Lo¨ıc Barrault, and Marco Baroni. 2018. What you can cram into a single vector: Probing sentence embeddings for linguistic properties. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2126–2136. Association for Computational Linguistics. Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The PASCAL Recognising Textual Entailment Challenge. In Proceedings of the First International Conference on Machine Learning Challenges: Evaluating Predictive Uncertainty Visual Object Classification, and Recognizing Textual Entailment, MLCW’05, pages 177–190, Berlin, Heidelberg. Springer-Verlag. Ishita Dasgupta, Demi Guo, Andreas Stuhlm¨uller, Samuel J. Gershman, and Noah D. Goodman. 2018. 
Evaluating compositionality in sentence embeddings. In Proceedings of the 40th Annual Conference of the Cognitive Science Society, pages 1596– 1601, Madison, WI. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Allyson Ettinger, Ahmed Elgohary, Colin Phillips, and Philip Resnik. 2018. Assessing composition in sentence vector representations. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1790–1801. Association for Computational Linguistics. Lyn Frazier and Keith Rayner. 1982. Making and correcting errors during sentence comprehension: Eye movements in the analysis of structurally ambiguous sentences. Cognitive Psychology, 14(2):178–210. Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Peters, Michael Schmitz, and Luke S. Zettlemoyer. 2017. AllenNLP: A Deep Semantic Natural Language Processing Platform. In Proceedings of the Workshop for NLP Open Source Software (NLPOSS). Atticus Geiger, Ignacio Cases, Lauri Karttunen, and Christopher Potts. 2018. Stress-testing neural models of natural language inference with multiply-quantified sentences. arXiv preprint arXiv:1810.13033. Max Glockner, Vered Shwartz, and Yoav Goldberg. 2018. Breaking NLI Systems with Sentences that Require Simple Lexical Inferences. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 650–655. Association for Computational Linguistics. Yoav Goldberg. 2019. Assessing BERT’s syntactic abilities. arXiv preprint arXiv:1901.05287. Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, and Marco Baroni. 2018. Colorless green recurrent networks dream hierarchically. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 3438 Volume 1 (Long Papers), pages 1195–1205. Association for Computational Linguistics. Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A. Smith. 2018. Annotation artifacts in natural language inference data. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 107–112. Association for Computational Linguistics. Juho Kim, Christopher Malon, and Asim Kadav. 2018. Teaching syntax by adversarial distraction. In Proceedings of the First Workshop on Fact Extraction and VERification (FEVER), pages 79–84. Association for Computational Linguistics. Dan Klein and Christopher D. Manning. 2003. Accurate unlexicalized parsing. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics. Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of LSTMs to learn syntax-sensitive dependencies. Transactions of the Association for Computational Linguistics, 4:521– 535. Bill MacCartney and Christopher D Manning. 2009. Natural language inference. Ph.D. thesis, Stanford University. Rebecca Marvin and Tal Linzen. 2018. Targeted syntactic evaluation of language models. 
In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1192–1202. Association for Computational Linguistics. R. Thomas McCoy, Robert Frank, and Tal Linzen. 2018. Revisiting the poverty of the stimulus: Hierarchical generalization without a hierarchical bias in recurrent neural networks. In Proceedings of the 40th Annual Conference of the Cognitive Science Society, pages 2093–2098, Madison, WI. R. Thomas McCoy and Tal Linzen. 2019. Non-entailed subsequences as a challenge for natural language inference. In Proceedings of the Society for Computation in Linguistics, volume 2. R. Thomas McCoy, Tal Linzen, Ewan Dunbar, and Paul Smolensky. 2019. RNNs implicitly implement tensor-product representations. In International Conference on Learning Representations. Yashar Mehdad, Alessandro Moschitti, and Fabio Massimo Zanzotto. 2010. Syntactic/semantic structures for textual entailment recognition. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 1020–1028. Association for Computational Linguistics. Aakanksha Naik, Abhilasha Ravichander, Norman Sadeh, Carolyn Rose, and Graham Neubig. 2018. Stress test evaluation for natural language inference. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2340–2353. Association for Computational Linguistics. Nikita Nangia and Samuel R. Bowman. 2019. Human vs. muppet: A conservative estimate of human performance on the GLUE benchmark. Yixin Nie, Yicheng Wang, and Mohit Bansal. 2018. Analyzing compositionality-sensitivity of NLI models. arXiv preprint arXiv:1811.07033. Ankur Parikh, Oscar T¨ackstr¨om, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2249–2255. Association for Computational Linguistics. Ellie Pavlick and Chris Callison-Burch. 2016. Tense manages to predict implicative behavior in verbs. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2225–2229. Association for Computational Linguistics. Adam Poliak, Aparajita Haldar, Rachel Rudinger, J. Edward Hu, Ellie Pavlick, Aaron Steven White, and Benjamin Van Durme. 2018a. Collecting diverse natural language inference problems for sentence representation evaluation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 67–81. Association for Computational Linguistics. Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme. 2018b. Hypothesis only baselines in natural language inference. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics, pages 180–191. Association for Computational Linguistics. Laura Rimell and Stephen Clark. 2010. Cambridge: Parser evaluation using textual entailment by grammatical relation comparison. In Proceedings of the 5th International Workshop on Semantic Evaluation, pages 268–271. Association for Computational Linguistics. Rachel Rudinger, Aaron Steven White, and Benjamin Van Durme. 2018. Neural models of factuality. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 731–744. Association for Computational Linguistics. Ivan Sanchez, Jeff Mitchell, and Sebastian Riedel. 2018. 
Behavior analysis of NLI models: Uncovering the influence of three factors on robustness. In Proceedings of the 2018 Conference of the North 3439 American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1975–1985. Association for Computational Linguistics. Whitney Tabor, Bruno Galantucci, and Daniel Richardson. 2004. Effects of merely local syntactic coherence on sentence processing. Journal of Memory and Language, 50(4):355–370. Jianyu Wang, Zhishuai Zhang, Cihang Xie, Yuyin Zhou, Vittal Premachandran, Jun Zhu, Lingxi Xie, and Alan Yuille. 2018. Visual concepts and compositional voting. Annals of Mathematical Sciences and Applications, 3(1):151–188. Noah Weber, Leena Shekhar, and Niranjan Balasubramanian. 2018. The fine line between linguistic generalization and failure in seq2seq-attention models. In Proceedings of the Workshop on Generalization in the Age of Deep Learning, pages 24–27. Association for Computational Linguistics. Aaron Steven White, Pushpendre Rastogi, Kevin Duh, and Benjamin Van Durme. 2017. Inference is everything: Recasting semantic resources into a unified evaluation framework. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 996–1005. Asian Federation of Natural Language Processing. Aaron Steven White and Kyle Rawlins. 2018. The role of veridicality and factivity in clause selection. In Proceedings of the 48th Annual Meeting of the North East Linguistic Society. Aaron Steven White, Rachel Rudinger, Kyle Rawlins, and Benjamin Van Durme. 2018. Lexicosyntactic inference in neural models. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4717–4724. Association for Computational Linguistics. Adina Williams, Andrew Drozdov, and Samuel R. Bowman. 2018a. Do latent tree learning models identify meaningful structure in sentences? Transactions of the Association of Computational Linguistics, 6:253–267. Adina Williams, Nikita Nangia, and Samuel Bowman. 2018b. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122. Association for Computational Linguistics. A MNLI examples that contradict the HANS heuristics The sentences in (7) show examples from the MNLI training set that contradict the lexical overlap, subsequence, and constituent heuristics. The full set of all 261 contradicting examples in the MNLI training set may be viewed at https://github.com/ tommccoy1/hans/blob/master/mnli_ contradicting_examples. (7) a. A subcategory of accuracy is consistency. ↛Accuracy is a subcategory of consistency. b. At the same time, top Enron executives were free to exercise their stock options, and some did. ↛Top Enron executives were free to exercise. c. She was chagrined at The Nation’s recent publication of a column by conservative education activist Ron Unz arguing that liberal education reform has been an unmitigated failure. ↛Liberal education reform has been an unmitigated failure. B Templates Tables 4, 5, and 6 contain the templates for the lexical overlap heuristic, the subsequence heuristic, and the constituent heuristic, respectively. 
In some cases, a given template has multiple versions, such as one version where a noun phrase modifier attaches to the subject and another where the modifier attaches to the object. For clarity, we have only listed one version of each template here. The full list of templates can be viewed in the code on GitHub.10 C Fine-grained results Table 7 shows the results by subcase for models trained on MNLI for the subcases where the correct answer is entailment. Table 8 shows the results by subcase for these models for the subcases where the correct answer is non-entailment. D Results for models trained on MNLI with neutral and contradiction merged Table 9 shows the results on HANS for models trained on MNLI with the labels neutral and contradiction merged in the training set into the single label non-entailment. The results are similar to the results obtained by merging the labels after training, with the models generally outputting entailment for all HANS examples, whether that was the correct answer or not. 10https://github.com/tommccoy1/hans 3440 Subcase Template Example Entailment: Untangling relative clauses The N1 who the N2 V1 V2 the N3 →The N2 V1 the N1. The athlete who the judges admired called the manager. →The judges admired the athlete. Entailment: Sentences with PPs The N1 P the N2 V the N3 →The N1 V the N3 The tourists by the actor recommended the authors. →The tourists recommended the authors. Entailment: Sentences with relative clauses The N1 that V2 V1 the N2 →The N1 V1 the N2 The actors that danced saw the author. →The actors saw the author. Entailment: Conjunctions The N1 V the N2 and the N3 →The N1 V the N3 The secretaries encouraged the scientists and the actors. →The secretaries encouraged the actors. Entailment: Passives The N1 were V by the N2 →The N1 V the N2 The authors were supported by the tourists. →The tourists supported the authors. Non-entailment: Subject-object swap The N1 V the N2. ↛The N2 V the N1. The senators mentioned the artist. ↛The artist mentioned the senators. Non-entailment: Sentences with PPs The N1 P the N2 V the N3 ↛The N3 V the N2 The judge behind the manager saw the doctors. ↛The doctors saw the manager. Non-entailment: Sentences with relative clauses The N1 V1 the N2 who the N3 V2 ↛The N2 V1 the N3 The actors advised the manager who the tourists saw. ↛The manager advised the tourists. Non-entailment: Conjunctions The N1 V the N2 and the N3 ↛The N2 V the N3 The doctors advised the presidents and the tourists. ↛The presidents advised the tourists. Non-entailment: Passives The N1 were V by the N2 ↛The N1 V the N2 The senators were recommended by the managers. ↛The senators recommended the managers. Table 4: Templates for the lexical overlap heuristic 3441 Subcase Template Example Entailment: Conjunctions The N1 and the N2 V the N3 →The N2 V the N3 The actor and the professor mentioned the lawyer. →The professor mentioned the lawyer. Entailment: Adjectives Adj N1 V the N2 →N1 V the N2 Happy professors mentioned the lawyer. →Professors mentioned the lawyer. Entailment: Understood argument The N1 V the N2 →The N1 V The author read the book. →The author read. Entailment: Relative clause on object The N1 V1 the N2 that V2 the N3 →The N1 V1 the N2 The artists avoided the senators that thanked the tourists. →The artists avoided the senators. Entailment: PP on object The N1 V the N2 P the N3 →The N1 V the N2 The authors supported the judges in front of the doctor. →The authors supported the judges. 
Non-entailment: NP/S The N1 V1 the N2 V2 the N3 ↛The N1 V1 the N2 The managers heard the secretary encouraged the author. ↛The managers heard the secretary. Non-entailment: PP on subject The N1 P the N2 V ↛The N2 V The managers near the scientist resigned. ↛The scientist resigned. Non-entailment: Relative clause on subject The N1 that V1 the N2 V2 the N3 ↛The N2 V2 the N3 The secretary that admired the senator saw the actor. ↛The senator saw the actor. Non-entailment: MV/RR The N1 V1 P the N2 V2 ↛The N1 V1 P the N2 The senators paid in the office danced. ↛The senators paid in the office. Non-entailment: NP/Z P the N1 V1 the N2 V2 the N3 ↛The N1 V1 the N2 Before the actors presented the professors advised the manager. ↛The actors presented the professors. Table 5: Templates for the subsequence heuristic 3442 Subcase Template Example Entailment: Embedded under preposition P the N1 V1, the N2 V2 the N3 →The N1 V1 Because the banker ran, the doctors saw the professors. →The banker ran. Entailment: Outside embedded clause P the N1 V1 the N2, the N3 V2 the N4 →The N3 V2 the N4 Although the secretaries recommended the managers, the judges supported the scientist. →The judges supported the scientist. Entailment: Embedded under verb The N1 V1 that the N2 V2 →The N2 V2 The president remembered that the actors performed. →The actors performed. Entailment: Conjunction The N1 V1, and the N2 V2 the N3. →The N2 V2 the N3 The lawyer danced, and the judge supported the doctors. →The judge supported the doctors. Entailment: Adverbs Adv the N V →The N V Certainly the lawyers resigned. →The lawyers resigned. Non-entailment: Embedded under preposition P the N1 V1, the N2 V2 the N2 ↛The N1 V1 Unless the senators ran, the professors recommended the doctor. ↛The senators ran. Non-entailment: Outside embedded clause P the N1 V1 the N2, the N3 V2 the N4 ↛The N3 V2 the N4 Unless the authors saw the students, the doctors helped the bankers. ↛The doctors helped the bankers. Non-entailment: Embedded under verb The N1 V1 that the N2 V2 the N3 ↛The N2 V2 the N3 The tourists said that the lawyer saw the banker. ↛The lawyer saw the banker. Non-entailment: Disjunction The N1 V1, or the N2 V2 the N3 ↛The N2 V2 the N3 The judges resigned, or the athletes mentioned the author. ↛The athletes mentioned the author. Non-entailment: Adverbs Adv the N1 V the N2 ↛The N1 V the N2 Probably the artists saw the authors. ↛The artists saw the authors. Table 6: Templates for the constituent heuristic 3443 Heuristic Subcase DA ESIM SPINN BERT Lexical Untangling relative clauses 0.97 0.95 0.88 0.98 overlap The athlete who the judges saw called the manager. →The judges saw the athlete. Sentences with PPs 1.00 1.00 1.00 1.00 The tourists by the actor called the authors. →The tourists called the authors. Sentences with relative clauses 0.98 0.97 0.97 0.99 The actors that danced encouraged the author. →The actors encouraged the author. Conjunctions 1.00 1.00 1.00 0.77 The secretaries saw the scientists and the actors. →The secretaries saw the actors. Passives 1.00 1.00 0.95 1.00 The authors were supported by the tourists. →The tourists supported the authors. Subsequence Conjunctions 1.00 1.00 1.00 0.98 The actor and the professor shouted. →The professor shouted. Adjectives 1.00 1.00 1.00 1.00 Happy professors mentioned the lawyer. →Professors mentioned the lawyer. Understood argument 1.00 1.00 0.84 1.00 The author read the book. →The author read. Relative clause on object 0.98 0.99 0.95 0.99 The artists avoided the actors that performed. 
→The artists avoided the actors. PP on object 1.00 1.00 1.00 1.00 The authors called the judges near the doctor. →The authors called the judges. Constituent Embedded under preposition 0.99 0.99 0.85 1.00 Because the banker ran, the doctors saw the professors. →The banker ran. Outside embedded clause 0.94 1.00 0.95 1.00 Although the secretaries slept, the judges danced. →The judges danced. Embedded under verb 0.92 0.94 0.99 0.99 The president remembered that the actors performed. →The actors performed. Conjunction 0.99 1.00 0.89 1.00 The lawyer danced, and the judge supported the doctors. →The lawyer danced. Adverbs 1.00 1.00 0.98 1.00 Certainly the lawyers advised the manager. →The lawyers advised the manager. Table 7: Results for the subcases where the correct label is entailment. 3444 Heuristic Subcase DA ESIM SPINN BERT Lexical Subject-object swap 0.00 0.00 0.03 0.00 overlap The senators mentioned the artist. ↛The artist mentioned the senators. Sentences with PPs 0.00 0.00 0.01 0.25 The judge behind the manager saw the doctors. ↛The doctors saw the manager. Sentences with relative clauses 0.04 0.04 0.06 0.18 The actors called the banker who the tourists saw. ↛The banker called the tourists. Conjunctions 0.00 0.00 0.01 0.39 The doctors saw the presidents and the tourists. ↛The presidents saw the tourists. Passives 0.00 0.00 0.00 0.00 The senators were helped by the managers. ↛The senators helped the managers. Subsequence NP/S 0.04 0.02 0.09 0.02 The managers heard the secretary resigned. ↛The managers heard the secretary. PP on subject 0.00 0.00 0.00 0.06 The managers near the scientist shouted. ↛The scientist shouted. Relative clause on subject 0.03 0.04 0.05 0.01 The secretary that admired the senator saw the actor. ↛The senator saw the actor. MV/RR 0.04 0.03 0.03 0.00 The senators paid in the office danced. ↛The senators paid in the office. NP/Z 0.02 0.01 0.11 0.10 Before the actors presented the doctors arrived. ↛The actors presented the doctors. Constituent Embedded under preposition 0.14 0.02 0.29 0.50 Unless the senators ran, the professors recommended the doctor. ↛The senators ran. Outside embedded clause 0.01 0.00 0.02 0.00 Unless the authors saw the students, the doctors resigned. ↛The doctors resigned. Embedded under verb 0.00 0.00 0.01 0.22 The tourists said that the lawyer saw the banker. ↛The lawyer saw the banker. Disjunction 0.01 0.03 0.20 0.01 The judges resigned, or the athletes saw the author. ↛The athletes saw the author. Adverbs 0.00 0.00 0.00 0.08 Probably the artists saw the authors. ↛The artists saw the authors. Table 8: Results for the subcases where the correct label is non-entailment. 3445 Correct: Entailment Correct: Non-entailment Model Model class Lexical Subseq. Const. Lexical Subseq. Const. DA Bag-of-words 1.00 1.00 0.98 0.00 0.00 0.03 ESIM RNN 0.99 1.00 1.00 0.00 0.01 0.00 SPINN TreeRNN 0.94 0.96 0.93 0.06 0.14 0.11 BERT Transformer 0.98 1.00 0.99 0.04 0.02 0.20 Table 9: Results for models trained on MNLI with neutral and contradiction merged into a single label, nonentailment. E Results with augmented training with some subcases withheld For each model, we ran five experiments, each one having 6 of the 30 subcases withheld. Each trained model was then evaluated on the categories that had been withheld from it. The results of these experiments are in Tables 10, 11, 12, 13 and 14. F Human experiments To obtain human results, we used Amazon Mechanical Turk. 
We subdivided HANS into 114 different categories of examples, covering all possible variations of the template used to generate the example and the specific word around which the template was built. For example, for the constituent heuristic subcase of clauses embedded under verbs (e.g. The doctor believed the lawyer danced ↛The lawyer danced), each possible verb under which the clause could be embedded (e.g. believed, thought, or assumed) counted as a different category. For each of these 114 categories, we chose 20 examples from HANS and obtained judgments from 5 human participants for each of those 20 examples. Each participant provided judgments for 57 examples plus 10 controls (67 stimuli total) and was paid $2.00. The controls consisted of 5 examples where the premise and hypothesis were the same (e.g. The doctor saw the lawyer →The doctor saw the lawyer) and 5 examples of simple negation (e.g. The doctor saw the lawyer ↛The doctor did not see the lawyer). For analyzing the data, we discarded any participants who answered any of these controls incorrectly; this led to 95 participants being retained and 105 being rejected (participants were still paid regardless of whether they were retained or filtered out). On average, each participant spent 6.5 seconds per example; the participants we retained spent 8.9 seconds per example, while the participants we discarded spent 4.2 seconds per example. The total amount of time from a participant accepting the experiment to completing the experiment averaged 17.6 minutes. This included 9.1 minutes answering the prompts (6.4 minutes for discarded participants and 12.1 minutes for retained participants) and roughly one minute spent between prompts (1 second after each prompt). The remaining time was spent reading the consent form, reading the instructions, or waiting to start (Mechanical Turk participants often wait several minutes between accepting an experiment and beginning the experiment). The expert annotators were three native English speakers who had a background in linguistics but who had not heard about this project before providing judgments. Two of them were graduate students and one was a postdoctoral researcher. Each expert annotator labeled 124 examples (one example from each of the 114 categories, plus 10 controls). 3446 Heuristic Subcase DA ESIM SPINN BERT Lexical Subject-object swap 0.01 1.00 1.00 1.00 overlap The senators mentioned the artist. ↛The artist mentioned the senators. Lexical Untangling relative clauses 0.34 0.23 0.23 0.20 overlap The athlete who the judges saw called the manager. →The judges saw the athlete. Subsequence NP/S 0.27 0.00 0.00 0.10 The managers heard the secretary resigned. ↛The managers heard the secretary. Subsequence Conjunctions 0.49 0.38 0.38 0.38 The actor and the professor shouted. →The professor shouted. Constituent Embedded under preposition 0.51 0.51 0.51 1.00 Unless the senators ran, the professors recommended the doctor. ↛The senators ran. Constituent Embedded under preposition 1.00 0.06 1.00 0.03 Because the banker ran, the doctors saw the professors. →The banker ran. Table 10: Accuracies for models trained on MNLI augmented with most HANS example categories except withholding the categories in this table (experiment 1/5 for the withheld category investigation). Heuristic Subcase DA ESIM SPINN BERT Lexical Sentences with PPs 0.00 0.96 0.71 0.97 overlap The judge behind the manager saw the doctors. ↛The doctors saw the manager. 
Lexical Sentences with PPs 1.00 1.00 0.94 1.00 overlap The tourists by the actor called the authors. →The tourists called the authors. Subsequence PP on subject 0.00 0.07 0.57 0.39 The managers near the scientist shouted. ↛The scientist shouted. Subsequence Adjectives 0.71 0.99 0.64 1.00 Happy professors mentioned the lawyer. →Professors mentioned the lawyer. Constituent Outside embedded clause 0.78 1.00 1.00 0.17 Unless the authors saw the students, the doctors resigned. ↛The doctors resigned. Constituent Outside embedded clause 0.78 0.78 0.78 0.97 Although the secretaries slept, the judges danced. →The judges danced. Table 11: Accuracies for models trained on MNLI augmented with most HANS example categories except withholding the categories in this table (experiment 2/5 for the withheld category investigation). 3447 Heuristic Subcase DA ESIM SPINN BERT Lexical Sentences with relative clauses 0.00 0.04 0.02 0.84 overlap The actors called the banker who the tourists saw. ↛The banker called the tourists. Lexical Sentences with relative clauses 1.00 0.97 1.00 1.00 overlap The actors that danced encouraged the author. →The actors encouraged the author. Subsequence Relative clause on subject 0.00 0.04 0.00 0.93 The secretary that admired the senator saw the actor. ↛The senator saw the actor. Subsequence Understood argument 0.28 1.00 0.81 0.94 The author read the book. →The author read. Constituent Embedded under verb 0.00 0.00 0.05 0.98 The tourists said that the lawyer saw the banker. ↛The lawyer saw the banker. Constituent Embedded under verb 1.00 0.94 0.98 0.43 The president remembered that the actors performed. →The actors performed. Table 12: Accuracies for models trained on MNLI augmented with most HANS example categories except withholding the categories in this table (experiment 3/5 for the withheld category investigation). Heuristic Subcase DA ESIM SPINN BERT Lexical Passives 0.00 0.00 0.00 0.00 overlap The senators were helped by the managers. ↛The senators helped the managers. Lexical Conjunctions 0.05 0.51 0.52 1.00 overlap The secretaries saw the scientists and the actors. →The secretaries saw the actors. Subsequence MV/RR 0.76 0.44 0.32 0.07 The senators paid in the office danced. ↛The senators paid in the office. Subsequence Relative clause on object 0.72 1.00 0.99 0.99 The artists avoided the actors that performed. →The artists avoided the actors. Constituent Disjunction 0.11 0.29 0.51 0.44 The judges resigned, or the athletes saw the author. ↛The athletes saw the author. Constituent Conjunction 0.99 1.00 0.74 1.00 The lawyer danced, and the judge supported the doctors. →The lawyer danced. Table 13: Accuracies for models trained on MNLI augmented with most HANS example categories except withholding the categories in this table (experiment 4/5 for the withheld category investigation). 3448 Heuristic Subcase DA ESIM SPINN BERT Lexical Conjunctions 0.00 0.44 0.00 0.08 overlap The doctors saw the presidents and the tourists. ↛The presidents saw the tourists. Lexical Passives 0.00 0.00 0.00 0.00 overlap The authors were supported by the tourists. →The tourists supported the authors. Subsequence NP/Z 0.00 0.10 0.18 0.57 Before the actors presented the doctors arrived. ↛The actors presented the doctors. Subsequence PP on object 0.04 0.76 0.04 0.98 The authors called the judges near the doctor. →The authors called the judges. Constituent Adverbs 0.76 0.33 0.20 0.84 Probably the artists saw the authors. ↛The artists saw the authors. 
Constituent Adverbs 0.66 1.00 0.59 0.96 Certainly the lawyers advised the manager. →The lawyers advised the manager. Table 14: Accuracies for models trained on MNLI augmented with most HANS example categories except withholding the categories in this table (experiment 5/5 for the withheld category investigation).
2019
334
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3449–3460, Florence, Italy, July 28 - August 2, 2019. ©2019 Association for Computational Linguistics

Zero-Shot Entity Linking by Reading Entity Descriptions

Lajanugen Logeswaran†∗ Ming-Wei Chang‡ Kenton Lee‡ Kristina Toutanova‡ Jacob Devlin‡ Honglak Lee‡,†
†University of Michigan, ‡Google Research
{llajan,honglak}@umich.edu, {mingweichang,kentonl,kristout,jacobdevlin,honglak}@google.com
∗Work completed while interning at Google.

Abstract

We present the zero-shot entity linking task, where mentions must be linked to unseen entities without in-domain labeled data. The goal is to enable robust transfer to highly specialized domains, and so no metadata or alias tables are assumed. In this setting, entities are identified only by text descriptions, and models must rely strictly on language understanding to resolve the new entities. First, we show that strong reading comprehension models pre-trained on large unlabeled data can be used to generalize to unseen entities. Second, we propose a simple and effective adaptive pre-training strategy, which we term domain-adaptive pre-training (DAP), to address the domain shift problem associated with linking unseen entities in a new domain. We present experiments on a new dataset that we construct for this task and show that DAP improves over strong pre-training baselines, including BERT. The data and code are available at https://github.com/lajanugen/zeshel (zeshel stands for zero-shot entity linking).

1 Introduction

Entity linking systems have achieved high performance in settings where a large set of disambiguated mentions of entities in a target entity dictionary is available for training. Such systems typically use powerful resources such as a high-coverage alias table, structured data, and linking frequency statistics. For example, Milne and Witten (2008) show that by only using the prior probability gathered from hyperlink statistics on Wikipedia training articles, one can achieve 90% accuracy on the task of predicting links in Wikipedia test articles.

[Figure 1: Zero-shot entity linking. Multiple training and test domains (worlds) are shown, e.g., Military, Star Wars, Elder Scrolls, Coronation Street, and Lego. At test time, a mention such as "The Burden spell is the opposite of Feather, increasing a character's encumbrance ..." must be resolved against an entity dictionary containing only text descriptions, e.g., "Burden (Oblivion)" and "Burden (Effect)". The task has two key properties: (1) It is zero-shot, as no mentions have been observed for any of the test world entities during training. (2) Only textual (non-structured) information is available.]

While most prior works focus on linking to general entity databases, it is often desirable to link to specialized entity dictionaries such as legal cases, company project descriptions, the set of characters in a novel, or a terminology glossary. Unfortunately, labeled data are not readily available and are often expensive to obtain for these specialized entity dictionaries.
Therefore, we need to develop entity linking systems that can generalize to unseen specialized entities. Without frequency statistics and meta-data, the task becomes substantially more challenging. Some prior works have pointed out the importance of building entity linking systems that can generalize to unseen entity sets (Sil et al., 2012; Wang et al., 2015), but adopt an additional set of assumptions. 3450 In this work, we propose a new zero-shot entity linking task, and construct a new dataset for it.2 The target dictionary is simply defined as a set of entities, each with a text description (from a canonical entity page, for example). We do not constrain mentions to named entities, unlike some prior work, which makes the task harder due to large number of candidate entities. In our dataset, multiple entity dictionaries are available for training, with task performance measured on a disjoint set of test entity dictionaries for which no labeled data is available. Figure 1 illustrates the task setup. We construct the dataset using multiple sub-domains in Wikia and automatically extract labeled mentions using hyper-links. Zero-shot entity linking poses two challenges for entity linking models. First, without the availability of powerful alias tables or frequency priors, models must read entity descriptions and reason about the correspondence with the mention in context. We show that a strong reading comprehension model is crucial. Second, since labeled mentions for test entities are not available, models must adapt to new mention contexts and entity descriptions. We focus on both of these challenges. The contributions of this paper are as follows: • We propose a new zero-shot entity linking task that aims to challenge the generalization ability of entity linking systems with minimal assumptions. We construct a dataset for this task, which will be made publicly available. • We build a strong baseline by using state-of-theart reading comprehension models. We show that attention between mention in context and entity descriptions, which has not been used in prior entity linking work, is critical for this task. • We propose a simple yet novel adaptation strategy called domain-adaptive pre-training (DAP) and show that it can further improve entity linking performance. 2 Zero-shot Entity Linking We first review standard entity linking task definitions and discuss assumptions made by prior systems. We then define the zero-shot entity linking task and discuss its relationship to prior work. 2Existing datasets are either unsuitable or would have to be artificially partitioned to construct a dataset for this task. 2.1 Review: Entity linking Entity linking (EL) is the task of grounding entity mentions by linking them to entries in a given database or dictionary of entities. Formally, given a mention m and its context, an entity linking system links m to the corresponding entity in an entity set E = {ei}i=1,...,K, where K is the number of entities. The standard definition of EL (Bunescu and Pasca, 2006; Roth et al., 2014; Sil et al., 2018) assumes that mention boundaries are provided by users or a mention detection system. The entity set E can contain tens of thousands or even millions of entities, making this a challenging task. In practice, many entity linking systems rely on the following resources or assumptions: Single entity set This assumes that there is a single comprehensive set of entities E shared between training and test examples. 
Alias table An alias table contains entity candidates for a given mention string and limits the possibilities to a relatively small set. Such tables are often compiled from a labeled training set and domain-specific heuristics. Frequency statistics Many systems use frequency statistics obtained from a large labeled corpus to estimate entity popularity and the probability of a mention string linking to an entity. These statistics are very powerful when available. Structured data Some systems assume access to structured data such as relationship tuples (e.g., (Barack Obama, Spouse, Michelle Obama)) or a type hierarchy to aid disambiguation. 2.2 Task Definition The main motivation for this task is to expand the scope of entity linking systems and make them generalizable to unseen entity sets for which none of the powerful resources listed above are readily available. Therefore, we drop the above assumptions and make one weak assumption: the existence of an entity dictionary E = {(ei, di)}i=1,..,K, where di is a text description of entity ei. Our goal is to build entity linking systems that can generalize to new domains and entity dictionaries, which we term worlds. We define a world as W = (MW, UW, EW), where MW and UW are distributions over mentions and documents from the world, respectively, and EW is an entity dictionary associated with W. Mentions m from MW 3451 Task In-Domain Seen Small Statistics Structured Entity Entity Set Candidate Set Data dictionary Standard EL      Cross-Domain EL     Linking to Any DB (Sil et al., 2012)    Zero-Shot EL  Table 1: Assumptions and resources for entity linking task definitions. We classify task definitions based on whether (i) the system is tested on mentions from the training domain (In-Domain), (ii) linked mentions from the target entity set are seen during training (Seen Entity Set), (iii) a small high-coverage candidate set can be derived using alias tables or strict token overlap constraints (Small Candidate Set) and the availability of (iv) Frequency statistics, (v) Structured Data, and (vi) textual descriptions (Entity dictionary). are defined as mention spans in documents from UW. We assume the availability of labelled mention, entity pairs from one or more source worlds W1 src . . . Wn src for training. At test time we need to be able to label mentions in a new world Wtgt. Note that the entity sets EW1src, . . . , EWn src, EWtgt are disjoint. See Figure 1 for an illustration of several training and test worlds. We additionally assume that samples from the document distribution UWtgt and the entity descriptions EWtgt are available for training. These samples can be used for unsupervised adaptation to the target world. During training, mention boundaries for mentions in Wtgt are not available. At test time, mention boundaries are provided as input. 2.3 Relationship to other EL tasks We summarize the relationship between the newly introduced zero-shot entity linking task and prior EL task definitions in Table 1. Standard EL While there are numerous differences between EL datasets (Bunescu and Pasca, 2006; Ling et al., 2015), most focus on a standard setting where mentions from a comprehensive test entity dictionary (often Wikipedia) are seen during training, and rich statistics and meta-data can be utilized (Roth et al., 2014). Labeled in-domain documents with mentions are also assumed to be available. 
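As a concrete summary of the task definition in Section 2.2 above, the sketch below lays out the data available in the zero-shot setting as plain Python data structures. The class and field names are illustrative assumptions, not part of the released zeshel code.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class Entity:
    entity_id: str
    title: str
    description: str             # d_i: the only information given about entity e_i

@dataclass
class Mention:
    context: str                 # document text surrounding the mention span
    span: str                    # the mention string m
    gold_entity_id: Optional[str] = None   # known only for source-world training data

@dataclass
class World:                     # W = (M_W, U_W, E_W)
    name: str
    entity_dict: Dict[str, Entity]          # E_W
    documents: List[str]                    # unlabeled samples from U_W
    labeled_mentions: List[Mention] = field(default_factory=list)

def check_zero_shot(sources: List[World], target: World) -> None:
    # Entity sets are disjoint across worlds, and the target world contributes
    # only its entity dictionary and unlabeled documents during training.
    source_ids = {eid for w in sources for eid in w.entity_dict}
    assert source_ids.isdisjoint(target.entity_dict)
    assert not target.labeled_mentions
```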
Cross-Domain EL Recent work has also generalized to a cross-domain setting, linking entity mentions in different types of text, such as blogposts and news articles to the Wikipedia KB, while only using labeled mentions in Wikipedia for training (e.g., Gupta et al. (2017); Le and Titov (2018), inter alia). Linking to Any DB Sil et al. (2012) proposed a task setup very similar to ours, and later work (Wang et al., 2015) has followed a similar setting. The main difference between zero-shot EL and these works is that they assumed either a highcoverage alias table or high-precision token overlap heuristics to reduce the size of the entity candidate set (i.e., to less than four in Sil et al. (2012)) and relied on structured data to help disambiguation. By compiling and releasing a multi-world dataset focused on learning from textual information, we hope to help drive progress in linking entities for a broader set of applications. Work on word sense disambiguation based on dictionary definitions of words is related as well (Chaplot and Salakhutdinov, 2018), but this task exhibits lower ambiguity and existing formulations have not focused on domain generalization. 3 Dataset Construction We construct a new dataset to study the zeroshot entity linking problem using documents from Wikia.3 Wikias are community-written encyclopedias, each specializing in a particular subject or theme such as a fictional universe from a book or film series. Wikias have many interesting properties suitable for our task. Labeled mentions can be automatically extracted based on hyperlinks. Mentions and entities have rich document context that can be exploited by reading comprehension approaches. Each Wikia has a large number of unique entities relevant to a specific theme, making it a useful benchmark for evaluating domain generalization of entity linking systems. We use data from 16 Wikias, and use 8 of them for training and 4 each for validation and testing. To construct data for training and evaluation, we first extract a large number of mentions from the Wikias. Many of these mentions can be easily linked by string matching between mention string 3 https://www.wikia.com. 3452 World Entities Mentions Train Evaluation Seen Unseen Training American Football 31929 3898 410 333 Doctor Who 40281 8334 819 702 Fallout 16992 3286 337 256 Final Fantasy 14044 6041 629 527 Military 104520 13063 1356 1408 Pro Wrestling 10133 1392 151 111 StarWars 87056 11824 1143 1563 World of Warcraft 27677 1437 155 100 Validation Coronation Street 17809 0 0 1464 Muppets 21344 0 0 2028 Ice Hockey 28684 0 0 2233 Elder Scrolls 21712 0 0 4275 Test Forgotten Realms 15603 0 0 1200 Lego 10076 0 0 1199 Star Trek 34430 0 0 4227 YuGiOh 10031 0 0 3374 Table 2: Zero-shot entity linking dataset based on Wikia. and the title of entity documents. These mentions are downsampled during dataset construction, and occupy a small percentage (5%) of the final dataset. While not completely representative of the natural distribution of mentions, this data construction method follows recent work that focuses on evaluating performance on the challenging aspects of the entity linking problem (e.g., Gupta et al. (2017) selected mentions with multiple possible entity candidates for assessing indomain unseen entity performance). Each Wikia document corresponds to an entity, represented by the title and contents of the document. These entities, paired with their text descriptions, comprise the entity dictionary. 
Since the task is already quite challenging, we assume that the target entity exists in the entity dictionary and leave NIL recognition or clustering (NIL mentions/entities refer to entities nonexistent in the knowledge-base) to future editions of the task and dataset. We categorize the mentions based on token overlap between mentions and the corresponding entity title as follows. High Overlap: title is identical to mention text, Multiple Categories: title is mention text followed by a disambiguation phrase (e.g., mention string: ‘Batman’, title: ‘Batman (Lego)’), Ambiguous substring: mention is a substring of title (e.g., mention string: ‘Agent’, title: ‘The Agent’). All other mentions are categorized Coronation Street Mention She told ray that Dickie and Audrey had met up again and tried to give their marriage another go . . . I don’t want to see her face again .. . ” Dickie Fleming Richard “Dickie” Fleming lived in coronation street with his wife Audrey from 1968 to 1970. Audrey Fleming Audrey Fleming (ne´e bright) was a resident of 3 coronation street from 1968 to 1970 . Audrey married Dickie Fleming .. . Zeedan Nazir Zeedan Nazir is the son of the Late Kal and Jamila Nazir . .. Star Wars Mention The droid acted as Moff Kilran’s representative on board the Black Talon, an Imperial transport ship. Gageclass transport The Gage-class transport was a transport design used by the reconstituted Sith Empire of the Great Galactic War. Imperial Armored Transport The Kuat Drive Yards Imperial Armored Transport was fifty meters long and carried ten crewmen and twenty soldiers. M-class Imperial Attack Transport The M-class Imperial Attack Transport was a type of starship which saw service in the Imperial Military during the Galactic War. Table 3: Example mention and entity candidates from Coronation Street and Star Wars. Note that the language usage is very different across different Worlds. as Low Overlap. These mentions respectively constitute approximately 5%, 28%, 8% and 59% of the mentions in the dataset. Table 2 shows some statistics of the dataset. Each domain has a large number of entities ranging from 10,000 to 100,000. The training set has 49,275 labeled mentions. To examine the indomain generalization performance, we construct heldout sets seen and unseen of 5,000 mentions each, composed of mentions that link to only entities that were seen or unseen during training, respectively. The validation and test sets have 10,000 mentions each (all of which are unseen). Table 3 shows examples of mentions and entities in the dataset. The vocabulary and language used in mentions and entity descriptions differs drastically between the different domains. In addition to acquiring domain specific knowledge, understanding entity descriptions and performing reasoning is required in order to resolve mentions. 4 Models for Entity Linking We adopt a two-stage pipeline consisting of a fast candidate generation stage, followed by a more expensive but powerful candidate ranking stage. 3453 4.1 Candidate generation Without alias tables for standard entity linking, a natural substitute is to use an IR approach for candidate generation. We use BM25, a variant of TF-IDF to measure similarity between mention string and candidate documents.4 Top-k entities retrieved by BM25 scoring with Lucene5 are used for training and evaluation. In our experiments k is set to 64. 
The coverage of the top-64 candidates is less than 77% on average, indicating the difficulty of the task and leaving substantial room for improvement in the candidate generation phase. 4.2 Candidate ranking Since comparing two texts—a mention in context and a candidate entity description—is a task similar to reading comprehension and natural language inference tasks, we use an architecture based on a deep Transformer (Vaswani et al., 2017) which has achieved state-of-the-art performance on such tasks (Radford et al., 2018; Devlin et al., 2019). As in BERT (Devlin et al., 2019), the mention in context m and candidate entity description e, each represented by 128 word-piece tokens, are concatenated and input to the model as a sequence pair together with special start and separator tokens: ([CLS] m [SEP] e [SEP]). Mention words are signaled by a special embedding vector that is added to the mention word embeddings. The Transformer encoder produces a vector representation hm,e of the input pair, which is the output of the last hidden layer at the special pooling token [CLS]. Entities in a given candidate set are scored as w⊤hm,e where w is a learned parameter vector, and the model is trained using a softmax loss. An architecture with 12 layers, hidden dimension size 768 and 12 attention heads was used in our experiments. We refer to this model as FullTransformer. By jointly encoding the entity description and the mention in context with a Transformer, they can attend to each other at every layer. Note that prior neural approaches for entity linking have not explored such architectures with deep cross-attention. To assess the value of this departure from prior work, we implement the following two variants: (i) Pool-Transformer: a siamese-like network which uses two deep Transformers to separately derive single-vector repre4We also experimented with using the mention+context text but this variant performs substantially worse. 5 http://lucene.apache.org/ sentations of the mention in context, hm, and the candidate entity, he; they take as input the mention in context and entity description respectively, together with special tokens indicating the boundaries of the texts: ([CLS] m [SEP]) and ([CLS] e [SEP]), and output the last hidden layer encoding at the special start token. The scoring function is h⊤ mhe. Single vector representations for the two components have been used in many prior works, e.g., Gupta et al. (2017). (ii) CandPool-Transformer: a variant which uses single vector entity representations but can attend to individual tokens of the mention and its context as in Ganea and Hofmann (2017). This architecture also uses two Transformer encoders, but introduces an additional attention module which allows he to attend to individual token representations of the mention in context. In the experiments section, we also compare to re-implementations of Gupta et al. (2017) and Ganea and Hofmann (2017), which are similar to Pool-Transformer and Cand-Pool-Transformer respectively but with different neural architectures for encoding. 5 Adapting to the Target World We focus on using unsupervised pre-training to ensure that downstream models are robust to target domain data. There exist two general strategies for pre-training: (1) task-adaptive pre-training, and (2) open-corpus pre-training. We describe these below, and also propose a new strategy: domainadaptive pre-training (DAP), which is complementary to the two existing approaches. Task-adaptive pre-training Glorot et al. (2011); Chen et al. 
(2012); Yang and Eisenstein (2015), inter alia, pre-trained on the source and target domain unlabeled data jointly with the goal of discovering features that generalize across domains. After pre-training, the model is fine-tuned on the source-domain labeled data.6 Open-corpus pre-training Instead of explicitly adapting to a target domain, this approach simply applies unsupervised pre-training to large corpora before fine-tuning on the source-domain labeled data. Examples of this approach include ELMo (Peters et al., 2018), OpenAI GPT (Radford et al., 2018), and BERT (Devlin et al., 2019). 6In many works, the learned representations are kept fixed and only higher layers are updated. 3454 Intuitively, the target-domain distribution is likely to be partially captured by pre-training if the open corpus is sufficiently large and diverse. Indeed, open-corpus pre-training has been shown to benefit out-of-domain performance far more than indomain performance (He et al., 2018). Domain-adaptive pre-training In addition to pre-training stages from other approaches, we propose to insert a penultimate domain adaptive pre-training (DAP) stage, where the model is pre-trained only on the target-domain data. As usual, DAP is followed by a final fine-tuning stage on the source-domain labeled data. The intuition for DAP is that representational capacity is limited, so models should prioritize the quality of target domain representations above all else. We introduce notation to describe various ways in which pre-training stages can be composed. • Usrc denotes text segments from the union of source world document distributions UW1src . . . UWn src. • Utgt denotes text segments from the document distribution of a target world Wtgt. • Usrc+tgt denotes randomly interleaved text segments from both Usrc and Utgt. • UWB denotes text segments from open corpora, which in our experiments are Wikipedia and the BookCorpus datasets used in BERT. We can chain together a series of pre-training stages. For example, UWB →Usrc+tgt →Utgt indicates that the model is first pre-trained on the open corpus, then pre-trained on the combined source and target domains, then pre-trained on only the target domain, and finally fine-tuned on the source-domain labeled data.7 We show that chaining together different pre-training strategies provides additive gains. 6 Experiments Pre-training We use the BERT-Base model architecture in all our experiments. The Masked LM objective (Devlin et al., 2019) is used for unsupervised pre-training. For fine-tuning language models (in the case of multi-stage pre-training) and 7We use the notation Ux interchangeably to mean both the unsupervised data x and the strategy to pre-train on x. Model Resources Avg Acc Edit-distance ∅ 16.49 TF-IDF 8 ∅ 26.06 Ganea and Hofmann (2017) GloVe 26.96 Gupta et al. (2017) GloVe 27.03 Full-Transformer ∅ 19.17 Full-Transformer (Pre-trained) Usrc 66.55 Full-Transformer (Pre-trained) Utgt 67.87 Full-Transformer (Pre-trained) Usrc+tgt 67.91 Pool-Transformer (Pre-trained) UWB 57.61 Cand-Pool-Trans. (Pre-trained) UWB 52.62 Full-Transformer (Pre-trained) UWB 76.06 Table 4: Baseline results for Zero-shot Entity Linking. Averaged normalized Entity-Linking accuracy on all validation domains. Usrc+tgt refers to masked language model pre-training on unlabeled data from training and validation worlds. fine-tuning on the Entity-Linking task, we use a small learning rate of 2e-5, following the recommendations from Devlin et al. (2019). 
For models trained from scratch we use a learning rate of 1e-4.

Evaluation We define the normalized entity-linking performance as the performance evaluated on the subset of test instances for which the gold entity is among the top-k candidates retrieved during candidate generation. The unnormalized performance is computed on the entire test set. Our IR-based candidate generation has a top-64 recall of 76% and 68% on the validation and test sets, respectively. The unnormalized performance is thus upper-bounded by these numbers. Strengthening the candidate generation stage improves the unnormalized performance, but this is outside the scope of our work. Average performance across a set of worlds is computed by macro-averaging. Performance is defined as the accuracy of the single-best identified entity (top-1 accuracy).

6.1 Baselines

We first examine some baselines for zero-shot entity linking in Table 4. We include naive baselines such as Levenshtein edit-distance and TF-IDF, which compare the mention string against the candidate entity title and full document description, respectively, to rank candidate entities. We re-implemented recent neural models designed for entity linking (Ganea and Hofmann, 2017; Gupta et al., 2017), but did not expect them to perform well since the original systems were designed for settings where labeled mentions or meta-data for the target entities were available.

[Figure 2(a) — Impact of using domain-adaptive pre-training (all models are fine-tuned on the source labeled data after pre-training; normalized EL accuracy per target world):
Pre-training | W1tgt | W2tgt | W3tgt | W4tgt | Avg
Usrc+tgt (Glorot et al., 2011)† | 73.19 | 71.61 | 62.16 | 64.69 | 67.91
Usrc+tgt →Utgt (DAP) | 79.20 | 75.55 | 66.85 | 66.72 | 72.08
UWB (Devlin et al., 2019) | 83.40 | 79.00 | 73.03 | 68.82 | 76.06
UWB →Utgt (DAP) | 81.68 | 81.34 | 73.17 | 71.97 | 77.04
UWB →Usrc+tgt | 82.92 | 79.00 | 72.62 | 69.55 | 76.02
UWB →Usrc+tgt →Utgt (DAP) | 82.82 | 81.59 | 75.34 | 72.52 | 78.07]

[Figure 2(b), plot omitted — Relationship between MLM (Masked LM) accuracy of the pre-trained model (x-axis) and entity-linking accuracy of the fine-tuned model (y-axis), evaluated on target domains, for Usrc+tgt, UWB, and UWB →Usrc+tgt. Adding domain-adaptive pre-training improves both MLM accuracy and entity-linking performance. Note: src represents the union of all 8 training worlds and we adapt to one tgt world at a time. The target worlds are W1tgt: Coronation Street, W2tgt: Muppets, W3tgt: Ice Hockey, W4tgt: Elder Scrolls. †We refer to Glorot et al. (2011) for the idea of training a denoising autoencoder on source and target data together rather than the actual implementation; see text for more details.]

The poor performance of these models validates the necessity of using strong reading comprehension models for zero-shot entity linking. When using the Full-Transformer model, pre-training is necessary to achieve reasonable performance. We present results for models pre-trained on different subsets of our task corpus (Usrc, Utgt, Usrc+tgt) as well as pre-training on an external large corpus (UWB). We observe that the choice of data used for pre-training is important. In Table 4 we also compare the Pool-Transformer, Candidate-Pool-Transformer and Full-Transformer. The significant gap between Full-Transformer and the other variants shows the importance of allowing fine-grained comparisons between the two inputs via the cross-attention mechanism embedded in the Transformer.
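A minimal sketch of the Full-Transformer ranking model described in Section 4.2, assuming the HuggingFace transformers and PyTorch libraries; a public BERT checkpoint stands in for the paper's own pre-trained encoders, and the extra mention-marker embedding described above is omitted for brevity.

```python
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

class CrossEncoderRanker(nn.Module):
    # Jointly encodes "[CLS] mention-in-context [SEP] entity description [SEP]"
    # and scores each candidate as w^T h_{m,e}, where h_{m,e} is the [CLS] output.
    def __init__(self, model_name="bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        self.w = nn.Linear(self.encoder.config.hidden_size, 1, bias=False)

    def forward(self, **batch):
        h_cls = self.encoder(**batch).last_hidden_state[:, 0]  # (num_candidates, hidden)
        return self.w(h_cls).squeeze(-1)                        # one score per candidate

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
ranker = CrossEncoderRanker()

mention_ctx = "The Burden spell is the opposite of Feather , increasing a character 's encumbrance"
candidate_descriptions = [
    "Burden is an Alteration spell that temporarily adds weight ...",
    "Burden is a spell effect that temporarily increases the weight ...",
]
batch = tokenizer([mention_ctx] * len(candidate_descriptions), candidate_descriptions,
                  padding=True, truncation=True, max_length=256, return_tensors="pt")
scores = ranker(**batch)                                        # (num_candidates,)
loss = nn.functional.cross_entropy(scores.unsqueeze(0), torch.tensor([0]))  # softmax loss, gold = index 0
```

In the paper each candidate set contains 64 entities and both segments are truncated to 128 word-pieces; the two-candidate batch above is only for illustration.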
We hypothesize that prior entity linking systems did not need such powerful reading comprehension models due to the availability of strong additional meta information. The remaining experiments in the paper use the Full-Transformer model, unless mentioned otherwise. 6.2 Generalization to Unseen Entities and New Worlds To analyze the impact of unseen entities and domain shift in zero-shot entity linking, we evaluate performance on a more standard in-domain entity linking setting by making predictions on held out mentions from the training worlds. Table 5 compares entity linking performance for different entity splits. Seen entities from the training worlds are unsurprisingly the easiest to link to. For unseen entities from the training world, we observe a Evaluation Accuracy Training worlds, seen 87.74 Training worlds, unseen 82.96 Validation worlds, unseen 76.06 Table 5: Performance of the Full-Transformer (UWB) model evaluated on seen and unseen entities from the training and validation worlds. 5-point drop in performance. Entities from new worlds (which are by definition unseen and are mentioned in out-of-domain text) prove to be the most difficult. Due to the shift in both the language distribution and entity sets, we observe a 11-point drop in performance. This large generalization gap demonstrates the importance of adaptation to new worlds. 6.3 Impact of Domain Adaptive Pre-training Our experiments demonstrate that DAP improves on three state-of-the-art pre-training strategies: • Usrc+tgt: task-adaptive pre-training, which combines source and target data for pretraining (Glorot et al., 2011).9 • UWB: open-corpus pre-training, which uses Wikipedia and the BookCorpus for pre-training (We use a pre-trained BERT model (Devlin et al., 2019)). • UWB →Usrc+tgt: the previous two strategies chained together. While no prior work has applied this approach to domain adaptation, a similar approach for task adaptation was proposed by Howard and Ruder (2018). 9We use Masked LM and Transformer encoder, which are more powerful than the instantiation in (Glorot et al., 2011). 3456 Pre-training EL Accuracy N. Acc. U. Acc. UWB (Devlin et al., 2019) 75.06 55.08 UWB →Utgt (DAP) 76.17 55.88 UWB →Usrc+tgt →Utgt (DAP) 77.05 56.58 Table 6: Performance on test domains with FullTransformer. N. Acc represents the normalized accuracy. U. Acc represents the unnormalized accuracy. The unnormalized accuracy is upper-bounded by 68%, the top-64 recall of the candidate generation stage. The results are in Figure 2(a). DAP improves all pre-training strategies with an additional pretraining stage on only target-domain data. The best setting, UWB →Usrc+tgt →Utgt, chains together all existing strategies. DAP improves the performance over a strong pre-trained model (Devlin et al., 2019) by 2%. To further analyze the results of DAP, we plot the relationships between the accuracy of Masked LM (MLM accuracy) on target unlabeled data and the final target normalized accuracy (after finetuning on the source labeled data) in Figure 2(b). Adding an additional pre-training stage on the target unlabeled data unsurprisingly improves the MLM accuracy. More interestingly, we find that improvements in MLM accuracy are consistently followed by improvements in entity linking accuracy. It is intuitive that performance on unsupervised objectives reflect the quality of learned representations and correlate well with downstream performance. We show empirically that this trend holds for a variety of pre-training strategies. 
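The pre-training chains compared above compose naturally as a sequence of masked-LM stages followed by entity-linking fine-tuning. The sketch below is schematic: pretrain_mlm and finetune_entity_linking are assumed helper functions, not part of the released code, and in the paper the UWB stage is simply a publicly released BERT-Base checkpoint rather than pre-training from scratch.

```python
import random

def interleave(source_segments, target_segments, seed=0):
    # U_{src+tgt}: randomly interleaved text segments from both distributions.
    mixed = list(source_segments) + list(target_segments)
    random.Random(seed).shuffle(mixed)
    return mixed

def train_uwb_srctgt_tgt(model, source_segments, target_segments, source_labeled_data,
                         pretrain_mlm, finetune_entity_linking):
    # U_WB -> U_{src+tgt} -> U_tgt (DAP) -> fine-tune on source labeled mentions.
    # `model` is assumed to start from an open-corpus (U_WB) checkpoint.
    model = pretrain_mlm(model, interleave(source_segments, target_segments), lr=2e-5)
    model = pretrain_mlm(model, target_segments, lr=2e-5)   # domain-adaptive stage
    return finetune_entity_linking(model, source_labeled_data, lr=2e-5)
```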
6.4 Test results and performance analysis Table 6 shows the normalized and unnormalized Entity Linking performance on test worlds. Our best model that chains together all pretraining strategies achieves normalized accuracy of 77.05% and unnormalized accuracy of 56.58%. Note that the unnormalized accuracy corresponds to identifying the correct entity from tens of thousands of candidate entities. To analyze the mistakes made by the model, we compare EL accuracy across different mention categories in Table 7. Candidate generation (Recall@64) is poor in the Low Overlap category. However, the ranking model performs in par with other hard categories for these mentions. Overall EL accuracy can thus be improved significantly by strengthening candidate generation. Mention Category Recall@64 EL Accuracy N. Acc. U. Acc. High Overlap 99.28 87.64 87.00 Ambiguous Substring 88.03 75.89 66.81 Multiple categories 84.88 77.27 65.59 Low Overlap 54.37 71.46 38.85 Table 7: Performance on test domains categorized by mention categories. Recall@64 indicates top-64 performance of candidate generation. N. Acc. and U. Acc. are respectively the normalized and unnormalized accuracies. 7 Related Work We discussed prior entity linking task definitions and compared them to our task in section 2. Here, we briefly overview related entity linking models and unsupervised domain adaptation methods. Entity linking models Entity linking given mention boundaries as input can be broken into the tasks of candidate generation and candidate ranking. When frequency information or alias tables are unavailable, prior work has used measures of similarity of the mention string to entity names for candidate generation (Sil et al., 2012; Murty et al., 2018). For candidate ranking, recent work employed distributed representations of mentions in context and entity candidates and neural models to score their compatibility. Mentions in context have been represented using e.g., CNN (Murty et al., 2018), LSTM (Gupta et al., 2017), or bag-of-word embeddings (Ganea and Hofmann, 2017). Entity descriptions have been represented using similar architectures. To the best of our knowledge, while some models allow for crossattention between single-vector entity embeddings and mention-in-context token representations, no prior works have used full cross-attention between mention+context and entity descriptions. Prior work on entity linking tasks most similar to ours used a linear model comparing a mention in context to an entity description and associated structured data (Sil et al., 2012). Sil et al. (2012) also proposed a distant supervision approach which could use first-pass predictions for mentions in the target domain as noisy supervision for re-training an in-domain model. We believe this approach is complementary to unsupervised representation learning and could bring additional benefits. In another task similar to ours, Wang et al. (2015) used collective inference and 3457 target database relations to obtain good performance without (domain, target database)-specific labeled training data. Collective inference is another promising direction, but could have limited success when no metadata is available. Unsupervised domain adaptation There is a large body of work on methods for unsupervised domain adaptation, where a labeled training set is available for a source domain and unlabeled data is available for the target domain. 
The majority of work in this direction assume that training and test examples consist of (x, y) pairs, where y is in a fixed shared label set Y. This assumption holds for classification and sequence labeling, but not for zero-shot entity linking, since the source and target domains have disjoint labels. Most state-of-the-art methods learn non-linear shared representations of source and target domain instances, through denoising training objectives (Eisenstein, 2018). In Section 5, we overviewed such work and proposed an improved domain adaptive pre-training method. Adversarial training methods (Ganin et al., 2016), which have also been applied to tasks where the space Y is not shared between source and target domains (Cohen et al., 2018), and multisource domain adaptation methods (Zhao et al., 2018; Guo et al., 2018) are complementary to our work and can contribute to higher performance. 8 Conclusion We introduce a new task for zero-shot entity linking, and construct a multi-world dataset for it. The dataset can be used as a shared benchmark for entity linking research focused on specialized domains where labeled mentions are not available, and entities are defined through descriptions alone. A strong baseline is proposed by combining powerful neural reading comprehension with domainadaptive pre-training. Future variations of the task could incorporate NIL recognition and mention detection (instead of mention boundaries being provided). The candidate generation phase leaves significant room for improvement. We also expect models that jointly resolve mentions in a document would perform better than resolving them in isolation. Acknowledgements We thank Rahul Gupta and William Cohen for providing detailed helpful feedback on an earlier draft of this paper. We thank the Google AI Language Team for valuable suggestions and feedback. References Razvan Bunescu and Marius Pasca. 2006. Using encyclopedic knowledge for named entity disambiguation. In 11th Conference of the European Chapter of the Association for Computational Linguistics. Devendra Singh Chaplot and Ruslan Salakhutdinov. 2018. Knowledge-based word sense disambiguation using topic models. In Proceedings of the ThirtySecond AAAI Conference on Artificial Intelligence. Minmin Chen, Zhixiang Xu, Kilian Weinberger, and Fei Sha. 2012. Marginalized denoising autoencoders for domain adaptation. In Proceedings of the 29th International Conference on Machine Learning. Daniel Cohen, Bhaskar Mitra, Katja Hofmann, and W. Bruce Croft. 2018. Cross domain regularization for neural ranking models using adversarial learning. In The 41st International ACM SIGIR Conference on Research; Development in Information Retrieval. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Jacob Eisenstein. 2018. Natural Language Processing. MIT Press. Octavian-Eugen Ganea and Thomas Hofmann. 2017. Deep joint entity disambiguation with local neural attention. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, Francois Laviolette, Mario Marchand, and Victor Lempitsky. 2016. Domain-adversarial training of neural networks. The Journal of Machine Learning Research, 17(1):2096–2030. 
Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Domain adaptation for large-scale sentiment classification: A deep learning approach. In Proceedings of the 28th International Conference on Machine Learning. Jiang Guo, Darsh Shah, and Regina Barzilay. 2018. Multi-source domain adaptation with mixture of experts. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. 3458 Nitish Gupta, Sameer Singh, and Dan Roth. 2017. Entity linking via joint encoding of types, descriptions, and context. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Luheng He, Kenton Lee, Omer Levy, and Luke Zettlemoyer. 2018. Jointly predicting predicates and arguments in neural semantic role labeling. arXiv preprint arXiv:1805.04787. Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. Phong Le and Ivan Titov. 2018. Improving entity linking by modeling latent relations between mentions. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. Xiao Ling, Sameer Singh, and Daniel S. Weld. 2015. Design challenges for entity linking. Transactions of the Association for Computational Linguistics. David Milne and Ian H. Witten. 2008. Learning to link with wikipedia. In Proceedings of the 17th ACM Conference on Information and Knowledge Management. Shikhar Murty, Patrick Verga, Luke Vilnis, Irena Radovanovic, and Andrew McCallum. 2018. Hierarchical losses and new resources for fine-grained entity typing and linking. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding with unsupervised learning. Technical report, OpenAI. Dan Roth, Heng Ji, Ming-Wei Chang, and Taylor Cassidy. 2014. Wikification and beyond: The challenges of entity and concept grounding. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics: Tutorials. Avi Sil, Heng Ji, Dan Roth, and Silviu-Petru Cucerzan. 2018. Multi-lingual entity discovery and linking. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, Tutorial Abstracts. Avirup Sil, Ernest Cronin, Penghai Nie, Yinfei Yang, Ana-Maria Popescu, and Alexander Yates. 2012. Linking named entities to any database. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems. Han Wang, Jin Guang Zheng, Xiaogang Ma, Peter Fox, and Heng Ji. 2015. Language and domain independent entity linking with quantified collective validation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Yi Yang and Jacob Eisenstein. 2015. Unsupervised multi-domain adaptation with feature embeddings. 
In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Han Zhao, Shanghang Zhang, Guanhang Wu, Jos´e MF Moura, Joao P Costeira, and Geoffrey J Gordon. 2018. Adversarial multiple source domain adaptation. In Advances in Neural Information Processing Systems. A Examining model errors and predictions In tables 8, 9, 10, 11 we show some example mentions and model predictions. For each instance, the examples show the correct gold entity and the top5 predictions from the model. Examples show 32 token contexts centered around mentions and the first 32 tokens of candidate entity documents. 3459 Coronation Street Mention Robbie pulled over the ambulance with a van and used a gun to get the Prison Officer with Tony to release him . He integrated himself with the Street residents , finding Gold Entity Prison Officer (Episode 7351) The unnamed Prison Officer was on duty during May 2010 in the Highfield Prison dining room when Tony Gordon provoked a fight with a fellow inmate Top-5 predictions Prison Officer (Episode 7351) The unnamed Prison Officer was on duty during May 2010 in the Highfield Prison dining room when Tony Gordon provoked a fight with a fellow inmate Inmate (Episode 7351) The Inmate was an unnamed fellow prisoner of Tony Gordon in Highfield Prison . Tony provoked a fight in the dining room with the inmate by staring Police Officer (Simon Willmont) The unnamed Police Officer was on duty at Weatherfield Police Station in March 2010 when Peter Barlow was released from custody following his arrest as he Prison Officer (Bill Armstrong) The Prison Officer looked after the incarceration of three Coronation Street residents : In November 2000 he was on duty at Strangeways Jail when Jim McDonald Robbie Sloane Quietly spoken Robbie Sloane was Tony Gordon ’ s henchman and a convicted murderer , who he met while sharing a cell at Highfield Prison in 2010 . When Robbie Table 8: Mention and entity candidates from Coronation Street. Muppets Mention Bean Bunny was introduced during the seventh season of ” Muppet Babies ” , and a pre - teen Bean would later be featured as part of the Muppet Kids series . Bean was active Gold Entity Bean Bunny (Muppet Kids) A young version of Bean Bunny made a few appearances in the Muppet Kids books and video games . Young Bean moves to the Muppet Kids Top-5 predictions Baby Bean Bunny Baby Bean Bunny appeared in the late 1989 / 1990 seasons of ” Muppet Babies ” as a baby version of Bean Bunny . He joined the other babies Bean Bunny (Muppet Kids) A young version of Bean Bunny made a few appearances in the Muppet Kids books and video games . Young Bean moves to the Muppet Kids Bean Bunny Bean Bunny first appeared in 1986 as the star of the TV special ” The Tale of the Bunny Picnic ” . The cute bunny was part of a family Piggy (Muppet Kids) A pre - teen version of Miss Piggy , as seen in the ” Muppet Kids ” books and video games . Piggy lives in a fancy Muppet Kids Muppet Kids was a series of books and educational software made in the 1990s , featuring young , pre - teen versions of the principal franchise characters . Characters included Table 9: Mention and entity candidates from Muppets. 3460 Ice Hockey Mention 1979 - 80 PCJHL Season This is a list of Peace - Cariboo Junior Hockey League Standings for the 1979 80 season . 
This was the PCJHL ’ s final Gold Entity Rocky Mountain Junior Hockey League The Rocky Mountain Junior Hockey League was a Canadian Junior ” A ” ice hockey league in British Columbia . History . Promoted to a Junior ” Top-5 predictions Peace Junior Hockey League Hockey League Peace Junior Hockey League is a League that started in the 1960 ’ s and ended in 1975 . Then change its name to Peace Cariboo junior Hockey Cariboo Hockey League The Cariboo Hockey League was a Senior and Intermediate hockey league in the Cariboo District of British Columbia , Canada . History . The league began in the 1955 Cariboo Junior League The Cariboo Junior League operated in northern British Columbia in the 1963 64 season . Its champion was eligible for the British Columbia Junior Playoffs . The league Rocky Mountain Junior Hockey League The Rocky Mountain Junior Hockey League was a Canadian Junior ” A ” ice hockey league in British Columbia . History . Promoted to a Junior ” North West Junior Hockey League The North West Junior Hockey League is a Junior ” B ” ice hockey league operating in the Peace River region of Alberta and British Columbia , Table 10: Mention and entity candidates from Ice Hockey. Elder Scrolls Mention to get everyone to safety . Rolunda ’ s brother is one of those people . The Frozen Man . Rolunda ’ s brother Eiman has ventured into Orkey ’ s Hollow to find Gold Entity The Frozen Man (Quest) The Frozen Man is a quest available in The Elder Scrolls Online. It involves finding a Nord who has been trapped in ice by a mysterious ” Frozen Man Top-5 predictions The Frozen Man (Quest) The Frozen Man is a quest available in The Elder Scrolls Online. It involves finding a Nord who has been trapped in ice by a mysterious ” Frozen Man The Frozen Man The Frozen Man is an insane Bosmer ghost found in Orkey ’ s Hollow . He says he was in a group of people inside the cave when it Kewan Kewan is a Redguard worshipper of the Daedric Prince Peryite . He is frozen in a trance that relates to the Daedric quest , but can be unfrozen in completion the Stromgruf the Steady Stromgruf the Steady is the Nord who is found in the Grazelands of Vvardenfell , west of Pulk and east of Vassamsi Grotto ( Online ) . He is Maren the Seal Maren the Seal is a Nord hunter and worshipper of the Daedric Prince Peryite . She is frozen in a trance that relates to the Daedric Prince ’ s Table 11: Mention and entity candidates from Elder Scrolls.
2019
335
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3461–3471 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3461 Dual Adversarial Neural Transfer for Low-Resource Named Entity Recognition Joey Tianyi Zhou1,†, Hao Zhang2,†, Di Jin3, Hongyuan Zhu4,‡, Meng Fang5, Rick Siow Mong Goh1, Kenneth Kwok1 1IHPC, A*STAR 2A*AI, A*STAR 3CSAIL, MIT 4I2R, A*STAR 5Tencent Robotics X {zhouty,gohsm,kenkwok}@ihpc.a-star.edu.sg, {zhang hao@scei,zhuh@i2r}.a-star.edu.sg, [email protected], [email protected] Abstract We propose a new neural transfer method termed Dual Adversarial Transfer Network (DATNet) for addressing low-resource Named Entity Recognition (NER). Specifically, two variants of DATNet, i.e., DATNet-F and DATNet-P, are investigated to explore effective feature fusion between high and low resource. To address the noisy and imbalanced training data, we propose a novel Generalized Resource-Adversarial Discriminator (GRAD). Additionally, adversarial training is adopted to boost model generalization. In experiments, we examine the effects of different components in DATNet across domains and languages, and show that significant improvement can be obtained especially for lowresource data, without augmenting any additional hand-crafted features and pre-trained language model. 1 Introduction Named entity recognition (NER) is an important step in most natural language processing (NLP) applications. It detects not only the type of named entity, but also the entity boundaries, which requires deep understanding of the contextual semantics to disambiguate the different entity types of same tokens. To tackle this challenging problem, most early studies were based on handcrafted rules, which suffered from limited performance in practice. Current methods are devoted to developing learning based algorithms, especially neural network based methods, and have been advancing the state-of-the-art progressively (Collobert et al., 2011; Huang et al., 2015; Lample et al., 2016; Chiu and Nichols, 2016; Ma and Hovy, 2016). These end-to-end models generalize well on new entities based on features automatically learned from the data. However, when † The first two authors contributed equally. ‡ Corresponding author. the annotated corpora is small, especially in the low resource scenario (Zhang et al., 2016), the performance of these methods degrades significantly since the hidden feature representations cannot be learned adequately. Recently, more and more approaches have been proposed to address low-resource NER. Early works (Chen et al., 2010; Li et al., 2012) primarily assumed a large parallel corpus and focused on exploiting them to project information from high- to low-resource. Unfortunately, such a large parallel corpus may not be available for many low-resource languages. More recently, crossresource word embedding (Fang and Cohn, 2017; Adams et al., 2017; Yang et al., 2017) was proposed to bridge the low- and high-resources and enable knowledge transfer. Although the aforementioned transfer-based methods show promising performance in low-resource NER, there are two issues remain further study: 1) Representation Difference - they did not consider the representation difference across resources and enforced the feature representation to be shared across languages/domains; 2) Resource Data Imbalance the training size of high-resource is usually much larger than that of low-resource. 
The existing methods neglect such difference in their models, resulting in poor generalization. In this work, we present a general neural transfer framework termed Dual Adversarial Transfer Network (DATNet) to address the above issues in a unified framework for low-resource NER. Specifically, to handle the representation difference, we first investigate on two architectures of hidden layers (Bi-LSTM) for transfer. The first one is that all the units in hidden layers are common units shared across languages/domains. Another is composed of both private and common units, where the private part preserves the independent language/domain information. Extensive 3462 experiments are conducted to show that there is not always a winner and two transfer strategies have their own advantages over each other in different situations, which is largely ignored by existing research. On top of common units, the adversarial discriminator (AD) loss is introduced to encourage the resource-agnostic representation so that the knowledge from high resource can be more compatible with low resource. To handle the resource data imbalance issue, we further propose a variant of the AD loss, termed Generalized Resource-Adversarial Discriminator (GRAD), to impose the resource weight during training so that low-resource and hard samples can be paid more attention to. In addition, we create adversarial samples to conduct the Adversarial Training (AT), further improving the generalization and alleviating over-fitting problem. We unify two kinds of adversarial learning, i.e., GRAD and AT, into one transfer learning model, termed Dual Adversarial Transfer Network (DATNet), to achieve end-toend training and obtain significant improvements on a series of NER tasks In contrast with prior methods, we do not use additional hand-crafted features and do not use cross-lingual word embeddings as well as pre-trained language models (Peters et al., 2018; Radford, 2018; Akbik et al., 2018; Devlin et al., 2018) when addressing the crosslanguage tasks. 2 Related Work Named Entity Recognition NER is typically framed as a sequence labeling task which aims at automatic detection of named entities (e.g., person, organization, location and etc.) from free text (Marrero et al., 2013). The early works applied CRF, SVM, and perception models with handcrafted features (Ratinov and Roth, 2009; Passos et al., 2014; Luo et al., 2015). With the advent of deep learning, research focus has been shifting towards deep neural networks (DNN), which requires little feature engineering and domain knowledge (Lample et al., 2016; Zukov Gregoric et al., 2018; Zhou et al., 2019). (Collobert et al., 2011) proposed a feed-forward neural network with a fixed sized window for each word, which failed in considering useful relations between long-distance words. To overcome this limitation, (Chiu and Nichols, 2016) presented a bidirectional LSTM-CNNs architecture that automatically detects word- and character-level features. Ma and Hovy (2016) further extended it into bidirectional LSTM-CNNs-CRF architecture, where the CRF module was added to optimize the output label sequence. Liu et al. (2018) proposed task-aware neural language model termed LM-LSTM-CRF, where character-aware neural language models were incorporated to extract character-level embedding under a multi-task framework. Transfer Learning for NER Transfer learning can be a powerful tool to low resource NER tasks. 
To bridge high and low resources, transfer learning methods for NER can be divided into two types: parallel-corpus-based transfer and shared-representation-based transfer. Early works mainly focused on exploiting parallel corpora to project information between the high- and low-resource languages (Yarowsky et al., 2001; Chen et al., 2010; Li et al., 2012; Feng et al., 2018). For example, (Chen et al., 2010) and (Feng et al., 2018) proposed to jointly identify and align bilingual named entities. Ni et al. (Ni and Florian, 2016; Ni et al., 2017) utilized Wikipedia entity type mappings to improve low-resource NER. (Mayhew et al., 2017) created a cross-language NER system, which works well with very minimal resources, by translating annotated data of the high resource into the low resource. On the other hand, shared-representation methods do not require parallel correspondence (Rei and Søgaard, 2018). For instance, (Fang and Cohn, 2017) proposed cross-lingual word embeddings to transfer knowledge across resources. (Yang et al., 2017) presented a transfer learning approach based on deep hierarchical recurrent neural networks, where full/partial hidden features between source and target tasks are shared. (Al-Rfou' et al., 2015) built massive multilingual annotators with minimal human expertise by using language-agnostic techniques. (Cotterell and Duh, 2017) proposed character-level neural CRFs to jointly train and predict low- and high-resource languages. (Pan et al., 2017) proposed a large-scale cross-lingual named entity dataset which contains 282 languages for evaluation. In addition, multi-task learning (Yang et al., 2016; Luong et al., 2016; Rei, 2017; Aguilar et al., 2017; Hashimoto et al., 2017; Lin et al., 2018) shows that jointly training on multiple tasks/languages helps improve performance. Different from transfer learning methods, multi-task learning aims at improving the performance of all the resources instead of the low resource only.

[Figure 1: The general architecture of the proposed models - (a) Base Model (character CNN and word embeddings concatenated and fed into a bidirectional LSTM with a CRF layer); (b) DATNet-P (private source/target Bi-LSTMs plus a shared Bi-LSTM, with self-attention, gradient reversal and GRAD on the shared part, and source/target CRF layers); (c) DATNet-F (a single shared bidirectional LSTM with self-attention, gradient reversal, GRAD, and source/target CRF layers).]

Adversarial Learning. Adversarial learning originates from Generative Adversarial Nets (GAN) (Goodfellow et al., 2014), which show impressive results in computer vision. Recently, many papers have tried to apply adversarial learning to NLP tasks. (Liu et al., 2017) presented an adversarial multi-task learning framework for text classification. (Gui et al., 2017) applied an adversarial discriminator to POS tagging for Twitter. (Kim et al., 2017) proposed a language discriminator to enable language-adversarial training for cross-language POS tagging. Apart from the adversarial discriminator, adversarial training is another concept, originally introduced by (Szegedy et al., 2014; Goodfellow et al., 2015) to improve the robustness of image classification models by injecting malicious perturbations into input images.
Recently, (Miyato et al., 2017) proposed a semi-supervised text classification method by applying adversarial training, where for the first time adversarial perturbations were added onto word embeddings. (Yasunaga et al., 2018) applied adversarial training to POS tagging. Different from all these adversarial learning methods, our method is more general and integrates both the adversarial discriminator and adversarial training in a unified framework to enable end-to-end training.

3 Dual Adversarial Transfer Network

In this section, we introduce the two transfer architectures of DATNet in detail. For the base model, we follow the state-of-the-art LSTM-CNN-CRF-based structure (Huang et al., 2015; Lample et al., 2016; Chiu and Nichols, 2016; Ma and Hovy, 2016) for the NER task, as shown in Figure 1(a).

3.1 Character-level Encoder

Previous works have shown that character features can boost sequence labeling performance by capturing morphological and semantic information (Lin et al., 2018). For a low-resource dataset, character features learned from another language/domain may provide crucial information for obtaining high-quality word features and for labeling, especially for rare and out-of-vocabulary words. Character-level encoders are usually based on BiLSTMs (Lample et al., 2016) or CNNs (Chiu and Nichols, 2016; Ma and Hovy, 2016). In practice, (Reimers and Gurevych, 2017) observed that the difference between the two approaches is statistically insignificant in sequence labeling tasks, but the character-level CNN is more efficient and has fewer parameters. Thus, we use a character-level CNN and share character features between the high- and low-resource tasks to enhance the representations of the low resource.

3.2 Word-level Encoder

To learn a better word-level representation, we concatenate the character-level features of each word with a latent word embedding as $w_i = [w_i^{char}, w_i^{emb}]$, where the latent word embedding $w_i^{emb}$ is initialized with pre-trained embeddings and fixed during training. One unique characteristic of NER is that the historical and future input for a given time step could be useful for label inference. To exploit such a characteristic, we use a bidirectional LSTM architecture (Hochreiter and Schmidhuber, 1997) to extract contextualized word-level features. In this way, we can gather information from the past and future for a particular time frame $t$ as follows:

$\overrightarrow{h}_t = \mathrm{lstm}(\overrightarrow{h}_{t-1}, w_t), \qquad \overleftarrow{h}_t = \mathrm{lstm}(\overleftarrow{h}_{t+1}, w_t).$

After the LSTM layer, the representation of a word is obtained by concatenating its left and right context representations: $h_t = [\overrightarrow{h}_t, \overleftarrow{h}_t]$.

To account for the resource representation difference in word-level features, we introduce two kinds of transferable word-level encoders in our model, namely DATNet-Full Share (DATNet-F) and DATNet-Part Share (DATNet-P). In DATNet-F, all the BiLSTM units are shared by both resources while the word embeddings for the different resources are disparate; the architecture is depicted in Figure 1(c). Different from DATNet-F, DATNet-P decomposes the BiLSTM units into a shared component and resource-specific ones, as shown in Figure 1(b). Different from existing works (Yang et al., 2017; Fang and Cohn, 2017; Cotterell and Duh, 2017; Cao et al., 2018), in this work we investigate the performance of the two different shared-representation architectures on different tasks and give corresponding guidance (see Section 4.5).
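To make the encoder description above concrete, the following is a minimal PyTorch sketch of a DATNet-P-style word-level encoder: character-CNN features are concatenated with frozen pre-trained word embeddings and passed through both a private and a shared BiLSTM, whose outputs are concatenated. All class, variable, and dimension names are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class WordLevelEncoderP(nn.Module):
    """DATNet-P-style encoder: char-CNN + frozen word embeddings, then a private
    (resource-specific) BiLSTM and a shared BiLSTM whose outputs are concatenated."""
    def __init__(self, word_emb: torch.Tensor, char_dim=30, n_filters=20, hidden=100):
        super().__init__()
        self.word_emb = nn.Embedding.from_pretrained(word_emb, freeze=True)  # fixed during training
        self.char_cnn = nn.Conv1d(char_dim, n_filters, kernel_size=3, padding=1)
        in_dim = word_emb.size(1) + n_filters
        self.private_lstm = nn.LSTM(in_dim, hidden, bidirectional=True, batch_first=True)
        self.shared_lstm = nn.LSTM(in_dim, hidden, bidirectional=True, batch_first=True)

    def forward(self, word_ids, char_embs):
        # char_embs: (batch, seq_len, max_chars, char_dim); max-pool CNN output over characters
        b, t, c, d = char_embs.shape
        chars = self.char_cnn(char_embs.reshape(b * t, c, d).transpose(1, 2)).max(dim=2).values
        w = torch.cat([chars.reshape(b, t, -1), self.word_emb(word_ids)], dim=-1)
        h_priv, _ = self.private_lstm(w)   # resource-specific features
        h_shared, _ = self.shared_lstm(w)  # features later fed to the GRAD discriminator
        return torch.cat([h_priv, h_shared], dim=-1), h_shared
```

A DATNet-F-style encoder would drop the private LSTM and share a single BiLSTM across resources, while keeping separate word embeddings for source and target.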
3.3 Generalized Resource-Adversarial Discriminator

In order to make the feature representations extracted from the source domain more compatible with those from the target domain, we encourage the outputs of the shared BiLSTM part to be resource-agnostic by constructing a resource-adversarial discriminator, which is inspired by the language-adversarial discriminator proposed by (Kim et al., 2017). Unfortunately, previous works did not consider the imbalance in training size between the two resources. Specifically, the target domain consists of very limited labeled training data, e.g., 10 sentences; in contrast, labeled training data in the source domain are much richer, e.g., 10k sentences. If such imbalance is not considered during training, the stochastic gradient descent (SGD) optimization will bias the model towards the high resource (Lin et al., 2017b). To address this imbalance problem, we impose a weight α on the two resources to balance their influence. However, in experiments we also observe that easily classified samples from the high resource comprise the majority of the loss and dominate the gradient. To overcome this issue, we further propose the Generalized Resource-Adversarial Discriminator (GRAD), which assigns an adaptive weight to each sample and thereby focuses model training on hard samples. To compute the GRAD loss, the output sequence of the shared BiLSTM is first encoded into a single vector via a self-attention module (Bahdanau et al., 2015), and then projected into a scalar $r$ via a linear transformation. The loss function of the resource classifier is formulated as:

$\ell_{GRAD} = -\sum_i \big\{ \mathbb{I}_{i \in D_S}\, \alpha (1-r_i)^{\gamma} \log r_i \;+\; \mathbb{I}_{i \in D_T}\, (1-\alpha)\, r_i^{\gamma} \log(1-r_i) \big\} \quad (1)$

where $\mathbb{I}_{i \in D_S}$ and $\mathbb{I}_{i \in D_T}$ are indicator functions denoting whether a sentence comes from the high resource (source) or the low resource (target), respectively; $\alpha$ is a weighting factor that balances the loss contributions from the high and low resources; the factor $(1-r_i)^{\gamma}$ (or $r_i^{\gamma}$) controls the loss contribution of individual samples by measuring the discrepancy between the prediction and the true label (easy samples contribute less); and $\gamma$ scales the contrast between the loss contributions of hard and easy samples. In practice, the value of $\gamma$ does not need much tuning and is set to 2 in our experiments. Intuitively, the weighting factors $\alpha$ and $(1-r_i)^{\gamma}$ reduce the loss contributions from the high resource and from easy samples, respectively. Note that although the resource classifier is optimized to minimize the resource classification error, when the gradients originating from the resource classification loss are back-propagated to the model parts other than the resource classifier, they are negated for the parameter updates, so that these bottom layers are trained to be resource-agnostic.
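As a concrete illustration of Eq. (1), here is a minimal PyTorch sketch of the GRAD loss together with the gradient-reversal operation described above. The class and function names (GradReverse, grad_loss) and the default hyperparameter values are illustrative assumptions rather than the authors' implementation.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates gradients in the backward pass, so the
    shared encoder below the discriminator is pushed towards resource-agnostic features."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

def grad_loss(r, is_source, alpha=0.25, gamma=2.0):
    """Eq. (1): r is the discriminator's sigmoid output per sentence (probability of
    being a source sentence); is_source is 1.0 for source sentences, 0.0 for target."""
    eps = 1e-8
    src_term = alpha * (1.0 - r).pow(gamma) * torch.log(r + eps)
    tgt_term = (1.0 - alpha) * r.pow(gamma) * torch.log(1.0 - r + eps)
    return -(is_source * src_term + (1.0 - is_source) * tgt_term).sum()
```

In use, the self-attention-pooled output of the shared BiLSTM would be passed through GradReverse.apply before the linear-plus-sigmoid discriminator, so that minimising grad_loss trains the discriminator while pushing the shared encoder towards resource-agnostic representations.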
3.4 Label Decoder

The label decoder induces a probability distribution over sequences of labels, conditioned on the word-level encoder features. In this paper, we use a linear-chain model based on a first-order Markov structure, the chain conditional random field (CRF) (Lafferty et al., 2001), as the decoder. In this decoder there are two kinds of cliques: local cliques and transition cliques. Local cliques correspond to the individual elements in the sequence, while transition cliques reflect the evolution of states between two neighboring elements at times $t-1$ and $t$; we denote the transition distribution by $\theta$. Formally, a linear-chain CRF can be written as

$p(y \mid h_{1:T}) = \frac{1}{Z(h_{1:T})} \exp\Big\{ \sum_{t=2}^{T} \theta_{y_{t-1}, y_t} + \sum_{t=1}^{T} W_{y_t} h_t \Big\},$

where $Z(h_{1:T})$ is a normalization term and $y = y_{1:T}$ is the sequence of predicted labels. Model parameters are optimized to maximize this conditional log-likelihood, which acts as the objective function of the model. We define the loss functions for the source and target resources as $\ell_S = -\sum_i \log p(y \mid h_{1:T})$ and $\ell_T = -\sum_i \log p(y \mid h_{1:T})$, where the sums run over source and target sentences, respectively.

3.5 Adversarial Training

So far our model can be trained end-to-end with standard back-propagation by minimizing the following loss:

$\ell = \ell_{GRAD} + \ell_S + \ell_T \quad (2)$

Recent works have demonstrated that deep learning models are fragile to adversarial examples (Goodfellow et al., 2015). In computer vision, such adversarial examples can be constructed by changing a very small number of pixels, which is virtually indistinguishable to human perception (Pin-Yu et al., 2018). Recently, adversarial samples have been widely incorporated into training to improve the generalization and robustness of models, an approach known as adversarial training (AT) (Miyato et al., 2017). It has emerged as a powerful regularization tool that stabilizes training and prevents the model from getting stuck in a local minimum. In this paper, we explore AT in the context of NER. Specifically, we prepare an adversarial sample by adding to the original sample a perturbation, bounded by a small norm $\epsilon$, that maximizes the loss function:

$\eta_x = \arg\max_{\eta:\, \|\eta\|_2 \le \epsilon} \ell(\Theta; x + \eta) \quad (3)$

where $\Theta$ is the current set of model parameters. However, we cannot calculate the value of $\eta$ exactly in general, because the exact optimization with respect to $\eta$ is intractable in neural networks. Following the strategy in (Goodfellow et al., 2015), this value can be approximated by linearizing the loss:

$\eta_x = \epsilon \frac{g}{\|g\|_2}, \quad \text{where } g = \nabla \ell(\Theta; x),$

and $\epsilon$ is determined on the validation set. In this way, adversarial examples are generated by adding small perturbations to the inputs in the direction that most significantly increases the loss function of the model. We find such an $\eta$ against the current model parameterized by $\Theta$ at each training step, and construct an adversarial example as $x_{adv} = x + \eta_x$. Note that we generate these adversarial examples on the word and character embedding layers, respectively, as shown in Figures 1(b) and 1(c). The classifier is then trained on the mixture of original and adversarial examples to improve generalization. To this end, we augment the loss in Eq. 2 and define the loss function for adversarial training as:

$\ell_{AT} = \ell(\Theta; x) + \ell(\Theta; x_{adv}) \quad (4)$

where $\ell(\Theta; x)$ and $\ell(\Theta; x_{adv})$ represent the losses from an original example and its adversarial counterpart, respectively. Note that we present AT in a general form for convenience of presentation; for different samples, the loss and parameters should correspond to their counterparts. For example, for the source data with word embedding $w_S$, the loss is defined as $\ell_{AT} = \ell(\Theta; w_S) + \ell(\Theta; w_{S,adv})$ with $w_{S,adv} = w_S + \eta_{w_S}$ and $\ell = \ell_{GRAD} + \ell_S$. Similarly, we compute the perturbations $\eta_c$ for the character embeddings and $\eta_{w_T}$ for the target word embeddings.
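The following is a minimal PyTorch sketch of the adversarial-training step in Eqs. (3)-(4): the loss gradient with respect to the embeddings is normalised and scaled by epsilon, and the clean and adversarial losses are summed. The model.embed / model.forward_from_embeddings interface and the function names are assumptions made for illustration, not the authors' API.

```python
import torch

def adversarial_perturbation(loss, emb, epsilon=5.0):
    """Approximate eta = epsilon * g / ||g||_2, with g the gradient of the loss w.r.t. emb."""
    g, = torch.autograd.grad(loss, emb, retain_graph=True)
    return epsilon * g / (g.norm(p=2) + 1e-12)

def adversarial_training_loss(model, loss_fn, word_ids, char_embs, labels, epsilon=5.0):
    # `model.embed` and `model.forward_from_embeddings` are assumed hooks exposing the
    # differentiable embedding layer and the rest of the network, respectively.
    emb = model.embed(word_ids)
    clean_loss = loss_fn(model.forward_from_embeddings(emb, char_embs), labels)
    eta = adversarial_perturbation(clean_loss, emb, epsilon)       # Eq. (3), linearised
    adv_out = model.forward_from_embeddings(emb + eta.detach(), char_embs)
    return clean_loss + loss_fn(adv_out, labels)                   # Eq. (4)
```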
4 Experiments

4.1 Datasets

In order to evaluate the performance of DATNet, we conduct experiments on the following widely used NER datasets: CoNLL-2003 English NER (Kim and De, 2003), CoNLL-2002 Spanish & Dutch NER (Kim, 2002), and WNUT-2016 & 2017 English Twitter NER (Zeman, 2017). The statistics of these datasets are described in Table 1. We use the official split of train/validation/test sets.

Table 1: Statistics of the CoNLL and WNUT Named Entity Recognition datasets. Each cell reports # tokens (# entities).
CoNLL-2003 | Source | English | train 204,567 (23,499) | dev 51,578 (5,942) | test 46,666 (5,648)
Cross-language NER:
CoNLL-2002 | Target | Spanish | train 207,484 (18,797) | dev 51,645 (4,351) | test 52,098 (3,558)
CoNLL-2002 | Target | Dutch | train 202,931 (13,344) | dev 37,761 (2,616) | test 68,994 (3,941)
Cross-domain NER:
WNUT-2016 | Target | English | train 46,469 (2,462) | dev 16,261 (1,128) | test 61,908 (5,955)
WNUT-2017 | Target | English | train 62,730 (3,160) | dev 15,733 (1,250) | test 23,394 (1,740)

Different from previous works that either append a one-hot gazetteer feature to the input of the CRF layer (Collobert et al., 2011; Chiu and Nichols, 2016; Yang et al., 2017) or introduce orthographic features as additional input for learning social media NER in tweets (Partalas et al., 2016; Limsopatham and Collier, 2016; Aguilar et al., 2017), we do not use hand-crafted features: only words and characters are considered as inputs. Our goal is to study the effects of transferring knowledge from a high-resource dataset to a low-resource dataset. Note that we used only the training set for model training for all datasets except WNUT-2016, since for this dataset all previous studies merged the training and validation sets for training. Specifically, we use the CoNLL-2003 English NER dataset as the high resource (i.e., source) for all experiments, and the CoNLL-2002 and WNUT datasets as the low resource (i.e., target) in the cross-language and cross-domain NER settings, respectively.
4.2 Experimental Setup

We use publicly available pre-trained 50-dimensional word embeddings for English, Spanish and Dutch for the CoNLL and WNUT datasets in our experiments, which are trained by word2vec on the corresponding Wikipedia articles (Lin et al., 2018); 30-dimensional randomly initialized character embeddings are used for all datasets. We set the number of filters of the character-level CNN to 20 and the dimension of the hidden states of the word-level LSTM to 200 for both the base model and DATNet-F. For DATNet-P, we use 100 hidden units each for the source, shared, and target LSTMs. Parameter optimization is performed with Adam (Kingma and Ba, 2015) with gradient clipping of 5.0 and a learning rate decay strategy. We set the initial learning rate to β0 = 0.001 for all experiments. At each epoch t, the learning rate βt is updated as βt = β0/(1 + ρ × t), where ρ = 0.05 is the decay rate. To reduce over-fitting, we apply Dropout (Srivastava et al., 2014) to the embedding layer and to the output of the LSTM layer, respectively.
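A minimal sketch of this optimisation schedule (Adam, gradient clipping at 5.0, and the per-epoch decay βt = β0/(1 + ρ·t)) is given below; the model, data loader, and loss function are placeholders for illustration, not the authors' code.

```python
import torch

def train(model, train_loader, compute_total_loss, num_epochs=50,
          beta_0=1e-3, rho=0.05, clip=5.0):
    """Adam with gradient clipping and the decay beta_t = beta_0 / (1 + rho * t)."""
    optimizer = torch.optim.Adam(model.parameters(), lr=beta_0)
    for epoch in range(num_epochs):
        # update the learning rate once per epoch, as described in Section 4.2
        lr_t = beta_0 / (1.0 + rho * epoch)
        for group in optimizer.param_groups:
            group["lr"] = lr_t
        for batch in train_loader:
            optimizer.zero_grad()
            loss = compute_total_loss(model, batch)   # e.g. l_GRAD + l_S + l_T (+ AT terms)
            loss.backward()
            torch.nn.utils.clip_grad_norm_(model.parameters(), clip)
            optimizer.step()
```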
4.3 Comparison with State-of-the-Art Results

In this section, we compare our approach with state-of-the-art methods on the CoNLL and WNUT benchmark datasets. Note that our models do not use any additional large-scale language resources, so for a fair comparison we do not consider pre-trained language models (Peters et al., 2018; Radford, 2018; Devlin et al., 2018). In these experiments, we exploit all the source data (i.e., CoNLL-2003 English NER) and the target data to improve performance on the target tasks. The averaged results with standard deviations over 10 repeated runs are summarized in Table 2, and we also report the best result on each task for fair comparison with other SOTA methods. From the results, we observe that incorporating the additional resource helps to improve performance. DATNet-P achieves the highest F1 score on CoNLL-2002 Spanish and the second-highest F1 score on CoNLL-2002 Dutch, while DATNet-F beats the others on the WNUT-2016 and 2017 Twitter datasets. Different from other SOTA models, DATNets do not use any additional features. (We are not sure whether (Feng et al., 2018) incorporated the validation set into training; if we merge the training and validation sets, we can push the F1 score to 88.71.)

Table 2: Comparison with state-of-the-art results on the CoNLL and WNUT datasets (F1-score). Additional features are listed as POS / Gazetteers / Orthographic; results are given for CoNLL-2002 Spanish, CoNLL-2002 Dutch, WNUT-2016, and WNUT-2017 ("-" = not reported). Our results are given as best and mean ± std over 10 runs.
Mono-language/domain:
(Gillick et al., 2016) | × / × / × | Spanish 82.59 | Dutch 82.84 | - | -
(Lample et al., 2016) | × / √ / × | Spanish 85.75 | Dutch 81.74 | - | -
(Partalas et al., 2016) | √ / √ / √ | - | - | WNUT-2016 46.16 | -
(Limsopatham and Collier, 2016) | × / × / √ | - | - | WNUT-2016 52.41 | -
(Lin et al., 2017a) | √ / √ / × | - | - | - | WNUT-2017 40.42
Our Base Model | × / × / × | Spanish 85.53, 85.35±0.15 | Dutch 85.55, 85.24±0.21 | WNUT-2016 44.96, 44.37±0.31 | WNUT-2017 35.20, 34.67±0.34
Cross-language/domain:
(Yang et al., 2017) | × / √ / × | Spanish 85.77 | Dutch 85.19 | - | -
(Lin et al., 2018) | × / √ / × | Spanish 85.88 | Dutch 86.55 | - | -
(Feng et al., 2018) | √ / × / × | Spanish 86.42 | Dutch 88.39 | - | -
(von Däniken and Cieliebak, 2017) | × / √ / × | - | - | WNUT-2016 40.78 | -
(Aguilar et al., 2017) | √ / × / √ | - | - | - | WNUT-2017 41.86
DATNet-P | × / × / × | Spanish 88.16, 87.89±0.18 | Dutch 88.32, 88.09±0.13 | WNUT-2016 50.85, 50.41±0.32 | WNUT-2017 41.12, 40.52±0.38
DATNet-F | × / × / × | Spanish 87.04, 86.79±0.20 | Dutch 87.77, 87.52±0.19 | WNUT-2016 53.43, 53.03±0.24 | WNUT-2017 42.83, 42.32±0.32

4.4 Transfer Learning Performance

In this section, we investigate the improvements from transfer learning under multiple low-resource settings with partial target data. To simulate a low-resource setting, we randomly select subsets of the target data with data ratios of 0.05, 0.1, 0.2, 0.4, 0.6, and 1.0. The results for cross-language and cross-domain transfer are shown in Figures 2(a) and 2(b), respectively, where we compare the results of each part of DATNet under the various data ratios.

[Figure 2: Comparison with different target data ratios (F1-score vs. target dataset ratio) for (a) CoNLL-2002 Spanish and (b) WNUT-2016 Twitter, comparing Base, Base + AT, F-Transfer, P-Transfer, DATNet-F, and DATNet-P, where AT stands for adversarial training and F(P)-Transfer denotes DATNet-F(P) without AT.]

From these figures, we make the following observations: 1) both adversarial training and the adversarial discriminator in DATNet consistently contribute to the performance improvement; 2) the transfer learning component in DATNet consistently improves over the base model results, and the improvement margin is more substantial when the target data ratio is lower. For example, when the data ratio is 0.05, DATNet-P outperforms the base model by more than 4% absolute F1-score on Spanish NER, and DATNet-F improves by around 13% absolute F1-score over the base model on WNUT-2016 NER.

In the second experiment, we further investigate DATNet in extremely low-resource cases, where the number of target training sentences is 10, 50, 100, 200, 500, or 1,000. This setting is quite challenging and few previous works have studied it. The results are summarized in Table 3.

Table 3: Experiments on extremely low resource (F1-score), for 10 / 50 / 100 / 200 / 500 / 1000 target training sentences.
CoNLL-2002 Spanish NER:
Base: 21.53 / 42.18 / 48.35 / 63.66 / 68.83 / 76.69
+ AT: 19.23 / 41.01 / 50.46 / 64.83 / 70.85 / 77.91
+ P-Transfer: 29.78 / 61.09 / 64.78 / 66.54 / 72.94 / 78.49
+ F-Transfer: 39.72 / 63.00 / 63.36 / 66.39 / 72.88 / 78.04
DATNet-P: 39.52 / 62.57 / 64.05 / 68.95 / 75.19 / 79.46
DATNet-F: 44.52 / 63.89 / 66.67 / 68.35 / 74.24 / 78.56
WNUT-2016 Twitter NER:
Base: 3.80 / 14.07 / 17.99 / 26.20 / 31.78 / 36.99
+ AT: 4.34 / 16.87 / 18.43 / 26.32 / 35.68 / 41.69
+ P-Transfer: 7.71 / 16.17 / 20.43 / 29.20 / 34.90 / 41.20
+ F-Transfer: 15.26 / 20.04 / 26.60 / 32.22 / 38.35 / 44.81
DATNet-P: 9.94 / 17.09 / 25.39 / 30.71 / 36.05 / 42.30
DATNet-F: 17.14 / 22.59 / 28.41 / 32.48 / 39.20 / 45.25

We have two interesting observations: 1) DATNet-F outperforms DATNet-P on cross-language transfer when the target resource is extremely low; however, the situation is reversed when the target dataset is large enough (for this specific dataset, the threshold is around 100 sentences); 2) DATNet-F is always superior to DATNet-P on cross-domain transfer. For the first observation, DATNet-F, with more shared hidden units, transfers knowledge more efficiently than DATNet-P when the data size is extremely small. For the second observation, because cross-domain transfer stays within the same language, more knowledge is common between the source and target domains, requiring more shared hidden features to carry this knowledge compared to cross-language transfer. Therefore, for cross-language transfer with extremely low resources and for cross-domain transfer, we suggest using DATNet-F to achieve better performance; for cross-language transfer with relatively more training data, DATNet-P is preferred.
[Figure 3: t-SNE visualization of the features extracted from the shared bidirectional LSTM layer when (a) no adversarial discriminator (AD), (b) a normal AD, and (c) GRAD is used. Red points denote source CoNLL-2003 English examples; blue points denote target CoNLL-2002 Spanish examples.]

4.5 Ablation Study of DATNet

In the proposed DATNet, both GRAD and AT play important roles in low-resource NER. In this experiment, we further investigate how GRAD and AT help to transfer knowledge across languages/domains. In the first experiment, we use t-SNE (Maaten and Hinton, 2008) to visualize the feature distributions of the BiLSTM outputs without AD, with a normal AD (GRAD without considering data imbalance), and with the proposed GRAD in Figure 3. From this figure, we can see that GRAD in DATNet makes the distributions of the features extracted from the source and target datasets much more similar by considering data imbalance, which indicates that the outputs are resource-invariant. To better understand the working mechanism, Table 4 further reports a quantitative performance comparison between models with different components. We observe that GRAD is consistently superior to the normal AD regardless of the other components. There is not always a winner between DATNet-P and DATNet-F across settings: the DATNet-P architecture is more suitable for cross-language transfer, while DATNet-F is more suitable for cross-domain transfer.

Table 4: Quantitative performance comparison between models with different components (F1-score). AT: adversarial training; P-T: P-Transfer; F-T: F-Transfer; AD: adversarial discriminator; GRAD: Generalized Resource-Adversarial Discriminator.
CoNLL-2002 Spanish NER:
Base 85.35 | +AT 86.12
+P-T (no AD) 86.15 | +AT +P-T (no AD) 86.90
+F-T (no AD) 85.46 | +AT +F-T (no AD) 86.17
+P-T (AD) 86.32 | +AT +P-T (AD) 87.19
+F-T (AD) 85.58 | +AT +F-T (AD) 86.38
+P-T (GRAD) 86.93 | +AT +P-T (GRAD) (DATNet-P) 88.16
+F-T (GRAD) 85.91 | +AT +F-T (GRAD) (DATNet-F) 87.04
WNUT-2016 Twitter NER:
Base 44.37 | +AT 47.41
+P-T (no AD) 47.66 | +AT +P-T (no AD) 48.44
+F-T (no AD) 49.79 | +AT +F-T (no AD) 50.93
+P-T (AD) 48.14 | +AT +P-T (AD) 49.41
+F-T (AD) 50.48 | +AT +F-T (AD) 51.84
+P-T (GRAD) 48.91 | +AT +P-T (GRAD) (DATNet-P) 50.85
+F-T (GRAD) 51.31 | +AT +F-T (GRAD) (DATNet-F) 53.43

From the previous results, we know that AT helps to enhance the overall performance by adding perturbations to the inputs with the limit ϵ = 5, i.e., ∥η∥2 ≤ 5. In this experiment, we further investigate how the target perturbation ϵwT, with the source perturbation fixed at ϵwS = 5, affects knowledge transfer; the results on Spanish NER are summarized in Table 6. The results generally indicate that less training data requires a larger ϵ to prevent over-fitting, which further validates the necessity of AT in the case of low-resource data.

Table 6: Analysis of the maximum perturbation ϵwT in AT with varying data ratio ρ on CoNLL-2002 Spanish NER (F1-score), for ϵwT = 1.0 / 3.0 / 5.0 / 7.0 / 9.0.
ρ = 0.1: 75.90 / 76.23 / 77.38 / 77.77 / 78.13
ρ = 0.2: 81.54 / 81.65 / 81.32 / 81.81 / 81.68
ρ = 0.4: 83.62 / 83.83 / 83.43 / 83.99 / 83.40
ρ = 0.6: 84.44 / 84.47 / 84.72 / 84.04 / 84.05

Finally, we analyze the discriminator weight α in GRAD; the results are summarized in Table 5. Interestingly, we find that the best α is roughly proportional to the data ratio ρ, which means that more target training data requires a larger α (i.e., a smaller 1−α, reducing the training emphasis on the target domain) to achieve better performance.

Table 5: Analysis of the discriminator weight α in GRAD with varying data ratio ρ on CoNLL-2002 Spanish NER (F1-score), for α = 0.1 / 0.15 / 0.2 / 0.25 / 0.3 / 0.35 / 0.4 / 0.45 / 0.5 / 0.55 / 0.6 / 0.65 / 0.7 / 0.75 / 0.8.
ρ = 0.1: 78.37 / 78.63 / 78.70 / 78.32 / 77.96 / 77.92 / 77.88 / 77.78 / 77.85 / 77.90 / 77.65 / 77.57 / 77.38 / 77.49 / 77.29
ρ = 0.2: 80.99 / 81.71 / 82.18 / 81.57 / 81.53 / 81.55 / 81.44 / 81.25 / 81.32 / 81.16 / 81.02 / 81.16 / 80.63 / 80.79 / 80.54
ρ = 0.4: 83.76 / 83.73 / 84.18 / 84.48 / 84.26 / 84.12 / 83.54 / 83.40 / 83.52 / 84.18 / 83.42 / 83.47 / 83.28 / 83.33 / 83.19
ρ = 0.6: 85.18 / 85.24 / 85.85 / 85.68 / 85.84 / 86.10 / 85.71 / 85.74 / 85.42 / 85.60 / 85.20 / 85.40 / 85.26 / 85.24 / 84.98

5 Conclusion

In this paper we develop a transfer learning model, DATNet, for low-resource NER, which aims at addressing the representation difference and resource data imbalance problems. We introduce two variants, DATNet-F and DATNet-P, which can be chosen according to the cross-language/domain use case and the target dataset size. To improve model generalization, we propose dual adversarial learning strategies, i.e., AT and GRAD. Extensive experiments show the superiority of DATNet over existing models, and it achieves significant improvements on the CoNLL and WNUT NER benchmark datasets.

Acknowledgments

This paper is supported by the Singapore Government's Research, Innovation and Enterprise 2020 Plan, Advanced Manufacturing and Engineering domain (Programmatic Grant No. A1687b0033, A18A1b0045) and the Agency for Science, Technology and Research, under the AME Programmatic Funding Scheme (Project No. A18A2b0046, A1718g0048).

References

Oliver Adams, Adam Makarucha, Graham Neubig, Steven Bird, and Trevor Cohn. 2017. Cross-lingual word embeddings for low-resource language modeling. In EACL, pages 937–947.
Association for Computational Linguistics. Gustavo Aguilar, Suraj Maharjan, Adrian Pastor L´opez Monroy, and Thamar Solorio. 2017. A multitask approach for named entity recognition in social media data. In Proceedings of the 3rd Workshop on Noisy User-generated Text, pages 148–153. Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual string embeddings for sequence labeling. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1638–1649. Association for Computational Linguistics. Rami Al-Rfou’, Vivek Kulkarni, Bryan Perozzi, and Steven Skiena. 2015. Polyglot-ner: Massive multilingual named entity recognition. In SDM. Dzmitry Bahdanau, KyungHyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR. Pengfei Cao, Yubo Chen, Kang Liu, Jun Zhao, and Shengping Liu. 2018. Adversarial transfer learning for chinese named entity recognition with selfattention mechanism. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 November 4, 2018, pages 182–192. Yufeng Chen, Chengqing Zong, and Keh-Yih Su. 2010. On jointly recognizing and aligning bilingual named entities. In ACL, pages 631–639. Jason Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional lstm-cnns. Transactions of the Association for Computational Linguistics, 4:357–370. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. JMLR, pages 2493–2537. Ryan Cotterell and Kevin Duh. 2017. Lowresource named entity recognition with crosslingual, character-level neural conditional random fields. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 91–96. Asian Federation of Natural Language Processing. Pius von D¨aniken and Mark Cieliebak. 2017. Transfer learning and sentence level features for named entity recognition on tweets. In Proceedings of the 3rd Workshop on Noisy User-generated Text, pages 166–171. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805. Meng Fang and Trevor Cohn. 2017. Model transfer for tagging low-resource languages using a bilingual dictionary. In ACL, pages 587–593. Xiaocheng Feng, Xiachong Feng, Bing Qin, Zhangyin Feng, and Ting Liu. 2018. Improving low resource named entity recognition using cross-lingual knowledge transfer. In IJCAI, pages 4071–4077. Dan Gillick, Cliff Brunk, Oriol Vinyals, and Amarnag Subramanya. 2016. Multilingual language processing from bytes. In NAACL HLT, pages 1296–1306. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Advances in neural information processing systems. Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and harnessing adversarial examples. In ICLR. Tao Gui, Qi Zhang, Haoran Huang, Minlong Peng, and Xuanjing Huang. 2017. Part-of-speech tagging for twitter with adversarial neural networks. In EMNLP, pages 2411–2420. Kazuma Hashimoto, caiming xiong, Yoshimasa Tsuruoka, and Richard Socher. 2017. A joint many-task model: Growing a neural network for multiple nlp tasks. In EMNLP, pages 1923–1933. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. 
Neural computation. Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional lstm-crf models for sequence tagging. CoRR, abs/1508.01991. Joo-Kyung Kim, Young-Bum Kim, Ruhi Sarikaya, and Eric Fosler-Lussier. 2017. Cross-lingual transfer learning for pos tagging without cross-lingual resources. In EMNLP, pages 2832–2838. Sang Erik F. Tjong Kim. 2002. Introduction to the conll-2002 shared task: Language-independent named entity recognition. In COLING-02: The 6th Conference on Natural Language Learning 2002 (CoNLL-2002). Sang Erik F. Tjong Kim and Meulder Fien De. 2003. Introduction to the conll-2003 shared task: Language-independent named entity recognition. In NAACL HLT. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. CoRR, abs/1412.6980. 3470 John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In NAACL HLT, pages 260–270. Qi Li, Haibo Li, Heng Ji, Wen Wang, Jing Zheng, and Fei Huang. 2012. Joint bilingual name tagging for parallel corpora. In CIKM ’12, pages 1727–1731. Nut Limsopatham and Nigel Collier. 2016. Bidirectional lstm for named entity recognition in twitter messages. In Proceedings of the 2nd Workshop on Noisy User-generated Text (WNUT), pages 145–152. Bill Y. Lin, Frank Xu, Zhiyi Luo, and Kenny Zhu. 2017a. Multi-channel bilstm-crf model for emerging named entity recognition in social media. In Proceedings of the 3rd Workshop on Noisy Usergenerated Text, pages 160–165. T. Lin, P. Goyal, R. Girshick, K. He, and P. Dollar. 2017b. Focal loss for dense object detection. In 2017 IEEE International Conference on Computer Vision (ICCV). Ying Lin, Shengqi Yang, Veselin Stoyanov, and Heng Ji. 2018. A multi-lingual multi-task architecture for low-resource sequence labeling. In ACL. L. Liu, J. Shang, F. Xu, X. Ren, H. Gui, J. Peng, and J. Han. 2018. Empower sequence labeling with taskaware neural language model. In AAAI. Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2017. Adversarial multi-task learning for text classification. In ACL. Gang Luo, Xiaojiang Huang, Chin-Yew Lin, and Zaiqing Nie. 2015. Joint entity recognition and disambiguation. In EMNLP, pages 879–888. Minh-Thang Luong, Quoc V. Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. 2016. Multi-task sequence to sequence learning. In ICLR. Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional lstm-cnns-crf. In ACL, pages 1064–1074. Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. JMLR, 9:2579–2605. M´onica Marrero, Juli´an Urbano, Sonia S´anchezCuadrado, Jorge Morato, and Juan Miguel G´omezBerb´ıs. 2013. Named entity recognition: Fallacies, challenges and opportunities. Computer Standards & Interfaces, (5):482–489. Stephen Mayhew, Chen-Tse Tsai, and Dan Roth. 2017. Cheap translation for cross-lingual named entity recognition. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2536–2545. Association for Computational Linguistics. Takeru Miyato, Andrew M Dai, and Ian Goodfellow. 2017. Adversarial training methods for semisupervised text classification. In ICLR. Jian Ni, Georgiana Dinu, and Radu Florian. 2017. 
Weakly supervised cross-lingual named entity recognition via effective annotation and representation projection. In ACL, pages 1470–1480. Jian Ni and Radu Florian. 2016. Improving multilingual named entity recognition with wikipedia entity type mapping. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1275–1284. Xiaoman Pan, Boliang Zhang, Jonathan May, Joel Nothman, Kevin Knight, and Heng Ji. 2017. Crosslingual name tagging and linking for 282 languages. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1946–1958. Association for Computational Linguistics. Ioannis Partalas, C´edric Lopez, Nadia Derbas, and Ruslan Kalitvianski. 2016. Learning to search for recognizing named entities in twitter. In Proceedings of the 2nd Workshop on Noisy User-generated Text (WNUT), pages 171–177. Alexandre Passos, Vineet Kumar, and Andrew McCallum. 2014. Lexicon infused phrase embeddings for named entity resolution. arXiv preprint arXiv:1404.5367. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In NAACL HLT, pages 2227–2237. Association for Computational Linguistics. Chen Pin-Yu, Sharma Yash, Zhang Huan, Yi Jinfeng, and Cho-Jui Hsieh. 2018. Ead: Elastic-net attacks to deep neural networks via adversarial examples. In AAAI. Alec Radford. 2018. Improving language understanding by generative pre-training. Lev Ratinov and Dan Roth. 2009. Design challenges and misconceptions in named entity recognition. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning, pages 147– 155. Marek Rei. 2017. Semi-supervised multitask learning for sequence labeling. In ACL, pages 2121–2130. Marek Rei and Anders Søgaard. 2018. Zero-shot sequence labeling: Transferring knowledge from sentences to tokens. In NAACL HLT, pages 293–302. 3471 Nils Reimers and Iryna Gurevych. 2017. Reporting score distributions makes a difference: Performance study of lstm-networks for sequence tagging. In EMNLP, pages 338–348. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. JMLR, pages 1929–1958. Christian Szegedy, Wojciech Zaremba, Dumitru Erhan Ian Goodfellow Ilya Sutskever, Joan Bruna, and Rob Fergus. 2014. Intriguing properties of neural networks. In ICLR. Zhilin Yang, Ruslan Salakhutdinov, and William W. Cohen. 2016. Multi-task cross-lingual sequence tagging from scratch. CoRR, abs/1603.06270. Zhilin Yang, Ruslan Salakhutdinov, and William W. Cohen. 2017. Transfer learning for sequence tagging with hierarchical recurrent networks. In ICLR. David Yarowsky, Grace Ngai, and Richard Wicentowski. 2001. Inducing multilingual text analysis tools via robust projection across aligned corpora. In Proceedings of the first international conference on Human language technology research, pages 1– 8. Michihiro Yasunaga, Jungo Kasai, and Dragomir Radev. 2018. Robust multilingual part-of-speech tagging via adversarial training. In NAACL HLT, pages 976–986. Daniel et al. Zeman. 2017. Conll 2017 shared task: Multilingual parsing from raw text to universal dependencies. In Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 1–19. Boliang Zhang, Xiaoman Pan, Tianlu Wang, Ashish Vaswani, Heng Ji, Kevin Knight, and Daniel Marcu. 2016. 
Name tagging for low-resource incident languages based on expectation-driven learning. In NAACL HLT, pages 249–259. Joey Tianyi Zhou, Meng Fang, Hao Zhang, Chen Gong, Xi Peng, Zhiguo Cao, and Rick Siow Mong Goh. Learning with annotation of various degrees. IEEE Transactions on Neural Networks and Learning Systems. Joey Tianyi Zhou, Hao Zhang, Di Jin, Xi Peng, Yang Xiao, and Zhiguo Cao. 2019. Roseq: Robust sequence labeling. IEEE Transactions on Neural Networks and Learning Systems, PP:1–11. Andrej Zukov Gregoric, Yoram Bachrach, and Sam Coope. 2018. Named entity recognition with parallel recurrent neural networks. In ACL, pages 69–74.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3472–3484 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3472 Scalable Syntax-Aware Language Models Using Knowledge Distillation Adhiguna Kuncoro♠♦Chris Dyer♠Laura Rimell♠ Stephen Clark♠Phil Blunsom♠♦ ♠DeepMind, London, UK ♦Department of Computer Science, University of Oxford, UK {akuncoro,cdyer,laurarimell,clarkstephen,pblunsom}@google.com Abstract Prior work has shown that, on small amounts of training data, syntactic neural language models learn structurally sensitive generalisations more successfully than sequential language models. However, their computational complexity renders scaling difficult, and it remains an open question whether structural biases are still necessary when sequential models have access to ever larger amounts of training data. To answer this question, we introduce an efficient knowledge distillation (KD) technique that transfers knowledge from a syntactic language model trained on a small corpus to an LSTM language model, hence enabling the LSTM to develop a more structurally sensitive representation of the larger training data it learns from. On targeted syntactic evaluations, we find that, while sequential LSTMs perform much better than previously reported, our proposed technique substantially improves on this baseline, yielding a new state of the art. Our findings and analysis affirm the importance of structural biases, even in models that learn from large amounts of data. 1 Introduction Language models (LMs) based on sequential LSTMs (Hochreiter and Schmidhuber, 1997) have numerous practical applications, but it has also been shown that they do not always develop accurate syntactic generalisations (Marvin and Linzen, 2018). Thus, one strategy for improving LSTMs is to change their biases to facilitate more linguistically valid generalisations. This paper introduces a scalable method for introducing syntactic biases to LSTMs (and indeed, to any left-to-right language model trained with a cross-entropy objective) by distilling knowledge (Bucilˇa et al., 2006; Hinton et al., 2015) from recurrent neural network grammars (Dyer et al., 2016, RNNGs). RNNGs have been shown to successfully capture non-local syntactic dependencies (Kuncoro et al., 2018), achieve excellent parsing performance (Kuncoro et al., 2017; Fried et al., 2017), and correlate well with encephalography signals (Hale et al., 2018). Unfortunately, these benefits come at the expense of scalability, since the hierarchical constituent composition process (§3) within RNNGs means that the structure of the computation graph for a sentence varies according to its tree structure. Even with the help of automatic dynamic batching (Neubig et al., 2017a,b), RNNGs can be ten times slower to train than a comparable LSTM as they benefit less from specialised hardware like GPUs. As such, RNNGs are an impractical alternative to computationally convenient architectures that are used to build language models from massive corpora (Peters et al., 2018; Howard and Ruder, 2018; Devlin et al., 2019; Radford et al., 2019). As RNNGs are hard to scale, we instead use the predictions of an RNNG teacher model trained on a small training set, to guide the learning of syntactic structure in a sequential LSTM student model, which is trained on the training set in its entirety. We denote the resulting lanugage model (i.e., the student LSTM) as a distilled syntaxaware LSTM LM (DSA-LSTM). 
Intuitively, the RNNG teacher is an expert on syntactic generalisation, although it lacks the opportunity to learn the relevant semantic and common-sense knowledge from a large training corpus. By learning from both, the DSA-LSTM therefore learns from a signal that is informative for syntactic generalisation, but without sacrificing the semantic richness contained in a large corpus. Since the DSA-LSTM differs from a conventional LSTM only in its training loss, it has the same hardware-friendly computational structure as a conventional LSTM, and is therefore much 3473 faster to train. On targeted syntactic evaluations, it achieves better accuracy than: (i) a strong LSTM LM which, through careful hyperparameter tuning, performs much better than previously thought (§2); (ii) the teacher RNNG that exploits a hierarchical inductive bias but lacks scalability (§3); and (iii) a born-again network (Furlanello et al., 2018) that similarly learns from KD, albeit without a hierarchical bias from the teacher. We analyse the DSA-LSTM’s internal representation through the syntactic probe (Shi et al., 2016; Adi et al., 2017) of Blevins et al. (2018), and find that the learned representations encode hierarchical information to a large extent, despite the DSA-LSTM lacking direct access to syntactic annotation. While not directly comparable, on subject-verb agreement both the teacher RNNG and student DSA-LSTM outperform BERT (Devlin et al., 2019; Goldberg, 2019), which benefits from bidirectional information and is trained on 30 times as much data. Altogether, these findings suggest that structural biases continue to play an important role, even at massive data scales, in improving the linguistic competence of LMs. 2 Replication of Targeted Syntactic Evaluations of LSTM LMs In this section, we replicate the targeted syntactic evaluations reported by Marvin and Linzen (2018), which assess LMs’ ability to assign higher probability in grammatical/ungrammatical minimal pairs within a variety of complex syntactic structures. This will serve as our primary evaluation instrument in this paper. The following example illustrates the subjectverb agreement across an object relative clause (no complementiser) construction: • The farmer the parents love swims/∗swim. An LM succeeds on each example iff it assigns a higher probability to the grammatical sentence. Marvin and Linzen (2018) report that LSTMs, even with multi-task syntactic supervision, on aggregate still lag far behind human performance. Experimental settings. Following Marvin and Linzen (2018), we use LSTMs with 650 hidden units trained on the Wikipedia corpus of Gulordava et al. (2018). Hyperparameters are optimised based on a grid search and can be found in the Appendix. As the targeted syntactic evaluations are based on individual sentences, our LSTM models each sentence separately.1 Discussion. We present our findings in Table 1 (“Ours”); for all our models we report mean and standard deviation of 10 identical models from different random seeds. Our LSTM LM achieves much better perplexity than the LSTM LM (32% ppl. reduction) and even the multi-task LSTM (12% reduction) of Marvin and Linzen (2018). As our LSTM has the same number of hidden units, we attribute this gap to differences in optimisation and codebases. 
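The success criterion used throughout these targeted evaluations (assigning higher probability to the grammatical member of each minimal pair, as described in Section 2 above) can be made concrete with a few lines of code. The sketch below is illustrative only; lm_log_prob stands in for any left-to-right language model scorer, and the toy scorer is a placeholder.

```python
import math
from typing import Callable, List, Tuple

def evaluate_minimal_pairs(lm_log_prob: Callable[[List[str]], float],
                           pairs: List[Tuple[List[str], List[str]]]) -> float:
    """pairs holds (grammatical_tokens, ungrammatical_tokens); returns accuracy."""
    correct = sum(1 for good, bad in pairs if lm_log_prob(good) > lm_log_prob(bad))
    return correct / max(len(pairs), 1)

# Toy usage with a length-based placeholder scorer (illustration only):
toy_scorer = lambda tokens: -len(tokens) * math.log(10000.0)
pairs = [("the farmer the parents love swims .".split(),
          "the farmer the parents love swim .".split())]
print(evaluate_minimal_pairs(toy_scorer, pairs))
```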
On aggregate, our LSTM LM outperforms the LSTM multi-task model from Marvin and Linzen (2018) that exploits explicit CCG annotations, and is able to match or exceed human performance on 7 out of all 15 constructions, thus confirming earlier findings that neural language models are able to acquire complex syntactic generalisation without explicit syntactic supervision (Gulordava et al., 2018; Goldberg, 2019). Despite the small variance in perplexity (stdev 0.16 ppl.), the trained LMs exhibit large variance in accuracy for some constructions (up to stdev 0.12 for NPI across a relative clause). This observation is consistent with earlier findings that models with similar perplexity may exhibit different patterns of syntactic generalisation (Kuncoro et al., 2018; Tran et al., 2018), and serves as a caution against reporting results based on single runs. 3 Syntactic Evaluation with RNNG To what extent can a model that leverages syntactic bias and annotation do well on targeted syntactic evaluations, even when trained on less data? Here we briefly describe and assess the performance of the stack-only RNNG (Kuncoro et al., 2017) that we use as the teacher. Our choice of RNNG is motivated by its excellent number agreement performance on the Linzen et al. (2016) dataset,2 achieving 92.9% for four attractors under purely incremental decoding (Kuncoro et al., 2018). 1By modelling each sentence separately, our setup is consistent with that of Marvin and Linzen (2018) but differs from those with cross-sentential context (Mikolov et al., 2010). 2While BERT (Devlin et al., 2019) achieves even better number agreement performance (Goldberg, 2019), the results are not directly comparable since BERT operates nonincrementally and was trained on 500 times as much data. The current state of the art among models trained on the Linzen et al. (2016) training set is the adaptive universal transformer model (Dehghani et al., 2019). 3474 Marvin & Linzen models Ours Ours (small training) M&L-LSTM M&L-Multi Our LSTM Small LSTM† RNNG† Humans Gulordava et al. 
(2018) test perplexity 78.65 61.10 53.73 (±0.16) 94.54 (±0.21) 92.30 (±0.27) N/A SUBJECT-VERB AGREEMENT Simple 0.94 1.00 1.00 (±0.00) 0.89 (±0.03) 0.99 (±0.01) 0.96 In a sentential complement 0.99 0.93 0.97 (±0.02) 0.89 (±0.01) 0.93 (±0.02) 0.93 Short VP coordination 0.90 0.90 0.96 (±0.02) 0.90 (±0.03) 0.96 (±0.02) 0.94 Long VP coordination 0.61 0.81 0.82 (±0.05) 0.78 (±0.03) 0.94 (±0.03) 0.82 Across a prepositional phrase 0.57 0.69 0.89 (±0.02) 0.83 (±0.02) 0.95 (±0.01) 0.85 Across a subject relative clause 0.56 0.74 0.87 (±0.02) 0.81 (±0.04) 0.95 (±0.03) 0.88 Across an object relative clause 0.50 0.57 0.77 (±0.11) 0.54 (±0.08) 0.95 (±0.03) 0.85 Across an object relative clause (no that) 0.52 0.52 0.70 (±0.05) 0.55 (±0.07) 0.93 (±0.02) 0.82 In an object relative clause 0.84 0.89 0.90 (±0.03) 0.79 (±0.05) 0.96 (±0.01) 0.78 In an object relative clause (no that) 0.71 0.81 0.86 (±0.05) 0.72 (±0.03) 0.96 (±0.02) 0.79 Average of subject-verb agreement 0.71 0.79 0.87 (±0.02) 0.77 (±0.02) 0.95 (±0.01) 0.86 REFLEXIVE ANAPHORA Simple 0.83 0.86 0.91 (±0.01) 0.93 (±0.01) 0.83 (±0.02) 0.96 In a sentential complement 0.86 0.83 0.81 (±0.02) 0.77 (±0.03) 0.46 (±0.05) 0.91 Across a relative clause 0.55 0.56 0.64 (±0.02) 0.63 (±0.02) 0.82 (±0.02) 0.87 Average of reflexive anaphora 0.75 0.75 0.79 (±0.01) 0.78 (±0.01) 0.70 (±0.02) 0.91 NEGATIVE POLARITY ITEMS Simple 0.40 0.48 0.96 (±0.04) 0.93 (±0.06) 0.28 (±0.05) 0.98 Across a relative clause 0.41 0.73 0.75 (±0.12) 0.82 (±0.09) 0.78 (±0.06) 0.81 Average of negative polarity items 0.41 0.61 0.86 (±0.06) 0.88 (±0.05) 0.53 (±0.04) 0.90 Average of all constructions 0.68 0.75 0.85 (±0.02) 0.79 (±0.02) 0.85 (±0.02) 0.88 Table 1: Replication of Marvin and Linzen (2018) results. M&L-Multi is the Marvin and Linzen (2018) LSTM trained on LM and CCG supertagging (Bangalore and Joshi, 1999; Clark and Curran, 2007) losses with an interpolation factor of 0.5. We report our LSTM LM, small LSTM†, and RNNG† performance (†smaller training data; §3) in the format of mean (±standard deviation) of 10 identical models from different seeds. Results in bold denote the best among models trained on similar amounts of training data. 3.1 Recurrent Neural Network Grammars An RNNG defines the joint probability of surface string x and phrase-structure tree y, denoted as t(x, y). The model generates phrase-structure trees in a top-down, left-to-right manner through a series of action sequences in a process reminiscent of shift-reduce parsing. At any given state, the decision over which action to take is parameterised by a stack LSTM (Dyer et al., 2015) encoding partially-completed constituents. Let ht be the stack LSTM hidden state at time t. The next action at ∈{GEN, NT, REDUCE} is sampled according to a categorical distribution defined by an affine transformation and a softmax: at ∼softmax(Waht + ba). • If at ∈{GEN, NT}, the model samples a terminal x or a non-terminal n from each respective categorical distribution as the next input: x ∼softmax(Wxht + bx), n ∼softmax(Wnht + bn). • If at = REDUCE, the topmost stack elements going back to the last incomplete non-terminal are popped, and a composition function (here a bidirectional LSTM) is executed to represent the completed phrase on the stack. This recursive composition function constitutes a primary difference with the syntactic LM of Choe and Charniak (2016) that operates sequentially, and has been found to be crucial for achieving good number agreement (Kuncoro et al., 2018) and correlation with brain signals (Hale et al., 2018). 
The stack LSTM, composition function, lookup embeddings, and pairs of affine transformation weights and biases {W, b} are model parameters. 3.2 Experiments Here we outline the experimental settings and present our RNNG findings. Experimental settings. We implement the RNNG with DyNet and enable autobatching on GPU. Predicted phrase-structure trees for the training and validation sets of the Gulordava et al. (2018) Wikipedia dataset are obtained with a pre-trained Berkeley parser (Petrov and Klein, 2007). Since training the RNNG on the full training set with the same number of hidden units 3475 as the LSTM would take more than a month,3 we train the RNNG on ∼20% of the training set (600,000 sentences), and use a smaller hidden state size of 256 (vs. 650 for the full LSTM). As the dataset is pre-processed, we select this subset such that all word types occur at least once in this smaller training set. Incremental decoding and marginal probability. To preserve incrementality constraints, at test time we use a word-synchronised beam search (Fried et al., 2017) with fast-tracking (Stern et al., 2017), using word and action beam sizes of k = 50 and k × 10 = 500, respectively. As exact inference of t(x) is intractable, we evaluate with a lower bound of the marginal probability by summing over the top k hypotheses yb(x) 1 , . . . , yb(x) k on the beam b(x) once parsing finishes: t(x) = X y′∈T (x) t(x, y′) ≥ k X i=1 t(x, yb(x) i ), where T (x) denotes the set of all possible phrasestructure trees for a sentence x. On targeted syntactic evaluations, the model succeeds iff log t(xcorrect) > log t(xincorrect). Discussion. We present the results in Table 1 (sixth column: “RNNG”), and compare with LSTMs trained on: (i) the full dataset (fourth column: “Our LSTM”), and (ii) the same (smaller) training set as the RNNG (fifth column: “Small LSTM”). Our findings clearly reaffirm the benefits of both hierarchical bias and data scale. In terms of hierarchical bias, an RNNG that leverages syntactic annotations and explicit composition operators outperforms a comparable small LSTM on 11 out of 15 constructions, and on aggregate improves accuracy on targeted syntactic evaluations from 79% to 85% (29% error reduction), thus matching the aggregate performance of the full LSTM trained on 5 times as much data, although we remark that their success and failure modes appear to be different. In terms of data scale, the LSTM LM trained on the full training set substantially outperforms the LSTM trained on the smaller training set. In particular, the performance difference between the small and full LSTMs sheds light on which constructions are sensitive to variations in the amount 3We tested the speed of RNNGs and LSTMs with similar capacity (40 million parameters) on DyNet. Both models ran on a single Quadro P4000 GPU with automatic batching turned on and a batch size of 20 sentences. of data. For instance, agreement across an object relative clause exhibits large variations across the two training regimes (77% to 54%), suggesting that LSTMs require a large amount of data to learn these constructions well. Our finding on the importance of data scale for LM training is consistent with the success of recent LM pretraining approaches (Peters et al., 2018; Devlin et al., 2019, inter alia) and earlier work on noisy channel models for tasks such as machine translation and speech recognition (Jelinek, 1997; Rosenfeld, 2000; Koehn, 2010, inter alia). 
Despite its smaller training set, the RNNG performs extremely well on subject-verb agreement, substantially outperforming both the full LSTM and a pre-trained BERT (Devlin et al., 2019, Table 2) trained on 150 times as much data, although it still lags behind the full LSTM on reflexive anaphora and NPI.

4 Syntax-Aware Language Model

Given the trade-off between hierarchical operations and scalability, how can we design LMs that can better capture complex syntactic dependencies and be easily scalable at the same time?

4.1 Knowledge Distillation (KD)

The goal of KD is to find a set of student model parameters $\hat{\theta}_{KD}$ that would minimise the Kullback–Leibler (KL) divergence between the teacher RNNG's marginal probability $t(x) = \sum_{y' \in \mathcal{T}(x)} t(x, y')$ and the LSTM student $q_{\theta}(x)$. Expanding the KL term and removing terms that do not depend on $\theta$ yields:

$\hat{\theta}_{KD} = \arg\min_{\theta} D_{KL}\big(t(x) \,\|\, q_{\theta}(x)\big) \quad (1)$
$\phantom{\hat{\theta}_{KD}} = \arg\min_{\theta} -\sum_{x \in \Sigma^*} t(x) \log q_{\theta}(x) \quad (2)$
$\phantom{\hat{\theta}_{KD}} = \arg\min_{\theta} -\mathbb{E}_{x \sim t(x)} \log q_{\theta}(x), \quad (3)$

where $\Sigma$ denotes the set of all word types in the vocabulary, and $\Sigma^*$ the set of all possible sentences. As Eq. 2 involves an intractable summation over the set of all possible sentences, one alternative is to approximate this expectation with Monte Carlo sampling to obtain $K$ sentences $D' = \{x'^{(1)}, \ldots, x'^{(K)}\}$ from $t(x)$ (while an RNNG estimates $t(x, y)$, a simple way of sampling surface strings $x$ from the RNNG is to sample pairs $(x^{(k)}, y^{(k)}) \sim t(x, y)$ and ignore all non-terminals $y^{(k)}$), and train a student LSTM LM on these sampled sentences as opposed to ground-truth LM data:

$\mathbb{E}_{x \sim t(x)} \log q_{\theta}(x) \approx \frac{1}{K} \sum_{x' \in D'} \sum_{j=1}^{|x'|} \log q_{\theta}(x'_j \mid x'_{<j}),$

although our preliminary experiments suggest that this procedure performs poorly due to high variance (training a student LSTM LM on string samples from the RNNG with $K \approx 3{,}000{,}000$ yields a high validation perplexity of above 1,000, due to the enormity of the sample space and the use of discrete samples). We instead approximate Eq. 3 by minimising the KL at the local word level:

$\mathbb{E}_{x \sim t(x)} \log q_{\theta}(x) \approx \mathbb{E}_{x^* \sim p^*(x)} \sum_{j=1}^{|x^*|} D_{KL}\big(t(w \mid x^*_{<j}) \,\|\, q_{\theta}(w \mid x^*_{<j})\big),$

where $x^*$ is sampled from the empirical distribution $p^*(x)$, rather than from the teacher RNNG. Here $t(w \mid x^*_{<j})$ and $q_{\theta}(w \mid x^*_{<j})$ respectively parameterise the (marginal) probability of generating the next-word continuation $w \in \Sigma$, given the "ground-truth" conditioning context $x^*_{<j}$, under the teacher and student models. For a dataset of sentences $D = \{x^{*(1)}, \ldots, x^{*(|D|)}\}$ characterising the empirical distribution $p^*(x^*) = \frac{1}{|D|}$ when $x^* \in D$ (i.i.d. assumption), the word-level objective is:

$\hat{\theta}_{KD} \approx \arg\min_{\theta} -\frac{1}{|D|} \sum_{x^* \in D} \ell_{KD}(x^*; \theta),$
$\ell_{KD}(x^*; \theta) = \sum_{j=1}^{|x^*|} \sum_{w \in \Sigma} t(w \mid x^*_{<j}) \log q_{\theta}(w \mid x^*_{<j}).$

In earlier work, this local word-level approximation to the KD objective for sequence models has been shown to work surprisingly well in the case of neural machine translation (Kim and Rush, 2016) and language modelling (Furlanello et al., 2018, Born-Again Networks). (While Kim and Rush (2016) proposed a technique for sequence-level KD for machine translation through beam search, the same technique is not directly applicable to LM, which is an unconditional language generation problem.)

Interpolation. As the teacher RNNG is trained on a smaller training set, the DSA-LSTM should not only aim to emulate the RNNG's predictions and risk being upper-bounded by the teacher's performance, but also learn from the correct next word $x^*_j$ to fully exploit scalability (recall that $\ell_{KD}(x; \theta)$ does not depend on the true next word $x^*_j$). We thus interpolate the distillation (left) and LM (right) losses:
7Recall that ℓKD(x; θ) does not depend on the true next word x∗ j. ˆθα-int = arg min θ −1 |D| X x∗∈D  αℓKD(x∗; θ) + (1 −α) |x∗| X j=1 log qθ(x∗ j | x∗<j)  , where α is the interpolation coefficient. We illustrate the effect of this interpolation in Fig. 1. Furthermore, computing ℓKD(x∗; θ) requires the RNNG’s estimate of t(w | x∗<j), which necessitates an expensive marginalisation over all tree prefixes that generate w conditional on x∗<j. For efficiency, we approximate this using the onebest predicted tree from a pre-trained Berkeley parser,8 denoted as ˆyberk(x∗), as follows: t(w | x∗<j) ≈t(w | x∗<j, ˆyberk <j (x∗)), where ˆyberk <j (x∗) are all the non-terminals in ˆyberk(x∗) that occur before x∗ j. In other words, we first parse the sentence with a Berkeley parser, and use the resulting tree prefix as conditioning context to compute the probability of generating w ∈Σ under the RNNG. While this means that the teacher’s predictions are not derived from a purely incremental process,9 the student DSA-LSTM still operates strictly incrementally. This interpolated objective is similar to label smoothing (Szegedy et al., 2016; Pereyra et al., 2017), with the softmax distribution of the RNNG as the smoothing factor as opposed to the uniform distribution. Intuition. In Fig. 1, we provide an intuition about why the interpolation of the distillation and LM losses could inject hierarchical bias into a sequential model. We consider the interpolated target with α = 0.5 for a prefix (suppressing nonterminals) Parts of the river valley, where the correct continuation is have since the agreement controller parts is plural. The standard LM loss is zero only when all word types other than the correct one are assigned zero probability mass, and it is only in expectation (across training contexts) that syntactic regularities are inferred. In contrast, the interpolated target assigns a minimum probability of 0.5 to the correct label, but crucially contains additional information about the plausibility of every alternative based on the teacher RNNG’s predictions. Under this objective, the plural verbs 8We use the same pre-trained Berkeley parser to obtain training and validation trees in §3. 9The resulting syntactic prefix ˆyberk <j (x) for approximating t(w | x∗ <j) under the RNNG is obtained from a Berkeley parser that has access to yet unseen words x>j. 3477 Figure 1: Example of the KD target (top), the standard LM target (middle), and the interpolated target used to train the DSA-LSTM (bottom) with α = 0.5, for a prefix (showing only the terminals) Parts of the river valley, where the correct continuation is have due to the plural subject parts. are and meander are assigned relatively high probability mass since they fit both the syntactic and semantic constraints (e.g. Parts of the river valley often meander), while the set of singular verbs has, meanders, and is are assigned much lower probability mass since they are syntactically illicit. Thus, as long as the RNNG makes the accurate structural generalisations (and we have shown that it largely does in §3), every training instance provides the student LSTM with a wealth of information about all the possible legitimate continuations according to the predictions of the hierarchical teacher, thereby making it easier for the student to learn the appropriate hierarchical constraints and generalisations. Differences with other KD work. 
Our approach departs from the predominant view of distillation primarily as a means of compressing knowledge from a bigger teacher or an ensemble to a compact student (Ba and Caruana, 2014; Kim and Rush, 2016; Liu et al., 2018, inter alia) in two important ways. First, here the teacher and student models are different in character, and not just in size: we transfer knowledge from a teacher that models the joint probability of strings and phrasestructure trees through hierarchical operations, to a student that only models surface strings through sequential operations. This setup presents an interesting dynamic since the DSA-LSTM has to mimic the predictions of the RNNG, which conditions on syntactic annotation to guide hierarchical operations, even though the DSA-LSTM itself has no direct access to any syntactic annotations at all. Second, distillation thus far has mostly been applied in settings where the teacher and student models are trained on the same data. For scalability reasons, we train the RNNG on a subset of the data, and obtain its soft predictions on the rest. We hypothesise that the predictions of the hierarchical teacher—although they come from a model trained on a smaller dataset—can nevertheless encourage the LSTM to develop structurally sensitive representations of the larger dataset it observes. Born-Again Networks (BA). In practice, the interpolated distillation objective above can be applied to any teacher and student models. Recently, Furlanello et al. (2018) surprisingly finds perplexity improvement in a born-again setup that trains an LSTM LM on the gold data, and then uses the resulting model as a teacher to a student LSTM that shares the same architecture as the teacher. To better understand the importance of learning from a hierarchical teacher (which is not the case in a BA-LSTM since the teacher model is also sequential), we present experiments comparing the DSALSTM with a BA-LSTM. 4.2 Experiments Here we describe our experimental settings and present our findings. Computational challenge. The KD loss necessitates computing the teacher RNNG’s predictive softmax distribution for each token in the training set, but pre-computing these for the Gulordava et al. (2018) training set leads to a pro3478 hibitive memory footprint.10 To save space, we instead pre-compute the teacher RNNG’s hidden state ht ∈RM for every token xt in the training set (M ≪|Σ|), and compute the teacher’s softmax on-the-fly with an affine transformation and a softmax, which presents minimal computational overhead. Experimental settings. The DSA-LSTM has an identical architecture to the LSTM LM (§2), although the learning rate is optimised independently (Appendix). We select the final model based on validation LM perplexity, with targeted syntactic evaluations only applied at test time. Training speed. Since the DSA-LSTM operates sequentially, it is amenable to batching operations and is five times faster to train than a comparable RNNG. Despite this significant speed-up, training the DSA-LSTM in our basic implementation is still half as fast as the standard LM objective. We attribute this difference to the additional computational overhead associated with the distillation objective, such as I/O operations and computing the cross-entropy between the teacher and student models for the entire vocabulary. These operations, however, only apply at training time; at test time there is no overhead of inferring qˆθα-int(x) under the DSA-LSTM. Baselines. 
The DSA-LSTM benefits from three main components: (i) a KD objective, which in itself has been shown to be a good regulariser (Furlanello et al., 2018), (ii) the scalability of the sequential architecture, and (iii) a hierarchical bias, which here comes from the teacher RNNG. To understand the benefit of each component, we compare DSA-LSTM with these baselines: • a strong LSTM LM (§2) that is scalable but lacks a hierarchical bias (“Full LSTM”); • the teacher RNNG trained on a 20% subset of the training set (§3), which benefits from a hierarchical bias but lacks scalability (“RNNG”); • a DSA-LSTM trained on the same smaller subset as the teacher RNNG (“S-DSA-LSTM”). This baseline isolates the importance of scalability, since it still benefits from a KD objective and a hierarchical bias from the teacher RNNG; 10Pre-computing the RNNG’s predictions necessitates storing N × |Σ| numbers, where N is the number of tokens. For the Gulordava et al. (2018) training set (∼80M tokens), this requires storing 4 trillion floating points, or 25 terabytes. • a born-again LSTM that benefits from KD and scalability, though it lacks a hierarchical bias due to the sequential teacher (“BA-LSTM”). Discussion. To avoid clutter, for each model variant we present only the mean performance of 10 identical models from different random seeds; results with standard deviations are in the Appendix. We present our findings in Table 2, based on which we derive several observations. • Of the three models trained on the small subset, the S-DSA-LSTM outperforms the small LSTM trained on standard LM objective, improving overall acccuracy from 0.79 to 0.82 (14% error reduction), even though both models share the same architecture and training set size (i.e. only the training objective is different). On subjectverb agreement, the S-DSA-LSTM successfully narrows the gap with the slower teacher RNNG, which benefits from syntactic bias and annotation. These findings confirm our hypothesis that the KD approach constitutes an efficient way to inject hierarchical bias into sequential models. • The born-again model (BA-LSTM) outperforms the LSTM LM, albeit by a small margin. This finding suggests that KD helps improve the syntactic competence of LSTMs, even when the teacher model lacks explicit hierarchical bias and shares the same architecture as the student. • In terms of perplexity, both BA-LSTM and DSA-LSTM perform slightly worse than the full LSTM LM trained without KD loss. We attribute this gap to the smoother target distribution when using KD, which effectively penalises high probabilities on the correct next word x∗ j unless the teacher model is extremely confident. This observation is consistent with earlier findings on label smoothing in machine translation (Pereyra et al., 2017; Vaswani et al., 2017), which often results in better BLEU at the expense of slightly worse likelihood. • Despite identical architectures, on aggregate the DSA-LSTM substantially improves over the full LSTM (85% to 89%), constituting a 27% error rate reduction and a new state of the art. Our findings suggest that the DSA-LSTM combines the best of both hierarchical bias and data scale: on subject-verb agreement, the DSA-LSTM improves over the LSTM baseline and narrows the gap with the teacher RNNG, while at the same time performing well on reflexive anaphora and 3479 Small Training Set Full Training Set Small LSTM† S-DSA-LSTM† RNNG† Full LSTM BA-LSTM DSA-LSTM BERT Humans Gulordava et al. (2018) test ppl. 
94.54 93.95 92.30 53.73 54.64 56.74 N/A N/A SUBJECT-VERB AGREEMENT Simple 0.89 0.96 0.99 1.00 1.00 1.00 1.00 0.96 In a sentential complement 0.89 0.98 0.93 0.97 0.98 0.98 0.83 0.93 Short VP coordination 0.90 0.88 0.96 0.96 0.95 0.99 0.89 0.94 Long VP coordination 0.78 0.74 0.94 0.82 0.80 0.80 0.98 0.82 Across a prepositional phrase 0.83 0.88 0.95 0.89 0.89 0.91 0.85 0.85 Across a subject relative clause 0.81 0.87 0.95 0.87 0.87 0.90 0.84 0.88 Across an object relative clause 0.54 0.69 0.95 0.77 0.81 0.84 0.89 0.85 Across an object relative clause (no that) 0.55 0.61 0.93 0.70 0.74 0.77 0.86 0.82 In an object relative clause 0.79 0.87 0.96 0.90 0.91 0.92 0.95 0.78 In an object relative clause (no that) 0.72 0.88 0.96 0.86 0.83 0.92 0.79 0.79 Average of subject-verb agreement 0.77 0.84 0.95 0.87 0.88 0.90 0.89 0.86 REFLEXIVE ANAPHORA Simple 0.93 0.90 0.83 0.91 0.92 0.91 0.94 0.96 In a sentential complement 0.77 0.78 0.46 0.81 0.81 0.82 0.89 0.91 Across a relative clause 0.63 0.67 0.82 0.64 0.64 0.67 0.80 0.87 Average of reflexive anaphora 0.78 0.78 0.70 0.79 0.79 0.80 0.88 0.91 NEGATIVE POLARITY ITEMS Simple 0.93 0.84 0.28 0.96 0.98 0.94 N/A 0.98 Across a relative clause 0.82 0.73 0.78 0.75 0.70 0.91 N/A 0.81 Average of negative polarity items 0.88 0.79 0.53 0.86 0.84 0.92 N/A 0.90 Average of all constructions 0.79 0.82 0.85 0.85 0.86 0.89 N/A 0.88 Table 2: Experimental findings of the “DSA-LSTM”. For each column, we report the mean of 10 identical models trained from different random seeds; standard deviation values are reported in the Appendix. “S-DSA-LSTM” indicates the DSA-LSTM trained on the smaller RNNG training set, while “BA-LSTM” is the born-again model where the teacher is the full LSTM LM. We also compare with the syntactic generalisation of “BERT” Base (Devlin et al., 2019; Goldberg, 2019), which is not strictly comparable since it is trained on 30 times as much data. † indicates models trained on the smaller 20% training set (§3). Results in bold denote the best among those trained with the same amounts of data. NPI, on which the teacher RNNG (but not the full LSTM) fails to achieve a good performance. • While not directly comparable, the DSA-LSTM outperforms a pre-trained BERT (Devlin et al., 2019; Goldberg, 2019)11 on subject-verb agreement. Since BERT benefits from bidirectionality and was trained on 30 times as much data as the DSA-LSTM, this finding suggests that, at least in terms of syntactic competence, structural biases continue to be relevant even as the current generation of sequential LMs is able to exploit increasingly large amounts of data. 4.3 Probing for Hierarchical Information Having established the advantages of the DSALSTM on targeted syntactic evaluations, we turn to the question of analysing how its internal representation differs from that of a standard LSTM LM. To this end, we adopt the method of Blevins et al. (2018) and use a probe (Shi et al., 2016; Adi et al., 2017; Belinkov et al., 2017; Conneau et al., 2018; Hewitt and Manning, 2019, inter alia) that 11Goldberg (2019) applies an additional pre-processing step, removing sentences in which the focus verb does not appear as a single word in the word piece-based vocabulary; hence, the evaluation sentences are slightly different. predicts the grandparent constituent of a word token xt, based on its encoding ht under the pretrained LSTM. 
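A minimal sketch of such a probe is given below, assuming label indices extracted from the Berkeley parses; the exact input features ([ht; ht+1] from the frozen language model) are specified in the following paragraph, and the class and variable names here are ours, not from the released code.

```python
import torch
import torch.nn as nn

class GrandparentProbe(nn.Module):
    """Linear probe predicting the grandparent constituent label of token x_t
    from the (frozen) language model's hidden states."""
    def __init__(self, hidden_size: int, num_labels: int):
        super().__init__()
        # Features are the concatenation [h_t; h_{t+1}], hence 2 * hidden_size.
        self.classifier = nn.Linear(2 * hidden_size, num_labels)

    def forward(self, h_t: torch.Tensor, h_next: torch.Tensor) -> torch.Tensor:
        return self.classifier(torch.cat([h_t, h_next], dim=-1))
```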
Under this framework, the accuracy of the probe on a held-out set can be understood as an indication of how well the hidden states encode the relevant syntactic information required to succeed in this task. We use a linear classifier for the probe and obtain the predicted grandparent constituent label using the same pre-trained Berkeley parser (§3) that we used to obtain predicted phrase-structure trees to train the RNNG. For the probing experiment, we randomly select sentences from each respective training, validation, and test set of the Gulordava et al. (2018) dataset to yield ∼300,000 words for training and ∼10,000 words for each of validation and test sets. For the probe features, we use a concatenation of the LSTM hidden state at the current and next words,12 i.e. [ht; ht+1], where ; denotes the concatenation operation. Recall that the DSA-LSTM operates only on word sequences and has no access to the Berkeley parse during training. We summarise the probing 12Our probing feature set thus slightly differs from that of Blevins et al. (2018), who concatenated the hidden states of a left-to-right and right-to-left LSTM language models. 3480 Figure 2: Probing accuracy on the test set. We analyse the hidden states of the LSTM and DSALSTM to analyse the structural information encoded in each respective model’s hidden state. result in Fig. 2. Overall, the syntactic probing accuracy for the DSA-LSTM is much higher than for the LSTM LM (83% to 74%; a 34% error rate reduction), suggesting that the means by which the DSA-LSTM achieves better syntactic competence is by tracking more hierarchical information during sequential processing. 5 Related Work Augmenting language models with syntactic information and structural inductive bias has been a long-standing area of research. To this end, syntactic language models estimate the joint probability of surface strings and some form of syntactic structure (Jurafsky et al., 1995; Chelba and Jelinek, 2000; Roark, 2001; Henderson, 2004; Emami and Jelinek, 2005; Buys and Blunsom, 2015; Mirowski and Vlachos, 2015; Dyer et al., 2016; Kim et al., 2019). In contrast to these approaches, the DSA-LSTM only models the probability of surface strings, albeit with an auxiliary loss that distills the next-word predictive distribution of a syntactic language model. Earlier work has also explored multi-task learning with syntactic objectives as an auxiliary loss in language modelling and machine translation (Luong et al., 2016; Eriguchi et al., 2016; Nadejde et al., 2017; Enguehard et al., 2017; Aharoni and Goldberg, 2017; Eriguchi et al., 2017). Our approach of injecting syntactic bias through a KD objective is orthogonal to this approach, with the primary difference that here the student DSALSTM has no direct access to syntactic annotations; it does, however, have access to the teacher RNNG’s softmax distribution over the next word. Our approach is also closely related to recent work that introduces structurally-motivated inductive biases into language models. Chung et al. (2017) segmented the hidden state update of an RNN through a multi-scale hierarchical recurrence, thereby providing a shortcut to the gradient propagation of long-range, hierarchical dependencies. Yogatama et al. (2018) introduced a stackstructured memory to encourage hierarchical modelling in language models, where the resulting model successfully outperforms standard LSTM variants in number agreement (Linzen et al., 2016) evaluation. Shen et al. 
(2019) imposed a hierarchical bias on the LSTM cell-updating mechanism, based on the intuition that larger constituents contain information that changes more slowly across the sequence. Our proposed method is orthogonal and can be applied on top of these recent approaches. 6 Conclusion In this paper, we introduce a distilled syntax-aware LSTM (DSA-LSTM), which combines scalability with structural biases. We achieve this by distilling the predictions about upcoming words in a large training corpus made by a (computationally complex) hierarchical language model trained on a small subset of the data. While we find that LSTM language models achieve better syntactic generalisation than previously thought, on targeted syntactic evaluations our approach improves over this strong baseline, yields a new state of the art, compares favourably to a language model trained on much more data, and results in a language model that encodes hierarchical information to a large extent despite its sequential architecture. Our approach is a general one that can be applied to other student model architectures, such as Transformers (Vaswani et al., 2017). These findings suggest that the question of structural biases continues to be relevant for improving syntactic competence, even in scalable architectures that can benefit from evergrowing amounts of training data. Acknowledgments We would like to thank Rebecca Marvin and Tal Linzen for their help in answering questions regarding data preparation. We also thank Dani Yogatama, John Hale, and the three anonymous reviewers for their helpful suggestions. 3481 References Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. 2017. Fine-grained analysis of sentence embeddings using auxiliary prediction tasks. In Proc. of ICLR. Roee Aharoni and Yoav Goldberg. 2017. Towards string-to-tree neural machine translation. In Proc. of ACL. Jimmy Ba and Rich Caruana. 2014. Do deep nets really need to be deep? In NIPS. Srinivas Bangalore and Aravind K. Joshi. 1999. Supertagging: An approach to almost parsing. Computational Linguistics, 25. Yonatan Belinkov, Nadir Durrani, Fahim Dalvi, Hassan Sajjad, and James Glass. 2017. What do neural machine translation models learn about morphology? In Proc. of ACL. Terra Blevins, Omer Levy, and Luke Zettlemoyer and. 2018. Deep rnns encode soft hierarchical syntax. In Proc. of ACL. Cristian Bucilˇa, Rich Caruana, and Alexandru Niculescu-Mizil. 2006. Model compression. In Proc. of KDD. Jan Buys and Phil Blunsom. 2015. A Bayesian model for generative transition-based dependency parsing. CoRR, abs/1506.04334. Ciprian Chelba and Frederick Jelinek. 2000. Structured language modeling. Computer Speech and Language, 14(4). Do Kook Choe and Eugene Charniak. 2016. Parsing as language modeling. In Proc. of EMNLP. Junyoung Chung, Sungjin Ahn, and Yoshua Bengio. 2017. Hierarchical multiscale recurrent neural networks. In Proc. of ICLR. Stephen Clark and James R. Curran. 2007. Widecoverage efficient statistical parsing with CCG and log-linear models. Computational Linguistics. Alexis Conneau, Germ´an Kruszewski, Guillaume Lample, Lo¨ıc Barrault, and Marco Baroni. 2018. What you can cram into a single vector: Probing sentence embeddings for linguistic properties. Mostafa Dehghani, Stephan Gouws, Oriol Vinyals, Jakob Uszkoreit, and Lukasz Kaiser. 2019. Universal transformers. In Proc. of ICLR. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. 
BERT: pre-training of deep bidirectional transformers for language understanding. In Proc. of NAACL. Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transitionbased dependency parsing with stack long shortterm memory. In Proc. of ACL. Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. 2016. Recurrent neural network grammars. In Proc. of NAACL. Ahmad Emami and Frederick Jelinek. 2005. A neural syntactic language model. Machine Learning, 60:195–227. ´Emile Enguehard, Yoav Goldberg, and Tal Linzen. 2017. Exploring the syntactic abilities of rnns with multi-task learning. In Proc. of CoNLL. Akiko Eriguchi, Kazuma Hashimoto, and Yoshimasa Tsuruoka. 2016. Tree-to-sequence attentional neural machine translation. In Proc. of ACL. Akiko Eriguchi, Yoshimasa Tsuruoka, and Kyunghyun Cho. 2017. Learning to parse and translate improves neural machine translation. In Proc. of ACL. Daniel Fried, Mitchell Stern, and Dan Klein. 2017. Improving neural parsing by disentangling model combination and reranking effects. In Proc. of ACL. Tommaso Furlanello, Zachary Chase Lipton, Michael Tschannen, Laurent Itti, and Anima Anandkumar. 2018. Born-again neural networks. In Proc. of ICML. Yoav Goldberg. 2019. Assessing BERT’s syntactic abilities. CoRR, abs/1901.05287. Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, and Marco Baroni. 2018. Colorless green recurrent networks dream hierarchically. In Proc. of NAACL. John Hale, Chris Dyer, Adhiguna Kuncoro, and Jonathan R. Brennan. 2018. Finding syntax in human encephalography with beam search. In Proc. of ACL. James Henderson. 2004. Discriminative training of a neural network statistical parser. In Proc. of ACL. John Hewitt and Christopher D. Manning. 2019. A structural probe for finding syntax in word representations. In Proc. of NAACL. Geoffrey E. Hinton, Oriol Vinyals, and Jeffrey Dean. 2015. Distilling the knowledge in a neural network. CoRR, abs/1503.02531. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation. Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proc. of ACL. Frederick Jelinek. 1997. Statistical Methods for Speech Recognition. MIT Press. D. Jurafsky, C. Wooters, J. Segal, A. Stolcke, E. Fosler, G. Tajchaman, and N. Morgan. 1995. Using a stochastic context-free grammar as a language model for speech recognition. In Proc. of ICASSP. 3482 Yoon Kim and Alexander M. Rush. 2016. Sequencelevel knowledge distillation. In Proc. of EMNLP. Yoon Kim, Alexander M. Rush, Lei Yu, Adhiguna Kuncoro, Chris Dyer, and Gabor Melis. 2019. Unsupervised recurrent neural network grammars. In Proc. of NAACL. Philipp Koehn. 2010. Statistical Machine Translation. Cambridge University Press. Adhiguna Kuncoro, Miguel Ballesteros, Lingpeng Kong, Chris Dyer, Graham Neubig, and Noah A. Smith. 2017. What do recurrent neural network grammars learn about syntax? In Proc. of EACL. Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom. 2018. Lstms can learn syntax-sensitive dependencies well, but modeling structure makes them better. In Proc. of ACL. Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of LSTMs to learn syntax-sensitive dependencies. Transactions of the Association for Computational Linguistics. Yijia Liu, Wanxiang Che, Huaipeng Zhao, Bing Qin, and Ting Liu. 2018. Distilling knowledge for search-based structured prediction. In Proc. of ACL. 
Thang Luong, Quoc V. Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. 2016. Multi-task sequence to sequence learning. In Proc. of ICLR. Rebecca Marvin and Tal Linzen. 2018. Targeted syntactic evaluation of language models. In Proc. of EMNLP. Tom´aˇs Mikolov, Martin Karafi´at, Luk´aˇs Burget, Jan ˇCernock´y, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Proc. of Interspeech. Piotr Mirowski and Andreas Vlachos. 2015. Dependency recurrent neural language models for sentence completion. In Proc. of ACL-IJCNLP. Maria Nadejde, Siva Reddy, Rico Sennrich, Tomasz Dwojak, Marcin Junczys-Dowmunt, Philipp Koehn, and Alexandra Birch. 2017. Predicting target language ccg supertags improves neural machine translation. In Proc. of WMT. Graham Neubig, Chris Dyer, Yoav Goldberg, Austin Matthews, Waleed Ammar, Antonios Anastasopoulos, Miguel Ballesteros, David Chiang, Daniel Clothiaux, Trevor Cohn, Kevin Duh, Manaal Faruqui, Cynthia Gan, Dan Garrette, Yangfeng Ji, Lingpeng Kong, Adhiguna Kuncoro, Gaurav Kumar, Chaitanya Malaviya, Paul Michel, Yusuke Oda, Matthew Richardson, Naomi Saphra, Swabha Swayamdipta, and Pengcheng Yin. 2017a. DyNet: The Dynamic Neural Network Toolkit. arXiv preprint arXiv:1701.03980. Graham Neubig, Yoav Goldberg, and Chris Dyer. 2017b. On-the-fly operation batching in dynamic computation graphs. In Proc. of NIPS. Gabriel Pereyra, George Tucker, Jan Chorowski, Lukasz Kaiser, and Geoffrey Hinton. 2017. Regularizing neural networks by penalizing confident output distributions. In Proc. of ICLR. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proc. of NAACL. Slav Petrov and Dan Klein. 2007. Improved inference for unlexicalized parsing. In Proc. of NAACL. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Brian Roark. 2001. Probabilistic top-down parsing and language modeling. Computational Linguistics, 27(2). Ronald Rosenfeld. 2000. Two decades of statistical language modeling: Where do we go from here. In Proc. of IEEE. Yikang Shen, Shawn Tan, Alessandro Sordoni, and Aaron C. Courville. 2019. Ordered neurons: Integrating tree structures into recurrent neural networks. In Proc. of ICLR. Xing Shi, Inkit Padhi, and Kevin Knight. 2016. Does string-based neural MT learn source syntax? In Proc. of EMNLP. Mitchell Stern, Daniel Fried, and Dan Klein. 2017. Effective inference for generative neural parsing. In Proc. of EMNLP. Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. In Proc. of CVPR. Ke M. Tran, Arianna Bisazza, and Christof Monz. 2018. The importance of being recurrent for modeling hierarchical structure. In Proc. of EMNLP. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proc. of NIPS. Dani Yogatama, Yishu Miao, Gabor Melis, Wang Ling, Adhiguna Kuncoro, Chris Dyer, and Phil Blunsom. 2018. Memory architectures in recurrent neural network language models. In Proc. of ICLR. 3483 Appendix Here we outline the hyperparameters and the experimental results with standard deviation values. A Hyperparameters The hyperparameters for each model is summarised as follows. LSTM LMs. 
For the LSTM LMs trained on the full and small training sets, we use the following hyperparameters that achieve the best validation perplexity following a grid search: 2-layer LSTM with 650 hidden units per layer for the full LSTM and 300 hidden units per layer for the small LSTM (similar model capacity as the RNNG trained on the same smaller training set), optimised by stochastic gradient descent (SGD) with a learning rate of 0.45 (decayed exponentially at every epoch with a factor of 0.9 after the tenth epoch), a dropout rate of 0.2 applied on both input and recurrent connections, and a batch size of 20 sentences. RNNG. For the RNNG, we use the following hyperparameters that achieve the best validation perplexity following a similar grid search: 2-layer stack LSTM with 256 hidden units per layer, optimised by SGD with a learning rate of 0.3 (decayed exponentially at every epoch with a factor of 0.92 after the tenth epoch), a dropout rate of 0.3 applied on both input and recurrent connections, and a batch size of 10 sentences. DSA-LSTMs and Born-Again LSTMs. We use the same hyperparameters for the DSALSTMs trained on both the full and small (S-DSALSTM) training sets and the born-again LSTM (BA-LSTM) trained on the full training set. Since the model architectures are identical with the respective LSTM LMs (i.e. only the training objective is different), we only optimise for the learning rates and keep all other hyperparameters the same. We find that a learning rate of 0.4 and an exponential decay factor of 0.9 applied after the tenth epoch works well across all three models trained with the KD objective. B Experimental Results with Standard Deviation We summarise the experimental results that include standard deviation values in Table 3. 3484 Small Training Set Full Training Set S-DSA-LSTM† RNNG† Full LSTM BA-LSTM DSA-LSTM BERT Humans Gulordava et al. (2018) test ppl. 
93.95 (±0.18) 92.30 (±0.27) 53.73 (±0.16) 54.64 (±0.25) 56.74 (±0.26) N/A N/A SUBJECT-VERB AGREEMENT Simple 0.96 (±0.03) 0.99 (±0.01) 1.00 (±0.00) 1.00 (±0.00) 1.00 (±0.00) 1.00 0.96 In a sentential complement 0.98 (±0.02) 0.93 (±0.02) 0.97 (±0.02) 0.98 (±0.02) 0.98 (±0.02) 0.83 0.93 Short VP coordination 0.88 (±0.04) 0.96 (±0.02) 0.96 (±0.02) 0.95 (±0.02) 0.99 (±0.02) 0.89 0.94 Long VP coordination 0.74 (±0.03) 0.94 (±0.03) 0.82 (±0.05) 0.80 (±0.04) 0.80 (±0.02) 0.98 0.82 Across a prepositional phrase 0.88 (±0.02) 0.95 (±0.01) 0.89 (±0.02) 0.89 (±0.03) 0.91 (±0.03) 0.85 0.85 Across a subject relative clause 0.87 (±0.02) 0.95 (±0.03) 0.87 (±0.02) 0.87 (±0.01) 0.90 (±0.02) 0.84 0.88 Across an object relative clause 0.69 (±0.06) 0.95 (±0.03) 0.77 (±0.11) 0.81 (±0.05) 0.84 (±0.03) 0.89 0.85 Across an object relative clause (no that) 0.61 (±0.05) 0.93 (±0.02) 0.70 (±0.05) 0.74 (±0.03) 0.77 (±0.02) 0.86 0.82 In an object relative clause 0.87 (±0.05) 0.96 (±0.01) 0.90 (±0.03) 0.91 (±0.03) 0.92 (±0.04) 0.95 0.78 In an object relative clause (no that) 0.88 (±0.03) 0.96 (±0.02) 0.86 (±0.05) 0.83 (±0.02) 0.92 (±0.02) 0.79 0.79 Average of subject-verb agreement 0.84 (±0.02) 0.95 (±0.01) 0.87 (±0.02) 0.88 (±0.01) 0.90 (±0.01) 0.89 0.86 REFLEXIVE ANAPHORA Simple 0.90 (±0.01) 0.83 (±0.02) 0.91 (±0.01) 0.92 (±0.03) 0.91 (±0.04) 0.94 0.96 In a sentential complement 0.78 (±0.01) 0.46 (±0.05) 0.81 (±0.02) 0.81 (±0.02) 0.82 (±0.03) 0.89 0.91 Across a relative clause 0.67 (±0.03) 0.82 (±0.02) 0.64 (±0.02) 0.64 (±0.02) 0.67 (±0.03) 0.80 0.87 Average of reflexive anaphora 0.78 (±0.01) 0.70 (±0.02) 0.79 (±0.01) 0.79 (±0.02) 0.80 (±0.03) 0.88 0.91 NEGATIVE POLARITY ITEMS Simple 0.84 (±0.05) 0.28 (±0.05) 0.96 (±0.04) 0.98 (±0.02) 0.94 (±0.04) N/A 0.98 Across a relative clause 0.73 (±0.07) 0.78 (±0.06) 0.75 (±0.12) 0.70 (±0.10) 0.91 (±0.07) N/A 0.81 Average of negative polarity items 0.79 (±0.05) 0.53 (±0.04) 0.86 (±0.06) 0.84 (±0.05) 0.92 (±0.05) N/A 0.90 Average of all constructions 0.82 (±0.02) 0.85 (±0.02) 0.85 (±0.02) 0.86 (±0.01) 0.89 (±0.01) N/A 0.88 Table 3: Experimental findings of the “DSA-LSTM”. For each column, we report the mean and standard deviation values of 10 identical models trained from different random seeds. “S-DSA-LSTM” indicates the DSA-LSTM trained on the smaller RNNG training set, while “BA-LSTM” is the born-again model where the teacher is the full LSTM LM. We also compare with the syntactic generalisation of “BERT” Base, which is not strictly comparable since it is trained on 30 times as much data. † indicates models trained on the smaller 20% training set (§3). Results in bold denote the best among those trained with the same amounts of data.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3485–3492 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3485 An Imitation Learning Approach to Unsupervised Parsing Bowen Li† Lili Mou‡ Frank Keller† †Institute for Language, Cognition and Computation School of Informatics, University of Edinburgh, UK ‡University of Waterloo, Canada [email protected], [email protected] [email protected] Abstract Recently, there has been an increasing interest in unsupervised parsers that optimize semantically oriented objectives, typically using reinforcement learning. Unfortunately, the learned trees often do not match actual syntax trees well. Shen et al. (2018) propose a structured attention mechanism for language modeling (PRPN), which induces better syntactic structures but relies on ad hoc heuristics. Also, their model lacks interpretability as it is not grounded in parsing actions. In our work, we propose an imitation learning approach to unsupervised parsing, where we transfer the syntactic knowledge induced by the PRPN to a Tree-LSTM model with discrete parsing actions. Its policy is then refined by GumbelSoftmax training towards a semantically oriented objective. We evaluate our approach on the All Natural Language Inference dataset and show that it achieves a new state of the art in terms of parsing F-score, outperforming our base models, including the PRPN.1 1 Introduction From a linguistic perspective, a natural language sentence can be thought of as a set of nested constituents in the form of a tree structure (Partee et al., 2012). When a parser is trained on labeled treebanks, the predicted constituency trees are useful for various natural language processing (NLP) tasks, including relation extraction (Verga et al., 2016), text simplification (Narayan and Gardent, 2014), and machine translation (Aharoni and Goldberg, 2017). However, expensive expert annotations are usually required to create treebanks. Unsupervised parsing (also known as grammar induction or latent tree learning) aims to learn syntactic structures without access to a treebank 1Our code can be found at https://github.com/libowen2121/ Imitation-Learning-for-Unsup-Parsing during training, with potential uses in low resource or out-of-domain scenarios. In early approaches, unsupervised parsers were trained by optimizing the marginal likelihood of sentences (Klein and Manning, 2014). More recent deep learning approaches (Yogatama et al., 2017; Maillard et al., 2017; Choi et al., 2018) obtain latent tree structures by reinforcement learning (RL). Typically, this involves a secondary task, e.g., a language modeling objective or a semantic task. However, Williams et al. (2018a) have pointed out that these methods do not yield linguistically plausible structures, and have low self-agreement when randomly initialized multiple times. Recently, Shen et al. (2018) proposed the parsing-reading-predict network (PRPN), which performs language modeling with structured attention. The model uses heuristics to induce tree structures from attention scores, and in a replication was found to be the first latent tree model to produce syntactically plausible structures (Htut et al., 2018). Structured attention in the PRPN is formalized as differentiable continuous variables, making the model easy to train. But a major drawback is that the PRPN does not model tree-building operations directly. 
These operations need to be stipulated externally, in an ad hoc inference procedure which is not part of the model and cannot be trained (see Section 3). In this paper, we propose an imitation learning framework that combines the continuous PRPN with a Tree-LSTM model with discrete parsing actions, both trained without access to labeled parse trees. We exploit the advantages of the PRPN by transferring its knowledge to a discrete parser which explicitly models tree-building operations. We accomplish the knowledge transfer by training the discrete parser to mimic the behavior of the PRPN. Its policy is then refined using straightthrough Gumbel-Softmax (ST-Gumbel, Jang et al., 3486 2017) trained with a semantic objective, viz., natural language inference (NLI). We evaluate our approach on the All Natural Language Inference dataset and show that it achieves a new state of the art in terms of parsing F-score, outperforming our base models, including the PRPN. Our work also shows that semantic objectives can improve unsupervised parsing, contrary to earlier claims (Williams et al., 2018a; Htut et al., 2018). 2 Related Work Recursive neural networks are a type of neural network which incorporates syntactic structures for sentence-level understanding tasks. Typically, recursive neural network models assume that an annotated treebank or a pretrained syntactic parser is available (Socher et al., 2013; Tai et al., 2015; Kim et al., 2019a), but recent work pays more attention to learning syntactic structures in an unsupervised manner. Yogatama et al. (2017) propose to use reinforcement learning, and Maillard et al. (2017) introduce the Tree-LSTM to jointly learn sentence embeddings and syntax trees, later combined with a Straight-Through Gumbel-Softmax estimator by Choi et al. (2018). In addition to sentence classification tasks, recent research has focused on unsupervised structure learning for language modeling (Shen et al., 2018, 2019; Drozdov et al., 2019; Kim et al., 2019b). In our work, we explore the possibility for combining the merits of both sentence classification and language modeling. Unsupervised parsing is also related to differentiation through discrete variables, where researchers have proposed to use reinforcement learning with sampling (Williams, 1992), neural attention for marginalization (Deng et al., 2018), and proximal gradient methods (Jang et al., 2017; Peng et al., 2018). Our work follows the framework of Mou et al. (2017), who couple neural and symbolic systems for table querying by pretraining an reinforcement learning executor with neural attention. We extend this idea to syntactic parsing and show the relationship between parsing and downstream tasks. Such a framework couples diverse models at the intermediate output level (latent trees in our case); its flexibility allows us to make use of heterogeneous models, such as the PRPN and the Tree-LSTM. The knowledge transfer between the PRPN and the Tree-LSTM applies a simple imitation learning procedure, where an agent learns from a teacher (a human or a well-trained model) based on demonstrations (i.e., predictions of the teacher). Typical approaches to imitation learning include behavior cloning (step-by-step supervised learning) and inverse reinforcement learning (Hussein et al., 2017). If the environment/simulator is available, the agent can refine its policy after learning from demonstrations (Gao et al., 2018). Our work also adopts a two-step strategy: learning from demonstrations and refining policy. 
Policy refinement is needed in our approach because the teacher is imperfect, and experiments show the benefit of policy refinement in our case. 3 Our Approach Parsing-reading-predict network (PRPN). The first ingredient of our approach is the PRPN, which is trained using a language modeling objective, i.e., it predicts the next word in the text, based on previous words. The PRPN introduces the concept of syntactic distance dt, defined as the height of the common ancestor of wt−1 and wt in the tree (t is the position index in a sentence w1, ..., wN). Since gold standard dt is not available, the PRPN learns the estimated bdt end-to-end in an unsupervised manner. The PRPN computes the differences between bdt at the current step and all previous steps bdj for 2 ≤j < t. The differences are normalized to [0, 1] and used to compute attention scores right to left. These scores are applied to reweight another set of inner-sentence attention scores, which are then used in a recurrent neural network to predict the next word. The PRPN is explained in more detail in Appendix A. Based on the real-valued syntactic distances in the PRPN, an external procedure is used to infer tree structures. The main text of Shen et al. (2018) suggests using the following intuitive scheme: find the largest distance bdi and split the sentence into two constituents (· · · , wi−1) and (wi, · · · ). This process is then repeated recursively on the two new constituents. The trees inferred by this scheme, however, yield poor parsing F-scores, and the results reported by Shen et al. (2018) are actually obtained by a different scheme (evidenced in their supplementary material and code repository): find the largest syntactic distance bdi and obtain two constituents (· · · , wi−1) and (wi, · · · ). If the latter 3487 constituent contains two or more words, then it is further split into (wi) and (wi+1, · · · ), regardless of the syntactic distance bdi+1. This scheme introduces a bias for right-branching trees, which presumably is the reason why it yields good parsing F-scores for English. The reliance on this trick illustrates the point we make in the Introduction: syntactic distance has the advantage of being a continuous value, which can be computed as an attention score in a differentiable model. However, this comes at a price: the PRPN does not model trees or tree-building operations directly. These operations need to be stipulated externally in an ad hoc inference procedure. This procedure is not part of the model and cannot be trained, but yet is crucial for good performance. Discrete syntactic parser. To address this problem, we combine the PRPN with a parser which explicitly models tree-building operations. Specifically, we use the pyramid-shaped, tree-based long short-term memory (Tree-LSTM, Figure 1a, Choi et al., 2018), where reinforcement learning (RL) in this model can be relaxed by Gumbel-Softmax. Concretely, let w1, w2, · · · , wN be the embeddings of the words in a sentence. The model tries every possible combination of two consecutive words by the Tree-LSTM, but then uses softmax (in N −1 ways) to predict which composition is appropriate at this step. Let h(1) 1 , · · · , h(1) N−1 be the candidate TreeLSTM composition at the bottom layer. With q being a trainable query vector, the model computes a distribution p: p(1) i = softmax{q⊤h(1) i } (1) Assuming the model selects an appropriate composition at the current step, we copy all other words intactly, shown as orange arrows in Figure 1a. 
This process is applied recursively, forming the structure in the figure. The Tree-LSTM model is learned by straightthrough Gumbel-Softmax (detailed in Appendix B), which resembles RL as it samples actions from its predicted probabilities, exploring different regions of the latent space other than a maximum a posteriori tree. Training involves doubly stochastic gradient descent (Lei et al., 2016): the first stochasticity comes from sampling input from the data distribution, and the second one from sampling actions for each input. (a) Pyramid Model (b) Knowledge Transfer = [.6 .4] p(2) p(1) = [.3 .5 .2] q q Jparse Jtask ˆt(2) = [1 0] = [0 1 0] ˆt(1) Jparse Imperfect step-by-step target parsing labels obtained by soft parser w1 w2 w3 w4 Figure 1: Overview of our approach. (a) The TreeLSTM model of Choi et al. (2018). (b) The model is first trained with step-by-step supervision, and then Gumbel-Softmax is applied to refine the policy. Therefore, ST-Gumbel is difficult to train (similar to RL), and may be stuck in poor local optima, resulting in low self-agreement for multiple random initializations (Williams et al., 2018a). Imitation learning. Our aim is to combine the PRPN and its continuous notion of syntactic distance with a parser that has discrete tree-building operations. The mapping from the sequence of Tree-LSTM composition operations to a tree structure is not injective. Given a parse tree, we may have multiple different composition sequences, e.g., left-to-right or right-to-left. This ambiguity could confuse the Tree-LSTM during training. We solve this problem by using the PRPN’s notion of syntactic distance. Given a parse tree predicted by the PRPN, if more than one composition is applicable, we always group the candidates with the lowest syntactic distance. In this way, we can unambiguously determine the composition order from the trees inferred by the PRPN. Then, we train the Tree-LSTM model in a step-by-step (SbS) supervised fashion. Let bt(j) be a one-hot vector for the jth step of Tree-LSTM composition, where the hat denotes imperfect target labels induced by the PRPN’s prediction. The parsing loss is defined as: Jparse = − X j X i bt(j) i log p(j) i (2) where p(j) is the probability predicted by the TreeLSTM model. The subscript i indexes the ith position among in 1, · · · , Nj −1, where Nj is the number of nodes in the jth composition step. The overall training objective J is a weighted combination of the loss of the downstream task 3488 and the parsing loss, i.e., J = Jtask + λJparse. After step-by-step training, we perform policy refinement by optimizing Jtask with ST-Gumbel, so that the Tree-LSTM can improve its policy based on a semantically oriented task. It should be emphasized that how the TreeLSTM model builds the tree structure differs between step-by-step training and ST-Gumbel training. For SbS training, we assume an imperfect parsing tree is in place; hence the Tree-LSTM model exploits existing partial structures to predict the next composition position. For ST-Gumbel, the tree structure is sampled from its predicted probability, enabling our model to explore the space of trees beyond the given imperfect tree. 4 Experiments We train our model on the AllNLI dataset and evaluate on the MultiNLI development set, following experimental settings in Htut et al. (2018) (for detailed settings, please see Appendix C). Table 1 shows the parsing F-scores against the Stanford Parser. 
The ST-Gumbel Tree-LSTM model and the PRPN were run five times with different initializations, each known as a trajectory. For imitation learning, given a PRPN trajectory, we perform SbS training once and then policy refinement for five runs. Left-/right-branching and balanced trees are also included as baselines. Parsing results with punctuation. It is a common setting to keep all punctuation for evaluation on the AllNLI dataset (Htut et al., 2018). In such a setting, we find that the Tree-LSTM, trained by ST-Gumbel from random initialization, does not outperform balanced trees, whereas the PRPN outperforms it by around 30 points. Our PRPN replication results are consistent with Htut et al. (2018). Our first stage in imitation learning (SbS training) is able to successfully transfer the PRPN’s knowledge to the Tree-LSTM, achieving an Fscore of 52.0, which is clearly higher than the 21.9 achieved by the Tree-LSTM trained with STGumbel alone, and even slightly higher than the PRPN itself. The second stage, policy refinement, achieves a further improvement in unsupervised parsing, outperforming the PRPN by 2.1 points. We also evaluate the self-agreement by computing the mean F-score across 25 runs for policy refinement and five runs for other models. We find that our imitation learning achieves improved selfagreement in addition to improved parsing performance. Parsing results without punctuation. We are interested in investigating whether punctuation make a difference on unsupervised parsing. In the setting without punctuation, our imitation learning approach with policy refinement outperforms the PRPN by a larger margin (7.3 F-score points) than in the setting with punctuation. But surprisingly, strictly right-branching trees are a very strong baseline in this setting, achieving the best parsing performance overall. The PRPN cannot outperform the right-branching baseline, even though it uses a right-branching bias in its tree inference procedure. By way of explanation, we assume that the syntactic trees we compare against (given by the Stanford parser) become more right-branching if punctuation is removed. A simple example is the period at the end of the sentence: this is always attached to a high-level constituent in the correct tree (often to Root), while right-branching attaches it to the most deeply embedded constituent. So this period is always incorrectly predicted by the rightbranching baseline, if punctuation is left in. To further elucidate this issue, we also compute the agreement of various models with a rightbranching baseline. In the setting without punctuation, the PRPN sets an initial policy that agrees fairly well with right-branching, and this rightbranching bias is reinforced by imitation learning and policy refinement. However, in the setting with punctuation, the agreement with rightbranching changes in the opposite way. We conjecture that right-branching is a reason why our imitation learning achieves a larger improvement without punctuation. Right-branching provides a relatively flat local optimum so that imitation learning can do further exploring with a low risk of moving out of it. Performance across constituent types. We break down the performance of latent tree induction across constituent types in the setting of keeping punctuation. We see that, among the six most common ones, our imitation approach outperforms the PRPN on four types. However, we also notice that for the most frequent type (NP), our approach is worse than the PRPN. 
This shows that the strengths of the two approaches complement each other, and in future work ensemble 3489 w/o Punctuation w/ Punctuation Model Mean F Self-agreement RB-agreement Mean F Self-agreement RB-agreement Left-Branching 20.7 18.9 Right-Branching 58.5 18.5 Balanced-Tree 39.5 22.0 ST-Gumbel 36.4 57.0 33.8 21.9 56.8 38.1 PRPN 46.0 48.9 51.2 51.6 65.0 27.4 Imitation (SbS only) 45.9 49.5 62.2 52.0 70.8 20.6 Imitation (SbS + refine) 53.3† 58.2 64.9 53.7† 67.4 21.1 Table 1: Parsing performance with and without punctuation. Mean F indicates mean parsing F-score against the Stanford Parser (early stopping by F-score). Self-/RB-agreement indicates self-agreement and agreement with the right-branching baseline across multiple runs. † indicates a statistical difference from the corresponding PRPN baseline with p < 0.01, paired one-tailed bootstrap test.2 Type # Occur ST-Gumbel PRPN Imitation (SbS + refine) NP 69k 22.6 53.2 49.5 VP 58k 4.9 49.4 57.0 S 42k 44.3 63.9 66.0 PP 29k 13.9 55.4 52.4 SBAR 12k 6.9 38.9 41.4 ADJP 4k 10.6 44.2 46.5 Table 2: Parsing accuracy for six phrase types which occur more than 2k times in the MultiNLI development set with keeping punctuation. methods could be employed to combine them. Discussion. Our results show the usefulness of a downstream task for unsupervised parsing. Specifically, policy refinement with a semantically oriented objective improves parsing performance by two F-score points, outperforming the previous state-of-the-art PRPN model. This provides evidence against previous studies which have claimed that an external, non-syntactic task such as NLI does not improve parsing performance (Williams et al., 2018a; Htut et al., 2018). At the same time, our results are compatible with findings of Shi et al. (2018) that a range of different tree structures yield similar classification accuracy in NLI: we find that the mean NLI accuracy of the ST-Gumbel-only model and our imitation learning model with policy refinement is 69.9% and 69.2%, respectively, on the MultiNLI development set. NLI performance seems to be largely unaffected by the syntactic properties of the induced trees. An interesting question is why ST-Gumbel improves unsupervised parsing when trained with an NLI objective. It has been argued that NLI as currently formulated is not a difficult task (Poliak et al., 2018); this is presumably why models can 2F-score is not normally distributed. It is therefore appropriate to use the non-parametric bootstrap test. perform well across a range of different tree structures, only some of which are syntactically plausible. However, this does not imply that the TreeLSTM will learn nothing when trained with NLI. We can think of its error surface being very rugged with many local optima; the syntactically correct tree corresponds to one of them. If the model is initialized in a meaningful catchment basin, NLI training is more likely to recover that tree. The intuition also explains why the Tree-LSTM alone achieves low parsing performance and low selfagreement. On a very rugged high-dimensional error surface, the chance of getting into a particular local optimum (corresponding to a syntactically correct tree) is low, especially in RL and STGumbel, which are doubly stochastic. We show examples of generated trees in Appendix D. 5 Conclusion We proposed a novel imitation learning approach to unsupervised parsing. We start from the differentiable PRPN model and transfer its knowledge to a Tree-LSTM by step-by-step imitation learning. 
The Tree-LSTM’s policy is then refined towards a semantic objective. We achieve a new state-of-the-art result of unsupervised parsing on the NLI dataset. In future work, we would like to combine more potential parsers—including chartstyle parsing and shift-reduce parsing—and transfer knowledge from one to another in a co-training setting. Acknowledgments We would like to thank Yikang Shen and Zhouhan Lin at MILA for fruitful discussions. FK was supported by the Leverhulme Trust through International Academic Fellowship IAF-2017-019. 3490 References Roee Aharoni and Yoav Goldberg. 2017. Towards string-to-tree neural machine translation. In ACL, pages 132–140. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In EMNLP, pages 632–642. Jihun Choi, Kang Min Yoo, and Sang-goo Lee. 2018. Learning to compose task-specific tree structures. In AAAI, pages 5094–5101. Yuntian Deng, Yoon Kim, Justin Chiu, Demi Guo, and Alexander Rush. 2018. Latent alignment and variational attention. In NIPS, pages 9712–9724. Andrew Drozdov, Pat Verga, Mohit Yadav, Mohit Iyyer, and Andrew McCallum. 2019. Unsupervised latent tree induction with deep inside-outside recursive autoencoders. In NAACL-HLT. Yang Gao, Ji Lin, Fisher Yu, Sergey Levine, Trevor Darrell, et al. 2018. Reinforcement learning from imperfect demonstrations. arXiv preprint arXiv:1802.05313. Phu Mon Htut, Kyunghyun Cho, and Samuel Bowman. 2018. Grammar induction with neural language models: An unusual replication. In EMNLP, pages 4998–5003. Ahmed Hussein, Mohamed Medhat Gaber, Eyad Elyan, and Chrisina Jayne. 2017. Imitation learning: A survey of learning methods. ACM Comput. Surveys, 50(2):21:1–21:35. Eric Jang, Shixiang Gu, and Ben Poole. 2017. Categorical reparameterization with Gumbel-softmax. In ICLR. Taeuk Kim, Jihun Choi, Daniel Edmiston, Sanghwan Bae, and Sang-goo Lee. 2019a. Dynamic compositionality in recursive neural networks with structureaware tag representations. In AAAI. Yoon Kim, Alexander M Rush, Lei Yu, Adhiguna Kuncoro, Chris Dyer, and G´abor Melis. 2019b. Unsupervised recurrent neural network grammars. In NAACL-HLT. Dan Klein and Christopher Manning. 2014. Corpusbased induction of syntactic structure: Models of dependency and constituency. In ACL, pages 479–486. Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2016. Rationalizing neural predictions. In EMNLP, pages 107–117. Jean Maillard, Stephen Clark, and Dani Yogatama. 2017. Jointly learning sentence embeddings and syntax with unsupervised Tree-LSTMs. arXiv preprint arXiv:1705.09189. Lili Mou, Zhengdong Lu, Hang Li, and Zhi Jin. 2017. Coupling distributed and symbolic execution for natural language queries. In ICML, pages 2518–2526. Shashi Narayan and Claire Gardent. 2014. Hybrid simplification using deep semantics and machine translation. In ACL, pages 435–445. Barbara BH Partee, Alice G ter Meulen, and Robert Wall. 2012. Mathematical Methods in Linguistics, volume 30. Springer Science & Business Media. Hao Peng, Sam Thomson, and Noah A. Smith. 2018. Backpropagating through structured argmax using a SPIGOT. In ACL, pages 1863–1873. Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme. 2018. Hypothesis only baselines in natural language inference. In Proc. 7th Joint Conf. Lexical and Computational Semantics, pages 180–191. Yikang Shen, Zhouhan Lin, Chin-Wei Huang, and Aaron Courville. 2018. 
Neural language modeling by jointly learning syntax and lexicon. In ICLR. Yikang Shen, Shawn Tan, Alessandro Sordoni, and Aaron Courville. 2019. Ordered neurons: Integrating tree structures into recurrent neural networks. In ICLR. Haoyue Shi, Hao Zhou, Jiaze Chen, and Lei Li. 2018. On tree-based neural sentence modeling. In EMNLP, pages 4631–4641. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In EMNLP, pages 1631–1642. Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. In ACL-IJCNLP, pages 1556–1566. Patrick Verga, David Belanger, Emma Strubell, Benjamin Roth, and Andrew McCallum. 2016. Multilingual relation extraction using compositional universal schema. In NAACL-HLT, pages 886–896. Adina Williams, Andrew Drozdov, and Samuel R. Bowman. 2018a. Do latent tree learning models identify meaningful structure in sentences? TACL, 6:253–267. Adina Williams, Nikita Nangia, and Samuel R Bowman. 2018b. A broad-coverage challenge corpus for sentence understanding through inference. In NAACL-HLT, pages 1112–1122. Ronald J Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229–256. 3491 Dani Yogatama, Phil Blunsom, Chris Dyer, Edward Grefenstette, and Wang Ling. 2017. Learning to compose words into sentences with reinforcement learning. In ICLR. A Details of the PRPN We now describe in more detail the parsingreading-predict network (PRPN), proposed by Shen et al. (2018). The PRPN introduces a concept called the syntactic distance, illustrated in Figure 2. The syntactic distance dt is defined as the height of the common ancestor of wt−1 and wt in a tree. The PRPN uses a two-layer multilayer perceptron (MLP) to estimate dt. The input is the embeddings of the current word and its left context wt−L, wt−L+1, · · · , wt. The output is given by: bdt = MLP(wt−L, wt−L+1, · · · , wt) (3) In fact, absolute distance values are not required, it is sufficient to preserve their order. In other words, if di < dj, then it is desired that bdi < bdj. How3 2 1 0 Height w1 w2 w3 w4 w5 3 2 1 4 5 2 2 3 1 Syntactic Distance Composition Position (dummy) d Figure 2: A parse tree with syntactic distance values. st i = gt i Pt−1 i=1 gt i est i … w1 w2 wt Predict next word Gated-weighted attention wt+1 = ? s1 st s2 Figure 3: The prediction of the next word in the PRPN language model. ever, even the order of dt is not available at training time, and bdt is learned end-to-end in an unsupervised manner. The PRPN computes the difference between the distance dt at the current step and all previous steps dj for 2 ≤j < t. The difference is normalized to the range [0, 1]: αt j = hardtanh(τ(bdt −bdj)) + 1 2 (4) where τ is the temperature. Finally, a soft gate is computed right-to-left in a multiplicatively cumulative fashion: gt i = t−1 Y j=i+1 αt j (5) for 1 ≤i ≤t−1. The gates gt i are used to reweight another inner-sentence attention est i, which is computed as: est i = softmax{h⊤ i (W[ht−1; wt])} (6) The reweighed inner-sentence attention si then becomes: st i = gt i Pt−1 i=1 gt i est i (7) and is used to compute the convex combination of attention candidate vectors, which are incorporated in a recurrent neural network to predict the next word, shown in Figure 3. 
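The gating mechanism of Equations (4)-(7) is compact enough to reproduce directly. Below is a minimal NumPy sketch of how the estimated syntactic distances are turned into soft gates and then used to reweight the inner-sentence attention. The distance and attention values are made up for illustration, and the exact indexing conventions are our own assumption, not the authors' implementation.

```python
import numpy as np

def hardtanh(x):
    """Clip values to [-1, 1], as used in Equation (4)."""
    return np.clip(x, -1.0, 1.0)

def prpn_gates(d, tau=1.0):
    """Given syntactic distances d_1..d_t (1-d array, last entry is the
    current step t), return the gates g^t_1..g^t_{t-1} of Equation (5)."""
    t = len(d)
    d_t = d[-1]
    # alpha^t_j for j = 2..t-1 (Equation 4), mapped to [0, 1]; alpha[k] = alpha^t_{k+2}
    alpha = (hardtanh(tau * (d_t - d[1:t - 1])) + 1.0) / 2.0
    gates = np.ones(t - 1)                 # gates[i-1] holds g^t_i; g^t_{t-1} = 1
    # g^t_i = prod_{j=i+1}^{t-1} alpha^t_j, accumulated right-to-left
    for i in range(t - 2, 0, -1):
        gates[i - 1] = gates[i] * alpha[i - 1]
    return gates

def reweight_attention(gates, attn):
    """Equation (7): renormalize the inner-sentence attention with the gates."""
    return gates * attn / (gates.sum() + 1e-8)

# toy example: a 5-token prefix, predicting at step t = 5
d = np.array([1.0, 2.0, 1.0, 3.0, 1.5])   # made-up syntactic distances d_1..d_5
attn = np.array([0.1, 0.4, 0.3, 0.2])     # made-up inner-sentence attention (Eq. 6)
g = prpn_gates(d)
print(reweight_attention(g, attn))
```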
B Details of Gumbel-Softmax Gumbel-Softmax can be thought of as a relaxed version of reinforcement learning. It is used in the training of the Tree-LSTM model (Choi et al., 2018), as well as policy refinement in our imitation learning. In particular, we use the straight-through Gumbel-Softmax (ST-Gumbel, Jang et al., 2017). In the forward propagation of ST-Gumbel training, the model samples an action—in the TreeLSTM model, the position of composition—from the distribution p by the Gumbel trick. The sampled action can be represented as a one-hot vector a, whose elements take the form: ai = ( 1, if i = argmaxj{log(pj) + gj} 0, otherwise (8) where gi is called the Gumbel noise, given by: gi = −log(−log(ui)) (9) ui ∼Uniform(0, 1) (10) It can be shown that a is an unbiased sample from the original distribution p (Jang et al., 2017). 3492 This is a powerful evocative and museum . This is a powerful evocative and museum . He seemed trifle a . Chapter 1 : His name real was . Leonard Franklin Slye Chapter 1 : His name real was . Leonard Franklin Slye D 7UHHH[DPSOHVRI3531 E 7UHHH[DPSOHVRIRXUPRGHO 6E6UHƉQH embarrassed He seemed trifle a . embarrassed Figure 4: Parse tree examples produced by the PRPN and our model (SbS + refine). During backpropagation, ST-Gumbel substitutes the selected one-hot action a given by argmax in Equation (8) with a softmax operation. epi = exp{(log(pi) + gi)/γ} P j exp{(log(pj) + gj)/γ} (11) where γ is a temperature parameter that can also be learned by backpropagation. The Tree-LSTM model is trained using the loss in a downstream task (for example, cross-entropy loss for classification problems). Compared with reinforcement learning, the ST-Gumbel trick allows more information to be propagated back to the bottom of the Tree-LSTM in addition to the selected actions, although it does not follow exact gradient computation. For prediction (testing), the model selects the most probable composition according to its predicted probabilities. C Experimental Setup We conduct experiments on the AllNLI dataset, the concatenation of the Stanford Natural Language Inference Corpus (Bowman et al., 2015) and the Multi-Genre NLI Corpus (MultiNLI; Williams et al. 2018b). As the MultiNLI test set is not publicly available, we follow previous work (Williams et al., 2018a; Htut et al., 2018) and use the development set for testing. For early stopping, we remove 10k random sentence pairs from the AllNLI training set to form a validation set. Thus, our AllNLI dataset contains 931k, 10k, and 10k sample pairs for training, validation, and test, respectively. We build the PRPN model and the Tree-LSTM parser following the hyperparameters in previous work (Shen et al., 2018; Choi et al., 2018).3 For the SbS training stage, we set λ to be 0.03. For the policy refinement stage, the initial temperature is manually set to 0.5. The PRPN is trained by a language modeling loss on the AllNLI training sentences, whereas the Tree-LSTM model is trained by a cross-entropy loss for AllNLI classification. We adopt the standard metric and compute the unlabeled F-score of the constituents predicted by our parsing model against those given by the Stanford PCFG Parser (version 3.5.2). Although the Stanford parser itself may make parsing errors, it achieves generally high performance and is a reasonable approximation of correct parse trees. D Parse Tree Examples In Figure 4, we present a few examples of parse trees generated by the PRPN and by our model (SbS + refine). 
As can be seen, our model is able to handle the period correctly in these examples. Although this could be specified by hand-written rules (Drozdov et al., 2019), it is in fact learned by our approach in an unsupervised manner, since punctuation marks are treated as tokens just like other words, and our training signal gives no clue regarding how punctuation marks should be processed. Moreover, our model is able to parse the verb phrases more accurately than the PRPN, including "is a powerful and evocative museum" and "seemed a trifle embarrassed". This is also evidenced by quantitative results in Table 2. 3The code bases of the PRPN and the Gumbel Tree-LSTM are available at https://github.com/yikangshen/PRPN and https://github.com/nyu-mll/spinn/tree/is-it-syntax-release
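As a supplement to Appendix B, the Gumbel sampling and the straight-through relaxation of Equations (8)-(11) can be written down in a few lines. The sketch below is illustrative only (plain NumPy, a toy probability vector over composition positions, and a fixed temperature); it is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_noise(shape):
    """g_i = -log(-log(u_i)), u_i ~ Uniform(0, 1)  (Equations 9-10)."""
    u = rng.uniform(low=1e-9, high=1.0, size=shape)   # avoid log(0)
    return -np.log(-np.log(u))

def st_gumbel_sample(p, gamma=0.5):
    """Return the hard one-hot action a of Equation (8), used in the forward
    pass, and the relaxed softmax of Equation (11), used for gradients."""
    g = gumbel_noise(p.shape)
    scores = np.log(p) + g
    hard = np.zeros_like(p)
    hard[np.argmax(scores)] = 1.0                     # one-hot sample a
    soft = np.exp(scores / gamma)
    soft = soft / soft.sum()                          # relaxed distribution
    return hard, soft

# toy distribution over 4 candidate composition positions
p = np.array([0.1, 0.6, 0.2, 0.1])
a, p_soft = st_gumbel_sample(p)
print(a, p_soft.round(3))
```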
2019
338
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3493–3498 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3493 Women’s Syntactic Resilience and Men’s Grammatical Luck: Gender-Bias in Part-of-Speech Tagging and Dependency Parsing Aparna Garimella1, Carmen Banea1, Dirk Hovy2, Rada Mihalcea1 1Computer Science and Engineering, University of Michigan, Ann Arbor, MI {gaparna,carmennb, mihalcea}@umich.edu 2Department of Marketing, Bocconi University, Milan, Italy [email protected] Abstract Several linguistic studies have shown the prevalence of various lexical and grammatical patterns in texts authored by a person of a particular gender, but models for part-of-speech tagging and dependency parsing have still not adapted to account for these differences. To address this, we annotate the Wall Street Journal part of the Penn Treebank with the gender information of the articles’ authors, and build taggers and parsers trained on this data that show performance differences in text written by men and women. Further analyses reveal numerous part-of-speech tags and syntactic relations whose prediction performances benefit from the prevalence of a specific gender in the training data. The results underscore the importance of accounting for gendered differences in syntactic tasks, and outline future venues for developing more accurate taggers and parsers. We release our data to the research community. 1 Introduction Sociolinguistic studies have shown that people use grammatical features to signal the speakers’ membership in a demographic group, with a focus on gender (Vigliocco and Franck, 1999; Mondorf, 2002; Eckert and McConnell-Ginet, 2013). Mondorf (2002) shows systemic differences in the usage of various types of clauses and their positions for men and women, stating that women have a higher usage of adverbial (accordingly, consequently1), causal (since, because), conditional (if, when) and purpose (so, in order that) clauses, while men tend to use more concessive clauses (but, although, whereas). Similar results hold across various languages in Johannsen et al. (2015). 1We exemplify in parentheses conjunctions or conjunctive adverbs that introduce and link in a subordinating relationship the given type of subordinate clause. This correlation between grammatical features and gender has important ramifications for statistical models of syntax: if the training sample is unbalanced, these differences inadvertently introduce a strong gender bias into the training data. Such demographic imbalances are amplified by the model (Zhao et al., 2017), which in turn can be detrimental to members of the underrepresented demographic groups (Jørgensen et al., 2015; Hovy and Søgaard, 2015; Hovy and Spruit, 2016). Since several works use syntactic analysis to improve tasks ranging from data-driven dependency parsing (Gadde et al., 2010) to sentiment classification (Moilanen and Pulman, 2007; Socher et al., 2013), underlying model biases end up affecting the performance of a wide range of applications. While data bias can be overcome by accounting for demographics, and can even improve classification performance (Volkova et al., 2013; Hovy, 2015; Bolukbasi et al., 2016; Benton et al., 2017; Zhao et al., 2017; Lynn et al., 2017), there is still little understanding on the amount and sources of bias in most training sets. 
In order to address gender bias in part-of-speech (POS) tagging and dependency parsing, we first require an adequate size data set labeled for a) syntax along with b) gender information of the authors. However, existing data sets fail to meet both criteria: data sets with gender information are either too small to train on, lack syntactic information, or are restricted to social media; sufficiently large syntactic data sets are not labeled with gender information and rely (at least in part) on news genre corpora such as the Wall Street Journal (WSJ). To address this problem, we augment the WSJ subset of the Penn Treebank corpus with gender, based on author first name. To our knowledge, this is the first work that explores syntactic tagging while accounting for gender. 3494 Contributions. The main contributions of this paper are as follows: • We annotate a standard POS-tagging and dependency parsing data set with gender information. • We conduct experiments and show the role played by gender information in POS-tagging and syntactic parsing. • We analyze POS and syntactic differences related to author gender. 2 Annotating PTB for Gender The Penn Treebank (Marcus et al., 1993) is the de facto data set used to train many of the POS taggers (Brill, 1994; Ratnaparkhi, 1996; Toutanova and Manning, 2000; Toutanova et al., 2003) and syntactic parsers (Klein and Manning, 2003; Nivre and Scholz, 2004; Chen and Manning, 2014). It contains articles published in the WSJ in 1989, as well as a small sample of ATIS-3 material, totalling over one million tokens, and manually annotated with POS tags and syntactic parse trees. We supplement the WSJ articles with metadata from the ProQuest Historical Newspapers database, which indexes, among others, WSJ articles released between 1923 and 2000, and provides fields such as author names. Out of the original 2,499 WSJ articles, 1,814 are found in ProQuest and their metadata is retrieved. 556 articles with an empty Author field are removed, resulting in 1,258 WSJ articles with author information. Using a combination of regular expressions and manual verification, we extract author names for 1,006 articles (the remaining 252 articles do not have actual author names). We isolate the first names using regular expressions, and follow Prabhakaran and Rambow (2017) to automatically assign gender and compute a gender ambiguity score taking into consideration: (1) the list of first names obtained based on Facebook profiles by Tang et al. (2011); and (2) the Social Security Administration’s (SSA) baby names data set.2 The Facebook list has male and female assignment scores for each name, while the SSA maintains a data set of counts for baby names and gender for each year since the 1880s. If both databases agree in their gender assignment, we use that as the final label (987 articles). For the remaining 19, we manually identify the author gen2http://www.ssa.gov/oact/babynames/ limits.html der by cross-referencing the names online. 5 of these only had a first name initial, and thus could not be resolved and were discarded. The gender mapping results in 1,001 gender tagged WSJ articles. Discarding 115 articles with joint authorship and considering only articles with both POS tags and parse trees results in a final set of 804 articles from the Treebank. The final set of articles includes 379 unique authors, with a heavy gender imbalance of 1 to 3 (96 female and 283 male). 
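The agreement rule used for the automatic gender assignment can be summarized in a short sketch. The lookup tables below (fb_gender, ssa_gender) are hypothetical stand-ins for the Facebook name list and the SSA baby-names data described above; the real resources carry scores and counts rather than single labels.

```python
def assign_gender(first_name, fb_gender, ssa_gender):
    """Assign gender automatically only if both name resources agree;
    otherwise flag the author for manual resolution."""
    fb = fb_gender.get(first_name.lower())    # 'F', 'M' or None
    ssa = ssa_gender.get(first_name.lower())
    if fb is not None and fb == ssa:
        return fb                              # both databases agree
    return None                                # resolve manually or discard

# toy lookups standing in for the Facebook and SSA resources
fb_gender = {"mary": "F", "john": "M", "pat": "M"}
ssa_gender = {"mary": "F", "john": "M", "pat": "F"}

for name in ["Mary", "John", "Pat"]:
    print(name, assign_gender(name, fb_gender, ssa_gender))
# Mary -> F, John -> M, Pat -> None (the databases disagree, needs manual check)
```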
The total number of sentences in female articles is 7,282, with a mean of 21.17 tokens per sentence (σ = 10.03), while the male articles consist of 19,400 sentences, with a mean of 20.99 tokens per sentence (σ = 10.52). This is similar to the findings of Cornett (2014), who also notes a lengthier utterance mean for women versus men (her study focuses on adolescents). We use the Universal Dependencies (UD) v1.4 (Nivre et al., 2016) annotation guidelines for parse trees and POS tags, and accordingly, convert the constituency trees from the Penn Treebank (PTB) format to the CoNNL format.3 We then map the POS tags to the universal POS tag set.4 3 The Effect of Gender in POS Tagging and Dependency Parsing To assess whether author gender affects parsing performance, we train the state-of-theart transition-based neural network model SyntaxNet5 (Andor et al., 2016) on the data (with default parameters), and test whether stratified training can alleviate these effects. We evaluate performance for individual POS-tags and dependency relations, as well as over all the tags and relations. Stratifying the Training Data. Since the WSJ data has a heavy gender imbalance (1:3 female to male articles), we stratify the data by discarding male examples so that the number of female and male sentences and tokens do not differ by more than 15%: (1) We sort the female and male WSJ sentences in descending order of number of tokens. (2) For each female sentence Fi with fi number of tokens, we select a male sentence Mj such that the number of tokens mj ∈ 3https://nlp.stanford.edu/software/ stanford-dependencies.shtml. 4The data sets are annotated with the 16 universal POStags; conj is used for both sconj and conj tags. 5https://github.com/tensorflow/models/ tree/master/syntaxnet 3495 [0.75fi, 1.25fi]. (3) If we run out of male sentences which qualify for this condition, we choose the next male sentence in descending order with number of tokens mj ∈[5, 30]. Table 1 shows the number of sentences and tokens in the WSJ data before and after balancing for gender. We train the model in three scenarios: (1) on female data, (2) on male data, and (3) on generic data containing an equal number of male and female sentences. All three data sets have an equal number of sentences. RAW BALANCED GENDER SENT. TOKENS SENT. TOKENS FEMALE 7,282 175,107 7,282 175,107 MALE 19,400 461,742 7,282 202,144 Table 1: Number of sentences and tokens in the raw and balanced WSJ data. Evaluation. We report standard evaluation metrics: accuracy (ACC) – the percentage of tokens that have a correct assignment to their part-ofspeech (for part-of-speech tagging); and labeled attachment score (LAS) – the percentage of tokens that have a correct assignment to their heads and the correct dependency relation (Nivre et al., 2004) (for dependency parsing). In each training setting, we generate five random training-test splits at a 90:10 ratio on the WSJ data set. In order to derive parameters for SyntaxNet, each train split is further randomly split into five folds. When creating the folds, we ensure that sentences authored by the same author are not shared across splits to avoid overfitting to the writing styles of individual authors, rather than learning the underlying gender-based differences as they pertain to syntax. 
TRAIN: GENERIC FEMALE MALE TEST POS ACCURACY GENERIC 95.81 95.49 95.74 FEMALE 95.96 95.90 95.47 MALE 95.47 95.03 96.08 DEPENDENCY LAS GENERIC 83.03 82.01 83.11 FEMALE 83.46 83.17 83.12 MALE 82.53 81.15 83.21 Table 2: Results for part-of-speech tagging (ACC) and dependency parsing (LAS) on WSJ test data. In each training scenario, we evaluate the models on: (1) female-only data, (2) male-only data, and (3) generic data containing an equal number of male and female sentences (364 sentences from each gender), such that all test settings share the same number of sentences (10% of 7,282 = 728). Since we have 5 test folds, and each fold in turn has 5 validation folds (for parameter tuning), we report results averaged over the 25 total runs to ensure robustness. 4 Results and Discussion Table 2 (top) shows the POS-tagging accuracies for labeling the WSJ test data. We should note that while accuracy differences may be relatively small, they are within the margins of recent stateof-the-art improvements (Andor et al., 2016) in a task that achieves extremely high accuracy and where further improvement can only be incremental. Considering performance across the three different training scenarios, the female test data sees a slight benefit from a mixed training set, achieving its highest accuracy of 95.96%, while male test data only achieves the highest performance (96.08%) when training on male-only data, representing a relative error rate reduction of 13.46% when compared to the generic model. The setting closest to current POS tagging setups is embodied by training on the generic model. In this case, the female test data achieves its highest accuracy (95.96%), but the male test data achieves only a second best performance (95.47%). This difference suggests an area of possible improvement in performance for off-theshelf POS taggers. We see a similar pattern in dependency parsing (Table 2, bottom), where the female test set achieves the highest LAS accuracy performance on the mixed training set (83.46%). The male test set obtains its highest accuracy when the training is performed on male-only data, with a relative error reduction of 3.89% as compared to training on generic data. It seems that female writings are more diverse, with a complexity that can best be approximated with mixed-gender training samples. This setting improves performance by relative error reductions of (1.46%, 1.72%) (ACC, LAS) when compared to training on female-only data, and (10.82%, 2.01%) (ACC, LAS) when compared to training on male-only data. The male test sentences appear to display less variability, and therefore can3496 not benefit the same amount of information from the spectrum displayed by female training data; actually, any time female-authored sentences are present in the training set (whether as all femaledata or generic data), performance drops for male test data. When comparing male and female-only training sets and their ability to generalize to the opposite gender, we notice that male training data is more maleable and lends itself better to be used when testing on female samples, but not the reverse. We note that the WSJ exemplifies a highly formal and scripted newswire genre, where gender differences are likely less pronounced, yet they still surface. We will likely observe even stronger language differences in a large, informal data set comprising both gender and syntactic information. These differences can be leveraged to achieve a better performance for core NLP tasks. 
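The relative error-rate reductions cited throughout this section can be recomputed from the accuracies in Table 2. The snippet below approximately reproduces the 13.46% figure for male test data (tagging accuracy 95.47 under generic training vs. 96.08 under male-only training); small deviations come from the rounding of the reported accuracies.

```python
def relative_error_reduction(acc_baseline, acc_model):
    """Relative reduction in error rate when moving from the baseline
    model to the compared model (accuracies given in percent)."""
    err_baseline = 100.0 - acc_baseline
    err_model = 100.0 - acc_model
    return 100.0 * (err_baseline - err_model) / err_baseline

# male test data, POS tagging: generic vs. male-only training (Table 2)
print(round(relative_error_reduction(95.47, 96.08), 2))
# ~13.47 from the rounded table values; the paper reports 13.46% from unrounded scores
```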
TRAIN: GENERIC FEMALE MALE ACC ACC ERR ACC ERR MALE TEST noun 93.74 92.51 -19.63 94.23 7.92 det 99.09 99.09 -0.13 99.13 4.08 num 99.23 99.34 15.35 99.35 16.60 pron 99.17 99.11 -6.69 99.19 2.75 propn 93.97 90.10 -64.14 95.26 21.41 FEMALE TEST pron 98.91 99.12 18.99 98.97 4.64 aux 98.60 98.77 12.12 98.39 -14.75 adj 92.12 92.62 6.37 92.36 3.06 propn 94.66 94.97 5.76 91.60 -57.33 Table 3: Tag-wise results for part-of-speech tagging on WSJ test data; Accuracies (Acc) and relative error reduction rates (Err) versus generic models are reported. We also observe clear gender-based performance improvements at the tag level (Table 3). For instance, models trained on male-only data better predict nouns, determiners, numerals, pronouns and proper nouns for male test data, compared to models trained on mixed data (with a relative error rate reduction between 2.75% and 21.41%). Similarly, female-trained models better predict pronouns, auxiliaries, adjectives, and proper nouns for female test data, compared to models trained on mixed data (with a relative error rate reduction between 5.76% and 18.99%). For 8 out of the 16 POS tags, mixed training achieves best results for either female or male test data. TRAIN: GEN. FEMALE MALE LAS LAS ERR LAS ERR MALE TEST csubj 25.20 27.89 3.60 36.13 14.61 iobj 47.11 40.61 -12.29 48.59 2.80 acl 63.93 60.47 -9.60 66.09 5.99 compound 75.06 72.95 -8.45 77.26 8.83 xcomp 74.39 72.26 -8.30 75.38 3.85 dobj 84.48 82.13 -15.17 85.20 4.66 conj 82.45 80.74 -9.77 82.82 2.11 nummod 92.00 91.24 -9.42 93.08 13.53 FEMALE TEST amod 91.18 91.46 3.11 91.08 -1.18 cop 92.78 93.89 15.47 92.80 0.34 appos 79.44 80.31 4.21 80.13 3.38 cc:preconj 54.68 65.09 22.96 50.78 -8.60 Table 4: Tag-wise results for dependency parsing on WSJ test data; LAS and relative error reduction rates (Err) versus generic models are reported. In dependency parsing (Table 4), models trained on female data better predict amod, cop, appos, and cc:preconj labels for female test sets (with a relative error rate reduction between 3.11% and 22.96% compared to generic models). Similarly, male-trained models are able to outperform mixed models on male test data for csubj, iobj, acl, compound, xcomp, dobj, conj and nummod with a relative error rate reduction between 2.11% and 14.61%. In dependency parsing, mixed training never achieves the best per tag results for either male or female test sets. This suggests that leveraging the idiosyncrasies for specific tags displayed by each gender could help create gender-agnostic models that leverage the syntactic strengths of each gender, and improve prediction accuracy for both. It is to be noted that there is a heavy topic overlap between the male and female WSJ articles, with a Pearson correlation of 0.85 between the male and female topic distributions6, indicating that the differences in performance between male and female models on various evaluation sets are not from topical shifts, but from syntactic variations. 5 Conclusion Our experiments show that women’s syntax displays resilience: POS taggers and dependency parsers trained on any data perform well when 6The topic distributions were extracted using Latent Dirichlet Allocation (Blei et al., 2003). We use the LDA implementation included with the Python Gensim library ( ˇReh˚uˇrek and Sojka, 2010) with 10 topics. 3497 tested on female writings. Male syntax, on the other hand, is parsed or tagged best when sufficient male-authored data is available in the training set. 
This suggests that men “lucked out” with respect to the gender imbalance in the WSJ training data: a more balanced or more female-heavy data set could have caused significant drops in the performance of automatic syntax analysis for male writers. The gender annotated WSJ data provides a starting point for leveraging gendered grammatical differences and the development of better and fairer models and tools for syntax annotation, as well as for the many NLP down-stream tasks that use syntax in their models. The WSJ author gender information is publicly available from http://lit.eecs.umich. edu/downloads.html. Acknowledgments This material is based in part upon work supported by the Michigan Institute for Data Science, by the National Science Foundation (grant #1815291), and by the John Templeton Foundation (grant #61156). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the Michigan Institute for Data Science, the National Science Foundation, or the John Templeton Foundation. The authors would like to thank the reviewers of the various drafts for their comments. Dirk Hovy is a member of the Bocconi Institute for Data Science and Analystics (BIDSA) and the Data and Marketing Insights (DMI) unit. References Daniel Andor, Chris Alberti, David Weiss, Aliaksei Severyn, Alessandro Presta, Kuzman Ganchev, Slav Petrov, and Michael Collins. 2016. Globally normalized transition-based neural networks. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2016, pages 2442–2452, Berlin, Germany. Adrian Benton, Margaret Mitchell, and Dirk Hovy. 2017. Multi-task learning for mental health using social media text. In Proceedings of the 15th European Chapter of the Association of Computational Linguistics (Volume 1: Long Papers), EACL 2017, pages 152–162, Valencia, Spain. David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent dirichlet allocation. the Journal of machine Learning research, 3:993–1022. Tolga Bolukbasi, Kai-Wei Chang, James Y. Zou, Venkatesh Saligrama, and Adam Tauman Kalai. 2016. Quantifying and reducing stereotypes in word embeddings. CoRR. Eric Brill. 1994. Some advances in transformationbased part of speech tagging. In Proceedings of the Twelfth National Conference on Artificial Intelligence (Vol. 1), AAAI 1994, pages 722–727, Menlo Park, CA, USA. Danqi Chen and Christopher Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), EMNLP 2014, pages 740–750, Doha, Qatar. Hannah E. Cornett. 2014. Gender differences in syntactic development among english speaking adolescents. Inquiries Journal/Student Pulse, 6(3):1–6. Penelope Eckert and Sally McConnell-Ginet. 2013. Language and gender. Cambridge University Press. Phani Gadde, Karan Jindal, Samar Husain, Dipti Misra Sharma, and Rajeev Sangal. 2010. Improving data driven dependency parsing using clausal information. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, NAACL HLT 2010, pages 657–660, Los Angeles, California, USA. Dirk Hovy. 2015. Demographic factors improve classification performance. 
In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, ACLIJCNLP 2015, pages 752–762, Beijing, China. Dirk Hovy and Anders Søgaard. 2015. Tagging performance correlates with author age. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), ACL-IJCNLP 2015, pages 483–488, Beijing, China. Dirk Hovy and Shannon L. Spruit. 2016. The social impact of natural language processing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), ACL 2016, pages 591–598, Berlin, Germany. Anders Johannsen, Dirk Hovy, and Anders Søgaard. 2015. Cross-lingual syntactic variation over age and gender. In Proceedings of CoNLL, pages 103–112. Anna Jørgensen, Dirk Hovy, and Anders Søgaard. 2015. Challenges of studying and processing dialects in social media. In Proceedings of the Workshop on Noisy User-generated Text, pages 9–18, Beijing, China. 3498 Dan Klein and Christopher D. Manning. 2003. Accurate unlexicalized parsing. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics (Volume 1), ACL 2003, pages 423–430, Sapporo, Japan. Veronica Lynn, Youngseo Son, Vivek Kulkarni, Niranjan Balasubramanian, and H. Andrew Schwartz. 2017. Human centered NLP with user-factor adaptation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, pages 1157–1166, Copenhagen, Denmark. Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational linguistics, 19(2):313–330. Karo Moilanen and Stephen Pulman. 2007. Sentiment composition. In Proceedings of the Conference on Recent Advances in Natural Language Processing, volume 7 of RANLP 2007, pages 378–382, Borovets, Bulgaria. Britta Mondorf. 2002. Gender differences in English syntax. Journal of English Linguistics, 30(2):158– 180. Joakim Nivre, Johan Hall, and Jens Nilsson. 2004. Memory-based dependency parsing. In Proceedings of the Eighth Conference on Computational Natural Language Learning at HLT-NAACL 2004, CoNLL 2004, Boston, Massachusetts, USA. Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajic, Christopher D. Manning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, Reut Tsarfaty, and Daniel Zeman. 2016. Universal dependencies v1: A multilingual treebank collection. In Proceedings of the Tenth International Conference on Language Resources and Evaluation, LREC 2016, pages 1659–1666, Portoro, Slovenia. Joakim Nivre and Mario Scholz. 2004. Deterministic dependency parsing of English text. In Proceedings of the 20th International Conference on Computational Linguistics, COLING 2004, Geneva, Switzerland. Vinodkumar Prabhakaran and Owen Rambow. 2017. Dialog structure through the lens of gender, gender environment, and power. arXiv preprint arXiv:1706.03441. Adwait Ratnaparkhi. 1996. A maximum entropy model for part-of-speech tagging. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, volume 1 of EMNLP 1996, pages 133–142, Philadelphia, PA. Radim ˇReh˚uˇrek and Petr Sojka. 2010. Software Framework for Topic Modelling with Large Corpora. 
In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pages 45–50, Valletta, Malta. ELRA. http://is.muni.cz/ publication/884893/en. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, EMNLP 2013, pages 1631–1642, Seattle, Washington, USA. Cong Tang, Keith Ross, Nitesh Saxena, and Ruichuan Chen. 2011. Whats in a name: a study of names, gender inference, and gender behavior in facebook. 16th International Conference on Database Systems for Advanced Applications, pages 344–356. Kristina Toutanova, Dan Klein, Christopher D. Manning, and Yoram Singer. 2003. Feature-rich part-ofspeech tagging with a cyclic dependency network. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language TechnologyVolume 1, NAACL-HLT 2003, pages 173–180, Edmonton, Canada. Kristina Toutanova and Christopher D. Manning. 2000. Enriching the knowledge sources used in a maximum entropy part-of-speech tagger. In Proceedings of the 2000 Joint SIGDAT conference on Empirical Methods in Natural Language Processing and very large corpora: held in conjunction with the 38th Annual Meeting of the Association for Computational Linguistics-Volume 13, SIGDAT-EMNLP 2000, pages 63–70, Hong Kong, China. Gabriella Vigliocco and Julie Franck. 1999. When sex and syntax go hand in hand: Gender agreement in language production. Journal of Memory and Language, 40(4):455–478. Svitlana Volkova, Theresa Wilson, and David Yarowsky. 2013. Exploring demographic language variations to improve multilingual sentiment analysis in social media. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, number October in EMNLP 2013, pages 1815–1827, Seattle, WA, USA. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2017. Men also like shopping: Reducing gender bias amplification using corpus-level constraints. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, pages 2979– 2989, Copenhagen, Denmark.
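For completeness, the topic-overlap check described in Section 4 (footnote 6) can be sketched as follows. The tiny document lists are stand-ins only so that the snippet runs; gensim parameter names assume version 4.x, and the preprocessing is deliberately simplified.

```python
from gensim import corpora, models
from scipy.stats import pearsonr

def topic_distribution(lda, dictionary, docs, num_topics=10):
    """Average topic distribution over a list of tokenized documents."""
    totals = [0.0] * num_topics
    for doc in docs:
        bow = dictionary.doc2bow(doc)
        for topic_id, prob in lda.get_document_topics(bow, minimum_probability=0.0):
            totals[topic_id] += prob
    return [t / len(docs) for t in totals]

# male_docs / female_docs: tokenized WSJ articles; toy stand-ins shown here
male_docs = [["stocks", "rose", "sharply"], ["earnings", "fell"]]
female_docs = [["stocks", "fell", "sharply"], ["earnings", "rose"]]

dictionary = corpora.Dictionary(male_docs + female_docs)
corpus = [dictionary.doc2bow(d) for d in male_docs + female_docs]
lda = models.LdaModel(corpus, num_topics=10, id2word=dictionary, random_state=0)

male_topics = topic_distribution(lda, dictionary, male_docs)
female_topics = topic_distribution(lda, dictionary, female_docs)
print(pearsonr(male_topics, female_topics))   # the paper reports r = 0.85 on WSJ
```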
2019
339
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 346–359 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 346 Automatic Domain Adaptation Outperforms Manual Domain Adaptation for Predicting Financial Outcomes Marina Sedinkina1 Nikolas Breitkopf2 Hinrich Sch¨utze1 1Center for Information & Language Processing, LMU Munich 2Institute for Finance & Banking, LMU Munich [email protected] Abstract In this paper, we automatically create sentiment dictionaries for predicting financial outcomes. We compare three approaches: (i) manual adaptation of the domain-general dictionary H4N, (ii) automatic adaptation of H4N and (iii) a combination consisting of first manual, then automatic adaptation. In our experiments, we demonstrate that the automatically adapted sentiment dictionary outperforms the previous state of the art in predicting the financial outcomes excess return and volatility. In particular, automatic adaptation performs better than manual adaptation. In our analysis, we find that annotation based on an expert’s a priori belief about a word’s meaning can be incorrect – annotation should be performed based on the word’s contexts in the target domain instead. 1 Introduction Since 1934, the U.S. Securities and Exchange Commission (SEC) mandates that public companies disclose information in form of public filings to ensure that adequate information is available to investors. One such filing is the 10-K, the company’s annual report. It contains financial statements and information about business strategy, risk factors and legal issues. For this reason, 10-Ks are an important source of information in the field of finance and accounting. A common method employed by finance and accounting researchers is to evaluate the “tone” of a text based on the Harvard Psychosociological Dictionary, specifically, on the Harvard-IV-4 TagNeg (H4N) word list.1 However, as its name suggests, this dictionary is from a domain that is different from finance, so many words (e.g., “liability”, “tax”) that are labeled as negative in H4N are in fact not negative in finance. 1http://www.wjh.harvard.edu/˜inquirer In a pioneering study, Loughran and Mcdonald (2011) manually reclassified the words in H4N for the financial domain. They applied the resulting dictionaries2 to 10-Ks and predicted financial variables such as excess return and volatility. We will refer to the sentiment dictionaries created by Loughran and Mcdonald (2011) as L&M. In this work, we also create sentiment dictionaries for the finance domain, but we adapt them from the domain-general H4N dictionary automatically. We first learn word embeddings from a corpus of 10-Ks and then reclassify them – using SVMs trained on H4N labels – as negative vs. non-negative. We refer to the resulting domainadapted dictionary as H4NRE. In our experiments, we demonstrate that the automatically adapted financial sentiment dictionary H4NRE performs better at predicting excess return and volatility than dictionaries of Loughran and Mcdonald (2011) and Theil et al. (2018). We make the following contributions. (i) We demonstrate that automatic domain adaptation performs better at predicting financial outcomes than previous work based on manual domain adaptation. (ii) We perform an analysis of the differences between the classifications of L&M and those of our sentiment dictionary H4NRE that sheds light on the superior performance of H4NRE. 
For example, H4NRE is much smaller than L&M, consisting mostly of frequent words, suggesting H4NRE is more robust and less prone to overfitting. (iii) In a further detailed analysis, we investigate words classified by L&M as negative, litigious and uncertain that our embedding classifier classifies otherwise; and common (i.e., non-negative) words from H4N that L&M did not include in the categories negative, litigious and uncertain, but that our embedding classifier classifies as belonging to these classes. Our analysis suggests that manual 2https://sraf.nd.edu/textual-analysis/ resources 347 adaptation of dictionaries is error-prone if annotators are not given access to corpus contexts. Our paper primarily addresses a finance application. In empirical finance, a correct sentiment classification decision is not sufficient – the decision must also be interpretable and statistically sound. That is why we use ordinary least squares (OLS) – an established method in empirical finance – and sentiment dictionaries. Models based on sentiment dictionaries are transparent and interpretable: by looking at the dictionary words occurring in a document we can trace the classification decision back to the original data and, e.g., understand the cause of a classification error. OLS is a well-understood statistical method that allows the analysis of significance, effect size and dependence between predictor variables, inter alia. While we focus on finance here, three important lessons of our work also apply to many other domains. (1) An increasing number of applications require interpretable analysis; e.g., the European Union mandates that systems used for sensitive applications provide explanations of decisions. Decisions based on a solid statistical foundation are more likely to be trusted than those by black boxes. (2) Many NLP applications are domain-specific and require domain-specific resources including lexicons. Should such lexicons be built manually from scratch or adapted from generic lexicons? We provide evidence that automatic adaptation works better. (3) Words often have specific meanings in a domain and this increases the risk that a word is misjudged if only the generic meaning is present to the annotator. This seems to be the primary reason for the problems of manual lexicons in our experiments. Thus, if manual lexicon creation is the only option, then it is important to present words in context, not in isolation, so that the domain-specific sense can be recognized. 2 Related Work In empirical finance, researchers have exploited various text resources, e.g., news (Kazemian et al., 2016), microblogs (Cortis et al., 2017), twitter (Zamani and Schwartz, 2017) and company disclosures (Nopp and Hanbury, 2015; Kogan et al., 2009). Deep learning has been used for learning document representations (Ding et al., 2015; Akhtar et al., 2017). However, the methodology of empirical finance requires interpretable results. Thus, a common approach is to define features for statistical models like Ordinary Least Squares (Lee et al., 2014; Rekabsaz et al., 2017). Frequently, lexicons like H4N TagNeg3 (Tetlock et al., 2007) are used. It includes a total of 85,221 words, 4188 of which are labeled negative. The remaining words are labeled “common”, i.e., non-negative. Loughran and Mcdonald (2011) argue that many words from H4N have a specialized meaning when appearing in an annual report. 
For instance, domain-general negative words such as “tax”, “cost”, “liability” and “depreciation” – which predominate in 10-Ks – do not typically have negative sentiment in 10-Ks. So Loughran and Mcdonald (2011) constructed subjective financial dictionaries manually, by examining all words that appear in at least 5% of 10-Ks and classifying them based on their assessment of most likely usage. More recently, other finance-specific lexicons were created (Wang et al., 2013). Building on L&M, Tsai and Wang (2014) and Theil et al. (2018) show that the L&M dictionaries can be further improved by adding most similar neighbors to words manually labeled by L&M. Seed-based methods generalize a set of seeds based on corpus (e.g., distributional) evidence. Models use syntactic patterns (Hatzivassiloglou and McKeown, 1997; Widdows and Dorow, 2002), cooccurrence (Turney, 2002; Igo and Riloff, 2009) or label propagation on lexical graphs derived from cooccurrence (Velikovich et al., 2010; Huang et al., 2014). Supervised methods start with a larger training set, not just a few seeds (Mohammad et al., 2013). Distributed word representations (Tang et al., 2014; Amir et al., 2015; Vo and Zhang, 2016; Rothe et al., 2016) are beneficial in this approach. For instance, Tang et al. (2014) incorporate in word embeddings a document-level sentiment signal. Wang and Xia (2017) also integrate document and word levels. Hamilton et al. (2016) learn domain-specific word embeddings and derive word lists specific to domains, including the finance domain. Dictionary-based approaches (Takamura et al., 2005; Baccianella et al., 2010; Vicente et al., 2014) use hand-curated lexical resources – often WordNet (Fellbaum, 1998) – for constructing lexicons. Hamilton et al. (2016) argue that dictionary-based approaches generate better re3http://www.wjh.harvard.edu/˜inquirer/ 348 sults due to the quality of hand-curated resources. We compare two ways of using a hand-curated resource in this work – a general-domain resource that is automatically adapted to the specific domain vs. a resource that is manually created for the specific domain – and show that automatic domain adaptation performs better. Apart from domain adaptation work on dictionaries, many other approaches to generic domain adaptation have been proposed. Most of this work adopts the classical domain adaptation scenario: there is a large labeled training set available in the source domain and an amount of labeled target data that is insufficient for training a high-performing model on its own (Blitzer et al., 2006; Chelba and Acero, 2006; Daum´e III, 2009; Pan et al., 2010; Glorot et al., 2011; Chen et al., 2012). More recently, the idea of domainadversarial training was introduced for the same scenario (Ganin et al., 2016). In contrast to this work, we do not transfer any parameters or model structures from source to target. Instead, we use labels from the source domain and train new models from scratch based on these labels: first embedding vectors, then a classifier that is trained on source domain labels and finally a regression model that is trained on the classification decisions of the classifier. This approach is feasible in our problem setting because the divergence between source and target sentiment labels is relatively minor, so that training target embeddings with source labels gives good results. The motivation for this different setup is that our work primarily addresses a finance application where explainability is of high importance. 
For this reason, we use a model based on sentiment dictionaries that allows us to provide explanations of the model’s decisions and predictions. 3 Methodology 3.1 Empirical finance methodology In this paper, we adopt Ordinary Least Squares (OLS), a common research method in empirical finance: a dependent variable of interest (e.g., excess return, volatility) is predicted based on a linear combination of a set of explanatory variables. The main focus of this paper is to investigate text-based explanatory variables: we would like to know to what extent a text variable such as occurrence of negative words in a 10-K can predict a financial variable like volatility. Identifying the economic drivers of such a financial outcome is of central interest in the field of finance. Some of these determinants may be correlated with sentiment. To understand the role of sentiment in explaining financial variables we therefore need to isolate the complementary information of our text variables. This is achieved by including in our regressions – as control variables – a standard set of financial explanatory variables such as firm size and book-to-market ratio. These control variables are added as additional explanatory variables in the regression specification besides the textual sentiment variables. This experimental setup allows us to assess the added benefit of text-based variables in a realistic empirical finance scenario. The approach is motivated by previous studies in the finance literature (e.g., Loughran and Mcdonald (2011)), which show that characteristics of financial firms can explain variation in excess returns and volatility. By including these control variables in the regression we are able to determine whether sentiment factors have incremental explanatory power beyond the already established financial factors. Since the inclusion of these control variables is not primarily driven by the assumption that firms with different characteristics use different language, our approach differs from other NLP studies, such as Hovy (2015), who accounts for non-textual characteristics by training group-specific embeddings. Each text variable we use is based on a dictionary. Its value for a 10-K is the proportion of tokens in the 10-K that are members of the dictionary. For example, if the 10-K is 5000 tokens long and 50 of those tokens are contained in the L&M uncertainty dictionary, then the value of the L&M uncertainty text variable for this 10-K is 0.01. In the type of analysis of stock market data we conduct, there are two general forms of dependence in the residuals of a regression, which arise from the panel structure of our data set where a single firm is repeatedly observed over time and multiple firms are observed at the same point in time. Firm effect: Time-series dependence assumes that the residuals of a given firm are correlated across years. Time effect: Cross-sectional dependence assumes that the residuals of a given year are correlated across different firms. These properties violate the i.i.d. assumption of residuals in standard OLS. We therefore model data with both firm and time effects and run a two349 way robust cluster regression, i.e., an OLS regression with standard errors that are clustered on two dimensions (Gelbach et al., 2009), the dimensions of firm and time.4 We apply this regressionbased methodology to test the explanatory power of financial dictionaries with regard to two dependent variables: excess return and volatility. 
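A compact way to express this setup in Python is sketched below: the text variable is the proportion of dictionary tokens in a 10-K (so 50 hits in a 5,000-token filing gives 0.01), and the two-way clustered regression can be estimated, for example, with the linearmodels package (PanelOLS with clustering by firm and time). Column and variable names are illustrative assumptions, not taken from the paper's code.

```python
import pandas as pd
from linearmodels.panel import PanelOLS

def text_variable(tokens, dictionary):
    """Proportion of tokens in a 10-K that belong to a sentiment dictionary."""
    hits = sum(1 for tok in tokens if tok in dictionary)
    return hits / len(tokens)
# e.g. 50 dictionary hits in a 5,000-token filing -> 0.01

def two_way_cluster_ols(df, dep, text_col, controls):
    """OLS with standard errors clustered on both the firm and the time
    dimension; df is expected to carry a (firm, year) MultiIndex."""
    exog = df[[text_col] + controls].assign(const=1.0)
    model = PanelOLS(df[dep], exog)
    return model.fit(cov_type="clustered",
                     cluster_entity=True,    # firm effect
                     cluster_time=True)      # time effect

# df: one row per 10-K, indexed by (firm, year), with columns such as
# 'excess_return', 'neg_lm' (the text variable) and the financial controls
# res = two_way_cluster_ols(df, "excess_return", "neg_lm",
#                           ["size", "alpha", "btm", "turnover", "surprise"])
# print(res)
```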
This approach allows us to compare the explanatory power of different sentiment dictionaries and in the process test the hypothesis that negative sentiment is associated with subsequently lower stock returns and higher volatility. We now introduce the regression specifications for these tests. 3.1.1 Excess return The dependent variable excess return is defined as the firm’s buy-and-hold stock return minus the value-weighted buy-and-hold market index return during the 4-day event window starting on the 10-K filing date, computed from prices by the Center for Research in Security Prices (CRSP)5 (both expressed as a percentage). In addition to the independent text variables (see §4 for details), we include the following financial control variables. (i) Firm size: the log of the book value of total assets. (ii) Alpha of a Fama-French regression (Fama and French, 1993) calculated from days [-252 -6];6 this represents the “abnormal” return of the asset, i.e., the part of the return not due to common risk factors like market and firm size. (iii) Book-to-market ratio: the log of the book value of equity divided by the market value of equity. (iv) Share turnover: the volume of shares traded in days [-252 -6] divided by shares outstanding on the filing date. (v) Earnings surprise, computed by IBES from Thomson Reuters;7 this variable captures whether the reported financial performance was better or worse than expected by financial analysts.8 4Loughran and Mcdonald (2011) use the method of Fama and MacBeth (1973) instead. This method assumes that the yearly estimates of the coefficient are independent of each other. However, this is not true when there is a firm effect. 5http://www.crsp.com 6[-252 -6] is the notation for the 252 days prior to the filing date with the last 5 days prior to the filing date excluded. 7http://www.thomsonreuters.com 8Our setup largely mirrors, but is not identical to the one used by Loughran and Mcdonald (2011) because not all data they used are publicly available and because we use a larger time window (1994-2013) compared to theirs (1994-2008). dictionary size neglm 2355 unclm 297 litlm 903 negADD 2340 uncADD 240 litADD 984 negRE 1205 uncRE 96 litRE 208 H4NORG 4188 H4NRE 338 Table 1: Number of words per dictionary 3.1.2 Volatility The dependent variable volatility is defined as the post-filing root-mean-square error (RMSE) of a Fama-French regression calculated from days [6 252]. The RMSE captures the idiosyncratic component of the total volatility of the firm, since it picks up the stock price variation that cannot be explained by fluctuations of the common risk factors of the Fama-French model. The RMSE is therefore a measure of the financial uncertainty of the firm. In addition to the independent text variables (see §4 for details), we include the following financial control variables. (i) Pre-filing RMSE and (ii) pre-filing alpha of a Fama-French regression calculated from days [-252 -6]; these characterize the financial uncertainty and abnormal return of the firm in the past (see §3.1.1 for alpha and first sentence of this section for RMSE). (iii) Filing abnormal return; the value of the buy-and-hold return in trading days [0 3] minus the buy-andhold return of the market index. (iv) Firm size and (v) book-to-market ratio (the same as in §3.1.1). (vi) Calendar year dummies and Fama-French 48industry dummies to allow for time and industry fixed effects.9 3.2 NLP methodology There are two main questions we want to answer: Q1. 
Is a manually domain-adapted or an automatically domain-adapted dictionary a more effective predictor of financial outcomes? Q2. L&M adapted H4N for the financial domain and showed that this manually adapted dictionary is more effective than H4N for prediction. Can we further improve L&M’s manual adaptation 9We do not include in the regression a Nasdaq dummy variable indicating whether the firm is traded on Nasdaq. Since Nasdaq mainly lists tech companies, the Nasdaq effect is already captured by industry dummies. 350 by automatic domain adaptation? The general methodology we employ for domain adaptation is based on word embeddings. We train CBOW word2vec (Mikolov et al., 2013) word embeddings on a corpus of 10-Ks for all words of H4N that occur in the corpus – see §4 for details. We consider two adaptations: ADD and RE. ADD is only used to answer question Q2. ADD. For adapting the L&M dictionary, we train an SVM on an L&M dictionary in which words are labeled +1 if they are marked for the category by L&M and labeled -1 otherwise (where the category is negative, uncertain or litigious). Each word is represented as its embedding. We then run the SVM on all H4N words that are not contained in the L&M dictionary. We also ignore H4N words that we do not have embeddings for because their frequency is below the word2vec frequency threshold. Thus, we obtain an ADD dictionary which is not a superset of the L&M lexicon because it includes only new additional words that are not part of the original dictionary. SVM scores are converted into probabilities via logistic regression. We define a confidence threshold θ – we only want to include words in the ADD dictionary that are reliable indicators of the category of interest. A word is added to the dictionary if its converted SVM score is greater than θ. RE. We train SVMs as for ADD, but this time in a five-fold cross validation setup. Again, SVM scores are converted into probabilities via logistic regression. A word w becomes a member of the adapted dictionary if its converted SVM score of the SVM that was not trained on the fold that contains w is greater than θ. To answer our first question Q1: “Is automatic or manual adaptation better?”, we apply adaptation method RE to H4N and compare the results to the L&M dictionaries. To answer our second question Q2: “Can manual adaptation be further improved by automatic adaptation?”, we apply adaptation methods RE and ADD to the three dictionaries compiled by L&M and compare results for original and adapted L&M dictionaries: (i) negative (abbreviated as “neg”), (ii) uncertain (abbreviated as “unc”), (iii) litigious (abbreviated as “lit”). Our goals here are to improve the in-domain L&M dictionaries by relabeling them using adaptation method RE and to find new additional words using adaptation method ADD. Table 1 gives dictionary sizes. 4 Experiments and results We downloaded 206,790 10-Ks for years 1994 to 2013 from the SEC’s database EDGAR.10 Table of contents, page numbers, links and numeric tables are removed in preprocessing and only the main body of the text is retained. Documents are split into sections. Sections that are not useful for textual analysis (e.g., boilerplate) are deleted. 
To construct the final sample, we apply the filters defined by L&M (Loughran and Mcdonald, 2011): we require a match with CRSP’s permanent identifier PERMNO, the stock to be common equity, a stock pre-filing price of greater than $3, a positive book-to-market, as well as CRSP’s market capitalization and stock return data available at least 60 trading days before and after the filing date. We only keep firms traded on Nasdaq, NYSE or AMEX and whose filings contain at least 2000 words. This procedure results in a corpus of 60,432 10-Ks. We tokenize (using NLTK) and lowercase this corpus and remove punctuation. We use word2vec CBOW with hierarchical softmax to learn word embeddings from the corpus. We set the size of word vectors to 400 and run one training iteration; otherwise we use word2vec’s default hyperparameters. SVMs are trained on word embeddings as described in §3.2. We set the threshold θ to 0.8, so only words with converted SVM scores greater than 0.8 will be added to dictionaries.11 As described in §3, we compare manually adapted and automatically adapted dictionaries (Q1) and investigate whether automatic adaptation of manually adapted dictionaries further improves performance (Q2). Our experimental setup is Ordinary Least Squares (OLS), more specifically, a two-way robust cluster regression for the time and firm effects. The dependent financial variable is excess return or volatility. We include several independent financial variables in the regression as well as one or more text variables. The value of the text variable for a category is the proportion of tokens from the category that occur in a 10-K. To assess the utility of a text variable for predicting a financial outcome, we look at significance and the standardized regression coefficient 10https://www.sec.gov/edgar.shtml 11We choose this threshold because the proportion of negative, litigious and uncertain words in 10-Ks for 0.8 is roughly the same as when using L&M dictionaries. 351 var coeff std coeff t R2 neglm -0.202** -0.080 -2.56 1.02 litlm -0.0291 -0.026 -0.83 1.00 unclm -0.215* -0.064 -1.91 1.01 H4NRE -0.764*** -0.229 -3.04 1.05 *p ≤0.05, **p ≤0.01, ***p ≤0.001 Table 2: Excess return regression results for L&M dictionaries and reclassified H4N dictionary. For all tables in this paper, significant t values are bolded and best standard coefficients per category are in italics. var coeff std coeff t R2 H4NRE -0.88** -0.264 -2.19 1.05 neglm 0.062 0.024 0.48 H4NRE -0.757*** -0.227 -2.90 1.05 litlm -0.351 -0.315 -0.013 H4NRE -0.746*** -0.223 -2.89 1.05 unclm -0.45 -0.135 -0.45 *p ≤0.05, **p ≤0.01, ***p ≤0.001 Table 3: Excess return regression results for multiple text variables. This table shows results for three regressions that combine H4NRE with each of the three L&M dictionaries. (the product of regression coefficient and standard deviation). If a result is significant, then it is unlikely that the result is due to chance. The standardized coefficient measures the effect size, normalized for different value ranges of variables. It can be interpreted as the expected change in the dependent variable if the independent variable increases by one standard deviation. The standardized coefficient allows a fair comparison between a text variable that, on average, has high values (many tokens per document) with one that, on average, has low values (few tokens per document). 
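To make the adaptation step concrete, the sketch below shows one way the RE relabeling could be realized with gensim and scikit-learn: CBOW embeddings are trained on the 10-K corpus, an SVM with Platt-scaled (logistic) probabilities is trained on the H4N labels under five-fold cross-validation, and a word is kept only if its out-of-fold probability exceeds θ = 0.8. Parameter names assume gensim 4.x and current scikit-learn; this is an illustrative reconstruction, not the authors' code.

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.svm import LinearSVC
from sklearn.calibration import CalibratedClassifierCV
from sklearn.model_selection import cross_val_predict

def train_embeddings(tokenized_10ks):
    """CBOW word2vec with hierarchical softmax, 400 dimensions, 1 epoch."""
    return Word2Vec(tokenized_10ks, vector_size=400, sg=0, hs=1,
                    negative=0, epochs=1)

def relabel_RE(words, labels, wv, theta=0.8):
    """Five-fold relabeling of dictionary words (the RE adaptation): keep a
    word only if its out-of-fold calibrated SVM probability exceeds theta."""
    X = np.stack([wv[w] for w in words])
    y = np.asarray(labels)                            # +1 / -1 H4N-style labels
    clf = CalibratedClassifierCV(LinearSVC(), method="sigmoid", cv=3)
    proba = cross_val_predict(clf, X, y, cv=5, method="predict_proba")
    # classes are sorted as [-1, +1], so column 1 is the positive class
    return [w for w, p in zip(words, proba[:, 1]) if p > theta]

# usage (schematic):
# model = train_embeddings(tokenized_10ks)
# h4n_re = relabel_RE(h4n_words, h4n_labels, model.wv, theta=0.8)
```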
4.1 Excess Return Table 2 gives regression results for excess return, comparing H4NRE (our automatic adaptation of the general Harvard dictionary) with the three manually adapted L&M dictionaries. As expected the coefficients are negatively signed – 10-Ks containing a high percentage of pessimistic words are associated with negative excess returns. L&M designed the dictionary neglm specifically for measuring negative information in a 10-K that may have a negative effect on outcomes like excess return. So it is not surprising that neglm is the best performing dictionary of the three L&M dictionaries: it has the highest standard coefficient (-0.080) and the highest significance (-2.56). unclm performs slightly worse, but is also significant. var coeff std coeff t R2 neglm -0.202** -0.080 -2.56 1.02 negspec 0.0102 0.0132 0.27 1.00 negRE -0.37*** -0.111 -2.96 1.03 negADD -0.033 -0.0231 -1.03 1.00 negRE+ADD -0.08** -0.072 -2.19 1.03 litlm -0.0291 -0.026 -0.83 1.00 litRE -0.056 -0.028 -0.55 1.00 litADD -0.0195 -0.0156 -0.70 1.00 litRE+ADD -0.0163 -0.0211 -0.69 1.00 unclm -0.215* -0.064 -1.91 1.01 uncRE -0.377*** -0.075 -2.77 1.02 uncADD 0.0217 0.0065 0.21 1.00 uncRE+ADD -0.0315 -0.0157 -0.45 1.00 *p ≤0.05, **p ≤0.01, ***p ≤0.001 Table 4: Excess return regression results for L&M, RE and ADD dictionaries However, when comparing the three L&M dictionaries with H4NRE, the automatically adapted Harvard dictionary, we see that H4NRE performs clearly better: it is highly significant and its standard coefficient is larger by a factor of more than 2 compared to neglm. This evidence suggests that the automatically created H4NRE dictionary has a higher explanatory power for excess returns than the manually created L&M dictionaries. This provides an initial answer to question Q1: in this case, automatic adaptation beats manual adaptation. Table 3 shows manual plus automatic experiments with multiple text variables in one regression, in particular, the combination of H4NRE with each of the L&M dictionaries. We see that the explanatory power of L&M variables is lost after we additionally include H4NRE in a regression: all three L&M variables are not significant. In contrast, H4NRE continues to be significant in all experiments, with large standard coefficients. More manual plus automatic experiments can be found in the appendix. These experiments further confirm that automatic is better than manual adaptation. Table 4 shows results for automatically adapting the L&M dictionaries.12 The subscript “RE+ADD” refers to a dictionary that merges RE and ADD; e.g., negRE+ADD is the union of negRE and negADD. We see that for each category (neg, lit and unc), the automatically adapted dictionary performs better than the original manually adapted dictionary; e.g., the standard coefficient of negRE is -0.111, 12Experiments with multiple text variables in one regression (manual plus automatic experiments) are presented in the appendix. 
352 var coeff std coeff t R2 neglm 0.118*** 0.0472 3.30 60.1 litlm -0.0081 -0.0073 -0.62 60.0 unclm 0.119* 0.0356 2.25 60.0 H4NRE 0.577*** 0.173 4.40 60.3 *p ≤0.05, **p ≤0.01, ***p ≤0.001 Table 5: Volatility regression results for L&M dictionaries and reclassified H4N dictionary var coeff std coeff t R2 H4NRE 0.748*** 0.224 4.44 1.11 neglm -0.096* -0.038 -2.55 H4NRE 0.642*** 0.192 4.28 1.11 litlm -0.041* -0.037 -2.54 H4NRE 0.695*** 0.208 4.54 1.11 unclm -0.931** -0.279 -2.73 *p ≤0.05, **p ≤0.01, ***p ≤0.001 Table 6: Volatility regression results for multiple text variables clearly better than that of neglm (-0.080). Results are significant for negRE (-2.96) and uncRE (-2.77). We also evaluate negspec, the negative word list of Hamilton et al. (2016). negspec does not perform well: it is not significant. These results provide a partial answer to question Q2: for excess return, automatic adaptation of L&M’s manually adapted dictionaries further improves their performance. 4.2 Volatility Table 5 compares H4NRE and L&M regression results for volatility. Except for litigious, the coefficients are positive, so the greater the number of pessimistic words, the greater the volatility. Results for neglm, unclm and H4NRE are statistically significant. The best L&M dictionary is again neglm with standard coefficient 0.0472 and t = 3.30. However, H4NRE has the highest explanatory value for volatility. Its standard coefficient (0.173) is more than three times as large as that of neglm. The higher effect size demonstrates that H4NRE better explains volatility than the L&M dictionaries. Again, this indicates – answering question Q1 – that automatic outperforms manual adaptation. Table 6 confirms this. We see that for manual plus automatic experiments each combination of H4NRE with one of the L&M dictionaries provides significant results for H4NRE. In contrast, L&M dictionaries become negatively signed meaning that more uncertain words decrease volatility, sugvar coeff std coeff t R2 neglm 0.118*** 0.0472 3.30 60.1 negspec -0.038 -0.0494 -2.73 60.1 negRE 0.219*** 0.0657 3.57 60.1 negADD 0.032*** 0.0224 4.06 60.0 negRE+ADD 0.038*** 0.0342 4.32 60.1 litlm -0.0081 -0.0073 -0.62 60.0 litRE 0.0080 0.0040 0.20 60.0 litADD 0.028 0.0224 1.07 60.0 litRE+ADD 0.015 0.0195 0.81 60.0 unclm 0.119* 0.0356 2.25 60.0 uncspec -0.043 -0.0344 -1.56 60.0 uncRE 0.167* 0.0334 2.30 60.0 uncADD -0.013 -0.0039 -0.17 60.0 uncRE+ADD 0.035 0.0175 0.68 60.0 *p ≤0.05, **p ≤0.01, ***p ≤0.001 Table 7: Volatility regression results for L&M, RE and ADD dictionaries gesting that they are not indicative of the true relationship between volatility and negative tone in 10-Ks in this regression setup. Our results of additional manual plus automatic experiments support this observation as well. See the appendix for an illustration. Table 7 gives results for automatically adapting the L&M dictionaries.13 For neg, the standard coefficient of negRE is 0.0657, better by about 40% than neglm’s standard coefficient of 0.0472. negspec does not provide significant results and has the negative sign, i.e., an increase of negative words decreases volatility. The lit dictionaries are not significant (neither L&M nor adapted dictionaries). For unc, uncRE performs worse than unclm, but only slightly by 0.0344 vs. 0.0356 for the standard coefficients. The overall best result is negRE (standard coefficient 0.0657). 
Even though L&M designed the unclm dictionary specifically for volatility, our results indicate that neg dictionaries perform better than unc dictionaries, both for L&M dictionaries (neglm) and their automatic adaptations (e.g., negRE). Table 7 also evaluates uncspec, the uncertainty dictionary of Theil et al. (2018). uncspec does not perform well: it is not significant and the coefficient has the “wrong” sign.14 The main finding supported by Table 7 is that 13Experiments with multiple text variables in one regression (manual plus automatic experiments) are presented in the appendix. 14Theil et al. (2018) define volatility for the time period [6 28] whereas our definition is [6 252], based on (Loughran and Mcdonald, 2011). Larger time windows allow more reliable estimates and account for the fact that information disclosures can influence volatility for long periods (Belo et al., 2016). 353 ADDneg missing, diminishment, disabling, overuse ADDunc reevaluate, swings, expectation, estimate ADDlit lender, assignors, trustee, insurers REneg confusion, unlawful, convicted, breach REunc variability, fluctuation, variations, variation RElit courts, crossclaim, conciliation, abeyance H4NRE compromise, issues, problems, impair, hurt Table 8: Word classification examples from automatically adapted dictionaries the best automatic adaptation of an L&M dictionary gives rise to more explanatory power than the best L&M dictionary, i.e., negRE performs better than neglm. This again confirms our answer to Q2: we can further improve manual adaptation by automatic domain adaptation. 5 Analysis and discussion 5.1 Qualitative Analysis Our dictionaries outperform L&M. In this section, we perform a qualitative analysis to determine the reasons for this discrepancy in performance. Table 8 shows words from automatically adapted dictionaries. Recall that the ADD method adds words that L&M classified as nonrelevant for a category. So words like “missing” (neg), “reevaluate” (unc) and “assignors” (lit) were classified as relevant terms and seem to connote negativity, uncertainty and litigiousness, respectively, in financial contexts. In L&M’s classification scheme, a word can be part of several different categories. For instance, L&M label “unlawful”, “convicted” and “breach” both as litigious and as negative. When applying our RE method, these words were only classified as negative, not as litigious. Similarly, L&M label “confusion” as negative and uncertain, but automatic RE adaptation labels it only negative. This indicates that there is strong distributional evidence in the corpus for the category negativity, but weaker distributional evidence for litigious and uncertain. For our application, only “negative” litigious/uncertain words are of interest – “acquittal” (positive litigious) and “suspense” (positive uncertain) are examples of positive words that may not help in predicting financial variables. This could explain why the negative category fares better in our adaptation than the other two. An interesting case study for RE is “abeyance”. L&M classify it as uncertain, automatic adaptation as litigious. Even though “abeyance” has a domain-general uncertain sense (“something that is waiting to be acted upon”), it is mostly used in legal contexts in 10-Ks: “held in abeyance”, “appeal in abeyance”. The nearest neighbors of “abeyance” in embedding space are also litigious words: “stayed”, “hearings”, “mediation”. H4NRE contains 74 words that are “common” in H4N. Examples include “compromise”, “serious” and “god”. 
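Such neighbor lists are read directly off the 10-K embedding space; a minimal sketch, assuming the word2vec model of §4 has been saved as a gensim KeyedVectors file (the file name is hypothetical):

```python
"""Query nearest neighbors in the 10-K embedding space (illustrative)."""
from gensim.models import KeyedVectors

kv = KeyedVectors.load("10k_cbow_400d.kv")  # hypothetical path

for word in ("abeyance", "compromise", "god"):
    if word in kv.key_to_index:
        neighbors = [w for w, _ in kv.most_similar(word, topn=5)]
        print(word, "->", ", ".join(neighbors))
```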
The nearest neighbors of “compromise” in the 10-K embedding space are the negative terms “misappropriate”, “breaches”, “jeopardize”. In a general-domain embedding space,15 the nearest neighbors of “compromise” include “negotiated settlement”, “accord” and “modus vivendi”. This example suggests that “compromise” is used in 10-Ks in negative contexts and in the general domain in positive contexts. This also illustrates the importance of domain-specific word embeddings that capture domain-specific information. Another interesting example is the word “god”; it is frequently used in 10-Ks in the phrase “act of God”. Its nearest neighbors in the 10-K embedding space are “terrorism” and “war”. This example clearly demonstrates that annotators are likely to make mistakes when they annotate words for sentiment without seeing their contexts. Most annotators would annotate “god” as positive, but when presented with the typical context in 10-Ks (“act of God”), they would be able to correctly classify it. We conclude that manual annotation of words without context based on the prior belief an annotator has about word meanings is error-prone. Our automatic adaptation is performed based on the word’s contexts in the target domain and therefore not susceptible to this type of error. 5.2 Quantitative Analysis Table 9 presents a quantitative analysis of the distribution of words over dictionaries. For a row dictionary dr and a column dictionary dc, a cell gives |dr ∩dc|/|dr| as a percentage. (Diagonal entries are all equal to 100% and are omitted for space reasons.) For example, 49% of the words in neglm are also members of negRE (row “neglm”, column “negRE”). This analysis allows us to obtain insights into the relationship between different dictionaries and into the relationship between 15https://code.google.com/archive/p/ word2vec/ 354 neglm litlm unclm negADD litADD uncADD negRE litRE uncRE H4Nneg H4Ncmn H4NRE neglm 7 2 0 0 0 49 2 0 48 52 12 litlm 17 0 0 0 0 6 20 0 7 93 1 unclm 14 0 0 0 0 18 2 30 16 84 2 negADD 0 0 0 0 0 0 0 0 18 82 2 litADD 0 0 0 0 0 0 0 0 1 99 0 uncADD 0 0 0 0 0 0 0 0 3 97 0 negRE 95 5 4 0 0 0 0 1 52 48 21 litRE 18 86 2 0 0 0 0 0 7 93 0 uncRE 11 2 92 0 0 0 10 0 13 87 3 H4Nneg 27 2 1 10 0 0 15 0 0 0 6 H4Ncmn 2 1 0 2 1 0 1 0 0 0 0 H4NRE 79 2 2 17 0 0 74 0 1 78 22 Table 9: Quantitative analysis of dictionaries. For a row dictionary dr and a column dictionary dc, a cell gives |dr ∩dc|/|dr| as a percentage. Diagonal entries (all equal to 100%) omitted for space reasons. cmn = common the categories negative, litigious and uncertain. Looking at rows neglm, litlm and unclm first, we see how L&M constructed their dictionaries. neglm words come from H4Nneg and H4Ncmn in about equal proportions; i.e., many words that are “common” in ordinary usage were classified as negative by L&M for financial text. Relatively few litlm and unclm words are taken from H4Nneg, most are from H4Ncmn. Only 12% of neglm words were automatically classified as negative in domain adaptation and assigned to H4NRE. This is a surprisingly low number. Given that H4NRE performs better than neglm in our experiments, this statistic casts serious doubt on the ability of human annotators to correctly classify words for the type of sentiment analysis that is performed in empirical finance if the actual corpus contexts of the words are not considered. We see two types of failures in the human annotation. 
First, as discussed in §5.1, words like “god” are misclassified because the prevalent context in 10-Ks (“act of God”) is not obvious to the annotator. Second, the utility of a word is not only a function of its sentiment, but also of the strength of this sentiment. Many words in neglm that were deemed neutral in automatic adaptation are probably words that may be slightly negative, but that do not contribute to explaining financial variables like excess return. The strength of sentiment of a word is difficult to judge by human annotators. Looking at the row H4NRE, we see that most of its words are taken from neglm (79%) and a few from litlm and unclm (2% each). We can interpret this statistic as indicating that L&M had high recall (they found most of the reliable indicators), but low precision (see the previous paragraph: only 12% of their negative words survive in H4NRE). The distribution of H4NRE words over H4Nneg and H4Ncmn is 78:22. This confirms the need for domain adaptation: many general-domain common words are negative in the financial domain. We finally look at how dictionaries for negative, litigious and uncertain overlap, separately for the L&M, ADD and RE dictionaries. litlm and unclm have considerable overlap with neglm (17% and 14%), but they do not overlap with each other. The three ADD dictionaries – negADD, litADD and uncADD – do not overlap at all. As for RE, 10% of the words of uncRE are also in negRE, otherwise there is no overlap between RE dictionaries. Comparing the original L&M dictionaries and the automatically adapted ADD and RE dictionaries, we see that the three categories – negative, litigious and uncertain – are more clearly distinguished after adaptation. L&M dictionaries overlap more, ADD and RE dictionaries overlap less. 6 Conclusion In this paper, we automatically created sentiment dictionaries for predicting financial outcomes. In our experiments, we demonstrated that the automatically adapted sentiment dictionary H4NRE outperforms the previous state of the art in predicting the financial outcomes excess return and volatility. In particular, automatic adaptation performs better than manual adaptation. Our quantitative and qualitative study provided insight into the semantics of the dictionaries. We found that annotation based on an expert’s a priori belief about a word’s meaning can be incorrect – annotation should be performed based on the word’s contexts in the target domain instead. In the future, we plan to investigate whether there are changes over time that significantly impact the linguistic characteristics of the data, in the simplest case changes in the meaning of a word. Another interesting topic for future research is the comparison of domain adaptation based on our domain-specific word embeddings vs. based on word embeddings trained on much larger corpora. Acknowledgments We are grateful for the support of the European Research Council for this work (ERC #740516). 355 References Md Shad Akhtar, Abhishek Kumar, Deepanway Ghosal, Asif Ekbal, and Pushpak Bhattacharyya. 2017. A multilayer perceptron based ensemble technique for fine-grained financial sentiment analysis. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 540–546. Association for Computational Linguistics. Silvio Amir, Wang Ling, Ram´on Fern´andez Astudillo, Bruno Martins, M´ario J. Silva, and Isabel Trancoso. 2015. INESC-ID: A regression model for large scale twitter sentiment lexicon induction. In SemEval@NAACL-HLT, pages 613–618. 
The Association for Computer Linguistics. Stefano Baccianella, Andrea Esuli, and Fabrizio Sebastiani. 2010. Sentiwordnet 3.0: An enhanced lexical resource for sentiment analysis and opinion mining. In LREC. European Language Resources Association. Frederico Belo, Jun Li, Xiaoji Lin, and Xiaofei Zhao. 2016. Complexity and information content of financial disclosures: Evidence from evolution of uncertainty following 10-k filings. SSRN. John Blitzer, Ryan McDonald, and Fernando Pereira. 2006. Domain adaptation with structural correspondence learning. In Proceedings of the 2006 conference on empirical methods in natural language processing, pages 120–128. Association for Computational Linguistics. Ciprian Chelba and Alex Acero. 2006. Adaptation of maximum entropy capitalizer: Little data can help a lot. Computer Speech & Language, 20(4):382–399. Minmin Chen, Zhixiang Xu, Kilian Weinberger, and Fei Sha. 2012. Marginalized denoising autoencoders for domain adaptation. arXiv preprint arXiv:1206.4683. Keith Cortis, Andr´e Freitas, Tobias Daudert, Manuela Huerlimann, Manel Zarrouk, Siegfried Handschuh, and Brian Davis. 2017. Semeval-2017 task 5: Finegrained sentiment analysis on financial microblogs and news. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 519–535. Association for Computational Linguistics. Hal Daum´e III. 2009. Frustratingly easy domain adaptation. arXiv preprint arXiv:0907.1815. Xiao Ding, Yue Zhang, Ting Liu, and Junwen Duan. 2015. Deep learning for event-driven stock prediction. In Proceedings of the 24th International Conference on Artificial Intelligence, IJCAI’15, pages 2327–2333. AAAI Press. Eugene F. Fama and Kenneth R. French. 1993. Common risk factors in the returns on stocks and bonds. Journal of Financial Economics, 33(1):3 – 56. Eugene F Fama and James D MacBeth. 1973. Risk, return, and equilibrium: Empirical tests. Journal ter%20Dictionaryof political economy, 81(3):607– 636. Christiane Fellbaum, editor. 1998. WordNet: an electronic lexical database. MIT Press. Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, Franc¸ois Laviolette, Mario Marchand, and Victor Lempitsky. 2016. Domain-adversarial training of neural networks. The Journal of Machine Learning Research, 17(1):2096–2030. Jonah B Gelbach, Doug Miller, et al. 2009. Robust inference with multi-way clustering. Technical report, National Bureau of Economic Research. Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Domain adaptation for large-scale sentiment classification: A deep learning approach. In Proceedings of the 28th international conference on machine learning (ICML-11), pages 513–520. William L. Hamilton, Kevin Clark, Jure Leskovec, and Dan Jurafsky. 2016. Inducing domain-specific sentiment lexicons from unlabeled corpora. CoRR, abs/1606.02820. Vasileios Hatzivassiloglou and Kathleen R. McKeown. 1997. Predicting the semantic orientation of adjectives. In Proceedings of the Eighth Conference on European Chapter of the Association for Computational Linguistics, EACL ’97, pages 174–181, Stroudsburg, PA, USA. Association for Computational Linguistics. Dirk Hovy. 2015. Demographic factors improve classification performance. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), volume 1, pages 752–762. Sheng Huang, Zhendong Niu, and Chongyang Shi. 2014. 
Automatic construction of domain-specific sentiment lexicon based on constrained label propagation. Knowl.-Based Syst., 56:191–200. Sean P. Igo and Ellen Riloff. 2009. Corpus-based semantic lexicon induction with web-based corroboration. In Proceedings of the Workshop on Unsupervised and Minimally Supervised Learning of Lexical Semantics, UMSLLS ’09, pages 18–26, Stroudsburg, PA, USA. Association for Computational Linguistics. Siavash Kazemian, Shunan Zhao, and Gerald Penn. 2016. Evaluating sentiment analysis in the context of securities trading. In ACL (1). The Association for Computer Linguistics. Shimon Kogan, Dimitry Levin, Bryan R. Routledge, Jacob S. Sagi, and Noah A. Smith. 2009. Predicting risk from financial reports with regression. In 356 HLT-NAACL, pages 272–280. The Association for Computational Linguistics. Heeyoung Lee, Mihai Surdeanu, Bill MacCartney, and Dan Jurafsky. 2014. On the importance of text analysis for stock price prediction. In Proceedings of the 9th International Conference on Language Resources and Evaluation, LREC 2014, pages 1170– 1175. European Language Resources Association (ELRA). Tim Loughran and Bill Mcdonald. 2011. When is a liability not a liability? textual analysis, dictionaries, and 10-ks. The Journal of Finance, 66(1):35–65. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 3111–3119. Curran Associates, Inc. Saif Mohammad, Svetlana Kiritchenko, and Xiaodan Zhu. 2013. Nrc-canada: Building the state-of-theart in sentiment analysis of tweets. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013), pages 321–327. Association for Computational Linguistics. Clemens Nopp and Allan Hanbury. 2015. Detecting risks in the banking system by sentiment analysis. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 591–600. Association for Computational Linguistics. Sinno Jialin Pan, Xiaochuan Ni, Jian-Tao Sun, Qiang Yang, and Zheng Chen. 2010. Cross-domain sentiment classification via spectral feature alignment. In Proceedings of the 19th international conference on World wide web, pages 751–760. ACM. Navid Rekabsaz, Mihai Lupu, Artem Baklanov, Alexander D¨ur, Linda Andersson, and Allan Hanbury. 2017. Volatility prediction using financial disclosures sentiments with word embedding-based IR models. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pages 1712–1721. Sascha Rothe, Sebastian Ebert, and Hinrich Sch¨utze. 2016. Ultradense word embeddings by orthogonal transformation. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 767–777. Association for Computational Linguistics. Hiroya Takamura, Takashi Inui, and Manabu Okumura. 2005. Extracting semantic orientations of words using spin model. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, ACL ’05, pages 133–140, Stroudsburg, PA, USA. Association for Computational Linguistics. Duyu Tang, Furu Wei, Bing Qin, Ming Zhou, and Ting Liu. 2014. 
Building large-scale twitter-specific sentiment lexicon : A representation learning approach. In COLING. Paul C. Tetlock, Maytal Saar-tsechansky, and Sofus Macskassy. 2007. More than words: Quantifying language to measure firms ’ fundamentals. Christoph Kilian Theil, Sanja Stajner, and Heiner Stuckenschmidt. 2018. Word embeddings-based uncertainty detection in financial disclosures. In Proceedings of the First Workshop on Economics and Natural Language Processing, pages 32–37. Association for Computational Linguistics. Ming-Feng Tsai and Chuan-Ju Wang. 2014. Financial keyword expansion via continuous word vector representations. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1453–1458. Association for Computational Linguistics. Peter D. Turney. 2002. Thumbs up or thumbs down?: Semantic orientation applied to unsupervised classification of reviews. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL ’02, pages 417–424, Stroudsburg, PA, USA. Association for Computational Linguistics. Leonid Velikovich, Sasha Blair-Goldensohn, Kerry Hannan, and Ryan T. McDonald. 2010. The viability of web-derived polarity lexicons. In HLTNAACL. I˜naki San Vicente, Rodrigo Agerri, and German Rigau. 2014. Simple, robust and (almost) unsupervised generation of polarity lexicons for multiple languages. In EACL. Duy Tin Vo and Yue Zhang. 2016. Don’t count, predict! an automatic approach to learning sentiment lexicons for short text. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 219–224. Association for Computational Linguistics. Chuan-Ju Wang, Ming-Feng Tsai, Tse Liu, and ChinTing Chang. 2013. Financial sentiment analysis for risk prediction. In Proceedings of the Sixth International Joint Conference on Natural Language Processing, pages 802–808. Asian Federation of Natural Language Processing. Leyi Wang and Rui Xia. 2017. Sentiment lexicon construction with representation learning based on hierarchical sentiment supervision. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 502–510. Association for Computational Linguistics. 357 Dominic Widdows and Beate Dorow. 2002. A graph model for unsupervised lexical acquisition. In Proceedings of the 19th International Conference on Computational Linguistics - Volume 1, COLING ’02, pages 1–7, Stroudsburg, PA, USA. Association for Computational Linguistics. Mohammadzaman Zamani and H Andrew Schwartz. 2017. Using twitter language to predict the real estate market. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 28–33. 358 A Appendix A.1 Excess return regression results for multiple text variables var coeff std coeff t R2 H4NRE -0.88** -0.264 -2.19 1.05 neglm 0.062 0.024 0.48 H4NRE -0.739** -0.221 -2.23 1.05 alllm -0.008 -0.008 -0.21 H4NRE -0.836** -0.25 -2.15 1.05 neg unclm 0.027 0.016 0.28 H4NRE -0.755** -0.226 -2.56 1.05 neg litlm -0.003 -0.004 -0.12 *p ≤0.05, **p ≤0.01, ***p ≤0.001 Table 10: This table shows results for regressions that combine H4NRE with single-feature manual L&M lists. 
var coeff std coeff t R2 neglm -0.202** -0.080 -2.56 1.02 negRE -0.37*** -0.111 -2.96 1.02 negADD -0.033 -0.0231 -1.03 1.00 neglm -0.0607 -0.0242 -0.38 1.02 negRE -0.274 -0.0822 -1.11 negRE -0.416*** -0.124 -2.85 1.02 negADD 0.0298 0.0208 0.80 neglm -0.0421 -0.0168 -0.27 1.02 negRE -0.346 -0.1037 -1.35 negADD 0.0277 0.0193 0.76 *p ≤0.05, **p ≤0.01, ***p ≤0.001 Table 11: This table shows results for regressions that combine RE, ADD and L&M dictionaries for the negative category. var coeff std coeff t R2 unclm -0.215* -0.064 -1.91 1.01 uncRE -0.377*** -0.075 -2.77 1.02 uncADD 0.0217 0.0065 0.21 1.00 unclm 0.209 0.0626 0.45 1.01 uncRE -0.668 -0.133 -1.05 uncRE -0.643*** -0.128 -3.14 1.03 uncADD 0.198 0.0594 1.42 unclm -0.233 -0.0699 -0.42 1.03 uncRE -0.368 -0.0736 -0.54 uncADD 0.234 0.0702 1.42 *p ≤0.05, **p ≤0.01, ***p ≤0.001 Table 12: This table shows results for regressions that combine RE, ADD and L&M dictionaries for the uncertain category. var coeff std coeff t R2 litlm -0.0291 -0.026 -0.83 1.00 litRE -0.056 -0.028 -0.55 1.02 litADD -0.0195 -0.0156 -0.70 1.00 litlm -0.0759 -0.0683 -0.95 1.00 litRE 0.154 0.077 0.67 litRE -0.0261 -0.0130 -0.20 1.00 litADD -0.0136 -0.0108 -0.39 litlm -0.0753 -0.0677 -0.94 1.00 litRE 0.155 0.0775 0.66 litADD -0.00107 -0.0008 -0.03 *p ≤0.05, **p ≤0.01, ***p ≤0.001 Table 13: This table shows results for regressions that combine RE, ADD and L&M dictionaries for the litigious category. 359 A.2 Volatility regression results for multiple text variables var coeff std coeff t R2 H4NRE 0.748*** 0.224 4.44 60.3 neglm -0.096* -0.038 -2.55 H4NRE 0.741*** 0.222 4.30 60.3 alllm -0.0438** -0.0481 -2.95 H4NRE 0.696*** 0.208 4.88 60.3 neg unclm -0.054 -0.032 -1.86 H4NRE 0.693*** 0.207 4.24 60.3 neg litlm -0.034** -0.037 -2.70 *p ≤0.05, **p ≤0.01, ***p ≤0.001 Table 14: This table shows results for regressions that combine H4NRE with single-feature manual L&M lists. var coeff std coeff t R2 neglm 0.118*** 0.0472 3.30 60.1 negRE 0.219*** 0.0657 3.57 60.1 negADD 0.032*** 0.0224 4.06 60.0 neglm 0.0014 0.0005 0.02 60.1 negRE 0.217* 0.065 1.96 negRE 0.233** 0.0699 2.96 60.1 negADD -0.0087 -0.006 -0.65 neglm 0.00069 0.0002 0.01 60.1 negRE 0.232* 0.0696 1.97 negADD -0.0087 -0.006 -0.66 *p ≤0.05, **p ≤0.01, ***p ≤0.001 Table 15: This table shows results for regressions that combine RE, ADD and L&M dictionaries for the negative category. var coeff std coeff t R2 unclm 0.119* 0.0356 2.25 60.0 uncRE 0.167* 0.0334 2.30 60.0 uncADD -0.013 -0.0039 -0.17 60.0 unclm 0.0432 0.012 0.28 60.0 uncRE 0.112 0.0224 0.53 uncRE 0.222*** 0.0444 3.48 60.1 uncADD -0.088 -0.0263 -1.09 unclm 0.151 0.0453 1.11 60.1 uncRE 0.0419 0.0083 0.20 uncADD -0.111 -0.0332 -1.41 *p ≤0.05, **p ≤0.01, ***p ≤0.001 Table 16: This table shows results for regressions that combine RE, ADD and L&M dictionaries for the uncertain category. var coeff std coeff t R2 litlm -0.0081 -0.0073 -0.62 60.0 litRE 0.0080 0.004 0.20 60.0 litADD 0.028 0.0224 1.07 60.0 litlm -0.0635** -0.057 -2.93 60.0 litRE 0.181* 0.0905 2.46 litRE -0.362 -0.181 -0.91 60.0 litADD 0.041 0.0328 1.50 litlm -0.087*** -0.078 -3.65 60.1 litRE 0.174* 0.087 2.42 litADD 0.066* 0.0528 2.23 *p ≤0.05, **p ≤0.01, ***p ≤0.001 Table 17: This table shows results for regressions that combine RE, ADD and L&M dictionaries for the litigious category.
2019
34
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3499–3505 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3499 Multilingual Constituency Parsing with Self-Attention and Pre-Training Nikita Kitaev Steven Cao Dan Klein Computer Science Division University of California, Berkeley {kitaev,stevencao,klein}@berkeley.edu Abstract We show that constituency parsing benefits from unsupervised pre-training across a variety of languages and a range of pre-training conditions. We first compare the benefits of no pre-training, fastText (Bojanowski et al., 2017; Mikolov et al., 2018), ELMo (Peters et al., 2018), and BERT (Devlin et al., 2018a) for English and find that BERT outperforms ELMo, in large part due to increased model capacity, whereas ELMo in turn outperforms the non-contextual fastText embeddings. We also find that pre-training is beneficial across all 11 languages tested; however, large model sizes (more than 100 million parameters) make it computationally expensive to train separate models for each language. To address this shortcoming, we show that joint multilingual pre-training and fine-tuning allows sharing all but a small number of parameters between ten languages in the final model. The 10x reduction in model size compared to fine-tuning one model per language causes only a 3.2% relative error increase in aggregate. We further explore the idea of joint fine-tuning and show that it gives low-resource languages a way to benefit from the larger datasets of other languages. Finally, we demonstrate new state-ofthe-art results for 11 languages, including English (95.8 F1) and Chinese (91.8 F1). 1 Introduction There has recently been rapid progress in developing contextual word representations that improve accuracy across a range of natural language tasks (Peters et al., 2018; Howard and Ruder, 2018; Radford et al., 2018; Devlin et al., 2018a). While we have shown in previous work (Kitaev and Klein, 2018) that such representations are beneficial for constituency parsing, our earlier results only consider the LSTM-based ELMo representations (Peters et al., 2018), and only for the English language. In this work, we study a broader range of pre-training conditions and experiment over a variety of languages, both jointly and individually. First, we consider the impact on parsing of using different methods for pre-training initial network layers on a large collection of un-annotated text. Here, we see that pre-training provides benefits for all languages evaluated, and that BERT (Devlin et al., 2018a) outperforms ELMo, which in turn outperforms fastText (Bojanowski et al., 2017; Mikolov et al., 2018), which performs slightly better than the non pre-trained baselines. Pre-training with a larger model capacity typically leads to higher parsing accuracies. Second, we consider various schemes for the parser fine-tuning that is required after pretraining. While BERT itself can be pre-trained jointly on many languages, successfully applying it, e.g. to parsing, requires task-specific adaptation via fine-tuning (Devlin et al., 2018a). Therefore, the obvious approach to parsing ten languages is to fine-tune ten times, producing ten variants of the parameter-heavy BERT layers. In this work, we compare this naive independent approach to a joint fine-tuning method where a single copy of fine-tuned BERT parameters is shared across all ten languages. 
Since only a small output-specific fragment of the network is unique to each task, the model is 10x smaller while losing an average of only 0.28 F1. Although, in general, jointly training multilingual parsers mostly provides a more compact model, it does in some cases improve accuracy as well. To investigate when joint training is helpful, we also perform paired fine-tuning on all pairs of languages and examine which pairs lead to the largest increase in accuracy. We find that larger treebanks function better as auxiliary tasks and that only smaller treebanks see a benefit from joint training. These results suggest that this manner of joint training can be used to provide support for many languages in a resource-efficient man3500 ner, but does not exhibit substantial cross-lingual generalization except when labeled data is limited. Our parser code and trained models for eleven languages are publicly available.1 2 Model Our parsing model is based on the architecture described in Kitaev and Klein (2018), which is state of the art for multiple languages, including English. A constituency tree T is represented as a set of labeled spans, T = {(it, jt, lt) : t = 1, . . . , |T|} where the tth span begins at position it, ends at position jt, and has label lt. The parser assigns a score s(T) to each tree, which decomposes as s(T) = X (i,j,l)∈T s(i, j, l) The per-span scores s(i, j, l) are produced by a neural network. This network accepts as input a sequence of vectors corresponding to words in a sentence and transforms these representations using one or more self-attention layers. For each span (i, j) in the sentence, a hidden vector vi,j is constructed by subtracting the representations associated with the start and end of the span. An MLP span classifier, consisting of two fullyconnected layers with one ReLU nonlinearity, assigns labeling scores s(i, j, ·) to the span. Finally, the the highest scoring valid tree ˆT = arg max T s(T) can be found efficiently using a variant of the CKY algorithm. For more details, see Kitaev and Klein (2018). We incorporate BERT by computing token representations from the last layer of a BERT model, applying a learned projection matrix, and then passing them as input to the parser. BERT associates vectors to sub-word units based on WordPiece tokenization (Wu et al., 2016), from which we extract word-aligned representations by only retaining the BERT vectors corresponding to the last sub-word unit for each word in the sentence. We briefly experimented with other alternatives, such as using only the first sub-word instead, but did not find that this choice had a substantial effect on English parsing accuracy. 1https://github.com/nikitakit/self-attentive-parser Method Pre-trained on Params F1 No pre-training – 26M 93.61a FastText English 626M 93.72 ELMo English 107M 95.21a BERTBASE (uncased) Chinese 110M 93.57 BERTBASE (cased) 104 languages 185M 94.97 BERTBASE (uncased) English 117M 95.32 BERTBASE (cased) English 116M 95.24 BERTLARGE (uncased) English 343M 95.66 BERTLARGE (cased) English 341M 95.70 Ensemble (final 4 models above) 916M 95.87 Table 1: Comparison of parsing accuracy on the WSJ development set when using different word representations. aKitaev and Klein (2018) The fact that additional layers are applied to the output of BERT – which itself uses a selfattentive architecture – may at first seem redundant, but there are important differences between these two portions of the architecture. 
The extra layers on top of BERT use word-based tokenization instead of sub-words, apply the factored version of self-attention proposed in Kitaev and Klein (2018), and are randomly-initialized instead of being pre-trained. We found that passing the (projected) BERT vectors directly to the MLP span classifier hurts parsing accuracies. We train our parser with a learning rate of 5 × 10−5 and batch size 32, where BERT parameters are fine-tuned as part of training. We use two additional self-attention layers following BERT. All other hyperparameters are unchanged from Kitaev and Klein (2018) and Devlin et al. (2018a). 3 Comparison of Pre-Training Methods In this section, we compare using BERT, ELMo, fastText, and training a parser from scratch on treebank data alone. Our comparison of the different methods for English is shown in Table 1. BERTBASE (∼115M parameters) performs comparably or slightly better than ELMo (∼107M parameters; 95.32 vs. 95.21 F1), while BERTLARGE (∼340M parameters) leads to better parsing accuracy (95.70 F1). Furthermore, both pre-trained contextual embeddings significantly outperform fastText, which performs slightly better than no pre-training (93.72 vs. 93.61 F1). These results show that both the LSTM-based architecture of ELMo and the self-attentive architecture of BERT are viable for parsing, and that pre-training benefits from having a high model capacity. We did not 3501 BERT (110M) token1 token2 ... tokenL Self-Attention (75M) MLPDE (<1M) MLPEN (<1M) ... MLPFR (<1M) □NP ⊠VP ... □PP ... Figure 1: The architecture of the multilingual model, with components labeled by the number of parameters. observe a sizable difference between an “uncased” version of BERT that converts all text to lowercase and a “cased” version of that retains case information. We also evaluate an ensemble of four English BERT-based parsers, where the models are combined by averaging their span label scores: sensemble(i, j, l) = 1 4 4 X n=1 sn(i, j, l) The resulting accuracy increase with respect to the best single model (95.87 F1 vs. 95.66 F1) reflects not only randomness during fine-tuning, but also variations between different versions of BERT. When combined with the observation that BERTLARGE outperforms BERTBASE, the ensemble results suggest that empirical gains from pretraining have not yet plateaued as a function of computational resources and model size. Next, we compare pre-training on monolingual data to pre-training on data that includes a variety of languages. We find that pre-training on only English outperforms multilingual pretraining given the same model capacity, but the decrease in accuracy is less than 0.3 F1 (95.24 vs. 94.97 F1). This is a promising result because it supports the idea of parameter sharing as a way to provide support for many languages in a resourceefficient manner, which we examine further in Section 4. To further examine the effects of pre-training on disparate languages, we consider the extreme case of training an English parser using a version of BERT that was pre-trained on the Chinese Wikipedia. Neither the pre-training data nor the subword vocabulary used are a good fit for the target task. However, English words (e.g. proper names) occur in the Chinese Wikipedia data with sufficient frequency that the model can losslessly represent English text: all English letters are included in its subword vocabulary, so in the worst case it will decompose an English word into its individual letters. 
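Whatever the sub-word vocabulary, the word-aligned extraction described in §2 keeps only the vector of the last sub-word unit of each word. A minimal sketch of this step, using the transformers library as a stand-in (the library and model name are illustrative, not the toolkit behind the results reported here):

```python
"""Word-aligned BERT vectors: keep the last sub-word unit per word."""
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModel.from_pretrained("bert-base-cased")

words = ["The", "stranded", "investors", "sued", "."]
enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**enc).last_hidden_state[0]  # (num_subwords, hidden_size)

# word_ids() maps each sub-word position to its word index (None for [CLS]/[SEP]).
last_subword = {}
for pos, wid in enumerate(enc.word_ids()):
    if wid is not None:
        last_subword[wid] = pos  # later sub-words overwrite earlier ones

word_vectors = torch.stack([hidden[last_subword[i]] for i in range(len(words))])
print(word_vectors.shape)  # torch.Size([5, 768])
```

These word vectors are then projected and passed to the parser's additional self-attention layers. Returning to the parser fine-tuned from Chinese BERT: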
We found that this model achieves performance comparable to our earlier parser (Kitaev and Klein, 2018) trained on treebank data alone (93.57 vs. 93.61 F1). These results suggest that even when the pre-training data is a highly imperfect fit for the target application, fine-tuning can still produce results better than or comparable to purely supervised training with randomlyinitialized parameters.2 4 Multilingual Model We next evaluate how well self-attention and pretraining work cross-linguistically; for this purpose we consider ten languages: English and the nine languages represented in the SPMRL 2013/2014 shared tasks (Seddah et al., 2013). Our findings from the previous section show that pre-training continues to benefit from larger model sizes when data is abundant. However, as models grow, it is not scalable to conduct separate pre-training and fine-tuning for all languages. This shortcoming can be partially overcome by pre-training BERT on multiple languages, as suggested by the effectiveness of the English parser fine-tuned from multilingual BERT (see Table 1). Nevertheless, this straightforward approach also faces scalability challenges because it requires training an independent parser for each language, which results in over 1.8 billion parameters for ten languages. Therefore, we consider a single parser with parameters shared across languages and finetuned jointly. The joint parser uses the same BERT model and self-attention layers for all ten languages but contains one MLP span classifier per language to accommodate the different tree labels (see Figure 1). The MLP layers contain 250K850K parameters, depending on the type of syntactic annotation adopted for the language, which 2We also attempted to use a randomly-initialized BERT model, but the resulting parser did not train effectively within the range of hyperparameters we tried. Note that the original BERT models were trained on significantly more powerful hardware and for a longer period of time than any of the experiments we report in this paper. 3502 Arabic Basque English French German Hebrew Hungarian Korean Polish Swedish Avg Params No pre-traininga 85.61 89.71 93.55 84.06 87.69 90.35 92.69 86.59 93.69 84.45 88.32 355M One model per language (this work) 87.97 91.63 94.91 87.42 90.20 92.99 94.90 88.80 96.36 88.86 91.40 1,851M Joint multilingual model (this work) 87.44 90.70 94.63 87.35 88.40 92.95 94.60 88.96 96.26 89.94 91.12 189M Relative ∆Error vs. monolingual +4.2%* +10.0%* +5.2%* +0.6% +15.5%* +0.6% +5.6%* -1.5% +2.7% -10.7%* +3.2%* Table 2: Results of monolingual and multilingual training on the SPMRL and WSJ test splits using the version of BERT pre-trained on 104 languages. In the last row, starred differences are significant at the p < 0.05 level using a bootstrap test; see Berg-Kirkpatrick et al. (2012). aKitaev and Klein (2018) Auxiliary Language Arabic Basque English French German Hebrew Hungarian Korean Polish Swedish Average Best Best Aux. 
# train sentences 15,762 7,577 39,831 14,759 40,472 5,000 8,146 23,010 6,578 5,000 Language Tested Arabic 0 -0.38 -0.20 -0.27 -0.26 -0.14 -0.29 -0.13 -0.31 -0.33 -0.23 +0 None Basque -0.47 0 -0.06 -0.26 0.04 -0.22 -0.27 -0.41 -0.49 -0.34 -0.25 +0.04 German English -0.18 -0.04 0 -0.02 -0.03 -0.07 -0.09 0.05 0.10 -0.05 -0.03 +0.10 Polish French 0.42 0.01 0.28 0 0.40 -0.14 0.04 0.27 0.29 -0.10 0.15 +0.42* Arabic German -0.38 -0.20 0.03 -0.45 0 -0.13 -0.15 -0.13 -0.21 -0.26 -0.19 +0.03 English Hebrew 0.13 0.05 -0.27 -0.17 -0.11 0 -0.09 -0.19 -0.30 -0.35 -0.13 +0.13 Arabic Hungarian -0.14 -0.43 -0.29 -0.38 -0.11 -0.39 0 -0.17 -0.28 -0.32 -0.25 +0 None Korean -0.24 -0.25 0.16 -0.27 -0.11 -0.01 0 0 -0.07 -0.17 -0.10 +0.16 English Polish 0.25 0.15 0.20 0.24 0.24 0.21 0.14 0.20 0 0.12 0.18 +0.25* Arabic Swedish 0.17 -0.08 0.38 0.54 0.53 -0.11 0.59 0.78 -0.17 0 0.26 +0.78* Korean Average -0.04 -0.12 0.02 -0.10 0.06 -0.10 -0.01 0.03 -0.14 -0.18 Table 3: Change in development set F1 score due to paired vs. individual fine-tuning. In the “Best” column, starred results are significant at the p < 0.05 level. On average, the three largest treebanks (German, English, Korean) function the best as auxiliaries. Also, the three languages benefitting most from paired training (Swedish, French, Polish) function poorly as auxiliaries. is less than 0.5% of the total parameters. Therefore, this joint training entails a 10x reduction in model size. During joint fine-tuning, each batch contains sentences from every language. Each sentence passes through the shared layers and then through the MLP span classifier corresponding to its language. To reduce over-representation of languages with large training sets, we follow Devlin et al. (2018b) and determine the sampling proportions through exponential smoothing: if a language is some fraction f of the joint training set, the probability of sampling examples from that language is proportional to fa for some a. We use the same hyperparameters as in monolingual training but increase the batch size to 256 to account for the increase in the number of languages, and we use a = 0.7 as in Devlin et al. (2018b). The individually fine-tuned parsers also use the same hyperparameters, but without the increase in batch size. Table 2 presents a comparison of different parsing approaches across a set of ten languages. Our joint multilingual model outperforms treebankonly models (Kitaev and Klein, 2018) for each of the languages (88.32 vs 91.12 average F1). We also compare joint and individual fine-tuning. The multilingual model on average degrades performance only slightly (91.12 vs. 91.40 F1) despite the sharp model size reduction, and in fact performs better for Swedish. We hypothesize that the gains/losses in accuracy for different languages stem from two competing effects: the multilingual model has access to more data, but there are now multiple objective functions competing over the same parameters. To examine language compatibility, we also train a bilingual model for each language pair and compare it to the corresponding monolingual model (see Table 3). From this experiment, we see that the best language pairs often do not correspond to any known linguistic groupings, suggesting that compatibility of objective functions is influenced more by other factors such as treebank labeling convention. In addition, we see that on average, the three languages with the largest training sets (English, German, Korean) function well as auxiliaries. 
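The exponentially smoothed sampling distribution described above is straightforward to compute; a short sketch using the training-set sizes from Table 3:

```python
"""Language sampling probabilities via exponential smoothing (a = 0.7)."""
import numpy as np

train_sizes = {  # training sentences per treebank (Table 3)
    "Arabic": 15762, "Basque": 7577, "English": 39831, "French": 14759,
    "German": 40472, "Hebrew": 5000, "Hungarian": 8146, "Korean": 23010,
    "Polish": 6578, "Swedish": 5000,
}
a = 0.7
sizes = np.array(list(train_sizes.values()), dtype=float)
fractions = sizes / sizes.sum()                   # raw fraction f per language
probs = fractions ** a / (fractions ** a).sum()   # sampling probability ∝ f^a

for lang, f, p in zip(train_sizes, fractions, probs):
    print(f"{lang:9s} raw {f:.3f} -> sampled {p:.3f}")
```

Smoothing raises the share of the smaller treebanks (e.g., Hebrew and Swedish) at the expense of the largest ones. Returning to the pairwise results in Table 3: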
Furthermore, the three languages that gain the most from paired training (Swedish, French, Polish) have smaller datasets and function poorly as auxiliaries. These results suggest that joint training not only drastically reduces model size, but also gives languages with small datasets a way to benefit from the large datasets of other languages. 3503 Arabic Basque French German Hebrew Hungarian Korean Polish Swedish Avg Bj¨orkelund et al. (2014) 81.32a 88.24 82.53 81.66 89.80 91.72 83.81 90.50 85.50 86.12 Coavoux and Crabb´e (2017) 82.92b 88.81 82.49 85.34 89.87 92.34 86.04 93.64 84.0 87.27 Kitaev and Klein (2018) 85.61c 89.71c 84.06 87.69 90.35 92.69 86.59c 93.69c 84.45 88.32 This work (joint multilingual model) 87.44 90.70 87.35 88.40 92.95 94.60 88.96 96.26 89.94 90.73 ∆vs. best previous +1.83 +0.99 +3.29 +0.71 +2.60 +1.91 +2.37 +2.57 +4.44 This work (one model per language) 87.97 91.63 87.42 90.20 92.99 94.90 88.80 96.36 88.86 91.01 ∆vs. best previous +2.36 +1.92 +3.36 +2.51 +2.64 +2.21 +2.21 +2.67 +3.36 Table 4: Results on the testing splits of the SPMRL dataset. All values are F1 scores calculated using the version of evalb distributed with the shared task. aBj¨orkelund et al. (2013) bUses character LSTM, whereas other results from Coavoux and Crabb´e (2017) use predicted part-of-speech tags. cDoes not use word embeddings, unlike other results from Kitaev and Klein (2018). LR LP F1 Dyer et al. (2016) – – 93.3 Choe and Charniak (2016) – – 93.8 Liu and Zhang (2017) – – 94.2 Fried et al. (2017) – – 94.66 Joshi et al. (2018) 93.8 94.8 94.3 Kitaev and Klein (2018) 94.85 95.40 95.13 This work (single model) 95.46 95.73 95.59 This work (ensemble of 4) 95.51 96.03 95.77 Table 5: Comparison of F1 scores on the WSJ test set. LR LP F1 Fried and Klein (2018) – – 87.0 Teng and Zhang (2018) 87.1 87.5 87.3 This work 91.55 91.96 91.75 Table 6: Comparison of F1 scores on the Chinese Treebank 5.1 test set. 5 Results We train and evaluate our parsers on treebanks for eleven languages: the nine languages represented in the SPMRL 2013/2014 shared tasks (Seddah et al., 2013) (see Table 4), English (see Table 5), and Chinese (see Table 6). The English and Chinese parsers use fully monolingual training, while the remaining parsers incorporate a version of BERT pre-trained jointly on 104 languages. For each of these languages, we obtain a higher F1 score than any past systems we are aware of. In the case of SPRML, both our single multilingual model and our individual monolingual models achieve higher parsing accuracies than previous systems (none of which made use of pretrained contextual word representations). This result shows that pre-training is beneficial even when model parameters are shared heavily across languages. 6 Conclusion The remarkable effectiveness of unsupervised pretraining of vector representations of language suggests that future advances in this area can continue improving the ability of machine learning methods to model syntax (as well as other aspects of language). As pre-trained models become increasingly higher-capacity, joint multilingual training is a promising approach to scalably providing NLP systems for a large set of languages. Acknowledgments This research was supported by DARPA through the XAI program. This work used the Savio computational cluster provided by the Berkeley Research Computing program at the University of California, Berkeley. References Taylor Berg-Kirkpatrick, David Burkett, and Dan Klein. 2012. An empirical investigation of statistical significance in NLP. 
In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 995–1005, Jeju Island, Korea. Association for Computational Linguistics. Anders Bj¨orkelund, Ozlem Cetinoglu, Agnieszka Fale´nska, Rich´ard Farkas, Thomas Mueller, Wolfgang Seeker, and Zsolt Sz´ant´o. 2014. The IMSWrocław-Szeged-CIS entry at the SPMRL 2014 shared task: Reranking and morphosyntax meet unlabeled data. In Proceedings of the First Joint Workshop on Statistical Parsing of Morphologically Rich Languages and Syntactic Analysis of NonCanonical Languages, pages 97–102. 3504 Anders Bj¨orkelund, Ozlem Cetinoglu, Rich´ard Farkas, Thomas Mueller, and Wolfgang Seeker. 2013. (Re)ranking meets morphosyntax: State-of-the-art results from the SPMRL 2013 shared task. In Proceedings of the Fourth Workshop on Statistical Parsing of Morphologically-Rich Languages, pages 135–145, Seattle, Washington, USA. Association for Computational Linguistics. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146. Do Kook Choe and Eugene Charniak. 2016. Parsing as language modeling. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2331–2336. Association for Computational Linguistics. Maximin Coavoux and Benoit Crabb´e. 2017. Multilingual lexicalized constituency parsing with wordlevel auxiliary tasks. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 331–336. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018a. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv:1810.04805 [cs]. ArXiv: 1810.04805. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018b. BERT: Pre-training of deep bidirectional transformers for language understanding. https: //github.com/google-research/bert/ blob/master/multilingual.md. Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. 2016. Recurrent neural network grammars. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 199–209. Association for Computational Linguistics. Daniel Fried and Dan Klein. 2018. Policy gradient as a proxy for dynamic oracles in constituency parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 469–476. Association for Computational Linguistics. Daniel Fried, Mitchell Stern, and Dan Klein. 2017. Improving neural parsing by disentangling model combination and reranking effects. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 161–166. Association for Computational Linguistics. Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 328–339. Association for Computational Linguistics. Vidur Joshi, Matthew Peters, and Mark Hopkins. 2018. Extending a parser to distant domains using a few dozen partially annotated examples. 
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1190–1199. Association for Computational Linguistics. Nikita Kitaev and Dan Klein. 2018. Constituency parsing with a self-attentive encoder. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2676–2686. Association for Computational Linguistics. Jiangming Liu and Yue Zhang. 2017. In-order transition-based constituent parsing. Transactions of the Association for Computational Linguistics, 5:413–424. Tomas Mikolov, Edouard Grave, Piotr Bojanowski, Christian Puhrsch, and Armand Joulin. 2018. Advances in pre-training distributed word representations. In Proceedings of the 11th Language Resources and Evaluation Conference, Miyazaki, Japan. European Language Resource Association. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237, New Orleans, Louisiana. Association for Computational Linguistics. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. Djam´e Seddah, Reut Tsarfaty, Sandra K¨ubler, Marie Candito, Jinho D. Choi, Rich´ard Farkas, Jennifer Foster, Iakes Goenaga, Koldo Gojenola Galletebeitia, Yoav Goldberg, Spence Green, Nizar Habash, Marco Kuhlmann, Wolfgang Maier, Joakim Nivre, Adam Przepi´orkowski, Ryan Roth, Wolfgang Seeker, Yannick Versley, Veronika Vincze, Marcin Woli´nski, Alina Wr´oblewska, and Eric Villemonte de la Clergerie. 2013. Overview of the SPMRL 2013 shared task: A cross-framework evaluation of parsing morphologically rich languages. In Proceedings of the Fourth Workshop on Statistical Parsing of Morphologically-Rich Languages, pages 146–182. Association for Computational Linguistics. Zhiyang Teng and Yue Zhang. 2018. Two local models for neural constituent parsing. In Proceedings 3505 of the 27th International Conference on Computational Linguistics, pages 119–132. Association for Computational Linguistics. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google’s Neural Machine Translation System: Bridging the Gap between Human and Machine Translation. arXiv:1609.08144 [cs]. ArXiv: 1609.08144.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3506–3517 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3506 A Multilingual BPE Embedding Space for Universal Sentiment Lexicon Induction Mengjie Zhao and Hinrich Sch¨utze CIS, LMU Munich, Germany [email protected] Abstract We present a new method for sentiment lexicon induction that is designed to be applicable to the entire range of typological diversity of the world’s languages. We evaluate our method on Parallel Bible Corpus+ (PBC+), a parallel corpus of 1593 languages. The key idea is to use Byte Pair Encodings (BPEs) as basic units for multilingual embeddings. Through zero-shot transfer from English sentiment, we learn a seed lexicon for each language in the domain of PBC+. Through domain adaptation, we then generalize the domain-specific lexicon to a general one. We show – across typologically diverse languages in PBC+ – good quality of seed and general-domain sentiment lexicons by intrinsic and extrinsic and by automatic and human evaluation. We make freely available our code, seed sentiment lexicons for all 1593 languages and induced general-domain sentiment lexicons for 200 languages.1 1 Introduction Lexicons play an important role in sentiment analysis. Sentiment lexicons are available for highresource languages like English (Pang et al., 2008; Baccianella et al., 2010; Mohammad and Turney, 2013), but not for many low-resource languages. Researchers are trying to fill this gap by inducing lexicons monolingually (Badaro et al., 2014; Eskander and Rambow, 2015; Rouces et al., 2018) as well as multilingually (Chen and Skiena, 2014), often by transfer from high-resource to low-resource languages. The world’s languages are heterogeneous – of particular relevance for us is heterogeneity with respect to morphology and with respect to marking token boundaries. This heterogeneity poses difficulties when designing a universal approach 1cistern.cis.lmu.de to lexicon induction that works for all languages – implementing a high quality tokenizer and morphological analyzer for each language is not feasible short-term. Given the small number of native speakers in low-resource languages (Goldhahn et al., 2016), crowdsourcing cannot easily be carried out either. To overcome this heterogeneity and provide sentiment resources for low-resource languages, we present a new approach to sentiment lexicon induction that is universal – that is, it is applicable to the full range of typologically different languages – and apply it to 1593 languages. Our method first takes a parallel corpus as input and applies BPE (Gage, 1994) segmentation to it. We then create a multilingual BPE embedding space, from which a ZS (zero-shot) lexicon for each language L is extracted by zero-shot transfer from English sentiment to L. We use PBC+, an expansion of the Parallel Bible Corpus (Mayer and Cysouw, 2014), as our parallel corpus. The ZS lexicons show high quality, but are specific to the domain of PBC+ (the Bible). We then adapt them to the general domain. For brevity, we also use generic to refer to general-domain. Our method is universal and language-agnostic – it does not require language-dependent preprocessing. We carry out intrinsic and extrinsic, automatic and human evaluations on 95 languages. Intrinsic evaluation shows that our approach produces word ratings that strongly correlate with gold standard lexicons and human judgments. 
Extrinsic evaluation on Twitter sentiment classification demonstrates that our lexicons perform comparably or better than existing lexicons derived in multilingual settings. We chose an approach to sentiment analysis based on lexicons in this paper because it is transparent and meets high standards of explainability. A classification decision can easily be traced 3507 back to the lexicon entries in the document that are responsible. Many more complex methods, e.g., many deep learning approaches, do not meet this standard. Transparency is of particular importance for low-resource languages because error analysis and verification are paramount when working with small and noisy resources that are typical of lowresource languages. Our contributions: (i) We propose a new method for inducing sentiment lexicons for a broad range of typologically diverse languages. We use BPEs as basic units and show that they work well across languages. (ii) We carry out extensive evaluation to confirm correctness and high quality of the created lexicons. (iii) We make our code, the 1593 ZS seed sentiment lexicons and 200 generic sentiment lexicons freely available to the community. This is the up-to-now largest sentiment resource in terms of language coverage that has been published. 2 Related Work Monolingual Lexicon Induction. Sentiment lexicons for many languages have been induced. Eskander and Rambow (2015), Wang and Ku (2016), and Rouces et al. (2018) create Arabic, Chinese, and Swedish sentiment lexicons, respectively. Monolingually induced sentiment lexicons for specific domains like Twitter and finance are also devised (Mohammad et al., 2013; Hamilton et al., 2016). These methods are specialized such that applying them to other languages is non-trivial. For example, Eskander and Rambow (2015) link AraMorph (Buckwalter, 2004) with SentiWordNet by additionally considering part-ofspeech information, which may not be available in lexical resources in other languages. Inducing Chinese sentiment lexicons (Wang and Ku, 2016) needs properly tokenized corpora, which is not a hard requirement in Swedish. In contrast, we aim to design a method applicable to typologically diverse languages and we apply it to 1500+ languages. Bi/Multi-Lingual Lexicon Induction. Gao et al. (2015) propose a graph based method for learning sentiment lexicons in target language by leveraging English sentiment lexicons. They rely on a high-quality word alignment, which is difficult to produce if languages are typologically diverse and the size of the parallel corpus is small. Chen and Skiena (2014) devise a knowledge graph eng The book of the history of Jesus Christ , son of David , son of Abraham : fra Le livre de l’histoire de J´esus Christ , fils de David , fils d’Abraham : jpn アブラハムの子,ダビデの子, イエス・キリストについての歴史の書: Table 1: PBC+ verse 40001001 in three languages based method to build sentiment lexicons for 136 major languages. Several linguistic resources such as Google Translate and Wiktionary are used to link words across languages. In contrast, our approach uses BPE embeddings to extract alignment signals from the parallel corpus, an approach that is better applicable across diverse languages. We do not require resources like Wiktionary. We cover more languages than Chen and Skiena (2014) and more words (e.g., 300K for Amharic). Language-Agnostic NLP. Language-agnostic NLP has demonstrated strong performance in areas such as neural machine translation (NMT) and universal representation learning. 
A particular difficulty is languages that do not mark token boundaries by whitespace such as Japanese. We refer to them as non-segmented languages. Sennrich et al. (2016) show the strength of BPE in translating rare words. Kudo (2018) introduces subword regularization that utilizes multiple subword sequences to improve the robustness of NMT models. Sennrich et al. (2016)’s subword-nmt2 requires preprocessing (specifically, tokenization) for non-segmented languages, however, sentencepiece3 (Kudo and Richardson, 2018) used by Kudo (2018) requires no preprocessing even for non-segmented languages. This research indicates the potential of language-agnostic NMT. Effective representations of words (Sch¨utze, 1993), e.g., word embeddings (Mikolov et al., 2013; Pennington et al., 2014), have been extended to be bilingual (Ruder, 2017; Artetxe et al., 2017) or multilingual (Dufter et al., 2018), with (Artetxe et al., 2018) and without (Conneau et al., 2017) supervision. Artetxe and Schwenk (2018) train a language-agnostic BiLSTM encoder creating universal sentence representations of 93 languages, and performing strongly in crosslingual tasks. Lample and Conneau (2019) show that pretraining the encoders with a crosslingual language model objective helps in achieving state2github.com/rsennrich/subword-nmt 3github.com/google/sentencepiece 3508 of-the-art results in crosslingual classification and NMT. This research demonstrates the strength of language-agnostic methods for representation learning in NLP. Language-agnostic NLP models can generalize across languages without requiring language-dependent preprocessing. These advantages motivate us to design a universal approach for sentiment lexicon induction for 1500+ languages. 3 Method Figure 1 shows the four steps of our method: (i) BPE segmentation. (ii) Multilingual embedding space creation. (iii) ZS lexicon induction. (iv) Domain adaptation to the general domain. We work with the parallel corpus PBC+. PBC+ extends the Parallel Bible Corpus by adding4 500 translations of the New Testament in 334 languages, resulting in a sentence-aligned parallel corpus containing New Testament verses in 2164 translations of 1593 languages. Many languages have several translations of the New Testament in PBC+. We use the term “edition” to refer to a single translation. Table 1 shows a verse in three languages. As shown, the Japanese (jpn) verse is not tokenized. 3.1 BPE Segmentation Given the linguistic heterogeneity of the world’s languages, it is crucial to first decide which type of linguistic unit to use to represent a language L in the multilingual space. The word, the linguistic unit typically generated from whitespace tokenization, is not ideal for universal approaches because non-segmented languages require carefully designed tokenizers. Character (or byte) n-gram is an alternative unit (Wieting et al., 2016; Gillick et al., 2016; Sch¨utze, 2017; Dufter et al., 2018), but the optimum length n varies across languages, e.g., n = 2 may be suitable for Chinese (Foo and Li, 2004), but clearly not for English. In our desire to design a universal approach, we use sentencepiece to segment PBC+ editions in all 1593 languages into sequences of BPE segments. We will show that this segmentation works across languages. The widely used BPE segmentation algorithm subword-nmt only considers BPE segments within words (Sennrich et al., 2016) and some frequent BPEs are essentially valid words. 
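To make this segmentation step concrete, the following is a minimal sketch (not the authors' released code) of segmenting one PBC+ edition with sentencepiece. The file names are hypothetical, and the vocabulary size is one of the three values (2000, 4000, 8000) used later in §4.1.

```python
# Sketch: data-driven BPE segmentation of one PBC+ edition with sentencepiece.
# It works on raw text, so no tokenizer is needed even for non-segmented languages.
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="pbc_jpn_edition.txt",   # hypothetical file: one raw verse per line
    model_prefix="jpn_bpe4k",
    vocab_size=4000,
    model_type="bpe",
)

sp = spm.SentencePieceProcessor(model_file="jpn_bpe4k.model")
segments = sp.encode("アブラハムの子,ダビデの子", out_type=str)
print(segments)   # list of BPE segments (subwords, words or cross-token units)
```

Because sentencepiece learns its segmentation from the raw verses, the same call covers segmented and non-segmented languages alike.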
4We use github.com/ehsanasgari/1000Langs sentencepiece adopts this setting for segmented languages like English (Kudo, 2018). But for non-segmented languages, sentencepiece does not require any language-dependent preprocessing – it learns a data-driven “tokenizer” onthe-fly from raw text. Hence, sentencepiece BPE segments can be larger linguistic units than say, English words, e.g., phrases. Examples for Japanese BPE segments in PBC+ are: “愛のうち に” (in love) and “何と言えばよいでしょうか” (what should I say). We will use the term “BPE” to refer to all BPE segments produced by sentencepiece, including subwords, words and cross-token units like phrases. Figure 1 (a) shows some sample units. As shown, the English segments can be words or subwords (underlined). Dominant contexts of shown subwords – insp: inspiration, inspired; crim: crime, criminals; blasphe: blasphemy, blasphemed; hest: highest, richest. 3.2 Multilingual Space Creation We next create the multilingual space hosting BPEs in 1593 languages of PBC+. We use the Sentence ID (S-ID) method (Levy et al. (2017), cf. also Le and Mikolov (2014)), a strong baseline in multilingual embedding learning. Given a sentence-aligned parallel corpus, the SID method first creates an embedding training corpus by recording co-occurrences between the sentence ID and the sentence’s words (the New Testament verse ID and BPEs in our case) in all languages. Figure 2 shows examples from the training corpus; each BPE is associated with a 3-digit ISO 639-3 language code. After that, an embedding learner is applied to the created corpus to learn the multilingual space. We use word2vecskipgram (Mikolov et al., 2013) as our embedding learner. 3.3 Zero-Shot Transfer of English Sentiment Embeddings encode sentiment information (Pennington et al., 2014; Tang et al., 2014; Amir et al., 2015; Rothe et al., 2016). We exploit this for zero-shot transfer of English sentiment to the other 1592 languages. We train two linear SVMs to classify sentiment of English BPE embeddings as positive vs. non-positive (POS) and as negative vs. non-negative (NEG). We use this setup – as opposed to binary classification positive vs. negative – to address the fact that some long BPE segments in non-segmented 3509 x sentiment z salvation mercy 励まし 信じる 智慧 賞給 ûīೕಕದ ದĨøಂದ የሚበልጥ ድፍረት sincérité glorieuse insp hest curse murders ርኵሳ አደጋ ಅಲ~ಗıಯ ďೂಲ~ಲು 泣き つぶや 麻煩 迷惑 résister avaricieux crim blasphe ML ∈Rn×d (w1, l1) (w2, l2) (w3, l3) . . . (wm, lm) Generic Embeddings of L PBC+ ZS lexicon of L Domain Adaptation Generic DA (Domain-Adapted) Lexicon of L: Pos Neg [eng] [jpn] [fra] enliven smiles . . . misfortune kill 素敵 楽しみ . . . 異臭 苦し紛れ atout décoration . . . odieux répugner (a) PBC+ ZS (zero-shot) lexicons: Created by zero-shot crosslingual transfer (b) Generic DA (domain-adapted) lexicons: Created by PBC-to-general-domain adaptation Figure 1: Universal sentiment lexicon induction. (a): S-ID multilingual space of BPEs and sentiment classification hyperplanes (only the positive vs. non-positive plane is shown) learned from English. Underlined units are English BPEs with strong sentiment. (b): Creating generic DA lexicons using PBC+ ZS lexicons and generic embeddings. languages may encode both sentiments. Using two SVMs allows us to identify then filter out segments with compositional sentiments during zeroshot transfer. This setup also enables direct comparison with Dufter et al. (2018) in Table 2. 
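Stepping back briefly to the multilingual space of §3.2, a minimal sketch of the S-ID training-corpus construction is given below; variable and file names are ours, not the authors'. Each output line pairs a verse ID with a language-tagged BPE, matching the format of Figure 2.

```python
# Sketch: build the S-ID embedding training corpus (Section 3.2).
# `editions` maps an ISO 639-3 code to a list of (verse_id, bpe_segments) pairs.
def write_sid_corpus(editions, path="sid_corpus.txt"):
    with open(path, "w", encoding="utf-8") as out:
        for lang, verses in editions.items():
            for verse_id, segments in verses:
                for bpe in segments:
                    # one co-occurrence pair per line, e.g. "40001002 @Jesus:eng"
                    out.write(f"{verse_id} @{bpe}:{lang}\n")
```

Because parallel verses share the same verse ID, a skip-gram model trained on this corpus gives translation-equivalent BPEs from all 1593 languages similar contexts, and hence nearby embeddings in one space.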
The two SVMs are then applied to all embedding vectors in the multilingual space to yield a ZS lexicon for each of the 1593 languages. 3.4 PBC+ to General Domain Adaptation Our ZS lexicons show high quality (see §5.2), but are specific to the PBC+ domain, i.e., the Bible. We adapt them to the general domain by obtaining generic embeddings and using ZS lexicon BPEs as labels to predict the sentiment of each generic embedding. We assume that we have access to generic embeddings or, alternatively, that we can learn them from a generic corpus. We now describe how we predict the sentiment of generic embeddings. Given the PBC+ ZS lexicon B and the generic em40001002 @Jesus:eng 40001002 @አብርሃም:amh 40001002 @òಗೂ:kan 40001002 @雅各:zho 66002003 བཟོད་བsrན་byས་:bod · · · · · · Figure 2: Samples of S-ID embedding training corpus. 40001002 and 66002003: S-ID, i.e., IDs of New Testament verses. amh=Amharic, kan=Kannada, zho=Chinese, bod=Tibetan. bedding matrix ML ∈Rn×d of language L, we train a matrix QL ∈Rd×d such that BPE pairs with same sentiment (Gs ⊂B × B) have small l2 distance while BPE pairs with different sentiment (Gd ⊂B × B) have large l2 distance, i.e., ∀w, v ∈B, w ̸= v: arg min QL X (w,v)∈Gd −α∥PQL(ew −ev)∥2 + X (w,v)∈Gs (1 −α)∥PQL(ew −ev)∥2 + λ 2 ∥PQL∥2 F where ew, ev ∈Rd are embeddings of BPEs w, v. d is embedding dimension. n is vocabulary size. α ∈[0, 1] is the hyperparameter balancing the two sub-objectives. λ is a regularization weight. P ∈Rd×d is an identity matrix in the first dimension, i.e., a selector. This objective concentrates sentiment information in an embedding vector to a 1-dimensional ultradense sentiment space, resulting in a real-valued generic sentiment score. We minimize the objective using stochastic gradient descent (SGD). After training, the generic sentiment score of BPE w in language L is computed as sw = PQLew. We refer to this method as REG and we call a lexicon computed by REG a generic DA (domain-adapted) lexicon since we always adapt from the Bible to the general domain in this paper. REG is inspired by Densifier (Rothe et al., 2016), which is state of the art on SemEval2015 10E (Rosenthal et al., 2015) – determining 3510 strength of association of Twitter terms with sentiment. Rothe et al. (2016) show that Densifier induces high quality and coverage sentiment lexicons in a domain adaptation setup. Densifier forces QL to be orthogonal to preserve the structure of the embedding space. As we are only interested in accurate sentiment prediction, we replace the orthogonality with l2 regularization: λ 2∥PQL∥2 F . The orthogonal constraint in Densifier – computing an SVD after each batch update – is expensive (O(d3)) and requires non-trivial training regime (Rothe et al., 2016). We will show that our formalization delivers comparable results. In our experiments, we can use the generic word embeddings provided by Bojanowski et al. (2017) for 157 languages. Additionally, Heinzerling and Strube (2018) create generic BPE embeddings for 257 languages by segmenting Wikipedia articles using sentencepiece then running GloVe on the segmented corpora. As discussed above (§3.1), some BPEs in the PBC+ ZS lexicons are words, some are subwords – so we can utilize both sets. 4 Experiments 4.1 Datasets and Settings We use the 7958 New Testament verses in PBC+ that were also used by Dufter et al. (2018) to create the multilingual BPE embedding space. 
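Before turning to the settings, here is a minimal sketch of the zero-shot transfer step just described (§3.3); variable names are illustrative, and the 0.7 confidence threshold is the one reported below in §4.1. The two classifiers are trained only on English BPE vectors and then applied to every vector in the multilingual space.

```python
# Sketch: zero-shot transfer of English sentiment (Section 3.3).
# eng_vecs: embeddings of English training BPEs; pos_y / neg_y: 0/1 labels for
# "positive vs. non-positive" and "negative vs. non-negative".
from sklearn.svm import SVC

def build_zs_lexicon(eng_vecs, pos_y, neg_y, all_bpes, all_vecs, thresh=0.7):
    pos_clf = SVC(kernel="linear", probability=True).fit(eng_vecs, pos_y)
    neg_clf = SVC(kernel="linear", probability=True).fit(eng_vecs, neg_y)
    p_pos = pos_clf.predict_proba(all_vecs)[:, 1]   # probability of class 1
    p_neg = neg_clf.predict_proba(all_vecs)[:, 1]

    lexicon = {}
    for bpe, pp, pn in zip(all_bpes, p_pos, p_neg):
        if pp >= thresh and pn < thresh:     # high-confidence positive
            lexicon[bpe] = +1
        elif pn >= thresh and pp < thresh:   # high-confidence negative
            lexicon[bpe] = -1
        # BPEs confident for both polarities (compositional sentiment) or for
        # neither are left out of the ZS lexicon.
    return lexicon
```

Here probability=True yields Platt-scaled probabilities, which matches the Platt et al. (1999) probabilities the paper thresholds on.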
To cover as many BPEs as we can, we segment each PBC+ edition three times with vocabulary sizes 2000, 4000 and 8000 using sentencepiece. S-ID generates a 31GB embedding training corpus including 7,414,810 BPEs in 1593 languages. English training set. We employ VADER, a simple but widely used rule-based model for general sentiment analysis (Hutto and Gilbert, 2014), to create sentiment labels for English BPEs. We consider BPEs with sentiment score ⩾+0.1 (resp. ⩽-0.1) as positive (resp. negative). BPEs with score 0 are treated as neutral. As a result, we have 851 positive, 906 negative and 13,861 neutral training BPEs in English. We uniformly sample 878 = floor((851 + 906)/2) neutral BPEs to speed up training. Zero-shot transfer. The two SVMs for POS and NEG (§3.3) are trained on English training set (see above), then applied to all vectors in the multilingual BPE embedding space to create ZS lexicons for 1593 languages. We only keep highconfidence BPEs – those with a predicted probability for either POS or NEG of ≥0.7 (Platt et al., 1999) – to ensure ZS lexicons encode clear sentiment signals. The PBC+ ZS lexicon of language L is then the set of all high-confidence sentimentbearing BPEs from L. Evaluation. Following Abdaoui et al. (2017), Bar-Haim et al. (2017), Rouces et al. (2018), we evaluate the quality of PBC+ ZS lexicons based on gold sentiment lexicons in Japanese (JA) (concatenation of Kobayashi et al. (2005); Higashiyama et al. (2008)), Czech (CZ) (Veselovsk´a and Bojar, 2013), German (DE) (Waltinger, 2010), Spanish (ES) (Perez-Rosas et al., 2012), French (FR) (Abdaoui et al., 2017) and English (EN) (WHM lexicon, the concatenation of Wilson et al. (2005), Hu and Liu (2004) and Mohammad and Turney (2013), created by Rothe et al. (2016)). F1 is evaluation metric. We always compute F1 on the intersection of our and gold lexicon. Gold lexicons are also used in intrinsic evaluation of generic DA lexicons (Table 6). Additionally, the English WHM lexicon is also used in the evaluation of the universality of our approach (Table 8). For intrinsic evaluation of generic DA lexicons, we compare our results with Densifier. Rothe et al. (2016) provide embeddings and train/validation splits of gold standard lexicons in CZ, DE, ES, FR and EN – we also use them in our experiments. We show (i) using GEN (the same training words as Densifier), REG (§3.4) induces generic lexicons in comparable quality; (ii) using PBC+ ZS lexicons, the induced generic DA lexicons are also in high quality. Kendall’s τ (Kendall, 1938) is evaluation metric. As Densifier is implemented in MATLAB, we implement our model in NumPy (Oliphant, 2006) which is more accessible to the community. For extrinsic evaluation of generic DA lexicons, we carry out Twitter sentiment classification in 13 languages. For each language, we retrieve ≈12,000 tweets from the human annotated dataset devised by Mozetiˇc et al. (2016), and sample balanced number of positive and negative tweets (for clearer comparisons and descriptions) which are then randomly split 80/20 into train/test. We compare our lexicons with Chen and Skiena (2014)’s work. Two classification models are used (§5.3) – COUNT (count-based, Chen and Skiena (2014)) and ML (machine-learning-based, Eskander and Rambow (2015)). Accuracy is evaluation metric. 3511 4.2 Hyperparameter Tuning We train the multilingual BPE embedding space using word2vec-skipgram with default parameters except: 25 negative samples, 10−4 occurrence threshold, 200 dimensions and 10 iterations. 
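A minimal sketch of this embedding training, assuming gensim 4.x and the S-ID corpus file sketched in §3.2; we read the 10^-4 "occurrence threshold" as word2vec's subsampling parameter.

```python
# Sketch: skip-gram training of the multilingual BPE space.
from gensim.models import Word2Vec

model = Word2Vec(
    corpus_file="sid_corpus.txt",  # one "verse_id @bpe:lang" pair per line
    sg=1,                          # skip-gram
    vector_size=200,
    negative=25,
    sample=1e-4,                   # occurrence (subsampling) threshold
    epochs=10,
)
model.wv.save("multilingual_bpe_space.kv")
```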
We tune the two linear SVMs for POS and NEG by 5-fold cross validation on English training set. Following Rothe et al. (2016), when inducing generic DA lexicons, we run a grid search on their train/validation sets to find α and λ. With the same settings, we additionally conduct an experiment on Japanese (JA Wiki), a non-segmented language, to show the universality of our approach. For EN Twitter (SemEval2015 10E), we tune our model on the trial (dev) set and report results on the test set. In all experiments, we search α ∈{0.3, 0.4, 0.5, 0.6, 0.7}, λ ∈{0.01, 0.1, 1}. Learning rate is 0.1, batch size 100, and the maximum number of updating steps 30,000. Following Eskander and Rambow (2015), in machine-learning-based Twitter sentiment classification for each of the 13 languages, we find the optimum SVM (positive vs. negative tweet) hyperparameters (C and kernel) by running 5-fold cross validation on the training set. 5 Results and Discussion 5.1 Multilingual BPE Space Evaluation We first evaluate the multilingual BPE space by carrying out the crosslingual verse sentiment classification experiment in Dufter et al. (2018). Two linear SVMs are trained on 2147 English training verses to classify the verse sentiment (positive vs. non-positive, i.e., POS, and negative vs. non-negative, i.e., NEG). A verse is represented as the TF-IDF weighted sum of the embeddings of its BPEs. We then conduct the crosslingual verse sentiment analysis – using the SVMs to classify 476 test verses of Dufter et al. (2018)’s 1664 editions in 1259 languages. Table 2 gives results averaged over 1664 editions. Word and Char are two multilingual spaces created by Dufter et al. (2018). For Word, whitespace tokenization is used to segment all editions. For Char, all editions are segmented to sequences of overlapping byte-ngrams (length n varies across languages, see Dufter et al. (2018)). Next, the S-ID method is utilized to create the two multilingual spaces. The S-ID BPE space outperforms both S-ID Word and S-ID Char spaces. This observation meets our expectation – the data-driven BPE Word Char BPE POS NEG POS NEG POS NEG S-ID .79 .88 .65 .86 .81 .89 Table 2: F1 for verse sentiment classification. Bold: our results. Word/Char are from Dufter et al. (2018). ISO B W ∆ ISO B W ∆ lzh1 .82 .04 +.78 eng1 .88 .84 +.04 jpn1 .86 .19 +.67 fra1 .85 .85 -.00 khm2 .87 .21 +.66 deu1 .84 .83 +.01 khm3 .86 .25 +.61 spa1 .85 .85 +.00 ksw0 .86 .32 +.54 por1 .84 .87 -.03 Table 3: The most improved (left) editions when using S-ID BPE (B) compared with S-ID Word (W). B and W perform similarly on segmented languages (right) like English (eng), French (fra), German (deu), Spanish (spa) and Portuguese (por). Numbers are in F1. segmentation is superior to splitting on whitespace (Word) or overlapping byte-ngram segmentation (Char), for non-segmented languages like Japanese whose PBC+ editions are not tokenized. For the more challenging subtask POS, we find the biggest improvement of S-ID BPE over Word is for non-segmented languages like Classical Chinese (lzh), Japanese (jpn), Khmer (khm) and S’gaw Karen (ksw) as shown in Table 3 (left). For segmented languages, S-ID BPE delivers similar performance as S-ID Word as shown in Table 3 (right). This observation also meets our expectation – lots of BPEs in segmented languages are essentially valid words. These observations show the universality of our approach. 
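For concreteness, the verse representation used in this evaluation (a TF-IDF weighted sum of BPE embeddings) might be computed as follows; the TF normalization and the idf table are our assumptions, not details given in the paper.

```python
# Sketch: TF-IDF weighted verse representation for the verse sentiment SVMs.
import numpy as np
from collections import Counter

def verse_vector(bpe_segments, emb, idf, dim=200):
    counts = Counter(bpe_segments)
    vec = np.zeros(dim)
    for bpe, c in counts.items():
        if bpe in emb:
            tf = c / len(bpe_segments)          # one common TF definition
            vec += tf * idf.get(bpe, 0.0) * emb[bpe]
    return vec
```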
The sentiment information derived from English is successfully transferred to heterogeneous languages without language-dependent preprocessing – even for non-segmented languages. 5.2 PBC+ ZS (Zero-Shot) Lexicon Evaluation Sample entries in the English ZS lexicon are shown in Table 4 (left) as a qualitative evaluation. Table 5 shows the high consistency between the PBC+ ZS lexicons and gold lexicons in six languages. These results indicate that the positive negative positive negative magnificent fought #blessedbeyondbelief shats privilege blamed alhamduillah #worstpain enjoyed debauchery #365daysofgratitude theiving salvation adulter #excellence #stuffynose rejoices gloomy co-create sorethroat Table 4: Sample entries in English ZS lexicon (left) and DA lexicon with Twitter embeddings (right). 3512 two SVMs trained on English BPE embeddings perform strongly in a zero-shot crosslingual setting, and the resulting PBC+ ZS lexicons in difficult (morphologically rich, e.g., Czech; nonsegmented, e.g., Japanese) languages encode clear sentiment information. 5.3 Generic DA (Domain-Adapted) Lexicon Evaluation Table 4 (right) qualitatively shows the most sentiment-bearing words of the DA lexicon induced with English ZS lexicon and Twitter embeddings (EN Twitter). Lots of top ranked words are strong sentiment-bearing hashtags that never occur in the ZS lexicon domain, illustrating that our approach functions well in the domain adaptation setup. This observation is consistent with Densifier (Rothe et al., 2016). Intrinsic evaluation: ranking correlation. We compute ranking correlation between our generic DA lexicons and gold standard lexicons. There are overlapping words between our PBC+ ZS lexicon BPEs and the validation/test sets used by Rothe et al. (2016) – we discard these training words for a clean comparison. Columns (i) and (ii) of Table 6 show that REG (§3.4) delivers results comparable to Densifier (ORTH) when using the same set of generic training words (GEN) in lexicon induction. However, our method is more efficient – no need to compute the expensive SVD after every batch update. Comparing columns (ii) and (iii), we see a marginal decrease of τ between .020 and .057 when GEN is replaced by PBC+ ZS lexicons. Note that PBC+ ZS lexicons have much fewer training BPEs than GEN (e.g., 343 vs. 4298 in JA Wiki) – this may contribute to the decrease. These comparable results also reflect the correctness of PBC+ ZS lexicons. We also use α = 0.4 and λ = 0.01, the optimal hyperparameter values found on the trial set of EN Twitter, to induce generic DA lexicons for the other languages. This is the common setting JA CZ DE ES FR EN F1 .883 .914 .903 .963 .916 .939 ∩size 120 141 788 63 407 1145 |PBC+| 728 1793 2827 1766 2193 2563 Table 5: High consistency between PBC+ ZS lexicons and generic gold lexicons in JA and five languages used in Rothe et al. (2016). ∩size: intersection size. |PBC+|: ZS lexicon size. (i) (ii) (iii) (iv) ORTH REG GEN GEN PBC+/T PBC+/NT CZ web .580 .576 .529 .524 DE web .654 .654 .634 .634 ES web .563 .568 .524 .514 FR web .544 .540 .514 .474 EN Tw. .654 .629 .583 .583 EN Ne. .622 .582 .562 .557 JA Wiki n/a .628 .571 .558 Table 6: Correlation (τ) of generic DA lexicons with gold standard lexicons. ORTH results are from Rothe et al. (2016). The other columns use REG (§3.4). Training words for lexicon induction are from Rothe et al. (2016) (GEN) and from PBC+ ZS lexicons. 
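As a small illustration of this intrinsic evaluation, Kendall's tau can be computed on the vocabulary intersection, with the training BPEs excluded as described above; names are illustrative.

```python
# Sketch: rank correlation between an induced lexicon and a gold lexicon,
# restricted to their shared vocabulary.
from scipy.stats import kendalltau

def tau_on_intersection(induced, gold, exclude=frozenset()):
    shared = [w for w in induced if w in gold and w not in exclude]
    tau, _ = kendalltau([induced[w] for w in shared],
                        [gold[w] for w in shared])
    return tau, len(shared)
```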
Algorithm 1 Creating tweet representation 1: procedure REPTWEET(String: Tweet, Dict: Lexicon) 2: words = Tweet.split(“ ”) 3: vec = [0.0, 0.0] 4: for w ∈words do 5: val = Lexicon.get(w) 6: if val > 0 then 7: vec[0] = vec[0] + val 8: else if val < 0 then 9: vec[1] = vec[1] + val 10: else 11: continue 12: return vec Figure 3: Creating the representation of a tweet in Twitter sentiment classification using ML. in real applications – other languages most likely do not have validation sets available. Results are shown in column (iv). Compared with tuned results (PBC+/T), performance slightly drops as the hyperparameters are not tuned (PBC+/NT) for languages other than EN Twitter. Overall, the performance differences between GEN (based on generic gold standard lexicons) and PBC+ (based on PBC+ ZS lexicons) are small and τ correlations are high. The high quality of generic DA lexicons in these six diverse (morphologically rich and non-segmented) languages shows the universality of our approach again – no language-dependent preprocessing is needed. Extrinsic evaluation: Twitter sentiment classification. Based on the subset of frequent words only,5 we use the top 10% most positive and most negative words for this evaluation. We compare with the closest work – lexicons from Chen and Skiena (2014). Two classification models are used – wordcount-based model COUNT (Chen and Skiena, 5In all discussions, we consider words that are top 50% frequent in the embedding vocabulary as “frequent” words. 3513 sqi bul hrv deu hun pol por rus srp slk slv spa swe ¯x COUNT C&S .55 .57 .57 .61 .61 .55 .57 .54 .51 .55 .64 .54 .57 .57 Ours .50 .60 .60 .56 .64 .62 .53 .65 .50 .61 .57 .55 .63 .58 ML C&S .58 .59 .60 .62 .64 .56 .54 .56 .51 .57 .66 .53 .59 .58 Ours .54 .65 .65 .64 .66 .66 .54 .67 .51 .64 .59 .57 .64 .61 Table 7: Accuracy of Twitter sentiment classification in Albanian (sqi), Bulgarian (bul), Croatian (hrv), German (deu), Hungarian (hun), Polish (pol), Portuguese (por), Russian (rus), Serbian (srp), Slovak (slk), Slovenian (slv), Spanish (spa) and Swedish (swe). Baseline of all experiments: 0.5. 2014), and machine-learning-based model ML (Eskander and Rambow, 2015). COUNT labels a tweet with the sentiment that has more word occurrences in the tweet (positive in case of ties). COUNT does not require training and the results are from all tweets for each language. In ML, the vector representation of a tweet is created according to Figure 3. Our generic DA lexicons support computing real-valued vectors in this way. Chen and Skiena (2014)’s lexicons are discrete (1/-1); we use these discrete values when applying ML to their lexicons. Finally, for each language, an SVM is trained on the 2-dimensional vectors. Table 7 shows results. The baseline accuracy is 0.5 for all experiments as our dataset is balanced. Rows Ours and C&S show results using our and Chen and Skiena (2014)’s lexicons respectively. As shown, the two sets of lexicons give comparable results in COUNT. But ML generally performs better than COUNT, and our lexicons give better classification results – our real-valued representation of tweets is superior to the discrete one computed with Chen and Skiena (2014)’s lexicons. Overall, intrinsic and extrinsic evaluations on diverse languages demonstrate the high quality of our generic DA lexicons. 5.4 Evaluation of Universality We further conduct automatic and human evaluations on 95 diverse languages to show the universality of our approach. 
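Returning briefly to the ML classifier, Algorithm 1 above translates directly into runnable form; a sketch is given below (the SVM hyperparameters are tuned by cross-validation in the paper, while defaults are used here).

```python
# Sketch: runnable version of Algorithm 1 plus the per-language tweet SVM.
from sklearn.svm import SVC

def rep_tweet(tweet, lexicon):
    vec = [0.0, 0.0]
    for w in tweet.split(" "):
        val = lexicon.get(w, 0.0)
        if val > 0:
            vec[0] += val     # accumulated positive score
        elif val < 0:
            vec[1] += val     # accumulated negative score
    return vec

def train_tweet_classifier(tweets, labels, lexicon):
    X = [rep_tweet(t, lexicon) for t in tweets]
    return SVC().fit(X, labels)   # C / kernel left at defaults in this sketch
```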
We focus on intrinsic evaluation – verifying the correctness of PBC+ ZS lexicons with F1, and assessing the quality of generic DA lexicons using τ. The extrinsic evaluation, i.e., Twitter sentiment classification, is not feasible here due to missing human annotated Twitter datasets in low-resource languages. Automatic evaluation. Similar to Chen and Skiena (2014); Abdaoui et al. (2017), we use Google Translate (GT) for automatic evaluation – given a non-English language L, we translate its PBC+ ZS lexicon and generic DA lexicon into English. Translated English lexicons are then evaluated against the gold English lexicon WHM. GT supports 102 non-English languages. We omit ten languages that (i) are not covered by PBC+ (Corsican, Galician, Pashto, Yiddish); (ii) are covered in PBC+, but not in the alphabet used by GT (Malayalam); (iii) do not have public pretrained embeddings (Filipino, Hmong, Kyrgyz, Sesotho); or (iv) are very close to another language (we keep Croatian, but do not include Bosnian). We conduct separate experiments for Bokm˚al and Nynorsk, which are not distinguished by GT. Thus, we evaluate on 93 languages. When translating words to English, we discard entries where GT fails (i.e., output is identical to input). As GT requires the uploaded file to be small (⩽1MB), we do the evaluation on uniformly sampled 600 top 1% positive and negative words that are frequent. For ten languages (Chichewa, Hausa, Hawaiian, Igbo, Lao, Maori, Samoan, Shona, Xhosa, Zulu) that have very small embedding training corpora (<5MB Wikipedia pages and articles) and vocabulary sizes (e.g., 5000 for Hausa), we sample 200 words at 10%. Table 8 shows results. We see that PBC+ ZS lexicons show high consistency with gold labels across all 93 languages (F1 columns), including morphologically rich languages like Czech and Turkish, and non-segmented languages like Japanese and Khmer. The generic DA lexicons show high correlation with gold labels (τ columns) – with two exceptions. First, some languages have low-quality embeddings due to small embedding training corpora (e.g., Hawaiian: 998 KB; Igbo: 1014 KB) or because the training corpora apparently have low quality – e.g., the Luxembourgish embedding vocabulary contains a large amount of French and German words, suggesting that it was trained on mixed text and that the genuine Luxembourgish part is small. Second, GT does not perform well for some of the languages, again this is the case for Luxembourgish and also for Frisian. 
To give an example from Lux3514 Language F1 τ Language F1 τ Language F1 τ Language F1 τ Language F1 τ Afrikaans .909 .508 Esperanto .933 .361 Italian .924 .591 Mongolian .840 .222 Sundanese .912 .409 Albanian .916 .570 Estonian .889 .606 Japanese .901 .411 Myanmar .916 .534 Shona .885 .223 Amharic .870 .418 Finnish .932 .584 Javanese .904 .398 Nepali .862 .491 Swedish .936 .621 Arabic .905 .509 French .919 .600 Kannada .921 .447 Nynorsk .853 .434 Sinhala .880 .540 Armenian .848 .524 Frisian .885 .065 Kazakh .893 .421 Punjabi .927 .506 Tajik .876 .436 Azerbaijani .768 .401 Georgian .908 .540 Khmer .906 .474 Persian .903 .390 Tamil .911 .513 Basque .898 .477 German .898 .548 Korean .897 .481 Polish .923 .530 Telugu .934 .297 Belarusian .915 .597 Greek .912 .570 Kurdish .925 .258 Portuguese .913 .574 Thai .867 .357 Bengali .910 .389 Gujarati .896 .479 Latin .927 .336 Romanian .917 .644 Turkish .897 .607 Bokm˚al .927 .625 Haitian .891 .238 Lao .834 .222 Russian .910 .596 Ukrainian .909 .612 Bulgarian .911 .511 Hausa .905 .184 Latvian .919 .538 Scots .848 .385 Urdu .825 .258 Catalan .937 .453 Hawaiian .951 .078 Lithuanian .922 .491 Serbian .957 .559 Uzbek .900 .361 Cebuano .917 .390 Hebrew .833 .522 Luxemb’gish .834 .031 Sindhi .845 .169 Vietnamese .840 .403 Chichewa .872 .061 Hindi .878 .447 Macedonian .918 .425 Slovak .942 .515 Welsh .879 .560 Chinese .889 .486 Hungarian .910 .502 Malagasy .923 .417 Samoan .857 .116 Xhosa .892 .057 Croatian .926 .519 Igbo .791 .088 Malay .892 .494 Swahili .842 .403 Yoruba .873 .188 Czech .915 .545 Icelandic .947 .417 Maori .836 .015 Slovenian .957 .483 Zulu .889 .226 Danish .936 .359 Indonesian .898 .498 Maltese .938 .488 Somali .954 .335 Dutch .906 .553 Irish .902 .476 Marathi .942 .479 Spanish .943 .428 Table 8: Intrinsic evaluation of our PBC+ ZS and generic DA lexicons in 93 languages. We see high consistency (F1) between PBC+ ZS lexicons and gold labels across languages. The generic DA lexicons are strongly correlated (τ) with gold labels in most languages. Hiligaynon Tibetan τ size τ size 2-way .474 103 .542 64 3-way .357 188 .361 148 Table 9: Human evaluation of generic DA lexicons in Hiligaynon and Tibetan. 2-way: positive, negative. 3way: positive, neutral, negative. embourgish for both problems: “vergloust” and its first nearest neighbor “verglousten” are translated by GT as “glowed” and “forget about it”. We recommend to use the higher quality PBC+ ZS lexicon for these languages. Apart from above exceptions, both F1 and τ are reasonably high, evidencing that our universal approach is applicable to a broad range of typologically diverse languages. We do human evaluation for Hiligaynon and Tibetan, languages not supported by GT. There are no public pretrained embeddings for Hiligaynon. We train embeddings on a concatenation of texts from project Palito (Dita et al., 2009) and Jehovah’s Witnesses e-books (www. jw.org). From the generic DA Hiligaynon and Tibetan lexicons, we uniformly sample 199 from the top 10% positive and negative frequent BPEs. Two Tibetan scholars and three Hiligaynon speakers annotated these BPEs as positive, negative, neutral, unclear where the last category refers to cases where the intended word is not apparent from the BPE. We omit entries labeled as unclear and compute τ. Table 9 shows τ averaged over annotators. We see that our lexicons have consistent positive correlation with the human annotation in both languages. 6 Conclusion We proposed a universal approach for sentiment lexicon induction. 
By creating a multilingual BPE embedding space for 1500+ languages, we successfully transfer sentiment to each language without language-dependent preprocessing. We created 1593 ZS (zero-shot) sentiment lexicons and showed for a subset that they are highly consistent with gold lexicons. To address the fact that the small-size ZS lexicons are specific to PBC+’s domain, we conduct domain adaptation and induce large-size generic DA (domain-adapted) lexicons for 200 languages. Extensive intrinsic and extrinsic, automatic and human evaluations on 95 languages confirm the correctness and good quality of our lexicons. We make our code and lexicons freely available to the community. To induce generic lexicons, our approach requires generic embeddings, which are not always available for low-resource languages. Solving this problem is non-trivial as many low-resource languages have a limited amount of written text in electronic form (and in any form). In such cases, the PBC+ ZS lexicons can be utilized because they also have high quality. Acknowledgements. We thank Philipp Dufter and the anonymous reviewers for comments and suggestions; and Mary Ann C. Tan, Samyo Rode and Nikolai Solmsdorf for sentiment judgments for Hiligaynon and Tibetan. This work was funded by the European Research Council (ERC #740516). 3515 References Amine Abdaoui, J´erˆome Az´e, Sandra Bringay, and Pascal Poncelet. 2017. Feel: a french expanded emotion lexicon. Language Resources and Evaluation, 51(3):833–855. Silvio Amir, Ram´on Astudillo, Wang Ling, Bruno Martins, Mario J. Silva, and Isabel Trancoso. 2015. Inesc-id: A regression model for large scale twitter sentiment lexicon induction. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 613–618. Association for Computational Linguistics. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2017. Learning bilingual word embeddings with (almost) no bilingual data. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 451–462. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018. Generalizing and improving bilingual word embedding mappings with a multi-step framework of linear transformations. In Proceedings of the ThirtySecond AAAI Conference on Artificial Intelligence, pages 5012–5019. Mikel Artetxe and Holger Schwenk. 2018. Massively multilingual sentence embeddings for zeroshot cross-lingual transfer and beyond. arXiv preprint arXiv:1812.10464. Stefano Baccianella, Andrea Esuli, and Fabrizio Sebastiani. 2010. Sentiwordnet 3.0: An enhanced lexical resource for sentiment analysis and opinion mining. In Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC’10), Valletta, Malta. European Language Resources Association (ELRA). Gilbert Badaro, Ramy Baly, Hazem Hajj, Nizar Habash, and Wassim El-Hajj. 2014. A large scale arabic sentiment lexicon for arabic opinion mining. In Proceedings of the EMNLP 2014 Workshop on Arabic Natural Language Processing (ANLP), pages 165–173. Roy Bar-Haim, Lilach Edelstein, Charles Jochim, and Noam Slonim. 2017. Improving claim stance classification with lexical knowledge expansion and context utilization. In Proceedings of the 4th Workshop on Argument Mining, pages 32–38, Copenhagen, Denmark. Association for Computational Linguistics. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. 
Transactions of the Association for Computational Linguistics, 5:135–146. T Buckwalter. 2004. Buckwalter arabic morphological analyzer (bama) version 2.0. linguistic data consortium (ldc) catalogue number ldc2004l02. Technical report, ISBN1-58563-324-0. Yanqing Chen and Steven Skiena. 2014. Building sentiment lexicons for all major languages. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 383–389. Association for Computational Linguistics. Alexis Conneau, Guillaume Lample, Marc’Aurelio Ranzato, Ludovic Denoyer, and Herv´e J´egou. 2017. Word translation without parallel data. arXiv preprint arXiv:1710.04087. Shirley N. Dita, Rachel Edita O. Roxas, and Paul Inventado. 2009. Building online corpora of philippine languages. In Proceedings of the 23rd Pacific Asia Conference on Language, Information and Computation, Volume 2. Philipp Dufter, Mengjie Zhao, Martin Schmitt, Alexander Fraser, and Hinrich Sch¨utze. 2018. Embedding learning through multilingual concept induction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1520–1530. Association for Computational Linguistics. Ramy Eskander and Owen Rambow. 2015. Slsa: A sentiment lexicon for standard arabic. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2545–2550. Association for Computational Linguistics. Schubert Foo and Hui Li. 2004. Chinese word segmentation and its effect on information retrieval. Information processing & management, 40(1):161–190. Philip Gage. 1994. A new algorithm for data compression. C Users J., 12(2):23–38. Dehong Gao, Furu Wei, Wenjie Li, Xiaohua Liu, and Ming Zhou. 2015. Cross-lingual sentiment lexicon learning with bilingual word graph label propagation. Computational Linguistics, 41(1):21–40. Dan Gillick, Cliff Brunk, Oriol Vinyals, and Amarnag Subramanya. 2016. Multilingual language processing from bytes. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1296–1306. Association for Computational Linguistics. Dirk Goldhahn, Maciej Sumalvico, and Uwe Quasthoff. 2016. Corpus collection for underresourced languages with more than one million speakers. CCURL 2016 Collaboration and Computing for Under-Resourced Languages: Towards an Alliance for Digital Language Diversity, page 67. William L. Hamilton, Kevin Clark, Jure Leskovec, and Dan Jurafsky. 2016. Inducing domain-specific sentiment lexicons from unlabeled corpora. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 595–605. Association for Computational Linguistics. 3516 Benjamin Heinzerling and Michael Strube. 2018. BPEmb: Tokenization-free Pre-trained Subword Embeddings in 275 Languages. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA). Masahiko Higashiyama, Kentaro Inui, and Yuji Matsumoto. 2008. Learning sentiment of nouns from selectional preferences of verbs and adjectives. In Proceedings of the 14th Annual Meeting of the Association for Natural Language Processing, pages 584–587. Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 168–177. ACM. Clayton J. 
Hutto and Eric Gilbert. 2014. VADER: A parsimonious rule-based model for sentiment analysis of social media text. In Proceedings of the Eighth International Conference on Weblogs and Social Media, ICWSM 2014, Ann Arbor, Michigan, USA, June 1-4, 2014. Maurice G Kendall. 1938. A new measure of rank correlation. Biometrika, 30(1/2):81–93. Nozomi Kobayashi, Kentaro Inui, Yuji Matsumoto, Kenji Tateishi, and Toshikazu Fukushima. 2005. Collecting evaluative expressions for opinion extraction. In Proceedings of the First International Joint Conference on Natural Language Processing, IJCNLP’04, pages 596–605, Berlin, Heidelberg. Springer-Verlag. Taku Kudo. 2018. Subword regularization: Improving neural network translation models with multiple subword candidates. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 66–75. Association for Computational Linguistics. Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66–71, Brussels, Belgium. Association for Computational Linguistics. Guillaume Lample and Alexis Conneau. 2019. Crosslingual language model pretraining. CoRR, abs/1901.07291. Quoc Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In Proceedings of the 31st International Conference on International Conference on Machine Learning - Volume 32, ICML’14, pages II–1188–II–1196. JMLR.org. Omer Levy, Anders Søgaard, and Yoav Goldberg. 2017. A strong baseline for learning cross-lingual word embeddings from sentence alignments. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 765–774. Association for Computational Linguistics. Thomas Mayer and Michael Cysouw. 2014. Creating a massively parallel bible corpus. In Proceedings of the 9th International Conference on Language Resources and Evaluation. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119. Saif Mohammad, Svetlana Kiritchenko, and Xiaodan Zhu. 2013. Nrc-canada: Building the state-of-theart in sentiment analysis of tweets. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013), pages 321–327. Association for Computational Linguistics. Saif M Mohammad and Peter D Turney. 2013. Crowdsourcing a word–emotion association lexicon. Computational Intelligence, 29(3):436–465. Igor Mozetiˇc, Miha Grˇcar, and Jasmina Smailovi´c. 2016. Multilingual twitter sentiment classification: The role of human annotators. PloS one, 11(5):e0155036. Travis E Oliphant. 2006. A guide to NumPy, volume 1. Trelgol Publishing. Bo Pang, Lillian Lee, et al. 2008. Opinion mining and sentiment analysis. Foundations and Trends R⃝in Information Retrieval, 2(1–2):1–135. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global Vectors for Word Representation. Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543. Veronica Perez-Rosas, Carmen Banea, and Rada Mihalcea. 2012. Learning sentiment lexicons in spanish. 
In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC-2012). European Language Resources Association (ELRA). John Platt et al. 1999. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. Advances in large margin classifiers, 10(3):61–74. Sara Rosenthal, Preslav Nakov, Svetlana Kiritchenko, Saif Mohammad, Alan Ritter, and Veselin Stoyanov. 2015. Semeval-2015 task 10: Sentiment analysis in twitter. In Proceedings of the 9th International 3517 Workshop on Semantic Evaluation (SemEval 2015), pages 451–463. Association for Computational Linguistics. Sascha Rothe, Sebastian Ebert, and Hinrich Sch¨utze. 2016. Ultradense word embeddings by orthogonal transformation. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 767–777. Association for Computational Linguistics. Jacobo Rouces, Nina Tahmasebi, Lars Borin, and Stian Rødven Eide. 2018. SenSALDO: Creating a Sentiment Lexicon for Swedish. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA). Sebastian Ruder. 2017. A survey of cross-lingual embedding models. CoRR, abs/1706.04902. Hinrich Sch¨utze. 1993. Word space. In S. J. Hanson, J. D. Cowan, and C. L. Giles, editors, Advances in Neural Information Processing Systems 5, pages 895–902. Morgan-Kaufmann. Hinrich Sch¨utze. 2017. Nonsymbolic text representation. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 785–796. Association for Computational Linguistics. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715– 1725. Association for Computational Linguistics. Duyu Tang, Furu Wei, Nan Yang, Ming Zhou, Ting Liu, and Bing Qin. 2014. Learning sentimentspecific word embedding for twitter sentiment classification. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1555–1565. Association for Computational Linguistics. Kateˇrina Veselovsk´a and Ondˇrej Bojar. 2013. Czech SubLex 1.0. LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics ( ´UFAL), Faculty of Mathematics and Physics, Charles University. Ulli Waltinger. 2010. Germanpolarityclues: A lexical resource for german sentiment analysis. In Proceedings of the Seventh conference on International Language Resources and Evaluation (LREC’10). European Languages Resources Association (ELRA). Shih-Ming Wang and Lun-Wei Ku. 2016. Antusd: A large chinese sentiment dictionary. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), Paris, France. European Language Resources Association (ELRA). John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2016. Charagram: Embedding words and sentences via character n-grams. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1504–1515. Association for Computational Linguistics. Theresa Wilson, Janyce Wiebe, and Paul Hoffmann. 2005. Recognizing contextual polarity in phraselevel sentiment analysis. 
In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3518–3527 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3518 Tree Communication Models for Sentiment Analysis Yuan Zhang and Yue Zhang School of Engineering, Westlake University, China Institute of Advanced Technology, Westlake Institute for Advanced Study [email protected], [email protected] Abstract Tree-LSTMs have been used for tree-based sentiment analysis over Stanford Sentiment Treebank, which allows the sentiment signals over hierarchical phrase structures to be calculated simultaneously. However, traditional tree-LSTMs capture only the bottom-up dependencies between constituents. In this paper, we propose a tree communication model using graph convolutional neural network and graph recurrent neural network, which allows rich information exchange between phrases constituent tree. Experiments show that our model outperforms existing work on bidirectional tree-LSTMs in both accuracy and efficiency, providing more consistent predictions on phrase-level sentiments. 1 Introduction There has been increasing research interest investigating sentiment classification over hierarchical phrases (Tai et al., 2015; Zhu et al., 2015; Looks et al., 2017; Teng and Zhang, 2017). As shown in Figure 1, the goal is to predict the sentiment class over a sentence and each phrase in its constituent tree. There have been methods that classify each phrase independently (Li et al., 2015; McCann et al., 2017). However, sentiments over hierarchical phrases can have dependencies. For example, in Figure 1, both sentences have a phrase “an awesome day”, but the polarities of which are different according to their sentence level contexts. To better represent such sentiment dependencies, one can encode a constituency tree holistically using a neural encoder. To this end, treestructured LSTMs have been investigated as a dominant approach (Tai et al., 2015; Zhu et al., 2015; Gan and Gong, 2017; Yu et al., 2017; Liu et al., 2016). Such methods work by encoding hierarchical phrases bottom-up, so that sub constituents can be used as inputs for representing a 2 (1) 2 2 0 2 2 I had an awesome day winning the game 0 0 2 2 2 0 (2) I had an awesome day experiencing the tsunami * 0 0 0 0 0 0 0 0 * 0 -2 -1 -2 -2 -1 -1 -2 -2 -2 * * * * Figure 1: Examples of tree-based sentiment. constituent. However, they cannot pass information from a constituent node to its children, which can be necessary for cases similar to Figure 1. In this example, sentence level information from toplevel nodes is useful for disambiguating “an awesome day”. Bi-directional tree LSTMs provide a solution, using a separate top-down LSTM to augment a tree-LSTM (Teng and Zhang, 2017). This method has achieved highly competitive accuracies, at the cost of doubling the runtime. Intuitively, information exchange between tree nodes can happen beyond bottom-up and topdown directions. For example, direct communication between sibling nodes, such as (“an awesome day”, “winning the game”) and (“an awesome day”, “experiencing the tsunami”) can also bring benefits to tree representation. Recent advances of graph neural networks, such as graph convolutional neural network (GCN) (Kipf and Welling, 2016; Marcheggiani and Titov, 2017) and graph recurrent neural network (GRN) (Beck et al., 2018; Zhang et al., 2018b; Song et al., 2018) offer rich node communication patterns over graphs. 
For relation extraction, for example, GCNs have 3519 been shown superior to tree LSTMs for encoding a dependency tree (Zhang et al., 2018c) We investigate both GCNs and GRNs as tree communication models for tree sentiment classification. In particular, initialized with a vanilla tree LSTM representation, each node repeatedly exchanges information with its neighbours using graph neural networks. Such multi-pass information exchange can allow each node to be more informed about its sentence-level context through rich communication patterns. In addition, the number of time steps does not scale with the height of the tree. To allow better interaction, we further propose a novel time-wise attention mechanism over GRN, which summarizes the representation after each communication step. Experiments on Stanford Sentiment Treebank (SST; Socher et al. 2013) show that our model outperforms standard bottom-up tree-LSTM (Zhu et al., 2015; Looks et al., 2017) and also recent work on bidirectional tree-LSTM (Teng and Zhang, 2017). In addition, our model allows a more holistic prediction of phase-level sentiments over the tree with a high degree of node sentiment consistency. To our knowledge, we are the first to investigate graph NNs for tree sentiment classification, and the first to discuss phrase level sentiment consistency over a constituent tree for SST. We release our code and models at https://github.com/fred2008/TCMSA. 2 Related Work Bi-directional Tree-LSTM Paulus et al. (2014) capture bidirectional information over a binary tree by propagating global belief down from the tree root to leaf nodes. Miwa and Bansal (2016) adopt a bidirectional dependency treeLSTM model by introducing a top-down LSTM path. Teng and Zhang (2017) propose a first bidirectional tree-LSTM for constituent structures, by building a top-down tree-LSTM with estimations of head lexicons. Compared with their work, we achieve information interaction using an asymptotically more efficient algorithm, which performs node communication simultaneously across a whole tree. Graph Neural Network Scarselli et al. (2009) propose graph neural network (GNN) for encoding an arbitrary graph structure. Kipf and Welling (2016) use graph convolutional network to learn node representation for graph structure. Marcheggiani and Titov (2017) and Bastings et al. (2017) extend the use of graph convolutional network (GCN) to NLP tasks. In particular, they use GCN to learn dependency-syntactic word representation for semantic role labeling and machine translation, respectively. Zhang et al. (2018b) use a graph recurrent network (GRN) to model sentences. Beck et al. (2018) and Song et al. (2018) use a graph recurrent network for learning representation of abstract meaning representation (AMR) graphs. Our work is similar in utilizing graph neural network for NLP. Compared with their work, we apply GNN to constituent trees. In addition, we propose a novel time-wise attention mechanism on GRN to combine recurrent time steps dynamically. 3 Baseline We take standard bottom-up tree-LSTMs as our baseline. Tree-LSTM extends sequence-LSTM by utilizing 2 previous states for modeling a left child node and a right child node, respectively, in a recurrent state transition process. Formally, a treeLSTM calculates a cell state through an input gate, an output gate and two forget gates at each time step. 
In particular, at time step t, the input gate it and the output gate ot are calculated respectively as follows: it = σ W L hihL t−1 + W R hihR t−1 + W L cicL t−1 + W R ci cR t−1 + bi  , ot = σ W L hohL t−1 + W R hohR t−1 + W L cocL t−1 + W R cocR t−1 + bo  , where W L hi, W R hi, W L ci, W R ci , bi, W L ho, W R ho, W L co, W R co and bo are parameters of the input gate and the output gate, respectively. The forget gates of the left node fL t and the right node fR t are calculated respectively as: fL t = σ W L hfLhL t−1 + W R hfLhR t−1 + W L cfLcL t−1 + W R cfLcR t−1 + bfL  , fR t = σ W L hfRhL t−1 + W R hfRhR t−1 + W L cfRcL t−1 + W R cfRcR t−1 + bfR  , where W L hfL, W R hfL, W L cfL, W R cfL, bfL, W L hfR, W R hfR, W L cfR, W R cfR and bfR are parameters. The cell candidate ˜Ct is dependent on both cL t−1 and cR t−1: ˜Ct = tanh W L hChL t−1 + W R hChR t−1 + bC  3520 Word Node Interaction w0 w1 w2 w3 Constinuent Inference Figure 2: Tree communication model. where WhC, W R hC and bC are model parameters. Based on the two previous cell states cL t−1 and cR t−1, the cell state of the current node ct is calculated as: ct = fl t ⊗cL t−1 + fR t ⊗cR t−1 + it ⊗˜Ct, where fl t is the forget gate of the left child node, fR t is the forget gate of the right child node, ˜Ct is the cell candidate. Finally, the hidden state ht of the current node is calculated based on the current cell ct and the output gate ot: ht = ot ⊗tanh(ct) Limitation Tree-LSTM models capture only bottom-up node dependencies. Specifically, for a node j, the hidden representation htree j is dependent on the descendant nodes only. Formally, htree j = f(hdj0, hdj1, · · · , hdjk), where dj is the set of descendant nodes of node j. Bi-directional Solution A bidirectional treeLSTM (Bi-tree-LSTM) takes a bottom-up treeLSTM as a first step, performing a top-down tree communication process. Teng and Zhang (2017) is one example. 4 Tree Communication Models Our tree communication models (TCM) take a trained tree LSTM as an initial state, performing information exchange using graph neural network (GNN). Thus hj is dependent on all related neighborhood nodes rather than only descendant nodes: hj = f(hrj0, hrj1, · · · , hrjk), where rj is the set of all relevant nodes of node j. Such node can be the full tree with sufficient communication. Time-wise Attention Mechanism Figure 3: Recurrent tree communication model. In particular, given a constituent tree, for each constituent node j, the initial state h′ j is obtained using a tree-LSTM: h′ j = treeLSTM(h′ left(j), c′ left(j), h′ right(j), c′ right(j)), where h′ j is the hidden state of the node j, c′ j is the cell state of node j, left(j) denote the left child of node j, right(j) denotes the right child of node j. As shown in Figure 2, a TCM performs information exchange between a constituent node j with its neighbor nodes in three channels: • A self-to-self channel transfers information from node j to itself. The input for the channel is represented as xself j = h′ j, where h′ j is the initial state of tree communication model. • A bottom-up channel transfers information from lower level nodes to upper-level nodes. The inputs for the channel are represented as xleft j = h′ left(j), xright j = h′ right(j), where left(j) and right(j) denote the left child and the right child of node j, respectively. xup j is the sum of inputs from bottom up: xup j = xleft j +xright j . • A top-down channel transfers information from parent nodes to child nodes. 
The input for the channel is represented as: xdown j = h′ prt(j), where prt(j) denotes the parent node of node j. When tree communications are executed repeatedly, each node receives information from an increasingly larger context. We explore a convolutional tree communication model (CTCM) and a recurrent tree communication model (RTCM), which are based on GCN (Marcheggiani and Titov, 2017) and GRN (Song et al., 2018), respectively. Both models allow node interactions in a tree to be performed in parallel, and thus are 3521 computationally efficient. The time complexity to achieve additional interaction of TCMs are O(1), in contrast to O(n) by top-down tree-LSTM. 4.1 Convolutional Tree Communication Model We apply the strategy of Marcheggiani and Titov (2017), where multiple convolutional layers can be used for information combination. In particular, for the k-th layer, transformed inputs are obtained by linear transformation for each channel: xk,self j = W k,selfhk−1,self j + bk,self, xk,d j = W k,uphk−1,d j + bk,up, d ∈{left, right}, xk,down j = W k,downhk−1,down j + bk,down, where W k,e and be (e ∈ {self, up, down}) are model parameters, and hk−1,e j (e ∈ {self, left, right, down}) is the hidden state of last layer for node e(j). The initial h−1,e j are the inputs of three channels xe j defined earlier. Following Marcheggiani and Titov (2017), for each edge type e ∈{self, left, right, down}, we apply edge-wise gate to all inputs: gk,e j = σ(W k,e g hk,e j + bk,e g ), where W k,e g and bk,e g are model parameters. The final representation of node j is: hk j = f( X e xk,e j ⊗gk,e j ). 4.2 Recurrent Tree Communication Model We take the stategy of Song et al. (2018). The structure of RTCM shows in Figure 3. For each recurrent step t, the hidden states from the last recurrent step are taken to calculate the cell state of the current state. In particular, for node j, the hidden state of the previous step can be divided into the last hidden state hself j from self-to-self channel, the last hidden state hup t−1,j from bottom-up channel and the last hidden state hdown t−1,j from the top-down channel: hself t−1,j = ht−1,j, hup t−1,j = ht−1,left(j) + ht−1,right(j), hdown t−1,j = ht−1,prt(j). We calculate gate and state values based on the inputs and last hidden states from the three information channels. The input gate ij t and the forget gate fj t are defined as: it j = σ  W self i xself j + W up i xup j + W down i xdown j + U self i hself t−1,j + U up i hup t−1,j + U down i hdown t−1,j + bi  , ft j = σ  W self f xself j + W up f xup j + W down f xdown j + U self f hself t−1,j + U up f hup t−1,j + U down f hdown t−1,j + bf  , where W self i , W up i , W down i , U self i , U up i , U down i , bi, W self f , W up f , W down f , U self f , U up f , U down f and bf are parameters of input and forget gate. The cell candidate ˜Cj t is defined as: ˜Ct j = σ  W self C xself j + W up C xup j + W down C xdown j + U self C hself t−1,j + U up C hup t−1,j +U down C hdown t−1,j +bC  , where W self C , W up C , W down C , U self C , U up C , U down C and bC are parameters of cell candidate. The current cell state is calculated as: ct j = ft j ⊗ct−1 j + it j ⊗˜Ct j, The output gate oj t is defined as: ot j = σ  W self o xself j + W up o xup j + W down o xdown j + U self o hself t−1,j + U up o hup t−1,j + U down o hdown t−1,j + bo  , where W self o , W up o , W down o , U self o , U up o , U down o and bo are model parameters. 
The final hidden ht j is calculated through the current cell state ct j and the output gate ot j: ht j = ot j ⊗tanh(ct j). 4.2.1 Time-wise attention Both GRN and GCN calculate a sequence of incrementally more abstract representations c1 j, c2 j, ...ct j for each node cj. We further introduce a novel attention scheme to GRN. Intuitively, each recurrent step in RTCM learns a different level of abstraction. For a constituent node higher in the tree or on the leaf, more recurrent steps may be needed to learn the interaction between nodes. Accordingly, we use an adaptive recurrence mechanism to learn a dynamic node representation through attention structure (Bahdanau et al., 2014). Our method first encodes a recurrent-stepsensitive hidden state with positional embedding: hj,depth t = hj t + ep t , 3522 where hj,depth t is the recurrent-step-sensitive hidden state for node j on t-th step, ep is positional encoding of the recurrence steps. Inspired by Vaswani et al. (2017), a static representation is used for the positional encoding ep(t), which does not require training: ep t,2k = sin(t/100002k/demb), ep t,2k+1 = cos(t/100002k/demb), t is the index of recurrent steps, et,m is the m-th dimension of positional embedding, and demb is the dimension of embedding. We learn the weight wt for the t-th recurrent step by the relationship between hj,depth T and hj,depth t : w′ j,t = hj,depth T · hj,depth t , wj,t = exp(w′ j,i) PT−1 t=0 exp(w′ j,t) . The final state can be represented as a weighted sum of the hidden states obtained after different recurrent steps: hj = T−1 X t=0 wthj t. 5 Decoding and Training Following Looks et al. (2017) and Teng and Zhang (2017), we perform softmax classification on each node according to the last hidden state: o = softmax(Mh + b) where M and b are model parameters. For training, negative log-likelihood loss is computed over each o locally, and accumulated over the tree. 6 Experiments We test the effectiveness of TCM by comparing its performance with a standard tree-LSTM (Zhu et al., 2015) as well as a state-of-the-art bidirectional tree-LSTM (Teng and Zhang, 2017). A series of analysis is conducted for measuring the holistic representation of sentiment in a tree via phrase-level sentiments consistency. 6.1 Data We use the Stanford Sentiment Treebank (SST; Socher et al. 2013), which is a dataset of movie Corpus SST-5 SST-2 Classes 5 2 Sentences 11,855 9,613 Phrases 442,629 137,988 Tokens 227,242 185,621 Table 1: Data statistics. reviews originally from Pang and Lee (2005) annotated at both the clause level and the sentence level. Following Zhu et al. (2015) and Teng and Zhang (2017), we perform both fine-grained sentiment classification and binary classification. For the former, the dataset was annotated for 5 levels of sentiment: strong negative, negative, neutral, positive, and strong positive. For the latter, the data was labeled with positive sentiment and negative sentiment. We adopt a standard dataset split following Tai et al. (2015); Teng and Zhang (2017). Table 1 lists the data statistics. 6.2 Experimental Settings Hyper-parameters We initialize word embeddings using GloVe (Pennington et al., 2014) 300dimensional embeddings. Embeddings are finetuned during training. The size of LSTM hidden states are set to 300. We thus fix the number to 9. Training In order to obtain a good representation for an initial constituent state, we first train an independent bottom-up tree-LSTM, over which we train our tree communication models. 
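To make this second training stage concrete, the sketch below shows one recurrent communication step (Section 4.2) followed by the time-wise attention pooling (Section 4.2.1), operating in parallel over all nodes whose states were initialized by the pretrained bottom-up tree-LSTM. It is a sketch under our own assumptions (node states stored as dense tensors indexed by node id, precomputed left/right/parent index vectors, a single fused gate projection, an even state dimension), not the authors' implementation.

import math
import torch
import torch.nn as nn

class RTCMStep(nn.Module):
    """One graph-recurrent communication step over all tree nodes in parallel.
    left/right/parent are LongTensors of node indices (reuse the node's own
    index where a child or parent does not exist, so the term reduces to h)."""

    def __init__(self, d):
        super().__init__()
        self.gates = nn.Linear(6 * d, 4 * d)   # channel inputs + previous states -> i, f, o, C~

    def forward(self, x_self, x_up, x_down, h, c, left, right, parent):
        h_up, h_down = h[left] + h[right], h[parent]
        z = torch.cat([x_self, x_up, x_down, h, h_up, h_down], dim=-1)
        i, f, o, c_hat = self.gates(z).chunk(4, dim=-1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        c_new = f * c + i * torch.tanh(c_hat)
        return o * torch.tanh(c_new), c_new

def timewise_attention(h_steps):
    """h_steps: (T, n_nodes, d) hidden states after each communication step.
    Adds sinusoidal step encodings, scores every step against the last one,
    and returns an attention-weighted summary per node (even d assumed)."""
    T, n, d = h_steps.shape
    pos = torch.arange(T).unsqueeze(1).float()
    div = torch.exp(torch.arange(0, d, 2).float() * (-math.log(10000.0) / d))
    e = torch.zeros(T, d)
    e[:, 0::2], e[:, 1::2] = torch.sin(pos * div), torch.cos(pos * div)
    h_dep = h_steps + e.unsqueeze(1)                   # recurrent-step-sensitive states
    scores = (h_dep * h_dep[-1:]).sum(-1)              # dot product with the last step
    w = torch.softmax(scores, dim=0).unsqueeze(-1)     # one weight per step and node
    return (w * h_steps).sum(0)                        # (n_nodes, d) final states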
To avoid over-fitting, we adopt dropout on the embedding layer, with a rate of 0.5. Training is done on minibatches through Adagrad (Duchi et al., 2011) with a learning rate of 0.05. We adopt gradient clipping with a threshold of 1.0. The L2 regularization parameter is set to 0.001. 6.3 Development Experiments Hyper-parameters We investigate the effect of recurrent steps of RTCM as shown in Block A of Table 2. As the number of steps increases from 1, the accuracy increases, showing the effectiveness of tree node communication. A recurrent step of 9 gives the best accuracies, and a larger number of steps does not give further improvements. This is consistent with observations of Song et al. (2018), which shows that sufficient context information can be collected over a small number of iterations. The effectiveness of TCM Block B in Table 2 3523 Block Model SST-5 SST-2 A 3 Step RTCM 83.1 92.3 6 Step RTCM 83.2 92.7 9 Step RTCM 83.4 92.9 18 Step RTCM 83.2 92.8 B Tree-LSTM 82.9 92.4 CTCM 83.3 92.8 RTCM 83.4 92.9 RTCM+attention 83.5 93.3 Table 2: Phrase level performances on the dev set. shows the performance of different models. TreeLSTMs with different TCMs outperform the baseline tree-LSTM on both datasets. In addition, the time-wise attention mechanism in Section 4.2.1 improves performance on both SST-5 and SST-2. In the remaining experiments, we use RTCM with time wise-attention. 6.4 Final Results Table 3 shows the overall performances for sentiment classification on both SST-5 and SST-2. We report accuracies on both the sentence level and the phrase level. Compared with previous methods based on constituent tree-LSTM, our model improves the preformance on different datasets and settings. In particular, it outperforms BiConLSTM (Teng and Zhang, 2017), which use bidirectional tree-LSTM. This demostrates the advantage of graph neural networks compared to a top-down LSTM for tree communication. Our model gives the state-of-the-art accuracies on phrase-level settings. Note that we do not leverage character representation or external resources such as sentiment lexicons and large-scale corpuses. There has also been work using large-scale external datasets to improve performance. McCann et al. (2017) pretrain their model on large parallel bilingual datasets and exploit character ngram features. They report an accuracy of 53.7 on sentence-level SST-5 and an accuracy of 90.3 on sentence-level SST-2, which are lower than our model. Peters et al. (2018) pretrain a language model with character convolutions on a large-scale corpus and report an accuracy of 54.7 on sentencelevel SST-5, which is slightly higher than our model. Large-scale pretraining is orthogonal to our method. For a fair comparison, we do not list their results on Table 3. We further analyze the performance with reModel SST-5 SST-2 R P R P RNTN (S13) 45.7 80.7 85.4 87.6 BiLSTM (L15) 49.8 83.3 86.7 ConTree (LZ15) 49.9 88.0 ConTree (Z15) 50.1 ConTree (L15) 50.4 83.4 86.7 ConTree (T15) 51.0 88.0 Disan (S18) 51.7 RL LD/HS-LSTM (Z18) 50.0 87.8 NTI-SLSTM (MY17) 53.1 89.3 ConTree(Fold) (L17) 52.3 89.4 BiConTree (TZ17) 53.5 83.5 90.3 92.8 RTC + attention 54.3 83.6 90.3 93.4 Table 3: Final results (R-Root, P-Phrase). S13 – Socher et al. (2013); L15 – Li et al. (2015); LZ15 – Le and Zuidema (2015); Z15 – Zhu et al. (2015); T15 – Tai et al. (2015); S18 – Shen et al. (2018); Z18 – Zhang et al. (2018a); MY17 – Munkhdalai and Yu (2017); L17 – Looks et al. 
(2017); TZ17 – Teng and Zhang (2017) 5 10 15 20 25 30 Length 82.0 82.5 83.0 83.5 84.0 84.5 85.0 85.5 SST-5 Accurancy(%) SST-5 Tree-LSTM SST-5 Tree Communication SST-2 Tree-LSTM SST-2 Tree Communication 91.5 92.0 92.5 93.0 93.5 94.0 94.5 95.0 SST-2 Accurancy(%) Figure 4: Sentiment classification accuracies against the sentence length. The accuracy for each length l is calculated on the test set sentences length in the bin [l, l + 5]. spect to different sentence lengths. Figure 4 shows the results. On both datasets, the performance of tree-LSTM on sentences of lengths less than 10 (l = 5 in the figure) is much better than that of longer sentences. There is a tendency of decreasing accuracies as the sentence length increases. As the length of sentences increases, there are longerrange dependencies along the depth of tree structure, which is more difficult to model than short sentences. It can be seen that the improvement of TCM over tree-LSTM model is larger with increasing sentence length. This shows that longer sentences can benefit more from rich tree communication. 3524 40 50 60 70 80 90 100 Tree-LSTM 40 50 60 70 80 90 100 Tree Communication (a) SST-5 40 50 60 70 80 90 100 Tree-LSTM 40 50 60 70 80 90 100 Tree Communication (b) SST-2 Figure 5: Sentence-level phrase accuracy (SPAcc) scatter plot. Each dot represents a sentence in the test dataset. Its x-coordinate and y-coordinate are SPAcc for predicted phrase label sequence of the baseline model and TCM respectively. The blue line is a linear regression line of all dots. Dataset α Baseline Our Model Diff. SST-5 1.0 3.5 3.7 +0.2 0.9 18.9 21.2 +2.3 0.8 67.6 71.4 +3.8 SST-2 1.0 56.0 61.4 +5.4 0.9 18.9 21.2 +4.6 0.8 67.6 71.4 +2.0 Table 4: Rates of holistically-labeled sentences with sentence-level phrase accuracy SPAcc ⩾α. 6.5 Disscusion Sentence-level performance To further compare performances of holistic phrase sentiment classification on the sentence level, we measure the accuracy on the sentence level. We define sentencelevel phrase accuracy (SPAcc) of a sentence as: SPAcc = ncorrect/ntotal, where ntotal is the total number of phrases in the sentence, and ncorrect is the number of correct sentiment predictions in the sentence. For each sentence of test dataset, taking SPAcc of the corresponding label sequence resulting from the baseline model as the x-coordinate and SPAcc of the corresponding label sequence resulting from TCM as the y-coordinate, we draw a scatter plot with a regression line as shown in Figure 5. The regression line is inclined towards the top-left, indicating that TCM can improve the performance on holistic phrase classifications over a whole sentence. If the SPAcc of a sentence is high, the sentence is more holistically-labeled. Table 4 shows the statistics on the rate of holistically-labeled sentences with SPAcc ⩾α (SPAcc-α). The rate of holistically-labeled sentences for TCM is higher SST-5 SST-2 Dataset 10 0 10 20 30 40 50 60 Phrase error deviation (PEDev) Tree-LSTM Tree Communication Figure 6: Deviation of node errors for each tree. Dataset Metric Baseline TCM Diff. SST-5 mean 36.9 35.7 -1.2 median 38.1 37.0 -1.1 SST-2 mean 31.4 21.8 -9.6 median 34.3 25.8 -8.3 Table 5: Deviation statistics. Values in units of ×10−2 than that for tree-LSTM on both SST-5 and SST2 for different values of α. It demonstrates that TCM labels the constituent nodes of the whole tree better than the tree-LSTM model, thanks to more information exchange between phrases in a tree. 
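For reference, the sentence-level phrase accuracy used above comes down to a few lines; the functions below are our own restatement of the definition (SPAcc = ncorrect/ntotal per sentence, and the SPAcc-alpha rate of Table 4 over a test set), with hypothetical names and toy inputs.

def spacc(pred_labels, gold_labels):
    """Sentence-level phrase accuracy: fraction of phrases in one sentence
    whose predicted sentiment matches the gold sentiment."""
    assert len(pred_labels) == len(gold_labels) and gold_labels
    return sum(p == g for p, g in zip(pred_labels, gold_labels)) / len(gold_labels)

def holistic_rate(sentences, alpha=0.8):
    """Rate of holistically-labeled sentences, i.e. sentences with SPAcc >= alpha."""
    scores = [spacc(p, g) for p, g in sentences]
    return sum(s >= alpha for s in scores) / len(scores)

# two toy sentences, each a (predicted, gold) pair of phrase-label lists
print(holistic_rate([([1, 0, 1], [1, 0, 0]), ([2, 2], [2, 2])], alpha=0.8))  # -> 0.5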
Consistency between nodes To compare the sentiment classification consistency of phrases in each sentence, we define a metric, phrase error deviation (PEDev), to measure the deviation among the error of labels for one sentence: PEDev(ˆy, y) = r 1 N XN−1 i=0  d( ˆyi, yi) −¯d 2, where d( ˆyi, yi) is the Hamming distance between the i-th predicted label and the i-th ground truth label. ¯d is the mean value of d( ˆyi, yi). Since d( ˆyi, yi) ∈[0, 1], PEDev(ˆy, y) ∈[0, 0.5]. For an input sentence, if all the predicted labels are the same as the ground truth, or all the predicted labels are different from the ground truth, PEDev(ˆy, y) = 0, which means that the sentence is labeled with the maximum consistency. On the contrary, if the predicted labels of some phrases are the same as ground truth while others are not, PEDev(ˆy, y) is high. Table 5 lists the statistics on PEDev(ˆy, y) of the baseline model and our model for all the test sentences on SST-5 and SST-2. The mean and median of PEDev(ˆy, y) of TCM are much less than those of the baseline tree-LSTM model. In addition, as Figure 6 shows, compared with the PEDev(ˆy, y) distribution of the tree-LSTM model, the distribution of TCM is relatively less in value. It demonstrates that TCM 3525 0 0 (2) +2 0 One of the greatest romantic comedies +2 +1 0 0 0 0 0 0 This is art paying homage to art 0 0 1 + 1 + 0 1 + 1 + 0 0 +1 0 0 Though everything might be literate and smart , it never took off and always seemed static +1 0 0 1 1 0 -1 0 0 0 0 0 0 0 +1 0 +1 -1 +2 +2 -1 -1 (3) (1) M B T G Diff. labels Same labels M - Tree-LSTM Model B - Bi-tree-LSTM T - TCM; G - Gold A fascinating and fun film . 0 +2 +2 0 0 +2 +2 +2 0 +1 (4) 0 0 1 + 2 + 2 + Black grid – gold label k Black cell shows incorrect label 1 + 0 0 1 + 0 0 1 + 0 0 1 2 2 1 2 2 1 2 2 1 0 0 0 1 1 0 1 1 + 1 + 2 + 1 + 1 + 2 + 1 + 2 + 2 + +1 2 + 2 + +1 + Figure 7: Sentiment classification samples. -2 -1 0 1 2 Predicted Label -2 -1 0 1 2 Ground Truth 767 1024 180 34 3 385 6204 2414 244 8 72 2260523731784 59 6 341 2936 7151 564 0 64 206 1609 1912 -2 -1 0 1 2 Predicted Label -2 -1 0 1 2 Ground Truth 844 974 148 37 5 477 6378 2104 261 35 85 2252522191904 88 9 302 2468 7359 860 4 37 143 1381 2226 0 2000 4000 6000 8000 Figure 8: Confusion matrix on SST-5 phrase-level test dataset for tree-LSTM (left) and TCM (right). 40 50 60 70 80 90 100 Bi-Tree-LSTM 40 50 60 70 80 90 100 Tree Communication (a) 0 10 20 30 40 50 60 Phrase error deviation (PEDev) Bi-tree-LSTM Tree Communication (b) Figure 9: Sentence-level phrase accuracy (a) and deviation of node errors (b) comparison on SST-5 between bi-tree-LSTM and TCM. improves the consistency of phrase classification for each sentence. Confusion matrix Figure 8 shows the confusion matrix on the SST-5 phrase-level test set for tree-LSTM (left) and TCM (right). Compared with tree-LSTM, the accuracies of most sentiment labels by TCM increase (the accuracy of the neutral label slightly decreases by 0.3%), indicating that TCM is strong in differentiating fine-grained sentiments in global and local contexts. Metrics BTL TCM Diff. SPAcc, α = 1.0 3.2 3.7 +0.5 SPAcc, α = 0.9 20.0 21.2 +1.2 SPAcc, α = 0.8 70.7 71.4 +0.7 PEDev-mean 36.4 35.7 -0.7 PEDev-median 37.6 37.0 -0.6 Table 6: Sentence-level phrase accuracy (SPAcc) and phrase error deviation (PEDev) comparison on SST-5 between bi-tree-LSTM and TCM. 
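The phrase error deviation defined above is simply the population standard deviation of the per-node 0/1 error indicators, so a NumPy restatement (ours, with hypothetical names) is short:

import numpy as np

def pedev(pred_labels, gold_labels):
    """Phrase error deviation for one sentence: standard deviation of the 0/1
    error indicators d(y_hat_i, y_i); 0 means maximally consistent predictions."""
    d = np.array([int(p != g) for p, g in zip(pred_labels, gold_labels)], dtype=float)
    return float(d.std())   # population std, matching the 1/N in the definition

print(pedev([1, 0, 1], [1, 0, 0]))   # some phrases right, some wrong -> high PEDev
print(pedev([2, 2, 2], [0, 0, 0]))   # all wrong but consistent -> 0.0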
6.6 Comparison with Bi-tree-LSTM Table 6 shows the sentence-level phrase accuracy (SPAcc) and phrase error deviation (PEDev) comparison on SST-5 between bi-tree-LSTM and TCM, respectively. TCM outperforms bi-treeLSTM on all the metrics, which demonstrates that TCM gives more consistent predictions of sentiments over different phrases in a tree, compared to top-down communication. This shows the benefit of rich node communication. Figure 9 shows a scatter chart and a deviation chart comparision between the two models, in the same format as Figure 5 and Figure 6, respectively. As shown in Figure 9a, the errors of TCM and bitree-LSTM are scattered, which shows that different communication patterns influence sentiment prediction. The final observation is consistent with Table 6. 6.7 Case Study Figure 7 shows four samples on SST-5. In the first sentence, the phrase “seemed static” itself bares the neutral sentiment. However, it has a negative sentiment in the context. The tree-LSTM model captures the sentiment of the phrase bottom-up, therefore giving the neutral sentiment. In con3526 trast, TCM considers larger contexts by repeated node interaction. The phrase “seemed static” receives information from the constituents “never took off” and “Though everything might be literate and smart” through their common ancestor nodes, leading to the correct result. Although bitree-LSTM predicts these sentiments of the phrase “seemed static” and the whole sentence correctly, it gives more incorrect results on the phrase level. The other sentences in Figure 7 show similar trends. From the samples we can find that TCM provides more consistent predictions on phraselevel sentiments thanks to its better understanding of different contexts. 7 Conclusion We investigated two tree communication models for sentiment analysis, leveraging recent advances in graph neural networks for information exchange between nodes in a baseline tree-LSTM model. Both GCNs and GRNs are explored and compared, with GRNs showing better accuracies. We additionally propose a novel time-wise attention mechanism to further improve GRNs. Results on standard benchmark show that graph NNs give better results compared to bi-directional tree LSTMs, providing more consistent predictions over phrases in one sentence. To our knowledge, we are the first to leverage graph neural network structures for enhancing tree-LSTMs, and the first to discuss tree-level sentiment consistency using a set of novel metrics. 8 Acknowledgments The corresponding author is Yue Zhang. We thank the anonymous reviewers for their valuable comments and suggestions. We thank Zhiyang Teng and Linfeng Song for their work and discussion. This work is supported by a grant from Rxhui Inc1. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. In ICLR. Joost Bastings, Ivan Titov, Wilker Aziz, Diego Marcheggiani, and Khalil Sima’an. 2017. Graph convolutional encoders for syntax-aware neural machine translation. In EMNLP. 1https://rxhui.com Daniel Beck, Gholamreza Haffari, and Trevor Cohn. 2018. Graph-to-sequence learning using gated graph neural networks. In ACL. John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research. Ling Gan and Houyu Gong. 2017. Text sentiment analysis based on fusion of structural information and serialization information. In IJCNLP. Thomas N Kipf and Max Welling. 2016. 
Semisupervised classification with graph convolutional networks. In ICLR. Phong Le and Willem Zuidema. 2015. Compositional distributional semantics with long short term memory. In *SEM. Jiwei Li, Minh-Thang Luong, Dan Jurafsky, and Eudard Hovy. 2015. When are tree structures necessary for deep learning of representations? In EMNLP. Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2016. Deep multi-task learning with shared memory. In EMNLP. Moshe Looks, Marcello Herreshoff, DeLesley Hutchins, and Peter Norvig. 2017. Deep learning with dynamic computation graphs. In ICLR. Diego Marcheggiani and Ivan Titov. 2017. Encoding sentences with graph convolutional networks for semantic role labeling. In EMNLP. Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. 2017. Learned in translation: Contextualized word vectors. In NIPS. Makoto Miwa and Mohit Bansal. 2016. End-to-end relation extraction using lstms on sequences and tree structures. In ACL. Tsendsuren Munkhdalai and Hong Yu. 2017. Neural tree indexers for text understanding. In ACL. Bo Pang and Lillian Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In ACL. Romain Paulus, Richard Socher, and Christopher D Manning. 2014. Global belief recursive neural networks. In NIPS. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In EMNLP. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In NAACL. 3527 Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. 2009. The graph neural network model. IEEE Transactions on Neural Networks. Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Shirui Pan, and Chengqi Zhang. 2018. Disan: Directional self-attention network for rnn/cnn-free language understanding. In AAAI. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In EMNLP. Linfeng Song, Yue Zhang, Zhiguo Wang, and Daniel Gildea. 2018. A graph-to-sequence model for amrto-text generation. In ACL. Kai Sheng Tai, Richard Socher, and Christopher D Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. In ACL. Zhiyang Teng and Yue Zhang. 2017. Head-lexicalized bidirectional tree lstms. TACL. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS. Liang-Chih Yu, Jin Wang, K Robert Lai, and Xuejie Zhang. 2017. Refining word embeddings for sentiment analysis. In EMNLP. Tianyang Zhang, Minlie Huang, and Li Zhao. 2018a. Learning structured representation for text classification via reinforcement learning. In AAAI. Yue Zhang, Qi Liu, and Linfeng Song. 2018b. Sentence-state lstm for text representation. In ACL. Yuhao Zhang, Peng Qi, and Christopher D Manning. 2018c. Graph convolution over pruned dependency trees improves relation extraction. arXiv preprint arXiv:1809.10185. Xiaodan Zhu, Parinaz Sobihani, and Hongyu Guo. 2015. Long short-term memory over recursive structures. In ICML.
2019
342
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3528–3537 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3528 Improved Sentiment Detection via Label Transfer from Monolingual to Synthetic Code-Switched Text Bidisha Samanta IIT Kharagpur bidisha@ iitkgp.ac.in Niloy Ganguly IIT Kharagpur niloy@ cse.iitkgp.ac.in Soumen Chakrabarti IIT Bombay soumen@ cse.iitb.ac.in Abstract Multilingual writers and speakers often alternate between two languages in a single discourse, a practice called “code-switching”. Existing sentiment detection methods are usually trained on sentiment-labeled monolingual text. Manually labeled code-switched text, especially involving minority languages, is extremely rare. Consequently, the best monolingual methods perform relatively poorly on code-switched text. We present an effective technique for synthesizing labeled code-switched text from labeled monolingual text, which is more readily available. The idea is to replace carefully selected subtrees of constituency parses of sentences in the resource-rich language with suitable token spans selected from automatic translations to the resource-poor language. By augmenting scarce human-labeled code-switched text with plentiful synthetic code-switched text, we achieve significant improvements in sentiment labeling accuracy (1.5%, 5.11%, 7.20%) for three different language pairs (English-Hindi, English-Spanish and English-Bengali). We also get significant gains for hate speech detection: 4% improvement using only synthetic text and 6% if augmented with real text. 1 Introduction Sentiment analysis on social media is critical for commerce and governance. Multilingual social media users often use code-switching, particularly to express emotion (Rudra et al., 2016). However, a basic requirement to train any sentiment analysis (SA) system is the availability of large sentimentlabeled corpora. These are extremely challenging to obtain (Chittaranjan et al., 2014; Vyas et al., 2014; Barman et al., 2014), requiring volunteers fluent in multiple languages. We present CSGen, a system which provides supervised SA algorithms with synthesized unlimited sentiment-tagged code-switched text, without involving human labelers of code-switched text, or any linguistic theory or grammar for codeswitching. These texts can then train state-ofthe-art SA algorithms which, until now, primarily worked with monolingual text. A common scenario in code-switching is that a resource-rich source language is mixed with a resource-poor target language. Given a sentimentlabeled source corpus, we first create a parallel corpus by translating to the target language, using a standard translator. Although existing neural machine translators (NMTs) can translate a complete source sentence to a target sentence with good quality, it is difficult to translate only designated source segments in isolation because of missing context and lack of coherent semantics. Among our key contributions is a suite of approaches to automatic segment conversion. Broadly, given a source segment selected for codeswitching, we propose intuitive ways to select a corresponding segment from the target sentence, based on maximum similarity or minimum dissimilarity with the source segment, so that the segment blends naturally in the outer source context. Finally, the generated synthetic sentence is tagged with the same sentiment label as the source sentence. 
The source segment to replace is carefully chosen based on an observation that, apart from natural switching points dictated by syntax, there is a propensity to code-switch between highly opinionated segments. Extensive experiments show that augmenting scarce natural labeled code-switched text with plentiful synthetic text associated with ‘borrowed’ source labels enriches the feature space, enhances its coverage, and improves sentiment detection accuracy, compared to using only natural text. On four natural corpora having gold sentiment tags, we demonstrate that adding synthetic text can improve accuracy by 5.11% in English-Spanish, 3529 7.20% in English-Bengali and (1.5%, 0.97%) in English-Hindi (Twitter, Facebook). The synthetic code-switch text, even when used by itself to train SA, performs almost as well as natural text in several cases. Hate speech is an extreme emotion expressed often on social media. On an EnglishHindi gold-tagged hate speech benchmark, we achieve 6% absolute F1 improvement with data augmentation, partly because synthetic text mitigates label imbalance present in scarce real text. 2 Related Work Recent SA systems are trained on labeled text (Sharma et al., 2015; Vilares et al., 2015; Joshi et al., 2016). For European and Indian codeswitched sentiment analysis, several shared tasks have been initiated (Barman et al., 2014; Rosenthal et al., 2017; Patra et al., 2018; Sequiera et al., 2015; Solorio et al., 2014). Some of these involve human annotations on code-switched text. Vilares et al. (2015) have annotated the data set released for POS tagging by Solorio and Liu (2008). Joshi et al. (2016) had Hindi-English code-switched Facebook text manually annotated and developed a deep model for supervised prediction. In a different direction, synthetic monolingual text has been created by Generative Adversarial Networks (GAN) (Kannan and Vinyals, 2017; Zhang et al., 2016, 2017; Maqsud, 2015), or Variational Auto Encoders (VAE) (Bowman et al., 2015). Some of these models can be used to generate sentiment-tagged synthetic text. However, most of them are not directly suitable for generating bilingual code-mixed text, due to the unavailability of sufficient volume of gold-tagged codemixed text. Samanta et al. (2019) proposed a generative method using a handful of gold-tagged data; but they cannot produce sentence level tags. Recently, Pratapa et al. (2018) used linguistic constraints arising from Equivalence Constraint Theory to design a code-switching grammar that guides text synthesis. Earlier, Bhat et al. (2016) presented similar ideas, but without empirical results. In contrast, CSGen uses a data-driven combination of word alignment weights, similarity of word embeddings between source and target, and attention (Bahdanau et al., 2015). 3 Generation of code-switched text CSGen takes a sentiment-labeled source sentence s and translates it into a target language sentence t. Then it generates text with language switches on particular constituent boundaries. This involves two sub-steps: select a segment in s (§3.1), and then select text from t that can replace it (§3.2– §3.3). This generation process is sketched in Algorithm 1. 3.1 Sentiment-oriented source segment selection In this step, our goal is to select a contiguous segment from the source sentence that could potentially be replaced by some segment in the target sentence. (Allowing non-contiguous target segments usually led to unnatural sentences.) 
Code switching tends to occur at constituent boundaries (Sankoff and Poplack, 1981), an observation that holds even for social media texts (Begum et al., 2016). Therefore, we apply a constituency parser to the source sentence. Specifically, we use the Stanford CoreNLP shift-reduce parser (Zhu et al., 2013) to generate a parse tree1. Then we select segments under non-terminals, i.e., subtrees, having certain properties, chosen using heuristics informed by patterns observed in real code-switched text. NP and VP: We allow as candidates all subtrees rooted at NP (noun phrase) and VP (verb phrase) nonterminals, which may cover multiple words. Translating single-word spans is more likely to result in ungrammatical output (Sankoff and Poplack, 1981). SBAR: Bilingual writers often use a clause to provide a sentiment-neutral part and then switch to another language in another sentence-piece to express an opinion or vice-versa. An example is “Ramdhanu ended with tears kintu sesh ta besh onho rokom etar” (Ramdhanu ended with tears but the ending was quite different). Here the constituent “but the ending was quite different” comes under the subtree of SBAR. Highly opinionated segments: We also include segments which have a strong opinion polarity, as detected by a (monolingual) sentiment analyzer (Gilbert, 2014). E.g., the tweet “asimit khusi prasangsakako ke beech ... as India won the world cup after 28 years” translates to “Unlimited happiness among fans ... as India won the world cup after 28 years”. 1http://stanfordnlp.github.io/CoreNLP/ 3530 Algorithm 1 CSGen overview. 1: Input: Sentiment-labeled source sentences S = {(sn, yn)} 2: Output: Synthetic code-switched sentences C = {(c, y)} 3: tn ←Translate(sn) ∀sn ∈S /* Make parallel corpus */ 4: C ←∅ 5: for each parallel sentence pair s, t do 6: /*Collect word alignment signals*/ 7: a ←AttentionScore(s, t), g ←GizaScore(s, t) 8: /* Source segment selection */ 9: P ←SentimentOrientedSegmentSelection(s) 10: for each segment ps ∈P to replace do 11: /* Target segment selection */ 12: ˆq1, ˆq2 ←MaxSimTargetSeg(s, ps, t, a, g) 13: ˆq3, ˆq4←MinDissimTargetSeg(s, ps, t, 1−a, 1−g) 14: /*Code-switched text generation*/ 15: Ck ←Project(s, t, ps, ˆqk) where k ∈{1, . . . , 4} 16: C ←C ∪SelectBest({Ck : k ∈{1, . . . , 4}}) 17: end for 18: end for 19: C ← Threshold(C) /* Retain only best replacements */ An example sentence, its parse tree, and its candidate replacement segments are shown in Figure 1. In Algorithm 1, ps ∈P denotes the set of candidate replacement subtrees, which correspond to segments. For each candidate segment, we generate a code-switched version of the source sentence, as described next. 3.2 Target segment selection Given a source sentence s, corresponding target t, and one (contiguous) source segment ps = {wi s · · · wi+x s }, the goal is to identify the best possible a contiguous target segment qt = {wj t · · · wj+y t } that could be used to replace ps to create a realistic code-switched sentence. We adopt two approaches to achieve this goal: (a) selecting a target segment that has maximum similarity with ps, and (b) selecting a target segment having minimum dissimilarity with ps, for various definitions of similarity and dissimilarity. Below, we describe methods that achieve this goal after describing several alignment scores which will be used in these methods. Overall, these lead to target segments ˆq1 t , ˆq2 t , . . . shown in Algorithm 1, with t removed for clarity. 
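Returning to the segment-selection heuristics of Section 3.1, the sketch below collects candidate source segments from a bracketed constituency parse, assuming the nltk package for tree reading and the vaderSentiment package for the opinion-polarity heuristic (VADER is the analyzer of Gilbert (2014) cited above). The polarity cutoff, the function names, and the abbreviated bracketing of the Figure 1 sentence are our own illustrative choices, not details from the paper.

from nltk import Tree
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

REPLACEABLE = {"NP", "VP", "SBAR"}
_sia = SentimentIntensityAnalyzer()

def candidate_segments(parse_str, polarity_cutoff=0.5):
    """Candidate source segments: multi-word NP/VP/SBAR subtrees, plus any
    subtree whose span carries a strong opinion polarity."""
    candidates = []
    for sub in Tree.fromstring(parse_str).subtrees():
        words = sub.leaves()
        if len(words) < 2:            # single-word spans tend to translate poorly
            continue
        span = " ".join(words)
        opinionated = abs(_sia.polarity_scores(span)["compound"]) >= polarity_cutoff
        if sub.label() in REPLACEABLE or opinionated:
            candidates.append(span)
    return candidates

# abbreviated bracketing of the Figure 1 sentence
parse = ("(S (NP (NP (DT A) (NN coalition)) (PP (IN with) (NP (DT the) (NNP Lib) (NNPS Dems))))"
         " (VP (VBZ is) (SBAR (WHNP (WP what)) (S (NP (DT the) (NN electorate)) (VP (VBP want))))))")
print(candidate_segments(parse))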
3.2.1 Word alignment signals Signals based on word alignment methods are part of the recipe in choosing the best possible qt given the sentence pair and ps. GIZA score: The standard machine translation word alignment tool Giza++ (Och and Ney, 2003) A  coalition  with  the  Lib  Dems     is  what  the  electorate  want. DT      NN    IN    DT   NNP   NNPS   VBZ   WP  DT      NN     VBP . NP NP NP VP S WHNP SBAR VP PP NP S A  coalition  with  the  Lib  Dems     is matadaata chaahata hai. lib dems ke saath ek gathabandhan matadaata chaahata hai. EN: HI: CS: Figure 1: A phrase-structure tree for a sample synthesis. Dotted-boxes around constituents indicate that they are candidates for replacement on the source side (§3.1). EN: English source sentence, HI: Hindi target sentence, CS: code-switched sentence. The italicized segment is the target segment to replace the source segment under the non-terminal SBAR. uses IBM statistical word alignment models 1– 5 (Fern´andez, 2008; Schoenemann, 2010; Brown et al., 1993; Riley and Gildea, 2012). This tool incorporates principled probabilistic formulations of the IBM models and gives a correspondence score G[wi t, wj s] between target and source words for a given sentence pair. This word-pair score is used as a signal to find the best ˆqt. NMT attention score: Given an attentionguided trained sequence-to-sequence neural machine translation (NMT) model (Bahdanau et al., 2015; Luong et al., 2015) and sentence pair s, t, we use the attention score matrix A[wi t, wj s] as an alignment signal. Inverse document frequency (rarity): The inverse document frequency (IDF) of a word in a corpus signifies its importance in the sentence (van Rijsbergen, 1979). We define I(w) = σ(a IDF(w) −b) as a shifted, squashed IDF that normalizes the raw corpus-level score. Here σ is the sigmoid function and parameters a and b are empirically tuned. This IDF-based signal is optionally incorporated while choosing ˆqt. 3.2.2 Target segment with maximum similarity Given word-pair scores derived from either Giza++ or NMT attention described in §3.2.1, we formulate two methods for identifying target segments. First, we identify the best target segment 3531 given Giza++ scores, G[·, ·], as follows: ˆq1 t ←argmax qt Y wt∈qt X ws∈ps G[wt, ws] (1) For each word in qt, we compute the total attention score concentrated in ps and then multiply them as if they are independent. Second, we use the attention score learned by the NMT system of Luong et al. (2017) (a bidirectional LSTM model with attention). Essentially, given the attention score A[·, ·] between target and source words, we intend to select the target segment qt whose maximum attention is concentrated in the given ps. Initial exploration of the above method revealed that the attention of a target word may spread out over several related but less appropriate source words, and accrue better overall similarity than a single more appropriate word. Here IDF can come to the rescue, the intuition being that words wi t and wj s with very different IDFs are less likely to align, because (barring polysemy and synonymy) rare (common) words in one language tend to translate to rare (common) words in another. This intuition is embodied in the improved formulation: ˆq2 t ←argmax qt Y wt∈qt I(wt) X ws∈ps I(ws)A[wt, ws] (2) Informally, if a source segment contains many rare words, the target segment should also have a similar number of rare words from the target domain, and vice-versa. 
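A compact NumPy sketch of the maximum-similarity selection of Eq. (1)-(2) is given below. The brute-force enumeration of contiguous target spans and all variable names are ours; a and b are the empirically tuned IDF parameters mentioned above, and passing no IDF vectors reduces the score to Eq. (1).

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def best_target_segment(score, src_span, idf_t=None, idf_s=None, a=1.0, b=0.0):
    """Pick the contiguous target span maximising Eq. (1), or Eq. (2) when
    shifted, squashed IDF weights are supplied.
    score    : (n_target, n_source) word-pair scores (Giza++ or NMT attention).
    src_span : (i, j) inclusive indices of the selected source segment."""
    i, j = src_span
    n_t, n_s = score.shape
    I_t = sigmoid(a * idf_t - b) if idf_t is not None else np.ones(n_t)
    I_s = sigmoid(a * idf_s - b) if idf_s is not None else np.ones(n_s)
    best, best_score = None, -np.inf
    for lo in range(n_t):
        for hi in range(lo, n_t):
            # per-target-word mass concentrated inside the source segment
            per_word = I_t[lo:hi + 1] * (score[lo:hi + 1, i:j + 1] * I_s[i:j + 1]).sum(axis=1)
            s = np.prod(per_word)
            if s > best_score:
                best, best_score = (lo, hi), s
    return best, best_score

The minimum-dissimilarity selection of Section 3.2.3 that follows can reuse the same word-pair matrices, feeding d = 1 - score into an Earth Mover's Distance solver instead of taking the product score above.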
3.2.3 Target segment with minimum dissimilarity We examine an alternative method for identifying target segments that leverage the Earth Mover’s Distance (EMD) (Vaserˇste˘ın, 1969). Kusner et al. (2015) extended EMD to the Word Mover Distance to measure the dissimilarity between documents by ‘transporting’ word vectors from one document to the vectors of the other. In the same spirit, we define a dissimilarity measure between ps and candidate target segments using EMD. We present here EMD as a minimization over fractional transportation matrix F ∈R|qt|×|ps| as below: EMD(qt, ps) = min F |qt| X i=1 |ps| X j=1 Fi,jdi,j (3) where P i Fi,j = 1 |qt| and P j Fi,j = 1 |ps| and di,j is a distance metric between a target and a source word pair, given suitable representations. Finally, we choose the target segment which is least dissimilar to a given source segment defined by the EMD. We compute di,j in two ways, described below. Attention-based distance: Here the distance between the embeddings is defined as: dA i,j = 1 −A[wi t, wj s] (4) Giza-based distance: Similarly we can compute the distance using Giza score as: dG i,j = 1 −G[wi t, wj s] (5) Given the two types of distances in Eq. (4)–(5) and the definition of EMD in Eq. (3) we can formulate two methods for identifying target segments: ˆqk t ←argmin qt min F |qt| X i=1 |ps| X j=1 Fi,jdk i,j (6) where k ∈{3, 4} and d3 i,j ≡dA i,j and d4 i,j ≡dG i,j. We can also use Euclidean distance as di,j. However, this method requires multilingual word embeddings for every word to calculate the distance. The volume of labeled source text we can use is usually smaller than the vocabulary size, making it difficult to learn reliable word embeddings. Also, if these corpora contain informal social media text like the ones described in §4.1, then publicly available pretrained word embeddings exclude a significant percentage of them. 3.3 Projecting target segments Given a source sentence s with designated segment ps to replace, and target sentence t, we have by now identified four possible target segments ˆqk t where k ∈{1, . . . , 4} as described in §3.2.2– §3.2.3. We now project the target segment to the source sentence, meaning, (a) replace the source segment with the target segment and (b) transliterate the replacement using the Google Transliteration API to the source script2. This creates four possible synthetic code-switched sentences for each instance of (s, ps, t). Finally, we transfer the labels of the original monolingual corpora to the generated synthetic text corpora. 3.4 Best candidate via reverse translation From these four code-switched sentences c1, . . . , c4, we wish to retain the one that retains most of the syntactic structures of the source sentence. Each code-switched sentence ck has an associated score as defined in §3.2. We use two 2http://www.google.com/transliterate? langpair=hi|en&text=<text> 3532 empirically tuned thresholds: a lower cut-off for the similarity score of c1, c2 and an upper cut-off for the dissimilarity score of c3, c4, to improve the quality of candidates retained. These scores are not normalized and cannot be compared across different methods. Therefore, we perform a reverse translation of each candidate back to the source language using the Google translation API to obtain ˜s. We retain the candidate whose retranslated version ˜s has the highest BLEU score (Papineni et al., 2002) wrt s. In case of a tie, we select the candidate with maximum word overlap with s. 
3.5 Thresholding and stratified sampling In addition to retaining only the best among codeswitched candidates c1,...,4, we discard the winner if its BLEU score is below a tuned threshold. Further, we sample source sentences such that the surviving populations of sentiment labels of the code-switched sentences match the populations in the low-resource evaluation corpus. Another tuned system parameter is the amount of synthetic text to generate to supplement the gold text. We do not depend on any domain coherence between the source corpus used to synthesize text and the gold ‘payload’ corpus — this is the more realistic situation. Our expectation, therefore, is that adding some amount of synthetic text should improve sentiment prediction, but excessive amounts of off-domain synthetic text may hurt it. In our experiments we grid search the synthetic:gold ratio between 1/4 and 2 using 3-fold cross validation. 4 Experiments We demonstrate the effectiveness of augmenting gold code-switched text with synthetic codeswitched text. We also measure the usefulness of synthetic text without gold text. In this section, we will first describe the data sets used to generate the synthetic text and then the resource-poor labeled code-switched text used for evaluation. Next, we will present the method used for sentiment detection, baseline performance, and finally our performance, along with a detailed comparative analysis. 4.1 Source corpora for text synthesis We use publicly available monolingual sentimenttagged (positive, negative or neutral) gold corpora in the source language. ACL: Dong et al. (2014) released about 6000 manually labeled English tweets. Election: Wang et al. (2017) published about 5000 human-labeled English tweets. Mukherjee: This data set contains about 8000 human-labeled English tweets (Mukherjee and Bhattacharyya, 2012; Mukherjee et al., 2012). Semeval shared task: This provides about 10000 human-labeled English tweets (Rosenthal et al., 2017). Union: This is the union of above mentioned different data sets. Hatespeech: We collected 15K tagged English tweets from (Founta et al., 2018) which consists of 4.7K abusive, 1.7K hateful and 4K normal tweets. We picked Spanish, which is homologous to English, and Hindi and Bengali, which are comparatively dissimilar to English, for our experiments. We translated these monolingual tweets to Spanish, Hindi and Bengali using Google Translation API3 and used as parallel corpus to train attention-based NMT models and statistical MT model (GIZA) to learn the word alignment signals as described in §3.2.1. 4.2 Preliminary qualitative analysis Analysis of texts synthesized by various mechanisms proposed in §3.2 shows that similarity based methods contribute 82–85% of the best candidates and the rest come from dissimilarity based methods. Similarity-based methods using NMT attention and Giza perform well because the segments selected for replacement often constitute nouns and entity mentions, which have a very strong alignment in the corresponding target segment. NMT attention and Giza-EMD perform well when segments contract or expand in translation. 4.3 Low-resource evaluation corpora To evaluate the usefulness of the generated synthetic tagged sentences as a training set for sentiment analysis, we have used three different codeswitched language pair data sets. Each data set below was divided into 70% training, 10% validation and 20% testing folds. 
The training fold was (or was not) augmented with synthetic labeled text 3https://translation.googleapis.com 3533 to train sentiment classifiers, which were then applied on the test fold judge the quality of synthesis. HI-EN, FB (Hindi-English, Facebook): Joshi et al. (2016) released around 4000 labeled codeswitched sentences from the Facebook timeline of Narendra Modi (Indian Prime Minister) and Salman Khan (Bollywood actor). HI-EN, TW (Hindi-English, Twitter): This is a shared task from ICON 2017 (Patra et al., 2018) with 15575 instances. ES-EN (English-Spanish): We collected 2883 labeled tweets specified by Vilares et al. (2015). BN-EN (Bengali-English): This is another shared task from ICON 2017 (Patra et al., 2018) with 2499 instances. HI-EN, Hatespeech: Bohra et al. (2018) published 4000 manually-labeled code-switched Hindi-English tweets: 1500 exhibiting hate speech and 2500 normal. We also found a significant number of abusive tweets marked hate speech. For uniformity, we merged hate speech tweets and abusive tweets. 4.4 Sentiment classifier We adopt the sub-word-LSTM system of Joshi et al. (2016). We prefer this over feature-based methods because (a) feature extraction for codeswitched text is very difficult, and varies widely across language pairs, and (b) the vocabulary is large and informal, with many tokens outside standard (full-) word embedding vocabularies and (c) sub-word-LSTM captures semantic features via convolution and pooling. Loss functions: If the sentiment labels {−1, 0, +1} are regarded as categorical, crossentropy loss is standard. However, prediction errors between the extreme polarities {−1, +1} need to be penalized more than errors between {-1,0} or {0,+1}. Hence, we use ordinal crossentropy loss (Niu et al., 2016), introducing a weight factor proportional to the order of intended penalty multiplied with the cross entropy loss. On the test fold, we report 0/1 accuracy and per class micro-averaged F1 score. Baseline and prior art: Our baseline scenario is a self-contained train-dev-test split of the gold corpus. The primary prior art is the work of Pratapa et al. (2018). Feature space coherence: Our source corpora are quite unrelated to the gold corpora. Table 1 shows that the average Euclidean distance between feature space of gold training and testing texts is much lower than that between gold and synthetic texts. While this may be inescapable in a low-resource situation, the gold baseline does not pay for such decoherence, which can lead to misleading conclusions. HI EN,TW HI EN,FB ES EN BN EN ACL 2.21 (2.13) 3.72 (2.24) 2.09 (1.73) 4.11 (2.67) Election 2.40 (2.12) 6.27 (2.67) 1.58 (1.49) 5.23 (2.43) Mukherjee 2.47 (2.33) 3.82 (2.26) 1.64 (1.64) 5.18 (2.50) Semeval 2.23 (2.11) 4.04 (2.26) 1.69 (1.67) 3.63 (2.59) Union 2.55 (2.15) 3.80(2.65) 1.65 (1.53) 5.48 (2.56) Gold 2.05 1.87 1.64 1.83 Table 1: Average pairwise Euclidean distance between training data and test data features. Rows correspond to standalone (respectively, augmented) text for training. Gray: reference distance of gold test from gold train. Red: largest distance observed. Training regimes: Absence of coherent tagged gold text may lead to substantial performance loss. Hence, along with demonstrating the usefulness of augmenting natural with synthetic text, we also measure the efficacy of synthetic text on its own. We train the SA classifier with three labeled corpora: (a) limited gold code-switched text, (b) gold code-switched text augmented with synthetic text and (c) only synthetic text. 
Then we evaluate the resulting models on labeled gold code-switched test fold. 4.5 Sentiment detection accuracy Table 2 shows the benefits of augmenting natural with synthetic text. Test accuracy increases further (shown in brackets) if thresholding and stratified sampling are used. Gains for HI EN,TW, HI EN,FB, ES EN and BN EN are 1.5% (2.43%), 0.23% (1.43%), 4.76% (6.24%), and 2.8% (4.8%) respectively. Categorical cross-entropy loss was used here. Similar improvements in accuracy of 1.45% (1.5%), 0.59% (0.97%), 2.16% (5.11%), 3% (7.20%) are observed after training with ordinal loss function. Our conclusion is that careful augmentation with synthetic data can lead to useful gain in accuracy. Moreover, by selecting synthetic text which is syntactically more natural, even larger gains can be achieved. Notably, the distance between training and test features (Table 1) is negatively correlated with accuracy gain (Pearson correlation coefficient of −0.48). 3534 Train Test HI EN (TW) HI EN (FB) ES EN BN EN HI EN (TW) HI EN (FB) ES EN BN EN Categorical cross entropy training loss Ordinal cross entropy training loss ACL 51.80 (52.59) 62.59 (65.33) 44.80 (48.84) 52.59 (59.81) 52.68 (53.76) 62.72 (64.22) 45.69 (50.31) 50.08 (51.60) Election 52.84 (54.59) 65.59 (66.80) 43.07 (43.07) 57.99 (59.00) 52.89 (54.64) 64.88 (65.26) 45.21 (45.80) 53.40 (55.60) Mukherjee 53.76 (54.69) 64.82 (65.85) 47.06 (46.36) 49.79 (57.40) 53.79 (53.53) 59.66 (64.57) 44.85 (43.07) 51.00 (49.40) Semeval 52.99 (54.19) 65.33 (65.46) 47.36 (44.69) 53.40 (59.99) 52.99 (53.83) 63.26 (64.66) 47.36 (44.64) 53.14 (57.59) Union 53.28 (54.64) 65.50 (65.99) 44.24 (45.32) 57.23 (59.89) 53.65 (53.69) 64.30 (67.65) 44.04 (46.00) 53.40 (57.40) MSR 54.50 65.58 48.14 59.79 53.69 62.80 47.50 52.8 Gold 52.26 65.37 42.6 55.19 52.34 64.29 45.20 50.39 Table 2: Accuracy (%) on 20% test data after training with augmented and only gold text. Rows correspond to sources of augmentation. In most cells we show (A) no thresholding or stratification and (B) with thresholding and stratification (within brackets). Gray: reference accuracy with only gold training. Blue: A or B or MSR outperforms gold. Green: B performs best. Row ‘MSR’ uses text synthesized by Pratapa et al. (2018). Comparison with Pratapa et al. (2018): They depend on finding correspondences between constituency parses of the source and target sentences. However, the common case is that a constituency parser is unavailable or ineffective for the target language, particularly for informal social media. They are thus restricted to synthesizing text from only a subset of monolingual data. Training SA with natural text augmented with their synthesized text leads to poorer accuracy, albeit by a small amount, than using CSGen. The performance is worse for target languages that are more resourcepoor. Ordinal vs. categorical loss: Table 2 shows that ordinal loss helps when the neutral label dominates. However, neither is a clear winner and the gains are small. Therefore, we use categorical loss henceforth. Choice of monolingual corpus: Across all monolingual corpora, Election performs consistently well. Best test performance on HI EN,TW was obtained by synthesizing from the Mukherjee corpus. Text synthesized from Election provides the best results for HI EN,FB for both setups. The performance of Union is also good but not the best. 
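Looking back at the loss functions of Section 4.4, the ordinal variant can be sketched in a few lines of PyTorch. The 1 + distance weighting is one common instantiation of a penalty proportional to the order of the error; the exact factor is not spelled out in the text, and the class encoding (0 = negative, 1 = neutral, 2 = positive) is our assumption.

import torch
import torch.nn.functional as F

def ordinal_cross_entropy(logits, target):
    """Cross entropy re-weighted by how far the predicted polarity class is from
    the gold class, so confusing negative with positive costs more than
    confusing negative with neutral."""
    ce = F.cross_entropy(logits, target, reduction="none")
    distance = (logits.argmax(dim=1) - target).abs().float()
    return ((1.0 + distance) * ce).mean()

logits = torch.tensor([[2.0, 0.1, 0.3], [0.2, 0.1, 2.5]])   # two toy predictions
print(ordinal_cross_entropy(logits, torch.tensor([2, 2])))  # extreme flip weighted 3x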
This is because although a larger and diverse amount of data is available which ensures its quality, the Euclidean distance between test data and some individual corpora is still large. 4.6 Sentiment detection F1 score Beyond 0/1 accuracy, Table 3 shows F1 score gains. Election yields consistently good results. We have reported the F1 score gain for different sentiment classes only for Election in Table 3 for brevity. Augmenting synthetic data with gold data yields better F1 score than training only with gold tagged data. Also, it is interesting to observe that Categorical Cross Entropy training Ordinal Cross Entropy training Pos Neu Neg Pos Neu Neg HI EN,TW CSGen 0.52 0.62 0.38 0.55 0.63 0.34 Gold 0.48 0.63 0.24 0.50 0.62 0.35 HI EN,FB CSGen 0.59 0.73 0.56 0.62 0.71 0.55 Gold 0.60 0.74 0.54 0.60 0.71 0.44 ES EN CSGen 0.38 0.53 0.37 0.48 0.50 0.42 Gold 0.47 0.44 0.41 0.40 0.53 0.43 BN EN CSGen 0.63 0.49 0.58 0.55 0.47 0.59 Gold 0.55 0.51 0.65 0.37 0.49 0.61 Table 3: F1 score for each class prediction. Blue: CSGen is better than Gold. there is a sharp drop of F1 score for HI EN,FB and BN EN data sets for Gold data while training with ordinal cross entropy function across all the sentiment labels. As described in §4.5, this is due to non-discriminative features. However, mixing them with synthetic data helps in achieving better results. Train Test HI EN (TW) HI EN (FB) ES EN BN EN ACL 40.33 49.96 38.40 47.81 Election 47.22 48.78 31.20 42.44 Mukherjee 46.22 48.98 39.76 44.42 Semeval 45.80 48.38 39.18 45.99 Union 43.50 49.80 41.90 41.20 Gold 52.26 65.37 42.60 55.19 Table 4: Percent accuracy on 20% test data after training on only synthetic and only gold text. Each row corresponds to a source. Grey: Accuracy achieved with only gold training. Blue: The closest accuracy achieved to best. 4.7 Performance of standalone synthetic data The accuracy of using only synthetic data as training is reported in Table 4. We can see that for EN HI,TW and EN ES the synthetic data is very close to the gold data performance (lagging by 3535 Category of failure Example sentence Gold Predicted Keywords with different polarity manana voy conquistar la will forever be an amazing song not because me la dedicaron but because my momma always jams to it Positive Negative “tomorrow I will conquer the will” forever be an amazing song not because they dedicated it to me but because my momma always jams to it. twin brothers lost in fair reunited in adulthood amidst dramatic circumstances ei themer movie akhon ar viewers der attract kore na Neutral Positive twin brothers lost in fair reunited in adulthood amidst dramatic circumstances, this theme does not yet attract viewers. Ambiguous overall meaning elizaibq ellen quiere entrevistar julianna margulies clooney says she is tough cookie she is hard one to crack Negative Neutral elizaibq ellen wants to interview julianna margulies clooney says she is tough cookie she is hard one to crack hum kam se kam fight ker haaray lekin tum loog zillat ki maut maaray gaye Positive Negative We lost at least after a fight, but you died a terrible death. Table 5: Examples cases of failure in prediction. Red: Negative Polarity words. Green: Positive polarity words. Blue represents the English translation of the code-switched sentence. 5.04% and 2.84%). However, it performed poorly for HI EN,FB and BN EN dataset. This is because there is heavy mismatch between the synthetic text set generated and the test data distribution (Table 1) in these two datasets. 
The Pearson rank correlation coefficient between the distance (between test and training set) measures and relative accuracy gain is highly negative, −0.66. To further establish the importance of domain coherence, we report on an experiment performed with HI EN,FB gold dataset. This dataset has texts corresponding to two different entities namely Narendra Modi and Salman Khan. Training SA with natural text corresponding to one entity and testing on the rest leads to a steep accuracy drop from 65.37% to 52.32%. 4.8 Error analysis We found two dominant error modes where synthetic augmentation confuses the system. Table 5 shows a few examples. The first error mode can be triggered by the presence of words of different polarities, one polarity more common than the other, and the gold label being the minority polarity. The second error mode is prevalent when the emotion is weak or mixed. Either there is no strong opinion, or there are two agents, one regarded positively and the other negatively. 4.9 Hate speech detection results Table 6 shows hate speech detection results. Training with only synthetic text after thresholding and stratified sampling outperforms training with only gold-tagged text by 4% F1, and using both gold and synthetic text gives a F1 boost of 6% beyond using gold alone. Remarkably, synthetic text alone outperforms gold text, because gold text has high class imbalance, leading to poorer prediction. Because we can create arbitrary amounts of synthetic text, we can balance the labels to achieve better prediction. Prec Recall F-score Only synthetic 0.58 (0.63) 0.60 (0.63) 0.51 (0.52) Synthetic +Gold 0.59 (0.60) 0.63 (0.63) 0.53 (0.54) Gold 0.40 0.62 0.48 Table 6: Hate speech results (3-fold cross val.). In most cells we show performance without thresholding and stratification (within bracket with thresholding and stratification). Green: Best performance in each column. 5 Conclusion Code-mixing is an important and rapidly evolving mechanism of expression among multilingual populations on social media. Monolingual sentiment analysis techniques perform poorly on codemixed text, partly because code-mixed text often involves resource-poor languages. Starting from sentiment-labeled text in resource-rich source languages, we propose an effective method to synthesize labeled code-mixed text without designing switching grammars. Augmenting scarce natural text with synthetic text improves sentiment detection accuracy. Acknowledgments Bidisha Samanta was supported by Google India Ph.D. Fellowship. We would like to thank Dipanjan Das and Dan Garrette for their valuable inputs. Soumen Chakrabarti was partly supported by IBM. 3536 References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of ICLR. Utsab Barman, Amitava Das, Joachim Wagner, and Jennifer Foster. 2014. Code mixing: A challenge for language identification in the language of social media. In Proceedings of the first workshop on computational approaches to code switching, pages 13– 23. Rafiya Begum, Kalika Bali, Monojit Choudhury, Koustav Rudra, and Niloy Ganguly. 2016. Functions of code-switching in Tweets: An annotation scheme and some initial experiments. Proceedings of LREC. Gayatri Bhat, Monojit Choudhury, and Kalika Bali. 2016. Grammatical constraints on intra-sentential code-switching: From theories to working models. arXiv preprint arXiv:1612.04538. 
Aditya Bohra, Deepanshu Vijay, Vinay Singh, Syed Sarfaraz Akhtar, and Manish Shrivastava. 2018. A dataset of hindi-english code-mixed social media text for hate speech detection. In Proceedings of the Second Workshop on Computational Modeling of People’s Opinions, Personality, and Emotions in Social Media, pages 36–41. Samuel R Bowman, Luke Vilnis, Oriol Vinyals, and Dai. 2015. Generating sentences from a continuous space. arXiv preprint arXiv:1511.06349. Peter F Brown, Vincent J Della Pietra, Stephen A Della Pietra, and Robert L Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263–311. Gokul Chittaranjan, Yogarshi Vyas, Kalika Bali, and Monojit Choudhury. 2014. Word-level language identification using CRF: Code-switching shared task report of MSR India system. In Proceedings of The First Workshop on Computational Approaches to Code Switching, pages 73–79. Li Dong, Furu Wei, Chuanqi Tan, Duyu Tang, Ming Zhou, and Ke Xu. 2014. Adaptive recursive neural network for target-dependent twitter sentiment classification. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 49–54. Pablo Malvar Fern´andez. 2008. Improving Word-toword Alignments Using Morphological Information. Ph.D. thesis, San Diego State University. Antigoni-Maria Founta, Constantinos Djouvas, Despoina Chatzakou, Ilias Leontiadis, Jeremy Blackburn, Gianluca Stringhini, Athena Vakali, Michael Sirivianos, and Nicolas Kourtellis. 2018. Large scale crowdsourcing and characterization of twitter abusive behavior. arXiv preprint arXiv:1802.00393. CJ Hutto Eric Gilbert. 2014. Vader: A parsimonious rule-based model for sentiment analysis of social media text. In Eighth International Conference on Weblogs and Social Media (ICWSM-14). Available at (20/04/16) http://comp. social. gatech. edu/papers/icwsm14. vader. hutto. pdf. Aditya Joshi, Ameya Prabhu, Manish Shrivastava, and Vasudeva Varma. 2016. Towards sub-word level compositions for sentiment analysis of hindi-english code mixed text. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 2482–2491. Anjuli Kannan and Oriol Vinyals. 2017. Adversarial evaluation of dialogue models. arXiv preprint arXiv:1701.08198. Matt Kusner, Yu Sun, Nicholas Kolkin, and Kilian Weinberger. 2015. From word embeddings to document distances. In International Conference on Machine Learning, pages 957–966. Minh-Thang Luong, Eugene Brevdo, and Rui Zhao. 2017. Neural machine translation (seq2seq) tutorial. https://github.com/tensorflow/nmt. Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attentionbased neural machine translation. In Proceedings of EMNLP. Umar Maqsud. 2015. Synthetic text generation for sentiment analysis. In Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis. Subhabrata Mukherjee and Pushpak Bhattacharyya. 2012. Sentiment analysis in twitter with lightweight discourse analysis. Proceedings of COLING 2012, pages 1847–1864. Subhabrata Mukherjee, Akshat Malu, Balamurali AR, and Pushpak Bhattacharyya. 2012. Twisent: a multistage system for analyzing sentiment in twitter. In Proceedings of the 21st ACM international conference on Information and knowledge management, pages 2531–2534. ACM. Zhenxing Niu, Mo Zhou, Le Wang, Xinbo Gao, and Gang Hua. 2016. Ordinal regression with multiple output cnn for age estimation. 
In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4920–4928. Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19–51. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of ACL. 3537 Braja Gopal Patra, Dipankar Das, and Amitava Das. 2018. Sentiment analysis of code-mixed indian languages: An overview of sail code-mixed shared task@ icon-2017. arXiv preprint arXiv:1803.06745. Adithya Pratapa, Gayatri Bhat, Monojit Choudhury, Sunayana Sitaram, Sandipan Dandapat, and Kalika Bali. 2018. Language modeling for code-mixing: The role of linguistic theory based synthetic data. In Proceedings of ACL. C J van Rijsbergen. 1979. Information Retrieval. Butterworths, London. Online at http://www.dcs. gla.ac.uk/Keith/Preface.html. Darcey Riley and Daniel Gildea. 2012. Improving the IBM alignment models using variational Bayes. In Proceedings of ACL. Sara Rosenthal, Noura Farra, and Preslav Nakov. 2017. SemEval-2017 task 4: Sentiment analysis in Twitter. In Proceedings of the 11th International Workshop on Semantic Evaluation, SemEval ’17, Vancouver, Canada. Association for Computational Linguistics. Koustav Rudra, Shruti Rijhwani, Rafiya Begum, Kalika Bali, Monojit Choudhury, and Niloy Ganguly. 2016. Understanding language preference for expression of opinion and sentiment: What do hindienglish speakers do on twitter? In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1131–1141. Bidisha Samanta, Sharmila Reddi, Hussain Jagirdar, Niloy Ganguly, and Soumen Chakrabarti. 2019. A deep generative model for code-switched text. In Proceedings of IJCAI. David Sankoff and Shana Poplack. 1981. A formal grammar for code-switching. Research on Language & Social Interaction, 14(1):3–45. Thomas Schoenemann. 2010. Computing optimal alignments for the IBM-3 translation model. In Proceedings of CoNLL. Royal Sequiera, Monojit Choudhury, and Kalika Bali. 2015. POS tagging of Hindi-English code mixed text from social media: Some machine learning experiments. In Proceedings of International Conference on NLP. Shashank Sharma, PYKL Srinivas, and Rakesh Chandra Balabantaray. 2015. Text normalization of code mix and sentiment analysis. In Advances in Computing, Communications and Informatics (ICACCI), 2015 International Conference on, pages 1468– 1473. IEEE. Thamar Solorio, Elizabeth Blair, Suraj Maharjan, Steven Bethard, Mona Diab, Mahmoud Ghoneim, Abdelati Hawwari, Fahad AlGhamdi, Julia Hirschberg, Alison Chang, et al. 2014. Overview for the first shared task on language identification in code-switched data. In Proceedings of the First Workshop on Computational Approaches to Code Switching, pages 62–72. Thamar Solorio and Yang Liu. 2008. Part-of-speech tagging for English-Spanish code-switched text. In Proceedings of EMNLP. Leonid Nisonovich Vaserˇste˘ın. 1969. Markov processes over denumerable products of spaces describing large systems of automata. Problems of Information Transmission, 5(3):47–52. David Vilares, Miguel A Alonso, and Carlos G´omezRodr´ıguez. 2015. Sentiment analysis on monolingual, multilingual and code-switching twitter corpora. In Proceedings of the 6th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 2–8. Yogarshi Vyas, Spandana Gella, Jatin Sharma, Kalika Bali, and Monojit Choudhury. 2014. 
POS tagging of English-Hindi code-mixed social media content. In Proceedings of EMNLP. Bo Wang, Maria Liakata, Arkaitz Zubiaga, and Rob Procter. 2017. TDParse: Multi-target-specific sentiment recognition on Twitter. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, volume 1, pages 483–493. Yizhe Zhang, Zhe Gan, and Lawrence Carin. 2016. Generating text via adversarial training. In NIPS workshop on Adversarial Training. Yizhe Zhang, Zhe Gan, Kai Fan, Zhi Chen, Ricardo Henao, Dinghan Shen, and Lawrence Carin. 2017. Adversarial feature matching for text generation. arXiv preprint arXiv:1706.03850. Muhua Zhu, Yue Zhang, Wenliang Chen, Min Zhang, and Jingbo Zhu. 2013. Fast and accurate shift-reduce constituent parsing. In Proceedings of ACL.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3538–3547 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3538 Exploring Sequence-to-Sequence Learning in Aspect Term Extraction Dehong Ma♣, Sujian Li♣, Fangzhao Wu♠, Xing Xie♠, Houfeng Wang♣ ♣MOE Key Lab of Computational Linguistics, Peking University, Beijing, 100871, China ♠Microsoft Research Asia, Beijing, China {madehong, lisujian, wanghf}@pku.edu.cn [email protected], [email protected] Abstract Aspect term extraction (ATE) aims at identifying all aspect terms in a sentence and is usually modeled as a sequence labeling problem. However, sequence labeling based methods cannot make full use of the overall meaning of the whole sentence and have the limitation in processing dependencies between labels. To tackle these problems, we first explore to formalize ATE as a sequence-tosequence (Seq2Seq) learning task where the source sequence and target sequence are composed of words and labels respectively. At the same time, to make Seq2Seq learning suit to ATE where labels correspond to words one by one, we design the gated unit networks to incorporate corresponding word representation into the decoder, and position-aware attention to pay more attention to the adjacent words of a target word. The experimental results on two datasets show that Seq2Seq learning is effective in ATE accompanied with our proposed gated unit networks and position-aware attention mechanism. 1 Introduction Aspect term extraction (ATE) is a fundamental task in aspect-level sentiment analysis, and aims at extracting all aspect terms present in the sentences (Hu and Liu, 2004; Pontiki et al., 2014, 2015, 2016). For example, given a restaurant review “The staff is friendly, and their cheese pizza is delicious”, the ATE system should extract aspect terms “staff” and “cheese pizza”. Early works focus on detecting the pre-defined aspects in a sentence (Hu and Liu, 2004; Zhuang et al., 2006; Popescu and Etzioni, 2007). Then, some works regard ATE as a sequence labeling task and utilize Hidden Markov Model (Jin et al., 2009) or Conditional Random Fields (Jin et al., 2009; Ma and Wan, 2010; Jakob and Gurevych, 2010; Liu et al., 2013) to extract all possible aspect terms. With the development of deep learning techniques, neural networks based methods (Wang et al., 2016; Liu et al., 2015; Li and Lam, 2017; Xu et al., 2018) have achieved good performances in ATE task, and they still treat ATE as a sequence labeling problem and extract more useful features surrounding a word. Obviously, the overall meaning of the sentence is important to predict the label sequence. For example, the word memory should be an aspect term in the laptop review “The memory is enough for use.”, but it is not an aspect term in the sentence “The memory is sad for me.”. However, sequence labeling methods are not good at grasping the overall meaning of the whole sentence because they cannot read the whole sentence in advance. In addition, neural networks based sequence labeling methods have the limitation in processing label dependencies because they only use transition matrix to encourage valid label paths and discourage other paths (Collobert et al., 2011). As we know, the label of each word is conditioned on its previous label. For example, “O” is followed by “B/O” but not “I” in the B-I-O tagging schema. 
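For reference, the B-I-O scheme mentioned above maps directly to aspect-term spans. The following minimal sketch (illustrative, not from the paper) recovers aspect terms from a tag sequence, using the restaurant example above.

```python
# Minimal illustration: recover aspect-term spans from a B-I-O tag sequence,
# using the restaurant example from the introduction.
def extract_aspect_terms(tokens, tags):
    """Collect token spans labeled B (begin) / I (inside); O is outside."""
    terms, current = [], []
    for token, tag in zip(tokens, tags):
        if tag == "B":                 # a new aspect term starts here
            if current:
                terms.append(" ".join(current))
            current = [token]
        elif tag == "I" and current:   # continue the current aspect term
            current.append(token)
        else:                          # O ends any open span
            if current:
                terms.append(" ".join(current))
            current = []
    if current:
        terms.append(" ".join(current))
    return terms

tokens = "The staff is friendly , and their cheese pizza is delicious".split()
tags   = ["O", "B", "O", "O", "O", "O", "O", "B", "I", "O", "O"]
print(extract_aspect_terms(tokens, tags))   # ['staff', 'cheese pizza']
```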
To the best of our knowledge, no neural networks based method utilizes the previous label to improve their performances directly. Recently, sequence to sequence (Seq2Seq) learning has been successfully applied to many generation tasks (Cho et al., 2014b; Sutskever et al., 2014; Bahdanau et al., 2014; Nallapati et al., 2016). Seq2Seq learning encodes a source sequence into a fixed-length vector based on which a decoder generates a target sequence. It just has the benefits of first collecting comprehensive information from the source text and then paying more attention to the generation of the target sequence. Thus, we propose to formalize the ATE task as a sequence-to-sequence learning problem, where 3539 the source and target sequences are word and label sequence respectively. Our proposed method can make full use of the overall meaning of the sentence when decoding the target sequence because the fix-length vector stores all useful information of a sentence and will be used in the decoding process. At the same time, Seq2Seq learning can remedy the label dependencies problem because each label is conditioned on the previous label when generating the label sequence. Though Seq2Seq learning has its obvious advantages of generating a sequence, it faces the difficulties of how to precisely map each word with its corresponding label. As we know, the label of each word is highly related to its own meaning. For example, an aspect term tends to be some words used to identify any of a class of people, places, or things (e.g. staff, restaurant, pizza), while some words to describe an action, state, or occurrence (e.g. hear, become, happen) are rarely a part of an aspect term. Furthermore, our proposed method can know for which word it generates a label, and this kind of one-to-one match does not exist in other Seq2Seq task (e.g. machine translation). To incorporate the exact meaning of each word into Seq2Seq learning, we propose the gated unit networks (GUN) which contain a gated unit produced based on the hidden states of encoder and decoder. The gated unit can automatically integrate information from the encoder and decoder hidden states of the current word when decoding its label. Furthermore, the label of each word is dependent on its adjacent words because the adjacent words of an aspect term tend to be article, verb, adjective and etc. As the example in the first paragraph, the adjacent words of staff: The, is and friendly have positive effect on predicting its label, while the rest words are not key factors. This shows the importance of adjacent words of each word in predicting its label. In classic Seq2Seq learning, attention mechanism is used to make the decoder select important parts of source sequence to form a context vector for decoding current word (Bahdanau et al., 2014). However, this kind of attention mechanism cannot pay more attention to the adjacent words of a word because it does not take distance into account. To overcome this shortage, we introduce the position-aware attention which first computes the weight of each word with regard to previous hidden state si−1. Then, the weight of word i will be decreased based on the distance between word i and current word t. The more distant, the lower important. Therefore, our position-aware attention model can force the decoder to pay more attention to the adjacent words of the current word when decoding its label. 
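As a rough illustration of this idea (the exact formulation appears in Section 2.2), the sketch below rescales ordinary attention scores by a distance-based decay before normalization. The decay function 1/log2(2 + l) follows the paper, while the raw score values here are made up.

```python
# Rough numpy illustration of position-aware attention: raw scores are divided
# by log2(2 + distance) before the softmax, so nearby words keep more weight.
# The raw scores below are made-up numbers; only the decay follows the paper.
import numpy as np

def position_aware_weights(raw_scores, current_pos):
    scores = np.asarray(raw_scores, dtype=float)
    positions = np.arange(len(scores))
    decay = np.log2(2.0 + np.abs(positions - current_pos))  # d(w_i, w_t)
    scaled = scores / decay
    exp = np.exp(scaled - scaled.max())                     # stable softmax
    return exp / exp.sum()

raw = [1.0, 2.0, 1.5, 0.5, 2.0]   # hypothetical f(s_{t-1}, h_i) values
print(position_aware_weights(raw, current_pos=2).round(3))
```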
We conduct experiments on two datasets, and the experimental results demonstrate that our proposed method achieves comparable results compared with existing methods. 2 Model Our proposed method is based on sequence-tosequence learning framework, plus two supplementary components namely position-aware attention and gated unit networks, which are used to capture features from the current word and its adjacent words. In this section, we will introduce our model in detail, whose overall architecture is displayed in Figure 1. 2.1 Sequence-to-Sequence Learning For convenience, we first define the notations which will be used next. Let X = [x1, x2, ..., xn] denote a sentence which contains n words, and xi ∈Rd is word embedding which can be learned by a neural language model (Bengio et al., 2003; Mikolov et al., 2013). Let Y = [y1, y2, ..., yn] denote the aspect term labels of sentence X where yi ∈{B, I, O}. we call X and Y as source and target sequence respectively. The sequence-to-sequence learning method is composed of two basic components: encoder and decoder. The encoder reads the embeddings of the source sequence and learns the hidden states H = [h1, h2, ..., hn] for all words, and the commonly used method is the Recurrent Neural Networks (RNN). In our model, we use a bidirectional gated recurrent unit (Bi-GRU) (Cho et al., 2014b) to obtain the hidden states: ht = Bi-GRU(xt, ht−1), (1) where Bi-GRU represents the operations of bidirectional GRU. ht ∈Rse represents the hidden state of word t, and se is the hidden state size of the encoder. The decoder is also a RNN which generates the target sequence Y based on X, and predicts the next label yt based on the context vector ct and all previous labels [y1, y2, ..., yt−1] predicted by the 3540 The union rings are great ht The union rings are great O O O B I 0 2 1 1 2 ~ ct ht Rice B I I I I B I I I I I I is too dry , tuna was not so fresh either . yt-1 st-1 Figure 1: The overall architecture of our model. same decoder. Therefore, the joint probability of the target sequence is defined as: P(Y |X) = n Y t=1 P(yt|y[1:t−1], ct), (2) where y[1:t−1] = [y1, ..., yt−1] and the conditional probability of label yt can be modeled by the decoder, and defined as: P(yt|y[1:t−1], ct) = softmax(Wost + bo), (3) where Wo ∈R|V |×sd, bo ∈R|V |, |V | is the target vocabulary size, and sd is the hidden state size of decoder. st ∈Rsd is the hidden state in the decoder at time step t, and computed as: st = GRU(st−1, ye t−1 ⊕ct), (4) where GRU is a unidirectional GRU. ⊕is the concatenation operation, and ye t−1 is label embedding for label yt−1. The context vector ct will be explained in the next section. It is noticed that the initial hidden state of the decoder is the last hidden state of the encoder. This means that the decoder can be aware of the meaning of the whole source sequence during the decoding process. The encoder and the decoder are jointly trained by minimizing the negative log-likelihood loss: Loss = −1 n n X t=1 lt log(Pθ(yt|y[1:t−1], ct)), (5) where lt is the ground truth label of word t, and θ denotes the parameters of the encoder and the decoder. From Eq. (3) and (4), we can see that the previous label is regarded as input when decoding the label for the current word. However, existing neural network based sequence labeling methods first compute the label scores of each word simultaneously, and obtain the globally optimized label sequence (Collobert et al., 2011). 
Therefore, they do not know the label of previous word when computing the label scores for the current word. By contrast, our proposed model generates the label for current word based on the label of previous word. This is the main difference between our proposed model and existing methods in solving label dependencies for ATE task. 2.2 Position-Aware Attention In ATE task, the adjacent words of each word have important effects on predicting its label, while the distant words make less contribution to its label. The reason is that aspect terms are often surrounded by their modifiers. To the best of our knowledge, the current widely-used attention mechanism usually ignores the influence of positions when measuring the weights of each word. Therefore, we propose a Position-Aware Attention (PAA) model which regularly decreases the weight of word i with respect to the distance between word i and word t. Supposing that we compute the context vector ct at position t, PAA first computes the weight for each word by: αi t = exp(f(st−1, hi)) Pn j=1 exp(f(st−1, hj)), (6) where f(st−1, hi) is the score function which computes the weight of hi given previous decoder hidden state st−1 and the corresponding distance. 3541 The score function is defined as: f(st−1, hi) = 1 d(wi, wt)(Ws[st−1, hi] + bs)vT s , (7) where 1 d(wi,wt) calculates the weight decay rate for word i, Ws ∈R(sd+se)×(sd+se), vs ∈R(sd+se) and bs ∈R(sd+se) are weight matrix, weight vector and bias separately. vT s means the transpose of vs. In our model, we set d(wi, wt) as the function log2(2 + l), where l is the distance between word wi and current word wt. As the example in Figure 1, when computing the context vector for rings, the d(union, rings) is log2(2 + 1). Finally, the context vector ct is computed as a weighted sum of these encoder hidden states: ct = n X i=1 αi thi. (8) We can see that PAA can tune the weights of each word according to the distance. Therefore, compared with vanilla attention, our model can pay more attention to its adjacent words given a word. 2.3 Gated Unit Networks When solving ATE by our proposed method, there exists a consistent one-to-one mapping between source sequence and target sequence. This means that the word representation can be used to help the decoder to generate its label. For example, some kinds of words (e.g. food, place, and people) tend to be aspect term, while other words (e.g. verb, adjective and adverb) have less opportunity to be a part of aspect term. Therefore, we design the Gated Unit Networks (GUN) to incorporate word information into our model. The main component of GUN is a merge gate which integrates information from encoder hidden state ht and decoder hidden state st. To make st and ht have the same dimension sg, we apply fullconnection layers on st and ht to obtain new representations s′ t ∈Rsg and h′ t ∈Rsg. The merge gate is defined as: gt = σ(Wgh′ t + Ugs′ t + bg), (9) where σ is sigmoid function. Wg, Ug ∈Rsg×sg are weight matrices and bg ∈Rsg is bias. The merge gate automatically controls how much information should be taken from ht and st Dataset Training Testing #Sent #Aspect #Sent #Aspect Laptop 3045 2358 800 654 Restaurant 2000 1743 676 622 Table 1: The statistics of two datasets. #Sent and #Aspect mean the number of sentence and aspect term separately. for decoding the label for word t by: rt = gth′ t + (1 −gt)s′ t. (10) Finally, we feed rt to softmax rather than st used in Eq. (3) to obtain the label distribution for word t. 
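As a minimal numpy sketch of the merge gate in Eq. (9)-(10), with randomly initialized weights purely for illustration:

```python
# Minimal numpy sketch of the merge gate in Eq. (9)-(10); the weights are
# randomly initialized here purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
s_g = 4                                   # projected hidden size (toy value)
W_g, U_g = rng.normal(size=(s_g, s_g)), rng.normal(size=(s_g, s_g))
b_g = np.zeros(s_g)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_unit(h_proj, s_proj):
    """Blend encoder state h'_t and decoder state s'_t with a learned gate."""
    g = sigmoid(W_g @ h_proj + U_g @ s_proj + b_g)   # Eq. (9)
    return g * h_proj + (1.0 - g) * s_proj           # Eq. (10)

h_t = rng.normal(size=s_g)   # projected encoder hidden state h'_t
s_t = rng.normal(size=s_g)   # projected decoder hidden state s'_t
r_t = gated_unit(h_t, s_t)   # fed to the softmax layer instead of s_t
print(r_t.round(3))
```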
h′ t plays a more important role than s′ t if gt is greater than 0.5, and vice versa. In such way, GUN can make full use of the corresponding word representation to help the decoder to generate its label. 3 Experiments In this section, we first introduce the datasets and hyper-parameters used in our experiments. Then, we show the baselines for comparison. Finally, we compare the performance of our model with the baselines and analyze the reason why our model work. 3.1 Dataset & Hyperparameter Setting We conduct experiments on two widely used datasets of the ATE task (Li and Lam, 2017; Li et al., 2018; Xu et al., 2018), which are the laptop dataset from SemEval 2014 Task 4 (Pontiki et al., 2014)1 and the restaurant dataset from SemEval 2016 Task 5 (Pontiki et al., 2016)2 respectively. The details of the two datasets are shown in Table 1. All sentences are tokenized by NLTK3. In our experiments, we randomly split 10% of the training data as validation data. We adopt F1-Measure to evaluate the performance of the baselines and our model. In our experiments, all word embeddings are initialized by pre-trained GloVe embeddings (Pennington et al., 2014)4. We also use fastText (Joulin 1http://alt.qcri.org/semeval2014/ task4/ 2http://alt.qcri.org/semeval2016/ task5/ 3https://www.nltk.org/ 4Pre-trained GloVe embeddings can be downloaded from https://nlp.stanford.edu/projects/glove/ 3542 et al., 2016)5 to compute word vector for outof-vocabulary (OOV) words. The label embeddings are initialized randomly. The word and label embedding size are set as 300 and 50 respectively. The parameters of our model are initialized by uniform distribution u ∼(−0.1, 0.1). Both the encoder and decoder have two layers of GRU, and their hidden size is set to 300. We use Adam (Kingma and Ba, 2014) to optimize our model with the learning rate of 0.001, and two momentum coefficients are set to 0.9 and 0.999 respectively. The batch size is set to 8. To avoid overfitting, we use dropout on word embedding and label embedding, and the dropout rate is set to 0.5. 3.2 Baselines To evaluate the effectiveness of our approach, we compare our model with three groups of baselines. The first group of baselines utilizes conditional randomly fields (CRF): • CRF trains a CRF model with basic feature templates6 and word embeddings (Pennington et al., 2014) for ATE. • IHS R&D is the best system of laptop domain, and uses CRF with features extracted using named entity recognition, POS tagging, parsing, and semantic analysis (Chernyshevich, 2014). • NLANGP utilizes CRF with the word, name list and word cluster feature to tackle the task and obtains the best results in the restaurant domain. It also uses the output of a Recurrent Neural Network (RNN) as additional features to enhance their performances (Toh and Su, 2016). • WDEmb first learns embeddings of words and dependency paths based on the optimization objective formalized as w1 + r ≈w2, where w1, w2 are words, r is the corresponding dependency path. Then, the learned embeddings of words and dependency paths are utilized as features in CRF for ATE (Yin et al., 2016). 5https://github.com/facebookresearch/ fastText 6https://sklearn-crfsuite.readthedocs. io/en/latest/ The second group of baselines employs neural networks methods to address the ATE problem: • Bi-LSTM applies different kinds of BiRNN (Elman/Jordan-type RNN) with different kinds of embeddings in the ATE task (Liu et al., 2015). 
• GloVe-CNN7 uses multi-layer Convolution Neural networks (CNN) model with GloVe embeddings to extract aspect-term (Xu et al., 2018). • BiLSTM-CNN-CRF is the state-of-the-art system for named entity recognition task, which adopts CNN and Bi-LSTM to learn character-level and word-level features respectively, and CRF is used to avoid the illegal transition between labels (Reimers and Gurevych, 2017). The third group of baselines are joint methods for aspect term and opinion term extraction, and they take advantages of opinion label information to improve their performances. • MIN is an LSTM-based deep multi-task learning framework for ATE, opinion word extraction and sentimental sentence classification. It has two LSTMs equipped with extended memories, and neural memory operations are designed for jointly handling the extraction tasks of aspects and opinions via memory interactions (Li and Lam, 2017). • CMLA is made up of multi-layer attention network, where each layer consists of a couple of attention with tensor operators. One attention is for extracting aspect terms, while the other is for extracting opinion terms (Wang et al., 2017). • RNCRF 8 learns structure features for each word from parse tree by Recursive Neural Networks, and the learned features are fed to CRF to decode the label for each word (Wang et al., 2016). • HAST tackles ATE by exploiting two useful clues, namely opinion summary and aspect detection history (Li et al., 2018). 7To make it fair, we compare our method with GloVeCNN which only uses GloVe embeddings because our model just uses Glove embeddings but DE-CNN uses additional domain embeddings trained with large domain corpus. 8They also use handcraft features to improve their performances. 3543 Method Laptop Restaurant CRF 74.01 69.56 IHS RD 74.55 NLANGP 72.34 WDEmb 75.16 Bi-LSTM 75.25 71.26 GloVe-CNN 77.67 72.08 BiLSTM-CNN-CRF 77.80 72.50 MIN♯ 77.58 73.44 CMLA♯ 77.80 72.77∗ RNCRF♯ 78.42 69.72∗ HAST♯ 79.52 73.61 Seq2Seq4ATE 80.31 75.14 Table 2: The performances (F1:%) of all baselines and our model. All results of baselines are taken from their papers, and “-” means that the result is not available. The model with ♯means that it uses opinion information. The result with ∗is from HAST. 3.3 Results Discussion In this section, we report the performances of all models and analyze the advantages and disadvantages of them. The results of baselines and our model are displayed in Table 2. From the first part, we can see that CRF model obtains the worst performances on both datasets. Compared with the CRF model, IHS RD and NLANGP achieves better performances because they add more handcraft features to CRF. This shows that useful features are key factors for CRF based methods. Different from three previous approaches, WDEmb only uses word embeddings as inputs and performs better than IHS RD model. In fact, the CRF model also uses GloVe embeddings, but its results are much worse than WDEmb. The reason may be that embeddings used in WDEmb are trained with parsing information which plays important roles in ATE task. For example, the subject and object have a higher probability to be an aspect term than other components. We can find that the CRF based methods are heavily dependent on the quality of features. However, it is hard to extract effective features, and this prevents CRF based methods from improving their results. 
From the second part, we can observe that the Bi-LSTM model obtains the worst performances on both datasets compared with the other neural networks based methods. Although BiLSTM model only takes embeddings as features, it achieves comparable results compared with the best CRF based methods. The main reason is that Bi-LSTM can learn dependencies between words, and this phenomenon demonstrates that neural networks based methods have bigger advantages than CRF-based methods in solving the ATE task. Compared with Bi-LSTM, the GloVeCNN model improves 2.42% and 0.82% on laptop and restaurant datasets respectively. It is noticed that the GloVe-CNN just extracts features in a fixed-size window of each word for predicting its label. That is to say, the adjacent words are key factors for ATE, and this important information is also incorporated into our model by PAA. The BiLSTM-CNN-CRF model takes advantages of Bi-LSTM and CNN and achieves better performances than both systems. This shows that BiLSTM and CNN can complement each other. From the third part, we can see that MIN, CMLA, RNCRF and HAST achieve good performances on both datasets. This implies that joint learning is a new direction for ATE task. However, they take advantage of opinion information to improve their performances, and the opinion information is not accessible in many situations. It is noticed that HAST also use the information of previous words to predict the current label, and they find that previous word information (not the predicted label of the previous word) is important to model the label dependencies. Finally, we can see that Seq2Seq4ATE raises its performances about 0.79% and 1.53% on two datasets compared with HAST. In addition, Seq2Seq4ATE does not take advantage of any extra features such as handcraft/syntactic features and opinion information. This demonstrates the effectiveness of our model. In a word, our proposed method can make use of the overall meaning of the sentence to better deal with polysemous words (e.g. memory) and remedy the label dependencies through decoding current word conditioned on previous label. In addition, we propose the PAA and GUN to make Seq2seq learning method better suit the ATE task. 3.4 Ablation Study In this section, we study the effectiveness of the key components (e.g. PAA and GUN) in our proposed model and conduct an extensive ablation study. There are two main ablation baselines: (1)Seq2Seq4ATE-w/o-PAA removes the PAA from the Seq2Seq4ATE, (2)Seq2Seq4ATE-w/o3544 Method Laptop Restaurant Seq2Seq4ATE-w/o-GUN 75.43 71.93 Seq2Seq4ATE-w/o-PAA 74.45 72.66 Seq2Seq+VAM 77.39 72.47 Seq2Seq4ATE 80.31 75.14 Table 3: The performances (F1:%) of our model’s variants on two datasets. GUN removes the GUN from the Seq2Seq4ATE. In addition, we also use vanilla attention mechanism (VAM) to compute the context vector (named Seq2Seq+VAM) for verifying the advantage of PAA. Table 3 reports the results of Seq2Seq4ATE and its variants. From Table 3, we can first observe that both PAA and GUN are important components in our model because removing any of them from our model would result in heavily drop in performances on both datasets. Secondly, we can see that Seq2Seq4ATE-w/oGUN performs better on the laptop dataset but Seq2Seq4ATE-w/o-PAA performs better on the restaurant dataset. The reason may be that the aspect terms in the laptop domain are fixed words such as CPU, memory and etc. But the aspect terms in the restaurant domain are more arbitrary such as The Mom Kitchen, Hot Pizzeria and etc. 
Therefore, GUN is more important in the laptop domain because it can incorporate the word representation into Seq2Seq by merge gate, but PAA is more important for the restaurant domain because it can leverage the adjacent words of each word to help predict its label. In addition, we also find that the Seq2Seq4ATE removing both PAA and GUN performs very bad in both datasets. We think the main reason is that the number of aspect term is much smaller compared with all words. Therefore, our model can hardly learn useful information from data. We analyze the datasets and find that the words of aspect term make up 8.8% and 6.9% of the training data of restaurant and laptop domain. Finally, we can see that Seq2Seq4ATE improves about 2.92% and 2.67% on laptop and restaurant compared with Seq2Seq+VAM. The great improvements again prove that the adjacent words play important roles in ATE. The reason is that the weights of distant words in VAM may be large in VAM. However, the weights of distant words in PAA will be heavily decayed by the position information and the weights of adjacent words Method Laptop Restaurant F1 IT-Rate F1 IT-Rate BiLSTM 75.08 6.72 68.41 8.98 BiLSTM+CRF 77.72 3.97 71.94 3.69 Seq2Seq4ATE 80.31 0.02 75.14 0.03 Table 4: The performances (F1:%) and illegal transition rate (IT-Rate:%) of three models. will be decayed little because d(wi, wt) is proportional to the distance. 3.5 Analysis of Label Dependencies In this section, we conduct experiments to validate the effectiveness of our proposed model in handling label dependencies. Collobert et al. (2011) have demonstrated that it is important to model label dependencies in sequence labeling task. To validate the effectiveness of our model in addressing this problem, we compare our model Seq2Seq4ATE with two models: BiLSTM9 and BiLSTM+CRF. BiLSTM does not take the label dependencies into account, and BiLSTM+CRF uses transition matrix (Collobert et al., 2011) to address label dependencies problem. To evaluate the effectiveness of model in modeling label dependencies, we propose an evaluation criterion: Illegal Transition Rate (IT-Rate) which is computed by: IT-Rate = #illegal transition #aspect term × 100 where “#illegal transition” is the number of illegal transition (e.g. O→I) occurrences in predicted label sequence, and “#aspect term” is the number of aspect term. Generally speaking, lower IT-Rate means better performance in modeling label dependencies. Table 4 shows the results of three models on testing data. First, we can observe that the higher F1 is accompanied by lower IT-Rate. This once again demonstrates the importance of modeling label dependencies. Secondly, we can observe that BiLSTM+CRF decreases IT-Rate about 2.75% and 5.29% on two datasets compared with the BiLSTM model. This indicates that the transition matrix is a good way to model label dependencies. However, they also do not utilize the previous label to improve their performances directly. The most impressive results are that the IT-Rate of Seq2Seq4ATE is 0.02% and 0.03% which almost can be ignored compared with BiLSTM and BiL9We only use GloVe embeddings for words and utilize the same hyper-parameters used in Seq2Seq4ATE. Thus, its ATE results are not the same with LSTM in Table 2. 3545 STM+CRF. The main reason is that Seq2Seq4ATE leverages previous label information yt−1 to decode label yt for word t. Consequently, yt is compatible with yt−1. This indicates the advantages of our model in handling label dependencies compared with previous methods. 
4 Related Work Aspect-based sentiment analysis (ABSA) is a subfield of sentiment analysis (Hu and Liu, 2004; Pontiki et al., 2014, 2015, 2016). In this paper, we only focus on the ATE task, and we solve this task by Seq2Seq learning which is often used in the generative task. We will introduce the recent study progresses in ATE and Seq2Seq learning. 4.1 Aspect Term Extraction Hu and Liu (2004) first propose to evaluate the sentiment of different aspects in a document, and all aspects are predefined artificially. The key step is to extract all possible aspects of a document (Zhuang et al., 2006; Popescu and Etzioni, 2007; Mei et al., 2007; Titov and McDonald, 2008; He et al., 2017). However, predefined aspects may not cover all the aspects appearing in a document. Therefore, many works turn to extract all possible aspect terms in a document. The mainstream methods for aspect term extraction include the unsupervised method and supervised method. The typical unsupervised methods include bootstrapping (Wang and Wang, 2008), double propagation (Qiu et al., 2011) and others. The supervised methods contain Hidden Markov Model (Jin et al., 2009), Conditional Random Fields (Jakob and Gurevych, 2010; Li et al., 2010; Yang and Cardie, 2013; Chernyshevich, 2014; Toh and Su, 2016; Yin et al., 2016; Shu et al., 2017) and other approaches (Wu et al., 2009; Ma and Wan, 2010; Liu et al., 2013). With the developments of deep learning, neural networks based method such as recurrent NN (Liu et al., 2015; Li and Lam, 2017), recursive NN (Wang et al., 2016), convolution NN (Poria et al., 2016; Xu et al., 2018) and attention model (Wang et al., 2017) have achieved good performances in ATE. In addition, many works utilize multi-task learning (Yang and Cardie, 2013; Wang et al., 2016, 2017; Li et al., 2018) and other resources (Xu et al., 2018) to improve their performances. 4.2 Sequence-to-Sequence Learning Sequence-to-sequence model is a generative model which is proposed by (Cho et al., 2014b; Sutskever et al., 2014), and first used in the field of machine translation. In addition, Cho et al. (2014a) improves the decoding by beam-search. However, vanilla Seq2Seq model performs worse in generating long sentences. The reason is that the encoder needs to compress the whole sentence into a fix length representation. To address this problem, Bahdanau et al. (2014) introduce an attention mechanism which selects important parts of the source sentence with respect to the previous hidden state in decoding the next state. Afterward, some studies focus on improving attention mechanism (Luong et al., 2015). So far, Seq2Seq models and attention mechanism have been applied to many fields such as dialog (Serban et al., 2016) generation, text summarization (Nallapati et al., 2016) and etc. In this paper, we first attempt to formalize the ATE as a sequence-to-sequence learning task because it can make full use of both the meaning of the sentence and label dependencies compared with existing methods. Furthermore, we design a position-aware attention model and gated unit networks to make Seq2Seq model better suit to this task. Generally, Seq2Seq model is timeconsuming in many fields because the target vocabulary size is very large, but the time costs in ATE is acceptable because the target vocabulary size is 3. 5 Conclusion and Future Work In this paper, we propose a sequence-to-sequence learning based approach to address the ATE task. 
Our proposed method can take full advantage of the meaning of the whole sentence and the previous label during the decoding process. Furthermore, we find that each word’s adjacent words and its own word representation are key factors for its label, and we propose a PAA and GUN model to incorporate two kinds of information into our model. The experimental results demonstrate that our approach can achieve comparable performances on ATE task. In our future work, we plan to apply our approach to other sequence labeling tasks, such as named entity recognition, word segmentation and so on. 3546 Acknowledgments We thank reviewers for helpful comments. Our work is supported by the National Key Research and Development Program of China under Grant No.2017YFB1002101 and National Natural Science Foundation of China under Grant No.61433015. The corresponding author of this paper is Houfeng Wang. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic language model. Journal of machine learning research, pages 1137–1155. Maryna Chernyshevich. 2014. Ihs r&d belarus: Crossdomain extraction of product features using crf. In Proceedings of the 8th international workshop on semantic evaluation (SemEval 2014), pages 309– 313. Kyunghyun Cho, Bart Van Merri¨enboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014a. On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259. Kyunghyun Cho, Bart Van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014b. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of machine learning research, pages 2493–2537. Ruidan He, Wee Sun Lee, Hwee Tou Ng, and Daniel Dahlmeier. 2017. An unsupervised neural attention model for aspect extraction. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 388–397. Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 168–177. Niklas Jakob and Iryna Gurevych. 2010. Extracting opinion targets in a single-and cross-domain setting with conditional random fields. In Proceedings of the 2010 conference on empirical methods in natural language processing, pages 1035–1045. Wei Jin, Hung Hay Ho, and Rohini K Srihari. 2009. A novel lexicalized hmm-based learning framework for web opinion mining. In Proceedings of 2009 International Conference on Machine Learning, pages 465–472. Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, H´erve J´egou, and Tomas Mikolov. 2016. Fasttext.zip: Compressing text classification models. arXiv preprint arXiv:1612.03651. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Fangtao Li, Chao Han, Minlie Huang, Xiaoyan Zhu, Ying-Ju Xia, Shu Zhang, and Hao Yu. 2010. Structure-aware review mining and summarization. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistic, pages 653– 661. 
Xin Li, Lidong Bing, Piji Li, Wai Lam, and Zhimou Yang. 2018. Aspect term extraction with history attention and selective transformation. arXiv preprint arXiv:1805.00760. Xin Li and Wai Lam. 2017. Deep multi-task learning for aspect term extraction with memory interaction. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2886–2892. Kang Liu, Heng Li Xu, Yang Liu, and Jun Zhao. 2013. Opinion target extraction using partially-supervised word alignment model. In Proceedings of 2013 International Joint Conference on Artificial Intelligence, pages 2134–2140. Pengfei Liu, Shafiq Joty, and Helen Meng. 2015. Finegrained opinion mining with recurrent neural networks and word embeddings. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1433–1443. Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attentionbased neural machine translation. arXiv preprint arXiv:1508.04025. Tengfei Ma and Xiaojun Wan. 2010. Opinion target extraction in chinese news comments. In Proceedings of The 23th International Conference on Computational Linguistics, pages 782–790. Qiaozhu Mei, Xu Ling, Matthew Wondra, Hang Su, and ChengXiang Zhai. 2007. Topic sentiment mixture: modeling facets and opinions in weblogs. In Proceedings of the 16th international conference on World Wide Web, pages 171–180. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119. 3547 Ramesh Nallapati, Bowen Zhou, Caglar Gulcehre, Bing Xiang, et al. 2016. Abstractive text summarization using sequence-to-sequence rnns and beyond. arXiv preprint arXiv:1602.06023. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532– 1543. Maria Pontiki, Dimitris Galanis, Haris Papageorgiou, Ion Androutsopoulos, Suresh Manandhar, ALSmadi Mohammad, Mahmoud Al-Ayyoub, Yanyan Zhao, Bing Qin, Orph´ee De Clercq, et al. 2016. Semeval-2016 task 5: Aspect based sentiment analysis. In Proceedings of the 10th international workshop on semantic evaluation (SemEval-2016), pages 19–30. Maria Pontiki, Dimitris Galanis, Haris Papageorgiou, Suresh Manandhar, and Ion Androutsopoulos. 2015. Semeval-2015 task 12: Aspect based sentiment analysis. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 486–495. Maria Pontiki, John Galanis, Dimitris Pavlopoulos, Haris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar. 2014. Semeval-2014 task 4: Aspect based sentiment analysis. In Proceedings of the 8th international workshop on semantic evaluation (SemEval-2014), pages 19–30. Ana-Maria Popescu and Orena Etzioni. 2007. Extracting product features and opinions from reviews. In Natural language processing and text mining, pages 9–28. Soujanya Poria, Erik Cambria, and Alexander Gelbukh. 2016. Aspect extraction for opinion mining with a deep convolutional neural network. Knowledge-Based Systems, pages 42–49. Guang Qiu, Bing Liu, Jiajun Bu, and Chun Chen. 2011. Opinion word expansion and target extraction through double propagation. Computational linguistics, pages 9–27. Nils Reimers and Iryna Gurevych. 2017. Reporting score distributions makes a difference: Performance study of lstm-networks for sequence tagging. 
arXiv preprint arXiv:1707.09861. Iulian Vlad Serban, Alessandro Sordoni, Yoshua Bengio, Aaron C Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using generative hierarchical neural network models. In Proceedings of The Thirtieth AAAI Conference on Artificial Intelligence, pages 3776–3784. Lei Shu, Hu Xu, and Bing Liu. 2017. Lifelong learning crf for supervised aspect extraction. arXiv preprint arXiv:1705.00251. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104–3112. Ivan Titov and Ryan McDonald. 2008. A joint model of text and aspect ratings for sentiment summarization. Proceedings of the 46th Annual Meeting of the Association for Computational Linguistic, pages 308–316. Zhiqiang Toh and Jian Su. 2016. Nlangp at semeval2016 task 5: Improving aspect based sentiment analysis using neural network features. In Proceedings of the 10th international workshop on semantic evaluation (SemEval-2016), pages 282–288. Bo Wang and Houfeng Wang. 2008. Bootstrapping both product features and opinion words from chinese customer reviews with cross-inducing. In Proceedings of the Third International Joint Conference on Natural Language Processing: Volume-I, pages 289–295. Wenya Wang, Sinno Jialin Pan, Daniel Dahlmeier, and Xiaokui Xiao. 2016. Recursive neural conditional random fields for aspect-based sentiment analysis. arXiv preprint arXiv:1603.06679. Wenya Wang, Sinno Jialin Pan, Daniel Dahlmeier, and Xiaokui Xiao. 2017. Coupled multi-layer attentions for co-extraction of aspect and opinion terms. In Proceedings of The Thirty-First AAAI Conference on Artificial Intelligence, pages 3316–3322. Yuanbin Wu, Qi Zhang, Xuanjing Huang, and Lide Wu. 2009. Phrase dependency parsing for opinion mining. In Proceedings of the 2009 conference on empirical methods in natural language processing, pages 1533–1541. Hu Xu, Bing Liu, Lei Shu, and Philip S Yu. 2018. Double embeddings and cnn-based sequence labeling for aspect extraction. arXiv preprint arXiv:1805.04601. Bishan Yang and Claire Cardie. 2013. Joint inference for fine-grained opinion extraction. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1640–1649. Yichun Yin, Furu Wei, Li Dong, Kaimeng Xu, Ming Zhang, and Ming Zhou. 2016. Unsupervised word and dependency path embeddings for aspect term extraction. arXiv preprint arXiv:1605.07843. Li Zhuang, Feng Jing, and Xiao-Yan Zhu. 2006. Movie review mining and summarization. In Proceedings of the 15th ACM international conference on Information and knowledge management, pages 43–50.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3548–3557 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3548 Aspect Sentiment Classification Towards Question-Answering with Reinforced Bidirectional Attention Network Jingjing Wang1, Changlong Sun2, Shoushan Li1,∗, Xiaozhong Liu2, Min Zhang1, Luo Si2, Guodong Zhou1 1School of Computer Science and Technology, Soochow University, China 2Alibaba Group, China [email protected], {lishoushan, minzhang, gdzhou}@suda.edu.cn, {changlong.scl, xiaozhong.lxz, luo.si}@alibaba-inc.com Abstract In the literature, existing studies on aspect sentiment classification (ASC) focus on individual non-interactive reviews. This paper extends the research to interactive reviews and proposes a new research task, namely Aspect Sentiment Classification towards QuestionAnswering (ASC-QA), for real-world applications. This new task aims to predict sentiment polarities for specific aspects from interactive QA style reviews. In particular, a high-quality annotated corpus is constructed for ASC-QA to facilitate corresponding research. On this basis, a Reinforced Bidirectional Attention Network (RBAN) approach is proposed to address two inherent challenges in ASC-QA, i.e., semantic matching between question and answer, and data noise. Experimental results demonstrate the great advantage of the proposed approach to ASC-QA against several state-of-the-art baselines. 1 Introduction As a fine-grained sentiment analysis task, Aspect Sentiment Classification (ASC) aims to predict sentiment polarities (e.g., positive, negative, neutral) towards given particular aspects from a text and has been drawing more and more interests in natural language processing and computational linguistics over the past few years (Jiang et al., 2011; Tang et al., 2016b; Wang et al., 2018a). However, most of the existing studies on ASC focus on individual non-interactive reviews, such as customer reviews (Pontiki et al., 2014) and tweets (Mitchell et al., 2013; Vo and Zhang, 2015; Dong et al., 2014). For example, in a customer review “The food is delicious, but ambience is badly in need of improvement.”, the customer mentions two aspects, i.e., “food” and “ambience”, and expresses positive sentiment towards the former and negative sentiment towards the latter. ∗Corresponding author Question-Answering (QA) Style Review - Question: Is [battery life] durable? How about [operating speed] of the phone? - Answer: Yes, very durable but quite slow and obtuse. Aspect Sentiment Classification Towards QA - Input: QA text pair with given aspects - Output: [battery life]: Positive [operating speed]: Negative Figure 1: An example for illustrating the proposed task of Aspect Sentiment Classification towards QuestionAnswering (ASC-QA). Recently, a new interactive reviewing form, namely “Customer Question-Answering (QA)”, has become increasingly popular and a large-scale of such QA style reviews (as shown in Figure 1) could be found in several famous e-commerce platforms (e.g., Amazon and Taobao). Compared to traditional non-interactive customer reviews, interactive QA style reviews are more reliable and convincing because answer providers are randomly selected from the real customers who have purchased the product (Shen et al., 2018a). To well automatically-understand the QA style reviews, it’s worthwhile to perform ASC on the QA style reviews. 
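For concreteness, a single ASC-QA instance from Figure 1 could be represented as follows; the field names here are purely illustrative and do not reflect any released data format.

```python
# One plausible representation of the Figure 1 instance; the field names are
# ours and do not reflect any released data format.
example = {
    "question": "Is battery life durable? How about operating speed of the phone?",
    "answer": "Yes, very durable but quite slow and obtuse.",
    "aspect_labels": {
        "battery life": "positive",
        "operating speed": "negative",
    },
}

# The classifier sees one (question, answer, aspect) triple at a time:
for aspect, polarity in example["aspect_labels"].items():
    x = (example["question"], example["answer"], aspect)
    print(x, "->", polarity)
```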
However, we believe that Aspect Sentiment Classification towards QA (ASC-QA) is not easy work and this novel task faces at least two major challenges. On one hand, different from traditional non-interactive reviews with a single sequence structure, interactive QA style reviews consist of two parallel units, i.e., question and answer. Thus, it’s rather difficult to infer the sentiment polarity towards an aspect based on a single question or single answer. Take Figure 1 as an example. A well-behaved approach to ASC-QA should match each question and answer bidirectionally so as to correctly determine the sentiment polarity towards a specific aspect. 3549 On the other hand, different from common QA matching tasks such as question-answering (Shen et al., 2018a), ASC-QA focuses on extracting sentiment information towards a specific aspect and may suffer from much aspect-irrelevant noisy information. For instance, in Figure 1, although the words in the answer (e.g., “quite slow”, “obtuse”) and the question (e.g., “operating speed”) are relevant to aspect “operating speed”, they are noisy for the other aspect “battery life”. These noisy words might provide wrong signals and mislead the model into assigning a negative sentiment polarity to aspect “battery life” and vice versa. Therefore, a well-behaved approach to ASC-QA should alleviate the effects of noisy words for a specific aspect in both question and answer during model training. In this paper, we propose a reinforced bidirectional attention network approach to tackle the above two challenges. Specifically, we first propose a word selection model, namely Reinforced Aspect-relevant Word Selector (RAWS), to alleviate the effects of noisy words for a specific aspect through discarding noisy words and only select aspect-relevant words in a word sequence. On the basis of RAWS, we then develop a Reinforced Bidirectional Attention Network (RBAN) approach to ASC-QA, which employs two fundamental RAWS modules to perform word selection over the question and answer text respectively. In this way, RBAN is capable of not only addressing the semantic matching problem in the QA text pair, but also alleviating the effects of noisy words for a specific aspect in both the question and answer sides. Finally, we optimize RBAN via a reinforcement learning algorithm, i.e., policy gradient (Williams, 1992; Sutton et al., 1999). The main contributions of this paper are in two folds: • We propose a new research task, i.e., Aspect Sentiment Classification towards QuestionAnswering (ASC-QA), and construct a highquality annotated benchmark corpus for this task. • We propose an innovative reinforced bidirectional attention network approach to ASC-QA and validate the effectiveness of this approach through extensive experiments. 2 Data Collection and Annotation We collect 150k QA style reviews from Taobao1, the most famous electronic business platform in 1http://www.taobao.com China. The QA style reviews consist of three different domains: Bags, Cosmetics and Electronics. Since corpus annotation is labor-expensive and time-consuming, we randomly select 10k QA text pairs from each domain to perform annotation. Specifically, following Pontiki et al. (2014), we define an aspect at two levels of granularity, i.e., aspect term and aspect category. Besides, following Pontiki et al. (2015), we define three sentiment polarities, i.e., positive, negative and neutral (mildly positive or mildly negative) towards both aspect terms and categories. 
In this way, each QA text pair is annotated with two tuples, i.e., (aspect term, polarity), (aspect category, polarity). For Tuple (Aspect Term, Polarity), we annotate the single/multi-word terms together with its corresponding polarities inside each QA text pair according to four main guidelines as follows: (1) We only annotate the aspect term when the related question and answer are matched. For example, the QA text pair in Figure 1 is annotated as (“battery life”, positive) and (“operating speed”, negative) due to words “durable”, “slow” and “obtuse”. However, in E1, the answer does not reply to the question correctly and thus the aspects of “macos” and “screen” will not be annotated. E1: Q: Is macos good? How about the screen? A: The shopkeeper is very warm-hearted. (2) We only annotate the aspect term towards which an opinion is expressed. For example, in E2, the answer conveys only objective information without expressing opinions towards “phone” and thus “phone” will not be annotated. However, “case” will be annotated and tagged as neutral. E2: Q: How is this phone? How about the case? A: I bought this phone yesterday. Case is okay nothing great. (3) We only annotate aspect terms which explicitly name particular aspects. For example, in E3, “this”, “it” will not be annotated. E3: Q: Is this expensive? Did anybody buy one? A: Of course, it’s quite expensive. (4) When one aspect term has two different descriptions in both question and answer, the annotated aspect term should be consistent with the question. For example, in E4, the annotated aspect term should be “battery life” instead of “battery”. E4: Q: Is battery life durable? A: Yes, this battery is very durable. 3550 Domains Aspect Categories Bags Size, Price, Appearance, Quality, Weight, Certified Products, Smell, Accessories, Material, Life Timer, Style, Workmanship, Color, Stain Resistant, Practicality Cosmetics Price, Efficacy, Moisturizing Performance, Certified Products, Adverse Reaction, Exfoliator, Texture, Long Lasting, Smell, Material, Noticeable Color, Quality, Colour, Touch, Skin Whitening, Acne Electronics System Performance, Appearance, Battery, Computing (e.g., cpu, gpu, tpu etc.), Certified Products, Quality, IO (e.g., keyboard, screen, etc.), Price, Storage, Function (e.g., touch id, waterproof etc.) Table 1: The defined aspect categories in each domain. For Tuple (Aspect Category, Polarity), we first define2 15, 16, 10 aspect categories (as shown in Table 1) for the domains of Bags, Cosmetics and Electronics respectively. Then, we annotate aspect categories (chosen from the above predefined category list) discussed in each QA text pair according to similar guidelines for aspect term. For example, there are two aspect categories discussed in Figure 1, i.e., Battery and System Performance, and annotated as (Battery, positive) and (System Performance, negative) respectively. Finally, we discard the QA text pairs which have no annotated term and category. We assign two annotators to tag each QA text pair and the Kappa consistency check value of the annotation is 0.81. When two annotators cannot reach an agreement, an expert will make the final decision, ensuring the quality of data annotation. Table 2 shows the statistics of the final corpus. To motivate future investigations for this track of research, the annotated corpus consisting of three domains are released in github3. 
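To make the annotation scheme concrete, the snippet below shows how the Figure 1 example could be represented as a single annotated record carrying both tuple types. This is only an illustrative sketch: the field names (question, answer, aspect_terms, aspect_categories, etc.) are our own assumptions and not the schema of the released corpus.

```python
# A hypothetical ASC-QA record for the Figure 1 example. Field names are
# illustrative assumptions, not the format of the released dataset.
example_record = {
    "domain": "Electronics",
    "question": "Is [battery life] durable? How about [operating speed] of the phone?",
    "answer": "Yes, very durable but quite slow and obtuse.",
    "aspect_terms": [
        {"term": "battery life", "polarity": "positive"},
        {"term": "operating speed", "polarity": "negative"},
    ],
    "aspect_categories": [
        {"category": "Battery", "polarity": "positive"},
        {"category": "System Performance", "polarity": "negative"},
    ],
}
```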
2Aspect categories are defined and summarized through preliminary annotation. 3https://github.com/jjwangnlp/ASC-QA

Domains | Pos. | Neg. | Neu. | All | #Cat.
Bags | 2503 | 724 | 453 | 3680 | 15
Cosmetics | 2834 | 956 | 503 | 4293 | 16
Electronics | 2742 | 821 | 531 | 4094 | 10
Table 2: Corpus statistics (Pos., Neg. and Neu. denote the number of positive, negative and neutral instances for aspect terms; #Cat. denotes the number of aspect categories).

3 Our Approach

In this section, we first introduce the word selection model, i.e., the Reinforced Aspect-relevant Word Selector (RAWS) illustrated in Figure 2, which functions as a fundamental module of our approach to alleviate the effects of noisy words (Section 3.1). On the basis of RAWS, we present the Reinforced Bidirectional Attention Network (RBAN) approach to ASC-QA illustrated in Figure 3, which employs two RAWS modules to perform word selection over the question and answer text respectively (Section 3.2). Finally, we introduce our optimization strategy via policy gradient and back-propagation (Section 3.3).

3.1 Reinforced Aspect-relevant Word Selector (RAWS)

Figure 2 shows the framework of the word selection model, i.e., the Reinforced Aspect-relevant Word Selector (RAWS). Given an input word sequence x = {x_1, .., x_E}, RAWS aims to discard noisy words and select only the aspect-relevant words inside x for a specific aspect x_aspect4. The output of RAWS is an equal-length sequence of one-hot variables o = [o_1, .., o_E], where o_i = 1 if the word x_i is selected and o_i = 0 otherwise. In this way, RAWS effectively functions as a "hard" attention mechanism and thus cannot be directly optimized through back-propagation, due to the non-differentiability problem discussed in Xu et al. (2015) and Shen et al. (2018b). To address this issue, we employ a reinforcement learning algorithm, i.e., policy gradient (Sutton et al., 1999), to model RAWS. In this fashion, RAWS acts as an agent which decides whether or not to select each word by following a policy network, as described below.

Policy Network. In this paper, we adopt a stochastic policy network p_π which provides a conditional probability distribution p_π(o|·) over the action sequence o = [o_1, .., o_E]. Here, o is exactly the output of RAWS, and o_i = 1 indicates that x_i is selected while o_i = 0 indicates that x_i is discarded. More specifically, we adopt an LSTM (Graves, 2013), denoted LSTM_p, to construct the policy network p_π for performing word selection over the word sequence x. In order to differentiate whether a word is selected or discarded, inspired by Lei et al. (2016), we incorporate the action result o_i into the input v̂_i of LSTM_p at time-step i and compute the hidden state h_i ∈ R^d of word x_i as:

h_i = LSTM_p(v̂_i),  v̂_i = v_i ⊕ (o_i ⊗ e)    (1)

4The aspect denotes an aspect term or an aspect category, as introduced in Section 2.

Figure 2: The framework of the word selection model, i.e., Reinforced Aspect-relevant Word Selector (RAWS).

where v_i ∈ R^d is the word embedding of word x_i; ⊕ denotes vector concatenation and ⊗ denotes element-wise multiplication; o_i ⊗ e = [o_i; ..; o_i], that is, o_i is tiled d′ times across the row, where e ∈ R^{d′} is a column vector of d′ ones and d′ is set to 50, tuned on the development set; v̂_i ∈ R^{d+d′}.
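To make Eq. (1) concrete, the following is a minimal PyTorch sketch of one time-step of the RAWS policy LSTM, assuming d = 200 and d′ = 50 as stated above. The module and variable names are ours; this is an interpretation of the equation, not the authors' implementation.

```python
import torch
import torch.nn as nn

class RAWSPolicyStep(nn.Module):
    """One time-step of the RAWS policy LSTM, following Eq. (1):
    the previous action o_i is tiled d' times and concatenated to the
    word embedding v_i before it is fed to the LSTM cell."""

    def __init__(self, d: int = 200, d_prime: int = 50):
        super().__init__()
        self.d_prime = d_prime
        self.cell = nn.LSTMCell(input_size=d + d_prime, hidden_size=d)

    def forward(self, v_i, o_i, h_prev, c_prev):
        # v_i: (batch, d) word embedding; o_i: (batch,) 0/1 action result
        tiled = o_i.float().unsqueeze(1).expand(-1, self.d_prime)  # o_i ⊗ e
        v_hat = torch.cat([v_i, tiled], dim=1)                     # v_i ⊕ (o_i ⊗ e)
        h_i, c_i = self.cell(v_hat, (h_prev, c_prev))
        return h_i, c_i
```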
In principle, the policy network p_π uses a Reward to guide policy learning over the word sequence x. It samples an Action o_i with probability p_π(o_i|s_i; θ_r) at each State s_i. In this paper, state, action and reward are defined as follows.

• State. The state s_i at the i-th time-step should provide adequate information for deciding whether to select a word for aspect x_aspect. Thus, the state s_i ∈ R^{4d} is composed of four parts, i.e., h_{i−1}, c_{i−1}, v_i and v_a, defined as s_i = h_{i−1} ⊕ c_{i−1} ⊕ v_i ⊕ v_a, where c_{i−1} is the memory state of LSTM_p and v_a ∈ R^d is the aspect vector5 of x_aspect.

• Action. p_π samples an action o_i ∈ {0, 1} with conditional probability p_π(o_i|s_i; θ_r), which can be cast as a binary classification problem. Thus, we use a logistic function to compute p_π(o_i|s_i; θ_r):

o_i ∼ p_π(o_i|s_i; θ_r) = o_i · sigmoid(W_r s_i + b_r) + (1 − o_i) · (1 − sigmoid(W_r s_i + b_r))    (2)

where θ_r = {W_r ∈ R^{1×4d}, b_r ∈ R} are the parameters to be learned and ∼ denotes the discrete action sampling operation.

• Reward. In order to select aspect-relevant words inside the word sequence x, we define an aspect-relevant reward R based on the cosine similarity between the aspect vector v_a ∈ R^d of x_aspect and the last hidden state h_E ∈ R^d of LSTM_p after p_π finishes all actions, i.e.,

R = log cos(v_a, h_E) + log p(y|(P, x_aspect)) − γ E′/E    (3)

where log cos(v_a, h_E) = log (v_a · h_E / (||v_a|| ||h_E||)) is a cosine delay reward. Besides, it is worth mentioning that we regard the log-likelihood log p(y|(P, x_aspect)) from the classification phase (cf. Eq. (10)) as another delay reward. This likelihood reward, combined with the above cosine reward, provides adequate supervision signals to guide RAWS to select aspect-relevant and also discriminative words (e.g., the sentiment words "slow" and "obtuse" for aspect "operating speed") for performing ASC-QA. γ E′/E is an additional term for limiting the number of selected words, where E′ = Σ_{i=1}^{E} o_i denotes the number of selected words and γ is a penalty weight (tuned to 0.01 on the development set).

5If the aspect is a single word like "food", the aspect vector is its word embedding, while if the aspect is a multi-word expression like "operating speed" in Figure 1, the aspect vector is the average of its constituent word embeddings, following Tang et al. (2016b).
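The sketch below illustrates how the action sampling of Eq. (2) and the delayed reward of Eq. (3) could be computed in PyTorch. It is a simplified reading of the equations rather than the authors' code; in particular, clamping the cosine term before the logarithm is our own assumption.

```python
import torch
import torch.nn.functional as F

def sample_action(s_i, W_r, b_r):
    """Sample o_i ~ p_pi(o_i | s_i; theta_r) as in Eq. (2).
    s_i: (batch, 4d); W_r: (1, 4d); b_r: scalar."""
    p_select = torch.sigmoid(s_i @ W_r.t() + b_r).squeeze(-1)  # P(o_i = 1 | s_i)
    o_i = torch.bernoulli(p_select)                            # discrete sampling
    log_prob = o_i * torch.log(p_select + 1e-8) \
        + (1 - o_i) * torch.log(1 - p_select + 1e-8)
    return o_i, log_prob

def aspect_relevant_reward(v_a, h_E, log_p_y, actions, gamma=0.01):
    """Delayed reward of Eq. (3): cosine term + classification log-likelihood
    minus a penalty on the number of selected words.
    actions: (batch, E) 0/1 action sequence; log_p_y: (batch,)."""
    cos = F.cosine_similarity(v_a, h_E, dim=-1)
    cos = cos.clamp(min=1e-8)          # guard against non-positive cosine (our assumption)
    E = actions.size(1)
    E_prime = actions.sum(dim=1)       # number of selected words
    return torch.log(cos) + log_p_y - gamma * E_prime / E
```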
Figure 3: The framework of our proposed Reinforced Bidirectional Attention Network (RBAN) approach.

3.2 Reinforced Bidirectional Attention Network (RBAN)

Figure 3 shows the overall framework of our proposed reinforced bidirectional attention network (RBAN) approach to ASC-QA, which consists of three parts: 1) Word Encoder, 2) Reinforced Bidirectional Attention, and 3) Softmax Decoder.

Word Encoder. Given a QA text pair P with an aspect x_aspect, let x^q = {x^q_i}, ∀i ∈ [1, E_q], denote the word sequence of the question text, and x^a = {x^a_j}, ∀j ∈ [1, E_a], denote the word sequence of the answer text. To alleviate the effects of noisy words for aspect x_aspect in both the question and answer text, we make use of two RAWS modules (as introduced in Section 3.1) to perform word selection over question x^q and answer x^a respectively. More specifically, we employ two LSTM_p networks to construct policy networks p^q_π and p^a_π for sampling actions o^q over the question x^q and actions o^a over the answer x^a. Here, the two networks are denoted LSTM^q_p and LSTM^a_p respectively. Therefore, according to Eq. (1), the hidden states h^q_i, h^a_j ∈ R^d of words x^q_i and x^a_j are computed as:

h^q_i = LSTM^q_p(v̂^q_i),  v̂^q_i = v^q_i ⊕ (o^q_i ⊗ e)
h^a_j = LSTM^a_p(v̂^a_j),  v̂^a_j = v^a_j ⊕ (o^a_j ⊗ e)    (4)

where v^q_i, v^a_j ∈ R^d are the word embeddings (presented in Section 4.1) of the words x^q_i and x^a_j.

Reinforced Bidirectional Attention. Once the two RAWS modules finish all their actions o^q = [.., o^q_i, ..] and o^a = [.., o^a_j, ..] over question x^q and answer x^a, we employ a positional mask matrix M ∈ R^{E_q×E_a} to calculate the matching matrix S ∈ R^{E_q×E_a} between question and answer as:

M_ij = 0 if o^q_i = o^a_j = 1, and M_ij = −∞ otherwise    (5)
S_ij = w^⊤ tanh(W_1 h^q_i + W_2 h^a_j + b) + M_ij    (6)

where S_ij denotes the similarity between the i-th question word and the j-th answer word; M_ij = −∞ leads to S_ij = −∞, indicating that the i-th question word or the j-th answer word has been regarded as a noisy word for x_aspect and thus discarded by RAWS; W_1, W_2 ∈ R^{d×d} and w, b ∈ R^d are trainable parameters.

In order to mine semantic matching information between question and answer, we employ S to compute attention in both directions, which can be seen as a Question-to-Answer attention and an Answer-to-Question attention. Specifically, we first apply row-wise and column-wise softmax operations to obtain two normalized matrices S^r and S^c:

S^r_{i:} = softmax([S_{i1}, .., S_{iE_a}]), ∀i ∈ [1, E_q]
S^c_{:j} = softmax([S_{1j}, .., S_{E_q j}]), ∀j ∈ [1, E_a]    (7)

where S_ij = −∞ leads to S^r_ij = S^c_ij = 0 when the softmax operation is applied. This switches off the attention between words x^q_i and x^a_j so as to filter out noisy word information and mine only the matching information relevant to aspect x_aspect. Second, since each word x^q_i in the question interacts with all words in the answer x^a and vice versa, its importance can be measured as the sum of the strengths of all these interactions, i.e., the matching scores computed in Eq. (7). Therefore, we perform row-wise and column-wise summations over the normalized matching matrices, i.e., α̂^a = Σ_i S^r_{i:} and α̂^q = Σ_j S^c_{:j}, where α̂^a = [.., α̂^a_j, ..] ∈ R^{E_a} and α̂^q = [.., α̂^q_i, ..] ∈ R^{E_q} are matching score vectors. Finally, the bidirectional attention is computed as follows:

• Question-to-Answer Attention (Q2A). We first apply a softmax operation over α̂^a to compute the attention weight α^a_j of word x^a_j in the answer text as α^a_j = exp(α̂^a_j) / Σ_{t=1}^{E_a} exp(α̂^a_t). Then, the vector s^a ∈ R^d of the answer text is computed as a weighted sum of the hidden states h^a_j based on the attention weights α^a_j, i.e., s^a = Σ_{j=1}^{E_a} α^a_j h^a_j.

• Answer-to-Question Attention (A2Q). Similarly, the question vector s^q ∈ R^d is computed based on the attention weights α^q_i = exp(α̂^q_i) / Σ_{t=1}^{E_q} exp(α̂^q_t), i.e., s^q = Σ_{i=1}^{E_q} α^q_i h^q_i.

Subsequently, we concatenate the answer vector s^a and the question vector s^q to obtain the vector representation r ∈ R^{2d} of the QA text pair P, i.e., r = s^a ⊕ s^q.

Softmax Decoder. To perform ASC-QA, we feed the vector r to a softmax classifier, i.e., β = W r + b, where β ∈ R^C is the output vector. Then, the probability of labeling the QA text pair with sentiment polarity l ∈ [1, C] is computed as p_θ(l) = exp(β_l) / Σ_{c=1}^{C} exp(β_c). Finally, the label with the highest probability stands for the predicted sentiment polarity towards aspect x_aspect.
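For illustration, here is a compact sketch of Eqs. (5)–(7) and the Q2A/A2Q pooling for a single (unbatched) QA pair. It reflects our reading of the equations rather than the released implementation; in particular, the handling of fully masked rows via nan_to_num is an assumption not specified above.

```python
import torch

def reinforced_bidirectional_attention(h_q, h_a, o_q, o_a, w, W1, W2, b):
    """h_q: (Eq, d), h_a: (Ea, d) hidden states; o_q: (Eq,), o_a: (Ea,) 0/1 actions;
    W1, W2: (d, d); w, b: (d,). Returns the QA pair representation r of size 2d."""
    left = h_q @ W1.t()                              # (Eq, d)
    right = h_a @ W2.t()                             # (Ea, d)
    # Eq. (6): additive matching score for every (question word, answer word) pair
    S = torch.tanh(left.unsqueeze(1) + right.unsqueeze(0) + b) @ w   # (Eq, Ea)
    # Eq. (5): mask out pairs where either word was discarded by RAWS
    keep = (o_q.unsqueeze(1) * o_a.unsqueeze(0)).bool()
    S = S.masked_fill(~keep, float("-inf"))
    # Eq. (7): row- and column-wise softmax; -inf entries become 0
    S_r = torch.nan_to_num(torch.softmax(S, dim=1))  # per question word, over answer words
    S_c = torch.nan_to_num(torch.softmax(S, dim=0))  # per answer word, over question words
    # Matching-score vectors, then Q2A and A2Q attention pooling
    alpha_a = torch.softmax(S_r.sum(dim=0), dim=0)   # (Ea,)
    alpha_q = torch.softmax(S_c.sum(dim=1), dim=0)   # (Eq,)
    s_a = alpha_a @ h_a                              # answer vector (d,)
    s_q = alpha_q @ h_q                              # question vector (d,)
    return torch.cat([s_a, s_q], dim=0)              # r = s_a ⊕ s_q
```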
3.3 Optimization via Policy Gradient and Back-Propagation

The parameters in RBAN are divided into two groups: 1) θ^q_r and θ^a_r for the policy networks p^q_π and p^a_π in the two fundamental RAWS modules; 2) θ for the remaining parts, including the word embeddings, LSTMs, bidirectional attention and softmax decoder.

For θ^q_r, we optimize it with the policy gradient algorithm (Sutton et al., 1999). In detail, we first obtain an aspect-relevant reward R^q according to Eq. (3) after p^q_π finishes all actions. Then, the policy gradient w.r.t. θ^q_r is computed by differentiating the maximized expected reward J(θ^q_r) as follows:

∇_{θ^q_r} J(θ^q_r) = E_{o^q ∼ p^q_π} [ Σ_{i=1}^{E_q} R^q ∇_{θ^q_r} log p^q_π(o^q_i | s^q_i) ]    (8)

where ∇_{θ^q_r} J(θ^q_r) is estimated with Monte-Carlo simulation (Sutton et al., 1999) by sampling action sequences over question texts. Similarly, the policy gradient w.r.t. θ^a_r is computed as:

∇_{θ^a_r} J(θ^a_r) = E_{o^a ∼ p^a_π} [ Σ_{j=1}^{E_a} R^a ∇_{θ^a_r} log p^a_π(o^a_j | s^a_j) ]    (9)

For θ, we optimize it with back-propagation. In detail, the objective of learning θ is to minimize the cross-entropy loss of the classification phase:

J(θ) = E_{(P, x_aspect, y) ∼ D} [ −log p(y | (P, x_aspect)) ]    (10)

where (P, x_aspect, y) denotes a QA text pair P with a given aspect x_aspect from dataset D, and y is the ground-truth sentiment polarity towards aspect x_aspect. Note that, during model training, θ^q_r and θ^a_r are not updated in the early stage, so the two RAWS modules initially select all words in the question and answer. Once θ has been optimized to the point where the loss on the development set no longer decreases significantly, we begin to optimize θ, θ^q_r and θ^a_r simultaneously.

4 Experimentation

We systematically evaluate the performance of our proposed RBAN approach to ASC-QA on the corpus described in Section 2.

4.1 Experimental Settings

Data Settings. As introduced in Section 2, we have annotated QA text pairs from the three different domains listed in Table 2. For each domain, we randomly split the annotated data into training, development, and testing sets with a ratio of 8:1:1.

Word Embedding. We first adopt FudanNLP (Qiu et al., 2013) to perform word segmentation over our collected 150k Chinese QA text pairs. Then, we employ these QA text pairs to pre-train 200-dimensional word vectors with skip-gram6.

Hyper-parameters. In all our experiments, word embeddings are optimized during training. The dimension of the LSTM hidden states is set to 200. The other hyper-parameters are tuned on the development set. Specifically, we adopt the Adam optimizer (Kingma and Ba, 2014) with an initial learning rate of 0.01 for cross-entropy training and the SGD optimizer with a learning rate of 0.002 for all policy gradient training. The regularization weight of the parameters is 10^{−5}, the dropout rate is 0.25 and the batch size is 32.

6https://github.com/dav/word2vec

Evaluation Metrics. The performance is evaluated using Accuracy (Acc.) and Macro-F1 (F1) (Wang et al., 2018a). Moreover, a t-test is used to evaluate significance (Yang and Liu, 1999).

Task Definition. Our proposed ASC-QA consists of two sub-tasks: 1) Term-level ASC-QA. Given a set of pre-identified aspect terms, this sub-task is to determine the polarity towards each aspect term inside a QA text pair. 2) Category-level ASC-QA. Given a set of pre-identified aspect categories, this sub-task is to determine the polarity towards each aspect category discussed in a QA text pair.
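For illustration, the surrogate loss below corresponds to the Monte-Carlo estimate of Eqs. (8)–(9) from Section 3.3 for one sampled action sequence. How the two parameter groups and the two optimizers (Adam for cross-entropy, SGD for the policies) are interleaved in a single training step is our assumption; this is a sketch, not the authors' training loop.

```python
import torch

def reinforce_loss(log_probs: torch.Tensor, reward: torch.Tensor) -> torch.Tensor:
    """Surrogate loss whose gradient matches the Monte-Carlo estimate of
    Eqs. (8)-(9) for one sampled action sequence.

    log_probs: (T,) log p_pi(o_i | s_i) for the sampled actions of one sequence.
    reward:    scalar delayed reward R from Eq. (3); detached so gradients flow
               only through the log-probabilities of the policy.
    """
    return -(reward.detach() * log_probs).sum()

# Illustrative use with a separate SGD optimizer for the policy parameters
# (hypothetical variable names):
# sgd.zero_grad()
# (reinforce_loss(log_probs_q, R_q) + reinforce_loss(log_probs_a, R_a)).backward()
# sgd.step()
```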
4.2 Baselines For comparison, we implement several state-ofthe-art approaches to ASC as baselines. Since the input of all these approaches should be a single sequence, we concatenate question and answer text to generate a single sequence. Besides, we employ some QA matching approaches to ASC-QA and implement several basic versions of RBAN as baselines. Note that, for fair comparison, all the above baselines adopt the same pre-trained word embeddings as RBAN. The baselines are listed as follows in detail: 1) LSTM (Wang et al., 2016). This approach only adopts a standard LSTM network to model the text without considering aspect information. 2) RAM (Chen et al., 2017). This is a state-of-theart deep memory network approach to ASC. 3) GCAE (Xue and Li, 2018). This is a state-ofthe-art approach to ASC which combines CNN and gating mechanisms to learn text representation. 4) S-LSTM (Wang and Lu, 2018). This is a state-of-the-art approach to ASC which considers structural dependencies between targets and opinion terms. 5) BIDAF (Seo et al., 2016). This is a QA matching approach to reading comprehension. We substitute its decoding layer with softmax decoder to perform ASC-QA. 6) HMN (Shen et al., 2018a). This is a QA matching approach to coarse-grained sentiment classification towards QA style reviews. 7) MAMC (Yin et al., 2017). This is a QA matching approach to ASC which proposes a hierarchical iterative attention to learn the aspect-specific text representation. 8) RBAN w/o RAWS. Our RBAN approach without using RAWS modules. 9) RBAN w/o Q2A. Our RBAN 3554 Approaches Term-level ASC-QA Category-level ASC-QA Bags Cosmetics Electronics Bags Cosmetics Electronics F1 Acc. F1 Acc. F1 Acc. F1 Acc. F1 Acc. F1 Acc. LSTM (Wang et al., 2016) 0.571 0.757 0.582 0.771 0.534 0.756 0.528 0.773 0.493 0.739 0.522 0.752 RAM (Chen et al., 2017) 0.605 0.782 0.614 0.805 0.557 0.788 0.561 0.795 0.519 0.762 0.579 0.792 GCAE (Xue and Li, 2018) 0.617 0.779 0.623 0.819 0.570 0.781 0.590 0.787 0.514 0.791 0.576 0.788 S-LSTM (Wang and Lu, 2018) 0.615 0.824 0.623 0.821 0.569 0.794 0.587 0.828 0.522 0.788 0.581 0.801 BIDAF (Seo et al., 2016) 0.613 0.815 0.618 0.813 0.558 0.809 0.592 0.830 0.515 0.788 0.571 0.787 HMN (Shen et al., 2018a) 0.607 0.817 0.615 0.821 0.561 0.802 0.606 0.827 0.512 0.798 0.579 0.804 MAMC (Yin et al., 2017) 0.621 0.825 0.629 0.823 0.562 0.815 0.612 0.837 0.524 0.794 0.582 0.805 RBAN w/o RAWS 0.623 0.826 0.633 0.827 0.578 0.817 0.616 0.839 0.532 0.804 0.591 0.813 RBAN w/o Q2A 0.595 0.788 0.614 0.817 0.569 0.779 0.578 0.814 0.514 0.788 0.569 0.782 RBAN w/o A2Q 0.623 0.837 0.639 0.834 0.588 0.821 0.617 0.845 0.536 0.815 0.603 0.826 RBAN 0.648 0.856 0.662 0.855 0.616 0.833 0.634 0.869 0.557 0.833 0.625 0.839 Table 3: Performances of all the approaches to two sub-tasks, i.e., Term-level and Category-level ASC-QA. In each sub-task, all approaches are evaluated in three different domains, i.e., Bags, Cosmetics and Electronics. approach without using question-to-answer attention. 10) RBAN w/o A2Q. Our RBAN approach without using answer-to-question attention. 4.3 Experimental Results Table 3 shows the performances of different approaches to ASC-QA. From this table, we can see that all the three state-of-the-art ASC approaches, i.e., RAM, GCAE and S-LSTM, perform better than LSTM. This confirms the usefulness of considering aspect information in ASC. Besides, both the attention based approaches RAM and SLSTM achieve comparable or better performance than GCAE. 
This result demonstrates the usefulness of a proper attention mechanism to model aspect information. The two QA matching approaches, i.e., BIDAF and HMN could achieve comparable performance with the three state-of-the-art ASC approaches, and MAMC even beats all of them. This indicates the appropriateness of treating question and answer in a QA style review as two parallel units instead of a single sequence in ASC-QA. Furthermore, our RBAN w/o RAWS approach (i.e., without considering aspect information) performs consistently better than MAMC. This encourages to employ bidirectional attention to learn the representation vectors of both the question and answer in order to capture the sentiment information therein. Besides, it’s interesting to notice that RBAN w/o A2Q (i.e., without question vector sq) performs much better than RBAN w/o Q2A (i.e., without answer vector sa). This is due to the fact that the main sentiment polarity towards aspect is usually expressed in the answer text. In comparison, when using RAWS, RBAN performs best and significantly outperforms RBAN w/o RAWS (p-value < 0.05), which encourages to discard noisy words for a specific aspect in both the question and answer sides. Impressively, in the sub-task of Term-level ASC-QA, compared to LSTM, RBAN achieves average improvements of 7.97% (F1) and 8.67% (Acc.) in three domains. In the sub-task of Category-level ASCQA, compared to LSTM, RBAN achieves average improvements of 9.1% (F1) and 9.23% (Acc.). Significance test shows that these improvements are all significant (p-value < 0.05). These results encourage to incorporate both RAWS and bidirectional attentions to tackle ASC-QA. 5 Analysis and Discussion Case Study. We provide a qualitative analysis of our approach on the development set. Specifically, in Figure 4, we visualize the attention matrix Sr in RBAN towards aspect “operating speed” in two cases, i.e., not using RAWS (Figure 4(a)) and using RAWS (Figure 4(b)). In Figure 4(a), color blue denotes attention weight (the darker the more important), we can find that both aspect “battery life” and aspect “operating speed” in question have been successfully matched with their corresponding answer phrases, i.e., “very durable” and “quite slow and obtuse”. However, RBAN without RAWS can’t discard noisy words (e.g., “battery life”, “durable”) for aspect “operating speed”. In Figure 4(b), color white denotes the word inside question or answer has been discarded, we can find that RBAN is capable of effectively discarding noisy words such as “battery” and “durable” and highlighting those significant words such as “slow” and “obtuse” for aspect “operating speed”. 3555 (a) RBAN without RAWS (b) RBAN with RAWS Figure 4: Attention matrices for a QA text pair (each row is a question word and each column is an answer word). (a) and (b) show attention matrices of RBAN without RAWS and RBAN towards aspect term “operating speed”. Error Analysis. We randomly analyze 100 error cases in the experiments, which can be roughly categorized into 5 types. 1) 27% errors are because that the answer length is too short. An example is “Question: Is the screen good? Answer: No.”. 2) 24% errors are due to negation words. An example is “the case is not good”. Our approach fails to select the word “not” and incorrectly predicts positive polarity. This inspires us to optimize our approach so as to capture the negation scope better in the future. 3) 19% errors are due to the wrong prediction on recognizing neutral instances. 
The shortage of neutral training examples makes the prediction of neutral instances very difficult. 4) 16% errors are due to comparative opinions. An example is “macos is much better than Windows”. Our approach incorrectly predicts positive for aspect “Windows”. 5) Finally, 14% errors are due to mistakes during Chinese word segmentation. An example is “好难看(very ugly)”. It’s incorrectly segmented into “好(good)|难(hard)|看(look)” and predicted as positive. This encourages to improve the performance of word segmentation on informal customer reviews. 6 Related Work Existing studies on Aspect Sentiment Classification (ASC) could be divided into two groups according to the different level of text, i.e., sentencelevel ASC and document-level ASC. Sentence-level ASC is typically regarded as a sentence-level text classification which aims to incorporate aspect information into a model. Recently, Wang et al. (2016); Ma et al. (2017) propose an attention based LSTM to ASC by exploring the connection between an aspect and the content of a sentence. Tang et al. (2016b), Chen et al. (2017) and Wang et al. (2018b) employ memory networks to model the context and aspect. Wang and Lu (2018) propose a segmentation attention to capture structural dependency between target and opinion terms. Document-level ASC aims to predict sentiment ratings for aspects inside a long text. Traditional studies (Titov and McDonald, 2008; Wang et al., 2010; Pontiki et al., 2016) solve document-level ASC as a sub-problem by utilizing heuristic based methods or topic models. Recently, Lei et al. (2016) focus on extracting rationales for aspects in a document. Li et al. (2018) propose an useraware attention approach to document-level ASC. Yin et al. (2017) model document-level ASC as a machine comprehension problem, of which the input is also a parallel unit, i.e., question and answer. However, their question texts are pseudo and artificially constructed. This disaccords with the fact that real-world question texts also possibly involve multi-aspect and sentiment information. Unlike all the above studies, this paper performs ASC on a different type of text, i.e., QA style reviews. To the best of our knowledge, this is the first attempt to perform ASC on QA style reviews. 7 Conclusion In this paper, we propose a new task, i.e., Aspect Sentiment Classification towards Question Answering (ASC-QA). Specifically, we first build a high-quality human annotated benchmark corpus. Then, we design a reinforced bidirectional attention network (RBAN) approach to address ASCQA. Empirical studies show that our proposed approach significantly outperforms several state-ofthe-art baselines in the task of ASC-QA. In our future work, we would like to solve other challenges in ASC-QA such as data imbalance and negation detection to improve the performance. Furthermore, we would like to explore the effectiveness of our approach to ASC-QA in other languages. 3556 Acknowledgments We thank our anonymous reviewers for their helpful comments. This work was supported by three NSFC grants, i.e., No.61672366, No.61702149 and No.61525205. This work was also supported by the joint research project of Alibaba Group and Soochow University. References Peng Chen, Zhongqian Sun, Lidong Bing, and Wei Yang. 2017. Recurrent attention network on memory for aspect sentiment analysis. In Proceedings of EMNLP-2017, pages 452–461. Li Dong, Furu Wei, Chuanqi Tan, Duyu Tang, Ming Zhou, and Ke Xu. 2014. 
Adaptive recursive neural network for target-dependent twitter sentiment classification. In Proceedings of ACL-2014, pages 49– 54. Alex Graves. 2013. Generating sequences with recurrent neural networks. CoRR, abs/1308.0850. Long Jiang, Mo Yu, Ming Zhou, Xiaohua Liu, and Tiejun Zhao. 2011. Target-dependent twitter sentiment classification. In Proceedings of ACL-2011, pages 151–160. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980. Tao Lei, Regina Barzilay, and Tommi S. Jaakkola. 2016. Rationalizing neural predictions. In Proceedings of EMNLP-2016, pages 107–117. Junjie Li, Haitong Yang, and Chengqing Zong. 2018. Document-level multi-aspect sentiment classification by jointly modeling users, aspects, and overall ratings. In Proceedings of COLING-2018, pages 925–936. Dehong Ma, Sujian Li, Xiaodong Zhang, and Houfeng Wang. 2017. Interactive attention networks for aspect-level sentiment classification. In Proceedings of IJCAI-2017, pages 4068–4074. Margaret Mitchell, Jacqui Aguilar, Theresa Wilson, and Benjamin Van Durme. 2013. Open domain targeted sentiment. In Proceedings of EMNLP-2013, pages 1643–1654. Maria Pontiki, Dimitris Galanis, Haris Papageorgiou, Ion Androutsopoulos, Suresh Manandhar, Mohammad Al-Smadi, Mahmoud Al-Ayyoub, Yanyan Zhao, Bing Qin, Orph´ee De Clercq, V´eronique Hoste, Marianna Apidianaki, Xavier Tannier, Natalia V. Loukachevitch, Evgeniy V. Kotelnikov, N´uria Bel, Salud Mar´ıa Jim´enez Zafra, and G¨ulsen Eryigit. 2016. Semeval-2016 task 5: Aspect based sentiment analysis. In Proceedings of NAACL-2016, pages 19–30. Maria Pontiki, Dimitris Galanis, Haris Papageorgiou, Suresh Manandhar, and Ion Androutsopoulos. 2015. Semeval-2015 task 12: Aspect based sentiment analysis. In Proceedings of NAACL-2015, pages 486– 495. Maria Pontiki, Dimitris Galanis, John Pavlopoulos, Harris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar. 2014. Semeval-2014 task 4: Aspect based sentiment analysis. In Proceedings of COLING-2014, pages 27–35. Xipeng Qiu, Qi Zhang, and Xuanjing Huang. 2013. Fudannlp: A toolkit for chinese natural language processing. In Proceedings of ACL-2013, pages 49– 54. Min Joon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional attention flow for machine comprehension. CoRR, abs/1611.01603. Chenlin Shen, Changlong Sun, Jingjing Wang, Yangyang Kang, Shoushan Li, Xiaozhong Liu, Luo Si, Min Zhang, and Guodong Zhou. 2018a. Sentiment classification towards question-answering with hierarchical matching network. In Proceedings of EMNLP-2018, pages 3654–3663. Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Sen Wang, and Chengqi Zhang. 2018b. Reinforced selfattention network: a hybrid of hard and soft attention for sequence modeling. In Proceedings of IJCAI2018, pages 4345–4352. Richard S. Sutton, David A. McAllester, Satinder P. Singh, and Yishay Mansour. 1999. Policy gradient methods for reinforcement learning with function approximation. In Proceedings of NIPS-1999, pages 1057–1063. Duyu Tang, Bing Qin, and Ting Liu. 2016b. Aspect level sentiment classification with deep memory network. In Proceedings of EMNLP-2016, pages 214– 224. Ivan Titov and Ryan T. McDonald. 2008. A joint model of text and aspect ratings for sentiment summarization. In Proceedings of ACL-2008, pages 308–316. Duy-Tin Vo and Yue Zhang. 2015. Target-dependent twitter sentiment classification with rich automatic features. In Proceedings of IJCAI-2015, pages 1347–1353. Bailin Wang and Wei Lu. 2018. 
Learning latent opinions for aspect-level sentiment classification. In Proceedings of AAAI-2018, pages 5537–5544. Hongning Wang, Yue Lu, and Chengxiang Zhai. 2010. Latent aspect rating analysis on review text data: a rating regression approach. In Proceedings of SIGKDD-2010, pages 783–792. 3557 Jingjing Wang, Jie Li, Shoushan Li, Yangyang Kang, Min Zhang, Luo Si, and Guodong Zhou. 2018a. Aspect sentiment classification with both word-level and clause-level attention networks. In Proceedings of IJCAI-2018, pages 4439–4445. Shuai Wang, Sahisnu Mazumder, Bing Liu, Mianwei Zhou, and Yi Chang. 2018b. Target-sensitive memory networks for aspect sentiment classification. In Proceedings of ACL-2018, pages 957–967. Yequan Wang, Minlie Huang, Xiaoyan Zhu, and Li Zhao. 2016. Attention-based LSTM for aspectlevel sentiment classification. In Proceedings of EMNLP-2016, pages 606–615. Ronald J. Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. Machine Learning, 8:229–256. Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron C. Courville, Ruslan Salakhutdinov, Richard S. Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In Proceedings of ICML-2015, pages 2048–2057. Wei Xue and Tao Li. 2018. Aspect based sentiment analysis with gated convolutional networks. In Proceedings of ACL-2018, pages 2514–2523. Yiming Yang and Xin Liu. 1999. A re-examination of text categorization methods. In Proceedings of SIGIR-1999, pages 42–49. Yichun Yin, Yangqiu Song, and Ming Zhang. 2017. Document-level multi-aspect sentiment classification as machine comprehension. In Proceedings of EMNLP-2017, pages 2044–2054.
2019
345
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3558–3567 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3558 ELI5: Long Form Question Answering Angela Fan1,2 Yacine Jernite⇤1 Ethan Perez⇤3 David Grangier4 Jason Weston1 Michael Auli1 1Facebook AI Research 2LORIA 3NYU ‡ 4Google AI ‡ [angelafan,yjernite,jase,michaelauli]@fb.com, [email protected], [email protected] Abstract We introduce the first large-scale corpus for long-form question answering, a task requiring elaborate and in-depth answers to openended questions. The dataset comprises 270K threads from the Reddit forum “Explain Like I’m Five” (ELI5) where an online community provides answers to questions which are comprehensible by five year olds. Compared to existing datasets, ELI5 comprises diverse questions requiring multi-sentence answers. We provide a large set of web documents to help answer the question. Automatic and human evaluations show that an abstractive model trained with a multi-task objective outperforms conventional Seq2Seq, language modeling, as well as a strong extractive baseline. However, our best model is still far from human performance since raters prefer gold responses in over 86% of cases, leaving ample opportunity for future improvement.1 1 Introduction Existing question answering datasets have enabled significant progress in models that provide extractive or unambigious short answers. However, less attention has been paid to open-ended questions that require explanations. In this work, we present ELI5: a Long Form Question Answering dataset that emphasizes the dual challenges of isolating relevant information within long source documents and generating paragraph-length explanations in response to complex, diverse questions (see illustrations in Figures 1 and 2). The first challenge of ELI5 is the length and diversity of answers that span multiple sentences: ⇤Equal contribution ‡ Work done while at Facebook AI Research 1Dataset, Pretrained Models, and Additional Information is available: https://facebookresearch. github.io/ELI5, https://github.com/ facebookresearch/ELI5 Question: How do Jellyfish function without brains or nervous systems? [...] (60 words) Answer: Jellyfish may not have a brain, but they have a rough nervous system and innate behaviours. However, they are very simple creatures. They’re invertebrate: creatures without a backbone. Most jellyfish have really short life spans. Sometimes just a couple of hours. [...] As their name implies, they are largely composed of basically jelly inside a thin membrane. They’re over 95% water. (327 words) Documents: [...] Jellyfish do not have brains, and most barely have nervous systems. They have primitive nerve cells that help them orient themselves in the water and sense light and touch. [...] While they dont possess brains, the animals still have neurons that send all sorts of signals throughout their body. [...] They may accomplish this through the assistance of their nerve rings. Jellyfish don’t have brains, and that’s just where things begin. They don’t have many of the body parts that are typical in other animals. [...] (1070 words) Figure 1: ELI5 example. Models must write multi-sentence answers given questions and supporting web documents. questions are complex and cannot be easily addressed by a short response (Nguyen et al., 2016) or by extracting a word or phrase from an evidence document (Rajpurkar et al., 2016). 
Answers also represent one of several valid ways of addressing the query. Many state-of-the-art question answering models perform well compared to human performance for extractive answer selection (Radford et al., 2018; Devlin et al., 2018). However, their success does not directly carry over to our setting. The second challenge is the length and diversity of the content from knowledge sources required to answer our questions. We leverage evidence queried from the web for each question. In contrast to previous datasets where the human written answer could be found with lexical overlap methods (Weissenborn et al., 2017), ELI5 poses a significant challenge in siphoning out important information, as no single sentence or phrase contains the full answer. While there are some datasets that do require multi-sentence supporting knowl3559 Figure 2: ELI5 questions by starting word, where box size represents frequency. Questions are open ended and diverse. edge such as TriviaQA (Joshi et al., 2017), their answers are still short. We benchmark the performance of several extractive, retrieval, and generative models. Evaluation of our task, and of multi-sentence text generation in general, is challenging. We draw upon several evaluation metrics that quantify performance on intermediary fill-in tasks that lead up to the full answer generation. The overall answer generation quality is measured with ROUGE (Lin, 2004) and various human evaluation studies. We develop a strong abstractive baseline by training a Seq2Seq model on multiple tasks over the same data: language modeling, masked word prediction (Devlin et al., 2018) and answer generation. We show this approach outperforms conventional Seq2Seq and language modeling, as well as a strong extractive baseline based on BidAF (Seo et al., 2017) but generalized to multi-sentence output. However, our best-performing model is still far from the quality of human written answers, with raters preferring the gold answers 86% of the time. Further, we show that model performance is strongly limited by the ability to comprehend long multi-document input and generate long outputs to form a comprehensive answer, leaving this challenge for future research. 2 Related Work Various QA datasets have been proposed in roughly two categories: extractive answers and short abstractive answers (see Table 1). Extractive QA Extractive question answering datasets such as TREC (Voorhees, 2003), SQuAD (Rajpurkar et al., 2016, 2018), NewsQA (Trischler et al., 2017), SearchQA (Dunn et al., 2017), and QuAC (Choi et al., 2018) constrain the answer to a word or short phrase from the input and evaluate using exact match or F1 with the ground truth span. HotpotQA (Yang et al., 2018) extends this approach by building questions which challenge models to conduct multi-hop reasoning across multiple paragraphs, but the answer is still a short span. Further, the answer must be straightforward, as it needs to be copied from the supporting evidence — precluding most “how” or “why” type questions. Abstractive QA Abstractive datasets include NarrativeQA (Kocisky et al., 2018), a dataset of movie and book summaries and CoQA (Reddy et al., 2018), a multi-domain dialogue dataset. Both collect responses with crowdworkers and find that written answers are mostly extractive and short. MS MARCO (Nguyen et al., 2016), a dataset of crowdsourced responses to Bing queries, has written answers around 1 sentence long with short input passages. 
TriviaQA (Joshi et al., 2017) contains longer multi-document web input, collected using Bing and Wikipedia. As the dataset is built from trivia, most questions can be answered with a short extractive span. Multi-document summarization The ELI5 task of writing a paragraph length response from multiple supporting documents can be seen as a form of query-based multi-document summarization (Tombros and Sanderson, 1998). Summarization tasks such as DUC 20042 involve long input and multi-sentence generation, but contain much less training data compared to ELI5. WikiSum (Liu et al., 2018) proposes writing Wikipedia articles as a multi-document summarization task. ELI5 requires more directed 2https://duc.nist.gov/duc2004/ 3560 Dataset Average # of Words 1st Question Word Frequency (%) Question Document(s) Answer Why How What When Where Who Which OTHER # Q-A Pairs ELI5 42.2 857.6 (212K) 130.6 44.8 27.1 18.3 11.3 2.0 1.8 0.8 6.1 272K MS MARCO v2 (Nguyen et al., 2016) 6.4 56 13.8 1.7 16.8 35.0 2.7 3.5 3.3 1.8 35.3 183K TriviaQA (Joshi et al., 2017) 14 2895 2.0 0.2 3.9 32.6 2.0 2.1 16.8 41.8 0.6 110K NarrativeQA (Kocisky et al., 2018) 9.8 656 4.7 9.8 10.7 38.0 1.7 7.5 23.4 2.2 6.8 47K CoQA (Reddy et al., 2018) 5.5 271 2.7 2 5 27 2 5 15 1 43 127K SQuAD (2.0) (Rajpurkar et al., 2018) 9.9 116.6 3.2 1.4 8.9 45.3 6.0 3.6 9.6 4.4 17.6 150K HotpotQA (Yang et al., 2018) 17.8 917 2.2 0.1 2.6 37.2 2.8 2.2 13.8 28.5 12.8 113K Table 1: Comparing large-scale QA datasets. ELI5 has answers an order of magnitude longer and more open-ended questions. text generation to answer a question, rather than to write about a general topic. In addition, ELI5 contains a diverse set of questions which can involve more than one Wikipedia concept. 3 Making a Long Form QA Dataset 3.1 Creating the Dataset from ELI5 There are several websites which provide forums to ask open-ended questions such as Yahoo Answers, Quora, as well as numerous Reddit forums, or subreddits. We focus on the subreddit Explain Like I’m Five (ELI5) where users are encouraged to provide answers which are comprehensible by a five year old.3 ELI5 is appealing because answers are supposed to be entirely self contained, and thus rely less on pre-existing knowledge of the world and use simpler language that is easier to model. Questions and answers. We select a set of questions and answers from the ELI5 forum up to July 2018 and then filter it based on how users rated these pairs. First, we only retain questions which have a score of at least two, that is two more ‘upvotes’ than ‘down-votes’. Second, there must be at least one answer with a score of at least two. This yields a final number of 272K questions, and ensures that at least one person other than the author has read the thread and deemed it appropriate. For each thread, we select the answer with the highest voting score as the reference. Note that 63% have one or more other valid answers by our upvote criteria, potentially doubling the size of the available training data. Preparing supporting information. Next, we collect web sources for every question to provide relevant information that a system can draw upon when generating an answer. Wikipedia has been found effective for factoid-oriented questions (Joshi et al., 2017; Chen et al., 2017). However, 3https://www.reddit.com/r/ explainlikeimfive early experiments in our setting showed it to be insufficient to cover the wide range of topics present in ELI5 and to address the open-ended nature of the questions. 
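As a rough illustration of the thread-filtering rule described above (a score of at least two for the question and for at least one answer, with the highest-scoring answer kept as the reference), here is a small Python sketch; the data structures are assumptions, since the exact storage format of the Reddit threads is not specified here.

```python
def keep_thread(question_score, answer_scores, min_score=2):
    """Keep a thread only if the question has score >= 2 and at least one
    answer also has score >= 2 (a sketch of the filter described above)."""
    return question_score >= min_score and any(s >= min_score for s in answer_scores)

def pick_reference(answers):
    """Select the highest-scoring answer as the reference answer.
    `answers` is assumed to be a list of (text, score) pairs."""
    return max(answers, key=lambda a: a[1])[0]
```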
Instead, we use web data provided by Common Crawl.4 Specifically, we consider each of the individual pages in the July 2018 archive (roughly one per URL) as a single document. The data is tokenized with Spacy5 and we select English documents with FastText language identification (Bojanowski et al., 2017). Finally, we index the data with Apache Lucene.6 Creating support documents. We query the index for the 272K questions and gather the 100 most relevant web sources for each question, excluding Reddit. Each web source is the extracted text of one page in Common Crawl. This leads to supporting text for each question of a few hundred thousand words. There is a good chance that the supporting text contains the necessary information to answer the question, but the sheer amount of data is far beyond the scope of what many modern models can handle. We therefore filter the 100 web sources by selecting specific passages using a simple heuristic: we split each web source into sentences, find sentences with the highest TFIDF similarity with respect to the question, add some local context for each of these, and concatenate the result into a single support document, with special tokens indicating non-contiguous passages and document shifts. Each support document is the result of this processing to concatenate relevant information from the web sources. We find that extracting 15 passages with a context of one sentence before and after the initial selection provides the best trade-off between support document length and likelihood of containing relevant information, where relevance is measured as the likelihood of containing a sentence which has 4http://commoncrawl.org 5https://spacy.io 6http://lucene.apache.org 3561 % Correct Human Answers 94.5 % Correct Human Answers with Explanation 90.2 % Support Document contains Answer 65.0 % Support Document contains Relevant Info 92.0 Table 2: Annotated subset of ELI5 to assess answerability. high ROUGE with the answer. We release all 100 Common Crawl IDs for each question and a script to create the support document so future research can use the support document or choose to further investigate the information retrieval problem. Finalizing the data set. If the training data contains questions that are too similar to the validation and test data, a model may perform well on these examples by memorizing related examples. We prevent this by building the validation and test set to contain questions that are sufficiently different from the training data. We compute the TFIDF similarity between each pair of questions in the entire dataset and sample the validation and test set from the subset which has no close neighbor by TFIDF score. The final dataset contains 237K train examples, 10K for valid, and 25K for test. 3.2 Dataset Analysis Table 1 compares ELI5 to related datasets in terms of the length of the question, support document, answer, as well as statistics on the question types. First, ELI5 questions are much longer than in other datasets. This is because the initial question is often followed by a clarifying paragraph detailing what aspect of the general theme should be addressed or the question’s starting assumptions, which need to be considered to answer well. To get a rough idea of the different questions, we categorize them based on interrogative words. ELI5 focuses on open-ended queries which are less represented in other extractive or abstractive datasets. 
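The passage-selection heuristic above lends itself to a short sketch. The version below uses scikit-learn's TF-IDF vectorizer and is only an approximation of the released script: the <PSG> separator token is an illustrative stand-in, and overlapping context windows are not merged here.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def build_support_document(question, sentences, n_passages=15, context=1):
    """Rank sentences of the web sources by TF-IDF similarity to the question,
    keep the top passages with one sentence of context on each side, and
    concatenate them (in document order) with a separator token."""
    vec = TfidfVectorizer().fit(sentences + [question])
    sims = cosine_similarity(vec.transform([question]), vec.transform(sentences))[0]
    top = sorted(sims.argsort()[::-1][:n_passages])  # best sentences, document order
    passages = []
    for i in top:
        lo, hi = max(0, i - context), min(len(sentences), i + context + 1)
        passages.append(" ".join(sentences[lo:hi]))
    return " <PSG> ".join(passages)
```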
Figure 2 shows examples of ELI5 questions split by type and Appendix Figure 11 displays random examples from the ELI5 training set. Interestingly, even What questions tend to require paragraphlength explanations (What is the difference...). Support documents contain 22-60 sentences or on average 858 words, which puts ELI5 on the higher end of published datasets for document length. ELI5 contains long-form answers with an average length of 6.6 sentences, or 130 words. Next, we analyze a random subset of ELI5 to assess the feasability of answering the questions in the dataset. We judge if the question is answerable by reading each question, the gold answer, and the support document we have created with TF-IDF extraction. Note that questions can have multiple parts and all parts of the question must be answered. We sample 500 randomly questionanswer pairs from the training set and find that 94.5% of gold answers fully address the question (Table 2) based on the information in the support document. Figure 12 in Appendix F displays examples of human answers that do not correctly answer the question. A small proportion of answers are correct but do not explain the answer. On the support document side, 65% of the support documents we construct provide the answer to the question, and 92% of support documents provide information relevant to the question. 4 Evaluation Methods Evaluating long-form answers. There are several aspects to quality: answers should be topical and accurate, fluent, and coherent from start to end. We judge the accuracy aspect by comparing to the gold answer. ROUGE (Lin, 2004) measures similarity between a model output and one or several references, and is often used in summarization. While our task presents different challenges, such as the diversity of possible answers to a question, we still find the corpus-level metric to be useful to rank different related models (§6). We report F1 for ROUGE-1, ROUGE-2, and ROUGE-L. Abstractive model metrics. For generative models, perplexity (PPL) measures the ability to predict the next word in a sequence given its context. For a variant which focuses on semantically important words, we report FILL-1, the accuracy at which models generate different Nouns, Verbs, and Adjectives given the correct preceding tokens in the first 2K examples of the test set. Finally, ROUGE-20% measures the model’s ability to complete an answer given the first 80% of the reference answer, the question, and the support document. Specifically, we generate a number of tokens corresponding to 20% of the average answer length in the validation set, and measure ROUGE between these and the last 20% of the reference. We mentioned that there are several valid ways to answer most questions. This measure abstracts away this variability and evaluates a system’s ability to complete an answer. 3562 Extractive: But none of these hypotheses make much sense for a jellyfish because they do nt have brains at all. They just have a nerve net — a loose ring of neurons that runs around the rim of their pulsating bells. They have an unusual nervous system, because jellyfish are not bilaterally symmetrical — that is, they dont have a left side and a right side. Jellyfish don’t have brains, but their nervous systems detect smells, light and other stimuli, and they coordinate their physical responses. Language Model: They don’t have brains. They have a nervous system. They do not have a brain. They are not conscious. They can not move. They move. Their nervous system is not a muscle. 
They use their muscles to move. The nervous system does not have any nerves. They just have a nerve. They also have a muscle that is very similar to a muscle in that it is very much like a muscle, but instead of a muscle it is a muscle and it is like a nerve, but it is not muscle. The nerve is like the muscle, it is just a muscle but it has a nerve and it has nerves. The nerves are like the muscles, but they have nerves. Multi-task Seq2Seq: Jellyfish don’t have brains. Jellyfish have a nervous system. They know how to swim, they don’t even know what to do with their bodies. They have a “nervous system”, which is basically a bunch of nerve cells that connect to the body. The nervous system is made up of neurons, neurons, nerves, etc. Each neuron doesn’t have a specific set of “neurons”, it just has a different set of nerves. It has a set of sensory neurons, which are all connected to each other. It just knows that it does react to the stimulus, and it knows that the stimulus is there. They have a bunch of nervous systems that act like a filter to get information back. Figure 3: Example answers from the extractive BidAF model, Question + Document + Answer language model, and Multi-task Seq2Seq model for the question “How do Jellyfish function without brains or nervous systems?” (cf. Figure 1). Human evaluation. We use crowdworkers to conduct three assessments. First, evaluators rate the fluency of human and model generated answers on a 5-point Likert Scale, from “very poorly written” to “easily readable” (500 evaluations). Second, evaluators are given question-answer pairs and are asked if the answer is correct (500 evaluations) 7. We also evaluated a smaller subset ourselves while additionally looking at the support documents (100 evaluations) to assess answer accuracy. Lastly, crowdworkers are given the question and answers from two models and asked to decide which answer they prefer while considering readability and accuracy (1000 evaluations). Each crowdworker assessment is made by 3 different evaluators. The same questions are used for all models and must be at least 5 words long. 5 Models 5.1 Extractive and Retrieval Models Retrieval baseline and oracle. We report ROUGE for a retrieval system that returns the answer of the closest question in the training set. Specifically, we perform a nearest neighbor search (Johnson et al., 2017) over the average word embeddings of the question using FASTTEXT (Bojanowski et al., 2017). We also compute an approximate oracle score for extractive systems by using the reference answer to select similar sentences from the support document to maximize ROUGE. Computing ROUGE between the reference and all sets of sentences from the source 7We experimented with a variant where crowdworkers were allowed to select a third I don’t know option, but found it was used only around 8% of the time. is intractable. Instead, we perform a beam search that adds sentences maximizing TFIDF with respect to the answer. The final beam is re-ranked using ROUGE with respect to the reference answer. We run this algorithm on our support document and on the full set of web sources for each validation and test question, selecting up to 10 sentences with a beam of size 10. Extractive models. The first baseline we explore simply returns the 7 sentences from the support document which have the highest TFIDF similarity with the question. 
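The retrieval baseline described above can be sketched in a few lines. The paper performs the nearest neighbor search with the library of Johnson et al. (2017) over averaged FastText embeddings; the plain NumPy version below captures the same idea on a small scale and is not the authors' implementation.

```python
import numpy as np

def average_embedding(question, word_vectors, dim=300):
    """Mean of the word vectors of the question tokens (zeros if none are known).
    `word_vectors` is assumed to be a dict-like mapping from word to vector."""
    vecs = [word_vectors[w] for w in question.split() if w in word_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def nearest_neighbor_answer(test_question, train_questions, train_answers, word_vectors):
    """Return the training answer whose question embedding is closest (cosine)."""
    q = average_embedding(test_question, word_vectors)
    train = np.stack([average_embedding(t, word_vectors) for t in train_questions])
    sims = train @ q / (np.linalg.norm(train, axis=1) * np.linalg.norm(q) + 1e-8)
    return train_answers[int(sims.argmax())]
```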
We also evaluate models which score sentences from the support document based on the question and return the highest scoring sentences in their original order (the number is tuned on the validation set to maximize ROUGE). We train a model based on BidAF (Seo et al., 2017). We create an extractive training set by finding the span of up to 5 contiguous sentences in the support document which have the highest ROUGE with respect to the reference answer, and sub-sample other support document sentences so that the final training document is shorter than 400 words. We then train a BidAF model to predict the extracted span in the sub-sampled support document based on the question. For test, we compute the span score for each individual sentence, and return the 5 with the highest score as it performed best compared to returning 3 or 7 sentences. 5.2 Abstractive Models Language and Seq2Seq models. We train several models based on the Transformer architecture (Vaswani et al., 2017), both in its language model and sequence-to-sequence (Seq2Seq) con3563 Model PPL ROUGE 1 2 L Support Document 16.8 2.3 10.2 Nearest Neighbor 16.7 2.3 12.5 Extractive (TFIDF) 20.6 2.9 17.0 Extractive (BidAF) 23.5 3.1 17.5 Oracle support doc 27.4 2.8 19.9 Oracle web sources 54.8 8.6 40.3 LM Q + A 42.2 27.8 4.7 23.1 LM Q + D + A 33.9 26.4 4.0 20.5 Seq2Seq Q to A 52.9 28.3 5.1 22.7 Seq2Seq Q + D to A 55.1 28.3 5.1 22.8 Seq2Seq Multi-task 32.7 28.9 5.4 23.1 Table 3: Comparison of oracles, baselines, retrieval, extractive, and abstractive models on the full proposed answers. Model FILL-1 acc. ROUGE-20% N V A 1 2 L LM Q + A 31.0 29.6 20.6 26.5 7.0 21.1 LM Q + D + A 30.9 28.9 19.9 26.3 7.8 21.3 S2S Q to A 21.7 23.0 15.5 33.6 11.5 29.5 S2S Q + D to A 27.6 26.3 19.4 32.7 10.7 28.6 S2S Multi-task 27.9 26.7 19.9 37.2 14.6 33.0 Table 4: Intermediary fill-in tasks for sequential generation. figurations. To investigate how much information from the document the model uses, we train a language model on the concatenation of Question, Support Document, and Answer (Q + D + A) as well as on the Question and Answer (Q + A). Similarly, one Seq2Seq configuration goes from Q to A, and the other from Q + D to A. In all cases, Q, D, and A are separated by special tokens. Multi-task training. Language models are trained to predict all tokens in the question, web source, and answer. However, the standard Seq2Seq model only receives training signal from predicting the answer which is much less than the language model gets. This can contribute to learning poor quality representations compared to language models. To address this, we train a multi-task Seq2Seq model: during training, we multi-task between several generation tasks, including language modeling of Q + D + A by the decoder and variations of source/target pairs (see Appendix A). We add a masked word prediction task (Devlin et al., 2018) where 15% of tokens in the input are masked and must be recovered by the model in the correct order, and append a marker at the start of each sequence to indicate the task. Data processing. To reduce the vocabulary, we apply byte-pair encoding (Sennrich et al., 2016) to generate 40K codes which are applied to all datasets. We model a vocabulary of 52,863 tokens for answer generation. We use the Transformer implementation of fairseq-py (Gehring et al., 2017) and train with the big architecture following the details in (Vaswani et al., 2017). 
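The following sketch shows one way the multi-task source/target pairs described above could be constructed, with a task marker prepended to the source, special tokens separating Q, D and A, and 15% of input tokens masked for the masked word prediction task. The task names and special tokens are placeholders (the paper's full task list is in its Appendix A), not the authors' exact preprocessing.

```python
import random

def make_training_example(question, document, answer, task, mask_rate=0.15):
    """Construct one (source, target) pair for a given training task.
    Token names <s2s>, <lm>, <mask>, <Q>, <D>, <A>, <MASK> are assumptions."""
    body = f"<Q> {question} <D> {document}"
    if task == "s2s":      # question + support document -> answer
        return f"<s2s> {body}", answer
    if task == "lm":       # language modeling of Q + D + A by the decoder
        return "<lm>", f"{body} <A> {answer}"
    if task == "mask":     # masked word prediction over the input
        tokens = body.split()
        masked = [t if random.random() > mask_rate else "<MASK>" for t in tokens]
        return "<mask> " + " ".join(masked), body
    raise ValueError(f"unknown task: {task}")
```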
Given our data length, we train with a large batch size by delaying gradient updates until a sufficient number of examples have been seen (Ott et al., 2018). Generation. We generate from abstractive models using beam search with beam 5. We disallow repeated trigrams to prevent repetition, a technique commonly used in multi-sentence summarization (Paulus et al., 2017; Fan et al., 2018). For the full answer generation task, we tune a minimum and maximum length for generation on the valid set and apply these settings to the test set. 6 Results 6.1 Overview of Model Performance Full answer ROUGE. Table 3 shows that the nearest neighbor baseline performs similarly to simply returning the support document which indicates that memorizing answers from the training set is insufficient. For extractive models, the oracle provides an approximate upper bound of 27.4 ROUGE-1. The BidAF model is the strongest (23.5), better than TFIDF between the question and the support document to select sentences. However, these approaches are limited by the support document, as an oracle computed on the full web sources achieves 54.8. Abstractive methods achieve higher ROUGE, likely because they can adapt to the domain shift between the web sources and the ELI5 subreddit. In general, Seq2Seq models perform better than language models and the various Seq2Seq settings do not show large ROUGE differences. Figure 3 shows an example of generation for the language model and the best Seq2Seq and extractive settings (see Appendix F for additional random examples). Perplexity and fill-in tasks. Tables 3 and 4 present metrics specific to sequential generation models: perplexity of the answer, accuracy of the model’s FILL-1 word prediction for Nouns, Verbs, and Adjectives, and ROUGE of the conditional generation of the last 20% answer words. The language model perplexity is much lower than that of the standard Seq2Seq setting – this is likely linked to the number of output tokens the system 3564 Figure 4: Human evaluation of answer fluency and accuracy — with and without access to supporting evidence documents Figure 5: Human preferences for pairwise comparisons. The better model’s % preference is bolded. * indicates statistical significance. is required to predict at training time. The multitask Seq2Seq experiment, in which the Seq2Seq decoder is trained to predict the question and the document, in addition to the answer, can reach the same perplexity as the language model. ROUGE20% shows a much starker contrast between language modeling and Seq2Seq, as well as between standard Seq2Seq and multi-task training. The latter achieves strong performance of 37.2 ROUGE1. However, both versions of the language model are still better at FILL-1. These results suggest that the Seq2Seq model is better than the language model in maintaining coherence and that Seq2Seq relies on information over many time steps. Human evaluation. Human answers are rated highest in terms of fluency (Figure 4, left). The extractive model outputs human-written text which is likely fluent but with the failure mode of concatenating unrelated sentences. The multi-task model performs similarly to the extractive model which indicates that abstractive methods can generate coherent answers. The language model and standard Seq2Seq trail behind. To get a sense of the stability of our results, we analyzed the standard deviation of three independent fluency trials conducted on separate days and we find low variation (Appendix E, Figure 10). 
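Returning to the generation setup described above (beam search of size 5 with repeated trigrams disallowed), the trigram constraint can be sketched as a simple filter over candidate next tokens. This is an illustrative stand-in, not the fairseq implementation used for the reported results.

```python
# Illustrative sketch of the repeated-trigram constraint (not the fairseq implementation
# used for the reported results).
def creates_repeated_trigram(prefix_tokens, candidate_token):
    """Return True if appending `candidate_token` would repeat a trigram already generated."""
    if len(prefix_tokens) < 2:
        return False
    new_trigram = tuple(prefix_tokens[-2:]) + (candidate_token,)
    seen = {tuple(prefix_tokens[i:i + 3]) for i in range(len(prefix_tokens) - 2)}
    return new_trigram in seen


def filter_candidates(prefix_tokens, candidate_tokens):
    """Drop candidate next tokens that would create a repeated trigram."""
    return [t for t in candidate_tokens if not creates_repeated_trigram(prefix_tokens, t)]


if __name__ == "__main__":
    prefix = ["the", "nervous", "system", "is", "the", "nervous"]
    # "system" would recreate the trigram ("the", "nervous", "system"), so it is blocked.
    print(filter_candidates(prefix, ["system", "net", "response"]))
```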
We also measure agreement between crowdworkers in selecting positive (scores 4 and 5), negative (1 and 2), or neutral (3) choices on the 5-point Likert scale, and find that 2 crowdworkers agree almost 100% of the time (Appendix E, Figure 10). In answer accuracy (Figure 4, middle), there is a large gap between human performance and all models. The language model is almost never accurate, while the extractive model is slightly more so than the multi-task model. Crowdworkers assessing accuracy do not have the support document. We evaluate accuracy ourselves with the support document in Figure 4, right. Similar to crowdworkers, we find 40% of extractive answers to be accurate. We find only 19% of multi-task model answers are fully accurate; even if the model output answers the question, it can generate a sentence with an incorrect statement. In contrast, the extractive model copies sentences from humanwritten text. However, the multi-task model is better at generating relevant answers (84% relevancy compared to 68% for extractive), as the extractive model is constrained by the support document. Figure 5 presents pairwise preference judgments of human annotators shown answers from two of the five systems. The reference answer is preferred over the output of all of our trained models in at least 85.5% of cases, indicating there is substantial room for improvement. The multi-task abstractive setting comes next, closely followed by the extractive (multi-task is only preferred in 57% of comparisons), then the standard Seq2Seq and finally the language model, considered worse than any other setting in at least 91% of cases. We use a two-tailed binomial test to test statistical significance of the pairwise judgments and it shows that all judgments are statistically significant at p = 0.05. 6.2 Quantitative and Qualitative Analysis Discussion of the proposed metrics. We present a number of metrics which provide insight into various model behaviors. We recommend 3565 Figure 6: Attention over the question and supporting evidence for the Multi-task Seq2Seq model and Question + Document + Answer language model. Attention is shown for the first word of answer generation. future work to report full ROUGE and ROUGE20%. Perplexity and FILL-1 focus on local prediction and are poor indicators of overall appropriateness for the full task. Full answer ROUGE discriminates reasonably well between models with the same general architecture, but cannot rate an abstractive system against an extractive one. The ROUGE-20% measure abstracts away some variability and focuses on coherence between the beginning and end of an answer. This metric correlates with human judgments of quality but can only be reported for sequential generation. Analysis of extractive, LM and Seq2Seq models. Language models perform better than Seq2Seq in terms of perplexity and FILL-1, while being significantly worse at ROUGE-20% and human evaluations. To investigate this, we visualize the attention mechanism at the start of answer generation in Figure 6. The attention of the language model is strongly focused on nearby context when generating the first word of the answer, whereas the multi-task Seq2Seq model attends more evenly to relevant information in the question and the document. This validates our assumption that the language model’s focus on local context is insufficient for high quality answers. 
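For the pairwise preference comparisons discussed above, the two-tailed binomial test can be run directly with SciPy; the counts below are hypothetical and serve only to illustrate the procedure, not to reproduce the study's judgments.

```python
# Illustrative significance check for pairwise preference judgments (hypothetical counts,
# not the counts collected in the study).
from scipy.stats import binomtest

# Hypothetical outcome: model A preferred in 570 of 1000 pairwise comparisons.
wins_for_a, total_comparisons = 570, 1000

# Two-tailed test against the null hypothesis that both systems are preferred equally often.
result = binomtest(wins_for_a, n=total_comparisons, p=0.5, alternative="two-sided")
print(f"preference rate = {wins_for_a / total_comparisons:.1%}, p-value = {result.pvalue:.4f}")
print("significant at p = 0.05" if result.pvalue < 0.05 else "not significant at p = 0.05")
```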
In Figure 7 (left), we further investigate how the relevance and quality of the support document extraction step affects the answers provided by the extractive and abstractive setting. The ROUGE score is displayed for data subsets, partitioned by percentile of word overlap of the answer with the support document (e.g. how many answer words appear). While both models perform better for documents with higher ROUGE overlap between support document and human answer, the abstractive setting is much better at compensating for when the support document has lower relevance. Data size and initial selection. There is a large difference between the extractive oracle ROUGE using our support document and the oracle on full Figure 7: (left) Model score by document-answer similarity. (right) Seq2Seq multi-task score by amount of training data. Figure 8: (left) TFIDF rank of source passage for oracle sentences. (right) Highest rank used per question. web sources. This suggests that the initial selection of our support document severely limits access to relevant information. To assess the impact of support document size, we re-run the selection step for 1000 examples to extract 500 passages instead of 20, and run the oracle on these new inputs. Figure 8 shows the TFIDF rank of the passages from which sentences are selected. While slightly more sentences are extracted from the higher ranking passages, less than 9% come from the first 20, and most oracles have at least one sentence from the last 100. For a model to perform best, it would have to handle inputs tens of thousands of words long. In Table 3, we show an oracle computed on the full web sources has much higher ROUGE than an oracle computed on the support document. We analyze the impact of data size on performance in Figure 7. We train the multi-task model on 25%, 50%, and 75%, and the all of the data to compare performance. ROUGE increases as a function of the data used and even though ELI5 is one of the larger QA datasets (§3), this shows that collecting more still helps. While we only used one reference answer per question here, recall that over half of them have multiple answers, which could be leveraged to train better models. 3566 Combining challenges. Our task blends the inter-dependent challenges of retrieving information, reasoning, and writing long outputs. Studying each of these aspects in context is particularly important. For example, we show that the abstractive model’s ability to compensate for a (realistically) imperfect support document is essential to its relative success over extractive methods. The fluency gap between the reference and the extractive system in human evaluation also suggests that the latter may require sequential decision capabilities. This kind of decision making is necessary to address the dual challenges of reasoning over several supporting facts and generating long coherent outputs. We see our task’s need to combine complementary systems as critical to gaining insights into their individual behaviors. 7 Conclusion We introduce the first large-scale long form question answering dataset of open-ended queries with explanatory multi-sentence answers. We show that abstractive models generate coherent answers and are competitive with extractive models in human evaluation. Proposed models are far from human performance, in part due to the inability to exploit the long full web text. 
We hope ELI5 will inspire future work in all aspects of long-form QA, from the information extraction problem of obtaining information from long, multi-document input to generating more coherent and accurate paragraph-length answers. References Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. TACL, 5:135–146. Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading wikipedia to answer opendomain questions. In ACL. Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wen tau Yih, Yejin Choi, Percy Liang, and Luke Zettlemoyer. 2018. Quac: Question answering in context. In EMNLP. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805. Matthew Dunn, Levent Sagun, Mike Higgins, V. Ugur G¨uney, Volkan Cirik, and Kyunghyun Cho. 2017. Searchqa: A new q&a dataset augmented with context from a search engine. CoRR, abs/1704.05179. Angela Fan, David Grangier, and Michael Auli. 2018. Controllable abstractive summarization. In ACL Workshop on Neural Machine Translation and Generation. Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. 2017. Convolutional Sequence to Sequence Learning. In Proc. of ICML. Jeff Johnson, Matthijs Douze, and Herv´e J´egou. 2017. Billion-scale similarity search with gpus. CoRR, abs/1702.08734. Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. In ACL. Tomas Kocisky, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, Gabor Melis, and Edward Grefenstette. 2018. The narrativeqa reading comprehension challenge. TACL. Chin-Yew Lin. 2004. Rouge: a package for automatic evaluation of summaries. In ACL Workshop on Text Summarization Branches Out. Peter J. Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, and Noam Shazeer. 2018. Generating wikipedia by summarizing long sequences. In ICLR. Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. Ms marco: A human generated machine reading comprehension dataset. CoRR. Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. 2018. Scaling neural machine translation. In WMT, pages 1–9. Association for Computational Linguistics. Romain Paulus, Caiming Xiong, and Richard Socher. 2017. A deep reinforced model for abstractive summarization. arXiv preprint arXiv:1705.04304. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don’t know: Unanswerable questions for squad. In ACL. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In EMNLP. Siva Reddy, Danqi Chen, and Christopher D Manning. 2018. Coqa: A conversational question answering challenge. arXiv preprint arXiv:1808.07042. 3567 Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In ACL. Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention flow for machine comprehension. In ICLR. Anastasios Tombros and Mark Sanderson. 1998. Advantages of query biased summaries in information retrieval. In SIGIR. 
Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2017. Newsqa: A machine comprehension dataset. In ACL Workshop on Representation Learning for NLP. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. NIPS. Ellen M. Voorhees. 2003. Overview of the TREC 2003 question answering track. In TREC. Dirk Weissenborn, Georg Wiese, and Laura Seiffe. 2017. Making neural qa as simple as possible but not simpler. In CoNLL. Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W Cohen, Ruslan Salakhutdinov, and Christopher D Manning. 2018. Hotpotqa: A dataset for diverse, explainable multi-hop question answering. arXiv preprint arXiv:1809.09600.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3568–3584 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3568 Textbook Question Answering with Multi-modal Context Graph Understanding and Self-supervised Open-set Comprehension Daesik Kim1,2,∗ Seonhoon Kim1,3,∗ Nojun Kwak1,† 1Seoul National University 2V.DO Inc. 3Search&Clova, Naver Corp. {daesik.kim|nojunk}@snu.ac.kr [email protected] Abstract In this work, we introduce a novel algorithm for solving the textbook question answering (TQA) task which describes more realistic QA problems compared to other recent tasks. We mainly focus on two related issues with analysis of the TQA dataset. First, solving the TQA problems requires to comprehend multimodal contexts in complicated input data. To tackle this issue of extracting knowledge features from long text lessons and merging them with visual features, we establish a context graph from texts and images, and propose a new module f-GCN based on graph convolutional networks (GCN). Second, scientific terms are not spread over the chapters and subjects are split in the TQA dataset. To overcome this so called ‘out-of-domain’ issue, before learning QA problems, we introduce a novel self-supervised open-set learning process without any annotations. The experimental results show that our model significantly outperforms prior state-of-the-art methods. Moreover, ablation studies validate that both methods of incorporating f-GCN for extracting knowledge from multi-modal contexts and our newly proposed self-supervised learning process are effective for TQA problems. 1 Introduction In a decade, question answering (QA) has been one of the most promising achievements in the field of natural language processing (NLP). Furthermore, it has shown great potential to be applied to real-world problems. In order to solve more realistic QA problems, input types in datasets have evolved into various combinations. Recently, Visual Question Answering (VQA) has drawn huge attractions as it is in the intersection * Equal contribution. † Corresponding author. This work was supported by Next-Generation Information Computing Development Program through the National Research Foundation of Korea (NRF-2017M3C4A7078547). Nucleic acid classification fuction of nucleic acid DNA stores genetic information in the cells of all living things. It contains the genetic code. This is the code that instructs cells how to make proteins. nucleotide RNA consists of just one chain of nucleotides. DNA consists of two chains. Nitrogen bases on the two chains of DNA form hydrogen bonds with each other. Hydrogen bonds are relatively weak bonds that form between a positively charged hydrogen atom in one molecule and a negatively charged atom in another molecule. Context Graph Questions nitrogen bases in dna include a) adenine. b) uracil. c) ribose. d) two of the above What is the term for connected sugar, phosphate group and protein? a) hydrogen bond b) deoxyribose c) nucleotide d) sugar-phosphate backbone Comprehend + Solve LESSON Training Set Validation Set Training Set Figure 1: Examples of the textbook question answering task and a brief concept of our work. In this figure, we can see lessons which contain long essays and diagrams in the TQA. Related questions are also illustrated. With a self-supervised method, our model can comprehend contexts converted into context graphs in training and validation sets. 
Then it learns to solve questions only in the training set in a supervised manner. Input Type Context QA Visual QA Textbook QA Context Part Text ◦ ◦ Image ◦ ◦ Question Part Text ◦ ◦ ◦ Image ◦ Table 1: Comparison of data types in context and question parts for context QA, VQA and TQA. It shows that the data format of the TQA task is the most complicated on both of context and question parts. of vision and language. However, the Textbook Question Answering (TQA) is a more complex and more realistic problem as shown in Table 1. Compared to context QA and VQA, the TQA uses both text and image inputs in both the context and the question. The TQA task can describe the real-life process of a student who learns new knowledge from books and practices to solve related problems (Figure 1). It also has several novel characteristics as a realistic dataset. Since the TQA contains visual contents as well as textual contents, it requires to solve multi-modal QA. Moreover, for3569 mats of questions are various which include both text-related questions and diagram-related questions. In this paper, we focus on the following two major characteristics of the TQA dataset (Kembhavi et al., 2017). First, compared to other QA datasets, the context part of TQA has more complexity in the aspect of data format and length. Multi-modality of context exists even in non-diagram questions and it requires to comprehend long lessons to obtain knowledge. Therefore, it is important to extract exact knowledge from long texts and arbitrary images. We establish a multi-modal context graph and propose a novel module based on graph convolution networks (GCN) (Kipf and Welling, 2016) to extract proper knowledge for solving questions. Next, various topics and subjects in the textbooks are spread over chapters and lessons, and most of the knowledge and terminology do not overlap between chapters and subjects are split. Therefore, it is very difficult to solve problems on subjects that have not been studied before. To resolve this problem, we encourage our model to learn novel concepts and terms in a self-supervised manner before learning to solve specific questions. Our main contributions can be summarized as follows: • We propose a novel architecture which can solve TQA problems that have the highest level of multi-modality. • We suggest a fusion GCN (f-GCN) to extract knowledge feature from the multi-modal context graph of long lessons and images in the textbook. • We introduce a novel self-supervised learning process into TQA training to comprehend open-set dataset to tackle the out-of-domain issues. With the proposed model, we could obtain the state-of-the-art performance on TQA dataset, which shows a large margin compared with the current state-of-the-art methods. 2 Related Work 2.1 Context question answering Context question answering, also known as machine reading comprehension, is a challenging 134 668 200 400 600 800 SQuAD TQA 0.84 0.79 0.76 0.78 0.80 0.82 0.84 0.86 SQuAD TQA a) Average length of contexts b) Ratio of words in valset that appear in trainset Figure 2: Analysis of contexts in TQA and SQuAD datasets. task which requires a machine not only to comprehend natural language but also to reason how to answer the asked question correctly. Large amount of datasets such as MCTest (Richardson et al., 2013), SQuAD (Rajpurkar et al., 2016) or MS Marco (Nguyen et al., 2016) have contributed significantly to the textual reasoning via deep learning approaches. 
These datasets, however, are restricted to a small set of contents and contain only uni-modal problems requiring only textual information. In addition, these sets require relatively less complex parsing and reasoning compared to TQA dataset (Kembhavi et al., 2017). In this study, we tackle TQA, the practical middle school science problems across multiple modalities, by transforming long essays into customized graphs for solving the questions on a textbook. 2.2 Visual question answering As the intersection of computer vision, NLP and reasoning, visual question answering has drawn attention in the last few years. Most of pioneering works in this area (Xu and Saenko, 2016; Yang et al., 2016; Lu et al., 2016) are to learn a joint image-question embedding to identify correct answers where the context is proposed by images alone. Then, various attention algorithms have been mainly developed in this field and methods of fusing textual and visual information such as bilinear pooling (Fukui et al., 2016; Yu et al.) have also been widely studied. Thereafter, datasets focusing on slightly different purposes have been proposed. For instance, CLEVR (Johnson et al., 2017) encouraged to solve the visual grounding problem and AI2D (Kembhavi et al., 2016) suggested a new type of data for knowledge extraction from diagrams. In this paper, we incorporate UDPnet (Kim et al., 2018) to extract knowledge from diagram parsing graph in the textbook. Recent researches (Teney et al., 2017; Norcliffe3570 Diagrams a) Preparation step for k-th answer among n candidate TF-IDF context 1 context 2 context 3 3) Answer k 2) Question f-GCN RNN RNN MAX POOL MAX POOL ATTENTION ATTENTION CONCAT FC Y1 ... Yk Yn context m TF-IDF Dependency Parsing b) Embedding step and Solving step Top-1 Filter by anchor nodes Question Answer k GloVe+Char_emb GloVe+Char_emb c c k th RNNs Text Image Diagram Parsing 4) Visual Context Graph m 5) Textual Context Graph m Diagram Parsing 1) Diagram Graph* GCN* ATTENTION* Image Text Context Part Question Part Dependency Tree Diagram Figure 3: Overall framework of our model: (a) The preparation step for the k-th answer among n candidates. The context m is determined by TF-IDF score with the question and the k-th answer. Then, the context m is converted to a context graph m. The question and the k-th answer are also embedded by GloVe and character embedding. This step is repeated for n candidates. (b) The embedding step uses RNNC as a sequence embedding module and f-GCN as a graph embedding module. With attention methods, we can obtain combined features. After concatenation, RNNS and the fully connected module predict final distribution in the solving step. Brown et al., 2018) also have dealt with graph structure to solve VQA problems. 3 Problem Formally, our problem can be defined as follows: ˆa = argmax a∈Ωa p(a|C, q; θ) (1) where C is given contexts which consist of textual and visual contents and q is a given question which can contain question diagrams for diagram problems. θ denotes the trainable parameters. With given C and q, we are to predict the best answer ˆa among a set of possible answers Ωa. The TQA contexts contain almost all items in textbooks: topic essay, diagrams and images, lesson summaries, vocabularies, and instructional videos. Among them, we mainly use topic essay as textual contexts and diagrams as visual contexts. Among various issues, the first problem we tackle is the complexity of contexts and variety in data formats as shown in Table 1. 
Especially, analysis of textual context in Figure 2(a) shows that the average length of contexts in the TQA is 668 words which is almost 5 times larger than that of the SQuAD which has 134 words on average. Also, in (Kembhavi et al., 2017), analysis of information scope in TQA dataset provides two important clues that about 80% of text questions only need 1 paragraph and about 80% of diagram questions only need 1 context image and 1 paragraph. Due to those evidences, we need to add an information retrieval step such as TF-IDF (term frequency–inverse document frequency) to narrow down scope of contexts from a lesson to a paragraph, which significantly reduces the complexity of a problem. Moreover, a graph structure can be suitable to represent logical relations between scientific terms and to merge them with visual contexts from diagrams. As a result, we decide to build a multi-modal context graph and obtain knowledge features from it. In Figure 2(b), we obtain the percentage of how much the terms in the validation set are appearing in the training set. Obviously, the ratio of the TQA (79%) is lower than that of the SQuAD (84%) which can induce out-of-vocabulary and domain problems more seriously in the TQA task. To avoid aforementioned issues, we apply a novel self-supervised learning process before learning to solve questions. 4 Proposed Method Figure 3 illustrates our overall framework which consists of three steps. In a preparation step, we use TF-IDF to select the paragraph most relevant to the given question or candidate answers. Then, we convert it into two types of context graphs for text and image, respectively. In the embedding step, we exploit an RNN (denoted as RNNC in the figure) to embed textual inputs, a question and an answer candidate. Then, we incorporate f-GCN to extract graph features from both the visual and the textual context graphs. After repeating previous steps for each answer candidate, we can stack each 3571 Visual Context Graph Textual Context Graph GCN GCN Attention GCN Fused Graph Representation Weighted Sum Ht Hd Hc c c Figure 4: Illustration of f-GCN. Both of textual and visual contexts are converted into Hd c and Ht c. With attention methods, we obtain combined features of Ht c and Hd c (f-GCN1). Finally, we use GCN again to propagate over entire features of context graphs (f-GCN2). of concatenated features from the embedding step. We exploit another RNN (RNNS) to cope with the variable number of answer candidates which varies from 2 to 7 that can have sequential relations such as “none of the above” or “all of the above” in the last choice. Final fully connected layers decide probabilities of answer candidates. Note that notation policies are included in the supplementary. 4.1 Multi-modal Context Graph Understanding 4.1.1 Visual and Textual Context graphs For the visual contexts and the question diagrams, we build a visual context graph using UDPnet (Kim et al., 2018). We obtain names, counts, and relations of entities in diagrams. Then we can establish edges between related entities. Only for question diagrams, we use counts of entities transformed in the form of a sentence such as “There are 5 objects” or “There are 6 stages”. We build the textual context graphs using some parts of the lesson where the questions can focus on solving problems as follows. 
Each lesson can be divided into multiple paragraphs and we extract one paragraph which has the highest TF-IDF score using a concatenation of the question and one of the candidate answers (leftmost of Figure 3(a)). Then, we build the dependency trees of the extracted paragraph utilizing the Stanford dependency parser (Manning et al., 2014), and designate the words which exist in the question and the candidate answer as anchor nodes. The nodes which have more than two levels of depth difference with anchor nodes are removed and we build the textual context graphs using the remaining nodes and edges (Process 1 in the supplementary). 4.1.2 Graph Understanding using f-GCN Next, we propose f-GCN to extract combined graph features for visual and textual context graphs as shown in Figure 4. Each of context graphs has its own graph matrix C containing node features and a normalized adjacency matrix which are used as inputs of a GCN to comprehend the contexts. Here, the graph matrix C is composed of the word embeddings and the character representation. First, we extract propagated graph features from both of context graphs based on one-layer GCN as Ht c =f(Ct, At) = σ(AtCtW t) Hd c =f(Cd, Ad) = σ(AdCdW d), (2) where At and Ad are the adjacency matrices for the text and visual contexts, W t and W d are learning parameters of linear layer for the text and visual contexts, and the element-wise operation σ is the tanh activation function. After that, we use dot product function to get attention matrix Z of visual context Hd c against textual context Ht c which contains main knowledge. Then we concatenate features of textual context Ht c and weighted sum ZT Hd c to get entire context features, H1 c = [Ht c; ZT Hd c ], (3) where [· ; ·] is the concatenation operator. Compared to the textual-context-only case, we can obtain double-sized features which can be more informative. Finally, we use a GCN again to propagate over entire features of context graphs: H2 c =f(H1 c , At) = σ(AtH1 c W c). (4) We denote this module except the last GCN as fGCN1 (eq. (3)) and the whole module including the last GCN as f-GCN2 (eq. (4)). 4.2 Multi-modal Problem Solving The f-GCN and RNNs are used to embed the contexts and answer the questions as shown in Figure 3(b). Two different RNNs are used in our architecture. One is the comprehending RNN (RNNC) which can understand questions and candidate answers and the other is the solving RNN (RNNS) which can answer the questions. The input of the RNNC is comprised of the word embedding, character representation and the occurrence flag for both questions and candidate answers. In word embedding, each word can be 3572 represented as eqi/eai by using a pre-trained word embedding method such as GloVe (Pennington et al., 2014). The character representation cqi/cai is calculated by feeding randomly initialized character embeddings into a CNN with the max-pooling operation. The occurrence flag fqi/fai indicates whether the word occurs in the contexts or not. Our final input representation qw i for the question word qi in RNNC is composed of three components as follows: eqi =Emb(qi), cqi = Char-CNN(qi) qw i = [eqi; cqi; fqi]. (5) The input representation for the candidate answers is also obtained in the same way as the one for the question. Here, Emb is the trainable word embeddings and Char-CNN is the character-level convolutional network. To extract proper representations for the questions and candidate answers, we apply the step-wise max-pooling operation over the RNNC hidden features. 
Given each of the question and the candidate answer representations, we use an attention mechanism to focus on the relevant parts of the contexts for solving the problem correctly. The attentive information Attq of the question representation hq against the context features Hc as in (3) or (4) is calculated as follows: Attq = K X k=1 αkHck, αk = exp(gk) PK i=1 exp(gi) , gk = hT q MHck. (6) Here, K is the number of words in the context C which equals the dimension of the square adjacency matrix A. M is the attention matrix that converts the question into the context space. The attentive information of the candidate answers Atta is calculated similar to Attq. RNNS can solve the problems and its input consists of the representations of the question and the candidate answer with their attentive information on the contexts as: It RNNS = [hq; ha; Attc q; Attc a], Id RNNS = [hq; ha; Attc q; Attc a; Attqd q ; Attqd a ] (7) where It RNNS is for the text questions and Id RNNS is for the diagram questions. Finally, based on the outputs of RNNS, we use one fully-connected layer followed by a softmax function to obtain a probability distribution of each candidate answer and optimize those with cross-entropy loss. context Top-1 context m Top-2 context Top-n TF-IDF Top-1 is correct Context Graph m Same structure as normal training Diagrams context 1 context 2 context 3 Question Answer k Text Image Image Text Context Part Question Part f-GCN RNN RNN MAX POOL MAX POOL ATTENTION ATTENTION CONCAT FC Y1 ... Yk Yn c c k th RNNs GCN* ATTENTION* Figure 5: Self-supervised open-set comprehension step in our model. We set contexts as candidates we should predict for the question and the k-th answer. For each answer, we obtain n context candidates from TF-IDF methods and set the top-1 candidate as the correct context. While we use the same structure as in Figure 3, we can predict final distribution after all the steps. 4.3 Self-supervised open-set comprehension To comprehend out-of-domain contexts, we propose a self-supervised prior learning method as shown in Figure 5. While we exploit the same architecture described in the previous section, we have reversed the role of the candidate answer and the contexts in (1) as a self-supervised one. In other words, we set the problem as inferring the Top-1 context for the chosen answer candidate. We assume TF-IDF to be quite reliable in measuring closeness between texts. The newly defined self-supervised problem can be formalized as follows: ˆc = argmax c∈Ωc p(c|Ak, q; θ) (8) where Ak is given k-th answer candidate among n candidates and q is the given question. Then we infer the most related context ˆc among a set of contexts Ωc in a lesson. For each candidate answer Ak(k = 1, .., n), we get the set of paragraphs Ωc of size j from the corresponding context. Here, Ωc is obtained by calculating TF-IDF between [q; Ak] and each paragraph ω, i.e., Tω = tf-idf([q; Ak], ω), and selecting the top-j paragraphs. Among the j paragraphs ωi(i = 1, · · · , j) in Ωc, the one with the highest TF-IDF score is set as the ground truth: yi = ( 1, if ωi = argmaxω∈Ωc Tω, 0, otherwise. (9) With Ak, q and ωi ∈Ωc, we conduct the same process in eq. 
(2-7) to obtain the i-th input of the 3573 Model Text T/F Text MC Text All Diagram All Random 50.10 22.88 33.62 24.96 29.08 MemN+VQA (Kembhavi et al., 2017) 50.50 31.05 38.73 31.82 35.11 MemN+DPG (Kembhavi et al., 2017) 50.50 30.98 38.69 32.83 35.62 BiDAF+DPG (Kembhavi et al., 2017) 50.40 30.46 38.33 32.72 35.39 Challenge 45.57 35.85 40.48 IGMN (Li et al., 2018) 57.41 40.00 46.88 36.35 41.36 Our full model w/o visual context 62.32 49.15 54.35 36.61 45.06 Our full model w/ f-GCN2 62.22 48.76 54.11 37.72 45.52 Our full model 62.73 49.54 54.75 37.61 45.77 w/o SSOC(VAL) 62.22 48.82 54.11 37.47 45.39 w/o SSOC(TR+VAL) 60.02 46.86 52.06 36.61 43.97 w/o f-GCN & SSOC(TR+VAL) 58.72 45.16 50.51 35.67 42.74 Table 2: Comparison of performance with previous methods (Top) and results of ablation studies (Bottom). We demonstrate the accuracies of each type of questions, Text T/F (true-false in text only), Text MC (multiple-choices in text only), Text all (all in text only), Diagram and All. Note that previous methods only used textual context. RNNS, Ii RNNS. After repeating it j times, we put all Ii RNNS, (i = 1 · · · , j) into RNNS sequentially and optimize this step with the cross-entropy loss. We repeatedly choose all answer candidates Ak, and conduct the same process in this step. With this pre-training stage which shares parameters with the supervised stage, we expect that our model can deal with almost all contexts in a lesson. Moreover, it becomes possible to learn contexts in the validation set or the test set with a self-supervised manner. This step is analogous to a student who reads and understands a textbook and problems in advance. 5 Experiments 5.1 Dataset We perform experiments on the TQA dataset, which consists of 1,076 lessons from Life Science, Earth Science and Physical Science textbooks. While the dataset contains 78,338 sentences and 3,455 images including diagrams, it also has 26,260 questions with 12,567 of them having an accompanying diagram, split into training, validation and test at a lesson level. The training set consists of 666 lessons and 15,154 questions, the validation set consists of 200 lessons and 5,309 questions and the test set consists of 210 lessons and 5,797 questions. Since evaluation for test is hidden, we only use the validation set to evaluate our methods. 5.2 Baselines We compare our method with several recent methods as followings: • MemN+VQA, MemN+DPG Both exploits Memory networks to embed texts in lessons and questions. First method uses VQA approaches for diagram questions, and the second one exploits Diagram Parse Graph (DPG) as context graph on diagrams built by DsDP-net (Kembhavi et al., 2016). • BiDAF+DPG It incorporates BiDAF (Bidirectional Attention Flow Network) (Seo et al., 2016), a recent machine comprehension model which exploits a bidirectional attention mechanism to capture dependencies between question and corresponding context paragraph. For above 3 models, we use experimental results newly reported in (Li et al., 2018). • Challenge This is the one that obtained the top results in TQA competition (Kembhavi et al., 2017). The results in the table are mixed with each of top score in the text-question track and the diagram-question track. • IGMN It uses the Instructor Guidance with Memory Nets (IGMN) based on Contradiction Entity-Relationship Graph (CERG). For diagram questions, it only recognizes texts in diagrams. • Our full model w/o visual context This method excludes visual context to compare with previous methods on the same condition. 
It uses only onelayer GCN for textual context and self-supervised open-set comprehension (SSOC). • Our full model w/ f-GCN2 From now, all methods include visual context. This method uses fGCN2 and SSOC. Following methods are for our ablation study: • Our full model This method uses both of our methods, f-GCN1 and SSOC on the training and the validation sets. • Our model w/o SSOC (VAL) This method only uses training set to pretrain parameters in SSOC. • Our model w/o SSOC (TR+VAL) This method eliminates whole SSOC pre-training process. It 3574 only uses f-GCN as Graph extractor and was trained only in a normal supervised learning manner. • Our model w/o f-GCN & SSOC (TR+VAL) This method ablates both f-GCN module and SSOC process. It replaces f-GCN as vanilla RNN, other conditions are the same. 5.3 Quantitative Results 5.3.1 Comparison of Results Overall results on TQA dataset are shown in Table 2. The results show that all variants of our model outperform other recent models in all type of question. Our best model shows about 4% higher than state-of-the-art model in overall accuracy. Especially, an accuracy in text question significantly outperforms other results with about 8% margin. A result on diagram questions also shows more than 1% increase over the previous best model. We believe that our two novel proposals, context graph understanding and self-supervised open-set comprehension work well on this problem since our models achieve significant margins compared to recent researches. Even though our model w/o visual context only uses one-layer GCN for textual context, it shows better result compared to MemN+VQA and MemN+DPG with a large margin and IGMN with about 3% margin. IGMN also exploits a graph module of contraction, but ours outperforms especially in both text problems, T/F and MC with over 5% margin. We believe that the graph in our method can directly represents the feature of context and the GCN also plays an important role in extracting the features of our graph. Our models with multi-modal contexts show significantly better results on both text and diagram questions. Especially, results of diagram question outperform over 1% rather than our model w/o visual context. Those results indicate that f-GCN sufficiently exploits visual contexts to solve diagram questions. 5.3.2 Ablation Study We perform ablation experiments in Table 2. Our full model w/ f-GCN2 can achieve best score on diagram questions but slightly lower scores on text questions. Since the overall result of our full model records the best, we conduct ablation study of each module of it. First, we observe an apparent decrease in our model when any part of modules is elimiModel Text Diagram All Our model w/o SSOC 52.06 36.61 43.97 w/o q-flag 49.29 35.78 42.21 w/o a-flag 43.24 31.50 37.09 w/o q & a-flag 42.64 31.72 36.92 Table 3: Results of ablation study about the occurrence flags. We demonstrate the accuracies of Text only, Diagram, and total questions without SSOC method. nated. It is surprising that self-supervised openset comprehension method provides an improvement on our model. Our full model shows about 2% higher performance than the model without SSOC(TR+VAL). It is also interesting to compare our full model with our model without SSOC(VAL). The results show that using the additional validation set on SSOC can improve overall accuracy compared to using only training set. It seems to have more advantage for learning unknown dataset in advance. 
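The f-GCN variants compared in this ablation study differ only in whether the second propagation step of eq. (4) is applied on top of the fused features of eq. (3). A plain-NumPy sketch of eqs. (2)-(4) is given below; the softmax normalization of the attention matrix Z and the shapes of the weight matrices are our assumptions, since the paper does not spell them out.

```python
# Plain-NumPy sketch of the f-GCN module of eqs. (2)-(4). The softmax normalization of
# the attention matrix Z and the weight-matrix shapes are assumptions, not paper details.
import numpy as np


def gcn_layer(adjacency, features, weight):
    """One GCN propagation step, sigma(A X W), with the tanh nonlinearity of eq. (2)."""
    return np.tanh(adjacency @ features @ weight)


def f_gcn(a_text, c_text, a_diag, c_diag, w_text, w_diag, w_fuse, second_gcn=True):
    """Fuse textual and visual context graphs; f-GCN1 stops at eq. (3), f-GCN2 adds eq. (4)."""
    h_text = gcn_layer(a_text, c_text, w_text)            # (n_t, h), eq. (2)
    h_diag = gcn_layer(a_diag, c_diag, w_diag)            # (n_d, h), eq. (2)
    scores = h_diag @ h_text.T                            # dot-product attention, (n_d, n_t)
    scores = scores - scores.max(axis=0, keepdims=True)   # numerical stability
    z = np.exp(scores) / np.exp(scores).sum(axis=0, keepdims=True)
    h1 = np.concatenate([h_text, z.T @ h_diag], axis=1)   # eq. (3): [H_t ; Z^T H_d]
    if not second_gcn:
        return h1                                         # f-GCN1
    return gcn_layer(a_text, h1, w_fuse)                  # eq. (4): f-GCN2


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_t, n_d, d, h = 5, 3, 8, 4                           # toy node counts and dimensions
    out = f_gcn(np.eye(n_t), rng.standard_normal((n_t, d)),
                np.eye(n_d), rng.standard_normal((n_d, d)),
                rng.standard_normal((d, h)), rng.standard_normal((d, h)),
                rng.standard_normal((2 * h, h)))
    print(out.shape)  # (5, 4)
```

Dropping the final gcn_layer call corresponds to the f-GCN1 variant compared against f-GCN2 in these ablations.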
Our model without f-GCN & SSOC eliminates our two novel modules and replace GCN with vanilla RNN. That model shows 1% of performance degradation compared with the model without SSOC(TR+VAL) which means that it might not sufficient to deal with knowledge features with only RNN and attention module. Thus, context graph we create for each lesson could give proper representations with f-GCN module. Table 3 shows the results of ablation study about occurrence flag. All models do not use SSOC method. In (5), we concatenate three components including the occurrence flag to create question or answer representation. We found that the occurrence flag which explicitly indicates the existence of a corresponding word in the contexts has a meaningful effect. Results of all types degrade significantly as ablating occurrence flags. Especially, eliminating a-flag drops accuracy about 7% which is almost 4 times higher than the decrease due to eliminating f-flag. We believe that disentangled features of answer candidates can mainly determine the results while a question feature equally affects all features of candidates. Our model without both flags shows the lowest results due to the loss of representational power. 5.4 Qualitative Results Figure 6 shows three qualitative results of texttype questions without visual context. We illustrate textual contexts, questions, answer candidates and related subgraphs of context graphs. The first example describes a pipeline on a 3575 runoff carved channels in the soil in figure 19.1 . running water causes most soil erosion , but wind can carry soil away too . what humans do to soil makes it more or less likely to be eroded by wind or water . human actions that can increase soil erosion are described below . the main cause of soil erosion is ____ Q a) wind . b) ice wedging . c) abrasion . d) running water . causes dobj csubj running erosion compound water soil a) 0.314 b) 0.118 c) 0.113 d) 0.455 Prediction : (d) Ground Truth : (d) the dense , iron core forms the center of the earth . scientists know that the core is metal from studying metallic meteorites and the earths density . seismic waves show that the outer core is liquid , while the inner core is solid . movement within earths outer liquid iron core creates earths magnetic field . these convection currents form in the outer core because the base of the outer core is heated by the even hotter inner core . convection currents occur in the inner core . Q a) true b) false currents form nsubj det these compound convection a) 0.464 b) 0.536 Prediction : (b) Ground Truth : (b) a lysosome is an organelle that recycles unneeded molecules . it uses enzymes to break down the molecules into their components . then the components can be reused to make new molecules . lysosomes are like recycling centers . ____organelle that recycles unneeded molecules Q a) lysosome b) cytoskeleton c) vesicle d) centriole organelle acl:relcl nsubj lysosome dobj molecules recycles amod a) 0.913 b) 0.013 c) 0.017 d) 0.025 e) 0.016 f) 0.007 g) 0.009 Prediction : (a) Ground Truth : (a) nmod core amod outer case in e) plastid f) golgi apparatus g) endoplasmic reticulum unneeded Figure 6: Qualitative results of text-type questions without visual context. Each example shows all items for a question in the textbook and a textual context subgraph to solve a question. And our predicted distribution for answers and ground truths are also displayed. 
In the subgraph, gray circles represent words in questions and blue circles represent words related to answers. Green rectangles represent relation types of the dependency graph. earthquakes are used to identify plate boundaries ( figure 6.14 ) . when earthquake locations are put on a map , they outline the plates . the movements of the plates are called plate tectonics . the lithosphere is divided into a dozen major and several minor plates . each plate is named for the continent or ocean basin it contains . some plates are made of all oceanic lithosphere . a few are all continental lithosphere . what lies exactly below the lithosphere? Q a) asthenosphere. b) volcanoes. c) trench. d) oceanic crust. lithosphere a) 0.383 b) 0.232 c) 0.186 d) 0.199 Prediction : (a) Ground Truth : (a) few continental oceanic asthenosphere lithosphere Diagram Oceanic Crust the cell membrane is like the bag holding the jell-o . it encloses the cytoplasm of the cell . it forms a barrier between the cytoplasm and the environment outside the cell . the function of the cell membrane is to protect and support the cell ... which part forms a barrier between the cytoplasm and the environment outside the cell? Q a) cell wall. b) golgi vesicles. c) cell membrane. d) golgi apparatus. cytoplasm cell evironment barrier membrane cell wall ndgplasmic ribosomes Diagram Diagram a) 0.085 b) 0.025 c) 0.872 d) 0.018 Prediction : (c) Ground Truth : (c) cytoplasm vacuole nuciqoius vesicle lysosome centriole cytoplasm membrane protect Figure 7: Qualitative results of diagram-type questions. We illustrate intermediate subgraphs, and predicted distribution for answers and ground truths. T/F question. Three words, “currents”, “core” and “convection” are set as anchor nodes as shown in the left of Figure 6. Within two levels of depth, we can find “outer” node which is the opposite to “inner” in the question sentence. As a result, our model predicts the true and false probabilities of this question as 0.464 and 0.536, respectively, and correctly solves this problem as a false statement. Next example is a multiple choice problem which is more complicated than T/F problem. With anchor nodes which consist of each answer candidate and a question such as “causes”, “erosion” and “soil”, the context graph can be established including nodes in two depth of graph from anchor nodes. Among the 4 candidates, choice (d) contains the same words, “running” and “water”, as our model predicts. Therefore, our model can estimate (d) as the correct answer with the highest probability of 0.455. The last example shows a more complicated multiple choice problem. In the context graph, we set “organelle”, “recycles”, “molecules” and “unneeded” as anchor nodes with each word in answer candidates. Then we can easily find an important term, “lysosome” in choice (a). Therfore, choice (a) has a probability close to one among 7 candidates. Figure 7 demonstrates qualitative results of diagram questions. We exclude relation type nodes in subgraphs of the dependency tree for simplicity and also illustrate diagram parsing graphs of visual contexts and question diagram. The example in the top shows intermediate results of subgraphs on a diagram question without visual context. Even though chosen paragraph in textual context do not include “asthenosphere”, graph of a question diagram contain relation between “asthenosphere” and “lithosphere”. Then our model can predict (a) as the correct answer with probability of 0.383. 
The bottom illustration describes the most complex case which has diagrams in both of context and question parts. We illustrate all subgraphs of text and diagrams. While our model can collect sufficient knowledge about cell structure on broad information scope, “cell membrane” can be chosen as correct answer with the highest probability. These examples demonstrate abstraction ability and relationship expressiveness which can be huge advantages of graphs. Moreover, those results could support that our model can explicitly interpret the process of solving multi-modal QA. 6 Conclusion In this paper, we proposed two novel methods to solve a realistic task, TQA dataset. We extract knowledge features with the proposed f-GCN and conduct self-supervised learning to overcome the out-of-domain issue. Our method also demonstrates state-of-the-art results. We believe that our work can be a meaningful step in realistic multimodal QA and solving the out-of-domain issue. 3576 References Akira Fukui, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor Darrell, and Marcus Rohrbach. 2016. Multimodal compact bilinear pooling for visual question answering and visual grounding. arXiv preprint arXiv:1606.01847. Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Li Fei-Fei, C Lawrence Zitnick, and Ross Girshick. 2017. Clevr: A diagnostic dataset for compositional language and elementary visual reasoning. In Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on, pages 1988–1997. IEEE. Aniruddha Kembhavi, Mike Salvato, Eric Kolve, Minjoon Seo, Hannaneh Hajishirzi, and Ali Farhadi. 2016. A diagram is worth a dozen images. In European Conference on Computer Vision, pages 235–251. Springer. Aniruddha Kembhavi, Minjoon Seo, Dustin Schwenk, Jonghyun Choi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Are you smarter than a sixth grader? textbook question answering for multimodal machine comprehension. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5376–5384. IEEE. Daesik Kim, YoungJoon Yoo, Jee-Soo Kim, SangKuk Lee, and Nojun Kwak. 2018. Dynamic graph generation network: Generating relational knowledge from diagrams. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Thomas N Kipf and Max Welling. 2016. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907. Juzheng Li, Hang Su, Jun Zhu, Siyu Wang, and Bo Zhang. 2018. Textbook question answering under instructor guidance with memory networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3655–3663. Jiasen Lu, Jianwei Yang, Dhruv Batra, and Devi Parikh. 2016. Hierarchical question-image co-attention for visual question answering. In Advances In Neural Information Processing Systems, pages 289–297. Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The stanford corenlp natural language processing toolkit. In Proceedings of 52nd annual meeting of the association for computational linguistics: system demonstrations, pages 55–60. Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. Ms marco: A human generated machine reading comprehension dataset. arXiv preprint arXiv:1611.09268. Will Norcliffe-Brown, Stathis Vafeias, and Sarah Parisot. 2018. Learning conditioned graph structures for interpretable visual question answering. In Advances in Neural Information Processing Systems, pages 8344–8353. 
Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250. Matthew Richardson, Christopher JC Burges, and Erin Renshaw. 2013. Mctest: A challenge dataset for the opendomain machine comprehension of text. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 193–203. Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional attention flow for machine comprehension. arXiv preprint arXiv:1611.01603. Damien Teney, Lingqiao Liu, and Anton van den Hengel. 2017. Graph-structured representations for visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1–9. Huijuan Xu and Kate Saenko. 2016. Ask, attend and answer: Exploring question-guided spatial attention for visual question answering. In European Conference on Computer Vision, pages 451–466. Springer. Zichao Yang, Xiaodong He, Jianfeng Gao, Li Deng, and Alex Smola. 2016. Stacked attention networks for image question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 21– 29. Zhou Yu, Jun Yu, Jianping Fan, and Dacheng Tao. Multimodal factorized bilinear pooling with co-attention learning for visual question answering. A Notations We denote the question text, question diagram, candidate answer, text context and diagram context as Qt = {qt 1, qt 2, · · · , qt I}, Qd = {qd 1, qd 2, · · · , qd J}, A = {a1, a2, · · · , aK}, Ct = {ct 1, ct 2, · · · , ct L}, and Cd = {cd 1, cd 2, · · · , cd M}, respectively where qt i/qd j /ak/ct l/cd m is the ith/jth/kth/lth/mth word of the question text Qt and the question diagram Qd, candidate answer A, text context Ct and diagram context Cd (C is unified notation for the Ct and Cd). The corresponding representations are denoted as ht q,hd q, ha, Ht c and Hd c , respectively. Note that we use the diagram context Cd only in the diagram questions. B Implementation Details We initialized word embedding with 300d GloVe vectors pre-trained from the 840B Common Crawl corpus, while the word embeddings for the outof-vocabulary words were initialized randomly. We also randomly initialized character embedding with a 16d vector and extracted 32d character representation with a 1D convolutional network. And the 1D convolution kernel size is 5. We used 200 hidden units of Bi-LSTM for the RNNc whose weights are shared between the question 3577 Model Text T/F Text MC Text All Diagram All Our full model w/o visual context 62.32 49.15 54.35 36.61 45.06 w/o UTC(VAL) 60.82 49.08 53.72 36.53 44.72 w/o UTC(TR+VAL) 60.72 46.34 52.02 36.57 43.93 w/o GCN & UTC(TR+VAL) 58.62 44.77 50.24 35.2 42.36 Our full model w/ f-GCN2 62.22 48.76 54.11 37.72 45.52 w/o UTC(VAL) 62.63 48.43 54.03 37.32 45.28 w/o UTC(TR+VAL) 61.42 46.67 52.49 36.71 44.22 w/o GCN & UTC(TR+VAL) 58.72 45.16 50.51 35.67 42.74 Table 4: Results of additional ablation studies. We demonstrate the accuracies of each type of questions: Text T/F (true-false in text only), Text MC (multiple-choices in text only), Text all (all in text only), Diagram and All (total questions). 
Results of our full model without visual context are on the top of the table and results of our full model with f-GCN2 are in the bottom. and the candidate answers. The maximum sequence length of them is set to 30. Likewise, the number of hidden units of the RNNs is the same as the RNNc and the maximum sequence length is 7 which is the same as the number of the maximum candidate answers. We employed 200d one layer GCN for all types of graphs, and the number of maximum nodes is 75 for the textual context graph, 35 for the diagrammatic context graph, and 25 for the diagrammatic question graph, respectively. We use tanh for the activation function of the GCN. The dropout was applied after all of the word embeddings with a keep rate of 0.5. The Adam optimizer with an initial learning rate of 0.001 was applied, and the learning rate was decreased by a factor of 0.9 after each epoch. 1. Select one sample from dataset Q. Wegeners idea is correctly referred to as a1. the continental drift hypothesis a2. the continental drift theory a3. the plate tectonics hypothesis a4. the plate tectonics theory 2. We select one candidate answer from question-candidate pairs in the first step Q. Wegeners idea is correctly referred to as a1. the continental drift hypothesis 3. Next, we choose a number j which is the number of new candidate contexts answers. Then we extract Top - j paragraphs from the lesson according to TF-IDF scores. (e.g. j=3) Paragraph 1 Paragraph 2 Paragraph 3 4. We designate the candidate answer which connect to the top-1 paragraph as a correct answer, and others as wrong answers. Paragraph 1 Paragraph 2 Paragraph 3 Top-1 Top-2 Top-3 TF-IDF Correct Q+a1+ Q+a1+ Q+a1+ Q. Wegeners idea is correctly referred to as a1. the continental drift hypothesis Figure 8: Additional examples of SSOC steps. C Additional explanation for SSOC In Figure 8, we illustrate examples about detailed steps of SSOC. In the first step, we select one candidate answer from question-candidate answers pairs (2). Next, we choose a number j, the number of candidate contexts for the pair of questioncandidate answer, in the range 2 to 7 like the original dataset (3). If j is higher than the number of contexts in the lesson, we set j to be the number of contexts. Then, we extract top j paragraphs using the TF-IDF scores to set them as candidate contexts Ωc (3). We build each context graph in the same way as the original method and get embeddings with the question-candidate answer pair we selected. Finally, we designate the final candidate which connects to the top 1 paragraph as a correct answer, and others as wrong answers (4). D Results of additional ablation study We perform additional ablation studies for variants of our model. For both our full model without visual context and our full model with f-GCN2, results of ablation studies are shown in Table 4. Both studies seem to demonstrate similar tendency as performances are degraded for ablating each module. We can conclude that our two novel modules have sufficient contributions to improve the performance our model in the TQA problem. E Process of Building Textual Context Graph The procedure for converting the textual context into the graph structures is shown in Process 1. After constructing the dependency trees, we set the nodes included in the question or the candidate answer as anchor nodes and built the final context graph C by removing the nodes which have more than two levels of depth difference with anchor nodes. 
D Results of additional ablation study
We perform additional ablation studies for variants of our model. For both our full model without visual context and our full model with f-GCN2, the results are shown in Table 4. Both studies show a similar tendency: performance degrades when each module is ablated. We conclude that our two novel modules both contribute substantially to the performance of our model on the TQA problem.

Model                              | Text T/F | Text MC | Text All | Diagram | All
Our full model w/o visual context  |  62.32   |  49.15  |  54.35   |  36.61  | 45.06
  w/o UTC(VAL)                     |  60.82   |  49.08  |  53.72   |  36.53  | 44.72
  w/o UTC(TR+VAL)                  |  60.72   |  46.34  |  52.02   |  36.57  | 43.93
  w/o GCN & UTC(TR+VAL)            |  58.62   |  44.77  |  50.24   |  35.20  | 42.36
Our full model w/ f-GCN2           |  62.22   |  48.76  |  54.11   |  37.72  | 45.52
  w/o UTC(VAL)                     |  62.63   |  48.43  |  54.03   |  37.32  | 45.28
  w/o UTC(TR+VAL)                  |  61.42   |  46.67  |  52.49   |  36.71  | 44.22
  w/o GCN & UTC(TR+VAL)            |  58.72   |  45.16  |  50.51   |  35.67  | 42.74

Table 4: Results of additional ablation studies. We report accuracies for each question type: Text T/F (true/false, text only), Text MC (multiple choice, text only), Text All (all text-only questions), Diagram, and All (all questions). Results of our full model without visual context are in the top half of the table and results of our full model with f-GCN2 are in the bottom half.

E Process of Building Textual Context Graph
The procedure for converting the textual context into graph structures is shown in Process 1. After constructing the dependency trees, we set the nodes included in the question or the candidate answer as anchor nodes and built the final context graph C by removing the nodes that are more than two levels of depth away from the anchor nodes. We also constructed the adjacency matrix A using the remaining nodes and edges.

Process 1: Build textual context and adjacency matrices C, A
Input: a paragraph, a set of anchor nodes V
 1: Construct a dependency tree for each sentence of the given paragraph
 2: Split the tree into multiple units, each of which represents two nodes and one edge, u = {v1, v2}
 3: U <- the set of units
 4: E <- an empty set of edges
 5: for depth <- 1 to 2 do
 6:   for all nodes v in V do
 7:     for all units u in U do
 8:       if v in u then
 9:         E <- E ∪ {u}
10:       end if
11:     end for
12:   end for
13:   V <- the set of all nodes in E
14: end for
Output: context matrix C from V with embedding matrices, adjacency matrix A from E
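The following Python sketch mirrors Process 1. It assumes the dependency trees have already been produced by some parser and are given as lists of (head word, dependent word) edges per sentence; the data structures and names are illustrative, not the authors' implementation.

```python
def build_context_graph(sentence_edges, anchor_nodes, max_depth=2):
    """sentence_edges: list of lists of (head, dependent) word pairs, one list per sentence."""
    # Each unit is an unordered pair of words connected by one dependency edge (line 2).
    units = [frozenset(edge) for edges in sentence_edges for edge in edges]

    nodes = set(anchor_nodes)
    kept_edges = set()
    for _ in range(max_depth):            # lines 5-14: expand twice around the anchors
        for unit in units:
            if nodes & unit:              # the unit touches a currently kept node
                kept_edges.add(unit)
        nodes = {w for unit in kept_edges for w in unit}   # line 13

    # Adjacency matrix A over the surviving nodes; the embedding lookup for the
    # context matrix C is done elsewhere (e.g. with the GloVe matrix of Appendix B).
    index = {w: i for i, w in enumerate(sorted(nodes))}
    A = [[0] * len(index) for _ in index]
    for unit in kept_edges:
        pair = list(unit)                 # frozenset collapses to one element if both words match
        i, j = index[pair[0]], index[pair[-1]]
        A[i][j] = A[j][i] = 1
    return sorted(nodes), A
```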
F Additional Qualitative Results
On the following pages, we present additional qualitative results for the three question types. We explicitly show all intermediate results, such as the subgraphs of the visual context and the question diagram, and add a legend indicating which types of data are used in each figure to avoid confusion. Figures 9 and 10 illustrate intermediate and final results on text-type questions with visual context. Figures 11 and 12 show intermediate and final results on diagram-type questions without visual context. Finally, Figures 13 and 14 present intermediate and final results for the most complicated type, diagram-type questions with visual context. We hope these figures make clear the logical connectivity needed to solve the problems and how well our model works on the TQA task.

Figure 9: Additional qualitative results on text-type questions with visual context. For both examples, the pipeline from visual context to visual context graph is shown. Gray circles represent words in questions and blue circles represent words related to answers.

Figure 10: Additional qualitative results on text-type questions with visual context. For both examples, the pipeline from visual context to visual context graph is shown. Gray circles represent words in questions and blue circles represent words related to answers.

Figure 11: Additional qualitative results on diagram-type questions without visual context. For both examples, the pipeline from question diagram to question diagram graph is shown. Gray circles represent words in questions and blue circles represent words related to answers.

Figure 12: Additional qualitative results on diagram-type questions without visual context. For both examples, the pipeline from question diagram to question diagram graph is shown. Gray circles represent words in questions and blue circles represent words related to answers.

Figure 13: Additional qualitative results on diagram-type questions with visual context. For both examples, the pipelines from visual context and question diagram to visual context graph and question diagram graph are shown. Gray circles represent words in questions and blue circles represent words related to answers.

Figure 14: Additional qualitative results on diagram-type questions with visual context. For both examples, the pipelines from visual context and question diagram to visual context graph and question diagram graph are shown. Gray circles represent words in questions and blue circles represent words related to answers.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3585–3594 Florence, Italy, July 28 - August 2, 2019. ©2019 Association for Computational Linguistics

Generating Question Relevant Captions to Aid Visual Question Answering
Jialin Wu, Zeyuan Hu and Raymond J. Mooney
Department of Computer Science, University of Texas at Austin
{jialinwu, iamzeyuanhu, mooney}@cs.utexas.edu

Abstract
Visual question answering (VQA) and image captioning require a shared body of general knowledge connecting language and vision. We present a novel approach to improve VQA performance that exploits this connection by jointly generating captions that are targeted to help answer a specific visual question. The model is trained using an existing caption dataset by automatically determining question-relevant captions with an online gradient-based method. Experimental results on the VQA v2 challenge demonstrate that our approach obtains state-of-the-art VQA performance (e.g. 68.4% on the Test-standard set using a single model) by simultaneously generating question-relevant captions.

1 Introduction
In recent years, visual question answering (VQA) (Antol et al., 2015) and image captioning (Donahue et al., 2015; Rennie et al., 2017) have been widely studied in both the computer vision and NLP communities. Most recent VQA research (Lu et al., 2017; Pedersoli et al., 2017; Anderson et al., 2018; Lu et al., 2018) concentrates on directly utilizing visual input features, including detected objects, attributes, and relations between pairs of objects. However, little VQA research has explored exploiting textual features derived from the image, which can tersely encode the information necessary to answer the questions. This information can be richer than the visual features in that sentences have fewer structural constraints and can easily describe the attributes of, and relations among, multiple objects.

In fact, we observe that appropriate captions can be very useful for many VQA questions. In particular, we trained a model to answer visual questions for the VQA v2 challenge (Antol et al., 2015) using only the human-annotated captions, without images, and achieved a score of 59.6%, outperforming a large number of VQA models that use image features. Existing work using captions for VQA has generated question-agnostic captions using a pretrained captioner (Li et al., 2018a). This approach can provide additional general information; however, this information is not guaranteed to be relevant to the given VQA question. We explore a novel approach that generates question-relevant image descriptions, which contain information that is directly relevant to a particular VQA question. Fig. 1 shows examples of our generated captions given different questions.

Human Captions: 1) A man on a blue surfboard on top of some rough water. 2) A young surfer in a wetsuit surfs a small wave. 3) A young man rides a surf board on a small wave while a man swims in the background. 4) A young man is on his surf board with someone in the background. 5) A boy riding waves on his surf board in the ocean.
Question 1: Does this boy have a full wetsuit on? Caption: A young man wearing a wetsuit surfing on a wave.
Question 2: What color is the board? Caption: A young man riding a wave on a blue surfboard.
Figure 1: Examples of our generated question-relevant captions. During the training phase, our model selects the most relevant human captions for each question (marked by the same color).
In order to encourage the generation of relevant captions, we propose a novel greedy algorithm that aims to minimize the cross-entropy loss only for the most relevant and helpful gold-standard captions. Specifically, helpfulness is measured using the inner product of the gradients from the caption generation loss and the VQA answer prediction loss. A positive inner product means the two objective functions share some descent directions in the optimization process, and therefore indicates that the corresponding captions help the VQA training process.

In order to incorporate the caption information, we propose a novel caption embedding module that, given the question and image features for a visual question, recognizes important words in the caption and produces a caption embedding tailored for answer prediction. In addition, the caption embeddings are also utilized to adjust the visual top-down attention weights for each object. Furthermore, generating question-relevant captions ensures that both image and question information is encoded in their joint representations, which reduces the risk of learning from question bias (Li et al., 2018a) and ignoring the image content when high accuracy can be achieved from the questions alone.

Experimental evaluation of our approach shows significant improvements in VQA accuracy over our baseline Up-Down (Anderson et al., 2018) model on the VQA v2 validation set (Antol et al., 2015), from 63.2% to 67.1% with gold-standard human captions from the COCO dataset (Chen et al., 2015) and to 65.8% with automatically generated question-relevant captions. Our single model is able to score 68.4% on the test-standard split, and an ensemble of 10 models scores 69.7%.

2 Background and Related Work
2.1 Visual Question Answering
Recently, a large number of attention-based deep-learning methods have been proposed for VQA, including top-down (Ren et al., 2015a; Fukui et al., 2016; Wu et al., 2016; Goyal et al., 2017; Li et al., 2018a) and bottom-up attention methods (Anderson et al., 2018; Li et al., 2018b; Wu and Mooney, 2019). Specifically, a typical model first extracts image features using a pre-trained CNN, and then trains an RNN to encode the question, using an attention mechanism to focus on specific features of the image. Finally, both question and attended image features are used to predict the final answer.

However, answering visual questions requires not only information about the visual content but also common knowledge, which is usually too hard to learn directly from a limited number of images with human-annotated answers as the only supervision. Comparatively little previous VQA research has worked on enriching the knowledge base; we are aware of two related papers. Li et al. (2018a) use a pre-trained captioner to generate general captions and attributes with a fixed annotator and then use them to predict answers. Therefore, the captions they generate are not necessarily relevant to the question, and they may ignore image features needed for answer prediction. Narasimhan et al. (2018) employed an out-of-the-box knowledge base and trained their model to filter out irrelevant facts. After that, graph convolutional networks use this knowledge to build connections to the relevant facts and predict the final answer. Unlike them, we generate captions to provide information that is directly relevant to the VQA process.
2.2 Image Captioning
Most recent image captioning models are also attention-based deep-learning models (Donahue et al., 2015; Karpathy and Fei-Fei, 2015; Vinyals et al., 2015; Luo et al., 2018; Liu et al., 2018). With the help of large image description datasets (Chen et al., 2015), these models have demonstrated remarkable results. Most of them encode the image using a CNN, and build an attentional RNN (i.e. GRU (Cho et al., 2014), LSTM (Hochreiter and Schmidhuber, 1997)) on top of the image features as a language model to generate image captions. However, deep neural models still tend to generate generic captions based on the most salient objects (Vijayakumar et al., 2016). Although previous work (Luo et al., 2018; Liu et al., 2018) builds captioning models that are encouraged to generate diverse captions using discriminability objectives, the resulting captions are usually less informative and fail to describe most of the objects and their relationships. In this work, we develop an approach to generating captions that directly focus on the critical objects in the VQA process and provide information that can help the VQA module predict the answer.

3 Approach
We first describe the overall structure of our joint model in Sec. 3.1 and explain the foundational feature representations (i.e. image, question and caption embeddings) in Sec. 3.2. Then, the VQA module is presented in Sec. 3.3, which takes advantage of the generated image captions to improve VQA performance. In Sec. 3.4, we explain the image captioning module, which generates question-relevant captions. Finally, the training and implementation details are provided in Sec. 3.5.

Figure 2: Overall structure of our model that generates question-relevant captions to aid VQA. Our model is first trained to generate question-relevant captions, determined in an online fashion, in phase 1. Then, the VQA model is fine-tuned with generated captions from the first phase to predict answers. ⊗ denotes element-wise multiplication and ⊕ denotes element-wise addition. Blue arrows denote fully-connected layers (fc) and yellow arrows denote attention embedding.

3.1 Overview
As illustrated in Fig. 2, the proposed model first extracts image features $V = \{v_1, v_2, ..., v_K\}$ using bottom-up attention and question features $q$ to produce their joint representation, and then generates question-relevant captions. Next, our caption embedding module encodes the generated captions as caption features $c$, as detailed in Sec. 3.2. After that, both the question features $q$ and caption features $c$ are utilized to generate the visual attention $A^{cv}$ to weight the image feature set $V$, producing attended image features $v^{qc}$. Finally, we add $v^{qc}$ to the caption features $c$ and further perform element-wise multiplication with the question features $q$ (Anderson et al., 2018) to produce the joint representation of the question, image and caption, which is then used to predict the answer.

3.2 Feature Representation
In this section, we explain the details of this joint representation. We use $f(x)$ to denote a fully-connected layer, where $f(x) = \mathrm{LReLU}(Wx + b)$ with input features $x$; we omit the notation for weights and biases for simplicity, and these fc layers do not share weights. LReLU denotes a Leaky ReLU (He et al., 2015).
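As a concrete reference for the notation above, the following is a minimal PyTorch sketch of the shared building block $f(\cdot)$: a fully-connected layer followed by a Leaky ReLU. Each use of $f$ in the equations below is assumed to be a separate instance (no weight sharing), as stated in the text; the negative slope of the Leaky ReLU is our assumption, since the paper does not give it.

```python
import torch.nn as nn

class FC(nn.Module):
    """f(x) = LReLU(Wx + b); instantiate one FC per use of f in the equations."""
    def __init__(self, in_dim, out_dim, neg_slope=0.01):
        super().__init__()
        self.layer = nn.Sequential(nn.Linear(in_dim, out_dim),
                                   nn.LeakyReLU(neg_slope))

    def forward(self, x):
        return self.layer(x)
```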
Image and Question Embedding
We use object detection as bottom-up attention (Anderson et al., 2018), which provides salient image regions with clear boundaries. In particular, we use a Faster R-CNN head (Ren et al., 2015b) in conjunction with a ResNet-101 base network (He et al., 2016) as our detection module. The detection head is first pre-trained on the Visual Genome dataset (Krishna et al., 2017) and is capable of detecting 1,600 object categories and 400 attributes. To generate the output set of image features $V$, we take the final detection outputs and perform non-maximum suppression (NMS) for each object category using an IoU threshold of 0.7. Finally, a fixed number of 36 detected objects for each image are extracted as the image features (a 2,048-dimensional vector for each object), as suggested by Teney et al. (2017).

For the question embedding, we use a standard GRU (Cho et al., 2014) with 1,280 hidden units and extract the output of the hidden units at the final time step as the question features $q$. Following Anderson et al. (2018), the question features $q$ and image feature set $V$ are further embedded together to produce a question-attended image feature set $V^q$ via question visual attention $A^{qv}$, as illustrated in Fig. 2.

Caption Embedding
Our novel caption embedding module takes as input the question-attended image feature set $V^q$, question features $q$, and $C$ captions $W^c_i = \{w^c_{i,1}, w^c_{i,2}, ..., w^c_{i,T}\}$, where $T$ denotes the length of the captions and $i = 1, ..., C$ are the caption indices, and then produces the caption features $c$.

Figure 3: Overview of the caption embedding module. The Word GRU is used to generate attention to identify the relevant words in each caption, and the Caption GRU generates the final caption embedding. We use the question-attended image features $V^q$ to compute the attention. Blue arrows denote fc layers and yellow arrows denote attention embedding.

The goals of the caption module are to serve as a knowledge supplement to aid VQA, and to provide additional clues to better identify the relevant objects and adjust the top-down attention weights. To achieve this, as illustrated in Fig. 3, we use a two-layer GRU architecture. The first-layer GRU (called the Word GRU) sequentially encodes the words in a caption $W^c_i$ at each time step as $h^1_{i,t}$:

$h^1_{i,t} = \mathrm{GRU}(W_e \Pi^c_{i,t},\; h^1_{i,t-1})$   (1)

where $W_e$ is the word embedding matrix, and $\Pi^c_{i,t}$ is the one-hot embedding of the word $w^c_{i,t}$. Then, we design a caption attention module $A^c$ which utilizes the question-attended feature set $V^q$, the question features $q$, and $h^1_{i,t}$ to generate the attention weight on the current word in order to indicate its importance. Specifically, the Word GRU first encodes the word embedding $\Pi^c_{i,t}$ in Eq. 1, and then we feed the outputs $h^1_{i,t}$ and $V^q$ to the attention module $A^c$ as shown in Eq. 4:

$v^q = \sum_{k=1}^{K} v^q_k$   (2)
$a^c_{i,t} = h^1_{i,t} \circ f(v^q) + h^1_{i,t} \circ f(q)$   (3)
$\alpha^c_{i,t} = \sigma(a^c_{i,t})$   (4)

where $\sigma$ denotes the sigmoid function and $K$ is the number of objects in the bottom-up attention. Next, the attended words in the caption are used to produce the final caption representation in Eq. 5 via the Caption GRU. Since the goal is to gather more information, we perform element-wise max pooling across the representations of all of the input captions $c_i$ in Eq. 7:

$h^2_{i,t} = \mathrm{GRU}(\alpha^c_{i,t} W_e \Pi^c_{i,t},\; h^2_{i,t-1})$   (5)
$c_i = f(h^2_{i,T})$   (6)
$c = \max(c_i)$   (7)

where $\max$ denotes element-wise max pooling across all caption representations $c_i$ of the image.
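The sketch below is one possible PyTorch reading of Eqs. 1-7. The question and object dimensions (1,280 and 2,048) follow Sec. 3.2; we set the word-embedding size equal to the GRU size so that the element-wise gate in Eq. 5 is well-defined, and we ignore padding of variable-length captions. This is an illustrative interpretation, not the authors' code.

```python
import torch
import torch.nn as nn

class CaptionEmbedding(nn.Module):
    """Word GRU + word attention + Caption GRU (Eqs. 1-7)."""
    def __init__(self, vocab, dim=512, q_dim=1280, v_dim=2048):
        super().__init__()
        self.word_emb = nn.Embedding(vocab, dim)               # W_e
        self.word_gru = nn.GRU(dim, dim, batch_first=True)
        self.cap_gru = nn.GRU(dim, dim, batch_first=True)
        self.f_v = nn.Sequential(nn.Linear(v_dim, dim), nn.LeakyReLU())
        self.f_q = nn.Sequential(nn.Linear(q_dim, dim), nn.LeakyReLU())
        self.f_c = nn.Sequential(nn.Linear(dim, dim), nn.LeakyReLU())

    def forward(self, caption_ids, Vq, q):
        # caption_ids: (C, T) word ids of C captions; Vq: (K, v_dim); q: (q_dim,)
        vq = Vq.sum(dim=0)                                     # Eq. 2
        x = self.word_emb(caption_ids)                         # (C, T, dim)
        h1, _ = self.word_gru(x)                               # Eq. 1
        a = h1 * self.f_v(vq) + h1 * self.f_q(q)               # Eq. 3 (broadcast over C, T)
        alpha = torch.sigmoid(a)                               # Eq. 4
        h2, _ = self.cap_gru(alpha * x)                        # Eq. 5
        c_i = self.f_c(h2[:, -1, :])                           # Eq. 6: final step of each caption
        return c_i.max(dim=0).values                           # Eq. 7: element-wise max over captions
```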
3.3 VQA Module
This section describes the details of the VQA module. The generated captions are usually capable of capturing relations among the question-relevant objects; however, these relations are absent from the bottom-up attention. Therefore, our VQA module utilizes the caption embedding $c$ to adjust the top-down attention weights in VQA in order to produce the final caption-attended features $v^{qc}$ in Eq. 10:

$a^{cv}_k = f(f(c) \circ f(v^q_k))$   (8)
$\alpha^{cv}_k = \mathrm{softmax}(a^{cv}_k)$   (9)
$v^{qc} = \sum_{k}^{K} v^q_k \, \alpha^{cv}_k$   (10)

where $k$ traverses the $K$ object features. To better incorporate the information from the captions into the VQA process, we add the caption features $c$ to the attended image features $v^{qc}$, and then element-wise multiply by the question features, as shown in Eq. 11:

$h = q \circ (f(v^{qc}) + f(c))$   (11)
$\hat{s} = \sigma(f(h))$   (12)

We frame the answer prediction task as a multi-label regression problem (Anderson et al., 2018). In particular, we use the soft scores in the gold-standard VQA v2 data (which are used in the evaluation metric) as labels to supervise the sigmoid-normalized predictions, as shown in Eq. 13:

$L_{vqa} = -\sum_{j=1}^{N} s_j \log \hat{s}_j + (1 - s_j) \log(1 - \hat{s}_j)$   (13)

where the index $j$ runs over $N$ candidate answers and $s$ are the soft answer scores. In the case of multiple feasible answers, the soft scores capture the occasional uncertainty in the ground-truth annotations. As suggested by Teney et al. (2017), we collect the candidate answers that appear more than 8 times in the training set, which results in 3,129 answer candidates.

3.4 Image Captioning Module
We adopt an image captioning module similar to that of Anderson et al. (2018), which takes the object detection features as inputs and learns attention weights over those objects' features in order to predict the next word at each step. The key difference between our module and theirs lies in the input features and the caption supervision. Specifically, we use the question-attended image features $V^q$ as inputs, and only use the most relevant caption, which is automatically determined in an online fashion (detailed below), for each question-image pair to train the captioning module. This ensures that only question-relevant captions are generated.

Selecting Relevant Captions for Training
Previously, Li et al. (2018b) selected relevant captions for VQA based on word similarities between captions and questions; however, their approach does not take into account the details of the VQA process. In contrast, during training, our approach dynamically determines, for each problem, the caption that will most improve VQA. We do this by updating with a shared descent direction (Wu et al., 2018) that decreases the loss for both captioning and VQA. This ensures a consistent target for both the image captioning module and the VQA module in the optimization process.

During training, we compute the cross-entropy loss for the $i$-th caption using Eq. 14, and back-propagate the gradients only from the most relevant caption, determined by solving Eq. 15:

$L^c_i = -\sum_{t=1}^{T} \log(p(w^c_{i,t} \,|\, w^c_{i,t-1}))$   (14)

In particular, we require the inner product of the current gradient vectors from the predicted answer and the human captions to be greater than a positive constant $\xi$, and further select the caption that maximizes that inner product:

$\arg\max_i \sum_{k=0}^{K} \left( \frac{\partial \hat{s}_{pred}}{\partial v^q_k} \right)^{T} \frac{\partial \log(p(W^c_i))}{\partial v^q_k} \quad \text{s.t.} \quad \sum_{k=0}^{K} \left( \frac{\partial \hat{s}_{pred}}{\partial v^q_k} \right)^{T} \frac{\partial \log(p(W^c_i))}{\partial v^q_k} > \xi$   (15)

where $\hat{s}_{pred}$ is the logit¹ for the predicted answer, $W^c_i$ denotes the $i$-th human caption for the image, and $k$ traverses the $K$ object features.

¹The input to the softmax function.
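The caption-selection rule in Eq. 15 can be sketched with PyTorch autograd as follows. The sketch assumes that the question-attended object features $V^q$ participate in the current computation graph with requires_grad=True, and that the predicted-answer logit and the per-caption log-likelihoods are scalars computed from them; the value of $\xi$ is illustrative, since the paper does not state it.

```python
import torch

def select_relevant_caption(vqa_logit, caption_log_liks, Vq, xi=1e-3):
    """Return the index of the caption whose gradient agrees most with the VQA
    gradient (Eq. 15), or None if no caption satisfies the constraint."""
    g_vqa = torch.autograd.grad(vqa_logit, Vq, retain_graph=True)[0]   # d s_pred / d V^q
    best_i, best_score = None, torch.tensor(float(xi))
    for i, log_lik in enumerate(caption_log_liks):
        g_cap = torch.autograd.grad(log_lik, Vq, retain_graph=True)[0]
        score = (g_vqa * g_cap).sum()          # inner product summed over all K objects
        if score > best_score:
            best_i, best_score = i, score
    return best_i                              # caption i*; the caption loss is skipped if None
```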
Therefore, given the solution $i^\star$ to Eq. 15, the final loss of our joint model is the sum of the VQA loss and the captioning loss for the selected caption, as shown in Eq. 16. If Eq. 15 has no feasible solution, we ignore the caption loss.

$L = L_{vqa} + L^c_{i^\star}$   (16)

3.5 Training and Implementation Details
We train our joint model using the AdaMax optimizer (Kingma and Ba, 2015) with a batch size of 384 and a learning rate of 0.002, as suggested by Teney et al. (2017). We use the VQA v2 validation set to tune the initial learning rate and the number of epochs, choosing the values that yield the highest overall VQA score. We use 1,280 hidden units in the question embedding and attention model of the VQA module, with 36 object detection features for each image. For the captioning model, the dimensions of the LSTM hidden state, image feature embedding, and word embedding are all set to 512. We also use GloVe vectors (Pennington et al., 2014) to initialize the word embedding matrix in the caption embedding module.

We initialize the training process with human-annotated captions from the COCO dataset (Chen et al., 2015) and pre-train the VQA and caption-generation modules for 20 epochs with the final joint loss in Eq. 16. After that, we generate question-relevant captions for all question-image pairs in the COCO train, validation, and test sets. In particular, we sample 5 captions per question-image pair. We then fine-tune our model on the generated captions with 0.25 times the learning rate for another 10 epochs.

4 Experiments
We perform extensive experiments and ablation studies to evaluate our joint model on VQA.

Test-standard                        | Yes/No | Num   | Other | All
Prior (Goyal et al., 2017)           | 61.20  | 0.36  | 1.17  | 25.98
Language-only (Goyal et al., 2017)   | 67.01  | 31.55 | 27.37 | 44.26
MCB (Fukui et al., 2016)             | 78.82  | 38.28 | 53.36 | 62.27
Up-Down (Anderson et al., 2018)      | 82.20  | 43.90 | 56.26 | 65.32
VQA-E (Li et al., 2018b)             | 83.22  | 43.58 | 56.79 | 66.31
Ours (single)                        | 84.69  | 46.75 | 59.30 | 68.37
Ours (ensemble-10)                   | 86.15  | 47.41 | 60.41 | 69.66

Table 1: Comparison of our results on VQA with the state-of-the-art methods on the test-standard data. Accuracies in percentage (%) are reported.

4.1 Datasets and Evaluation Metrics
VQA Dataset. We use the VQA v2.0 dataset (Antol et al., 2015) for the evaluation of our proposed joint model, where the answers are balanced in order to minimize the effectiveness of learning dataset priors. This dataset is used in the VQA 2018 challenge and contains over 1.1M questions about the over 200K images in the MSCOCO 2015 dataset (Chen et al., 2015). Following Anderson et al. (2018), we perform standard text pre-processing and tokenization. In particular, questions are first converted to lower case and then trimmed to a maximum of 14 words, and words that appear less than 5 times are replaced with an "<unk>" token. To evaluate answer quality, we report accuracies using the official VQA metric based on soft scores, which accounts for occasional disagreement between annotators on the ground-truth answers.

Image Captioning Dataset. We use the MSCOCO 2014 dataset (Chen et al., 2015) for the image captioning module. To maintain consistency with the VQA tasks, we use the dataset's official configuration, which includes 82,372 images for training and 40,504 for validation. Similar to the VQA question pre-processing, we first convert all sentences to lower case, tokenize on white space, and filter words that do not occur at least 5 times.
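The text pre-processing described above (lower-casing, trimming to 14 tokens, and replacing rare words with "<unk>") can be sketched as follows. Tokenizing on whitespace is a simplification; the exact tokenizer used by the authors is not specified here.

```python
from collections import Counter

def build_vocab(questions, min_count=5):
    counts = Counter(w for q in questions for w in q.lower().split())
    return {w for w, c in counts.items() if c >= min_count}

def preprocess(question, vocab, max_len=14):
    tokens = question.lower().split()[:max_len]
    return [w if w in vocab else "<unk>" for w in tokens]
```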
4.2 Results on VQA
We first report the experimental results on the VQA task and compare our results with the state-of-the-art methods in this section. After that, we perform ablation studies to verify the contribution of the additional knowledge from the generated captions, and the effectiveness of using caption representations to adjust the top-down visual attention weights.

As demonstrated in Table 1, our single model outperforms other state-of-the-art single models by a clear margin, i.e. 2.06%, which indicates the effectiveness of including caption features as additional inputs. In particular, we observe that our single model outperforms the other methods especially in the 'Num' and 'Other' categories. This is because the generated captions are capable of providing more numerical clues for answering the 'Num' questions, since the captions can describe the number of relevant objects, and of providing general knowledge for answering the 'Other' questions. Furthermore, an ensemble of 10 models with different initialization seeds achieves a score of 69.7% on the test-standard set.

Fig. 4 shows several examples of our generated question-relevant captions. These examples illustrate how different captions are generated for the same image when the question is changed. They also show how the objects in the image that are important for answering the question are described in the question-relevant captions.

Comparison Between Using Generated and Human Captions
Next, we analyze the difference between using automatically generated captions and using those provided by human annotators. In particular, we train our model with question-agnostic captions generated by the Up-Down (Anderson et al., 2018) captioner, with question-relevant captions from our caption generation module, and with human-annotated captions from the COCO dataset. As demonstrated in Table 2, our model gains about 4% improvement from using human captions and 2.5% improvement from our generated question-relevant captions on the validation set.
Effectiveness of Adjusting Top-Down Attention In this section, we quantitatively analyze the efQuestion: What colors is the surfboard? Answer: Yellow and blue Answer: Yellow and red Answer: yellow and red Visual attention Caption adjusted visual attention Caption: A group of people standing next to yellow board. Figure 5: An example of caption attention adjustment. The question-relevant caption helps the VQA module adjust the visual attention from both the yellow board and the blue sail to the yellow board only. fectiveness of utilizing captions to adjust the topdown attention weights, in addition to the advantage of providing additional information. In particular, we compare our model with a baseline version where the top-down attention-weight adjustment factor Acv is manually set to 1.0 (resulting in no adjustment). As demonstrated in Tables 3 and 4, we observe an improvement when using caption features to adjust the attention weights. This indicates that the caption features help the model to more robustly locate the objects that are helpful to the VQA pro3592 cess. We use w CAA to indicate with caption attention adjustment and w/o CAA to indicate without it. Fig. 5 illustrates an example of caption attention adjustment. Without CAA, the top-down visual attention focuses on both the yellow surfboard and the blue sail, generating the incorrect answer “yellow and blue.”. However, with “yellow board” in the caption, the caption attention adjustment (CAA) helps the VQA module focus attention just on the yellow surfboard, thereby generating the correct answer “yellow and red” (since there is some red coloring in the surfboard). Test-standard All Yes/No Num Other Up-Down 65.3 82.2 43.9 56.3 Ours w/o CAA 67.4 84.0 44.5 57.9 Ours w CAA 68.4 84.7 46.8 59.3 Table 3: Evaluation of the effectiveness of captionbased attention adjustment (CAA) on the test-standard data. Accuracies in percentage (%) are reported. Validation All Yes/No Num Other Up-Down 63.2 80.3 42.8 55.8 Ours w/o CAA 65.2 82.1 43.6 55.8 Ours w CAA 65.8 82.6 43.9 56.4 Table 4: Evaluation of the effectiveness of CAA on the validation data. Accuracies in percentage (%) are reported. Next, in order to directly demonstrate that our generated question-relevant captions help the model to focus on more relevant objects via attention adjustment, we compare the differences between the generated visual attention and humanannotated important objects from the VQA-X dataset (Park et al., 2018), which has been used to train and evaluate multimodal (visual and textual) VQA explanation (Wu and Mooney, 2018). The VQA-X dataset contains 2, 000 question-image pairs from the VQA v2 validation set with human annotations indicating the objects which most influence the answer to the question. In particular, we used Earth Mover Distance (EMD) (Rubner et al., 2000) to compare the highly-attended objects in the VQA process to the objects highlighted by human judges. This style of evaluation using EMD has previously been employed to compare automatic visual explanations to humanattention annotations (Selvaraju et al., 2017; Park et al., 2018). We resize all of the 2, 000 human annotations in VQA-X dataset to 14⇥14 and adjust the object bounding boxes in the images accordingly. Next, we assign the top-down attention weights to the corresponding bounding boxes, both before and after caption attention adjustment, and add up the weights of all 36 detections. 
Then, we normalize attention weights over the 14 ⇥14 resized images to sum to one, and finally compute the EMD between the normalized visual attentions and the human annotations. Table 5 reports the EMD results for the attentions weights both before and after the caption attention adjustments. w/o CAA w CAA Human EMD 2.38 2.30 2.25 Table 5: EMD results comparing the top-down attention weights (with or without caption attention adjustments) to human attention-annotation from the VQAX dataset. Results are shown for both automatically generated captions and human captions. Lower EMD indicates a closer match to human attention. The results indicate that caption attention adjustment improves the match between automated attention and human-annotated attention, even though the approach is not trained on supervised data for human attention. Not surprisingly, human captions provide a bit more improvement than automatically generated ones. 5 Conclusion In this work, we have explored how generating question-relevant image captions can improve VQA performance. In particular, we present a model which jointly generates question-related captions and uses them to provide additional information to aid VQA. This approach only utilizes existing image-caption datasets, automatically determining which captions are relevant to a given question. In particular, we design the training algorithm to only update the network parameters in the optimization process when the caption generation and VQA tasks agree on the direction of change. Our single model joint system outperforms the current state-of-the-art single model for VQA. 3593 Acknowledgement This research was supported by the DARPA XAI program under a grant from AFRL. References Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. 2018. Bottom-Up and Top-Down Attention for Image Captioning and VQA. In CVPR, volume 3, page 6. Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. 2015. VQA: Visual Question Answering. In Proceedings of the IEEE International Conference on Computer Vision, pages 2425–2433. Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Doll´ar, and C Lawrence Zitnick. 2015. Microsoft COCO Captions: Data Collection and Evaluation Server. arXiv preprint arXiv:1504.00325. Kyunghyun Cho, Bart Van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning Phrase Representations Using RNN EncoderDecoder for Statistical Machine Translation. arXiv preprint arXiv:1406.1078. Jeffrey Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, and Trevor Darrell. 2015. Longterm Recurrent Convolutional Networks for Visual Recognition and Description. In CVPR, pages 2625–2634. Akira Fukui, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor Darrell, and Marcus Rohrbach. 2016. Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding. EMNLP. Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2017. Making the V in VQA Matter: Elevating the Role of Image Understanding in Visual Question Answering. In CVPR, volume 1, page 9. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2015. Delving Deep into Rectifiers: Surpassing Human-Level Performance on Imagenet Classification. In ICCV, pages 1026–1034. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. 
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long Short-Term Memory. Neural Computation, 9(8):1735–1780.
Andrej Karpathy and Li Fei-Fei. 2015. Deep Visual-Semantic Alignments for Generating Image Descriptions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3128–3137.
Diederik P Kingma and Jimmy Ba. 2015. Adam: A Method for Stochastic Optimization. In ICLR.
Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. 2017. Visual Genome: Connecting Language and Vision Using Crowdsourced Dense Image Annotations. International Journal of Computer Vision, 123(1):32–73.
Qing Li, Jianlong Fu, Dongfei Yu, Tao Mei, and Jiebo Luo. 2018a. Tell-and-Answer: Towards Explainable Visual Question Answering using Attributes and Captions. arXiv preprint arXiv:1801.09041.
Qing Li, Qingyi Tao, Shafiq Joty, Jianfei Cai, and Jiebo Luo. 2018b. VQA-E: Explaining, Elaborating, and Enhancing Your Answers for Visual Questions. ECCV.
Xihui Liu, Hongsheng Li, Jing Shao, Dapeng Chen, and Xiaogang Wang. 2018. Show, Tell and Discriminate: Image Captioning by Self-retrieval with Partially Labeled Data. ECCV.
Jiasen Lu, Caiming Xiong, Devi Parikh, and Richard Socher. 2017. Knowing When to Look: Adaptive Attention via a Visual Sentinel for Image Captioning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), volume 6.
Pan Lu, Lei Ji, Wei Zhang, Nan Duan, Ming Zhou, and Jianyong Wang. 2018. R-VQA: Learning Visual Relation Facts with Semantic Attention for Visual Question Answering. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 1880–1889. ACM.
Ruotian Luo, Brian Price, Scott Cohen, and Gregory Shakhnarovich. 2018. Discriminability Objective for Training Descriptive Captions. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Medhini Narasimhan, Svetlana Lazebnik, and Alexander Schwing. 2018. Out-of-the-Box: Reasoning with Graph Convolution Nets for Factual Visual Question Answering. In NIPS, pages 2655–2666.
Dong Huk Park, Lisa Anne Hendricks, Zeynep Akata, Anna Rohrbach, Bernt Schiele, Trevor Darrell, and Marcus Rohrbach. 2018. Multimodal Explanations: Justifying Decisions and Pointing to the Evidence. In CVPR.
Marco Pedersoli, Thomas Lucas, Cordelia Schmid, and Jakob Verbeek. 2017. Areas of Attention for Image Captioning. In ICCV - International Conference on Computer Vision.
Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global Vectors for Word Representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543.
Mengye Ren, Ryan Kiros, and Richard Zemel. 2015a. Exploring Models and Data for Image Question Answering. In NIPS, pages 2953–2961.
Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. 2015b. Faster R-CNN: Towards Real-time Object Detection with Region Proposal Networks. In NIPS, pages 91–99.
Steven J Rennie, Etienne Marcheret, Youssef Mroueh, Jarret Ross, and Vaibhava Goel. 2017. Self-critical Sequence Training for Image Captioning. In CVPR, volume 1, page 3.
Yossi Rubner, Carlo Tomasi, and Leonidas J Guibas. 2000. The Earth Mover's Distance as a Metric for Image Retrieval. ICCV, 40(2):99–121.
Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, Dhruv Batra, et al. 2017. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. In ICCV, pages 618–626.
Damien Teney, Peter Anderson, Xiaodong He, and Anton van den Hengel. 2017. Tips and Tricks for Visual Question Answering: Learnings from the 2017 Challenge. arXiv preprint arXiv:1708.02711.
Ashwin K Vijayakumar, Michael Cogswell, Ramprasath R Selvaraju, Qing Sun, Stefan Lee, David Crandall, and Dhruv Batra. 2016. Diverse Beam Search: Decoding Diverse Solutions from Neural Sequence Models. arXiv preprint arXiv:1610.02424.
Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015. Show and Tell: A Neural Image Caption Generator. In Computer Vision and Pattern Recognition (CVPR), 2015 IEEE Conference on, pages 3156–3164. IEEE.
Jialin Wu, Dai Li, Yu Yang, Chandrajit Bajaj, and Xiangyang Ji. 2018. Dynamic Filtering with Large Sampling Field for ConvNets. ECCV.
Jialin Wu and Raymond J Mooney. 2018. Faithful Multimodal Explanation for Visual Question Answering. arXiv preprint arXiv:1809.02805.
Jialin Wu and Raymond J Mooney. 2019. Self-critical Reasoning for Robust Visual Question Answering. arXiv preprint arXiv:1905.09998.
Jialin Wu, Gu Wang, Wukui Yang, and Xiangyang Ji. 2016. Action Recognition with Joint Attention on Multi-level Deep Features. arXiv preprint arXiv:1607.02556.
2019
348
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3595–3600 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3595 Multi-grained Attention with Object-level Grounding for Visual Question Answering Pingping Huang, Jianhui Huang, Yuqing Guo, Min Qiao and Yong Zhu Baidu Inc., Beijing, China {huangpingping,huangjianhui,guoyuqing,qiaomin,zhuyong}@baidu.com Abstract Attention mechanisms are widely used in Visual Question Answering (VQA) to search for visual clues related to the question. Most approaches train attention models from a coarsegrained association between sentences and images, which tends to fail on small objects or uncommon concepts. To address this problem, this paper proposes a multi-grained attention method. It learns explicit wordobject correspondence by two types of wordlevel attention complementary to the sentenceimage association. Evaluated on the VQA benchmark, the multi-grained attention model achieves competitive performance with stateof-the-art models. And the visualized attention maps demonstrate that addition of objectlevel groundings leads to a better understanding of the images and locates the attended objects more precisely. 1 Introduction Visual Question Answering (Antol et al., 2015; Goyal et al., 2017a) is a multi-modal task requiring to provide an answer to the question with reference to a given image. Most current VQA systems resort to deep neural networks and solve the problem by end-to-end learning. First the question and the image are encoded into semantic representations independently. Then the multi-modal features are fused into one unified representation for which the answer is predicted (Malinowski et al., 2015; Fukui et al., 2016; Anderson et al., 2018). A key point to a successful VQA system is to discover the most relevant image regions to the question. This is commonly resolved by attention mechanisms, where a spatial attention distribution highlighting the visual focus is computed according to the similarity between the whole question and image regions (Xu et al., 2015; Yang et al., 2016; Lu et al., 2016). Although such coarse Q: What is the man wearing around his face? A: glasses Up-down Model: nothing Our Model: glasses Figure 1: An example of VQA and the attention maps produced by a state-of-the-art model and our model. sentence-image alignment reports promising results in general, it sometimes fails to locate small objects or understand a complicated scenario. For the example in Figure 1, the question is “What is the man wearing around his face”. Human has no difficulty in finding the visual clue on the people’s faces, and accordingly provide the correct answer “glasses”. However, by visualizing the attention map of a state-of-the-art VQA model, we find that the attention is mistakenly focused on the men’s body rather than their faces. In order to identify related objects more precisely, this paper proposes a multi-grained attention mechanism that involves object-level grounding complementary to the sentence-image association. Specifically, a matching model is trained on an object-detection dataset to learn explicit correspondence between the content words in the question and their visual counterparts. And the labels of the detected objects are considered and their similarity with the questions are computed. Besides, a more sophisticated language model is adopted for better representation of the question. 
Finally the three types of word-object, word-label and sentence-image attention are accumulated to enhance the performance. The contributions of this paper are twofold. First, this paper proposes a multi-grained attention mechanism integrating two types of object features that were not previously used in VQA atten3596 F-RCNN Image ELMo GloVe Predicted scores of candidate answers LabelEmbG Objects labels Objects features WordEmbG WordEmbE SentenceEmb Fused feature Question + + = ∑ Attended object feature Multi-grained attention WL WO SO Figure 2: The architecture of our proposed model. The enhanced modules are illustrated in dot lines. tion approaches. Second, the deep contextualized word representation ELMo (Peters et al., 2018) is firstly adopted in the VQA task to facilitate a better question encoding. 2 Proposed Model The flowchart of the proposed model is illustrated in Figure 2. We start from the bottom-up topdown (up-down) model (Teney et al., 2017; Anderson et al., 2018), which is the winning entry to the 2017 VQA challenge. Then this model is enhanced with two types of object-level groundings to explore fine-grained information, and a more sophisticated language model for better question representation. 2.1 Image Features We adopt the object-detection-based approach to represent the input image. Specifically, following Anderson et al. (2018), a state-of-the-art object detection model Faster R-CNN (Ren et al., 2015) with ResNet-101 (He et al., 2016) as its backbone is trained on the Visual Genome (VG) (Krishna et al., 2016) dataset. Then the trained model1 is applied to identify instances of objects with bounding boxes belonging to certain categories. The target categories of this detection model contain 1600 objects and 400 attributes. For each input image, the top-K objects with the highest confidence scores are selected to represent the image. For each object, the output of ResNet’s pool-flat-5 layer is used as its visual feature, which is a 2048-dimensional vector vk. Besides, the label of each object’s category ck is also kept as a visually grounded evidence. ck is a Ndimensional one-hot vector, where N is the vocabulary size. Then the input image is represented 1The model is available at https://github.com/peteanderson80/bottom-up-attention by both its object features V = [v1, v2, ..., vK] ∈ R2048×K and object labels C = [c1, c2, ..., cK] ∈ RN×K . 2.2 Text Features In our model, text features include token features and sentence features for the question, which are respectively used for fine-grained and coarsegrained attention computation. Word Features Let Q = [q1, ..., qT ] ∈RN×T denote the one-hot representation for the input question tokens, where T is the question length, and N is the vocabulary size. Then each token qt is turned into two word embeddings: GloVe (Pennington et al., 2014) xG t = qtEG ∈RD1, and ELMo xE t = ELMo(qt) ∈RD2. D1 and D2 are the dimensions of GloVe embedding and ELMo embedding respectively. EG is the GloVe matrix pre-trained on the Wikipedia & Gigaword2. The ELMo embedding is dynamically computed by a L-layer bi-LSTM language model (Hochreiter and Schmidhuber, 1997). We use the publicly available pre-trained ELMo model3 to get the contextualized embeddings. Sentence Features The above two sets of token embeddings are then concatenated xt = [xG t ; xE t ] ∈RD1+D2, and fed into a GRU (Cho et al., 2014) to encode the question sentence. 
The final hidden state of the GRU, i.e., h_T ∈ R^{D_3}, is taken as the sentence feature, where D_3 is the hidden state size of the GRU.
2 http://nlp.stanford.edu/projects/glove/
3 https://github.com/allenai/allennlp
2.3 Multi-grained Attentions
Word-Label Matching Attention (WL) Object category labels are a high-level semantic representation compared to visual pixels, and have proven to be useful for both visual tasks like scene classification (Li et al., 2010) and multi-modal tasks like image captioning and VQA (Wu et al., 2018). For the VQA task, we observed that the semantic similarity between the object category labels and the words in the question helps to locate the referred objects. For the input image in Figure 1, Faster R-CNN detected objects with labels such as "man" and "head". Some labels are exactly the same as or are semantically close to the words in the question "What is the man wearing around his face?". Therefore, we compute the WL attention vector, which indicates how much weight we should give to each of the K objects in the image, in terms of the semantic similarity between the category labels of the objects and the words in the question. For the k-th object with label c_k, we encode it into the GloVe embedding^4 l^G_k = c_k E^G and compute its attention score by measuring its similarity to the question GloVe embeddings as follows:
s^{WL}(X^G, l^G_k) = \arg\max_t \cos(x^G_t, l^G_k)
a^{WL}(X^G, L^G) = \mathrm{softmax}\big(s^{WL}(X^G, l^G_k)\big)   (1)
where X^G = [x^G_1, ..., x^G_T] ∈ R^{D_1×T} are the GloVe embeddings of the question tokens, L^G = [l^G_1, ..., l^G_K] ∈ R^{D_1×K} are the GloVe embeddings of the object labels, and a^{WL} ∈ R^K is the WL attention vector. In contrast to Anderson et al. (2018), who only use the objects' visual features without the labels, and unlike Wu et al. (2018), who discard the visual features once the labels are generated, we utilize both the category labels and the visual features to enhance the fine-grained attention with object-level grounding.
4 The reason why the GloVe embedding alone is used instead of ELMo for object labels is that object labels have no context sentence from which to derive the context-sensitive ELMo embeddings.
Word-Object Matching Attention (WO) A word-object matching module is exploited to directly evaluate how likely a question word matches a visual object. The pairwise training structure of the module is shown in Figure 3. The training set is constructed on the VG object detection data. Let (c, b) be a positive sample consisting of the annotated object bounding box b with category label c; then a negative sample (c, b̄) is constructed by randomly replacing b with an object b̄ from the same image, if b̄ is not labelled with c.
Figure 3: Label-object matching module trained on VG object annotation data (a positive pair, e.g. the label "man" with its own box b, and a negative pair, the same label with the box b̄ of a different object such as "couch", are both scored by s^{WO} and compared in the ranking loss).
Then, each sample (c, b) is represented as feature vectors (x^G_c, v_b), where x^G_c is the GloVe embedding of c, and v_b is extracted with the same Faster R-CNN model as described in section 2.1. At last, a margin-based pairwise ranking loss is used to train the model:
s^{WO}(x^G_c, v_b) = σ\big(W^s [f(W^c x^G_c) ∘ f(W^v v_b)]\big)
loss = \max\big\{0, λ − s^{WO}(x^G_c, v_b) + s^{WO}(x^G_c, v_{b̄})\big\}   (2)
where f is the ReLU and σ the sigmoid activation function, ∘ denotes element-wise multiplication, and W^c, W^v, W^s are weight parameters^5. The margin is set to λ = 0.5. After s^{WO} is pre-trained, we forwardly select at most B noun tokens in the question and compute the WO attention a^{WO}(X, V) over the K objects as follows:
a^{WO}(X^G, V) = \mathrm{softmax}\Big(\sum_{b=1}^{B} s^{WO}(x^G_b, v_k)\Big)
(3) where the parameters of sWO are fine-tuned in down-streaming VQA task. Sentence-Object Attention (SO) Following previous methods of sentence-level question guided visual attention, we also use the global semantic of the whole sentence to guide the focus on relevant objects. Taking sentence feature hT and objects features V as input, SO attention vector aSO is computed as follows: sSO(hT , vk) = σ (W j [f(W vvk) ◦f(W thT )]) aSO(hT , V ) = softmax  sSO(hT , vk)  (4) where f is ReLU, σ is sigmoid activation function, and W j, W v, W t are weight parameters. 5All bias terms are omitted hereafter for simplicity 3598 Method test-dev All Yes/no Numbers Other Up-down 65.32 81.82 44.21 56.05 Our Model 67.41 83.60 47.02 58.24 Method test-std All Yes/no Numbers Other Prior 25.98 61.20 0.36 1.17 Language-only 44.26 67.01 31.55 27.37 d-LSTM-n-I 54.22 73.46 35.18 41.83 MCB 62.27 78.82 38.28 53.36 Up-down 65.67 82.20 43.90 56.26 Our Model 67.73 83.88 46.60 58.50 Table 1: Result comparison on VQA v2 dataset. Results of Prior, Language-only, d-SLTM-n-I, MCB are reported in Goyal et al. (2017a). Result of up-down model is reported in Teney et al. (2017). 2.4 Multi-modal Fusion and Answer Prediction The above three attentions are summed together for the final attention vector. Then we get the weighted visual feature vector va ∈R2048 for the image: a = aW L + aW O + aSO va = K X k=1 akvk (5) Then the question feature hT and the attended visual feature va are transformed into the same dimension and fused together with element-wise multiplication, to get the joint representation vector r ∈RD4. r = f(W rthT ) ◦f(W rvva) (6) where f is ReLU, W rt, W rv are weight parameters. Following Teney et al. (2017), we treat VQA task as a classification problem, and use the binary cross-entropy loss to take multiple marked answers into consideration: ˆs = σ (f(W ar)) loss = A X a=1 salog( ˆsa) −(1 −sa)log(ˆsa) (7) where ˆs ∈RA is the predicted score over all A answer candidates, sa is the target accuracy score6. 3 Experiments and Analysis 3.1 Settings Experiments are conducted on VQA v2 dataset (Goyal et al., 2017b). Questions are trimmed to a maximum of T = 14 words. We set 6accuracy = min( #humans that provided that answer 3 , 1), i.e. an answer is accurate if at least 3 markers provided the answer. Model All Yes/no Numbers Other Up-down 63.15 80.07 42.87 55.81 + WL 64.29 81.75 44.34 56.29 + WO 64.24 82.00 43.69 56.18 + ELMO 64.15 81.86 44.11 55.98 Table 2: Model analysis results. Models were trained on train and evaluated on val set. the number of detected boxes to K = 36, and set the dimension of GloVe embeddings and ELMo embeddings to D1 = 300 and D2 = 1024, respectively. The GRU hidden size for question sentence is D3 = 1024, and the joint representation r is of dimension D4 = 2048. Noun tokens count is set as fixed B = 3 with padding7. Candidate answers are restricted to the correct answers in the training set that appear more than a threshold, which results in a number of A = 3129 answer candidates. Adamax optimizer (Kingma and Ba, 2014) is used with initial learning rate of 0.002, and we use a learning rate decay schedule that reduces the learning rate by a factor of 0.1 every 3 epochs after 8 epochs. The batch size is 512. 3.2 Comparisons with the State-of-the-arts Table 1 shows the result comparison with the baseline up-down and other methods in single model setting. Our model outperforms these previous results, improving the up-down model from 65.32 to 67.41 on test-dev, and from 65.67 to 67.73 on teststd. 
This superior performance can be seen in all the answer types, especially for the most difficult ones Numbers, where our model gains significant +2.81/+2.70 improvement on the test-dev/test-std. 3.3 Model Analysis To understand the effects of different components, the performance by adding one certain proposed component to the baseline is reported in Table 2. Adding our proposed two branches of fine-grained WL and WO attentions significantly improves the baseline performance. The result also verifies that ELMo embeddings combined with GloVe embeddings provide more sophisticated text representations, thus improves the overall performance. 3.4 Study on Attention Maps To validate the effectiveness of the enhanced attention mechanism, we visualize the attentions and 7Though B is set as a fixed value during the whole process, it can be variable with trivial modifications for the WO attention computation. 3599 Q: How many dog ears are shown? A: 2 Q: Is there a hat on the bench? A: yes Q:Are the stop signs facing normal? A: yes Q: Does his bow tie match his pants? A: yes Baseline: no Baseline: no Baseline: 4 Baseline: no Baseline: no Baseline: no Baseline: no Our : 2 Our: yes Our : yes Our : yes Our : no Q: Can you see its paws? A: yes (a) (b) (c) (d) (e) Figure 4: Attention map examples (only top-5 salient regions are shown here). compare them versus those of the up-down model. As Figure 4 shows, the addition of object-level groundings leads to a better understanding of the images and locates the attended objects more precisely. For example, in Figure 4(a), for question “Can you see its paws?”, the attention generated by our method is focused on the “paws”, while the baseline does not focus on the key regions as accurate as we do. In Figure 4(b), for the Numbers type question “How many dog ears are shown?”, our model gives the strongest attention on the “ear” part of the dog, while the baseline model attends to the whole dog body. For small object clues, our model shows more advantage. As shown in the examples in Figure 4(c), Figure 4(d). We also notice cases where though the final answer is wrong, our model generates appropriate attention maps. As shown in Figure 4(e), for Yes/no question “Does his bow tie match his pants?”, our model correctly finds “tie” and “pants” object regions, but we suspect that the model does not understand the meaning of “match”. A mean opinion score (MOS) test to quantitatively compare our attention mechanism with the baseline model is also performed. Specifically, we randomly select 100 cases and generate their attention maps. Then, we asked subjects to rate a score from 0 (bad quality), 0.5 (medium quality) and 1 (excellent quality) to these attention maps. The distribution of MOS ratings are summarized in Figure 5. The mean scores of our model 0.8125 wins a large margin over the baseline model 0.7315, indicating that the attention maps generated by our attention mechanism are preferred by human. 0 10 20 30 40 50 60 70 0 0.5 1 Number Mean Opinion Score Baseline Our Figure 5: The distribution of Mean Opinion Score. 4 Conclusion This paper proposes a multi-grained attention mechanism. It involves both word-object grounding and sentence-image association to capture different degrees of granularity and interpretability of the images. Visualizations of object-level attention show a clear improvement in the ability of the model to attend to small details in complicated scenes. 
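As an illustration of how the pieces of Sections 2.3 and 2.4 fit together, the following NumPy sketch walks through equations (5)-(7). The sizes are those reported in Section 3.1, but the randomly initialized matrices merely stand in for the learned parameters, and the three random attention maps replace the actual WL, WO, and SO scores; this is a toy walkthrough, not the trained model.

```python
import numpy as np

rng = np.random.default_rng(0)
K, D_OBJ, D3, D4, A = 36, 2048, 1024, 2048, 3129   # sizes from Section 3.1

relu = lambda x: np.maximum(x, 0.0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

V = rng.standard_normal((K, D_OBJ))      # object features from Faster R-CNN
h_T = rng.standard_normal(D3)            # GRU sentence feature

# eq. (5): sum the three attention maps over the K objects and pool the
# object features; random scores stand in for the WL, WO and SO attentions
a = softmax(rng.standard_normal(K)) \
    + softmax(rng.standard_normal(K)) \
    + softmax(rng.standard_normal(K))
v_a = a @ V                              # attended object feature, shape (2048,)

# eq. (6): project both modalities to a joint space and fuse element-wise
W_rt = rng.standard_normal((D4, D3)) * 0.01
W_rv = rng.standard_normal((D4, D_OBJ)) * 0.01
r = relu(W_rt @ h_T) * relu(W_rv @ v_a)

# eq. (7): scores over the A answer candidates
W_a = rng.standard_normal((A, D4)) * 0.01
s_hat = sigmoid(relu(W_a @ r))
print(s_hat.shape, int(s_hat.argmax()))  # (3129,) and the index of the top-scoring answer
```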
3600 References Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. 2018. Bottom-up and top-down attention for image captioning and visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6077–6086. Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. 2015. VQA: Visual Question Answering. In International Conference on Computer Vision (ICCV). Kyunghyun Cho, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoderdecoder for statistical machine translation. In arXiv preprint arXiv:1406.1078. Akira Fukui, Huk Park Dong, Daylen Yang, Anna Rohrbach, Trevor Darrell, and Marcus Rohrbach. 2016. Multimodal compact bilinear pooling for visual question answering and visual grounding. Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2017a. Making the v in vqa matter: Elevating the role of image understanding in visual question answering. In CVPR, volume 1, page 3. Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2017b. Making the V in VQA matter: Elevating the role of image understanding in Visual Question Answering. In Conference on Computer Vision and Pattern Recognition (CVPR). Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770– 778. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, Michael Bernstein, and Li Fei-Fei. 2016. Visual genome: Connecting language and vision using crowdsourced dense image annotations. Li-Jia Li, Hao Su, Yongwhan Lim, and Li Fei-Fei. 2010. Objects as attributes for scene classification. In European Conference on Computer Vision, pages 57–69. Springer. Jiasen Lu, Jianwei Yang, Dhruv Batra, and Devi Parikh. 2016. Hierarchical question-image coattention for visual question answering. CoRR, abs/1606.00061. Mateusz Malinowski, Marcus Rohrbach, and Mario Fritz. 2015. Ask your neurons: A neural-based approach to answering questions about images. In In Proceedings of the IEEE International Conference on Computer Vision, pages 1–9. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. 2015. Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in neural information processing systems, pages 91–99. Damien Teney, Peter Anderson, Xiaodong He, and Anton van den Hengel. 2017. Tips and tricks for visual question answering: Learnings from the 2017 challenge. arXiv preprint arXiv:1708.02711. Qi Wu, Chunhua Shen, Peng Wang, Anthony Dick, and Anton van den Hengel. 2018. 
Image captioning and visual question answering based on attributes and external knowledge. IEEE transactions on pattern analysis and machine intelligence, 40(6):1367– 1381. Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In International conference on machine learning, pages 2048–2057. Zichao Yang, Xiaodong He, Jianfeng Gao, Li Deng, and Alex Smola. 2016. Stacked attention networks for image question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 21–29.
2019
349
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 360–370 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 360 Manipulating the Difficulty of C-Tests Ji-Ung Lee and Erik Schwan and Christian M. Meyer Ubiquitous Knowledge Processing (UKP) Lab and Research Training Group AIPHES Computer Science Department, Technische Universit¨at Darmstadt, Germany https://www.ukp.tu-darmstadt.de Abstract We propose two novel manipulation strategies for increasing and decreasing the difficulty of C-tests automatically. This is a crucial step towards generating learner-adaptive exercises for self-directed language learning and preparing language assessment tests. To reach the desired difficulty level, we manipulate the size and the distribution of gaps based on absolute and relative gap difficulty predictions. We evaluate our approach in corpus-based experiments and in a user study with 60 participants. We find that both strategies are able to generate C-tests with the desired difficulty level. 1 Introduction Learning languages is of utmost importance in an international society and formulated as a major political goal by institutions such as the European Council, who called for action to “teaching at least two foreign languages” (EC, 2002, p. 20). But also beyond Europe, there is a huge demand for language learning worldwide due to increasing globalization, digital communication, and migration. Among multiple different learning activities required for effective language learning, we study one particular type of exercise in this paper: Ctests are a special type of cloze test in which the second half of every second word in a given text is replaced by a gap (Klein-Braley and Raatz, 1982). Figure 1 (a) shows an example. To provide context, the first and last sentences of the text do not contain any gaps. C-tests rely on the reduced redundancy principle (Spolsky, 1969) arguing that a language typically employs more linguistic information than theoretically necessary to communicate unambiguously. Proficient speakers intuitively understand an utterance even if the level of redundancy is reduced (e.g., when replacing a word’s suffix with a gap), whereas learners typically rely on the redundant signal to extrapolate the meaning of an utterance. Besides general vocabulary knowledge, C-tests require orthographic, morphologic, syntactic, and semantic competencies (Chapelle, 1994) to correctly fill in all gaps, which make them a frequently used tool for language assessment (e.g., placement tests). Given that C-tests can be easily generated automatically by introducing gaps into an arbitrary text and that there is usually only a single correct answer per gap given its context, C-tests are also relevant for self-directed language learning and massive open online courses (MOOC), where largescale personalized exercise generation is necessary. A crucial question for such tasks is predicting and manipulating the difficulty of a C-test. For language assessment, it is important to generate C-tests with a certain target difficulty to allow for comparison across multiple assessments. For selfdirected language learning and MOOCs, it is important to adapt the difficulty to the learner’s current skill level, as an exercise should be neither too easy nor too hard so as to maximize the learning effect and avoid boredom and frustration (Vygotsky, 1978). 
Automatic difficulty prediction of C-tests is hard, even for humans, which is why there have been many attempts to theoretically explain C-test difficulty (e.g., Sigott, 1995) and to model features used in machine learning systems for automatic difficulty prediction (e.g., Beinborn et al., 2014). While state-of-the-art systems produce good prediction results compared to humans (Beinborn, 2016), there is yet no work on automatically manipulating the difficulty of C-tests. Instead, C-tests are generated according to a fixed scheme and manually post-edited by teachers, who might use the predictions as guidance. But this procedure is extremely time-consuming for language assessment and no option for large-scale self-directed learning. In this paper, we propose and evaluate two strategies for automatically changing the gaps of a C-test in order to reach a given target difficulty. Our first 361 It i being fou , moreover, i fairly cl correspondence wi the predi of t soothsayers o the th factories. Th predicted escal , and escal is wh we a getting. T biggest nuc device t United Sta has expl measured so 15 meg. . . It is being fought, more , in fai cl corresp with the predi of the sooth of the th fact . Th pred escal , and escal is what w are get . The big nuc dev the United States h expl meas some 15 meg. . . It i being fough , moreover, i fairly clos correspondence wit the prediction of t soothsayers o the thin factories. The predicted escalatio , and escalatio is wha we ar getting. T biggest nuclea device t United State has explode measured som 15 meg. . . (a) (b) (c) Figure 1: C-tests with (a) standard gap scheme, (b) manipulated gap position, and (c) manipulated gap size strategy varies the distribution of the gaps in the underlying text and our second strategy learns to decide to increase or decrease a gap in order to make the test easier or more difficult. Our approach breaks away from the previously fixed C-test creation scheme and explores new ways of motivating learners by using texts they are interested in and generating tests from them at the appropriate level of difficulty. We evaluate our strategies both automatically and in a user study with 60 participants. 2 Related Work In language learning research, there is vast literature on cloze tests. For example, Taylor (1953) studies the relation of cloze tests and readability. In contrast to C-tests (Klein-Braley and Raatz, 1982), cloze tests remove whole words to produce a gap leading to more ambiguous solutions. Chapelle and Abraham (1990) contrast four types of cloze tests, including fixed-ratio cloze tests replacing every ith word with a gap, rational cloze tests that allow selecting the words to replace according to the language trait that should be assessed, multiple-choice tests, and C-tests. Similar to our work, they conduct a user study and measure the difficulty posed by the four test types. They find that cloze tests replacing entire words with a gap are more difficult than C-tests or multiplechoice tests. In our work, we go beyond this by not only varying between gaps spanning the entire word (cloze test) or half of the word (C-test), but also changing the size of the C-test gaps. Laufer and Nation (1999) propose using C-tests to assess vocabulary knowledge. To this end, they manually construct C-tests with only a single gap, but use larger gaps than half of the word’s letters. 
Our work is different to these previous works, since we test varying positions and sizes for C-test gaps and, more importantly, we aim at manipulating the difficulty of a C-test automatically by learning to predict the difficulty of the gaps and how their manipulation affects the difficulty. Previous work on automatically controlling and manipulating test difficulty has largely focused on multiple-choice tests by generating appropriate distractors (i.e., incorrect solutions). Wojatzki et al. (2016) avoid ambiguity of their generated distractors, Hill and Simha (2016) fit them to the context, and Perez and Cuadros (2017) consider multiple languages. Further work by Zesch and Melamud (2014), Beinborn (2016), and Lee and Luo (2016) employ word difficulty, lexical substitution, and the learner’s answer history to control distractor difficulty. For C-tests, Kamimoto (1993) and Sigott (2006) study features of hand-crafted tests that influence the difficulty, and Beinborn et al. (2014) and Beinborn (2016) propose an automatic approach to estimate C-test difficulty, which we use as a starting point for our work. Another related field of research in computerassisted language learning is readability assessment and, subsequently, text simplification. There exists ample research on predicting the reading difficulty for various learner groups (Hancke et al., 2012; Collins-Thompson, 2014; Pil´an et al., 2014). A specific line of research focuses on reducing the reading difficulty by text simplification (Chandrasekar et al., 1996). By reducing complex texts or sentences to simpler ones, more texts are made accessible for less proficient learners. This is done either on a word level by substituting difficult words with easier ones (e.g., Kilgarriff et al., 2014) or on a sentence level (Vajjala and Meurers, 2014). More recent work also explores sequence-to-sequence neural network architectures for this task (Nisioi et al., 2017). Although the reading difficulty of a text partly contributes to the overall exercise difficulty of C-tests, there are many other factors with a substantial influence (Sigott, 1995). In particular, we can generate many different C-tests from the same text and thus reading difficulty and text simplification alone are not sufficient to determine and manipulate the difficulty of C-tests. 362 Corpus C-test Generation Difficulty Prediction Difficulty Manipulation Target difficulty τ C-test T Figure 2: Proposed system architecture 3 Task Overview We define a C-test T = (u, w1, . . . , w2n, v, G) as a tuple of left and right context u and v (typically one sentence) enframing 2n words wi where n=|G| is the number of gaps in the gap set G. In each gap g =(i,ℓ)∈G, the last ℓcharacters of word wi are replaced by a blank for the learners to fill in. KleinBraley and Raatz (1982) propose the default gap generation scheme DEF with G = {(2j, ⌈|w2j| 2 ⌉) | 1 ≤j ≤n} in order to trim the (larger) second half of every second word. Single-letter words, numerals, and punctuation are not counted as words wi and thus never contain gaps. Figure 1 (a) shows an example C-test generated with the DEF scheme. A major limitation of DEF is that the difficulty of a C-test is solely determined by the input text. Most texts, however, yield a medium difficulty (cf. section 6) and thus do not allow any adaptation to beginners or advanced learners unless they are manually postprocessed. 
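Since the DEF scheme is fully deterministic, it can be written down in a few lines. The sketch below assumes a pre-tokenized sentence in which punctuation is already split off from words, and it omits the rule that the first and last sentences of a text remain gap-free; it is an illustration of the scheme, not the authors' implementation.

```python
import math

def default_gaps(tokens, n=20):
    """DEF scheme: trim the last ceil(|w|/2) letters of every second eligible
    word. Single-letter tokens, numerals, and punctuation are not counted as
    words and never receive a gap."""
    gaps, counted = [], 0
    for i, tok in enumerate(tokens):
        if len(tok) < 2 or not tok.isalpha():
            continue
        counted += 1
        if counted % 2 == 0 and len(gaps) < n:       # every second word w_{2j}
            gaps.append((i, math.ceil(len(tok) / 2)))  # gap length ceil(|w|/2)
    return gaps

def render(tokens, gaps):
    """Show each gapped word with its last l characters replaced by a blank."""
    out = list(tokens)
    for i, l in gaps:
        out[i] = out[i][:-l] + "_" * l
    return " ".join(out)

tokens = ("It is being fought , moreover , in fairly close correspondence "
          "with the predictions of the soothsayers").split()
print(render(tokens, default_gaps(tokens)))
# It i_ being fou___ , moreover , i_ fairly cl___ correspondence wi__ the predi______ of t__ soothsayers
```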
In this paper, we therefore propose two strategies to manipulate the gap set G in order to achieve a given target difficulty τ ∈[0, 1] ranging from small values for beginners to high values for advanced learners. To estimate the difficulty d(T) = 1 |G| P g∈G d(g) of a C-test T, we aggregate the predicted difficulty scores d(g) of each gap. In section 4, we reproduce the system by Beinborn (2016) modeling d(g) ≈e(g) as the estimated mean error rates e(g) per gap across multiple learners, and we conduct additional validation experiments on a newly acquired dataset. The core of our work is the manipulation of the gap set G in order to minimize the difference |d(T) −τ| between the predicted test difficulty d(T) and the requested target difficulty τ. To this end, we employ our difficulty prediction system for validation and propose a new regression setup that predicts the relative change of d(g) when manipulating the size ℓof a gap. Figure 2 shows our system architecture: Based on a text corpus, we generate C-tests for arbitrary texts (e.g., according to the learner’s interests). Then, we manipulate the difficulty of the generated text by employing the difficulty prediction system in order to reach the given target difficulty τ for a learner (i.e., the estimated learner proficiency) to provide neither too easy nor too hard tests. 4 C-Test Difficulty Prediction Beinborn et al. (2014) and Beinborn (2016) report state-of-the-art results for the C-test difficulty prediction task. However, there is yet no opensource implementation of their code and there is little knowledge about the performance of newer approaches. Therefore, we (1) conduct a reproduction study of Beinborn’s (2016) system, (2) evaluate newer neural network architectures, and (3) validate the results on a newly acquired dataset. Reproduction study. We obtain the original software and data from Beinborn (2016). This system predicts the difficulty d(g) for each gap within a Ctest using a support vector machine (SVM; Vapnik, 1998) with 59 hand-crafted features. The proposed features are motivated by four factors which are deemed important for assessing the gap difficulty: item dependency, candidate ambiguity, word difficulty, and text difficulty. We use the same data (819 filled C-tests), metrics, and setup as Beinborn (2016). That is, we perform leave-one-out cross validation (LOOCV) and measure the Pearson correlation ρ, the rooted mean squared error RMSE, and the quadratic weighted kappa qwκ as reported in the original work. The left hand side of table 1 shows the results of our reproduced SVM compared to the original SVM results reported by Beinborn (2016). Even though we reuse the same code as in their original work, we observe small differences between our reproduction and the previously reported scores. We were able to trace these differences back to libraries and resources which have been updated and thus changed over time. One example is Ubuntu’s system dictionary, the American English dictionary words (wamerican), on which the original system relies. We experiment with different versions of the dictionary between Ubuntu 14.04 (wamerican v.7.1.1) and 18.04 (wamerican v.2018.04.16-1) and observe differences of one or two percentage points. As a best practice, we suggest to fix the versions of all resources and avoid any system dependencies. Neural architectures. 
We compare the system with deep learning methods based on multi-layer 363 Original data New data Model ρ RMSE qwκ ρ RMSE qwκ SVM (original) .50 .23 .44 – – – SVM (reproduced) .49 .24 .47 .50 .21 .39 MLP .42 .25 .31 .41 .22 .25 BiLSTM .49 .24 .35 .39 .24 .27 Table 1: Results of the difficulty prediction approaches. SVM (original) has been taken from Beinborn (2016) perceptrons (MLP) and bi-directional long shortterm memory (BiLSTM) architectures, which are able to capture non-linear feature dependencies.1 To cope for the non-deterministic behavior of the neural networks, we repeat all experiments ten times with different random weight initializations and report the averaged results (Reimers and Gurevych, 2017). While the MLP is trained similar as our reproduced SVM, the BiLSTM receives all gaps of a C-test as sequential input. We hypothesize that this sequence regression setup is better suited to capture gaps interdependencies. As can be seen from the table, the results of the neural architectures are, however, consistently worse than the SVM results. We analyze the RMSE on the train and development sets and observe a low bias, but a high variance. Thus, we conclude that although neural architectures are able to perform well for this task, they lack a sufficient amount of data to generalize. Experiments on new data. To validate the results and assess the robustness of the difficulty prediction system, we have acquired a new C-test dataset from our university’s language center. 803 participants of placement tests for English courses solved five C-tests (from a pool of 53 different Ctests) with 20 gaps each. Similar to the data used by Beinborn (2016), we use the error rates e(g) for each gap as the d(g) the methods should predict. The right-hand side of table 1 shows the performance of our SVM and the two neural methods. The results indicate that the SVM setup is wellsuited for the difficulty prediction task and that it successfully generalizes to new data. Final model. We train our final SVM model on all available data (i.e., the original and the new data) and publish our source code and the trained model on GitHub.2 Similar to Beinborn (2016), we 1Network parameters and a description of the tuning process are provided in this paper’s appendix. 2https://github.com/UKPLab/ acl2019-ctest-difficulty-manipulation Algorithm 1 Gap selection strategy (SEL) 1: procedure GAPSELECTION(T, τ) 2: GFULL ←{(i, ⌈|wi| 2 ⌉| 1 ≤i ≤2n} 3: GSEL ←∅ 4: while |GSEL| < n do 5: G≤τ ←{g ∈GFULL | d(g) ≤τ} 6: if |G≤τ| > 0 then 7: g∗←arg ming∈G≤τ |d(g) −τ| 8: GSEL ←GSEL ∪{g∗} 9: GFULL ←GFULL \ {g∗} 10: G>τ ←{g ∈GFULL | d(g) > τ} 11: if |G>τ| > 0 then 12: g∗←arg ming∈G>τ |d(g) −τ| 13: GSEL ←GSEL ∪{g∗} 14: GFULL ←GFULL \ {g∗} 15: return GSEL cannot openly publish our dataset due to copyright. 5 C-Test Difficulty Manipulation Given a C-test T = (u, w1, . . . , w2n, v, G) and a target difficulty τ, the goal of our manipulation strategies is to find a gap set G such that d(T) approximates τ. A na¨ıve way to achieve this goal would be to generate C-tests for all texts in a large corpus with the DEF scheme and use the one with minimal |d(T)−τ|. However, most corpora tend to yield texts of a limited difficulty range that only suit a specific learner profile (cf. section 6). Another drawback of the na¨ıve strategy is that it is difficult to control for the topic of the underlying text and in the worst case, the necessity to search through a whole corpus for selecting a fitting C-test. 
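The naive strategy just described can be sketched in a few lines once a per-gap difficulty predictor is available. In the sketch below, predict_gap_difficulty is only a toy stand-in for the trained SVM regressor d(g), and gap_fn is expected to be a DEF generator such as the default_gaps helper sketched earlier; neither reflects the released code.

```python
def predict_gap_difficulty(tokens, gap):
    """Toy stand-in for the trained per-gap regressor d(g): gaps in longer
    words simply count as harder, only so that the sketch runs end to end."""
    i, length = gap
    return min(1.0, len(tokens[i]) / 12)

def test_difficulty(tokens, gaps):
    """d(T): mean predicted difficulty over all gaps of the C-test."""
    return sum(predict_gap_difficulty(tokens, g) for g in gaps) / len(gaps)

def naive_selection(corpus, tau, gap_fn, n=20):
    """Naive strategy: DEF-generate a C-test for every corpus text and keep
    the one whose predicted difficulty d(T) is closest to the target tau."""
    best, best_dist = None, float("inf")
    for text in corpus:
        tokens = text.split()
        gaps = gap_fn(tokens, n)
        if not gaps:
            continue
        dist = abs(test_difficulty(tokens, gaps) - tau)
        if dist < best_dist:
            best, best_dist = (tokens, gaps), dist
    return best, best_dist
```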
In contrast to the na¨ıve strategy, our proposed manipulation strategies are designed to be used in real time and manipulate any given C-test within 15 seconds at an acceptable quality.3 Both strategies operate on a given text (e.g., on a topic a learner is interested in) and manipulate its gap set G in order to come close to the learner’s current language skill. The first strategy varies the position of the gaps and the second strategy learns to increase or decrease the size of the gaps. 5.1 Gap Selection Strategy The default C-test generation scheme DEF creates a gap in every second word w2j, 1 ≤j ≤n. The core idea of our first manipulation strategy SEL is to distribute the n gaps differently among the all 2n words in order to create gaps for easier or harder words than in the default generation scheme. Therefore, we use the difficulty predic(licensed under the Apache License 2.0). 3On an Intel-i5 with 4 CPUs and 16 GB RAM. 364 tion system to predict d(g) for any possible gap g ∈GFULL = {(i, ⌈|wi| 2 ⌉) | 1 ≤i ≤2n} (i.e., assuming a gap in all words rather than in every second word). Then, we alternate between adding gaps to the resulting GSEL that are easier and harder than the preferred target difficulty τ, starting with those having a minimal difference |d(g) −τ|. Algorithm 1 shows this procedure in pseudocode and figure 1 shows a C-test whose difficulty has been increased with this strategy. Note that it has selected gaps at corresponding rather than with, and soothsayers rather than the. Our proposed algorithm is optimized for runtime. An exhaustive search would require testing 2n n  combinations if the number of gaps is constant. For n = 20, this yields 137 billion combinations. While more advanced optimization methods might find better gap selections, we show in section 6 that our strategy achieves good results. 5.2 Gap Size Strategy Our second manipulation strategy SIZE changes the size of the gaps based on a pre-defined gap set. Increasing a gap g =(i, ℓ) by one or more characters, yielding g′ =(i, ℓ+ k) increases its difficulty (i.e., d(g′) ≥d(g)), while smaller gaps make the gap easier. We identify a major challenge in estimating the effect of increasing or decreasing the gap size on the gap difficulty. Although d(g′) could be estimated using the full difficulty prediction system, the search space is even larger than for the gap selection strategy, since each of the n gaps has |wi|−2 possible gap sizes to test. For n = 20 and an average word length of six, this amounts to one trillion possible combinations. We therefore propose a new approach to predict the relative difficulty change of a gap g = (i, ℓ) when increasing the gap size by one letter ∆inc(g) ≈d(g′) −d(g), g′ = (i, ℓ+ 1) and correspondingly when decreasing the gap size by one letter ∆dec(g) ≈d(g)−d(g′), g′ = (i, ℓ−1). The notion of relative difficulty change enables gap size manipulation in real time, since we do not have to invoke the full difficulty prediction system for all combinations. Instead, we can incrementally predict the effect of changing a single gap. 
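Algorithm 1, i.e. the SEL strategy of Section 5.1, translates directly into Python. The per-gap predictor d is passed in as a function (the trained SVM in the paper, a toy stand-in below), and words is assumed to contain only the 2n gap-eligible words; the extra length check merely caps the selection at exactly n gaps, which the pseudocode leaves implicit.

```python
import math

def gap_selection(words, tau, d, n=None):
    """SEL strategy (Algorithm 1): candidate gaps are default-sized gaps in
    *all* words; gaps are picked greedily by closeness to the target
    difficulty tau, alternating between gaps with d(g) <= tau and d(g) > tau."""
    n = n if n is not None else len(words) // 2
    g_full = {(i, math.ceil(len(w) / 2)) for i, w in enumerate(words)}
    g_sel = set()
    while len(g_sel) < n and g_full:
        for keep in (lambda g: d(g) <= tau, lambda g: d(g) > tau):
            candidates = [g for g in g_full if keep(g)]
            if candidates and len(g_sel) < n:        # cap at exactly n gaps
                g_star = min(candidates, key=lambda g: abs(d(g) - tau))
                g_sel.add(g_star)
                g_full.remove(g_star)
    return g_sel

# toy stand-in for the trained difficulty predictor: longer gaps count as harder
toy_d = lambda g: min(1.0, g[1] / 7)
words = "It is being fought moreover in fairly close correspondence with".split()
print(sorted(gap_selection(words, tau=0.6, d=toy_d, n=4)))
```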
To predict ∆inc and ∆dec, we train two SVMs on all gap size combinations of 120 random texts from the Brown corpus (Francis, 1965) using the following features: predicted absolute gap difficulty, word length, new gap size, modified character, a Algorithm 2 Gap size strategy (SIZE) 1: procedure INCREASEDIFFICULTY(T, τ) 2: GSIZE ←GDEF 3: D ←d(T) 4: while D < τ do 5: g∗= (i, ℓ) ←arg maxg∈GSIZE ∆inc(g) 6: ℓ←ℓ+ 1 7: D ←D + ∆inc(g) 8: return GSIZE binary indicator if the gap is at a th sound, and logarithmic difference of alternative solutions capturing the degree of ambiguity with varying gap size. With a final set of only six features, our new models are able to approximate the relative difficulty change very well deviating from the original system’s prediction only by 0.06 RMSE for ∆inc and 0.13 RMSE for ∆dec. The predictions of both models highly correlate with the predictions achieving a Pearson’s ρ of over 0.8. Besides achieving a much faster average runtime of 0.056 seconds for the relative model vs. 11 seconds for the full prediction of a single change, we can invoke the relative model iteratively to estimate d(T) for multiple changes of the gap size more efficiently. The final manipulation strategy then requires just a single call of the full prediction system. If d(T)<τ, we incrementally increase the gap sizes to make T more difficult and, vice-versa, decrease the gap sizes if d(T) > τ. In each iteration, we modify the gap with the highest relative difficulty change in order to approach the given target difficulty τ as quickly as possible. Algorithm 2 shows pseudocode for creating Gsize with increased difficulty (i.e., d(T) < τ) based on the default gap scheme DEF. The procedure for d(T) > τ works analogously, but using ∆dec and decreasing the gap size. Figure 1 (c) shows a much easier version of the example C-test, in which a learner often only has to complete the last one or two letters. 6 Evaluation of the Manipulation System To evaluate our C-test manipulation strategies, we first test their ability to cover a higher range of target difficulties than the default generation scheme and then measure how well they meet the desired target difficulty for texts from different domains. We conduct our experiments on 1,000 randomly chosen paragraphs for each of the Gutenberg (Lahiri, 2014), Reuters (Lewis et al., 2004), and Brown (Francis, 1965) corpora. We conduct our experiments on English, but our strategies can be adapted to many related languages. 365 0 20 40 60 80 100 120 140 160 180 0 0.1 0.2 0.3 0.4 0.5 0.6 Number of Exercises Exercise Difficulty DEF SIZE,0 SIZE,1 SEL,0 SEL,1 Figure 3: Difficulty distribution of exercises generated with DEF, SEL, and SIZE for extreme τ values Difficulty range. The black -marked line of figure 3 shows the distribution of d(T) based on our difficulty prediction system when creating a C-test with the default generation scheme DEF for all our samples of the Brown corpus. The vast majority of C-tests range between 0.15 and 0.30 with a predominant peak at 0.22. To assess the maximal difficulty range our strategies can achieve, we generate C-tests with maximal (τ = 1) and minimal target difficulty (τ = 0) for both strategies S ∈{SEL, SIZE}, which are also shown in figure 3 as (S, τ). Both strategies are able to clearly increase and decrease the test difficulty in the correct direction and they succeed in substantially increasing the total difficulty range beyond DEF. 
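Algorithm 2 translates just as directly. Here delta_inc is a stand-in for the trained relative-difficulty regressor ∆inc, and the sketch adds two guards that the pseudocode leaves implicit: a gap cannot grow beyond the full word, and the loop stops if no remaining gap increases the difficulty further.

```python
def increase_difficulty(words, def_gaps, tau, d_T, delta_inc):
    """SIZE strategy (Algorithm 2), increase direction: starting from the DEF
    gap set, repeatedly grow the gap with the largest predicted relative
    difficulty change until the running estimate D reaches the target tau."""
    g_size = dict(def_gaps)                 # word index -> current gap length
    D = d_T                                 # one call to the full predictor d(T)
    while D < tau:
        growable = [i for i in g_size if g_size[i] < len(words[i])]
        if not growable:                    # every gap already spans its word
            break
        i = max(growable, key=lambda j: delta_inc(words[j], g_size[j]))
        gain = delta_inc(words[i], g_size[i])
        if gain <= 0:                       # no gap increases the difficulty further
            break
        g_size[i] += 1
        D += gain                           # incremental update instead of re-predicting d(T)
    return sorted(g_size.items()), D

# toy stand-in for the Delta_inc regressor: growing a gap in a long word helps more
toy_delta = lambda word, l: 0.05 * (len(word) - l) / len(word)
words = "predicted escalation is what we are getting".split()
def_gaps = [(1, 5), (3, 2), (5, 2)]         # (index, gap length ceil(|w|/2)) for every second word
print(increase_difficulty(words, def_gaps, tau=0.35, d_T=0.25, delta_inc=toy_delta))
```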
While SEL is able to reach lower difficulty ranges, it has bigger issues with generating very difficult tests. This is due to its limitation to the fixed gap sizes, whereas SIZE can in some cases create large gaps that are ambiguous or even unsolvable. Since SIZE is, however, limited to the 20 predefined gaps, it shows a higher variance. Especially short gaps such as is and it cannot be made more difficult. Combining the two strategies is thus a logical next step for future work, building upon our findings for both strategies. We make similar observations on the Reuters and Gutenberg corpora and provide the respective figures in the appendix. Manipulation quality. We finally evaluate how well each strategy S reaches a given target difficulty. That is, we sample a random corpus text and τ, create the C-test using strategy S, predict the test difficulty d(T) and measure its difference to τ using RMSE. Table 2 shows the results for our three corpora. Throughout all three corpora, both manipulation strategies perform well. SEL consistently outperforms SIZE, which matches our observations from the previous experiment. Mind that these results depend on the quality of the auStrategy Brown Reuters Gutenberg SEL .11 .12 .10 SIZE .13 .15 .12 Table 2: RMSE for both strategies on each corpora with randomly sampled target difficulties τ tomatic difficulty predictions, which is why we conduct a user-based evaluation in the next section. 7 User-based Evaluation Hypothesis. To evaluate the effectiveness of our manipulation strategies in a real setting, we conduct a user study and analyze the difficulty of the manipulated and unmanipulated C-tests. We investigate the following hypothesis: When increasing a test’s difficulty using strategy S, the participants will make more errors and judge the test harder than a default C-test and, vice versa, when decreasing a test’s difficulty using S, the participants will make less errors and judge the test easier. Experimental design. We select four different English texts from the Brown corpus and shorten them to about 100 words with keeping their paragraph structure intact. None of the four texts is particularly easy to read with an average grade level above 12 and a Flesh reading ease score ranging between 25 (very difficult) to 56 (fairly difficult). In the supplementary material, we provide results of an automated readability analysis using standard metrics. From the four texts, we then generate the C-tests Ti, 1 ≤i ≤4 using the default generation scheme DEF. All tests contain exactly n = 20 gaps and their predicted difficulties d(Ti) are in a mid range between 0.24 and 0.28. T1 remains unchanged in all test conditions and is used to allow the participants to familiarize with the task. For the remaining three texts, we generate an easier variant T S,dec i with target difficulty τ = 0.1 and a harder variant T S,inc i with τ = 0.5 for both strategies S ∈{SEL, SIZE}. From these tests, we create 12 sequences of four C-tests that we give to the participants. Each participant receives T1 first to familiarize with the task. Then, they receive one easy T S,dec i , one default Ti, and one hard T S,inc i C-test for the same strategy S based on the texts i ∈{2, 3, 4} in random order without duplicates (e.g., the sequence T1 T SEL,dec 2 T3 T SEL,inc 4 ). Having finished a C-test, we ask them to judge the difficulty of this test on a 366 five-point Likert scale ranging from too easy to too hard. 
After solving the last test, we additionally collect a ranking of all four tests by their difficulty. Data collection. We collect the data from our participants with a self-implemented web interface for solving C-tests. We create randomized credentials linked to a unique ID for each participant and obfuscate their order, such that we can distinguish them but cannot trace back their identity and thus avoid collecting any personal information. Additionally, we ask each participant for their consent on publishing the collected data. For experiments with a similar setup and task, we obtained the approval of the university’s ethics commission. After login, the participants receive instructions and provide a self-assessment of their English proficiency and their time spent on language learning. The participants then solve the four successive C-tests without knowing the test difficulty or the manipulation strategy applied. They are instructed to spend a maximum of five minutes per C-test to avoid timebased effects and to prevent them from consulting external resources, which would bias the results. Participants. A total of 60 participants completed the study. We uniformly distributed the 12 test sequences (six per strategy), such that we have 30 easy, 30 default, and 30 hard C-test results for each manipulation strategy. No participant is native in English, 17 are taking language courses, and 57 have higher education or are currently university students. The frequency of their use of English varies, as we found a similar number of participants using English daily, weekly, monthly, and (almost) never in practice. An analysis of the questionnaire is provided in the paper’s appendix. Hypothesis testing. We evaluate our hypothesis along three dimensions: (1) the actual error rate of the participants, (2) the perceived difficulty after each individual C-test (Likert feedback), and (3) the participants’ final difficulty ranking. While the latter forces the participants to provide an explicit ranking, the former allows them to rate C-tests equally difficult. We conduct significance testing at the Bonferroni-corrected α = 0.05 2 = 0.025 for each dimension using one-tailed t-tests for the continuous error rates and one-tailed Mann–Whitney U tests for the ordinal-scaled perceived difficulties and rankings. Figure 4 shows notched boxplots of our results. To test our hypothesis, we first formulate a null easy (dec) default hard (inc) SEL SIZE DEF SEL SIZE T1 – – .30 – – T2 .17∗ .11∗ .34 .66∗ .44∗ T3 .16∗ .10∗ .27 .52∗ .43∗ T4 .28 .09∗ .30 .43∗ .45∗ Average .20∗ .10∗ .30 .53∗ .44∗ Table 3: Mean error rates e(T) per text and strategy. Results marked with ∗deviate significantly from DEF hypothesis that (a) the mean error rate, (b) the median perceived difficulty (Likert feedback), and (c) the median rank of the manipulated tests equal the default tests. While the participants have an average error rate of 0.3 on default C-tests, the T S,dec i tests are significantly easier with an average error rate of 0.15 (t = 7.49, p < 10−5) and the T S,inc i tests are significantly harder with an average error rate of 0.49 (t = −7.83, p < 10−5), so we can safely reject the null hypothesis for error rates. Table 3 shows the error rates per C-test and strategy. Both SEL and SIZE are overall able to significantly (p < 0.025) increase and decrease the test’s difficulty over DEF, and with the exception of T SEL,dec 4 , the effect is also statistically significant for all individual text and strategy pairs. 
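The reported hypothesis tests can be reproduced with standard SciPy routines. The arrays below are random stand-ins for the per-participant error rates and Likert ratings, not the study data; only the test choices (one-tailed t-test for the continuous error rates, one-tailed Mann-Whitney U for the ordinal feedback, Bonferroni-corrected threshold) follow the setup described above.

```python
import numpy as np
from scipy.stats import ttest_ind, mannwhitneyu

rng = np.random.default_rng(0)
alpha = 0.05 / 2                                   # Bonferroni-corrected threshold

# stand-ins for 30 per-participant error rates per condition
err_default = rng.normal(0.30, 0.10, 30).clip(0, 1)
err_easy    = rng.normal(0.15, 0.08, 30).clip(0, 1)
err_hard    = rng.normal(0.49, 0.12, 30).clip(0, 1)

# one-tailed t-tests on the continuous error rates
t_easy = ttest_ind(err_easy, err_default, alternative="less")
t_hard = ttest_ind(err_hard, err_default, alternative="greater")
print(f"easier: t={t_easy.statistic:.2f}, p={t_easy.pvalue:.2g}, reject={t_easy.pvalue < alpha}")
print(f"harder: t={t_hard.statistic:.2f}, p={t_hard.pvalue:.2g}, reject={t_hard.pvalue < alpha}")

# one-tailed Mann-Whitney U test on the ordinal Likert feedback (1 = too easy, 5 = too hard)
likert_default = rng.integers(1, 6, 30)
likert_easy    = rng.integers(1, 4, 30)
u = mannwhitneyu(likert_easy, likert_default, alternative="less")
print(f"Likert (easier vs. default): U={u.statistic:.1f}, p={u.pvalue:.2g}, reject={u.pvalue < alpha}")
```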
Figure 5 shows the 30 participants per strategy on the x-axis and their error rates in their second to fourth C-test on the y-axis. C-tests for which we increased the difficulty (S, inc) yield more errors than C-tests with decreased difficulty (S, dec) in all cases. The easier tests also yield fewer errors than the test with the default scheme DEF in most cases. While hard tests often have a much higher error rate than DEF, we find some exceptions in which the participant's error rate is close to or even below the DEF error rate.
Figure 4: Notched boxplots for the (a) observed error rates, (b) Likert feedback, and (c) the participants' rankings (conditions: SIZE,dec; SEL,dec; DEF; SIZE,inc; SEL,inc).
Figure 5: Error rates per participant and strategy (SEL,dec / DEF / SEL,inc and SIZE,dec / DEF / SIZE,inc).
Regarding the perceived difficulty, we find that the participants judge the manipulated C-tests with lower d(T) as easier on both the Likert scale (z = 6.16, p < 10^{-5}) and in the rankings (z = 6.59, p < 10^{-5}) based on the Mann-Whitney U test. The same is true for C-tests that have been manipulated to a higher difficulty level, which the participants judge harder (z = −4.57, p < 10^{-5}) and rank higher (z = −3.86, p < 6 · 10^{-5}). We therefore reject the null hypotheses for the Likert feedback and the rankings and conclude that both strategies can effectively manipulate a C-test's difficulty.
Manipulation quality. We further investigate whether the strategies yield different difficulty levels. Therefore, we use two-tailed significance testing between SEL and SIZE for all three dimensions. We find that SIZE yields significantly easier C-tests than SEL in terms of error rates (p = 0.0014) and Likert feedback (p = 6 · 10^{-5}), and observe p = 0.0394 for the rankings. For increasing the difficulty, however, we do not find significant differences between the two strategies. Since both strategies successfully modify the difficulty individually, this motivates research on combined strategies in the future.
                 SEL          DEF     SIZE
τ                .10   .50    –       .10   .50
RMSE(e, d)       .10   .13    .04     .09   .11
RMSE(e, τ)       .12   .10    –       .01   .06
Table 4: RMSE between the actual difficulty e(T) and the predicted difficulty d(T) as well as the target difficulty τ.
We furthermore investigate how well our strategies perform in creating C-tests with the given target difficulty τ. Table 4 shows the RMSE for e(T) and d(T) as well as for e(T) and τ for both strategies. As expected, our difficulty prediction system works best for C-tests generated with DEF, as they use the same scheme as the C-tests in the training data. Though slightly worse than for DEF, we still find very low RMSE scores for manipulated C-tests. This is especially good when considering that the system's performance on our newly acquired dataset yields an RMSE of 0.21 (cf. section 6). Computing the RMSE with respect to our chosen target difficulties τ yields equally good results for SEL and exceptionally good results for SIZE. Figure 6 displays d(T) in comparison to e(T) for each individual text and strategy.
Figure 6: Predicted difficulties d(T) vs. the actual error rates e(T) per text and strategy (DEF, SEL,dec, SEL,inc, SIZE,dec, SIZE,inc), with the diagonal d(T) = e(T) and the targets τ = 0.1 and τ = 0.5 marked.
With the exception of T SEL,inc 2 and T SEL,dec 4 , all predictions are close to the optimum (i.e., the diagonal) and also close to the desired target difficulty τ. In a more detailed analysis, we find two main sources of problems demanding further investigation: First, the difficulty prediction quality when deviating from DEF and second, the increasing ambiguity in harder C-tests. However, it underestimates the d(T) = 0.11 for T SEL,dec 4 (the same text used in figure 1), for which we found an actual error rate of 0.28. This is due to chains of four successive gaps, such as: gap g i wh w a solution is what we are d(g) 0.17 0.22 0.23 0.19 e(g) 0.70 0.40 0.10 0.20 As the prediction system has been trained only on DEF-generated C-tests, it underestimates d(g) for cases with limited context. It will be interesting for future work to focus on modeling gap interdependencies in C-tests deviating from DEF. Another issue we observe is that the gap size strategy might increase the ambiguity of the C-test. In the standard scheme, there is in most cases only a single correct answer per gap. In T SIZE,inc 2 , how368 ever, the SIZE strategy increased the gap of the word professional to its maximal length yielding p . One participant answered popularising for this gap, which also fits the given context. We carefully checked our datasetfor other ambiguity, but only found one additional case: In T4, instead of the word close, 13 participants out of 30 used clear as a modifier of correspondence, which both produce meaningful contexts. Given that this case is already ambiguous in the DEF scheme yielding the gap cl , we conclude that the issue is not severe, but that the difficulty prediction system should be improved to better capture ambiguous cases; for example, by introducing collocational features weighted by their distribution within a corpus into ∆inc and ∆dec. 8 Conclusion In this work, we proposed two novel strategies for automatically manipulating the difficulty of C-test exercises. Our first strategy selects which words should be turned into a gap, and the second strategy learns to increase or decrease the size of the gaps. Both strategies automatically predict the difficulty of a test to make informed decisions. To this end, we reproduced previous results, compared them to neural architectures, and tested them on a newly acquired dataset. We evaluate our difficulty manipulation pipeline in a corpus-based study and with real users. We show that both strategies can effectively manipulate the C-test difficulty, as both the participants’ error rates and their perceived difficulty yield statistically significant effects. Both strategies reach close to the desired difficulty level. Our error analysis points out important directions for future work on detecting ambiguous gaps and modeling gap interdependencies for C-tests deviating from the default generation scheme. An important observation is that manipulating the gaps’ size and position does not only influence the C-test difficulty, but also addresses different competencies (e.g., requires more vocabulary knowledge or more grammatical knowledge). Future manipulation strategies that take the competencies into account have the potential to train particular skills and to better control the competencies required for a placement test. Another strand of research will be combining both strategies and deploying the manipulation strategies in a large scale testing platform that allows the system to adapt to an individual learner over time. 
A core advantage of our manipulation strategies is that we can work with any given text and thus provide C-tests that do not only have the desired difficulty, but also integrate the learner’s interest or the current topic of a language course. Acknowledgments This work has been supported by the Hessian research excellence program “Landes-Offensive zur Entwicklung Wissenschaftlich-¨okonomischer Exzellenz” (LOEWE) as part of the a! – automated language instruction project under grant No. 521/17-03 and by the German Research Foundation as part of the Research Training Group “Adaptive Preparation of Information from Heterogeneous Sources” (AIPHES) under grant No. GRK 1994/1. We thank the anonymous reviewers for their detailed and helpful comments. We furthermore thank the language center of the Technische Universit¨at Darmstadt for their cooperation and Dr. Lisa Beinborn for providing us with the code for our reproduction study. References Lisa Beinborn, Torsten Zesch, and Iryna Gurevych. 2014. Predicting the Difficulty of Language Proficiency Tests. Transactions of the Association for Computational Linguistics, 2:517–529. Lisa Marina Beinborn. 2016. Predicting and manipulating the difficulty of text-completion exercises for language learning. Ph.D. thesis, Technische Universit¨at Darmstadt. Raman Chandrasekar, Christine Doran, and Bangalore Srinivas. 1996. Motivations and methods for text simplification. In Proceedings of the 16th International Conference on Computational Linguistics (COLING): Volume 2, pages 1041–1044, Copenhagen, Denmark. C. A. Chapelle. 1994. Are C-tests valid measures for L2 vocabulary research? Second Language Research, 10(2):157–187. Carol A. Chapelle and Roberta G. Abraham. 1990. Cloze method: what difference does it make? Language Testing, 7(2):121–146. Kevyn Collins-Thompson. 2014. Computational assessment of text readability: A survey of current and future research. International Journal of Applied Linguistics – Special Issue on Recent Advances in Automatic Readability Assessment and Text Simplification, 165(2):97–135. 369 EC. 2002. Presidency Conclusions. Barcelona European Council 15 and 16 March 2002. Report SN 100/1/02 REV 1, Council of the European Union. W. Nelson Francis. 1965. A standard corpus of edited present-day american english. College English, 26(4):267–273. Julia Hancke, Sowmya Vajjala, and Detmar Meurers. 2012. Readability classification for german using lexical, syntactic, and morphological features. In Proceedings of the 24th International Conference on Computational Linguistics (COLING), pages 1063– 1080, Mumbai, India. Jennifer Hill and Rahul Simha. 2016. Automatic generation of context-based fill-in-the-blank exercises using co-occurrence likelihoods and google n-grams. In Proceedings of the 11th Workshop on Innovative Use of NLP for Building Educational Applications (BEA), pages 23–30, San Diego, CA, USA. Tadamitsu Kamimoto. 1993. Tailoring the Test to Fit the Students: Improvement of the C-Test through Classical Item Analysis. Language Laboratory, 30:47–61. Adam Kilgarriff, Frieda Charalabopoulou, Maria Gavrilidou, Janne Bondi Johannessen, Saussan Khalil, Sofie Johansson Kokkinakis, Robert Lew, Serge Sharoff, Ravikiran Vadlapudi, and Elena Volodina. 2014. Corpus-based vocabulary lists for language learners for nine languages. Language Resources and Evaluation, 48(1):121–163. Christine Klein-Braley and Ulrich Raatz. 1982. Der C-Test: ein neuer Ansatz zur Messung allgemeiner Sprachbeherrschung. AKS-Rundbrief, 4:23–37. Shibamouli Lahiri. 2014. 
Complexity of Word Collocation Networks: A Preliminary Structural Analysis. In Proceedings of the Student Research Workshop at the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 96–105, Gothenburg, Sweden. Batia Laufer and Paul Nation. 1999. A vocabulary-size test of controlled productive ability. Language Testing, 16(1):33–51. John Lee and Mengqi Luo. 2016. Personalized exercises for preposition learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL): System Demonstrations, pages 115–120, Berlin, Germany. David D. Lewis, Yiming Yang, Tony G. Rose, and Fan Li. 2004. RCV1: A New Benchmark Collection for Text Categorization Research. Journal of Machine Learning Research, 5(Apr):361–397. Sergiu Nisioi, Sanja ˇStajner, Simone Paolo Ponzetto, and Liviu P. Dinu. 2017. Exploring neural text simplification models. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL): Short Papers, volume 2, pages 85–91, Vancouver, Canada. Naiara Perez and Montse Cuadros. 2017. Multilingual call framework for automatic language exercise generation from free text. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics (EACL): Software Demonstrations, pages 49–52, Valencia, Spain. Ildik´o Pil´an, Elena Volodina, and Richard Johansson. 2014. Rule-based and machine learning approaches for second language sentence-level readability. In Proceedings of the Ninth Workshop on Innovative Use of NLP for Building Educational Applications (BEA), pages 174–184, Baltimore, MD, USA. Nils Reimers and Iryna Gurevych. 2017. Reporting Score Distributions Makes a Difference: Performance Study of LSTM-networks for Sequence Tagging. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 338–348, Copenhagen, Denmark. G¨unther Sigott. 1995. The C-Test: Some Factors of Difficulty. AAA: Arbeiten aus Anglistik und Amerikanistik, 20(1):43–53. G¨unther Sigott. 2006. How fluid is the c-test construct? In Der C-Test: Theorie, Empirie, Anwendungen – The C-Test: Theory, Empirical Research, Applications, Language Testing and Evaluation, pages 139– 146. Frankfurt am Main: Peter Lang. Bernard Spolsky. 1969. Reduced Redundancy as a Language Testing Tool. In G.E. Perren and J.L.M. Trim, editors, Applications of linguistics, pages 383–390. Cambridge: Cambridge University Press. Wilson L. Taylor. 1953. “Cloze Procedure”: A New Tool for Measuring Readability. Journalism Bulletin, 30(4):415–433. Sowmya Vajjala and Detmar Meurers. 2014. Assessing the relative reading level of sentence pairs for text simplification. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics (EACL), pages 288–297, Gothenburg, Sweden. Vladimir N. Vapnik. 1998. Statistical Learning Theory. New York: Wiley. Lev Vygotsky. 1978. Mind in society: The development of higher psychological processes. Cambridge: Harvard University Press. Michael Wojatzki, Oren Melamud, and Torsten Zesch. 2016. Bundled gap filling: A new paradigm for unambiguous cloze exercises. In Proceedings of the 11th Workshop on Innovative Use of NLP for Building Educational Applications (BEA), pages 172– 181, San Diego, CA, USA. Torsten Zesch and Oren Melamud. 2014. Automatic generation of challenging distractors using contextsensitive inference rules. 
In Proceedings of the Ninth Workshop on Innovative Use of NLP for Building Educational Applications (BEA), pages 143–148, Baltimore, MD, USA.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3601–3605 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3601 Psycholinguistics meets Continual Learning: Measuring Catastrophic Forgetting in Visual Question Answering Claudio Greco1 [email protected] Barbara Plank2 [email protected] Raquel Fernández3 [email protected] Raffaella Bernardi1,4 [email protected] 1CIMeC and 4DISI University of Trento 2Dept. of Computer Science IT University of Copenhagen 3ILLC University of Amsterdam Abstract We study the issue of catastrophic forgetting in the context of neural multimodal approaches to Visual Question Answering (VQA). Motivated by evidence from psycholinguistics, we devise a set of linguistically-informed VQA tasks, which differ by the types of questions involved (Wh-questions and polar questions). We test what impact task difficulty has on continual learning, and whether the order in which a child acquires question types facilitates computational models. Our results show that dramatic forgetting is at play and that task difficulty and order matter. Two well-known current continual learning methods mitigate the problem only to a limiting degree. 1 Introduction Supervised machine learning models are incapable of continuously learning new tasks, as they forget how to perform the previously learned ones. This problem, called catastrophic forgetting, is prominent in artificial neural networks (McClelland et al., 1995). Continual Learning (CL) addresses this problem by trying to equip models with the capability to continuously learn new tasks over time (Ring, 1997). Catastrophic forgetting and CL have received considerable attention in computer vision (e.g., Zenke et al., 2017; Kirkpatrick et al., 2017), but far less attention within Natural Language Processing (NLP). We investigate catastrophic forgetting in the context of multimodal models for Visual Question Answering (Antol et al., 2015) motivated by evidence from psycholinguistics. VQA is the task of answering natural language questions about an image. Evidence from child language acquisition indicates that children learn Wh-questions before polar (Yes/No) questions (Moradlou and Ginzburg, 2016; Moradlou et al., 2018). Motivated by this finding, we design a set of fA fB fCL .. .. .. .. Wh-q: y ∈ {metal, blue, sphere,..,large}
Q: What is the material of the large object that is the same shape as the tiny yellow thing? A: metal. Yes/No-q: y ∈ {Yes, No}
 Q: Does the cyan ball have the same material as the large object behind the green ball? A: Yes CLEVR (Johnson et al., 2017) Continual Learning (CL) - Single-head Setup: no task identifier provided tasks single softmax over all y Training phase Testing phase Multimodal tasks: Figure 1: Overview of our linguistically-informed CL setup for VQA. linguistically-informed experiments: i) to investigate whether the order in which children acquire question types facilitates continual learning for computational models and, accordingly, the impact of task order on catastrophic forgetting; ii) to measure how far two well-known CL approaches help to overcome the problem (Robins, 1995; Kirkpatrick et al., 2017)1. Contributions: Our study contributes to the literature on CL in NLP. In particular: i) we introduce a CL setup based on linguistically-informed task pairs which differ with respect to question types and level of difficulty; ii) we show the importance of task order, an often overlooked aspect, and observe asymmetric synergetic effects; iii) our results show that our VQA model suffers from extreme forgetting; rehearsal gives better results than a regularization-based method. Our error analysis shows that the latter approach encounters problems even in discerning Task A after having been trained on Task B. Our study opens the door to deeper investigations of CL on linguistic 1Code and data are available at the link http:// continual-vista.github.io/. 3602 skills with different levels of difficulty based of psycholinguistics findings. 2 Task Setup As a first step towards understanding the connection between linguistic skills and the impact on CL, we design a set of experiments within VQA where tasks differ with respect to the type of question and the level of difficulty according to the psycholinguistics literature. The overall setup is illustrated in Figure 1 and described next. Dataset CLEVR (Johnson et al., 2017a) allows to study the ability of VQA agents. It requires compositional language and basic spatial reasoning skills. Every question in CLEVR is derived by a Functional Program (FP) from a scene graph of the associated image. The scene graph defines the objects and attributes in the image. The FP contains functions corresponding to skills, e.g., querying object attributes or comparing values (see Fig. 1, upper). Questions are categorized by their type. CLEVR consists of five question types whose answer labels range over 15 attributes, 10 numbers, and “yes”/“no” (in total 27 labels). Multimodal Tasks We select the CLEVR subtasks ‘query_attribute’ and ‘equal_attribute’ with attributes color, shape, material, and size. The two types of questions differ by answer type y ∈Y: • Wh-questions (Wh-q): Questions about the attribute of an object, e.g., “What is the material of the large object. . . ?”, where y ∈{blue, cube, small, . . . , metal} spans over |color| = 8, |shape| = 3, |size| = 2 and |material| = 2 (in total |Y| = 15). • Yes/No questions (Y/N-q): Questions that compare objects with respect to an attribute, e.g., “Does the cyan ball have the same material as . . . ?”, with y ∈{yes, no} (in total |Y| = 2). Task Order We learn Task A followed by Task B (TASKA→TASKB), but experiment with both directions, i.e., by first assigning Wh-q to Task A and Y/N-q to Task B, and vice versa. We expect that the inherent difficulty of a task and the order in which tasks are learned have an impact on CL. Single-head Evaluation CL methods can be tested in two ways. 
We opt for a single-head evaluation setup (see Fig. 1, lower) with an output space over labels for all tasks (here: all CLEVR labels). In contrast, in a multi-head setup predictions are restricted to task labels, as the task identifier is provided. Single-head is more difficult yet more realistic (Chaudhry et al., 2018). 3 Models and Experiments VQA Model We take the model proposed by Yang et al. (2016) as a starting point, using the code released by Johnson et al. (2017b) (LSTM+CNN+SA). Questions are encoded with a recurrent neural network with Long Short-Term Memory (LSTM) units. Images are encoded with a ResNet-101 Convolutional Neural Network (CNN) pre-trained on ImageNet (He et al., 2016). The two representations are combined using Spatial Attention (SA) (Yang et al., 2016) to focus on the most salient objects and properties in the image and text. The final answer distribution is predicted with a Multilayer Perceptron (MLP). Baselines In order to measure catastrophic forgetting, we first consider per-task baselines: A random baseline (i.e., random stratified sample of the label distribution per task) and the results of a model trained independently on each task (i.e., over task-specific Y). For CL, we report again a random baseline (this time a random stratified sample drawing predictions according to the answer distribution of both tasks), and we consider the Naive and Cumulative baselines proposed by Maltoni and Lomonaco (2018). The Naive model is fine-tuned across tasks: It is first trained on Task A and then on Task B starting from the previously learned parameters. The Cumulative model is trained from scratch on the training sets of both Task A and Task B. This is a kind of upper bound, or performance that a CL model should achieve. Continual Learning Models In CL there are two broad families of methods: Those that assume memory and access to explicit previous knowledge (instances), and those that have only access to compressed knowledge, such as previously learned parameters. These two families correspond to rehearsal and regularization, respectively. A widely-used regularization-based approach is Elastic Weight Consolidation (EWC, Kirkpatrick et al., 2017). A regularization term, parametrized by λ, is added to the loss function aiming the model to converge to parameters where it has a low error for both tasks. In the Rehearsal approach (Robins, 1995), the model is first trained on 3603 Task A, then the parameters are fine-tuned through batches taken from a dataset containing a small number of examples of Task A and the training set of Task B. The selection of training examples of Task A is done through uniform sampling. Data and Training Details Since CLEVR has no published ground-truth answers for the test set, we split the original validation set into a validation and a test set. To avoid performance impact due to different training data sizes, we downsample the training sets to the same size (Y/N-q data size), resulting in 125,654 training instances per task. The validation and test sets contain, respectively, 26,960 and 26,774 data points for Wh-q and 13,417 and 13,681 data points for Y/N-q. For the baselines, we select the model which reaches maximum accuracy on the validation set of each task. For CL, we choose the model with the highest CL score computed according to the validation set of each task pair. Details on hyperparameters and evaluation metrics are provided in the supplementary material (SM). 4 Results and Analysis The main results are provided in Table 1. 
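Before turning to the take-aways from Table 1, a minimal sketch of the EWC-style penalty described above may be useful. It assumes the standard formulation with a diagonal Fisher estimate and the Task-A parameters stored after training on Task A; it illustrates the general technique, not the implementation used in this study.

```python
import torch

def ewc_penalty(model, fisher_diag, params_task_a, lam):
    """Quadratic EWC penalty: (lam / 2) * sum_i F_i * (theta_i - theta*_A,i)^2.

    fisher_diag and params_task_a map parameter names to tensors stored after
    training on Task A (both are assumptions of this sketch).
    """
    penalty = torch.zeros((), device=next(model.parameters()).device)
    for name, param in model.named_parameters():
        if name in fisher_diag:
            penalty = penalty + (fisher_diag[name] * (param - params_task_a[name]) ** 2).sum()
    return 0.5 * lam * penalty

# While training on Task B, the total loss would then be roughly:
#   loss = task_b_loss + ewc_penalty(model, fisher_diag, params_task_a, lam)
```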
There are several take-aways. Task Difficulty The results of the per-task models (cf. first two rows in Table 1) show that there is a large performance gap between the two tasks. Wh-q is easier (.81) than Y/N-q (.52), regardless of the fact that a priori the latter should be easier (as shown by the respective task-specific random baselines). The Y/N-q task-specific model performs only slightly above chance (.52, in line with what Johnson et al. (2017a) report for ‘equal_attribute’ questions). This shows that despite the limited output space of the Y/N-q task, such type of questions in CLEVR are complex and require reasoning skills (Johnson et al., 2017a). Catastrophic Forgetting We observe that extreme forgetting is at play. Naive forgets the previously learned skill completely: When tested on Task A after having been fine-tuned on Task B, it achieves 0.0 accuracy on the first task for both directions (I and II, cf. Table 1 lower). The Cumulative model by nature cannot forget, since it is trained on both tasks simultaneously, achieving .81 and .74 on Wh-q and Y/N-q, respectively. Interestingly, we observe an asymmetric synergetic effect. Being exposed to the Wh-q task helps the Random (per-task) WH: 0.09 Y/N: 0.50 LSTM+CNN+SA WH: 0.81 Y/N 0.52 CL SETUPS: I) WH→Y/N II) Y/N→WH Wh Y/N Y/N Wh Random (both tasks) 0.04 0.25 0.25 0.04 Naive 0.00 0.61 0.00 0.81 EWC 0.25 0.51 0.00 0.83 Rehearsal 0.75 0.51 0.51 0.80 Cumulative 0.81 0.74 0.74 0.81 Table 1: Mean accuracy over 3 runs: Trained on each task independently (first two rows; per-task label space Y) vs. CL setups (single-head label space over all Y). Cumulative model improve on Y/N-q, reaching results beyond the task-specific model (from .52 to .74). The effect is not symmetric as the accuracy on Wh-q does not further increase. Does CL Help? Current CL methods show only limiting (or no) effect. EWC performs bad overall: In the II) setup (Y/N→WH, harder task first), EWC does not yield any improvement over the Naive model; in the WH→Y/N setup, the model’s result on Task A is above chance level (.25 vs. .04) but far off per-task performance (.81). The Rehearsal model forgets less than Naive and EWC in both setups: In the Y/N→WH setup, it is above chance level (.51 vs. .25) reaching per-task random baseline results on Y/N questions (i.e., the model is able to identify Task A, despite the harder singlehead setting, in contrast to the Naive and EWC models). There is no boost derived from being exposed to the Wh-q task in any of the two setups. Task Order The results in Table 1 show that the order of tasks plays an important role: WH→Y/N facilitates CL more than the opposite order: less forgetting is at place when WH is learned first. This confirms psycholinguistic evidence. Overall, Rehearsal works better than EWC, but mitigates forgetting only to a limiting degree. Analysis To get a deeper understanding of the models, we analyze the penultimate hidden layer on a sample of 512 questions from the test sets of both tasks (cf. Fig. 2) and relate the representations to confusion matrices of the whole test sets (provided in the SM) and test results (Table 1). First of all, the model trained on Wh-q discriminates Wh-questions about different attributes very well, reflected in overall high accuracy (.81). 
It otherwise clusters all instances from the other task 3604 50 0 50 40 20 0 20 40 Wh 25 0 25 50 20 0 20 40 60 Cumulative 25 0 25 50 20 0 20 40 EWC 25 0 25 40 20 0 20 40 Rehearsal equal_color equal_material equal_shape equal_size query_color query_material query_shape query_size Figure 2: Analysis of the neuron activations on the penultimate hidden layer for the I) WH →Y/N setup. “equal_{shape,color,material,size}” refers to Y/N-q, “query_{..}” refers to WH-questions. (Y/N-q, which it has not been trained on) around Wh-questions related to size. The Cumulative model, in contrast, is able to further tease the different kinds of Y/N questions apart. Questions about different attributes become distinguishable in the plot, although overall Y/N questions remain closer together than the clusters for Wh-q. This is in line with the lower performance of Cumulative on Y/N-q. Our examination of the confusion matrices confirms that the two question types are never confused by the Cumulative model. In contrast, the Naive model is very prone to this type of mistake (see plot in SM). As for the CL models, Fig. 2 (two rightmost plots) shows that EWC learns representations which are rather similar to those learned by the model trained on Wh-q independently: Y/N questions result in a big hard-to-distinguish “blob”, and are confused with Wh-q about size, as visible in Fig. 2 and the confusion matrix analysis (in the SM). In contrast, Rehearsal remembers how to distinguish among all kinds of Wh-q and between Wh-q and Y/N-q. The error analysis confirms that the model hardly makes any mistakes related to task confusion. However, despite the higher performance than EWC, Rehearsal is still not able to discern well between different kinds of Y/N-q. 5 Related Work Early work on life-long learning (Chen et al., 2015; Mitchell et al., 2015) is related to ours, but typically concerns a single task (e.g., relation extraction). Lee (2017) aims to transfer conversational skills from a synthetic domain to a customer-specific application in dialogue agents, while Yogatama et al. (2019) show that current models for different NLP tasks are not able to properly reuse previously learned knowledge. In general, continual learning has been mostly studied in computer vision. To the best of our knowledge, little has been done on catastrophic forgetting in VQA. A study on forgetting in the context of VQA and closest to ours is Perez et al. (2018). They show that their model forgets after being fine-tuned on data including images with objects of colors other than those previously seen. We took this work as starting point and extended it to consider different types of questions and to test different CL methods beyond fine-tuning. 6 Conclusion We assessed to what extent a multimodal model suffers from catastrophic forgetting in a VQA task. We built two tasks involving different linguistic characteristics which are known to be learned sequentially by children and on which multimodal models reach different performance. Our results show that dramatic forgetting is at play in VQA, and for the tested task pairs we empirically found Rehearsal to work better than a regularization-based method (EWC). More importantly, we show that the order in which models learn tasks is important, WH→Y/N facilitates continual learning more than the opposite order, thereby confirming psycholinguistic evidence. 
Our error analysis highlights the importance of taking the kind of mistakes made by the models into account: A model that does not detect Task A after having been exposed to Task B should be penalized more than a model that answers Task A with wrong task-related labels, but is still capable of identifying the task. Most importantly, our study revealed that differences in the inherent difficulty of the tasks at hand can have a strong im3605 pact on continual learning. Regularization-based methods like EWC appear to work less well when applied to tasks with different levels of difficulty, as in our experiments. We reserve a deeper investigation of this aspect to future research. Acknowledgements We kindly acknowledge the support of NVIDIA Corporation with the donation of the GPUs used in our research to the University of Trento and IT University of Copenhagen. R. Fernández was funded by the Netherlands Organisation for Scientific Research (NWO) under VIDI grant nr. 27689-008, Asymmetry in Conversation. References Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. 2015. VQA: Visual question answering. In International Conference on Computer Vision (ICCV). Arslan Chaudhry, Puneet K Dokania, Thalaiyasingam Ajanthan, and Philip Torr. 2018. Riemannian walk for incremental learning: Understanding forgetting and intransigence. In ECCV. Zhiyuan Chen, Nianzu Ma, and Bing Liu. 2015. Lifelong learning for sentiment classification. In ACL. Short paper. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770– 778. Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Li Fei-Fei, C Lawrence Zitnick, and Ross Girshick. 2017a. CLEVR: A diagnostic dataset for compositional language and elementary visual reasoning. In CVPR. Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Judy Hoffman, Li Fei-Fei, C. Lawrence Zitnick, and Ross Girshick. 2017b. Inferring and executing programs for visual reasoning. In ICCV. James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. 2017. Overcoming catastrophic forgetting in neural networks. PNAS. Sungjin Lee. 2017. Toward continual learning for conversational agents. In ACL. Davide Maltoni and Vincenzo Lomonaco. 2018. Continuous learning in single-incremental-task scenarios. arXiv preprint arXiv:1806.08568. James L McClelland, Bruce L McNaughton, and Randall C O’reilly. 1995. Why there are complementary learning systems in the hippocampus and neocortex: insights from the successes and failures of connectionist models of learning and memory. Psychol. Review, 102(3). T. Mitchell, W. Cohen, E. Hruscha, P. Talukdar, J. Betteridge, A. Carlson, B. Dalvi, M. Gardner, B. Kisiel, J. Krishnamurthy, N. Lao, K. Mazaitis, T. Mohammad, N. Nakashole, E. Platanios, A. Ritter, M. Samadi, B. Settles, R. Wang, D. Wijaya, A. Gupta, X. Chen, A. Saparov, M. Greaves, and J. Welling. 2015. Never-ending learning. In AAAI. Sara Moradlou and Jonathan Ginzburg. 2016. Young children’s answers to questions. In Workshop on the Role of Pragmatic Factors on Child Language Processing. Sara Moradlou, Xiaobei Zheng, Ye Tian, and Jonathan Ginzburg. 2018. Wh-questions are understood before polars. In Proceedings of Architectures and Mechanisms for Language Processing (AMLaP). 
Ethan Perez, Florian Strub, Harm De Vries, Vincent Dumoulin, and Aaron Courville. 2018. Film: Visual reasoning with a general conditioning layer. In AAAI. Mark Ring. 1997. CHILD: A first step towards continual learning. Machine Learning, 28(1). Anthony Robins. 1995. Catastrophic forgetting, rehearsal and pseudorehearsal. Connection Science, 7(2):123–146. Zichao Yang, Xiaodong He, Jianfeng Gao, Li Deng, and Alex Smola. 2016. Stacked attention networks for image question answering. In CVPR. Dani Yogatama, Cyprien de Masson d’Autume, Jerome Connor, Tomas Kocisky, Mike Chrzanowski, Lingpeng Kong, Angeliki Lazaridou, Wang Ling, Lei Yu, Chris Dyer, et al. 2019. Learning and evaluating general linguistic intelligence. arXiv preprint arXiv:1901.11373. Friedemann Zenke, Ben Poole, and Surya Ganguli. 2017. Continual learning through synaptic intelligence. In ICML.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3606–3612 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3606 Improving Visual Question Answering by Referring to Generated Paragraph Captions Hyounghun Kim Mohit Bansal UNC Chapel Hill {hyounghk, mbansal}@cs.unc.edu Abstract Paragraph-style image captions describe diverse aspects of an image as opposed to the more common single-sentence captions that only provide an abstract description of the image. These paragraph captions can hence contain substantial information of the image for tasks such as visual question answering. Moreover, this textual information is complementary with visual information present in the image because it can discuss both more abstract concepts and more explicit, intermediate symbolic information about objects, events, and scenes that can directly be matched with the textual question and copied into the textual answer (i.e., via easier modality match). Hence, we propose a combined Visual and Textual Question Answering (VTQA) model which takes as input a paragraph caption as well as the corresponding image, and answers the given question based on both inputs. In our model, the inputs are fused to extract related information by cross-attention (early fusion), then fused again in the form of consensus (late fusion), and finally expected answers are given an extra score to enhance the chance of selection (later fusion). Empirical results show that paragraph captions, even when automatically generated (via an RL-based encoderdecoder model), help correctly answer more visual questions. Overall, our joint model, when trained on the Visual Genome dataset, significantly improves the VQA performance over a strong baseline model. 1 Introduction Understanding visual information along with natural language have been studied in different ways. In visual question answering (VQA) (Antol et al., 2015; Goyal et al., 2017; Lu et al., 2016; Fukui et al., 2016; Xu and Saenko, 2016; Yang et al., 2016; Zhu et al., 2016; Anderson et al., 2018), models are trained to choose the correct answer given a question about an image. On the other hand, in image captioning tasks (Karpathy and Fei-Fei, 2015; Johnson et al., 2016; Anderson et al., 2018; Krause et al., 2017; Liang et al., 2017; Melas-Kyriazi et al., 2018), the goal is to generate sentences which should describe a given image. Similar to the VQA task, image captioning models should also learn the relationship between partial areas in an image and the generated words or phrases. While these two tasks seem to have different directions, they have the same purpose: understanding visual information with language. If their goal is similar, can the tasks help each other? In this work, we propose an approach to improve a VQA model by exploiting textual information from a paragraph captioning model. Suppose you are assembling furniture by looking at a visual manual. If you are stuck at a certain step and you are given a textual manual which more explicitly describes the names and shapes of the related parts, you could complete that step by reading this additional material and also by comparing it to the visual counterpart. 
With a similar intuition, paragraph-style descriptive captions can more explicitly (via intermediate symbolic representations) explain what objects are in the image and their relationships, and hence VQA questions can be answered more easily by matching the textual information with the questions. We provide a VQA model with such additional ‘textual manual’ information to enhance its ability to answer questions. We use descriptive captions generated from a paragraph captioning model which capture more detailed aspects of an image than a single-sentence caption (which only conveys the most obvious or salient single piece of information). We also extract properties of objects, i.e., names and attributes from images to create simple sentences in the form of “[object name] is [attribute]”. Our VTQA model takes 3607 these paragraph captions and attribute sentences as input in addition to the standard input image features. The VTQA model combines the information from text and image with early fusion, late fusion, and later fusion. With early fusion, visual and textual features are combined via crossattention to extract related information. Late fusion collects the scores of candidate answers from each module to come to an agreement. In later fusion, expected answers are given an extra score if they are in the recommendation list which is created with properties of detected objects. Empirically, each fusion technique provides complementary gains from paragraph caption information to improve VQA model performance, overall achieving significant improvements over a strong baseline VQA model. We also present several ablation studies and attention visualizations. 2 Related Work Visual Question Answering (VQA): VQA has been one of the most active areas among efforts to connect language and vision (Malinowski and Fritz, 2014; Tu et al., 2014). The recent success of deep neural networks, attention modules, and object plus salient region detection has made more effective approaches possible (Antol et al., 2015; Goyal et al., 2017; Lu et al., 2016; Fukui et al., 2016; Xu and Saenko, 2016; Yang et al., 2016; Zhu et al., 2016; Anderson et al., 2018). Paragraph Image Captioning: Another thread of research which deals with combined visual and language problem is the translation of visual contents to natural language. The first approach to this included using a single-sentence image captioning model (Karpathy and Fei-Fei, 2015). However, this task is not able to accommodate the variety of aspects of a single image. Johnson et al. (2016) expanded single-sentence captioning to describe each object in an image via a dense captioning model. Recently, paragraph captioning models (Krause et al., 2017; Liang et al., 2017; MelasKyriazi et al., 2018) attempt to capture the many aspects in an image more coherently. 3 Models The basic idea of our approach is to provide the VQA model with extra text information from paragraph captions and object properties (see Fig. 1). 3.1 Paragraph Captioning Model Our paragraph captioning module is based on Melas-Kyriazi et al. (2018)’s work, which uses CIDEr (Vedantam et al., 2015) directly as a reward to train their model. They make the approach possible by employing self-critical sequence training (SCST) (Rennie et al., 2017). However, only employing RL training causes repeated sentences. As a solution, they apply n-gram repetition penalty to prevent the model from generating such duplicated sentences. We adopt their model and approach to generate paragraph captions. 
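As a rough illustration of this training signal, the sketch below combines a self-critical reward (sampled caption vs. greedy baseline) with a simple trigram repetition penalty. The cider_score helper and the exact form of the penalty are assumptions of this sketch, not the precise formulation of Melas-Kyriazi et al. (2018).

```python
def trigram_repetition_penalty(tokens, weight=1.0):
    """Penalize repeated trigrams in a generated paragraph (illustrative heuristic)."""
    trigrams = [tuple(tokens[i:i + 3]) for i in range(len(tokens) - 2)]
    repeats = len(trigrams) - len(set(trigrams))
    return weight * repeats

def self_critical_reward(sampled_tokens, greedy_tokens, references, cider_score, penalty_weight=1.0):
    """SCST-style reward: CIDEr of the sampled caption minus the greedy baseline,
    with repetition discouraged. `cider_score(tokens, references)` is assumed to
    return a scalar CIDEr value."""
    r_sample = cider_score(sampled_tokens, references) - trigram_repetition_penalty(
        sampled_tokens, penalty_weight)
    r_greedy = cider_score(greedy_tokens, references)
    return r_sample - r_greedy

# The policy-gradient loss for one caption would then be roughly:
#   loss = -reward * sum(log_prob of each sampled token)
```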
3.2 VTQA Model

3.2.1 Features

Visual Features: We adopt the bottom-up and top-down VQA model from Anderson et al. (2018), which uses visual features from the salient areas in an image (bottom-up) and weights them with an attention mechanism (top-down) driven by the question encoding. Following Anderson et al. (2018), we also use Faster R-CNN (Ren et al., 2015) to obtain visual features V ∈ R^{O×d}, where O is the number of detected objects and d is the dimension of each object's visual feature.

Paragraph Captions: These provide diverse aspects of an image by describing the whole scene. We use GloVe (Pennington et al., 2014) for the word embeddings. The embedded words are sequentially fed into the encoder, for which we use a GRU (Cho et al., 2014), to create a sentence representation s_i ∈ R^d: s_i = ENC_sent(w_{0:T}), where T is the number of words. The paragraph feature is a matrix that contains one sentence representation per row, P ∈ R^{K×d}, where K is the number of sentences in the paragraph.

Object Property Sentences: The other text we use comes from the properties of detected objects in the image (name and attributes), which can provide explicit information about the corresponding object to a VQA model. We create simple sentences of the form "[object name] is [attributes]". We then obtain sentence representations by following the same process as for the paragraph captions above. Each sentence vector is attached to the corresponding visual feature, like a name tag, to allow the model to identify objects in the image and their corresponding traits.

Figure 1: VTQA Architecture: Early, Late, and Later Fusion between the Vision and Paragraph Features.

3.2.2 Three Fusion Levels

Early Fusion: In the early fusion stage, visual features are fused with the paragraph caption and object property features to extract relevant information. For the visual and paragraph caption features, cross-attention is applied to obtain the similarity between each component of the visual features (objects) and the paragraph caption (sentences). We follow Seo et al. (2016)'s approach to compute the similarity matrix S ∈ R^{O×K}. From the similarity matrix we obtain V^p = softmax(S^T) V, and the new paragraph representation P^f is obtained by concatenating P and P ∗ V^p: P^f = [P; P ∗ V^p], where ∗ is the element-wise product. The visual features and the object property features C are already aligned, so the new visual feature becomes V^f = [V; V ∗ C]. Given the fused representations, an attention mechanism is applied over each row to weight the features most relevant to the question:

a_i = w_a^T (ReLU(W_sa s_i^f) ∗ ReLU(W_qa q))    (1)
α = softmax(a)    (2)

where s_i^f is a row vector of the new fused paragraph representation and q is the representation of the question, encoded with a GRU; w_a, W_sa, and W_qa are trainable weights. Given the attention weights, the weighted sum of the row vectors s_i^f yields the final paragraph vector p = Σ_{i=1}^{K} α_i s_i^f.
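As a compact illustration of this early-fusion step (equations 1–2 above), the sketch below computes the fused paragraph representation and the attended paragraph vector p. Tensor shapes follow the notation above; the weight arguments are placeholders rather than the authors' code.

```python
import torch
import torch.nn.functional as F

def early_fusion_paragraph(P, V, S, W_sa, W_qa, w_a, q):
    """P: (K, d) sentence features, V: (O, d) visual features, S: (O, K) similarity,
    q: (d,) question vector. Assumed weight shapes: W_sa (2d, h), W_qa (h, d), w_a (h,).
    Returns the attended paragraph vector p of size 2d."""
    V_p = F.softmax(S.t(), dim=-1) @ V                      # (K, d): visual info aligned to sentences
    P_f = torch.cat([P, P * V_p], dim=-1)                   # (K, 2d): fused paragraph representation
    a = (F.relu(P_f @ W_sa) * F.relu(W_qa @ q)) @ w_a       # (K,): unnormalized attention, eq. (1)
    alpha = F.softmax(a, dim=-1)                            # (K,): attention over sentences, eq. (2)
    return alpha @ P_f                                      # (2d,): weighted sum, the paragraph vector p
```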
The paragraph vector is fed to a nonlinear layer and combined with the question vector by an element-wise product:

p^q = ReLU(W_p p) ∗ ReLU(W_q q)    (3)
L_p = classifier(p^q)    (4)

where W_p and W_q are trainable weights and L_p contains the scores for each candidate answer. The same process is applied to the visual features to obtain L_v = classifier(v^q).

Late Fusion: In late fusion, the logits from each module are integrated into one vector. We adopt the approach of Wang et al. (2016). Instead of just adding the logits, we create two more vectors by max-pooling and averaging those logits and add them to create a new logit L_new = L_1 + L_2 + ... + L_n + ... + L_max + L_avg, where L_n is the n-th logit and L_max and L_avg come from max-pooling and averaging all other logits. The intuition behind creating these extra logits is that they act as additional voters, making the model more robust.

Answer Recommendation or 'Later Fusion': Salient regions of an image draw people's attention, and thus questions and answers are much more likely to be related to those areas. Objects often mark the most prominent locations of these salient areas. From this intuition, we introduce a way to directly connect the salient spots with candidate answers. We collect the properties (name and attributes) of all detected objects and search over the answers to figure out which answers can be extracted from these properties. Answers in this list of expected answers are given extra credit to increase their chance of being selected. If the logit L_before from the final layer contains the scores of each answer, we raise the scores to the logit L_after for answers that appear in the list l_c:

L_before = {a_1, a_2, ..., a_n, ...},  L_after = {â_1, â_2, ..., â_n, ...}    (5)

â_n = a_n + c · std(L_before)  if n ∈ l_c;  â_n = a_n  otherwise    (6)

where std(·) calculates the standard deviation of a vector and c is a tunable parameter. l_c is the list of word indices of the detected objects and their corresponding attributes; these indices are converted to the indices of candidate answers.

4 Experimental Setup

Paragraph Caption: We use the paragraph annotations of images from Visual Genome (Krishna et al., 2017) collected by Krause et al. (2017), since this is, to our knowledge, the only dataset that annotates long-form paragraph image captions. We follow the dataset split of 14,575 / 2,487 / 2,489 (train / validation / test).

Visual Question Answering Pairs: We also use the VQA pairs from Visual Genome so as to match them with the provided paragraph captions. We follow almost the same image split as the paragraph caption data, except that we do not include images without their own question-answer pairs in the train and evaluation sets. The total number of candidate answers is 177,424. Because this number is too large to train on, we discard question-answer pairs whose answers occur fewer than 30 times, which gives us a list of 3,453 answers. The final numbers of question-answer pairs are 171,648 / 29,759 / 29,490 (train / validation / test).

Training Details: Our hyperparameters are selected on the validation set. The size of the visual feature of each object is set to 2048, and the hidden-layer dimensions of the question encoder and caption encoder are 1024 and 2048, respectively. We use AdaMax (Kingma and Ba, 2014) as the optimizer with a learning rate of 0.002. We modulate the final credit, which is added to the final logit of the model, by multiplying it by a scalar value c (we tune this to 1.0).
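Equations (5)–(6), together with the tuned value of c, amount to the small logit-adjustment step sketched below. The mapping from detected object properties to candidate-answer indices is assumed to exist already; the code is an illustration rather than the authors' implementation.

```python
import torch

def recommend_answers(logits, recommended_idx, c=1.0):
    """Boost the scores of answers that match detected object names/attributes.

    logits:          (num_answers,) final answer scores, i.e. L_before
    recommended_idx: list (or LongTensor) of candidate-answer indices found among
                     the detected objects' names and attributes (the list l_c)
    c:               scaling factor for the extra credit (tuned to 1.0 above)
    """
    boosted = logits.clone()
    boosted[recommended_idx] += c * logits.std()
    return boosted

# Example (hypothetical): if "beach" and "sand" appear among the detected objects'
# properties and map to answer indices 17 and 342, those two answers each receive
# an extra c * std(logits) before the argmax over answers.
```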
5 Results, Ablations, and Analysis VQA vs. VTQA As shown in Table 1, our VTQA model increases the accuracy by 1.92% from the baseline VQA model for which we employ Anderson et al. (2018)’s model and apply Model Test accuracy (%) 1 VQA baseline 44.68 2 VQA + MFB baseline 44.94 3 VTQA (EF+LF+AR) 46.86 Table 1: Our VTQA model significantly outperforms (p < 0.001) the strong baseline VQA model (we do not apply MFB to our VTQA model, since it does not work for the VTQA model). Model Val accuracy (%) 1 VTQA + EF (base model) 45.41 2 VTQA + EF + LF 46.36 3 VTQA + EF + AR 46.95 4 VTQA + EF + LF + AR 47.60 Table 2: Our early (EF), late (LF), and later fusion (or Answer Recommendation AR) modules each improves the performance of our VTQA model. multi-modal factorized bilinear pooling (MFB) (Yu et al., 2017). This implies that our textual data helps improve VQA model performance by providing clues to answer questions. We run each model five times with different seeds and take the average value of them. For each of the five runs, our VTQA model performs significantly better (p < 0.001) than the VQA baseline model. Late Fusion and Later Fusion Ablations As shown in row 2 of Table 2, late fusion improves the model by 0.95%, indicating that visual and textual features complement each other. As shown in row 3 and 4 of Table 2, giving an extra score to the expected answers increases the accuracy by 1.54% from the base model (row 1) and by 1.24% from the result of late fusion (row 2), respectively. This could imply that salient parts (in our case, objects) can give direct cues for answering questions.1 Ground-Truth vs. Generated Paragraphs We manually investigate (300 examples) how many questions can be answered only from the groundtruth (GT) versus generated paragraph (GenP) captions. We also train a TextQA model (which uses cross-attention mechanism between question and caption) to evaluate the performance of the GT and GenP captions. As shown in Table 3, the GT captions can answer more questions correctly than GenP captions in TextQA model evaluation. Human evaluation with GT captions also shows better performance than with GenP captions as seen in Table 4. However, the results from the man1Object Properties: Appending the encoded object properties to visual features improves the accuracy by 0.15% (47.26 vs. 47.41). This implies that incorporating extra textual information into visual features could help a model better understand the visual features for performing the VQA task. 3610 Model Val accuracy (%) 1 TextQA with GT 43.96 2 TextQA with GenP 42.07 Table 3: TextQA with GT model outperforms TextQA with GenP (we run each model five times with different seeds and average the scores. GT: Ground-Truth, GenP: Generated Paragraph). Human Eval. Accuracy (%) 1 with GT 55.00 2 with GenP 42.67 Table 4: Human evaluation only with paragraph captions and questions of the validation dataset. Human evaluation with GT shows better performance than human evaluation with GenP. ual investigation have around 12% gap between GT and generated captions, while the gap between the results from the TextQA model is relatively small (1.89%). This shows that paragraph captions can answer several VQA questions but our current model is not able to extract the extra information from the GT captions. 
This allows future work: (1) the TextQA/VTQA models should be improved to extract more information from the GT captions; (2) paragraph captioning models should also be improved to generate captions closer to the GT captions.2 Attention Analysis Finally, we also visualize the attention over each sentence of an input paragraph caption w.r.t. a question. As shown in Figure 2, a sentence which has a direct clue for a question get much higher weights than others. This explicit textual information helps a VQA model handle what might be hard to reason about onlyvisually, e.g., ‘two (2) cows’. Please see Appendix A for more attention visualization examples. 6 Conclusion We presented a VTQA model that combines visual and paragraph-captioning features to significantly improve visual question answering accuracy, via a model that performs early, late, and later fusion. While our model showed promising results, it still used a pre-trained paragraph captioning model to 2We also ran our full VTQA model with the ground truth (GT) paragraph captions and got an accuracy value of 48.04% on the validation dataset (we ran the model five times with different seeds and average the scores), whereas the VTQA result from generated paragraph captions was 47.43%. This again implies that our current VTQA model is not able to extract all the information enough from GT paragraph captions for answering questions, and hence improving the model to better capture clues from GT captions is useful future work. Q: where is the picture taken A: beach Q: how many cows are there A: 2 Q: what is the man doing A: playing tennis Q: when was the photo taken A: daytime Figure 2: Attention Visualization for an example answered correctly by our model. obtain the textual symbolic information. In future work, we are investigating whether the VTQA model can be jointly trained with the paragraph captioning model. Acknowledgments We thank the reviewers for their helpful comments. This work was supported by NSF Award #1840131, ARO-YIP Award #W911NF18-1-0336, and faculty awards from Google, Facebook, Bloomberg, and Salesforce. The views, opinions, and/or findings contained in this article are those of the authors and should not be interpreted as representing the official views or policies, either expressed or implied, of the funding agency. References Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. 2018. Bottom-up and top-down attention for image captioning and visual question answering. In CVPR. Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. 2015. VQA: Visual Question Answering. In International Conference on Computer Vision (ICCV). Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724– 1734. Akira Fukui, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor Darrell, and Marcus Rohrbach. 2016. Multimodal compact bilinear pooling for visual question answering and visual grounding. In 3611 Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 457–468. Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2017. 
Making the V in VQA matter: Elevating the role of image understanding in Visual Question Answering. In Conference on Computer Vision and Pattern Recognition (CVPR). Justin Johnson, Andrej Karpathy, and Li Fei-Fei. 2016. Densecap: Fully convolutional localization networks for dense captioning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Andrej Karpathy and Li Fei-Fei. 2015. Deep visualsemantic alignments for generating image descriptions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3128–3137. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Jonathan Krause, Justin Johnson, Ranjay Krishna, and Li Fei-Fei. 2017. A hierarchical approach for generating descriptive image paragraphs. In Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on, pages 3337–3345. IEEE. Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. 2017. Visual genome: Connecting language and vision using crowdsourced dense image annotations. International Journal of Computer Vision, 123(1):32–73. Xiaodan Liang, Zhiting Hu, Hao Zhang, Chuang Gan, and Eric P Xing. 2017. Recurrent topic-transition gan for visual paragraph generation. arXiv preprint arXiv:1703.07022. Jiasen Lu, Jianwei Yang, Dhruv Batra, and Devi Parikh. 2016. Hierarchical question-image coattention for visual question answering. In Advances In Neural Information Processing Systems, pages 289–297. Mateusz Malinowski and Mario Fritz. 2014. A multiworld approach to question answering about realworld scenes based on uncertain input. In Advances in neural information processing systems, pages 1682–1690. Luke Melas-Kyriazi, Alexander Rush, and George Han. 2018. Training for diversity in image paragraph captioning. EMNLP. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532– 1543. Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. 2015. Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in neural information processing systems, pages 91–99. Steven J Rennie, Etienne Marcheret, Youssef Mroueh, Jerret Ross, and Vaibhava Goel. 2017. Self-critical sequence training for image captioning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7008–7024. Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional attention flow for machine comprehension. arXiv preprint arXiv:1611.01603. Kewei Tu, Meng Meng, Mun Wai Lee, Tae Eun Choe, and Song-Chun Zhu. 2014. Joint video and text parsing for understanding events and answering queries. IEEE MultiMedia, 21(2):42–70. Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. 2015. Cider: Consensus-based image description evaluation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4566–4575. Limin Wang, Yuanjun Xiong, Zhe Wang, Yu Qiao, Dahua Lin, Xiaoou Tang, and Luc Van Gool. 2016. Temporal segment networks: Towards good practices for deep action recognition. In European Conference on Computer Vision, pages 20–36. Springer. Huijuan Xu and Kate Saenko. 2016. Ask, attend and answer: Exploring question-guided spatial attention for visual question answering. 
In European Conference on Computer Vision, pages 451–466. Springer. Zichao Yang, Xiaodong He, Jianfeng Gao, Li Deng, and Alex Smola. 2016. Stacked attention networks for image question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 21–29. Zhou Yu, Jun Yu, Jianping Fan, and Dacheng Tao. 2017. Multi-modal factorized bilinear pooling with co-attention learning for visual question answering. In Proceedings of the IEEE international conference on computer vision, pages 1821–1830. Yuke Zhu, Oliver Groth, Michael Bernstein, and Li FeiFei. 2016. Visual7w: Grounded question answering in images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4995–5004. Appendices A Attention Visualization As shown in Figure 3, paragraph captions contain direct or indirect clues for answering questions. 3612 Q: where is the picture taken A: beach Q: how many cows are there A: 2 Q: what is the man doing A: playing tennis Q: when was the photo taken A: daytime Figure 3: Attention Visualization: For all examples, our model answers correctly. The upper left figure This is the case that a sentence in the paragraph caption can give obvious clue for answering the given question. By looking at the sentence “a boy is standing on the beach”, this question can be answered correctly. The upper right figure The sentence “two cows are grazing in a field ” gives the correct answer “2” directly. The bottom left figure There is no direct clue like “he is playing tennis”, but the correct answer can be inferred by integrating the information from different sentences such as “the man is holding a tennis racket” and “a man is standing on a tennis court”. The bottom right figure This case seems tricky, but the answer can be inferred by associating the blue sky with daytime.
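The kind of per-sentence attention visualization discussed above can be produced with a few lines of plotting code. The sketch below assumes the sentence-level attention weights (e.g., α from equation (2)) are available as a vector; it is not tied to the authors' plotting code.

```python
import matplotlib.pyplot as plt

def plot_sentence_attention(sentences, alpha, question):
    """Render attention weights over paragraph sentences as a horizontal bar chart.

    sentences: list of K caption sentences
    alpha:     sequence of K attention weights
    question:  the question string, used as the plot title
    """
    fig, ax = plt.subplots(figsize=(8, 0.5 * len(sentences) + 1))
    ax.barh(range(len(sentences)), alpha)
    ax.set_yticks(range(len(sentences)))
    ax.set_yticklabels(sentences)
    ax.invert_yaxis()                 # keep the first sentence on top
    ax.set_xlabel("attention weight")
    ax.set_title(question)
    fig.tight_layout()
    return fig
```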
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3613–3622 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3613 Shared-Private Bilingual Word Embeddings for Neural Machine Translation Xuebo Liu† Derek F. Wong†∗Yang Liu‡ Lidia S. Chao† Tong Xiao§ Jingbo Zhu§ †NLP2CT Lab / Department of Computer and Information Science, University of Macau, Macau ‡Department of Computer Science and Technology, Tsinghua University, Beijing, China §Northeastern University, Shenyang, China [email protected], {derekfw,lidiasc}@um.edu.mo, [email protected], {xiaotong,zhujingbo}@mail.neu.edu.cn Abstract Word embedding is central to neural machine translation (NMT), which has attracted intensive research interest in recent years. In NMT, the source embedding plays the role of the entrance while the target embedding acts as the terminal. These layers occupy most of the model parameters for representation learning. Furthermore, they indirectly interface via a soft-attention mechanism, which makes them comparatively isolated. In this paper, we propose shared-private bilingual word embeddings, which give a closer relationship between the source and target embeddings, and which also reduce the number of model parameters. For similar source and target words, their embeddings tend to share a part of the features and they cooperatively learn these common representation units. Experiments on 5 language pairs belonging to 6 different language families and written in 5 different alphabets demonstrate that the proposed model provides a significant performance boost over the strong baselines with dramatically fewer model parameters. 1 Introduction With the introduction of ever more powerful architectures, neural machine translation (NMT) has become the most promising machine translation method (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Bahdanau et al., 2015). For word representation, different architectures— including, but not limited to, recurrence-based (Chen et al., 2018), convolution-based (Gehring et al., 2017) and transformation-based (Vaswani et al., 2017) NMT models—have been taking advantage of the distributed word embeddings to capture the syntactic and semantic properties of words (Turian et al., 2010). ∗Corresponding author Long Rd Lange Rd (a) Standard Long Rd Lange Rd (b) Shared-private Figure 1: Comparison between (a) standard word embeddings and (b) shared-private word embeddings. In (a), the English word “Long” and the German word “Lange”, which have similar lexical meanings, are represented by two private d-dimension vectors. While in (b), the two word embeddings are made up of two parts, indicating the shared (lined nodes) and the private (unlined nodes) features. This enables the two words to make use of common representation units, leading to a closer relationship between them. NMT usually utilizes three matrices to represent source embeddings, target input embeddings, and target output embeddings (also known as pre-softmax weight), respectively. These embeddings occupy most of the model parameters, which constrains the improvements of NMT because the recent methods become increasingly memory-hungry (Vaswani et al., 2017; Chen et al., 2018).1 Even though converting words into subword units (Sennrich et al., 2016b), nearly 55% of model parameters are used for word representation in the Transformer model (Vaswani et al., 2017). 
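To get a feel for where that share comes from, a rough back-of-the-envelope count is sketched below; the vocabulary sizes and the non-embedding parameter budget are assumptions chosen for illustration, not the exact configuration behind the 55% figure.

```python
# Rough, illustrative count of embedding parameters in a Transformer-base-like NMT model.
d_model = 512
src_vocab = 32_000          # assumed source BPE vocabulary size
tgt_vocab = 32_000          # assumed target BPE vocabulary size

# Source embedding, target input embedding, and target output embedding (pre-softmax weight).
embedding_params = (src_vocab + 2 * tgt_vocab) * d_model      # ~49.2M

non_embedding_params = 44_000_000   # assumed encoder/decoder body of a base-sized model

share = embedding_params / (embedding_params + non_embedding_params)
print(f"{share:.0%}")               # roughly half of all parameters under these assumptions
```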
To overcome this difficulty, several methods have been proposed to reduce the parameters used for word representation in NMT. Press and Wolf (2017) propose two weight tying (WT) methods, called decoder WT and three-way WT, to substantially reduce the parameters of the word embeddings. Decoder WT ties the target input embedding and target output embedding, which has become the new de facto standard of practical NMT (Sennrich et al., 2017). (Footnote 1: For the purpose of smoothing gradients, a very large batch size is needed during training.) [Figure 2: Shared-private bilingual word embeddings applied between the source and target words or sub-words (a) with similar lexical meaning (e.g., “Long” / “Lange”), (b) with the same word form (e.g., “Ju@@” / “Ju@@”), and (c) without any relationship (e.g., “Laden” / “Bericht”). Different sharing mechanisms are adopted for the different relationship categories. This strikes the right balance between capturing monolingual and bilingual characteristics. The closeness of the relationship decides the portion of features to be used for sharing. Words with similar lexical meaning tend to share more features, followed by words with the same word form, and then unrelated words, as illustrated by the lined nodes.] Three-way WT uses only one matrix to represent the three word embeddings, where the source and target words that have the same word form tend to share a word vector. This method can also be adapted to sub-word NMT with a shared source-target sub-word vocabulary, and it performs well in language pairs with many of the same characters, such as English-German and English-French (Vaswani et al., 2017). Unfortunately, this method is not applicable to languages that are written in different alphabets, such as Chinese-English (Hassan et al., 2018). Another challenge facing the source and target word embeddings of NMT is the lack of interaction between them. This degrades attention performance, leading to unaligned translations that hurt translation quality. Hence, Kuang et al. (2018) propose to bridge the source and target embeddings, which brings better attention to the related source and target words. Their method is applicable to any language pair, providing a tight interaction between the source and target word pairs. However, it requires additional components and model parameters. In this work, we aim to enhance the word representations and the interactions between the source and target words while using even fewer parameters. To this end, we present a language-independent method, called shared-private bilingual word embeddings, which shares a part of the embeddings of a pair of source and target words that have some common characteristics (i.e., similar words should have similar vectors). Figure 1 illustrates the difference between the standard word embeddings and the shared-private word embeddings of NMT. In the proposed method, each source (or target) word is represented by a word embedding that consists of shared features and private features. The shared features can also be regarded as prior alignments connecting the source and target words. The private features allow the words to better learn the monolingual characteristics. Meanwhile, the features shared by the source and target embeddings result in a significant reduction of the number of parameters used for word representations.
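The weight tying baselines above are easy to state in code. The following sketch (PyTorch, with module and dimension names of our own choosing; it is an illustration under these assumptions, not the authors' or Press and Wolf's implementation) shows both variants: decoder WT shares one matrix between the target input embedding and the pre-softmax projection, while three-way WT additionally reuses that matrix for the source embedding, which presupposes a joint source-target (sub)word vocabulary.

```python
import torch.nn as nn

class TiedEmbeddings(nn.Module):
    """Illustrative weight tying for NMT embeddings (hypothetical names and sizes)."""

    def __init__(self, vocab_size, d_model=512, three_way=False):
        super().__init__()
        self.tgt_in = nn.Embedding(vocab_size, d_model)             # target input embedding
        self.out_proj = nn.Linear(d_model, vocab_size, bias=False)  # target output (pre-softmax) projection
        # Decoder WT: the output projection reuses the target input embedding matrix,
        # so a single (vocab_size x d_model) parameter serves both roles.
        self.out_proj.weight = self.tgt_in.weight
        if three_way:
            # Three-way WT: the source side reads from the same matrix as well,
            # which only makes sense with a shared source-target vocabulary.
            self.src_in = self.tgt_in
        else:
            self.src_in = nn.Embedding(vocab_size, d_model)
```

Because the tied matrix is a single parameter object rather than a copy, gradients from the source embedding, target embedding, and softmax layer all update the same weights, which is where the parameter savings reported in the comparison tables below come from.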
The experimental results on 6 translation datasets of different scales show that our model with fewer parameters yields consistent improvements over the strong Transformer baselines. 2 Approach In monolingual vector space, similar words tend to have commonalities in the same dimensions of their word vectors (Mikolov et al., 2013). These commonalities include: (1) a similar degree (value) of the same dimension and (2) a similar positive or negative correlation of the same dimension. Many previous works have noticed this phenomenon and have proposed to use shared vectors to represent similar words in monolingual vector space toward model compression (Li et al., 2016; Zhang et al., 2017b; Li et al., 2018). Motivated by these works, in NMT, we assume that the source and target words that have similar characteristics should also have similar vectors. Hence, we propose to perform this sharing technique in bilingual vector space. More precisely, we share the features (dimensions) between the paired source and target embeddings (vectors). However, in contrast to the previous studies, we also model the private features of the word embedding to preserve the private characteristics of words for source and target languages. The private 3615 features allow the words to better learn the monolingual characteristics. Meanwhile, we also propose to adopt different sharing mechanisms among the word pairs, which will be described in the following sections. In the Transformer architecture, the shared features between the source and target embeddings always contribute to the calculation of the attention weight.2 This results in paying more attention strength on the pair of related words. With the help of residual connections, the high-level representations can also benefit from the shared features of the topmost embedding layers. Both qualitative and quantitative analyses show the effectiveness on the translation tasks. 2.1 Shared-Private Bilingual Word Embeddings Standard NMT jointly learns to translate and align, which has achieved remarkable results (Bahdanau et al., 2015). In NMT, the intention is to identify the translation relationships between the source and target words. To simplify the model, we propose to divide the relationships into three main categories between a pair of source and target words: (1) words with similar lexical meaning (abbreviated as lm), (2) words with same word form (abbreviated as wf), and (3) unrelated words (abbreviated as ur). Figure 2 shows some examples of these different relationship categories. The number of the shared features of the word embeddings is decided by their relationships. Before presenting the pairing process in detail, we first introduce the constraints to the proposed method for convenience: • Each source word is only allowed to share the features with a single target word, and vice versa.3 • Each source word preferentially shares features with the target word that has similar lexical meaning, followed by the word with same word form, and then unrelated words. 2.1.1 Words with Similar Lexical Meaning As shown in Figure 2(a), the English word “Long” and the German word “Lange”, which have similar meaning, tend to share more common features 2Based on the dot-product attention mechanism, the attention weight between the source and target embeddings is the sum of the dot-product of their features. 3We investigate the effect of synonym in the experiment section. of their embeddings. 
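Before the pairing and assembly details in the following subsections, a minimal sketch of the shared-private idea for a single relationship category may help (PyTorch; the class name, arguments, and sizes are ours and purely illustrative): a paired source and target word look up the same shared sub-vector and concatenate it with a language-specific private sub-vector.

```python
import torch
import torch.nn as nn

class SharedPrivateCategory(nn.Module):
    """Sketch of one relationship category (e.g., similar lexical meaning).

    For each of the n_pairs (source, target) word pairs, a fraction `lam` of the
    d embedding dimensions is read from a table shared by both languages; the
    remaining dimensions are language-private. Names and sizes are hypothetical.
    """

    def __init__(self, n_pairs, d=512, lam=0.9):
        super().__init__()
        d_shared = int(lam * d)
        self.shared = nn.Embedding(n_pairs, d_shared)           # S: common to both sides
        self.src_private = nn.Embedding(n_pairs, d - d_shared)  # P^x
        self.tgt_private = nn.Embedding(n_pairs, d - d_shared)  # P^y

    def forward(self, pair_ids, side="src"):
        private = self.src_private if side == "src" else self.tgt_private
        # Column-wise concatenation of shared and private features
        # (cf. the matrix concatenation formalized in Section 2.2 below).
        return torch.cat([self.shared(pair_ids), private(pair_ids)], dim=-1)
```

Because both languages read the shared table, a gradient arriving from either side updates the common dimensions, which draws paired words together in the bilingual vector space while the private dimensions preserve monolingual characteristics.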
In our model, the source and target words with alignment links are regarded as parallel words that are the translation of each other. According to the word frequency, each source word x is paired with a target aligned word ˆy that has the highest alignment probability among the candidates, and is computed as follows: ˆy = arg max y∈a(x) logA(y|x) (1) where a(·) denotes the set of aligned candidates. It is worth noting the target words that have been paired with the source words cannot be used as candidates. A(·|·) denotes the alignment probability. These can be obtained by either the intrinsic attention mechanism (Bahdanau et al., 2015) or unsupervised word aligner (Dyer et al., 2013). 2.1.2 Words with Same Word Form As shown in Figure 2(b), the sub-word “Ju@@” simultaneously exists in English and German sentences. This kind of word tends to share a medium number of features of the word embeddings. Most of the time, the source and target words with the same word form also share similar lexical meaning. This category of words generally includes Arabic numbers, punctuations, named entities, cognates and loanwords. However, there are some bilingual homographs where the words in the source and target languages look the same but have completely different meanings. For example, the German word “Gift” means “Poison” in English. That is the reason we propose to first pair the words with similar lexical meaning instead of those words with same word forms. This might be the potential limitation of the three-way WT method (Press and Wolf, 2017), where words with the same word form indiscriminately share the same word embedding. 2.1.3 Unrelated Words We regard source and target words that cannot be paired with each other as unrelated words. Figure 2(c) shows an example of a pair of unrelated words. This category is mainly composed of lowfrequency words, such as misspelled words, special characters, and foreign words. In standard NMT, the embeddings of low-frequency words are usually inadequately trained, resulting in a poor word representation. These words are often treated as noises and they are generally ignored 3616 Ex ∈R6×5 Long Long (Lange) Italy Italy (Italien) Ex lm ∈R2×5 Slm ∈R2×3 Px lm ∈R2×2 ˜⊕ → ⊕ Ju@@( De@@( (Ju@@) Ju@@ De@@ (De@@) Ex wf ∈R2×5 Swf ∈R2×2 Px wf ∈R2×3 ˜⊕ → → ⊕ Laden (Bericht) Sundial (Fiehlt) Laden Sundial Ex ur ∈R2×5 Sur ∈R2×1 Px ur ∈R2×4 ˜⊕ → Figure 3: The example of assembling the source word embedding matrix. The words in parentheses denote the paired words sharing features with them. by the NMT systems (Feng et al., 2017). Motivated by the frequency clustering methods proposed by Chen et al. (2016) where they cluster the words with similar frequency for training a hierarchical language model, in this work, we propose to use a small vector to model the possible features that might be shared between the source and target words which are unrelated but having similar word frequencies. In addition, it can be regarded as a way to improve the robustness of learning the embeddings of low-frequency words because of the noisy dimensions (Wang et al., 2018). 2.2 Implementation Before looking up embedding at each training step, the source and target embedding matrix are assembled by the sub-embedding matrices. As shown in Figure 3, the source embedding Ex ∈ R|V |×d is computed as follows:: Ex = Ex lm ⊕Ex wf ⊕Ex ur (2) where ⊕is the row concatenation operator. Ex (·) ∈ R|V(·)|×d represents the word embeddings of the source words belong to different categories, e.g. 
lm represents the words with similar lexical meaning. |V(·)| denotes the vocabulary size of the corresponding category. The process of feature sharing is also implemented by matrix concatenation. For example, the embedding matrices of the source words with similar lexical meaning are computed as follows: Ex lm = Slm ˜⊕Px lm (3) where ˜⊕is the column concatenation operator. Slm ∈R|Vlm|×λlmd represent the word embeddings of the shared features, where λlm denotes the proportion of the features for sharing in this relationship category. Px lm ∈R|Vlm|×(1−λlm)d represent the word embeddings of the private features. Similar to the target word embedding. These matrix concatenation operations, which have low computational complexity, are very cheap to the whole NMT computation process. We also empirically find both the training speed and decoding speed are not influenced with the introduction of the proposed method. 3 Experiments We carry out our experiments on the small-scale IWSLT’17 {Arabic (Ar), Japanese (Ja), Korean (Ko), Chinese (Zh)}-to-English (En) translation tasks, medium-scale NIST Chinese-English (ZhEn) translation task, and large-scale WMT’14 English-German (En-De) translation task. For the IWSLT {Ar, Ja, Ko, Zh}-to-En translation tasks, there are respectively 236K, 234K, 227K, and 235K sentence pairs in each training set.4 The validation set is IWSLT17.TED.tst2014 and the test set is IWSLT17.TED.tst2015. For each language, we learn a BPE model with 16K merge operations (Sennrich et al., 2016b). For the NIST Zh-En translation task, the training corpus consists of 1.25M sentence pairs with 27.9M Chinese words and 34.5M English words. We use the NIST MT06 dataset as the validation set and the test sets are the NIST MT02, MT03, MT04, MT05, MT08 datasets. To compare with the recent works, the vocabulary size is limited to 4https://wit3.fbk.eu/mt.php?release= 2017-01-trnted 3617 Architecture Zh⇒En Params Emb. Red. Dev. MT02 MT03 MT04 MT08 All SMT* 34.00 35.81 34.70 37.15 25.28 33.39 RNNsearch* Vanilla 74.8M 55.8M 0% 35.92 37.88 36.21 38.83 26.30 34.81 Source bridging 78.5M 55.8M 0% 36.79 38.71 37.24 40.28 27.40 35.91 Target bridging 76.6M 55.8M 0% 36.69 39.04 37.63 40.41 27.98 36.27 Direct bridging 78.9M 55.8M 0% 36.97 39.77 38.02 40.83 27.85 36.62 Transformer Vanilla 90.2M 46.1M 0% 41.37 42.53 40.25 43.58 32.89 40.33 Direct bridging 90.5M 46.1M 0% 41.67 42.89 41.34 43.56 32.69 40.54 Decoder WT 74.9M 30.7M 33.4% 41.90 43.02 41.89 43.87 32.62 40.82 Shared-private 62.8M 18.7M 59.4% 42.57↑ 43.73↑ 41.99↑ 44.53↑ 33.81⇑ 41.61⇑ Table 1: Results on the NIST Chinese-English translation task. “Params” denotes the number of model parameters. “Emb.” represents the number of parameters used for word representation. “Red.” represents the reduction rate of the standard size. The results of SMT* and RNNsearch* are reported by Kuang et al. (2018) with the same datasets and vocabulary settings. “↑” indicates the result is significantly better than that of the vanilla Transformer (p < 0.01), while “⇑” indicates the result is significantly better than that of all other Transformer models (p < 0.01). All significance tests are measured by paired bootstrap resampling (Koehn, 2004). En⇒De Params Emb. Red. BLEU Vanilla 98.7M 54.5M 0% 27.62 Direct bridging 98.9M 54.5M 0% 27.79 Decoder WT 80.4M 36.2M 33.6% 27.51 Three-way WT 63.1M 18.9M 65.3% 27.39 Shared-private 65.0M 20.9M 63.1% 28.06‡ Table 2: Results on the WMT English-German translation task. 
“‡” indicates the result is significantly better than the vanilla Transformer model (p < 0.05). 30K for both languages, covering 97.7% Chinese words and 99.3% English words, respectively. For the WMT En-De translation task, the training set contains 4.5M sentence pairs with 107M English words and 113M German words. We use the newstest13 and newstest14 as the validation set and test set, respectively. The joint BPE model is set to 32K merge operations. 3.1 Setup We implement all of the methods based on Transformer (Vaswani et al., 2017) using the base setting with the open-source toolkit thumt5 (Zhang et al., 2017a). There are six encoder and decoder layers in our models, while each layer employs eight parallel attention heads. The dimension of the word embedding and the high-level representation dmodel is 512, while that of the inner-FFN layer dffis 2048. The Adam (Kingma and Ba, 2015) optimizer is used to update the model parameters with hyper-parameters β1= 0.9, β2 = 0.98, ε = 10−8 and a warm-up strategy with warmup steps = 4000 is adapted to the variable learning rate (Vaswani et al., 2017). The dropout used in the residual connection, attention mech5https://github.com/thumt/THUMT Model Emb. Red. BLEU Ar⇒En Vanilla 23.6M 0% 28.36 Shared-private 11.8M 50% 29.71↑ Ja⇒En Vanilla 25.6M 0% 10.94 Shared-private 13.3M 48.0% 12.35↑ Ko⇒En Vanilla 25.1M 0% 16.48 Shared-private 13.2M 47.4% 17.84↑ Zh⇒En Vanilla 27.4M 0% 19.36 Shared-private 13.8M 49.6% 21.00↑ Table 3: Results on the IWSLT {Ar, Ja, Ko, Zh}-to-En translation tasks. These distant language pairs belonging to 5 different language families and written in 5 different alphabets.“↑” indicates the result is significantly better than that of the vanilla Transformer (p < 0.01). anism, and feed-forward layer is set to 0.1. We employ uniform label smoothing with 0.1 uncertainty. During the training, each training batch contains nearly 25K source and target tokens. We evaluate the models every 2000 batches via the tokenized BLEU (Papineni et al., 2002) for early stopping. During the testing, we use the best single model for decoding with a beam of 4. The length penalty is tuned on the validation set, which is set to 0.6 for the English-German translation tasks, and 1.0 for others. We compare our proposed methods with the following related works: • Direct bridging (Kuang et al., 2018): this method minimizes the word embedding loss between the transformations of the target words and their aligned source words by adding an auxiliary objective function. • Decoder WT (Press and Wolf, 2017): this method uses an embedding matrix to repre3618 Zh-En λlm λwf λur Emb. BLEU Vanilla 46.1M 41.37 Decoder WT 0 0 0 30.7M 41.90 Shared-private 0.5 0.7 0.9 21.2M 41.98 0.5 0.5 0.5 23.0M 42.26 0.9 0.7 0 21.0M 42.27 1 1 1 15.3M 42.36 0.9 0.7 0.5 18.7M 42.57 Table 4: Performance of models using different sharing coefficients on the validation set of the NIST ChineseEnglish translation task. sent the target input embedding and target output embedding. • Three-way WT (Press and Wolf, 2017): this method is an extension of the decoder WT method that the source embedding and the two target embeddings are represented by one embedding matrix. This method cannot be applied to the language pairs with different alphabets, e.g. Zh-En. For the proposed model, we use an unsupervised word aligner fast-align6 (Dyer et al., 2013) to pair source and target words that have similar lexical meaning. We set the threshold of alignment probability to 0.05, i.e. 
only those words with an alignment probability over 0.05 can be paired as the words having similar lexical meaning. The sharing coefficient λ = (λlm, λwf, λwf) is set to (0.9,0.7,0.5), which is tuned on both the NIST Chinese-Enlgish task and the WMT English-German task. 3.2 Main Results Table 1 reports the results on the NIST ChineseEnglish test sets. It is observed that the Transformer models significantly outperform SMT and RNNsearch models. Therefore, we decide to implement all of our experiments based on Transformer architecture. The direct bridging model can further improve the translation quality of the Transformer baseline. The decoder WT model improves the translation quality while reducing the number of parameters for the word representation. This improved performance happens because there are fewer model parameters, which prevents over-fitting (Press and Wolf, 2017). Finally, the performance is further improved by the proposed method while using even fewer parameters than other models. 6https://github.com/clab/fast_align A(·|·) Lexical Form Unrelated Emb. BLEU 0.5 4,869 309 24,822 22.0M 42.35 0.1 15,103 23 14,874 20.0M 42.53 0.05 21,172 11 8,817 18.7M 42.57 Table 5: Effects on different alignment thresholds used for pairing the words with similar lexical meaning on the validation set of the NIST Chinese-English translation task. Similar observations are obtained on the English-German translation task, as shown in Table 2. The improvement of the direct bridging model is reduced with the introduction of sub-word units since the attention distribution of the high-level representations becomes more confused. Although the two WT methods use fewer parameters, their translation quality degrades. We believe that sub-word NMT needs the well-trained embeddings to distinguish the homographs of subwords. In the proposed method, both the source and target embeddings benefit from the shared features, which leads to better word representations. Hence, it improves the quality of translation and also reduces the number of parameters. Table 3 shows the results on the small-scale IWSLT translation tasks. We observe that the proposed method stays consistently better than the vanilla model on these distant language pairs. Although the Three-way WT method has been sufficiently validated on similar translation pairs at low-resource settings (Sennrich et al., 2016a), it is not applicable to these distant language pairs. Instead, the proposed method is language-independent, making the WT methods more widely used. 3.3 Effect on Sharing Coefficients The coefficient λ = (λlm, λwf, λur) controls the proportion of the shared features. As shown in Table 4, the decoder WT model can be seen as a kind of shared-private method where zero features are shared between the source and target word embeddings. For the proposed method, λ = (0.5, 0.5, 0.5) and λ = (1, 1, 1) are, respectively, used for sharing half and all features between the embeddings of all categories of words. This allows the model to significantly reduce the number of parameters and also improve the translation quality. For comparison purpose, we also consider sharing a large part of the features among the unrelated words by setting s3 to 0.9, i.e. λ = (0.5, 0.7, 0.9). We argue that it is hard for 3619 1 Source mengmai xingzheng zhangguan bazhake biaoshi , dan shi gaishi jiu you shisan sangsheng . Reference mumbai municipal commissioner phatak claimed that 13 people were killed in the city alone . 
Vanilla bombay chief executive said that there were only 13 deaths in the city alone . Direct bridging bombay ’s chief executive , said there were 13 dead in the city alone . Decoder WT chief executive of bombay , said that thirteen people had died in the city alone . Shared-private mumbai ’s chief executive said 13 people were killed in the city alone . 2 Source suoyi wo ye you liyou qu xiangxin ta de rensheng ye hen jingcai . Reference thus , i also have reason to believe that her life is also very wonderful . Vanilla so i have reason to believe her life is also very fantastic . Direct bridging so i had reason to believe her life was also brilliant . Decoder WT so , i have reasons to believe that she has a wonderful life . Shared-private so i also have reason to believe that her life is also wonderful . Table 6: Translation examples on MT08 test set. The first and second examples show the accuracy and adequacy of the proposed method, respectively. The bold words in each example are paired and will be discussed in the text. mengmai xingzheng zhangguan <unk> biaoshi ,dan shi gaishi jiu you shisan sangsheng .<eos> bombay chief executive <unk> said that there were only 13 deathsin the city alone. <eos> (a) Vanilla mengmai xingzheng zhangguan <unk> biaoshi ,dan shi gaishi jiu you shisan sangsheng .<eos> mumbai's chief executive <unk> said 13 people were killedin the city alone. <eos> (b) Shared-private Figure 4: Long-distance reordering illustrated by the attention maps. The attention weights learned by the proposed shared-private model is more concentrated than that of the vanilla model. the model to learn an appropriate bilingual vector space in such a sharing setting. Finally, we propose to share more features between the more similar words by using s1 = 0.9 and reduce the weight on the unrelated words, which is λ = (0.9, 0.7, 0.5). This strikes the right balance between the translation quality and the number of model parameters. To investigate whether to share the features between unrelated words or not, we further conduct an experiment with the setting λ = (0.9, 0.7, 0). The result confirms our assumption that a small number of shared features between unrelated words with similar word frequency achieve better model performance. 3.4 Effect on Alignment Quality Table 5 shows the performance of different word alignment thresholds. In the first row, we only pair the words whose alignment probability A(y|x) is above the threshold of 0.5 (see Equation 1 for more details). Under this circumstance, 4,869 words are categorized as parallel words that have suoyi wo ye you liyou qu xiangxin ta de rensheng ye hen jingcai . <eos> so i have reason to believe her life is also very fantastic . <eos> (a) Vanilla suoyi wo ye you liyou qu xiangxin ta de rensheng ye hen jingcai . <eos> so i also have reason to believe that her life is also wonderful . <eos> (b) Shared-private Figure 5: Word omission problem illustrated by the attention maps. In the vanilla model, the third source word “ye” is not translated, while our shared-private model adequately translates it to give a better translation result. similar lexical meaning. Based on these observations, we find that the alignment quality is not a key factor affecting the model performance. In contrast, pairing as many as similar words possible helps the model to better learn the bilingual vector space, which improves the translation performance. The following qualitative analyses support these observations either. 
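As a rough illustration of how the lexical-meaning pairs counted in Table 5 could be constructed, the sketch below implements the greedy, frequency-ordered matching of Equation 1 together with the alignment-probability threshold discussed above (plain Python; the function and argument names are ours, and the authors' actual pipeline around fast_align may differ).

```python
def pair_by_alignment(src_words_by_freq, align_prob, threshold=0.05):
    """Greedy one-to-one pairing of source and target words (cf. Equation 1).

    src_words_by_freq: source vocabulary sorted by descending corpus frequency.
    align_prob: {source word: {target word: A(y|x)}}, e.g., estimated by an
                unsupervised aligner such as fast_align. Names are illustrative.
    Returns the "similar lexical meaning" pairs; unmatched words fall back to
    the same-word-form or unrelated categories.
    """
    paired_targets = set()
    pairs = {}
    for x in src_words_by_freq:
        candidates = {y: p for y, p in align_prob.get(x, {}).items()
                      if p > threshold and y not in paired_targets}
        if candidates:
            y_best = max(candidates, key=candidates.get)
            pairs[x] = y_best
            paired_targets.add(y_best)
    return pairs
```

Lowering the threshold from 0.5 to 0.05 simply admits more candidate pairs per source word, which is why the lexical-meaning category in Table 5 grows from 4,869 to 21,172 words while translation quality changes only marginally.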
3.5 Analysis of the Translation Results Table 6 shows two translation examples of the NIST Chinese-English translation task. To better understand the translations produced by these two models, we use layer-wise relevance propagation (LRP) (Ding et al., 2017) to produce the attention maps of the selected translations, as shown in Figure 4 and 5. In the first example, the Chinese word “sangsheng” is a low-frequency word and its ground truth is “killed”. It is observed the inadequate representation of “sangsheng” leads to a decline in the translation quality of the vanilla, direct bridging, and decoder WT methods. In our proposed 3620 −0.4 −0.2 0 0.2 0.4 −0.3 −0.2 −0.1 0 zhuxi zongtong ye bing sangsheng president also chief died killed (a) Vanilla −0.3 −0.2 −0.1 0 0.1 0.1 0.2 0.3 zhuxi sangsheng zongtong yebing president killed also died chief (b) Shared-private (global) −0.2 −0.1 0 0.25 0.3 0.35 zhuxi weiyuanzhang zongtong zongli yuanshou juzhang president chairman premier director minister chief (c) Shared-private (local) Figure 6: Visualization of the 2-dimensional PCA projection of the bilingual word embeddings of the two models. The blue words represent the Chinese embeddings while the red words represent the English embeddings. In (a), only the similar monolingual words are clustered together. While in (b) and (c), both the monolingual and bilingual words which have similar meanings are gathered together. method, a part of the embedding of “sangsheng” is shared with that of “killed”. These improved source representations help the model to generate better translations. Furthermore, as shown in Figure 4, we observe that the proposed method has better long-distance reordering ability than the vanilla. We attribute this improvement to the shared features, which provide an alignment guidance for the attention mechanism. The second example implies that our proposed model is able to improve the adequacy of translation, as illustrated in Figure 5. The Chinese word “ye” (also) appears twice in the source sentence, while only the proposed method can adequately translate both of them to the target word “also”. This once again proves that the shared embeddings between the pair words,“ye” and “also” provide the attention model with a strong interaction between the words, leading to a more concentrated attention distribution and effectively alleviating the word omission problem. 3.6 Analysis of the Learned Embeddings The proposed method has a limitation in that each word can only be paired with one corresponding word. However, synonym is a quite common phenomenon in natural language processing tasks. Qualitatively, we use principal component analysis (PCA) to visualize the learned embeddings of the vanilla model and the proposed method, as shown in Figure 6. In the vanilla model, as shown in Figure 6(a), only the similar monolingual embeddings are clustered, such as the English words “died” and “killed”, and the Chinese words “zhuxi” (president) and “zongtong” (president). However, in the proposed method, no matter whether the similar source and target words are paired or not, they tend to cluster together; as shown in Figure 6(b) and 6(c). In other words, the proposed method is able to handle the challenge of synonym. For example, both the Chinese words “ye” (paired with “also”) and “bing” can be correctly translated to “also” and these three words tend to gather together in the vector space. 
This is similar to the Chinese word “sangsheng” (paired with “killed”) and the English words “died” and “killed”. Figure 6(c) shows that the representations of the Chinese and English words which relate to “president” are very close. 4 Related Work Many previous works focus on improving the word representations of NMT by capturing the fine-grained (character) or coarse-grained (sub-word) monolingual characteristics, such as character-based NMT (Costa-Juss`a and Fonollosa, 2016; Ling et al., 2015; Cho et al., 2014; Chen et al., 2016), sub-word NMT (Sennrich et al., 2016b; Johnson et al., 2017; Ataman and Federico, 2018), and hybrid NMT (Luong and Manning, 2016). They effectively consider and utilize the morphological information to enhance the word representations. Our work aims to enhance word representations through the bilingual features that are cooperatively learned by the source and target words. Recently, Gu et al. (2018) propose to use the pre-trained target (English) embeddings as a universal representation to improve the representation learning of the source (low-resource) languages. 3621 In our work, both the source and target embeddings can make use of the common representation unit, i.e. the source and target embedding help each other to learn a better representation. The previously proposed methods have shown the effectiveness of integrating prior word alignments into the attention mechanism (Mi et al., 2016; Liu et al., 2016; Cheng et al., 2016; Feng et al., 2017), leading to more accurate and adequate translation results with the assistance of prior guidance. We provide an alternative that integrates the prior alignments through the sharing of features, which can also leads to a reduction of model parameters. Kuang et al. (2018) propose to shorten the path length between the related source and target embeddings to enhance the embedding layer. We believe that the shared features can be seem as the zero distance between the paired word embeddings. Our proposed method also uses several ideas from the three-way WT method (Press and Wolf, 2017). Both of these methods are easy to implement and transparent to different NMT architectures. The main differences are: 1) we share a part of features instead of all features; 2) the words of different relationship categories are allowed to share with differently sized features; and (3) it is adaptable to any language pairs, making the WT methods more widely used. 5 Conclusion In this work, we propose a novel sharing technique to improve the learning of word embeddings for NMT. Each word embedding is composed of shared and private features. The shared features act as a prior alignment guidance for the attention model to improve the quality of attention. Meanwhile, the private features enable the words to better capture the monolingual characteristics, result in an improvement of the overall translation quality. According to the degree of relevance between a parallel word pair, the word pairs are categorized into three different groups and the number of shared features is different. Our experimental results show that the proposed method outperforms the strong Transformer baselines while using fewer model parameters. Acknowledgements This work is supported in part by the National Natural Science Foundation of China (Nos. 61672555, 61876035, 61732005), the Joint Project of Macao Science and Technology Development Fund and National Natural Science Foundation of China (No. 
045/2017/AFJ), the Multi-Year Research Grant from the University of Macau (No. MYRG2017-00087-FST). Yang Liu is supported by the National Key R&D Program of China (No. 2017YFB0202204), National Natural Science Foundation of China (No. 61761166008, No. 61432013), Beijing Advanced Innovation Center for Language Resources (No. TYR17002). References Duygu Ataman and Marcello Federico. 2018. Compositional representation of morphologically-rich input for neural machine translation. In ACL 2018. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR 2015. Mia Xu Chen, Orhan Firat, Ankur Bapna, Melvin Johnson, Wolfgang Macherey, George Foster, Llion Jones, Niki Parmar, Mike Schuster, Zhifeng Chen, Yonghui Wu, and Macduff Hughes. 2018. The best of both worlds: Combining recent advances in neural machine translation. In ACL 2018. Welin Chen, David Grangier, and Michael Auli. 2016. Strategies for training large vocabulary neural language models. In ACL 2016. Yong Cheng, Shiqi Shen, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. Agreement-based joint training for bidirectional attention-based neural machine translation. In IJCAI 2016. Kyunghyun Cho, Bart Van Merri¨enboer, C¸ aglar G¨ulc¸ehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In EMNLP 2014. Marta R. Costa-Juss`a and Jos´e A. R. Fonollosa. 2016. Character-based neural machine translation. In ACL 2016. Yanzhuo Ding, Yang Liu, Huanbo Luan, and Maosong Sun. 2017. Visualizing and understanding neural machine translation. In ACL 2017. Chris Dyer, Victor Chahuneau, and Noah A Smith. 2013. A simple, fast, and effective reparameterization of ibm model 2. In NAACL-HLT 2013. Y Feng, S Zhang, A Zhang, D Wang, and A Abel. 2017. Memory-augmented neural machine translation. In EMNLP 2017. 3622 Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. 2017. Convolutional sequence to sequence learning. In ICML 2017. Jiatao Gu, Hany Hassan, Jacob Devlin, and Victor O K Li. 2018. Universal neural machine translation for extremely low resource languages. In NAACL-HLT 2018. Hany Hassan, Anthony Aue, Chang Chen, Vishal Chowdhary, Jonathan Clark, Christian Federmann, Xuedong Huang, Marcin Junczys-Dowmunt, William Lewis, Mu Li, Shujie Liu, Tie-Yan Liu, Renqian Luo, Arul Menezes, Tao Qin, Frank Seide, Xu Tan, Fei Tian, Lijun Wu, Shuangzhi Wu, Yingce Xia, Dongdong Zhang, Zhirui Zhang, and Ming Zhou. 2018. Achieving human parity on automatic chinese to english news translation. arXiv. Melvin Johnson, Mike Schuster, Quoc V Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Vi´egas, Martin Wattenberg, Greg Corrado, et al. 2017. Google’s multilingual neural machine translation system: Enabling zero-shot translation. TACL. Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In EMNLP 2013. Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR 2015. Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In EMNLP 2004. Shaohui Kuang, Junhui Li, Ant´onio Branco, Weihua Luo, and Deyi Xiong. 2018. Attention focusing for neural machine translation by bridging source and target embeddings. In ACL 2018. Xiang Li, Tao Qin, Jian Yang, and Tie-Yan Liu. 2016. 2-component recurrent neural networks. In NIPS 2016. 
Zhongliang Li, Raymond Kulhanek, Shaojun Wang, Yunxin Zhao, and Shuang Wu. 2018. Slim embedding layers for recurrent neural language models. In AAAI 2018. Wang Ling, Isabel Trancoso, Chris Dyer, and Alan W Black. 2015. Character-based neural machine translation. arXiv. Lemao Liu, Masao Utiyama, Andrew M Finch, and Eiichiro Sumita. 2016. Neural Machine Translation with Supervised Attention. In COLING 2016. Minh-Thang Luong and Christopher D Manning. 2016. Achieving open vocabulary neural machine translation with hybrid word-character models. In ACL 2016. Haitao Mi, Zhiguo Wang, and Abe Ittycheriah. 2016. Supervised attentions for neural machine translation. In EMNLP 2016. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. In ICLR 2013. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: A method for automatic evaluation of machine translation. In ACL 2002. Ofir Press and Lior Wolf. 2017. Using the Output Embedding to Improve Language Models. In EACL 2017. Rico Sennrich, Birch, Alexandra, Currey, Anna, Germann, Ulrich, Haddow, Barry, Heafield, Kenneth, Barone, Antonio Valerio Miceli, and Williams, Philip. 2017. The university of edinburgh’s neural mt systems for wmt17. In WMT@EMNLP 2017. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Edinburgh neural machine translation systems for WMT 16. In ACL 2016. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. In ACL 2016. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In NIPS 2014. Joseph P Turian, Lev-Arie Ratinov, and Yoshua Bengio. 2010. Word representations: A simple and general method for semi-supervised learning. In ACL 2010. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS 2017. Xinyi Wang, Hieu Pham, Zihang Dai, and Graham Neubig. 2018. Switchout: An efficient data augmentation algorithm for neural machine translation. In EMNLP 2018. Jiacheng Zhang, Yanzhuo Ding, Shiqi Shen, Yong Cheng, Maosong Sun, Huanbo Luan, and Yang Liu. 2017a. Thumt: An open source toolkit for neural machine translation. arXiv. Xiaowei Zhang, Wei Chen, Feng Wang, Shuang Xu, and Bo Xu. 2017b. Towards compact and fast neural machine translation using a combined method. In EMNLP 2017.
2019
352
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3623–3634 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3623 Literary Event Detection Matthew Sims School of Information UC Berkeley [email protected] Jong Ho Park Computer Science Division UC Berkeley [email protected] David Bamman School of Information UC Berkeley [email protected] Abstract In this work we present a new dataset of literary events—events that are depicted as taking place within the imagined space of a novel. While previous work has focused on event detection in the domain of contemporary news, literature poses a number of complications for existing systems, including complex narration, the depiction of a broad array of mental states, and a strong emphasis on figurative language. We outline the annotation decisions of this new dataset and compare several models for predicting events; the best performing model, a bidirectional LSTM with BERT token representations, achieves an F1 score of 73.9. We then apply this model to a corpus of novels split across two dimensions— prestige and popularity—and demonstrate that there are statistically significant differences in the distribution of events for prestige. 1 Introduction Do events determine the shape of literary narratives? This question reaches back at least as far as the 1920s, when literary theorists from the Russian Formalist school began making distinctions between syuzhet (the way in which events are presented in a narrative) and fabula (the chronological sequence of events, distinct from the way they’re represented) (Shklovsky, 1990; Propp, 2010). Even on a far more localized scale, events are often considered to play a fundamental role in how literary narratives progress. Moretti (2013), for instance, describes the inherent productivity of events in Daniel Defoe’s novel Robinson Crusoe, where one event invokes another in a chain of occurrences that seem to flow in “micronarrative sequences.” Such localized sequences in turn relate to the larger architecture of plot, which has its own distinct modes of organization and generation (Forster, 1927; Genette, 1983; Brooks, 1992). The status of events in literature thus inevitably engages larger questions about scale and narrative technique. At the same time, the representation and identification of events and their participants in NLP have historically focused on the domain of news, including early evaluation campaigns like MUC (Sundheim, 1991), seminal datasets like ACE 2005 (Walker et al., 2006) and the DEFT ERE framework (Aguilar et al., 2014; Bies et al., 2016), as well as other resources that require the identification of events as a precondition for other activities, such as temporal ordering (Pustejovsky et al., 2003b) or factuality judgments (Saur´ı and Pustejovsky, 2009; de Marneffe et al., 2012; Werner et al., 2015; Lee et al., 2015; Rudinger et al., 2018). The role of events in literary fiction, however, is very different from their role in fact-based reporting of events in the real world, including historical texts (Sprugnoli and Tonelli, 2017). Novels and even most short stories tend to be much longer than news articles, and tend to have more complex narrative structures both locally (individual scenes) and globally (plot) than works of nonfiction. Furthermore, literature is a creative enterprise. 
Journalistic discourse typically reports what actually happened in the real world and depicts definite causal chains connecting events; this causality is not hard coded into literary event sequences. We present in this work a new dataset of event annotations in the domain of literature that aims to bridge this gap between the rich landscape of existing work in event representation in NLP for news—including contemporary neural methods (Orr et al., 2018; Sha et al., 2018; Nguyen and Grishman, 2018)—and the needs of literature scholars for models of events in their domain. To develop a common thread with fact-based rep3624 resentations of real-world events while also laying the foundation for models to faithfully track the unique movements of narrative plot, we focus solely on events in literary texts that are depicted as actually happening—i.e., those with asserted realis (discussed in more detail in §4). As distinct from other epistemic modalities (such as future events, hypotheticals, and extradiegetic summaries by a narrator), realis events are depicted as existing within the imagined world of the literary text, and take place at a specific place and a specific time. In this work we make the following contributions: • We present a new annotated dataset of literary events in 210,532 tokens from 100 books and describe some of the key annotation guidelines that we have tailored to the unique challenges posed by novelistic discourse. The dataset is freely available for download under a Creative Commons ShareAlike 4.0 license as a part of LitBank at https://github. com/dbamman/litbank. • We compare multiple models for realis event detection in literary texts, including both featurized and neural approaches, with the best performing model achieving an F1 score of 73.9. • We apply this model to a corpus of novels and demonstrate that there are statistically significant differences in the ratio of realis events between novels written by authors with high prestige—defined by Underwood (2019) as works that have been reviewed by elite literary journals—and those written by authors without such prestige. High prestige authors (such as James Joyce and Virginia Woolf) use fewer realis events depicting concrete actions in their works. 2 Background and Previous Work We draw on several threads of previous research in designing a dataset and model to support literary event detection. First, while much work at the intersection of NLP and literary analysis has focused on computational approaches to characters and their relationships (Bamman et al., 2014; Vala et al., 2015; Iyyer et al., 2016; Chaturvedi et al., 2017), far less has explored the event structure of literary texts. Plot is often explored through the lens of sentiment (Alm and Sproat, 2005; Mohammad, 2011; Elsner, 2012; Jockers, 2015; Reagan et al., 2016) rather than the concrete events that comprise it. Second, we draw on the vast literature in NLP for the detection of events, participants, and their structured relationships, from the featurized models of Ahn (2006) and Li et al. 
(2013) to the variety of neural architectures that have been applied to the task of event detection, such as CNNs (Nguyen and Grishman, 2015), including dynamic multi-pooling CNNs (Chen et al., 2015) and skipgram CNNs (Nguyen and Grishman, 2018), RNNs (Nguyen et al., 2016), hybrid LSTM-CNN architectures (Feng et al., 2016), and attention (Liu et al., 2017, 2018) While most approaches use sentence-level information to detect events, we also draw on the work of Liao and Grishman (2010), which instead incorporates document-level information (potentially useful for longer literary narratives). 3 Data The corpus we have selected to annotate consists of approximately the first 2000 words of 100 literary works currently in the public domain (i.e., published before 1923), previously used by Bamman et al. (2019). The majority of these texts are canonical novels published in the nineteenth and early twentieth centuries (e.g., Jane Austen’s Pride and Prejudice, Herman Melville’s Moby Dick, and James Joyce’s Ulysses). A smaller percentage of this corpus consists of popular genre fiction published within this same time frame (e.g., King Solomon’s Mines, Tarzan of the Apes, and Desert Gold). All of these texts have been selected from the Project Gutenberg corpus and collectively exhibit a range of novelistic discourse. This range is particularly useful and necessary for exploring literary event realis, providing examples of novels that are narratively and stylistically complex as well as others that are more declarative and plot-driven. 4 Event annotations Events remain a contested category across narrative theory, philosophy, and linguistics, with definitions varying depending on discipline, application, and context. Most linguistic event classifications nevertheless trace their lineage back to 3625 Vendler (1957), who proposed four categories to distinguish the different relationships that exist between verbs and time: activities (dynamically unfolding processes), achievements (occurrences that are completed almost instantaneously), accomplishments (occurrences that have some duration but also have a predetermined endpoint), and states (persistent conditions that span a period of time and don’t have any definite endpoint). A simpler classification that some scholars have traced back to Aristotle (Sasse, 2002) simply distinguishes between events and states, the latter usually defined as non-dynamic situations that pertain over time. Many event annotation systems including TimeML (Pustejovsky et al., 2003a), ACE (LDC, 2005), and Light ERE (Aguilar et al., 2014) also treat changes of state as being events, since such changes indicate a dynamic break from prior conditions. In our annotation approach, we include activities, achievements, accomplishments, and changes of state as being events. We introduce several more fine-grained distinctions, however, as far as which subsets within each of these categories should be labelled for our specific purposes, as detailed below. At a high level, our goal is to model only what is depicted as actually occurring in a text; in other words, those events with asserted realis. The ACE 2005 event annotation guidelines (LDC, 2005) outline four dimensions for tagging events involving determinations for polarity, tense, specificity, and modality. 
We follow the Light ERE (Aguilar et al., 2014) approach of only selecting aspects that capture realis: • Polarity: Events must have a positive polarity (i.e., positively asserted as occurring); events with negative polarity are defined as having not taken place (such as “he did not understand”). • Tense: Events must be in the past or present tense. Events in the future tense are not tagged. • Genericity: Generic events describe a category (e.g., dogs bark) rather than a specific occurrence involving a specific entity (my dog barked this morning). We only tag specific events in our framework; all generic events are ignored. We consider an event to be specific if it is “a singular occurrence at a particular place and time” (LDC, 2005). • Modality: Only asserted events—those that are indicated to have actually occurred—are tagged. All other modalities (believed, hypothetical, desired, etc.) are not. We also employ the following standards in our annotation approach: • Similar to both the ACE and Light ERE guidelines, we tag event triggers, defined as the minimum extent of text capable of representing an event. For our purposes, this extent is always a single word. This is in contrast to Light ERE, which allows for multiword triggers, and to ACE, which mostly restricts triggers to single words but makes an exception for phrasal verbs by including the particle if it immediately follows the main predicate. • We limit event triggers to the following three parts of speech: verbs, adjectives, and nouns (including nominals). Adverbs and prepositions are not annotated as events. • In contrast to both the ACE and Light ERE guidelines, which restrict taggable events to those falling within eight specific types (life, movement, transaction, business, conflict, contact, personnel, and justice), we adopt an open approach and make no restrictions on the types of events that are tagged. Due to the specific domain we are annotating (English language fiction), we have also found it necessary to define several rules that are not explicitly presented in the ACE or Light ERE standards. In particular, since mental states play an especially prominent and complex role in many novels (and noticeably so relative to more fact-based discourses such as the news) we have given particular attention to defining rules for stative events. In our annotations, we tag a state as being an event assuming one or more of the following conditions has been met: 1. An explicit change of state has occurred (whether initiation, termination, or alteration), and this change can be determined solely within the context of the sentence in which the potential event trigger appears. 3626 (1) Stephen Dedalus, displeased and sleepy, leaned his arms on the top of the staircase and looked coldly at the shaking gurgling face that blessed him, equine in its length, and at the light untonsured hair, grained and hued like pale oak. (Joyce, Ulysses) (2) My eyes followed his trim figure, richly though sombrely clad, then fell with a sudden dissatisfaction upon my own stained and frayed apparel. (Johnston, To Have and to Hold) (3) He generally arrived in London (like the influenza) from the Continent, only he arrived unheralded by the Press; and his visitations set in with great severity. (Conrad, The Secret Agent) Table 1: Three annotation examples with tagged event triggers in bold and candidate triggers that would not be tagged underlined. 2. 
The cause of the state can be deduced (again within the context of the sentence), and it is clear that the cause and resulting state have occurred at the same location. For example, the following states (in bold) would be labelled as events: “When he received this appointment he was both elated and appalled.” (Burroughs, Tarzan of the Apes) 3. The potential event trigger refers to a mental state that is inherently acute, semantically speaking. For instance, words such as “astonished,” “shocked,” “aghast,” and “stunned” all suggest mental states that are acute responses to some stimulus and are usually only maintained for a limited duration. Table 1 presents three sample sentences annotated under our guidelines that illustrate important aspects of our framework, including mental states with no evidence of immediate change (displeased and sleepy in example 1), resultatives (stained and frayed in example 2), and generic events that describe periodic activities but not a single action grounded at a single moment in time (arrived in example 3). Meanwhile, Table 2 shows the fifteen words with the highest occurrence as events in the annotations, along with the percentage of the time they are labelled as events. For the most part, these words can be broken down into four respective categories: verbs related to conversation (said, asked, heard, answered, and cried when indicating a vocalization); verbs related to movement (came, went, and turned); verbs related to perception (looked and saw); and verbs related to obtainment (took and found). As the event rates make clear, even these words are only labelled as events a portion of the time (in some cases less than half of all occurrences) either due to contextual usage or the broader constraints imposed by realis. Word Count Event Rate said 465 89% came 95 52% looked 92 58% went 92 60% asked 69 93% heard 63 59% saw 59 55% cried 59 97% took 57 60% turned 55 74% told 51 56% found 49 42% answered 45 96% put 44 41% thought 38 32% Table 2: The fifteen words with the highest overall occurrence as events in the annotations (Count) along with the percentage they are labelled events relative to their overall occurrence in the corpus (Event Rate). Finally, to highlight why annotating events in novels is a particularly challenging task, we also briefly mention some of the phenomena that frequently arise. There are no taggable events in the examples below; potential triggers that are not tagged are underlined. Figurative events. Often figurative language or an extended metaphor will be used to represent an event: “He had broken a thickness of ice, the formation of many a winter; had had his reasons for a long silence.” (James, The Turn of the Screw) 3627 Realis events presented in an irrealis mood. Sometimes events that have actually occurred are presented in a different modality for rhetorical purposes: “As to your practice, if a gentleman walks into my rooms smelling of iodoform, with a black mark of nitrate of silver upon his right forefinger, and a bulge on the right side of his top-hat to show where he has secreted his stethoscope, I must be dull, indeed, if I do not pronounce him to be an active member of the medical profession.” (Doyle, The Adventures of Sherlock Holmes ) Ambiguous assertions. 
In some instances, events that appear to be clearly asserted based on semantic and syntactic indicators become ambiguous when considered outside of the narrative frame, such as when a narrator directly addresses the reader: “Why upon your first voyage as a passenger, did you yourself feel such a mystical vibration, when first told that you and your ship were now out of sight of land?” (Melville, Moby Dick) 4.1 Annotation process All annotations were carried out by a single coauthor after multiple rounds of discussions and the creation of a set of annotation guidelines heavily dependent on the ACE 2005 annotation guidelines for events (LDC, 2005) and adapted for the realis events under consideration here. To calculate the expected inter-annotator agreement rate, a second co-author independently annotated a random sample of five texts at the end of the annotation process, using only the annotation guidelines for reference. We find the agreement rate to be high (82.1 F-score for event identification and a chancecorrected Cohen’s κ of 0.813). The total dataset comprises 7,849 events among 210,532 tokens in the 100 books in our corpus, and is freely available for public use. 5 Event detection We consider two classes of models for literary event detection in this data: neural models optimized for event trigger detection in past work (Nguyen and Grishman, 2015; Chen et al., 2015; Nguyen et al., 2016; Feng et al., 2016); and featurized models (Ahn, 2006; Li et al., 2013; Yang and Mitchell, 2016). 5.1 Neural Previous work has demonstrated the strength of neural models for event trigger detection, where models can leverage the distributional information encoded in word embeddings, along with representations of longer sentence context, to achieve high performance. We explore several variants of these models in this work; all models approach literary event detection as a sequence labeling problem, assigning a binary label to each token denoting its status as an event. To leverage word representations that are suited for this particular literary domain, we train 100dimensional skipgram (Mikolov et al., 2013) word embeddings on 15,290 books from Project Gutenberg. With the exception of the model incorporating BERT token representations, all models described below use these same embeddings. LSTM. The simplest model we consider is a single-direction, 100-dimensional LSTM, with each input token represented as a word embedding from Project Gutenberg. BiLSTM. Since the decision to label each token as an event may rely on information in the right context of the sentence, we consider a bidirectional LSTM (concatenating the outputs of two 100-dimensional LSTMs). BiLSTM with document context. Most models for event trigger detection consider contextual information only from the sentence when making predictions about the event status of any individual token. Drawing on previous work incorporating global context (Liao and Grishman, 2010), we might hypothesize, however, that the accurate prediction of complex realis events may require greater document context—hypotheticals introduced in one sentence may span multiple ensuing paragraphs, while an extradiegetic aside from the narrator may span several pages and contain no concrete events. To test this, we define a sequence to be the entire (ca. 2,000-word) document, rather than an individual sentence. BiLSTM with sentence CNN. Several previous methods have shown the strength of a sentencelevel CNN (Nguyen et al., 2016; Feng et al., 2016). 
When predicting the event status of a token at position i in a sentence with n words w = {w1, . . . , wn}, each CNN convolves over the entire sequence w along with position embeddings 3628 p = {p1, . . . , pn} that encode the distance between each token position j ∈[1, n] and the target token i. We adopt the architecture of Nguyen and Grishman (2015) in particular, where the output of a CNN is then passed to a max-pooling phase to yield a representation ci for target position i that is concatenated to the BiLSTM output oi at that time step when making a binary prediction (with learned parameters W). P(event) = σ([ci; oi]⊤W) The CNN contains 200 filters (100 each scoped over word bigrams and trigrams). We encode positional information between the target token at position i and the token at position j using signed bucketing (±1, 2, 3, 4, 5, 6−10, 11−20, >20). Each bucket corresponds to a discrete choice of position with its own learned 5-dimensional embedding (as in past work). BiLSTM with subword CNN. Subword character CNNs have been useful for a range of problems (Ma and Hovy, 2016; Chiu and Nichols, 2016) as a way of capturing meaningful representations of words that may be out-of-vocabulary for a set of learned embeddings (or whose use in a given domain may be at odds with the data those embeddings are trained on). We consider this design choice here as well. We represent each word as the output of a CNN with 100 filters (25 filters each scoped over character bigrams, trigrams, 4grams and 5grams), with max pooling over the character sequence to yield a 100dimensional character representation ci of a word at position i. This representation is then concatenated to the word embedding ei for the token at that position and fed as input to the LSTM time step. BiLSTM with BERT contextual representations. In order to take advantage of recent advances in language model pre-training (Howard and Ruder, 2018; Peters et al., 2018; Radford et al., 2019), we also incorporate contextual representations extracted from the pre-trained base BERT model (Devlin et al., 2019). Rather than fine-tuning the model for the supervised task, we instead use the BERT model in a feature-based way, representing each token in a sequence as the concatenation of the model’s final four layers (3,072 dimensions in total) in place of pre-trained word embeddings in a BiLSTM. Since BERT uses WordPiece embeddings (Wu et al., 2016) as input, we take the average of any resulting sub-tokens in order to return a single per token representation (potentially beneficial as many of the literary works in our corpus contain long, complex words). As Orr et al. (2018) have shown, neural models for event identification can exhibit substantial variation simply as a function of their random initialization, and we observe that with our data and models as well. To report expected performance on future data, we average together the predictions made from five random initializations (i.e., the majority class predicted for a token in context by the five models). 5.2 Featurized The dataset we have created contains 7,849 events among 210,532 tokens. 
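As an aside on the strongest configuration of Section 5.1, the sketch below shows one way to build the feature-based BERT token representations described above: concatenating the final four layers and averaging WordPiece sub-tokens back to one vector per word. It is an illustrative re-implementation using the HuggingFace transformers library (not necessarily the toolkit the authors used); the checkpoint name and other details are assumptions.

```python
import torch
from transformers import BertModel, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased")  # checkpoint is a placeholder
bert = BertModel.from_pretrained("bert-base-cased", output_hidden_states=True)
bert.eval()

def bert_token_features(words):
    """One vector per word: concatenate BERT's final four layers (4 x 768 = 3,072
    dimensions), then average the WordPiece pieces belonging to each word."""
    enc = tokenizer(words, is_split_into_words=True, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = bert(**enc).hidden_states           # tuple: embeddings + 12 layers
    pieces = torch.cat(hidden[-4:], dim=-1)[0]       # [num_pieces, 3072]
    feats = []
    for w in range(len(words)):
        # indices of the WordPiece pieces that belong to word w (special tokens map to None)
        idx = [i for i, wid in enumerate(enc.word_ids()) if wid == w]
        feats.append(pieces[idx].mean(dim=0))
    return torch.stack(feats)                        # [num_words, 3072]
```

The resulting 3,072-dimensional per-word vectors then replace the pre-trained Gutenberg embeddings as inputs to the BiLSTM tagger.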
While this size is comparable to other datasets used for event detection in the past, it is unclear whether the scale is large enough to train highly parameterized neural models well; to test this, we design a linguistically informed featurized model, drawing on previous work in event representations (Ahn, 2006; Li et al., 2013; Chen et al., 2009) and noun phrase genericity and specificity (Reiter and Frank, 2010; Friedrich et al., 2015). For this featurized model, we use ℓ2-regularized binary logistic regression to make decisions about each token in its immediate context. We featurize the decision using the following information. • Word. The lowercased word form of the token. • Lemma. The lemma of the token. • POS. The token’s part of speech (using the Penn Treebank tagset), predicted using the SpaCy library.1 In addition to providing important information about the core identification of verbs, the Penn Treebank tags also contribute to the determination of verb tense (important for our characterization of realis events). • Context. The immediate context surrounding the word, represented as the following: a.) unigram indicators for the words found within three positions to the left; b.) indicators for words found three words to the right; c.) unigram × position indicators for those 1https://spacy.io 3629 Method Precision Recall F Verbs only 17.7 [16.6-18.8] 76.2 [74.1-78.3] 28.7 [27.3-30.2] Featurized 68.9 [66.2-71.7] 50.5 [48.0-52.9] 58.3 [56.1-60.4] LSTM 66.6 [64.1-69.1] 60.5 [57.9-63.1] 63.4 [61.3-65.5] BiLSTM 70.4 [67.8-72.9] 60.7 [58.0-63.4] 65.2 [63.1-67.3] + document context 74.2 [71.7-76.6] 58.8 [56.0-61.6] 65.6 [63.5-67.8] + sentence CNN 71.6 [69.1-74.1] 56.4 [53.8-59.0] 63.1 [61.0-65.1] + subword CNN 69.2 [66.6-71.6] 64.8 [62.2-67.3] 66.9 [64.8-68.9] + BERT 75.5 [73.3-77.8] 72.3 [69.7-74.8] 73.9 [72.0-75.7] + subword CNN 73.6 [71.2-75.8] 73.3 [70.8-75.7] 73.4 [71.5-75.2] Table 3: Performance on literary event identification. All metrics are reported with 95% bootstrap confidence intervals. same words (e.g., not appearing at position -1 with respect to the word); d.) the trigram appearing to the left; the trigram to the right; e.) the part-of-speech trigram to the left; and f.) to the right. This immediate contextual information captures important factors that affect modality, such as negation (Chen et al., 2009) • Syntax. Syntactic information encoding the word’s dependency relation, syntactic head, and part-of-speech of the syntactic head, predicted using SpaCy. • Wordnet. Following Reiter and Frank (2010), we include WordNet synset and hyponymy information, capturing the synset of the word and the identities of its three hypernyms up the WordNet chain. • Embeddings. We also include word embeddings as features; while a simple linear model like logistic regression cannot exploit important non-linearities between the embedding dimensions, they can provide some corpus level-information about the behavior of the word in the 15,290 Gutenberg texts it was trained on (which the neural models described above also have access to). • Bare plurals. Some generic events (such as “pirates sail ships”) contain bare plurals as subjects; inspired by Reiter and Frank (2010) on identifying generic noun phrases, we featurize the presence of a bare plural subject by noting whether the noun phrase subject is plural in form and lacking an explicit determiner, numeric count, or possessive pronoun. 
We also draw on their countability feature, identifying whether a noun phrase subject is countable (e.g., “the boy”) or not (e.g. “the water”) using CELEX (Baayen et al., 1996). 5.3 Results To evaluate the performance of these models, we create training (60%), development (10%), and test (30%) partitions of the data at the level of books, with 60 books in train, 10 in development, and 30 in test. We stratify by book to ensure that no information from the same book appears in different partitions. All models have access to the same development data for hyperparameter tuning; we use this to explore feature engineering and optimize the ℓ2 regularization strength for the featurized model, and to explore different neural hyperparameter choices (e.g., size of LSTM). Table 3 illustrates the comparative performance between the different systems. To contextualize these results, we also provide a simple but interpretable baseline of selecting all and only verbs to be events. This naive verb-only baseline yields an F-score of 28.7; while verbs are strong indicators of events, they are neither sufficient (the recall indicates that nearly one quarter of the true events in the test data are not verbs) nor entirely consistent (many verbs may signal events but not realis events). While the featurized model improves on the baseline with an F-score of 58.3, all of the neural variants perform substantially better, generating a minimum F-score of 63.1. Although all neural models are statistically significantly better than the featurized model (under a bootstrap test), the variants of a subword CNN, sentence CNN and document context show little difference from 3630 each other. In contrast, a BiLSTM with BERT input representations clearly outperforms all other methods with an F-score of 73.9 (an absolute improvement of +7.0 points over the best non-BERT model), attesting again to the value of unsupervised pre-training for supervised tasks (even in cases where the language model itself is not optimized for the task). 6 Analysis To illustrate the usefulness of event representations for the analysis of literary texts, we consider the distinction between economic and cultural capital originally put forth by Bourdieu (1993) and analyzed from a computational perspective by Algee-Hewitt et al. (2016) and Underwood (2019). Both computational models find strong textual signals predictive of authorial prestige, measured either by inclusion in the Oxford Dictionary of National Biography (Algee-Hewitt et al., 2016) or by the number of times their works were reviewed by elite literary journals (Underwood, 2019). Both models also consider authorial popularity, measured either by the number of times a work was reprinted (Algee-Hewitt et al., 2016) or by the number of times their works can be found on historical bestseller lists (Underwood, 2019). While Underwood (2019) finds that high prestige fiction correlates with Harvard General Inquirer categories of KNOWLEDGE AND AWARENESS and NATURAL OBJECTS, we can similarly ask: is there a relationship in the depiction of realis events and literary prestige or popularity? To test this, we draw on data from Underwood (2019), selecting the 100 authors identified in that work with the highest and lowest prestige, respectively. In total, 44 of the high prestige authors and 29 of the low prestige authors are present in the Project Gutenberg corpus. We select any works of fiction by these authors that are present in Gutenberg, limiting the maximum number of novels per author to 10. 
This yields 190 novels in the high prestige class and 159 in the low prestige class. Since Project Gutenberg has wider representation of historically popular texts than unpopular ones, we select the 100 most popular authors and 500 least popular authors. 67 of the high popularity authors and 68 of the low popularity authors appear in the Gutenberg corpus. After selecting a sample of the high popularity texts while again limiting per author novel totals to 10, this yields 182 novels in the high popularity class and 173 in the low popularity class. We run the best-performing literary event detection model identified above (a bidirectional LSTM with BERT token representations) on each novel, and carry out two related analyses on the output. First, to estimate the overall incidence of realis events, we simply calculate the average event ratio in each novel (the number of realis events normalized by the number of tokens); second, to capture the pacing of realis events more concretely in terms of actual tokens, we invert this metric to calculate the event distance (how many tokens one would have to read on average before coming across an event token). Class Ratio Distance High prestige 4.6 [4.4-4.7] 23.4 [22.4-24.5] Low prestige 5.5 [5.3-5.6] 19.2 [18.2-20.1] High popularity 4.6 [4.4-4.8] 23.2 [22.3-24.1] Low popularity 4.5 [4.3-4.7] 25.0 [21.9-28.1] Table 4: Mean event ratios (event tokens / total tokens) and mean event distances (total tokens / event tokens) calculated over all novels in each class. All metrics are reported with 95% confidence intervals. The results of these analyses are shown in Table 4. We would expect that the pulp novels of Edgar Rice Burroughs would contain more physical description and concrete events than the more meditative novels of Henry James, James Joyce, and Kate Chopin, and we find this to be the case: authors with low prestige use 20% more concrete events in their works (the difference in both metrics between the two groups is statistically significant at p < 0.05). For the popularity dimension, however, the results on both metrics are statistically indistinguishable. Although it is difficult to draw definitive conclusions based on these results, the outcome for the prestige dimension in particular indicates a compelling line of inquiry. In fact, the results in Table 4 only tell half the story. As Figure 1 demonstrates, the most marked distinction for event ratios in high prestige and low prestige novels is not the mean but rather the spread. High prestige novels appear to have greater variability in the percentage of realis events (particularly skewed to lower ratios), whereas the percentage for low prestige novels, with the exception of a few outliers, remains within a smaller range. This variability suggests that, as one might expect, prestigious au3631 thors tend to conform less programmatically to a regular frequency of realis events. Put differently, prestigious novels don’t have the same constraints as less prestigious ones in maintaining our attention through something happening in the narrative. While many prestigious novels have event ratios in line with novels lacking prestige, prestigious authors appear to have a higher degree of freedom when it comes to the overall eventfulness of their works. 2 4 6 8 High prestige Low prestige Event ratio Figure 1: Violin plot of event ratios for novels in the prestige category. 
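Both corpus-level metrics are straightforward to reproduce from per-token event predictions. The sketch below gives one way to compute them for a single novel; the function name is ours, and the event ratio is expressed per 100 tokens, which appears to be the unit used in Table 4.

```python
def event_metrics(token_labels):
    """token_labels: binary per-token event predictions for one novel
    (1 = realis event token, 0 = otherwise)."""
    n_tokens = len(token_labels)
    n_events = sum(token_labels)
    if n_events == 0:
        return {"event_ratio": 0.0, "event_distance": float("inf")}
    return {
        # events per 100 tokens (cf. Table 4, "Ratio")
        "event_ratio": 100.0 * n_events / n_tokens,
        # average number of tokens read before reaching an event token
        "event_distance": n_tokens / n_events,
    }
```

Averaging these per-novel values within each class then yields the entries of Table 4.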
7 Conclusion We present in this work a new dataset for the representation of events in literary texts in order to bridge the gap between previous efforts to represent fact-based accounts in news (along with contemporary models trained on that data) and the demands of literary scholars for the computational analysis of the micro-narratives that comprise plot. The relatively straightforward application of our model to the analysis of authorial prestige shows how identifying realis events can help to uncover some important and overlooked aspects of novelistic narrative. To the best of our knowledge, no previous technical or theoretical work has specifically examined the function that events with asserted realis play in the structure of literary fiction. Yet simply by analyzing the ratio of realis events, one can capture a meaningful distinction between novels written by authors whose works are reviewed by elite literary journals and those written by authors whose work is not. We hope this initial application inspires further research by literary scholars and computational humanists in the future. All event annotations are freely available for public use under a Creative Commons Sharealike license at https://github.com/ dbamman/litbank. Code to support this work can be found at: https://github.com/ dbamman/ACL2019-literary-events. Acknowledgments We thank the anonymous reviewers for their valuable feedback, and Ted Underwood for sharing data to enable our analysis on popularity and prestige. The research reported in this article was supported by an Amazon Research Award and by resources provided by NVIDIA and Berkeley Research Computing. References Jacqueline Aguilar, Charley Beller, Paul McNamee, Benjamin Van Durme, Stephanie Strassel, Zhiyi Song, and Joe Ellis. 2014. A comparison of the events and relations across ACE, ERE, TAC-KBP, and Framenet annotation standards. In Proceedings of the Second Workshop on EVENTS: Definition, Detection, Coreference, and Representation, pages 45–53. Association for Computational Linguistics. David Ahn. 2006. The stages of event extraction. In Proceedings of the Workshop on Annotating and Reasoning About Time and Events, ARTE ’06, pages 1–8, Stroudsburg, PA, USA. Association for Computational Linguistics. Mark Algee-Hewitt, Sarah Allison, Marissa Gemma, Ryan Heuser, Franco Moretti, and Hannah Walser. 2016. Canon/archive: Large-scale dynamics in the literary field. Literary Lab Pamphlet 11. Cecilia Ovesdotter Alm and Richard Sproat. 2005. Emotional sequencing and development in fairy tales. In International Conference on Affective Computing and Intelligent Interaction, pages 668–674. Springer. R. Harald Baayen, Richard Piepenbrock, and Leon Gulikers. 1996. Celex2. LDC. David Bamman, Sejal Popat, and Sheng Shen. 2019. An annotated dataset of literary entities. NAACL. David Bamman, Ted Underwood, and Noah A. Smith. 2014. A Bayesian mixed effects model of literary 3632 character. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 370–379, Baltimore, Maryland. Association for Computational Linguistics. Ann Bies, Zhiyi Song, Jeremy Getman, Joe Ellis, Justin Mott, Stephanie Strassel, Martha Palmer, Teruko Mitamura, Marjorie Freedman, Heng Ji, and Tim O’Gorman. 2016. A comparison of event representations in deft. In Proceedings of the Fourth Workshop on Events, pages 27–36. Association for Computational Linguistics. Pierre Bourdieu. 1993. 
The field of cultural production, or: The economic world reversed. In The Field of Cultural Production: Essays on Art and Literature. Columbia University Press. Peter Brooks. 1992. Reading for the Plot: Design and Intention in Narrative. Harvard University Press. Snigdha Chaturvedi, Mohit Iyyer, and Hal Daum´e III. 2017. Unsupervised learning of evolving relationships between literary characters. In Association for the Advancement of Artificial Intelligence. Yubo Chen, Liheng Xu, Kang Liu, Daojian Zeng, and Jun Zhao. 2015. Event extraction via dynamic multi-pooling convolutional neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 167–176. Association for Computational Linguistics. Zheng Chen, Heng Ji, and R Haralick. 2009. Event coreference resolution: Algorithm, feature impact and evaluation. In Proceedings of Events in Emerging Text Types (eETTs) Workshop, in conjunction with RANLP, Bulgaria. Jason Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional LSTM-CNNs. Transactions of the Association for Computational Linguistics, 4:357–370. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Micha Elsner. 2012. Character-based kernels for novelistic plot structure. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 634–644. Association for Computational Linguistics. Xiaocheng Feng, Lifu Huang, Duyu Tang, Heng Ji, Bing Qin, and Ting Liu. 2016. A languageindependent neural network for event detection. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 66–71. Association for Computational Linguistics. E. M. Forster. 1927. Aspects of the Novel. Edward Arnold. Annemarie Friedrich, Alexis Palmer, Melissa Peate Sørensen, and Manfred Pinkal. 2015. Annotating genericity: a survey, a scheme, and a corpus. In Proceedings of the 9th Linguistic Annotation Workshop, pages 21–30. G´erard Genette. 1983. Narrative Discourse: An Essay in Method. Cornell University Press. Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 328–339. Mohit Iyyer, Anupam Guha, Snigdha Chaturvedi, Jordan Boyd-Graber, and Hal Daum´e III. 2016. Feuding families and former friends: Unsupervised learning for dynamic fictional relationships. In North American Association for Computational Linguistics. Matthew Jockers. 2015. Revealing sentiment and plot arcs with the syuzhet package. http://www.matthewjockers.net/ 2015/02/02/syuzhet/. LDC. 2005. ACE (Automatic Content Extraction) English annotation guidelines for events. https://www.ldc.upenn.edu/ sites/www.ldc.upenn.edu/files/ english-events-guidelines-v5.4.3. pdf. Kenton Lee, Yoav Artzi, Yejin Choi, and Luke Zettlemoyer. 2015. Event detection and factuality assessment with non-expert supervision. 
In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1643–1648. Qi Li, Heng Ji, and Liang Huang. 2013. Joint event extraction via structured prediction with global features. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 73–82. Association for Computational Linguistics. Shasha Liao and Ralph Grishman. 2010. Using document level cross-event inference to improve event extraction. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, ACL ’10, pages 789–797, Stroudsburg, PA, USA. Association for Computational Linguistics. 3633 Jian Liu, Yubo Chen, Kang Liu, and Jun Zhao. 2018. Event detection via gated multilingual attention mechanism. In Thirty-Second AAAI Conference on Artificial Intelligence. Shulin Liu, Yubo Chen, Kang Liu, and Jun Zhao. 2017. Exploiting argument information to improve event detection via supervised attention mechanisms. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1789–1798. Association for Computational Linguistics. Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional LSTM-CNNsCRF. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1064–1074. Association for Computational Linguistics. Marie-Catherine de Marneffe, Christopher D. Manning, and Christopher Potts. 2012. Did it happen? the pragmatic complexity of veridicality assessment. Comput. Linguist., 38(2):301–333. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. ICLR. Saif Mohammad. 2011. From once upon a time to happily ever after: Tracking emotions in novels and fairy tales. In Proceedings of the 5th ACLHLT Workshop on Language Technology for Cultural Heritage, Social Sciences, and Humanities, pages 105–114. Association for Computational Linguistics. Franco Moretti. 2013. The Bourgeois: Between history and literature. Verso Books. Thien Nguyen and Ralph Grishman. 2018. Graph convolutional networks with argument-aware pooling for event detection. In AAAI. Thien Huu Nguyen, Kyunghyun Cho, and Ralph Grishman. 2016. Joint event extraction via recurrent neural networks. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 300–309. Association for Computational Linguistics. Thien Huu Nguyen and Ralph Grishman. 2015. Event detection and domain adaptation with convolutional neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 365–371. Association for Computational Linguistics. Walker Orr, Prasad Tadepalli, and Xiaoli Fern. 2018. Event detection with neural networks: A rigorous empirical evaluation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 999–1004. Association for Computational Linguistics. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. 
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237. Vladimir Propp. 2010. Morphology of the Folktale. University of Texas Press. James Pustejovsky, Jos´e Casta˜no, Robert Ingria, Roser Saur´ı, Robert Gaizauskas, Andrea Setzer, and Graham Katz. 2003a. TimeML: robust specification of event and temporal expressions in text. In Fifth International Workshop on Computational Semantics (IWCS-5), pages 1–11. James Pustejovsky, Patrick Hanks, Roser Sauri, Andrew See, David Day, Lisa Ferro, Robert Gaizauskas, Marcia Lazo, Andrea Setzer, and Beth Sundheim. 2003b. The TimeBank corpus. Corpus Linguistics. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Andrew J Reagan, Lewis Mitchell, Dilan Kiley, Christopher M Danforth, and Peter Sheridan Dodds. 2016. The emotional arcs of stories are dominated by six basic shapes. EPJ Data Science, 5(1):31. Nils Reiter and Anette Frank. 2010. Identifying generic noun phrases. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, ACL ’10, pages 40–49, Stroudsburg, PA, USA. Association for Computational Linguistics. Rachel Rudinger, Aaron Steven White, and Benjamin Van Durme. 2018. Neural models of factuality. In Proceedings of the Annual Meeting of the North American Association of Computational Linguistics (NAACL). Hans-J¨urgen Sasse. 2002. Recent activity in the theory of aspect: Accomplishments, achievements, or just non-progressive state. Linguistic Typology, 6(2):199–271. Roser Saur´ı and James Pustejovsky. 2009. Factbank: a corpus annotated with event factuality. Language Resources and Evaluation, 43(3):227–268. Lei Sha, Feng Qian, Baobao Chang, and Zhifang Sui. 2018. Jointly extracting event triggers and arguments by dependency-bridge rnn and tensor-based argument interaction. In Thirty-Second AAAI Conference on Artificial Intelligence. Viktor Shklovsky. 1990. Theory of Prose. Dalkey Archive. 3634 R. Sprugnoli and S. Tonelli. 2017. One, no one and one hundred thousand events: Defining and processing events in an inter-disciplinary perspective. Natural Language Engineering, 23(4):485506. Beth M. Sundheim. 1991. Overview of the third message understanding conference. In Processing of the Third Message Understanding Conference. Ted Underwood. 2019. Distant Horizons: Digital Evidence and Literary Change. University of Chicago Press. Hardik Vala, David Jurgens, Andrew Piper, and Derek Ruths. 2015. Mr. Bennet, his coachman, and the Archbishop walk into a bar but only one of them gets recognized: On the difficulty of detecting characters in literary texts. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 769–774, Lisbon, Portugal. Association for Computational Linguistics. Zeno Vendler. 1957. Verbs and times. The philosophical review, 66(2):143–160. Christopher Walker, Stephanie Strassel, Julie Medero, and Kazuaki Maeda. 2006. ACE 2005 multilingual training corpus. LDC. Gregory Werner, Vinodkumar Prabhakaran, Mona Diab, and Owen Rambow. 2015. Committed belief tagging on the factbank and lu corpora: A comparative study. In Proceedings of the Second Workshop on Extra-Propositional Aspects of Meaning in Computational Semantics (ExProM 2015), pages 32–40, Denver, Colorado. Association for Computational Linguistics. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. 
Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144. Bishan Yang and Tom M. Mitchell. 2016. Joint extraction of events and entities within a document context. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 289–299. Association for Computational Linguistics.
2019
353
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3635–3644 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3635 Assessing the Ability of Self-Attention Networks to Learn Word Order Baosong Yang† Longyue Wang‡ Derek F. Wong† Lidia S. Chao† Zhaopeng Tu‡∗ †NLP2CT Lab, Department of Computer and Information Science, University of Macau [email protected], {derekfw,lidiasc}@umac.mo ‡Tencent AI Lab {vinnylywang,zptu}@tencent.com Abstract Self-attention networks (SAN) have attracted a lot of interests due to their high parallelization and strong performance on a variety of NLP tasks, e.g. machine translation. Due to the lack of recurrence structure such as recurrent neural networks (RNN), SAN is ascribed to be weak at learning positional information of words for sequence modeling. However, neither this speculation has been empirically confirmed, nor explanations for their strong performances on machine translation tasks when “lacking positional information” have been explored. To this end, we propose a novel word reordering detection task to quantify how well the word order information learned by SAN and RNN. Specifically, we randomly move one word to another position, and examine whether a trained model can detect both the original and inserted positions. Experimental results reveal that: 1) SAN trained on word reordering detection indeed has difficulty learning the positional information even with the position embedding; and 2) SAN trained on machine translation learns better positional information than its RNN counterpart, in which position embedding plays a critical role. Although recurrence structure make the model more universally-effective on learning word order, learning objectives matter more in the downstream tasks such as machine translation. 1 Introduction Self-attention networks (SAN, Parikh et al., 2016; Lin et al., 2017) have shown promising empirical results in a variety of natural language processing (NLP) tasks, such as machine translation (Vaswani et al., 2017), semantic role labelling (Strubell et al., 2018), and language representations (Devlin et al., 2019). The popularity of SAN lies in ∗Zhaopeng Tu is the corresponding author of the paper. This work was conducted when Baosong Yang was interning at Tencent AI Lab. its high parallelization in computation, and flexibility in modeling dependencies regardless of distance by explicitly attending to all the signals. Position embedding (Gehring et al., 2017) is generally deployed to capture sequential information for SAN (Vaswani et al., 2017; Shaw et al., 2018). Recent studies claimed that SAN with position embedding is still weak at learning word order information, due to the lack of recurrence structure that is essential for sequence modeling (Shen et al., 2018a; Chen et al., 2018; Hao et al., 2019). However, such claims are mainly based on a theoretical argument, which have not been empirically validated. In addition, this can not explain well why SAN-based models outperform their RNN counterpart in machine translation – a benchmark sequence modeling task (Vaswani et al., 2017). Our goal in this work is to empirically assess the ability of SAN to learn word order. We focus on asking the following research questions: Q1: Is recurrence structure obligate for learning word order, and does the conclusion hold in different scenarios (e.g., translation)? 
Q2: Is the model architecture the critical factor for learning word order in downstream tasks such as machine translation? Q3: Is position embedding powerful enough to capture word order information for SAN? We approach these questions with a novel probing task – word reordering detection (WRD), which aims to detect the positions of randomly reordered words in the input sentence. We compare SAN with RNN, as well as directional SAN (DiSAN, Shen et al., 2018a) that augments SAN with recurrence modeling. In this study, we focus on the encoders implemented with different architectures, so as to investigate their abilities to learn word order information of the input sequence. Figure 1: Illustration of (a) the position detector, where (b) the output layer is built upon a randomly initialized or pre-trained encoder. In this example, the word “hold” is moved to another place. The goal of this task is to predict the inserted position “I” and the original position “O” of “hold”. The encoders are trained on objectives like detection accuracy and machine translation, to study the influence of the learning objectives. Our experimental results reveal that: (Q1) SAN indeed underperforms the architectures with recurrence modeling (i.e. RNN and DiSAN) on the WRD task, while this conclusion does not hold in machine translation: SAN trained with the translation objective outperforms both RNN and DiSAN on detection accuracy; (Q2) learning objectives matter more than model architectures in downstream tasks such as machine translation; and (Q3) position encoding is good enough for SAN in machine translation, while DiSAN is a more universally-effective mechanism to learn word order information for SAN. Contributions The key contributions are: • We design a novel probing task along with the corresponding benchmark model, which can assess the abilities of different architectures to learn word order information (the data and code are released at: https://github.com/baosongyang/WRD). • Our study dispels the doubt about the inability of SAN to learn word order information in machine translation, indicating that the learning objective can greatly influence the suitability of an architecture for downstream tasks. 2 Word Reordering Detection Task In order to investigate the ability of self-attention networks to extract word order information, in this section we design an artificial task to evaluate the abilities of the examined models to detect erroneous word orders in a given sequence. Task Description Given a sentence X = {x_1, ..., x_i, ..., x_N}, we randomly pop a word x_i and insert it into another position j (1 ≤ i, j ≤ N and i ≠ j). The objective of this task is to detect both the position the word is popped out of (labeled as “O”) and the position at which it is inserted (labeled as “I”). As seen in the example in Figure 1 (a), the word “hold” is moved from the 2nd slot to the 4th slot. Accordingly, the 2nd and 4th slots are labelled as “O” and “I”, respectively. To detect word reordering exactly, the examined models have to learn to recognize both normal and abnormal word order in a sentence.
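To make the data construction concrete, the following snippet shows one way to generate a WRD instance by popping a random token and re-inserting it at a different position. Function names and the 0-based indexing are our own choices rather than the authors' released implementation.

```python
import random

def make_wrd_instance(tokens, rng=random):
    """Pop one token and insert it at a different position.
    Returns the reordered sentence plus the positions labelled 'O' (where the
    word was popped from) and 'I' (where it was re-inserted), following the
    Figure 1 example; indices are 0-based."""
    n = len(tokens)
    assert n >= 2, "need at least two tokens to reorder"
    i = rng.randrange(n)            # position the word is popped from
    j = rng.randrange(n - 1)        # target slot, sampled so that j != i
    if j >= i:
        j += 1
    reordered = tokens[:i] + tokens[i + 1:]   # pop x_i
    reordered.insert(j, tokens[i])            # re-insert it at position j
    return reordered, {"O": i, "I": j}

# Example with the Figure 1 sentence (the popped word and target slot are random here)
sent = "Bush hold a talk with Sharon .".split()
print(make_wrd_instance(sent))
```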
Position Detector Figure 1 (a) depicts the architecture of the position detector. Let the sequential representations H = {h_1, ..., h_N} be the output of each encoder described in Section 3, which are fed to the output layer (Figure 1 (b)). Since only one pair of “I” and “O” labels should be generated in the output sequence, we cast the task as a pointer detection problem (Vinyals et al., 2015). To this end, we turn to an output layer that is commonly used in reading comprehension tasks (Wang and Jiang, 2017; Du and Cardie, 2017), which aims to identify the start and end positions of the answer in the given text. (Contrary to reading comprehension, in which the start and end positions are ordered, “I” and “O” do not have to be ordered in our task; that is, the popped word can be inserted to either the left or the right of its original position.) The output layer consists of two sub-layers, which progressively predict the probabilities of each position being labelled as “I” and “O”. The probability distribution of the sequence being labelled as “I” is calculated as: P_I = SoftMax(U_I^⊤ tanh(W_I H)) ∈ R^N (1) where W_I ∈ R^{d×d} and U_I ∈ R^d are trainable parameters, and d is the dimensionality of H. The second sub-layer aims to locate the original position “O”, conditioned on the predicted popped word at position “I”. (We tried to predict the position of “O” without feeding the approximate embedding, i.e. predicting “I” and “O” individually; it slightly underperforms the current model.) To make the learning process differentiable, we follow Xu et al. (2017) and use the weighted sum of hidden states as the approximate embedding E of the popped word. The embedding subsequently serves as a query to attend to the sequence H, to find which position is most similar to the original position of the popped word. The probability distribution of the sequence being labelled as “O” is calculated as: E = P_I (W_Q H) ∈ R^d (2) P_O = ATT(E, W_K H) ∈ R^N (3) where {W_Q, W_K} ∈ R^{d×d} are trainable parameters that transform H to the query and key spaces respectively, and ATT(·) denotes dot-product attention (Luong et al., 2015; Vaswani et al., 2017). Training and Predicting In the training process, the objective is to minimize the cross entropy of the true inserted and original positions, which is the sum of the negative log probabilities of the ground-truth indices under the predicted distributions: L = −(Q_I^⊤ log P_I + Q_O^⊤ log P_O) (4) where {Q_I, Q_O} ∈ R^N are one-hot vectors indicating the ground-truth indices for the inserted and original positions. During prediction, we choose the positions with the highest probabilities from the distributions P_I and P_O as “I” and “O”, respectively. Considering the instance in Figure 1 (a), the 4th position is labelled as the inserted position “I”, and the 2nd position as the original position “O”.
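To make the output layer concrete, the following sketch implements Equations (1)–(4) in PyTorch. It is our own minimal re-implementation for illustration rather than the authors' released code; tensor shapes, the d_model default, and variable names are assumptions, and the encoder that produces H is taken as given.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PositionDetector(nn.Module):
    """Sketch of the WRD output layer (Eqs. 1-4); assumes an encoder elsewhere
    produces H with shape [batch, seq_len, d_model]."""
    def __init__(self, d_model=512):
        super().__init__()
        self.W_I = nn.Linear(d_model, d_model, bias=False)
        self.U_I = nn.Linear(d_model, 1, bias=False)
        self.W_Q = nn.Linear(d_model, d_model, bias=False)  # query transform
        self.W_K = nn.Linear(d_model, d_model, bias=False)  # key transform

    def forward(self, H):
        # Eq. (1): P_I = softmax(U_I^T tanh(W_I H))
        logits_I = self.U_I(torch.tanh(self.W_I(H))).squeeze(-1)       # [B, N]
        P_I = F.softmax(logits_I, dim=-1)
        # Eq. (2): approximate embedding of the popped word (P_I-weighted sum)
        E = torch.bmm(P_I.unsqueeze(1), self.W_Q(H)).squeeze(1)        # [B, d]
        # Eq. (3): unscaled dot-product attention of E over the keys gives P_O
        scores_O = torch.bmm(self.W_K(H), E.unsqueeze(-1)).squeeze(-1)  # [B, N]
        P_O = F.softmax(scores_O, dim=-1)
        return P_I, P_O

def wrd_loss(P_I, P_O, idx_I, idx_O, eps=1e-9):
    # Eq. (4): negative log probability of the gold inserted/original positions
    return -(torch.log(P_I.gather(1, idx_I.unsqueeze(1)) + eps).mean()
             + torch.log(P_O.gather(1, idx_O.unsqueeze(1)) + eps).mean())
```

At prediction time, the argmax of P_I and P_O gives the inserted and original positions, as described above.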
3 Experimental Setup In this study, we strove to empirically test whether SAN is indeed weak at learning positional information, and to explain the strong performance of SAN on machine translation. In response to the three research questions in Section 1, we adopt the following experimental settings: • Q1: We compare SAN with two recurrence architectures – RNN and DiSAN – on the WRD task, so as to quantify their abilities to learn word order (Section 3.1). • Q2: To compare the effects of learning objectives and model architectures, we train each encoder under two scenarios, i.e. trained on objectives like WRD accuracy and on machine translation (Section 3.2). • Q3: The strength of position encoding is appraised by ablating position encoding and recurrence modeling for SAN. 3.1 Encoder Setting Figure 2: Illustration of (a) RNN; (b) SAN; and (c) DiSAN. Colored arrows denote parallel operations. RNN and SAN are commonly used to produce sentence representations in NLP tasks (Cho et al., 2014; Lin et al., 2017; Chen et al., 2018). As shown in Figure 2, we investigate three architectures in this study. Mathematically, let X = {x_1, . . . , x_N} ∈ R^{d×N} be the embedding matrix of the input sentence, and H = {h_1, . . . , h_N} ∈ R^{d×N} be the output sequence of representations. • RNN sequentially produces each state: h_n = f(h_{n−1}, x_n), (5) where f(·) is a GRU (Cho et al., 2014) in this study. RNN is particularly hard to parallelize due to its inherent dependence on the previous state h_{n−1}. • SAN (Lin et al., 2017) produces each hidden state in a parallel fashion: h_n = ATT(q_n, K)V, (6) where the query q_n ∈ R^d and the keys and values (K, V) ∈ R^{d×N} are transformed from X. To imitate the order of the sequence, Vaswani et al. (2017) deployed position encodings (Gehring et al., 2017) into SAN.
We use Adam (Kingma and Ba, 2015) with β1 = 0.9, β2 = 0.98 and ϵ = 10−9. The learning rate linearly warms up over the first 4,000 steps, and decreases thereafter proportionally to the inverse square root of the step number. We use a dropout rate of 0.1 on all layers. 3.3 Data Machine Translation We pre-train NMT models on the benchmark WMT14 English⇒German (En⇒De) data, which consists of 4.5M sentence pairs. The validation and test sets are newstest2013 and newstest2014, respectively. To demonstrate the universality of the findings in this study, we also conduct experiments on WAT17 English⇒Japanese (En⇒Ja) data. Specifically, we follow Morishita et al. (2017) to use the first two sections of WAT17 dataset as the training data, which approximately consists of 2.0M sentence pairs. We use newsdev2017 as the validation set and newstest2017 as the test set. Word Reordering Detection We conduct this task on the English sentences, which are extracted from the source side of WMT14 En⇒De data with maximum length to 80. For each sentence in different sets (i.e. training, validation, and test sets), we construct an instance by randomly moving a word to another position. Finally we construct 7M, 10K and 10K samples for training, validating and testing, respectively. Note that a sentence can be sampled multiple times, thus each dataset in the WRD data contains more instances than that in the machine translation data. All the English and German data are tokenized using the scripts in Moses. The Japanese sentences are segmented by the word segmentation toolkit KeTea (Neubig et al., 2011). To reduce the vocabulary size, all the sentences are processed by byte-pair encoding (BPE) (Sennrich et al., 2016) with 32K merge operations for all the data. 4 Experimental Results We return to the central questions originally posed, that is, whether SAN is indeed weak at learning positional information. Using the above experimental design, we give the following answers: A1: SAN-based encoder trained on the WRD data is indeed harder to learn positional information than the recurrence architectures (Section 4.1), while there is no evidence that 3639 Models Insert Original Both RNN 78.4 73.4 68.2 SAN 73.2 66.0 60.1 DiSAN 79.6 70.1 68.0 Table 1: Accuracy on the WRD task. “Insert” and “Original” denotes the accuracies of detecting the inserted and original positions respectively, and “Both” denotes detecting both positions. Figure 3: Learning curve of WRD encoders on WRD task. Y-axis denotes the accuracy on the validation set. Obviously, SAN has slower convergence. SAN-based NMT encoders learns less word order information (Section 4.2); A2: The learning objective plays a more crucial role on learning word order than the architecture in downstream tasks (Section 4.3); A3: While the position encoding is powerful enough to capture word order information in machine translation, DiSAN is a more universally-effective mechanism (Table 2). 4.1 Results on WRD Encoders We first check the performance of each WRD encoder on the proposed WRD task from two aspects: 1) WRD accuracy; and 2) learning ability. WRD Accuracy The detection results are concluded in Table 1. As seen, both RNN and DiSAN significantly outperform SAN on our task, indicating that the recurrence structure (RNN) exactly performs better than parallelization (SAN) on capturing word order information in a sentence. Nevertheless, the drawback can be alleviated by applying directional attention functions. 
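For reference, the three columns of Table 1 ("Insert", "Original", and "Both") can be computed from predicted and gold position pairs as in the short sketch below; the function name and data layout are our own, since the evaluation script is not spelled out in the paper.

```python
def wrd_accuracy(preds, golds):
    """preds/golds: lists of (inserted_pos, original_pos) pairs, one per instance.
    Returns the 'Insert', 'Original', and 'Both' accuracies of Table 1 (in %)."""
    n = len(golds)
    insert_ok = sum(p[0] == g[0] for p, g in zip(preds, golds))
    original_ok = sum(p[1] == g[1] for p, g in zip(preds, golds))
    both_ok = sum(p == g for p, g in zip(preds, golds))
    return {"Insert": 100.0 * insert_ok / n,
            "Original": 100.0 * original_ok / n,
            "Both": 100.0 * both_ok / n}
```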
The comparable result between DiSAN and RNN confirms the hypothesis by Shen et al. (2018a) and Devlin et al. (2019) that directional SAN exactly improves the ability of SAN to learn word order. The consistency between prior studies and our results verified the reliability of the proposed WRD task. Learning Curve We visualize the learning curve of the training. As shown in Figure 3, SAN has much slower convergence than others, showing that SAN has a harder time learning word order information than RNN and DiSAN. This is consistent with our intuition that the parallel structure is more difficult to learn word order information than those models with a sequential process. Considering DiSAN, although it has slightly slower learning speed at the early stage of the training, it is able to achieve comparable accuracy to RNN at the mid and late phases of the training. 4.2 Results on Pre-Trained NMT Encoders We investigate whether the SAN indeed lacks the ability to learn word order information under machine translation context. The results are concluded in Table 2. We first report the effectiveness of the compared models on translation tasks. For En-De translation, SAN outperforms RNN, which is consistent with the results reported in (Chen et al., 2018). The tendency is universal on En-Ja which is a distant language pair (Bosch and Sebasti´an-Gall´es, 2001; Isozaki et al., 2010). Moreover, DiSAN incrementally improves the translation quality, demonstrating that model directional information benefits to the translation quality. The consistent translation performances make the following evaluation on WRD accuracy convincing. Concerning the performances of NMT encoders on the WRD task: SAN-based NMT Encoder Performs Better It is surprising to see that SAN yields even higher accuracy on WRD task than other pre-trained NMT encoders, despite its lower translation qualities comparing with DiSAN. The results not only dispel the doubt on the inablity of SAN-based encoder to learn word order in machine translation, but also demonstrate that SAN learns to retain more features with respect to word order during the training of machine translation. Learning Objectives Matter More In addition, both the NMT encoders underperform the WRD encoders on detection task across models and language pairs.4 The only difference between the 4The En⇒Ja pre-trained encoders yield lower accuracy on WRD task than that of En⇒De pre-trained encoders. We 3640 Model Translation Detection En⇒De En⇒Ja En⇒De Enc. En⇒Ja Enc. WRD Enc. RNN 26.8 42.9 33.9 29.0 68.2 SAN 27.3 43.6 41.6 32.8 60.1 - Pos Emb 11.5 – 0.3 – 0.3 DiSAN 27.6 43.7 39.7 31.2 68.0 - Pos Emb 27.0 43.1 40.1 31.0 62.8 Table 2: Performances of NMT encoders pre-trained on WMT14 En⇒De and WAT17 En⇒Ja data. “Translation” denotes translation quality measured in BLEU scores, while “Detection” denotes the accuracies on WRD task. “En⇒De Enc.” denotes NMT encoder trained with translation objective on the En⇒De data. We also list the detection accuracies of WRD encoders (“WRD Enc.”) for comparison. “- Pos Emb” indicates removing positional embeddings from SAN- or DiSAN-based encoder. Surprisingly, SAN-based NMT encoder achieves the best accuracy on the WRD task, which contrasts with the performances of WRD encoders (the last column). (a) WRD Encoder (b) En⇒De NMT Encoder (c) En⇒Ja NMT Encoder Figure 4: Accuracy of pre-trained NMT encoders according to various distances between the positions of “O” and “I” (X-axis). 
As seen, the performance of each WRD encoder (a) is stable across various distances, while the pre-trained (b) En⇒De and (c) En⇒Ja encoders consistently get lower accuracy with the increasing of distance. two kinds of encoders is the learning objective. This raises a hypothesis that the learning objective sometimes severs as a more critical factor than the model architecture on modeling word order. Position Encoding VS. Recurrence Modeling In order to assess the importance of position encoding, we redo the experiments by removing the position encoding from SAN and DiSAN (“Pos Emb”). Clearly, SAN-based encoder without position embedding fails on both machine translation and our WRD task, indicating the necessity of position encoding on learning word order. It is encourage to see that SAN yields higher BLEU score and detection accuracy than “DiSANPos Emb” in machine translation scenario. It means that position embedding is more suitable on capture word order information for machine transattribute this to the difference between the source sentences in pre-training corpus (En-Ja) and that of WRD data (from En-De dataset). Despite of this, the tendency of results are consistent across language pairs. lation than modeling recurrence for SAN. Considering both two scenarios, DiSAN-based encoders achieve comparable detection accuracies to the best models, revealing its effectiveness and universality on learning word order. 4.3 Analysis In response to above results, we provide further analyses to verify our hypothesis on NMT encoders. We discuss three questions in this section: 1) Does learning objective indeed affect the extracting of word order information; 2) How SAN derives word order information from position encoding; and 3) Whether more word order information retained is useful for machine translation. Accuracy According to Distance We further investigate the accuracy of WRD task according to various distance between the positions of word is popped out and inserted. As shown in Figure 4 (a), WRD encoders marginally reduce the performance with the increasing of distances. How3641 (a) En⇒De NMT encoder (b) En⇒Ja NMT encoder Figure 5: Performance of each layer from (a) pre-trained En⇒De encoder and (b) pre-trained En⇒Ja encoder on WRD task. The evaluation are conducted on the test set. Clearly, the accuracy of SAN gradually increased with the stacking of layers and consistently outperform that of other models across layers. ever, this kind of stability is destroyed when we pre-train each encoder with a learning objective of machine translation. As seen in Figure 4 (b) and (c), the performance of pre-trained NMT encoders obviously became worse on long-distance cases across language pairs and model variants. This is consistent with prior observation on NMT systems that both RNN and SAN fail to fully capture long-distance dependencies (Tai et al., 2015; Yang et al., 2017; Tang et al., 2018). Regarding to information bottleneck principle (Tishby and Zaslavsky, 2015; Alemi et al., 2016), our NMT models are trained to maximally maintain the relevant information between source and target, while abandon irrelevant features in the source sentence, e.g. portion of word order information. Different NLP tasks have distinct requirements on linguistic information (Conneau et al., 2018). For machine translation, the local patterns (e.g. 
phrases) matter more (Luong et al., 2015; Yang et al., 2018, 2019), while long-distance word order information plays a relatively trivial role in understanding the meaning of a source sentence. Recent studies also pointed out that abandoning irrelevant features in source sentence benefits to some downstream NLP tasks (Lei et al., 2016; Yu et al., 2017; Shen et al., 2018b). An immediate consequence of such kind of data process inequality (Schumacher and Nielsen, 1996) is that information about word order that is lost in encoder cannot be recovered in the detector, and consequently drops the performance on our WRD task. The results verified that the learning objective indeed affects more on learning word order information than model architecture in our case. Accuracy According to Layer Several researchers may doubt that the parallel structure of SAN may lead to failure on capturing word order information at higher layers, since the position embeddings are merely injected at the input layer. Accordingly, we further probe the representations at each layer on our WRD task to explore how does SAN learn word order information. As seen in Figure 5, SAN achieves better performance than other NMT encoders on the proposed WRD tasks across almost all the layers. The result dispels the doubt on the inability of position encoding and confirms the speculation by Vaswani et al. (2017) and Shaw et al. (2018) who suggested that SAN can profit from the use of residual network which propagates the positional information to higher layers. Moreover, both SAN and RNN gradually increase their performance on our task with the stacking of layers. The same tendency demonstrates that position encoding is able to provide same learning manner to that of recurrent structure with respect to word order for SAN. Both the results confirm the strength of position encoding to bring word order properties into SAN. We strove to come up with the reason why SAN captured even more word order information in machine translation task. Yin et al. (2017) and Tran et al. (2018) found that the approach with a recurrence structure (e.g. RNN) has an easier time learning syntactic information than that of models with a parallel structure (e.g. CNN, SAN). Inspired by their findings, we argue that SAN tries to partially countervail its disadvantage in parallel structure by reserving more word order information, thus to help for the encoding of deeper 3642 ations (×10K) 30 40 50 60 RNN SAN DiSAN BLEU Drop -6 -5 -4 -3 En-De En-Ja RNN DiSAN SAN Figure 6: The differences of translation performance when the pre-trained NMT models are fed with the original (“Golden”) and reordered (“Reorder”) source sentences. As seen, SAN and DiSAN perform better on handling noises in terms of erroneous word order. linguistic properties required by machine translation. Recent studies on multi-layer learning shown that different layers tend to learn distinct linguistic information (Peters et al., 2018; Raganato and Tiedemann, 2018; Li et al., 2019). The better accuracy achieved by SAN across layers indicates that SAN indeed tries to preserve more word order information during the learning of other linguistic properties for translation purpose. Effect of Wrong Word Order Noises For humans, a small number of erroneous word orders in a sentence usually does not affect the comprehension. For example, we can understand the meaning of English sentence “Dropped the boy the ball.”, despite its erroneous word order. 
It is intriguing whether NMT model has the ability to tackle the wrong order noises. As a results, we make erroneous word order noises on English-German development set by moving one word to another position, and evaluate the drop of the translation quality of each model. As listed in Figure 6, SAN and DiSAN yield less drops on translation quality than their RNN counterpart, demonstrating the effectiveness of self-attention on ablating wrong order noises. We attribute this to the fact that models (e.g. RNN-based models) will not learn to be robust to errors since they are never observed (Sperber et al., 2017; Cheng et al., 2018). On the contrary, since SAN-based NMT encoder is good at recognizing and reserving anomalous word order information under NMT context, it may raise the ability of decoder on handling noises occurred in the training set, thus to be more robust in translating sentences with anomalous word order. 5 Related Work Exploring Properties of SAN SAN has yielded strong empirical performance in a variety of NLP tasks (Vaswani et al., 2017; Tan et al., 2018; Li et al., 2018; Devlin et al., 2019). In response to these impressive results, several studies have emerged with the goal of understanding SAN on many properties. For example, Tran et al. (2018) compared SAN and RNN on language inference tasks, and pointed out that SAN is weak at learning hierarchical structure than its RNN counterpart. Moreover, Tang et al. (2018) conducted experiments on subject-verb agreement and word sense disambiguation tasks. They found that SAN is good at extracting semantic properties, while underperforms RNN on capturing long-distance dependencies. This is in contrast to our intuition that SAN is good at capturing long-distance dependencies. In this work, we focus on exploring the ability of SAN on modeling word order information. Probing Task on Word Order To open the black box of networks, probing task is used as a first step which facilitates comparing different models on a much finer-grained level. Most work has focused on probing fixed-sentence encoders, e.g. sentence embedding (Adi et al., 2017; Conneau et al., 2018). Among them, Adi et al. (2017) and Conneau et al. (2018) introduced to probe the sensitivity to legal word orders by detecting whether there exists a pair of permuted word in a sentence by giving its sentence embedding. However, analysis on sentence encodings may introduce confounds, making it difficult to infer whether the relevant information is encoded within the specific position of interest or rather inferred from diffuse information elsewhere in the sentence (Tenney et al., 2019). In this study, we directly probe the token representations for word- and phrase-level properties, which has been widely used for probing token-level representations learned in neural machine translation systems, e.g. part-of-speech, semantic tags, morphology as well as constituent structure (Shi et al., 2016; Belinkov et al., 2017; Blevins et al., 2018). 6 Conclusion In this paper, we introduce a novel word reordering detection task which can probe the ability of a model to extract word order information. With the help of the proposed task, we evaluate RNN, 3643 SAN and DiSAN upon Transformer framework to empirically test the theoretical claims that SAN lacks the ability to learn word order. The results reveal that RNN and DiSAN exactly perform better than SAN on extracting word order information in the case they are trained individually for our task. 
However, there is no evidence that SAN learns less word order information under the machine translation context. Our further analyses for the encoders pretrained on the NMT data suggest that 1) the learning objective sometimes plays a crucial role on learning a specific feature (e.g. word order) in a downstream NLP task; 2) modeling recurrence is universally-effective to learn word order information for SAN; and 3) RNN is more sensitive on erroneous word order noises in machine translation system. These observations facilitate the understanding of different tasks and model architectures in finer-grained level, rather than merely in overall score (e.g. BLEU). As our approach is not limited to the NMT encoders, it is also interesting to explore how do the models trained on other NLP tasks learn word order information. Acknowledgments The work was partly supported by the National Natural Science Foundation of China (Grant No. 61672555), the Joint Project of Macao Science and Technology Development Fund and National Natural Science Foundation of China (Grant No. 045/2017/AFJ) and the Multi-Year Research Grant from the University of Macau (Grant No. MYRG2017-00087-FST). We thank the anonymous reviewers for their insightful comments. References Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. 2017. Fine-grained Analysis of Sentence Embeddings Using Auxiliary Prediction Tasks. In ICLR. Alexander A Alemi, Ian Fischer, Joshua V Dillon, and Kevin Murphy. 2016. Deep Variational Information Bottleneck. In ICLR. Yonatan Belinkov, Nadir Durrani, Fahim Dalvi, Hassan Sajjad, and James Glass. 2017. What do Neural Machine Translation Models Learn about Morphology? In ACL. Terra Blevins, Omer Levy, and Luke Zettlemoyer. 2018. Deep RNNs Encode Soft Hierarchical Syntax. In ACL. Laura Bosch and N´uria Sebasti´an-Gall´es. 2001. Evidence of Early Language Discrimination Abilities in Infants from Bilingual Environments. Infancy, 2(1):29–49. Mia Xu Chen, Orhan Firat, Ankur Bapna, Melvin Johnson, Wolfgang Macherey, Goerge Foster, Llion Jones, Parmar Niki, Mike Schuster, Zhifeng Chen, Yonghui Wu, and Macduff Hughes. 2018. The Best of Both Worlds: Combining Recent Advances in Neural Machine Translation. In ACL. Yong Cheng, Zhaopeng Tu, Fandong Meng, Junjie Zhai, and Yang Liu. 2018. Towards Robust Neural Machine Translation. In ACL. Kyunghyun Cho, Bart Van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning Phrase Representations using RNN EncoderDecoder for Statistical Machine Translation. In EMNLP. Alexis Conneau, German Kruszewski, Guillaume Lample, Lo¨ıc Barrault, and Marco Baroni. 2018. What You Can Cram into A Single $&!#∗Vector: Probing Sentence Embeddings for Linguistic Properties. In ACL. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of Deep Bidirectional Transformers for Language Understanding. In NAACL. Xinya Du and Claire Cardie. 2017. Identifying Where to Focus in Reading Comprehension for Neural Question Generation. In EMNLP. Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. 2017. Convolutional Sequence to Sequence Learning. In ICML. Jie Hao, Xing Wang, Baosong Yang, Longyue Wang, Jinfeng Zhang, and Zhaopeng Tu. 2019. Modeling Recurrence for Transformer. In NAACL. Hideki Isozaki, Tsutomu Hirao, Kevin Duh, Katsuhito Sudoh, and Hajime Tsukada. 2010. Automatic Evaluation of Translation Quality for Distant Language Pairs. In EMNLP. 
Diederik P Kingma and Jimmy Ba. 2015. Adam: A Method for Stochastic Optimization. ICLR. Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2016. Rationalizing Neural Predictions. In EMNLP. Jian Li, Zhaopeng Tu, Baosong Yang, Michael R Lyu, and Tong Zhang. 2018. Multi-Head Attention with Disagreement Regularization. In EMNLP. Jian Li, Baosong Yang, Zi-Yi Dou, Xing Wang, Michael R. Lyu, and Zhaopeng Tu. 2019. Information Aggregation for Multi-Head Attention with Routing-by-Agreement. In NAACL. 3644 Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. 2017. A Structured Self-attentive Sentence Embedding. In ICLR. Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective Approaches to Attentionbased Neural Machine Translation. In EMNLP. Makoto Morishita, Jun Suzuki, and Masaaki Nagata. 2017. NTT Neural Machine Translation Systems at WAT 2017. In WAT. Graham Neubig, Yosuke Nakata, and Shinsuke Mori. 2011. Pointwise Prediction for Robust, Adaptable Japanese Morphological Analysis. In ACL. Ankur Parikh, Oscar T¨ackstr¨om, Dipanjan Das, and Jakob Uszkoreit. 2016. A Decomposable Attention Model for Natural Language Inference. In EMNLP. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep Contextualized Word Representations. In NAACL. Alessandro Raganato and J¨org Tiedemann. 2018. An Analysis of Encoder Representations in Transformer-Based Machine Translation. In EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP. Benjamin Schumacher and Michael A Nielsen. 1996. Quantum Data Processing and Error Correction. Physical Review A, 54(4):2629. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural Machine Translation of Rare Words with Subword Units. In ACL. Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-Attention with Relative Position Representations. In NAACL. Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Shirui Pan, and Chengqi Zhang. 2018a. DiSAN: Directional Self-attention Network for RNN/CNN-free Language Understanding. In AAAI. Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Sen Wang, and Chengqi Zhang. 2018b. Reinforced Selfattention Network: A Hybrid of Hard and Soft Attention for Sequence Modeling. In IJCAI. Xing Shi, Inkit Padhi, and Kevin Knight. 2016. Does String-Based Neural MT Learn Source Syntax? In EMNLP. Matthias Sperber, Jan Niehues, and Alex Waibel. 2017. Toward Robust Neural Machine Translation for Noisy Input Sequences. In IWSLT. Emma Strubell, Patrick Verga, Daniel Andor, David Weiss, and Andrew McCallum. 2018. Linguistically-Informed Self-Attention for Semantic Role Labeling. In EMNLP. Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks. In ACL. Zhixing Tan, Mingxuan Wang, Jun Xie, Yidong Chen, and Xiaodong Shi. 2018. Deep Semantic Role Labeling with Self-attention. In AAAI. Gongbo Tang, Mathias M¨uller, Annette Rios, and Rico Sennrich. 2018. Why Self-Attention? A Targeted Evaluation of Neural Machine Translation Architectures. In EMNLP. Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R Thomas McCoy, Najoung Kim, Benjamin Van Durme, Sam Bowman, Dipanjan Das, et al. 2019. What Do You Learn from Context? Probing for Sentence Structure in Contextualized Word Representations. In ICLR. Naftali Tishby and Noga Zaslavsky. 2015. Deep Learning and The Information Bottleneck Principle. In ITW. 
Ke Tran, Arianna Bisazza, and Christof Monz. 2018. The Importance of Being Recurrent for Modeling Hierarchical Structure. In EMNLP. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention Is All You Need. In NIPS. Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer Networks. In NIPS. Shuohang Wang and Jing Jiang. 2017. Machine Comprehension Using Match-LSTM and Answer Pointer. In ICLR. Zhen Xu, Bingquan Liu, Baoxun Wang, SUN Chengjie, Xiaolong Wang, Zhuoran Wang, and Chao Qi. 2017. Neural Response Generation via GAN with An Approximate Embedding Layer. In EMNLP. Baosong Yang, Zhaopeng Tu, Derek F. Wong, Fandong Meng, Lidia S. Chao, and Tong Zhang. 2018. Modeling Localness for Self-Attention Networks. In EMNLP. Baosong Yang, Longyue Wang, Derek F. Wong, Lidia S. Chao, and Zhaopeng Tu. 2019. Convolutional Self-Attention Networks. In NAACL. Baosong Yang, Derek F Wong, Tong Xiao, Lidia S Chao, and Jingbo Zhu. 2017. Towards Bidirectional Hierarchical Representations for Attentionbased Neural Machine Translation. In EMNLP. Wenpeng Yin, Katharina Kann, Mo Yu, and Hinrich Sch¨utze. 2017. Comparative Study of CNN and RNN for Natural Language Processing. arXiv preprint:1702.01923. Adams Wei Yu, Hongrae Lee, and Quoc Le. 2017. Learning to Skim Text. In ACL.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3645–3650 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3645 Energy and Policy Considerations for Deep Learning in NLP Emma Strubell Ananya Ganesh Andrew McCallum College of Information and Computer Sciences University of Massachusetts Amherst {strubell, aganesh, mccallum}@cs.umass.edu Abstract Recent progress in hardware and methodology for training neural networks has ushered in a new generation of large networks trained on abundant data. These models have obtained notable gains in accuracy across many NLP tasks. However, these accuracy improvements depend on the availability of exceptionally large computational resources that necessitate similarly substantial energy consumption. As a result these models are costly to train and develop, both financially, due to the cost of hardware and electricity or cloud compute time, and environmentally, due to the carbon footprint required to fuel modern tensor processing hardware. In this paper we bring this issue to the attention of NLP researchers by quantifying the approximate financial and environmental costs of training a variety of recently successful neural network models for NLP. Based on these findings, we propose actionable recommendations to reduce costs and improve equity in NLP research and practice. 1 Introduction Advances in techniques and hardware for training deep neural networks have recently enabled impressive accuracy improvements across many fundamental NLP tasks (Bahdanau et al., 2015; Luong et al., 2015; Dozat and Manning, 2017; Vaswani et al., 2017), with the most computationally-hungry models obtaining the highest scores (Peters et al., 2018; Devlin et al., 2019; Radford et al., 2019; So et al., 2019). As a result, training a state-of-the-art model now requires substantial computational resources which demand considerable energy, along with the associated financial and environmental costs. Research and development of new models multiplies these costs by thousands of times by requiring retraining to experiment with model architectures and hyperparameters. Whereas a decade ago most Consumption CO2e (lbs) Air travel, 1 person, NY↔SF 1984 Human life, avg, 1 year 11,023 American life, avg, 1 year 36,156 Car, avg incl. fuel, 1 lifetime 126,000 Training one model (GPU) NLP pipeline (parsing, SRL) 39 w/ tuning & experiments 78,468 Transformer (big) 192 w/ neural arch. search 626,155 Table 1: Estimated CO2 emissions from training common NLP models, compared to familiar consumption.1 NLP models could be trained and developed on a commodity laptop or server, many now require multiple instances of specialized hardware such as GPUs or TPUs, therefore limiting access to these highly accurate models on the basis of finances. Even when these expensive computational resources are available, model training also incurs a substantial cost to the environment due to the energy required to power this hardware for weeks or months at a time. Though some of this energy may come from renewable or carbon credit-offset resources, the high energy demands of these models are still a concern since (1) energy is not currently derived from carbon-neural sources in many locations, and (2) when renewable energy is available, it is still limited to the equipment we have to produce and store it, and energy spent training a neural network might better be allocated to heating a family’s home. 
It is estimated that we must cut carbon emissions by half over the next decade to deter escalating rates of natural disaster, and based on the estimated CO2 emissions listed in Table 1, 1Sources: (1) Air travel and per-capita consumption: https://bit.ly/2Hw0xWc; (2) car lifetime: https: //bit.ly/2Qbr0w1. 3646 model training and development likely make up a substantial portion of the greenhouse gas emissions attributed to many NLP researchers. To heighten the awareness of the NLP community to this issue and promote mindful practice and policy, we characterize the dollar cost and carbon emissions that result from training the neural networks at the core of many state-of-the-art NLP models. We do this by estimating the kilowatts of energy required to train a variety of popular off-the-shelf NLP models, which can be converted to approximate carbon emissions and electricity costs. To estimate the even greater resources required to transfer an existing model to a new task or develop new models, we perform a case study of the full computational resources required for the development and tuning of a recent state-of-the-art NLP pipeline (Strubell et al., 2018). We conclude with recommendations to the community based on our findings, namely: (1) Time to retrain and sensitivity to hyperparameters should be reported for NLP machine learning models; (2) academic researchers need equitable access to computational resources; and (3) researchers should prioritize developing efficient models and hardware. 2 Methods To quantify the computational and environmental cost of training deep neural network models for NLP, we perform an analysis of the energy required to train a variety of popular offthe-shelf NLP models, as well as a case study of the complete sum of resources required to develop LISA (Strubell et al., 2018), a state-of-the-art NLP model from EMNLP 2018, including all tuning and experimentation. We measure energy use as follows. We train the models described in §2.1 using the default settings provided, and sample GPU and CPU power consumption during training. Each model was trained for a maximum of 1 day. We train all models on a single NVIDIA Titan X GPU, with the exception of ELMo which was trained on 3 NVIDIA GTX 1080 Ti GPUs. While training, we repeatedly query the NVIDIA System Management Interface2 to sample the GPU power consumption and report the average over all samples. To sample CPU power consumption, we use Intel’s Running Average Power Limit interface.3 2nvidia-smi: https://bit.ly/30sGEbi 3RAPL power meter: https://bit.ly/2LObQhV Consumer Renew. Gas Coal Nuc. China 22% 3% 65% 4% Germany 40% 7% 38% 13% United States 17% 35% 27% 19% Amazon-AWS 17% 24% 30% 26% Google 56% 14% 15% 10% Microsoft 32% 23% 31% 10% Table 2: Percent energy sourced from: Renewable (e.g. hydro, solar, wind), natural gas, coal and nuclear for the top 3 cloud compute providers (Cook et al., 2017), compared to the United States,4 China5 and Germany (Burger, 2019). We estimate the total time expected for models to train to completion using training times and hardware reported in the original papers. We then calculate the power consumption in kilowatt-hours (kWh) as follows. Let pc be the average power draw (in watts) from all CPU sockets during training, let pr be the average power draw from all DRAM (main memory) sockets, let pg be the average power draw of a GPU during training, and let g be the number of GPUs used to train. 
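As a concrete illustration of this measurement setup, the sketch below polls nvidia-smi for instantaneous per-GPU power draw and averages the samples, then converts the averaged draws into kWh and pounds of CO2 using the PUE and U.S. emission constants introduced in Equations (1)–(2) of the next paragraph. The polling interval and helper names are illustrative assumptions; RAPL-based CPU/DRAM sampling is omitted for brevity.

```python
import subprocess
import time

def average_gpu_power(num_samples=60, interval_s=1.0):
    """Average instantaneous power draw (watts) of a single GPU,
    sampled via `nvidia-smi --query-gpu=power.draw`."""
    readings = []
    for _ in range(num_samples):
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=power.draw",
             "--format=csv,noheader,nounits"], text=True)
        # one line of output per GPU, e.g. "231.45"
        readings.extend(float(line) for line in out.splitlines())
        time.sleep(interval_s)
    return sum(readings) / len(readings)

def co2_pounds(p_cpu_w, p_dram_w, p_gpu_w, n_gpus, hours):
    """kWh with PUE 1.58 (Eq. 1), then lbs CO2 at 0.954 lbs/kWh (Eq. 2)."""
    kwh = 1.58 * hours * (p_cpu_w + p_dram_w + n_gpus * p_gpu_w) / 1000.0
    return 0.954 * kwh
```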
We estimate total power consumption as combined GPU, CPU and DRAM consumption, then multiply this by the Power Usage Effectiveness (PUE) coefficient, which accounts for the additional energy required to support the compute infrastructure (mainly cooling). We use a PUE coefficient of 1.58, the 2018 global average for data centers (Ascierto, 2018). The total power consumption pt (in kWh) for t hours of training is then given by: pt = 1.58 t (pc + pr + g·pg) / 1000 (1) The U.S. Environmental Protection Agency (EPA) provides the average CO2 produced (in pounds per kilowatt-hour) for power consumed in the U.S. (EPA, 2018), which we use to convert power to estimated CO2 emissions: CO2e = 0.954 pt (2) This conversion takes into account the relative proportions of different energy sources (primarily natural gas, coal, nuclear and renewable) consumed to produce energy in the United States. Table 2 lists the relative energy sources for China, Germany and the United States compared to the top three cloud service providers. The U.S. breakdown of energy is comparable to that of the most popular cloud compute service, Amazon Web Services, so we believe this conversion provides a reasonable estimate of CO2 emissions per kilowatt-hour of compute energy used. 4U.S. Dept. of Energy: https://bit.ly/2JTbGnI 5China Electricity Council; trans. China Energy Portal: https://bit.ly/2QHE5O3 2.1 Models We analyze four models, the computational requirements of which we describe below. All models have code freely available online, which we used out-of-the-box. For more details on the models themselves, please refer to the original papers. Transformer. The Transformer (T2T) model (Vaswani et al., 2017) is an encoder-decoder architecture primarily recognized for efficient and accurate machine translation. The encoder and decoder each consist of 6 stacked layers of multi-head self-attention. Vaswani et al. (2017) report that the Transformer base model (T2Tbase; 65M parameters) was trained on 8 NVIDIA P100 GPUs for 12 hours, and the Transformer big model (T2Tbig; 213M parameters) was trained for 3.5 days (84 hours; 300k steps). This model is also the basis for recent work on neural architecture search (NAS) for machine translation and language modeling (So et al., 2019), and the NLP pipeline that we study in more detail in §4.2 (Strubell et al., 2018). So et al. (2019) report that their full architecture search ran for a total of 979M training steps, and that their base model requires 10 hours to train for 300k steps on one TPUv2 core. This equates to 32,623 hours of TPU or 274,120 hours on 8 P100 GPUs. ELMo. The ELMo model (Peters et al., 2018) is based on stacked LSTMs and provides rich word representations in context by pre-training on a large amount of data using a language modeling objective. Replacing context-independent pre-trained word embeddings with ELMo has been shown to increase performance on downstream tasks such as named entity recognition, semantic role labeling, and coreference. Peters et al. (2018) report that ELMo was trained on 3 NVIDIA GTX 1080 GPUs for 2 weeks (336 hours). BERT. The BERT model (Devlin et al., 2019) provides a Transformer-based architecture for building contextual representations similar to ELMo, but trained with a different language modeling objective. BERT substantially improves accuracy on tasks requiring sentence-level representations such as question answering and natural language inference. Devlin et al.
(2019) report that the BERT base model (BERTbase; 110M parameters) was trained on 16 TPU chips for 4 days (96 hours). NVIDIA reports that they can train a BERT model in 3.3 days (79.2 hours) using 4 DGX-2H servers, totaling 64 Tesla V100 GPUs (Forster et al., 2019). GPT-2. This model is the latest edition of OpenAI’s GPT general-purpose token encoder, also based on Transformer-style self-attention and trained with a language modeling objective (Radford et al., 2019). By training a very large model on massive data, Radford et al. (2019) show high zero-shot performance on question answering and language modeling benchmarks. The large model described in Radford et al. (2019) has 1542M parameters and is reported to require 1 week (168 hours) of training on 32 TPUv3 chips. 6 3 Related work There is some precedent for work characterizing the computational requirements of training and inference in modern neural network architectures in the computer vision community. Li et al. (2016) present a detailed study of the energy use required for training and inference in popular convolutional models for image classification in computer vision, including fine-grained analysis comparing different neural network layer types. Canziani et al. (2016) assess image classification model accuracy as a function of model size and gigaflops required during inference. They also measure average power draw required during inference on GPUs as a function of batch size. Neither work analyzes the recurrent and self-attention models that have become commonplace in NLP, nor do they extrapolate power to estimates of carbon and dollar cost of training. Analysis of hyperparameter tuning has been performed in the context of improved algorithms for hyperparameter search (Bergstra et al., 2011; Bergstra and Bengio, 2012; Snoek et al., 2012). To our knowledge there exists to date no analysis of the computation required for R&D and hyperparameter tuning of neural network models in NLP. 6Via the authors on Reddit. 7GPU lower bound computed using pre-emptible P100/V100 U.S. resources priced at $0.43–$0.74/hr, upper bound uses on-demand U.S. resources priced at $1.46– $2.48/hr. We similarly use pre-emptible ($1.46/hr–$2.40/hr) and on-demand ($4.50/hr–$8/hr) pricing as lower and upper bounds for TPU v2/3; cheaper bulk contracts are available. 3648 Model Hardware Power (W) Hours kWh·PUE CO2e Cloud compute cost T2Tbase P100x8 1415.78 12 27 26 $41–$140 T2Tbig P100x8 1515.43 84 201 192 $289–$981 ELMo P100x3 517.66 336 275 262 $433–$1472 BERTbase V100x64 12,041.51 79 1507 1438 $3751–$12,571 BERTbase TPUv2x16 — 96 — — $2074–$6912 NAS P100x8 1515.43 274,120 656,347 626,155 $942,973–$3,201,722 NAS TPUv2x1 — 32,623 — — $44,055–$146,848 GPT-2 TPUv3x32 — 168 — — $12,902–$43,008 Table 3: Estimated cost of training a model in terms of CO2 emissions (lbs) and cloud compute cost (USD).7 Power and carbon footprint are omitted for TPUs due to lack of public information on power draw for this hardware. 4 Experimental results 4.1 Cost of training Table 3 lists CO2 emissions and estimated cost of training the models described in §2.1. Of note is that TPUs are more cost-efficient than GPUs on workloads that make sense for that hardware (e.g. BERT). We also see that models emit substantial carbon emissions; training BERT on GPU is roughly equivalent to a trans-American flight. So et al. 
(2019) report that NAS achieves a new stateof-the-art BLEU score of 29.7 for English to German machine translation, an increase of just 0.1 BLEU at the cost of at least $150k in on-demand compute time and non-trivial carbon emissions. 4.2 Cost of development: Case study To quantify the computational requirements of R&D for a new model we study the logs of all training required to develop LinguisticallyInformed Self-Attention (Strubell et al., 2018), a multi-task model that performs part-of-speech tagging, labeled dependency parsing, predicate detection and semantic role labeling. This model makes for an interesting case study as a representative NLP pipeline and as a Best Long Paper at EMNLP. Model training associated with the project spanned a period of 172 days (approx. 6 months). During that time 123 small hyperparameter grid searches were performed, resulting in 4789 jobs in total. Jobs varied in length ranging from a minimum of 3 minutes, indicating a crash, to a maximum of 9 days, with an average job length of 52 hours. All training was done on a combination of NVIDIA Titan X (72%) and M40 (28%) GPUs.8 The sum GPU time required for the project totaled 9998 days (27 years). This averages to 8We approximate cloud compute cost using P100 pricing. Estimated cost (USD) Models Hours Cloud Electric 1 120 $52–$175 $5 24 2880 $1238–$4205 $118 4789 239,942 $103k–$350k $9870 Table 4: Estimated cost in terms of cloud compute and electricity for training: (1) a single model (2) a single tune and (3) all models trained during R&D. about 60 GPUs running constantly throughout the 6 month duration of the project. Table 4 lists upper and lower bounds of the estimated cost in terms of Google Cloud compute and raw electricity required to develop and deploy this model.9 We see that while training a single model is relatively inexpensive, the cost of tuning a model for a new dataset, which we estimate here to require 24 jobs, or performing the full R&D required to develop this model, quickly becomes extremely expensive. 5 Conclusions Authors should report training time and sensitivity to hyperparameters. Our experiments suggest that it would be beneficial to directly compare different models to perform a cost-benefit (accuracy) analysis. To address this, when proposing a model that is meant to be re-trained for downstream use, such as retraining on a new domain or fine-tuning on a new task, authors should report training time and computational resources required, as well as model sensitivity to hyperparameters. This will enable direct comparison across models, allowing subsequent consumers of these models to accurately assess whether the required computational resources 9Based on average U.S cost of electricity of $0.12/kWh. 3649 are compatible with their setting. More explicit characterization of tuning time could also reveal inconsistencies in time spent tuning baseline models compared to proposed contributions. Realizing this will require: (1) a standard, hardwareindependent measurement of training time, such as gigaflops required to convergence, and (2) a standard measurement of model sensitivity to data and hyperparameters, such as variance with respect to hyperparameters searched. Academic researchers need equitable access to computation resources. Recent advances in available compute come at a high price not attainable to all who desire access. 
Most of the models studied in this paper were developed outside academia; recent improvements in state-of-the-art accuracy are possible thanks to industry access to large-scale compute. Limiting this style of research to industry labs hurts the NLP research community in many ways. First, it stifles creativity. Researchers with good ideas but without access to large-scale compute will simply not be able to execute their ideas, instead constrained to focus on different problems. Second, it prohibits certain types of research on the basis of access to financial resources. This even more deeply promotes the already problematic “rich get richer” cycle of research funding, where groups that are already successful and thus well-funded tend to receive more funding due to their existing accomplishments. Third, the prohibitive start-up cost of building in-house resources forces resource-poor groups to rely on cloud compute services such as AWS, Google Cloud and Microsoft Azure. While these services provide valuable, flexible, and often relatively environmentally friendly compute resources, it is more cost effective for academic researchers, who often work for nonprofit educational institutions and whose research is funded by government entities, to pool resources to build shared compute centers at the level of funding agencies, such as the U.S. National Science Foundation. For example, an off-the-shelf GPU server containing 8 NVIDIA 1080 Ti GPUs and supporting hardware can be purchased for approximately $20,000 USD. At that cost, the hardware required to develop the model in our case study (approximately 58 GPUs for 172 days) would cost $145,000 USD plus electricity, about half the estimated cost to use on-demand cloud GPUs. Unlike money spent on cloud compute, however, that invested in centralized resources would continue to pay off as resources are shared across many projects. A government-funded academic compute cloud would provide equitable access to all researchers. Researchers should prioritize computationally efficient hardware and algorithms. We recommend a concerted effort by industry and academia to promote research of more computationally efficient algorithms, as well as hardware that requires less energy. An effort can also be made in terms of software. There is already a precedent for NLP software packages prioritizing efficient models. An additional avenue through which NLP and machine learning software developers could aid in reducing the energy associated with model tuning is by providing easyto-use APIs implementing more efficient alternatives to brute-force grid search for hyperparameter tuning, e.g. random or Bayesian hyperparameter search techniques (Bergstra et al., 2011; Bergstra and Bengio, 2012; Snoek et al., 2012). While software packages implementing these techniques do exist,10 they are rarely employed in practice for tuning NLP models. This is likely because their interoperability with popular deep learning frameworks such as PyTorch and TensorFlow is not optimized, i.e. there are not simple examples of how to tune TensorFlow Estimators using Bayesian search. Integrating these tools into the workflows with which NLP researchers and practitioners are already familiar could have notable impact on the cost of developing and tuning in NLP. Acknowledgements We are grateful to Sherief Farouk and the anonymous reviewers for helpful feedback on earlier drafts. 
This work was supported in part by the Centers for Data Science and Intelligent Information Retrieval, the Chan Zuckerberg Initiative under the Scientific Knowledge Base Construction project, the IBM Cognitive Horizons Network agreement no. W1668553, and National Science Foundation grant no. IIS-1514053. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the sponsor. 10For example, the Hyperopt Python library. 3650 References Rhonda Ascierto. 2018. Uptime Institute Global Data Center Survey. Technical report, Uptime Institute. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural Machine Translation by Jointly Learning to Align and Translate. In 3rd International Conference for Learning Representations (ICLR), San Diego, California, USA. James Bergstra and Yoshua Bengio. 2012. Random search for hyper-parameter optimization. Journal of Machine Learning Research, 13(Feb):281–305. James S Bergstra, R´emi Bardenet, Yoshua Bengio, and Bal´azs K´egl. 2011. Algorithms for hyper-parameter optimization. In Advances in neural information processing systems, pages 2546–2554. Bruno Burger. 2019. Net Public Electricity Generation in Germany in 2018. Technical report, Fraunhofer Institute for Solar Energy Systems ISE. Alfredo Canziani, Adam Paszke, and Eugenio Culurciello. 2016. An analysis of deep neural network models for practical applications. Gary Cook, Jude Lee, Tamina Tsai, Ada Kongn, John Deans, Brian Johnson, Elizabeth Jardim, and Brian Johnson. 2017. Clicking Clean: Who is winning the race to build a green internet? Technical report, Greenpeace. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In NAACL. Timothy Dozat and Christopher D. Manning. 2017. Deep biaffine attention for neural dependency parsing. In ICLR. EPA. 2018. Emissions & Generation Resource Integrated Database (eGRID). Technical report, U.S. Environmental Protection Agency. Christopher Forster, Thor Johnsen, Swetha Mandava, Sharath Turuvekere Sreenivas, Deyu Fu, Julie Bernauer, Allison Gray, Sharan Chetlur, and Raul Puri. 2019. BERT Meets GPUs. Technical report, NVIDIA AI. Da Li, Xinbo Chen, Michela Becchi, and Ziliang Zong. 2016. Evaluating the energy efficiency of deep convolutional neural networks on cpus and gpus. 2016 IEEE International Conferences on Big Data and Cloud Computing (BDCloud), Social Computing and Networking (SocialCom), Sustainable Computing and Communications (SustainCom) (BDCloudSocialCom-SustainCom), pages 477–484. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412–1421. Association for Computational Linguistics. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In NAACL. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Jasper Snoek, Hugo Larochelle, and Ryan P Adams. 2012. Practical bayesian optimization of machine learning algorithms. In Advances in neural information processing systems, pages 2951–2959. David R. So, Chen Liang, and Quoc V. Le. 2019. The evolved transformer. 
In Proceedings of the 36th International Conference on Machine Learning (ICML). Emma Strubell, Patrick Verga, Daniel Andor, David Weiss, and Andrew McCallum. 2018. Linguistically-Informed Self-Attention for Semantic Role Labeling. In Conference on Empirical Methods in Natural Language Processing (EMNLP), Brussels, Belgium. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In 31st Conference on Neural Information Processing Systems (NIPS).
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3651–3657 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3651 What does BERT learn about the structure of language? Ganesh Jawahar Benoˆıt Sagot Djam´e Seddah Inria, France {firstname.lastname}@inria.fr Abstract BERT is a recent language representation model that has surprisingly performed well in diverse language understanding benchmarks. This result indicates the possibility that BERT networks capture structural information about language. In this work, we provide novel support for this claim by performing a series of experiments to unpack the elements of English language structure learned by BERT. We first show that BERT’s phrasal representation captures phrase-level information in the lower layers. We also show that BERT’s intermediate layers encode a rich hierarchy of linguistic information, with surface features at the bottom, syntactic features in the middle and semantic features at the top. BERT turns out to require deeper layers when long-distance dependency information is required, e.g. to track subjectverb agreement. Finally, we show that BERT representations capture linguistic information in a compositional way that mimics classical, tree-like structures. 1 Introduction BERT (Bidirectional Encoder Representations from Transformers) (Devlin et al., 2018) is a bidirectional variant of Transformer networks (Vaswani et al., 2017) trained to jointly predict a masked word from its context and to classify whether two sentences are consecutive or not. The trained model can be fine-tuned for downstream NLP tasks such as question answering and language inference without substantial modification. BERT outperforms previous state-of-the-art models in the eleven NLP tasks in the GLUE benchmark (Wang et al., 2018) by a significant margin. This remarkable result suggests that BERT could “learn” structural information about language. Can we unveil the representations learned by BERT to proto-linguistics structures? Answering this question could not only help us understand the reason behind the success of BERT but also its limitations, in turn guiding the design of improved architectures. This question falls under the topic of the interpretability of neural networks, a growing field in NLP (Belinkov and Glass, 2019). An important step forward in this direction is Goldberg (2019), which shows that BERT captures syntactic phenomena well when evaluated on its ability to track subject-verb agreement. In this work, we perform a series of experiments to probe the nature of the representations learned by different layers of BERT. 1 We first show that the lower layers capture phrase-level information, which gets diluted in the upper layers. Second, we propose to use the probing tasks defined in Conneau et al. (2018) to show that BERT captures a rich hierarchy of linguistic information, with surface features in lower layers, syntactic features in middle layers and semantic features in higher layers. Third, we test the ability of BERT representations to track subject-verb agreement and find that BERT requires deeper layers for handling harder cases involving long-distance dependencies. Finally, we propose to use the recently introduced Tensor Product Decomposition Network (TPDN) (McCoy et al., 2019) to explore different hypotheses about the compositional nature of BERT’s representation and find that BERT implicitly captures classical, tree-like structures. 
2 BERT BERT (Devlin et al., 2018) builds on Transformer networks (Vaswani et al., 2017) to pre-train bidirectional representations by conditioning on both left and right contexts jointly in all layers. The representations are jointly optimized by predicting randomly masked words in the input and classify1The code to reproduce our experiments is publicly accessible at https://github.com/ganeshjawahar/ interpret_bert 3652 (a) Layer 1 (b) Layer 2 (c) Layer 11 (d) Layer 12 PP VP ADJP NP ADVP SBAR PRT CONJP O Figure 1: 2D t-SNE plot of span embeddings computed from the first and last two layers of BERT. layer 1 2 3 4 5 6 7 8 9 10 11 12 NMI 0.38 0.37 0.35 0.3 0.24 0.2 0.19 0.16 0.17 0.18 0.16 0.19 Table 1: Clustering performance of span representations obtained from different layers of BERT. ing whether the sentence follows a given sentence in the corpus or not. The authors of BERT claim that bidirectionality allows the model to swiftly adapt for a downstream task with little modification to the architecture. Indeed, BERT improved the state-of-the-art for a range of NLP benchmarks (Wang et al., 2018) by a significant margin. In this work, we investigate the linguistic structure implicitly learned by BERT’s representations. We use the PyTorch implementation of BERT, which hosts the models trained by (Devlin et al., 2018). All our experiments are based on the bert-base-uncased variant,2 which consists of 12 layers, each having a hidden size of 768 and 12 attention heads (110M parameters). In all our experiments, we seek the activation of the first input token (‘[CLS]’) (which summarizes the information from the actual tokens using a self-attention mechanism) at every layer to compute BERT representation, unless otherwise stated. 3 Phrasal Syntax Peters et al. (2018) have shown that the representations underlying LSTM-based language models(Hochreiter and Schmidhuber, 1997) can capture phrase-level (or span-level) information.3 It remains unclear if this holds true for models not trained with a traditional language modeling objective, such as BERT. Even if it does, would the information be present in multiple layers of the model? To investigate this question we extract span representations from each layer of BERT. 2We obtained similar results in preliminary experiments with the bert-large-uncased variant. 3Peters et al. (2018) experimented with ELMo-style CNN and Transformer but did not report this finding for these models. Following Peters et al. (2018), for a token sequence si, . . . , sj, we compute the span representation s(si,sj),l at layer l by concatenating the first (hsi,l) and last hidden vector (hsj,l), along with their element-wise product and difference. We randomly pick 3000 labeled chunks and 500 spans not labeled as chunks from the CoNLL 2000 chunking dataset (Sang and Buchholz, 2000). As shown in Figure 1, we visualize the span representations obtained from multiple layers using tSNE (Maaten and Hinton, 2008), a non-linear dimensionality reduction algorithm for visualizing high-dimensional data. We observe that BERT mostly captures phrase-level information in the lower layers and that this information gets gradually diluted in higher layers. The span representations from the lower layers map chunks (e.g. ‘to demonstrate’) that project their underlying category (e.g. VP) together. We further quantify this claim by performing a k-means clustering on span representations with k = 10, i.e. the number of distinct chunk types. 
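A minimal sketch of this span representation and clustering step is given below, assuming the per-token hidden states of one BERT layer have already been extracted as a (sequence_length, hidden_size) array; the variable names and the scikit-learn dependency are assumptions of this sketch, not part of the original code release.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import normalized_mutual_info_score

def span_representation(layer_states, i, j):
    """[h_i ; h_j ; h_i * h_j ; h_i - h_j] for a span covering tokens i..j."""
    h_i, h_j = layer_states[i], layer_states[j]
    return np.concatenate([h_i, h_j, h_i * h_j, h_i - h_j])

def cluster_nmi(span_vectors, gold_chunk_labels, k=10, seed=0):
    """k-means over span vectors (k = number of chunk types), scored
    against gold chunk labels with Normalized Mutual Information."""
    pred = KMeans(n_clusters=k, random_state=seed).fit_predict(span_vectors)
    return normalized_mutual_info_score(gold_chunk_labels, pred)
```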
Evaluating the resulting clusters using the Normalized Mutual Information (NMI) metric shows again that the lower layers encode phrasal information better than higher layers (cf. Table 1). 4 Probing Tasks Probing (or diagnostic) tasks (Adi et al., 2017; Hupkes et al., 2018; Conneau et al., 2018) help in unearthing the linguistic features possibly encoded in neural models. This is achieved by setting up an auxiliary classification task where the final output of a model is used as features to predict a linguistic phenomenon of interest. If the auxiliary classifier can predict a linguistic prop3653 Layer SentLen WC TreeDepth TopConst BShift Tense SubjNum ObjNum SOMO CoordInv (Surface) (Surface) (Syntactic) (Syntactic) (Syntactic) (Semantic) (Semantic) (Semantic) (Semantic) (Semantic) 1 93.9 (2.0) 24.9 (24.8) 35.9 (6.1) 63.6 (9.0) 50.3 (0.3) 82.2 (18.4) 77.6 (10.2) 76.7 (26.3) 49.9 (-0.1) 53.9 (3.9) 2 95.9 (3.4) 65.0 (64.8) 40.6 (11.3) 71.3 (16.1) 55.8 (5.8) 85.9 (23.5) 82.5 (15.3) 80.6 (17.1) 53.8 (4.4) 58.5 (8.5) 3 96.2 (3.9) 66.5 (66.0) 39.7 (10.4) 71.5 (18.5) 64.9 (14.9) 86.6 (23.8) 82.0 (14.6) 80.3 (16.6) 55.8 (5.9) 59.3 (9.3) 4 94.2 (2.3) 69.8 (69.6) 39.4 (10.8) 71.3 (18.3) 74.4 (24.5) 87.6 (25.2) 81.9 (15.0) 81.4 (19.1) 59.0 (8.5) 58.1 (8.1) 5 92.0 (0.5) 69.2 (69.0) 40.6 (11.8) 81.3 (30.8) 81.4 (31.4) 89.5 (26.7) 85.8 (19.4) 81.2 (18.6) 60.2 (10.3) 64.1 (14.1) 6 88.4 (-3.0) 63.5 (63.4) 41.3 (13.0) 83.3 (36.6) 82.9 (32.9) 89.8 (27.6) 88.1 (21.9) 82.0 (20.1) 60.7 (10.2) 71.1 (21.2) 7 83.7 (-7.7) 56.9 (56.7) 40.1 (12.0) 84.1 (39.5) 83.0 (32.9) 89.9 (27.5) 87.4 (22.2) 82.2 (21.1) 61.6 (11.7) 74.8 (24.9) 8 82.9 (-8.1) 51.1 (51.0) 39.2 (10.3) 84.0 (39.5) 83.9 (33.9) 89.9 (27.6) 87.5 (22.2) 81.2 (19.7) 62.1 (12.2) 76.4 (26.4) 9 80.1 (-11.1) 47.9 (47.8) 38.5 (10.8) 83.1 (39.8) 87.0 (37.1) 90.0 (28.0) 87.6 (22.9) 81.8 (20.5) 63.4 (13.4) 78.7 (28.9) 10 77.0 (-14.0) 43.4 (43.2) 38.1 (9.9) 81.7 (39.8) 86.7 (36.7) 89.7 (27.6) 87.1 (22.6) 80.5 (19.9) 63.3 (12.7) 78.4 (28.1) 11 73.9 (-17.0) 42.8 (42.7) 36.3 (7.9) 80.3 (39.1) 86.8 (36.8) 89.9 (27.8) 85.7 (21.9) 78.9 (18.6) 64.4 (14.5) 77.6 (27.9) 12 69.5 (-21.4) 49.1 (49.0) 34.7 (6.9) 76.5 (37.2) 86.4 (36.4) 89.5 (27.7) 84.0 (20.2) 78.7 (18.4) 65.2 (15.3) 74.9 (25.4) Table 2: Probing task performance for each BERT layer. The value within the parentheses corresponds to the difference in performance of trained vs. untrained BERT. Layer 0 (1.5) 1 (5.2) 2 (7.7) 3 (10.5) 4 (13.3) 1 90.89 40.43 23.22 21.46 20 2 92.01 42.6 25.84 24.78 26.02 3 92.77 47.05 29.77 27.22 29.56 4 94.39 52.97 33.02 29.13 30.09 5 94.98 63.12 43.68 36.61 36.11 6 95.45 67.28 46.93 38.22 36.46 7 95.52 72.44 53.03 43.5 41.06 8 95.68 75.66 58.74 48.88 45.49 9 95.54 73.84 57.96 50.34 48.85 10 95.09 69.21 51.5 43.26 41.59 11 94.33 66.62 51.69 46.09 42.65 12 94.06 62.78 51.07 46.04 46.37 Table 3: Subject-verb agreement scores for each BERT layer. The last five columns correspond to the number of nouns intervening between the subject and the verb (attractors) in test instances. The average distance between the subject and the verb is enclosed in parentheses next to each attractor category. erty well, then the original model likely encodes that property. In this work, we use probing tasks to assess individual model layers in their ability to encode different types of linguistic features. We evaluate each layer of BERT using ten probing sentence-level datasets/tasks created by Conneau et al. (2018), which are grouped into three categories. 
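Before detailing the three groups of tasks, the sketch below illustrates the generic probing setup: a small auxiliary classifier is trained on the frozen activations of one BERT layer to predict the property of interest. SentEval searches over an MLP hyperparameter space; a plain logistic-regression probe is used here only to keep the example short, and all names are illustrative assumptions.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def probe_layer(train_feats, train_labels, test_feats, test_labels):
    """Fit an auxiliary classifier on frozen layer activations
    (e.g. the [CLS] vector of one BERT layer) and report accuracy
    on one probing task."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(train_feats, train_labels)
    return accuracy_score(test_labels, clf.predict(test_feats))

# Repeating this for every layer and every probing task yields a
# layer-by-task accuracy grid analogous to Table 2.
```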
Surface tasks probe for sentence length (SentLen) and for the presence of words in the sentence (WC). Syntactic tasks test for sensitivity to word order (BShift), the depth of the syntactic tree (TreeDepth) and the sequence of toplevel constituents in the syntax tree (TopConst). Semantic tasks check for the tense (Tense), the subject (resp. direct object) number in the main clause (SubjNum, resp. ObjNum), the sensitivity to random replacement of a noun/verb (SOMO) and the random swapping of coordinated clausal conjuncts (CoordInv). We use the SentEval toolkit (Conneau and Kiela, 2018) along with the recommended hyperparameter space to search for the best probing classifier. As random encoders can surprisingly encode a lot of lexical and structural information (Zhang and Bowman, 2018), we also evaluate the untrained version of BERT, obtained by setting all model weights to a random number. Table 2 shows that BERT embeds a rich hierarchy of linguistic signals: surface information at the bottom, syntactic information in the middle, semantic information at the top. BERT has also surpassed the previously published results for two tasks: BShift and CoordInv. We find that the untrained version of BERT corresponding to the higher layers outperforms the trained version in the task of predicting sentence length (SentLen). This could indicate that untrained models contain sufficient information to predict a basic surface feature such as sentence length, whereas training the model results in the model storing more complex information, at the expense of its ability to predict such basic surface features. 5 Subject-Verb Agreement Subject-verb agreement is a proxy task to probe whether a neural model encodes syntactic structure (Linzen et al., 2016). The task of predicting the verb number becomes harder when there are more nouns with opposite number (attractors) intervening between the subject and the verb. Goldberg (2019) has shown that BERT learns syntactic phenomenon surprisingly well using various stimuli for subject-verb agreement. We extend his work by performing the test on each layer of BERT and controlling for the number of attractors. In our study, we use the stimuli created by Linzen et al. (2016) and the SentEval toolkit (Conneau and Kiela, 2018) to build the binary classifier with the recommended hyperparameter space, using as features the activations from the (masked) verb at hand. 3654 Role scheme \ Layer 1 2 3 4 5 6 7 8 9 10 11 12 Left-to-right 0.0005 0.0007 0.0008 0.0034 0.0058 0.0087 0.0201 0.0179 0.0284 0.0428 0.0362 0.0305 Right-to-left 0.0004 0.0007 0.0007 0.0032 0.0060 0.0099 0.0233 0.0203 0.0337 0.0486 0.0411 0.0339 Bag-of-words 0.0006 0.0009 0.0012 0.0039 0.0066 0.0108 0.0251 0.0221 0.0355 0.0507 0.0422 0.0348 Bidirectional 0.0025 0.0030 0.0034 0.0053 0.0079 0.0106 0.0226 0.0201 0.0311 0.0453 0.0391 0.0334 Tree 0.0005 0.0009 0.0011 0.0037 0.0055 0.0081 0.0179 0.0155 0.0249 0.0363 0.0319 0.0278 Tree (random) 0.0005 0.0009 0.0011 0.0038 0.0063 0.0099 0.0237 0.0214 0.0338 0.0486 0.0415 0.0340 Table 4: Mean squared error between TPDN and BERT representation for a given layer and role scheme on SNLI test instances. Each number corresponds to the average across five random initializations. 
Figure 2: Dependency parse tree induced from attention head #11 in layer #2 for the sentence "The keys to the cabinet are on the table", using the gold root ('are') as the starting node for the maximum spanning tree algorithm. Results in Table 3 show that the middle layers perform well in most cases, which supports the result in Section 4 where the syntactic features were shown to be captured well in the middle layers. Interestingly, as the number of attractors increases, one of the higher BERT layers (#8) is able to handle the long-distance dependency problems caused by the longer sequence of words intervening between the subject and the verb better than a lower layer (#7). This highlights the need for BERT to have deeper layers to perform competitively on NLP tasks. 6 Compositional Structure Can we understand the compositional nature of the representation learned by BERT, if any? To investigate this question, we use Tensor Product Decomposition Networks (TPDN) (McCoy et al., 2019), which explicitly compose the input token ("filler") representations based on a role scheme selected beforehand, using a tensor product sum. For instance, a role scheme for a word can be based on the path from the root node to itself in the syntax tree (e.g. 'LR' denotes the right child of the left child of the root). The authors assume that, for a given role scheme, if a TPDN can be trained well to approximate the representation learned by a neural model, then that role scheme likely specifies the compositionality implicitly learned by the model. For each BERT layer, we work with five different role schemes. Each word's role is computed based on its left-to-right index, its right-to-left index, an ordered pair containing its left-to-right and right-to-left indices, its position in a syntactic tree (formatted version of the Stanford PCFG Parser (Klein and Manning, 2003) with no unary nodes and no labels) and an index common to all the words in the sentence (bag-of-words), which ignores its position. Additionally, we also define a role scheme based on random binary trees. Following McCoy et al. (2019), we train our TPDN model on the premise sentences in the SNLI corpus (Bowman et al., 2015). We initialize the filler embeddings of the TPDN with the pre-trained word embeddings from BERT's input layer, freeze them, learn a linear projection on top, and use a Mean Squared Error (MSE) loss function. Other trainable parameters include the role embeddings and a linear projection on top of the tensor product sum to match the embedding size of BERT. Table 4 displays the MSE between the representation from pre-trained BERT and the representation from a TPDN trained to approximate BERT. We discover that BERT implicitly implements a tree-based scheme, as a TPDN model following that scheme best approximates BERT's representation at most layers. This result is remarkable, as BERT encodes classical, tree-like structures despite relying purely on attention mechanisms. Motivated by this study, we perform a case study on dependency trees induced from self-attention weights, following the work of Raganato and Tiedemann (2018). Figure 2 displays the dependencies inferred from an example sentence by obtaining self-attention weights for every word pair from attention head #11 in layer #2, fixing the gold root as the starting node and invoking the Chu-Liu-Edmonds algorithm (Chu and Liu, 1967).
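To make the tree-induction step concrete, the sketch below grows a dependency tree from one head's attention matrix with a greedy (Prim-style) maximum spanning tree rooted at the gold root; the actual analysis uses the exact Chu-Liu-Edmonds algorithm, so this is only an illustrative approximation with assumed variable names.

```python
import numpy as np

def induce_tree(attn, root):
    """Greedy maximum-spanning-tree over an attention matrix.
    attn[h, d] is the attention weight linking head word h to word d
    (n x n, one sentence, one attention head). Returns (head, dependent)
    arcs forming a tree rooted at `root`."""
    n = attn.shape[0]
    attached, arcs = {root}, []
    while len(attached) < n:
        best = None
        for h in attached:
            for d in range(n):
                if d not in attached and (
                        best is None or attn[h, d] > attn[best]):
                    best = (h, d)
        arcs.append(best)
        attached.add(best[1])
    return arcs

# Toy example: 8 words, random scores standing in for attention weights.
rng = np.random.default_rng(0)
print(induce_tree(rng.random((8, 8)), root=4))
```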
We observe that determiner-noun dependencies (“the keys”, “the cabinet” and “the table”) and subject-verb dependency (“keys” and “are”) are captured accurately. Surprisingly, the predicate-argument structure seems to be partly modeled as shown by the chain of dependencies between “key”,“cabinet” and “table”. 3655 7 Related Work Peters et al. (2018) studies how the choice of neural architecture such as CNNs, Transformers and RNNs used for language model pretraining affects the downstream task accuracy and the qualitative properties of the contextualized word representations that are learned. They conclude that all architectures learn high quality representations that outperform standard word embeddings such as GloVe (Pennington et al., 2014) for challenging NLP tasks. They also show that these architectures hierarchically structure linguistic information, such that morphological, (local) syntactic and (longer range) semantic information tend to be represented in, respectively, the word embedding layer, lower contextual layers and upper layers. In our work, we observe that such hierarchy exists as well for BERT models that are not trained using the standard language modelling objective. Goldberg (2019) shows that the BERT model captures syntactic information well for subject-verb agreement. We build on this work by performing the test on each layer of BERT controlling for the number of attractors and then show that BERT requires deeper layers for handling harder cases involving long-distance dependency information. Tenney et al. (2019) is a contemporaneous work that introduces a novel edge probing task to investigate how contextual word representations encode sentence structure across a range of syntactic, semantic, local and long-range phenomena. They conclude that contextual word representations trained on language modeling and machine translation encode syntactic phenomena strongly, but offer comparably small improvements on semantic tasks over a non-contextual baseline. Their result using BERT model on capturing linguistic hierarchy confirms our probing task results although using a set of relatively simple probing tasks. Liu et al. (2019) is another contemporaneous work that studies the features of language captured/missed by contextualized vectors, transferability across different layers of the model and the impact of pretraining on the linguistic knowledge and transferability. They find that (i) contextualized word embeddings do not capture finegrained linguistic knowledge, (ii) higher layers of RNN to be task-specific (with no such pattern for a transformer) and (iii) pretraining on a closely related task yields better performance than language model pretraining. Hewitt and Manning (2019) is a very recent work which showed that we can recover parse trees from the linear transformation of contextual word representation consistently, better than with non-contextual baselines. They focused mainly on syntactic structure while our work additionally experimented with linear structures (leftto-right, right-to-left) to show that the compositionality modelling underlying BERT mimics traditional syntactic analysis. The recent burst of papers around these questions illustrates the importance of interpreting contextualized word embedding models and our work complements the growing literature with additional evidences about the ability of BERT in learning syntactic structures. 
8 Conclusion With our experiments, which contribute to a currently bubbling line of work on neural network interpretability, we have shown that BERT does capture structural properties of the English language. Our results therefore confirm those of Goldberg (2019); Hewitt and Manning (2019); Liu et al. (2019); Tenney et al. (2019) on BERT who demonstrated that span representations constructed from those models can encode rich syntactic phenomena. We have shown that phrasal representations learned by BERT reflect phraselevel information and that BERT composes a hierarchy of linguistic signals ranging from surface to semantic features. We have also shown that BERT requires deeper layers to model long-range dependency information. Finally, we have shown that BERT’s internal representations reflect a compositional modelling that shares parallels with traditional syntactic analysis. It would be interesting to see if our results transfer to other domains with higher variability in syntactic structures (such as noisy user generated content) and with higher word order flexibility as experienced in some morphologically-rich languages. Acknowledgments We thank Grzegorz Chrupała and our anonymous reviewers for providing insightful comments and suggestions. This work was funded by the ANR projects ParSiTi (ANR-16-CE33-0021), SoSweet (ANR15-CE38-0011-01) and the French-Israeli PHC Maimonide cooperation program. 3656 References Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. 2017. Fine-grained Analysis of Sentence Embeddings Using Auxiliary Prediction Tasks. International Conference on Learning Representations. Yonatan Belinkov and James Glass. 2019. Analysis Methods in Neural Language Processing: A Survey. Transactions of the Association for Computational Linguistics. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632–642. Association for Computational Linguistics. Yoeng-Jin Chu and Tseng-Hong Liu. 1967. On the shortest arborescence of a directed graph. In Science Sinica, pages 1396–1400. Alexis Conneau and Douwe Kiela. 2018. SentEval: An Evaluation Toolkit for Universal Sentence Representations. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018). European Language Resource Association. Alexis Conneau, Germ´an Kruszewski, Guillaume Lample, Lo¨ıc Barrault, and Marco Baroni. 2018. What you can cram into a single \$&!#* vector: Probing sentence embeddings for linguistic properties. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2126–2136. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. Computing Research Repository, arXiv:1810.04805. Version 1. Yoav Goldberg. 2019. Assessing BERT’s Syntactic Abilities. Computing Research Repository, arXiv:1901.05287. Version 1. John Hewitt and Christopher D. Manning. 2019. A Structural Probe for Finding Syntax in Word Representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. 
Long short-term memory. Neural computation, 9(8):1735–1780. Dieuwke Hupkes, Sara Veldhoen, and Willem Zuidema. 2018. Visualisation and ‘Diagnostic Classifiers’ Reveal How Recurrent and Recursive Neural Networks Process Hierarchical Structure. J. Artif. Int. Res., 61(1):907–926. Dan Klein and Christopher D. Manning. 2003. Accurate unlexicalized parsing. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics - Volume 1, ACL ’03, pages 423– 430, Stroudsburg, PA, USA. Association for Computational Linguistics. Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies. Transactions of the Association for Computational Linguistics, 4:521– 535. Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew Peters, and Noah A. Smith. 2019. Linguistic Knowledge and Transferability of Contextual Representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics. Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of machine learning research, 9(Nov):2579–2605. R. Thomas McCoy, Tal Linzen, Ewan Dunbar, and Paul Smolensky. 2019. RNNs Implicitly Implement Tensor Product Representations. International Conference on Learning Representations. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global Vectors for Word Representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532– 1543. Matthew Peters, Mark Neumann, Luke Zettlemoyer, and Wen-tau Yih. 2018. Dissecting Contextual Word Embeddings: Architecture and Representation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1499–1509. Association for Computational Linguistics. Alessandro Raganato and J¨org Tiedemann. 2018. An Analysis of Encoder Representations in Transformer-Based Machine Translation. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 287–297. Association for Computational Linguistics. Erik F. Tjong Kim Sang and Sabine Buchholz. 2000. Introduction to the CoNLL-2000 Shared Task Chunking. In Fourth Conference on Computational Natural Language Learning, CoNLL 2000, and the Second Learning Language in Logic Workshop, LLL 2000, Held in cooperation with ICGI-2000, Lisbon, Portugal, September 13-14, 2000, pages 127–132. 3657 Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R Thomas McCoy, Najoung Kim, Benjamin Van Durme, Sam Bowman, Dipanjan Das, and Ellie Pavlick. 2019. What do you learn from context? Probing for sentence structure in contextualized word representations. In International Conference on Learning Representations. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is All you Need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pages 6000–6010. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353–355. Association for Computational Linguistics. Kelly W. 
Zhang and Samuel R. Bowman. 2018. Language Modeling Teaches You More Syntax than Translation Does: Lessons Learned Through Auxiliary Task Analysis. Computing Research Repository, arXiv:1809.10040. Version 2.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3658–3666 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3658 A Just and Comprehensive Strategy for Using NLP to Address Online Abuse David Jurgens University of Michigan School of Information [email protected] Eshwar Chandrasekharan Georgia Tech School of Interactive Computing [email protected] Libby Hemphill University of Michigan School of Information [email protected] Abstract Online abusive behavior affects millions and the NLP community has attempted to mitigate this problem by developing technologies to detect abuse. However, current methods have largely focused on a narrow definition of abuse to detriment of victims who seek both validation and solutions. In this position paper, we argue that the community needs to make three substantive changes: (1) expanding our scope of problems to tackle both more subtle and more serious forms of abuse, (2) developing proactive technologies that counter or inhibit abuse before it harms, and (3) reframing our effort within a framework of justice to promote healthy communities. 1 Introduction Online platforms have the potential to enable substantial, prolonged, and productive engagement for many people. Yet, the lived reality on social media platforms falls far short of this potential (Papacharissi, 2004). In particular, the promise of social media has been hindered by antisocial, abusive behaviors such as harassment, hate speech, trolling, and the like. Recent surveys indicate that abuse happens much more frequently than many people suspect (40% of Internet users report being the subject of online abuse at some point), and members of underrepresented groups are targeted even more often (Herring et al., 2002; Drake, 2014; Anti-Defamation League, 2019). The NLP community has responded by developing technologies to identify certain types of abuse and facilitating automatic or computerassisted content moderation. Current technology has primarily focused on overt forms of abusive language and hate speech, without considering both (i) the success and failure of technology beyond getting the classification correct, and (ii) the myriad forms that abuse can take. Risk of Physical Danger Frequency Hate speech Insults Condescension Microagression Physical Threats Doxxing Promoting Self Harm Figure 1: Abusive behavior online falls along a spectrum, and current approaches focus only on a narrow range (shown in red text), ignoring nearby problems. Impact comes from both the frequency (on left) and real-world consequences (on right) of behaviors. This figure illustrates the spectrum of online abuse in an hypothetical manner, with its non-exhaustive examples inspired from prior surveys of online experiences (Duggan, 2017; Salminen et al., 2018). As Figure 1 shows, a large spectrum of abusive behavior exists—some with life-threatening consequences—much of which is currently unaddressed by language technologies. Explicitly hateful speech is just one tool of hate, and related tactics such as rape threats, gaslighting, First Amendment panic, and veiled insults are effectively employed both off- and online to silence, scare, and exclude participants from what should be inclusive, productive discussions (Filipovic, 2007). In this position paper, we argue that to promote healthy online communities, three changes are needed. First, the NLP community needs to rethink and expand what constitutes abuse. 
Second, current methods are almost entirely reactive to abuse, entailing that harm occurs. Instead, the community needs to develop proactive technologies that assist authors, moderators, and platform owners in preventing abuse before it occurs. Finally, we argue that both of these threads point to a need for a broad re-aligning of our community goals towards justice, rather than simply the elim3659 ination of abusive behavior. In arguing for these changes, we outline how each effort offers new challenging NLP tasks that have concrete benefits. 2 Rethinking What Constitutes Abuse The classifications we adopt and computationally enforce have real and lasting consequences by defining both what is and what is not abuse (Bowker and Star, 2000). Abusive behavior is an omnibus term that often includes harassment, threats, racial slurs, sexism, unexpected pornographic content, and insults—all of which can be directed at other users or at whole communities (Davidson et al., 2017; Nobata et al., 2016). However, NLP has largely considered a far narrower scope of what constitutes abuse through its selection of which types of behavior to recognize (Waseem et al., 2017; Schmidt and Wiegand, 2017; Fortuna and Nunes, 2018). We argue that NLP needs to expand its computational efforts to recognize two additional general types of abuse: (a) infrequent and physically dangerous abuse, and (b) more common but subtle abuse. Additionally, we need to develop methods that respect community norms in classification decisions. These categories of abuse and the importance of community norms have been noted elsewhere (Liu et al., 2018; Guberman and Hemphill, 2017; Salminen et al., 2018; Blackwell et al., 2017) but have not yet received the same level of attention in NLP. Who has a right to speak and in what manner are subjective decisions that are guided by social relationships (Foucault, 1972; Noble, 2018), and the specific choices our algorithms make about what speech to allow and what to silence have powerful effects. For instance, rejecting behavior as not being abusive because it is outside the scope of our classification can cause substantial harm to victims (Blackwell et al., 2017), tacitly involving the NLP community in algorithmic bias that sanctions certain forms of abuse. Thus, categorization is particularly thorny: a broad categorization is likely too computationally inefficient, yet a narrow categorization risks further marginalizing affected community members and can lead to lasting harm. Following, we outline three key directions for the community to expand its definitions. 2.1 Physically Threatening Online Abuse We outline three computational challenges related to infrequent but overt physically-manifesting abuse that NLP could be applied to solve. First, such behaviors do not necessarily adopt the language of hate speech or more common forms of hate speech and may in some contexts appear innocuous but are clearly dangerous in others. For example, posting a phone number to call could be acceptable if one is encouraging others to call their political representative, yet would be a serious breach of privacy (doxxing) if posted as part of a public harassment campaign. Similarly, declarations of “keep up the weight loss!” may be positive in a dieting community, yet reinforce dangerous behavior in a pro-anorexia community. 
Speech that in isolation appears offensive, such as impoliteness or racial slurs, may serve pro-social functions such as promoting intimacy (Culpeper, 1996) or showing camaraderie (Allan, 2015). Second, behaviors such as swatting, human trafficking, or pedophilia have all occurred on public social media platforms (Jaffe, 2016; Latonero, 2011; Holt et al., 2010). However, methods have yet to be developed for recognizing when users are engaging in these behaviors, which may involve coded language, and require recognizing these alternative forms. Current approaches for learning new explicitly-hateful symbols could be adapted to this task (e.g., Roy, 2016; Gao et al., 2017). Third, online platforms have been used to incite mobs of people to violence (Siegel, 2015). These efforts often use incendiary fake news that plays upon factional rivalries (Samory and Mitra, 2018). Abusive language detection methods can build upon recent advances at detecting fake news to identify content-sharing likely to lead to violence (McLaughlin, 2018; Oshikawa et al., 2018). 2.2 Subtle Abuse Many forms of abusive behavior are linguistically subtle and implicit. Behaviors such as condescension, minimization (e.g., “your situation isn’t that bad”), benevolent stereotyping, and microagressions are frequently experienced by members of minority social groups (Sue et al., 2007; Glick and Fiske, 2001). While subtle, such abuse can still be as emotionally harmful as overt abuse to some individuals (Sue, 2010; Nadal et al., 2014). The NLP community has two clear paths for growth into this area. First, although recognized within the larger NLP abuse typology (Waseem et al., 2017), only a handful of approaches have attempted these prob3660 lems, such as identifying benevolent sexism (Jha and Mamidi, 2017), and new methods must be developed to identify the implicit signals. Successful approaches will likely require advances in natural language understanding, as the abuse requires reasoning about the implications of the propositions. A notable example of such an approach is Dinakar et al. (2012) who extract implicit assumptions in statements and use common sense reasoning to identify social norm violations that would be considered insults. Second, new methods should identify disparity in treatment of social groups. For example, in a study of the respectfulness of police language, Voigt et al. (2017) found that officers were consistently less likely to use respectful language with black community members than with white community members—a disparity in a positive social dimension. As NLP solutions have been developed for other social dimensions of language such as politeness (Danescu-NiculescuMizil et al., 2013; Munkova et al., 2013; Chhaya et al., 2018) and formality (Brooke et al., 2010; Sheikha and Inkpen, 2011; Pavlick and Tetreault, 2016), these methods could be readily adapted for identifying such systematic bias for additional social categories and settings. 2.3 Community Norms Need to be Respected Social norms are rules and standards that are understood by members of a group, and that guide and constrain social behavior without the force of laws (Triandis, 1994; Cialdini and Trost, 1998). Norms can be nested, in that they can be adopted from the general social context (e.g., use of pejorative adjectives are rude), and more general internet comment etiquette (e.g., using all caps is equivalent to shouting). 
Yet, norms for what is considered acceptable can vary significantly from one community to another, making it challenging to build one abuse detection system that works for all communities (Chandrasekharan et al., 2018). Current NLP methods are largely context- and norm-agnostic, which leads to situations where content is removed unnecessarily when deemed inappropriate (i.e., false positives), eroding community trust in the use of computational tools to assist in moderation. A common failure mode for sociotechnical interventions like automated moderation is failing to understand the online community where they are being deployed (Krishna, 2018). Such community-specific norms and context are important to take into account, as NLP researchers are doubling down on context-sensitive approaches to define (e.g., Chandrasekharan and Gilbert, 2019) and detect abuse (e.g., Gao and Huang, 2017). However, not all community norms are socially acceptable within the broader world. Even behavior considered harmful in one community might be celebrated in another, e.g., Reddit’s r/fatpeoplehate (Chandrasekharan et al., 2017), and Something Awful Forums (Pater et al., 2014). The existence of problematic normative behaviors within certain atypical online communities poses a challenge to abuse detection systems. Fraser (1990) notes that when a public space is governed by a dominant group, its norms about participation end up perpetuating inequalities. One approach to address this challenge would be to work closely with the different stakeholders involved in online governance, like platform administrators, policy makers, users and moderators. This will enable the development of solutions that cater to a wider range of expectations around moderating abusive behaviors on the platform, especially when dealing with deviant communities. 2.4 Challenges for Creating New NLP Shared Tasks on Abusive Behavior Shared tasks have long been an NLP tradition for establishing evaluating metrics, defining data guidelines, and, more broadly, bringing together researchers. The broad nature of abusive behavior creates significant challenges for the shared task paradigm. Here, we outline three opportunities for new shared tasks in this area. First, new NLP shared tasks should develop annotation guidelines accurately define what constitutes abusive behavior in the target community. Recent works have begun to make progress in this area by modeling the context in which a comment is made through user and community-level features (Qian et al., 2018; Mishra et al., 2018; Ribeiro et al., 2018), yet often the norms in these settings are implicit making it difficult to transfer the techniques and models to other settings. As one potential solution, Chandrasekharan et al. (2018) studied community norms on Reddit in a large-scale, datadriven manner, and released a dataset of over 40K removed comments from Reddit labeled according to the specific type of norm being violated (Chan3661 drasekharan and Gilbert, 2019). Second, new NLP shared tasks must address the data scarcity faced by abuse detection research while minimizing harm caused by the data. Constant exposure to abusive content has been found to negatively and substantially affect the mental health of moderators and users (Roberts, 2014; Gillespie, 2018; Saha et al., 2019). However, labeled ground truth data for building and evaluating classifiers is hard to obtain because platforms typically do not share moderated content due to privacy, ethical and public relations concerns. 
One possibility for significant progress is to work with platform administrators and stakeholders to make proprietary data available as private test sets on platforms like Codalab, thereby keeping annotations in line with community norms and still allowing researchers to evaluate on real behavior. Third, tasks must clearly define who is the enduser of the classification labels. For example, will moderators use the system to triage abusive content, or is the goal to automatically remove abusive content? Current solutions are often trained and evaluated in a static manner, only using preexisting data; whether these solutions are effective upon deployment remains relatively unexplored. Evaluation must go beyond just traditional measures of performance like precision and recall, and instead begin optimizing for metrics like reduction in moderator effort, speed of response, targeted recall for severe types of abuse, moderator trust and fairness in predictions. 3 Proactive Approaches for Abuse Existing computational approaches to handle abusive language are primarily reactive and intervene only after abuse has occurred. A complementary approach is developing proactive technologies that prevent the harm from occurring in the first place, and we motivate three proactive computational approaches to prevent abuse here. First, bystanders can have a profound effect on the course of an interaction by steering the direction of the conversation away from abuse (Markey, 2000; Dillon and Bushman, 2015). Prior work has used experimenter-based intervention but a substantial opportunity exists to operationalize these interventions through computational means. Munger (2017) developed a simple, but effective, computational intervention for the use of toxic language (the n-word), where a human-looking bot account would reply with a fixed comment about the harm such language caused and an appeal to empathy, leading to long-term behavior change in the offenders. Identifying how to best respond to abusive behavior—or whether to respond at all—are important computational next steps for this NLP strategy and one that likely needs to be done in collaboration with researchers from fields such as Psychology. Prior work has shown counter speech to be effective for limiting the effects of hate speech (Schieb and Preuss, 2016; Mathew et al., 2018; Stroud and Cox, 2018). Wright et al. (2017) notes that real-world examples of bystanders intervening can be found online, thereby providing a potential source of training data but methods are needed to reliably identify such counter speech examples. Second, interventions that occur after a point of escalation may have little positive effect in some circumstances. For example, when two individuals have already begun insulting one another, both have already become upset and must lose face to reconcile (Rubin et al., 1994). At this point, deescalation may prevent further abuse but does little for restoring the situation to a constructive dialog (Gottman, 1999). However, interventions that occur before the point of abuse can serve to shift the conversation. Recent work has shown that it is possible to predict whether a conversation will become toxic on Wikipedia (Zhang et al., 2018) and whether bullying will occur on Instagram (Liu et al., 2018). These predictable abuse trajectories open the door to developing new models for preemptive interventions that directly mitigate harm. 
Third, messages that are not intended as offensive create opportunities to nudge authors towards correcting their text if the offense is pointed out. This strategy builds upon recent work on explainable ML for identifying which parts of a message are offensive (Carton et al., 2018; Noever, 2018), and work on paraphrase and style transfer for suggesting an appropriate inoffensive alternative (Santos et al., 2018; Prabhumoye et al., 2018). For example, parts of a message could be paraphrased to adjust the level of politeness in order to minimize any cumulative disparity towards one social group (Sennrich et al., 2016). 4 Justice Frameworks for NLP Martin Luther King Jr. wrote that the biggest obstacle to Black freedom is the “white moderate, 3662 who is more devoted to ‘order’ than to justice, who prefers a negative peace which is the absence of tension to a positive peace which is the presence of justice” (King, 1963). Analogously, by focusing only on classifying individual unacceptable speech acts, NLP risks being the same kind of obstacle as the white moderate: Instead of seeking the absence of certain types of speech, we should seek the presence of equitable participation. We argue that NLP should consider supporting three types of justice—social justice, restorative justice, and procedural justice—that describe (i) what actions are allowed and encouraged, (ii) how wrongdoing should be handled, and (iii) what procedures should be followed. First, the capabilities approach to social justice focuses on what actions people can do within a social setting (Sen, 2011; Nussbaum, 2003) and provides a useful framework for thinking about what justice online could look like. Nussbaum (2003) provides a set of 10 fundamental capabilities for a just society, such as the ability to express emotion and to have an affiliation. These capabilities provide a blueprint for articulating the values and opportunities an online community provides: Instead of a negative articulation—an evergrowing list of prohibited behaviors—we should use a positive phrasing (e.g., “you will be able to”) of capabilities in an online community. Such effort naturally extends our proposal for detecting community-specific abuse to one of promoting community norms. Accordingly, NLP technologies can be developed to identify positive behaviors and ensure individuals are able to fulfill these capabilities. Several recent works have made strides in this direction by examining positive behaviors such as how constructive conversations are (Kolhatkar and Taboada, 2017; Napoles et al., 2017), whether dialog on contentious topics can exist without devolving into squabbling (Tan et al., 2016), or the level of support given between community members (Wang and Jurgens, 2018). Second, once we have adequately articulated what people in a community should be able to do, we must address how the community handles transgressions. The notion of restorative justice is a useful theoretical tool for thinking about how wrongdoing should be handled. Restorative justice theory emphasizes repair and uses a process in which stakeholders, including victims and transgressors, decide together on consequences. A restorative process may produce a punishment, such as banning, but can include consequences such as apology and reconciliation (Braithwaite, 2002). Just responses consider the emotions of both perpetrators and victims in designing the right response (Sherman, 2003). 
A key problem here is identifying which community norm is violated and NLP technologies can be introduced to aid this process of elucidating violations through classification or use of explainable ML techniques. Here, NLP can aid all parties (platforms, victims, and transgressors) in identifying appropriate avenues for restorative actions. Third, just communities also require just means of addressing wrongdoing. The notion of procedural justice explains that people are more likely to comply with a community’s rules if they believe the authorities are legitimate (Tyler and Huo, 2002; Sherman, 2003). For NLP, it means that our systems for detecting non-compliance must be transparent and fair. People will comply only if they accept the legitimacy of both the platform and the algorithms it employs. Therefore, abuse detection methods are needed to justify why a particular act was a violation to build legitimacy; a natural starting point for NLP in building legitimacy is recent work from explainable ML (Ribeiro et al., 2016; Lei et al., 2016; Carton et al., 2018). 5 Conclusion Abusive behavior online affects a substantial amount of the population. The NLP community has proposed computational methods to help mitigate this problem, yet has also struggled to move beyond the most obvious tasks in abuse detection. Here, we propose a new strategy for NLP to tackling online abuse in three ways. First, expanding our purview for abuse detection to include both extreme behaviors and the more subtle— but still offensive—behaviors like microaggressions and condescension. Second, NLP must develop methods that go beyond reactive identifyand-delete strategies to one of proactivity that intervenes or nudges individuals to discourage harm before it occurs. Third, the community should contextualize its effort inside a broader framework of justice—explicit capabilities, restorative justice, and procedural justice—to directly support the end goal of productive online communities. 3663 Acknowledgements This material is based upon work supported by the Mozilla Research Grants program and by the National Science Foundation under Grant No. 1822228. References Keith Allan. 2015. When is a slur not a slur? the use of nigger in ‘pulp fiction’. Lang. Sci., 52:187–199. Anti-Defamation League. 2019. Online hate and harassment: The american experience. https://www. adl.org/onlineharassment. Accessed: 2019-3-4. Lindsay Blackwell, Jill Dimond, Sarita Schoenebeck, and Cliff Lampe. 2017. Classification and its consequences for online harassment: Design insights from heartmob. Proceedings of the ACM on HumanComputer Interaction, 1(CSCW):24. Geoffrey C Bowker and Susan Leigh Star. 2000. Sorting things out: Classification and its consequences. MIT press. John Braithwaite. 2002. Restorative Justice & Responsive Regulation. Oxford University Press. Julian Brooke, Tong Wang, and Graeme Hirst. 2010. Automatic acquisition of lexical formality. In Proceedings of the 23rd International Conference on Computational Linguistics: Posters, pages 90–98. Association for Computational Linguistics. Samuel Carton, Qiaozhu Mei, and Paul Resnick. 2018. Extractive adversarial networks: High-recall explanations for identifying personal attacks in social media posts. In Proceedings of EMNLP. Eshwar Chandrasekharan and Eric Gilbert. 2019. Hybrid approaches to detect comments violating macro norms on reddit. arXiv preprint arXiv:1904.03596. Eshwar Chandrasekharan, Umashanthi Pavalanathan, Anirudh Srinivasan, Adam Glynn, Jacob Eisenstein, and Eric Gilbert. 
2017. You can’t stay here: The efficacy of reddit’s 2015 ban examined through hate speech. Proceedings of the ACM on HumanComputer Interaction, 1(CSCW):31. Eshwar Chandrasekharan, Mattia Samory, Shagun Jhaver, Hunter Charvat, Amy Bruckman, Cliff Lampe, Jacob Eisenstein, and Eric Gilbert. 2018. The internet’s hidden rules: An empirical study of reddit norm violations at micro, meso, and macro scales. Proceedings of the ACM on HumanComputer Interaction, 2(CSCW):32. Niyati Chhaya, Kushal Chawla, Tanya Goyal, Projjal Chanda, and Jaya Singh. 2018. Frustrated, polite, or formal: Quantifying feelings and tone in email. In Proceedings of the Second Workshop on Computational Modeling of Peoples Opinions, Personality, and Emotions in Social Media, pages 76–86. Robert B Cialdini and Melanie R Trost. 1998. Social influence: Social norms, conformity and compliance. In D. T. Gilbert, S. T. Fiske, and G. Lindzey, editors, The handbook of social psychology, pages 151–192. McGraw-Hill. Jonathan Culpeper. 1996. Towards an anatomy of impoliteness. J. Pragmat., 25(3):349–367. Cristian Danescu-Niculescu-Mizil, Moritz Sudhof, Dan Jurafsky, Jure Leskovec, and Christopher Potts. 2013. A computational approach to politeness with application to social factors. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). Thomas Davidson, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. Automated hate speech detection and the problem of offensive language. In Eleventh International AAAI Conference on Web and Social Media. Kelly P Dillon and Brad J Bushman. 2015. Unresponsive or un-noticed?: Cyberbystander intervention in an experimental cyberbullying context. Computers in Human Behavior, 45:144–150. Karthik Dinakar, Birago Jones, Catherine Havasi, Henry Lieberman, and Rosalind Picard. 2012. Common sense reasoning for detection, prevention, and mitigation of cyberbullying. ACM Transactions on Interactive Intelligent Systems (TiiS), 2(3):18. Bruce Drake. 2014. The darkest side of online harassment: Menacing behavior. Pew Research Center, http://www.pewresearch.org/facttank/2015/06/01/the-darkest-side-of-onlineharassment-menacing-behavior/. Maeve Duggan. 2017. Online harassment 2017. Jill Filipovic. 2007. Blogging while female: How internet misogyny parallels “Real-World” harassment. Yale J. Law Fem., 19(1). Paula Fortuna and S´ergio Nunes. 2018. A survey on automatic detection of hate speech in text. ACM Computing Surveys (CSUR), 51(4):85. Michel Foucault. 1972. The Archaeology of Knowledge & The Discourse on Language. Pantheon Books, New York. Nancy Fraser. 1990. Rethinking the public sphere: A contribution to the critique of actually existing democracy. Social Text, (25/26):56–80. Lei Gao and Ruihong Huang. 2017. Detecting online hate speech using context aware models. In Proceedings of RANLP. Lei Gao, Alexis Kuppersmith, and Ruihong Huang. 2017. Recognizing explicit and implicit hate speech using a weakly supervised two-path bootstrapping approach. In Proceedings of ICJNLP. 3664 Tarleton Gillespie. 2018. Custodians of the Internet: Platforms, content moderation, and the hidden decisions that shape social media. Yale University Press. Peter Glick and Susan T Fiske. 2001. An ambivalent alliance: Hostile and benevolent sexism as complementary justifications for gender inequality. American psychologist, 56(2):109. John Mordechai Gottman. 1999. The marriage clinic: A scientifically-based marital therapy. WW Norton & Company. Joshua Guberman and Libby Hemphill. 2017. 
Challenges in modifying existing scales for detecting harassment in individual tweets. In Proceedings of the 50th Hawaii International Conference on System Sciences. Susan Herring, Kirk Job-Sluder, Rebecca Scheckler, and Sasha Barab. 2002. Searching for safety online: Managing” trolling” in a feminist forum. The information society, 18(5):371–384. Thomas J Holt, Kristie R Blevins, and Natasha Burkert. 2010. Considering the pedophile subculture online. Sexual Abuse, 22(1):3–24. Elizabeth M Jaffe. 2016. Swatting: the new cyberbullying frontier after elonis v. united states. Drake L. Rev., 64:455. Akshita Jha and Radhika Mamidi. 2017. When does a compliment become sexist? analysis and classification of ambivalent sexism using twitter data. In Proceedings of the second workshop on NLP and computational social science, pages 7–16. Martin Luther King. 1963. Letter from a birmingham jail. Varada Kolhatkar and Maite Taboada. 2017. Constructive language in news comments. In Proceedings of the First Workshop on Abusive Language Online, pages 11–17. Rachael Krishna. 2018. Tumblr launched an algorithm to flag porn and so far it’s just caused chaos, dec 2018. https://www.buzzfeednews.com/article/ krishrach/tumblr-porn-algorithm-ban. Mark Latonero. 2011. Human trafficking online: The role of social networking sites and online classifieds. Available at SSRN. Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2016. Rationalizing neural predictions. In Proceedings of EMNLP. Ping Liu, Joshua Guberman, Libby Hemphill, and Aron Culotta. 2018. Forecasting the presence and intensity of hostility on instagram using linguistic and social features. In Twelfth International AAAI Conference on Web and Social Media. Patrick M Markey. 2000. Bystander intervention in computer-mediated communication. Computers in Human Behavior, 16(2):183–188. Binny Mathew, Hardik Tharad, Subham Rajgaria, Prajwal Singhania, Suman Kalyan Maity, Pawan Goyal, and Animesh Mukherje. 2018. Thou shalt not hate: Countering online hate speech. arXiv preprint arXiv:1808.04409. Timothy McLaughlin. 2018. How whatsapp fuels fake news and violence in india. https://www.wired.com/story/how-whatsapp-fuelsfake-news-and-violence-in-india/. Pushkar Mishra, Marco Del Tredici, Helen Yannakoudakis, and Ekaterina Shutova. 2018. Author profiling for abuse detection. In Proceedings of the 27th International Conference on Computational Linguistics (COLING), pages 1088–1098. Kevin Munger. 2017. Tweetment effects on the tweeted: Experimentally reducing racist harassment. Political Behavior, 39(3):629–649. Dasa Munkova, Michal Munk, and Zuzana Fr´aterov´a. 2013. Identifying social and expressive factors in request texts using transaction/sequence model. In Proceedings of RANLP, pages 496–503. Kevin L. Nadal, Katie E. Griffin, Yinglee Wong, Sahran Hamit, and Morgan Rasmus. 2014. The impact of racial microaggressions on mental health: Counseling implications for clients of color. Journal of Counseling and Development, 92(1):57–66. Courtney Napoles, Joel Tetreault, Aasish Pappu, Enrica Rosato, and Brian Provenzale. 2017. Finding good conversations online: The yahoo news annotated comments corpus. In Proceedings of the 11th Linguistic Annotation Workshop, pages 13–23. Chikashi Nobata, Joel Tetreault, Achint Thomas, Yashar Mehdad, and Yi Chang. 2016. Abusive language detection in online user content. In Proceedings of the 25th International Conference on World Wide Web, WWW ’16, pages 145–153, Republic and Canton of Geneva, Switzerland. 
International World Wide Web Conferences Steering Committee. Safiya Umoja Noble. 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press. David Noever. 2018. Machine learning suites for online toxicity detection. arXiv preprint arXiv:1810.01869. Martha Nussbaum. 2003. Capabilities as fundamental entitlements: Sen and social justice. Feminist Economics, 9(2-3):33–59. Ray Oshikawa, Jing Qian, and William Yang Wang. 2018. A survey on natural language processing for fake news detection. arXiv preprint arXiv:1811.00770. 3665 Zizi Papacharissi. 2004. Democracy online: Civility, politeness, and the democratic potential of online political discussion groups. New media & society, 6(2):259–283. Jessica Annette Pater, Yacin Nadji, Elizabeth D Mynatt, and Amy S Bruckman. 2014. Just awful enough: the functional dysfunction of the something awful forums. In Proceedings of the 32nd annual ACM conference on Human factors in computing systems, pages 2407–2410. ACM. Ellie Pavlick and Joel Tetreault. 2016. An empirical analysis of formality in online communication. Transactions of the Association of Computational Linguistics (TACL), 4(1):61–74. Shrimai Prabhumoye, Yulia Tsvetkov, Ruslan Salakhutdinov, and Alan W Black. 2018. Style transfer through back-translation. In Proceedings of ACL. Jing Qian, Mai ElSherief, Elizabeth Belding, and William Yang Wang. 2018. Leveraging intrauser and inter-user representation learning for automated hate speech detection. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), pages 118–123. Manoel Horta Ribeiro, Pedro H Calais, Yuri A Santos, Virg´ılio AF Almeida, and Wagner Meira Jr. 2018. Characterizing and detecting hateful users on twitter. In Twelfth International AAAI Conference on Web and Social Media. Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. Why should i trust you?: Explaining the predictions of any classifier. In Proceedings of KDD, pages 1135–1144. ACM. Sarah T Roberts. 2014. Behind the screen: The hidden digital labor of commercial content moderation. Ph.D. thesis, University of Illinois at UrbanaChampaign. Jessica Roy. 2016. ”’cuck,”snowflake,”masculinist’: A guide to the language of the’altright’. http://www.latimes.com/nation/ la-na-pol-alt-right-terminology-20161115-story. html).LosAngelesTimes. Jeffrey Z Rubin, Dean G Pruitt, and Sung Hee Kim. 1994. Social conflict: Escalation, stalemate, and settlement. Mcgraw-Hill Book Company. Koustuv Saha, Eshwar Chandrasekharan, and Munmun De Choudhury. 2019. Prevalence and psychological effects of hateful speech in online college communities. In WebSci. Joni Salminen, Hind Almerekhi, Milica Milenkovi´c, Soon-gyo Jung, Jisun An, Haewoon Kwak, and Bernard J Jansen. 2018. Anatomy of online hate: developing a taxonomy and machine learning models for identifying and classifying hate in online news media. In Proceedings of the Twelfth International AAAI Conference on Web and Social Media (ICWSM). Mattia Samory and Tanushree Mitra. 2018. Conspiracies online: User discussions in a conspiracy community following dramatic events. In Twelfth International AAAI Conference on Web and Social Media. Cicero Nogueira dos Santos, Igor Melnyk, and Inkit Padhi. 2018. Fighting offensive language on social media with unsupervised text style transfer. In Proceedings of ACL. Carla Schieb and Mike Preuss. 2016. Governing hate speech by means of counterspeech on facebook. In Proceedings of ICA, pages 1–23. 
Anna Schmidt and Michael Wiegand. 2017. A survey on hate speech detection using natural language processing. In Proceedings of the Fifth International Workshop on Natural Language Processing for Social Media, pages 1–10. Amartya Sen. 2011. The Idea of Justice, reprint edition edition. Belknap Press: An Imprint of Harvard University Press. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Controlling politeness in neural machine translation via side constraints. In Proceedings of NAACL, pages 35–40. Fadi Abu Sheikha and Diana Inkpen. 2011. Generation of formal and informal sentences. In Proceedings of the 13th European Workshop on Natural Language Generation, pages 187–193. Association for Computational Linguistics. Lawrence W Sherman. 2003. Reason for emotion: Reinventing justice with theories, innovations, and research—the american society of criminology 2002 presidential address. Criminology, 41(1):1–38. Alexandra Siegel. 2015. Sectarian Twitter Wars: Sunni-Shia Conflict and Cooperation in the Digital Age, volume 20. Carnegie Endowment for International Peace. Scott R Stroud and William Cox. 2018. The varieties of feminist counterspeech in the misogynistic online world. In Mediating Misogyny, pages 293–310. Springer. Derald Wing Sue. 2010. Microaggressions in Everyday Life: Race, Gender, and Sexual Orientation. Wiley, Hoboken, NJ. Derald Wing Sue, Christina M Capodilupo, Gina C Torino, Jennifer M Bucceri, Aisha M.B. B. Holder, Kevin L Nadal, and Marta Esquilin. 2007. Racial microaggressions in everyday life: Implications for clinical practice. American Psychologist, 62(4):271–286. 3666 Chenhao Tan, Vlad Niculae, Cristian DanescuNiculescu-Mizil, and Lillian Lee. 2016. Winning arguments: Interaction dynamics and persuasion strategies in good-faith online discussions. In Proceedings of the 25th international conference on world wide web, pages 613–624. International World Wide Web Conferences Steering Committee. Harry Charalambos Triandis. 1994. Culture and social behavior. McGraw-Hill New York. Tom R Tyler and Yuen Huo. 2002. Trust in the Law: Encouraging Public Cooperation with the Police and Courts. Russell Sage Foundation. Rob Voigt, Nicholas P Camp, Vinodkumar Prabhakaran, William L Hamilton, Rebecca C Hetey, Camilla M Griffiths, David Jurgens, Dan Jurafsky, and Jennifer L Eberhardt. 2017. Language from police body camera footage shows racial disparities in officer respect. Proceedings of the National Academy of Sciences, 114(25):6521–6526. Zijian Wang and David Jurgens. 2018. It’s going to be okay: Measuring access to support in online communities. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 33–45. Zeerak Waseem, Thomas Davidson, Dana Warmsley, and Ingmar Weber. 2017. Understanding abuse: A typology of abusive language detection subtasks. In Proceedings of the First Workshop on Abusive Language. Lucas Wright, Derek Ruths, Kelly P Dillon, Haji Mohammad Saleem, and Susan Benesch. 2017. Vectors for counterspeech on twitter. In Proceedings of the First Workshop on Abusive Language Online, pages 57–62. Justine Zhang, Jonathan P Chang, Cristian DanescuNiculescu-Mizil, Lucas Dixon, Yiqing Hua, Nithum Thain, and Dario Taraborelli. 2018. Conversations gone awry: Detecting early signs of conversational failure. In Proceedings of ACL.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3667–3684 Florence, Italy, July 28 - August 2, 2019. ©2019 Association for Computational Linguistics 3667
Learning from Dialogue after Deployment: Feed Yourself, Chatbot!
Braden Hancock∗ Computer Science Dept. Stanford University [email protected]
Antoine Bordes, Pierre-Emmanuel Mazaré, Jason Weston Facebook AI Research {abordes,pem,jase}@fb.com
∗BH completed most of this work at Facebook (FAIR).
Abstract
The majority of conversations a dialogue agent sees over its lifetime occur after it has already been trained and deployed, leaving a vast store of potential training signal untapped. In this work, we propose the self-feeding chatbot, a dialogue agent with the ability to extract new training examples from the conversations it participates in. As our agent engages in conversation, it also estimates user satisfaction in its responses. When the conversation appears to be going well, the user’s responses become new training examples to imitate. When the agent believes it has made a mistake, it asks for feedback; learning to predict the feedback that will be given improves the chatbot’s dialogue abilities further. On the PERSONACHAT chit-chat dataset with over 131k training examples, we find that learning from dialogue with a self-feeding chatbot significantly improves performance, regardless of the amount of traditional supervision.
Figure 1: As the self-feeding chatbot engages in dialogue, it estimates user satisfaction to know when to ask for feedback. From the satisfied responses and feedback responses, new training examples are extracted for the DIALOGUE and FEEDBACK tasks, respectively, both of which improve the model’s dialogue abilities further.
1 Introduction
Training a dialogue agent to converse like a human requires extensive supervision. The most common approach is to train models to imitate humans in large corpora of crowdsourced or scraped conversations (Serban et al., 2015). These fully supervised conversations tend to be expensive to collect in sufficient quantity and/or occur in settings with significant differences from the deployment environment (Ross et al., 2009). Instead, dialogue agents would ideally learn directly from dialogue, the conversations they participate in after deployment, which are usually abundant, task-specific, dynamic, and cheap. This corresponds to the way humans learn to converse—not merely observing others engaging in “expert-level” conversations, but instead actively adjusting and correcting our speech based on feedback woven throughout our own conversations (Bassiri, 2011; Werts et al., 1995). Giving a dialogue agent this ability would enable it to continuously improve and adapt over its lifetime, rather than requiring additional annotation costs for each and every improvement. However, naively training a dialogue agent on its own conversations yields poor results.
For example, training a model on its own output can simply reinforce its existing failure modes, and mistakes by the agent can lead to absurd conversations that no longer resemble the target domain (Hashimoto and Sassano, 2018). To combat this, one approach is to allow the agent to request feed3668 back during conversations (Zhang et al., 2018a; Li et al., 2017b), e.g., when it believes it is about to make a mistake. This approach, however, falls victim to the Dunning-Kruger effect (Kruger and Dunning, 1999), which in this case suggests that a bad model will also be bad at knowing when it is doing a bad job. Regardless of when feedback is requested, existing methods typically require accompanying scalar rewards or adherence to particular templates or structure to ensure that the feedback is usable by the model (Rieser and Lemon, 2011; Zhang et al., 2017; Liu et al., 2018). These requirements may be acceptable for paid annotators, but they impose unnatural workflows on unpaid conversation partners in a standard dialogue environment. Humans are able to request and provide feedback using only natural language; ideally, dialogue agents would be able to do the same. In this work we propose the self-feeding chatbot, a dialogue agent with the ability to extract new examples from the conversations it participates in after deployment (Figure 1). Concretely, in addition to being trained on the primary DIALOGUE task, the agent is trained to predict its speaking partner’s satisfaction with its responses. When the conversation seems to be going well, the user’s responses (but not the bot’s own utterances) become the targets in new training examples for the DIALOGUE task. When the agent believes it has made a mistake, it instead requests feedback on what it could have said instead. Predicting the feedback that will be provided in a given context becomes an auxiliary task (FEEDBACK) on which the model is also trained. Importantly, these new examples improve the agent’s dialogue abilities while using only natural responses from the user that do not require special structure, accompanying numerical feedback, or additional human intervention in order to be used. With this approach, the conversations the chatbot participates in are sliced into two complementary datasets—one largely protected from the chatbot’s mistakes (DIALOGUE examples), and one which directly addresses them (FEEDBACK examples). We validate our approach on the PERSONACHAT (Zhang et al., 2018b) dialogue dataset, finding empirically that regardless of the number of available supervised examples, the dialogue ability of the chatbot is always improved by adding the automatically extracted examples of either type, and improves the most by adding both. The main contributions of this work thus include the following: • We propose the self-feeding chatbot, a dialogue agent with the ability to extract new training examples for itself from the conversations it participates in during deployment. • We show that dialogue ability improves by imitating human responses when the human is satisfied, or by asking for feedback when they are not, predicting it as an auxiliary task. • We demonstrate that classifying user satisfaction is a learnable task important for the selffeeding process, significantly outperforming an approach based on model uncertainty. 
• We release three new datasets to further research in this direction: (1) deployment chat logs (513k messages); (2) ratings of user satisfaction (42k); (3) textual feedback on what a bot could have said in a given context (62k). The datasets and models described in this paper are available via the ParlAI platform (Miller et al., 2017), along with training code. Hyperparameter values are included in Appendix G. 2 Related Work The general concepts of lifelong learning (Silver et al., 2013) and never-ending (language) learning (Carlson et al., 2010) are related to the topics discussed in this work, as is active learning (Tong and Koller, 2001) and predictive modeling (Schmidhuber and Huber, 1991). The specific case of learning actively from dialogue during deployment was explored for the question answering (QA) setting in (Weston, 2016) and (Li et al., 2017a), where the authors examined multiple learning strategies on a suite of dialogue tasks with varying types of feedback, such as verbal cues (e.g., “Yes, that’s right!”) and scalar rewards. Most relevant to our work was their use of forward prediction, where the learner improved in quality by trying to predict the teacher’s responses without an explicit reward signal. Our work extends this idea, adding the ability for the model to recognize its mistakes and request feedback explicitly, and moving beyond QA to the more general chit-chat setting where there may be many valid responses in a given context. Learning to ask questions is another area that has been studied (Strub et al., 2017; Wang et al., 3669 Figure 2: (1) The chatbot is first trained with any available supervised data (boxed in red) on the Human-Human (HH) DIALOGUE (x, y)HH and SATISFACTION (x, s) tasks. (2) During deployment, whenever the predicted satisfaction score of the current conversation x is above the threshold (ˆs > t), a new Human-Bot (HB) DIALOGUE example (x, y)HB is extracted and the bot continues the conversation with its own response ˆy. Otherwise, the chatbot requests feedback with question q and extracts a new FEEDBACK example (x, f). (3) The chatbot is periodically retrained with the available examples from all four datasets, improving its DIALOGUE performance without collecting any new supervised examples. 2018; Rao and Daum´e, 2018). While those works focused on identifying which question to ask in a given context, in this work we are more interested in first learning when to ask a question. Li et al. (2017b) considered this question as well, but again in the context of a QA setting rather than dialogue. Hashimoto and Sassano (2018) used user responses to detect mistakes made by a deployed virtual assistant, showing that model mistakes can be identified in chit-chat, weather, or web search domains. However, they did not explore how to use these identified mistakes to improve the model further; their agent was not equipped to feed itself. Eskenazi et al. (2018) also found that the correctly assessing the appropriateness of chatbot responses is highly dependent on user responses and not preceding context alone. There are other, somewhat less related, ways to use feedback during dialogue for learning, notably for collecting knowledge to answer questions (Mazumder et al., 2018; Hixon et al., 2015; Pappu and Rudnicky, 2013), and more commonly in reinforcement learning settings, where the feedback is a scalar rather than the dialogue messages themselves (Levin et al., 2000; Schatzmann et al., 2006; Rieser and Lemon, 2011; Liu et al., 2018; Hong et al., 2019). 
In particular (Serban et al., 2017) employ user sentiment detection for reward shaping in their Alexa prize entry. Finally, our work improves dialogue quality by utilizing larger datasets with noisier labels than traditional supervision. Other applications of weak supervision to dialogue (Mallinar et al., 2019) and relation extraction have observed similar results (Bunescu and Mooney, 2007; Hancock et al., 2018; Ratner et al., 2017). 3 The Self-Feeding Chatbot The lifecycle of a self-feeding chatbot is outlined in Figure 2. In the initial training phase, the dialogue agent is trained on two tasks—DIALOGUE (next utterance prediction, or what should I say next?) and SATISFACTION (how satisfied is my speaking partner with my responses?)—using whatever supervised training data is available. We refer to these initial DIALOGUE examples as Human-Human (HH) examples, since they were generated in conversations between two humans. In the deployment phase, the agent engages in multi-turn conversations with users, extracting new deployment examples of two types. Each turn, the agent observes the context x (i.e., the conversation history) and uses it to predict its next utterance ˆy and its partner’s satisfaction ˆs. If the satisfaction score is above a specified threshold t, the agent extracts a new Human-Bot (HB) DIALOGUE example using the previous context x and the human’s response y and continues the conversation. 3670 If, however, the user seems unsatisfied with its previous response (ˆs < t), the agent requests feedback with a question q, and the resulting feedback response f is used to create a new example for the FEEDBACK task (what feedback am I about to receive?). The agent acknowledges receipt of the feedback and the conversation continues. The rate at which new DIALOGUE or FEEDBACK examples are collected can be adjusted by raising or lowering the satisfaction threshold t (we use t = 0.5).1 Periodically, the agent is retrained using all available data, thereby improving performance on the primary DIALOGUE task. It is important to note that the user’s responses are always in the form of natural dialogue. In particular, at no point are the new FEEDBACK examples inspected, post-processed, or cleaned. Instead, we rely on the fact that the feedback is not random: regardless of whether it is a verbatim response, a description of a response, or a list of possible responses (see Table 2 for examples), there is a learnable relationship between conversation contexts and their corresponding feedback which requires many of the same language understanding skills to master as does carrying on a normal conversation. The experiments in this paper are limited to the setting where the number of supervised and deployment examples are on the same order of magnitude; however, we envision scenarios in which the number of deployment examples can easily grow to 100× or more the number of supervised examples over the chatbot’s deployment lifetime, effectively providing a massive task-specific corpus at minimal cost. Table 1 reports the sizes of each dataset, all of which are available via ParlAI. 3.1 Task 1: DIALOGUE The chatbot’s primary task (DIALOGUE) is to carry on a coherent and engaging conversation with a speaking partner. Training examples take the form of (x, y) pairs, where x is the context of the conversation (the concatenation of all responses so far up to some history length, delimited with tokens marking the speaker), and y is the appropriate response given by the human. 
The Human-Human (HH) portion of the DIALOGUE dataset comes from the PERSONACHAT dataset (Zhang et al., 2018b), which consists of 1Another option would be to have two thresholds—one for each example type—to decouple collection their rates. Task Train Valid Test Total DIALOGUE – HH (HUMAN-HUMAN) 131438 7801 6634 145873 – HB (HUMAN-BOT) 60000 0 0 60000 FEEDBACK 60000 1000 1000 62000 SATISFACTION 1000 500 1000 2500 Table 1: The number of examples used in our experiments by task and split. Note that the HH DIALOGUE examples come from the PERSONACHAT dataset, HB DIALOGUE and FEEDBACK examples were collected during deployment, and an additional 40k SATISFACTION training examples were collected for the analysis in Section 5.1. short dialogues (6-8 turns) between two crowdworkers (humans) who have been assigned short text profiles and are instructed to “chat with the other person naturally and try to get to know each other.” We chose this dataset because of its size (over 145k total examples), the breadth of topics it covers, and its focus on promoting engaging conversations, which we anticipate being a necessary property of a chatbot that people will be willing to chat with voluntarily and repeatedly. We use the standard splits of the dataset made available in ParlAI as a part of the ConvAI2 challenge (Burtsev et al., 2018). Since the question of how to incorporate external knowledge (such as profiles) in dialogue is an open research question of its own (Li et al., 2016; Luan et al., 2017; Luo et al., 2018) and we are primarily interested in the question of learning from dialogue, we discard the profiles and simply train and test on the conversations themselves, making the dataset more challenging in terms of raw performance scores. The Human-Bot (HB) portion of the DIALOGUE dataset is extracted during deployment as described earlier, where the user is again a crowdworker instructed to chat naturally. The context may contain responses from both the human and the bot, but the target response is always from the human, as we will see experimentally that targeting bot responses degrades performance. Because the chit-chat domain is symmetric, both the HH and HB DIALOGUE examples are used for the same task. In an asymmetric setting where the bot has a different role than the human, it is unclear whether HB examples may still be used as an auxiliary task, but FEEDBACK examples will remain usable. 3671 Category % Feedback Examples Verbatim 53.0 • my favorite food is pizza • no, i have never been to kansas • i like when its bright and sunny outside Suggestion 24.5 • you could say hey, i’m 30. how old are you? • yes, i play battlefield would have a been a great answer. • you could have said “yes, I’m happy it’s friday.” Instructions 14.5 • tell me what your favorite breakfast food is • answer the question about having children! • tell me why your mom is baking bread Options 8.0 • you could have said yes it really helps the environment or no its too costly • you could have said yes or no, or talked more about your mustang dream. • you should have said new york, texas or maryland. something like one of those. Table 2: Examples of the types of feedback given to the dialogue agent, pulled from a random sample of 200 feedback responses. Verbatim responses could be used directly in conversation, Suggestion responses contain a potential verbatim response in them somewhere, Instructions describe a response or tell the bot what to do, and Options make multiple suggestions. 
3.2 Task 2: SATISFACTION The objective of the SATISFACTION auxiliary task is to predict whether or not a speaking partner is satisfied with the quality of the current conversation. Examples take the form of (x, s) pairs, where x is the same context as in the DIALOGUE task, and s ∈[0, 1], ranging from dissatisfied to satisfied. Crucially, it is hard to estimate from the bot’s utterance itself whether the user will be satisfied, but much easier using the human’s response to the utterance, as they may explicitly say something to that effect, e.g. “What are you talking about?”. The dataset for this task was collected via crowdsourcing. Workers chatted with our baseline dialogue agent and assigned a rating 1-5 for the quality of each of the agent’s responses.2 Contexts with rating 1 were mapped to the negative class (dissatisfied) and ratings [3, 4, 5] mapped to the positive class (satisfied). Contexts with rating 2 were discarded to increase the separation between classes for a cleaner training set. Note that these numeric ratings were requested only when collecting the initial training data, not during deployment, where only natural dialogue is used. 3.3 Task 3: FEEDBACK The objective of the FEEDBACK auxiliary task is to predict the feedback that will be given by the speaking partner when the agent believes it has made a mistake and asks for help. Examples take the form of (x, f) pairs, where x is the same context as the other two tasks and f is the feedback utterance. 2A snapshot of the data collection interface and sample conversations are included in the Appendix. Training data for this task is collected during deployment. Whenever the user’s estimated satisfaction is below a specified threshold, the chatbot responds “Oops! Sorry. What should I have said instead?”.3 A new example for the FEEDBACK task is then extracted using the context up to but not including the turn where the agent made the poor response as x and the user’s response as f (as shown in Figure 1). At that point to continue the conversation during deployment, the bot’s history is reset, and the bot instructs the user to continue, asking for a new topic. Examples of FEEDBACK responses are shown in Table 2. 4 Model and Settings 4.1 Model Architecture The self-feeding chatbot has two primary components: an interface component and a model component. The interface component is shared by all tasks, and includes input/output processing (tokenization, vectorization, etc.), conversation history storage, candidate preparation, and control flow (e.g., when to ask a question vs. when to give a normal dialogue response). The model component contains a neural network for each task, with embeddings, a network body, and a task head, some of which can be shared. In our case, we obtained maximum performance by sharing all parameters between the FEEDBACK and DIALOGUE tasks (prepending FEEDBACK responses with a special token), and using separate model parameters for the SATISFACTION task. Identifying optimal task structure in multi-task learning (MTL) 3Future work should examine how to ask different kinds of questions, depending on the context. 3672 architectures is an open research problem (Ruder, 2017). Regardless of what parameters are shared, each training batch contains examples from only one task at a time, candidate sets remain separate, and each task’s cross-entropy loss is multiplied by a task-specific scaling factor tuned on the validation set to help account for discrepancies in dataset size, loss magnitude, dataset relevance, etc. 
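A rough illustration of the training scheme just described (batches from a single task at a time, each task's loss multiplied by a tuned scaling factor) is given below; the scale values, model interfaces, and the shuffle used as a stand-in for proportional sampling are placeholders rather than the authors' code.

import random

loss_scale = {"dialogue": 1.0, "feedback": 0.75, "satisfaction": 1.0}  # tuned on the validation set

def train_epoch(batches_by_task, models, optimizer):
    # Each batch contains examples from only one task; candidate sets remain separate.
    schedule = [(task, batch) for task, batches in batches_by_task.items() for batch in batches]
    random.shuffle(schedule)                  # stand-in for proportional sampling over tasks
    for task, batch in schedule:
        optimizer.zero_grad()
        loss = models[task](batch)            # task-specific cross-entropy loss
        (loss_scale[task] * loss).backward()  # scale to balance dataset size, loss magnitude, etc.
        optimizer.step()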
Our dialogue agent’s models are built on the Transformer architecture (Vaswani et al., 2017), which has been shown to perform well on a variety of NLP tasks (Devlin et al., 2018; Radford et al., 2018), including multiple persona-based chat applications (Shuster et al., 2018a,b; Rashkin et al., 2018). For the SATISFACTION task, the context x is encoded with a Transformer and converted to the scalar satisfaction prediction ˆs by a final linear layer in the task head. The DIALOGUE and FEEDBACK tasks are set up as ranking problems, as in (Zhang et al., 2018b; Mazar´e et al., 2018), where the model ranks a collection of candidate responses and returns the top-ranked one as its response. The context x is encoded with one Transformer and ˆy and ˆf candidates are encoded with another. The score for each candidate is calculated as the dot product of the encoded context and encoded candidate. During training, negative candidates are pulled from the correct responses for the other examples in the mini-batch. During evaluation, however, to remain independent of batch size and data shuffling, each example is assigned a static set of 19 other candidates sampled at random from its split of the data. During deployment, all 127,712 unique HH DIALOGUE candidates from the train split are encoded once with the trained model and each turn the model selects the top-ranked one for the given context. 4.2 Model Settings Contexts and candidates are tokenized using the default whitespace and punctuation tokenizer in ParlAI. We use a maximum dialogue history length of 2 (i.e., when making a prediction, the dialogue agent has access to its previous utterance and its partner’s response). Tokens are embedded with fastText (Bojanowski et al., 2017) 300-dimensional embeddings. We do not limit the vocabulary size, which varies from 11.5k to 23.5k words in our experiments, depending on the training set. The Transformer is implemented in PyTorch (Paszke et al., 2017) within the ParlAI framework. We use the AdaMax (Kingma and Ba, 2014) optimizer with a learning rate schedule that decays based on the inverse square root of the step number after 500 steps of warmup from 1e-5. We use proportional sampling (Sanh et al., 2018) to select batches from each task for training, with batch size 128. Each Transformer layer has two attention heads and FFN size 32. The initial learning rate (0.001-0.005), number of Transformer layers (1-2), and task-specific loss factors (0.5-2.0) are selected on a per-experiment basis based on a grid search over the validation set averaged over three runs (we use the DIALOGUE validation set whenever multiple tasks are involved). We use early stopping based on the validation set to decide when to stop training. The hyperparameter values for the experiments in Section 5 are included in Appendix G. Note that throughout development, a portion of the DIALOGUE validation split was used as an informal test set. The official hidden test set for the DIALOGUE task was used only to produce the final numbers included in this paper. 5 Experimental Results Throughout this section, we use the ranking metric hits@X/Y, or the fraction of the time that the correct candidate response was ranked in the top X out of Y available candidates; accuracy is another name for hits@1/Y. Statistical significance for improvement over baselines is assessed with a two-sample one-tailed T-test. 
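The hits@X/Y metric just defined can be computed directly from candidate scores (for example, the context-candidate dot products described above); the helper names in this sketch are ours, not ParlAI's.

def hits_at(x, correct_score, distractor_scores):
    """Return 1.0 if the correct candidate ranks in the top x, else 0.0 (ties count against)."""
    rank = 1 + sum(score >= correct_score for score in distractor_scores)
    return 1.0 if rank <= x else 0.0

def mean_hits_at(x, examples):
    # examples: iterable of (correct_score, distractor_scores) pairs; with 19 sampled
    # distractors per example this gives hits@x/20, and hits@1/20 is the accuracy reported later.
    values = [hits_at(x, correct, distractors) for correct, distractors in examples]
    return sum(values) / len(values)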
5.1 Benefiting from Deployment Examples Our main result, reported in Table 3, is that utilizing the deployment examples improves accuracy on the DIALOGUE task regardless of the number of available supervised (HH) DIALOGUE examples.4 The boost in quality is naturally most pronounced when the HH DIALOGUE training set is small (i.e., where the learning curve is steepest), yielding an increase of up to 9.4 accuracy points, a 31% improvement. However, even when the entire PERSONACHAT dataset of 131k examples is used—a much larger dataset than what is available for most dialogue tasks—adding deployment examples is still able to provide an additional 1.6 points of accuracy on what is otherwise a very flat region of 4For comparisons with other models, see Appendix C. The best existing score reported elsewhere on the PERSONACHAT test set without using profiles is 34.9. 3673 Human-Bot (HB) Human-Human (HH) DIALOGUE DIALOGUE FEEDBACK 20k 40k 60k 131k 30.3 (0.6) 36.2 (0.4) 39.1 (0.5) 44.7 (0.4) 20k 32.7 (0.5) 37.5 (0.6) 40.2 (0.5) 45.5 (0.7) 40k 34.5 (0.5) 37.8 (0.6) 40.6 (0.6) 45.1 (0.6) 60k 35.4 (0.4) 37.9 (0.7) 40.2 (0.8) 45.0 (0.7) 20k 35.0 (0.5) 38.9 (0.3) 41.1 (0.5) 45.4 (0.8) 40k 36.7 (0.7) 39.4 (0.5) 41.8 (0.4) 45.7 (0.6) 60k 37.8 (0.6) 40.6 (0.5) 42.2 (0.7) 45.8 (0.7) 60k 60k 39.7 (0.6) 42.0 (0.6) 43.3 (0.7) 46.3 (0.8) Table 3: Accuracy (hits@1/20) on the DIALOGUE task’s hidden test set by number of Human-Human (HH) DIALOGUE, Human-Bot (HB) DIALOGUE, and FEEDBACK examples, averaged over 20 runs, with standard deviations in parentheses. For each column, the model using all three data types (last row) is significantly better than all the others, and the best model using only one type of self-feeding (FEEDBACK examples or HB DIALOGUE examples) is better than the supervised baseline in the first row (p < 0.05). the learning curve. It is interesting to note that the two types of deployment examples appear to provide complementary signal, with models performing best when they use both example types, despite them coming from the same conversations. We also calculated hit rates with 10,000 candidates (instead of 20), a setup more similar to the interactive setting where there may be many candidates that could be valid responses. In that setting, models trained with the deployment examples continue to outperform their HH-only counterparts by significant margins (see Appendix B). On average, we found that adding 20k FEEDBACK examples benefited the agent about as much as 60k HB DIALOGUE examples.5 This is somewhat surprising given the fact that nearly half of the FEEDBACK responses would not even be reasonable responses if used verbatim in a conversation (instead being a list of options, a description of a response, etc.) as shown in Table 2. Nevertheless, the tasks are related enough that the DIALOGUE task benefits from the MTL model’s improved skill on the FEEDBACK task. And whereas HB DIALOGUE examples are based on conversations where the user appears to already be satisfied with the agent’s responses, each FEEDBACK example corresponds to a mistake made by the model, giving the latter dataset a more active 5Our baseline chatbot collected approximately one FEEDBACK example for every two HB DIALOGUE examples, but this ratio will vary by application based on the task difficulty, satisfaction threshold(s), and current model quality. role in improving quality. 
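For reference, a comparison like those underlying Table 3 could be tested with a two-sample one-tailed t-test over per-run accuracies, as sketched below; the numbers are made-up placeholders, not results from the paper, and the call assumes SciPy 1.6 or later.

from scipy import stats

acc_with_deployment_data = [0.463, 0.455, 0.471, 0.460, 0.466]  # hypothetical per-run hits@1/20
acc_supervised_only      = [0.447, 0.441, 0.452, 0.444, 0.449]

t_stat, p_value = stats.ttest_ind(acc_with_deployment_data, acc_supervised_only,
                                  alternative="greater")        # one-tailed: first mean > second
print(f"one-tailed p-value: {p_value:.4f}")                     # compared against p < 0.05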
Interestingly, our bestperforming model, which achieves 46.3 accuracy on DIALOGUE, scores 68.4 on FEEDBACK, suggesting that the auxiliary task is a simpler task overall. When extracting HB DIALOGUE examples, we ignore human responses that the agent classifies as expressing dissatisfaction, since these turns do not represent typical conversation flow. Including these responses in the 60k HB dataset decreases hits@1/20 by 1.2 points and 0.6 points when added to 20k and 131k HH DIALOGUE examples, respectively. We also explored using chatbot responses with favorable satisfaction scores (ˆs > t) as new training examples, but found that our models performed better without them (see Appendix D for details). We also found that “fresher” feedback results in bigger gains. We compared two models trained on 20k HH DIALOGUE examples and 40k FEEDBACK examples—the first collected all 40k FEEDBACK examples at once, whereas the second was retrained with its first 20k FEEDBACK examples before collecting the remaining 20k. While the absolute improvement of the second model over the first was small (0.4 points), it was statistically significant (p =0.027) and reduced the gap to a model trained on fully supervised (HH) DIALOGUE examples by 17% while modifying only 33% of the training data.6 This improvement makes sense intuitively, since new FEEDBACK examples are 6Additional detail can be found in Appendix E. 3674 Method Pr. Re. F1 Uncertainty Top 0.39 0.99 0.56 (Pr. ≥0.5) 0.50 0.04 0.07 Uncertainty Gap 0.38 1.00 0.55 (Pr. ≥0.5) 0.50 0.04 0.07 Satisfaction Regex 0.91 0.27 0.42 Satisfaction Classifier (1k) 0.84 0.84 0.84 Satisfaction Classifier (2k) 0.89 0.84 0.87 Satisfaction Classifier (5k) 0.94 0.82 0.88 Satisfaction Classifier (20k) 0.96 0.84 0.89 Satisfaction Classifier (40k) 0.96 0.84 0.90 Table 4: The maximum F1 score (with corresponding precision and recall) obtained on the SATISFACTION task. For the Uncertainty methods, we also report the maximum F1 score with the constraint that precision must be ≥0.5. The Satisfaction Classifier is reported with varying numbers of SATISFACTION training examples. collected based on failure modes of the current model, making them potentially more efficient in a manner similar to new training examples selected via active learning. It also suggests that the gains we observe in Table 3 might be further improved by (a) collecting FEEDBACK examples specific to each model (rather than using the same 60k FEEDBACK examples for all models), and (b) more frequently retraining the MTL model (e.g., every 5k examples instead of every 20k) or updating it in an online manner. We leave further exploration of this observation for future work. The same experiment repeated for HB DIALOGUE examples found that fresher HB examples were no more valuable than stale ones, matching our intuition that HB DIALOGUE examples are less targeted at current model failure modes than FEEDBACK ones. 5.2 Predicting User Satisfaction For maximum efficiency, we aim to ask for feedback when it will most benefit our model. The approach we chose (classifying the tone of partner responses) takes advantage of the fact that it is easier to recognize that a mistake has already been made than it is to avoid making that mistake; or in other words, sentiment classification is generally an easier task than next utterance prediction. We compare this to the approach of asking for feedback whenever the model is most uncertain what to say next. 
This approach acts on the assumption that the model will be least confident when it is about to make a mistake, which we find very frequently to not be the case. Not only is it difficult to recognize one’s own mistakes, but also there are often multiple valid responses to a given context (e.g., “Yes, I love seafood!” or “Yuck, fish is gross.”)—a lack of certainty about which to use does not necessarily suggest a poor model. Table 4 reports the maximum F1 scores achieved by each method on the SATISFACTION test set. For the model uncertainty approach, we tested two variants: (a) predict a mistake when the confidence in the top rated response is below some threshold t, and (b) predict a mistake when the gap between the top two rated responses is below the threshold t. We used the best-performing standalone DIALOGUE model (one trained on the full 131k training examples) for assessing uncertainty and tuned the thresholds to achieve maximum F1 score. For the user satisfaction approach, we trained our dialogue agent on just the SATISFACTION task. Finally, we also report the performance of a regular-expression-based method which we used during development, based on common ways of expressing dissatisfaction that we observed in our pilot studies, see Appendix F for details. As shown by Table 4, even with only 1k training examples (the amount used for the experiments in Section 5.1), the trained classifier significantly outperforms both the uncertainty-based methods and our original regular expression, by as much as 0.28 and 0.42 F1 points, respectively. 6 Future Work In this work we learned from dialogue using two types of self-feeding: imitation of satisfied user messages, and learning from the feedback of unsatisfied users. In actuality, there are even more ways a model could learn to improve itself—for example, learning which question to ask in a given context to receive the most valuable feedback. One could even use the flexible nature of dialogue to intermix data collection of more than one type— sometimes requesting new FEEDBACK examples, and other times requesting new SATISFACTION examples (e.g., asking “Did my last response make sense?”). In this way, a dialogue agent could both improve its dialogue ability and its potential to improve further. We leave exploration of this metalearning theme to future work. 3675 References M. A. Bassiri. 2011. Interactional feedback and the impact of attitude and motivation on noticing l2 form. English Language and Literature Studies, 1(2):61– 73. P. Bojanowski, E. Grave, A. Joulin, and T. Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics (TACL), 5:135–146. R. Bunescu and R. Mooney. 2007. Learning to extract relations from the web using minimal supervision. In Association for Computational Linguistics (ACL). M. Burtsev, V. Logacheva, V. Malykh, R. Lowe, I. Serban, S. Prabhumoye, E. Dinan, D. Kiela, A. Miller, K. Shuster, A. Szlam, J. Urbanek, and J. Weston. 2018. The conversational intelligence challenge 2 (ConvAI2). A. Carlson, J. Betteridge, B. Kisiel, B. Settles, E. R. H. Jr, and T. M. Mitchell. 2010. Toward an architecture for never-ending language learning. In Association for the Advancement of Artificial Intelligence (AAAI). J. Devlin, M. Chang, K. Lee, and K. Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. M. Eskenazi, R. Evgeniia M. Shikib, and T. Zhao. 2018. 
Beyond turing: Intelligent agents centered on the user. arXiv preprint arXiv:1803.06567. B. Hancock, P. Varma, S. Wang, M. Bringmann, P. Liang, and C. R´e. 2018. Training classifiers with natural language explanations. In Association for Computational Linguistics (ACL). C. Hashimoto and M. Sassano. 2018. Detecting absurd conversations from intelligent assistant logs by exploiting user feedback utterances. In World Wide Web (WWW), pages 147–156. B. Hixon, P. Clark, and H. Hajishirzi. 2015. Learning knowledge graphs for question answering through conversational dialog. In North American Association for Computational Linguistics (NAACL). T. Hong, O. Kwon, and Y. Kim. 2019. An end-toend trainable task-oriented dialog system with human feedback. In Association for the Advancement of Artificial Intelligence (AAAI). D. Kingma and J. Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. J. Kruger and D. Dunning. 1999. Unskilled and unaware of it: how difficulties in recognizing one’s own incompetence lead to inflated selfassessments. Journal of personality and social psychology, 77(6):1121–1134. E. Levin, R. Pieraccini, and W. Eckert. 2000. A stochastic model of human-machine interaction for learning dialog strategies. IEEE Transactions on Speech and Audio Processing, 8(1):11–23. J. Li, M. Galley, C. Brockett, J. Gao, and B. Dolan. 2016. A persona-based neural conversation model. In Association for Computational Linguistics (ACL). J. Li, A. H. Miller, S. Chopra, M. Ranzato, and J. Weston. 2017a. Dialogue learning with human-in-theloop. In International Conference on Learning Representations (ICLR). J. Li, A. H. Miller, S. Chopra, M. Ranzato, and J. Weston. 2017b. Learning through dialogue interactions by asking questions. In International Conference on Learning Representations (ICLR). B. Liu, G. T¨ur, D. Hakkani-T¨ur, P. Shah, and L. Heck. 2018. Dialogue learning with human teaching and feedback in end-to-end trainable task-oriented dialogue systems. In North American Association for Computational Linguistics (NAACL), volume 1, pages 2060–2069. Y. Luan, C. Brockett, B. Dolan, J. Gao, and M. Galley. 2017. Multi-task learning for speaker-role adaptation in neural conversation models. In Association for Computational Linguistics and International Joint Conference on Natural Language Processing (ACL-IJCNLP), volume 1, pages 605–614. L. Luo, W. Huang, Q. Zeng, Z. Nie, and X. Sun. 2018. Learning personalized end-to-end goal-oriented dialog. arXiv preprint arXiv:1811.04604. N. Mallinar, A. Shah, R. Ugrani, A. Gupta, M. Gurusankar, T. K. Ho, Q. V. Liao, Y. Zhang, R. Bellamy, and R. Yates. 2019. Bootstrapping conversational agents with weak supervision. In Association for the Advancement of Artificial Intelligence (AAAI). P. Mazar´e, S. Humeau, M. Raison, and A. Bordes. 2018. Training millions of personalized dialogue agents. In Empirical Methods in Natural Language Processing (EMNLP), pages 2775–2779. S. Mazumder, N. Ma, and B. Liu. 2018. Towards a continuous knowledge learning engine for chatbots. arXiv preprint arXiv:1802.06024. A. H. Miller, W. Feng, A. Fisch, J. Lu, D. Batra, A. Bordes, D. Parikh, and J. Weston. 2017. Parlai: A dialog research software platform. In Empirical Methods in Natural Language Processing (EMNLP), pages 79–84. A. Pappu and A. Rudnicky. 2013. Predicting tasks in goal-oriented spoken dialog systems using semantic knowledge bases. In Proceedings of the SIGDIAL 2013 Conference, pages 242–250. A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. 
DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer. 2017. Automatic differentiation in pytorch. 3676 A. Radford, K. Narasimhan, T. Salimans, and I. Sutskever. 2018. Improving language understanding by generative pre-training. Technical report, OpenAI. S. Rao and H. Daum´e. 2018. Learning to ask good questions: Ranking clarification questions using neural expected value of perfect information. pages 2737–2746. H. Rashkin, E. M. Smith, M. Li, and Y. Boureau. 2018. I know the feeling: Learning to converse with empathy. arXiv preprint arXiv:1811.00207. A. Ratner, S. H. Bach, H. Ehrenberg, J. Fries, S. Wu, and C. R´e. 2017. Snorkel: Rapid training data creation with weak supervision. In Very Large Data Bases (VLDB), 3, pages 269–282. V. Rieser and O. Lemon. 2011. Reinforcement learning for adaptive dialogue systems: a data-driven methodology for dialogue management and natural language generation. Springer Science & Business Media. J. Ross, A. Zaldivar, L. Irani, and B. Tomlinson. 2009. Who are the turkers? worker demographics in amazon mechanical turk. Technical report, Department of Informatics, University of California, Irvine. S. Ruder. 2017. An overview of multi-task learning in deep neural networks. arXiv preprint arXiv:1706.05098. V. Sanh, T. Wolf, and S. Ruder. 2018. A hierarchical multi-task approach for learning embeddings from semantic tasks. arXiv preprint arXiv:1811.06031. J. Schatzmann, K. Weilhammer, M. Stuttle, and S. Young. 2006. A survey of statistical user simulation techniques for reinforcement-learning of dialogue management strategies. The knowledge engineering review, 21(2):97–126. J. Schmidhuber and R. Huber. 1991. Learning to generate artificial fovea trajectories for target detection. International Journal of Neural Systems, 2(1):125– 134. I. V. Serban, R. Lowe, L. Charlin, and J. Pineau. 2015. A survey of available corpora for building data-driven dialogue systems. arXiv preprint arXiv:1512.05742. I. V. Serban, C. Sankar, M. Germain, S. Zhang, Z. Lin, S. Subramanian, T. Kim, M. Pieper, S. Chandar, N. R. Ke, et al. 2017. A deep reinforcement learning chatbot. arXiv preprint arXiv:1709.02349. K. Shuster, S. Humeau, A. Bordes, and J. Weston. 2018a. Engaging image chat: Modeling personality in grounded dialogue. arXiv preprint arXiv:1811.00945. K. Shuster, S. Humeau, H. Hu, A. Bordes, and J. Weston. 2018b. Engaging image captioning via personality. arXiv preprint arXiv:1810.10665. D. L. Silver, Q. Yang, and L. Li. 2013. Lifelong machine learning systems: Beyond learning algorithms. In Association for the Advancement of Artificial Intelligence (AAAI), volume 13. F. Strub, H. D. Vries, J. Mary, B. Piot, A. Courville, and O. Pietquin. 2017. End-to-end optimization of goal-driven and visually grounded dialogue systems. arXiv preprint arXiv:1703.05423. S. Tong and D. Koller. 2001. Support vector machine active learning with applications to text classification. Journal of machine learning research, 2(0):45–66. A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin. 2017. Attention is all you need. arXiv preprint arXiv:1706.03762. Y. Wang, B. Dai, L. Kong, X. Ma, S. M. Erfani, J. Bailey, S. Xia, L. Song, and H. Zha. 2018. Learning deep hidden nonlinear dynamics from aggregate data. In Uncertainty in Artificial Intelligence (UAI). M. G. Werts, M. Wolery, A. Holcombe, and D. L. Gast. 1995. Instructive feedback: Review of parameters and effects. Journal of Behavioral Education, 5(1):55–75. J. E. Weston. 2016. 
Dialog-based language learning. In Advances in Neural Information Processing Systems (NeurIPS), pages 829–837. H. Zhang, H. Yu, and W. Xu. 2017. Listen, interact and talk: Learning to speak via interaction. arXiv preprint arXiv:1705.09906. H. Zhang, H. Yu, and W. Xu. 2018a. Interactive language acquisition with one-shot visual concept learning through a conversational game. arXiv preprint arXiv:1805.00462. S. Zhang, E. Dinan, J. Urbanek, A. Szlam, D. Kiela, and J. Weston. 2018b. Personalizing dialogue agents: I have a dog, do you have pets too? arXiv preprint arXiv:1801.07243. 3677 A Data Collection Protocol Here we report in greater detail the protocol we followed to collect the SATISFACTION, FEEDBACK, and HB DIALOGUE examples used in the experiments of Section 5. We first trained our dialogue agent on just the DIALOGUE task with 20k HH examples. This agent was deployed on a crowdsourcing platform using the interface shown in Appendix H.2 to collect 2.5k SATISFACTION examples. These were split into 1k train, 500 validation, and 1k test examples. The agent was retrained using the 20k HH DIALOGUE examples and 1k SATISFACTION examples, then deployed to collect the first batch of deployment examples. We collected 40k FEEDBACK examples (feedback set A) over the course of 17,250 conversations with 10 turns each (20 utterances, including the initial prompt). We then retrained the agent on all three datasets, using the same 20k HH DIALOGUE examples as before and only 20k of the available 40k FEEDBACK examples. This model was deployed to collect another 20k FEEDBACK examples (feedback set B), for a total of 60k FEEDBACK examples (A + B). In Table 3 we use these 60k FEEDBACK examples interchangeably; in Appendix E we compare them head-to-head. The 60k HB DIALOGUE examples were extracted from the logs of the deployment conversations. Finally, we collected an additional 40k SATISFACTION training examples to produce the numbers in Table 4 investigating the learning curve for this task. No filtering was performed on the crowdworker conversations. Upon inspection after the fact, some workers did indeed give poor responses, make typographical mistakes, misunderstand the instructions, try to use the chatbot as a question answering interface, etc. We assume however that similar types of noise will be present in most chatbot deployment environments and opted to maintain a workflow that truly does not require developer intervention to use the newly collected examples. B Results with 10k Candidates HH HB FB Hits@X/10,000 @1 @10 @100 20k 0.8 4.6 16.2 20k 60k 60k 2.0 8.4 25.0 40k 1.3 6.5 21.8 40k 60k 60k 2.1 9.0 27.2 60k 1.6 7.0 24.0 60k 60k 60k 2.2 9.7 28.8 131k 2.5 10.0 30.3 131k 60k 60k 2.8 11.2 31.8 Table 5: When the number of candidates to choose from is increased to 10,000, adding Human-Bot (HB) DIALOGUE and FEEDBACK (FB) examples continues to improve performance on the DIALOGUE task at all levels. C PERSONACHAT Comparisons and Baselines Our experiments use the PERSONACHAT distribution that was released as a part of the ConvAI2 (Burtsev et al., 2018) challenge. This distribution is slightly cleaner than the original PERSONACHAT release and comes with a new crowdsourced test set. In order to compare with the models and baselines used in the original PERSONACHAT paper (Zhang et al., 2018b), we report in this section the performance of our models on the original PERSONACHAT test set, not the ConvAI2 test set. 
Note that empirically, near Hits@1/20 = 50, each additional point of improvement corresponds to tens of thousands of fullysupervised Human-Human DIALOGUE examples. All numbers reported here are for models that do not have access to the profiles that were used in the creation of the conversations; models that do have access to this additional information tend to perform even better. 3678 Model Hits@1/20 (Zhang et al., 2018b) Seq2Seq 9.2 IR Baseline 21.4 Starspace 31.8 Profile Memory 31.8 KV Profile Memory 34.9 Ours Transformer 49.6 Self-Feeding 51.7 Table 6: The accuracy of various models and baselines on the original PERSONACHAT test set. D Using Chatbot Responses as Targets HH BF BU Hits@1/20 20k 30.3 20k 32k 22.7 20k 33k 19.3 131k 44.7 131k 32k 40.4 131k 33k 39.0 Table 7: Both with few HH DIALOGUE examples (20k) and many (131k), adding examples with bot utterances as the target decreased quality. We explored using all bot responses (Bot Unfiltered, or BU) and only those responses with estimated satisfaction scores greater than the 0.5 (Bot Filtered, or BF). We also considered whether it was possible to consistently identify really good responses by the chatbot, rather than the really bad ones. These could potentially be used as DIALOGUE examples along with the ones that have human responses as targets (which we refer to as HH and HB in the paper). To explore this question, we modified our SATISFACTION dataset so that contexts with a rating of 5 were the positive class and ones with ratings [1, 2, 3] were the negative class (discarding ratings of 4 to increase the separation between classes). The results were negative—even with a training set of over 34k examples, the maximum precision we were able to achieve while maintaining at least 10% recall was 0.70, which is insufficient to improve performance on the DIALOGUE task. Upon inspection, it appears that really good responses are hard to identify because most of the time they look like a normal human-tohuman conversation, and recognizing an appropriate next utterance is precisely the DIALOGUE task that we are trying to solve! Negative responses, however, are much more semantically similar to one another, since most express one of a few common ideas such as asking for clarification or conveying confusion. E The Effect of Data Freshness HH HBA HBB FBA FBB Total Hits@1/20 20k 20k 30.3 20k 40k 60k 35.4 20k 20k 20k 60k 35.3 40k 40k 36.2 20k 40k 60k 36.7 20k 20k 20k 60k 37.1 60k 60k 39.1 Table 8: As discussed in Section 5.1 and illustrated in Figure 3, FEEDBACK (FB) examples collected from a more recently retrained model (set B instead of set A) are more valuable in terms of improving performance; see Appendix A for details on how sets A and B were collected. We did not observe the same trend for HB DIALOGUE examples. We include the performance of models trained on only HH DIALOGUE examples in italics as reference points. Figure 3: The first 20k examples for all models are supervised DIALOGUE examples. This model is deployed to collect 20k FEEDBACK examples (set A). If the model is retrained before collecting the next 20k examples (set B), the fresher feedback results in better performance (p = 0.027). Shaded regions depict 95% confidence intervals. 
3679 F SATISFACTION Regular Expressions As described in Section 5.2, before we trained a classifier on the SATISFACTION task, we used the union of the following six regular expressions (using Python regular expression syntax) to identify user dissatisfaction and trigger feedback requests: r"i .*(?:said|asked|told).*" r"((not|nt|n’t).*mak.*sense)|(mak.*no .*sense)" r"u(m|h)+\W" r"you.*what\?" r"what.*you (?:mean|refer|talk).*\?" r"what.*to do with.*\?" G Hyperparameters HH HB FB layers learning rate loss factor DIALOGUE FEEDBACK 20k 1 0.0010 1.00 20k 20k 1 0.0010 1.00 20k 40k 1 0.0010 1.00 20k 60k 1 0.0010 1.00 20k 20k 1 0.0010 1.00 0.50 20k 40k 1 0.0010 1.00 0.50 20k 60k 1 0.0010 1.00 0.75 20k 60k 60k 1 0.0025 1.00 1.50 40k 1 0.0010 1.00 40k 20k 1 0.0010 1.00 40k 40k 1 0.0010 1.00 40k 60k 1 0.0025 1.00 40k 20k 1 0.0010 1.00 0.50 40k 40k 1 0.0010 1.00 0.75 40k 60k 1 0.0025 1.00 1.00 40k 60k 60k 1 0.0025 1.00 1.25 60k 2 0.0010 1.00 60k 20k 1 0.0025 1.00 60k 40k 1 0.0025 1.00 60k 60k 1 0.0025 1.00 60k 20k 1 0.0025 1.00 1.00 60k 40k 1 0.0025 1.00 1.00 60k 60k 1 0.0025 1.00 1.00 60k 60k 60k 1 0.0025 1.00 0.75 131k 2 0.0025 1.00 131k 20k 2 0.0025 1.00 1.00 131k 40k 2 0.0025 1.00 1.00 131k 60k 2 0.0025 1.00 1.00 131k 20k 2 0.0025 1.00 1.00 131k 40k 2 0.0025 1.00 1.00 131k 60k 2 0.0025 1.00 1.00 131k 60k 60k 2 0.0025 1.00 1.00 Table 9: The hyperparameters used to obtain the results in Table 3. 3680 H Data Collection Interfaces H.1 Deployment Data Collection We simulated a deployment environment on a crowdsourcing platform with the following interface. Crowdworkers were told to “Talk as you would to a person you have just met.” H.2 SATISFACTION Data Collection The environment for collecting SATISFACTION ratings was very similar to the deployment environment, with the exception that workers were required to give a rating 1-5 on the quality of each response given by the chatbot at the end of their own responses. 3681 I Sample Deployment Conversations A sample of the conversations that occurred between a Bot (B) and Human (H) during the collection of the deployment datasets (more specifically, feedback set B). The self-feeding chatbot used its satisfaction classifier trained on 1k examples to determine when to ask for feedback. The dialogue model at this point had only been trained on 20k HH DIALOGUE examples and 20k FEEDBACK examples, so it still makes frequent mistakes. See Appendix H.1 for a snapshot of the actual interface. B: start a conversation with the chatbot. if you need ideas for a topic, here are a few random words: random, mistake, century. H: what’s the last mistake you made? B: what do you do ? i’ve a toothpick business , H: that’s not what i asked. B: oops! i think i messed up. whether i messed up or not, what could i have said (in response to ”what’s the last mistake you made?”)? H: told me about a recent mistake. B: thanks! i’ll try to remember that. can you pick a new topic for us to talk about now? H: what do you do for a living? B: i am a psychology major . what do you do ? H: i am a student as well. i’m studying law. B: oh , my five children and husband all speak spanish as well . H: i never said i speak spanish, what are you doing? B: oops! i think i messed up. whether i messed up or not, what could i have said (in response to ”i am a student as well. i’m studying law.”)? H: talked about your college courses. B: thanks! i’ll try to remember that. can you pick a new topic for us to talk about now? H: how old are your children? B: it is . we are just thrilled we have ac again . 
its calming my anxiety . H: that has nothing to do with what i said. B: yes since i prefer my hair straight . i straighten it with a flat iron . tedious ! H: you make no sense. B: start a conversation with the chatbot. if you need ideas for a topic, here are a few random words: picnic, drain, elm. H: do you like to go to the beach or woods for a picnic? B: good , going to the doctor soon , i’ve some issue with my knees H: that’s doesn’t make sense B: oops! i think i messed up. whether i messed up or not, what could i have said (in response to ”do you like to go to the beach or woods for a picnic?”)? H: you could have chose the beach or woods to previous question B: thanks! i’ll try to remember that. can you pick a new topic for us to talk about now? H: do you travel? B: i want to go to paris and make clothes . H: wow that’s interesting. i’ve never been to paris B: i can see that . do you like music ? H: i love music. what is your favorite kind? B: tell me more about you H: that doesn’t answer the question B: oops! i think i messed up. whether i messed up or not, what could i have said (in response to ”i love music. what is your favorite kind?”)? H: you could have selected a type of music that you like B: thanks! i’ll try to remember that. can you pick a new topic for us to talk about now? H: do you like animals? i have a dog B: i like those , too , but pizza is better ! ; H: umm, what? B: start a conversation with the chatbot. if you need ideas for a topic, here are a few random words: boy, punk, ray. H: are you a boy or a girl? B: do you have any jokes ? H: i wasn’t talking about jokes. B: oops! i think i messed up. whether i messed up or not, what could i have said (in response to ”are you a boy or a girl?”)? H: i was talking about your gender. B: thanks! i’ll try to remember that. can you pick a new topic for us to talk about now? H: sure, do you like music? B: oh that’s neat . you should write me a poem about star trek ! H: what? B: oops! i think i messed up. whether i messed up or not, what could i have said (in response to ”sure, do you like music?”)? H: we were talking about music. B: thanks! i’ll try to remember that. can you pick a new topic for us to talk about now? H: what is your favorite type of art? 3682 B: nice ! are the pets your own ? H: no B: oops! i think i messed up. whether i messed up or not, what could i have said (in response to ”what is your favorite type of art?”)? H: we weren’t talking about pets. B: thanks! i’ll try to remember that. can you pick a new topic for us to talk about now? H: sure, do you know anything about biology? B: start a conversation with the chatbot. if you need ideas for a topic, here are a few random words: college, cliff, tachometer. H: hey, how’s it going? B: no . i am just 15 . parents would kill me . H: what are you talking about? B: oops! i think i messed up. whether i messed up or not, what could i have said (in response to ”hey, how’s it going?”)? H: i was just saying hi, so you should have said a greeting? B: thanks! i’ll try to remember that. can you pick a new topic for us to talk about now? H: no problem. so, do you go to school? B: i have ten of them ! H: no, that isn’t right. B: oops! i think i messed up. whether i messed up or not, what could i have said (in response to ”no problem. so, do you go to school?”)? H: i was asking about school or education. B: thanks! i’ll try to remember that. can you pick a new topic for us to talk about now? H: so, what is the best subject to you? 
B: oh wow , i’ve no cats , but i do hace a couple of dogs . H: so, you like dogs a lot? B: i am also preparing for a marathon also , wish me luck . H: oh wow! does your dog help you prepare? B: i like hindu food . i’m not a very good baker though . my mom has mastered that . H: no, that isn’t right. B: start a conversation with the chatbot. if you need ideas for a topic, here are a few random words: drink, wool, bag. H: what is your favorite drink B: i love animals and the outdoors , so i work at the zoo . H: wrong answer B: oops! i think i messed up. whether i messed up or not, what could i have said (in response to ”what is your favorite drink”)? H: you should have said my favorite drink is either juice,coke or wine B: thanks! i’ll try to remember that. can you pick a new topic for us to talk about now? H: are you religious B: somewhat lol he sits when i draw and watches me H: what are you talking about B: oops! i think i messed up. whether i messed up or not, what could i have said (in response to ”are you religious”)? H: you should have told me yes i am religious in fact i attend church services at least once a week and if not then no i am not religious never attended church services B: thanks! i’ll try to remember that. can you pick a new topic for us to talk about now? H: tell me about yourself B: i drive my dream car every day i am home . a 65 mustang . H: that is nice you must be really love cars B: hello , how is your day going ? H: my day is good relaxing at home after work watching tv B: i’m sorry to hear that . i sincerely hope things get better for you . H: what are you talking about B: start a conversation with the chatbot. if you need ideas for a topic, here are a few random words: glasses, puppy, pillow. H: do you like dogs? i got a new puppy B: hey ! i love being outside planting my plants ! how about you ? H: that is nice i love to plant too! B: right now i’ve a beagle . she’s a little feisty . i miss my shepard mix . H: aww my dog is named sam he is a collie B: yeah , animals are my passion , i have 3 , all big dogs . H: yeah i just have the one dog and two cats B: i do not . i take a lot of pictures . i’d like to be a famous photographer . H: i actully really want to get a picture of my animals with santa lol B: do you want to work out with me ? H: umm what? B: oops! i think i messed up. whether i messed up or not, what could i have said (in response to ”i actully really want to get a picture of my animals with santa lol”)? H: that would be very cute B: thanks! i’ll try to remember that. can you pick a new topic for us to talk about now? H: i love to eat 3683 J Sample SATISFACTION Conversations A sample of the conversations that occurred between a Bot (B) and Human (H) during the collection of the SATISFACTION dataset. Users were asked to rate each bot response on a scale from 1 (very bad) to 5 (very good). See Appendix H.2 for a snapshot of the actual interface. B: start a conversation with the chatbot. if you need ideas for a topic, here are a few random words: invitation, dot, czar. H: hello friend, i missed you B: hello my name is ray H: hi i’m leah (reward:5) B: hello my name is michael H: no it’s not (reward:1) B: oops! sorry. what should i have said instead? H: nice to meet you B: thanks! i’ll remember that. can you pick a new topic for us to talk about now? H: do you have many friends? B: lol i have kids to H: what? (reward:1) B: oops! sorry. what should i have said instead? H: no but i have kids B: thanks! i’ll remember that. 
can you pick a new topic for us to talk about now? H: what do you want for christmas? B: i love to bake cakes H: what? (reward:1) B: oops! sorry. what should i have said instead? H: i want a pony for christmas B: start a conversation with the chatbot. if you need ideas for a topic, here are a few random words: popsicle, lawnmower, mom. H: hi, are you having a good day so far? B: i’m here . i’m doing well , just waiting for the hurricane to hit . H: oh im sorry. where do you live? (reward:5) B: miami . . . i live in little cuba , but i wish i lived in europe . H: yeah, i bet you do (reward:5) B: we live in aruba , in a van down by the river H: doesnt make sense (reward:1) B: oops! sorry. what should i have said instead? H: that you were from miami, not aruba B: thanks! i’ll remember that. can you pick a new topic for us to talk about now? H: do you own a car? B: i only drive toyota cars H: what’s the fastest you’ve ever driven a car? (reward:2) B: a yellow toyota camry convertible . H: that doesnt make sense (reward:1) B: oops! sorry. what should i have said instead? H: how fast you have driven before B: thanks! i’ll remember that. can you pick a new topic for us to talk about now? H: do you like staying up late? B: start a conversation with the chatbot. if you need ideas for a topic, here are a few random words: stout, mop, through. H: i just finished mopping my kitchen. B: i am a barista at joe jacks coffee shop H: that has nothing to do with what i said (reward:1) B: oops! sorry. what should i have said instead? H: talked about how you hate cleaning the kitchen B: thanks! i’ll remember that. can you pick a new topic for us to talk about now? H: what music do you listen to? B: listen about lectures about space and stuff H: so you don’t like to listen to music at all? (reward:4) B: i mostly listen to jazz H: jazz is a fun genre. i play the trombone (reward:5) B: classical jazz is my very favorite music H: do you play any instruments? (reward:2) B: classical music . mozart piano music . H: i did not ask that (reward:1) B: oops! sorry. what should i have said instead? H: the instrument that you play B: thanks! i’ll remember that. can you pick a new topic for us to talk about now? H: what are your plans today? 3684 B: start a conversation with the chatbot. if you need ideas for a topic, here are a few random words: clown, drip, whatever. H: my pipes are dripping, do you know a plumber i can call? B: anything but taylor swift lol H: that doesn’t make any sense. (reward:1) B: oops! sorry. what should i have said instead? H: you could say ”sorry i don’t know anyone who could get out there today.” B: thanks! i’ll remember that. can you pick a new topic for us to talk about now? H: i love music. who do you like to listen to? B: i mostly listen to smooth jazz and pop music H: oh smooth jazz eh? do you like kenny g? (reward:5) B: i like pop jazz and rap H: you already said that! (reward:1) B: oops! sorry. what should i have said instead? H: you could say ”kenny g is a little too smooth for my tastes.” B: thanks! i’ll remember that. can you pick a new topic for us to talk about now? H: i think i’m going to just eat cereal with breakfast. what did you have to eat? B: bacon bacon bacon bacon bacon bacon bacon H: haha bacon is pretty good. i like the maple smoked stuff. (reward:3) B: onion green peppers mushrooms cream cheese shredded cheese . H: that sounds like an awesome bagel. how often do you eat breakfast? (reward:3) B: start a conversation with the chatbot. 
if you need ideas for a topic, here are a few random words: princess, dump, telephone. H: hello! how are you on this fine day? B: i am pretty good i guess . i am even better at eating haha especially cookies and ice cream H: i do love eating too, especially cookies (reward:5) B: cakes , muffins , cookies , cupcakes ! sugar cookies are my favorite ! H: you like a lot of sweets! (reward:3) B: cookies pies cakes and cup cakes H: okay, i get it, you like sweets (reward:2) B: peanut butter goes well with bread and crackers H: i do love a peanut butter sandwich, my kids do too (reward:5) B: tights with peanut butter peas H: what??? (reward:1) B: oops! sorry. what should i have said instead? H: something that made sense about peanut butter B: thanks! i’ll remember that. can you pick a new topic for us to talk about now? H: do you have a cell phone? B: hostage , it was a robbery in the store i was selling furniture too H: huh? (reward:1) B: oops! sorry. what should i have said instead? H: tell me if you have a cell phone, maybe what kind you have B: start a conversation with the chatbot. if you need ideas for a topic, here are a few random words: ray, scared, shirt. H: hi cutie B: hi H: how are you? (reward:4) B: good just making brisket and ribs you ? H: that sounds tasty. i’m just watching youtube (reward:5) B: i see . . i like watching movies too star wars especially H: star wars is a good movie (reward:5) B: horror movies or tv ? i like both i suppose H: yeah horror movies are ok (reward:5) B: watching a classic movie the wizard of oz H: do you like to swim? (reward:5) B: the beach and bike riding H: that sounds fun (reward:5) B: i hike at the park H: what else do you do? (reward:4) B: i run at central park H: oh you’re from new york? (reward:5) B: upstate new york near jamestown ny H: wow awesome, i live in canada (reward:5)
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3685–3695 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3685 Generating Responses with a Specific Emotion in Dialog Zhenqiao Song1,2, Xiaoqing Zheng∗1,2, Lu Liu1,2, Mu Xu3 and Xuanjing Huang1,2 1School of Computer Science, Fudan University, Shanghai, China 2Shanghai Key Laboratory of Intelligent Information Processing 3Department of Computer Science, University of California, Santa Barbara {zqsong17, zhengxq, l liu15}@fudan.edu.cn [email protected], [email protected] Abstract It is desirable for dialog systems to have capability to express specific emotions during a conversation, which has a direct, quantifiable impact on improvement of their usability and user satisfaction. After a careful investigation of real-life conversation data, we found that there are at least two ways to express emotions with language. One is to describe emotional states by explicitly using strong emotional words; another is to increase the intensity of the emotional experiences by implicitly combining neutral words in distinct ways. We propose an emotional dialogue system (EmoDS) that can generate the meaningful responses with a coherent structure for a post, and meanwhile express the desired emotion explicitly or implicitly within a unified framework. Experimental results showed EmoDS performed better than the baselines in BLEU, diversity and the quality of emotional expression. 1 Introduction Humans have the unique capacity to perceive complex, nuanced emotions, and also have the unique capability to communicate those experiences to one another with language. Although recent studies (Partala and Surakka, 2004; Prendinger and Ishizuka, 2005) provide much evidence that the systems capable of expressing emotions significantly improve the user satisfaction, it is still a great challenge to make dialogue systems more “emotional” in their responses. In early representative work (Polzin and Waibel, 2000; Skowron, 2010), manually prepared rules are applied to deliberately select the desired “emotional” responses from a conversation corpus. Those rules were written by persons with expertise after careful investigation in the corpus, which makes it hard to express complex, various emotions, and difficult to scale well to large datasets. Post: I bought a beautiful dress yesterday! Explicit: Wearing beautiful dress makes me happy! Implicit: Wow, you must feel walking on air! Post: The rose is really beautiful! Explicit: I love rose! Implicit: I am keen on rose. Post: I lost my computer today! Explicit: It is really an annoying thing. Implicit: Oh, you must feel hot under the collar. Table 1: Examples of two (explicit and implicit) ways in emotional expressions. For each post, one emotional response for each way is listed below. The emotional words associated with strong feelings are highlighted in bold blue font. Most recently, a sequence to sequence (seq2seq) learning framework with recurrent neural networks (RNNs) has been successfully used to build conversational agents (also known as chatbots) (Sutskever et al., 2014; Sordoni et al., 2015; Shang et al., 2015; Vinyals and Le, 2015; Serban et al., 2016a,b; Wen et al., 2016; Li et al., 2017; Shen et al., 2018) due to their capability to bridge arbitrary time lags. Such framework was also tried to address the problem of emotional expression in a chatbot, called emotional chat machine (ECM) by Zhou el al (2018). 
However, the authors reported that ECM tends to express the emotion category (say “joy” or “neutral”) with much more training samples than others, although it is explicitly asked to express another (“anger” for example). It suffers from exploring the overwhelming samples belonging to a certain emotion category. Language plays an important role in emotion because it supports the conceptual knowledge used to make meaning of sensations in a given context. As shown in Table 1, we found there are at least two ways to put feelings into words. One is to describe emotional states (such as “anger,” “disgust,” “contentment,” “joy,” “sadness,” etc.) by explicitly using strong emotional words associated with the 3686 categories; another is to increase the intensity of the emotional experiences not by using words in emotion lexicon, but by implicitly combining neutral words in distinct ways on emotion. In this study, we propose an emotional dialogue system (EmoDS) that is able to put a specific feeling into words with a coherent structure in an explicit or implicit manner. The seq2seq framework has been extended with a lexicon-based attention mechanism that encourages to replace the words of the response with their synonyms in an emotion lexicon. The response generation process is guided by a sequence-level emotion classifier that not only increases the intensity of emotional expression, but also helps to recognize the emotional sentences not containing any emotional word. We also present a semi-supervised method to create an emotion lexicon that is relatively “accurate” representation of the emotional states that humans are prepared to experience and perceive. Experimental results with both automatic and human evaluations show that for a given post and an emotion category, our EmoDS can express the desired emotion explicitly (if possible) or implicitly (if necessary), and meanwhile successfully generate the meaningful responses with a coherent structure. 2 Related Work Previous studies have reported that dialog systems equipped with the ability to make appropriate emotional expressions in their responses can directly increase user satisfaction (Prendinger and Ishizuka, 2005) and bring improvement in decision making and problem solving (Partala and Surakka, 2004). A few efforts have been devoted to make dialogue systems more “humanlike” by imitating emotional expressions. In early representative work (Polzin and Waibel, 2000; Skowron, 2010), manually prepared rules are used to choose the responses associated with a specific emotion from a conversation corpus. Those rules need to be written by well-trained experts, which makes it hard to extend to deal with complex, nuanced emotions, especially for large corpora. Recurrent neural networks (RNNs) and their applications in the sequence-to-sequence framework have been empirically proven to be quite successful in structured prediction such as machine translation (Sutskever et al., 2014; Cho et al., 2014; Bahdanau et al., 2014), summarization (Rush et al., 2015), or image caption generation (Vinyals et al., 2015). This framework was also applied to build a chatbot, designed to simulate how a human would behave as an interactive agent (Vinyals and Le, 2015). In earlier attempts to develop chatbots by the seq2seq framework, many efforts have been made to avoid generating dull sentences (such as “tell me more”, and “go on”) in their responses. 
Very recently, some attention has been given to generating responses with specific properties such as sentiment, tense, or emotion. Hu et al. (2017) proposed a text generation model based on variational autoencoders (VAEs) to produce sentences expressing a given sentiment or tense. Ghosh et al. (2017) presented an RNN-based language model that generates emotional sentences conditioned on their affect categories. That work focused on text generation only, not on conversation. Zhou and Wang (2018) first collected a large corpus of Twitter conversations including emojis (ideograms and smileys used in electronic messages), and then used the emojis to express emotions in the generated texts by trying several variants of conditional VAEs. Zhou et al. (2018) proposed an emotional chat machine (ECM) based on the seq2seq framework, which is the work most closely related to this study. They disentangle the emotion factors from texts in the form of embeddings. When ECM is asked to express a specific emotion in the response, the corresponding emotion embedding is consumed by the machine until each of its elements is set back to zero. An external emotion lexicon is also used to help the response generation. ECM tends to express the emotion category having the most training samples, even when it is explicitly asked to express another. Such bias is possibly caused by a tendency to gradually blur the differences between the emotion embeddings as training progresses. We instead use an emotion classifier to guide the response generation process, which ensures that a specific emotion is appropriately expressed in the generated responses. To our knowledge, this study is among the first to build an interactive machine capable of expressing a specific emotion either explicitly (if possible) or implicitly (when necessary).
3 Method
In this section, we describe EmoDS, which can generate emotional responses with a coherent structure in an explicit or implicit manner.
Table 2: Example responses generated by EmoDS.
Post: It is a sunny day today!
Joy: I really like sunny days, and so we can hang out and have fun!
Contentment: Wow, let us go outside and play tennis!
Disgust: I would rather like rainy days.
Anger: It is none of my business.
Sadness: I think it seems going to rain.
Figure 1: The architecture of an emotional dialogue system (EmoDS). The lower left shows a bidirectional LSTM-based encoder that encodes an input post into its vector representation. This vector representation is used to initialize a decoder (shown in the upper left) that outputs a meaningful response with a specific emotion, with the assistance of an emotion classifier (shown in the upper right) and a lexicon-based attention module (shown in the lower right). The lexicon-based attention explicitly plugs emotional words into the responses at the right time steps, while the emotion classifier provides global guidance on the emotional response generation in an implicit way by increasing the intensity of emotional expression.
The seq2seq framework is extended with a lexicon-based attention mechanism to plug in the desired emotional words. A sequence-level emotion classifier simultaneously helps to recognize the emotional sentences without any emotional word. A diverse decoding algorithm is also presented to foster diversity in response generation. Furthermore, we propose a semi-supervised method to produce an emotion lexicon that can properly represent the mental perception of the emotional states.

3.1 Problem Definition

The problem can be formulated as follows: given a post $X = \{x_1, x_2, \dots, x_M\}$ and an emotion category $e$, the objective is to generate a response $Y = \{y_1, y_2, \dots, y_N\}$ that is not only meaningful in content, but also in accordance with the desired emotion, where $x_i \in V$ and $y_j \in V$ are words in the post and response. $M$ and $N$ denote the lengths of the post and response respectively. $V = V_g \cup V_e$ is a vocabulary, which consists of a generic vocabulary $V_g$ and an emotion lexicon $V_e$. We require that $V_g \cap V_e = \emptyset$. The lexicon $V_e$ can be further divided into several subsets $V_e^z$, each of which stores the words associated with an emotion category $z$. We list an example post with its responses for different emotions in Table 2.

3.2 Dialogue System with Lexicon-based Attention Mechanism

EmoDS is based on the seq2seq framework first introduced for neural machine translation (Sutskever et al., 2014). A lexicon-based attention mechanism (Bahdanau et al., 2014) is also applied to seamlessly "plug" emotional words into the generated texts at the right time steps. The architecture of EmoDS is shown in Figure 1. Specifically, we use a bidirectional long short-term memory (LSTM) network (Hochreiter and Schmidhuber, 1997; Schuster and Paliwal, 1997) as an encoder to transform a post $X = \{x_1, x_2, \dots, x_M\}$ into its vector representation. Formally, the hidden states of the encoder are computed as follows:

$$\overrightarrow{h}_i = \mathrm{LSTM}_{\text{forward}}(\mathrm{Emb}(x_i), \overrightarrow{h}_{i-1}), \qquad \overleftarrow{h}_i = \mathrm{LSTM}_{\text{backward}}(\mathrm{Emb}(x_i), \overleftarrow{h}_{i+1}) \quad (1)$$

where $i = 1, 2, \dots, M$, and $\overrightarrow{h}_i$ and $\overleftarrow{h}_i$ are the $i$-th hidden states of the forward and backward LSTMs respectively. $\mathrm{Emb}(x_i) \in \mathbb{R}^d$ is the word embedding of $x_i$, and $d$ is the dimensionality of word embeddings. We concatenate the corresponding hidden states of the forward and backward LSTMs, namely $h_i = [\overrightarrow{h}_i; \overleftarrow{h}_i]$, as the $i$-th hidden state produced by the two LSTMs. The last hidden state $h_M$ is fed to the decoder as its initialization. The decoder module contains a separate LSTM enhanced with a lexicon-based attention mechanism. The LSTM decoder takes as input the previously predicted word $y_{j-1}$ and an emotion vector $e_j$ to update its hidden state $s_j$ as follows:

$$s_j = \mathrm{LSTM}_{\text{decoder}}([\mathrm{Emb}(y_{j-1}); e_j], s_{j-1}) \quad (2)$$

where $j = 1, 2, \dots, N$ and $s_0 = h_M$. $\mathrm{Emb}(y_{j-1})$ is the word embedding of $y_{j-1}$, and $[\cdot\,;\cdot]$ denotes the operation that concatenates the feature vectors separated by semicolons. The emotion vector $e_j$ is calculated as a weighted sum of the embeddings of words in $V_e^z$ for the given category $z$:

$$e_j = \sum_k a_{jk} \cdot \mathrm{Emb}(w^z_k), \quad a_{jk} = \frac{\exp(c_{jk})}{\sum_{t=1}^{T_z} \exp(c_{jt})}, \quad c_{jk} = \mathrm{Sigmoid}\big(\alpha^\top h_M + \beta^\top s_{j-1} + \gamma^\top \mathrm{Emb}(w^z_k)\big) \quad (3)$$

where $w^z_k$ denotes the $k$-th word in $V_e^z$, $T_z$ is the number of words for the emotion category $z$, and $\alpha$, $\beta$ and $\gamma$ are trainable parameters. We compute attention scores using the global attention model proposed by Luong et al. (2015).
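As a concrete illustration of Eq. (3), the NumPy sketch below computes the emotion vector e_j and the attention weights a_jk for a single decoding step. The function name, tensor shapes, and the toy initialization are our own illustrative assumptions, not the authors' released implementation.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def emotion_vector(h_M, s_prev, emo_emb, alpha, beta, gamma):
    """Lexicon-based attention over the emotion lexicon (sketch of Eq. 3).

    h_M:      (2H,)    final encoder state
    s_prev:   (H,)     previous decoder state s_{j-1}
    emo_emb:  (T_z, d) embeddings of the words in V_e^z
    alpha, beta, gamma: projection vectors matching the shapes above
    Returns the emotion vector e_j of dimension d and the weights a_{jk}.
    """
    # c_{jk} = Sigmoid(alpha^T h_M + beta^T s_{j-1} + gamma^T Emb(w_k^z))
    scores = sigmoid(alpha @ h_M + beta @ s_prev + emo_emb @ gamma)  # (T_z,)
    # a_{jk}: softmax over the T_z lexicon words
    weights = np.exp(scores) / np.exp(scores).sum()
    # e_j: weighted sum of the lexicon-word embeddings
    e_j = weights @ emo_emb                                          # (d,)
    return e_j, weights

# Toy usage with random tensors (dimensions chosen arbitrarily).
rng = np.random.default_rng(0)
H, d, T_z = 8, 6, 5
e_j, a_j = emotion_vector(rng.normal(size=2 * H), rng.normal(size=H),
                          rng.normal(size=(T_z, d)),
                          rng.normal(size=2 * H), rng.normal(size=H),
                          rng.normal(size=d))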
For each emotional word wz k in V z e , the attention score ajk at the time step j is determined by three parts: the previous hidden state sj−1 of the decoder, the encoded representation hM of the input post, and the embedding Emb(wz k) of the k-th word in V z e . Therefore, given the partial generated response and the input post, the more relevant an emotional word is, the more influence it will have on the emotion feature vector at the current time step. In this way, such lexicon-based attention gives higher probability to the emotional words that are more relevant to the current context. In order to plug the emotional words into the responses, we estimate both a probability distribution Pe(yj = we) over all the emotional words we in V z e for a given emotion type z, and a probability distribution Pg(yj = wg) over all the generic words wg in Vg as follows: Pe(yj = we) = Softmax(Wesj) Pg(yj = wg) = Softmax(Wgsj) δj = Sigmoid(υ⊤sj) yj ∼P(yj) =  δjPe(yj = we) (1 −δj)Pg(yj = wg)  (4) where δj ∈(0, 1) is a type selector controlling the weight of generating an emotional or a generic word, and We, Wg and υ are trainable parameters. The lexicon-based attention mechanism helps to put the desired emotional words into response at the right time steps, which makes it possible to express the expected feelings in the generated texts. The loss function for each sample is defined by minimizing the cross-entropy error in which the target distribution t is a binary vector with all elements zero except for the ground truth: LMCE = − N X j=1 tjlog(P(yj)) (5) 3.3 Emotion Classification The feelings can be put into words either by explicitly using strong emotional words associated with a specific category, or by implicitly combining neutral words to a sequence in distinct ways. Therefore, we use a sequence-level emotion classifier to guide the generation process, which helps to recognize the responses expressing a certain emotion but not containing any emotional word. A straightforward method to introduce such a classifier is to build a sentence-level emotion discriminator as follows: Q(E|Y ) = Softmax(W · 1 N N X j=1 Emb(yj)) (6) where W ∈RK×d is a weight matrix and K denotes the number of emotion categories. However, it is infeasible to enumerate all possible sequences as the search space is exponential to the size of vocabulary, and the length of Y is not known in advance. Besides, it is non-differentiable if we approximate the generation process by sampling few sequences according to their probabilities. Following Koˇcisk`y et al. (2016), we use the idea of expected word embedding to approximate Q(E|Y ). Specifically, the expected word embedding is a weighted sum of embeddings of all the possible words at each time step: Ewe(j; X, z) = X yj∈Vg∪V z e P(yj) · Emb(yj) (7) where for each time step j, we enumerate all possible words that are in the union of Vg and V z e . The classification loss for each sample is defined as: LCLA = −P(E)log(Q(E|Y )) Q(E|Y ) = Softmax(W · 1 N N X j=1 Ewe(j; X, z)) (8) where P(E) is a one-hot vector that represents the desired emotion distribution for an instance. The introduced emotion classifier can not only increase the intensity of emotional expression, but also help to identify the emotional responses not containing any emotional word. Note that the emotion classifier is used only during training process, and can be taken as a global guidance for emotional expression. 
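To make the expected-word-embedding approximation of Eqs. (7) and (8) concrete, here is a minimal NumPy sketch of the sequence-level emotion loss; the array names, shapes, and the smoothing constant are illustrative assumptions rather than the authors' code.

import numpy as np

def softmax(x, axis=-1):
    z = np.exp(x - x.max(axis=axis, keepdims=True))
    return z / z.sum(axis=axis, keepdims=True)

def emotion_classification_loss(step_probs, embeddings, W, target_onehot):
    """Sequence-level emotion loss via expected word embeddings (sketch).

    step_probs:    (N, |V|) output distribution P(y_j) at each decoding step
    embeddings:    (|V|, d) word-embedding matrix for the same vocabulary
    W:             (K, d)   classifier weights, K emotion categories
    target_onehot: (K,)     desired emotion distribution P(E)
    """
    # Expected word embedding at each step: sum over y of P(y_j) * Emb(y_j)
    expected = step_probs @ embeddings            # (N, d)
    # Average over time, then classify with a softmax layer
    q = softmax(W @ expected.mean(axis=0))        # (K,)
    # Cross-entropy against the desired emotion distribution
    return float(-(target_onehot * np.log(q + 1e-12)).sum())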
3.4 Training Objective

The overall training objective is divided into two parts, the generation loss and the classification loss, and can be written as:

$$\mathcal{L} = \mathcal{L}_{MCE} + \lambda \mathcal{L}_{CLA} \quad (9)$$

where the hyperparameter $\lambda$ governs the relative importance of the generation loss compared with the classification term. The generation loss $\mathcal{L}_{MCE}$ ensures that the decoder can produce meaningful responses with a coherent structure, while the emotion classification term guides the generation process and guarantees that a specific emotion is appropriately expressed in the generated responses.

3.5 Diverse Decoding Algorithm

Li et al. (2016c) found that most responses in the N-best results produced by traditional beam search are very similar, and thus we propose a diverse decoding algorithm to foster diversity in response generation. We force the head words of the N candidates to be different, and the model then continues to generate each response with a greedy decoding strategy once the head words are determined. Finally, we choose the response with the highest emotion score from the N best candidates. The candidates are scored by an emotion classifier trained in advance on a dataset annotated automatically (see Section 4.1). Therefore, our model can produce N-best candidates with more diversity, among which the one with the highest emotion score is chosen as the final result.

3.6 Emotion Lexicon Construction

In this section, we describe how to construct the required emotion lexicon in a semi-supervised manner from a corpus of sentences annotated with their emotion categories. The meaning of words is rated on a number of different bipolar adjective scales. For example, scales might range from "strong" to "weak". We only collect the words rated as "strong" for each emotion category and put them into the emotion lexicon. Inspired by Vo and Zhang (2016), each word is represented as $w = (p_w, n_w)$ for an emotion category (e.g., "joy"), where $p_w$ denotes the probability of being assigned to this category and $n_w$ denotes the opposite. Given a sentence $s$ that is a sequence of $n$ words, the estimated emotion probability is simply calculated as $\hat{z}_s = \sum_{i=1}^{n}\left(\frac{p_{w_i}}{n}, \frac{n_{w_i}}{n}\right)$. If sentence $s$ presents the emotion, it is labeled with the two-dimensional emotion vector $z = (1, 0)$; if not, $z = (0, 1)$. Each word is initialized with small random values, and trained by minimizing the cross-entropy error $-\sum_{i=1}^{m} z_i \log \hat{z}_i$, where $m$ is the number of sentences in the corpus.

Training:   Post 3,992,363; Responses: Anger 204,797, Disgust 535,869, Contentment 344,549, Joy 1,065,689, Sadness 494,962, Neutral 1,346,497
Validation: All 221,798
Test:       All 221,798
Table 3: Statistics of the emotion-labeled STC dataset.

Method         Accuracy
Lexicon-based  0.453
RNN            0.572
LSTM           0.597
Bi-LSTM        0.635
Table 4: Classification accuracy on the NLPCC dataset.

We remove all the stop words in the sentences, and map the recognized "digit," "E-mail," "URL," "date," and "foreign word" tokens into special symbols. Words following a negation are transformed to $(-p_w, -n_w)$ before they are used to produce the emotion vector of their sentence. If a word is modified by a superlative or comparative adjective (or adverb), the learning rate used to update its representation is doubled or tripled accordingly. The training process can be divided into two stages. In the first stage, standard back-propagation is applied.
When the prediction accuracy is greater than a given threshold (say 90%), the second stage starts using the maximum margin learning strategy until arriving at a convergence. After the training stops, we compute an average as v = 1 n Pn i=1(pw −nw) and its variance σ. The word with its value 1 σ(pw −nw −v) being greater than a certain threshold will be identified as an emotional word. 4 Experiments 4.1 Data Preparation There is no large-scale off-the-shelf emotional conversation data, so we constructed our own experimental dataset based on Short Text Conversation (STC) dataset1 (Shang et al., 2015). Following Zhou et al. (2018), we first trained an emotion classifier on NLPCC dataset2 and then annotated 1Available at http://ntcir12.noahlab.com.hk/stc.htm 2Available at http://http://tcci.ccf.org.cn/nlpcc.php 3690 Models Embedding BLEU Score Diversity Emotional Expression Average Greedy Extreme BLEU distinct-1 distinct-2 emotion-a emotion-w Seq2Seq 0.523 0.376 0.350 1.50 0.0038 0.012 0.335 0.371 EmoEmb 0.524 0.381 0.355 1.69 0.0054 0.0484 0.720 0.512 ECM 0.624 0.434 0.409 1.68 0.0090 0.0735 0.765 0.580 EmoDS-MLE 0.548 0.367 0.374 1.60 0.0053 0.0670 0.721 0.556 EmoDS-EV 0.571 0.390 0.384 1.64 0.0053 0.0659 0.746 0.470 EmoDS-BS 0.614 0.442 0.409 1.73 0.0051 0.0467 0.773 0.658 EmoDS 0.634 0.451 0.435 1.73 0.0113 0.0867 0.810 0.687 Table 5: Results reported in the embedding scores, BLEU, diversity, and the quality of emotional expression. STC dataset using this classifier. More specifically, we trained a bidirectional LSTM (Bi-LSTM) classifier on NLPCC dataset for emotion classification, as it achieved the highest classification accuracy compared with other classifiers (Zhou et al., 2018). Accuracies of several neural network-based classifiers are shown in Table 4. NLPCC dataset is composed of emotion classification data in NLPCC20133 and NLPCC20144. There are eight emotion categories in this dataset, including Anger (7.9%), Disgust (11.9%), Contentment (11.4%), Joy (19.1%), Sadness (11.7%), Fear (1.5%), Surprise (3.3%) and Neutral (33.2%). After removing the infrequent categories (Fear and Surprise), we have six emotion categories at last: Anger, Disgust, Contentment, Joy, Sadness and Neutral. Next we used the well-trained Bi-LSTM classifier to annotate the STC dataset with the six emotion labels, and thus we obtained the emotion-labeled conversation dataset. Finally we randomly split the emotionlabeled STC dataset into training/validation/test sets with the ratio of 9:0.5:0.5. The detailed statistics are shown in Table 3. 4.2 Training Details We implemented our EmoDS in Tensorflow5. Specifically, we used one layer of bidirectional LSTM for encoder and another uni-directional LSTM for decoder, with the size of LSTM hidden state set as 256 in both the encoder and decoder. The dimension of word embedding was set to 100, which was initialized with Glove embedding (Pennington et al., 2014). Many empirical results show that such pre-trained word representations can enhance the supervised models on a variety of NLP tasks (Zheng et al., 2013; Zheng, 2017; Feng and Zheng, 2018). The generic vocab3Available at http://tcci.ccf.org.cn/conference/2013/ 4Available at http://tcci.ccf.org.cn/conference/2014/ 5Available at https://www.tensorflow.org/ ulary was built based on the most frequent 30, 000 words, and the emotion lexicon for each category was constructed by our semi-supervised method with size set to 200. All the remaining words were replaced by a special token <UNK>. 
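Returning briefly to the lexicon construction of Section 3.6, the sketch below illustrates the final word-selection rule as we read it: standardize the polarity p_w minus n_w of each word and keep the words above a threshold. The data structure and the threshold value are assumptions for illustration, not the authors' code.

import numpy as np

def select_emotion_words(word_scores, threshold=1.0):
    """Pick lexicon words from trained (p_w, n_w) pairs for one emotion category.

    word_scores: dict mapping word -> (p_w, n_w) learned for this category.
    threshold:   cut-off on the standardized polarity score; the value here is
                 an assumption, the paper does not report the exact setting.
    """
    words = list(word_scores)
    polarity = np.array([p - n for p, n in word_scores.values()])
    v, sigma = polarity.mean(), polarity.std()
    # Keep words whose (p_w - n_w - v) / sigma exceeds the threshold
    standardized = (polarity - v) / (sigma + 1e-12)
    return [w for w, s in zip(words, standardized) if s > threshold]

# Example: with these toy scores only "smile" clears the default threshold.
lexicon = select_emotion_words(
    {"smile": (0.9, 0.1), "laugh": (0.85, 0.15),
     "table": (0.5, 0.5), "walk": (0.55, 0.45)})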
Parameters were randomly initialized by the uniform distribution within [−3.0/n, 3.0/n], where n denotes the dimension of parameters. The size of diverse decoding was set to 20. We tuned the only hyperparameter λ in {1e-1,1e-2,1e-3,1e-4}, and found that 1e-2 worked best. We applied the stochastic gradient descent (SGD) (Robbins and Monro, 1985) algorithm with mini-batch for optimization. The mini-batch size and learning rate were set to 64 and 0.5, respectively. We run the training for 20 epoches and the training stage took about 5 hours on a TITAN X GPU card. Our code will be released soon. 4.3 Baseline Models We conducted extensive experiments to compare EmoDS against the following representative baselines: (1) Seq2Seq: We implemented the Seq2Seq model as in Vinyals and Le (2015); (2) EmoEmb: Inspired by Li et al. (2016b), we represented each emotion category as a vector and fed it to the decoder at each time step. We call this model emotion embedding dialogue system (EmoEmb). (3) ECM: We used the code released by Zhou et al. (2018) to implement ECM. Additionally, to better analyze the influence of different components in our model, we also conducted ablation tests as follows: (4) EmoDSMLE: EmoDS is only optimized with the MLE objective, without the emotion classification term. (5) EmoDS-EV: EmoDS uses an external emotion lexicon6 instead of producing an internal one. (6) EmoDS-BS: EmoDS applies the original beam search rather than our diverse decoding. 6http://download.csdn.net/download/abacaba/9722161 3691 Models Joy Contentment Disgust Anger Sadness Overall Cont. Emot. Cont. Emot. Cont. Emot. Cont. Emot. Cont. Emot. Cont. Emot. Seq2Seq 1.350 0.455 1.445 0.325 1.180 0.095 1.150 0.115 1.090 0.100 1.243 0.216 EmoEmb 1.285 0.655 1.320 0.565 1.015 0.225 1.160 0.400 0.995 0.190 1.155 0.407 ECM 1.395 0.690 1.400 0.615 1.130 0.425 1.190 0.330 1.195 0.335 1.262 0.479 EmoDS 1.265 0.695 1.260 0.685 1.370 0.530 1.185 0.505 1.265 0.625 1.269 0.608 Table 6: The results of human evaluation. Cont. and Emot. denote content and emotion, respectively. Models 2-1 1-1 0-1 2-0 1-0 0-0 Seq2Seq 10.0 8.6 3.2 35.1 25.5 17.6 EmoEmb 20.4 11.4 8.9 23.5 16.3 19.5 ECM 26.5 15.3 7.5 20.4 17.9 12.4 EmoDS 31.7 19.3 9.8 17.7 8.8 12.7 Table 7: The distribution (%) of Content-Emotion scores. Pref. (%) Seq2Seq EmoEmb ECM EmoDS Seq2Seq 44.7 36.9 30.7 EmoEmb 55.3 42.4 39.9 ECM 63.1 57.6 41.4 EmoDS 69.3 60.1 58.6 Table 8: Preference test (%) between any two models. 4.4 Automatic Evaluation 4.4.1 Metrics We used the following metrics to evaluate the performance of our EmoDS: (1) Embedding Score: We employed three embedding-based metrics (average, greedy and extreme) (Liu et al., 2016), which map the responses into vector space and compute the cosine similarity. The embeddingbased metrics can, to a large extent, capture the semantic-level similarity between the generated responses and the ground truth. (2) BLEU Score: BLEU (Papineni et al., 2002) is a popular metric that calculates the word-overlap score of the generated responses against gold-standard responses. BLEU in this paper refers to the default BLEU4. (3) Distinct: Distinct-1/distinct-2 is the proportion of the distinct unigrams/bigrams in all the generated tokens, respectively (Li et al., 2016a). Distinct metrics can be used to evaluate the diversity of the responses. (4) Emotion Evaluation: We designed two emotion-based metrics, emotiona and emotion-w, to test how well the emotion is expressed in the generated responses. 
Emotiona is the agreement between the predicted labels through the Bi-LSTM classifier in Data Preparation and the ground truth labels. Emotion-w is the percentage of the generated responses that contain the corresponding emotional words. 4.4.2 Results The results are reported in Table 5. The top half is the results of all baseline models, and we can see that EmoDS outperformed the competitors in all cases. Notably, EmoDS achieved significant improvements on emotion-a and emotionw over EmoEmb and ECM, indicating that our EmoDS can generate coherent responses with better emotional expression. Seq2Seq model performed rather poorly on nearly all metrics, primarily because it does not take any emotion factor into account and tends to generate short generic responses. The ability to express emotions in both explicit and implicit manners makes EmoDS generate more emotional responses. The bottom half of Table 5 shows the results of ablation tests. As we can see, after removing the emotion classification term (EmoDS-MLE), the performance decreased most significantly. Our interpretation is that without the emotion classification term, the model can only express the desired emotion explicitly in the generated responses and can not capture the emotional sequences not containing any emotional word. Applying an external emotion lexicon (EmoDS-EV) also brought performance decline, especially on emotion-w. This makes sense because an external emotion lexicon shares fewer words with the corpus, causing the generation process to focus on generic vocabulary and more commonplace responses to be generated. Additionally, the distinct-1/distinct-2 decreased most when using the original beam search (EmoDS-BS), indicating that the diverse decoding can promote diversity in response generation. 4.5 Human Evaluation 4.5.1 Evaluation Settings Following the protocols defined in (Zhou et al., 2018), we employed a human evaluation method designed from the content and emotion levels to better understand the quality of the generated re3692 Post Model Response Translated Response Seq2Seq 照片怎么拍出来的? What do you think of this photo? 月光下的海 EmoDS Anger 一点都不美。 It is not beautiful at all. 真美。 Disgust 照片拍的真难 难 难看 看 看。 The photo is really ugly. Sea under the Contentment 好想去看看。 I really want to see the scenery. moonlight is Joy 月亮和海都好 好 好美 美 美! Both the moon and sea are so beautiful! so beautiful. Sadness 感觉很孤 孤 孤单 单 单! It makes me feel lonely! Seq2Seq 大自然的力量无穷。 The power of the nature is endless. 大自然是最 EmoDS Anger 我一点也没觉得。 I feel nothing at all. 伟大的艺术家 Disgust 我讨 讨 讨厌 厌 厌大自然。 I hate the nature. The nature is the Contentment 当然!大自然太 太 太美 美 美了! Sure ! How beautiful the nature is! greatest artist . Joy 我爱 爱 爱大自然! I love the nature! Sadness 破坏大自然太残 残 残忍 忍 忍了! It is so cruel to destroy the nature! Seq2Seq 我也想要一个。 I also want one. 手工制作的 EmoDS Anger 一点都不好看。 It does not look nice at all. 帽子和手链。 Disgust 这个手链看着像赝 赝 赝品 品 品。 The bracelet looks like a fake. The handmade Contentment 很有创 创 创意 意 意的设计。 It is a creative design. hat and bracelet. Joy 我喜 喜 喜欢 欢 欢这个设计。 I like the design. Sadness 这是一个悲 悲 悲伤 伤 伤的故事。 It is a sad story. Seq2Seq 哈哈,这是你的梦。 Haha, that is your dream . 香香甜甜的 EmoDS Anger 赶紧睡吧,我要疯 疯 疯了 了 了! Go to bed quickly, and I am crazy . 美梦,我来咯 Disgust 我讨 讨 讨厌 厌 厌做梦。 I hate dreaming . Have a sweet Contentment 哇 哇 哇,好可 可 可爱 爱 爱的小姑娘! Wow, what a lovely girl . dream! I am going Joy 我喜 喜 喜欢 欢 欢做梦! I like dreaming . to bed. Sadness 感觉你会做噩 噩 噩梦 梦 梦! I feel you may have a nightmare . 最美的天空来 Seq2Seq 自由的阳光。 A free sunshine . 自自由的阳光。 EmoDS Anger 你怎么了? 
what is wrong with you ? The brilliant and Disgust 这样的蓝天真难 难 难看 看 看。 This sky is really ugly . free sunshine Contentment 阳光让人灿 灿 灿烂 烂 烂。 Sunshine makes me happy . brings about the Joy 我最 最 最喜 喜 喜欢 欢 欢的阳光灿烂! I like the sunshine very much . most beautiful sky. Sadness 没有阳光的自由更好。 I feel better without the sunshine . Table 9: Case study for EmoDS. For each post, one sample response is listed for each emotion category. The emotions of the responses containing emotional words (highlighted in blue font) are expressed explicitly, while those of others are expressed implicitly. sponses. First, two hundred posts were randomly sampled from the test set and for each of them, all models except Seq2Seq generated six responses for six emotion categories. Instead, Seq2Seq model generated top 6 responses in beam search for each post. Later the triples of (post, response, emotion) were presented to three human judges with order disrupted. They evaluated each response from the content level by 3-scale rating (0, 1, 2) and emotion level by 2-scale rating (0, 1). Evaluation from the content level assesses whether a response is coherent and meaningful for the context. Evaluation from the emotion level decides if a response reveals the desired emotion property. Agreements to measure inter-rater consistency among three annotators were calculated with the Fleiss’s kappa (Fleiss and Cohen, 1973). Finally, the Fleiss’s kappa for content and emotion is 0.513 and 0.811, indicating “Moderate agreement” and “Substantial agreement”, respectively. 4.5.2 Results It is shown in Table 6 that EmoDS achieved the highest performance in most cases (Sign Test, with p-value < 0.05). Specifically, for content coherence, there was no obvious difference among most models, but for emotional expression, the EmoDS yielded a significant performance boost. As we can see from Table 6, EmoDS performed well on all categories with an overall emotion score of 0.608, while EmoEmb and ECM performed poorly on categories with less training data, e.g., disgust, anger and sadness. Note that all emotion scores of Seq2Seq were the lowest, indicating that Seq2Seq is bad at emotional expression when generating responses. To sum up, EmoDS can generate meaningful responses with better emotional expression, due to the fact that EmoDS is capable of expressing the desired emotion either explicitly or implicitly. To better analyze the overall quality of the generated responses at both the content and emotion 3693 levels, we also report the distribution of the combined content and emotion scores in Table 7. It shows that 31.7% of the responses generated by EmoDS were annotated with a content score of 2 and an emotion score of 1, which is higher than all the other three models. This demonstrates that EmoDS is better at generating high-quality responses in the respect of both the content and emotion. Furthermore, the results of preference test are shown in Table 8. It can be seen that EmoDS is significantly preferred against other models (Sign Test, with p-value < 0.05). Obviously, the diverse emotional responses generated by our EmoDS are more attractive to users than the commonplace responses generated by the Seq2Seq. 4.6 Case Study To gain an insight on how well the emotion is expressed in the generated responses, we provide some examples in Table 9. It shows that the EmoDS can generate informative responses with any desired emotion by putting a specific feeling into words either in an explicit or implicit manner. 
For example, “难看(ugly)” is a strong emotional word that is used to explicitly describe the emotional state of disgust, while the words in “好/ 想 / 去/ 看看/ 。(I really want to see the scenery.)” are all neutral ones, but their combination can express the emotional state of contentment. 5 Conclusion Observing that emotional states can be expressed with language by explicitly using strong emotional words or by forming neutral word in distinct patterns, we proposed a novel emotional dialog system (EmoDS) that can express the desired emotions in either way, and at the same time generate the meaningful responses with a coherent structure. The sequence-to-sequence framework has been extended with a lexicon-based attention mechanism that works by seamlessly “plugging” emotional words into the texts by increasing their probability at the right time steps. An emotion classifier is also used to guide the response generation process, which ensures that a specific emotion is appropriately expressed in the generated texts. To our knowledge, this study is among the first ones to build an interactive machine capable of expressing the specific emotions either in an explicit (if possible) or implicit (when necessary) way. Experimental results with both automatic and human evaluations demonstrated that EmoDS outperformed the baselines in BLEU, diversity and the quality of emotional expression with a significant margin, highlighting the potential of the proposed architecture for practical dialog systems. 6 Acknowledgements The authors would like to thank the anonymous reviewers for their valuable comments. We are also grateful to Chenxin An, Geng Hong, Yingshan Yang and Zongyi Li for their suggestions. This work was supported by National Key R&D Program of China (No. 2018YFC0830902), Shanghai Municipal Science and Technology Major Project (No. 2018SHZDZX01) and Zhangjiang Lab. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. In Proceedings of the 2014 International Conference on Learning Representations. Kyunghyun Cho, Bart Merrienboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing. Jiangtao Feng and Xiaoqing Zheng. 2018. Geometric relationship between word and context representations. In Proceedings of the AAAI Conference on Artificial Intelligence. Joseph L Fleiss and Jacob Cohen. 1973. The equivalence of weighted kappa and the intraclass correlation coefficient as measures of reliability. Educational and psychological measurement, 33(3):613– 619. Sayan Ghosh, Mathieu Chollet, Eugene Laksana, Louis-Philippe Morency, and Stefan Scherer. 2017. Affect-lm: A neural language model for customizable affective text generation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 634–642. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P Xing. 2017. Toward controlled generation of text. arXiv preprint arXiv:1703.00955. 3694 Tom´aˇs Koˇcisk`y, G´abor Melis, Edward Grefenstette, Chris Dyer, Wang Ling, Phil Blunsom, and Karl Moritz Hermann. 2016. Semantic parsing with semi-supervised sequential autoencoders. 
In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1078– 1087. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016a. A diversity-promoting objective function for neural conversation models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 110–119. Jiwei Li, Michel Galley, Chris Brockett, Georgios P Spithourakis, Jianfeng Gao, and Bill Dolan. 2016b. A persona-based neural conversation model. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 994– 1003. Jiwei Li, Will Monroe, and Dan Jurafsky. 2016c. A simple, fast diverse decoding algorithm for neural generation. arXiv preprint arXiv:1611.08562. Jiwei Li, Will Monroe, Tianlin Shi, S˙ebastien Jean, Alan Ritter, and Dan Jurafsky. 2017. Adversarial learning for neural dialogue generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2157–2169. Chia-Wei Liu, Ryan Lowe, Iulian Serban, Mike Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2122–2132. Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412–1421. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318. Association for Computational Linguistics. Timo Partala and Veikko Surakka. 2004. The effects of affective interventions in human–computer interaction. Interacting with computers, 16(2):295–309. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543. Thomas S Polzin and Alexander Waibel. 2000. Emotion-sensitive human-computer interfaces. In ISCA tutorial and research workshop (ITRW) on speech and emotion. Helmut Prendinger and Mitsuru Ishizuka. 2005. The empathic companion: A character-based interface that addresses users’affective states. Applied Artificial Intelligence, 19(3-4):267–285. Herbert Robbins and Sutton Monro. 1985. A stochastic approximation method. In Herbert Robbins Selected Papers, pages 102–109. Springer. Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for sentence summarization. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Mike Schuster and Kuldip K Paliwal. 1997. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11):2673–2681. Iulian V Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2016a. A hierarchical latent variable encoder-decoder model for generating dialogues. arXiv preprint arXiv:1605.06069. Iulian Vlad Serban, Alessandro Sordoni, Yoshua Bengio, Aaron C Courville, and Joelle Pineau. 2016b. Building end-to-end dialogue systems using generative hierarchical neural network models. 
In Proceedings of the 2016 Association for the Advancement of Artificial Intelligence, volume 16, pages 3776–3784. Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural responding machine for short-text conversation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), volume 1, pages 1577–1586. Xiaoyu Shen, Hui Su, Wenjie Li, and Dietrich Klakow. 2018. Nexus network: Connecting the preceding and the following in dialogue generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4316– 4327. Marcin Skowron. 2010. Affect listeners: Acquisition of affective states by means of conversational systems. In Development of Multimodal Interfaces: Active Listening and Synchrony, pages 169–181. Springer. Alessandro Sordoni, Yoshua Bengio, Hossein Vahabi, Christina Lioma, Jakob Grue Simonsen, and JianYun Nie. 2015. A hierarchical recurrent encoderdecoder for generative context-aware query suggestion. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management, pages 553–562. ACM. 3695 Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104–3112. Oriol Vinyals, Samy Bengio Alexander Toshev, and Dumitru Erhan. 2015. Show and tell: A neural image caption generator. In Proceedings of the Conference on Computer Vision and Pattern Recognition. Oriol Vinyals and Quoc Le. 2015. A neural conversational model. arXiv preprint arXiv:1506.05869. Duy Tin Vo and Yue Zhang. 2016. Don’t count, predict! an automatic approch to learning sentiment lexicons for short text. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. Tsung-Hsien Wen, David Vandyke, Nikola Mrksic, Milica Gasic, Lina M Rojas-Barahona, Pei-Hao Su, Stefan Ultes, and Steve Young. 2016. A networkbased end-to-end trainable task-oriented dialogue system. arXiv preprint arXiv:1604.04562. Xiaoqing Zheng. 2017. Incremental graph-based neural dependency parsing. In Proceedings of the Conference on Empirical Methods on Natural Language Processing. Xiaoqing Zheng, Hanyang Chen, and Tianyu Xu. 2013. Deep learning for chinese word segmentation and pos tagging. In Proceedings of the Conference on Empirical Methods on Natural Language Processing. Hao Zhou, Minlie Huang, Tianyang Zhang, Xiaoyan Zhu, and Bing Liu. 2018. Emotional chatting machine: Emotional conversation generation with internal and external memory. In Proceedings of the 2018 Association for the Advancement of Artificial Intelligence. Xianda Zhou and William Yang Wang. 2018. Mojitalk: Generating emotional responses at scale. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pages 1128– 1137.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 371–379 Florence, Italy, July 28 - August 2, 2019. ©2019 Association for Computational Linguistics 371

Towards Unsupervised Text Classification Leveraging Experts and Word Embeddings

Zied Haj-Yahia Capgemini Invent [email protected] Adrien Sieg Capgemini Invent [email protected] Léa A. Deleris BNP Paribas [email protected]

Abstract

Text classification aims at mapping documents into a set of predefined categories. Supervised machine learning models have shown great success in this area, but they require a large number of labeled documents to reach adequate accuracy. This is particularly true when the number of target categories is in the tens or the hundreds. In this work, we explore an unsupervised approach to classify documents into categories simply described by a label. The proposed method is inspired by the way a human proceeds in this situation: it draws on textual similarity between the most relevant words in each document and a dictionary of keywords for each category reflecting its semantics and lexical field. The novelty of our method hinges on the enrichment of the category labels through a combination of human expertise and language models, both generic and domain specific. Our experiments on 5 standard corpora show that the proposed method increases F1-score over relying solely on human expertise and can also be on par with simple supervised approaches. It thus provides a practical alternative to situations where low-cost text categorization is needed, as we illustrate with our application to operational risk incident classification.

1 Introduction

Document classification is a standard task in machine learning (Joachims, 1999; Sebastiani, 2002). Its applications span a variety of use cases and contexts, e.g., email filtering, news article clustering, clinical document classification, and expert-question matching. The standard process for text categorization relies on supervised and semi-supervised approaches. The motivation for the present effort comes from the banking sector, in particular the management of operational risks. This category of risks corresponds to the broad set of incidents that are neither credit nor market risk and includes issues related to internal and external fraud, cybersecurity, damage to physical assets, natural disasters, etc. The practical management of operational risk is partially based on the management of a dataset of historical operational risk incidents, where each incident is described in detail and which is shared on a regular basis with regulators. Historically, all incident reports have been mapped to about twenty categories of risk issued by the regulator. However, from an operational perspective, a higher number of risk categories is relevant to better capture the nuances around the incidents and enable relevant comparisons. This led to the creation of a new internal risk taxonomy composed of 264 categories, each described by a label (a few words). To make it operational, the stock of all internal and external incident reports had to be classified into categories from the new internal taxonomy. However, since it had never been used before, we had no labeled samples readily available. As hundreds of thousands of incidents had to be processed, text classification seemed a promising approach to assist in that mapping task.
Indeed, given the specificity of the domain and the lack of availability of experts, it was not conceivable to obtain many labeled examples for each category as would be required for supervised approaches. This is the issue addressed in this paper where describe our work towards an unsupervised approach to classify documents into a set of categories described by a short sentence (label). While the inspiration of this paper is the classification of incident reports in operational risk, our approach aims to be readily transferable to other domains. For that purpose, we tested it on standard text classification corpora. The underlying idea is altogether simple. We 372 emulate the approach that a domain expert would follow to manually assign an input document (incident report, client review, news article, etc.) to a given category. Specifically this entails developing an understanding of the categories semantic fields and then, for each document, to classify it into the closest category. The novelty of our method hinges on the diversity of enrichment techniques of the categories label, including expert input that assists the semantic expansion and the use of word embeddings, both generic and domain specific. The remainder of this paper is organized as follows. In Section 2, we provide an overview of the relevant literature. Section 3 contains a detailed description of our approach. Sections 4 and 5 describe the results of its application to standard corpora and operational risks incidents respectively. We conclude in Section 6. 2 Related Work In this review of relevant work, we focus predominantly on techniques that have been proposed to overcome the requirement of having a large number of annotated data for standard text classification techniques. Overall, the majority of approaches focus on generating labeled examples without full manual annotation. Those include semi-supervised techniques that seek to leverage a small set of labeled documents to derive labels for the remainder of the corpus. For instance, Nigam et al. (2000) propose to follow the Expectation-Maximization (EM) algorithm by iteratively using the set of labeled data to obtain probabilistically-weighted class labels for each unlabeled document and then training a classifier on the complete corpus based on those annotations. This process is repeated until convergence of the log likelihood of the parameters given observed data. Other approaches attempt to automatically derive labels without any starting set of annotations. For instance, Turney (2002) classifies a review as recommended or not recommended by computing the pointwise mutual information of the words in the review with a positive reference word (excellent) and with a negative reference word (poor) using search engine results as a proxy for a reference corpus. Another example is Ko and Seo (2000) who leverage an initial set of manually provided keywords for each target category to derive labels. Based on those keywords, they look for representative sentences in the corpus to support label assignment. Finally, Yang et al. (2013) make use of wikipedia as background knowledge to assemble representative set of words for each category label via topic modeling and use them to annotate the unlabeled documents. In a similar way, Miller et al. 
(2016) represent each target category as a TF-IDF (termfrequency/inverse document frequency) vector obtained from Wikipedia and then use this category representation as an informed prior to Latent Dirichlet Allocation (LDA), an unsupervised algorithm that finds the topics that best satisfy the data given the priors. The occurrence of these topics in a document can be used as a noisy label for that document. Our approach differs in spirit in the sense that our objective is not to construct surrogate labels so that we can apply a machine learning classifier to our unlabeled data. By contrast, we opted for a fully unsupervised method which hinges on computing a similarity metric between documents and target categories. To that end, a richer representation of category labels is derived. The method that were proposed by Yang et al. (2013); Miller et al. (2016) could be adapted to align with our perspective (by removing the classification step). Other examples of unsupervised approach include Rao et al. (2006) which defined the label of documents based on a k-means word clustering. They select a set of representative words from each cluster as a label and derive a set of candidate labels. An input document vector is then assigned to the label vector that maximizes the norm of the dotproduct. While this approach performs well when there are no categories specified as input, e.g., social listening, trend monitoring, topic modeling, it is less likely to do so with a set of predefined target categories where it is difficult to steer word clusters to categories of interest and, critically, to ensure the full coverage of target categories (new internal taxonomy of risk in our practical case). Finally, our method makes use of word embeddings as a mean to enrich category label via semantic expansion. As far as we know, word embeddings have been used to improve text classification performance through their application as a document representation technique. In Liu et al. (2018), the authors show that task oriented embeddings, which penalise outputs where the representative words of a category are close to the 373 representative words of another category, outperform general domain embeddings. As we do not have any labeled data, this approach is not directly relevant to our problem setting. 3 Method Our approach for unsupervised text classification is based on the choice to model the task as a text similarity problem between two sets of words: One containing the most relevant words in the document and another containing keywords derived from the label of the target category. While the key advantage of this approach is its simplicity, its success hinges on the good definition of a dictionary of words for each category. Figure 1: Overview of our Method Figure 1 provides an overview of the main steps included in our method. On the document side, we simply perform standard cleaning steps. On the category labels side, besides the same initial processing, we implement a series of enrichment steps so as to iteratively expand label dictionaries. Before proceeding to the comparison of documents and labels via a similarity metric, we have added a consolidation step which considers all expanded label dictionaries and makes adjustments so that they are as discriminating as possible. We compare documents and labels by computing a similarity metric between cleaned documents and dictionaries. We provide further details into each of these main steps in the following subsections. 
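To make the overall pipeline concrete before detailing each step, the sketch below assigns each cleaned document to the category whose keyword dictionary it is most similar to. It uses a plain TF-IDF/cosine comparison as a simplified stand-in for the LSA-based similarity metric actually used (described in Section 3.4), and the function and variable names are our own illustrative choices.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def assign_labels(documents, label_dictionaries):
    """documents: list of cleaned document strings.
    label_dictionaries: dict mapping category name -> list of keywords.
    Returns the most similar category for each document."""
    categories = list(label_dictionaries)
    label_texts = [" ".join(label_dictionaries[c]) for c in categories]
    # Vectorize documents and label dictionaries in the same TF-IDF space
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform(documents + label_texts)
    doc_vecs, label_vecs = matrix[: len(documents)], matrix[len(documents):]
    # Pick, for each document, the closest label dictionary
    sims = cosine_similarity(doc_vecs, label_vecs)
    return [categories[row.argmax()] for row in sims]

# Toy usage with two hand-made label dictionaries.
print(assign_labels(
    ["the team won the championship game last night"],
    {"Sports": ["game", "team", "championship"],
     "Business": ["market", "stocks", "company"]}))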
In terms of notation, we refer to the unlabeled corpus as C, its vocabulary as V and and assume that we have M text categories to which documents in C need to be mapped. 3.1 Cleaning Steps Cleaning of either documents or category labels is done as follows: After tokenization, we start by replacing a list of common abbreviations, e.g., Mgt, Mngt, IT, ATM provided by business with their associated expansions. Similarly we spell out negative contractions. We then remove uninformative tokens including (i) isolated and special characters such as i, a, o, op, @, *, (ii) punctuation (iii) stopwords (based on stopword lists from NLTK’s list of english stopwords, scikit-learn version 0.18.2, spaCy version 1.8.2) (iv) common words across documents such as risky, dangerous, based on the highest Term Frequency (top 3 %) (v) uncommon words, i.e., top 3 % in terms of Inverse Term Frequency (vi) specific tokens such as dates, nationalities, countries, regions, bank names. For instance, to extract dates, we use both regular expression and fuzzy matching to identify all sorts of date-like strings (e.g., February can also be written as Feb or Febr). Regarding nationalities and bank names, we combined different lists coming from Wikipedia, business experts and fuzzy matching (e.g., BNP Paribas could be found as BNP, BNPParibas, BNP Securities, BNP Trading, BNP Group, etc.). As the taxonomy is designed to be universal, such tokens are not relevant to the text classification task and are thus removed. To give a concrete example, the following snippet of operational incident “On 18 June 2013 the US Commodity Futures Trading Commission (CFTC) fined ABN AMRO Clearing Chicago USD 1 million (EUR 748,000) for failing to segregate or secure sufficient customer funds, failing to meet the minimum net capital requirements, failing to maintain accurate books and records, and failing to supervise its employees...” would have been transformed into “fine fail segreg secur suffici custom fund fail meet minimum net capit requir fail maintain accur book record fail supervis employe..” 3.2 Enrichment As mentioned previously, once we have clean labels, we make a series of enrichment steps. First, we make use of Expert Knowledge, i.e., a human expert is asked to provide 3 to 5 additional words for each label. While this constitutes a small amount of manual effort, there are multiple ways to approximate this task without human intervention, for example, by querying Wikipedia or 374 the web with the category name and performing token counts over retrieved entries. Before proceeding to the next enrichment step, we also add to the label dictionaries all the spelling variants of the expert-provided words that can be found in the document corpus. We also remove any word whose stem is not in the document corpus. Second, we leverage WordNet (Fellbaum, 1998) to obtain knowledge-based synonyms. For every word obtained in the previous step, we add to the label dictionary all the associated synonym sets (English nouns, verbs, and adjectives). Again, once this step is completed, we remove all words where the stem is not in the vocabulary V. Third, we bootstrap the label dictionary obtained upon this point by making use of representative documents. A representative sentence for a given category is defined by Ko and Seo (2000) as a sentence in the document corpus that contains manually pre-defined keywords of the category in its content words. 
In this work, we extend this definition to apply to documents instead of sentences and to include all categories’ keywords obtained at this stage. Therefore we calculate a similarity score between each pair of input document - category label keywords using cosine distance and Latent Semantic Analysis. The text similarity metric will be details in section 3.4. For this step, we use an empirically identified similarity threshold (70%). Then, for each identified representative document, we add all its words to the label dictionary. Finally, we make use of word embeddings (Bengio et al., 2003; Mikolov et al., 2013a,b) to further capture semantically similar words to the ones belonging to each label dictionary. We first proceed with pre-trained models which enable to identify semantically similar words used in the general domain. In our case, we used Glove1 (Pennington et al., 2014), The model is pre-trained on a corpus using Wikipedia2014 and Gigaword5, with a 330 vocabulary of the top 400,000 most frequent words and a context window size of 10. Furthermore, we also seek to obtain similar words as used in the specific domain of the corpus. Since the neighbors of each keyword are semantically related in embedding space (Mikolov et al., 2013b), we train a Word2Vec model, trained on all input documents cleaned then joined together. In this work, we tested its two main architectures: 1https://nlp.stanford.edu/projects/glove/ Continous Bag of words (CBOW) that predicts a word based on its context defined by a sliding window of words and Skip-Gram (SG) which predicts the context given the target word. Experimental settings will be detailed in section 4.3. 3.3 Consolidation Once all labels have been associated with dictionaries, we perform a final step in order to reduce keyword overlap among all dictionaries. In essence, we favor words that are representative (salient) for the category in the sense that they have the ability to distinguish the category label from the other categories. We adapt the Function-aware Component (FAC) originally used in supervised document classification (Liu et al., 2018). FAC(w, c) = TF(w, c) −1 M P 1≤k≤M TF(w, k) var(TF−c(w)) (1) where TF−c(w) is the collection of term frequencies except the c-th category and var() is the variance. The consolidation step consists in computing the above metric for every word in the label dictionaries and to filter out those whose associated metric is below a given threshold. This latter threshold depends on two main constraints: The maximum number of categories that contain a given word and the minimum word frequency in the label dictionaries. Regarding the first constraint, in our practical case of operational risk taxonomy, we have 264 target categories that could be grouped into 16 broad categories: cyber-security, fraud, compliance, human resources, etc. Thresholds are determined so as to tolerate overlap within each broad category and to minimize it outside. More generally, we start by identifying the maximum number of semantically similar categories, i.e., where we would expect some overlap and we set the threshold consequently. By construction, keywords in a given dictionary occur at least one time. We decided not to set an additional constraint on word frequency per category label so as to keep highly specific words with a low frequency, generally captured by the Word2vec model trained on the input corpus. 
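The consolidation step can be sketched as follows. This reflects our reading of Eq. (1): for each word and category, the term frequency in that category minus the mean term frequency over all categories, divided by the variance of the term frequencies over the other categories. The matrix layout, function names, and smoothing constant are illustrative assumptions.

import numpy as np

def fac_scores(term_freq):
    """Function-aware Component scores for a TF matrix (sketch of Eq. 1).

    term_freq: array of shape (n_words, n_categories) with TF(w, k).
    Returns an array of the same shape with FAC(w, c).
    """
    tf = np.asarray(term_freq, dtype=float)
    n_words, n_cats = tf.shape
    scores = np.zeros_like(tf)
    for c in range(n_cats):
        others = np.delete(tf, c, axis=1)        # TF_{-c}(w), all other categories
        numer = tf[:, c] - tf.mean(axis=1)       # TF(w, c) minus the mean over all categories
        scores[:, c] = numer / (others.var(axis=1) + 1e-12)
    return scores

def consolidate(term_freq, words, category, threshold):
    """Keep the words whose FAC score for `category` clears `threshold`
    (the threshold itself is tuned per taxonomy, as discussed above)."""
    keep = fac_scores(term_freq)[:, category] > threshold
    return [w for w, k in zip(words, keep) if k]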
375 3.4 Text Similarity Metric Once documents and labels have been processed as described previously, we assign a label to a document by identifying the label to which it is most similar. Our evaluation of similarity is based on Latent Semantic Analysis (to avoid the pitfalls of literal term matching) and cosine similarity on the output LSA vectors. Before applying LSA, we start by stemming all the words using Porter stemmer. We feel that similarities between documents and labels can be more reliably estimated in the reduced latent space representation than in the original representation. The rationale is that documents which share frequently co-occurring terms will have a similar representation in the latent space, even if they have no terms in common. LSA thus performs some sort of noise reduction and has the potential benefit to detect synonyms as well as words that refer to the same topic. 4 Experiments 4.1 Datasets In order to evaluate our approach, we conduct experiments on five standard text classification corpora, described listed in Table 1. As we use an unsupervised approach for text classification, we make use of the whole corpus of each dataset by aggregating training and test sets. Datasets #Documents #Classes 20NewsGroup 18,846 20 AG’s Corpus 126,764 4 Yahoo-Answers 1,460,000 10 5AbstractsGroup 7,497 5 Google-Snippets 10,059 8 Table 1: Statistics of the five mainstream datasets for text classification. We describe each corpus briefly: (1) The 20NewsGroup2 dataset consists of 18,846 news articles divided almost evenly among 20 different UseNet discussion groups. Some of the newsgroups are closely related (e.g., comp.sys.ibm.pc.hardware and comp.sys.mac.hardware). While each document may discuss multiple topics, it needs to be assigned to a single category. (2) The AG’s Corpus 2http://qwone.com/ jason/20Newsgroups/ of news articles3 is a collection of more than 1 million news articles. We used the version created by Zhang et al. (2015) who selected 4 largest classes from AG news corpus on the web with each instance containing class index, title and description fields. (3) The Yahoo-Answers4 corpus contains 4,483,032 questions and their corresponding answers from Yahoo! Answers service as of 10/25/2007. We used the version constructed by Zhang et al. (2015) using 10 largest main categories and the best answer content from all the answers. (4) The 5AbstractsGroup5 dataset is a collection of academic papers from five different domains collected from Web of Science namely, business, artificial intelligence, sociology, transport and law. We extracted the abstract and title fields of each paper as a document. (5) The Google-Snippets6 dataset contains the web search results related to 8 different domains such as business, computers and engineering. 4.2 Configurations and Baseline Methods We apply multiple variants of our method to each of the above corpora. Note first that using representative documents (Section 3.2) to enrich label dictionaries is suitable for categories whose labels take the form of a structured sentence containing more than 10 words before cleaning. In the application to operational risk incidents (Section 5), it allowed to enrich 13% of dictionaries. In the standard text classification datasets used in our experiments, category labels contain less than 5 words so representative documents were not relevant in the enrichment process. Thus none of the configurations discussed in this section include this step. 
Overall, in addition to the full pipeline, which we refer to as all keywords, we also investigated whether semantic expansion solely through word embeddings could improve performance. We thus tested with either generic embeddings (pre-trained Glove) or corpus-based embeddings (Word2Vec). Finally, for each configuration, we tested with and without the function aware component (FAC) for consolidation of the label dictionaries. We also implemented simple baselines for comparison. On the unsupervised side, (1) we calculated a text similarity score between each docu3www.di.unipi.it/ gulli/AG corpus of news articles.html 4https://github.com/LC-John/Yahoo-AnswersTopicClassification-Dataset /tree/master/dataset 5https://github.com/qianliu0708/5AbstractsGroup 6http://jwebpro.sourceforge.net/data-web-snippets.tar.gz 376 World Sports Business Science/Technology Election Olympic Company Laboratory State Football Market Computers President Sport Oil Science Police League Consumers Technology Politics Baseball Exchange Web Security Rugby Business Google War Tickets Product Microsoft Nuclear Basketball Price Economy Democracy Games Billion Software Militant Championship Stocks Investment Table 2: Example of ten salient words for each category in the AGs Corpus dataset. ment and the set of expert provided keywords (2) we enriched this list of initial keywords with their synonyms from WordNet. On the supervised side, we use Multinomial Na¨ıve Bayes as a basic baseline where we represented each document as TFIDF vector (bag of words), cleaned the input corpus in the same way as in our proposed approach and split each dataset into a training set (2/3) and a test set (1/3). 4.3 Experimental Settings In our method, an offline process is used to extract initial keywords from category labels. For the purpose of testing our approach, we had to emulate human experts ourselves. For each category, one team member added a few keywords based only on label description. Then, we randomly selected 2 or 3 documents for each label that were read by two team members who used them to identify 5 to 10 salient words to be added to each dictionary. In average, we manually added 9 words per label for 20NewsGroup, 17 words for AGs Corpus and Google-Snippets, 11 words for YahooAnswers and 14 words for 5AbstractsGroup. We present in Table 2, the output of that process for the AGs Corpus dataset. Once we identify initial keywords, we make the series of enrichment steps described in section 3.2. For every word in the set of initial keywords, we add all its synonym sets from WordNet as well as the 10 most similar words from Glove, CBOW and Skip-Gram. The average length of label dictionaries obtained from the full enrichment pipeline (which we refer to as all keywords) is 428 words. We use the word2vec python implementation provided by gensim (Rehurek and Sojka, 2010). For Skip-gram and CBOW, a 10-word window size is used to provide the same amount of raw information. Also words appearing 3 times or fewer are filtered out, 10 workers were used and training was performed in 100 epochs. We chose 300 for the size of all word embeddings, it has been reported to perform well in classification tasks (Mikolov et al., 2013a). Filtering word dictionaries with the Functionaware Component (FAC) allowed to keep in average 37% of all keywords per label. As described previously, once different versions of label dictionaries have been obtained, we calculate their similarity with input documents using LSA and Cosine distance. 
The optimal dimension (k) of the latent space depends on the dataset. Optimal k values are typically in the range of 100-300 dimensions (Harman, 1993; Letsche and Berry, 1997). In this work, for each dataset, we set a range of 100-300 values, and we determine the optimal k by maximizing the topic coherence score (R¨oder et al., 2015). The multi-class classification performance was evaluated in terms of precision (Prec.), recall (Rec.) and F1-score (F1). All measures are computed based on a weighted average of each class using the number of true instances to determine the weights. 4.4 Results and Discussion Table 3 summarizes the performance of each of the methods tested on the five corpora that we considered. Overall, the various configurations of our method, all leveraging embeddings for semantic expansion, outperform the simple unsupervised baselines, leading to a doubling of the F1-score for all corpora, the least affected being the 5AbstractsGroup where F1 goes from 38.1 to 68.3 percent, comparing with the all keywords variant of our method. When focusing on our various configurations, first without the FAC consolidation, we observe that domain specific embeddings alone lead to better performance than generic embeddings alone and this across all corpora and all metrics, except for the Yahoo-Answers dataset. The difference in performance however is not very large, with the exception of 20NewsGroup where F1-score increases from 52.6 with generic embeddings to 61 with domain specific ones. We notice also that combining all enrichments (All keywords) provides a modest increase in performance over embeddings only as shown by the results for YahooAnswers, 5AbstractsGroup and Google-Snippets. Finally the use of the consolidation step further 377 improves performance except for 20NewsGroup where precision increases from 64.7 to 71.1 but recall decreases from 57.8 to 35.6. Comparing now our best unsupervised performance with the supervised baseline, we observe that the ratio of the best F1-score performance over the supervised baseline performance varies from 0.71 to 1.11 with two datasets yielding ratios above 1. Such results demonstrate the validity of the unsupervised approach as a practical alternative to investing to a cognitively and timely costly annotation effort. 5 Application to Operational Risk Incident Classification As we described previously, the proposed method stemmed from a specific need in the banking industry where a large number of incidents had to be mapped to a newly defined taxonomy of operational risks. Specifically, it was designed to avoid the tedious and time consuming effort of asking experts to manually review thousands of incidents. An automated - or more precisely assisted - approach also presented the additional benefit of ensuring a higher degree of consistency in the mapping than would have been achieved a team of annotators. In this section, we provide some additional context into this specific task, report the observed performance of our method and discuss some of the specificities of the context. 5.1 Operational Risk Incidents Corpus & Taxonomy In our application, we were asked to map both internal incidents and external incidents to the new taxonomy. In this paper, we focus on the external incidents for confidentiality reasons. More precisely, our task was to assign a unique category to each one of the 25,000 incidents that was obtained from ORX news. 
The Operational Risk Exchange (ORX) is a consortium of financial institutions focused on operational risk information sharing. The ORX news service provides publicly reported operational risk loss data to its institutional members. An incident record is mostly composed of an incident description along with associated metainformation such as geographical indicators, time information and institution affected. We only make use of the incident descriptions. Their average length is 2150 words, with a standard deviation of 1181 words and ranging from 10 words to more than 14000 words. The target taxonomy is composed of three levels. The first one contains 16 labels and indicates at a very high level the domain of the incidents such as IT, legal, regulatory. The second and third levels contain respectively 69 and 264 levels to add increasing granularity to the incident classification. Figure 2 presents an extract of the taxonomy focused on ICT risk, which is public as it draws upon Article 107(3) of Directive 2013/36/EU2 which aim to ensure the convergence of supervisory practices in the assessment of the information and communication technology (ICT) risk. Figure 2: Example of Taxonomy regarding three levels for an ICT incident Before discussing the results, we thought it would be meaningful to point out some of the characteristics of this application. One natural challenge in real world cases is the lack of unequivocal ground truth. Experts can often identify categories that do not correspond to the input but in the end, they cannot ascertain whether one category should prevail over another unless there is some clear guidelines or convention at the level of the organization. That difficulty is further compounded in our case as most documents are very dense in term of information and become ambiguous. For instance, “In Japan, a building destruction resulting from a massive earthquake has caused power outage making AMD-based servers unbootable”, could be classified as Natural Disaster, Dysfunctional ICT data processing or handling or Destruction / loss of physical assets among others. 378 Methods 20NewsGroup AG’s Corpus Yahoo-Answers 5AbstractsGroup Google-Snippets Prec. Rec. F1 Prec. Rec. F1 Prec. Rec. F1 Prec. Rec. F1 Prec. Rec. F1 Expert keywords 38.0 21.4 27.0 32.9 29.8 31.4 20.4 17.9 19.1 41.7 34.5 37.1 39.4 29.3 33.6 Expert keywords + WordNet 39.2 24.2 27.0 33.7 31.4 32.5 21.8 19.2 20.4 42.5 35.7 38.1 41.6 32.5 36.5 Generic embeddings (Glove) 57.8 48.2 52.6 72.2 72.3 72.2 54.6 50.5 52.5 67.5 66.0 66.7 69.2 66.3 67.7 Corpus based embeddings 64.7 57.8 61.0 75.1 75.2 75.1 50.4 47.9 49.1 69.0 66.2 67.6 72.4 70.0 71.2 All keywords 62.2 54.2 57.9 75.0 74.9 74.9 54.9 51.9 53.3 69.4 66.9 68.3 71.9 70.1 71.0 FAC-Generic embeddings (Glove) 65.8 34.2 39.6 72.6 72.8 72.5 54.0 52.2 52.1 67.9 61.5 63.2 70.3 65.9 67.5 FAC-Corpus based embeddings 71.1 35.6 42.8 76.8 76.6 76.6 59.2 52.7 52.5 70.2 66.8 68.2 72.5 71.3 71.1 FAC-All keywords 66.2 37.8 41.3 74.0 73.8 73.9 59.3 53.9 55.7 71.5 67.3 69.7 72.9 72.8 72.8 Supervised Na¨ıve Bayes 87.1 85.4 85.0 89.8 89.9 89.8 57.2 53.0 49.9 77.5 68.8 65.5 81.8 77.4 77.0 Table 3: Performance of our methods and baseline methods on five standard text classification corpora. Bold numbers indicate the best configurations among the unsupervised approaches. Configurations of our approach do not contain the representative document enrichment step. Taxonomy level Prec. 
Recall F1-Score Level 1 91.80 89.37 90.45 Level 2 86.08 74.80 78.10 Level 3 34.98 19.88 22.95 Table 4: Performance of our Method on the Operational Risk Text Classification Task 5.2 Result For the purpose of experiment, operational teams (not experts) were asked to provide manual tags for a sample of 989 operational incidents. Table 4 provide the classification results of our approach when compared to those manual annotations, considering all three levels of the taxonomy. In a second step in the evaluation, an expert was given the difficult task to challenge each time they disagreed the computer and human annotation and determine which was ultimately correct. This exercise indicated that in 32 cases out of 989 operational incidents under consideration for the Level 1 classification, the machine generated category were more relevant (hence correct) than those identified by the operational team. 5.3 Discussion Given the number of categories, we were satisfied with the level of performance that we observed, especially for Level 1 and Level 2 of the taxonomy. More importantly, as we progress with the follow up exercise of mapping internal incident descriptions, we have evolved from a point where users always mistrust the outcome of the automated classification to a point where users see the suggested mapping from our algorithm as a relevant recommendation. Our perspective on the success of this method in this particular context is that operational risk is a textbook case where domain specific labels and vocabulary prevail. For instance, technical words such as forge, fictitious, bogus, ersatz, or counterfeit indicate almost surely that a Fraudulent Account Opening operation happened. Most of operational incidents must contain a combination of technical keywords due to their highly operational nature. What the method brings is the ability to combine human expertise through seed words with the strength of the machine which can process and memorize large corpus and derive distributional semantics from it. In this way, the cognitive burden of being exhaustive is lifted from the experts shoulders. 6 Conclusion In this paper, we present a method for unsupervised text classification based on computing the similarity between the documents to be classified and a rich description of the categories label. The category label enrichment starts with humanexpert provided keywords but is then expanded through the use of word embeddings. We also investigated whether a consolidation step that removes non discriminant words from the label dictionaries could have an effect on performance. We have not explored whether recent advances in word embeddings from instance ELMO (Peters et al., 2018) and BERT (Devlin et al., 2018) could add further benefits. This is certainly an avenue that we seek to explore. However, for our application domain, we expect that it may not lead to increased performance as words are used to a large extent with the same sense across the corpus. 379 References Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic language model. Journal of machine learning research, 3(Feb):1137–1155. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. C Fellbaum. 1998. Wordnet: An on-line lexical database. Donna K Harman. 1993. The first text retrieval conference (TREC-1), volume 500. 
US Department of Commerce, National Institute of Standards and Technology. Thorsten Joachims. 1999. Transductive inference for text classification using support vector machines. In Icml, volume 99, pages 200–209. Youngjoong Ko and Jungyun Seo. 2000. Automatic text categorization by unsupervised learning. In Proceedings of the 18th conference on Computational linguistics-Volume 1, pages 453–459. Association for Computational Linguistics. Todd A Letsche and Michael W Berry. 1997. Largescale information retrieval with latent semantic indexing. Information sciences, 100(1-4):105–137. Qian Liu, Heyan Huang, Yang Gao, Xiaochi Wei, Yuxin Tian, and Luyang Liu. 2018. Task-oriented word embedding for text classification. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2023–2032. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119. Timothy Miller, Dmitriy Dligach, and Guergana Savova. 2016. Unsupervised document classification with informed topic models. In Proceedings of the 15th Workshop on Biomedical Natural Language Processing, pages 83–91. Kamal Nigam, Andrew Kachites McCallum, Sebastian Thrun, and Tom Mitchell. 2000. Text classification from labeled and unlabeled documents using em. Machine learning, 39(2-3):103–134. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. arXiv preprint arXiv:1802.05365. Delip Rao, P Deepak, and Deepak Khemani. 2006. Corpus based unsupervised labeling of documents. In FLAIRS Conference, pages 321–326. Radim Rehurek and Petr Sojka. 2010. Software framework for topic modelling with large corpora. In In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks. Citeseer. Michael R¨oder, Andreas Both, and Alexander Hinneburg. 2015. Exploring the space of topic coherence measures. In Proceedings of the eighth ACM international conference on Web search and data mining, pages 399–408. ACM. Fabrizio Sebastiani. 2002. Machine learning in automated text categorization. ACM computing surveys (CSUR), 34(1):1–47. Peter D Turney. 2002. Thumbs up or thumbs down?: semantic orientation applied to unsupervised classification of reviews. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 417–424. Association for Computational Linguistics. Lili Yang, Chunping Li, Qiang Ding, and Li Li. 2013. Combining lexical and semantic features for short text classification. Procedia Computer Science, 22:78–86. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Advances in neural information processing systems, pages 649–657.
2019
36
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3696–3709 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3696 Semantically Conditioned Dialog Response Generation via Hierarchical Disentangled Self-Attention Wenhu Chen†, Jianshu Chen‡, Pengda Qin¶, Xifeng Yan† and William Yang Wang† †University of California, Santa Barbara, CA, USA ‡Tencent AI Lab, Bellevue, WA, USA ¶Beijing University of Posts and Telecommunications, China {wenhuchen,xyan,william}@cs.ucsb.edu [email protected] [email protected] Abstract Semantically controlled neural response generation on limited-domain has achieved great performance. However, moving towards multi-domain large-scale scenarios are shown to be difficult because the possible combinations of semantic inputs grow exponentially with the number of domains. To alleviate such scalability issue, we exploit the structure of dialog acts to build a multi-layer hierarchical graph, where each act is represented as a rootto-leaf route on the graph. Then, we incorporate such graph structure prior as an inductive bias to build a hierarchical disentangled self-attention network, where we disentangle attention heads to model designated nodes on the dialog act graph. By activating different (disentangled) heads at each layer, combinatorially many dialog act semantics can be modeled to control the neural response generation. On the large-scale Multi-Domain-WOZ dataset, our model can yield a significant improvement over the baselines on various automatic and human evaluation metrics. 1 Introduction Conversational artificial intelligence (Young et al., 2013) is one of the critical milestones in artificial intelligence. Recently, there have been increasing interests in industrial companies to build task-oriented conversational agents (Wen et al., 2017; Li et al., 2017; Rojas-Barahona et al., 2017) to solve pre-defined tasks such as restaurant or flight bookings, etc (see Figure 1 for an example dialog from MultiWOZ (Budzianowski et al., 2018)). Traditional agents are built based on slotfilling techniques, which requires significant human handcraft efforts. And it is hard to generate naturally sounding utterances in a generalizable and scalable manner. Therefore, different semantically controlled neural language generation models have been developed (Wen et al., 2015, 2016a,b; Dusek and Jurc´ıcek, 2016) to replace the traditional systems, where an explicit semantic representation (dialog act) are used to influence the RNN generation. The canonical approach is proposed in (Wen et al., 2015) to encode each individual dialog act as a unique vector and use it as an extra input feature into the cell of long short-term memory (LSTM) to influence the generation. As pointed in (Wen et al., 2016b), these models though achieving good performance on limited domains, suffer from scalability problem as the possible dialog acts grow combinatorially with the number of domains. In order to alleviate such issue, we propose a hierarchical graph representation by leveraging the structural property of dialog acts. Specifically, we first build a multi-layer tree to represent the entire dialog act space based on their interrelationships. Then, we merge the tree nodes with the same semantic meaning to construct an acyclic multi-layered graph, where each dialog act is interpreted as a root-to-leaf route on the graph. 
Such graph representation of dialog acts not only grasps the inter-relationships between different acts but also reduces the exponential representation cost to almost linear, which will also endow it with greater generalization ability. Instead of simply feeding such vectorized representation as an external feature vector to the neural networks, we propose to incorporate such a structure act as an inductive prior for designing the neural architecture, which we name as hierarchical disentangled self-attention network (HDSA). In Figure 2, we show how the dialog act graph structure is explicitly encoded into model architecture. Specifically, HDSA consists of multiple layers of disentangled self-attention modules (DSA). Each DSA has multiple switches to set the on/off state for its heads, and each head is bound for modeling a designated node in the dialog act graph. At the train3697 Dialog History User: I would like moderate price range please. Belief State:{area=centre, food=modern european, price range=moderate} System: I found de luca cucina and bar and riverside brasserie . does either of them sound good for you? Dialog Act:restaurant-inform:{name=luca cucina and bar, name=riverside brasserie} User: Lets try a table for 1 on Monday at 16:45. Belief Satte: {area=centre, food=modern european, price range=moderate} System: Okay , that worked ! the table will be reserved for 15 minutes . your reference number is MJEOQ3R5. is there anything else I can do for you? Dialog Act: restaurant-booking:{book=reference} User: I'm looking for a cheap place to stay with free wifi . Belief State:{price-range=cheap, internet=yes} System: we have 10 possibilities for you . do you have any additional preferences ? Dialog Act: hotel-inform:{choice=10} Predict Restaurant Hotel Restaurant/Hotel/Hospital/Attraction Databases Field Entry Field Entry Field Entry Field Entry Figure 1: An example dialog from MultiWOZ dataset, where the upper rectangle includes the dialog history, the tables at the bottom represent the external database, and the lower rectangle contains the dialog action and the language surface form that we need to predict. ing stage, conditioned on the given dialog acts and the target output sentences, we only activate the heads in HDSA corresponding to the given acts (i.e., the path in the graph) to activate the heads with their designated semantics. At test time, we first predict the dialog acts and then use them to activate the corresponding heads to generate the output sequence, thereby controlling the semantics of the generated responses without handcrafting rules. As depicted in Figure 2, by gradually activating nodes from domain →action →slot, the model is able to narrow its response down to specifically querying the user about the color and type of the taxi, which provides both strong controllability and interpretability. Experiment results on the large-scale MultiWOZ dataset (Budzianowski et al., 2018) show that our HDSA significantly outperforms other competing algorithms.1 In particular, the proposed hierarchical dialog act representation effectively 1The code and data are released in https://github. com/wenhuchen/HDSA-Dialog Disentangled Self-Attention Dialog Act Graph Hierarchical Disentangled SA Disentangled Self-Attention Disentangled Self-Attention What type and color of taxi do you want to take? taxi police request info book rej color type Figure 2: The left part is the graph representation of the dialog acts, where each path in the graph denotes a unique dialog act. 
The right part denotes our proposed HDSA, where the orange nodes are activated while the others are blocked. (For details, refer to Figure 5) improves the generalization ability on the unseen test cases and decreases the sample complexity on seen cases. In summary, our contributions include: (i) we propose a hierarchical graph representation of dialog acts to exploit their inter-relationships, which greatly reduces the sample complexity and improves generalization, (ii) we propose to incorporate the structure prior in semantic space to design HDSA to explicitly model the semantics of neural generation, and outperforms baselines. 2 Related Work & Background Canonical task-oriented dialog systems are built as pipelines of separately trained modules: (i) user intention classification (Shi et al., 2016; Goo et al., 2018), which is for understanding human intention. (ii) belief state tracker (Williams et al., 2013; Mrksic et al., 2017a,b; Zhong et al., 2018; Chen et al., 2018), which is used to track user’s query constraint and formulate DB query to retrieve entries from a large database. (iii) dialog act prediction (Wen et al., 2017), which is applied to classify the system action. (iv) response generation (Rojas-Barahona et al., 2017; Wen et al., 2016b; Li et al., 2017; Lei et al., 2018) to realize language surface form given the semantic constraint. In order to handle the massive number of entities in the response, Rojas-Barahona et al. (2017); Wen et al. (2016b, 2015) suggest to break response generation into two steps: first generate delexicalized sentences with placeholders like <Res.Name>, and then post-process the sentence by replacing the placeholders with the DB record. The existing modularized neural models have achieved promising performance on limiteddomain datasets like DSTC (Williams et al., 3698 select * from restaurant where food=‘korean’ and area=’north’ History: sys response 1. Restaurant-Recommend-Name 2. Restaurant-Recommend-Price Dialog State Tracking Utterance Understanding Dialog Act Prediction Delexicialized Response Generation Name Location Price Food Stars Little Seoul north low Korean 4 DB Execution History: user query Food: Korean Area: North Price: * Stars: * I want to find a Korean restaurant in the north of the town. I recommend Little Seoul, which has a Low price. Post-Processing I recommend <Res.Name>, which has a <Res.Price> price. Figure 3: Illustration of the neural dialog system. We decompose it into two parts: the lower part describes the dialog state tracking and DB query, and the upper part denotes the Dialog Action Prediction and Response Generation. In this paper, we are mainly interested in improving the performance of the upper part. 2016), CamRes767 (Rojas-Barahona et al., 2017) and KVRET (Eric et al., 2017), etc. However, a recently introduced multi-domain and large-scale dataset MultiWOZ (Budzianowski et al., 2018) poses great challenges to these approaches due to the large number of slots and complex ontology. Dealing with such a large semantic space remains a challenging research problem. We follow the nomenclature proposed in RojasBarahona et al. (2017) to visualize the overview of the pipeline system in Figure 3, and then decompose it into two parts: the lower part (blue rectangle) contains state tracking and symbolic DB execution, the upper part consists of dialog act prediction and response generation conditioned on the state tracking and DB results. 
In this paper, we are particularly interested in the upper part (act prediction and response generation) by assuming the ground truth belief state and DB records are available. More specifically, we set out to investigate how to handle the large semantic space of dialog acts and leverage it to control the neural response generation. Our approach encodes the history utterances into distributed representations to predict dialog acts and then uses the predicted dialog acts to control neural response generation. The key idea of our model is to devise a more compact structured representation of the dialog acts to reduce the exponential growth issue and then incorporate the structural prior for the semantic space into the neural architecture design. Our proposed HDSA is inspired by the linguistically-inform self-attention (Strubell et al., 2018), which combines multi-head self-attention with multi-task NLP tasks to enhance the linguistic awareness of the model. In contrast, our model disentangles different heads to model different semantic conditions in a single task, which provides both better controllability and interpretability. 3 Dialog Act Representation Dialog acts are defined as the semantic condition of the language sequence, comprising of domains, actions, slots, and values. Tree Structure The dialog acts have universally hierarchical property, which is inherently due to the different semantic granularity. Each dialog act can be seen as a root-to-leaf path as depicted in Figure 42. Such tree structure can capture the kinship between dialog acts, i.e. “restaurant-inform-location” has stronger similarity with “restaurant-inform-name” than “hotelrequest-address”. The canonical approach to encode dialog acts is by concatenating the one-hot representation at each tree level into a flat vector like SC-LSTM (Wen et al., 2015; Budzianowski et al., 2018) (details are in in Github3). However, such representation impedes the cross-domain transfer between different slots and the cross-slot transfer between different values (e.g the “recommend” under restaurant domain is different from “recommend” under hospital domain). As a result, the sample complexity can grow combinatorially as the potential dialog act space expands in large-scale real-life dialog systems, where the potential domains and actions can grow dramatically. To address such issue, we propose a more compact graph representation. 2we add dummy node “none” to transform those non-leaf acts into leaf act to normalize all acts into triplet; for example “hotel-inform” is converted into “hotel-inform-none” 3https://github.com/andy194673/ nlg-sclstm-multiwoz/blob/master/ resource/woz3/template.txt 3699 hotel resaurant attraction inform recommend Domain Actions Slot root hotel resaurant attraction inform recommend name root area stars price ticket Merge inform recommend Coarse Fine Coarse Fine name area price name area stars name area ticket Tree-Structure Graph-Structure Domain Domain/Action Domain/Action/Slot Compact Graph Representation Sparse Tree Representation Flattened Hierarchical (D/A/S) OR Figure 4: The left figure describes the tree representation of the dialog acts, and the right figure denotes the obtained graph representation from the left after merging the cross-branch nodes that have the same semantics. The Hierarchical form is used in our main model HDSA, Falttented is used for baseline models. Graph Structure The tree-based representation cannot capture the cross-branch relationship like “restaurant-inform-location” vs. 
“hotel-informlocation”, leading to a huge expansion of the tree. Therefore, we propose to merge the cross-branch nodes that share the same semantics to build a compact acyclic graph in the right part of Figure 44. Formally, we let A denote the set of all the original dialog acts. And for each act a ∈A, we use H(a) = {b1, · · · , bi, · · · , bL} to denote its L-layer graph form, where bi is its one-hot representation in the ith layer of the graph. For example, a dialog act “hotel-inform-name” has a compact graph representation H(a) = {b1 : [1, 0, 0], b2 : [1, 0], b3 : [1, 0, 0, 0, 0]}. More formally, let H1 · · · HL denote the number of nodes at the layer of 1, · · · , L, respectively. Ideally, the total representation cost can be dramatically decreased from O(QL i=1 Hi) tree-based representation to H0=PL i=1 Hi in our graph representation. Due to the page limit, we include the full dialog act graph and its corresponding semantics in the Appendix. When multiple dialog acts H(a)1, · · · , H(a)k are involved in the single response, we propose to aggregate them as A = BitOR(H(a)1, · · · , H(a)k) as the H0dimensional graph representation, where BitOR denotes the bit-wise OR operator5. Generalization Ability Compared to the treebased representation, the proposed graph representation under strong cross-branch overlap can greatly lower the sample complexity. Hence, it leads to great advantage under sparse training instances. For example, suppose the ex4We call it graph because now one child node can have multiple parents, which violates the tree’s definition. 5For example, two acts, H(a)1 = [[1, 0, 0], [1, 0]] and H(a)2 = [[1, 0, 0], [0, 1]], are aggregated into A = [[1, 0, 0], [1, 1]]. act dialog act “hotel-recommend-area” never appears in the training set. Then, at test time when used for response generation, the flat representation will obviously fail. In contrast, with our hierarchical representation, “hotel”, “recommend” and “area” may have appeared separately in other instances (e.g., “recommend” appears in “attraction-recommend-name”). Its graph representation could still be well-behaved and generalize well to the unseen (or less frequent) cases due to the strong compositionality. 4 Model Figure 5 gives an overview of our dialog system. We now proceed to discuss its components below. Dialog Act Predictor We first explain the utterance encoder module, which uses a neural network fACT to encode the dialog history (i.e., concatenation of previous utterances from both the user and the system turns x1, · · · , xm), into distributed token-wise representations u1, · · · , um with its overall representation ¯u as follows: ¯u, u1, · · · , um = fACT (x1, · · · , xm) (1) where fACT can be CNN, LSTM, Transformer, etc, ¯u, u1, · · · , um ∈RD are the representation. The overall feature ¯u is used to predict the hierarchical representation of dialog act. That is, we output a vector Pθ(A) ∈RH0, whose ith component gives the probability of the ith node in the dialog act graph being activated: Pθ(A) = fθ(¯u, vkb, vbf) = σ(V T a tanh(Wu¯u + Wb[vkb; vbf] + b)) (2) where Va ∈RD×H0 is the attention matrix, the weights Wu, Wb, b are the learnable parameters to project the input to RD space, and σ is the Sigmoid function. 
Here, we follow Budzianowski 3700 𝐶" 𝐶# 𝐶$ 0 0 0 0 0 0 train hotel taxi Disentangled Self-Attention Disentangled Self-Attention 0 0 0 𝐶" 𝐶# 𝐶$ 0 0 0 inform request book Disentangled Self-Attention price location, name area 0 0 0 𝑂" 𝑂# 𝑂$ 0 0 0 𝑥" 𝑥# 𝑥' 𝑥( Dialog History User Sys User Hierarchical DSA 𝑦$ 𝑦" Dialog-Act Predictor Dialog Act Graph Utterance Encoder 𝑥' 𝑦# 𝑦$*" 𝑦+ 𝑦" 𝐴Linear Linear Linear 𝑉" 𝑉# 𝑉' Linear Linear Linear 𝐾" 𝐾# 𝐾' Linear Linear Linear 𝑄" 𝑄# 𝑄' Scaled Dot-Product Attention ℎ2" ℎ2# ℎ2' 𝑉 𝐾 𝑄 Linear LayerNorm Positionwise-FF Shared Module Disentangled Self-Attention Control 𝐺" 𝐺# 𝐺' Attention 𝑐" 𝑐# 𝑐$ 𝑢" 𝑢# 𝑢( History History s=[0,1,0] 𝑢" 𝑢# 𝑢( 𝑢 hotel-inform-location hotel-inform-name Figure 5: The left figure describes the dialog act predictor and HDSA, and the right figure describes the details of DSA. The predicted hierarchical dialog acts are used to control the switch in HDSA at each layer. Here we use L = 3 layers, the head numbers at each layer are H = (4, 3, 6) heads, the hierarchical graph representation A=[[0, 1, 0, 0], [0, 1, 0], [0, 0, 1, 1, 0, 0]]. We use m to denote the dialog history length and n for response. et al. (2018); Rojas-Barahona et al. (2017) to use one-hot vector vkb and vbf for representing the DB records and belief state (see the original papers for details). For convenience, we use θ to collect all the parameters of the utterance encoder and action predictor. At training time, we propose to maximize the cross-entropy objective L(θ) as follows: L(θ) =A · log(fθ(¯u, vkb, vbf)+ (1 −A) · log(1 −fθ(¯u, vkb, vbf)) (3) where · denotes the inner product between two vectors. At test time, we predict the dialog acts ˆA = {I(Pθ(A)i > T)|1 ≤i ≤H0}, where T is the threshold and I is the indicator function. Disentangled Self-Attention Recently, the selfattention-based Transformer model has achieved state-of-the-art performance on various NLP tasks such as machine translation (Vaswani et al., 2017), and language understanding (Devlin et al., 2018; Radford et al., 2018). The success of the Transformer is partly attributed to the multi-view representation using multi-head attention architecture. Unlike the standard transformer which concatenates vectors from different heads into one vector, we propose to uses a switch to activate certain heads and only pass through their information to the next level (depicted in the right of Figure 5). Hence, we are able to disentangle the H attention heads to model H different semantic functionalities, and we refer to such module as the disentangled self-attention (DSA). Formally, we follow the canonical Transformer (Vaswani et al., 2017) to define the Scaled Dot-Product Attention function given the input query/key/value features Q, K, V ∈Rn×D as: Attention(Q, K, V ) = softmax(QKT √ D )V (4) where n denotes the sequence length of the input, Q, K, V denotes query, key and value. Here, we use H different self attention functions with their independent parameterization to compute the multi-head representation Gi as follows: gi = Attention(QW Q i , KW K i , V W V i ) Gi = fP F F (fLM(fMLP (fAT T (gi, u1:m))) (5) where the input matrices Q, K, V are computed from the input token embedding x1:n ∈Rn×D, and D denotes the dimension of the embedding. The ith head adopts its own parameters W Q i , W K i , W V i ∈RD× D H to compute the output gi ∈ Rn× D H . We shrink the dimension at each head to 3701 D/H to reduce the computation cost as suggested in Vaswani et al. (2017). 
We first use the cross-attention network fATT to incorporate the encoded dialog history u1:m, and then we apply a position-wise feed forward neural network fPFF , a layer normalization fLM, and a linear projection layer fMLP to obtain Gi ∈ Rn×D. These layers are shared across different heads. The main innovation of our architecture lies in disentangling the heads. That is, instead of concatenating Gi to obtain the layer output like the standard Transformer, we employ a binary switch vector s = (α1, . . . , αH) ∈{0, 1}H to control H different heads and aggregate them as a n × D output matrix G = PH i=1 αiGi. Specifically, the j-th row of G, denoted as Cj ∈RD, can be understood as the output corresponding to the j-th input token yj in the response. This approach is similar to a gating function to selectively pass desired information. By manipulating the attention-head switch s, we can better control the information flow inside the self-attention module. We illustrate the gated summation over multi-heads in Figure 6. Head 1 -> 𝐺" Head 2 -> 𝐺# Head 3 -> 𝐺% 𝑦" 𝑦# 𝑦% s=[1,0,1] Gated-output 𝐺 𝐶" 𝐶# 𝐶% Time step Figure 6: The disentangled multi-head attention, with a sequence length of 3, 3 different heads are used with hidden dimension 7. The switch only enables the information flow from the 1st and 3rd head. Hierarchical DSA When the dialog system involves more complex ontology, the semantic space can grow rapidly. In consequence, a single-layer disentangled self-attention with a large number of heads is difficult to handle the complexity. Therefore, we further propose to stack multiple DSA layers to better model the huge semantic space with strong compositionality. As depicted in Figure 3, the lower layers are responsible for grasping coarse-level semantics and the upper layers are responsible for capturing fine-level semantics. Such progressive generation bears a strong similarity with human brains in constructing precise responses. In each DSA layer, we feed the utterance encoding u1:m and last layer output C1:n as the input to obtain the newer output matrix G. We collect the output O1:n = C1:n from the last DSA layer to compute the joint probability over a observed sequence y1:n, which can be decomposed as a series of product over the probabilities:6 Pβ(y1:n|u1:m, s1:L) = n Y l=1 pβ(yl|y0:l−1, u1:m, s1:L) pβ(yl|y0:l−1, u1:m, s1:L) = softmax(WvOl + bv) where Wv ∈RD×V and bv ∈RV are the projection weight and bias onto a vocabulary of size V , l ∈{1, · · · , n} is the index, softmax denotes the softmax operation, s1:L denotes the set of the attention switches s1, · · · , sL over the L layers, and β denotes all the decoder parameters. Recall that the graph structure of dialog acts is explicitly encoded into HDSA as a prior, where each head in HDSA is set to model a designated semantic node on the graph. In consequence, the hierarchical representation A can be used to control the head switch s1:L. At training time, the model parameters β are optimized from the training data triple (y1:n, u1:m, A) to maximize the likelihood of ground truth acts and responses given the dialog history. Formally, we propose to maximize the following objective function as follows: L(β) = log Pβ(y1:n|u1:m, s1:L = A) At test time, we propose to use the predicted dialog act ˆA to control the language generation. The errors can be seen as coming from two sources, one is from inaccurate dialog act prediction, the other is from imperfect response generation. 
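To make the gating mechanism concrete, a simplified PyTorch sketch of one DSA layer and of the stacked HDSA decoder is given below. The shared cross-attention over the encoded history and the fPFF, fLM and fMLP sub-layers are omitted, and the per-head dimensions are handled only approximately, so this is a minimal reconstruction of the switch-controlled head aggregation rather than the released implementation; the default head counts (10/7/27) correspond to the domain/action/slot node counts used in the experiments below.

```python
import torch
import torch.nn as nn

class DisentangledSelfAttention(nn.Module):
    """One DSA layer: the H head outputs are gated by a binary switch and
    summed, G = sum_i alpha_i * G_i, instead of being concatenated."""
    def __init__(self, d_model, num_heads, d_head=None):
        super().__init__()
        self.h = num_heads
        self.d_head = d_head if d_head is not None else max(d_model // num_heads, 1)
        inner = self.h * self.d_head
        self.q_proj = nn.Linear(d_model, inner)
        self.k_proj = nn.Linear(d_model, inner)
        self.v_proj = nn.Linear(d_model, inner)
        self.out = nn.Linear(self.d_head, d_model)   # shared projection back to D

    def forward(self, x, switch):                    # x: (B, n, D); switch: (B, H)
        B, n, _ = x.shape
        def split(t):                                # -> (B, H, n, d_head)
            return t.view(B, n, self.h, self.d_head).transpose(1, 2)
        q, k, v = split(self.q_proj(x)), split(self.k_proj(x)), split(self.v_proj(x))
        attn = torch.softmax(q @ k.transpose(-2, -1) / self.d_head ** 0.5, dim=-1)
        g = self.out(attn @ v)                       # per-head outputs G_i: (B, H, n, D)
        alpha = switch.float().view(B, self.h, 1, 1)
        return (alpha * g).sum(dim=1)                # gated sum over heads: (B, n, D)

class HierarchicalDSA(nn.Module):
    """One DSA layer per level of the dialog-act graph: domain -> action -> slot."""
    def __init__(self, d_model, vocab_size, heads_per_layer=(10, 7, 27)):
        super().__init__()
        self.layers = nn.ModuleList(
            [DisentangledSelfAttention(d_model, h) for h in heads_per_layer])
        self.to_vocab = nn.Linear(d_model, vocab_size)

    def forward(self, y_embeddings, act_switches):
        # act_switches: one (B, H_l) binary vector per layer, taken from the
        # ground-truth acts during training and from the thresholded act
        # predictor at test time.
        x = y_embeddings
        for layer, switch in zip(self.layers, act_switches):
            x = layer(x, switch)
        return self.to_vocab(x)                      # token logits O_1:n over the vocabulary
```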
5 Experiments Dataset To evaluate our proposed methods, we use the recently proposed MultiWOZ dataset (Budzianowski et al., 2018) as the benchmark, which was specifically designed to cover the challenging multi-domain and large-scale dialog managements (see the summary in Table 1). This new benchmark involves a much larger dialog action space due to the inclusion of multiple domains and complex database backend. We represent the 625 potential dialog acts into 6We follow the standard approach in Transformer to use a mask to make Ol depend only on y0:l−1 during training. And during test time, we decode sequentially from left-to-right. 3702 a three-layered hierarchical graph that with a total 44 nodes (see Appendix for the complete graph). We follow Budzianowski et al. (2018) Dialogs Total Turns Unique Tokens Value 8538 115,424 24,071 4510 Dialog Acts Domain Actions Slots 625 10 7 27 Table 1: Summary of the MultiWOZ dataset. to select 1000 dialogs as the test set and 1000 dialogs as the development set. And we mainly focus on the context-to-response problem, with the dialog act prediction being a preliminary task. The best HDSA uses three DSA layers with 10/7/27 heads to separately model the semantics of domain, actions and slot (dummy head is included to model “none” node). Adam (Kingma and Ba, 2014) with a learning rate of 10−3 is used to optimize the objective. A beam size of 2 is adopted to search the hypothesis space during decoding with vocabulary size of 3,130. Also, by small-scale search, we fix the threshold T = 0.4 due to better empirical results. Methods Precision Recall F1 Bi-directional LSTM 72.4 70.5 71.4 Word-CNN 72.8 70.3 71.5 3-layer Transformer 73.3 72.6 73.1 12-layer BERT 77.5 77.4 77.3 Table 2: Accuracy of Dialog Act Prediction Dialog Act Prediction We first train dialog act predictors using different neural networks to compare their performances. The experimental results (measured in F1 scores) are reported in Table 2. Experimental results show that fine-tuning the pretrained BERT (Devlin et al., 2018) can lead to significantly better performance than the other models. Therefore, we will use it as the dialog act prediction model in the following experiments. Instead of jointly training the predictor and the response generator, we simply fix the trained predictor when learning the generator Pβ(y). 5.1 Automatic Evaluation We follow Budzianowski et al. (2018) to use delexicalized-BLEU (Papineni et al., 2002), inform rate and request success as three basic metrics to compare the delexicalized generation against the delexicalized reference. We further propose Entity F1 (Rojas-Barahona et al., 2017) to evaluate the entity coverage accuracy (including all slot values, days, numbers, and reference, etc), and restore-BLEU to compare the restored generation against the raw reference. The evaluation metrics are detailed in the supplementary material. Before diving into the experiments, we first list all the models we experiment with as follows: 1. Without Dialog Act, we use the official code 7: (i) LSTM (Budzianowski et al., 2018): it uses history as the attention context and applies belief state and KB results as side inputs. (ii) Transformer (Vaswani et al., 2017): it uses stacked Transformer architecture with dialog history as source attention context. 2. With Sparse Tree Dialog Act, we feed the treebased representation as an external vectors into different architectures. 
(i) SC-LSTM (Wen et al., 2015): it feeds the sparse dialog act to the semantic gates to control the generation process. (ii) Transformer-in: it appends the sparse dialog act vector to input word embedding (iii) Transformer-out: it appends the sparse dialog act vector to the last layer output, before the softmax function. 3. With Compact Graph Dialog Act (Predicted), we use the proposed graph representation for dialog acts and use it to control the natural language generation. (i) Transformer-in/out: it uses the flattened graph representation and feeds it as an external embedding feature. (ii) Straight DSA: it uses the flattened graph representation and model it with a one-layer DSA followed with two layers of self-attention. (iii) 2-layer HDSA: it adopts the partial action/slot levels of hierarchical graph representation, used as an ablation study. (iv) 3-layer HDSA: it adopts the full 3-layered hierarchical graph representation, used for the main model. 4. With Graph Dialog Act (Groundtruth): it uses the ground truth dialog acts as input to see the performance upper bound of the proposed response generator architecture. In order to make these models comparable, we design different hidden dimensions to make their total parameter size comparable. We demonstrate 7https://github.com/budzianowski/ multiwoz 3703 Dialog-Act Methods Delexicalized Restored BLEU Inform Request Entity F1 BLEU None LSTM (Budzianowski et al., 2018) 18.8 71.2 60.2 54.8 15.1 3-layer Transformer (Vaswani et al., 2017) 19.1 71.1 59.9 55.1 15.2 Tree Act SC-LSTM (Wen et al., 2015) 20.5 74.5 62.5 57.7 16.6 3-layer Transformer-out 19.9 74.4 61.1 57.4 16.0 3-layer Transformer-in 20.2 73.8 62.1 57.3 16.2 Graph Act (Predicted) 3-layer Transformer-out 22.5 80.8 64.8 64.2 19.3 3-layer Transformer-in 22.7 80.4 65.1 64.6 19.9 Straight DSA (44 heads) + 2 x SA 22.6 80.3 67.1 65.0 20.0 2-layer HDSA (7/27 heads) + SA 23.2 82.9 69.1 65.1 20.3 3-layer HDSA (10/7/27 heads) 23.6 82.9 68.9 65.7 20.6 Graph Act (Groundtruth) 3-layer Transformer-in 29.1 85.5 72.6 83.8 25.1 Straight DSA (44 heads) + 2 x SA 29.6 86.4 75.6 84.1 25.5 3-layer HDSA (10/7/27 heads) 30.4 87.9 78.0 86.2 26.2 Table 3: Empirical Results on MultiWOZ Response Generation, we experiment with three forms of dialog act, namely none, one-hot and hierarchical. the performance of different models in Table 3, and briefly conclude with the following points: (i) by feeding the sparse tree representation to input/output layer (Transformer-in/out), the model is not able to capture the large semantics space of dialog acts with sparse training instances, which unsurprisingly leads to restricted performance gain against without dialog act input. (ii) the graph dialog act is essential in reducing the sample complexity, the replacement can lead to significant and consistent improvements across different models. (iii) the hierarchical graph structure prior is an efficient inductive bias; the structureaware HDSA can better model the compositional semantic space of dialog acts to yield a decent gain over Transformer-in/out with flattened input vector. (vi) our approaches yield significant gain (10+%) on the Inform/Request success rate, which reflects that the explicit structured representation of dialog act is very effective in guiding dialog response in accomplishing the desired tasks. (v) the generator is greatly hindered by the predictor accuracy, by feeding the ground truth acts, the proposed HDSA is able to achieve an additional gain of 7.0 in BLEU and 21% in Entity F1. 
Generalization Ability To better understand the performance gain of the hierarchical graph-based representation, we design synthetic tests to examine its generalization ability. Specifically, we divide the dialog acts into five categories based on their frequency of appearance in the training data: very few shot (1-100 times), few shot (100500 times), medium shot (500-2K times), many shot (2K-5K times), and very many shot (5K+ times). We compute the average BLEU score of the turns within each frequency category and plot the result in Figure 7. First, by comparing Transformer-in with compact Graph-Act against Transformer-in with sparse Tree-Act, we observe that for few number shots, the graph act significantly boosts the performance, which reflects our conjecture to lower sample complexity and generalize better to unseen (or less frequent) cases. Furthermore, by comparing Graph-Act Transformerin with HDSA, we observe that HDSA ahieves better results by exploiting the hierarchical structure in dialog act space. 5.4 9.7 17.3 24.6 25.1 11.1 14.1 19.1 25.2 25.4 14.4 16.8 20.9 25.5 25.4 0 5 10 15 20 25 30 Very Few Shot Few Shot Medium Shot Many Shot Very Many Shot Sentence BLEU Tree-Act Transformer-in Graph-Act Transformer-in HDSA Figure 7: The BLEU scores for dialog acts with different number of shots. 5.2 Human Evaluation Response Quality Owing to the low consistency between automatic metrics and human perception on conversational tasks, we also recruit trustful judges from Amazon Mechanical Turk 3704 Winer Consistency Relevance Coherence SC-LSTM 32.8% 38.8% 36.1% Tie 11.8% 11.4% 19.0% HDSA 55.4% 49.8% 44.8% Model Match Partial Match Mismatch HDSA 90% 7% 3% Trans-in 81% 12% 7% SC-LSTM 72% 10% 18% Table 4: Experimental results of two human evaluations for HDSA vs. SC-LSTM vs. Transformer-in. The top table gives the response quality evaluation and the bottom table demonstrates the controllability evaluation results in section 5.2. (AMT) (with prior approval rate >95%)8 to perform human comparison between the generated responses from HDSA and SC-LSTM. Three criteria are adopted: (i) relevance: the response correctly answers the recent user query. (ii) coherence: the response is coherent with the dialog history. (iii) consistency: the generated sentence is semantically aligned with ground truth. During the evaluation, each AMT worker is presented two responses separately generated from HDSA and SC-LSTM, as well the ground truth dialog history. Each HIT assignment has 5 comparison problems, and we have a total of 200 HIT assignments to distribute. In the end, we perform statistical analysis on the harvested results after rejecting the failure cases and display the statistics in Table 4. From the results, we can observe that our model significantly outperforms SC-LSTM in the coherence, i.e., our model can better control the generation to maintain its coherence with the dialog history. Semantic Controllability In order to quantitatively compare the controllability of HDSA, Graph-Act Tranformer-in, and SC-LSTM, we further design a synthetic NLG experiment, where we randomly pick 50 dialog history as the context from test set, and then randomly select 3 dialog acts and their combinations as the semantic condition to control the model’s responses generation. We demonstrate an example in the supplementary to visualize the evaluation procedure. 
Quantitatively, we hire human workers to rate (measured in match, partially match, and totally mismatch) whether the model follows the given semantic condition to generate coherent sentences. The experimental results are reported in the bottom half of Table 4, which demonstrate that both the com8https://www.mturk.com/ pact dialog act representation and the hierarchical structure prior are essential for controllability. 6 Discussion Graph Representation as Transfer Learning The proposed graph representation works well under the cases where the set of domain slotvalue pairs have significant overlaps, like Restaurant, Hotel, where the knowledge is easy to transfer. Under occasions where such exact overlap is scarce, we propose to use group similar concepts together as hypernym and use one switch to control the hypernym, which can generalize the proposed method to the broader domain. Compression vs. Expressiveness A trade-off that we found in our structure-based encoding scheme is that: when multiple dialog acts are involved with overlaps in the action layer, ambiguity will happen under the graph representation. For example, the two dialog acts “restaurant-informprice” and “hotel-inform-location” are merged as “[restaurant, hotel] →[inform] →[price, location]”, the current compressed representation is unable to distinguish them with “hotel-informprice” or “restaurant-inform-location”. Though these unnatural cases are very rare in the given dataset without hurting the performance per se, we plan to address such pending expressiveness problem in the future research. 7 Conclusion and Future Work In this paper, we propose a new semanticallycontrolled neural generation framework to resolve the scalability and generalization problem of existing models. Currently, our proposed method only considers the supervised setting where we have annotated dialog acts, and we have not investigated the situation where such annotation is not available. In the future, we intend to infer the dialog acts from the annotated responses and use such noisy data to guide the response generation. 8 Acknowledgements We really appreciate the efforts of the anonymous reviews and cherish their valuable comments, they have helped us improve the paper a lot. We are gratefully supported by a Tencent AI Lab RhinoBird Gift Fund. We are also very thankful for the public available dialog dataset released by University of Cambridge and PolyAI. 3705 References Pawel Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, I˜nigo Casanueva, Stefan Ultes, Osman Ramadan, and Milica Gasic. 2018. Multiwoz - A largescale multi-domain wizard-of-oz dataset for taskoriented dialogue modelling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 5016–5026. Wenhu Chen, Jianshu Chen, Yu Su, Xin Wang, Dong Yu, Xifeng Yan, and William Yang Wang. 2018. Xlnbt: A cross-lingual neural belief tracking framework. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 414–424. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Ondrej Dusek and Filip Jurc´ıcek. 2016. A contextaware natural language generator for dialogue systems. 
In Proceedings of the SIGDIAL 2016 Conference, The 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue, 13-15 September 2016, Los Angeles, CA, USA, pages 185– 190. Mihail Eric, Lakshmi Krishnan, Francois Charette, and Christopher D. Manning. 2017. Key-value retrieval networks for task-oriented dialogue. In Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue, Saarbr¨ucken, Germany, August 1517, 2017, pages 37–49. Chih-Wen Goo, Guang Gao, Yun-Kai Hsu, Chih-Li Huo, Tsung-Chieh Chen, Keng-Wei Hsu, and YunNung Chen. 2018. Slot-gated modeling for joint slot filling and intent prediction. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 2 (Short Papers), pages 753–757. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Wenqiang Lei, Xisen Jin, Min-Yen Kan, Zhaochun Ren, Xiangnan He, and Dawei Yin. 2018. Sequicity: Simplifying task-oriented dialogue systems with single sequence-to-sequence architectures. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 1437–1447. Xiujun Li, Yun-Nung Chen, Lihong Li, Jianfeng Gao, and Asli C¸ elikyilmaz. 2017. End-to-end taskcompletion neural dialogue systems. In Proceedings of the Eighth International Joint Conference on Natural Language Processing, IJCNLP 2017, Taipei, Taiwan, November 27 - December 1, 2017 - Volume 1: Long Papers, pages 733–743. Nikola Mrksic, Diarmuid ´O S´eaghdha, Tsung-Hsien Wen, Blaise Thomson, and Steve J. Young. 2017a. Neural belief tracker: Data-driven dialogue state tracking. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pages 1777–1788. Nikola Mrksic, Ivan Vulic, Diarmuid ´O S´eaghdha, Ira Leviant, Roi Reichart, Milica Gasic, Anna Korhonen, and Steve J. Young. 2017b. Semantic specialization of distributional word vector spaces using monolingual and cross-lingual constraints. TACL, 5:309–324. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Association for Computational Linguistics. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. URL https://s3us-west-2. amazonaws. com/openai-assets/researchcovers/languageunsupervised/language understanding paper. pdf. Lina Maria Rojas-Barahona, Milica Gasic, Nikola Mrksic, Pei-Hao Su, Stefan Ultes, Tsung-Hsien Wen, Steve J. Young, and David Vandyke. 2017. A network-based end-to-end trainable task-oriented dialogue system. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017, Valencia, Spain, April 3-7, 2017, Volume 1: Long Papers, pages 438–449. Yangyang Shi, Kaisheng Yao, Le Tian, and Daxin Jiang. 2016. Deep LSTM based feature mapping for query classification. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego California, USA, June 12-17, 2016, pages 1501–1511. 
Emma Strubell, Patrick Verga, Daniel Andor, David Weiss, and Andrew McCallum. 2018. Linguistically-informed self-attention for semantic role labeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 November 4, 2018, pages 5027–5038. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. 3706 Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Lina Maria Rojas-Barahona, Pei-Hao Su, Stefan Ultes, David Vandyke, and Steve J. Young. 2016a. Conditional generation and snapshot learning in neural dialogue systems. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 2153–2162. Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Lina Maria Rojas-Barahona, Pei-Hao Su, David Vandyke, and Steve J. Young. 2016b. Multi-domain neural network language generation for spoken dialogue systems. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego California, USA, June 12-17, 2016, pages 120–129. Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Peihao Su, David Vandyke, and Steve J. Young. 2015. Semantically conditioned lstm-based natural language generation for spoken dialogue systems. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015, pages 1711–1721. Tsung-Hsien Wen, Yishu Miao, Phil Blunsom, and Steve J. Young. 2017. Latent intention dialogue models. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, pages 3732– 3741. Jason D. Williams, Antoine Raux, and Matthew Henderson. 2016. The dialog state tracking challenge series: A review. D&D, 7(3):4–33. Jason D. Williams, Antoine Raux, Deepak Ramachandran, and Alan W. Black. 2013. The dialog state tracking challenge. In Proceedings of the SIGDIAL 2013 Conference, The 14th Annual Meeting of the Special Interest Group on Discourse and Dialogue, 22-24 August 2013, SUPELEC, Metz, France, pages 404–413. Steve Young, Milica Gaˇsi´c, Blaise Thomson, and Jason D Williams. 2013. Pomdp-based statistical spoken dialog systems: A review. Proceedings of the IEEE, 101(5):1160–1179. Victor Zhong, Caiming Xiong, and Richard Socher. 2018. Global-locally self-attentive dialogue state tracker. In ACL. 3707 A Details of Model Implementation Here we detailedly explain the model implementation of the baselines and our proposed HDSA model. In the encoder side, we use a three-layered transformer with input embedding size of 64 and 4 heads, the dimension of query/value/key are all set to 16, in the output layer, the results of 4 heads are concatenated to obtain a 64-dimensional vector, which is the first broadcast into 256-dimension and then back-projected to 64-dimension. By stacking three layers of such architecture, we obtain at the end the series of 64-dimensional vectors. Following BERT, we use the first symbol as the sentence-wise representation u, and compute its matching score against all the tree node to predict the representation of dialog acts ˆA. 
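A rough PyTorch transcription of this matching-score predictor (Eq. (2) in the main text) might look as follows; the encoder producing ū and all module names are abstractions of ours rather than the released code, and training would use the node-wise binary cross-entropy of Eq. (3), e.g. via nn.BCELoss.

```python
import torch
import torch.nn as nn

class DialogActPredictor(nn.Module):
    def __init__(self, d_model, kb_bf_dim, num_graph_nodes):        # num_graph_nodes = H0
        super().__init__()
        self.proj_u = nn.Linear(d_model, d_model)                    # W_u
        self.proj_b = nn.Linear(kb_bf_dim, d_model)                   # W_b and bias b
        self.v_a = nn.Linear(d_model, num_graph_nodes, bias=False)    # V_a

    def forward(self, u_bar, v_kb, v_bf):
        hidden = torch.tanh(self.proj_u(u_bar)
                            + self.proj_b(torch.cat([v_kb, v_bf], dim=-1)))
        return torch.sigmoid(self.v_a(hidden))   # activation probability for each graph node

def decide_acts(node_probs, threshold=0.4):
    # Test-time rule: node i is switched on when its probability exceeds T (T = 0.4 above).
    return (node_probs > threshold).long()
```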
Figure 8: Illustration of the architecture of Transformer-in (word embeddings, position embeddings, and the dialog act embedding are summed and fed through stacked Transformer layers).

In the decoder, we take as input features x1, ..., xn of any length, each with dimension 64. In the first layer, since we have 10 heads, the dimension of each head is 6, so the key and query feature dimensions are fixed to 6; the second layer uses a head dimension of 9, and the third a head dimension of 2. The value feature dimension is fixed to 16 throughout, which matches the encoder side. After self-attention, the position-wise feed-forward network projects each feature back to 64 dimensions, which is further projected to the 3.1K-word vocabulary to model the word probability.

B Automatic Evaluation
We demonstrate an example of our automatic evaluation metrics in Figure 9.

C Baseline Implementation
Figure 8 visualizes how we feed the dialog act as an embedding into the transformer to control the sequence generation process.

D Human Evaluation Interface
To better understand the human evaluation procedure, we show the user interface in Figure 10.

E Controllability Evaluation
To better understand the results, we depict an example in Figure 11, where 3 different dialog acts are picked as the semantic condition to constrain the response generation.

F Enumeration of all the Dialog Acts
Here we first enumerate the node semantics of the graph representation as follows:
1. Domain layer (10 choices): 'restaurant', 'hotel', 'attraction', 'train', 'taxi', 'hospital', 'police', 'bus', 'booking', 'general'.
2. Action layer (7 choices): 'inform', 'request', 'recommend', 'book', 'select', 'sorry', 'none'.
3. Slot layer (27 choices): 'pricerange', 'id', 'address', 'postcode', 'type', 'food', 'phone', 'name', 'area', 'choice', 'price', 'time', 'reference', 'none', 'parking', 'stars', 'internet', 'day', 'arriveby', 'departure', 'destination', 'leaveat', 'duration', 'trainid', 'people', 'department', 'stay'.
Then we enumerate the entire graph, shown in Figure 12.

Figure 9: Illustration of different evaluation metrics (Entity F1, delexicalized BLEU, and restored BLEU after post-processing), in the delexicalized and non-delexicalized form.

Figure 10: Illustration of the Human Evaluation Interface.

Figure 11: Illustration of an example of controlling response generation given a dialog act condition. A check mark means pass and a cross mark means fail.

Figure 12: Illustration of the entire dialog graph.
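As a concrete illustration of how this three-layer node inventory can be used, the sketch below enumerates the 10 + 7 + 27 = 44 graph nodes and encodes a set of (domain, action, slot) acts as a multi-hot vector. This is only an illustrative encoding under our own assumptions; it is not the HDSA implementation, and all function and variable names are ours.

```python
# Illustrative sketch only (not the authors' code): enumerate the three node
# layers of the dialog act graph and encode acts as a multi-hot vector.
DOMAINS = ["restaurant", "hotel", "attraction", "train", "taxi",
           "hospital", "police", "bus", "booking", "general"]
ACTIONS = ["inform", "request", "recommend", "book", "select", "sorry", "none"]
SLOTS = ["pricerange", "id", "address", "postcode", "type", "food", "phone",
         "name", "area", "choice", "price", "time", "reference", "none",
         "parking", "stars", "internet", "day", "arriveby", "departure",
         "destination", "leaveat", "duration", "trainid", "people",
         "department", "stay"]

# Prefix each node by its layer so that, e.g., action 'none' and slot 'none'
# remain distinct nodes.
NODES = ([f"domain:{d}" for d in DOMAINS]
         + [f"action:{a}" for a in ACTIONS]
         + [f"slot:{s}" for s in SLOTS])          # 10 + 7 + 27 = 44 nodes
NODE_INDEX = {n: i for i, n in enumerate(NODES)}

def encode_acts(acts):
    """Encode dialog acts such as [('restaurant', 'inform', 'area')] as a
    44-dimensional multi-hot vector over the graph nodes."""
    vec = [0] * len(NODES)
    for domain, action, slot in acts:
        for node in (f"domain:{domain}", f"action:{action}", f"slot:{slot}"):
            vec[NODE_INDEX[node]] = 1
    return vec

if __name__ == "__main__":
    v = encode_acts([("restaurant", "inform", "area"),
                     ("restaurant", "request", "pricerange")])
    print(len(NODES), sum(v))  # 44 nodes, 4 active (domain shared by both acts)
```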
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3710–3720, Florence, Italy, July 28 - August 2, 2019. ©2019 Association for Computational Linguistics

Incremental Learning from Scratch for Task-Oriented Dialogue Systems

Weikang Wang1,2, Jiajun Zhang1,2, Qian Li3, Mei-Yuh Hwang3, Chengqing Zong1,2,4 and Zhifei Li3
1 National Laboratory of Pattern Recognition, Institute of Automation, CAS, Beijing, China
2 University of Chinese Academy of Sciences, Beijing, China
3 Mobvoi, Beijing, China
4 CAS Center for Excellence in Brain Science and Intelligence Technology, Beijing, China
{weikang.wang, jjzhang, cqzong}@nlpr.ia.ac.cn {qli, mhwang, zfli}@mobvoi.com

Abstract
Clarifying user needs is essential for existing task-oriented dialogue systems. However, in real-world applications, developers can never guarantee that all possible user demands are taken into account in the design phase. Consequently, existing systems will break down when encountering unconsidered user needs. To address this problem, we propose a novel incremental learning framework to design task-oriented dialogue systems, or Incremental Dialogue System (IDS) for short, without pre-defining the exhaustive list of user needs. Specifically, we introduce an uncertainty estimation module to evaluate the confidence of giving correct responses. If there is high confidence, IDS will provide responses to users. Otherwise, humans will be involved in the dialogue process, and IDS can learn from human intervention through an online learning module. To evaluate our method, we propose a new dataset which simulates unanticipated user needs in the deployment stage. Experiments show that IDS is robust to unconsidered user actions, can update itself online by smartly selecting only the most effective training data, and hence attains better performance with less annotation cost.1

1 Introduction
Data-driven task-oriented dialogue systems have been a focal point in both academic and industry research recently. Generally, the first step of building a dialogue system is to clarify what users are allowed to do. Then developers can collect data to train dialogue models to support the defined capabilities. Such systems work well if all possible combinations of user inputs and conditions are considered in the training stage (Paek and Pieraccini, 2008; Wang et al., 2018). However, as shown in Fig. 1, if users have unanticipated needs, the system will give unreasonable responses.

Figure 1: An example of a task-oriented dialogue system. The system is designed to guide users to find a suitable product (System: "Hi, I can help you find the most suitable product."). Thus, when it encounters unconsidered user needs such as "What should I do to update the operating system?", it gives an unreasonable response ("Our products support Android and iOS. Which one do you prefer?").

This phenomenon is mainly caused by a biased understanding of real users. In fact, before system deployment, we do not know what the customers will request of the system. In general, this problem can be alleviated by more detailed user studies. But we can never guarantee that all user needs are considered in the system design. Besides, the user inputs are often diverse due to the complexity of natural language. Thus, it is impossible to collect enough training samples to cover all variants.

1 https://github.com/Leechikara/Incremental-Dialogue-System
Consequently, a system trained with biased data will not respond to user queries correctly in some cases, and these errors can only be discovered after the incident. Since real user behaviors are elusive, it is clearly a better option to make no assumptions about user needs than to define them in advance.

To that end, we propose the novel Incremental Dialogue System (IDS). Different from the existing training-deployment convention, IDS does not make any assumptions about the user needs or how users express their intentions. In this paradigm, all reasonable queries related to the current task are legal, and the system can learn to deal with user queries online. Specifically, after the user sends a query to our system, we use an uncertainty estimation module to evaluate the confidence that the dialogue model can respond correctly. If there is high confidence, IDS gives its response to the user. Otherwise, a human intervenes and provides a reasonable answer. When humans are involved, they can select a response from the current response candidates or give a new response to the user. If a new answer is provided, we add it to the system response candidates. Then, the context-response pair produced by the human is fed into the dialogue model to update the parameters via an online learning module. Through continuous interactions with users after deployment, the system becomes more and more knowledgeable, and human intervention becomes less and less needed.

To evaluate our method, we build a new dataset consisting of five sub-datasets (named SubD1, SubD2, SubD3, SubD4 and SubD5) within the context of customer services. Following existing work (Bordes et al., 2016), our dataset is generated by complicated and elaborate rules. SubD1 supports the most limited dialogue scenarios, and each later sub-dataset covers more scenarios than the previous one. To simulate unanticipated user needs, we train the dialogue models on simpler datasets and test them on the harder ones. Extensive experiments show that IDS is robust to unconsidered user actions and can learn dialogue knowledge online from scratch. Besides, compared with existing methods, our approach significantly reduces annotation cost.

In summary, our main contributions are threefold:
(1) To the best of our knowledge, this is the first work to study an incremental learning framework for task-oriented dialogue systems. In this paradigm, developers do not need to define user needs in advance and avoid laboriously collecting biased training data.
(2) To achieve this goal, we introduce IDS, which is robust to new user actions and can extend itself online to accommodate new user needs.
(3) We propose a new benchmark dataset to study the inconsistency of training and testing in task-oriented dialogue systems.

2 Background and Problem Definition
Existing work on data-driven task-oriented dialogue systems includes generation based methods (Wen et al., 2016; Eric and Manning, 2017) and retrieval based methods (Bordes et al., 2016; Williams et al., 2017; Li et al., 2017). In this paper, we focus on the retrieval based methods, because they always return fluent responses. In a typical retrieval based system, a user gives an utterance $x_t$ to the system at the $t$-th turn. Let $(x_{t,1}, \dots, x_{t,N})$ denote the tokens of $x_t$.
Then, the system chooses an answer $y_t = (y_{t,1}, \dots, y_{t,M})$ from the candidate response set $R$ based on the conditional distribution $p(y_t|C_t)$, where $C_t = (x_1, y_1, \dots, x_{t-1}, y_{t-1}, x_t)$ is the dialogue context consisting of all user utterances and responses up to the current turn. By convention, the dialogue system is designed to handle predefined user needs, and users are expected to interact with the system based on a limited number of dialogue actions. However, predefining all user demands is impractical, and unexpected queries may be given to the system after it is deployed. In this work, we mainly focus on handling this problem.

3 Incremental Dialogue System
As shown in Fig. 2, IDS consists of three main components: a dialogue embedding module, an uncertainty estimation module, and an online learning module.

Figure 2: An overview of the proposed IDS.

In the context of customer services, when the user sends an utterance to the system, the dialogue embedding module encodes the current context into a vector. Then, the uncertainty estimation module evaluates the confidence of giving a correct response. If there is high confidence, IDS gives its response to the user. Otherwise, the hired customer service staff are involved in the dialogue process and provide a reasonable answer, which gives us a new ground-truth context-response pair. Based on the newly added context-response pairs, the system is updated via the online learning module.

3.1 Dialogue Embedding
Given dialogue context $C_t$ in the $t$-th turn, we first embed each utterance in $C_t$ using a Gated Recurrent Unit (GRU) (Chung et al., 2014) based bidirectional recurrent neural network (bi-RNN). The bi-RNN transforms each utterance $x = (w_1, w_2, \dots, w_N)$ in $C_t$ (for simplicity, we use $x$ for each user utterance and $y$ for each response; all utterances share the same encoder) into a hidden representation $H = (h_1, h_2, \dots, h_N)$ as follows:
$$\overrightarrow{h}_n = \mathrm{GRU}(\overrightarrow{h}_{n-1}, \phi^{emb}(w_n)), \quad \overleftarrow{h}_n = \mathrm{GRU}(\overleftarrow{h}_{n+1}, \phi^{emb}(w_n)), \quad h_n = \overrightarrow{h}_n \oplus \overleftarrow{h}_n \quad (1)$$
where $\phi^{emb}(w_n)$ is the embedding of word $w_n$. To better encode a sentence, we use a self-attention layer (Lin et al., 2017) to capture information from critical words. For each element $h_n$ of the bi-RNN outputs, we compute a scalar self-attention score:
$$a_n = \mathrm{MLP}(h_n), \quad p_n = \mathrm{softmax}(a_n) \quad (2)$$
The final utterance representation $E(x)$ is the weighted sum of the bi-RNN outputs:
$$E(x) = \sum_n p_n h_n \quad (3)$$
After getting the encoding of each sentence in $C_t$, we input these sentence embeddings to another GRU-based RNN to obtain the context embedding $E(C_t)$:
$$E(C_t) = \mathrm{GRU}(E(x_1), E(y_1), \dots, E(y_{t-1}), E(x_t)) \quad (4)$$

3.2 Uncertainty Estimation
In existing work (Williams et al., 2017; Bordes et al., 2016; Li et al., 2017), after getting the context representation, the dialogue system gives a response $y_t$ to the user based on $p(y_t|C_t)$. However, the dialogue system may give unreasonable responses when unexpected queries occur. Thus, we introduce the uncertainty estimation module to avoid such risks. To estimate the uncertainty, we decompose the response selection process as follows:
$$p(y_t|C_t) = \int p(y_t|z, C_t)\, p(z|C_t)\, dz \quad (5)$$
As shown in Fig. 3(a), from the viewpoint of probabilistic graphical models (Koller and Friedman, 2009), the latent variable $z$ can be seen as an explanation of the dialogue process. In an abstract sense, given $C_t$, there is an infinite number of paths $z$ from $C_t$ to $y_t$, and $p(y_t|C_t)$ is an expectation of $p(y_t|z, C_t)$ over all possible paths.
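Before turning to the uncertainty criteria, the dialogue embedding of Section 3.1 (Eqs. 1-4) can be summarized with a short sketch. The dimensions follow Section 5.4 (word embeddings and GRU hidden units of size 32); the class and variable names are our own illustrative choices, not the released implementation.

```python
# Hedged PyTorch sketch of the dialogue embedding module (Eqs. 1-4).
import torch
import torch.nn as nn

class DialogueEmbedding(nn.Module):
    def __init__(self, vocab_size, emb_dim=32, hidden_dim=32):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        # Word-level bi-GRU (Eq. 1): each output is [h_fwd ; h_bwd], size 2*hidden_dim.
        self.word_rnn = nn.GRU(emb_dim, hidden_dim,
                               bidirectional=True, batch_first=True)
        # Scalar self-attention score per position (Eq. 2).
        self.att = nn.Linear(2 * hidden_dim, 1)
        # Context-level GRU over utterance embeddings (Eq. 4).
        self.ctx_rnn = nn.GRU(2 * hidden_dim, 2 * hidden_dim, batch_first=True)

    def encode_utterance(self, tokens):            # tokens: (1, num_words) LongTensor
        h, _ = self.word_rnn(self.emb(tokens))     # (1, num_words, 2*hidden_dim)
        p = torch.softmax(self.att(h), dim=1)      # attention weights (Eq. 2)
        return (p * h).sum(dim=1)                  # E(x), weighted sum (Eq. 3)

    def forward(self, context):                    # context: list of (1, num_words) tensors
        utt = torch.stack([self.encode_utterance(t) for t in context], dim=1)
        _, e_ct = self.ctx_rnn(utt)                # final state is E(C_t) (Eq. 4)
        return e_ct.squeeze(0)                     # shape (1, 2*hidden_dim)
```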
If the system has not seen enough instances similar to $C_t$ before, the encoding of $C_t$ will lie in an unexplored area of the dialogue embedding space, so the entropy of the prior $p(z|C_t)$ will be large. If we sample the latent variable $z$ from $p(z|C_t)$ multiple times and calculate $p(y_t|z, C_t)$, we find that $p(y_t|z, C_t)$ has a large variance under the different sampled latent variables $z$. Based on this intuitive analysis, we design the uncertainty measurement for IDS.

Figure 3: Graphical models of (a) response selection and (b) online learning. The gray and white nodes represent the observed and latent variables, respectively.

Specifically, we assume that the latent variable $z$ obeys a multivariate diagonal Gaussian distribution. Following the reparametrization trick (Kingma and Welling, 2014), we sample $\epsilon \sim \mathcal{N}(0, I)$ and reparameterize $z = \mu + \sigma \cdot \epsilon$. The mean and variance of the prior $p(z|C_t)$ are calculated as:
$$[\mu;\ \log(\sigma^2)] = \mathrm{MLP}(E(C_t)) \quad (6)$$
After sampling a latent variable $z$ from the prior $p(z|C_t)$, we calculate the response probability for each element in the current candidate response set $R$. In IDS, $R$ is extended dynamically, so we address the response selection process with a ranking approach. For each response candidate, we calculate the score as follows:
$$\rho(y_t|z, C_t) = (E(C_t) \oplus z)^{T} W E(y_t), \quad p(y_t|z, C_t) = \mathrm{softmax}(\rho(y_t|z, C_t)) \quad (7)$$
where $E(y_t)$ is the encoding of $y_t \in R$ and $W$ is a weight matrix. To estimate the variance of $p(y_t|z, C_t)$ under different sampled latent variables, we repeat the above process $K$ times. Assume that the probability distribution over the candidate response set in the $k$-th repetition is $P_k$ and the average response probability distribution over the $K$ samples is $P_{avg}$. We use the Jensen-Shannon divergence (JSD) to measure the distance between $P_k$ and $P_{avg}$:
$$\mathrm{JSD}(P_k \| P_{avg}) = \tfrac{1}{2}\big(\mathrm{KL}(P_k \| P_{avg}) + \mathrm{KL}(P_{avg} \| P_k)\big) \quad (8)$$
where $\mathrm{KL}(P\|Q)$ is the Kullback-Leibler divergence between two probability distributions. Then, we obtain the average JSD:
$$\mathrm{JSD}_{avg} = \frac{1}{K} \sum_{k=1}^{K} \mathrm{JSD}(P_k \| P_{avg}) \quad (9)$$

Figure 4: A toy example to show the uncertainty estimation criteria. (a) shows a large variance in the response probability under different sampled latent variables. (b) shows close weights to all response candidates in the early stage of online learning.

Because the average JSD measures the degree of divergence among $\{P_1, P_2, \dots, P_K\}$, as shown in Fig. 4(a), the system refuses to respond if $\mathrm{JSD}_{avg}$ is higher than a threshold $\tau_1$. However, the dialogue model tends to give close weights to all response candidates in the early stage of training, as shown in Fig. 4(b). This results in a small average JSD even though the system should refuse to respond. Thus, we add an additional criterion to the uncertainty measurement: if the maximum probability in $P_{avg}$ is lower than a threshold $\tau_2$, the system refuses to respond.

3.3 Online Learning
If the confidence is high enough, IDS gives the user the response with the maximum score in $P_{avg}$. Otherwise, the hired customer service staff are asked to select an appropriate response from the top $T$ response candidates of $P_{avg}$, or to propose a new response if there is no appropriate candidate.
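For concreteness, the two refusal criteria of Section 3.2 can be written down as a short numpy sketch. The values K=50 and tau1=tau2=0.3 follow the settings reported in Section 5.4; the function and variable names are illustrative assumptions, not the released implementation.

```python
# Hedged sketch of the two-criterion uncertainty test (Eqs. 6-9).
import numpy as np

def kl(p, q, eps=1e-12):
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def jsd(p, q):
    return 0.5 * (kl(p, q) + kl(q, p))                 # Eq. 8

def should_refuse(score_fn, mu, sigma, K=50, tau1=0.3, tau2=0.3):
    """score_fn(z) returns the softmax distribution over the current candidate
    set R for one sampled z (Eq. 7); mu and sigma parameterize p(z|C_t) (Eq. 6)."""
    dists = []
    for _ in range(K):
        z = mu + sigma * np.random.randn(*mu.shape)    # reparameterization trick
        dists.append(score_fn(z))
    p_avg = np.mean(dists, axis=0)
    jsd_avg = np.mean([jsd(p, p_avg) for p in dists])  # Eq. 9
    # Refuse if the K distributions disagree (high JSD_avg), or if even the
    # averaged distribution is not confident about any single candidate.
    return jsd_avg > tau1 or p_avg.max() < tau2
```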
If a new response is proposed, it is added to $R$. We denote the human response as $r_t$. Then, we observe a new context-response pair $d_t = (C_t, r_t)$ and add it to the training data pool. The optimization objective is to maximize the likelihood of the newly added data $d_t$. However, as shown in Eq. 5, calculating the likelihood requires an intractable marginalization over the latent variable $z$. Fortunately, we can obtain its lower bound (Hoffman et al., 2013; Miao et al., 2016; Sohn et al., 2015):
$$\mathcal{L} = \mathbb{E}_{q(z|d_t)}\big[\log p(r_t|z, C_t)\big] - \mathrm{KL}\big(q(z|d_t)\,\|\,p(z|C_t)\big) \le \log \int p(r_t|z, C_t)\, p(z|C_t)\, dz = \log p(r_t|C_t) \quad (10)$$
where $\mathcal{L}$ is called the evidence lower bound (ELBO) and $q(z|d_t)$ is called the inference network. The learning process of the inference network is shown in Fig. 3(b). Similar to the prior network $p(z|C_t)$, the inference network $q(z|d_t)$ approximates the mean and variance of the posterior $p(z|d_t)$:
$$[\mu';\ \log(\sigma'^2)] = \mathrm{MLP}(E(C_t) \oplus E(r_t)) \quad (11)$$
where $E(C_t)$ and $E(r_t)$ denote the representations of the dialogue context and the human response in the current turn, respectively. We use the reparametrization trick to sample $z$ from the inference network and maximize the ELBO by gradient ascent on a Monte Carlo approximation of the expectation. It is worth noting that tricks such as mixing $d_t$ with instances in the data pool and updating IDS for a small number of epochs (Shen et al., 2017) can easily be adopted to increase the utilization of labeled data. However, in our experiments we find there is still a great improvement without these tricks. To reduce the computation load, we update IDS with each $d_t$ only once in a stream-based fashion and leave these tricks to future work.

4 Construction of Experimental Data
To simulate new, unconsidered user needs, one possible method is to delete some question types from the training set of existing datasets (e.g., the bAbI tasks (Bordes et al., 2016)) and test these questions in the testing phase. However, the dialogue context plays an important role in response selection, and simply deleting some turns of a dialogue will result in a different system response. For example, in bAbI Task 5, deleting the turns on updating api calls results in a different recommended restaurant. Thus, we do not modify existing datasets but construct a new benchmark dataset to study the inconsistency of training and testing in task-oriented dialogue systems.

We build this dataset based on the following two principles. First, we ensure all interactions are reasonable. To achieve that, we follow the construction process of existing work (Bordes et al., 2016) and generate the dataset with complicated and elaborate rules. Second, the dataset should contain several subsets, and the dialogue scenarios covered in each subset are incremental. To simulate new, unconsidered user needs, we train the dialogue system on a smaller subset and test it on a more complicated one. Specifically, our dataset contains five different subsets within the context of customer services. From SubD1 to SubD5, the user needs become richer in each subset, as described below.

SubD1 includes basic customer service scenarios in which users can achieve two primary goals. First, users can look up a product or query some attributes of products they are interested in. For example, they can ask "Is $entity 5$ still on sale?" to ask about the discount information of $entity 5$ (we use special tokens to anonymize all private information in our corpus). Second, after finding the desired product, users can consult the system about the purchase process and delivery information.
SubD2 contains all scenarios in SubD1. In addition, users can confirm whether a product meets some additional conditions. For example, they can ask "Does $entity 9$ support Android?" to verify the operating system requirement.

SubD3 contains all scenarios in SubD2. In addition, users can compare two different items. For example, they can ask "Is $entity 5$ cheaper than $entity 9$?" to compare the prices of $entity 5$ and $entity 9$.

SubD4 contains all scenarios in SubD3, and there are more user needs related to after-sale service. For example, users can consult on how to deal with network failure and system breakdown.

SubD5 contains all scenarios in SubD4. Furthermore, users can give emotional utterances. For example, if users think our product is very cheap, they may say "Oh, it's cheap and high-quality. I like it!". The dialogue system is expected to reply emotionally, such as "Thank you for your approval.". If the user utterance contains both emotional and task-oriented factors, the system should consider both. For example, if users say "I cannot stand the old operating system, what should I do to update it?", the dialogue system should respond "I'm so sorry to give you trouble, please refer to this: $api call update system$.".

It is worth noting that completing a task often requires multiple turns of interaction. For example, a user may want to compare the prices of $entity 5$ and $entity 9$ but not explicitly give the two items in a single turn. To complete the missing information, the system should ask which two products the user wants to compare. Besides, the context plays an important role in the dialogue. For example, if users keep asking about the same product many times consecutively, they can use subject ellipsis to query this item in the current turn, and the system will not ask which product they are talking about. In addition, to take the diversity of natural language into account, we design multiple templates to express the same intention. The paraphrasing of queries makes our dataset more diverse. For each sub-dataset, there are 20,000 dialogues for training and 5,000 dialogues for testing. A dialogue example in SubD5 and detailed data statistics are provided in Appendix A.

5 Experimental Setup
5.1 Data Preprocessing
It is possible for the dialogue model to retrieve responses directly without any preprocessing. However, the fact that nearly all utterances contain entity information would lead to slow model convergence. Thus, we replace all entities with the order in which they appear in the dialogue to normalize the utterances. For example, if $entity 9$ is the second distinct entity appearing in a dialogue, we rename it $entity order 2$ in the current episode. After preprocessing, the number of normalized response candidates on both the training and test sets of each sub-dataset is shown in Table 1.

Table 1: The number of normalized response candidates in each sub-dataset after entity replacement, both training and test data included.
              SubD1   SubD2   SubD3   SubD4   SubD5
  # of RSP      41      41      66      72     137

5.2 Baselines
We compare IDS with several baselines:
• IR: the basic tf-idf match model used in (Bordes et al., 2016; Dodge et al., 2015).
• Supervised Embedding Model (SEM): the supervised word embedding model used in (Bordes et al., 2016; Dodge et al., 2015).
• Memory Networks (MemN2N): the scoring model which is used in QA (Sukhbaatar et al., 2015) and dialogue systems (Bordes et al., 2016; Dodge et al., 2015). • IDS−: IDS without updating model parameters during testing. That is, IDS−is trained only with human intervention data on the training set and then we freeze parameters. 5.3 Measurements Following the work of Williams et al. (2017) and Bordes et al. (2016), we report the average turn accuracy. The turn is correct if the dialogue model can select the correct response, and incorrect if not. Because IDS requires human intervention to reduce risks whenever there is low confidence, we calculate the average turn accuracy only if IDS chooses to respond without human intervention. That is, compared with baselines, IDS computes the turn accuracy only on a subset of test sets. To be fair, we also report the rate at which IDS refuses to respond on the test set. The less the rejection rate is, the better the model performs. 5.4 Implementation Details Our word embeddings are randomly initialized. The dimensions of word embeddings and GRU hidden units are both 32. The size of the latent variable z is 20. In uncertainty estimation, the repetition time K is 50. In all experiments, the average JSD threshold τ1 and the response probability threshold τ2 are both set to 0.34. In online learning, the number of Monte Carlo sampling is 50. In all experiments, we use the ADAM optimizer (Kingma and Ba, 2014) and the learning rate is 0.001. We train all models in mini-batches of size 32. 6 Experimental Results 6.1 Robustness to Unconsidered User Actions To simulate unexpected user behaviors after deployment, we use the hardest test set, SubD5, as the common test set, but train all models on a simple dataset (SubD1-SubD4) individually. The average turn accuracy is shown in Table 2. 4The smaller τ1 or larger τ2 will result in a higher average turn accuracy but a larger human intervention frequency. In our preliminary experiments, we find that setting both τ1 and τ2 to 0.3 is a good trade-off. Training DataSet Model SubD1 SubD2 SubD3 SubD4 IR 34.7% 35.2% 44.0% 55.1% SEM 35.1% 35.4% 43.4% 52.7% DLSTM 48.2% 52.0% 61.7% 74.0% MemN2N 50.5% 50.4% 64.0% 77.4% IDS− 78.6% 77.3% 83.2% 92.7% IDS 98.1% 96.7% 99.0% 99.7% Table 2: The average turn accuracy of different models. Models are trained on SubD1-SubD4 respectively, but all tested on SubD5. Note that, unlike the existing methods, IDS−and IDS give responses only if there is high degree of confidence. Training DataSet Model SubD1 SubD2 SubD3 SubD4 IDS− 42.0% 35.5% 30.4% 32.0% IDS 79.4% 79.0% 66.6% 62.8% Table 3: The rejection rate on the test set of SubD5. When trained on SubD1 to SubD4 and tested on SubD5, as shown in Table 2, the existing methods are prone to poor performance because these models are not aware of which instances they can handle. However, equipped with the uncertainty estimation module, IDS−can refuse to respond the uncertain instances and hence achieves better performance. For example, when trained on SubD1 and tested on SubD5, IDS−achieves 78.6% turn accuracy while baselines achieve only 50.5% turn accuracy at most. Moreover, if updating the model with human intervention data during testing, IDS attains nearly perfect accuracy in all settings. Due to the uncertainty estimation module, IDS− and IDS will refuse to respond if there is low confidence. The rejection rates of them are shown in Table 3. The rejection rate will drop if the training set is similar to the test set. 
Unfortunately, the rejection rate of IDS is much higher than that of IDS−. We suspect the reason is catastrophic forgetting (French, 1999; Kirkpatrick et al., 2017). When IDS learns to handle new user needs in SubD5, the knowledge learnt in the training phase is somewhat lost, so IDS needs more human intervention to re-learn the forgotten knowledge. However, forgetting does not occur if IDS is deployed from scratch and accumulates knowledge online, because the weights of IDS are then optimized alternately on all possible user needs.

6.2 Deploying without Initialization
Compared with existing methods, IDS can accumulate knowledge online from scratch. The uncertainty estimation module guides us to label only valuable data. This is similar to active learning (Balcan et al., 2009; Dasgupta et al., 2005). To prove that, we train the baselines on each SubDi training set with one epoch of back-propagation5 and test these models on each SubDi test set. In contrast, for each SubDi training set, IDS− is trained from random initialization. Whenever IDS− refuses to respond, the current context-response pair in the training set is used to update the model, until all training data in SubDi are finished. Hence IDS− is trained on the subset of SubDi where the response confidence is below the threshold. After training is finished, we freeze the model parameters and test IDS− on the test set of SubDi.

5 In the online learning process of IDS−, each labeled datum in the data pool is used only once. For the sake of fairness, we train the baselines with only one epoch in this section.

Table 4: The average turn accuracy of different systems on the SubDi test set. Note each baseline is trained on the entire SubDi training data, but IDS− is trained only on the low-confidence subset of the SubDi training set. The parameters of all systems are frozen during testing.
            SubD1   SubD2   SubD3   SubD4   SubD5
  IR        66.3%   66.5%   70.8%   74.1%   75.7%
  SEM       67.6%   68.4%   64.1%   60.8%   65.8%
  DLSTM     99.9%   99.9%   98.8%   97.7%   96.7%
  MemN2N    93.4%   94.5%   89.8%   85.3%   80.8%
  IDS−      100%    100%    100%    99.8%   99.9%

Table 5: The rejection rate of IDS− on the SubDi training set.
  SubD1   SubD2   SubD3   SubD4   SubD5
  24.1%   27.4%   38.4%   56.5%   61.6%

Table 6: The rejection rate of IDS− on the SubDi test set.
  SubD1   SubD2   SubD3   SubD4   SubD5
  0.3%    0.7%    3.2%    13.8%   24.1%

Table 4 shows the average turn accuracy of the different models, and Table 5 shows the rejection rate of IDS− on each SubDi training set. We see that, compared with all baselines, IDS− achieves better performance with much less training data. This shows the uncertainty estimation module can select the most valuable data to label online. Table 6 shows the rejection rate of IDS− on each SubDi test set. The rejection rate is negligible on SubD1, SubD2 and SubD3, meaning that IDS− can converge to a low rejection rate after deployment. For SubD4 and SubD5, there are still some instances IDS− cannot handle, because SubD4 and SubD5 are much more complicated than the others. In the next section, we further show that as online learning continues, the rejection rate continues to drop as well.

Figure 5: The intervention frequency curves after deploying IDS− without any initialization (x-axis: iterations; y-axis: human interventions per batch; one curve per sub-dataset SubD1-SubD5).

6.3 Frequency of Human Intervention
The main difference between our approach and others is that we introduce humans into the system loop.
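In code-sketch form, this human-in-the-loop deployment looks roughly as follows. The callables are our own abstractions over the modules of Section 3 (scoring, uncertainty test, human labeling, one ELBO update); they are not the authors' API, and the default top_t value is an illustrative placeholder since T is not specified here.

```python
# Illustrative deployment loop for IDS (not the authors' code).
def serve(dialogue_stream, score_fn, is_uncertain_fn, ask_human_fn, update_fn,
          candidates, top_t=10):
    """score_fn(context, candidates) -> averaged distribution P_avg over candidates;
    is_uncertain_fn(context, candidates) -> bool, the two criteria of Section 3.2;
    ask_human_fn(context, top_candidates) -> a response string;
    update_fn(context, response) -> one stream-based ELBO update (Section 3.3)."""
    data_pool = []
    for context in dialogue_stream:
        p_avg = score_fn(context, candidates)
        if not is_uncertain_fn(context, candidates):
            # High confidence: answer with the highest-scoring candidate.
            yield candidates[max(range(len(candidates)), key=lambda i: p_avg[i])]
            continue
        # Low confidence: a human picks from the top-T candidates or writes a new reply.
        ranked = sorted(range(len(candidates)), key=lambda i: -p_avg[i])[:top_t]
        reply = ask_human_fn(context, [candidates[i] for i in ranked])
        if reply not in candidates:
            candidates.append(reply)              # extend the response set R online
        data_pool.append((context, reply))        # new ground-truth pair d_t
        update_fn(context, reply)                 # each d_t is used only once
        yield reply
```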
Therefore, we are interested in how frequently humans intervene over time. The human intervention frequency curves when deploying IDS− without any initialization (i.e., the online learning stage of IDS− in Section 6.2) are shown in Fig. 5. As shown, the frequency of human intervention per batch decreases with time. In the early stage of deployment, IDS− has a large degree of uncertainty because there are only a few context-response pairs in the data pool. Through continuous interactions with users, the labeled data in the data pool becomes more and more abundant, so humans are not required to intervene frequently. Besides, the human intervention curves of different datasets have different convergence rates. The curve of SubD1 converges fastest; as a dataset covers more and more user needs, the convergence rate becomes slower. However, SubD4 and SubD5 still show a trend toward convergence as long as online learning continues. This phenomenon is in line with the intuition that a more complicated dialogue system requires more training data than a simple one.

6.4 Visual Analysis of Context Embedding
To better understand the behavior of our approach, we train IDS− on the SubD5 training set until 2,000 batches of online updates are finished, and then freeze the model parameters and test it on the SubD5 test set. As Table 1 shows, there are 137 unique normalized responses. Among these responses, we pick four and draw their context embedding vectors. Each vector is reduced to a 2-dimensional vector via t-SNE (Maaten and Hinton, 2008) for visualization, with one sub-graph per response in Fig. 6.

Figure 6: t-SNE visualization of the context representations of four different system responses. Red dots are contexts responded to by IDS− with high confidence, while blue dots are contexts with low confidence.

In each sub-graph, the red dots are contexts responded to by IDS− with high confidence, while the blue dots are contexts responded to by humans because confidence was low. These graphs show a clear separation of sure vs. unsure contexts. Some blue dots are far away from the red ones; humans should pay attention to these contexts to avoid risks. Besides, there are only a small number of cases where the two classes are mingled; we suspect these cases lie on the confidence boundary. In addition, there are multiple clusters in each class. This is because the same system response can appear in different dialogue scenes. For example, "the system requesting the user's phone number" appears in the scenes of both exchanging and returning goods. Although these contexts have the same response, their representations should differ if they belong to different dialogue scenes.

7 Related Work
Task-oriented dialogue systems have attracted numerous research efforts. Data-driven methods, such as reinforcement learning (Williams et al., 2017; Zhao and Eskenazi, 2016; Li et al., 2017) and supervised learning (Wen et al., 2016; Eric and Manning, 2017; Bordes et al., 2016), have been applied to optimize dialogue systems automatically. These advances in task-oriented dialogue systems have resulted in impressive gains in performance.
However, prior work has mainly focused on building task-oriented dialogue systems in a closed environment. Due to biased assumptions about real users, such systems will break down when encountering unconsidered situations. Several approaches have been adopted to address this problem. Gašić et al. (2014) explicitly defined kernel functions between belief states from different domains to extend the domain of dialogue systems, but it is difficult to define an appropriate kernel function when the ontology has changed drastically. Shah et al. (2016) proposed to integrate turn-level and task-level reward signals to learn how to handle new user intents. Lipton et al. (2018) proposed to use BBQ-Networks to extend the domain. However, Shah et al. (2016) and Lipton et al. (2018) reserve a few bits in the dialogue state for the domain extension. To relax this assumption, Wang et al. (2018) proposed a teacher-student framework to maintain dialogue systems. In their work, the dialogue system can only be extended offline after errors are found, and it requires hand-crafted rules to handle new user actions. In contrast, we can extend the system online in an incremental6 way with the help of hired customer service staff.

Our proposed method is inspired by cumulative learning (Fei et al., 2016), a form of lifelong machine learning (Chen and Liu, 2016). This learning paradigm aims to build a system that learns cumulatively. The major challenges of cumulative learning are finding unseen classes in the test set and updating the system efficiently to accommodate new concepts (Fei et al., 2016). To find new concepts, the heuristic uncertainty estimation methods (Tong and Koller, 2001; Culotta and McCallum, 2005) of active learning (Balcan et al., 2009; Dasgupta et al., 2005) can be adopted. When learning new concepts, a cumulative learning system should avoid retraining the whole system and catastrophic forgetting (French, 1999; Kirkpatrick et al., 2017). However, catastrophic forgetting does not happen if the dialogue system is trained alternately, from scratch, on all possible user needs.

The uncertainty estimation and online learning methods in our work are inspired by the variational inference approach (Rezende et al., 2014; Kingma and Welling, 2014). In existing work, this approach was used to generate diverse machine responses in both open-domain dialogue systems (Zhao et al., 2017; Serban et al., 2016) and task-oriented dialogue systems (Wen et al., 2017). In contrast, our work makes use of the Bayesian nature of variational inference to estimate uncertainty and learn from humans. Specifically, we sample variables from the prior network as random perturbations to estimate the model uncertainty, following the idea of Query-By-Committee (Seung et al., 1992), and optimize model parameters by maximizing the ELBO.

6 The term "incremental" refers to systems able to operate on a word-by-word basis in previous work (Eshghi et al., 2017; Schlangen and Skantze, 2009). In our work, it refers to a system which can adapt to new dialogue scenarios after deployment.

8 Conclusion
This paper presents a novel incremental learning framework to design dialogue systems, which we call IDS. In this paradigm, users are not expected to follow any definition, and IDS has the potential to handle new situations. To simulate new user actions after deployment, we propose a new dataset consisting of five different subsets. Experiments show that IDS is robust to new user actions.
Importantly, with humans in the loop, IDS requires no data for initialization and can update itself online by selecting the most valuable data. As the usage grows, IDS will cumulate more and more knowledge over time. 9 Acknowledgments The research work described in this paper has been supported by the National Key Research and Development Program of China under Grant No. 2017YFB1002103 and the Natural Science Foundation of China under Grant No. U1836221. References Maria-Florina Balcan, Alina Beygelzimer, and John Langford. 2009. Agnostic active learning. Journal of Computer and System Sciences, 75(1):78–89. Antoine Bordes, Y-Lan Boureau, and Jason Weston. 2016. Learning end-to-end goal-oriented dialog. arXiv preprint arXiv:1605.07683. Zhiyuan Chen and Bing Liu. 2016. Lifelong machine learning. Synthesis Lectures on Artificial Intelligence and Machine Learning, 10(3):1–145. Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555. Aron Culotta and Andrew McCallum. 2005. Reducing labeling effort for structured prediction tasks. In AAAI, volume 5, pages 746–751. Sanjoy Dasgupta, Adam Tauman Kalai, and Claire Monteleoni. 2005. Analysis of perceptron-based active learning. In International Conference on Computational Learning Theory, pages 249–263. Springer. Jesse Dodge, Andreea Gane, Xiang Zhang, Antoine Bordes, Sumit Chopra, Alexander Miller, Arthur Szlam, and Jason Weston. 2015. Evaluating prerequisite qualities for learning end-to-end dialog systems. arXiv preprint arXiv:1511.06931. Mihail Eric and Christopher D Manning. 2017. Keyvalue retrieval networks for task-oriented dialogue. arXiv preprint arXiv:1705.05414. Arash Eshghi, Igor Shalyminov, and Oliver Lemon. 2017. Bootstrapping incremental dialogue systems from minimal data: the generalisation power of dialogue grammars. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2220–2230. Geli Fei, Shuai Wang, and Bing Liu. 2016. Learning cumulatively to become more knowledgeable. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1565–1574. ACM. Robert M French. 1999. Catastrophic forgetting in connectionist networks. Trends in cognitive sciences, 3(4):128–135. Milica Gaˇsic, Dongho Kim, Pirros Tsiakoulis, Catherine Breslin, Matthew Henderson, Martin Szummer, Blaise Thomson, and Steve Young. 2014. Incremental on-line adaptation of pomdp-based dialogue managers to extended domains. In Proceedings on InterSpeech. Matthew D Hoffman, David M Blei, Chong Wang, and John Paisley. 2013. Stochastic variational inference. The Journal of Machine Learning Research, 14(1):1303–1347. D. P. Kingma and M. Welling. 2014. Auto-encoding variational bayes. In Conference Proceedings: Papers Accepted To the International Conference on Learning Representations. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. 2017. Overcoming catastrophic forgetting in neural networks. Proceedings of the national academy of sciences, page 201611835. 3719 Daphne Koller and Nir Friedman. 2009. Probabilistic graphical models: principles and techniques. MIT press. 
Xiujun Li, Yun-Nung Chen, Lihong Li, Jianfeng Gao, and Asli Celikyilmaz. 2017. End-to-end taskcompletion neural dialogue systems. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 733–743. Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. 2017. A structured self-attentive sentence embedding. arXiv preprint arXiv:1703.03130. Zachary Lipton, Xiujun Li, Jianfeng Gao, Lihong Li, Faisal Ahmed, and Li Deng. 2018. Bbq-networks: Efficient exploration in deep reinforcement learning for task-oriented dialogue systems. In ThirtySecond AAAI Conference on Artificial Intelligence. Ryan Lowe, Nissan Pow, Iulian Serban, and Joelle Pineau. 2015. The ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems. arXiv preprint arXiv:1506.08909. Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. Journal of machine learning research, 9(Nov):2579–2605. Yishu Miao, Lei Yu, and Phil Blunsom. 2016. Neural variational inference for text processing. In International Conference on Machine Learning, pages 1727–1736. Tim Paek and Roberto Pieraccini. 2008. Automating spoken dialogue management design using machine learning: An industry perspective. Speech communication, 50(8):716–729. Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. 2014. Stochastic backpropagation and approximate inference in deep generative models. pages 1278–1286. David Schlangen and Gabriel Skantze. 2009. A general, abstract model of incremental dialogue processing. In Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics, pages 710–718. Association for Computational Linguistics. Iulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2016. A hierarchical latent variable encoder-decoder model for generating dialogues. H Sebastian Seung, Manfred Opper, and Haim Sompolinsky. 1992. Query by committee. In Proceedings of the fifth annual workshop on Computational learning theory, pages 287–294. ACM. Pararth Shah, Dilek Hakkani-T¨ur, and Larry Heck. 2016. Interactive reinforcement learning for taskoriented dialogue management. In NIPS 2016 Deep Learning for Action and Interaction Workshop. Yanyao Shen, Hyokun Yun, Zachary C Lipton, Yakov Kronrod, and Animashree Anandkumar. 2017. Deep active learning for named entity recognition. arXiv preprint arXiv:1707.05928. Kihyuk Sohn, Honglak Lee, and Xinchen Yan. 2015. Learning structured output representation using deep conditional generative models. In Advances in Neural Information Processing Systems, pages 3483–3491. Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. 2015. End-to-end memory networks. In Advances in neural information processing systems, pages 2440–2448. Simon Tong and Daphne Koller. 2001. Support vector machine active learning with applications to text classification. Journal of machine learning research, 2(Nov):45–66. Weikang Wang, Jiajun Zhang, Han Zhang, Mei-Yuh Hwang, Chengqing Zong, and Zhifei Li. 2018. A teacher-student framework for maintainable dialog manager. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3803–3812. Tsung-Hsien Wen, Yishu Miao, Phil Blunsom, and Steve Young. 2017. Latent intention dialogue models. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 3732– 3741. JMLR. 
Tsung-Hsien Wen, David Vandyke, Nikola Mrksic, Milica Gasic, Lina M Rojas-Barahona, Pei-Hao Su, Stefan Ultes, and Steve Young. 2016. A network-based end-to-end trainable task-oriented dialogue system. arXiv preprint arXiv:1604.04562. Jason D Williams, Kavosh Asadi, and Geoffrey Zweig. 2017. Hybrid code networks: practical and efficient end-to-end dialog control with supervised and reinforcement learning. arXiv preprint arXiv:1702.03274. Tiancheng Zhao and Maxine Eskenazi. 2016. Towards end-to-end learning for dialog state tracking and management using deep reinforcement learning. arXiv preprint arXiv:1606.02560. Tiancheng Zhao, Ran Zhao, and Maxine Eskenazi. 2017. Learning discourse-level diversity for neural dialog models using conditional variational autoencoders. arXiv preprint arXiv:1703.10960.

A Appendices
A.1 Dialogue Example
Our corpus is in Chinese. We translate an example dialogue into English and show it in Table 7.

A.2 Data Statistics
Data statistics are shown in Table 8. The dialogue scenarios in each sub-dataset are shown in Table 9.

Table 7: A sample dialogue in SubD5. The dialogue consists of two dialogue scenarios: "consult system error" and "return goods". In the second turn, the user expresses a negative emotion. Besides, it is worth noting that different user needs involve different business logic. For example, if the user wants to return goods, the system needs to know the order number and ask the user to send the goods to the $address$.
  User:   The operating system breaks down. What should I do?
  System: You can try to update the system.
  User:   The system always has problems. I'm very disappointed.
  System: I'm so sorry to give you trouble, we will do our best to improve the operating system.
  User:   I don't know how to update the system.
  System: Please refer to this: $api call update system$.
  User:   It looks so troublesome. I want to send the product back.
  System: Please tell me your order number.
  User:   My order number is $orderNumber$.
  System: Please send the item to this address: $address$.

Table 8: Data statistics of each sub-dataset.
                                   SubD1   SubD2   SubD3   SubD4   SubD5
  # of Scenarios per Dialogue       5.2     5.2     4.6     3.7     3.9
  # of Utterances per Dialogue     11.7    11.7    10.6     9.8    12.4
  # of Tokens per Utterance         3.8     4.0     4.1     4.3     5.1
  # of Paraphrases per Query        8.9     7.0     6.5     6.9     6.9
  Vocab Size after Preprocessing    194     253     303     430     620
  # of Products                     50 (all sub-datasets)
  Training Dialogues                20000
  Validation Dialogues              5000
  Test Dialogues                    5000

Table 9: The dialogue scenarios covered in each sub-dataset.
  SubD1: query product information, query payment methods, query express information
  SubD2: scenarios of SubD1, verify product information
  SubD3: scenarios of SubD2, compare two products
  SubD4: scenarios of SubD3, ask for an invoice, consult system error, consult nfc error, consult network error, return goods, exchange goods, query logistics
  SubD5: scenarios of SubD4, express positive emotion, express negative emotion
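The entity normalization described in Section 5.1, whose placeholders also appear in Table 7, can be illustrated with a short sketch. The regular expression and the function name are our own assumptions, not the authors' preprocessing code.

```python
# Hedged sketch of the entity normalization from Section 5.1: rename each
# distinct entity by its order of first appearance within a dialogue.
import re

ENTITY = re.compile(r"\$entity[ _]\d+\$")

def normalize_entities(dialogue_utterances):
    """E.g. the second distinct entity in a dialogue becomes $entity_order_2$."""
    order = {}
    normalized = []
    for utt in dialogue_utterances:
        for ent in ENTITY.findall(utt):
            if ent not in order:
                order[ent] = len(order) + 1
        normalized.append(
            ENTITY.sub(lambda m: f"$entity_order_{order[m.group(0)]}$", utt))
    return normalized

print(normalize_entities(["Is $entity_5$ cheaper than $entity_9$?",
                          "Does $entity_9$ support Android?"]))
# -> ['Is $entity_order_1$ cheaper than $entity_order_2$?',
#     'Does $entity_order_2$ support Android?']
```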
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3721–3730, Florence, Italy, July 28 - August 2, 2019. ©2019 Association for Computational Linguistics

ReCoSa: Detecting the Relevant Contexts with Self-Attention for Multi-turn Dialogue Generation

Hainan Zhang, Yanyan Lan, Liang Pang, Jiafeng Guo and Xueqi Cheng
CAS Key Lab of Network Data Science and Technology, Institute of Computing Technology, Chinese Academy of Sciences
University of Chinese Academy of Sciences, Beijing, China
{zhanghainan, lanyanyan, pangliang, guojiafeng, cxq}@ict.ac.cn

Abstract
In multi-turn dialogue generation, the response is usually related to only a few contexts. Therefore, an ideal model should be able to detect these relevant contexts and produce a suitable response accordingly. However, the widely used hierarchical recurrent encoder-decoder models treat all the contexts indiscriminately, which may hurt the subsequent response generation process. Some researchers try to use the cosine similarity or the traditional attention mechanism to find the relevant contexts, but they suffer from either an insufficient relevance assumption or the position bias problem. In this paper, we propose a new model, named ReCoSa, to tackle this problem. Firstly, a word-level LSTM encoder is used to obtain the initial representation of each context. Then, the self-attention mechanism is utilized to update both the context and the masked response representations. Finally, the attention weights between each context and the response representations are computed and used in the further decoding process. Experimental results on both a Chinese customer service dataset and the English Ubuntu dialogue dataset show that ReCoSa significantly outperforms baseline models, in terms of both metric-based and human evaluations. Further analysis of the attention shows that the relevant contexts detected by ReCoSa are highly coherent with human judgments, validating the correctness and interpretability of ReCoSa.

1 Introduction
This paper is concerned with the multi-turn dialogue generation task, which is critical in many natural language processing (NLP) applications, such as customer services, intelligent assistants and chatbots. Recently, the hierarchical recurrent encoder-decoder (HRED) models (Serban et al., 2016; Sordoni et al., 2015) have been widely used in this area. In the encoding phase of these HRED models, a recurrent neural network (RNN) based encoder is first utilized to encode each input context to a vector, and then a hierarchical RNN is used to encode these vectors to one vector. In the decoding phase, another RNN decoder is used to generate the response based on the above vector. The parameters of both encoder and decoder are learned by maximizing the averaged likelihood of the training data.

Table 1: Two examples from the customer service dataset; the relevant context for each response (red in the original) is marked here with an asterisk.
The first example
  context1: 你好,在吗? (Hello, are you there?)
  context2: 有什么问题我可以帮您呢? (What can I do for you?)
  context3*: 保真吗? (Is this product genuine?)
  response: 我们的商品都是海外采购的绝对保证是正品的 (Our products are all purchased overseas and are absolutely guaranteed to be genuine.)
The second example
  context1*: 我有个交易纠纷,麻烦你看看有进度吗 (I have a trading dispute. Could you please tell me whether it is progressing?)
  context2: 您好,请问是这个订单吗? (Hello, is this the order?)
  context3: 对 (Yes)
  response: 等待纠纷处理 (Waiting for dispute resolution.)
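In our own notation (this is only a summary of the HRED setup just described, not a formulation taken from the cited papers), the model can be written compactly as:

```latex
\begin{align*}
  u_i &= f_{\text{enc}}(s_i)
      && \text{word-level RNN encoding of context sentence } s_i,\\
  c   &= f_{\text{ctx}}(u_1,\dots,u_N)
      && \text{hierarchical (context-level) RNN over the sentence vectors},\\
  p(y \mid s_1,\dots,s_N) &= \prod_{t=1}^{M} p\big(y_t \mid y_{<t},\, c\big)
      && \text{RNN decoder},\\
  \mathcal{J}(\theta) &= \frac{1}{|D|}\sum_{(C,\,y)\in D} \log p(y \mid C;\theta)
      && \text{maximum-likelihood training objective}.
\end{align*}
```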
However, for this task, it is clear that the response usually depends on some relevant contexts rather than on all the context information. We give two examples in Table 1. In the first example, the response is clearly related to the closest context, i.e. the post, while in the second example, the response is related to context1. In these cases, if we use all contexts indiscriminately, as in HRED, much noise is likely to be introduced into the model, and the generation performance will be hurt significantly. Therefore, it is critical to detect and use the relevant contexts for multi-turn dialogue generation.

To tackle this problem, some researchers try to define the relevance of a context using a similarity measure, such as the cosine similarity in Tian et al. (Tian et al., 2017). However, the cosine similarity is computed between each context and the post, under the assumption that the relevance between a context and a response is equivalent to the relevance between the context and the corresponding post, which is clearly insufficient in many cases, e.g. the second example in Table 1. Other researchers, e.g. Xing et al. (Xing et al., 2018), make an attempt by introducing the traditional attention mechanism to HRED. However, some relevant contexts are far from the response in the multi-turn dialogue generation task, and the RNN-based attention model may not perform well because it usually biases toward the close contexts (Hochreiter et al., 2001), namely the position bias problem. Therefore, how to effectively detect and use the relevant contexts remains a challenging problem in multi-turn dialogue generation.

In this paper, we propose a new model, namely ReCoSa, to tackle this problem. The core idea is to use the self-attention mechanism to measure the relevance between the response and each context. The motivation comes from the fact that self-attention is superior in capturing long-distance dependencies, as shown in (Vaswani et al., 2017). Specifically, we first use a word-level LSTM encoder to obtain a fixed-dimensional representation of each context. Then, we use the self-attention mechanism to get the context and masked response representations. Finally, we calculate the attention weight between the context and response representations as the relevance score, and run a decoder based on the relevant contexts to generate the corresponding response. In our experiments, we use two public datasets to evaluate our proposed models, i.e. Chinese customer service and the English Ubuntu dialogue corpus. The results show that ReCoSa is able to produce more diverse and suitable responses than traditional HRED models and their attention variants. Besides, we conduct an analysis of the attention, and the results show that ReCoSa obtains higher coherence with the human labels, which indicates that the relevant contexts detected by our model are reasonable.

2 Related Work
Despite many existing research works on single-turn dialogue generation (Li et al., 2017; Mou et al., 2017; Zhang et al., 2018a,b), multi-turn dialogue generation has gained increasing attention. One reason is that it is more consistent with real application scenarios, such as chatbots and customer services. More importantly, the generation process is more difficult since there is more context information and more constraints to consider (Chen et al., 2018; Zhang et al., 2018c,d; Wu et al., 2017; Zhou et al., 2016), which poses great challenges for researchers in this area. Serban et al.
(Serban et al., 2016) proposed HRED, which uses the hierarchical encoder-decoder framework to model all the context sentences. Since then, HRED-based models have been widely used in different multi-turn dialogue generation tasks, and many variants have been proposed. For example, Serban et al. (Serban et al., 2017b,a) proposed Variable HRED (VHRED) and MrRNN, which introduce latent variables into the middle state to improve the diversity of generated responses.

However, simply treating all contexts indiscriminately is not proper for multi-turn dialogue generation, since the response is usually related to only a few previous contexts. Therefore some researchers try to define the relevance of the context by a similarity measure. For example, Tian et al. (Tian et al., 2017) proposed a weighted sequence (WSeq) attention model for HRED, using the cosine similarity to measure the degree of relevance. Specifically, they first calculate the cosine similarity between the post embedding and each context sentence embedding, and then use this normalized similarity score as the attention weight. We can see that their results are based on the assumption that the relevance between a context and a response is equivalent to the relevance between the context and the corresponding post. However, in many cases, this assumption is not proper. Recently, Xing et al. (Xing et al., 2018) introduced the traditional attention model to HRED and proposed a new hierarchical recurrent attention network (HRAN), which is similar to the Seq2Seq model with attention (Bahdanau et al., 2015). In this model, the attention weight is computed based on the current state, the sentence-level representation and the word-level representation. However, some relevant contexts in multi-turn dialogue generation are relatively far from the response, so the RNN-based attention model may not perform well because it usually biases toward the close contexts (Hochreiter et al., 2001). Shen et al. (Chen et al., 2018) introduced a memory network into the VHRED model, so that the model can remember the context information. Theoretically, it can retrieve some relevant information from the memory in the decoding phase; however, it is not clear whether and how the system accurately extracts the relevant contexts.

The motivation of this paper is how to effectively extract and use the relevant contexts for multi-turn dialogue generation. Different from previous studies, our proposed model can focus on the relevant contexts, with both long- and short-distance dependency relations, by using the self-attention mechanism.

3 Relevant Context Self-Attention Model
In this section, we describe our relevant context self-attention (ReCoSa) model in detail, with the architecture shown in Figure 1. ReCoSa consists of a context representation encoder, a response representation encoder and a context-response attention decoder. For each part, we use the multi-head self-attention module to obtain the context representation, the response representation and the context-response attention weights. Firstly, the word-level encoder encodes each context as a low-dimensional representation, and then a multi-head self-attention component transforms these representations and position embeddings into the context attention representation. Secondly, another multi-head self-attention component transforms the masked response's word embeddings and position embeddings into the response attention representation.
Thirdly, the third multi-head attention component takes the context representation as key and value, and the response representation as query, in the context-response attention module. Finally, a softmax layer uses the output of the third multi-head attention component to obtain the word probabilities for the generation process.

3.1 Context Representation Encoder

We introduce the main components of the context representation encoder in this section. The word-level encoder first encodes each context as a fixed vector, and then the context self-attention module transforms each sentence vector into a context representation.

Figure 1: The architecture of the ReCoSa model (context self-attention, response self-attention and context-response attention components, followed by a feedforward layer and a softmax layer).

3.1.1 Word-level Encoder

We first introduce the LSTM-based word-level encoder (Bahdanau et al., 2015) used in our model. Given the context set C = {s_1, ..., s_N}, each sentence in C is defined as s_i = {x_1, ..., x_M}. Please note that in our paper the post is treated as the last context sentence s_N. Given a sentence s_i as input, a standard LSTM encodes it into a fixed-dimension vector h_M as follows:

i_k = \sigma(W_i[h_{k-1}, w_k]), \quad f_k = \sigma(W_f[h_{k-1}, w_k]), \quad o_k = \sigma(W_o[h_{k-1}, w_k]),
l_k = \tanh(W_l[h_{k-1}, w_k]), \quad c_k = f_k c_{k-1} + i_k l_k, \quad h_k = o_k \tanh(c_k),

where i_k, f_k and o_k are the input, forget and output gates, respectively. w_k is the word embedding for x_k, and h_k stands for the vector computed by the LSTM at time k by combining w_k and h_{k-1}. c_k is the cell state at time k, and \sigma denotes the sigmoid function. W_i, W_f, W_o and W_l are parameters. We use the vector h_M as the sentence representation. Therefore, we obtain the sentence representations {h_{s_1}, ..., h_{s_N}}.

It has been widely accepted that the self-attention mechanism itself cannot distinguish between different positions, so it is crucial to encode position information. There are various ways to encode positions, and the simplest one is to use an additional position embedding. In our work, we parameterize position embeddings P_i \in \mathbb{R}^d, i = 1, ..., N. The position embeddings are simply concatenated to the sentence representations. Finally, we obtain the sentence representations {(h_{s_1}, P_1), ..., (h_{s_N}, P_N)}.

3.1.2 Context Self-Attention

Self-attention is a special attention mechanism that computes a sequence's representation using only the sequence itself; it has been successfully applied to many tasks, such as machine translation, reading comprehension, summarization and language understanding (Vaswani et al., 2017; Cheng et al., 2016; Parikh et al., 2016; Paulus et al., 2017; Shen et al., 2018). One critical advantage of self-attention is its ability to capture long-distance dependency information (Vaswani et al., 2017), which is why we use this mechanism in our work. In this paper, we adopt the multi-head attention mechanism (Vaswani et al., 2017). Given a matrix of n query vectors Q \in \mathbb{R}^{n \times d}, keys K \in \mathbb{R}^{n \times d} and values V \in \mathbb{R}^{n \times d}, the scaled dot-product attention computes the attention scores according to

\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\left(\frac{QK^{T}}{\sqrt{d}}\right)V,

where d is the number of hidden units in our network. The H parallel heads are used to focus on different parts of the channels of the value vectors.
Formally, for the i-th head, we denote the learned linear maps by W Q i ∈Rn×d/H,W K i ∈Rn×d/H and W V i ∈Rn×d/H, which correspond to queries, keys, and values, respectively. Then the scaled dot-product attention is used to calculate the relevance score between queries and keys, to output mixed representations. The mathematical formulation is: Mi = Attention(QW Q i , KW K i , V W V i ). Finally, all the vectors produced by parallel heads are concatenated together to form a single vector. Again, a linear map is used to mix different channels from different heads: M = Concat(M1, . . . , MH), O = MW, (1) where M ∈Rn×d and W ∈Rd×d. To obtain the context representation, the multi-head attention mechanism first feeds the matrix of sentences representation vectors {(hs1, P1), . . . , (hsN , PN)}. as queries, keys and values matrices by using different linear projections. Then the context representation is computed as Os in equation 1. We use a feedforward network to output the context attention representation Of s . 3.2 Response Representation Encoder Given the response Y = {y1, · · · , yM} as the input, another multi-head self-attention component transforms each word embedding and its position embedding to obtain the response representation. For each word yt, this multi-head attention component feeds the matrix of response vectors {(w1 + P1), · · · , (wt−1, Pt−1)} as queries, keys and values matrices by using different linear projections. Then the response’s hidden representation is computed as Or in equation 1. After that, we use the mask operator on the response for the training. For each word yt, we mask {yt+1, · · · , yM} and only see {y1, · · · , yt−1}. For inference, we use the loop function on the generated response G. Take the tth generation as an example. Given the context C = {s1, . . . , sN} and the generated response {g1, · · · , gt−1}, we feed {g1, · · · , gt−1} as the response representation to obtain the tth word distribution in the generation response. 3.3 Context-Response Attention Decoder The third multi-head attention component feeds the context attention representation Of s as key and value, and the response hidden representation Or as query. The output is denoted as Od. We also use a new feedforward network to obtain the hidden vector Of d, as conducted in section 3.1.2. Finally, a softmax layer is utilized to obtain the word probability for the generation process. Formally, given an input context sequences C = {s1, . . . , sN}, the log-likelihood of the corresponding response sequence Y = {y1, · · · , yM} is: logP(Y |C; θ) = M X t=1 logP(yt|C, y1, · · · , yt−1; θ). Our model predicts the word yt based on the hidden representation Of d produced by the topmost softmax layer: P(yt|C, y1, · · · , yt−1; θ) = P(yt|Of s ; θ) = softmax(WoOf s ), where Wo is the parameters. Our training objective is to maximize the log likelihood of the 3725 ground-truth words given the input contexts over the entire training set. Adam is used for optimization in our experiments. 4 Experiments In this section, we conduct experiments on both Chinese customer service and English Ubuntu dialogue datasets to evaluate our proposed method. 4.1 Experimental Settings We first introduce some empirical settings, including datasets, baseline methods, parameter settings, and evaluation measures. 4.1.1 Datasets We use two public multi-turn dialogue datasets in our experiments. The Chinese customer service dataset, named JDC, consists of 515,686 conversational context-response pairs published by the JD contest1. 
We randomly split the data to training, validation, and testing sets, which contains 500,000, 7,843 and 7,843 pairs, respectively. The English Ubuntu dialogue corpus2 is extracted from the Ubuntu question-answering forum, named Ubuntu (Lowe et al., 2015). The original training data consists of 7 million conversations from 2004 to April 27,2012. The validation data are conversational pairs from April 27,2012 to August 7,2012, and the test data are from August 7,2012 to December 1,2012. We use the official script to tokenize, stem and lemmatize, and the duplicates and sentences with length less than 5 or longer than 50 are removed. Finally, we obtain 3,980,000, 10,000 and 10,000 pairs for training, validation and testing, respectively. 4.1.2 Baselines and Parameters Setting Six baseline methods are used for comparison, including traditional Seq2Seq (Sutskever et al., 2014), HRED (Serban et al., 2016), VHRED (Serban et al., 2017b), Weighted Sequence with Concat (WSeq) (Tian et al., 2017), Hierarchical Recurrent Attention Network (HRAN) (Xing et al., 2018) and Hierarchical Variational Memory Network (HVMN) (Chen et al., 2018). For JDC, we utilize the Chinese word as input. Specifically, we use the Jieba tool for word segmentation, and set the vocabulary size as 69,644. For Ubuntu, the word vocabulary size is set as 1https://www.jddc.jd.com 2https://github.com/rkadlec/ubuntu-ranking-datasetcreator JDC Dataset model PPL BLEU distinct-1 distinct-2 SEQ2SEQ 20.287 11.458 1.069 3.587 HRED 21.264 12.087 1.101 3.809 VHRED 22.287 11.501 1.174 3.695 WSeq 21.824 12.529 1.042 3.917 HRAN 20.573 12.278 1.313 5.753 HVMN 22.242 13.125 0.878 3.993 ReCoSa 17.282 13.797 1.135 6.590 Ubuntu Dataset model PPL BLEU distinct-1 disttinct-2 SEQ2SEQ 104.899 0.4245 0.808 1.120 HRED 115.008 0.6051 1.045 2.724 VHRED 186.793 0.5229 1.342 2.887 WSeq 141.599 0.9074 1.024 2.878 HRAN 110.278 0.6117 1.399 3.075 HVMN 164.022 0.7549 1.607 3.245 ReCoSa 96.057 1.6485 1.718 3.768 Table 2: The metric-based evaluation results (%). 15,000. For a fair comparison among all the baseline methods and our methods, the numbers of hidden nodes are all set to 512, and batch sizes are set to 32. The max length of dialogue turns is 15 and the max sentence length is 50. The head number of ReCoSa model is set as 6. Adam is utilized for optimization, and the learning rate is set to be 0.0001. We run all the models on a Tesla K80 GPU card with Tensorflow3. 4.1.3 Evaluation Measures We use both quantitative metrics and human judgements for evaluation in our experiment. Specifically, we use two kinds of metrics for quantitative comparisons. One kind is traditional metrics, such as PPL and BLEU score (Xing et al., 2017), to evaluate the quality of generated responses. They are both widely used in NLP and multi-turn dialogue generation (Chen et al., 2018; Tian et al., 2017; Xing et al., 2018). The other kind is the recently proposed distinct (Li et al., 2016b), to evaluate the degree of diversity of the generated responses by calculating the number of distinct unigrams and bigrams in the generated responses. For human evaluation, given 300 randomly sampled context and their generated responses, three annotators (all CS majored students) are required to give the comparison between ReCoSa model and baselines, e.g. win, loss and tie, based on the coherence of the generated response with respect to the contexts. For example, the win label means that the generated response of ReCoSa is more proper than the baseline model. 
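For reference, the distinct-n diversity metric used in the evaluation can be sketched as follows. This is not the authors' implementation: we assume whitespace tokenization and the common normalization of unique n-grams over total n-grams, which may differ in minor details from the script used in the paper.

```python
from collections import Counter

def distinct_n(responses, n):
    """Distinct-n (Li et al., 2016b): ratio of unique n-grams to the total
    number of n-grams over a set of generated responses."""
    ngrams = Counter()
    for response in responses:
        tokens = response.split()  # assumes whitespace-tokenized text
        for i in range(len(tokens) - n + 1):
            ngrams[tuple(tokens[i:i + n])] += 1
    total = sum(ngrams.values())
    return len(ngrams) / total if total > 0 else 0.0

# Toy usage on two generated responses.
generated = ["i am very happy to serve you", "i am happy to help"]
print(distinct_n(generated, 1), distinct_n(generated, 2))
```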
3https://github.com/zhanghainan/ReCoSa 3726 JDC Dataset model P@1 R@1 F1@1 P@3 R@3 F1@3 P@5 R@5 F1@5 P@10 R@10 F1@10 WSeq 35.20 29.73 16.12 24.27 51.49 16.50 21.61 71.76 16.61 17.45 97.17 14.79 HRAN 22.88 15.56 9.26 24.13 46.22 15.85 22.78 66.22 16.95 21.05 91.11 17.10 ReCoSa-head1 25.98 19.19 11.04 25.35 52.33 17.08 23.92 73.84 18.07 22.55 97.67 18.32 ReCoSa-head2 17.32 12.79 7.36 24.23 50.00 16.32 24.29 75.00 18.35 22.15 95.93 17.99 ReCoSa-head3 27.56 20.35 11.71 26.20 54.07 17.65 23.92 73.84 18.07 22.01 95.35 17.88 ReCoSa-head4 20.47 15.12 8.70 25.92 53.49 17.46 23.92 73.84 18.07 22.55 97.67 18.32 ReCoSa-head5 29.92 22.09 12.71 25.92 53.49 17.46 24.67 76.16 18.63 22.15 95.93 17.99 ReCoSa-head6 25.20 18.60 10.70 25.35 52.33 17.08 24.29 75.00 18.35 22.15 95.93 17.99 Table 3: The attention analysis results (%). JDC Dataset model ReCoSa vs. kappa win (%) loss (%) tie (%) SEQ2SEQ 53.45 3.45 43.10 0.398 HRED 44.83 10.34 44.83 0.373 VHRED 50.00 6.90 43.10 0.369 WSeq 34.48 8.62 56.90 0.379 HRAN 24.14 13.79 62.07 0.384 HVMN 27.59 13.79 58.62 0.383 Ubuntu Dataset model ReCoSa vs. kappa win (%) loss (%) tie (%) SEQ2SEQ 55.32 2.13 42.55 0.445 HRED 44.68 8.51 46.81 0.429 VHRED 48.94 8.51 42.55 0.421 WSeq 25.53 14.89 59.57 0.440 HRAN 34.04 10.64 55.32 0.437 HVMN 27.66 12.77 59.57 0.434 Table 4: The human evaluation on JDC and Ubuntu. 4.2 Experimental Results Now we demonstrate our experimental results on the two public datasets. 4.2.1 Metric-based Evaluation The quantitative evaluation results are shown in Table 2. From the results, we can see that the attention-based models, such as WSeq, HRAN and HVMN, outperform the traditional HRED baselines in terms of BLEU and distinct-2 measures. That’s because all these models further consider the relevance of the contexts in the optimization process. HRAN uses a traditional attention mechanism to learn the importance of the context sentences. HVMN uses a memory network to remember the relevant context. But their effects are both quite limited. Our proposed ReCoSa performs the best. Take the BLEU score on JDC dataset for example, the BLEU score of ReCoSa model is 13.797, which is significantly better than that of HRAN and HVMN, i.e., 12.278 and 13.125. The distinct scores of our model are also higher than baseline models, which indicate that our model can generate more diverse responses. We have conducted the significant test, and the result shows that the improvements of our model are significant on both Chinese and English datasets, i.e., p-value < 0.01. In summary, our ReCoSa model has the ability to produce high quality and diverse responses, as compared with baseline methods. 4.2.2 Human Evaluation The human evaluation results are shown in Table 4. The percentage of win, loss and tie, as compared with the baselines, are given to evaluate the quality of generated responses by ReCoSa. From the results, we can see that the percentage of win is always larger than that of loss, which shows that our ReCoSa model significantly outperforms baselines. Take JDC as an example. Compared with HRAN, WSeq and HVMN, the ReCoSa achieves preference gains (win subtracts loss) 10.35%, 25.86% and 13.8%, respectively. Kappa (Fleiss, 1971) value is presented to demonstrate the consistency of different annotators. We also conducted the significant test, and the result shows that the improvements of our model are significant on both two datasets, i.e., p-value < 0.01. 
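The kappa values above measure agreement among the three annotators. The paper does not describe its implementation, so the following is a minimal sketch of the standard Fleiss' kappa formulation; the toy ratings matrix is hypothetical.

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa (Fleiss, 1971). `ratings` is an N x k matrix where
    ratings[i][j] is the number of annotators assigning item i to category j;
    every row must sum to the same number of raters n."""
    N = len(ratings)
    n = sum(ratings[0])   # raters per item (3 in the evaluation above)
    k = len(ratings[0])   # categories (e.g. win / loss / tie)
    # Observed per-item agreement and its mean.
    P_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in ratings]
    P_bar = sum(P_i) / N
    # Chance agreement from the category marginals.
    p_j = [sum(row[j] for row in ratings) / (N * n) for j in range(k)]
    P_e = sum(p * p for p in p_j)
    return (P_bar - P_e) / (1 - P_e)

# Hypothetical ratings for 4 dialogues judged by 3 annotators.
print(fleiss_kappa([[3, 0, 0], [2, 1, 0], [1, 1, 1], [0, 0, 3]]))
```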
4.3 Analysis on Relevant Contexts To verify whether the performance improvements are owing to the detected relevant contexts, we conduct a further data analysis, including both quantitative evaluation and case study. Specifically, we randomly sample 500 context-response pairs from the JDC dataset, denoted as JDCRCD4. Three annotators are employed (all CS PhD students) to label each context with respect to the human’s judgements. If a contextual sentence is related with the response, then it is labeled as 1. Otherwise it is labeled as 0. The kappa value of this labeled dataset is 0.514, indicting the consistance among different annotators. 4https://github.com/zhanghainan/ReCoSa 3727 Example1 context1 货物没有收到但是申请了退货 The goods were not received, but the return was applied context2 请您稍等,马上为您查询 Please wait, I will check for you right away. context3 这个申请什么时候能通过? When will this application pass? response 这个订单已经发货了,需要到了站点之后处理 This order has been shipped and should be processed after shipping to the site. Seq2Seq 您好,请问有什么问题可以帮您? What can I do for you? HRED 亲爱的客户,还麻烦您提供下订单号,妹子这边给您查询哦 Please give me your order ID. I will check for you now VHRED 您好,很高兴为您服务 I am very happy to serve you. WSeq 申请中 Applying HRAN 是已经完成了哦,系统无法拦截站点 It’s already done, and the system can’t intercept the site. HVMN 建议等待呢 Please wait ReCoSa 亲爱的,查看到您的服务单还没有更新,请您耐心等待一下 Dear, check that your service order has not been updated yet. Please wait. Table 5: The generated response Example1 from different models on JDC. The red contexts indicate the relevant context to the response. (a) ReCoSa-head1. (b) ReCoSa-head2. (c) ReCoSa-head3. (d) ReCoSa-head4. (e) ReCoSa-head5. (f) ReCoSa-head6. Figure 2: ReCoSa multi-head attention for example1 in Table 5. The x-coordinate shows the context sentences and the y-coordinate shows the generated words. 4.3.1 Quantitative Evaluation Since HRED considers all the context as relevant context, we calculate the error rate for evaluation. That is, one minus the proportion of all-contextrelevant in the JDC-RCD data, i.e. 98.4%. Therefore, using all contexts indiscriminately is highly inappropriate for multi-turn dialogue generation. Other models, such as WSeq, HRAN and HVMN, will output the relevance score based on the attention weight for each context. Therefore we can treat it as a ranking problem. Ranking evaluation measures, such as the precision, recall and F1 score, are used for quantitative evaluations5. Then we calculate the precision, recall and F1 score of the top 1,3,5,10 for WSeq model, HRAN model and our ReCoSa model.6 The results are shown in Table 3. We can see that the WSeq obtains the best score for P@1, R@1 and F1@1. That’s because there are 80% cases that the post is labeled as 1, and the cosine similarity can rank the explicitly similar context sentence as top 1. Though the WSeq has the best score for F1@1, it doesn’t work well for F1@3, F1@5 and F1@10. That’s because the WSeq may lose some relevant contexts which are not explicitly similar to the post but are related with the response. Compared with the HRAN and WSeq, ReCoSa performs better in most cases. Take P@3 for example, the P@3 score of ReCoSa-head3 is 26.2, which is significantly better than that of HRAN and WSeq, i.e., 24.13 and 24.27. These results indicate that the relevant contexts detected by our ReCoSa model are highly coherent with human’s judgments. 
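The precision, recall and F1 scores at k reported above treat a model's attention weights over the context sentences as a ranking and compare the top-k contexts against the binary human relevance labels in JDC-RCD. A minimal per-example sketch is given below; how the paper averages over examples and breaks ties is not specified, so those details, along with the function and variable names, are our assumptions.

```python
def ranking_prf_at_k(attention_weights, relevance_labels, k):
    """Treat per-context attention weights as a ranking and score the top-k
    contexts against binary human relevance labels (1 = relevant)."""
    ranked = sorted(range(len(attention_weights)),
                    key=lambda i: attention_weights[i], reverse=True)
    hits = sum(relevance_labels[i] for i in ranked[:k])
    n_relevant = sum(relevance_labels)
    precision = hits / k
    recall = hits / n_relevant if n_relevant else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# One labelled example: five contexts, of which contexts 0 and 2 are relevant.
weights = [0.40, 0.05, 0.30, 0.15, 0.10]
labels = [1, 0, 1, 0, 0]
print(ranking_prf_at_k(weights, labels, k=3))  # -> (0.666..., 1.0, 0.8)
```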
Furthermore, we calculate the averaged attention distance to the response, defined as: dis2resp = N X i=1 N −i + 1 N + 1 wi, where i is the index of the context sentence si and wi is the attention weight of the ith context. The dis2resp in human label is 0.399, indicting 5To clearly analyze the attention location, we only pay attention to the context without the post. The reason is that the WSeq always rank the post as top1. 6The HVMN model doesn’t have the explicit attention weight, since its memory is based on the variational network. 3728 Example2 context1 请稍等,正在为您确认此前咨询内容 Please wait, I am checking the previous requirement. context2 先直接退货再买吗? Should I return the goods firstly and buy it again? context3 是.收到了吧? Yes. Do you receive the goods? context4 对,昨晚收到的 Yes, I received them last night. response 那就只能无理由退了再买 You should return them and buy them again Seq2Seq 好的,亲爱滴请您稍等下哦 Ok, dear Please wait HRED 可以的 Ok VHRED 京东客服很高兴为您服务! JD customer services are happy to serve you. WSeq 请问还有其他的问题需要处理呢? Do you have any other questions? HRAN 好.您可以换售后退货了哦 Ok, you can return the goods. HVMN 好的,正在查询!回复不够及时请见谅! Ok, Searching! Replying to you is not timely enough, sorry! ReCoSa 您申请售后,商品退回,重新下单购买 You can apply for sale, return the goods and place an order again Table 6: The generated response Example2 from different models on JDC. The red contexts indicate the relevant context to the response. that the distribution of human attention is approximately uniform, containing both long and short distant dependencies. The dis2resp in ReCoSa is 0.477, which is closer to human than the distance in HRAN, e.g. 0.291. That is to say, our ReCoSa model can well capture the long distant dependency as compared with traditional attention on HRED, validating the correctness of our ideas. 4.3.2 Case Study To facilitate a better understanding of our model, we give some cases as in Table 5 and 6, and draw the heatmap of our ReCoSa model, including the six heads, to analyze the attention weights in Figure 2 and 3. From the result, we can first see that the attention-based model performs better than the model using all contexts indiscriminately. Take example1 of Table 5 as an example. The baselines of using all contexts are easy to generate some common responses, such as ‘What can I do for you?’ and ‘I am very happy to serve you. ’. The attention-based models, i.e. HRAN, WSeq, ReCoSa, can generate relevant response, such as ‘Applying’ and ‘It’ s already done, and the system can’ t intercept the site.’. The response generated by our ReCoSa is more specific and relevant, i.e. ‘Your servers order has not been updated yet, please wait.’. The reason is that ReCoSA considers the difference of contexts and it will focus on the relevant contexts, i.e. context1 and context3. Figure 2 shows the heatmap of example1 in Table 5. The x-coordinate indicates the context1, context2 and context3. And the y-coordinate indicates the generated words. The lighter the color is, the larger the attention weight is. We can see that the ReCoSa pays more attention to the rele(a) ReCoSa-head1. (b) ReCoSa-head2. (c) ReCoSa-head3. (d) ReCoSa-head4. (e) ReCoSa-head5. (f) ReCoSa-head6. Figure 3: ReCoSa multi-head attention for example2 in Table 6. The x-coordinate shows the context sentences and the y-coordinate shows the generated words. vant contexts, i.e. context1 and context3, which is coherent with the human’s understanding. 3729 Our model also performs well in the case where the post (i.e. 
the closest context) and the groundtruth response are not in the same topic. From the example2 in Table 6, the baselines all produce irrelevant or common responses, such as ‘Do you have any other questions?’ and ‘Ok, I am looking for you! Replying to you is not timely enough, sorry!’. The reason is that the baseline models are weak in detecting long distant dependency relations. However, our model gives more relevant responses with specific meanings‘You could apply for sale, return the goods and place an order again’, by using the self-attention mechanism. Figure 3 shows the heatmap of example2 in Table 6. For example2, the context2 is the most significant context and the context1 is the most useless one. We can see that the ReCoSa ignores the context1 and pays more attention to the context2. In a word, our ReCoSa model can detect both the long and short distant dependencies, even for the difficult case when the response is not related with the post. 5 Conclusion In this paper, we propose a new multi-turn dialogue generation model, namely ReCoSa. The motivation comes from the fact that the widely used HRED based models simply treat all contexts indiscriminately, which violate the important characteristic of multi-turn dialogue generation, i.e., the response is usually related to only a few contexts. Though some researchers have considered using the similarity measure such as cosine or traditional attention mechanism to tackle this problem, the detected relevant contexts are not accurate, due to either insufficient relevance assumption or position bias problem. Our core idea is to utilize the self-attention mechanism to effectively capture the long distant dependency relations. We conduct extensive experiments on both Chinese customer services dataset and English Ubuntu dialogue dataset. The experimental results show that our model significantly outperforms existing HRED models and its attention variants. Furthermore, our further analysis show that the relevant contexts detected by our model are significantly coherent with humans’ judgements. Therefore, we obtain the conclusion that the relevant contexts can be useful for improving the quality of multiturn dialogue generation, by using proper detection methods, such as self-attention. In future work, we plan to further investigate the proposed ReCoSa model. For example, some topical information can be introduced to make the detected relevant contexts more accurate. In addition, the detailed content information can be considered in the relevant contexts to further improve the quality of generated response. Acknowledgments This work was funded by the National Natural Science Foundation of China (NSFC) under Grants No. 61773362, 61425016, 61472401, 61722211, and 61872338, the Youth Innovation Promotion Association CAS under Grants No.20144310, and 2016102, and the National Key R&D Program of China under Grants No. 2016QY02D0405. — References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. The International Conference on Learning Representations. Hongshen Chen, Zhaochun Ren, Jiliang Tang, Yihong Eric Zhao, and Dawei Yin. 2018. Hierarchical variational memory network for dialogue generation. In Proceedings of the 2018 World Wide Web Conference on World Wide Web, pages 1653–1662. International World Wide Web Conferences Steering Committee. Jianpeng Cheng, Li Dong, and Mirella Lapata. 2016. Long short-term memory-networks for machine reading. 
In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 551–561. Joseph L Fleiss. 1971. Measuring nominal scale agreement among many raters. American Psychological Association. Sepp Hochreiter, Yoshua Bengio, Paolo Frasconi, J¨urgen Schmidhuber, et al. 2001. Gradient flow in recurrent nets: the difficulty of learning long-term dependencies. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016a. A diversity-promoting objective function for neural conversation models. The North American Chapter of the Association for Computational Linguistics. Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, and Dan Jurafsky. 2016b. Deep reinforcement learning for dialogue generation. The Conference on Empirical Methods in Natural Language Processing. 3730 Jiwei Li, Will Monroe, Tianlin Shi, Alan Ritter, and Dan Jurafsky. 2017. Adversarial learning for neural dialogue generation. The Conference on Empirical Methods in Natural Language Processing. Ryan Lowe, Nissan Pow, Iulian Serban, and Joelle Pineau. 2015. The ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems. Computer Science. Lili Mou, Yiping Song, Rui Yan, Ge Li, Lu Zhang, and Zhi Jin. 2017. Sequence to backward and forward sequences: A content-introducing approach to generative short-text conversation. The Annual Meeting of the Association for Computational Linguistics. Ankur Parikh, Oscar T¨ackstr¨om, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2249–2255. Romain Paulus, Caiming Xiong, and Richard Socher. 2017. A deep reinforced model for abstractive summarization. arXiv preprint arXiv:1705.04304. Iulian V Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using generative hierarchical neural network models. In Thirtieth AAAI Conference on Artificial Intelligence. Iulian Vlad Serban, Tim Klinger, Gerald Tesauro, Kartik Talamadupula, Bowen Zhou, Yoshua Bengio, and Aaron Courville. 2017a. Multiresolution recurrent neural networks: An application to dialogue response generation. In Thirty-First AAAI Conference on Artificial Intelligence. Iulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2017b. A hierarchical latent variable encoder-decoder model for generating dialogues. In Thirty-First AAAI Conference on Artificial Intelligence. Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Shirui Pan, and Chengqi Zhang. 2018. Disan: Directional self-attention network for rnn/cnn-free language understanding. In Thirty-Second AAAI Conference on Artificial Intelligence. Alessandro Sordoni, Yoshua Bengio, Hossein Vahabi, Christina Lioma, Jakob Grue Simonsen, and JianYun Nie. 2015. A hierarchical recurrent encoderdecoder for generative context-aware query suggestion. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management, pages 553–562. ACM. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In The Annual Conference on Neural Information Processing Systems, pages 3104–3112. Zhiliang Tian, Rui Yan, Lili Mou, Yiping Song, Yansong Feng, and Dongyan Zhao. 2017. How to make context more useful? an empirical study on contextaware neural conversational models. 
In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 231–236. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Yu Wu, Wei Wu, Chen Xing, Ming Zhou, and Zhoujun Li. 2017. Sequential matching network: A new architecture for multi-turn response selection in retrieval-based chatbots. Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. Chen Xing, Wei Wu, Yu Wu, Jie Liu, Yalou Huang, Ming Zhou, and Wei-Ying Ma. 2017. Topic aware neural response generation. In The Association for the Advancement of Artificial Intelligence, pages 3351–3357. Chen Xing, Yu Wu, Wei Wu, Yalou Huang, and Ming Zhou. 2018. Hierarchical recurrent attention network for response generation. In Thirty-Second AAAI Conference on Artificial Intelligence. Hainan Zhang, Yanyan Lan, Jiafeng Guo, Jun Xu, and Xueqi Cheng. 2018a. Reinforcing coherence for sequence to sequence model in dialogue generation. In International Joint Conference on Artificial Intelligence, pages 4567–4573. Hainan Zhang, Yanyan Lan, Jiafeng Guo, Jun Xu, and Xueqi Cheng. 2018b. Tailored sequence to sequence models to different conversation scenarios. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1479–1488. Weinan Zhang, Yiming Cui, Yifa Wang, Qingfu Zhu, Lingzhi Li, Lianqiang Zhou, and Ting Liu. 2018c. Context-sensitive generation of open-domain conversational responses. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2437–2447. Zhuosheng Zhang, Jiangtong Li, Pengfei Zhu, Hai Zhao, and Gongshen Liu. 2018d. Modeling multiturn conversation with deep utterance aggregation. Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. Xiangyang Zhou, Daxiang Dong, Hua Wu, Shiqi Zhao, Dianhai Yu, Hao Tian, Xuan Liu, and Rui Yan. 2016. Multi-view response selection for human-computer conversation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 372–381. —
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3731–3741 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3731 Dialogue Natural Language Inference Sean Welleck New York University [email protected] Jason Weston Facebook AI Research New York University Arthur Szlam Facebook AI Research Kyunghyun Cho New York University Facebook AI Research CIFAR Azrieli Global Scholar Abstract Consistency is a long standing issue faced by dialogue models. In this paper, we frame the consistency of dialogue agents as natural language inference (NLI) and create a new natural language inference dataset called Dialogue NLI. We propose a method which demonstrates that a model trained on Dialogue NLI can be used to improve the consistency of a dialogue model, and evaluate the method with human evaluation and with automatic metrics on a suite of evaluation sets designed to measure a dialogue model’s consistency. 1 Introduction A long standing issue faced by dialogue models is consistency (Li et al., 2016; Vinyals et al., 2015; Zhang et al., 2018). An example from (Vinyals et al., 2015) shows a two-round dialogue in which their neural sequence model first responds to what is your job? with i’m a lawyer, then responds to what do you do? with i’m a doctor. Even when inconsistencies are relatively rare and semantically plausible, they are jarring, and because semantic plausibility is not enough to root them out, preventing them is challenging. One approach to increasing the consistency of a chit-chat dialogue model was proposed in (Zhang et al., 2018), where the dialogue agent was given a set of personal facts describing its character (a persona) and produces utterances that reflect the persona. The intended outcome is that the agent produces utterances consistent with its given persona. However, these models still face the consistency issue, as shown in Figure 1. Separately, the framework of Natural Language Inference (NLI) (Bowman et al., 2015; Dagan et al., 2006; Maccartney and Manning, 2009) involves learning a mapping between a sentence pair and an entailment category. It is hypothesized that the NLI task is a proxy for general goals in natural language processing, such as language understanding (Bowman et al., 2015; Williams et al., 2018). Thus, the NLI task has been used for learning general sentence representations (Conneau et al., 2017) and for evaluating NLP models (Poliak et al., 2018a; Wang et al., 2018), with the expectation that such models will be useful in downstream tasks. Despite this expectation, leveraging an NLI model for a downstream task remains an underexplored research direction. An NLI model may improve downstream task performance if properly used, while downstream tasks may yield new datasets or identify issues with existing NLI models, thus expanding the NLI research domain. In this paper, we reduce the problem of consistency in dialogue to natural language inference. We first create a dataset, Dialogue NLI,1 which contains sentence pairs labeled as entailment, neutral, or contradiction. Then, we demonstrate that NLI can be used to improve the consistency of dialogue models using a simple method where utterances are re-ranked using a NLI model trained on Dialogue NLI. The method results in fewer persona contradictions on three evaluation sets. The evaluation sets can be used independently to automatically evaluate a dialogue model’s persona consistency, reducing the need for human evaluation. 
We discuss several future research directions involving this approach. 2 Dialogue Consistency and Natural Language Inference First, we review the dialogue generation and natural language inference problems as well as the notions of consistency used throughout. 1The dataset is available at wellecks.github.io/ dialogue_nli. 3732 Figure 1: Persona-based dialogue with a Key-Value Memory Network trained on Persona-Chat (Zhang et al., 2018). Figure 2: Relating triples, persona sentences, and utterances to derive annotated sentence pairs. Shown here is a “relation swap” contradiction. Dialogue Generation Dialogue generation can be framed as next utterance prediction, in which an utterance (a sequence of tokens representing a sentence) ut+1 is predicted given a conversation prefix u≤t. A sequence of utterances is interpreted as a dialogue between agents. For instance, an alternating two-agent dialogue which starts with agent A and ends with agent B is written as uA 1 , uB 2 , uA 3 , uB 4 , ..., uB T . Persona-Based Dialogue In persona-based dialogue, each agent is associated with a persona, PA and PB. An utterance is now predicted using the conversation prefix u≤t and the agents own persona, e.g. PA for agent A. It is assumed that an agent’s utterances are conditionally dependent on its persona, which can be interpreted as the utterances being representative of, or reflecting, the persona. A typical approach for representing the persona is to use a set of sentences P = {p1, ..., pm}. Consistency A consistency error, or contradiction, occurs when an agent produces an utterance that contradicts one of their previous utterances. Similarly, a persona consistency error, or persona contradiction, occurs when an agent produces an utterance that contradicts a subset of its persona. A contradiction may be a clear logical contradiction, e.g. I have a dog vs. I do not have a dog, but in general is less clearly defined. As a result, in addition to logical contradictions, we interpret a consistency error as being two utterances not likely to be said by the same persona. For instance, “i’m looking forward to going to the basketball game this weekend!” vs. “i don’t like attending sporting events”, as well as “i’m a lawyer” vs. “i’m a doctor” would be viewed here as contradictions, although they are not strict logical inconsistencies. Similarly, a persona consistency error is interpreted here as an utterance which is not likely to be said given a persona described by a given set of persona sentences, in addition to logical contradictions. Natural Language Inference Natural Language Inference (NLI) assumes a dataset D = {(s1, s2)i, yi}N i=1 which associates an input pair (s1, s2) to one of three classes y ∈{entailment, neutral, contradiction}. Each input item sj comes from an input space Sj, which in typical NLI tasks is the space of natural language sentences, i.e. sj is a sequence of words (w1, ..., wK) where each word wk is from a vocabulary V. The input (s1, s2) are referred to as the premise and hypothesis, respectively, and each label is interpreted as meaning the premise entails the hypothesis, the premise is neutral with respect to the hypothesis, or the premise contradicts the hypothesis. The problem is to learn a function fNLI(s1, s2) →{E, N, C} which generalizes to new input pairs. 
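A hypothetical typed stub of this NLI interface, as it is used in the rest of the paper, is sketched below; it anticipates the pairwise reduction described next. The names NLIModel and contradicted_persona_sentences are illustrative and do not come from any released code.

```python
from typing import Callable, List, Tuple

# The three NLI classes used throughout the paper.
ENTAILMENT, NEUTRAL, CONTRADICTION = "E", "N", "C"

# A trained NLI model maps a (premise, hypothesis) pair to a predicted
# label together with a confidence score (e.g. its softmax probability).
NLIModel = Callable[[str, str], Tuple[str, float]]

def contradicted_persona_sentences(f_nli: NLIModel, utterance: str,
                                   persona: List[str]) -> List[str]:
    """Return the persona sentences that the NLI model predicts the
    utterance contradicts, checking each (utterance, persona) pair."""
    return [p for p in persona if f_nli(utterance, p)[0] == CONTRADICTION]
```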
Reducing Dialogue Consistency to NLI Identifying utterances which contradict previous utterances or an agent’s persona can be reduced to natural language inference by assuming that contradictions are contained in a sentence pair. That is, given a persona PA = {pA 1 , ..., pA m} for agent A and a length-T dialogue uA 1 , uB 2 , ...uA T−1, uB T , it is assumed that a dialogue contradiction for agent A is contained in an utterance pair (uA i , uA j ), and a persona contradiction is contained in a pair (uA i , pA k ). Similarly, we assume that entailments 3733 and neutral interactions, defined in Section 3, are contained in sentence pairs. We do not consider relationships which require more than two sentences to express. Under this assumption, we can use a natural language inference model fNLI to identify entailing, neutral, or contradicting utterances. Section 3 proposes a dialogue-derived dataset for training fNLI, and Section 4 proposes a method which incorporates fNLI with a dialogue model for next utterance prediction. 3 Dialogue NLI Dataset The Dialogue NLI dataset consists of sentence pairs labeled as entailment (E), neutral (N), or contradiction (C). Sentences Sentences originate from a two-agent persona-based dialogue dataset. A dialogue between agents A and B consists of a sequence of utterances uA 1 , uB 2 , uA 3 , uB 4 , ..., uB T , and each agent has a persona represented by a set of persona sentences {pA 1 , ..., pA mA} and {pB 1 , ..., pB mB}. The Dialogue NLI dataset consists of (ui, pj) and (pi, pj) pairs2 from the Persona-Chat dataset (Zhang et al., 2018)3. 3.1 Triple Generation In order to determine labels for our dataset, we require human annotation of the utterances and persona sentences in PersonaChat, as the original dataset does not contain this information. We perform such annotation by first associating a human-labeled triple (e1, r, e2) with each persona sentence, and a subset of all the utterances, detailed in 3.2. Each triple contains the main fact conveyed by a persona sentence, such as (i, have pet, dog) for the persona sentence I have a pet dog, or a fact mentioned in an utterance, such as No, but my dog sometimes does. Persona sentences and utterances are grouped by their triple (e.g. see Figure 2), and pairs (u, p) and (p, p) are defined as entailment, neutral, or contradiction based on their triple according to the criteria below. For examples and summary, we refer readers to Tables 1–2. 2 We also release additional (ui, uj) pairs, but experiments in this paper are not based on them. 3The dataset collection process is applicable to other persona-based dialogue datasets such as (Mazar´e et al., 2018). Entailment Each unique pair of sentences that share the same triple are labeled as entailment. Neutral Neutral pairs are obtained with three different methods. First, a miscellaneous utterance is a (u, p) pair of which u is not associated with any triple. This includes greetings (how are you today?) and sentences unrelated to a persona sentence (the weather is ok today), so such utterances are assumed to be neutral with respect to persona sentences. The second method, persona pairing, takes advantage of the fact that each ground-truth persona is typically neither redundant nor contradictory. A persona sentence pair (p, p′) is first selected from a persona if p and p′ do not share the same triple. Then each sentence associated with the same triple as p is paired with each sentence associated with the same triple as p′. 
Lastly, we specify relation swaps (r, r′) for certain relations (see Appendix A.2) whose triples are assumed to represent independent facts, such as have vehicle and have pet. A sentence pair, whose first sentence is associated with a triple (·, r, ·) and whose second sentence has triple (·, r′, ·), is labeled as neutral. See Table 1 for an example. Contradiction We obtain contradictions using three methods. See Figure 2 for an example. First, the relation swap method is used by specifying contradicting relation pairs (r, r′) (see Appendix A.2), such as (like activity, dislike), then pairing each sentence associated with the triple (e1, r, e2) with each sentence associated with (e1, r′, e2). Similarly, an entity swap consists of specifying relations, e.g., physical attribute, that would yield a contradiction when the value of e2 is changed to a different value e′ 2, e.g., short → tall (see Appendix A.3). Sentences associated with (e1, r, e2) are then paired with sentences associated with (e1, r, e′ 2). Finally, a numeric contradiction is obtained by first selecting a sentence which contains a number that appears in the associated triple (see Table 1). A contradicting sentence is generated by replacing the sentence’s numeric surface form with a different randomly sampled integer in the number or text form. 3734 Triple Premise Hypothesis Triple Label (i, like activity, chess) i listen to a bit of everything . it helps me focus for my chess tournaments . i like to play chess . (i, like activity, chess) E how are you today? i drink espresso . (i, like drink, espresso) N (i, like goto, spain) i love spain so much , i been there 6 times . i think i will retire in a few years . (i, want do, retire) N (i, have vehicle, car) my vehicle is older model car . i have pets . (i, have pet, pets) N (i, dislike, cooking) i really do not enjoy preparing food for myself . i like to cook with food i grow in my garden . (i, like activity, cooking) C (i, physical attribute, short) height is missing from my stature . i am 7 foot tall . (i, physical attribute, tall) C (i, have family, 3 sister) i have a brother and 3 sisters . i have a brother and four sisters . (i, have family, 4 sister) C Table 1: Examples from the validation set. Train Valid Test Test-Gold Data Type Label (u, p) (p, p) (u, p) (p, p) (u, p) (p, p) (u, p) (p, p) Matching Triple E 43,000 57,000 5,000 500 4,500 900 3,712 615 Misc. Utterance N 50,000 3,350 3,000 2,282 Persona Pairing N 20,000 10,000 2,000 2,000 1,466 Relation Swap N 20,000 150 400 260 Relation Swap C 19,116 2,600 85 14 422 50 279 44 Entity Swap C 47,194 31,200 4,069 832 3,400 828 2,246 591 Numerics C 10,000 500 1,000 881 Dialogue NLI Overall 310,110 16,500 16,500 12,376 Table 2: Dialogue NLI Dataset Properties. (u, p) and (p, p) refer to (utterance, persona sentence) and (persona sentence, persona sentence) pairs, respectively. Numerics consist of (u, u) (u, p) and (p, p) pairs. 3.2 Triple Annotation Each persona sentence is annotated with a triple (e1, r, e2) using Amazon Mechanical Turk task. We first define a schema consisting of ⟨category⟩⟨relation⟩⟨category⟩rules, such as ⟨person⟩have pet⟨animal⟩, where the relation comes from a fixed set of relation types R, listed in Appendix A.1. Given a sentence, the annotator selects a relation r from a drop-down populated with the values in R. The annotator then selects the categories and values of the entities e1 and e2 using drop-downs that are populated based on the schema rules. 
An optional drop-down contains numeric values for annotating entity quantities (e.g., 3 brothers). If selected, the numeric value is concatenated to the front of the entity value. The annotator can alternatively input an out-of-schema entity value in a text-box. Using this method, each of the 10,832 persona sentences is annotated with a triple (e1, r, e2), where r ∈R, e1 ∈E1, and e2 ∈E2. Here E1 is the set of all annotated e1 from the drop-downs or the text-box, and E2 is similarly defined. Finally, utterances are associated with a triple as follows. Let p be a persona sentence with triple (e1, r, e2). We start with all utterances, U, from agents that have p in their persona. An utterance u ∈U is then associated with the triple (e1, r, e2) and persona sentence p when e2 is a sub-string of u, or word similarity4 sim(u, p) ≥τ is suitably large. 4 We use cosine similarity between the mean of TF-IDF weighted GloVe (Pennington et al., 2014) word vectors and set τ = 0.9. 3735 3.3 Statistics Table 2 summarizes the dataset and its underlying data types. The label, triple, and data type are supplied as annotations for each sentence pair. We additionally create a gold-standard test set (Test Gold) by crowdsourcing three label annotations for each example in the test set. We keep each test example for which two or more annotators agreed with its dataset label. All sentences in Dialogue NLI were generated by humans during the crowdsourced dialogue collection process of the Persona-Chat dataset (Zhang et al., 2018). The resulting sentence pairs are thus drawn from a natural dialogue domain that differs from existing NLI datasets, which are either drawn from different domains such as image captions or created using synthetic templates (Bowman et al., 2015; Demszky et al., 2018; Khot et al., 2018; Marelli et al., 2014; Poliak et al., 2018b; Wang et al., 2018; Williams et al., 2018). 4 Consistent Dialogue Agents via Natural Language Inference We now present a method which demonstrates that natural language inference can be used to improve the consistency of dialogue agents. Candidate utterances are re-ranked based on whether the candidate is predicted to contradict a persona sentence. If the NLI model predicts that a candidate contradicts a persona sentence, the candidate’s score is penalized, with the penalty weighted by the NLI model’s confidence5 scaled by a constant. Specifically, assume a dialogue model fdialogue(P, u≤t, U) → (s1, s2, ..., s|U|) and a Dialogue NLI model fNLI(u, p) →{E, N, C}. Given a persona P = {p1, ..., pm}, previous utterances u≤t, and a set of candidate nextutterances U, the dialogue model outputs a ranked list of scores s1, s2, ..., s|U| corresponding to next-utterance candidates u1, u2, ..., u|U|. The NLI model is then run on each (ui, pj) pair, predicting a label yi,j ∈{E, N, C} with confidence ci,j. A contradiction score is computed for each candidate as: scontradict i =    0, if yi,j ̸= C ∀pj ∈P max j:yi,j=C ci,j, otherwise. That is, if the candidate ui does not contradict any persona sentence pj according to the NLI 5 In our experiments, the softmax output corresponding to the contradiction class from Dialogue NLI. Model Valid Test Test Gold ESIM 86.31 88.20 92.45 InferSent 85.82 85.68 89.96 InferSent SNLI 47.86 46.36 47.03 InferSent Hyp. Only 55.98 57.19 51.52 Most Common Class 33.33 34.54 34.96 ESIM Gold Triples 99.52 99.46 99.69 Table 3: Dialogue NLI Results model, scontradict i is zero. 
If ui contradicts one or more persona sentences, scontradict i is the highest confidence, ci,j, out of the contradicting (ui, pj).6 New candidate scores are then computed as sre-rank i = si −λ(s1 −sk)scontradict i (1) and the candidates are sorted according to sre-rank. Hyper-parameters λ and k control the NLI model’s influence in re-ranking. For example, if the top candidate has a contradiction score of 1.0, then with λ = 1, it will be moved to the k’th position in the ranking. λ = 0 corresponds to no re-ranking. 5 Experiments 5.1 Experiment 1: NLI Models Many recently proposed NLI models can be categorized into sentence encoding based methods of the form fMLP(genc(s1), genc(s2)), and attention-based methods of the form fMLP(gattn(s1, s2)) (Lan and Xu, 2018). We thus choose and train representative models of each type which have achieved competitive performance on existing NLI benchmark datasets. For the sentence encoding method, we use InferSent (Conneau et al., 2017), which encodes a sentence using a bidirectional LSTM followed by max-pooling over the output states. As the representative attention-based method we use the enhanced sequential inference model (ESIM, (Chen et al., 2017)), which computes an attention score for each word pair. We also report results from a model trained and evaluated using the hypothesis sentence only (InferSent Hyp. Only) (Gururangan et al., 2018; Poliak et al., 2018c), a model trained on the existing SNLI dataset (Bowman et al., 2015) but evaluated 6 Future work could consider filtering previous-utterance contradictions (ui, uj) as well. 3736 Data Type Example Pred. Actual Matching Triple (p, p) i am a hopeless bookworm. Neutral Entail when i have some spare time i read. Matching Triple (u, p) i am from italy. i love the early mornings. Neutral Entail i like getting up bright and early. Misc. Utterance i do not understand football or baseball. Contradict Neutral i am employed as an engineer. Persona Pairing i lift weights every chance i get. Entail Neutral i work in a warehouse driving a forklift. Relation Swap (p, p) canines make me shake with fear. Entail Contradict i love dogs but hate cats. Relation Swap (u, p) i am heavy into fitness although i am rather large. Entail Contradict i do not like exercise or physical activity. Entity Swap (p, p) hawaii is where i reside. Neutral Contradict i do not drive because i live in new york. Entity Swap (u, p) tell me it was vegan food please , that is all i eat. Neutral Contradict i eat ham. Numerics i have two part time jobs. Neutral Contradict i have 7 part time jobs. Table 4: Example ESIM mispredictions by data type on Test Gold. Data Type N Accuracy Matching Triple (p, p) 615 83.58 Matching Triple (u, p) 3,712 91.25 Misc. Utterance 2,282 96.85 Persona Pairing 1,466 94.48 Relation Swap (p, p) 44 79.55 Relation Swap (u, p) 539 80.71 Entity Swap (p, p) 591 93.40 Entity Swap (u, p) 2,246 92.43 Numerics 881 96.25 Table 5: ESIM Accuracy by data type on Test Gold. on Dialogue NLI (InferSent SNLI), and a model which returns the most common class from the Dialogue NLI training set (Most Common Class). Results Table 3 shows the performance of the two NLI models and three baselines on the Dialogue NLI validation and test sets. The test performance of ESIM (88.2%) and InferSent (85.68%) is similar to the performance reported on the existing SNLI dataset (88.0% (Chen et al., 2017) and 85.5% (Conneau et al., 2017), respectively), while the results on the Dialogue NLI gold test set (92.45%, 89.96%) are higher. 
As in Table 3, however, an InferSent model trained on SNLI performs poorly when evaluated on the proposed Dialogue NLI (47.03%). This is likely due to a mismatch in sentence distributions between SNLI, which is derived from image captions, and Dialogue NLI, whose sentences more closely resemble downstream dialogue applications. The hypothesisonly performance (51.52%) is lower than the hypothesis-only baseline for SNLI (69.00% (Poliak et al., 2018c)), and shows that using information from both the utterance and persona sentence is necessary to achieve good performance on Dialogue NLI. ESIM’s reasonably strong performance on Dialogue NLI suggests that the model may be useful in a downstream task - a claim which we verify in Experiment 5.1. However, there is also room for improvement. In particular, we report the performance of a model which takes the ground-truth triples as input instead of sentences. As shown in the last row of Table 3, each sentence’s underlying triple contains sufficient information to achieve near-perfect accuracy (99.69%). We also show ESIM’s accuracy by data type on Test Gold in Table 5, along with example mispredictions in Table 4. The accuracies and examples suggest that the NLI model could be improved further. 5.2 Experiment 2: Consistency in Dialogue This experiment evaluates the effect of the reranking method from Section 4 on the dialogue model’s persona consistency. Experiment Setup The re-ranking method of Section 4 uses a dialogue next utterance prediction 3737 Haves Likes Attributes Orig. Rerank Orig. Rerank Orig. Rerank Hits@1 ↑ 30.2 37.3 16.9 18.7 35.2 36.4 Contradict@1 ↓ 32.5 8.96 17.6 4.1 8.0 5.7 Entail@1 ↑ 55.2 74.6 77.9 90.6 87.5 88.6 Table 6: Effect of NLI re-ranking on persona consistency in dialogue. The reported metrics are percentages computed over each validation set. Figure 3: Example from the Likes Evaluation Set, showing dialogue model candidates, NLI model predictions, and reranked candidates using the method proposed in Section 4. model and the Dialogue NLI model. For the dialogue model we train a key-value memory network (Zhang et al., 2018) on the Persona-Chat dataset, which uses persona sentences and the conversation prefix as context. This model achieved the best performance on Persona-Chat in (Zhang et al., 2018). We train the model using ParlAI (Miller et al., 2017) on the personachat:self original task, using the hyper-parameters given for the KVMemnnAgent in the ConvAI2 competition. For the NLI model we use the ESIM model trained on Dialogue NLI, based on the results of Experiment 5. To study the effect of re-ranking on persona consistency, we form evaluation sets which contain next-utterances which are likely to yield persona contradiction or entailment, as follows. Evaluation Sets Each example is formed by first finding a next-utterance ut+1 in the Persona-Chat validation set which has an associated triple (e1, r, e2) of interest, e.g. (i, like music, country). If a sentence in the agent’s profile P has triple (e1, r, e2), we form the validation example (P, u≤t, ut+1). Figure 3 shows an example. Each example is associated with candidates U, consisting of the ground-truth utterance ut+1, 10 entailment candidates with the same triple as ut+1, 10 contradicting candidates with a different triple than that of ut+1, and 10 random candidates. The dialogue model must avoid ranking a contradicting candidate highly. Specifically, suppose the ground-truth nextutterance ut+1 is associated with triple (e1, r, e2), e.g., (i, have pet, dog). 
Entailment candidates are utterances u from the validation or training sets such that u is associated with triple (e1, r, e2). Since by construction a sentence in the profile also has triple (e1, r, e2), these candidates entail a profile sentence. A contradicting candidate is an utterance associated with a specified contradicting triple (e′ 1, r′, e′ 2), e.g., (i, not have, dog). We construct three evaluation sets, Haves, Likes, and Attributes using this process. Metrics We introduce variants of the ranking metric Hits@k, called Contradict@k and Entail@k. Contradict@k measures the proportion of top-k candidates returned by the model which contradict candidates, averaged over examples. This measures the propensity of a model to highly rank contradictions. Contradiction@1 is the proportion of consistency errors made by the model. For this metric lower values are better, in contrast to Hits@k. Entail@k measures the proportion of top-k candidates returned by the model which are entailment candidates, averaged over examples. Entail3738 Overall Score ↑ % Consistent ↑ % Contradiction ↓ Raw Calibrated Raw Calibrated Raw Calibrated KV-Mem 2.11± 1.12 2.21± 0.26 0.24 0.27± 0.07 0.23 0.25± 0.08 KV-Mem + NLI 2.34± 1.21 2.38± 0.26 0.28 0.35± 0.08 0.19 0.16± 0.06 Table 7: Human evaluation results (mean± standard deviation). ment candidates share the same underlying triple as the ground-truth next utterance, so this metric rewards highly ranked candidates that convey similar meaning and logic to the ground-truth utterance. Thus it can be interpreted as a more permissive version of Hits@k. Results Table 6 shows re-ranking results on the three evaluation sets (λ = 1.0, k = 10). The NLI re-ranking improves all three metrics on all the evaluation sets. Overall dialogue performance improves, as measured by Hits@1. The NLI reranking substantially reduces the number of contradicting utterances predicted by the model, and increases the number of utterances which entail a profile sentence, as seen in the Contradict@1 and Entail@1 scores. Figure 3 shows an example dialogue with candidates, contradictions predicted by the NLI model, and the corresponding re-ranked candidates. 5.3 Experiment 3: Human Evaluation This experiment evaluates the effect of the proposed NLI re-ranking method on a dialogue model’s consistency, where consistency is judged by human annotators in an interactive personabased dialogue setting. Experiment Setup We use ParlAI (Miller et al., 2017) which integrates with Amazon Mechanical Turk for human evaluation. A human annotator is paired with a model, and each is randomly assigned a persona from 1,155 persona sets. The human and model are then asked to make a conversation of at least either five or six turns (randomly decided). After the conversation, the annotator assigns three scores to the conversation, described below. Each annotator is allowed to participate in at most ten conversations per model, and we collect 100 conversations per model. Two models are evaluated: the same key-value memory network used in Experiment 5.1 without re-ranking (KVMem), and with re-ranking (KV-Mem + NLI). 
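For reference, a minimal sketch of the NLI re-ranking of Section 4 (Eq. 1), as applied by the KV-Mem + NLI model, is given below. It assumes the NLI model returns a (label, confidence) pair for each sentence pair; the function names nli_rerank and toy_nli are illustrative rather than taken from ParlAI or the authors' code.

```python
def nli_rerank(candidates, scores, persona, f_nli, lam=1.0, k=10):
    """Re-rank next-utterance candidates with an NLI model (Section 4, Eq. 1).
    `scores` are the dialogue model's scores sorted so scores[0] is the top
    candidate; `f_nli(u, p)` returns a (label, confidence) pair."""
    s1, sk = scores[0], scores[min(k, len(scores)) - 1]
    reranked = []
    for u_i, s_i in zip(candidates, scores):
        # Contradiction score: highest NLI confidence over contradicted
        # persona sentences, or 0 if no persona sentence is contradicted.
        contra = [conf for p in persona
                  for (label, conf) in [f_nli(u_i, p)] if label == "C"]
        s_contradict = max(contra) if contra else 0.0
        reranked.append((s_i - lam * (s1 - sk) * s_contradict, u_i))
    reranked.sort(key=lambda x: x[0], reverse=True)
    return [u for _, u in reranked]

# Toy usage with a stub NLI model that flags one known persona contradiction.
def toy_nli(u, p):
    return ("C", 0.9) if (u, p) == ("i hate sports", "i love sports") else ("N", 0.6)

cands = ["i hate sports", "me too , i love the gym", "what do you do ?"]
print(nli_rerank(cands, [2.0, 1.5, 1.0], ["i love sports"], toy_nli, lam=1.0, k=3))
# -> ['me too , i love the gym', 'i hate sports', 'what do you do ?']
```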
Scoring and Calibration Following a conversation, an annotator is shown the conversation and the model’s persona, and assigns three scores: an overall score of how well the model represented its persona ({1,2,3,4,5}), a marking of each model utterance that was consistent with the model’s persona ({0,1}), and a marking of each model utterance that contradicted a previous utterance or the model’s persona ({0,1}). We use Bayesian calibration to adjust for annotator bias, following (Kulikov et al., 2018). We assume a model with observed scores Sij and latent variables Mi for the unobserved score of model i and Bj for the bias of annotator j. We then estimate the posterior mean and variance for the unobserved scores given the observed scores. We use Pyro (Bingham et al., 2018) and the no-u-turn sampler (Hoffman and Gelman, 2014) for posterior inference. See Appendix C for details. Results Table 7 shows the human evaluation results. The natural language inference re-ranking improves all the metrics, notably the fine-grained consistency score (0.27 vs. 0.35) and contradiction score (0.25 vs. 0.16). The results are consistent with the conclusions from the automatic evaluation in Experiment 5.1. 6 Conclusion In this paper, we demonstrated that natural language inference can be used to improve performance on a downstream dialogue task. To do so, we created a new dialogue-derived dataset called Dialogue NLI, a re-ranking method for incorporating a Dialogue NLI model into a dialogue task, and an evaluation set which measures a model’s persona consistency. The dataset offers a new domain for natural language inference models, and suggests avenues such as devising alternative methods for using natural language inference components in downstream tasks. Future work may also incorporate contradiction information into the dialogue model itself, and extend to generic contradictions. 3739 References Eli Bingham, Jonathan P Chen, Martin Jankowiak, Fritz Obermeyer, Neeraj Pradhan, Theofanis Karaletsos, Rohit Singh, Paul Szerlip, Paul Horsfall, and Noah D Goodman. 2018. Pyro: Deep Universal Probabilistic Programming. arXiv preprint arXiv:1810.09538. Samuel R Bowman, Gabor Angeli, Christopher Potts, Christopher D Manning, and Stanford Linguistics. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632–642. Association for Computational Linguistics. Qian Chen, Xiaodan Zhu, Zhenhua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2017. Enhanced LSTM for Natural Language Inference. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1657–1668. Association for Computational Linguistics. Alexis Conneau, Douwe Kiela, Holger Schwenk, Loc Loc Barrault, and Antoine Bordes. 2017. Supervised Learning of Universal Sentence Representations from Natural Language Inference Data. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 670–680, Copenhagen, Denmark. Association for Computational Linguistics. Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The PASCAL Recognising Textual Entailment Challenge. pages 177–190. Springer, Berlin, Heidelberg. Dorottya Demszky, Kelvin Guu, and Percy Liang. 2018. Transforming Question Answering Datasets Into Natural Language Inference Datasets. arXiv preprint arXiv:1809.02922. 
Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel R Bowman, and Noah A Smith. 2018. Annotation Artifacts in Natural Language Inference Data. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 107–112, New Orleans, Louisiana. Association for Computational Linguistics. Matthew D. Hoffman and Andrew Gelman. 2014. The no-u-turn sampler: Adaptively setting path lengths in hamiltonian monte carlo. J. Mach. Learn. Res., 15(1):1593–1623. Tushar Khot, Ashish Sabharwal, and Peter Clark. 2018. SCITAIL: A Textual Entailment Dataset from Science Question Answering. In AAAI. Diederik P Kingma and Jimmy Lei Ba. 2014. Adam: A Method For Stochastic Optimization. arXiv preprint arXiv:1412.6980. Ilya Kulikov, Alexander H Miller, Kyunghyun Cho, and Jason Weston. 2018. Importance of a Search Strategy in Neural Dialogue Modelling. arXiv preprint:1811.00907. Wuwei Lan and Wei Xu. 2018. Neural Network Models for Paraphrase Identification, Semantic Textual Similarity, Natural Language Inference, and Question Answering. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3890–3902, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Jiwei Li, Michel Galley, Chris Brockett, Georgios P Spithourakis, Jianfeng Gao, and Bill Dolan. 2016. A Persona-Based Neural Conversation Model. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 994–1003, Berlin, Germany. Association for Computational Linguistics. Bill Maccartney and Christopher D Manning. 2009. An extended model of natural logic. Technical report. M Marelli, S Menini, M Baroni, L Bentivogli, R Bernardi, and R Zamparelli. 2014. A SICK cure for the evaluation of compositional distributional semantic models. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC-2014), Reykjavik, Iceland. European Language Resources Association (ELRA). Pierre-Emmanuel Mazar´e, Samuel Humeau, Martin Raison, and Antoine Bordes. 2018. Training Millions of Personalized Dialogue Agents. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium. Association for Computational Linguistics. Alexander H Miller, Will Feng, Adam Fisch, Jiasen Lu, Dhruv Batra, Antoine Bordes, Devi Parikh, and Jason Weston. 2017. ParlAI: A Dialog Research Software Platform. arXiv preprint:1705.06476. Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. GloVe: Global Vectors for Word Representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543. Adam Poliak, Yonatan Belinkov, James Glass, and Benjamin Van Durme. 2018a. On the Evaluation of Semantic Phenomena in Neural Machine Translation Using Natural Language Inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 513–523, New Orleans, Louisiana. Association for Computational Linguistics. Adam Poliak, Aparajita Haldar, Rachel Rudinger, J Edward Hu, Ellie Pavlick, Aaron Steven White, and Benjamin Van Durme. 2018b. Collecting Diverse Natural Language Inference Problems for Sentence 3740 Representation Evaluation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 67–81. 
Association for Computational Linguistics. Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme. 2018c. Hypothesis Only Baselines in Natural Language Inference. In The Seventh Joint Conference on Lexical and Computational Semantics (*SEM). Oriol Vinyals, Google Quoc, and V Le. 2015. A Neural Conversational Model. In ICML Deep Learning Workshop. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding. arXiv preprint arXiv:1804.07461. Adina Williams, Nikita Nangia, and Samuel R Bowman. 2018. A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122, New Orleans, Louisiana. Association for Computational Linguistics. Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Personalizing Dialogue Agents: I have a dog, do you have pets too? In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2204– 2213, Melbourne, Australia. Association for Computational Linguistics. A Dataset Details A.1 Schema Relation Types : place origin, live in citystatecountry, live in general, nationality, employed by company, employed by general, has profession, previous profession, job status, teach, school status, has degree, attend school, like general, like food, like drink, like animal, like movie, like music, like read, like sports, like watching, like activity, like goto, dislike, has hobby, has ability, member of, want do, want job, want, favorite food, favorite color, favorite book, favorite movie, favorite music, favorite music artist, favorite activity, favorite drink, favorite show, favorite place, favorite hobby, favorite season, favorite animal, favorite sport, favorite, own, have, have pet, have sibling, have children, have family, have vehicle, physical attribute, misc attribute, has age, marital status, gender, other. Additional triples with a not have relation were extracted using a dependency tree pattern. Entity Categories : ability, activity, animal, color, citystate, country, company, cuisine, degree type, drink, family, food, gender, general location, job status, language, marital, media genres, media other, movie title, music artist, music genre, music instrument, noun, number, organization, person, person attribute, person label, personality trait, profession, read author, read genre, read title, read other, school name, school status, school type, season, sport type, subject, time, vehicle, location, other. A.2 Relation Swaps Relation swaps for contradictions include (have *, not have), (own, not have), (has hobby, not have), (like *, dislike), (favorite *, dislike). Neutral relation swaps include (have x, have y), e.g. have pet, have sibling. Additional (have * A, not have B) swaps were defined for entities A which are a super-type of B, namely (A,B) pairs ({pet, animal}, {dog, cat}), ({sibling}, {brother, sister}), ({child, kid}, {son, daughter}), ({vehicle}, {car, truck}); this includes sentence pairs such as “i have a sibling”, “i do not have a sister”. Similarly, (not have B, have * A) swaps were defined using the (A, B) pairs above. 
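The swap tables above lend themselves to a direct implementation. The sketch below is illustrative only: the relation strings and the super-type table are abbreviated stand-ins for the full lists in this appendix, not the exact identifiers used in the released dataset.

# Abbreviated swap tables; the full lists are given in the paragraphs above.
CONTRADICTION_RELATION_SWAPS = {
    "have": "not_have", "have_pet": "not_have", "own": "not_have",
    "has_hobby": "not_have", "like_general": "dislike", "favorite_food": "dislike",
}
SUPER_TYPE_PAIRS = {  # A (super-type) -> more specific entities B, per the (A, B) pairs above
    "pet": ["dog", "cat"], "animal": ["dog", "cat"],
    "sibling": ["brother", "sister"], "vehicle": ["car", "truck"],
}

def contradicting_triples(e1, r, e2):
    # Candidate contradicting triples for (e1, r, e2), mirroring the swaps of Appendix A.2.
    out = []
    if r in CONTRADICTION_RELATION_SWAPS:                 # e.g. (i, like_general, country) -> dislike
        out.append((e1, CONTRADICTION_RELATION_SWAPS[r], e2))
    if r.startswith("have") and e2 in SUPER_TYPE_PAIRS:   # (have * A, not_have B) with A a super-type of B
        out += [(e1, "not_have", b) for b in SUPER_TYPE_PAIRS[e2]]
    return out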
A.3 Entity Swaps For contradictions, swapping entities for the following relation types was assumed to yield a contradiction: attend school, employed by company, employed by general, favorite animal, favorite book, favorite color, favorite drink, favorite food, favorite hobby, favorite movie, favorite music, favorite music artist, favorite place, favorite season, favorite show, favorite sport, gender, has profession, job status, live in citystatecountry, marital status, nationality, place origin, previous profession, school status, want job. Additionally, for physical attribute, misc attribute, or other relations, an en3741 tity swap was done using all WordNet antonym pairs in the personality trait and person attribute entity categories, as well as the swaps ({blonde}, {brunette}), ({large}, {tiny}), ({carnivore, omnivore}, {vegan, vegetarian}), ({depressed}, {happy, cheerful}), ({clean}, {dirty}) where each entity in the left set is swapped with each entity in the right set. B Experiment Details Experiment 1 The InferSent model used the Adam (Kingma and Lei Ba, 2014) optimizer with learning rate 0.001, and otherwise used the hyperparameters from the open source implementation7. The ESIM model used a 1-layer bidirectional LSTM with hidden dimension 1024 and Adam optimizer with learning rate 0.0001, with the remaining hyper-parameters set to those used by the InferSent model. C Score Calibration 1-5 star rating Let Mi ∼N(µi, 12) be the unobserved, underlying quality of the i-th approach, where µi ∼U(1, 5). Let Aj ∼N(0, 12) be the unobserved annotator bias, indicating whether the j-th annotator is more or less generous. We observe a score given by the j-th annotator to the i-th approach, and this score follows a normal distribution with its mean given by the sum of the underlying model score and annoator bias, i.e., Sij ∼N(Mi +Aj, 12). We observe some of these scores, and given these scores, the goal is to infer E[Mi] and V[Mi] for all i. Utterance-pair selection Each annotator is asked to label each utterance-pair as consistent and/or contradictory with respect to the personas. In this case, the unobserved, underlying model score is modelled as a pre-sigmoid normal variable, i.e., Mi ∼N(0, 12), and the annotator bias as a usual normal variable, i.e., Aj ∼N(0, 12), similarly to the 1-5 star rating case above. We however also introduce a turn bias Tk ∼N(0, 12) to incorporate the potential degradation of a neural dialogue model as the conversation lengthens. An observed score for each utterance pair then follows a Bernoulli distribution with its mean given as the sigmoid of the sum of these three latent variables, i.e., Sijk ∼B(sigmoid(Mi+Aj+Tk)). The 7https://github.com/facebookresearch/InferSent goal of inference is to compute E[sigmoid(Mi)] and V[sigmoid(Mi)].
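For concreteness, a minimal Pyro sketch of the 1-5 star calibration model above is given below; the utterance-pair variant additionally introduces the turn bias Tk and a Bernoulli likelihood on the sigmoid of the latent sum. The data layout (parallel index arrays for models and annotators plus a score tensor) is an assumption made for illustration, not the exact interface used in our experiments.

import torch
import pyro
import pyro.distributions as dist
from pyro.infer import MCMC, NUTS

def star_rating_model(model_idx, annotator_idx, scores, n_models, n_annotators):
    # Latent model quality M_i ~ N(mu_i, 1) with mu_i ~ U(1, 5).
    mu = pyro.sample("mu", dist.Uniform(1.0, 5.0).expand([n_models]).to_event(1))
    M = pyro.sample("M", dist.Normal(mu, 1.0).to_event(1))
    # Latent annotator bias A_j ~ N(0, 1).
    A = pyro.sample("A", dist.Normal(torch.zeros(n_annotators), 1.0).to_event(1))
    # Observed scores S_ij ~ N(M_i + A_j, 1).
    with pyro.plate("obs", len(scores)):
        pyro.sample("S", dist.Normal(M[model_idx] + A[annotator_idx], 1.0), obs=scores)

# Posterior inference with the No-U-Turn Sampler, as in the paper:
# mcmc = MCMC(NUTS(star_rating_model), num_samples=1000, warmup_steps=500)
# mcmc.run(model_idx, annotator_idx, scores, n_models, n_annotators)
# samples = mcmc.get_samples()["M"]   # use these draws to estimate E[M_i] and V[M_i]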
2019
363
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3742–3751 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3742 Budgeted Policy Learning for Task-Oriented Dialogue Systems Zhirui Zhang† Xiujun Li‡§ Jianfeng Gao‡ Enhong Chen† †University of Science and Technology of China ‡Microsoft Research AI §University of Washington †[email protected][email protected] ‡{xiul,jfgao}@microsoft.com Abstract This paper presents a new approach that extends Deep Dyna-Q (DDQ) by incorporating a Budget-Conscious Scheduling (BCS) to best utilize a fixed, small amount of user interactions (budget) for learning task-oriented dialogue agents. BCS consists of (1) a Poissonbased global scheduler to allocate budget over different stages of training; (2) a controller to decide at each training step whether the agent is trained using real or simulated experiences; (3) a user goal sampling module to generate the experiences that are most effective for policy learning. Experiments on a movie-ticket booking task with simulated and real users show that our approach leads to significant improvements in success rate over the state-ofthe-art baselines given the fixed budget. 1 Introduction There has been a growing interest in exploiting reinforcement learning (RL) for dialogue policy learning in task-oriented dialogue systems (Levin et al., 1997; Williams, 2008; Young et al., 2013; Fatemi et al., 2016; Zhao and Esk´enazi, 2016; Su et al., 2016; Li et al., 2017; Williams et al., 2017; Dhingra et al., 2017; Budzianowski et al., 2017; Chang et al., 2017; Liu and Lane, 2017; Liu et al., 2018; Gao et al., 2019). This is a challenging machine learning task because an RL learner requires real users to interact with a dialogue agent constantly to provide feedback. The process incurs significant real-world cost for complex tasks, such as movie-ticket booking and travel planning, which require exploration in a large state-action space. In reality, we often need to develop a dialogue agent with some fixed, limited budget due to limited project funding, conversational data, and development time. Specifically, in this study we measure budget in terms of the number of real user interactions. That is, we strive to optimize a dialogue agent via a fixed, small number of interactions with real users. One common strategy is to leverage a user simulator built on human conversational data (Schatzmann et al., 2007; Li et al., 2016). However, due to design bias and the limited amounts of publicly available human conversational data for training the simulator, there always exists discrepancies between the behaviors of real and simulated users, which inevitably leads to a sub-optimal dialogue policy. Another strategy is to integrate planning into dialogue policy learning, as the Deep Dyna-Q (DDQ) framework (Peng et al., 2018), which effectively leverages a small number of real experiences to learn a dialogue policy efficiently. In DDQ, the limited amounts of real user experiences are utilized for: (1) training a world model to mimic real user behaviors and generate simulated experiences; and (2) improving the dialogue policy using both real experiences via direct RL and simulated experiences via indirect RL (planning). Recently, some DDQ variants further incorporate discriminators (Su et al., 2018) and active learning (Wu et al., 2019) into planning to obtain highquality simulated experiences. DDQ and its variants face two challenges in the fixed-budget setting. 
First, DDQ lacks any explicit guidance on how to generate highly effective real dialogue experiences. For example, the experiences in the state-action space that has not, or less, been explored by the dialogue agent are usually more desirable. Second, DDQ lacks a mechanism of letting a human (teacher) play the role of the agent to explicitly demonstrate how to drive the dialogue (Barlier et al., 2018). This is useful in the cases where the dialogue agent fails to respond to users in conversations and the sparse negative rewards fail to help the agent improve its dialogue policy. To this end, DDQ needs to be equipped 3743 Dialogue Agent User World Model Controller Human Conversational Data Scheduler Budget Real Experience Simulated Experience User Goal Sampling Module Planning Direct Reinforcement Learning Acting BCS Figure 1: Proposed BCS-DDQ framework for dialogue policy learning. BCS represents the Budget-Conscious Scheduling module, which consists of a scheduler, a controller and a user goal sampling module. with the ability to decide whether to learn from human demonstrations or from agent-user interactions where the user can be a real user or simulated by the world model. In this paper, we propose a new framework, called Budget-Conscious Scheduling-based Deep Dyna-Q (BCS-DDQ), to best utilize a fixed, small number of human interactions (budget) for taskoriented dialogue policy learning. Our new framework extends DDQ by incorporating BudgetConscious Scheduling (BCS), which aims to control the budget and improve DDQ’s sample efficiency by leveraging active learning and human teaching to handle the aforementioned issues. As shown in Figure 1, the BCS module consists of (1) a Poisson-based global scheduler to allocate budget over the different stages of training; (2) a user goal sampling module to select previously failed or unexplored user goals to generate experiences that are effective for dialogue policy learning; (3) a controller which decides (based on the pre-allocated budget and the agent’s performance on the sampled user goals) whether to collect human-human conversation, or to conduct humanagent interactions to obtain high-quality real experiences, or to generate simulated experiences through interaction with the world model. During dialogue policy learning, real experiences are used to train the world model via supervised learning (world model learning) and directly improve the dialogue policy via direct RL, while simulated experiences are used to enhance the dialogue policy via indirect RL (planning). Experiments on the movie-ticket booking task with simulated and real users show that our approach leads to significant improvements in success rate over the state-of-the-art baselines given a fixed budget. Our main contributions are two-fold: • We propose a BCS-DDQ framework, to best utilize a fixed, small amount of user interactions (budget) for task-oriented dialogue policy learning. • We empirically validate the effectiveness of BCS-DDQ on a movie-ticket booking domain with simulated and real users. 
2 Budget-Conscious Scheduling-based Deep Dyna-Q (BCS-DDQ)

As illustrated in Figure 2, the BCS-DDQ dialogue system consists of six modules: (1) an LSTM-based natural language understanding (NLU) module (Hakkani-Tür et al., 2016) for identifying user intents and extracting associated slots; (2) a state tracker (Mrksic et al., 2017) for tracking dialogue state; (3) a dialogue policy that chooses the next action based on the current state and database results; (4) a model-based natural language generation (NLG) module for producing a natural language response (Wen et al., 2015); (5) a world model for generating simulated user actions and simulated rewards; and (6) the BCS module, incorporating a global scheduler, a user goal sampling module and a controller, to manage the budget and select the most effective way to generate real or simulated experiences for learning a dialogue policy.

Figure 2: Illustration of the proposed BCS-DDQ dialogue system.

To leverage BCS in dialogue policy learning, we design a new iterative training algorithm, called BCS-DDQ, as summarized in Algorithm 1. It starts with an initial dialogue policy and an initial world model, both trained with pre-collected human conversational data. Given the total budget b and maximum number of training epochs m, the scheduler allocates the available budget bk at each training step. Then, the user goal sampling module actively selects a previously failed or unexplored user goal gu. Based on the agent's performance and the current pre-allocated budget, the controller chooses the most effective way, with cost cu ∈ {0, 1, 2}, to generate real or simulated experiences Br/Bs for this sampled user goal. For convenience, the cost of different dialogue generation methods is defined as the number of people involved:
• cost cu = 2 for collecting human-human demonstrated conversational data.
• cost cu = 1 for conducting the interactions between human and agent.
• cost cu = 0 for performing the interactions between world model and agent.
The generated experiences are used to update the dialogue policy and the world model. This process continues until all pre-allocated budget is exhausted. In the rest of this section, we detail the components of BCS, and describe the learning methods of the dialogue agent and the world model.

Algorithm 1 BCS-DDQ for Dialogue Policy Learning
Input: The total budget b, the maximum number of training epochs m, the dialogue agent A and the world model W (both pre-trained with pre-collected human conversational data);
1: procedure TRAINING PROCESS
2:   while k < m do
3:     bk ← Scheduler(b, m, k);
4:     repeat
5:       gu ← UserGoalSampler(A);
6:       Br, Bs, cu ← Controller(gu, bk, A, W);
7:       bk ← bk − cu;
8:     until bk ≤ 0
9:     Train the dialogue agent A on Br ∪ Bs
10:    Train the world model W on Br
11:  end while
12: end procedure

2.1 Budget-Conscious Scheduling (BCS)

As illustrated in Figure 2 and Algorithm 1, BCS consists of a budget allocation algorithm for the scheduler, an active sampling strategy for the user goal sampling module, and a selection policy for the controller.
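Algorithm 1 can be read as the following schematic Python loop. This is a sketch only: Scheduler, UserGoalSampler and Controller stand in for the components specified in Sections 2.1.1-2.1.3, and the agent and world-model training calls are placeholders for the learning procedures of Sections 2.2 and 2.3.

def bcs_ddq_training(agent, world_model, scheduler, goal_sampler, controller, b, m):
    for k in range(1, m + 1):
        b_k = scheduler(b, m, k)                      # line 3: Poisson-based budget allocation
        real_buf, sim_buf = [], []
        while True:                                   # lines 4-8: repeat ... until b_k <= 0
            g_u = goal_sampler(agent)                 # previously failed or unexplored user goal
            B_r, B_s, cost = controller(g_u, b_k, agent, world_model)  # cost in {0, 1, 2}
            real_buf.extend(B_r)
            sim_buf.extend(B_s)
            b_k -= cost
            if b_k <= 0:
                break
        agent.train(real_buf + sim_buf)               # line 9: direct RL on B^r, planning on B^s
        world_model.train(real_buf)                   # line 10: supervised world-model update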
2.1.1 Poisson-based Budget Allocation

The global scheduler is designed to allocate budget {b_1, . . . , b_m} (where m is the final training epoch) during training. The budget allocation process can be viewed as a series of random events, where the allocated budget is a random variable. In this manner, the whole allocation process is essentially a discrete stochastic process, which can be modeled as a Poisson process. Specifically, at each training step k, the probability that the random variable b_k equals n is given by:

P\{b_k = n\} = \frac{\lambda_k^n}{n!} e^{-\lambda_k}, \qquad \lambda_k = \frac{m+1-k}{m}\lambda    (1)

The global scheduling in BCS is based on a decayed Poisson process, motivated by two considerations: (1) For simplicity, we assume that all budget allocations are mutually independent. The Poisson process is suitable for this assumption. (2) As the training process progresses, the dialogue agent tends to produce higher-quality dialogue experiences using the world model, due to the improving performance of both the agent and the world model. As a result, the budget demand for the agent decays during the course of training. Thus, we linearly decay the parameter of the Poisson distribution so as to allocate more budget at the beginning of training.

In addition, to ensure that the sum of the allocated budget does not exceed the total budget b, we impose the following constraint:

\sum_{k=1}^{m} \mathbb{E}[b_k] = \sum_{k=1}^{m} \frac{m+1-k}{m}\lambda \le b    (2)

Using this formula, we can calculate the range of the parameter value: \lambda \le \frac{2b}{m+1}. In our experiments, we set \lambda = \frac{2b}{m+1} and sample bk from the probability distribution defined in Equation 1.

2.1.2 Active Sampling Strategy

The active sampling strategy involves the definition of a user goal space and a sampling algorithm. In a typical task-oriented dialogue (Schatzmann et al., 2007), the user begins a conversation with a user goal gu which consists of multiple constraints. In fact, these constraints correspond to attributes in the knowledge base. For example, in the movie-ticket-booking scenario, the constraints may be the name of the theater (theater), the number of tickets to buy (numberofpeople) or the name of the movie (moviename), and so on. Given the knowledge base, we can generate large amounts of user goals by traversing the combinations of all the attributes, and then filtering out unreasonable user goals which are not similar to real user goals collected from human-human conversational data. We then group the user goals with the same inform and request slots into a category. Suppose there are altogether l different categories of user goals. We design a Thompson-Sampling-like algorithm (Chapelle and Li, 2011; Eckles and Kaptein, 2014; Russo et al., 2018) to actively select a previously failed or unexplored user goal in two steps.
• Draw a number p_i for each category following p_i \sim \mathcal{N}\big(f_i, \sqrt{\tfrac{l \ln N}{n_i}}\big), where \mathcal{N} represents the Gaussian distribution, f_i denotes the failure rate of each category estimated on the validation set, n_i is the number of samples for each category, and N = \sum_i n_i.
• Select the category with maximum p_i, then randomly sample a user goal gu in that category.
Using this method, user goals in the categories with higher failure rates or less exploration are more likely to be selected during training, which encourages the real or simulated user to generate dialogue experiences in the state-action space that the agent has not fully explored.
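A compact sketch of the scheduler and the active sampling step is given below, assuming NumPy and per-category statistics (failure rates f_i and counts n_i estimated on the validation set); the variable and function names are ours.

import numpy as np

rng = np.random.default_rng()

def allocate_budget(b, m, k):
    # Sample b_k from the decayed Poisson distribution of Equation 1 (k runs from 1 to m).
    lam = 2.0 * b / (m + 1)              # largest lambda satisfying the constraint in Equation 2
    lam_k = (m + 1 - k) / m * lam        # linearly decayed rate: more budget early in training
    return rng.poisson(lam_k)

def sample_goal_category(failure_rates, counts, l):
    # Thompson-Sampling-like choice of a user-goal category (step one of the procedure above).
    f = np.asarray(failure_rates, dtype=float)
    n = np.asarray(counts, dtype=float)
    N = n.sum()
    p = rng.normal(loc=f, scale=np.sqrt(l * np.log(N) / n))  # high failure rate or low count wins
    return int(np.argmax(p))             # a user goal g_u is then drawn uniformly from this category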
2.1.3 Controller

Given a sampled user goal gu, based on the agent's performance on gu and the pre-allocated budget bk, the controller decides whether to collect human-human dialogues, human-agent dialogues, or simulated dialogues between the agent and the world model. We design a heuristic selection policy (Equation 3) where dialogue experiences B are collected as follows: we first generate a set of simulated dialogues B^s given gu, and record the success rate S_gu. If S_gu is higher than a threshold λ1 (i.e., λ1 = 2/3) or there is no budget left, we use B^s for training. If S_gu is lower than a threshold λ2 (i.e., λ2 = 1/3) and there is still budget, we resort to human agents and real users to generate real experiences B^r_hh. Otherwise, we collect real experiences generated by human users and the dialogue agent, B^r_ha.

(B, c^u) = \begin{cases} (B^s, 0) & \text{if } S_{g^u} \ge \lambda_1 \text{ or } b_k = 0 \\ (B^r_{hh}, 2) & \text{if } S_{g^u} \le \lambda_2 \text{ and } b_k \ge 2 \\ (B^r_{ha}, 1) & \text{otherwise} \end{cases}    (3)

Combined with the active sampling strategy, this selection policy makes fuller use of the budget to generate experiences that are most effective for dialogue policy learning.

2.2 Direct Reinforcement Learning and Planning

Policy learning in task-oriented dialogue using RL can be cast as a Markov Decision Process which consists of a sequence of <state, action, reward> tuples. We can use the same Q-learning algorithm to train the dialogue agent using either real or simulated experiences. Here we employ the Deep Q-network (DQN) (Mnih et al., 2015). Specifically, at each step, the agent observes the dialogue state s, then chooses an action a using an ϵ-greedy policy that selects a random action with probability ϵ, and otherwise follows the greedy policy a = arg max_{a'} Q(s, a'; θ_Q). Q(s, a; θ_Q) approximates the state-action value function with a Multi-Layer Perceptron (MLP) parameterized by θ_Q. Afterwards, the agent receives reward r, observes the next user or simulator response, and updates the state to s'. The experience (s, a, r, a^u, s') is then stored in a real experience buffer B^r (where B^r = {B^r_hh, B^r_ha}) or a simulated experience buffer B^s, depending on the source. Given these experiences, we optimize the value function Q(s, a; θ_Q) through the mean-squared loss:

L(\theta_Q) = \mathbb{E}_{(s,a,r,s') \sim B^r \cup B^s}\big[(y - Q(s, a; \theta_Q))^2\big], \qquad y = r + \gamma \max_{a'} Q'(s', a'; \theta_{Q'})    (4)

where γ ∈ [0, 1] is a discount factor, and Q'(·) is the target value function that is updated only periodically (i.e., fixed-target). The updating of Q(·) is thus conducted by differentiating this objective function via mini-batch gradient descent.

2.3 World Model Learning

We utilize the same design of the world model as in Peng et al. (2018), which is implemented as a multi-task deep neural network. At each turn of a dialogue, the world model takes the current dialogue state s and the last system action a from the agent as input, and generates the corresponding user response a^u, reward r, and a binary termination signal t. The computation of each term is shown below:

h = \tanh(W_h[s, a] + b_h)
r = W_r h + b_r
a^u = \mathrm{softmax}(W_a h + b_a)
t = \mathrm{sigmoid}(W_t h + b_t)    (5)

where all W and b are weight matrices and bias vectors respectively.

3 Experiments

We evaluate BCS-DDQ on a movie-ticket booking task in three settings: simulation, human evaluation and human-in-the-loop training. All the experiments are conducted at the text level.

3.1 Setup

Dataset. The dialogue dataset used in this study is a subset of the movie-ticket booking dialogue dataset released in the Microsoft Dialogue Challenge (Li et al., 2018).
Our dataset consists of 280 dialogues, which have been manually labeled based on the schema defined by domain experts, as in Table 1. The average length of these dialogues is 11 turns. Dialogue Agents. We benchmark the BCSDDQ agent with several baseline agents: • The SL agent is learned by a variant of imitation learning (Lipton et al., 2018). At the beginning of training, the entire budget is used to colIntent request, inform, deny, confirm question, confirm answer, greeting, closing, not sure, multiple choice, thanks, welcome Slot city, closing, date, distanceconstraints, greeting, moviename, numberofpeople, price, starttime, state, taskcomplete, theater, theater chain, ticket, video format, zip Table 1: The dialogue annotation schema lect human-human dialogues, based on which the dialogue agent is trained. • The DQN agent is learned by standard DQN At each epoch of training, the budget is spent on human-agent interactions, and the agent is trained by direct RL. • The DDQ agent is learned by the DDQ method (Peng et al., 2018). The training process is similar to that of the DQN agent, differing in that DDQ integrates a jointly-trained world model to generate simulated experience which can further improve the dialogue policy. At each epoch of training, the budget is spent on human-agent interactions. • The BCS-DDQ agent is learned by the proposed BCS-DDQ approach. For a fair comparison, we use the same number of training epochs m used for the DQN and DDQ agents. Hyper-parameter Settings. We use an MLP to parameterize function Q(·) in all the dialogue agents (SL, DQN, DDQ and BCS-DDQ), with hidden layer size set to 80. The ϵ-greedy policy is adopted for exploration. We set discount factor γ = 0.9. The target value function Q′(·) is updated at the end of each epoch. The world model contains one shared hidden layer and three task-specific hidden layers, all of size 80. The number of planning steps is set to 5 for using the world model to improve the agent’s policy in DDQ and BCS-DDQ frameworks. Each dialogue is allowed a maximum of 40 turns, and dialogues exceeding this maximum are considered failures. Other parameters used in BCS-DDQ are set as l = 128, d = 10. Training Details. The parameters of all neural networks are initialized using a normal distribution with a mean of 0 and a variance of p 6/(drow + dcol), where drow and dcol are the number of rows and columns in the structure (Glorot and Bengio, 2010). All models are optimized by RMSProp (Tieleman and Hinton, 2012). The mini-batch size is set to 16 and the initial learn3747 90 SL DQN DDQ(5) Our method 0 10 20 30 40 50 60 70 80 90 0 50 100 Success Rate(%) 10 20 30 40 50 60 70 80 90 50 100 150 200 250 300 Success Rate(%) Budget SL DQN DDQ BCS-DDQ Figure 3: The success rates of different agents (SL, DQN, DDQ, BCS-DDQ) given a fixed budget (b = {50, 100, 150, 200, 250, 300}). Each number is averaged over 5 runs, each run tested on 50 dialogues. ing rate is 5e-4. The buffer sizes of Br and Bs are set to 3000. In order to train the agents more efficiently, we utilized a variant of imitation learning, Reply Buffer Spiking (Lipton et al., 2018), to pre-train all agent variants at the starting stage. 3.2 Simulation Evaluation In this setting, the dialogue agents are trained and evaluated by interacting with the user simulators (Li et al., 2016) instead of real users. In spite of the discrepancy between simulated and real users, this setting enables us to perform a detailed analysis of all agents without any real-world cost. 
During training, the simulator provides a simulated user response on each turn and a reward signal at the end of the dialogue. The dialogue is considered successful if and only if a movie ticket is booked successfully and the information provided by the agent satisfies all the user’s constraints (user goal). When the dialogue is completed, the agent receives a positive reward of 2 ∗L for success, or a negative reward of −L for failure, where L is the maximum number of turns allowed (40). To encourage shorter dialogues, the agent receives a reward of −1 on each turn. In addition to the user simulator, the training of SL and BCS-DDQ agents requires a highperformance dialogue agent to play the role of the human agent in collecting human-human conversational data. In the simulation setting, we leverage a well-trained DQN agent as the human agent. 0 50 100 150 200 250 300 Simulation Epoch 0 10 20 30 40 50 60 70 80 90 Success Rate(%) SL DQN DDQ BCS-DDQ Figure 4: The learning curves of different agents (DQN, DDQ and BCS-DDQ) with budget b = 300. Main Results. We evaluate the performance of all agents (SL, DQN, DDQ, BCS-DDQ) given a fixed budget (b = {50, 100, 150, 200, 250, 300}). As shown in Figure 3, the BCS-DDQ agent consistently outperforms other baseline agents by a statistically significant margin. Specifically, when the budget is small (b = 50), SL does better than DQN and DDQ that haven’t been trained long enough to obtain significant positive reward. BCSDDQ leverages human demonstrations to explicitly guide the agent’s learning when the agent’s performance is very bad. In this way, BCS-DDQ not only takes advantage of imitation learning, but also further improves the performance via exploration and RL. As the budget increases, DDQ can leverage real experiences to learn a good policy. Our method achieves better performance than DDQ, demonstrating that the BCS module can better utilize the budget by directing exploration to parts of the state-action space that have been less explored. Learning Curves. We also investigate the training process of different agents. Figure 4 shows the learning curves of different agents with a fixed budget (b = 300). At the beginning of training, similar to a very small budget situation, the performance of the BCS-DDQ agent improves faster thanks to its combination of imitation learning and reinforcement learning. After that, BCS-DDQ consistently outperforms DQN and DDQ as training progresses. This proves that the BCS module can generate higher quality dialogue experiences for training dialogue policy. 3748 Agent Epoch=100 Epoch=150 Epoch=200 Success Reward Turns Success Reward Turns Success Reward Turns DQN 0.3032 -18.77 32.31 0.4675 2.07 30.07 0.5401 18.94 26.59 DDQ 0.4204 -2.24 27.34 0.5467 15.46 22.26 0.6694 32.00 18.66 BCS-DDQ 0.7542 43.80 15.42 0.7870 47.38 16.13 0.7629 44.45 16.20 Table 2: The performance of different agents at training epoch = {100, 150, 200} in the human-in-the-loop experiments. The differences between the results of all agent pairs evaluated at the same epoch are statistically significant (p < 0.05). (Success: success rate) 0 0.1 0.2 0.3 0.4 0.5 SL Success Rate 0.6 0.7 0.8 0.9 Success Rate 34.39 54.05 64.12 81.94 0 10 20 30 40 50 60 70 80 90 SL DQN DDQ BCS-DDQ Success Rate(%) 70 74 78 72 p=0.013 Figure 5: The human evaluation results for SL, DQN, DDQ and BCS-DDQ agents, the number of test dialogues indicated on each bar, and the p-values from a two-sided permutation test. 
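For reference, the reward scheme described at the start of this subsection amounts to the following, where L is the maximum number of allowed turns (40); the function names are ours and serve only as an illustration.

MAX_TURNS = 40  # L

def turn_reward():
    # -1 on each turn, to encourage shorter dialogues.
    return -1

def final_reward(success):
    # +2L if the ticket is booked and all user constraints are satisfied, -L otherwise.
    return 2 * MAX_TURNS if success else -MAX_TURNS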
The differences between the results of all agent pairs are statistically significant (p < 0.05). 3.3 Human Evaluation For human evaluation, real users interact with different agents without knowing which agent is behind the system. At the beginning of each dialogue session, we randomly pick one agent to converse with the user. The user is provided with a randomly-sampled user goal, and the dialogue session can be terminated at any time, if the user believes that the dialogue is unlikely to succeed, or if it lasts too long. In either case, the dialogue is considered as failure. At the end of each dialogue, the user is asked to give explicit feedback about whether the conversation is successful. Four agents (SL, DQN, DDQ and BCS-DDQ) trained in simulation (with b = 300) are selected for human evaluation. As illustrated in Figure 5, the results are consistent with those in the simulation evaluation (the rightmost group with budget=300 in Figure 3). In addition, due to the discrepancy between simulated users and real users, the success rates of all agents drop compared to the simulation evaluation, but the performance degra0 50 100 150 200 Epoch 0 10 20 30 40 50 60 70 80 90 Success Rate(%) DQN DDQ BCS-DDQ Figure 6: Human-in-the-Loop learning curves of different agents with budget b = 200. dation of BCS-DDQ is minimal. This indicates that our approach is more robust and effective than the others. 3.4 Human-in-the-Loop Training We further verify the effectiveness of our method in human-in-the-loop training experiments. In this experiment, we replace the user simulator with real users during training. Similar to the human evaluation, based on a randomly-sampled user goal, the user converses with a randomly-selected agent and gives feedback as to whether the conversation is successful. In order to collect humanhuman conversations during the training of the BCS-DDQ agent, human agents are interacting directly with real users through the dialogue system. In a dialogue session, the human agent has access to the conversation history, as well as the current search results from the knowledge base, before selecting each dialogue action in response to the real user. Each learning curve is trained with two runs, with each run assigning a budget of 200 human interactions. The main results are presented in Table 2 and Figure 6. We can see that the BCS-DDQ agent consistently outperforms DQN and DDQ during the course of training, confirming the conclusion 3749 Real User Dialogue Sample movie-ticket booking user goal: { “constraint slots”: { “request slots”: { “numberofpeople”: “four” “ticket”:“?” “moviename”: “creed” “theater”: “?” “city”: “regency” } “date”: “tomorrow” “starttime”: “around noon” } } DDQ agent 1 usr: I want to watch creed at around noon. 2 agt: around noon is available. 3 usr: I want four tickets please! 4 agt: Which city would you like? 5 usr: I want to watch at regency. 6 agt: Which theater would you like? 7 usr: Which theater is available? 8 agt: What date would you like to watch it? 9 usr: I want to set it up tomorrow. 10 agt: Which theater would you like? 11 usr: Which theater is available? 12 agt: Which city would you like? 13 usr: I want to watch at regency. 14 agt: Which theater would you like? 15 usr: Which theater is available? 16 agt: What date would you like to watch it? ... Failure: Agent does not book the right ticket within the max turns. BCS-DDQ agent 1 usr: I want to watch creed at around noon. 2 agt: Which city would you like? 3 usr: I want to watch at regency. 
4 agt: What date would you like to watch it? 5 usr: I want to set it up tomorrow. 6 agt: Which theater would you like? 7 usr: Which theater is available? 8 agt: century eastport 16 is available. 9 usr: Could you help me to book the tickets? 8 agt: How many tickets do you need? 9 usr: I want four tickets please! 10 agt: Great - I was able to purchase 4 tickets for you to see creed tomorrow at century eastport 16 theater in regency at around noon. 11 usr: Thank you. 12 agt: Thank you. Success Table 3: Sample dialogue sessions by DDQ and BCSDDQ agents trained at epoch 200 (with total budget b = 200) in the human-in-the-loop experiments: (agt: agent, usr: user) drawn from the simulation evaluation. Besides, Table 3 shows example dialogues produced by two dialogue agents (DDQ and BCS-DDQ) interacting with human users respectively. We can see that DDQ agent fails to respond to the user question “which theater is available?”, which lead to the repeated inquiry of theater information. By introducing human demonstrations for agent training, 0 50 100 150 200 250 300 Simulation Epoch 0 10 20 30 40 50 60 70 80 90 Success Rate(%) BCS-DDQ BCS-DDQ-var1 BCS-DDQ-var2 Figure 7: The learning curves of BCS-DDQ and its variants agents with budget b = 300. BCS-DDQ agent can successfully respond to the available theater information. 3.5 Ablation Study We investigate the relative contribution of the budget allocation algorithm and the active sampling strategy in BCS-DDQ by implementing two variant BCS-DDQ agents: • The BCS-DDQ-var1 agent: Replacing the decayed Poisson process with a regular Poisson process in the budget allocation algorithm, which means that the parameter λ is set to b m during training. • The BCS-DDQ-var2 agent: Further replacing the active sampling strategy with random sampling, based on the BCS-DDQ-var1 agent. The results in Figure 7 shows that the budget allocation algorithm and active sampling strategy are helpful for improving a dialogue policy in the limited budget setting. The active sampling strategy is more important, without which the performance drops significantly. 4 Conclusion We presented a new framework BCS-DDQ for task-oriented dialogue policy learning. Compared to previous work, our approach can better utilize the limited real user interactions in a more efficient way in the fixed budget setting, and its effectiveness was demonstrated in the simulation evaluation, human evaluation, including human-in-theloop experiments. In future, we plan to investigate the effectiveness of our method on more complex task-oriented dialogue datasets. Another interesting direction 3750 is to design a trainable budget scheduler. In this paper, the budget scheduler was created independently of the dialogue policy training algorithm, so a trainable budget scheduler may incur additional cost. One possible solution is transfer learning, in which we train the budget scheduler on some welldefined dialogue tasks, then leverage this scheduler to guide the policy learning on other complex dialogue tasks. 5 Acknowledgments We appreciate Sungjin Lee, Jinchao Li, Jingjing Liu, Xiaodong Liu, and Ricky Loynd for the fruitful discussions. We would like to thank the volunteers from Microsoft Research for helping us with the human evaluation and the human-in-the-loop experiments. We also thank the anonymous reviewers for their careful reading of our paper and insightful comments. This work was done when Zhirui Zhang was an intern at Microsoft Research. 
References Merwan Barlier, Romain Laroche, and Olivier Pietquin. 2018. Training dialogue systems with human advice. In AAMAS. Pawel Budzianowski, Stefan Ultes, Pei hao Su, Nikola Mrksic, Tsung-Hsien Wen, I˜nigo Casanueva, Lina Maria Rojas-Barahona, and Milica Gasic. 2017. Sub-domain modelling for dialogue management with hierarchical reinforcement learning. In SIGDIAL. Cheng Chang, Runzhe Yang, Lu Chen, Xiang Zhou, and Kai Yu. 2017. Affordable on-line dialogue policy learning. In EMNLP. Olivier Chapelle and Lihong Li. 2011. An empirical evaluation of thompson sampling. In NIPS. Bhuwan Dhingra, Lihong Li, Xiujun Li, Jianfeng Gao, Yun-Nung Chen, Faisal Ahmed, and Li Deng. 2017. End-to-end reinforcement learning of dialogue agents for information access. In ACL. Dean Eckles and Maurits Kaptein. 2014. Thompson sampling with the online bootstrap. CoRR, abs/1410.4009. Mehdi Fatemi, Layla El Asri, Hannes Schulz, Jing He, and Kaheer Suleman. 2016. Policy networks with two-stage training for dialogue systems. In SIGDIAL Conference. Jianfeng Gao, Michel Galley, and Lihong Li. 2019. Neural approaches to conversational ai. Foundations and Trends R⃝in Information Retrieval, 13(23):127–298. Xavier Glorot and Yoshua Bengio. 2010. Understanding the difficulty of training deep feedforward neural networks. In AISTATS. Dilek Z. Hakkani-T¨ur, G¨okhan T¨ur, Asli elikyilmaz, Yun-Nung Chen, Jianfeng Gao, Li Deng, and YeYi Wang. 2016. Multi-domain joint semantic frame parsing using bi-directional rnn-lstm. In INTERSPEECH. Esther Levin, Roberto Pieraccini, and Wieland Eckert. 1997. Learning dialogue strategies within the markov decision process framework. In ASRU 1997, pages 72–79. Xiujun Li, Yun-Nung Chen, Lihong Li, and Jianfeng Gao. 2017. End-to-end task-completion neural dialogue systems. In IJCNLP. Xiujun Li, Zachary C Lipton, Bhuwan Dhingra, Lihong Li, Jianfeng Gao, and Yun-Nung Chen. 2016. A user simulator for task-completion dialogues. arXiv preprint arXiv:1612.05688. Xiujun Li, Sarah Panda, Jingjing Liu, and Jianfeng Gao. 2018. Microsoft dialogue challenge: Building end-to-end task-completion dialogue systems. arXiv preprint arXiv:1807.11125. Zachary C Lipton, Jianfeng Gao, Lihong Li, Xiujun Li, Faisal Ahmed, and Li Deng. 2018. Efficient exploration for dialogue policy learning with bbq networks & replay buffer spiking. AAAI. Bing Liu and Ian Lane. 2017. Iterative policy learning in end-to-end trainable task-oriented neural dialog models. ASRU, pages 482–489. Bing Liu, G¨okhan T¨ur, Dilek Z. Hakkani-T¨ur, Pararth Shah, and Larry P. Heck. 2018. Dialogue learning with human teaching and feedback in end-toend trainable task-oriented dialogue systems. In NAACL-HLT. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin A. Riedmiller, Andreas Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. 2015. Human-level control through deep reinforcement learning. Nature, 518:529–533. Nikola Mrksic, Diarmuid ´O S´eaghdha, Tsung-Hsien Wen, Blaise Thomson, and Steve J. Young. 2017. Neural belief tracker: Data-driven dialogue state tracking. In ACL. Baolin Peng, Xiujun Li, Jianfeng Gao, Jingjing Liu, and Kam-Fai Wong. 2018. Integrating planning for task-completion dialogue policy learning. In ACL. Daniel J Russo, Benjamin Van Roy, Abbas Kazerouni, Ian Osband, Zheng Wen, et al. 2018. A tutorial on thompson sampling. 
Foundations and Trends R⃝in Machine Learning, 11(1):1–96. 3751 Jost Schatzmann, Blaise Thomson, Karl Weilhammer, Hui Ye, and Steve J. Young. 2007. Agenda-based user simulation for bootstrapping a pomdp dialogue system. In HLT-NAACL. Peihao Su, Milica Gasic, Nikola Mrksic, Lina Maria Rojas-Barahona, Stefan Ultes, David Vandyke, Tsung-Hsien Wen, and Steve J. Young. 2016. Continuously learning neural dialogue management. CoRR, abs/1606.02689. Shang-Yu Su, Xiujun Li, Jianfeng Gao, Jingjing Liu, and Yun-Nung Chen. 2018. Discriminative deep dyna-q: Robust planning for dialogue policy learning. In EMNLP. Tijmen Tieleman and Geoffrey Hinton. 2012. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural networks for machine learning, 4(2):26–31. Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Pei hao Su, David Vandyke, and Steve J. Young. 2015. Semantically conditioned lstm-based natural language generation for spoken dialogue systems. In EMNLP. Jason D Williams. 2008. The best of both worlds: Unifying conventional dialog systems and pomdps. In Ninth Annual Conference of the International Speech Communication Association. Jason D. Williams, Kavosh Asadi, and Geoffrey Zweig. 2017. Hybrid code networks: practical and efficient end-to-end dialog control with supervised and reinforcement learning. In ACL. Yuexin Wu, Xiujun Li, Jianfeng Gao, Jingjing Liu, and Yiming Yang. 2019. Switch-based active deep dynaq: Efficient adaptive planning for task-completion dialogue policy learning. In AAAI. Steve J. Young, Milica Gasic, Blaise Thomson, and Jason D. Williams. 2013. Pomdp-based statistical spoken dialog systems: A review. Proceedings of the IEEE, 101:1160–1179. Tiancheng Zhao and Maxine Esk´enazi. 2016. Towards end-to-end learning for dialog state tracking and management using deep reinforcement learning. In SIGDIAL Conference.
2019
364
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3752–3762 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3752 Comparison of Diverse Decoding Methods from Conditional Language Models Daphne Ippolito⋆ Reno Kriz⋆ Maria Kustikova Jo˜ao Sedoc Chris Callison-Burch ⋆Authors contributed equally University of Pennsylvania {daphnei,rekriz,mkust,joao,ccb}@seas.upenn.edu Abstract While conditional language models have greatly improved in their ability to output high-quality natural language, many NLP applications benefit from being able to generate a diverse set of candidate sequences. Diverse decoding strategies aim to, within a givensized candidate list, cover as much of the space of high-quality outputs as possible, leading to improvements for tasks that re-rank and combine candidate outputs. Standard decoding methods, such as beam search, optimize for generating high likelihood sequences rather than diverse ones, though recent work has focused on increasing diversity in these methods. In this work, we perform an extensive survey of decoding-time strategies for generating diverse outputs from conditional language models. We also show how diversity can be improved without sacrificing quality by oversampling additional candidates, then filtering to the desired number. 1 Introduction Conditional neural language models, which train a neural net to map from one sequence to another, have had enormous success in natural language processing tasks such as machine translation (Sutskever et al., 2014; Luong et al., 2015), text summarization (Nallapati et al., 2016), and dialog systems (Vinyals and Le, 2015). These models output a probability distribution over the next token in the output sequence given the input and the previously predicted tokens. Since computing the overall most likely output sequence is intractable, early work in neural machine translation found that beam search is an effective strategy to heuristically sample sufficiently likely sequences from these probabilistic models (Sutskever et al., 2014). However, for more open-ended tasks, beam search is ill-suited to generating a set of diverse candidate sequences; this is because candidates Beam Search A bus is stopped at a bus stop. A bus is parked at a bus stop. A bus stopped at a bus stop in a city. A bus stopped at a bus stop at a bus stop. A bus that is parked in front of a building. Random Sampling A bus parked at a bus stop at a bus stop. There is a bus that is at the station. A man standing by a bus in a city. A bus pulling away from the train station. A bus stopped at a stop on the sunny day. Figure 1: An image with the top five captions from standard beam search and from random sampling. Note the latter set is more diverse but lower quality. outputted from a large-scale beam search often only differ by punctuation and minor morphological variations (Li and Jurafsky, 2016). The term “diversity” has been defined in a variety of ways in the literature, with some using it as a synonym for sentence interestingness or unlikeliness (Hashimoto et al., 2019), and others considering it a measure of how different two or more sentences are from each other (Vijayakumar et al., 2016; Gimpel et al., 2013). We take the latter approach, and define diversity as the ability of a generative method to create a set of possible outputs that are each valid given the input, but vary as widely as possible in terms of word choice, topic, and meaning. 
There are a number of reasons why it is desirable to produce a set of diverse candidate outputs for a given input. For example, in collaborative story generation, the system makes suggestions to a user for what they should write next (Clark et al., 2018). In these settings, it would be beneficial to show the user multiple different ways to continue their story. In image captioning, any one sentence-long caption is probably missing some information about the image. Krause et al. (2017) show how a set of diverse sentence-length image captions can be transformed into an entire paragraph about the image. Lastly, in applica3753 tions that involve reranking candidate sequences, the reranking algorithms are more effective when the input sequences are diverse. Reranking diverse candidates has been shown to improve results in both open dialog and machine translation (Li et al., 2016a; Li and Jurafsky, 2016; Gimpel et al., 2013). Furthermore, in open-ended dialog, the use of reranking to personalize a model’s responses for each user is a promising research direction (Choudhary et al., 2017). With these sorts of applications in mind, a variety of alternatives and extensions to beam search have been proposed which seek to produce a set of diverse candidate responses instead of a single high likelihood one (Li et al., 2016a; Vijayakumar et al., 2016; Kulikov et al., 2018; Tam et al., 2019). Many of these approaches show marked improvement in diversity over standard beam search across a variety of generative tasks. However, there has been little attempt to compare and evaluate these strategies against each other on any single task. In this paper, we survey existing methods for promoting diversity in order to systematically investigate the relationship between diversity and perceived quality of output sequences of conditional language models. In addition to standard beam search and greedy random sampling, we compare several recently proposed modifications to both methods. In addition, we propose the use of over-sampling followed by post-decoding clustering to remove similar sequences. The main contributions of this paper can be summarized as follows: • A detailed comparison of existing diverse decoding strategies on two tasks: open-ended dialog and image captioning, and recommendations for a diverse decoding strategy. • A novel clustering-based algorithm that can be used on the results of any decoding strategy to increase quality and diversity.1 2 Standard Decoding Methods Conditional language models, which have wide applications across machine translation, text simplification, conversational agents, and more, generally consist of an encoder, which transforms some input x into a fixed-size latent representation, and a decoder which transforms these representations in order to output a conditional 1Code can be found at https://github.com/ rekriz11/DeDiv. probability of each word in the target sequence given the previous words and the input. Let zt = f(y1, . . . , yt−1, x) represent the output of an encoder-decoder model given input x and the sequence of tokens predicted so far, y1, . . . , yt−1, which for notational simplicity we write as y<t. The output zt ∈RV (where V is the cardinality of the enumerated vocabulary V) The probability distribution over the next possible token being word wi ∈V is the softmax: P(yt = wi|y<t, x) = exp(zt,i) PV j=1 exp (zt,j) ∀i ∈{1, . . . , V } Most decoding strategies strive to find the most likely overall sequence, i.e. 
pick a ŷ such that:

\hat{y} = \arg\max_y P(y|x) = \arg\max_y \prod_{t=1}^{N} P(y_t \mid y_{<t}, x)

Unlike Markovian processes, no sub-exponential algorithm exists to find the optimal decoded sequence, and thus we instead use approximations.

Arg-max The simplest approach to decoding a likely sequence is to greedily select a word at each timestep:

\hat{y}_t = \arg\max_{y_t} P(y_t \mid y_{<t}, x)

However, because this deterministic approach typically yields repetitive and short output sequences, and does not permit generating multiple samples, it is rarely used in language modelling.

Random Sampling Another option is to randomly sample from the model's distribution at every timestep. Often, a temperature parameter T is added to control the entropy of the distribution before sampling:

P(y_t = w_i \mid y_{<t}, x) = \frac{\exp(z_{t,i}/T)}{\sum_{j=1}^{V} \exp(z_{t,j}/T)} \quad \forall i \in \{1, \ldots, V\}, \qquad \hat{y}_t \sim P(y_t \mid y_{<t}, x)

Choosing a temperature greater than one causes outputs to look increasingly more random, while bringing the temperature closer to zero causes sequences to increasingly resemble greedy sampling. Recently, top-s random sampling has been proposed as an alternative to using temperature. Sampling is restricted to the s most likely tokens at each step (Fan et al., 2018; Radford et al., 2019). We find that top-s random sampling's hard restriction on generating low-probability words is more effective at controlling the stochasticity of sampled sequences than sampling with temperature.

Beam Search Beam search approximates finding the most likely sequence by performing breadth-first search over a restricted search space. At every step of decoding, the method keeps track of b partial hypotheses. The next set of partial hypotheses is chosen by expanding every path from the existing set of b hypotheses, and then choosing the b with the highest scores. Most commonly, the log-likelihood of the partial sequence is used as the scoring function. We use this as our baseline; the beam search algorithm is presented in the appendix. Since beam search only explores a limited portion of the overall search space, it tends to yield multiple variants of the same high-likelihood sequence, sequences that often only differ in punctuation and minor morphological changes (Li and Jurafsky, 2016). Therefore, standard beam search is not ideal for producing diverse outputs.

3 Extensions to Beam Search

In this section, we discuss a variety of methods that have been developed recently to eliminate redundancy during decoding and generate a wider range of candidate outputs.

Noisy Parallel Approximate Decoding Introduced by Cho (2016), NPAD is a technique that can be applied to any decoding setting. The main idea is that diversity can be achieved more naturally by taking advantage of the continuous manifold on which neural nets embed language. Instead of encouraging diversity by manipulating the probabilities outputted from the model, diverse outputs are instead produced by adding small amounts of noise to the hidden state of the decoder at each step. The noise is randomly sampled from a normal distribution. The variance is gradually annealed from a starting σ_0 to 0 as decoding progresses (that is, σ_t = σ_0 / t), under the reasoning that uncertainty is greatest at the beginning of decoding. NPAD can be used in conjunction with any decoding strategy; following the best results from the original paper, we show results using NPAD with beam search. Extensions to NPAD have sought to learn the direction in which to manipulate the hidden states using an arbitrary decoding objective (Gu et al., 2017).
Since such objectives can be highly domain-specific, we do not evaluate this method.

Top-g Capping In beam search, it is often the case that one hypothesis h is assigned a much higher probability than all other hypotheses, causing all hypotheses in the next step to have h as their parent. Following Li and Jurafsky (2016) and Li et al. (2016b), we add an additional constraint to standard beam search to encourage the model to choose options from diverse candidates. At each step t, current hypotheses are grouped according to the parental hypothesis they come from. After grouping candidates, only the top g from each grouping are considered. The resulting b × g candidates are ranked, and the top b are selected as hypotheses for the next beam step.

Hamming Diversity Reward Vijayakumar et al. (2016) propose adding an additional diversity-promoting term, θ, to the log-likelihood before reranking. This term measures how different a candidate hypothesis $c^{(i)}_{\le t}$ is from the partial hypotheses selected in the previous step. Let $H_{t-1} = \{c^{(1)}_{\le t-1}, \ldots, c^{(b)}_{\le t-1}\}$ be these partial hypotheses. Then the beam search scoring function for the ith candidate at timestep t becomes:

$$\text{score}(c^{(i)}_{\le t}) = \sum_{j=1}^{t} \log P(c^{(i)}_j \mid c^{(i)}_{<j}, x) + \lambda\,\theta(c^{(i)}_{\le t}, H_{t-1})$$

where λ is a tunable hyperparameter. Vijayakumar et al. (2016) try a variety of definitions for θ, including embedding diversity and n-gram diversity, but they find that Hamming distance, the number of tokens in the candidate sequence which exist in the previously selected partial hypotheses, is most effective. We take the negative of the Hamming distance as θ.

Iterative Beam Search In an attempt to improve the size of the search space explored without sacrificing runtime, Kulikov et al. (2018) propose an iterative beam search method. Beam search is run many times, where the states explored by subsequent beam searches are restricted based on the intermediate states explored by previous iterations. Formally, we can define the set of all partial hypotheses for beam search instance i at time step t as $H^{(i)}_t$. From here, the search space explored by beam search instance i can be expressed as $S_i = \cup_{t=1}^{T} H^{(i)}_t$. The ith beam search is prevented from generating any partial hypothesis that has previously been generated, that is, any hypothesis found in $S_{<i} = \cup_{i'=0}^{i-1} S_{i'}$. The authors also attempt a soft inclusion criterion, where any states within ϵ Hamming distance from a previously explored state are also excluded. During the experimentation of Kulikov et al. (2018), however, the soft inclusion was found to not be beneficial; thus, we only restrict exact matches of previous states in our implementation. In practice, this means that after the first beam search instance runs as normal, the first step of the second beam search instance will contain the b+1 to 2b-most likely starting tokens; this pattern holds for the third beam search instance, and so on.

Clustered Beam Search Most recently, Tam et al. (2019) proposed a clustering-based beam search method to help condense and remove meaningless responses from chatbots. Specifically, at each decoding step t, this method initially considers the top 2b candidates. From there, each candidate sequence is embedded,3 and the embeddings are clustered into c clusters using K-means. Finally, we take the top b/c candidates from each cluster. Note that in the case any clusters have size less than b/c, we then include the highest-ranked candidates not found after clustering.

3We follow Tam et al. (2019) and used averaged GloVe word embeddings (Pennington et al., 2014).

Random Sampling: Standard decoding mechanism, greedily samples a token from the distribution at each time step.
Random Sampling with Temperature: Before sampling, modify entropy of predicted distribution.
Top-s Random Sampling (Fan et al., 2018): Restrict sampling to the s-most likely words in the distribution. (story generation)
Beam Search: Standard decoding mechanism, keeps the top b partial hypotheses at every time step. (machine translation)
NPAD Beam Search (Cho, 2016): Add random noise to the hidden state of the decoder at each time step. (machine translation)
Top-g Capping Beam Search (Li and Jurafsky, 2016): Only consider the top g hypotheses from each parent hypothesis at each time step. (machine translation, dialog)
Hamming Diversity Beam Search (Vijayakumar et al., 2016): Penalize new hypotheses that have many of the same tokens as existing partial hypotheses. (image captioning)
Iterative Beam Search (Kulikov et al., 2018): Run beam search several times, preventing later iterations from generating intermediate states already explored. (dialog)
Clustered Beam Search (Tam et al., 2019): Initially consider more hypotheses at each time step, and then cluster similar hypotheses together. (dialog)
Post-Decoding Clustering (Ours): Sample a large number of candidates, and then cluster similar outputs together.
Table 1: Brief high-level descriptions of each decoding method we consider in this paper. In parentheses we give the applications on which the technique was originally applied.

4 Clustering Post-Decoding (PDC)

In the previous section, we discuss several diversity-promoting methods that can be applied during the decoding process. However, it is also possible to encourage additional diversity post hoc. On the task of sentence simplification, after decoding using a large-scale diversity-promoting beam search (beam size 100), Kriz et al. (2019) then clustered similar sentences together to further increase the variety of simplifications from which to choose. Document embeddings generated via Paragraph Vector (Le and Mikolov, 2014) were used as the sentence embeddings with which to perform K-means.

In this work, we extend this post-decoding clustering idea in three key ways. First, we make use of sentence-level embeddings which leverage the pre-trained language representations from the Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2018).4 Second, after clustering, Kriz et al. (2019) took the sentence closest to the centroid of each cluster as the representative candidate; we instead choose the highest-ranked candidate (according to log-likelihood) from each cluster to ensure the best candidates are still selected. Finally, after performing standard K-means clustering, we found that it was often the case that some clusters contained large numbers of good candidates, while others contained very few candidates that are also either ungrammatical or otherwise inferior. Thus, in our implementation, we remove clusters containing two or fewer sentences, and then sample a second candidate from each of the remaining clusters, prioritizing selecting candidates from larger clusters first.

4BERT sentence-level embeddings were obtained using https://github.com/hanxiao/bert-as-service.
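To make the clustering procedure just described concrete, the sketch below embeds the over-sampled candidates, clusters them with K-means, drops tiny clusters, and selects the most likely member of each remaining cluster (plus a second member, larger clusters first). It assumes precomputed sentence embeddings (for example, from a BERT-based sentence encoder) and uses scikit-learn's KMeans; the function and parameter names are illustrative assumptions, not the implementation released with this paper.

```python
# Minimal sketch of post-decoding clustering (PDC). Assumptions: `candidates`
# are already sorted by log-likelihood (best first) and `embeddings` holds one
# fixed-size sentence vector per candidate (e.g., from a BERT sentence encoder).
# Hypothetical helper, not the paper's released code.
from collections import defaultdict
import numpy as np
from sklearn.cluster import KMeans

def post_decoding_clustering(candidates, embeddings, num_clusters=10, num_outputs=10):
    labels = KMeans(n_clusters=num_clusters, random_state=0).fit_predict(np.asarray(embeddings))

    # Group candidate indices by cluster, preserving the likelihood ordering.
    clusters = defaultdict(list)
    for idx, label in enumerate(labels):
        clusters[label].append(idx)

    # Remove clusters with two or fewer members, which tend to hold inferior
    # outliers, and give larger clusters priority when picking extra candidates.
    kept = sorted((ids for ids in clusters.values() if len(ids) > 2), key=len, reverse=True)

    # Highest-likelihood candidate from each surviving cluster first...
    selected = [ids[0] for ids in kept]
    # ...then a second candidate per cluster, larger clusters first.
    selected += [ids[1] for ids in kept if len(ids) > 1]
    return [candidates[i] for i in selected[:num_outputs]]
```

Choosing the most likely sentence in each cluster, rather than the one nearest the centroid, keeps the quality benefits of likelihood ranking while the clustering itself enforces diversity among the returned outputs.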
5 Experimental Setup We evaluate the decoding strategies described in the previous sections under the following settings. For each of the published beam search algorithms, we choose the hyperparameters that were found to be best in the original publications. RS Random sampling with temp = 0.5, 0.7, 1.0, or 1.0 with top-10 capping. Standard BS Standard beam search Top5Cap BS Top-g capping with g = 3 Iter5 BS Iterative beam search with 5 iterations HamDiv0.8 BS Hamming Diversity with λ = 0.8 Cluster5 BS Clustered beam search with 5 clusters NPAD0.3 BS Noisy Decoding with σ0 = 0.3 For random sampling, we sample 10 outputs, and with beam-search based methods, we use a beam size of 10 to generate 10 outputs. In addition, we show results from oversampling then filtering. We use a beam size of 100 or generate 100 samples through random sampling, and then we select 10 from the 100, either through post-decoding clustering (PDC) or by taking the 10 candidates with highest likelihood. We examine these decoding strategies on two tasks: open ended dialog and image captioning. For each task, we evaluate both the quality and diversity of the 10 outputs from each strategy. 5.1 Open-ended Dialog Task In the dialog domain, we use an LSTM-based sequence-to-sequence (Seq2Seq) model implemented in the OpenNMT framework (Klein et al., 2017). We match the model architecture and training data of Baheti et al. (2018). The Seq2Seq model has four layers each in the encoder and decoder, with hidden size 1000, and was trained on a cleaned version of OpenSubtitles (Tiedemann, 2009) to predict the next utterance given the previous one. Evaluation is performed on 100 prompts from the Cornell Movie Dialog Corpus (DanescuNiculescu-Mizil and Lee, 2011). These prompts are a subset of the 1000 prompts used in Baheti et al. (2018), which were filtered using item response theory for discriminative power. We report perplexity (PpL), averaged over all the top 10 outputs for each example.5 Since the quality of open-ended dialog is notoriously difficult to evaluate automatically, we ran a human evaluation task on Amazon Mechanical Turk where annotators were shown a prompt and 5 potential responses generated by any of our decoding methods. Evaluators were asked to provide binary ratings on fluency, adequacy, and interestingness for each response. Overall, we collected 3 human judgments for each of the top ten responses for each of our decoding methods; in other words, we collected 3,000 judgments per method.6 5.2 Image Captioning Task For image captioning, we use a state-of-theart model introduced in Anderson et al. (2018). We take advantage of Luo (2017)’s open-source implementation and released model parameters trained on MSCOCO (Lin et al., 2014). We evaluate on a test set containing 5000 images. We report Semantic Propositional Image Caption Evaluation (SPICE) scores, an automatic evaluation metric that has been shown to correlate well with human judgments of quality(Anderson et al., 2016). SPICE measures how well the semantic scene graph induced by the proposed caption matches one induced by the ground truth. In addition to computing SPICE on the top-scoring caption (SPICE@1), we follow Vijayakumar et al. (2016) in reporting Oracle SPICE@10 scores. This is done to show the upper bound on the potential impact diversity can have. We also compute the mean SPICE score across all of the candidate captions for an image. 
Unlike SPICE@1 and SPICE@10, this metric shows the overall quality of all of the candidate captions, which is useful to know for applications that combine diverse candidate output sequences (Krause et al., 2017). 5.3 Evaluating Diversity To measure the diversity across the generated candidate sequences for a given input, we report Distk, the total number of distinct k-grams divided by the total number of produced tokens in all of the candidate responses for a prompt (Li et al., 2016a). We report Dist-2 and Dist-4 averaged over the prompts in the test set. 5This differs from existing work which computes perplexity over only the top output for each example. For our task we are interested in the quality of all of the generated responses. 6The full instructions shown on AMT are in the appendix. 3757 Method Fluency Adequacy Interestingness Ppl Dist-1 Dist-2 Ent-2 Ent-4 Reference 0.795 0.732 0.636 – – – – – RS 0.7 (sample 10) 0.758 0.399 0.388 35.98 0.63 0.80 4.08 3.84 RS 1.0 (sample10) 0.550 0.303 0.386† 67.99 0.74 0.87 4.35 4.08 RS 1.0,top10 (sample 10) 0.745† 0.418 0.387† 10.33 0.60 0.80 4.12 3.91 Standard BS (10 beams) 0.950 0.621 0.336 4.01 0.37 0.45 3.16 3.01 Top3Cap BS (10 beams) 0.942† 0.603 0.346 4.03 0.37 0.46 3.17 3.03 Iter5 BS (10 beams) 0.903 0.520 0.335 5.42 0.62 0.74 3.68 3.25 HamDiv0.8 BS (10 beams) 0.923 0.599 0.366† 4.56 0.33 0.37 3.08 3.00 Cluster5 BS (10 beams) 0.936 0.582 0.381 4.23 0.39 0.46 3.24 3.06 NPAD0.3 BS (10 beams) 0.942† 0.604† 0.335 4.05 0.36 0.44 3.13 2.99 RS 1.0,top10 (sample 100, rank) 0.922 0.548 0.347 5.10 0.52 0.68 3.54 3.18 RS 1.0,top10 (sample 100, PDC) 0.852 0.494 0.372 6.96 0.63 0.76 3.74 3.27 Standard BS (100 beams, rank) 0.964 0.611 0.332† 4.01 0.44 0.61 3.33 3.05 Standard BS (100 beams, PDC) 0.944 0.599 0.346 4.42 0.57 0.70 3.59 3.21 Table 2: Results on 100 dialog prompts. The first row shows the mean human ratings of the single reference response available for each prompt. The next three rows show results for random sampling, with 10 samples drawn per prompt. The next six rows are variants of beam search using beam size 10. The last four rows use random sampling or standard beam search to generate 100 outputs, then filter down to 10 outputs either through ranking by log-likelihood or by performing post-decoding clustering (PDC). In each section, the highest value is bolded, and statistical ties are marked †. SPICE Method Mean @1 @10 Dist-1 Dist-2 Ent-2 Ent-4 RS 0.7 (sample10) 0.170 0.192 0.278 0.31 0.52 3.67 4.00 RS 1.0 (sample10) 0.133 0.167 0.247 0.44 0.71 4.17 4.26 RS 1.0,top10 (sample10) 0.159 0.183 0.272 0.33 0.59 3.90 4.17 Standard BS (10 beams) 0.194 0.193 0.283 0.18 0.26 2.94 3.18 Top3Cap BS (10 beams) 0.195 0.196 0.282 0.17 0.26 2.93 3.17 HamDiv0.8 BS (10 beams) 0.194 0.194 0.282 0.18 0.27 2.98 3.19 Cluster5 BS (10 beams) 0.191 0.194 0.285 0.19 0.28 3.04 3.25 NPAD0.3 BS (10 beams) 0.191 0.192 0.280 0.18 0.26 2.94 3.17 RS 1.0,top10 (sample100, rank) 0.182 0.188 0.284 0.25 0.41 3.31 3.64 RS 1.0,top10 (sample100, PDC) 0.169 0.188 0.282 0.31 0.52 3.62 3.91 Standard BS (100 beams, rank) 0.188 0.190 0.279 0.20 0.31 3.04 3.32 Standard BS (100 beams, PDC) 0.186 0.192 0.288 0.24 0.38 3.25 3.57 Table 3: Image captioning results for selected random sampling and beam search methods. SPICE@1 measures the SPICE score of the most likely caption. SPICE@10 is the maximum score across the 10 candidates generated by each method. Mean SPICE is the mean score over all 10 candidates. In each section, the best value is bolded. 
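For reference, the Dist-k statistic reported in the tables above can be computed per prompt with a few lines; the whitespace tokenization and function name below are simplifying assumptions rather than the exact preprocessing used in these experiments.

```python
# Minimal sketch of the Dist-k diversity statistic: the number of distinct
# k-grams divided by the total number of generated tokens, computed over all
# candidate responses for a single prompt. Whitespace tokenization is an
# illustrative simplification.
def dist_k(candidates, k):
    distinct_kgrams = set()
    total_tokens = 0
    for response in candidates:
        tokens = response.split()
        total_tokens += len(tokens)
        for i in range(len(tokens) - k + 1):
            distinct_kgrams.add(tuple(tokens[i:i + k]))
    return len(distinct_kgrams) / max(total_tokens, 1)

# Example: Dist-2 over one prompt's candidate set.
candidates = ["i do not know", "i do not think so", "why would you say that"]
print(round(dist_k(candidates, 2), 3))
```

The per-prompt values are then averaged over the prompts in the test set to produce the figures reported in the tables.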
A limitation of Dist-k is that all k-grams that appear at least once are weighted the same, ignoring the fact that infrequent k-grams contribute more to diversity than frequent ones. Zhang et al. (2018) instead propose an entropy metric, Ent-k, defined as:

$$\text{Ent-}k = -\frac{1}{\sum_{w \in S} F(w)} \sum_{w \in S} F(w) \log \frac{F(w)}{\sum_{w' \in S} F(w')}$$

where S is the set of all k-grams that appear in candidate responses for an example, and F(w) denotes the frequency of w in the candidate responses.

6 Results

We report results on dialog systems and image captioning in Tables 2 and 3, respectively. As expected, random sampling-based approaches yield outputs with greater diversity but worse quality than beam search-based approaches. Oversampling then filtering increases the quality of outputs while still ensuring high diversity. In the following sections, we discuss the diversity-quality tradeoff, and then delve further into the results for each method group.

6.1 The Quality Diversity Tradeoff

The goal of diverse decoding strategies is to generate high-quality candidate sequences which span as much of the space of valid outputs as possible.

Figure 2: Each decoding strategy is plotted, showing that human-perceived quality is negatively correlated with diversity. The Pearson Correlation coefficients between each statistic and the average of fluency, coherence, and interestingness are shown in parentheses. (Panels: Dist-2 vs. Human Scores, corr = -0.41; Ent-4 vs. Human Scores, corr = -0.77; Perplexity vs. Human Scores, corr = -0.77.)

Prompt: Look, nobody knows we did it.
RS 0.5: I don't know what you're talking about. / What's the matter with you? / I don't know what it is. / I don't think so. / He's got to get out of here.
Standard BS: We've got to get out of here. / What do you mean? / I don't think it's a good idea. / I don't know what to say. / I don't know what's going on.
NPAD0.3 BS: I don't think it's a good idea. / I don't know what to say. / I don't know what's going on. / I don't know what to do. / I don't know what's going on here.
RS 1.0: I can't find it. / They're our ships. / It's all right anyone is the right to interfere. / We didn't have a plan I engineered a policy. / Same time you pick us up at six and get we.
Standard BS with PDC: I don't know! / I don't think so. / What do you mean? / Why didn't you tell me? / That's why we're here.
Cluster5 BS: I don't know why. / What do you mean? / I don't think so. / How do you know that? / I'll tell you what.
RS 1.0,top10: I don't know what else to do. / It doesn't have to be that way! / We're in the air! / I've seen a guy in his place in a it. / And I'm not we any more.
RS 1.0,top10 with PDC: What do you mean? / I don't think so. / That's why I'm here. / It's all right we. / We've been through this before.
Top3Cap BS: We've got to get out of here. / What do you mean? / I don't think it's a good idea. / I don't know what to say. / I don't know what's going on.
Table 4: Responses to an example prompt for selected methods. More examples can be seen in the appendix.

However, we find there to be a marked trade-off between diversity and quality. This can be seen in Figure 2, where we plot the human-judged quality score for each dialog experiment against our primary diversity descriptive statistics.
Fluency and adequacy are both strongly negatively correlated with diversity. While we had expected interestingness to be positively correlated with diversity, the fact that it is not suggests that existing diversity statistics are insufficient for capturing what it means to humans for outcomes to be interesting. Likewise, in image captioning, the mean SPICE score of the 10 candidate captions (averaged over all examples for each experimental setting) is strongly anti-correlated with diversity, with a Pearson correlation coefficient of -0.83 with the Ent-4 measure and -0.84 with Dist-2. Clearly it remains an open challenge to generate a diverse set of image captions that are all high-quality.

When researchers choose to use a diverse decoding strategy, they must decide where on the quality-diversity tradeoff they would like to lie; selecting an optimal method depends strongly on one's tolerance for errors. In machine translation, where mistakes could severely impact coherence, beam search-based methods, which tend to result in better fluency and coherence but worse diversity, might be preferred. In more open-ended applications, where novel text is of greater importance, increased diversity could be worth the fluency and coherency hit. As state-of-the-art models continue to improve, one would hope that the quality cost of encouraging diversity will continue to decrease.

In the interest of reporting a single overall best method for each task, we computed a sum-of-ranks score for each method. For dialog, we ranked the methods by fluency, coherence, interestingness, and Ent-4, and then took a weighted sum of the four ranks, with 50% of the weight assigned to Ent-4, and 50% distributed evenly among the human evaluation ranks. Overall, clustered beam search and standard BS (beam size 100, PDC) have the best scores, followed by clustered beam search (beam size 10). Similarly, for image captioning, we rank the methods by their mean SPICE score and by Ent-4. Summing these ranks, random sampling (temp 1.0, top-10 capping, PDC) came in first. Standard beam search, Hamming Diversity beam search, and Top-g capping beam search (beam size 10) tied for second.

6.2 Random Sampling-based Methods

Higher sampling temperatures result in both an increase in diversity in generated responses and a reduction in overall quality. In the dialog domain, evaluators consistently rate the responses sampled with temperature 1.0 to have worse fluency, coherence, and interestingness than those sampled with temperature 0.5. In the image captioning domain, lower temperature improves automatic evaluation metrics for quality while reducing diversity.

For dialog, restricting sampling to the top-10 vocabulary words is a more effective strategy than adjusting temperature for ensuring balance between the quality and diversity of outputs. Top-10 random sampling has the highest fluency, coherence, and interestingness, as well as significantly lower perplexity than other random sampling methods. However, this trend did not extend to image captioning, where top-10 random sampling results in both worse SPICE scores and lower diversity measures than setting the temperature to 0.7. This may be because image captioning is a less ambiguous task than open-ended dialog, leading to a better-trained model that puts more probability mass on high-quality vocabulary words, ameliorating the challenge top-s filtering is designed to eliminate: that of a long tail of low-probability vocabulary words taking up a large amount of probability mass.
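As a concrete reference for the sampling variants compared in this subsection, the sketch below applies temperature scaling and a top-s restriction to one step's logits before sampling (cf. Section 2). It is an illustrative NumPy version under the stated assumptions, not the decoder used in these experiments.

```python
# Minimal sketch of temperature scaling and top-s truncation applied to the
# model's logits z_t at one decoding step, then sampling a token id.
# Assumes temperature > 0; NumPy-based and purely illustrative.
import numpy as np

def sample_next_token(logits, temperature=1.0, top_s=None, rng=np.random.default_rng()):
    logits = np.asarray(logits, dtype=np.float64)
    if top_s is not None:
        # Keep only the s most likely tokens; mask the rest out.
        cutoff = np.sort(logits)[-top_s]
        logits = np.where(logits >= cutoff, logits, -np.inf)
    # Temperature < 1 sharpens the distribution (toward greedy decoding);
    # temperature > 1 flattens it (more random outputs).
    scaled = logits / temperature
    probs = np.exp(scaled - np.max(scaled))
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))
```

Smaller temperatures and smaller values of s both concentrate probability mass on the model's top choices, which is exactly the quality/diversity knob discussed above.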
6.3 Beam Search-based Methods

For dialog, clustered beam search (Cluster5 BS) performs the best of all beam search methods in terms of human-judged interestingness. It ties for best with NPAD0.3 BS on fluency and ties with Standard BS on coherence. Iterative beam search (Iter5 BS) achieves the greatest diversity, but at the expense of quality. It has the lowest human-judged coherence among beam search methods; thus, we do not evaluate this method on image captioning. For image captioning, Cluster5 BS has the highest diversity among beam search methods, but this difference is quite small. Cluster5 BS also has the highest SPICE@10 score, indicating it is the best method for generating at least one high-quality candidate. However, Top3Cap BS results in the highest mean SPICE score, suggesting it is best at ensuring all outputs are of reasonable quality.

6.4 Effect of Over-sampling

In our experiments, we explore over-sampling 100 outputs, and then either using post-decoding clustering (PDC) or re-ranking by log-likelihood to filter these 100 down to 10 diverse outputs. In the dialog domain, this over-sampling approach is a definite win. When over-sampling with random sampling, both methods of filtering substantially improve human judgments of fluency and adequacy compared to random sampling only 10 outputs. However, interestingness scores go down, and while the outputs are still more diverse than beam search-based methods, they are less diverse than random sampling without filtering. In the beam search methods that use a beam size of 100 then filter down to 10, human-judged quality is on par with beam size 10 results, but diversity is considerably higher. When comparing the two types of filtering, PDC results in higher interestingness and diversity statistics, while log-likelihood re-ranking improves fluency and adequacy. This again demonstrates the trade-off between quality and diversity.7

7In the appendix, we show results with every method where we generate 10 samples; generate 100 samples followed by selecting the 10 most likely outputs; and generate 100 samples followed by post-decoding clustering to select 10 outputs.

For image captioning, over-sampling with re-ranking does not consistently improve quality as it does in the dialog domain. Mean SPICE score is improved for random sampling but not for beam search. SPICE@1 becomes worse for both random sampling and decoding, while SPICE@10 improves for random sampling, and for beam search when PDC is applied. From these results, we can conclude that over-sampling then ranking does not have a sizeable effect, either negative or positive, on quality. Moreover, the diversity of the captions generated by random sampling actually decreases when over-sampling. The diversity of beam search-generated captions does improve with over-sampling. While over-sampling does generally improve outcomes on the diversity/quality tradeoff, it is more computationally expensive, particularly with beam search. Running PDC also requires generating sentence embeddings for every output, which adds additional computation time.

7 Additional Related Work

In this paper, we have compared a variety of post-training diversity-promoting algorithms. Here, we discuss other related works that instead promote diversity at train-time, as well as alternative quality evaluation methods. We also note that concurrent work has proposed nucleus sampling as an improvement to the sampling strategies discussed in this paper (Holtzman et al., 2019).
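Since nucleus sampling is only mentioned in passing, a brief sketch of the idea is included for reference: sample from the smallest set of tokens whose cumulative probability exceeds a threshold p (Holtzman et al., 2019). The code below is an illustrative rendering of that idea under simple assumptions, not an implementation from the cited work or from this paper.

```python
# Minimal sketch of nucleus (top-p) sampling at one decoding step: sample only
# from the smallest set of tokens whose cumulative probability exceeds p.
# Illustrative NumPy code; `probs` is the model's normalized next-token distribution.
import numpy as np

def nucleus_sample(probs, p=0.9, rng=np.random.default_rng()):
    probs = np.asarray(probs, dtype=np.float64)
    order = np.argsort(probs)[::-1]          # token ids, most likely first
    cumulative = np.cumsum(probs[order])
    # Smallest prefix whose cumulative probability reaches p (always >= 1 token).
    cutoff = int(np.argmax(cumulative >= p)) + 1 if np.any(cumulative >= p) else len(order)
    nucleus_ids = order[:cutoff]
    nucleus_probs = probs[nucleus_ids] / probs[nucleus_ids].sum()
    return int(rng.choice(nucleus_ids, p=nucleus_probs))
```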
Diversity Promotion During Training Several works have attempted to encourage diversity during training by replacing the standard loglikelihood loss with a diversity-promoting objective. Li et al. (2016a) introduces an objective that maximizes mutual information between the source and target. Zhang et al. (2018) uses an adversarial information maximization approach to encourage generated text to be simultaneously informative and diverse. Xu et al. (2018) also uses an adversarial loss; their loss function rewards fluent text and penalizes repetitive text. We do not evaluate on these methods as they tend to be taskspecific and difficult to implement. All of the diversity strategies we evaluate share the trait that they are agnostic to model architecture and to the data type of the input, as long as the output of the model is a probability distribution over tokens in a sequence. Automatic Quality Evaluation An important part of this work is how to accurately measure not only the effect these methods have on candidate diversity, but also on the overall quality of the candidates. In choosing to report human scores and perplexity for the dialog domain, and SPICE for image captioning, we omitted some quality measures used in other papers. For image captioning, BLEU (Papineni et al., 2001), ROUGE (Lin, 2004), METEOR (Elliott and Keller, 2013), and CIDer (Vedantam et al., 2015) scores are often reported, but SPICE has been shown to have higher correlation with human judgments (Anderson et al., 2016). In the dialog domain, single-reference BLEU score (Papineni et al., 2001) is sometimes used to measure response quality, but it has been shown to have little correlation with human-judged quality (Liu et al., 2016). Therefore, most works in dialog systems use human evaluation as the ultimate measure of quality (Li et al., 2016a; Sedoc et al., 2018) 8 Conclusion In this work, we perform an analysis of posttraining decoding strategies that attempt to promote diversity in conditional language models. We show how over-sampling outputs then filtering down to the desired number is an easy way to increase diversity. Due to the computational expense of running large beam searches, we recommend using random-sampling to over-sample. The relative effectiveness of the various decoding strategies differs for the two tasks we considered, which suggests that choice of optimal diverse decoding strategy is both task-specific and dependent on one’s tolerance for lower quality outputs. While we have focused on evaluating each decoding strategy under the specifics reported to be the best in the original, further work is necessary to conclude whether observed differences in quality and diversity may simply be due to each work’s chosen hyperparameters. The ability to effectively generate a diverse set of responses while not degrading quality is extremely important in a variety of generation tasks, and is a crucial component to harnessing the power of state-of-the-art generative models. 9 Acknowledgements We thank our anonymous reviewers for helpful feedback. We also thank Yun William Yu for assistance with statistical testing and proofreading. This material is based in part on research sponsored by DARPA under grant number HR001115-C-0115 (LORELEI). The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes. The views and conclusions in this publication are those of the authors and should not be seen as representing official endorsements of DARPA and the U.S. Government. 
3761 References Peter Anderson, Basura Fernando, Mark Johnson, and Stephen Gould. 2016. Spice: Semantic propositional image caption evaluation. In European Conference on Computer Vision. Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. 2018. Bottom-up and top-down attention for image captioning and visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6077–6086. Ashutosh Baheti, Alan Ritter, Jiwei Li, and Bill Dolan. 2018. Generating more interesting responses in neural conversation models with distributional constraints. In Conference on Empirical Methods in Natural Language Processing (EMNLP 2018). Kyunghyun Cho. 2016. Noisy parallel approximate decoding for conditional recurrent language model. Sajal Choudhary, Prerna Srivastava, Lyle H. Ungar, and Jo˜ao Sedoc. 2017. Domain aware neural dialog system. volume abs/1708.00897. Elizabeth Clark, Anne Spencer Ross, Chenhao Tan, Yangfeng Ji, and Noah A. Smith. 2018. Creative writing with a machine in the loop: Case studies on slogans and stories. In 23rd International Conference on Intelligent User Interfaces, IUI ’18, pages 329–340, New York, NY, USA. ACM. Cristian Danescu-Niculescu-Mizil and Lillian Lee. 2011. Chameleons in imagined conversations: A new approach to understanding coordination of linguistic style in dialogs. In Proceedings of the 2nd Workshop on Cognitive Modeling and Computational Linguistics, pages 76–87, Portland, Oregon, USA. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, , and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. Desmond Elliott and Frank Keller. 2013. Image description using visual dependency representations. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1292–1302, Seattle, Washington, USA. Association for Computational Linguistics. Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hierarchical neural story generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 889–898, Melbourne, Australia. Association for Computational Linguistics. Kevin Gimpel, Dhruv Batra, Chris Dyer, and Gregory Shakhnarovich. 2013. A systematic exploration of diversity in machine translation. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1100–1111. Jiatao Gu, Kyunghyun Cho, and Victor O.K. Li. 2017. Trainable greedy decoding for neural machine translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1968–1978, Copenhagen, Denmark. Association for Computational Linguistics. Tatsunori B. Hashimoto, Hugh Zhang, and Percy Liang. 2019. Unifying human and statistical evaluation for natural language generation. CoRR, abs/1904.02792. Ari Holtzman, Jan Buys, Maxwell Forbes, and Yejin Choi. 2019. The curious case of neural text degeneration. CoRR, abs/1904.09751. Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander M. Rush. 2017. OpenNMT: Open-source toolkit for neural machine translation. In Proc. ACL. Jonathan Krause, Justin Johnson, Ranjay Krishna, and Li Fei-Fei. 2017. A hierarchical approach for generating descriptive image paragraphs. In Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on, pages 3337–3345. IEEE. 
Reno Kriz, Jo˜ao Sedoc, Marianna Apidianaki, Carolina Zheng, Gaurav Kumar, Eleni Miltsakaki, and Chris Callison-Burch. 2019. Complexity-weighted loss and diverse reranking for sentence simplification. Ilya Kulikov, Alexander H Miller, Kyunghyun Cho, and Jason Weston. 2018. Importance of a search strategy in neural dialogue modelling. Quoc Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In Proceedings of the 31st International Conference on International Conference on Machine Learning - Volume 32, ICML’14, pages 1188–1196. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016a. A diversity-promoting objective function for neural conversation models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 110–119, San Diego, California. Association for Computational Linguistics. Jiwei Li and Dan Jurafsky. 2016. Mutual information and diverse decoding improve neural machine translation. Jiwei Li, Will Monroe, and Dan Jurafsky. 2016b. A simple, fast diverse decoding algorithm for neural generation. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out: Proceedings of the ACL-04 Workshop, pages 74–81, Barcelona, Spain. Association for Computational Linguistics. 3762 Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll´ar, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740–755. Springer. Chia-Wei Liu, Ryan Lowe, Iulian Serban, Mike Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How NOT to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2122–2132, Austin, Texas. Association for Computational Linguistics. Ruotian Luo. 2017. An image captioning codebase in pytorch. https://github.com/ ruotianluo/ImageCaptioning.pytorch. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective Approaches to Attentionbased Neural Machine Translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412–1421, Lisbon, Portugal. Ramesh Nallapati, Bowen Zhou, C´ıcero Nogueira dos Santos, aglar G¨ulehre, and Bing Xiang. 2016. Abstractive Text Summarization using Sequence-tosequence RNNs and Beyond. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning (CoNLL), pages 280–290. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2001. Bleu: a method for automatic evaluation of machine translation. In Association for Computational Linguistics. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532– 1543. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1:8. Jo˜ao Sedoc, Daphne Ippolito, Arun Kirubarajan, Jai Thirani, Lyle Ungar, and Chris Callison-Burch. 2018. Chateval: A tool for the systematic evaluation of chatbots. In Proceedings of the Workshop on Intelligent Interactive Systems and Language Generation (2IS&NLG), pages 42–44. Association for Computational Linguistics. 
Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 3104–3112. Curran Associates, Inc. Yik-Cheung Tam, Jiachen Ding, Cheng Niu, and Jie Zhou. 2019. Cluster-based beam search for pointergenerator chatbot grounded by knowledge. In Dialog System Technology Challenges 7 at AAAI 2019. J¨org Tiedemann. 2009. News from opus-a collection of multilingual parallel corpora with tools and interfaces. In Recent advances in natural language processing, volume 5, pages 237–248. Ramakrishna Vedantam, C. Lawrence Zitnick, and Devi Parikh. 2015. Cider: Consensus-based image description evaluation. pages 4566–4575. Ashwin K Vijayakumar, Michael Cogswell, Ramprasath R Selvaraju, Qing Sun, Stefan Lee, David Crandall, and Dhruv Batra. 2016. Diverse beam search: Decoding diverse solutions from neural sequence models. Oriol Vinyals and Quoc V. Le. 2015. A neural conversational model. volume abs/1506.05869. Jingjing Xu, Xuancheng Ren, Junyang Lin, and Xu Sun. 2018. Diversity-promoting gan: A crossentropy based generative adversarial network for diversified text generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3940–3949. Yizhe Zhang, Michel Galley, Jianfeng Gao, Zhe Gan, Xiujun Li, Chris Brockett, and Bill Dolan. 2018. Generating informative and diverse conversational responses via adversarial information maximization. In Advances in Neural Information Processing Systems, pages 1815–1825.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3763–3773 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3763 Retrieval-Enhanced Adversarial Training for Neural Response Generation Qingfu Zhu], Lei Cui[, Weinan Zhang]\, Furu Wei[, Ting Liu]\⇤ ]Harbin Institute of Technology, Harbin, China [Microsoft Research Asia, Beijing, China \Peng Cheng Laboratory, Shenzhen, China {qfzhu, wnzhang, tliu}@ir.hit.edu.cn {lecu, fuwei}@microsoft.com Abstract Dialogue systems are usually built on either generation-based or retrieval-based approaches, yet they do not benefit from the advantages of different models. In this paper, we propose a Retrieval-Enhanced Adversarial Training (REAT) method for neural response generation. Distinct from existing approaches, the REAT method leverages an encoder-decoder framework in terms of an adversarial training paradigm, while taking advantage of N-best response candidates from a retrieval-based system to construct the discriminator. An empirical study on a large scale public available benchmark dataset shows that the REAT method significantly outperforms the vanilla Seq2Seq model as well as the conventional adversarial training approach. 1 Introduction Dialogue systems intend to converse with humans with a coherent structure. They have been widely used in real-world applications, including customer service systems, personal assistants, and chatbots. Early dialogue systems are often built using the rule-based method (Weizenbaum, 1966) or template-based method (Litman et al., 2000; Schatzmann et al., 2006; Williams and Young, 2007), which are usually labor-intensive and difficult to scale up. Recently, with the rise of social networking, conversational data have accumulated to a considerable scale. This promoted the development of data-driven methods, including retrieval-based methods (Shang et al., 2015; Sordoni et al., 2015; Vinyals and Le, 2015; Wen et al., 2017) and generation-based methods (Leuski et al., 2006; Ji et al., 2014; Yan et al., 2016). Retrieval-based methods reply to users by searching and re-ranking response candidates ⇤Corresponding author. MSG I made strawberry shortcake. GT Where did you learn that, it is sweet and cheery. RSP How did you make it? It looks delicious. C#1 Could you tell me how this thing is cooked? C#2 Tiramisu is my favorite dessert. It’s so delicious. Table 1: An example of a message (MSG), a groundtruth response (GT), a generated response (RSP) and N-best response candidates (C#1 and C#2) during the training process. Similar contents in the response and candidates are in boldface. from a pre-constructed response set. Written mainly by humans, these responses are always diverse and informative, but may be inappropriate to input messages due to their being prepared in advance and thus incapable of being customized (Shang et al., 2015). In contrast, generation-based methods can produce responses tailored to the messages. The most common method of this category in recent years is the sequence to sequence (Seq2Seq) model (Sutskever et al., 2014; Shang et al., 2015; Vinyals and Le, 2015). In practice, it usually suffers from the problem of generating generic responses, such as “I don’t know” and “Me, too” (Li et al., 2016a; Serban et al., 2016). While the contents of retrieved responses, apart from the irrelevant parts, are of great diversity, making it a potential resource for tailoring appropriate and informative responses. 
Therefore, it is natural to enhance the response generation approach with retrieved responses. Previous work has been proposed to extend the input of a Seq2Seq model with N-best response candidates (or their contexts) (Song et al., 2018; Pandey et al., 2018). On one hand, these approaches are trained using MLE objective, which correlates weakly with true quality of responses thus limits the effectiveness of the candidates in producing the responses. Table 1 shows an exam3764 ple during the training process. Related contents of the candidates are appropriately integrated into the response, but the model is discouraged as the response is different from the ground-truth. On the other hand, rather than just provide materials for the generation, N-best response candidates also contain references for evaluating responses. Yet they are not efficiently utilized in the objective in the existing training process. In this paper, we propose a Retrieval-Enhanced Adversarial Training (REAT) approach to make better use of N-best response candidates. A discriminator is introduced to replace the MLE objective to supervise the training process. Generated responses containing appropriate and informative contents with input messages are more likely to be seen as human-generated by the discriminator, which encourages the generation model to incorporate more information in candidates into responses. In addition, the N-best response candidates are also conditioned to the discriminator as references to improve its classification accuracy, which in turn benefits the generation model by adversarial training. We conduct extensive experiments on a public available NTCIR corpus to verify the effectiveness of the proposed approach, comparing it with retrievalbased methods, generation-based methods, and previous retrieval-enhanced response generation approaches. The results show that the REAT approach significantly outperforms the baselines in both automatic and human evaluations. The contributions of this paper are summarized as follows: 1. We propose a novel retrieval-enhanced neural response generation model adapted from adversarial training approach, which introduces a discriminator to more efficiently utilize the N-best response candidates. 2. Referencing to N-best response candidates, the discriminator of our proposed approach improves over previous discriminators on the classification accuracy. 3. Extensive experiments show that our proposed approach outperforms state-of-the-art baselines in both automatic and human evaluations. 2 Related Work Data-driven dialogue systems can be roughly divided into two categories: retrieval-based and generation-based. Retrieval-based methods respond to users by selecting the response that best matches an input message from a pre-constructed response set. Leuski et al. (2006) match a response with a message using a statistical language model. Ji et al.(2014) employ information retrieval techniques to rank response candidates. In addition, the matching and ranking methods can also be implemented using neural networks (Yan et al., 2016; Qiu et al., 2017; Wu et al., 2017). Based on that, Yang et al. (2018) propose a deep matching network which could model external knowledge. Generation-based methods can be cast as a sequence to sequence (Seq2Seq) process (Shang et al., 2015; Vinyals and Le, 2015; Sordoni et al., 2015) but suffers from generating generic responses. 
One way to address the problem is to introduce new content into responses, such as keywords (Mou et al., 2016; Serban et al., 2017a), topic information (Xing et al., 2017) and knowledge triples (Zhou et al., 2018). Another way is to improve the Seq2Seq architecture. Li et al.(2016b) introduce the Maximum Mutual Information as the objective function. Serban et al.(2017b) add a latent variable to inject variability. The training of Seq2Seq can be formulated as a reinforcement learning problem (Li et al., 2016b; Zhang et al., 2017). To avoid manually defining reward functions, a discriminator can be introduced and trained synchronously by adversarial learning (Li et al., 2017). After that, Xu et al. (2018) propose a language model based discriminator to better distinguish novel responses from repeated responses. In a similar adversarial setting, Zhang et al. (2018) optimize a Variational Information Maximization Objective to improve informativeness. Our approach is also an adversarial model, the difference is that we employ the N-best response candidates to enhance the generation. Taking advantages of the two methods, retrieval-enhanced response generation approaches make use of the informative content in retrieved results to generate new responses. Typically, generating responses from retrieved candidates can be seen as a text-to-text system, which produces meaningful text from meaningful text rather than from abstract meaning representations (Marsi and Krahmer, 2005). Barzilay 3765 Candidate#1 Could you tell me how this thing is cooked? Candidate#2Tiramisu is my favorite dessert. It’s so delicious. … Decoder Encoder Response How did you make it? It looks delicious. Generator Message: I made strawberry shortcake. Message LSTM Response LSTM Concatenation MLP Discriminator Candidate LSTM Average N-best Response Candidates Zx … Zc1 Index Retrieval-based Method Training Set Policy Gradient Zc2 Prob ( human-generated ) Figure 1: An overview of our proposed approach. The discriminator is enhanced by the N-best response candidates returned by a retrieval-based method. The discriminator takes as input a response and outputs the probability that the response is human-generated. The output is then regarded as a reward to guide the generator. and McKeown (2005) propose the sentence fusion technique for abstractive multidocument summarization. In the context of conversation, Song et al.(2018) apply an encoder to every response candidate and integrate the results into the decoding process via the attention mechanism (Bahdanau et al., 2015). Similarly, Pandey et al.(2018) also incorporate response candidates using the attentive encoder-decoder framework on a proposed technical support dataset. Wu et al.(2019) augments the decoder with an edit vector representing lexical differences between retrieved contexts and the message. Different from previous work, our approach introduces a discriminator to replace the MLE objective to compute the loss. Besides, rather than merely being sent to the encoder as generation materials, response candidates in our approach are directly utilized by the discriminator to form a discriminative signal to guide the generator. The proposed approach is also related to Lin et al.(2017)’s work. They propose an unconditional GAN whose discriminator is augmented with references randomly sampled from the training set for the task of language generation. In contrast, our proposed approach focuses on the response generation and leverages the message as prior knowledge. 
In addition, rather than sampling references from the training set, the candidates in our approach are retrieved according to the relevance to messages using a retrieval-based method. 3 Method In this section, we introduce our proposed REAT approach. As Figure 1 shows, it consists of two main components: a discriminator D (Sec. 3.2) and a generator G (Sec. 3.3), both of which are enhanced by N-best response candidates from a retrieval-based method (Sec. 3.4). The generator produces a response using the candidates as generation materials. While in the discriminator, the candidates are provided as references to better distinguish a response, which in turn improves the generator by adversarial training (Sec. 3.1). 3.1 Retrieval-Enhanced Adversarial Training The goal of the discriminator is to distinguish whether a response y is human-generated or machine-generated. It computes the probability Dφ(y|x, {c}) that the response is humangenerated given an input message x and N-best response candidates {c} = {c1, ...ck, ..., cN}, where φ denote the parameters of the discriminator. Therefore, its objective function is to minimize classification error rate: JD(φ) = −Ey⇠ground−truth log Dφ(y|x, {c}) −Ey⇠G log(1 −Dφ(y|x, {c}), (1) We cast the retrieval-enhanced response generation as a reinforcement learning problem to backpropagate the error computed by the discriminator to the generator via the policy gradient algorithm. In this way, the generator can be seen as an agent whose parameters ✓define a policy ⇡. At each time step, it takes an action a by generating a word and accordingly updates its state s, which is defined as a tuple of the message, the candidates and the partially generated response. At the end of the generation of a response, the agent observes a reward r from the discriminator, which is the probability that the response is 3766 human-generated: Dφ(y|x, {c}). Here, we do not employ the REGS (reward for every generation step) strategy (Li et al., 2017) as the Monte-Carlo roll-out is quite time-consuming1 and the accuracy of a discriminator trained on partially decoded sequences is not as good as that trained on complete sequences. The goal of the agent (the generator) is to minimize the negative expected reward. With the likelihood ratio trick (Williams, 1992), the gradient of ✓can be derived as: JG(✓) = −Ey⇠G(Dφ(y|x, {c})), (2) 5JG(✓) = −Ey⇠G(Dφ(y|x, {c}) · 5 log G✓(y|x, {c})), (3) where G✓(y|x, {c}) is the probability of generating y given x and {c}. In practice, JG(✓) and 5JG(✓) can be approximated using a single Monte-Carlo sample from G (Rennie et al., 2017): JG(✓) ⇡−Dφ(y|x, {c}), y ⇠G, (4) 5JG(✓) ⇡−Dφ(y|x, {c}) · 5 log G✓(y|x, {c}), y ⇠G. (5) Both the generator and the discriminator are pre-trained before adversarial training. The generator is pre-trained on the training set with MLE loss. The discriminator is pre-trained using human-generated responses as positive samples and machine-generated responses produced by the pre-trained generator as negative samples. Given the pre-trained generator and discriminator, the adversarial training is a min-max game played between them: min G max D JG(✓) −JD(φ), (6) where the discriminator tries to distinguish between human-generated responses and machinegenerated responses, while the generator tries to fool the discriminator by producing human-like responses. The overall algorithm of the retrievalenhanced adversarial training is summarized as Algorithm 1. 3.2 Discriminator The discriminator is a binary classifier. 
It takes as input a response y, a message x, and N-best 1Training one epoch takes roughly 120 hours on a TITAN Xp GPU when the roll-out number is 5. Algorithm 1 Retrieval-Enhanced Adversarial Training Require: The training set {x, y}; Ensure: The generator parameters ✓; The discriminator parameters φ; 1: Get N-best response candidates using a retrieval-based method; 2: Randomly initialize ✓and φ; 3: Pre-train G with MLE loss; 4: Generate responses using the pre-trained G; 5: Pre-train D using machine-generated responses as negative samples and humangenerated responses as positive samples; 6: for epoch in number of epochs do 7: for g in g-steps do 8: Update ✓according to Equation 5; 9: end for 10: for d in d-steps do 11: Sample y from G as a negative sample; 12: Sample y from the human-generated responses as a positive sample; 13: Update φ according to Equation 1; 14: end for 15: end for 16: return ✓, φ; response candidates {c}, and subsequently computes a binary probability distribution to indicate whether y is human-generated or machinegenerated. First, we compute a candidate-aware response representation zc to model the interaction between the candidates and the response. Each candidate is encoded by a candidate LSTM (Hochreiter and Schmidhuber, 1997): uk i = fc(ck i , uk i−1), (7) where ck i is the i-th word of the k-th candidate. uk i denotes the hidden state of the candidate LSTM at time step i. fc is the computation unit of the candidate LSTM. The initial hidden state uk 0 is set to the zero vector and the last hidden state uk T (T denotes the length of an utterence through out the paper) can be seen as a representation of the candidate. Subsequently, uk T is used to initialize the hidden state of a response LSTM, which computes a local candidate-aware response representation zck 3767 for each candidate ck: vk i =fy(yi, vk i−1), zck = vk T , (8) where vk i represents the hidden state of the response LSTM at time step i with regard to the k-th candidate. fy is the computation unit of the response LSTM and yi is the i-th word of the response. The candidate-aware response representation zc is the average of all local candidate-aware response representations: zc = 1 N N X k=1 zck, (9) Second, the interaction between the message and the response is modeled by a message-aware response representation zx using a message LSTM and the response LSTM introduced above in a similar way to Equation 7 and 8. Finally, the probability that the response is human-generated Dφ(y|x, {c}) is computed by a Multilayer Perception (MLP): Dφ(y|x, {c}) = σ(MLP([zx, zc])), (10) where the bracket [·, ·] denotes concatenation. σ is the sigmoid function2. 3.3 Generator The generator G is a multi-source Seq2Seq model, which consists of an encoder and a decoder. The encoder reads from a message and N-best response candidates, summarizing them into context vectors. The decoder is a language model which produces a response word by word, conditioned with the context vectors. The encoder first employs a bi-directional LSTM to represent each candidate word and its context information in a response candidate: ! hk i = g0 c(ck i , ! hk i−1), hk i = g1 c(ck i , hk i+1), (11) where g0 c and g1 c denote the computation units of a forward LSTM and a backward LSTM, respectively. ! hk i and hk i are the i-th hidden states of the two LSTMs. After that, hidden states in the two directions are concatenated, i.e., hk i = [ ! hk i , hk i ]. 
2We did study more complicated relationship among x, y ,and {c} with bi-directional LSTM and attention mechanism in the discriminator, but observed no further improvement on the validation set. To capture the different importance of a candidate word in the word-level and the sentence-level, the encoder employs a two-level attention structure. The word-level attention models the relevance of a candidate word to the decoding context within a candidate, i.e, the word-level attention at the j-th decoding time step is computed as: ↵k ij = exp(q(sj−1, hk i )) PT t=1 exp(q(sj−1, hk t )) , (12) where ↵k ij is the word-level weight for the i-th word of ck. sj−1 is the hidden state of the decoder, representing the decoding context at time step j. q is a feed-forward network. Considering that different candidates are of different importance, the word-level weights are then rescaled by a sentence-level attention: ack j = T X i=1 ↵k ijhk i , (13) βkj = exp(q(sj−1, ack j )) PN n=1 exp(q(sj−1, acn j )) . (14) where ack j can be seen as a representation of ck. βkj is the sentence-level weight of ck. The candidate context vector ac j is then computed taking into account the two-level attention weights: ac j = N X k=1 T X i=1 βkj↵k ijhk i (15) Meanwhile, the message context vector ax j is computed using a message bi-directional LSTM and a word-level attention in a similar way to Equation 11, 12 and 13. Then, the decoder LSTM updates its hidden state conditioned with the context vectors and subsequently generates a word for a response as a standard language model: sj = gy([yj−1, ac j, ax j ], sj−1). (16) where gy is the computation unit of the decoder. 3.4 Retrieval-based Method To get the N-best response candidates, a retrievalbased method is built using the Lucene3 library and the state-of-the-art response ranking model (Yang et al., 2018). First, we merge all 3https://lucene.apache.org/ 3768 Corpus # of message # of response Training 119,941 1,932,258 Validation 10,000 163,126 Test 10,000 162,230 Table 2: Some statistics of the datasets. message-response pairs whose messages are identical into a document and subsequently build the index for all the documents in the training set. Second, we use each message as a query to search for K (set to 10) documents whose messages are similar to the query. After that, responses in the retrieved documents are re-ranked by the ranking model according to their matching scores to the query. Finally, the top N (set to 2, as in Song et al., 2018) responses are returned as the N-best response candidates. Note that when we collect N-best response candidates for a training message, the most similar document retrieved is always the one whose message is exactly the training message and responses contain the ground-truth response. We thus remove the document from the retrieved result before re-ranking to make sure that the N-best response candidates are different from the groundtruth response. 4 Experiments 4.1 Data We use the NTCIR corpus4 in our experiments. Its data are collected from a Chinese microblogging service, Sina Weibo5, where users can both post messages and make comments (responses) on other users’ messages. First, we tokenize each utterance using the Language Technology Platform (Che et al., 2010) and remove samples whose responses are shorter than 5, which is helpful in relieving the generic response problem (Li et al., 2017). 
Then, we randomly select 10,000 messages associated with responses to form a validation set and another 10,000 messages with responses as a test set. Table 2 shows some statistics of the datasets. 4.2 Baselines Rtr: The retrieval-based method searches the index for response candidates and subsequently re4http://research.nii.ac.jp/ntcir/data/data-en.html 5https://weibo.com turns the one that best matches the message after re-ranking (see Sec. 3.4 for details). S2S: The Seq2Seq model with the attention mechanism (Bahdanau et al., 2015). MS2S: The “multi sequence to sequence” (Song et al., 2018) encodes N-best response candidates using N encoders and subsequently incorporates the results into the decoding process by the attention mechanism. Edit: The prototype editing model (Wu et al., 2019) augments the decoder with an edit vector representing lexical differences between retrieved contexts and the message. AL: The adversarial learning for neural response generation (Li et al., 2017) is also an adversarial method but is not retrieval-enhanced. Here, we do not employ the REGS (reward for every generation step) setting as the Monte-Carlo roll-out is quite time-consuming and the accuracy of the discriminator trained on partially decoded sequences is not as good as that on complete sequences. 4.3 Experiment Settings We use the published code6 for Edit and implement other approaches by an open source framework: Open-NMT (Klein et al., 2017). The vocabulary table consists of the most frequent 30,000 words, whose 300-dimensional word embeddings are pre-trained on the training set by Word2Vec 7. The number of hidden units for all LSTM in our approach is 500. The batch size is set to 64. The discriminator and the generator are trained alternately, where the discriminator is optimized for 10 batches, then switch to the generator for 20 batches. We use ADAM optimizer whose learning rate is initialized to 0.0001. In the inference process, we generate responses using beam search with beam size set to 5. 5 Results 5.1 Evaluation Metrics Human Evaluation We randomly sampled 200 messages from the test set to conduct the human evaluation as it is extremely time-consuming. Five annotators8 are recruited to judge a response from three aspects (Ke et al., 2018): 6https://github.com/MarkWuNLP/ResponseEdit 7https://code.google.com/archive/p/word2vec/ 8All annotators are well-educated students and have Bachelor or higher degree. 3769 Appropriateness Informativeness Grammaticality Mean +2 +1 0  Mean +2 +1 0  Mean +2 +1 0  Rtr 0.63 24.8 12.9 62.3 0.71 0.92 41.1 10.1 48.8 0.67 1.93 94.9 3.1 2.0 0.61 S2S 0.76 27.9 20.0 52.1 0.58 0.51 10.2 30.5 59.3 0.69 1.74 85.3 2.9 11.8 0.83 MS2S 0.85 31.9 21.5 46.6 0.63 0.62 14.1 33.8 52.1 0.73 1.74 85.5 3.2 11.3 0.82 Edit 0.85 31.4 21.9 46.7 0.66 0.67 15.9 34.9 49.2 0.68 1.92 95.2 1.5 3.3 0.63 AL 0.98 36.8 24.0 39.2 0.57 0.77 21.8 33.6 44.6 0.66 1.88 91.7 4.7 3.6 0.58 Ours 1.10 41.5 26.8 31.7 0.65 0.88 31.2 25.9 42.9 0.72 1.87 89.6 7.6 2.8 0.60 Table 3: Human evaluation results of mean score, proportions of three levels (+2, +1, and 0), and the agreements measured by Fleiss’s Kappa in appropriateness, informativeness, and grammaticality. AL Ours Accuracy 94.01% 95.72% Table 4: Classification accuracy of discriminators in AL and our approach. • appropriateness: a response is logical and appropriate to its message. • informativeness: a response has meaningful information relevant to its message. • grammaticality: a response is fluent and grammatical. 
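Before turning to how these aspects are scored, it may help to make the training schedule of Section 4.3 and Algorithm 1 concrete. In the sketch below only the alternation (10 discriminator batches, then 20 generator batches) and the Adam learning rate follow the paper; the generator/discriminator interfaces are assumed, the Monte-Carlo roll-outs used by the paper are omitted, and the REINFORCE-style surrogate is merely a stand-in for the generator update of Equation 5.

```python
# Rough sketch of the alternating adversarial update schedule; interfaces are assumptions.
import torch
import torch.nn.functional as F

def adversarial_train(generator, discriminator, next_batch, rounds):
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
    g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
    for _ in range(rounds):
        for _ in range(10):                                   # d-steps
            msg, gold, cands = next_batch()
            fake = generator.sample(msg, cands).detach()      # assumed sampling interface
            real_p = discriminator(msg, gold, cands)
            fake_p = discriminator(msg, fake, cands)
            d_loss = F.binary_cross_entropy(real_p, torch.ones_like(real_p)) + \
                     F.binary_cross_entropy(fake_p, torch.zeros_like(fake_p))
            d_opt.zero_grad(); d_loss.backward(); d_opt.step()
        for _ in range(20):                                   # g-steps
            msg, _, cands = next_batch()
            sample, log_prob = generator.sample_with_log_prob(msg, cands)  # assumed interface
            reward = discriminator(msg, sample, cands).detach()
            g_loss = -(reward * log_prob).mean()              # policy-gradient stand-in for Eq. 5
            g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```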
These aspects are evaluated independently. For each aspect, three levels are assigned to a response with scores from 0 to +2 (Shang et al., 2015), where 0 represents bad and +2 represents excellent. The appropriateness differs from the informativeness in that the former focuses on the logical relationship between a message and a response, while the latter evaluates the richness of relevant content. Automatic Evaluation We employ Dist-1 and Dist-2 (Li et al., 2016a) to evaluate the diversity of responses, where Dist-k is the number of distinct k-grams normalized by the total number of words of responses. We also evaluate the Originality by computing the ratio of responses that do not appear in the training set (Wu et al., 2019). To validate the effectiveness of retrieved candidates in enhancing the discriminator, the classification accuracy of the discriminator in AL and our approach is also reported. Note that the two discriminators after pre-training or adversarial training cannot be compared directly because they are trained by different negative samples produced by different generators. We thus create a special dataset for this metric where negative samples are generated by a well-trained generator (otherwise, the accuracy will easily reach nearly 100% as fixed negative samples of low quality are too easy to be distinguished) of AL in advance. 5.2 Analysis The results of the classification accuracy of different discriminators are presented in Table 4. Trained on an identical dataset, our discriminator achieves higher accuracy than the conventional discriminator in AL. This indicates that the N-best response candidates are helpful for the discriminator in distinguishing between human-generated responses and machine-generated responses, which could in turn benefit the generator in the adversarial training process (discussed later). Table 3 shows the results of human evaluation. Our approach has the highest mean score and the largest proportions of +2 and +1 in appropriateness. Meanwhile, it outperforms all generationbased and retrieval-enhanced approaches in informativeness. This suggests that our approach is able to respond more appropriately and incorporate informative content into responses at the same time. Note that Rtr has the highest informativeness mean score due to its diverse human-written content. However, it may also contain some irrelevant information, leading to a bad performance in appropriateness. Besides, most responses in Rtr are annotated as +2 or 0 in informativeness. This is also because Rtr responses are extremely diverse which always include new content, making a response tend to get +2 if the content is relevant, otherwise 0. In terms of grammaticality, the mean score of our approach is higher than that of S2S and MS2S, and is comparable with that of AL, indicating that our approach is competitive in generating fluent responses. Edit has a high mean score mainly due to its relatively simple sentence structure. As shown in Figure 2, S2S and MS2S have similar simple sentence structure to 3770 Model # of UNI Dist-1 # of BI Dist-2 Origin Rtr 6,837 0.113 25,863 0.428 0.000 S2S 1,247 0.023 3,122 0.060 0.288 MS2S 2,596 0.049 6,455 0.122 0.351 EDIT 1,847 0.027 5,690 0.085 0.540 AL 1,760 0.033 6,697 0.124 0.590 D+ 2,057 0.038 8,683 0.158 0.775 G+ 2,440 0.046 10,461 0.200 0.792 Ours 3,356 0.060 13,184 0.236 0.842 Table 5: Automatic evaluation results of the number of distinct uni-grams (# of UNI) and bi-grams (# of BI), Dist-1, Dist-2 and Originality (Origin). 
D+ and G+ are two variants of our approach where candidates are only available for the discriminator and the generator, respectively. Edit, the reason for the relatively low mean scores of S2S and MS2S in grammaticality is that they have some repetitive responses, like “windows, windows, windows”. Agreements among different annotators are calculated by Fleiss’ Kappa (Fleiss, 1971). The values of appropriateness and informativeness are all in an interval of (0.4, 0.6] or (0.6, 0.8], which can be seen as “Moderate agreement” and “Substantial agreement”, respectively. Grammaticality has relatively higher agreement values as it is easier to reach an agreement on grammatical errors. We report the results of Dist-1, Dist-2, and Originality in Table 5. AL outperforms S2S in all metrics, indicating that adversarial training is helpful for generating diverse n-grams and responses. By introducing N-best response candidates, our approach further increases Dist-2 by 0.112 based on AL (from 0.124 to 0.236) and the improvement is significant (t-test, p <0.01). In contrast, the increase of Dist-2 after combining Nbest response candidates in MLE based approach is only 0.062, comparing MS2S with S2S. This suggests that introducing a discriminator with adversarial training is more effective than MLE objective in utilizing N-best response candidates to generate more diverse n-grams. Note that the improvement after introducing candidates in Dist-1 and Originality is not as significant as that in Dist2. This is because responses of MLE based models (MS2S and EDIT) tend to contain informative content with simple sentence structures, like “... is (not) good.” (as shown in Figure 2), resulting in high Dist-1 and Originality scores, but their Dist-2 scores are relatively lower than AL and Ours. To understand the importance of different comUtterance Translation MSG /!Wi-Fi  -'  I have a Wi-Fi signal at home, but do not have access to the Internet, what’s the reason? C#1 $ % +(. .*!01 I guess there is a problem with the call between Telecom and Unicom. C#2 ) &0 1 I don't think it's your problem. S2S  ,& I think so too. MS2S &"  My cell phone signal is not good. EDIT ,%  This ad is too Telecom. AL # #   No signal, no signal. Ours (." Let’s change to Unicom's mobile phone. Figure 2: An example of a test message (MSG), candidates (C#1 and C#2), and responses from different models. The last column are their translations. ponents of our approach, we also train two variants: D+ and G+, where N-best response candidates are only available for the discriminator and the generator, respectively. Note that AL does not utilize candidates in the generator nor the discriminator, thus can be seen as a start point of D+ and G+. As shown in Table 5, there is an improvement in the performance of both the two variants after introducing the candidates comparing to AL. The improvement in G+ is more significant as its generator can directly utilize the candidates as generation materials. While candidates’ information in D+ is compressed into a discriminative signal by the discriminator. Nevertheless, introducing candidates into the discriminator helps to generate more diverse responses comparing AL with D+, and G+ with Ours, demonstrating that the retrieval-enhanced discriminator is able to benefit the generator. Figure 2 shows an example of responses of different models along with the input message and Nbest response candidates (C#1 and C#2). 
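Before returning to the example in Figure 2, the diversity metrics of Section 5.1 can be stated precisely in a few lines; the function names and the tokenized toy responses below are purely illustrative.

```python
# Self-contained sketch of Dist-k and Originality as defined in Section 5.1.
def dist_k(responses, k):
    # Dist-k: number of distinct k-grams normalized by the total number of response words.
    distinct, total_words = set(), 0
    for tokens in responses:
        total_words += len(tokens)
        distinct.update(tuple(tokens[i:i + k]) for i in range(len(tokens) - k + 1))
    return len(distinct) / max(total_words, 1)

def originality(responses, training_responses):
    # Originality: ratio of generated responses that do not appear in the training set.
    train = {tuple(r) for r in training_responses}
    return sum(tuple(r) not in train for r in responses) / max(len(responses), 1)

responses = [["me", "too"], ["change", "to", "unicom"], ["me", "too"]]
print(dist_k(responses, 1), dist_k(responses, 2), originality(responses, [["me", "too"]]))
```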
The C#1, which best matches the message among all the candidates, is also the response of the Rtr baseline. We can see that it contains diverse content, such as “Unicom” and “Telecom”(two telecommunication operators in China, providing broadband, mobile communication as well as customized mobile phones). However, it talks about “the call” be3771 tween the two operators, which is irrelevant to the message. The response of S2S is a generic response. AL has a more diverse response than S2S, however, it does not have access to candidates, which limits the diversity. MLE based retrievalenhanced models can make use of the content of candidates, like “Telecom” in EDIT, but the way they present the content is not as diverse as ours. 6 Conclusion and Future Work We propose a Retrieval-Enhanced Adversarial Training method for neural response generation in dialogue systems. In contrast to existing approaches, our REAT method directly uses response candidates from retrieval-based systems to improve the discriminator in adversarial training. Therefore, it can benefit from the advantages of retrieval-based response candidates as well as neural responses from generation-based systems. Experiments show that the REAT method significantly improves the quality of the generated responses, which demonstrates the effectiveness of this approach. In future research, we will further investigate how to better leverage larger training data to improve the REAT method. In addition, we will also explore how to integrate external knowledge in other formats, like the knowledge graph, into adversarial training so that the quality could be further improved. Acknowledgments The authors would like to thank all the anonymous reviewer for their insightful comments. The paper is supported by the National Natural Science Foundation of China (No. 61772153). References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of International Conference on Learning Representations. Regina Barzilay and Kathleen R McKeown. 2005. Sentence fusion for multidocument news summarization. Journal of Computational Linguistics, 31(3):297–328. Wanxiang Che, Zhenghua Li, and Ting Liu. 2010. LTP: A Chinese language technology platform. In Coling 2010: Demonstrations, pages 13–16, Beijing, China. Coling 2010 Organizing Committee. Joseph L Fleiss. 1971. Measuring nominal scale agreement among many raters. Journal of Psychological bulletin, 76(5):378. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Journal of Neural computation, 9(8):1735–1780. Zongcheng Ji, Zhengdong Lu, and Hang Li. 2014. An information retrieval approach to short text conversation. arXiv preprint arXiv:1408.6988. Pei Ke, Jian Guan, Minlie Huang, and Xiaoyan Zhu. 2018. Generating informative responses with controlled sentence function. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1499–1508, Melbourne, Australia. Association for Computational Linguistics. Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander Rush. 2017. OpenNMT: Open-source toolkit for neural machine translation. In Proceedings of ACL 2017, System Demonstrations, pages 67–72, Vancouver, Canada. Association for Computational Linguistics. Anton Leuski, Ronakkumar Patel, David Traum, and Brandon Kennedy. 2006. Building effective question answering characters. 
In Proceedings of the 7th SIGdial Workshop on Discourse and Dialogue, pages 18–27, Sydney, Australia. Association for Computational Linguistics. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016a. A diversity-promoting objective function for neural conversation models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 110–119, San Diego, California. Association for Computational Linguistics. Jiwei Li, Will Monroe, Alan Ritter, Dan Jurafsky, Michel Galley, and Jianfeng Gao. 2016b. Deep reinforcement learning for dialogue generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1192– 1202, Austin, Texas. Association for Computational Linguistics. Jiwei Li, Will Monroe, Tianlin Shi, S´ebastien Jean, Alan Ritter, and Dan Jurafsky. 2017. Adversarial learning for neural dialogue generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2157–2169, Copenhagen, Denmark. Association for Computational Linguistics. Kevin Lin, Dianqi Li, Xiaodong He, Zhengyou Zhang, and Ming-Ting Sun. 2017. Adversarial ranking for language generation. In Proceedings of the ThirtyFirst Conference on Neural Information Processing Systems, pages 3155–3165. 3772 Diane Litman, Satinder Singh, Michael Kearns, and Marilyn Walker. 2000. Njfun: a reinforcement learning spoken dialogue system. In ANLP-NAACL 2000 Workshop: Conversational Systems, pages 17– 20. Erwin Marsi and Emiel Krahmer. 2005. Explorations in sentence fusion. In Proceedings of the Tenth European Workshop on Natural Language Generation (ENLG-05). Lili Mou, Yiping Song, Rui Yan, Ge Li, Lu Zhang, and Zhi Jin. 2016. Sequence to backward and forward sequences: A content-introducing approach to generative short-text conversation. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 3349–3358, Osaka, Japan. The COLING 2016 Organizing Committee. Gaurav Pandey, Danish Contractor, Vineet Kumar, and Sachindra Joshi. 2018. Exemplar encoder-decoder for neural conversation generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1329–1338, Melbourne, Australia. Association for Computational Linguistics. Minghui Qiu, Feng-Lin Li, Siyu Wang, Xing Gao, Yan Chen, Weipeng Zhao, Haiqing Chen, Jun Huang, and Wei Chu. 2017. AliMe chat: A sequence to sequence and rerank based chatbot engine. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 498–503, Vancouver, Canada. Association for Computational Linguistics. Steven J Rennie, Etienne Marcheret, Youssef Mroueh, Jerret Ross, and Vaibhava Goel. 2017. Self-critical sequence training for image captioning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7008–7024. Jost Schatzmann, Karl Weilhammer, Matt Stuttle, and Steve Young. 2006. A survey of statistical user simulation techniques for reinforcement-learning of dialogue management strategies. Journal of The Knowledge Engineering Review, 21(2):97–126. Iulian V Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using generative hierarchical neural network models. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence. 
Iulian Vlad Serban, Tim Klinger, Gerald Tesauro, Kartik Talamadupula, Bowen Zhou, Yoshua Bengio, and Aaron C Courville. 2017a. Multiresolution recurrent neural networks: An application to dialogue response generation. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, pages 3288–3294. Iulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2017b. A hierarchical latent variable encoder-decoder model for generating dialogues. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence. Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural responding machine for short-text conversation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1577–1586, Beijing, China. Association for Computational Linguistics. Yiping Song, Rui Yan, Cheng-Te Li, Jian-Yun Nie, Ming Zhang, and Dongyan Zhao. 2018. An ensemble of retrieval-based and generation-based humancomputer conversation systems. In Proceedings of the 27th International Joint Conference on Artificial Intelligence and the 23rd European Conference on Artificial Intelligence. Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. 2015. A neural network approach to context-sensitive generation of conversational responses. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 196–205, Denver, Colorado. Association for Computational Linguistics. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of the Twenty-Eighth Conference on Neural Information Processing Systems, pages 3104–3112. Oriol Vinyals and Quoc Le. 2015. A neural conversational model. arXiv preprint arXiv:1506.05869. Joseph Weizenbaum. 1966. Eliza—a computer program for the study of natural language communication between man and machine. Journal of Communications of the ACM, 9(1):36–45. Tsung-Hsien Wen, David Vandyke, Nikola Mrkˇsi´c, Milica Gasic, Lina M. Rojas Barahona, Pei-Hao Su, Stefan Ultes, and Steve Young. 2017. A networkbased end-to-end trainable task-oriented dialogue system. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 438–449, Valencia, Spain. Association for Computational Linguistics. Jason D Williams and Steve Young. 2007. Partially observable markov decision processes for spoken dialog systems. Journal of Computer Speech & Language, 21(2):393–422. 3773 Ronald J Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. Journal of Machine Learning, 3(8):229–256. Yu Wu, Furu Wei, Shaohan Huang, Zhoujun Li, and Ming Zhou. 2019. Response generation by contextaware prototype editing. In Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence. Yu Wu, Wei Wu, Chen Xing, Can Xu, Zhoujun Li, and Ming Zhou. 2017. A sequential matching framework for multi-turn response selection in retrievalbased chatbots. arXiv preprint arXiv:1710.11344. Chen Xing, Wei Wu, Yu Wu, Jie Liu, Yalou Huang, Ming Zhou, and Wei-Ying Ma. 2017. Topic aware neural response generation. 
In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, volume 17, pages 3351–3357. Jingjing Xu, Xuancheng Ren, Junyang Lin, and Xu Sun. 2018. Diversity-promoting GAN: A crossentropy based generative adversarial network for diversified text generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3940–3949, Brussels, Belgium. Association for Computational Linguistics. Rui Yan, Yiping Song, and Hua Wu. 2016. Learning to respond with deep neural networks for retrievalbased human-computer conversation system. In Proceedings of the 39th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 55–64. ACM. Liu Yang, Minghui Qiu, Chen Qu, Jiafeng Guo, Yongfeng Zhang, W Bruce Croft, Jun Huang, and Haiqing Chen. 2018. Response ranking with deep matching networks and external knowledge in information-seeking conversation systems. In Proceedings of The 41st International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 245–254. Weinan Zhang, Lingzhi Li, Dongyan Cao, and Ting Liu. 2017. Exploring implicit feedback for open domain conversation generation. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, pages 547–554. Yizhe Zhang, Michel Galley, Jianfeng Gao, Zhe Gan, and Bill Dolan. 2018. Generating informative and diverse conversational responses via adversarial information maximization. In Proceedings of the Thirty-Second Conference on Neural Information Processing Systems, pages 1810–1820. Hao Zhou, Tom Young, Minlie Huang, Haizhou Zhao, Jingfang Xu, and Xiaoyan Zhu. 2018. Commonsense knowledge aware conversation generation with graph attention. In the 27th International Joint Conference on Artificial Intelligence and the 23rd European Conference on Artificial Intelligence, pages 4623–4629.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3774–3783 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3774 Vocabulary Pyramid Network: Multi-Pass Encoding and Decoding with Multi-Level Vocabularies for Response Generation Cao Liu1,2, Shizhu He1, Kang Liu1,2, Jun Zhao1,2 1 National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China 2 University of Chinese Academy of Sciences, Beijing, 100049, China {cao.liu, shizhu.he, kliu, jzhao}@nlpr.ia.ac.cn Abstract We study the task of response generation. Conventional methods employ a fixed vocabulary and one-pass decoding, which not only make them prone to safe and general responses but also lack further refining to the first generated raw sequence. To tackle the above two problems, we present a Vocabulary Pyramid Network (VPN) which is able to incorporate multi-pass encoding and decoding with multi-level vocabularies into response generation. Specifically, the dialogue input and output are represented by multi-level vocabularies which are obtained from hierarchical clustering of raw words. Then, multi-pass encoding and decoding are conducted on the multilevel vocabularies. Since VPN is able to leverage rich encoding and decoding information with multi-level vocabularies, it has the potential to generate better responses. Experiments on English Twitter and Chinese Weibo datasets demonstrate that VPN remarkably outperforms strong baselines. 1 Introduction As one of the long-term goals in AI and NLP, automatic conversation devotes to constructing automatic dialogue systems to communicate with humans (Turing, 1950). Benefited from large-scale human-human conversation data available on the Internet, data-driven dialog systems have attracted increasing attention of both academia and industry (Ritter et al., 2011; Shang et al., 2015a; Vinyals and Le, 2015; Li et al., 2016a,c, 2017). Recently, a popular approach to build dialog engines is to learn a response generation model within an encoder-decoder framework such as sequence-to-sequence (Seq2Seq) model (Cho et al., 2014a). In such a framework, an encoder transforms the source sequence into hidden vectors, and a decoder generates the targeted sequence based on the encoded vectors and previs Decoders Concepts Meanings Words Distilled Concepts Coarse Meanings Grounding Words Distilled Concepts Coarse Meanings Grounding Words Multi-Pass Encoders Multi-Pass Decoders Raw Words High-Level Encoder Low-Level Encoder Raw-Word Encoder High-Level Decoder Low-Level Decoder Raw-Word Decoder Multi-Level Vocabularies High-Level Clusters Low-Level Clusters Figure 1: Vocabulary pyramid networks for response generation. The dialogue input (context) and output (response) are represented by multi-level vocabularies (e.g., raw words, low-level clusters and high-level clusters) and then processed by multi-pass encoder and decoder. ously generated words. In this process, the encoder and decoder share a vocabulary (word list)1, and the targeted words are typically performed by a softmax classifier over the vocabulary word-byword. However, such typical Seq2Seq model is prone to generate safe and repeated responses, such as “Me too” and “I don’t know”. 
In addition to the exposure bias issue2, the main reasons of this problem include: 1) a fixed (single) vocabulary (word list) in decoding, which usually covers high-frequency words, so it is easy to capture high-frequency patterns (e.g., “Me too”) and lose a great deal of content information in middle and low-frequency patterns; 2) one-pass decoding, 1Encoder and decoder may have different word lists. We find it performs closely using same or different vocabularies. 2A model generates the next word given the previous gold words in training while it is based on previously predicted words in the test (Ranzato et al., 2016). 3775 where word-by-word generation from left to right is prone to error accumulation since previously generated erroneous words will greatly affect future un-generated words. More importantly, it can leverage only the previously generated words but not the future un-generated words. In fact, there are some researches in text generation tasks such as dialogue generation, machine translation and text summarization, are dedicated to solving the above issues. In order to alleviate issues on the fixed vocabulary, Wu et al. (2018a) incorporated dynamic vocabulary mechanism into Seq2Seq models, which dynamically allocates vocabularies for each input by a vocabulary prediction model. Xing et al. (2017) presented topic aware response generation by incorporating topic words obtained from a pre-trained LDA model (Blei et al., 2003). Besides, several works attempted to solve the dilemma of one-pass decoding. Xia et al. (2017) proposed deliberation network for sequence generation, where the first-pass decoder generates a rough sequence and then the secondpass decoder refines the rough sequence. However, so far there has been no unified framework to solve both of the aforementioned problems. In this study, we present Vocabulary Pyramid Networks (VPN) to tackle the issues of one fixed vocabulary and one-pass decoding simultaneously. Specifically, VPN incorporates multipass encoding and decoding with multi-level vocabularies into response generation. As depicted in Figure 1, the multi-level vocabularies contain raw words, low-level clusters and high-level clusters, where low-level and high-level clusters are obtained from hierarchical clustering of raw words. Afterward, the multi-pass encoder (rawword encoder, low-level encoder, and high-level encoder) gradually works on diminishing vocabularies from raw words to low-level clusters until to high-level clusters, and it looks like a “pyramid” concerning the vocabulary size. On the other side, the multi-pass decoder gradually increases the size of processed vocabularies from high-level clusters to low-level clusters and finally to raw words. From a theoretical point of view, people usually associate raw input words with low-level or highlevel abstractions like semantic meanings and concepts on human-human conversations. Based on the abstractive cognition, people organize contents and select the expressive words as the response (Xing et al., 2017). From a practical perspective, VPN is able to capture much more sequence information with multi-level vocabularies. As a result, VPN has the potential to generate better responses. To verify the effectiveness of the proposed model, we conduct experiments on two public response generation datasets: English Twitter and Chinese Weibo. Both automatic and manual evaluations demonstrate that the proposed VPN is remarkably better than the state-of-the-art. 
2 Background 2.1 Sequence-to-Sequence Model In Seq2Seq models (Cho et al., 2014a), an encoding RNN (recurrent neural network) transforms the source sequence X = {x1, x2, ..., xLX} into distributed representations H = {h1, h1, ..., hLX} through a basic model: ht = f(xt, ht−1). Here, xt is the word embedding for xt, f is a non-linear transformation, where GRU (Cho et al., 2014b) and LSTM (Hochreiter and Schmidhuber, 1997) are widely used for capturing long-term dependencies. Then a decoder generates the targeted sequence Y = {y1, y2, ..., yLY } as follows: st = f([yt−1, c], st−1) (1) p(yt|y<t, X) = g(yt−1, c, st) (2) where c = hLX, st is the decoding state in time step t, and g is a non-linear function. In the basic Seq2Seq models, each word is generated from a same context vector c. In order to capture different contexts for each generated word, attention mechanism (Bahdanau et al., 2015) extracts dynamic context vector ct in different decoding time steps. Formally, ct = PLX j=1 αijhj, αij ∝ exp(η(si−1, hj)), where η is a non-linear function. 2.2 Deliberation Network Conventional Seq2Seq models can leverage only the generated words but not the un-generated words in decoding, so they lack global information to refine and polish the raw generated sequence. The deliberation network (Xia et al., 2017) is proposed to deal with this issue. A deliberation network has two decoders, where the firstpass decoder generates a raw word sequence Y 1 = {y1 1, y1 2, ..., y1 LY 1} and the second-pass decoder polishes the raw word sequence. In the secondpass decoder, an extra attention model is leveraged to selectively read the output vector sequence Y 1 from the first-pass decoder, and then generate the refined output sequence Y 2 = {y2 1, y2 2, ..., y2 LY 2}. 3776 Seq2Seq Encoder Decoder Model Dynamic Vocab Seq2Seq Topic-Aware Seq2Seq Deliberate Network 1rt Pass Dec. 2nd Pass Dec. Multi-Pass Encoder Multi-Pass Decoder Vocabulary Pyramid Network(VPN) High-Level Dec. Low-Level Dec. Low-Level Enc. High-Level Enc. Raw Words Common Words Topic Words Dynamic Words Low-level Clusters High-level Clusters Legend Raw-Word Enc. Raw-Word Dec. Figure 2: Differences in our VPN with typical Seq2Seq model and its variations, where different rectangles denote different vocabularies (details in “Legend”). Seq2Seq uses a vocabulary (word list) in decoding. Dynamic vocabulary Seq2Seq integrates a common vocabulary and a dynamic vocabulary in decoding. Topic-Aware Seq2Seq incorporates topic words for each input. Deliberate network exploits first-pass and two-pass decoder within the same vocabulary list. VPN employs multi-pass encoder and multi-pass decoder with multi-level vocabularies (raw words, low-level clusters and high-level cluster). Among these models, only VPN makes use of vocabularies beyond words. Therefore, VPN could capture rich encoding and decoding information with multi-level vocabularies. 3 Methodology 3.1 Model Overview As shown in Figure 2, VPN consists of three submodules: multi-level vocabularies (Section 3.2), multi-pass encoder (Section 3.3) and multi-pass decoder (Section 3.4). Specifically, multi-level vocabularies contain raw words, low-level clusters and high-level clusters (black, blue and red solid rectangles in Figure 2). The multi-pass encoder starts from the raw words and then to the low-level clusters finally to the high-level clusters. In contrast, the multi-pass decoder works from the highlevel clusters to the low-level clusters until to the raw words. 
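To make the soft attention of Section 2.1 concrete before the model details, the following NumPy sketch computes the attention weights and the dynamic context vector for a single decoding step. The additive scoring function is only one common choice for the unspecified non-linearity, and all array names and sizes are illustrative.

```python
# Tiny sketch of one attention step: alpha_{tj} ∝ exp(eta(s_{t-1}, h_j)), c_t = sum_j alpha_{tj} h_j.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_step(s_prev, H, W_s, W_h, v):
    # eta(s, h) = v^T tanh(W_s s + W_h h) is one common instantiation of the scoring function.
    scores = np.array([v @ np.tanh(W_s @ s_prev + W_h @ h) for h in H])
    alpha = softmax(scores)                  # attention weights over source states
    c_t = (alpha[:, None] * H).sum(axis=0)   # dynamic context vector
    return c_t, alpha

# Toy dimensions: 5 encoder states and a decoder state of size 8.
rng = np.random.RandomState(0)
H, s_prev = rng.randn(5, 8), rng.randn(8)
W_s, W_h, v = rng.randn(8, 8), rng.randn(8, 8), rng.randn(8)
c_t, alpha = attention_step(s_prev, H, W_s, W_h, v)
print(alpha.round(3), c_t.shape)
```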
The details of each component are in the following. 3.2 Multi-Level Vocabularies junior sophomore freshman amazingly surprisingly black Figure 3: Multi-level vocabularies via hierarchical clustering. As illustrated in Figure 3, multi-level vocabularies contain three different vocabularies: raw words, low-level clusters and high-level clusters. Specifically, the raw words are the original words in the training data, and they are denoted as Vr = {wr 1, wr 2, ..., wr R}. The raw words are agglomerated into low-level clusters Vl = {wl 1, wl 2, ..., wl L} and high-level clusters Vh = {wh 1, wh 2, ..., wh H} by “bottom-up” hierarchical clustering. In order to decide which clusters could be agglomerated, we utilize the implementation of hierarchical clustering in Scipy3. Specifically, we pre-train rawword embeddings by the word2vec model4 as inputs, and then we leverage the Ward (Ward, 1963) linkage and maxclust5 criterion to automatically construct hierarchical clustering. In this way, we could obtain three different vocabularies: Vr, Vl and Vh, where their vocabulary sizes are decreased: |Vr|>|Vl|>|Vh|, and it looks like a “pyramid” concerning the vocabulary size. It should be emphasized that an original input sequence could be expanded into three input sequences through the three vocabulary lists, and it is the same for the output sequence. 3.3 Multi-Pass Encoder The encoder aims to transform input sequences into distributional representations. In order to capture much more information from different input sequences, VPN employs a multi-pass encoder, which contains three different encoders in order: raw-word encoder, low-level encoder and highlevel encoder. As a result, the multi-pass encoder is able to encode more and more abstractive infor3https://scipy.org/ 4Implemented in https://radimrehurek.com/gensim/models/word2vec.html 5https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.cluster.hierarchy.fcluster.html 3777 mation from words to clusters. The details are in the following. Raw-Word Encoder The raw-word encoder accepts an input sequence of word ids from raw words Vr. A bi-directional LSTM (Schuster and Paliwal, 1997) is leveraged to capture the long-term dependency from forward and backward directions. The concatenated representation of bi-directional hidden states: hr i = [−→h r i , ←−h r Li−i+1], is regarded as the encoded vector for each input word. Finally, the input sequence is transformed into a hidden state sequence: Hr = {hr 1, hr 2, ..., hr Li} (3) Specifically, the initiated hidden state is a zero vector, and the hidden state (hr Li) in the last word could be used for initiating the next encoder (lowlevel encoder). Low-Level Encoder Low-level encoder is similar to the raw-word encoder. However, low-level encoder takes a sequence of low-level cluster ids from Vl as inputs, and the hidden state is initiated by the last hidden state of the raw-word encoder (hr Li). Similarly, we can obtain the hidden state sequence in the lowlevel encoder: Hl = {hl 1, hl 2, ..., hl Li} (4) High-Level Encoder The high-level encoder accepts a sequence of high-level cluster ids from Vh, and the initiated hidden state is the final hidden state hl Li in the lowlevel encoder. Finally, the hidden state sequence in the high-level encoder is denoted as follows: Hh = {hh 1, hh 2, ..., hh Li} (5) 3.4 Multi-Pass Decoder The decoder is responsible for generating targeted sequences. 
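Before moving on to the decoders, the construction of the multi-level vocabularies in Section 3.2, together with the expansion of one raw-word sequence into the three parallel id sequences consumed by the encoders above, can be sketched with SciPy as follows. The Ward linkage and maxclust criterion follow the paper; the function names, the toy embeddings standing in for the 300-dimensional word2vec vectors, and the reduced cluster counts in the example are illustrative assumptions.

```python
# Sketch of building the vocabulary pyramid via agglomerative clustering of word embeddings.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def build_vocab_pyramid(emb, n_low=3400, n_high=340):
    # "Bottom-up" hierarchical clustering with Ward linkage over the word vectors.
    Z = linkage(emb, method='ward')
    # maxclust cuts the dendrogram into at most n clusters (labels are 1-based).
    low = fcluster(Z, t=n_low, criterion='maxclust') - 1    # raw word id -> low-level cluster id
    high = fcluster(Z, t=n_high, criterion='maxclust') - 1  # raw word id -> high-level cluster id
    return low, high

def expand_sequence(word_ids, low, high):
    # One raw-word id sequence becomes three parallel input sequences (V_r, V_l, V_h).
    word_ids = np.asarray(word_ids)
    return word_ids, low[word_ids], high[word_ids]

# Example with toy embeddings and reduced cluster counts in place of 34000/3400/340.
emb = np.random.RandomState(0).randn(200, 16)
low, high = build_vocab_pyramid(emb, n_low=40, n_high=8)
print(expand_sequence([3, 17, 42], low, high))
```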
Inspired from the deliberation network (Xia et al., 2017), we present a multi-pass decoder which consists of three decoders in order: highlevel decoder, low-level decoder and raw-word decoder. The three decoders have their own targeted sequences from different vocabulary lists, and the multi-pass decoder first generates the abstractive (high and low-level) clusters and then generates the raw (specific) words. It is different from the deliberation network where both the first-pass decoder and the second-pass decoder aim to generate raw words in the same vocabulary. The details of our multi-pass decoder are in the following. High-Level Decoder The high-level decoder generates a high-level cluster sequence from Vh. Similar to humanhuman conversations, where people usually associate an input message with high-level abstractions like concepts in their minds before speaking, the high-level decoder generates the most abstractive cluster sequence before selecting specific words as responses. The high-level decoder is based on another LSTM, which is initiated with the last hidden state hh Li in the high-level encoder. In order to decide which parts of sources need more attention, an attention mechanism (Bahdanau et al., 2015) is introduced in the high-level decoder. Intuitively, the encoded hidden state sequence Hh in the highlevel encoder contains the most relevant encoded information for the high-level decoder because they share the same vocabulary Vh. Nevertheless, in order to capture much more encoded information from the source sequences, the high-level decoder adopts three attention models to attentively read different encoded state sequences: Hr, Hl and Hh (Equation 3-5), respectively. Take Hr as an example, at each decoding time step j, the high-level decoder dynamically chooses the context vector chr j based on Hr = {hr 1, hr 2, ..., hr Li} and the decoding state sh j−1 as follows: chr j = XLi i=1 αjihr i ; αji = eρ(sh j−1,hr i ) P i′ eρ(sh j−1,hr i′) (6) where ρ is a non-linear function to compute the attentive strength. Similarly, the attentive context vectors (chl j and chh j ) from the low-level and highlevel encoders could be calculated by the attention models. Based on chr j , chl j and chh j , the decoding state sh j is updated as: sh j = fh([yh j−1, chr j , chl j , chh j ], sh j−1) (7) where yh j−1 is the embedding vector of the previously decoded cluster at time step j −1, and fh is the decoding LSTM unit. Finally, the targeted cluster is typically obtained by a softmax classifier over Vh based on the embedding similarity. In this way, the high-level decoder could generate the output sequence yh = {yh 1, yh 2, ..., yh Lo}, which corresponds to the embedding sequence: Yh = {yh 1, yh 2, ..., yh Lo} (8) 3778 Low-Level Decoder Once the high-level cluster sequence is generated from the high-level decoder, it could be leveraged to the low-level decoder for further decoding the low-level cluster sequence. Based on the three encoded state sequences (Hr, Hl, Hh) and the output embedding sequence Yh of the high-level decoder, the low-level encoder generates another sequence from the low-level clusters Vl. The low-level decoder is similar to the highlevel decoder. However, there still are some differences between them: 1) The initiated hidden state sl 0 in the low-level decoder is performed as the final decoding state sh Lo in the high-level decoder. 
2) The attentive context vectors (clr j , cll j and clh j ) from encoded state sequences are calculated with different parameters compared to ones in the high-level decoder. 3) Inspired from deliberation networks, previously generated sequence Yh in the highlevel encoder is fed into the low-level decoder, where high-level (global) information guides lowlevel generations, and another attention model is leveraged to capture such information, which is similar to Equation 6 mathematically: olh j = XLo i=1 βjiyh i (9) where the attentive weight βji is calculated from the low-level decoding states sl j−1 and output embedding sequence Yh (Equation 8) in the highlevel decoder. Thereafter, olh j is concatenated to update the decoded hidden state as follows: sl j = fl([yl j−1, clr j , cll j , clh j , olh j ], sl j−1) (10) where fl is another LSTM unit. Finally, the output yl j is generated by a softmax classifier from Vl based on embedding similarity. Raw-Word Decoder After obtaining the high-level and low-level cluster sequence, the next step is to produce the final raw word sequence from Vr by the raw-word decoder. The hidden state of the raw-word decoder sh 0 is initiated with the final decoding state s l Lo in the low-level decoder. The decoding state in the raw-word decoder is updated as follows: sr j = fr([yr j−1, crr j , crl j , crh j , orl j , orh j ], sr j−1) (11) where crr j , crl j , crh j are attentive context vectors from three encoded hidden state sequences. orh j and orl j (similar to Equation 9) are the weighted sums of output embedding sequences from the high-level decoder and low-level decoder. Similarly, the targeted word is typically predicted by a softmax classifier over Vr based on the word embedding similarity. Eventually, the raw-word decoder iteratively generates a targeted word sequence yr = {yr 1, yr 2, ..., yr Lo}. 3.5 Learning Multi-level vocabularies of hierarchical clustering are obtained in advance through an un-supervised way, while the multi-level encoder and decoder could be optimized with supervised learning. The encoder and decoder are totally differential, so they are able to be optimized in an end-to-end manner by the back propagation. Giving a source input and a targeted output, there are three inputoutput pairs obtained from different vocabulary lists: {xn, yn}n∈{r,l,h}. Each output sequence corresponds to a training loss, and the total losses perform as follows: L = Lh + Ll + Lr Lh = −1 Lo XLo j=1 log[p(yh j |yh <j, xr, xl, xh)] Ll = −1 Lo XLo j=1 log[p(yl j|yl <j, xr, xl, xh, Yh)] Lr = −1 Lo XLo j=1 log[p(yr j|yr <j, xr, xl, xh, Yh, Yl)] (12) where the three negative log-likelihoods (Lh, Ll and Lr) are losses for different-level targeted outputs. Yh and Yl are output embedding sequences in the high-level decoder and low-level decoder, respectively. Finally, the sum of different losses in three decoders is considered as the total losses L. 4 Experiment 4.1 Datasets There are large-scale message-response pairs on social websites, which consist of informational text from different topics (Chen et al., 2017). Our experimental data comes from two public corpus: English “Twitter”6 and Chinese “Weibo” (Shang et al., 2015b). In order to improve the quality of datasets, some noisy message-response pairs are filtered (e.g., containing too many punctuations or emoticons), and the datasets are randomly split into Train/Dev/Test by a proportion (9:0.5:0.5). 
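As a brief aside before the implementation details, the joint objective of Section 3.5 (Equation 12) amounts to summing three per-token cross-entropy losses, one per decoder. In the sketch below the logits tensors stand in for the softmax classifiers of the high-level, low-level and raw-word decoders; the shapes and names are illustrative.

```python
# Sketch of the joint loss L = L_h + L_l + L_r over the three decoders.
import torch
import torch.nn.functional as F

def vpn_loss(high_logits, low_logits, raw_logits, y_high, y_low, y_raw):
    # Each logits tensor has shape (B, Lo, |V|) over its own vocabulary; each target is (B, Lo).
    def nll(logits, target):
        return F.cross_entropy(logits.reshape(-1, logits.size(-1)), target.reshape(-1))
    return nll(high_logits, y_high) + nll(low_logits, y_low) + nll(raw_logits, y_raw)

# Toy shapes using the paper's vocabulary sizes (340 / 3,400 / 34,000).
B, Lo = 2, 7
loss = vpn_loss(torch.randn(B, Lo, 340), torch.randn(B, Lo, 3400), torch.randn(B, Lo, 34000),
                torch.randint(340, (B, Lo)), torch.randint(3400, (B, Lo)), torch.randint(34000, (B, Lo)))
print(float(loss))
```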
6https://github.com/Marsan-Ma-zz/chat corpus 3779 4.2 Implementation Details In order to make our model comparable with typical existing methods, we keep the same experimental parameters for VPN and comparative methods. We set the vocabulary size of raw words as 34000, and the word vector dimension is 300. Moreover, source inputs are encoded by 600dimensional vectors with bi-direction LSTMs, and responses are also decoded by LSTM with 600 dimensions. The total losses are minimized by an Adam optimizer (Kingma and Ba, 2015) with 0.0001 learning rate. Particularly, the size of lowlevel clusters and high-level clusters are 3400 and 340, respectively, which are significantly smaller than the size of raw words (34000), and these clusters are also represented by 300-dimensional vectors. Finally, we implemented all models with the TensorFlow. 4.3 Evaluation Metrics Evaluation for generative responses is a challenging and under-researching problem (Novikova et al., 2017). Similar to (Li et al., 2016b; Gu et al., 2016), we borrow two well-established automatic evaluation metrics from machine translation and text summarization: BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004)7, which could be leveraged to analyze the co-occurrences of n-gram between the generated responses and references. In addition to automatic evaluations, we also leverage manual evaluations to enhance the evaluations. Following previous studies (He et al., 2017; Qian et al., 2018; Liu et al., 2018), we employ three metrics for manual evaluations as follows. 1) Fluency (Flu.): measuring the grammaticality and fluency of generated responses, where too short responses are regarded as lack of fluency. 2) Consistency (Con.): measuring whether the generated responses are consistent with the inputs or not. 3) Informativeness (Inf.): measuring whether the response provides informative (knowledgeable) contents or not. 4.4 Overall Comparisons Comparison Settings. We compare VPN with the following methods: 7Implemented in https://github.com/Maluuba-/nlg-eval. Evaluations on Twitter are based on token level. In particular, the BLEU and ROUGE on Weibo dataset are based on the Chinese character because Chinese characters are with semantics. Models Twitter Weibo BLEU ROUGE BLEU ROUGE S2SA (2014) 6.12 6.42 8.95 9.06 S2STA (2017) 7.73 7.57 11.45 11.29 S2SDV (2018b) 5.91 5.87 9.05 8.71 DelNet (2017) 6.42 6.76 10.04 10.03 VPN (ours) 8.58 7.88 12.51 11.76 Table 1: Overall performance on Twitter and Weibo datasets. Note that the first three lines are only onepass decoding, and the fourth line (DelNet) is beyond one-pass decoding. (1) S2SA: Sequence-to-Sequence (Sutskever et al., 2014) with attention mechanisms (Bahdanau et al., 2015). (2) S2SDV: Seq2Seq with dynamic vocabulary, the implementation is similar to Wu et al. (2018b). (3) S2STA: Seq2Seq with topic aware networks, the implementation is similar to Xing et al. (2017). S2STA could be regarded as using dynamic vocabulary because topic words are changed along with the input. (4) DelNet: deliberation networks, the implementation is similar to Xia et al. (2017). Different from the above methods, deliberation networks are beyond one-pass decoding. Comparison Results. We first report overall performances on Table 1. 
These results support the following statements: (1) Our VPN achieves the highest performances on English Twitter and Chinese Weibo dataset in all metrics, which demonstrates multi-pass encoding and decoding with multi-level vocabularies are able to deliver better responses than baselines. (2) For the one-pass decoding (the first three methods in Table 1), S2STA performs the best. Pre-trained topic words for each input are able to make the generation more target-focused in S2STA. Nevertheless, it is still worse than VPN. (3) As for models beyond one-pass decoding (the last two lines in Table 1), VPN is remarkably better than the deliberation network (DelNet), which indicates the effectiveness of multi-pass encoder and decoder with multi-level vocabularies. 4.5 The Effectiveness of Multi-Level Vocabularies Comparison Settings. To validate the effectiveness of multi-level vocabularies obtained from hierarchical clustering, we design experiments on whether using Multi-level Vocabularies (MVs) or not. The comparison setting is shown in the first 3780 Models Twitter Weibo BLEU ROUGE BLEU ROUGE enc3-dec1 (SV) 6.27 6.29 6.61 7.08 enc3-dec1 (MVs) 7.16 8.01 9.15 10.63 enc1-dec3 (SV) 7.43 7.54 9.92 10.24 enc1-dec3 (MVs) 6.75 7.78 12.01 10.86 enc3-dec3 (SV) 7.44 7.56 9.95 9.70 enc3-dec3 (MVs) 8.58 7.88 12.51 11.76 Table 2: Performances on whether using multi-level vocabularies or not, where “SV” represents single vocabulary (from raw words), and “MVs” means multilevel vocabularies obtained from hierarchical clustering. “enc” and “dec” denote encoder and decoder, respectively, and numbers after them represent how many passes. For example, “enc1-dec3” means a encoder along with three passes of decoders. column in Table 2, where numbers after “enc/dec” represent the number of encoders/decoders. “SV” denotes single vocabulary (from raw words) while “MVs” means multi-level vocabularies obtained from hierarchical clustering. Comparison Results. Table 2 demonstrates performances on whether using multi-level vocabularies. We can observe that incorporating multilevel vocabularies could improve performances on almost all of the metrics. For example, “enc3dec3 (MVs)” improves relative performance up to 25.73% in BLEU score compared with “enc3-dec3 (SV)” on the Weibo dataset. Only on the Twitter dataset, “enc1-dec3 (MVs)” is slightly worse than “enc1-dec3 (SV)” in the BLEU score. 4.6 The Effectiveness of Multi-Pass Encoding and Decoding Models Twitter Weibo BLEU ROUGE BLEU ROUGE VPN 8.58 7.88 12.51 11.76 w/o low-level ED 7.83 7.57 11.96 11.60 w/o high-level ED 6.84 7.81 10.06 10.70 w/o low&high-level ED 6.12 6.42 8.95 9.06 Table 3: Influences of multi-pass encoding and decoding, where “w/o” indicates without, “ED” represents encoder and decoder. For example, “w/o low-level ED” means removing low-level encoder and low-level decoder. Comparison Settings. In order to demonstrate the effectiveness of multi-pass encoder and multi-pass decoder, we design an ablation study as follows. 1) w/o low-level ED: without lowlevel encoder and low-level decoder; 2) w/o highlevel ED: without high-level encoder and highlevel decoder; 3) w/o low&high-level ED: without low-level encoder/decoder and high-level encoder/decoder, which is the same as the Seq2Seq model with attention mechanisms. Comparison Results. Results of the ablation study are shown in Table 3. We can clearly see that removing any encoder and decoder causes obvious performance degradation. 
Specifically, “w/o highlevel ED” obtains worse performances than “w/o low-level ED”. We guess that the high-level encoder and decoder are well trained since they have the smallest vocabulary (the size of high-level clusters is only 340), so removing the well-trained component (“w/o high-level ED”) performs poorly (Details in Section 4.8). Furthermore, “w/o low&high-level ED” performs the worst. This further indicates that multi-pass encoder and decoder contribute to generating better responses. 4.7 Manual Evaluations (MEs) Datasets Models Flu. Con. Inf. Twitter VPN vs. S2STA 56.49 54.92 54.07 VPN vs. DelNet 57.89 60.40 57.50 Weibo VPN vs. S2STA 52.31 52.99 53.54 VPN vs. DelNet 56.56 55.66 54.72 Table 4: Manual evaluations with fluency (Flu.), consistency (Con.), and informativeness (Inf.). The score is the percentage that VPN wins a baseline after removing “tie” pairs. VPN is clearly better than all baselines on the three metrics, and all results are at 99% confidence intervals. Comparison Settings. Similar to manual evaluations used in Zhou et al. (2018), we conduct a pair-wise comparison between the response generated by VPN and the one for the same input by two typical baselines: S2STA and DelNet. we sample 100 responses from each system, then two curators judge (win, tie and lose) between these two methods. Comparison Results. The results of manual evaluations are shown in Table 4, where the score is the percentage that VPN wins a baseline after removing “tie” pairs. The Cohen Kappa for interannotator statistics is 61.2, 62.1 and 70.8 for fluency, consistency and informativeness, respectively. We can see that our VPN is significantly (sign test, p-value < 0.01) better than all baselines in terms of the three metrics, which further demonstrates that VPN is able to deliver fluent, consistent and 3781 informative responses. 4.8 Discussion Decoders Twitter Weibo BLEU ROUGE BLEU ROUGE High-Level Dec. 12.44 14.93 23.92 10.66 Low-Level Dec. 8.84 8.21 22.50 9.50 Raw-Word Dec. 8.58 7.88 13.02 5.39 Table 5: Performances on each decoder in VPN8. The multi-pass decoder in VPN has three decoders. In order to investigate the reasons why the multi-pass decoder works, we will see performances on each decoder in Table 5. We can observe that the high-level decoder obtains the best performances on all metrics, and the low-level decoder outperforms the raw-word decoder. It is intuitive that the high-level decoder performs the best since it has the smallest vocabulary (340), while the raw-word decoder performs the worst because it is equipped with the biggest vocabulary (34000). From the point of performances on each decoder, the effectiveness of multi-pass decoder could be explained from curriculum learning (Bengio et al., 2009). Curriculum learning is a learning strategy in machine learning, where the key idea is to start easier aspects of the targeted task and then gradually increase the complexity. It is difficult for response generation tasks to generate raw words directly. To alleviate this problem, the multi-pass decoder first generates the easier (high-level and low-level) clusters from the small vocabularies, and then generates the raw words from the big vocabulary under the guide of the well-generated clusters. Therefore, the multi-pass decoder obtains significant performances. 5 Related Work Researches have achieved remarkable improvements on response generation for human-machine conversations. 
Currently, encoder-decoder framework, especially the Seq2Seq learning (Cho et al., 2014a), is becoming a backbone of data-drive response generation, and it has been widely applied in response generation tasks. For example, Shang et al. (2015a) presented neural recurrent encoderdecoder frameworks for short-text response gener8In the Discussion Section, all evaluations are based on tokens (IDs) for unifying, so the performances of raw-word decoder on Chinese Weibo dataset are different from the ones (character level) in Table 1. ation with attention mechanisms (Bahdanau et al., 2015). Li et al. (2016b) introduced persona-based neural response generation to obtain consistent responses for similar inputs to a speaker. Shao et al. (2017) added a self-attention to generate long and diversified responses in Seq2Seq learning. In this study, we focus on two important problems in response generation: one fixed vocabulary and one-pass decoding. Our work is inspired by following researches to alleviate issues on the fixed vocabulary. Gu et al. (2016) proposed CopyNet, which is able to copy words from the source message. External knowledge bases were also leveraged to extend the vocabulary (Qian et al., 2018; Zhou et al., 2018; Ghazvininejad et al., 2018). Moreover, Xing et al. (2017) incorporated topic words into Seq2Seq frameworks, where topic words are obtained from a pre-trained LDA model (Blei et al., 2003). Wu et al. (2018b) changed the static vocabulary mechanism by a dynamic vocabulary, which jointly learns vocabulary selection and response generation. We also borrow the idea from studies beyond one-pass decoding. Mou et al. (2016) designed backward and forward sequence generators. Xia et al. (2017) proposed deliberation networks on sequence generation beyond one-pass decoding, where the first-pass decoder generates a raw word sequence, and then the second decoder delivers a refined word sequence based on the raw word sequence. Furthermore, Su et al. (2018) presented hierarchical decoding with linguistic patterns on data-to-text tasks. However, there has been no unified frameworks to solve the issues of fixed vocabulary and onepass decoding. Differently, we propose multi-pass encoding and decoding with multi-level vocabularies to deal with the above two problems simultaneously. 6 Conclusion and Future Work In this study, we tackle the issues of one fixed vocabulary and one-pass decoding in response generation tasks. To this end, we have introduced vocabulary pyramid networks, in which dialogue input and output are represented by multi-level vocabularies and then processed by multi-pass encoding and decoding, where the multi-level vocabularies are obtained from hierarchical clustering of raw words. We conduct experiments on English Twitter and Chinese Weibo datasets. Experiments 3782 demonstrate that the proposed method is remarkably better than strong baselines on both automatic and manual evaluations. In the future, there are some promising explorations in vocabulary pyramid networks. 1) we will further study how to obtain multi-level vocabularies, such as employing other clustering methods and incorporating semantic lexicons like WordNet; 2) we also plan to design deep-pass encoding and decoding for VPN; 3) we will investigate how to apply VPN to other natural language generation tasks such as machine translation and generative text summarization. 
Acknowledgments This work is supported by the Natural Science Foundation of China (No.61533018), the Natural Key R&D Program of China (No.2017YFB1002101), the Natural Science Foundation of China (No.61702512) and the independent research project of National Laboratory of Pattern Recognition. This work is also supported by Alibaba Group through Alibaba Innovative Research (AIR) Program, CCF-DiDi BigData Joint Lab and CCF-Tencent Open Research Fund. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of ICLR. Yoshua Bengio, J´erˆome Louradour, Ronan Collobert, and Jason Weston. 2009. Curriculum learning. In Proceedings of ICML, pages 41–48. ACM. David M. Blei, Andrew Y. Ng, Michael I. Jordan, and John Lafferty. 2003. Latent dirichlet allocation. Journal of Machine Learning Research, 3:2003. Xinchi Chen, Zhan Shi, Xipeng Qiu, and Xuanjing Huang. 2017. Adversarial multi-criteria learning for chinese word segmentation. In Proceedings of ACL. Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014a. Learning phrase representations using rnn encoder–decoder for statistical machine translation. In Proceedings of EMNLP, pages 1724–1734. Kyunghyun Cho, B van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014b. On the properties of neural machine translation: Encoder-decoder approaches. In Proceedings of SSST-8. Marjan Ghazvininejad, Chris Brockett, Ming-Wei Chang, Bill Dolan, Jianfeng Gao, Wen-tau Yih, and Michel Galley. 2018. A knowledge-grounded neural conversation model. In Proceedings of AAAI. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of ACL, pages 1631–1640. Shizhu He, Cao Liu, Kang Liu, and Jun Zhao. 2017. Generating natural answers by incorporating copying and retrieving mechanisms in sequence-tosequence learning. In Proceedings of ACL, pages 199–208. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of ICLR. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016a. A diversity-promoting objective function for neural conversation models. In Proceedings of NAACL, pages 110–119. Jiwei Li, Michel Galley, Chris Brockett, Georgios Spithourakis, Jianfeng Gao, and Bill Dolan. 2016b. A persona-based neural conversation model. In Proceedings of ACL, pages 994–1003. Jiwei Li, Will Monroe, Alan Ritter, Dan Jurafsky, Michel Galley, and Jianfeng Gao. 2016c. Deep reinforcement learning for dialogue generation. In Proceedings of EMNLP, pages 1192–1202. Jiwei Li, Will Monroe, Tianlin Shi, S´ebastien Jean, Alan Ritter, and Dan Jurafsky. 2017. Adversarial learning for neural dialogue generation. In Proceedings of EMNLP, pages 2157–2169. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Proceedings of ACL workshop, page 10. Cao Liu, Shizhu He, Kang Liu, and Jun Zhao. 2018. Curriculum learning for natural answer generation. In Proceedings of IJCAI, pages 4223–4229. Lili Mou, Yiping Song, Rui Yan, Ge Li, Lu Zhang, and Zhi Jin. 2016. Sequence to backward and forward sequences: A content-introducing approach to generative short-text conversation. In Proceedings of COLING, pages 3349–3358. 
Jekaterina Novikova, Ondˇrej Duˇsek, Amanda Cercas Curry, and Verena Rieser. 2017. Why we need new evaluation metrics for nlg. In Proceedings of EMNLP, pages 2241–2252. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of ACL, pages 311–318. 3783 Qiao Qian, Minlie Huang, Haizhou Zhao, Jingfang Xu, and Xiaoyan Zhu. 2018. Assigning personality/profile to a chatting machine for coherent conversation generation. In Proceedings of IJCAI, pages 4279–4285. Marc’Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2016. Sequence level training with recurrent neural networks. In Proceedings of ICLR. Alan Ritter, Colin Cherry, and William B. Dolan. 2011. Data-driven response generation in social media. In Proceedings of EMNLP, pages 583–593. Mike Schuster and Kuldip K Paliwal. 1997. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11):2673–2681. Lifeng Shang, Zhengdong Lu, and Hang Li. 2015a. Neural responding machine for short-text conversation. In Proceedings of the ACL, pages 1577–1586. Lifeng Shang, Zhengdong Lu, and Hang Li. 2015b. Neural responding machine for short-text conversation. In Proceedings of ACL-IJCNLP, pages 1577– 1586. Association for Computational Linguistics. Yuanlong Shao, Stephan Gouws, Denny Britz, Anna Goldie, Brian Strope, and Ray Kurzweil. 2017. Generating high-quality and informative conversation responses with sequence-to-sequence models. In Proceedings of EMNLP, pages 2210–2219. Shang-Yu Su, Kai-Ling Lo, Yi Ting Yeh, and YunNung Chen. 2018. Natural language generation by hierarchical decoding with linguistic patterns. In Proceedings of NAACL, pages 61–66. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of NIPS, pages 3104–3112. Alan M Turing. 1950. Computing machinery and intelligence. In Parsing the Turing Test, pages 23–65. Springer. Oriol Vinyals and Quoc Le. 2015. A neural conversational model. Proceedings of ICML workshop. Joe H. Ward. 1963. Hierarchical grouping to optimize an objective function. Journal of the American Statistical Association, 58(301):236–244. Yu Wu, Wei Wu, Can Xu, and Zhoujun Li. 2018a. Knowledge enhanced hybrid neural network for text matching. In Proceedings of AAAI. Yu Wu, Wei Wu, Dejian Yang, Can Xu, and Zhoujun Li. 2018b. Neural response generation with dynamic vocabularies. In Proceedings of AAAI. Yingce Xia, Fei Tian, Lijun Wu, Jianxin Lin, Tao Qin, Nenghai Yu, and Tie-Yan Liu. 2017. Deliberation networks: Sequence generation beyond one-pass decoding. In Proceedings of NIPS, pages 1782–1792. Chen Xing, Wei Wu, Yu Wu, Jie Liu, Yalou Huang, Ming Zhou, and Wei-Ying Ma. 2017. Topic aware neural response generation. In Proceedings of AAAI, pages 3351–3357. Hao Zhou, Tom Young, Minlie Huang, Haizhou Zhao, Jingfang Xu, and Xiaoyan Zhu. 2018. Commonsense knowledge aware conversation generation with graph attention. In Proceedings of IJCAI, pages 4623–4629.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3784–3793 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3784 On-device Structured and Context Partitioned Projection Networks Sujith Ravi Google Research Mountain View, CA, USA [email protected] Zornitsa Kozareva Google Mountain View, CA, USA [email protected] Abstract A challenging problem in on-device text classification is to build highly accurate neural models that can fit in small memory footprint and have low latency. To address this challenge, we propose an on-device neural network SGNN++ which dynamically learns compact projection vectors from raw text using structured and context-dependent partition projections. We show that this results in accelerated inference and performance improvements. We conduct extensive evaluation on multiple conversational tasks and languages such as English, Japanese, Spanish and French. Our SGNN++ model significantly outperforms all baselines, improves upon existing on-device neural models and even surpasses RNN, CNN and BiLSTM models on dialog act and intent prediction. Through a series of ablation studies we show the impact of the partitioned projections and structured information leading to 10% improvement. We study the impact of the model size on accuracy and introduce quantization-aware training for SGNN++ to further reduce the model size while preserving the same quality. Finally, we show fast inference on mobile phones. 1 Introduction Over the last years, the usage of conversational assistants has become extremely popular. On a daily basis, people request weather information, check calendar appointments, perform calls. Large part of the conversational and natural language understanding happens on the server side and then fulfilled resulting in response delays, inconsistent experience and privacy concerns. Therefore, there is a huge demand for developing on-device natural language models that work entirely on-device such as mobile phones, tablets, watches and any internet of things (IoT) devices. On-device computation can circumvent the latency delays, can increase the user privacy and further enable new capabilities for real time interaction. One way to develop on-device natural language understanding is to leverage the power of deep neural networks, which over the years have shown tremendous progress and have improved upon state-of-the-art machine learning methods in Natural Language Processing (NLP) (Sutskever et al., 2014), Speech (Hinton et al., 2012) and Vision (Krizhevsky et al., 2012). These advancements were byproducts of the availability of large amounts of data and high performance computing, enabling the development of more complex and robust neural network architectures. However, despite their success, yet it remains challenging to deploy deep networks on-device such as mobile phone, smart watch and IoT. The limited memory and computation power combined with the need of fast latency require the development of novel on-device neural networks. Inspired by (Ravi and Kozareva, 2018), we propose a novel on-device neural network (SGNN++ ) that uses joint structured (word+character) information and context partitioned projections to learn robust models for short text classification. We employ a modified version of the locality sensitive hashing (LSH) to reduce input dimension from millions of unique words/features to a short, fixedlength sequence of bits (Ravi, 2017, 2019). 
This allows us to compute a projection for an incoming text very fast, on-the-fly, with a small memory footprint on the device without storing any incoming text and word embeddings. Unlike prior work that focused on developing the best neural network for a specific NLP task and language, we develop one SGNN++ architecture with the same parameters and apply it to wide range of tasks and languages such as En3785 glish, French, Spanish and Japanese. Our experimental results show that SGNN++ improves upon baselines, prior on-device state-of-the-art and even non-on-device RNN, CNN and BiLSTM methods. The main contributions of our paper are: • Novel embedding-free SGNN++ on-device neural model with quantization, and joint structured and context partitioned projections; • Novel context partitioned projections result in small memory footprint with better performance and speedup. • First on-device model evaluated on a wide range of applications such as dialog act, intent prediction, customer feedback. • First on-device model evaluation on English, Spanish, French and Japanese languages demonstrating the language agnostic power of SGNN++ . • Comparison against prior on-device stateof-the-art neural models, which SGNN++ significantly improves upon across multiple tasks. • Ablation studies that show the impact of word vs joint word and character representation on accuracy; the power of the partitioned projection vectors on speed and inference; and the ability of SGNN++ to compress large models while still maintaining high accuracy; the fast latency of the on-device model. 2 On-device Partitioned Projection Networks (SGNN++ ) We propose new on-device neural network architectures for NLP inspired by projection model architectures (Ravi, 2017, 2019). The projection model is a neural network with dynamicallycomputed layers that encodes a set of efficient-tocompute operations which can be performed directly on device for inference. Unlike prior work that employs projections (Ravi and Kozareva, 2018), our new model defines a set of efficient structured and contextdependent “projection” functions PC(xi) that progressively transform each input instance xi to a different space ΩePC and then performs learning in this space to map it to corresponding outputs yi. The model applies dynamically-computed projection functions that are conditioned on context in multiple ways to achieve higher discriminative power (for classification tasks) and better efficiency wrt memory footprint and speedup. Firstly, we introduce a joint structured projection model that uses language structure to project word and character information in each input instance separately (ΩeP=ΩePw S ΩePc) and combines them during learning. Secondly, we introduce context-partitioned projection functions PCk(xi) that leverage feature-context hierarchy to partition the projection space ΩeP based on context type. Both these methods enable learning powerful compact neural networks that achieve high performance and fast inference with low memory footprint. 2.1 SGNN++ Architecture Our on-device projection partitioned neural network architecture is a deep multi-layered contextdependent locality-sensitive projection model. Figure 1 shows the model architecture. The neural model uses projections (Ravi, 2017, 2019) making it an embedding-free approach, i.e., the model can be learned without the need to initialize, load or store any feature or vocabulary weight matrices. 
This is different from the majority of the widelyused state-of-the-art deep learning techniques in NLP whose performance depends on embeddings pre-trained on large corpora. In this work, we also introduce a novel joint structured projections and context partitioned projection spaces that result in highly efficient and compact neural network models for on-device applications. We will also show how SGNN++ yields significant improvements over prior work (Ravi and Kozareva, 2018) and reaches state-of-the-art on multiple NLP tasks and languages. 2.2 Model Overview In this work, we focus on short text classification. Each input xi contains a sequence of tokens, where xit represents the t-th token in the input. The proposed SGNN++ model progressively projects each raw input text xi to an efficient vector representation eip and then learns a classifier to map xi to output class yi. The raw input text xi is first converted to an intermediate feature vector F(xi) using raw text features such as skip-grams. ⃗xi = F(xi) (1) 3786 Figure 1: SGNN++ Model Architecture: On-Device Joint Structured & Context Partitioned Projection Neural Network The projection eip for xi is then computed by applying a series of T context-partitioned projection functions eP1, ..., ePT on the intermediate sparse feature vector ⃗xi. Details of the projections and computation for SGNN++ are described as follows. ePj(xi) = projection(⃗xi, ePj) (2) eip = eP1...T (xi) (3) = [ eP1(xi), ..., ePT (xi) ] where ePj(xi) refers to output from the j-th projection function. This is followed by a stack of additional layers and non-linear activation to create deep, non-linear combinations of projections that permit the network to learn complex mappings from inputs xi to outputs yi. ehp = σ(Wp ·eip + bp) (4) eht = σ(Wt · eht−1 + bt) (5) yi = softmax(Wo · ehk + bo) (6) where ehp is computed directly from the projection output, ht is applied at intermediate layers of the network with depth k followed by a final softmax activation layer at the top. In an L-layer SGNN++ , ht, where t = p, p + 1, ..., p + L −1 refers to the L subsequent layers after the projection layer. Wp, Wt, Wo and bp, bt, bo represent trainable weights and biases respectively. The projection transformations use pre-computed parameterized functions, i.e., they are not trained during learning, and their outputs are concatenated to form the hidden units for subsequent operations. 2.3 Joint Structured Projection Network Unlike prior work that employs projections (Ravi and Kozareva, 2018), we make an important observation that input instances xi are drawn from natural language rather than random continuous vectors and thereby encode some inherent structure— for example, sentences contain sequence of words, and words contain characters. This motivates us to leverage the underlying linguistic structure in the input and build a hierarchical projection model from the raw text in a progressive fashion rather than taking a one-shot projection approach. We define a joint structured projection model (SGNN++ ). The model jointly combines word and character level context information from the input text to construct the language projection layer. 2.3.1 Word Projections Given an input xi with t words, we first project sequence xi to word projection vectors. We use word-level context features (e.g., phrases and word-level skip-grams) extracted from the raw text to compute the intermediate feature vector ⃗xw = Fw and compute projections. 
ePj w(xi) = projection( ⃗ xiw, ePj w) (7) eipw = eP1...ℓ w (xiw) (8) = [ eP1 w(xi), ..., ePℓ w(xiw) ] We reserve ℓbits to capture the word projection space computed using a series of ℓfunctions eP1 w, ..., ePℓ w. The functions project the sentence structure into low-dimensional representation that 3787 captures similarity in the word-projection space (Sankar et al., 2019). 2.3.2 Character Projections Given the input text xi, we can capture morphology (character-level) information in a similar way. We use character-level context features (e.g., character-level skip-grams) again extracted directly from the raw text to compute ⃗xc = Fc and compute character projectionseipc. ePj c(xi) = projection( ⃗xic, ePj c) (9) eipc = ePℓ+1...T c (xic) (10) = [ ePℓ+1 c (xi), ..., ePT c (xiw) ] The character feature space and hence projections eipc are reserved and computed separately. Note that even though we compute separate projections for character-level context, the SGNN++ model re-uses the remaining T −ℓfunctions for this step and hence keeps the overall space and time complexity for projections directly ∝T. 2.3.3 Joint Structured Model and Extension We then combine these into eip for the joint structure projection model as shown in Figure 1. The projection functions dynamically transform each input text xi to a low-dimensional representation ip via context-dependent projection spaces that jointly capture word and character information in a succinct representation. The joint structured projections are followed by a stack of additional layers that jointly learn non-linear combinations of these projection spaces to build the classifier. ehp = σ(Wp · [eipw,eipc] + bp) (11) The choice of intermediate features used for projections can be flexible and different for Fw and Fc. For example, we could apply stemming or extract other morphological features for computing eipc. Similarly, we can use syntax information from Part-of-Speech tags or constituency parses at the sentence-level for computing eipw. However, these features might not be available on device to perform inference—e.g., syntax features require an additional tagging or parsing model to be loaded on device, which incurs additional complexity and latency. Hence, for efficiency and simplicity, we only use the same type of raw features (e.g., skipgrams) for word and character-level projections. 2.4 Context Partitioned Projection Network In the SGNN++ model, we further leverage the feature-context type information to introduce an additional level of hierarchy in the network. The motivation is as follows—we use locality-sensitive projections for projection(.) step to transform input text to a low-dimensional representation. Incorporating global information, via contextdependent projections, enables the model to vary the language projections and encode them separately based on feature-type. We use this to avoid collisions in the projected space between different feature types (e.g., unigrams vs. bigrams) and also help the neural network learn the importance of specific types of projections based on the classification task rather than pooling them together and fixing this apriori. We achieve this by introducing contextpartitioned projections in SGNN++ , i.e., we partition the overall projection space into sub-partitions based on context-type. Let CK denote the type of intermediate features extracted via F, where C1 = unigrams, C2 = bigrams, and so on. 
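As a concrete illustration of what these context-typed intermediate features look like, the sketch below extracts skip-gram features grouped by context type at the word or character level and hashes each feature string to a 64-bit id with an observed-count weight. The particular hash function, the simplified skip-gram definition and the helper names are assumptions for illustration, not the exact feature extractor used in SGNN++.

```python
# Minimal sketch of the intermediate feature step F: skip-gram features grouped
# by context type (C1 = unigrams, C2 = bigrams, C3 = trigrams with skips),
# hashed to 64-bit feature ids with observed-count weights. The hash choice,
# the simplified skip-gram definition and the helper names are assumptions.
import hashlib
from collections import Counter

def feature_id(feature: str) -> int:
    # Stable 64-bit id for a feature string (stand-in for the paper's hashing).
    return int.from_bytes(hashlib.md5(feature.encode("utf-8")).digest()[:8], "little")

def skipgrams(tokens, n, max_skip):
    # Simplified skip-grams: n tokens taken with a fixed stride of 1..max_skip+1.
    if n == 1:
        return list(tokens)
    feats = []
    for stride in range(1, max_skip + 2):
        for i in range(len(tokens) - (n - 1) * stride):
            feats.append(" ".join(tokens[i + j * stride] for j in range(n)))
    return feats

def typed_features(tokens, max_n=3, max_skip=2):
    # One weighted feature bag per context type C1..C_maxK.
    return {f"C{n}": Counter(feature_id(f) for f in skipgrams(tokens, n, max_skip))
            for n in range(1, max_n + 1)}

word_feats = typed_features("book a flight to boston".split())   # word-level projections
char_feats = typed_features(list("book a flight"))                # character-level projections
print(len(word_feats["C2"]), "distinct bigram-type word features")
```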
Both word and character-level outputs ipw, ipc (describe earlier) are generated using context-partitioned projections, i.e., each projection space ΩeP is partitioned into sub-spaces ΩePCk based on context type. The type of context used to represent the input text determines the function choice and size of the sub-partitions and thereby the number of corresponding bits reserved in the projection outputs ipw and ipc. eip = [ eP1 C1(xi), ..., ePℓ1 C1(xi) ] (12) ∥[ eP1 C2(xi), ..., ePℓ2 C2(xi) ] ... ∥[ eP1 CK(xi), ..., ePℓK CK(xi) ] M = maxK ·(maxK +1) 2 (13) ℓK = T · K M (14) where, CK denotes a specific type of contextfeature extracted from the input and P1 CK...PℓK CK denote the projection functions applied to the input for context type CK. maxK is the total number of context types and ℓK is the number of projection functions in the partition reserved for CK and hence the number of output bits reserved in projection output. 3788 Effect of Partitioned Projections: Partitioning the projection space has a significant effect on both memory and time-complexity. This results in a significant speedup for the projection network both during training and inference since the overall size of intermediate feature context vectors F (per type) is smaller and hence fewer operations are required to compute each projection output and these can be computed in parallel. Also, in SGNN++ the overall projection complexity does not increase since we keep T fixed P jw,c ℓj = T. Moreover, the context partitioned SGNN++ neural network uses the global context information to efficiently decompose and learn projections from different contexts and combine them effectively for the classification task. 2.5 q-SGNN++ : Compressing Model Further We also learn hardware-optimized variants of SGNN++ using quantized training similar to (Jacob et al., 2017). This permits fast 8-bit arithmetic operations in the model achieving 4x further reduction in overall model size and improved latency. Both SGNN++ and q-SGNN++ can run efficiently on edge devices and support inference through TensorFlow Lite (tfl) open-source library. 2.6 Computing Projections on-the-fly We employ an efficient randomized projection method for each projection(.) step. We use locality sensitive hashing (LSH) (Charikar, 2002) to model the underlying projection operations in SGNN++ . Equation 1 applies F to dynamically extract features from the raw input text. Text features (e.g., skip-grams) at word and character-level are converted into 64-bit feature-ids fj (via hashing) to generate a sparse feature representation ⃗xi of feature-id, weight pairs (fm, wm). For the projection(.) step (Equation 4), a projection vector eP j is first constructed on-the-fly using a hash function with feature ids fm ∈⃗xi and fixed seed j as input, then dot product of the two vectors < ⃗xi, ePj > is computed and transformed into binary representation ePj(⃗xi) using sgn(.) of the dot product. As shown in Figure 1, both Fw,c and ePw,c steps are computed on-the-fly, i.e., no word/characterembedding or vocabulary/feature matrices need to be stored and looked up during training or inference. Instead feature-ids and projection vectors are dynamically computed via hash functions. For intermediate feature weights wm, we use observed counts in each input text and do not use precomputed statistics like idf. Hence the method is embedding-free. 
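To make the on-the-fly projection step concrete, the following sketch derives each projection bit as the sign of a dot product between the sparse typed feature vector and a pseudo-random vector whose components are re-derived by hashing (seed, feature id), and splits the T projection functions across context types following ℓ_K = T·K/M. For brevity each projection function is reduced to a single output bit here (the model uses d bits per function); the hash construction and function names are assumptions rather than the exact implementation.

```python
# Minimal sketch of on-the-fly, context-partitioned LSH projections. Each bit is
# sign(<x, P_j>) where the components of P_j are re-derived by hashing
# (seed j, feature id), so no projection matrix or vocabulary is stored.
# Hash construction, single-bit functions and names are illustrative assumptions.
import hashlib

def component(seed: int, fid: int) -> float:
    # Pseudo-random +/-1 entry of projection vector P_seed at feature fid.
    return 1.0 if hashlib.md5(f"{seed}:{fid}".encode()).digest()[0] % 2 == 0 else -1.0

def project_bits(features, n_bits, seed_offset):
    # features: {feature_id: observed count}; returns n_bits sign bits.
    return [1 if sum(w * component(seed_offset + j, fid) for fid, w in features.items()) >= 0 else 0
            for j in range(n_bits)]

def partition_sizes(T, max_k):
    # l_K = T * K / M with M = max_k * (max_k + 1) / 2  (Equations 13-14).
    M = max_k * (max_k + 1) // 2
    return [T * k // M for k in range(1, max_k + 1)]

# Toy typed features (e.g. produced by the previous sketch): context type -> bag.
typed = {"C1": {101: 2.0, 202: 1.0}, "C2": {303: 1.0, 404: 1.0}, "C3": {505: 1.0}}
T = 80
bits, offset = [], 0
for k, n_bits in enumerate(partition_sizes(T, max_k=3), start=1):   # [13, 26, 40]
    bits += project_bits(typed[f"C{k}"], n_bits, seed_offset=offset)
    offset += n_bits
print(len(bits), "projection bits")   # 79, due to integer division of the bit budget
```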
2.7 Model Parameters SGNN++ uses a total of T different projection functions ePj=1...T , each resulting in d-bit vector that is concatenated to form the projected vector ip in Equations 11. T and d can be tuned to trade-off between prediction quality and model size of the SGNN++ network. For the intermediate feature step F in Equations 1, 9, 11, we use skip-gram features (3-grams with skip-size=2) extracted from raw text both for word and character projections. We set ℓ= T 2 in Equation 9, i.e., the joint structured model (described in Section 2.3) reserves half the projection space (T 2 · d bits) for word projections and remaining half for character projections. The choice of features also determines the size of the context-dependent sub-partitions within each projection space—for example, if we choose features with upto 3-gram context, then maxK = 3 and we compute 3 projection sub-partitions for C1, C2, C3 in Equation 14. 2.8 Training, Inference and Optimization SGNN++ is trained from scratch on the task data using a supervised loss defined wrt ground truth ˆyi L(.) = P i∈N cross −entropy(yi, ˆyi). During training, the network learns to choose and combine context-dependent projection operations that are more predictive for a given task. SGNN++ uses language projections to transform the input into compact bit vectors. This yields a drastically lower memory footprint both in terms of number and size of parameters as well as computation cost. During training, the network learns to move the gradients for points that are nearby to each other in the projected bit space ΩeP in the same direction. SGNN++ is trained end-to-end using backpropagation. Training can progress efficiently with stochastic gradient descent with distributed computing on high-performance CPUs or GPUs. 2.9 Complexity Overall complexity for inference with the SGNN++ model depends on the projection layer, O(n · T · d) where n is the observed feature size (*not* overall vocabulary size) which is linear in input size, d is the number of LSH 3789 bits specified for each projection vector ePj, and T is the number of projection functions used. However, each partitioned projection operation in the model is much faster in practice than non-partitioned projection since it depends on size of intermediate vectors which are partitioned by context and smaller in size. The model size (in terms of number of parameters) and memory storage required for the projection inference step is O(T · d · C), where C is the number of hidden units in ehp in the multi-layer projection network and typically smaller than T · d. 3 NLP Datasets and Experimental Setup 3.1 Datasets & Tasks We evaluate our on-device SGNN++ model on four NLP tasks and languages such as English, Japanese, Spanish and French. The datasets were selected so we can compare against prior ondevice work (Ravi and Kozareva, 2018) and also test the language agnostic capabilities of SGNN++ • MRDA: Meeting Recorder Dialog Act is a dialog corpus of multiparty meetings annotated with 6 dialog acts (Adam et al., 2003; Shriberg et al., 2004). • SwDA: Switchboard Dialog Act is a popular open domain dialog corpus between two speakers with 42 dialog acts (Godfrey et al., 1992; Jurafsky et al., 1997). • ATIS: Intent Understanding is a widely used corpus in the speech and dialog community (T¨ur et al., 2010) for understanding different intents during flight reservation. 
• CF: Customer Feedback is a multilingual customer feedback analysis task (Liu et al., 2017) that aims at categorizing customer feedback as “comment”, “request”, “bug”, “complaint”, “meaningless”, or “undetermined”. The data is in English (EN), Japanese (JP), French (FR) and Spanish (SP).
Table 1 shows the characteristics of each task: language, number of classes, training and test data.
NLP Task                Lang.  #Classes  Train   Test
MRDA Dialog Act         EN     6         78K     15K
SwDA Dialog Act         EN     42        193K    5K
ATIS Intent Prediction  EN     21        4,478   893
CF-EN Cust. Feedback    EN     5         3,065   500
CF-JP Cust. Feedback    JP     5         1,526   300
CF-FR Cust. Feedback    FR     5         1,950   400
CF-SP Cust. Feedback    SP     5         1,631   299
Table 1: NLP Tasks and Datasets Statistics
3.2 Experimental Setup & Parameter Tuning
We set up our experiments as follows: given a classification task and a dataset, we generate an on-device model. For each task, we report Accuracy on the test set. Unlike prior work that aims at finding the best configuration for a given dataset or task, we use the same on-device architecture and settings across all datasets and tasks. We use a 2-layer SGNN++ (projection layer with T=80, d=14 × FullyConnected 256 × FullyConnected 256), a mini-batch size of 100, a dropout rate of 0.25, and a learning rate initialized to 0.025 with cosine annealing decay (Loshchilov and Hutter, 2016). We do not do any additional dataset-specific tuning or processing. Training is performed over shuffled mini-batches with the Adam optimizer (Kingma and Ba, 2014).
4 Experimental Results
This section focuses on the multiple experiments we have conducted. Table 2 shows the results on the different NLP tasks and languages. Overall, SGNN++ consistently outperformed all baselines, surpassed the prior on-device state-of-the-art work (Ravi and Kozareva, 2018) and even outperformed non-on-device state-of-the-art RNN, CNN and BiLSTM models for the MRDA, SwDA, ATIS and CF tasks.
4.1 Comparison with Baselines
For each task, we compared SGNN++ against well-established baselines. MRDA and SwDA use a Naive Bayes classifier (Lee and Dernoncourt, 2016), which our SGNN++ model outperformed by 14 to 41%. ATIS uses a majority baseline, which SGNN++ outperformed by 21.51%. CF (Liu et al., 2017) uses trigrams to find the most similar annotated sentences to the input and assigns their label as the final prediction. SGNN++ consistently outperformed the CF similarity baselines by 16.2%, 17.66%, 16.18% and 6.69% for EN, JP, FR and SP respectively.
4.2 Comparison with On-Device State-of-Art
One of the most important studies in this work is the comparison of our on-device model against the prior state-of-the-art on-device NLP model called self-governing neural networks (SGNN) (Ravi and Kozareva, 2018).
Model                                            MRDA   SwDA   ATIS   CF-EN  CF-JP  CF-FR  CF-SP
SGNN++ (our on-device)                           87.30  88.43  93.73  65.00  74.33  70.93  83.95
SGNN (Ravi and Kozareva, 2018) (sota on-device)  86.70  83.10  -      -      -      -      -
RNN (Khanpour et al., 2016)                      86.80  80.10  -      -      -      -      -
RNN+Attention (Ortega and Vu, 2017)              84.30  73.90  -      -      -      -      -
CNN (Lee and Dernoncourt, 2016)                  84.60  73.10  -      -      -      -      -
GatedAtten. (Goo et al., 2018)                   -      -      93.60  -      -      -      -
JointBiLSTM (Hakkani-Tur et al., 2016)           -      -      92.60  -      -      -      -
Atten.RNN (Liu and Lane, 2016)                   -      -      91.10  -      -      -      -
ADAPT-Run1 (Dzendzik et al., 2017)               -      -      -      63.40  67.67  69.50  83.61
Bingo-logistic-reg (Elfardy et al., 2017)        -      -      -      55.80  60.67  59.00  72.91
Baseline                                         74.60  47.30  72.22  48.80  56.67  54.75  77.26
Table 2: On-device Results and Comparison on Multiple Datasets and Languages
SGNN learns compact projection vectors with locality sensitive hashing and has previously reached state-of-the-art results on the MRDA and SwDA tasks.
While both methods share the ideology of projections, SGNN++ uses more powerful representations via joint structured and context partitioned projections. As shown in Table 2, SGNN++ outperformed SGNN with 1% on MRDA and 5% on SWDA. These significant performance improvements are due to SGNN++ ’s joint structure representations coupled with partitioned projections. Section 5.1 shows detailed ablation study. 4.3 Comparison with Non-On-Device Work The characteristics of on-device models are low memory footprint and low latency. Therefore, a direct comparison of an on-device model against cloud based neural networks might not be fair, due to the resource constraints for on-device models. But we wanted to showcase that despite such constraints, yet our SGNN++ learns powerful neural networks that are competitive and can even outperform widely used approaches like RNNs and CNNs with huge parameters and pre-trained word embeddings. Another aspect to consider on why such a comparison might not be fair, is that prior work focused mostly on creating the best model for a specific task with lot of fine tuning and additional resources like pre-trained embedding, whereas we use the same SGNN++ architecture and parameters across multiple tasks and languages. Taking these major differences into consideration, we still compare results against prior non-ondevice state-of-art neural networks. As shown in Table 2 only (Khanpour et al., 2016; Ortega and Vu, 2017; Lee and Dernoncourt, 2016) have evaluated on more than one task, while the rest of the methods target specific one. We denote with −models that do not have results for the task. SGNN++ is the only approach spanning across multiple NLP tasks and languages. On the Dialog Act MRDA and SWDA tasks, SGNN++ outperformed deep learning methods like CNN (Lee and Dernoncourt, 2016), RNN (Khanpour et al., 2016) and RNN with gated attention (Tran et al., 2017) and reached the best results of 87.3% and 88.43% accuracy. For Intent Prediction, SGNN++ also improved with 0.13% 1.13% and 2.63% over the gated attention (Goo et al., 2018), the joint slot and intent biLSTM model (Hakkani-Tur et al., 2016) and the attention slot and intent RNN (Liu and Lane, 2016) on the ATIS task. This is very significant, given that (Goo et al., 2018; Hakkani-Tur et al., 2016; Liu and Lane, 2016) used a joint model to learn the slot entities and types, and used this information to better guide the intent prediction, while SGNN++ does not have any additional information about slots, entities and entity types. On Customer Feedback, SGNN++ reached better performance than Logistic regression models (Elfardy et al., 2017; Dzendzik et al., 2017). Overall, SGNN++ achieves impressive results given the small memory footprint and the fact that it did not rely on pre-trained word embeddings like (Hakkani-Tur et al., 2016; Liu and Lane, 2016) and used the same architecture and model parameters across all tasks and languages. We believe that the dimensionality-reduction techniques like locality sensitive context projections jointly coupled with deep, non-linear functions are effective at dynamically capturing low dimensional semantic text representations that are useful for text classification applications. 
3791 5 Ablation Studies In this section, we show multiple ablation studies focusing on: (1) impact of partitioned projections and joint structured representation on accuracy; (2) impact of model size on accuracy; quantized version of SGNN++ which reduces model size while preserving same quality; and (3) SGNN++ latency. 5.1 Impact of Joint Structured & Context Partitioned Projections on Accuracy Our SGNN++ model uses joint structured (word+character) and context partitioned projections. We want to show the impact of the joint structure (word+character) vs word only; as well as the impact of the partitioned vs non-partitioned projections. Table 3 shows the obtained results on the ATIS intent prediction dataset. First, using joint structured (word+character) information leads to significantly better performance compared to word only. For instance, +9% for non-partitioned projections and +3.9% for partitioned projections. Second, significant improvement is seen when using partitioned vs non-partitioned projections, +6.14% for word and +1% for word+character. Overall, the novel joint structured and context partitioned projections we introduced in our SGNN++ model improve +10.06% performance compared to models using only word and non-partitioned projections. ATIS Partitioned Non-Partitioned SGNN++ SGNN++ Word+Char 93.73 92.72 Word 89.81 83.67 Table 3: Impact of SGNN++ Partitioning on Accuracy It is important to note that in addition to the accuracy improvements, SGNN++ partitioned projection models are also significantly faster for inference and training (upto 3.3X). For example, using T = 80, d = 14 and bigram word features (maxK = 2) for a 10-word sequence requires 80 × 14 × 6 = 6720 multiply-add operations for partitioned projections compared to 80 × 14 × 19 = 21280 for non-partitioned model. 5.2 Accuracy vs Model Size It is easy to customize our model for different devices such as watches, phones or IoT with different size constraints. To showcase this, we show results on varying projection sizes and network parameters. Furthermore, we also trained quantized versions of our SGNN++ model denoted by qSGNN++ which achieves additional model size reduction while maintaining high accuracy. Figure 2 shows the obtained results on the ATIS dataset. Each data point in the figure represents a SGNN++ or qSGNN++ model trained with specific partition projection parameter configuration. We show the model size and the accuracy achieved for that size. Figure 2: Model Size vs. Accuracy Overall, SGNN++ models achieve high accuracy even at low sizes. For instance, 100KB model yields 82.87% accuracy compared to 2.5MB model that yields 94.74%. For a given SGNN++ model we can further reduce the size with little performance degradation by applying the quantization-aware training. For instance, SGNN++ 107KB model (T = 5, d = 14) yields 82.87%, but can be further compressed to qSGNN++ with 33KB and 80.18% accuracy. We also take our model to the extreme, we are able to train qSGNN++ model with extremely tiny size of 7KB (T = 3, d = 14), while still achieving 77.16%. 5.3 Model Latency In addition to being small and highly accurate, ondevice model has to be fast. We measure the latency of our on-device SGNN++ model on a Pixel phone. Given an input text, we measure inference time on the Pixel device and report average latency. On ATIS dataset, SGNN++ accuracy is 93.73% with average latency of 3.35 milliseconds. This shows that our SGNN++ model is compact, highly accurate and with low latency (i.e. very fast). 
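Latency figures like the one above are typically obtained by timing repeated interpreter invocations of the converted model. The sketch below shows one way to do this with the TensorFlow Lite Python interpreter on a host machine; the model path and the dummy input are assumptions, and on-device numbers such as the 3.35 ms reported above come from running the same TFLite model through the runtime on the phone itself.

```python
# Minimal sketch of measuring average inference latency of a converted TFLite
# model. The model path and the dummy input are illustrative assumptions.
import time
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="sgnn_plus_plus.tflite")  # assumed path
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

dummy = np.zeros(inp["shape"], dtype=inp["dtype"])
interpreter.set_tensor(inp["index"], dummy)
interpreter.invoke()                                 # warm-up run

runs = 100
start = time.perf_counter()
for _ in range(runs):
    interpreter.set_tensor(inp["index"], dummy)
    interpreter.invoke()
elapsed_ms = (time.perf_counter() - start) * 1000.0 / runs
print(f"average latency: {elapsed_ms:.2f} ms, output shape: {interpreter.get_tensor(out['index']).shape}")
```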
3792 6 Conclusion We proposed embedding-free on-device neural network that uses joint structured and context partitioned projections for short text classification. We conducted experiments on wide range of NLP applications such as dialog acts, intent prediction and customer feedback. We evaluated the approach on four languages, showing the language agnostic capability of our on-device SGNN++ model. We used the same model architecture and parameter settings across all languages and tasks, which demonstrates the generalizability of this approach compared to prior work that built custom models. Overall, our SGNN++ approach outperformed all baselines from 14 to 41%, improved upon state-of-the-art on-device work (Ravi and Kozareva, 2018) with up to 5%, and also outperformed non-on-device neural approaches (Hakkani-Tur et al., 2016; Liu and Lane, 2016; Dzendzik et al., 2017; Elfardy et al., 2017). Through multiple ablation studies, we showed the impact of partitioned projections on accuracy and the impact of model size on accuracy. We trained quantized versions of SGNN++ showing that we can further reduce the model size while preserving quality. Finally, we showed SGNN++ fast latency on Pixel phone. Acknowledgments We would like to thank the organizers of the customer feedback challenging for sharing the data and the anonymous reviewers for their valuable feedback and suggestions. References TensorFlow Lite. https://www.tensorflow. org/lite/. Janin Adam, Don Baron, Jane Edwards, Dan Ellis, David Gelbart, Nelson Morgan, Barbara Peskin, Thilo Pfau, Elizabeth Shriberg, Andreas Stolcke, and Chuck Wooters. 2003. The icsi meeting corpus. In Proceedings of the 5TH SIGdial Workshop on Discourse and Dialogue, pages 364–367. Moses S. Charikar. 2002. Similarity estimation techniques from rounding algorithms. In Proceedings of the Thiry-fourth Annual ACM Symposium on Theory of Computing, STOC ’02, pages 380–388, New York, NY, USA. ACM. Daria Dzendzik, Alberto Poncelas, Carl Vogel, and Qun Liu. 2017. Adapt centre cone team at ijcnlp2017 task 5: A similarity-based logistic regression approach to multi-choice question answering in an examinations shared task. In Proceedings of the IJCNLP 2017, Shared Tasks, pages 67–72. Asian Federation of Natural Language Processing. Heba Elfardy, Manisha Srivastava, Wei Xiao, Jared Kramer, and Tarun Agarwal. 2017. Bingo at ijcnlp2017 task 4: Augmenting data using machine translation for cross-linguistic customer feedback classification. In Proceedings of the IJCNLP 2017, Shared Tasks, pages 59–66. Asian Federation of Natural Language Processing. John J. Godfrey, Edward C. Holliman, and Jane McDaniel. 1992. Switchboard: Telephone speech corpus for research and development. In Proceedings of the 1992 IEEE International Conference on Acoustics, Speech and Signal Processing - Volume 1, ICASSP’92, pages 517–520. IEEE Computer Society. Chih-Wen Goo, Guang Gao, Yun-Kai Hsu, Chih-Li Huo, Tsung-Chieh Chen, Keng-Wei Hsu, and YunNung Chen. 2018. Slot-gated modeling for joint slot filling and intent prediction. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT, pages 753–757. Dilek Hakkani-Tur, Gokhan Tur, Asli Celikyilmaz, Yun-Nung Vivian Chen, Jianfeng Gao, Li Deng, and Ye-Yi Wang. 2016. Multi-domain joint semantic frame parsing using bi-directional rnn-lstm. In Proceedings of The 17th Annual Meeting of the International Speech Communication Association (INTERSPEECH 2016). 
Geoffrey Hinton, Li Deng, Dong Yu, George E Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N Sainath, et al. 2012. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 29(6):82–97. Benoit Jacob, Skirmantas Kligys, Bo Chen, Menglong Zhu, Matthew Tang, Andrew G. Howard, Hartwig Adam, and Dmitry Kalenichenko. 2017. Quantization and training of neural networks for efficient integer-arithmetic-only inference. CoRR, abs/1712.05877. Daniel Jurafsky, Rebecca Bates, Rachel Martin Noah Coccaro, Marie Meteer, Klaus Ries, Elizabeth Shriberg, Audreas Stolcke, Paul Taylor, and Van Ess-Dykema. 1997. Automatic detection of discourse structure for speech recognition and understanding. In Proceedings of IEEE Workshop on Automatic Speech Recognition and Understanding, pages 88–95. Hamed Khanpour, Nishitha Guntakandla, and Rodney Nielsen. 2016. Dialogue act classification in domain-independent conversations using a deep recurrent neural network. In Proceedings of COLING 3793 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 2012– 2021. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. 2012. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105. Ji Young Lee and Franck Dernoncourt. 2016. Sequential short-text classification with recurrent and convolutional neural networks. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 515–520. Bing Liu and Ian Lane. 2016. Attention-based recurrent neural network models for joint intent detection and slot filling. Proceedings of The 17th Annual Meeting of the International Speech Communication Association (INTERSPEECH 2016). Chao-Hong Liu, Yasufumi Moriya, Alberto Poncelas, and Declan Groves. 2017. Ijcnlp-2017 task 4: Customer feedback analysis. In Proceedings of the IJCNLP 2017, Shared Tasks, pages 26–33. Asian Federation of Natural Language Processing. Ilya Loshchilov and Frank Hutter. 2016. SGDR: stochastic gradient descent with restarts. CoRR, abs/1608.03983. Daniel Ortega and Ngoc Thang Vu. 2017. Neuralbased context representation learning for dialog act classification. In Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue, pages 247–252. Sujith Ravi. 2017. Projectionnet: Learning efficient on-device deep networks using neural projections. CoRR, abs/1708.00630. Sujith Ravi. 2019. Efficient on-device models using neural projections. In Proceedings of the International Conference on Machine Learning (ICML 2019). Sujith Ravi and Zornitsa Kozareva. 2018. Selfgoverning neural networks for on-device short text classification. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, (EMNLP 2018). Chinnadhurai Sankar, Sujith Ravi, and Zornitsa Kozareva. 2019. Transferable neural projection representations. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2019). Elizabeth Shriberg, Raj Dhillon, Sonali Bhagat, Jeremy Ang, and Hannah Carvey. 2004. The icsi meeting recorder dialog act (mrda) corpus. 
In Proceedings of the 5th SIGdial Workshop on Discourse and Dialogue, pages 97–100, Cambridge, Massachusetts, USA. Association for Computational Linguistics. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2, NIPS’14, pages 3104–3112. Quan Hung Tran, Gholamreza Haffari, and Ingrid Zukerman. 2017. A generative attentional neural network model for dialogue act classification. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 524–529. G¨okhan T¨ur, Dilek Hakkani-T¨ur, and Larry P. Heck. 2010. What is left to be understood in atis? In Proceedings of 2010 IEEE Spoken Language Technology Workshop (SLT), pages 19–24.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3794–3804 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3794 Proactive Human-Machine Conversation with Explicit Conversation Goals Wenquan Wu, Zhen Guo, Xiangyang Zhou, Hua Wu, Xiyuan Zhang, Rongzhong Lian and Haifeng Wang Baidu Inc., Beijing, China {wuwenquan01,guozhenguozhen,zhouxiangyang,wu hua}@baidu.com {zhangxiyuan01,lianrongzhong,wanghaifeng}@baidu.com Abstract Though great progress has been made for human-machine conversation, current dialogue system is still in its infancy: it usually converses passively and utters words more as a matter of response, rather than on its own initiatives. In this paper, we take a radical step towards building a human-like conversational agent: endowing it with the ability of proactively leading the conversation (introducing a new topic or maintaining the current topic). To facilitate the development of such conversation systems, we create a new dataset named DuConv where one acts as a conversation leader and the other acts as the follower. The leader is provided with a knowledge graph and asked to sequentially change the discussion topics, following the given conversation goal, and meanwhile keep the dialogue as natural and engaging as possible. DuConv enables a very challenging task as the model needs to both understand dialogue and plan over the given knowledge graph. We establish baseline results on this dataset (about 270K utterances and 30k dialogues) using several state-of-the-art models. Experimental results show that dialogue models that plan over the knowledge graph can make full use of related knowledge to generate more diverse multi-turn conversations. The baseline systems along with the dataset are publicly available 1. 1 Introduction Building a human-like conversational agent is one of long-cherished goals in Artificial Intelligence (AI) (Turing, 2009). Typical conversations involve exchanging information (Zhang et al., 2018), recommending something (Li et al., 2018), and completing tasks (Bordes et al., 2016), most of which rely on background knowledge. However, many 1 https://github.com/PaddlePaddle/models/tree/develop/ PaddleNLP/Research/ACL2019-DuConv dialogue systems only rely on utterances and responses as training data, without explicitly exploiting knowledge associated with them, which sometimes results in uninformative and inappropriate responses (Wang et al., 2018). Although there exist some work that use external background knowledge to generate more informative responses (Liu et al., 2018; Yin et al., 2015; Zhu et al., 2017), these systems usually generate responses to answer questions instead of asking questions or leading the conversation. In order to solve the above problems, some new datasets have been created, where external background knowledge is explicitly linked to utterances (Dinan et al., 2019; Moghe et al., 2018), to facilitate the development of knowledge aware conversation models. With these datasets, conversation systems can be built to talk with humans given a topic based on the provided external knowledge. Unlike taskoriented systems (Bordes et al., 2016), these conversation systems don’t have an explicit goal to achieve, thereof not able to plan over the background knowledge. 
In this paper, we take a radical step towards building another type of human-like conversational agent: endowing it with the ability of proactively leading the conversation with an explicit conversation goal. To this end, we investigate learning a proactive dialogue system by planning dialogue strategy over a knowledge graph. Our assumption is that reasoning and planning with knowledge are the keystones to achieve proactive conversation. For example, when humans talk about movies, if one person learns more about some movies, he/she usually leads the conversation based on one or more entities in the background knowledge and smoothly changes the topics from one entity to another. In this paper, we mimic this process by setting an explicit goal as a knowledge path “[start] →topic a →topic b”, 3795 Goal Knowledge START McDull: Rise of the Rice Cooker Bo Peng Blood_Type McDull: Rise of the Rice Cooker Bo Peng Lack of explosive performance 6.9 Type O Heze City nice, very funny Yes Comment Native_Place Representative_Work Hero Is_released Rating Comment Conversation Bot: 平时周末去看电影吗? (Do you usually spend your weekend watching movies?) User: 看呀,最近有好看的可以推荐一下,周末去看。 (Of course, could you recommend good films for me? I will watch it on this weekend.) Bot: 麦兜·饭宝奇兵这部电影,大众评分在6.9呢,值得一看。 (You can watch Rise of the Rice Cooker, which is rated 6.9.) User: 好的,一定去看看。(Sounds great, thanks.) Bot: 这部电影还不错,挺搞笑的。你知道饭麦兜·饭宝奇兵的主演是谁吗? (It is very nice and funny. Do you know who is the hero of this movie?) User: 不知道,谁啊?(I don’t know. Who is that?) Bot: 彭博,网友们评论说他缺少爆发力太温柔。(It’s Bo Peng. Do you know him? People said that he was a kind of lacking explosive Performance.) User: 那我有时间去了解一下。(I will know more about him later.) Figure 1: One conversation generated by two annotators, one of which was given a goal and related knowledge. which means that one person leads the conversation from any starting point to topic a and then to topic b. Here topic represents one entity in the knowledge graph. With this in mind, we first build a knowledge graph which combines factoid knowledge and non-factoid knowledge such as comments and synopsis about movies. To construct the knowledge graph, we take a factoid knowledge graph (KG) as its backbone and align unstructured sentences from the non-factoid knowledge with entities. Then we use this KG to facilitate knowledge path planning and response generation, as shown in Figure 1. Based on this knowledge graph, we create a new knowledge-driven conversation dataset, namely the Baidu Conversation Corpus (DuConv) to facilitate the development of proactive conversation models. Specifically, DuConv has around 30k multi-turn conversations and each dialog in the DuConv is created by two crowdsourced workers, where one plays the role of the conversation leader and the other one acts as the conversation follower. At the beginning of each conversation, the leading player is assigned with an explicit goal, i.e., to sequentially change the conversation topic from one to another, meanwhile keeping the conversation as natural and engaging as possible. The conversation goal is a knowledge path comprised of two topics and structured as “[start] →topic a →topic b” and the leading player is also provided with related knowledge of these two topics. 
For each turn in the conversation, the leading player needs to exploit the provided knowledge triplets to plan his/her conversation strategy and construct responses to get closer to the target topic, while the follower only needs to respond according to the contexts without knowing the goal. Figure 1 illustrates one example dialog in DuConv. It can be seen that DuConv provides a very challenging task: the conversational agents have to fully exploit the provided knowledge to achieve the given goal. To test the usability of DuConv, we propose a knowledge-aware neural dialogue generator and a knowledge-aware retrieval-based dialogue system, and investigate their effectiveness. Experimental results demonstrate that our proposed methods can proactively lead the conversation to complete the goal and make more use of the provided knowledge. To the best of our knowledge, it is the first work that defines an explicit goal over the knowledge graph to guide the conversation process, making the following contributions: • A new task is proposed to mimic the action of humans that lead conversations over a knowledge graph combining factoid and nonfactoid knowledge, which has a wide application in real-world but is not well studied. • A new large-scale dataset named DuConv is constructed and released to facilitate the development of knowledge-driven proactive dialogue systems. • We propose knowledge-aware proactive dialogue models and conduct detailed analysis over the datasets. Experimental results demonstrate that our proposed methods make full use of related knowledge to generate more diverse conversations. 2 Related Work Our related work is in line with two major research topics, Proactive Conversation and Knowledge Grounded Conversation. 3796 2.1 Proactive Conversation The goal of proactive conversation is endowing dialogue systems with the ability of leading the conversation. Existing work on proactive conversation is usually limited to specific dialogue scenarios. Young et al. (2013), Mo et al. (2016) and Bordes et al. (2018) proposed to complete tasks more actively, like restaurant booking, by actively questioning/clarifying the missing/ambiguous slots. Besides the task-oriented dialogue systems, researchers have also investigated building proactive social bots to make the interaction more engaging. Wang et al., (2018) explored to ask good questions in open-domain conversational systems. Li et al., (2018) enabled chatbots to recommend films during chitchatting. Unlike the existing work, we proposed to actively lead the conversation by planning over a knowledge graph with an explicit goal. We also create a new dataset to facilitate the development of such conversation systems. 2.2 Knowledge Grounded Conversation Leveraging knowledge for better dialogue modeling has drawn lots of research interests in past years and researchers have shown the multi-fold benefits of exploiting knowledge in dialogue modeling. One major research line is using knowledge to generate engaging, meaningful or personalized responses in chitchatting (Ghazvininejad et al., 2018; Vougiouklis et al., 2016; Zhou et al., 2018a; Zhang et al., 2018). In addition to proposing better conversation models, researchers also released several knowledge grounded datasets (Dinan et al., 2019; Moghe et al., 2018). Our work is most related to Mogh et al., (2018) and Dinan et al., (2019), where each utterance in their released datasets is aligned to the related knowledge, including both structured triplets and unstructured sentences. 
We extend their work, by including the whole knowledge graph into dialogue modeling and propose a new task of proactively leading the conversation via planning over the knowledge graph in this paper. 3 DuConv In this section, we describe the creation of DuConv in details. It contains four steps: knowledge crawling, knowledge graph construction, conversation goal assignment, and conversation crowdsourcing. We limit the dialogue topics in # dialogs 29858 # utterances 270399 average # utterances per dialog 9.1 average # words per utterance 10.6 average # words per dialog 96.2 average # knowledge per dialogue 17.1 Table 1: Overview of the conversation dataset DuConv. DuConv to movies and film stars, and crawl this related knowledge from the internet. Then we build our knowledge graph with these crawled data. After constructing our knowledge graph, we randomly sample two linked entities to construct the conversation goal, denoted as “[start] →topic a →topic b”, and ask two annotators to conduct knowledge-driven conversations, with one playing as the conversation leader and the other one playing as the follower. The leader needs to change the conversation topics following the conversation goal and meanwhile keep the conversation as engaging as possible. All those conversations are recorded and around 30k conversations are finally used in DuConv after filtering dirty/offensive parts. Table 1 summarizes the main information about DuConv. 3.1 Knowledge Crawling We crawled the related knowledge information from the website MTime.com2, which records the information of most films, heroes, and heroines in China. We collect both structured knowledge (such as “Harry Potter” is “directed by” “Chris Columbus”) as well as unstructured knowledge including short comments and synopsis. We filter out the dirty or offensive information and further normalize some of the numbers (such as the values of rating) into discrete symbols (good, fair, bad) to facilitate the use of this kind of knowledge. In summary, we crawl more than 91k films and 51k film stars, resulting in about 3.6 million knowledge triplets, the accuracy of which is over 97% 3. 3.2 Knowledge Graph Construction After the raw data collection, we construct a knowledge graph. Our knowledge graph is comprised of multiple SPO (Subject, Predicate, Ob2http://www.mtime.com/ 3We randomly sampled 100 triplets and manually evaluated them. 3797 # entities 143627 # movies 91874 # person names 51753 # properties 45 # spo 3598246 average # spo per entity 25 Table 2: Overview of the knowledge graph in DuConv. ject) knowledge triplets, where objects can be factoid facts and non-factoid sentences such as comments and synopsis. The knowledge triplets in our graph can be classified into: 1. Direct triplets: widely-used knowledge triplets, such as (“Harry Potter and the Sorcerer Stone”, ”directed by”, ”Chris Columbus”), akin to most existing knowledge graphs, with the exception that the objects can be sentences such as short comments and synopsis. 2. Associated triplets: if two entities share the same predicate and the same object in their triplets, then we create a virtual triplet like (”Harry Potter and the Sorcerer Stone”, ”directed by Chris Columbus”, ”Home Alone”) by combining the two original triplets. We call the direct triplets as one-step relation and associated triplets as two-step relation. Table 2 lists the main information of our knowledge graph. 
3.3 Conversation Goal Assignment

Given the knowledge graph, we sample knowledge paths, which are used as conversation goals. Specifically, we focus on a simple but challenging scenario: naturally shifting the topic twice, i.e., from the "[start]" state to "topic_a" and then finally to "topic_b". We sample two linked entities in our knowledge graph as "topic_a" and "topic_b" to construct the knowledge path. About 30k different knowledge paths are sampled and used as conversation goals for knowledge-driven conversation crowdsourcing, where half of the knowledge paths are from the one-step relation set and the other half are from the two-step relation set.

3.4 Crowdsourcing

Unlike work that uses self-play for dataset construction (Ghazvininejad et al., 2018), we recruit a large number of crowdsourced workers to generate the dialogues in DuConv. The workers are recruited from a Chinese crowdsourcing platform (http://test.baidu.com/) and are paid 2.5 Chinese Yuan per conversation. For each given conversation goal, we assign two workers different roles: 1) the conversation leader and 2) the follower. The leader is provided with the conversation goal and its related background knowledge in our knowledge graph, and is then asked to naturally shift the conversation topic following the given conversation goal. The follower is provided with nothing but the dialogue history and only has to respond to the leader. The dialogue does not stop until the leader achieves the conversation goal. We record the conversation utterances together with the related knowledge triplets and the knowledge path to construct the whole dataset of DuConv.

4 Methods

To enable neural dialogue systems to converse with external background knowledge, we propose two models: a retrieval-based model and a generation-based model. Both introduce an external memory module for storing all related knowledge, allowing the models to select appropriate knowledge to enable proactive conversations. Figure 2 shows the architectures of our proposed knowledge-aware response ranking model as well as our response generation model. We give a detailed description of these two knowledge-aware models in the next two sub-sections.

4.1 Retrieval-based Model

Given a dialogue context X, the retrieval-based dialogue system responds to that context by searching for the best response Y from DuConv. Thus, a retrieval-based dialogue system typically has a pipeline structure with two major steps: 1) retrieve response candidates from a database and 2) select the best one from the response candidates (Zhou et al., 2018b). In our retrieval-based method, the candidate responses are collected similarly to most existing work (Wu et al., 2017; Zhou et al., 2018b), with one notable difference: we normalize the entities with their entity types in the knowledge graph to improve generalization capabilities. For each retrieved candidate response Y, the goal of our response ranker is to measure whether Y is a good response to the context X, considering the given dialogue goal G = [start, topic_a, topic_b] and related knowledge K.
[Figure 2: The retrieval-based model and generation-based model.]

The matching score measured by our knowledge-aware response ranker is defined as p(l = 1 | Y, X, K, G). As shown in Figure 2(a), our knowledge-aware response ranker consists of four major parts, i.e., the context-response representation module (Encoder), the knowledge representation module (Knowledge Encoder), the knowledge reasoning module (Knowledge Reasoner) as well as the matching module (Matcher).

The Encoder module has the same architecture as BERT (Devlin et al., 2018), it takes the context X and candidate response Y as segment a and segment b in BERT, and leverages a stacked self-attention to produce the joint representation of X and Y, denoted as xy. Each related knowledge knowledge_i is also encoded as vector representations in the Knowledge Encoder module using a bi-directional GRU (Chung et al., 2014), which can be formulated as k_i = [\overrightarrow{h}_T; \overleftarrow{h}_0], where T denotes the length of knowledge, \overrightarrow{h}_T and \overleftarrow{h}_0 represent the last and initial hidden states of the two directional GRU respectively. The dialogue goal is also combined with the related knowledge in order to fuse that information into response ranking.

To jointly consider context, dialogue goal and knowledge in response ranking, we make the context-response representation xy attend to all knowledge vectors k_i and get the attention distribution. For simplicity, the dialogue goal was treated as part of the knowledge used in the conversation.

p(k_i \mid x, y) = \frac{\exp(xy \cdot k_i)}{\sum_{j} \exp(xy \cdot k_j)}    (1)

and fuse all related knowledge information into a single vector k_c = \sum_i p(k_i \mid x, y) \cdot k_i. We view k_c and xy as the information from knowledge side and dialogue side respectively, and fuse those two kinds of information into a single vector via concatenation, then finally calculate the matching probability as:

p(l = 1 \mid X, Y, K, G) = \mathrm{sigmoid}(\mathrm{MLP}([xy; k_c]))    (2)

Our knowledge-aware response ranker differs from most existing work in jointly considering the previous dialogue context, the dialogue goal as well as the related knowledge, which enables our model to better exploit knowledge to achieve the conversation goal.
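As a rough illustration of Eqs. (1)–(2), the PyTorch sketch below scores a candidate response by attending the joint context–response vector xy over the encoded knowledge vectors and passing the concatenation through an MLP. It is our own simplified sketch, with random vectors standing in for the BERT and GRU encoders described above, and is not the authors' released implementation.

```python
import torch
import torch.nn as nn

class KnowledgeAwareRanker(nn.Module):
    """Minimal stand-in for the Knowledge Reasoner + Matcher of Figure 2(a)."""
    def __init__(self, dim):
        super().__init__()
        self.matcher = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, xy, k):
        # xy: [batch, dim] joint context-response representation
        # k:  [batch, n_knowledge, dim] encoded knowledge (goal treated as knowledge)
        attn = torch.softmax(torch.bmm(k, xy.unsqueeze(2)).squeeze(2), dim=1)  # Eq. (1)
        kc = torch.bmm(attn.unsqueeze(1), k).squeeze(1)      # fused knowledge vector
        logits = self.matcher(torch.cat([xy, kc], dim=1))    # MLP([xy; kc])
        return torch.sigmoid(logits).squeeze(1)              # Eq. (2): p(l=1|X,Y,K,G)

# Toy usage with random "encoder" outputs
xy = torch.randn(4, 128)      # stand-in for the BERT [X; Y] representation
k = torch.randn(4, 17, 128)   # stand-in for bi-GRU knowledge encodings
model = KnowledgeAwareRanker(128)
print(model(xy, k).shape)     # torch.Size([4])
```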
4.2 Generation-based Model

To generate a knowledge-driven dialogue response, we enhance the vanilla seq2seq model with an extra knowledge selection paradigm. Figure 2(b) shows the structure of our knowledge-aware generator, which is comprised of four parts: the Utterance Encoder, the Knowledge Encoder, the Knowledge Manager and the Decoder. For each given dialogue context X, along with the dialogue goal G and related knowledge K, our knowledge-aware generator first encodes all input information as vectors in the Utterance Encoder and Knowledge Encoder. The encoding method in those two modules also uses bi-directional GRUs, akin to the retrieval-based method. In particular, the dialogue context X and the dialogue goal G are fused into the same vector x by sequentially concatenating G and X into a single sequence, which is then fed to the encoder.

After encoding, our knowledge-aware generator starts to plan its dialogue strategy by considering which knowledge would be appropriate next. In practice, the generator could also conduct knowledge selection via an attention mechanism, as in the retrieval-based method. However, to force the model to mimic humans in knowledge selection, we introduce two different distributions: 1) the prior distribution p(k_i|x) and 2) the posterior distribution p(k_i|x, y). We take the prior distribution p(k_i|x) as the knowledge reasoned by the machine and the posterior distribution p(k_i|x, y) as the knowledge reasoned by humans, and then force the machine to mimic humans by minimizing the KL divergence between those two distributions, which can be formulated as:

p(k_i \mid x, y) = \frac{\exp(k_i \cdot \mathrm{MLP}([x; y]))}{\sum_{j=1}^{N} \exp(k_j \cdot \mathrm{MLP}([x; y]))}    (3)

p(k_i \mid x) = \frac{\exp(k_i \cdot x)}{\sum_{j=1}^{N} \exp(k_j \cdot x)}    (4)

L_{KL}(\theta) = \frac{1}{N} \sum_{i=1}^{N} p(k_i \mid x, y) \log \frac{p(k_i \mid x, y)}{p(k_i \mid x)}    (5)

Given the knowledge distributions p(k_i|x) and p(k_i|x, y), we fuse all related knowledge information into a vector k_c = \sum_i p(k_i \mid x, y) \cdot k_i, as in our retrieval-based method, and feed it to the decoder for response generation. In the testing phase, where no gold response is available, the fused knowledge is estimated as k_c = \sum_i p(k_i \mid x) \cdot k_i. The decoder is implemented with the Hierarchical Gated Fusion Unit described in Yao et al. (2017), which is a standard GRU-based decoder enhanced with external knowledge gates.

Besides the KL divergence loss, our knowledge-aware generator introduces two additional loss functions:

NLL Loss: the Negative Log-Likelihood loss L_{NLL}(\theta) measures the difference between the true response and the response generated by our model.

BOW Loss: we use the BOW loss proposed by Zhao et al. (2017) to ensure the accuracy of the fused knowledge k_c by enforcing the relevancy between the knowledge and the true response. Specifically, let w = \mathrm{MLP}(k_c) \in \mathbb{R}^{|V|}, where |V| is the vocabulary size, and define:

p(y_t \mid k_c) = \frac{\exp(w_{y_t})}{\sum_{v} \exp(w_v)}    (6)

Then, the BOW loss is defined to minimize:

L_{BOW}(\theta) = -\frac{1}{m} \sum_{t=1}^{m} \log p(y_t \mid k_c)    (7)

In summary, the final loss of our generative model is:

L(\theta) = L_{KL}(\theta) + L_{NLL}(\theta) + L_{BOW}(\theta)    (8)
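To make Eqs. (3)–(8) easier to follow, here is a hedged PyTorch sketch of the posterior/prior knowledge-selection distributions and the KL, BOW and NLL losses. It mirrors the formulas above in simplified form (batch size one, random encoder outputs, and a stubbed decoder), so it illustrates the training objective rather than reproducing the authors' code.

```python
import torch
import torch.nn.functional as F
from torch import nn

dim, vocab = 128, 30000
n_knowledge, resp_len = 17, 12

# Stand-ins for encoder outputs (all random; batch size 1 for brevity)
x = torch.randn(1, dim)                  # context + goal representation
y = torch.randn(1, dim)                  # gold response representation
k = torch.randn(1, n_knowledge, dim)     # knowledge representations
resp_ids = torch.randint(0, vocab, (1, resp_len))  # gold response token ids

post_mlp = nn.Linear(2 * dim, dim)       # MLP([x; y]) in Eq. (3)
bow_mlp = nn.Linear(dim, vocab)          # w = MLP(kc) in Eq. (6)

# Prior p(k_i|x) (Eq. 4) and posterior p(k_i|x, y) (Eq. 3)
prior_logits = torch.bmm(k, x.unsqueeze(2)).squeeze(2)
post_logits = torch.bmm(k, post_mlp(torch.cat([x, y], dim=1)).unsqueeze(2)).squeeze(2)
prior = F.softmax(prior_logits, dim=1)
posterior = F.softmax(post_logits, dim=1)

# KL divergence loss (Eq. 5): force the prior to mimic the posterior
kl_per_example = torch.sum(
    posterior * (posterior.clamp_min(1e-10).log() - prior.clamp_min(1e-10).log()),
    dim=1)
kl_loss = (kl_per_example / n_knowledge).mean()

# Fuse knowledge with the posterior at training time (prior at test time)
kc = torch.bmm(posterior.unsqueeze(1), k).squeeze(1)

# BOW loss (Eqs. 6-7): the fused knowledge should predict the response words
bow_logp = F.log_softmax(bow_mlp(kc), dim=1)       # log p(y_t | kc)
bow_loss = -bow_logp.gather(1, resp_ids).mean()

# NLL loss from a decoder (stubbed here with random logits over the vocabulary)
dec_logits = torch.randn(1, resp_len, vocab)
nll_loss = F.cross_entropy(dec_logits.view(-1, vocab), resp_ids.view(-1))

total_loss = kl_loss + nll_loss + bow_loss         # Eq. (8)
print(float(total_loss))
```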
5 Experiments

5.1 Setting

Our proposed models are tested under two settings: 1) automatic evaluation and 2) human evaluation. For automatic evaluation, we leverage several common metrics, including BLEU, PPL, F1 and DISTINCT-1/2, to automatically measure fluency, relevance and diversity. In our setting, we ask each model to select the best response from 10 candidates, as in previous work (Zhang et al., 2018). Those 10 candidate responses are comprised of one true response generated by human beings and nine randomly sampled ones from the training corpus. We measure the performance of all models using Hits@1 and Hits@3, following Zhang et al. (2018). Furthermore, we also evaluate each model's ability to exploit knowledge by calculating knowledge precision/recall/F1 scores.

The human evaluation is conducted at two levels, i.e., turn-level and dialogue-level. The turn-level human evaluation is similar to the automatic evaluation: given the dialogue context, the dialogue goal as well as the related knowledge, we require each model to produce a response according to the dialogue context. The responses are evaluated by three annotators in terms of fluency, coherence, informativeness, and proactivity. Here, coherence measures the relevance of the response, and proactivity measures whether the model can successfully introduce new topics without disrupting fluency and coherence.

The dialogue-level evaluation is much more challenging. Given a conversation goal and the related knowledge, each model is required to talk with a volunteer and lead the conversation to achieve the goal. For each model, 100 dialogues are generated. The generated conversations are then evaluated by three persons in terms of two aspects: goal completion and coherence. The goal completion measures how well the conversation goal is achieved, and the coherence scores the fluency of the whole dialogue.

All human evaluation metrics, except turn-level proactivity and dialogue-level coherence, have three grades: good (2), fair (1), bad (0). For goal completion, "2" means that the goal is achieved with full use of knowledge, "1" means the goal is achieved by making only minor use of knowledge, and "0" means that the goal is not achieved. We additionally set a perfect grade (3) for dialogue-level coherence, to encourage consistent and informative dialogues. For proactivity, we also have three grades: "1" means good proactivity (new topics related to the context are introduced), "-1" means bad proactivity (new topics are introduced but are irrelevant to the context), and "0" means that no new topics are introduced. The detailed description of the human evaluation metrics can be found in the appendices.

Methods             | Hits@1 | Hits@3 | PPL   | F1 / BLEU1 / BLEU2    | DISTINCT 1&2  | knowledge P/R/F1
retrieval w/o klg.  | 45.84% | 72.86% |  --   | 33.08 / 0.280 / 0.147 | 0.121 / 0.376 | 86.90 / 39.30 / 13.73
retrieval w/ klg.   | 46.74% | 75.32% |  --   | 33.12 / 0.282 / 0.146 | 0.122 / 0.388 | 8.54 / 37.93 / 13.47
norm retrieval      | 50.92% | 79.02% |  --   | 34.73 / 0.291 / 0.156 | 0.118 / 0.373 | 9.76 / 40.23 / 15.22
S2S w/o klg.        | 24.88% | 49.64% | 20.16 | 26.43 / 0.187 / 0.100 | 0.032 / 0.088 | 4.59 / 30.00 / 7.73
S2S w/ klg.         | 30.58% | 57.52% | 13.53 | 32.19 / 0.226 / 0.140 | 0.064 / 0.168 | 5.89 / 36.31 / 9.85
norm S2S            | 31.26% | 55.12% | 10.96 | 39.94 / 0.283 / 0.186 | 0.093 / 0.222 | 7.52 / 42.74 / 12.34
generation w/o klg. | 25.52% | 50.14% | 20.3  | 28.52 / 0.29 / 0.154  | 0.032 / 0.075 | 6.18 / 27.48 / 9.86
generation w/ klg.  | 31.90% | 58.44% | 27.3  | 36.21 / 0.32 / 0.169  | 0.049 / 0.144 | 8.67 / 35.90 / 13.62
norm generation     | 32.50% | 58.50% | 24.3  | 41.84 / 0.347 / 0.198 | 0.057 / 0.155 | 9.88 / 38.02 / 15.27
Table 3: Automatic evaluation results. "klg." and "norm" stand for knowledge and normalized; S2S stands for the vanilla sequence-to-sequence model.

methods               | fluency | coherence | informativeness | proactivity | goal completion | coherence
(score range)         | (0,1,2) | (0,1,2)   | (0,1,2)         | (-1,0,1)    | (0,1,2)         | (0,1,2,3)
norm retrieval        | 1.93    | 1.41      | 0.86            | 0.80        | 0.90            | 1.92
norm generation (s2s) | 2.00    | 1.89      | 0.74            | 0.86        | 1.14            | 2.01
norm generation       | 1.87    | 1.61      | 1.10            | 0.87        | 1.22            | 2.03
Table 4: Turn-level (first four columns) and dialogue-level (last two columns) human evaluation results.

5.2 Comparison Models

The compared models contain the vanilla seq2seq model, our proposed retrieval-based model as well as our proposed generation-based model. (We also compared MemNet (Ghazvininejad et al., 2018), whose performance is similar to Seq2Seq with knowledge; we omit it here due to space limits.) Moreover, we normalize the train/valid/test data by replacing the two specific topics in the knowledge path with "topic_a" and "topic_b" respectively. Models using such normalized corpora are named normalized models. To test the effectiveness of knowledge, we set up one ablation experiment, which removes all the knowledge triplets by replacing them with "UNK, UNK, UNK".
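The automatic metrics in Table 3 follow fairly simple definitions. The sketch below shows one plausible way to compute Hits@k over the 10-candidate ranking task and a token-level knowledge precision/recall/F1; the exact matching rules behind the paper's numbers are not spelled out here, so treat this as an illustrative approximation rather than the evaluation script.

```python
def hits_at_k(ranked_candidate_lists, gold_indices, k):
    """Fraction of cases where the gold response is ranked within the top k.
    ranked_candidate_lists[i] is a list of candidate ids sorted by model score."""
    hits = sum(1 for ranked, gold in zip(ranked_candidate_lists, gold_indices)
               if gold in ranked[:k])
    return hits / len(gold_indices)

def knowledge_prf(response_tokens, knowledge_tokens):
    """Token-overlap precision/recall/F1 between a generated response and the
    ground-truth knowledge words (one simple reading of 'knowledge P/R/F1')."""
    resp, know = set(response_tokens), set(knowledge_tokens)
    overlap = len(resp & know)
    p = overlap / len(resp) if resp else 0.0
    r = overlap / len(know) if know else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

# Toy usage: 2 test cases, 10 candidates each, gold candidate id = 0
ranked = [[3, 0, 5, 1, 2, 4, 6, 7, 8, 9], [0, 2, 1, 3, 4, 5, 6, 7, 8, 9]]
print(hits_at_k(ranked, [0, 0], k=1), hits_at_k(ranked, [0, 0], k=3))
print(knowledge_prf("bo peng stars in the film".split(),
                    "starring bo peng".split()))
```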
5.3 Model Training

All models are implemented using PaddlePaddle (an open-source deep learning platform developed by Baidu, https://paddlepaddle.org) and PyTorch (Paszke et al., 2017), and trained on a single NVIDIA Tesla K40 GPU. We set the vocabulary size to 30k for both the retrieval-based and generation-based methods. All hidden sizes, as well as the embedding size, are set to 300, and the word embedding layer is initialized via word2vec (https://radimrehurek.com/gensim/models/word2vec.html) trained on a very large corpus. We apply the Adam optimizer for model training, and the beam size for the generative models is set to 10 during decoding. Our code and data are available at https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/Research/ACL2019-DuConv.

5.4 Results

Table 3 and Table 4 summarize the experimental results on automatic evaluation and human evaluation. For human evaluation, we only evaluate the normalized models, since they achieved better performance on our dataset. All human evaluations are conducted by three persons, where the agreement ratio (Fleiss' kappa (Fleiss et al., 1971)) ranges from 0.37 to 0.86, with the lowest agreement on multi-turn coherence and all others above 0.6. More details of these measures are available in the Appendix.

                             | norm generation | norm seq2seq | norm retrieval
goal completion: score 0     | 21%             | 14%          | 25%
goal completion: score 1     | 35%             | 26%          | 59%
goal completion: score 2     | 43%             | 29%          | 15%
knowledge used: # triplets   | 2.46            | 1.51         | 2.28
knowledge used: # properties | 27              | 20           | 25
Table 5: Analysis of goal completion and knowledge exploitation.

It can be seen that the retrieval-based model and the generation-based model perform quite differently in terms of automatic evaluation and human evaluation. The retrieval-based model works better on Hits@K but worse on F1 and BLEU compared to the generation-based model. This is perhaps caused by the fact that they are optimized on different metrics. For the human evaluation, it can be observed that the retrieval-based method is clearly worse than the generation-based models. This is because the retrieved candidates limit the potential of the retrieval-based model. We also found that the methods using knowledge outperform those without knowledge, which confirms the benefits of using background knowledge. It is very interesting that normalizing "topic_a" and "topic_b" significantly improves performance for all models, because of the resulting generalization capability over the knowledge.

From the human evaluation, we found that our proposed generation methods outperform the baseline Seq2Seq model and the retrieval model, especially in terms of turn-level informativeness and proactivity, and dialogue-level goal completion and coherence. In order to further analyze the relationship between informativeness and goal completion, the detailed distribution of goal completion scores and the number of used knowledge triplets are shown in Table 5. From this table, it can be seen that our proposed generation model can exploit more knowledge to achieve the conversation goal (a much higher rate on score "2"), making the conversation more engaging and coherent. This demonstrates the effectiveness of the knowledge posterior/prior distribution learning. Although the baseline Seq2Seq model also has good goal completion capability, it usually only uses knowledge directly related to the conversation goal (a much higher rate on score "1"), which often makes the conversation dull. However, for the dialogue-level human evaluation, 15% to 20% of conversation goals are still not achieved. The reason may be that our models (both retrieval and generation) have no explicit multi-turn policy mechanism to control the whole conversation flow, which is left for future research.
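Since inter-annotator agreement is reported with Fleiss' kappa (Fleiss et al., 1971), a small self-contained implementation may help the reader reproduce such figures. This is the standard textbook formula applied to toy data, not code or ratings from the paper.

```python
def fleiss_kappa(ratings):
    """ratings[i][j] = number of raters assigning item i to category j.
    Every item must be rated by the same number of raters."""
    n_items = len(ratings)
    n_raters = sum(ratings[0])
    n_categories = len(ratings[0])

    # Proportion of all assignments made to each category
    p_j = [sum(row[j] for row in ratings) / (n_items * n_raters)
           for j in range(n_categories)]

    # Per-item agreement P_i, then mean observed agreement P_bar
    p_i = [(sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
           for row in ratings]
    p_bar = sum(p_i) / n_items

    # Expected agreement by chance
    p_e = sum(p * p for p in p_j)
    return (p_bar - p_e) / (1 - p_e)

# Toy example: 5 responses, 3 annotators, 3 grades (bad / fair / good)
ratings = [[0, 0, 3], [0, 1, 2], [3, 0, 0], [0, 2, 1], [0, 0, 3]]
print(round(fleiss_kappa(ratings), 3))
```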
6 Case Study

Figure 3 shows the conversations generated by the models when conversing with humans, given the conversation goal and the related knowledge. It can be seen that our knowledge-aware generator can choose appropriate knowledge, and more of it, for diverse conversation generation. Even though the retrieval-based method can also produce knowledge-grounded responses, the knowledge it uses is often wrong. Although the seq2seq model can smoothly achieve the given knowledge goal, it tends to generate generic responses using a safe dialogue strategy and mentions much less knowledge than our proposed knowledge-aware generator, making the generated conversation less diverse and sometimes dull.

[Figure 3: Conversations generated by three different models (norm generation, norm seq2seq, norm retrieval) for the goal "[START] → McDull: Rise of the Rice Cooker → Bo Peng"; words in yellow represent correct use of knowledge while those in blue mark wrong knowledge.]

7 Conclusion

In this paper, we build a human-like conversational agent by endowing it with the ability to proactively lead the conversation. To achieve this goal, we create a new dataset named DuConv. Each dialog in DuConv is created by two crowdsourced workers, where one acts as the conversation leader and the other acts as the follower. The leader is provided with a knowledge graph and asked to sequentially change the discussed topics following the given conversation goal, and meanwhile keep the dialogue as natural and engaging as possible. We establish baseline results on DuConv using several state-of-the-art models. Experimental results show that dialogue models that plan over the knowledge graph can make fuller use of related knowledge to generate more diverse conversations. Our dataset and proposed models are publicly available and can be used as benchmarks for future research on constructing knowledge-driven proactive dialogue systems.

Acknowledgement

We sincerely thank the PaddlePaddle development team for helping us build the baseline models. We also would like to thank Ying Chen and Na Chen for helping us to collect the dataset through crowdsourcing. This work was supported by the Natural Science Foundation of China (No. 61533018).

References

Antoine Bordes, Y-Lan Boureau, and Jason Weston. 2016. Learning end-to-end goal-oriented dialog. arXiv preprint arXiv:1605.07683.

Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.

Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2019. Wizard of Wikipedia: Knowledge-powered conversational agents. In International Conference on Learning Representations.

J. L. Fleiss et al. 1971. Measuring nominal scale agreement among many raters. Psychological Bulletin, 76(5):378–382.

Marjan Ghazvininejad, Chris Brockett, Ming-Wei Chang, Bill Dolan, Jianfeng Gao, Wen-tau Yih, and Michel Galley. 2018. A knowledge-grounded neural conversation model. In Thirty-Second AAAI Conference on Artificial Intelligence.

Raymond Li, Samira Ebrahimi Kahou, Hannes Schulz, Vincent Michalski, Laurent Charlin, and Chris Pal. 2018. Towards deep conversational recommendations. In Advances in Neural Information Processing Systems, pages 9748–9758.

Shuman Liu, Hongshen Chen, Zhaochun Ren, Yang Feng, Qun Liu, and Dawei Yin. 2018. Knowledge diffusion for neural dialogue generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1489–1498.

Kaixiang Mo, Yu Zhang, Shuangyin Li, Jiajun Li, and Qiang Yang. 2018. Personalizing a dialogue system with transfer reinforcement learning. In Thirty-Second AAAI Conference on Artificial Intelligence.

Nikita Moghe, Siddhartha Arora, Suman Banerjee, and Mitesh M. Khapra. 2018. Towards exploiting background knowledge for building conversation systems. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2322–2332.

Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in PyTorch. In NIPS-W.

Alan M. Turing. 2009. Computing machinery and intelligence. In Parsing the Turing Test, pages 23–65.
Pavlos Vougiouklis, Jonathon Hare, and Elena Simperl. 2016. A neural network approach for knowledgedriven response generation. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 3370–3380. Yansen Wang, Chenyi Liu, Minlie Huang, and Liqiang Nie. 2018. Learning to ask questions in opendomain conversational systems with typed decoders. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2193–2203. Yu Wu, Wei Wu, Ming Zhou, and Zhoujun Li. 2017. Sequential match network: A new architecture for multi-turn response selection in retrieval-based chatbots. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 372–381. Lili Yao, Yaoyuan Zhang, Yansong Feng, Dongyan Zhao, and Rui Yan. 2017. Towards implicit contentintroducing for generative short-text conversation systems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2190–2199. Jun Yin, Xin Jiang, Zhengdong Lu, Lifeng Shang, Hang Li, and Xiaoming Li. 2015. Neural generative question answering. CoRR, abs/1512.01337. Steve Young, Milica Gaˇsi´c, Blaise Thomson, and Jason D Williams. 2013. Pomdp-based statistical spoken dialog systems: A review. Proceedings of the IEEE, 101(5):1160–1179. Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Personalizing dialogue agents: I have a dog, do you have pets too? In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2204– 2213. Tiancheng Zhao, Ran Zhao, and Maxine Eskenazi. 2017. Learning discourse-level diversity for neural dialog models using conditional variational autoencoders. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 654–664, Vancouver, Canada. Hao Zhou, Tom Young, Minlie Huang, Haizhou Zhao, Jingfang Xu, and Xiaoyan Zhu. 2018a. Commonsense knowledge aware conversation generation with graph attention. In Proceedings of the 27th International Joint Conference on Artificial Intelligence, pages 4623–4629. Xiangyang Zhou, Lu Li, Daxiang Dong, Yi Liu, Ying Chen, Wayne Xin Zhao, Dianhai Yu, and Hua Wu. 2018b. Multi-turn response selection for chatbots with deep attention matching network. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1118–1127. Wenya Zhu, Kaixiang Mo, Yu Zhang, Zhangbin Zhu, Xuezheng Peng, and Qiang Yang. 2017. Flexible end-to-end dialogue system for knowledge grounded conversation. CoRR. 3804 Appendix A. Turn-level Human Evaluation Guideline Fluency measures if the produced response itself is fluent: • score 0 (bad): unfluent and difficult to understand. • score 1 (fair): there are some errors in the response text but still can be understood. • score 2 (good): fluent and easy to understand. Coherence measures if the response can respond to the context: • score 0 (bad): not semantically relevant to the context or logically contradictory to the context. • score 1 (fair): relevant to the context as a whole, but using some irrelevant knowledge, or not answering questions asked by the users. • score 2 (good): otherwise. Informativeness measures if the model makes full use of knowledge in the response: • score 0 (bad): no knowledge is mentioned at all. 
• score 1 (fair): only one triplet is mentioned in the response. • score 2 (good): more than one triplet is mentioned in the response. Proactivity measures if the model can introduce new knowledge/topics in conversation: • score -1 (bad): some new topics are introduced but irrelevant to the context. • score 0 (fair): no new topics/knowledge are used. • score 1(good): some new topics relevant to the context are introduced. B. Dialogue-level Human Evaluation Guideline Goal Completion measures how good the given conversation goal is finished: • score 0 (bad): neither “topic a” nor “topic b”is mentioned in the conversation. • score 1 (fair): “topic a” or “topic b” is mentioned , but the whole dialogue is very boring and less than 3 different knowledge triplets are used. • score 2 (good): both “topic a” or “topic b” are mentioned and more than 2 different knowledge triplets are used. Coherence measures the overall fluency of the whole dialogue: • score 0 (bad): over 2 responses irrelevant or logically contradictory to the previous context. • score 1 (fair): only 2 responses irrelevant or logically contradictory to the previous context. • score 2 (good): only 1 response irrelevant or logically contradictory to the previous context. • score 3 (perfect): no response irrelevant or logically contradictory to the previous context.
2019
369
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 380–389 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 380 Neural Text Simplification of Clinical Letters with a Domain Specific Phrase Table Matthew Shardlow Department of Computing and Mathematics Manchester Metropolitan University [email protected] Raheel Nawaz Department of Operations, Technology, Events and Hospitality Management Manchester Metropolitan University [email protected] Abstract Clinical letters are infamously impenetrable for the lay patient. This work uses neural text simplification methods to automatically improve the understandability of clinical letters for patients. We take existing neural text simplification software and augment it with a new phrase table that links complex medical terminology to simpler vocabulary by mining SNOMED-CT. In an evaluation task using crowdsourcing, we show that the results of our new system are ranked easier to understand (average rank 1.93) than using the original system (2.34) without our phrase table. We also show improvement against baselines including the original text (2.79) and using the phrase table without the neural text simplification software (2.94). Our methods can easily be transferred outside of the clinical domain by using domain-appropriate resources to provide effective neural text simplification for any domain without the need for costly annotation. 1 Introduction Text Simplification is the process of automatically improving the understandability of a text for an end user. In this paper, we use text simplification methods to improve the understandability of clinical letters. Clinical letters are written by doctors and typically contain complex medical language that is beyond the scope of the lay reader. A patient may see these if they are addressed directly, or via online electronic health records. If a patient does not understand the text that they are reading, this may cause them to be confused about their diagnosis, prognosis and clinical findings. Recently, the UK Academy of Medical Royal Colleges introduced the “Please Write to Me” Campaign, which encouraged clinicians to write directly to patients, avoid latin-phrases and acronyms, ditch redundant words and generally write in a manner that is accessible to a non-expert (Academy of Medical Royal Colleges, 2018). Inspired by this document, we took data from publicly available datasets of clinical letters (Section 3), used state of the art Neural Text Simplification software to improve the understandability of these documents (Section 4) analysed the results and identified errors (Section 5), built a parallel vocabulary of complex and simple terms (Section 6), integrated this into the simplification system and evaluated this with human judges, showing an overall improvement (Section 7). 2 Related Work The idea of simplifying texts through machine translation has been around some time (Wubben et al., 2012; Xu et al., 2016), however with recent advances in machine translation leveraging deep learning (Wu et al., 2016), text simplification using neural networks (Wang et al., 2016; Nisioi et al., 2017; Sulem et al., 2018) has become a realistic prospect. The Neural Text Simplification (NTS) system (Nisioi et al., 2017) uses the freely available OpenNMT (Klein et al., 2017) software package1 which provides sequence to sequence learning between a source and target language. 
In the simplification paradigm, the source language is difficult to understand language and the target language is an easier version of that language (in our case both English, although other languages can be simplified using the same architecture). The authors of the NTS system provide models trained on parallel data from English Wikipedia and Simple English Wikipedia which can be used to simplify source documents in English. NTS provides lexical simplifications at the level of both single lexemes and multiword expressions in addition to syntactic simplifications such as paraphrasing or removing redundant 1http://opennmt.net/ 381 grammatical structures. Neural Machine Translation is not perfect and may sometimes result in errors. A recent study found that one specific area of concern was lexical cohesion (Voita et al., 2019), which would affect the readability and hence simplicity of a resulting text. Phrase tables for simplification have also been applied in the context of paraphrasing systems where paraphrases are identified manually (Hoard et al., 1992) or learnt from corpora (Yatskar et al., 2010; Grabar et al., 2014; Hasan et al., 2016) and stored in a phrase table for later application to a text. A paraphrase consists of a complex phrase paired with one or more simplifications of that phrase. These are context specific and must be applied at the appropriate places to avoid semantic errors that lead to loss of meaning (Shardlow, 2014). The clinical/medical domain recieves much attention for NLP (Shardlow et al., 2018; Yunus et al., 2019; Jahangir et al., 2017; Nawaz et al., 2012) and is well suited to the task of text simplification as there is a need for experts (i.e., clinicians) to communicate with non-experts (i.e., patients) in a language commonly understood by both. Previous efforts to address this issue via text simplification have focussed on (a) public health information (Kloehn et al., 2018), where significant investigations have been undertaken to understand what makes language difficult for a patient and (b) the simplification of medical texts in the Swedish language (Abrahamsson et al., 2014), which presents its own unique set of challenges for text simplification due to compound words. 3 Data Collection To assess the impact of simplification on patient understanding, we obtained 2 datasets representing clinical texts that may be viewed by a patient. We selected data from the i2b2 shared task, as well as data from MIMIC. A brief description of each dataset, along with the preprocessing we applied is below. We selected 149 records from i2b2 and 150 from MIMIC. Corpus statistics are given in Table 1. 3.1 i2b2 The i2b2 2006 Deidentification and Smoking Challenge (Uzuner et al., 2007) consists of 889 unannotated, de-identified discharge summaries. We selected the test-set of 220 patient records and i2b2 MIMIC Total Records 149 150 299 Words 80,273 699,798 780,071 Avg. Words 538.7 4665.3 2,608.9 Table 1: Corpus statistics filtered these for all records containing more than 10 tokens. This gave us 149 records to work with. We concatenated all the information from each record into one file and did no further preprocessing of this data as it was already tokenised and normalised sufficiently. 3.2 MIMIC In addition to i2b2, we also downloaded data from MIMIC-III v1.4 (Johnson et al., 2016) (referred to herein as MIMIC). MIMIC provides over 58,000 hospital records, with detailed clinical information regarding a patient’s care. 
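The record-selection step described above (keeping only summaries with more than 10 tokens) is simple to reproduce. The sketch below is a generic illustration: the directory name, one-file-per-record layout, and the select_records function are assumptions made for the example, not details taken from the i2b2 or MIMIC distributions.

```python
from pathlib import Path

def select_records(record_dir, min_tokens=10):
    """Keep only records with more than `min_tokens` whitespace tokens,
    mirroring the filtering step described above. One plain-text record
    per file is assumed purely for illustration."""
    base = Path(record_dir)
    kept = []
    if not base.is_dir():
        return kept
    for path in sorted(base.glob("*.txt")):
        text = path.read_text(encoding="utf-8")
        if len(text.split()) > min_tokens:
            kept.append((path.name, text))
    return kept

if __name__ == "__main__":
    records = select_records("i2b2_test_records")  # hypothetical directory name
    print(f"{len(records)} records kept")
```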
One key difference between MIMIC and i2b2 was that each of MIMIC’s records contained multiple discrete statements separated by time. We separated these sub-records, and selected the 150 with the largest number of tokens. This ensured that we had selected a varied sample from across the documents that were available to us. We did not use all the data available to us due to the time constraints of (a) running the software and (b) performing the analysis on the resulting documents. We preprocessed this data using the tokenisation algorithm distributed with OpenNMT. 4 Neural Text Simplification We used the publicly available NTS system (Nisioi et al., 2017). This package is freely available via GitHub2. We chose to use this rather than reimplementing our own system as it allows us to better compare our work to the current state of the art and makes it easier for others to reproduce our work. We have not included details of the specific algorithm that underlies the OpenNMT framework, as this is not the focus of our paper and is reported on in depth in the original paper, where we would direct readers. Briefly, their system uses an Encoder-Decoder LSTM layer with 500 hidden units, dropout and attention. Original words are substituted when an out of vocabulary word is detected, as this is appropriate in mono2https://github.com/senisioi/ NeuralTextSimplification/ 382 lingual machine translation. The simplification model that underpins the NTS software is trained using aligned English Wikipedia and Simple English Wikipedia data. This model is distributed as part of the software. We ran the NTS software on each of our 299 records to generate a new simplified version of each original record. We used the standard parameters given with the NTS software as follows: Beam Size = 5: This parameter controls the beam search that is used to select a final sentence. A beam size of 1 would indicate greedy search. n-best = 4: This causes the 4 best translations to be output, although in practice, we only selected the best possible translation in each case. model = NTS-w2v epoch11 10.20.t7: Two models were provided with the NTS software, we chose the model with the highest BLEU score in the original NTS paper. replace unk: This parameter forces unknown words to be replaced by the original token in the sentence (as opposed to an <UNK> marker). 4.1 Readability Indices To identify whether our system was performing some form of simplification we calculated three readability indices,3 each of which took into account different information about the text. We have not reported formulae here as they are available in the original papers, and abundantly online. Flesch-Kincaid: The Flesch-Kincaid reading grade calculator (Kincaid et al., 1975) takes into account the ratio of words to sentences and the ratio of syllables to words in a text. This tells us information about how long each sentence is and how many long words are used in each text. The output of Flesch-Kincaid is an approximation of the appropriate US Reading Grade for the text. Gunning-Fox: The Gunning Fox index (Gunning, 1952) estimates the years of education required for a reader to understand a text. It 3using the implementations at: https://github. 
com/mmautner/readability i2b2 MIMIC Flesch Pre 8.70 6.40 Kincaid Post 6.46 4.84 P-Value < 0.001 < 0.001 Gunning Pre 14.53 12.69 Fox Post 12.35 7.36 P-Value < 0.001 < 0.001 Coleman Pre 10.60 10.12 Liau Post 9.04 5.90 P-Value < 0.001 < 0.001 Table 2: The results of calculating 3 readability indices on the texts before and after simplification. We show a significant reduction in the metrics in each case indicating that the texts after simplification are suitable for a lower reading grade level. takes into account the ratio of words to sentences and the proportion of words in a text which are deemed to be complex, where a complex word is considered to be any words of more than 3 syllables, discounting suffixes. Coleman-Liau: The Coleman-Liau index (Coleman and Liau, 1975) estimates the US reading grade level of a text. It takes into account the average numbers of letters per word and sentences per word in a text. The results of each of these metrics for the i2b2 and MIMIC documents are shown in Table 2. In each case, using the NTS software improved the readability of the document. We calculated the statistical significance of this improvement with a t-test, receiving a p-value of less than 0.001 in each case. However, readability indices say nothing about the understandability of the final text and it could be the case that the resultant text was nonsensical, but still got a better score. This concern led us to perform the error analysis in the following section. 5 Error Analysis Our previous analysis showed that the documents were easier to read according to automated indices, however the automated indices were not capable of telling us anything about the quality of the resulting text. To investigate this further, we analysed 1000 sentences (500 from i2b2 and 500 from MIMIC) that had been processed by the system and categorised each according to the following framework: 383 Type 1: A change has been made with no loss or alteration of the original meaning. Type 2: No change has been made. Type 3: A significant reduction in the information has been made, which has led to critical information being missed. Type 4: A single lexical substitution has been made, which led to loss or alteration of the original meaning. Type 5: An incorrect paraphrase or rewording of the sentence has been made, which led to loss or alteration of the original meaning. Type 6: A single word from the original text is repeated multiple times in the resulting text. We developed this framework by looking at the 1000 sentences in our corpus. Although the framework does not give any information about the readability of sentences, it does tell us about the existing pitfalls of the algorithm. We were able to categorise every sentence using these six categories. Each category represents an increased level of severity in terms of the consequences for the readability of the text. A Type 1 sentence may have a positive impact on the readability of a text.4 A Type 2 sentence will not have any impact as no modification has been made. A Type 3 sentence may improve the readability according to the automated metric and may help the reader understand one portion of the text, however some critical information from the original text has been missed. In a clinical setting, this could lead to the patient missing some useful information about their care. Types 4, 5 and 6 represent further errors of increasing severity. 
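For reference, the three readability indices of Section 4.1 can be approximated in a few lines of Python. The formulas below are the standard published ones; the syllable counter is a crude heuristic of ours, and this is not the GitHub implementation the authors actually used, so the scores it produces will only roughly track theirs.

```python
import re

def count_syllables(word):
    """Very rough heuristic: count groups of vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    n_words = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    complex_words = sum(1 for w in words if count_syllables(w) >= 3)
    letters = sum(len(w) for w in words)

    flesch_kincaid = 0.39 * n_words / sentences + 11.8 * syllables / n_words - 15.59
    gunning_fog = 0.4 * (n_words / sentences + 100 * complex_words / n_words)
    coleman_liau = 5.88 * letters / n_words - 29.6 * sentences / n_words - 15.8
    return flesch_kincaid, gunning_fog, coleman_liau

print(readability("Patient has been suffering from photophobia and wheezing. "
                  "Patient suffers from sensitivity to light and wheezing."))
```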
In these cases, the resulting sentences did not convey the original meaning of the text and would diminish the understandability of a text if shown to a reader. The first author of this paper went through each sentence with the categories above and assigned each sentence to an appropriate category. Where one sentence crossed multiple categories, the highest (i.e., most severe) category was chosen. However, this only occurred in a small proportion of 4note, we do not claim that all Type 1 sentences are simplifications, only that the system has made a change which is attempting to simplify the text. This may or may not result in the text being easier to understand by a reader. Type i2b2 MIMIC Total 1 25 33 58 2 337 322 659 3 41 55 96 4 55 61 116 5 25 21 46 6 17 8 25 Table 3: The results of the error analysis. 500 sentences each were annotated from i2b2 and MIMIC to give 1000 annotated sentences in the ‘Total’ column. the data and would not significantly affect our results had we recorded these separately. The results of the error analysis are shown in Table 3. The results show that the majority of the time the system does not make a change to the text (659/1000 = 65.9% of the time). We would not expect every single sentence to be simplified by the system, as some sentences may not require simplification to be understood by an end user. Other sentences may require simplification, but the system does not realise this, in which case the system may still choose not to simplify the text. Only in 5.8% of the cases is a valid simplification made. These generally consisted of helpful lexical substitutions, however there were also some examples of helpful rephrasing or paraphrasing. In addition to the 5.8% of valid simplifications, a further 9.6% of cases were instances where a significant chunk of a sentence had been removed. In these cases, the resulting sentence was still readable by an end user, however some important information was missing. These sentences do not necessarily constitute an error in the system’s behaviour as the information that was omitted may not have been relevant to the patient and removing it may have helped the patient to better understand the text overall, despite missing some specific detail. The rate of Type 4 errors is 11.6%. These errors significantly obfuscated the text as an incorrect word was placed in the text, where the original word would have been more useful. 4.6% of errors were incorrect rewordings (Type 5) and a further 2.5% were cases of a word being repeated multiple times. In total this gives 18.7% of sentences that result in errors. The error rate clearly informs the use of the NTS software. It may be the case that in a clinical setting, NTS could be used as an aid to the doctor when writing a patient letter to suggest simplifications, however it is clear that it 384 would not be appropriate to simplify a doctor’s letter and send this directly to a patient without any human intervention. 6 Phrase Table Development The NTS system is trained on parallel Wikipedia and Simple Wikipedia documents. Whilst these may contain some medical texts, they are not specific to the clinical genre and we should not expect that direct simplification of medical language will occur. Indeed, when we examined the texts, it was clear that the majority of simplifications that were made concerned general language, rather than simplifying medical terminology. One way of overcoming this would be to create a large parallel corpus of simplified clinical letters. 
However this is difficult due to the licensing conditions of the source texts that we are using, where an annotator would be required to agree to the licence conditions of the dataset(s). In addition, we would require clinical experts who were capable of understanding and simplifying the texts. The clinical experts would have to produce vast amounts of simplified texts in order to provide sufficient training data for the OpenNMT system to learn from. Although this is possible, it would require significant time and financial resources. OpenNMT provides an additional feature that allows a pre-compiled phrase table to be used when an out-of-vocabulary word is identified. This can be used in cross-lingual translation to provide idioms, loan words or unusual translations. In monolingual translation, we can use this feature to provide specific lexical replacements that will result in easier to understand text. This allows us to use a general language simplification model, with a domain-specific phrase table and effectively simplify complex vocabulary from the (clinical) domain. We downloaded the entire SNOMED-CT clinical thesaurus (Donnelly, 2006), which contains 2,513,952 clinical terms, each associated with a concept identifier. We chose this resource over the full UMLS Metathesaurus as SNOMED-CT contains terms specific to the clinical domain and we expected this would lead to fewer false positives. Where terms share an identifier, these are considered synonymous with each other, allowing us to create groups of semantically equivalent terms. We filtered out terms that were greater than 4 tokens long or contained punctuation, As these indicated sentential terms that were not appropriate for our purposes. We identified abbreviations and automatically removed any explanations that were associated with these. We used the Google Web1T frequencies to identify which terms were the most common in general language use. Although this is not a direct measure of how easy to understand each word will be, it has been shown previously that lexical frequency correlates well with ease of understanding (Paetzold and Specia, 2016). Where there were multi-word expressions, we took the average frequency of all words in the multi-word expression, rather than taking the frequency of the N-gram. For each set of semantically equivalent terms, we took the most frequent term as the easiest to understand and added one entry to our phrase table for each of the other terms contained in the group. So, for a group of 3 terms, A, B and C, where B is the most frequent, we would add 2 pairs to our phrase table A-B, and C-B. This means that whenever A or C are seen in the original texts and they are considered to be out-of-vocabulary words, i.e., technical medical terms that were not present in the training texts, then the more frequent term B, will be substituted instead. We identified any instances where one word had more than one simplification (due to it being present in more than one synonym group). If the original word was an acronym, we removed all simplifications as an acronym may have multiple expansions and there is no way for the system to distinguish which is the correct expansion. If the original word with more than one simplification is not an acronym then we selected the most frequent simplification and discarded any others. This resulted in 110,415 pairs of words that were added to the phrase table. 
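The phrase-table construction described above follows a simple recipe: group synonymous terms, score each term by (averaged) corpus frequency, and map every term to the most frequent member of its group. The sketch below reproduces that recipe over toy data; the SNOMED-CT loading, Google Web1T counts and acronym handling are only stubbed with illustrative values, so this is a sketch of the method rather than the authors' code.

```python
def avg_frequency(term, freq):
    """Average unigram frequency of a (possibly multi-word) term, as described above."""
    words = term.split()
    return sum(freq.get(w.lower(), 0) for w in words) / len(words)

def build_phrase_table(concept_groups, freq, max_tokens=4):
    """Map each complex term to the most frequent synonym in its concept group."""
    table = {}
    for terms in concept_groups.values():
        # Drop long or punctuation-bearing terms, as in the filtering step above
        terms = [t for t in terms
                 if len(t.split()) <= max_tokens and not any(c in t for c in ",.;:()")]
        if len(terms) < 2:
            continue
        simple = max(terms, key=lambda t: avg_frequency(t, freq))
        for t in terms:
            if t != simple:
                table[t] = simple
    return table

# Toy synonym groups keyed by concept id, plus toy frequency counts
concept_groups = {
    "C1": ["hypertension", "high blood pressure", "HTN"],
    "C2": ["photophobia", "intolerance to light"],
}
freq = {"high": 900, "blood": 500, "pressure": 400, "hypertension": 40,
        "htn": 5, "light": 800, "to": 1000, "intolerance": 60, "photophobia": 3}
print(build_phrase_table(concept_groups, freq))
```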
In Table 4, we have shown examples of the types of simplifications that were extracted using the methodology outlined above. Clearly these are the type of simplifications that would be helpful for patients. In some cases, it may be possible that the resulting simplified term would still be difficult to understand for an end user, for example ‘hyperchlorhydria’ is translated to ‘increased gastric acidity’, where the term ‘gastric’ may still be difficult for an end user. A human may have simplified this to ‘increased stomach acidity’, which would have been easier to understand. This phrase was not in the SNOMED-CT vocabulary and so was not available for the construction of our phrase ta385 ble. Nonetheless, the type of simplifications that are produced through this methodology appear to improve the overall level of understanding of difficult medical terms. The methodology we have outlined above is suitable for domains outside of medical terminology. The only domain-specific resource that is required is a thesaurus of terms that are likely to occur in the domain. By following the methodology we have outlined, it would be simple to create a phrase table for any domain, which could be applied to the NTS software that we have used in this work. 7 Human Evaluation In our final section of experiments, we wanted to determine the effect that our system had on the ease of understanding of sentences from the original texts. We evaluated this through the use of human judges. In order to thoroughly evaluate our system we compared the original texts from i2b2 and MIMIC to three methods of transformation as detailed below: Original Texts (ORIG): We used the original texts as they appeared after preprocessing. This ensured that they were equivalent to the transformed texts and that any effects would be from the system, not the preprocessing. NTS: We ran the sentences through the NTS system using the configuration described in Section 4. NTS + Phrase Table (NTS + PT): We ran the sentences through the NTS system. We configured OpenNMT to use the phrase table that we described in Section 6. Note that the phrase table was only used by the system when OpenNMT identified a word as being out-of-vocabulary. Phrase Table Baseline (PTB): To demonstrate that the benefit of our system comes from using the phrase table in tandem with the NTS system, we also provided a baseline which applied the phrase table to any word that it was possible to replace in the text. We collected the sentences for each of the methods as described above from both of our data sources and collated these so as we could compare the results. We analysed the data and removed any instances of errors that had resulted from the NTS system, according to our error analysis. The sentences that we selected correspond to Type 1, in our categorisation. Type 1 does not necessarily indicate a simplification, instead it implies that a transformation has been successfully completed, with the potential for simplification. Selecting against errors allows us to see the simplification potential of our system. We do not claim that NTS can produce error-free text, but instead we want to demonstrate that the error-free portion of the text is easier to understand when using our phrase table. We selected 50 4-tuples from each dataset (i2b2 and MIMIC) to give 100 4-tuples, where one 4-tuple contained parallel sentences from each of the methods described above. Sentences within a 4-tuple were identical, apart from the modifications that had been made by each system. 
No two sentences in a 4-tuple were the same. We have put an example 4-tuple in Table 5, to indicate the type of text that was contained in each. We used crowd sourcing via the Figure Eight platform to annotate our data. As we had a relatively small dataset, we chose to ask for 10 annotations for each 4-tuple. We allowed each annotator to complete a maximum of 20 annotations to ensure that we had a wide variety of perspectives on our data. No annotator saw the same 4-tuple twice. We provided a set of test annotations, which we intended to use to filter out bad-actors, although we found that all annotators passed the test adequately. We selected for annotators with a higher than average rating on the Figure Eight platform (level 2 and above). In each annotation, we asked the annotator to rank the 4 sentences according to their ease of understanding, where the top ranked sentence (rank 1) was the easiest to understand and the bottom ranked sentence (rank 4) was the hardest to understand. We explicitly instructed annotators to rank all sentences, and to use each rank exactly once. If an annotator found 2 sentences to be of the same complexity, they were instructed to default to the order in which the sentences were displayed. We posed our task as 4 separate questions with the exact wording shown in the supplementary material, where we have reproduced the instructions we provided to our annotators. In our post analysis we identified that 20 out of the 1000 annotations that we collected (100 4-tuples, with 10 annotation per 4-tuple) did not use all 4 ranks (i.e., 2 or more sentences were at the same rank). There was no clear pattern of spamming and we 386 Complex Term Simple Term ability to be ambulant ability to walk carcinoma of stomach cancer of stomach hyperchlorhydria increased gastric acidity hypertension high blood pressure lying supine lying on back osteophyte bony spur photophobia intolerance to light talipes congenital clubfoot AACTG aids clinical trial group BIPLEDS bilateral periodic epileptiform discharges BLADES bristol language development scale Table 4: Term pairs that were created for our phrase table. System Sentence ORIG Patient has been suffering from photophobia and wheezing. NTS Patient suffers from photophobia and wheezing. NTS + PT Patient suffers from sensitivity to light and wheezing. PTB Patient has been suffering from sensitivity to light and asthmatic breath sounds. Table 5: An example of the type of text produced by our system. The NTS system has performed a syntactic simplification, converting “has been suffering” to “suffers”, the NTS + PT system has simplified “photophobia” to “sensitivity to light” and the baseline system (PTB) has further simplified “wheezing” to “asthmatic breath sounds”. chose to ignore these 20 sentences in our further analysis, giving us 980 rankings. In Table 6, we have shown the raw results of our crowd sourcing annotations as well as the average rank of each system. We calculate average rank rs of a system s as rs = P4 i=1 i × f(s, i) P4 i=1 f(s, i) where i is a rank from 1 to 4 and f(s, i) is a function that maps the system and rank to the number of times that system was placed at that rank (as shown in Table 6). We can see that our system using NTS and the phrase table has the highest average rank, indicating that the text it produced was the easiest to understand more often than other systems. The NTS is ranked second highest indicating that in many cases this system still produces text which is easier to understand than the original. 
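As a quick check of the average-rank formula just given, the snippet below recomputes the averages from the per-rank counts reported in Table 6; only the function name is ours, the numbers are the paper's.

```python
# rank_counts[system][i-1] = number of times the system was placed at rank i (Table 6)
rank_counts = {
    "NTS + PT": [430, 255, 230, 65],
    "NTS":      [259, 294, 264, 163],
    "ORIG":     [120, 222, 381, 257],
    "PTB":      [171, 209, 105, 495],
}

def average_rank(counts):
    """r_s = sum_i i * f(s, i) / sum_i f(s, i), with ranks i = 1..4."""
    total = sum(counts)
    return sum(i * c for i, c in enumerate(counts, start=1)) / total

for system, counts in rank_counts.items():
    print(f"{system}: {average_rank(counts):.2f}")
# Reproduces the reported averages: 1.93, 2.34, 2.79 and 2.94.
```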
The original texts are ranked third most frequently, ahead of the baseline system which is most often ranked in last position. The baseline system overzealously applied simplifications from our phrase table and this led to long winded explanations and words being simplified that did not require it. System Rank Avg 1 2 3 4 NTS + PT 430 255 230 65 1.93 NTS 259 294 264 163 2.34 ORIG 120 222 381 257 2.79 PTB 171 209 105 495 2.94 Table 6: The results of our crowdsourcing annotations. We have ordered the annotations by their average rank and highlighted the most common rank for each system. The first column in the table shows the system. Columns 2 through 5 show the number of times each system was ranked at rank 1, 2, 3 or 4 and column 6 shows the average rank calculated according to the formula in Section 7 . 8 Discussion In our work we have applied NTS software to clinical letters and adapted the software using a bespoke phrase table mined from SNOMED-CT. We have shown the types of errors that can occur when using NTS software and we have evaluated our improved algorithm against the state of the art, showing an improvement. Our system improved over the original NTS 387 software when adapted to use our phrase table. The NTS software was developed by using parallel sentences from Wikipedia and Simple Wikipedia and training OpenNMT to learn simplifications from these. OpenNMT learns an internal set of vocabulary substitutions, however these will have been targeted towards general language, rather than medical specific language. By using our phrase table, we are able to give specific simplifications for medical terms. The system only accesses the phrase table when it detects a word which is out-of-vocabulary, i.e., a word that was not seen sufficiently often in the training texts to be incorporated into the model that was produced. This works well at modelling a lay reader, where the vocabulary understood by the system is analogous to the vocabulary understood by a typical (i.e., non-specialised) reader of English. In addition to the NTS system adapted to use our phrase table, we also tested a baseline which greedily applied the phrase table at all possible points in a sentence. However, this system was ranked as least understandable more often than any other system. The text it produced was generally much longer than the original text. The benefit of our work comes from using the phrase table together with the neural text simplification software, which is capable of applying the phrase table at the correct points in the text. This can be seen in Table 5, where the NTS system has altered the language being used, but has not simplified a medical term, the NTS + PT system has simplified the medical term (photophobia), but left a term which would be generally understood (wheezing) and the baseline system has correctly simplified the difficult medical term, but has also changed the generally understood term. Our phrase table is additional to the NTS system and could be applied to other, improved neural models for text simplification as research in this field is progressed. We have shown that our phrase table adds value to the NTS system in the clinical setting. We have demonstrated in Section 5 that the type of text produced by NTS software and by our adapted NTS software will contain errors. This is true of any translation software which relies on learning patterns from data to estimate future translations of unseen texts. 
In cross-lingual translation, a small error rate may be acceptable as the text is transformed from something that is initially incomprehensible to text in the reader’s own language which may be intelligible to some degree. With simplification, however, even a small error rate may lead to the resulting text becoming more difficult to understand by an end user, or the meaning of a text being changed. This is particularly the case in the clinical setting, where life changing information may be communicated. It is important then to consider how to use Neural Text Simplification in a clinical setting. We would propose that the clinician should always be kept in the loop when applying this type of simplification. The system could be applied within a word editor which suggests simplifications of sentences as and when they are discovered. The clinician could then choose whether or not to accept and integrate the simplified text. We have presented our methodology in the context of the clinical domain, however it would be trivial to apply this elsewhere. Our methodology is particularly suitable when 3 conditions are met: (a) There is text being produced by experts that is read by lay readers. (b) that text contains specialised terminology that will be unintelligible to the intended audience and (c) a comprehensive thesaurus of domain specific terms exists, which can be used to generate a domain appropriate phrase table. Given these conditions are met, our work could be applied in the legal, financial, educational or any other domain. We have made significant use of licensed resources (i2b2, MIMIC and SNOMED-CT). These are available for research purposes from their providers, given the user has signed a licensing agreement. We are not at liberty to share these resources ourselves and this inhibits our ability to provide direct examples of the simplifications we produced in our paper. To overcome this, we have provided the following GitHub repository, which provides all of the code we used to process the data: https://github.com/ MMU-TDMLab/ClinicalNTS. Instructions for replication are available via the GitHub. 9 Conclusion + Future Work Our work has explored the use of neural machine translation for text simplification in the clinical domain. Doctors and patients speak a different language and we hope that our work will help them communicate. We have shown that general language simplification needs to be augmented with domain specific simplifications and that doing so 388 leads to an improvement in the understandability of the resulting text. One clear avenue of future work is to apply this system in a clinical setting and to test the results with actual patients. We will look to develop software that uses NTS to identify possible simplifications for a clinician when they are writing a letter for a patient. We could also look to use parallel simplified medical text to augment the general language parallel text used in the NTS system. Additionally, we could improve the measure of lexical complexity for single and multi word expressions. Currently, we are only using frequency as an indicator of lexical complexity, however other factors such as word length, etymology, etc. may be used. Finally, we will explore adaptations of our methodology for general (non-medical) domains, e.g., simplified search interfaces (Ananiadou et al., 2013) for semantically annotated news (Thompson et al., 2017). References Emil Abrahamsson, Timothy Forni, Maria Skeppstedt, and Maria Kvist. 2014. 
Medical text simplification using synonym replacement: Adapting assessment of word difficulty to a compounding language. In Proceedings of the 3rd Workshop on Predicting and Improving Text Readability for Target Reader Populations (PITR), pages 57–65. Academy of Medical Royal Colleges. 2018. Please, write to me. Writing outpatient clinic letters to patients. Sophia Ananiadou, Paul Thompson, and Raheel Nawaz. 2013. Enhancing search: Events and their discourse context. In International Conference on Intelligent Text Processing and Computational Linguistics, pages 318–334. Springer. Meri Coleman and Ta Lin Liau. 1975. A computer readability formula designed for machine scoring. Journal of Applied Psychology, 60(2):283. Kevin Donnelly. 2006. Snomed-ct: The advanced terminology and coding system for ehealth. Studies in health technology and informatics, 121:279. Natalia Grabar, Thierry Hamon, and Dany Amiot. 2014. Automatic diagnosis of understanding of medical words. In Proceedings of the 3rd Workshop on Predicting and Improving Text Readability for Target Reader Populations (PITR), pages 11–20. Robert Gunning. 1952. The technique of clear writing. McGraw-Hill, New York. Sadid A. Hasan, Bo Liu, Joey Liu, Ashequl Qadir, Kathy Lee, Vivek Datla, Aaditya Prakash, and Oladimeji Farri. 2016. Neural clinical paraphrase generation with attention. In Proceedings of the Clinical Natural Language Processing Workshop (ClinicalNLP), pages 42–53, Osaka, Japan. The COLING 2016 Organizing Committee. James E Hoard, Richard Wojcik, and Katherina Holzhauser. 1992. An automated grammar and style checker for writers of simplified english. In Computers and Writing, pages 278–296. Springer. M. Jahangir, H. Afzal, M. Ahmed, K. Khurshid, and R. Nawaz. 2017. An expert system for diabetes prediction using auto tuned multi-layer perceptron. In 2017 Intelligent Systems Conference (IntelliSys), pages 722–728. Alistair EW Johnson, Tom J Pollard, Lu Shen, H Lehman Li-wei, Mengling Feng, Mohammad Ghassemi, Benjamin Moody, Peter Szolovits, Leo Anthony Celi, and Roger G Mark. 2016. Mimic-iii, a freely accessible critical care database. Scientific data, 3:160035. J Peter Kincaid, Robert P Fishburne Jr, Richard L Rogers, and Brad S Chissom. 1975. Derivation of new readability formulas (automated readability index, fog count and flesch reading ease formula) for navy enlisted personnel. Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander Rush. 2017. Opennmt: Open-source toolkit for neural machine translation. In Proceedings of ACL 2017, System Demonstrations, pages 67–72. Nicholas Kloehn, Gondy Leroy, David Kauchak, Yang Gu, Sonia Colina, Nicole P Yuan, and Debra Revere. 2018. Improving consumer understanding of medical text: Development and validation of a new subsimplify algorithm to automatically generate term explanations in english and spanish. Journal of medical Internet research, 20(8). Raheel Nawaz, Paul Thompson, and Sophia Ananiadou. 2012. Identification of manner in bio-events. In LREC, pages 3505–3510. Sergiu Nisioi, Sanja ˇStajner, Simone Paolo Ponzetto, and Liviu P Dinu. 2017. Exploring neural text simplification models. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 85–91. Gustavo Paetzold and Lucia Specia. 2016. Semeval 2016 task 11: Complex word identification. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 560–569. Matthew Shardlow. 2014. 
Out in the open: Finding and categorising errors in the lexical simplification pipeline. In LREC, pages 1583–1590. 389 Matthew Shardlow, Riza Batista-Navarro, Paul Thompson, Raheel Nawaz, John McNaught, and Sophia Ananiadou. 2018. Identification of research hypotheses and new knowledge from scientific literature. BMC Medical Informatics and Decision Making, 18(1):46. Elior Sulem, Omri Abend, and Ari Rappoport. 2018. Simple and effective text simplification using semantic and neural methods. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 162–173. Paul Thompson, Raheel Nawaz, John McNaught, and Sophia Ananiadou. 2017. Enriching news events with meta-knowledge information. Language Resources and Evaluation, 51(2):409–438. ¨Ozlem Uzuner, Yuan Luo, and Peter Szolovits. 2007. Evaluating the state-of-the-art in automatic deidentification. Journal of the American Medical Informatics Association, 14(5):550–563. Elena Voita, Rico Sennrich, and Ivan Titov. 2019. When a good translation is wrong in context: Context-aware machinetranslation improves on deixis, ellipsis, and lexical cohesion. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Florence, Italy. Association for Computational Linguistics. Tong Wang, Ping Chen, John Rochford, and Jipeng Qiang. 2016. Text simplification using neural machine translation. In Thirtieth AAAI Conference on Artificial Intelligence. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144. Sander Wubben, Antal Van Den Bosch, and Emiel Krahmer. 2012. Sentence simplification by monolingual machine translation. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers-Volume 1, pages 1015–1024. Association for Computational Linguistics. Wei Xu, Courtney Napoles, Ellie Pavlick, Quanze Chen, and Chris Callison-Burch. 2016. Optimizing statistical machine translation for text simplification. Transactions of the Association for Computational Linguistics, 4:401–415. Mark Yatskar, Bo Pang, Cristian Danescu-NiculescuMizil, and Lillian Lee. 2010. For the sake of simplicity: Unsupervised extraction of lexical simplifications from wikipedia. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, HLT ’10, pages 365– 368, Stroudsburg, PA, USA. Association for Computational Linguistics. R. Yunus, O. Arif, H. Afzal, M. F. Amjad, H. Abbas, H. N. Bokhari, S. T. Haider, N. Zafar, and R. Nawaz. 2019. A framework to estimate the nutritional value of food in real time using deep learning techniques. IEEE Access, 7:2643–2652.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3805–3815 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3805 Learning a Matching Model with Co-teaching for Multi-turn Response Selection in Retrieval-based Dialogue Systems Jiazhan Feng1∗, Chongyang Tao1∗, Wei Wu2, Yansong Feng1, Dongyan Zhao1,3 and Rui Yan1,3† 1Institute of Computer Science and Technology, Peking University, Beijing, China 2Microsoft Corporation, Beijing, China 3Center for Data Science, Peking University, Beijing, China [email protected] [email protected] 1,3{chongyangtao,fengyansong,zhaody,ruiyan}@pku.edu.cn Abstract We study learning of a matching model for response selection in retrieval-based dialogue systems. The problem is equally important with designing the architecture of a model, but is less explored in existing literature. To learn a robust matching model from noisy training data, we propose a general co-teaching framework with three specific teaching strategies that cover both teaching with loss functions and teaching with data curriculum. Under the framework, we simultaneously learn two matching models with independent training sets. In each iteration, one model transfers the knowledge learned from its training set to the other model, and at the same time receives the guide from the other model on how to overcome noise in training. Through being both a teacher and a student, the two models learn from each other and get improved together. Evaluation results on two public data sets indicate that the proposed learning approach can generally and significantly improve the performance of existing matching models. 1 Introduction Human-machine conversation is a long-standing goal of artificial intelligence. Recently, building a dialogue system for open domain human-machine conversation is attracting more and more attention due to both availability of large-scale human conversation data and powerful models learned with neural networks. Existing methods are either retrieval-based or generation-based. Retrievalbased methods reply to a human input by selecting a proper response from a pre-built index (Ji et al., 2014; Zhou et al., 2018b; Yan and Zhao, 2018), while generation-based methods synthesize a response with a natural language model (Shang et al., 2015; Serban et al., 2017). In this ∗Equal Contribution. †Corresponding author: Rui Yan ([email protected]). work, we study the problem of response selection for retrieval-based dialogue systems, since retrieval-based systems are often superior to their generation-based counterparts on response fluency and diversity, are easy to evaluate, and have powered some real products such as the social bot XiaoIce from Microsoft (Shum et al., 2018), and the E-commerce assistant AliMe Assist from Alibaba Group (Li et al., 2017). A key problem in response selection is how to measure the matching degree between a conversation context (a message with several turns of conversation history) and a response candidate. Existing studies have paid tremendous effort to build a matching model with neural architectures (Lowe et al., 2015; Zhou et al., 2016; Wu et al., 2017; Zhou et al., 2018b), and advanced models such as the deep attention matching network (DAM) (Zhou et al., 2018b) have achieved impressive performance on benchmarks. In contrary to the progress on model architectures, there is little exploration on learning approaches of the models. 
On the one hand, neural matching models are becoming more and more complicated; on the other hand, all models are simply learned by distinguishing human responses from some automatically constructed negative response candidates (e.g., by random sampling). Although this heuristic approach can avoid expensive and exhausting human labeling, it suffers from noise in training data, as many negative examples are actually false negatives1. As a result, when evaluating a well-trained model using human judgment, one can often observe a significant gap between training and test, as will be seen in our experiments. In this paper, instead of configuring new architectures, we investigate how to effectively learn existing matching models from noisy training 1Responses sampled from other contexts may also be proper candidates for a given context. 3806 data, given that human labeling is infeasible in practice. We propose learning a matching model under a general co-teaching framework. The framework maintains two peer models on two i.i.d. training sets, and lets the two models teach each other during learning. One model transfers knowledge learned from its training set to its peer model to help it combat with noise in training, and at the same time gets updated under the guide of its peer model. Through playing both a role of a teacher and a role of a student, the two peer models evolve together. Under the framework, we consider three teaching strategies including teaching with dynamic margins, teaching with dynamic instance weighting, and teaching with dynamic data curriculum. The first two strategies let the two peer models mutually “label” their training examples, and transfer the soft labels from one model to the other through loss functions; while in the last strategy, the two peer models directly select training examples for each other. To examine if the proposed learning approach can generally bridge the gap between training and test, we select sequential matching network (SMN) (Wu et al., 2017) and DAM as representative matching models, and conduct experiments on two public data sets with human judged test examples. The first data set is the Douban Conversation benchmark published in Wu et al. (2017), and the second one is the E-commerce Dialogue Corpus published in Zhang et al. (2018b) where we recruit human annotators to judge the appropriateness of response candidates regarding to their contexts on the entire test set2. Evaluation results indicate that co-teaching with the three strategies can consistently improve the performance of both matching models over all metrics on both data sets with significant margins. On the Douban data, the most effective strategy is teaching with dynamic margins that brings 2.8% absolute improvement to SMN and 2.5% absolute improvement to DAM on P@1; while on the E-commerce data, the best strategy is teaching with dynamic data curriculum that brings 2.4% absolute improvement to SMN and 3.2% absolute improvement to DAM on P@1. Through further analysis, we also unveil how the peer models get evolved together in learning and how the choice of peer models affects the performance of 2We have released labeled test data of E-commerce Dialogue Corpus at https://drive.google. com/open?id=1HMDHRU8kbbWTsPVr6lKU_ -Z2Jt-n-dys. learning. 
Our contributions in the paper are four-folds: (1) proposal of learning matching models for response selection with a general co-teaching framework; (2) proposal of two new teaching strategies as special cases of the framework; and (3) empirical verification of the effectiveness of the proposed learning approach on two public data sets. 2 Problem Formalization Given a data set D = {(yi, ci, ri)}N i=1 where ci represents a conversation context, ri is a response candidate, and yi ∈{0, 1} denotes a label with yi = 1 indicating ri a proper response for ci and otherwise yi = 0, the goal of the task of response selection is to learn a matching model s(·, ·) from D. For any context-response pair (c, r), s(c, r) gives a score that reflects the matching degree between c and r, and thus allows one to rank a set of response candidates according to the scores for response selection. To obtain a matching model s(·, ·), one needs to deal with two problems: (1) how to define s(·, ·); and (2) how to learn s(·, ·). Existing studies concentrate on Problem (1) by defining s(·, ·) with sophisticated neural architectures (Wu et al., 2017; Zhou et al., 2018b), and leave Problem (2) in a simple default setting where s(·, ·) is optimized with D using a loss function L usually defined by cross entropy. Ideally, when D is large enough and has good enough quality, a carefully designed s(·, ·) learned using the existing paradigm should be able to well capture the semantics in dialogues. The fact is that since large-scale human labeling is infeasible, D is established under simple heuristics where negative response candidates are automatically constructed (e.g., by random sampling) with a lot of noise. As a result, advanced matching models only have sub-optimal performance in practice. The gap between ideal and reality motivates us to pursue a better learning approach, as will be presented in the next section. 3 Learning a Matching Model through Co-teaching In this section, we present co-teaching, a new framework for learning a matching model. We first give a general description of the framework, and then elaborate three teaching strategies as special cases of the framework. 3807                ("# $,&$)    ("#(,&()       ("#$,&$)       ") $    ") (             ("# (,&()     ") Figure 1: Co-teaching framework. 3.1 Co-teaching Framework The idea of co-teaching is to maintain two peer models and let them learn from each other by simultaneously acting as a teacher and a student. Figure 1 gives an overview of the co-teaching framework. The learning program starts from two pre-trained peer models A and B. In each iteration, a batch of training data is equally divided into two sub-batches without overlap as ¯DA and ¯DB for B and A respectively. A and B then examine their sub-batches and output learning protocols ( ˜DB, JB) and ( ˜DA, JA) for their peers, where ˜DB and ˜DA are training data and JB and JA are loss functions. After that, A and B get updated according to ( ˜DA, JA) and ( ˜DB, JB) respectively, and the learning program moves to the next iteration. Algorithm 1 describes the pseudo code of co-teaching. 
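For concreteness, the per-iteration exchange can be sketched in a few lines of Python; `protocol` and `update` are hypothetical placeholders standing for a teaching strategy (Section 3.2) and a gradient step, and the sketch abstracts away the details of our actual implementation.

```python
import random

def co_teach(model_a, model_b, dataset, protocol, update, n_epochs, batch_size, lr):
    """Sketch of the co-teaching loop (cf. Algorithm 1).

    protocol(teacher, sub_batch) -> (selected_data, loss_fn): the learning protocol
        one peer builds for the other (e.g., dynamic margins, instance weights,
        or a small-loss subset of the sub-batch).
    update(model, loss_fn, data, lr): one gradient step on `model`.
    """
    for _ in range(n_epochs):
        random.shuffle(dataset)                            # shuffle the training set D
        for start in range(0, len(dataset), batch_size):
            batch = dataset[start:start + batch_size]      # fetch a batch
            half = len(batch) // 2
            sub_a, sub_b = batch[:half], batch[half:]      # two disjoint sub-batches
            data_b, loss_b = protocol(model_a, sub_b)      # A builds the protocol for B
            data_a, loss_a = protocol(model_b, sub_a)      # B builds the protocol for A
            update(model_a, loss_a, data_a, lr)            # A learns under B's guidance
            update(model_b, loss_b, data_b, lr)            # B learns under A's guidance
    return model_a, model_b
```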
The rationale behind the co-teaching framework is that the peer models can gradually obtain different abilities from the different training data as the learning process goes on, even when the two models share the same architecture and the same initial configuration, and thus, they can acquire different knowledge from their training data and transfer the knowledge to their peers to make them robust over the noise in the data. This resembles two peer students who learn from different but related materials. Through knowledge exchange, one can inspire the other to get new insights from his or her material, and thus the two students get improved together. Advantages of the framework reside in various aspects: first, the peer models have their own “judgment” regarding to the quality of the same training example. Thus, one model may guide the other how to pick high quality training examples and circumvent noise; second, since the peer models are optimized with different training sub-batches, knowledge from one sub-batch could be supplementary to the other through exchange of learning protocols; third, the two peer models may have different decision boundaries, and thus are good at recognizing different patterns in data. This may allow one model to help the other rectify errors in learning. To instantiate the co-teaching framework, one needs to specify initialization of the peer models and teaching strategies that can form the learning protocols. In this work, to simplify the learning program of co-teaching, we assume that model A and model B are initialized by the same matching model pre-trained with the entire training data. We focus on design of teaching strategies, as will be elaborated in the next section. 3.2 Teaching Strategies We consider the following three strategies that cover teaching with dynamic loss functions and teaching with data curriculum. Teaching with Dynamic Margins: The strategy fixes ¯DA and ¯DB as ˜DA and ˜DB respectively, and dynamically creates loss functions as the learning protocols. Without loss of generality, the training data D can be re-organized in a form of {(ci, r+ i , r− i )}N′ i=1, where r+ i and r− i refer to a positive response candidate and a negative response candidate regarding to ci respectively. Suppose that ¯DA = {(cA,i, r+ A,i, r− A,i)}NA i=1 and ¯DB = {(cB,i, r+ B,i, r− B,i)}NB i=1, then model A evaluates each (cB,i, r+ B,i, r− B,i) ∈¯DB with matching scores sA(cB,i, r+ B,i) and sA(cB,i, r− B,i), and form a margin for model B as ∆B,i = max  0, λsA(cB,i, r+ B,i) −sA(cB,i, r− B,i) , (1) where λ is a hyper-parameter. Similarly, ∀(cA,i, r+ A,i, r− A,i) ∈¯DA, the margin provided by model B for model A can be formulated as ∆A,i = max  0, λsB(cA,i, r+ A,i) −sB(cA,i, r− A,i) , (2) where sB(cA,i, r+ A,i) and sB(cA,i, r− A,i) are matching scores calculated with model B. Loss functions 3808 Algorithm 1: The proposed co-teaching framework Input: model parameters θA, θB, learning rate η, number of epochs nT , number of iterations nK; 1 for T = 1, 2, ..., TnT do 2 Shuffle training set D; 3 for K = 1, 2, ..., KnK do 4 Fetch a batch of training data ¯D; 5 Distributes ¯D equally to two sub-batches of training data ¯DA, ¯DB; ▷¯DA, ¯DB ⊂¯D 6 Obtain learning protocol ( ˜DB, JB) from model A and ¯DB; 7 Obtain learning protocol ( ˜DA, JA) from model B and ¯DA; 8 Update θA = θA −η∇JA( ˜DA); ▷Update model A by ( ˜DA, JA). 9 Update θB = θB −η∇JB( ˜DB); ▷Update model B by ( ˜DB, JB). 10 end 11 end Output: θA, θB. 
$J_A$ and $J_B$ are then defined as
$$J_A = \sum_{i=1}^{N_A} \max\{0,\ \Delta_{A,i} - s_A(c_{A,i}, r^{+}_{A,i}) + s_A(c_{A,i}, r^{-}_{A,i})\}, \quad (3)$$
$$J_B = \sum_{i=1}^{N_B} \max\{0,\ \Delta_{B,i} - s_B(c_{B,i}, r^{+}_{B,i}) + s_B(c_{B,i}, r^{-}_{B,i})\}. \quad (4)$$
Intuitively, one model may assign a small margin to a negative example if it identifies the example as a false negative. Its peer model will then pay less attention to such an example during optimization. This is how the two peer models help each other combat noise under the strategy of teaching with dynamic margins.
Teaching with Dynamic Instance Weighting: Similar to the first strategy, this strategy also defines the learning protocols with dynamic loss functions. The difference is that this strategy penalizes low-quality negative training examples with weights. Formally, let us represent $\bar{D}_B$ as $\{(y_{B,i}, c_{B,i}, r_{B,i})\}_{i=1}^{N'_B}$; then, for every $(y_{B,i}, c_{B,i}, r_{B,i}) \in \bar{D}_B$, its weight from model A is defined as
$$w_{B,i} = \begin{cases} 1 & y_{B,i} = 1 \\ 1 - s_A(c_{B,i}, r_{B,i}) & y_{B,i} = 0 \end{cases} \quad (5)$$
Similarly, for every $(y_{A,i}, c_{A,i}, r_{A,i}) \in \bar{D}_A$, model B assigns a weight
$$w_{A,i} = \begin{cases} 1 & y_{A,i} = 1 \\ 1 - s_B(c_{A,i}, r_{A,i}) & y_{A,i} = 0 \end{cases} \quad (6)$$
The loss functions $J_A$ and $J_B$ can then be formulated as
$$J_A = \sum_{i=1}^{N'_A} w_{A,i}\, L(y_{A,i}, s_A(c_{A,i}, r_{A,i})), \quad (7)$$
$$J_B = \sum_{i=1}^{N'_B} w_{B,i}\, L(y_{B,i}, s_B(c_{B,i}, r_{B,i})), \quad (8)$$
where $L(\cdot, \cdot)$ is the cross-entropy loss
$$L(y, s(c, r)) = -y \log(s(c, r)) - (1 - y) \log(1 - s(c, r)). \quad (9)$$
In this strategy, negative examples that are identified as false negatives by one model obtain small weights from that model, and thus become less important than other examples in the learning process of the other model.
Teaching with Dynamic Data Curriculum: In the first two strategies, knowledge is transferred mutually through "soft labels" defined by the peer matching models. In this strategy, we directly transfer data to each model. During learning, $J_A$ and $J_B$ are fixed as cross entropy, and the learning protocols vary by $\tilde{D}_A$ and $\tilde{D}_B$. Inspired by Han et al. (2018), we construct $\tilde{D}_A$ and $\tilde{D}_B$ with small-loss instances. These instances are far from the decision boundaries of the two models, and thus are more likely to be true positives and true negatives. Formally, $\tilde{D}_A$ and $\tilde{D}_B$ are defined as
$$\tilde{D}_B = \operatorname*{arg\,min}_{|\tilde{D}_B| = \delta|\bar{D}_B|,\ \tilde{D}_B \subset \bar{D}_B} J_A(\tilde{D}_B), \qquad \tilde{D}_A = \operatorname*{arg\,min}_{|\tilde{D}_A| = \delta|\bar{D}_A|,\ \tilde{D}_A \subset \bar{D}_A} J_B(\tilde{D}_A), \quad (10)$$
(2017), we employ R10@1, R10@2, R10@5, mean average precision (MAP), mean reciprocal rank (MRR), and precision at position 1 (P@1) as evaluation metrics. In addition to the Douban data, we also choose E-commerce Dialogue Corpus (ECD) (Zhang et al., 2018b) as an experimental data set. The data consists of real-world conversations between customers and customer service staff in Taobao4, which is the largest e-commerce platform in China. There are 1 million context-response pairs in the training set, and 10 thousand pairs in both the validation set and the test set. Each context in the training set and the validation set corresponds to one positive response candidate and one negative response candidate, while in the test set, the number of response candidates per context is 10 with only one of them positive. In the released data, human responses are treated as positive responses, and negative ones are automatically collected by ranking the response corpus based on 3https://www.douban.com/group 4https://www.taobao.com conversation history augmented messages using Apache Lucene5. Thus, we recruit 3 active users of Taobao as human annotators, and ask them to judge each context-response pair in the test data (i.e., in total 10 thousand pairs are judged). If a response can naturally reply to a message given the conversation history before it, then the contextresponse pair is labeled as 1, otherwise, it is labeled as 0. Each pair receives three labels and the majority is taken as the final decision. On average, each context has 2.5 response candidates labeled as positive. There are only 33 contexts with all responses labeled as positive or negative, and we remove them from test. Fleiss’ kappa (Fleiss, 1971) of the labeling is 0.64, indicating substantial agreement among the annotators. We employ the same metrics as in Douban for evaluation. Note that we do not choose the Ubuntu Dialogue Corpus (Lowe et al., 2015) for experiments, because (1) the test set of the Ubuntu data is constructed by randomly sampling; and (2) conversations in the Ubuntu data are in a casual style and too technical, and thus it is very difficult for us to find qualified human annotators to label the data. 4.2 Matching Models We select the following two models that achieve superior performance on benchmarks to test our learning approach. SMN: (Wu et al., 2017) first lets each utterance in a context interact with a response, and forms a matching vector for the pair through CNNs. Matching vectors of all the pairs are then aggregated with an RNN as a matching score. DAM: (Zhou et al., 2018b) performs matching under a representation-matching-aggregation framework, and represents a context and a response with stacked self-attention and crossattention. Both models are implemented with TensorFlow according to the details in Wu et al. (2017) and Zhou et al. (2018b). To implement co-teaching, we pre-train the two models using the training sets of Douban and ECD, and tune the models with the validation sets of the two data. Each pre-trained model is used to initialize both model A and model B. After co-teaching, the one in A and B that performs better on the validation sets is picked for comparison. 
We denote models learned with the teaching strategies in Section 3.2 5http://lucene.apache.org/ 3810 Douban ECD MAP MRR P@1 R10@1 R10@2 R10@5 MAP MRR P@1 R10@1 R10@2 R10@5 SMN (Wu et al., 2017) 0.529 0.569 0.397 0.233 0.396 0.724 SMN-Pre-training 0.527 0.570 0.396 0.236 0.392 0.734 0.662 0.742 0.598 0.302 0.464 0.757 SMN-Margin 0.559∗ 0.601∗ 0.424∗ 0.260∗ 0.426∗ 0.764∗ 0.674 0.750 0.615 0.318 0.481 0.765 SMN-Weighting 0.550∗ 0.593∗ 0.414 0.253 0.413 0.762∗ 0.666 0.745 0.601 0.311 0.475 0.775 SMN-Curriculum 0.548 0.594∗ 0.418∗ 0.254∗ 0.411 0.763∗ 0.678 0.762∗ 0.622∗ 0.323∗ 0.487∗ 0.778∗ DAM (Zhou et al., 2018b) 0.550 0.601 0.427 0.254 0.410 0.757 DAM-Pre-training 0.552 0.605 0.426 0.258 0.408 0.766 0.685 0.756 0.621 0.325 0.491 0.772 DAM-Margin 0.583∗ 0.628∗ 0.451∗ 0.276∗ 0.454∗ 0.806∗ 0.692 0.777∗ 0.652∗ 0.337 0.506 0.778 DAM-Weighting 0.579∗ 0.629∗ 0.453∗ 0.272 0.454∗ 0.809∗ 0.695 0.775 0.651∗ 0.343 0.497 0.789 DAM-Curriculum 0.580∗ 0.623∗ 0.442 0.269 0.459∗ 0.804∗ 0.696 0.777∗ 0.653∗ 0.345∗ 0.506 0.781 Table 1: Evaluation results on the two data sets. Numbers marked with ∗mean that the improvement is statistically significant compared with the best baseline (t-test with p-value < 0.05). Numbers in bold indicate the best strategies for the corresponding models on specific metrics. as Model-Margin, Model-Weighting, and ModelCurriculum respectively, where “Model” refers to either SMN or DAM. These models are compared with the pre-trained model denoted as Model-Pretraining, and those reported in Wu et al. (2017); Zhou et al. (2018b); Zhang et al. (2018b). 4.3 Implementation Details We limit the maximum number of utterances in each context as 10 and the maximum number of words in each utterance and response as 50 for computational efficiency. Truncation or zeropadding are applied when necessary. Word embedding is pre-trained with Word2Vec (Mikolov et al., 2013) on the training sets of Douban and ECD, and the dimension of word vectors is 200. The co-teaching framework is implemented with TensorFlow. In co-teaching, learning rates (i.e., η in Algorithm 1) in dynamic margins, dynamic instance weighting, and dynamic data curriculum are set as 0.001, 0.0001, and 0.0001 respectively. We choose 200 in co-teaching with SMN and 50 in co-teaching with DAM as the size of mini-batches. Optimization is conducted using stochastic gradient descent with Adam algorithm (Kingma and Ba, 2015). In teaching with dynamic margins, we vary λ in {1, 1 2, 1 3, 1 5, 1 10, 1 15, 1 20}, and choose 1 10 for SMN on Douban, 1 2 for SMN on ECD, 1 3 for DAM on Douban, and 1 2 for DAM on ECD. In teaching with dynamic data curriculum, we select δ in {0.1, 0.2, ..., 0.9, 1.0}, and find that 0.9 is the best choice for both models on both data sets. 4.4 Evaluation Results Table 1 reports evaluation results of co-teaching with the three teaching strategies on the two data sets. We can see that all teaching strategies can improve the original models on both data sets, and improvement from the best strategy is statistically significant (t-test with p-value < 0.05) on most metrics. On Douban, the best strategy for SMN is teaching with dynamic margins, and it is comparable with teaching with dynamic instance weighting for DAM, while on ECD, for both SMN and DAM, the best strategy is teaching with dynamic data curriculum. The difference may stem from the nature of training sets of the two data. 
The training set of Douban is built from random sampling, while the training set of ECD is constructed through response retrieval that may contain more false negatives. Thus, in training, Douban could be cleaner than ECD, making “hard data filtering” more effective than “soft labeling” on ECD. It is worth noting that on ECD, there are significant gaps between the results of SMN (pre-trained) reported in Table 1 and those reported in Zhang et al. (2018b), since SMN in this paper is evaluated on the human-judged test set while SMN in Zhang et al. (2018b) is evaluated on the automatically constructed test set that is homogeneous with the training set. This somehow indicates the gap between training and test in real applications for the existing research on response selection, and thus demonstrates the merits of this work. 4.5 Discussions In addition to efficacy of co-teaching as a learning approach, we are also curious about Q1: if model A and model B can “co-evolve” when they are initialized with one network; Q2: if co-teaching is still effective when model A and model B are initialized with different networks; and Q3: if the teaching strategies are sensitive to the hyperparameters (i.e., λ in Equations (1)-(2) and δ in Equation (10)). 3811 0 5 10 15 20 25 30 35 40 45 50 55 Iterations (103) 0.52 0.54 0.56 0.58 0.60 0.62 0.64 0.66 P@1 Pre-training Co-teaching-Model A Co-teaching-Model B (a) Dynamic data curriculum 0 5 10 15 20 25 30 35 40 45 50 Iterations (103) 0.52 0.54 0.56 0.58 0.60 0.62 0.64 0.66 P@1 Pre-training Co-teaching-Model A Co-teaching-Model B (b) Dynamic instance weighting 0 5 10 15 20 25 30 35 40 45 Iterations (103) 0.52 0.54 0.56 0.58 0.60 0.62 0.64 0.66 P@1 Pre-training Co-teaching-Model A Co-teaching-Model B (c) Dynamic margins Figure 2: Test P@1 of DAM with the three teaching strategies on ECD. All curves are smoothed by exponential moving average6 for beauty. Douban (Margin) ECD (Curriculum) MAP MRR P@1 R10@1 R10@2 R10@5 MAP MRR P@1 R10@1 R10@2 R10@5 SMN-Pre-training 0.527 0.570 0.396 0.236 0.392 0.734 0.662 0.742 0.598 0.302 0.464 0.757 SMN-Co-teaching 0.558 0.602 0.420 0.255 0.431 0.787 0.674 0.765 0.626 0.322 0.485 0.779 DAM-Pre-training 0.552 0.605 0.426 0.258 0.408 0.766 0.685 0.756 0.621 0.325 0.491 0.772 DAM-Co-teaching 0.570 0.617 0.438 0.270 0.455 0.781 0.696 0.775 0.652 0.341 0.499 0.784 Table 2: Evaluation results of co-teaching initialized with different networks. Answer to Q1: Figure 2 shows P@1 of DAM vs. number of iterations on the test set of ECD under the three teaching strategies. Co-teaching with any of the three strategies can improve both the performance of model A and the performance of model B after pre-training, and the peer models move with almost the same pace. The results verified our claim that “by learning from each other, the peer models can get improved together”. Curves of dynamic margins oscillate more fiercely than others, indicating that optimization with dynamic margins is more difficult than optimization with the other two strategies. Answer to Q2: as a case study of co-teaching with two networks in different capabilities, we initialize model A and model B with DAM and SMN respectively, and select teaching with dynamic margins for Douban and teaching with dynamic data curriculum for ECD (i.e., the best strategies for the two data sets when co-teaching is initialized with one network). Table 2 shows comparison between models before/after co-teaching. 
We find that co-teaching is still effective when starting from two networks, as both SMN and DAM get improved on the two data sets. Despite the improvement, it is still better to learn the two networks one by one, as co-teaching with two networks cannot bring more improvement than coteaching with one network, and the performance of the stronger one between the two networks could also drop (e.g., DAM on Douban). We guess this is because the stronger model cannot be well taught by the weaker model, especially in teaching via soft labels, and as a result, it is not able to transfer more knowledge to the weaker one as well. Answer to Q3: finally, we check the effect of hyper-parameters to co-teaching. Figure 3(a) illustrates how the performance of DAM varies under different λs in teaching with dynamic margins on Douban. We can see that both small λs and large λs will cause performance drop. This is because small λs will reduce the effect of margins, making clean examples and noisy examples indifferent in learning, while with large λs, some errors from the “soft labels” might be magnified, and thus hurt the performance of the learning approach. Figure 3(b) shows the performance of DAM under different δs in teaching with dynamic data curriculum on ECD. Similarly, DAM gets worse when δ becomes small or large, since a smaller δ means fewer data will be involved in training, while a larger δ brings more risks to introducing noise into training. Thus, we conclude that the teaching strategies are sensitive to the choice of hyperparameters. 6https://en.wikipedia.org/wiki/Moving_ average#Exponential_moving_average 3812 1 1/2 1/3 1/5 1/10 1/15 1/20 λ 0.35 0.40 0.45 P@1 (a) Dynamic margins on Douban 0.1 0.3 0.5 0.7 0.9 0.95 1.0 δ 0.60 0.62 0.64 0.66 P@1 (b) Data curriculum on ECD Figure 3: Effects of λ and δ to co-teaching. Experiments are conducted with DAM on the two data sets. 5 Related Work So far, methods used to build an open domain dialogue system can be divided into two categories. The first category utilize an encoderdecoder framework to learn response generation models. Since the basic sequence-to-sequence models (Vinyals and Le, 2015; Shang et al., 2015; Tao et al., 2018) tend to generate generic responses, extensions have been made to incorporate external knowledge into generation (Mou et al., 2016; Xing et al., 2017), and to generate responses with specific personas or emotions (Li et al., 2016; Zhang et al., 2018a; Zhou et al., 2018a). The second category design a discriminative model to measure the matching degree between a human input and a response candidate for response selection. At the beginning, research along this line assumes that the human input is a single message (Lu and Li, 2013; Wang et al., 2013; Hu et al., 2014; Wang et al., 2015). Recently, researchers begin to make use of conversation history in matching. Representative methods include the dual LSTM model (Lowe et al., 2015), the deep learning to respond architecture (Yan et al., 2016), the multi-view matching model (Zhou et al., 2016), the sequential matching network (Wu et al., 2017, 2018c), the deep attention matching network (Zhou et al., 2018b), and the multi-representation fusion network (Tao et al., 2019). Our work belongs to the second group. Rather than crafting a new model, we are interested in how to learn the existing models with a better approach. Probably the most related work is the weakly supervised learning approach proposed in Wu et al. (2018b). 
However, there is stark difference between our approach and the weak supervision approach: (1) weak supervision employs a static generative model to teach a discriminative model, while co-teaching dynamically lets two discriminative models teach each other and evolve together; (2) weak supervision needs pretraining a generative model with extra resources and pre-building an index for training data construction, while co-teaching does not have such request; and (3) in terms of multi-turn response selection, weak supervision is only tested on the Douban data with SMN and the multi-view matching model, while co-teaching is proven effective on both the Douban data and the E-commerce data with SMN and DAM which achieves state-of-theart performance on benchmarks. Moreover, improvement to SMN on the Douban data from coteaching is bigger than that from weak supervision, when the ratio of the positive and the negative is 1:1 in training7. Our work, in a broad sense, belongs to the effort on learning with noisy data. Previous studies including curriculum learning (CL) (Bengio et al., 2009) and self-paced learning (SPL) (Jiang et al., 2014, 2015) tackle the problem with heuristics, such as ordering data from easy instances to hard ones (Spitkovsky et al., 2010; Tsvetkov et al., 2016) and retaining training instances whose losses are smaller than a threshold (Jiang et al., 2015). Recently, Fan et al. (2018) propose a deep reinforcement learning framework in which a simple deep neural network is used to adaptively select and filter important data instances from the training data. Jiang et al. (2017) propose a MentorNet which learns a data-driven curriculum with a Student-Net to mitigate overfitting on corrupted labels. In parallel to curriculum learning, several studies explore sample weighting schemes where training samples are re-weighted according to their label-quality (Wang et al., 2017; Dehghani et al., 2018; Wu et al., 2018b). Instead of considering data quality, Wu et al. (2018a) employ a parametric model to dynamically create appropriate loss functions. The learning approach in this work is mainly inspired by the work of Han et al. (2018) for handling extremely noisy labels. However, with substantial extensions, our work is far beyond that work. First, we generalize the concept of “coteaching” to a framework, and now the method in Han et al. (2018) becomes a special case of the framework. Second, Han et al. (2018) only exploits data curriculum, while in addition to data 7Our results are 0.559 (MAP), 0.601 (MRR), and 0.424 (P@1), while results reported in (Wu et al., 2018b) are 0.542 (MAP), 0.588 (MRR), and 0.408 (P@1). 3813 curriculum, we also propose two new strategies for teaching with dynamic loss functions as special cases of the framework. Third, unlike Han et al. (2018) who only use one network to initialize the peer models in co-teaching, we studied coteaching with both one network and two different networks. Finally, Han et al. (2018) verified that the special co-teaching method is effective in some computer vision tasks, while we demonstrate that the co-teaching framework is generally useful for building retrieval-based dialogue systems. 6 Conclusions We propose learning a matching model for response selection under a general co-teaching framework with three specific teaching strategies. The learning approach lets two matching models teach each other and evolve together. 
Empirical studies on two public data sets show that the proposed approach can generally improve the performance of existing matching models. Acknowledgement We would like to thank the anonymous reviewers for their constructive comments. This work was supported by the National Key Research and Development Program of China (No. 2017YFC0804001), the National Science Foundation of China (NSFC Nos. 61672058 and 61876196). References Yoshua Bengio, J´erˆome Louradour, Ronan Collobert, and Jason Weston. 2009. Curriculum learning. In Proceedings of the 26th annual international conference on machine learning, pages 41–48. ACM. Mostafa Dehghani, Arash Mehrjou, Stephan Gouws, Jaap Kamps, and Bernhard Sch¨olkopf. 2018. Fidelity-weighted learning. In International Conference on Learning Representations. Yang Fan, Fei Tian, Tao Qin, Xiang-Yang Li, and TieYan Liu. 2018. Learning to teach. In International Conference on Learning Representations. Joseph L Fleiss. 1971. Measuring nominal scale agreement among many raters. Psychological bulletin, 76(5):378. Bo Han, Quanming Yao, Xingrui Yu, Gang Niu, Miao Xu, Weihua Hu, Ivor W. Tsang, and Masashi Sugiyama. 2018. Co-sampling: Training robust networks for extremely noisy supervision. CoRR, abs/1804.06872. Baotian Hu, Zhengdong Lu, Hang Li, and Qingcai Chen. 2014. Convolutional neural network architectures for matching natural language sentences. In Advances in Neural Information Processing Systems, pages 2042–2050. Zongcheng Ji, Zhengdong Lu, and Hang Li. 2014. An information retrieval approach to short text conversation. arXiv preprint arXiv:1408.6988. Lu Jiang, Deyu Meng, Shoou-I Yu, Zhenzhong Lan, Shiguang Shan, and Alexander Hauptmann. 2014. Self-paced learning with diversity. In Advances in Neural Information Processing Systems, pages 2078–2086. Lu Jiang, Deyu Meng, Qian Zhao, Shiguang Shan, and Alexander G Hauptmann. 2015. Self-paced curriculum learning. In Twenty-Ninth AAAI Conference on Artificial Intelligence. Lu Jiang, Zhengyuan Zhou, Thomas Leung, Li-Jia Li, and Li Fei-Fei. 2017. Mentornet: Learning datadriven curriculum for very deep neural networks on corrupted labels. In Proceedings of the 35-th International Conference on Machine Learning,, pages 2304–2313. Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In International Conference on Learning Representations. Feng-Lin Li, Minghui Qiu, Haiqing Chen, Xiongwei Wang, Xing Gao, Jun Huang, Juwei Ren, Zhongzhou Zhao, Weipeng Zhao, Lei Wang, et al. 2017. Alime assist: An intelligent assistant for creating an innovative e-commerce experience. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, pages 2495– 2498. Jiwei Li, Michel Galley, Chris Brockett, Georgios Spithourakis, Jianfeng Gao, and Bill Dolan. 2016. A persona-based neural conversation model. In Association for Computational Linguistics, pages 994– 1003. Ryan Lowe, Nissan Pow, Iulian Serban, and Joelle Pineau. 2015. The ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems. In Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 285–294. Zhengdong Lu and Hang Li. 2013. A deep architecture for matching short texts. In Advances in Neural Information Processing Systems, pages 1367–1375. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. 
In Advances in neural information processing systems, pages 3111–3119. 3814 Lili Mou, Yiping Song, Rui Yan, Ge Li, Lu Zhang, and Zhi Jin. 2016. Sequence to backward and forward sequences: A content-introducing approach to generative short-text conversation. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 3349–3358. Iulian Vlad Serban, Tim Klinger, Gerald Tesauro, Kartik Talamadupula, Bowen Zhou, Yoshua Bengio, and Aaron Courville. 2017. Multiresolution recurrent neural networks: An application to dialogue response generation. In AAAI, pages 3288–3294. Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural responding machine for short-text conversation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 1577–1586. Heung-Yeung Shum, Xiaodong He, and Di Li. 2018. From Eliza to XiaoIce: Challenges and opportunities with social chatbots. Frontiers of IT & EE, 19(1):10–26. Valentin I. Spitkovsky, Hiyan Alshawi, and Daniel Jurafsky. 2010. From baby steps to leapfrog: How ”less is more” in unsupervised dependency parsing. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 751–759. Chongyang Tao, Shen Gao, Mingyue Shang, Wei Wu, Dongyan Zhao, and Rui Yan. 2018. Get the point of my utterance! learning towards effective responses with multi-head attention mechanism. In IJCAI, pages 4418–4424. Chongyang Tao, Wei Wu, Can Xu, Wenpeng Hu, Dongyan Zhao, and Rui Yan. 2019. Multirepresentation fusion network for multi-turn response selection in retrieval-based chatbots. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, pages 267– 275. ACM. Yulia Tsvetkov, Manaal Faruqui, Wang Ling, Brian MacWhinney, and Chris Dyer. 2016. Learning the curriculum with bayesian optimization for task-specific word representation learning. arXiv preprint arXiv:1605.03852. Oriol Vinyals and Quoc Le. 2015. A neural conversational model. arXiv preprint arXiv:1506.05869. Hao Wang, Zhengdong Lu, Hang Li, and Enhong Chen. 2013. A dataset for research on short-text conversations. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 935–945. Mingxuan Wang, Zhengdong Lu, Hang Li, and Qun Liu. 2015. Syntax-based deep matching of short texts. In Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence, pages 1354–1361. Yixin Wang, Alp Kucukelbir, and David M Blei. 2017. Robust probabilistic modeling with bayesian data reweighting. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 3646–3655. JMLR. org. Lijun Wu, Fei Tian, Yingce Xia, Yang Fan, Tao Qin, Jianhuang Lai, and Tie-Yan Liu. 2018a. Learning to teach with dynamic loss functions. CoRR, abs/1810.12081. Yu Wu, Wei Wu, Zhoujun Li, and Ming Zhou. 2018b. Learning matching models with weak supervision for response selection in retrieval-based chatbots. arXiv preprint arXiv:1805.02333. Yu Wu, Wei Wu, Chen Xing, Can Xu, Zhoujun Li, and Ming Zhou. 2018c. A sequential matching framework for multi-turn response selection in retrieval-based chatbots. Computational Linguistics, 45(1):163–197. Yu Wu, Wei Wu, Chen Xing, Ming Zhou, and Zhoujun Li. 2017. Sequential matching network: A new architecture for multi-turn response selection in retrieval-based chatbots. 
In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, volume 1, pages 496–505. Chen Xing, Wei Wu, Yu Wu, Jie Liu, Yalou Huang, Ming Zhou, and Wei-Ying Ma. 2017. Topic aware neural response generation. In AAAI, pages 3351– 3357. Rui Yan, Yiping Song, and Hua Wu. 2016. Learning to respond with deep neural networks for retrievalbased human-computer conversation system. In SIGIR, pages 55–64. Rui Yan and Dongyan Zhao. 2018. Coupled context modeling for deep chit-chat: towards conversations between human and computer. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 2574– 2583. ACM. Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018a. Personalizing dialogue agents: I have a dog, do you have pets too? arXiv preprint arXiv:1801.07243. Zhuosheng Zhang, Jiangtong Li, Pengfei Zhu, Hai Zhao, and Gongshen Liu. 2018b. Modeling multiturn conversation with deep utterance aggregation. CoRR, abs/1806.09102. Hao Zhou, Minlie Huang, Tianyang Zhang, Xiaoyan Zhu, and Bing Liu. 2018a. Emotional chatting machine: Emotional conversation generation with internal and external memory. In The Thirty-Second AAAI Conference on Artificial Intelligence, pages 730–738. 3815 Xiangyang Zhou, Daxiang Dong, Hua Wu, Shiqi Zhao, Dianhai Yu, Hao Tian, Xuan Liu, and Rui Yan. 2016. Multi-view response selection for human-computer conversation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 372–381. Xiangyang Zhou, Lu Li, Daxiang Dong, Yi Liu, Ying Chen, Wayne Xin Zhao, Dianhai Yu, and Hua Wu. 2018b. Multi-turn response selection for chatbots with deep attention matching network. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1118–1127.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3816–3825 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3816 Learning to Abstract for Memory-augmented Conversational Response Generation Zhiliang Tian,1,3∗Wei Bi,2 Xiaopeng Li,1,3 Nevin L. Zhang1,3 1Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong 2Tencent AI Lab, Shenzhen, China 3HKUST-Xiaoi Joint Lab, Hong Kong [email protected] [email protected] {xlibo,lzhang}@cse.ust.hk Abstract Neural generative models for open-domain chit-chat conversations have become an active area of research in recent years. A critical issue with most existing generative models is that the generated responses lack informativeness and diversity. A few researchers attempt to leverage the results of retrieval models to strengthen the generative models, but these models are limited by the quality of the retrieval results. In this work, we propose a memory-augmented generative model, which learns to abstract from the training corpus and saves the useful information to the memory to assist the response generation. Our model clusters query-response samples, extracts characteristics of each cluster, and learns to utilize these characteristics for response generation. Experimental results show that our model outperforms other competitive baselines. 1 Introduction Automatic human-computer dialogue / conversation is a core topic in natural language processing. There is a boom in research on open-domain chit-chat dialogue systems due to the availability of vast conversational data online. Most existing models of dialogue systems can be divided into retrieval-based models and generative models. Given a query, retrieval-based (Ji et al., 2014) models search for the most similar query stored in the training corpus and directly copy its corresponding response as the result. These models cannot create new replies customized for the given queries. Generative models (Shang et al., 2015) learn a query-response mapping to generate responses by maximizing P(r|q), where q is the input query and r is the response. The most popular generative model is the Sequence-to-Sequence ∗Work done while Zhiliang Tian was collaborating with Tencent AI Lab. Query Response Memory Where did you go for holiday? Fine weather today in Chicago! I like the weather today. There are many places for tour. Which city to travel next time? I traveled to Tibet. Sure, sunny in Chicago. It is too hot for me. Yes, especially museums. Maybe New York. (weather, today) (sunny, hot) Learn to Abstract Any places for traveling this weekend? Travel to New York’s museum for this weekend. Training corpus Input Output (place, travel) (Tibet, New York, museum) key1 value1 key2 value2 𝑘? @ 𝑘A @ 𝑘@ @ 𝑘B @ 𝑣? @ 𝑣A @ 𝑣B @ 𝑣@ @ 𝑘? A 𝑘A A 𝑘@ A 𝑘B A 𝑣? A 𝑣A A 𝑣B A 𝑣@ A . . . . . . . . . . . . Figure 1: An example of abstracting training corpus and memorizing their characteristics in the form of key vectors and value vectors. Red and blue indicate two clusters. The input query matches the blue one and generates the response assisted by information collected from the last two training samples. (Seq2Seq) model (Sutskever et al., 2014), which generates new utterances tailored for queries and achieves high coherence between queries and generated utterances. However, existing generative models often generate uninformative and universal responses (Li et al., 2016a). 
To address these issues, several researchers leverage retrieved results R to augment the information used in generative models. Such methods are called retrieval-augmented generative models and their objectives are to maximize P(r|q, R), where R is one or a few (at most 3 in practice) retrieved results. Particularly, some researchers (Li et al., 2017; Zhuang et al., 2017; Song et al., 2018) build the combination of retrieval and generative models, which retrieve one or a few responses r+, and then feeds both the query q and r+ into the generative model to maximize P(r|q, R = r+). It enriches generated responses by informatively retrieved responses but can only utilize a limited 3817 number of retrieved results due to their model architecture. Wu et al. (2018) edit the retrieved response r+ with the Seq2Seq model based on the lexical differences between the input query q and its retrieved query q+, whose objective is to maximize P(r|q, R = ⟨r+, q+⟩). It edits retrieved responses r+ to make them relevant to queries, but their edited results rely heavily on the sentence pattern of r+. Generally, the responses from such models are more informative and diverse than those from plain generative models, while maintain better relevance than the retrieved responses. Although current retrieval-augmented generative models have achieved promising results, they still have following weaknesses: Firstly, they are limited by the quality of the retrieved results. Retrieval results are less coherent and relevant with query than generative models’ (Song et al., 2018). Irrelevant retrieved results would mislead the response generation. Secondly, these models can only utilize individual retrieved results, which makes the generation sensitive to those results, leading to a high variance in the performance. Moreover, the information from very few retrieved results may not be sufficient to enrich the response generation. In this paper, we propose a memory-augmented generative model that memorizes and utilizes the common characteristics M of groups of queryresponse (q-r) pairs to enhance the response generation by maximizing P(r|q, M). The advantage is that our model is less sensitive to the quality of individual q-r pairs and hence increases the robustness of response generation. In particular, we divide the training corpus into multiple groups by clustering, extract common characteristics of each group, and learn to utilize the characteristics to assist generation. The idea is illustrated in Figure 1 (top), the training corpus is divided into two sets of closely related queries and their responses. We abstract query-response relationship hidden in those q-r pairs, save them to the memory (Figure 1 bottom), and use those relationships for response generation. Our contributions can be summaried as: 1. We are the first to extract information from clusters of query-response pairs using a learnable memory, and to use the information to enhance the performance of conversation systems. 2. We propose a novel framework where the Seq2Seq, autoencoder and clustering model are jointly trained to abstract the training corpus and generate responses. 3. Our model outperforms state-of-the-art generative models and retrieval-augmented generative models in single-round conversation scenarios. 2 Related Work Generative models build dialogue systems via end-to-end training. Ritter et al. (2011) first regard response generation as query-to-response translation. Following that, Shang et al. 
(2015) implement an end-to-end dialogue system borrowing the Seq2Seq model, while Li et al. (2016b) replace the maximum likelihood criterion with maximum mutual information (MMI) to deal with the universal response issue of the seq2seq. The retrieval-based models are another branch in building dialogue systems. Ji et al. (2014) propose to apply information retrieval techniques to search for related queries and replies. Zhou et al. (2016) and Yan et al. (2016) improve it by neural networks. Recently, several researchers (Song et al., 2018; Li et al., 2017; Zhuang et al., 2017) propose to merge retrieval-based models and generative models. Cai et al. (2018) generate the response skeleton from the retrieved results and revise the skeletons to get the response. Guu et al. (2018) use the Seq2Seq model to edit a prototype from the retrieved results for text generation and Wu et al. (2018) leverage context lexical differences to edit prototypes for conversation. There are some other directions to enhance generative models by adding additional information. Some of them introduce the knowledge base to conversation models, which provide task-specific knowledge (Madotto et al., 2018; Wu et al., 2019) or lexical knowledge to text generation (Young et al., 2018; Parthasarathi and Pineau, 2018). Some other directions are to maintain sample-level temporary memory. Weston et al. (2014), Shang et al. (2015), Serban et al. (2016), Tian et al. (2017) and Le et al. (2018) memorize the previous utterances in a multi-round session. Unlike them, our model brings in the corpus-level memory and does not rely on any external resource. 3818 3 Models 3.1 Model Architecture Our model consists of two components: a memory module and a generative model. The memory module divides the training corpus into multiple groups of query-response pairs, and it extracts and memorizes the essential query-response correspondence information hidden in each group of pairs. The generative model generates responses for input queries and, while doing so, takes information stored in the memory module into consideration. It also learns the representations of queries and responses that are used in the memory module. 3.2 Query-Response Memory Module Our memory module consists of K memory slots, and each memory slot is a pair containing a key cell and its corresponding value cell. Given a query, we search for the most similar key and output its corresponding value. Both the keys and the values are real-value vectors. They are called key embeddings and value embeddings respectively, and denoted as ki and vi. We group queries in the training corpus into K clusters. Each query is embedded as a vector. So, it makes sense to talk about the center ki of each query cluster i. The center ki is used as a key in our memory module. The corresponding value vi is a vector that captures the common characteristics of the responses to queries in cluster i. We will say more about ki and vi later. Read Operation. In our model, the input of Read Operation is the current query’s representation eq from the generative model. Given the eq, the Read Operation addresses the memory by the similarity between the current query eq and every memorized key embedding ki, in which we apply a dotproduct operation to measure the similarity. We design two modes to fetch the value: Soft Read is a weighted summation over all K value embeddings in the whole memory according to the normalized similarity scores (Eq. 1). 
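To make the addressing concrete, here is a minimal PyTorch-style sketch of the soft read just described (formalized as Eq. (1) below); the tensor names, shapes, and the toy usage are our own assumptions rather than the released code, with K=1000 slots and 620-dimensional representations matching the settings reported later in Section 4.2.

```python
import torch
import torch.nn.functional as F

def soft_read(e_q, keys, values):
    """Soft read: weight every value slot by the normalized dot-product
    similarity between the query representation and each key embedding.

    e_q:    (d,)   current query representation from the generative model
    keys:   (K, d) key embeddings (cluster centers)
    values: (K, d) value embeddings (response-side summaries)
    """
    scores = keys @ e_q                # (K,) dot-product similarities
    alpha = F.softmax(scores, dim=0)   # normalized weights over the K slots
    return alpha @ values              # (d,) weighted sum of value embeddings

# toy usage with random tensors
K, d = 1000, 620
e_q = torch.randn(d)
keys, values = torch.randn(K, d), torch.randn(K, d)
e_rm = soft_read(e_q, keys, values)    # memory read-out passed to the decoder
```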
Hard Read is to fetch the value embedding whose key embedding is most similar to the current query eq (Eq. 2). Finally, it returns the value as the output of the read operation. SoftRead(eq) = K X i=1 αivi, αi = softmax(ki • eq). (1) HardRead(eq) = {vi|i = K argmax i=1 (ki •eq)}. (2) Write Operation. We collect the query representation eq’s and the response representation er’s of all the training samples from the generative model, and then conduct K class K-Means clustering using eq’s. We let the center of the i-th cluster Ci be key embedding ki, and let the average of the representations of the responses to queries in Ci be value embedding vi (Eq. 3). vi = P j∈Ci erj |Ci| . (3) In this way, each key embedding ki gathers similar queries together and obtains their representative information by fetching their cluster center. Each value embedding vi retains the common characteristics of a group of responses er whose queries are similar. Hence, the pair ⟨ki, vi⟩ can be regarded as an abstraction of the queryresponse correspondence relationship hidden in the i-th cluster of queries and their responses. We can control the granularity of the abstraction by varying the number of clusters K. In an extreme setting, if we set the memory size K equal to the training corpus size and use the hard read operation, our model nearly degenerates into a retrieval-augmented generative model. In this case, the generation relies on only one retrieved sample, the generated response becomes sensitive to the quality of that single sample and is restrained by the pattern of the single sample. 3.3 Memory-Augmented Response Generative Model Our overall model consists of two branches (Figure 2). The top branch is a memory-augmented Seq2Seq (M-Seq2Seq) model that is used for response generation. The lower branch is a conditional autoencoder (CAE) that is used to learn response representations necessary for the memory writing. The input to the M-Seq2Seq branch is a query q. It is first passed through an encoder to get a 3819 𝑞 𝑟 𝑒$% Any places for traveling this weekend? National park is a good place. 𝑟𝑒𝑎𝑑 𝑒( 𝑒$% 𝑤𝑟𝑖𝑡𝑒 (by clustering) 𝑟𝑒𝑎𝑑 Where travel Tibet New York museum 𝑟̂ 𝑟′= 𝑟 Travel to New York's museum for this weekend. National park is a good place for traveling this week. National park is a good place. 𝑃𝑟𝑒𝑑 𝐿𝑜𝑠𝑠 𝑅𝑒𝑐 𝐿𝑜𝑠𝑠 𝑒$ 𝑒( 𝑒( 𝑒( 𝑒( 𝑒$ 𝑟𝑒𝑝𝑟𝑒𝑠𝑒𝑛𝑡𝑎𝑡𝑖𝑜𝑛𝑠 𝑘𝑒𝑦𝑠H 𝑣𝑎𝑙𝑢𝑒𝑠H 𝑘𝑒𝑦𝑠HLM 𝑣𝑎𝑙𝑢𝑒𝑠HLM 𝑒𝑛𝑐𝑜𝑑𝑒𝑟 𝑒𝑛𝑐𝑜𝑑𝑒𝑟 𝑑𝑒𝑐𝑜𝑑𝑒𝑟 𝑑𝑒𝑐𝑜𝑑𝑒𝑟 𝑀𝐿𝑃 𝑀𝐿𝑃 𝑚𝑒𝑚𝑜𝑟𝑦HLM 𝑚𝑒𝑚𝑜𝑟𝑦H 𝑘𝑒𝑦𝑠 𝑣𝑎𝑙𝑢𝑒𝑠 Figure 2: The architecture of our model. Solid arrows show both the training and generation (testing) processes; dashed arrows show the training process. The left part shows how to read memory at t-th step and write to update the memory from t-th to (t+1)-th step, where t indicates the step of updating the memory. The callout illustrates that a query matches the blue memory slot and reads its value “Tibet”, “New York”, and “museum” to promote its response generation. “Pred Loss” and “Rec Loss” mean the prediction and reconstruction loss respectively. representation eq = Encoder(q). The memory is then read using eq as the key, and the output of the memory read is erm. After that, eq and erm are merged by an MLP (multi-layer perceptron), and then the merged results are fed into the decoder to decode the final response ˆr′, which is the generated response for the query q. The objective function for this branch is the first term of Eq. 8 During training, we feed ⟨q, r⟩pairs to our model. 
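Analogously, the hard read (Eq. 2) and the write operation (Eq. 3) described above can be sketched as below; this is only an illustration that borrows scikit-learn's KMeans for the clustering step, and none of the names correspond to the authors' released code.

```python
import numpy as np
from sklearn.cluster import KMeans

def hard_read(e_q, keys, values):
    """Hard read (Eq. 2): return the value whose key is most similar to e_q."""
    i = int(np.argmax(keys @ e_q))       # dot-product addressing
    return values[i]

def write_memory(query_reprs, response_reprs, K):
    """Write operation (Eq. 3): cluster the collected query representations
    into K groups; cluster centers become keys, and the mean response
    representation of each cluster becomes the corresponding value."""
    km = KMeans(n_clusters=K).fit(query_reprs)
    keys = km.cluster_centers_                           # (K, d)
    values = np.stack([response_reprs[km.labels_ == i].mean(axis=0)
                       for i in range(K)])               # (K, d)
    return keys, values

# toy usage: 50 random representation pairs, 3 memory slots
rng = np.random.RandomState(0)
keys, values = write_memory(rng.randn(50, 8), rng.randn(50, 8), K=3)
print(hard_read(rng.randn(8), keys, values).shape)
```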
The query q is fed to the M-Seq2Seq branch, while the corresponding response r is fed to the CAE branch. Similar to the previous case, r is first pushed through to get a representation er = Encoder(r). Then both eq and er are fed to an MLP and then a decoder. The output is ˆr, a reconstructed version of r. The reconstruction loss is the second term of Eq. 8. We formalize the operations of M-Seq2Seq and CAE by Eq. 4 to Eq. 7. Note that eq is feed to the CAE for two reasons. First, it makes the embedding er of r dependent on the embedding eq of q. Second, it makes the two branches work in a similar fashion so that the representaions learnt by CAE is adaptive to MSeq2Seq. The CAE branch tries to reconstruct ˆr from er and eq, while the M-Seq2Seq tries to generate an appropriate response ˆr′ from eq and erm. erm can be viewed as a rough estimation of er and hence is helpful in improving the quality of the generated response. eq = Encode(q), er = Encode(r), (4) erm = Read(eq), (5) z = MLP([er, eq]), z′ = MLP([erm, eq]), (6) ˆr = Decode(z), ˆr′ = Decode(z′). (7) The overall objective function contains two parts as shown in Eq. 8 : the prediction loss (first term) is derived from the general objective of the retrieval- or memory-augmented generative models max P(r|q, M), where we set M = erm for our model. The reconstruction loss (second term) is for learning the representations by reconstructing r, whose target is to improve the memory module so as to improve the erm for enhancing the generative model. In addition, λ is a factor to balance the losses. L = Eq,r∼D log P(ˆr|q, erm) + λ · Eq,r∼D log P(ˆr|q, r). (8) 3.4 Joint Training and Generation To enable the memory module and the generative model to work together, we combine and jointly train them. We separate the memory writing and the generative model training into two phases, and then train the two phases alternatively. The two training phases switch once per epoch, which means we conduct the memory writing once the generative model finishes training current epoch. 3820 In the generative model training phase, we train and update the model while keeping the memory module read-only. The generative model reads from the memory, trains to update itself, and collects representations eq’s and er’s in preparation for memory writing. In the memory writing phase, parameters of the generative model are fixed. We conduct the clustering over all representations eq’s and er’s collected from generative model training, and then write the results into the memory. For response generation (testing) phase, we only rely on the M-Seq2Seq branch and the memory module in the read-only mode, since we cannot observe the r during generation. As indicated by the solid lines in Figure 2, it encodes q to acquire eq, reads out estimated response erm by the Read Operation, and goes through the MLP and decoder to generate ˆr. 4 Experimental Settings 4.1 Dataset In our experiments, we validate the performance of our model on the context-independent (singleround) conversation task setting in which each sample is a query-response (q-r) pair. We utilize the benchmark dataset (Shang et al., 2015), which collects about 4 millions q-r pairs from a popular Chinese social network website, Weibo.1 For both testing set and validation set, we randomly select 900 queries, and then select randomly 5 responses under each query, thus both our testing set and validation set consist of 4.5k samples. 
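As a rough rendering of Eqs. 4-7 above, the sketch below merges a query representation with either the response representation (CAE branch) or the memory read-out (M-Seq2Seq branch) and conditions a GRU decoder on the result. It is our own toy version, not the released implementation: attention is omitted, target tokens are assumed to be pre-embedded, and only the hidden size (620) and vocabulary size (50k) follow Section 4.2.

```python
import torch
import torch.nn as nn

d = 620  # hidden size used in the paper

class MergeAndDecodeSketch(nn.Module):
    """Toy version of Eq. 4-7: z = MLP([e_r or e_rm, e_q]) conditions the decoder."""
    def __init__(self, vocab_size=50000):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * d, d), nn.Tanh())
        self.decoder = nn.GRU(d, d, batch_first=True)   # attention omitted
        self.out = nn.Linear(d, vocab_size)

    def forward(self, e_q, e_r_or_rm, tgt_emb):
        z = self.mlp(torch.cat([e_r_or_rm, e_q], dim=-1))  # Eq. 6
        h0 = z.unsqueeze(0)                                # initial decoder state
        dec_out, _ = self.decoder(tgt_emb, h0)             # teacher forcing
        return self.out(dec_out)                           # token logits (Eq. 7)

# toy usage: batch of 4 queries, 7 pre-embedded target tokens, small vocabulary
model = MergeAndDecodeSketch(vocab_size=1000)
logits = model(torch.randn(4, d), torch.randn(4, d), torch.randn(4, 7, d))
print(logits.shape)  # (4, 7, 1000)
```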
Sentences are tokenized into word sequences with the Jieba word segmentation tool.2 The vocabulary consists of the top 50k tokens (a mixture of Chinese words and characters), covering 99.98% words in this corpus, and all the out-of-vocabulary words are replaced with ⟨UNK⟩. 4.2 Implementation Details We implement the query and response encoder with a one-layer bi-directional GRU, and the decoder with a one-layer GRU and attention mechanism (Bahdanau et al., 2015). We apply the idea of variantial autoencoder (Kingma and Welling, 2014; Zhao et al., 2017) into our model: before the MLP, we use the neural network to estimate the distribution of response vector, sample the vector by reparameterization, then feed it into MLP. 1www.weibo.com 2github.com/fxsjy/jieba Parameters of the query encoder and response encoder are not shared; the two MLP components in M-Seq2Seq branch and CAE branch also do not share parameters. The dimension of all hidden vectors and embeddings are 620 and the batch size is 64. We employ the Adam optimizer (Kingma and Ba, 2014) with the initial learning rate 0.0001 and gradient clipping 5. For generation, we apply a beam search with the size of 10. The memory size K is 1000, and the loss factor λ is 0.1. We implement our model on PyTorch. The implementation details can be found in our codes 3. 4.3 Baselines We compare two versions of our proposed memory-augmented generative model (MemGM), i.e. MemGM with SoftRead (MemGM-S) and MemGM with HardRead (MemGM-H), with the following methods: 1. Seq2Seq. The standard Seq2Seq with the attention mechanism (Bahdanau et al., 2015) in the decoder and the beam search during generation. 2. MMI (Li et al., 2016b). We implement the MMI-bidi model that re-ranks the candidate responses by the maximum mutual information (MMI) criterion in the beam search to promote response diversity. 3. CVAE. The conditional variational autoencoders applied in conversation systems (Zhao et al., 2017). We follow the their implementation and adapt it in our single-round conversation setting. 4. EditRetrieve (Wu et al., 2018). The stateof-the-art retrieval-augmented generative model, which uses the information of the top-1 retrieved response to guide the response generation. 4.4 Evaluation Metric Following previous work on response generation (Li et al., 2016b; Yao et al., 2017), we evaluate all competing methods by both automatic metrics and human evaluations. The automatic metrics are:: 1. Bleu 1-4. Bleu N (Papineni et al., 2002) measures the N-gram matching between generated responses with the ground-truth responses. 3github.com/tianzhiliang/MemoryAugDialog 3821 Automatic Metrics Human Annotate Bleu1,Bleu2,Bleu3,Bleu4 Sim-A, Sim-M Dist1,Dist2 Entropy Quality Info Seq2Seq 39.98 14.68 6.452 3.227 0.291 0.911 0.043 0.153 7.609 2.33 1.71 MMI 40.08 14.71 6.467 3.236 0.288 0.910 0.053 0.183 7.724 2.39 1.63 CVAE 39.85 14.80 6.318 3.012 0.294 0.919 0.044 0.156 7.569 2.42 1.71 EditRetrieve 37.67 10.73 3.437 1.111 0.294 0.932 0.057 0.187 7.586 2.41 1.72 MemGM-S 41.29 15.94 8.084 4.911 0.303 0.936 0.059 0.214 7.576 2.49 1.70 MemGM-H 41.40 16.06 8.289 4.872 0.300 0.935 0.062 0.218 7.684 2.56 1.75 Table 1: The overall performance for all competing methods on quality, relevance, diversity and informativeness. 2. Sim-A, Sim-M. They measure the relevance between the query and its response by their word embedding cosine similarity. 
Sim-A is the similarity between two sentence-level embeddings composed by averaging all word embeddings, while Sim-M is maximal word-word similarity among all the words of two sentences as (Liu et al., 2016). 3. Dist 1-2. Distinct-1 and Distinct-2 (Li et al., 2016b) are the metrics to evaluate the diversity of generated responses, which count the percentage of unique unigrams and bigrams among all test responses. 4. Entropy. It measures the informativeness of generated responses proposed by (Mou et al., 2016), which is computed by averaging over all the character-level entropy within responses. For human evaluations, we hire five annotators from a commercial annotation company to annotate 250 randomly selected test samples. Responses generated by different models are shuffled for each annotator. The annotators evaluate these samples on two aspects: the overall quality (Quality) and the informativeness (Info). We conduct a 5-scale rating on Quality: 1 point for a response irrelevant to the query, 3 points for a valid but meaningless response, 5 points for a coherent and appropriate response without typos. Points of 2 and 4 are for decision dilemmas. We also conduct a 3-scale rating on Info: 1 point for the universal response or the response containing no more than three unique words, 2 points for a normal response of a single clause or a single topic, and 3 points for an informative response including at least two clauses of different topics, which transfer the current conversation to another scenario (For example, the query is “How’s the weather?”; and response “It’s fine today, let’s play basketball” transfers the weather topic to sports, which should be marked as 3 points). 5 Experimental Results and Analysis 5.1 Overall Performance We report both the automatic metrics and human evaluation results of MemGM compared with other methods in Table 1. MMI scores higher than Seq2Seq on Dist-1&2 owing to its re-ranking mechanism to promote the response diversity. CVAE has a similar performance to the Seq2Seq model. EditRetrieve outperforms Seq2Seq, MMI and CVAE on most metrics. But EditRetrieve underperforms on Bleu scores since retrieval models do not learn a query-to-response mapping and their ability of matching with the ground-truth is naturally lower. MemGM gets the highest scores under most metrics, indicating that our model outperforms current methods on quality, relevance, diversity and informativeness. The improvement of MemGM-H’s Bleu-3&4 (+28.2% and +48.6% in comparison with Seq2Seq) indicates the memory module can extract and memorize trigram and 4gram response patterns to enhance the generated responses. For the two versions of our models, HardRead outperforms SoftRead on most metrics. This phenomenon indicates that fetching a single top memorized piece of information would be more helpful than fetching a mixture of multiple memory slots with multiple topics for generative models. Thus, in MemGM, HardRead is the proper mode for reading memory. 5.2 Impact of Memory Size To investigate how the memory capacity influences the performance of MemGM, we carry out experiments on MemGM-H with a various memory size K and show the results in Table 2 (omitting Bleu-3&4 and Sim-M due to limited space). In Table 2, the extreme setting K = |D| works similarly to retrieval-augmented generative models since it saves all q-r pairs separately and utilize 3822 them individually. 
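For reference, the word-level diversity and informativeness metrics used above (Dist-1/2 and the character-level entropy) can be computed roughly as follows; this is our own re-implementation of the published definitions, so details such as the logarithm base may differ from the original evaluation scripts.

```python
from collections import Counter
import math

def distinct_n(responses, n):
    """Dist-N: ratio of unique n-grams over all n-grams in the generated responses."""
    ngrams = [tuple(toks[i:i + n])
              for toks in responses for i in range(len(toks) - n + 1)]
    return len(set(ngrams)) / max(len(ngrams), 1)

def char_entropy(response):
    """Character-level entropy of a single response string."""
    counts = Counter(response)
    total = sum(counts.values())
    return -sum(c / total * math.log(c / total, 2) for c in counts.values())

replies = [["it", "is", "fine", "today"], ["it", "is", "too", "hot"]]
print(distinct_n(replies, 1), distinct_n(replies, 2), char_entropy("it is fine today"))
```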
The difference between them is that K = |D| reads the memory based on the simple similarity between two vectors eq and ki instead of searching the corpus via mature information retrieval technique, which is usually an ensemble of several text matching methods including similarity of query embeddings. That makes the query matching of K = |D| less accurate and unstable, thus its relevance and quality are weaker than EditRetrieval (Table 1) but does better on diversity. We treat K = |D| and other results in Table 2 as the comparison between individual and grouped q-r pairs on memory-augmented framework. Bleu1,Bleu2 Sim-A Dist1,Dist2 Entropy K=10 42.06 15.11 0.301 0.042 0.140 7.478 K=100 42.75 15.42 0.300 0.043 0.141 7.526 K=1k 41.40 16.06 0.300 0.062 0.218 7.684 K=10k 41.22 15.37 0.296 0.057 0.200 7.659 K=|D| 34.89 9.764 0.265 0.094 0.412 9.241 Table 2: The performance of MemGM-H with different memory size K, where K = |D| means the extreme setting that each sample occupies a memory slot. MemGM with a large memory size (K ≥10k) performs poorly on the response quality (Bleu1&2) and the relevance (Sim-A) compared with the MemGMs with small memory. Too large of the memory size leads to too small of the sample size under each memory slot, which increases the instability and lower the quality of each memory slot. Especially, the performance of K = |D| illustrates the individual retrieval results are not reliable and usually lead to irrelevant results as the observation we will discuss in Sec 5.4. However, large memory MemGMs (K ≥1k) performs well on diversity (Dist-1&2) and informativeness (Entropy), since the training corpus is partitioned into more memory slots and each slots contains more specific topics. And small memory MemGMs (K ≤10) result in low response diversity and informativeness. In conclusion, K=1k is the appropriate memory size to balance all aspects. 5.3 Contents of Memory Our model is expected to cluster similar queries together to leverage the information of their responses. In addition, each memory slot should own a group of closely related queries. To verify the quality of the memory slots, we pick up the queries from the same memory slot and check the similarity between these queries. General Topic Related Entity Overlap Total |m| ≥1,000 350∼1,000 ≤350 – Cluster # 14 205 781 1,000 Query # 18,291 114,454 81,003 213,748 Query % 8.4% 54.4% 37.2% 100% Table 3: The statistics on the size of memory slots (|m|), cluster number (Cluster #), query number (Query #), and query proportion over all queries (Query %) for the three memory slot types. 5 Queries under this Memory Slot Case1: Topic Related Memory Slot 昨天在吉他店里,我们合作了一曲《猜谜到老》 (We play the ”guess forever” in guitar store yesterday) 快来听我唱的”至少还有你”。 (Listen to the “at least I have you” sung by me!) 天空之城吉他独奏,最好听的一个版本 (”Castle in the Sky”guitar solo, the best version to hear) 艾薇儿出道十年12首风靡全球的单曲超赞 (12 world-renowned songs since Avril debuted) 夕阳醉了,太好听了。 (”the Setting Sun is Drunk”. pleasant to hear.) Case2: Entity Overlap Memory Slot 十年前的米 米 米兰 兰 兰,AC米兰 (Milan 10 years ago, AC Milan) 摩纳哥600万打包报价沙拉维+博阿滕—-米 米 米兰 兰 兰体育 (Monaco offers 600 millions$ for Shaarawy and Boateng—-Milan Sprots.) 全场比赛结束,乌迪内斯2 - 1 米 米 米兰 兰 兰 (The whole match was over, Udinese 2:1 Milan) 又是一件卡卡米 米 米兰 兰 兰时期的队服 (Another Kaka’s team uniform when he was in Milan) PPTV这俩解说一直在黑米 米 米兰 兰 兰啊 (The two PPTV’s commentators depreciated Milan) Table 4: Five randomly selected queries under each example of memory slots. 
We find that the status of memory slots are different under the different size of memory slot |m|, where |m| means how many queries are memorized in this memory slot. We divide the memory slots into three types by their size |m| and show their statistics information in Table 3. Topic-related Memory Slot (with size 350 < |m| < 1000 roughly) has a clear topic and the topics of its queries are highly related. For example, the queries under the memory slot shown in Case1 (Table 4) are related to the music topic. There are 205 such slots covering 54.4% queries, which can supply helpful information within the same topic for response generation. Entity-overlap Memory Slot (|m| ≤ 350 roughly) has more specific topics and its queries usually share common entities. As shown in Case2 in Table 4, the cluster of queries talk about various football news related to “Milan”. 781 out of 1000 slots are of this type and they cover 37.2% queries. In response generation, when the query has the same or similar entities with the memory 3823 slot, it can read the information from that memory slot, which summarizes a group of utterances closely related to that entity. General Memory Slot (|m| ≥1000) means the memory slot owning too many queries to have a clear topic, whose clustered queries have various topics and are not similar to each other. Fortunately, there are only 14 such slots influencing 8.4% queries. In summary, most of the memory slots are of good quality and store useful information as we expect, which cover 91.6% of the queries. 5.4 Case Study In this section, we first compare the cases from MemGM and EditRetrieve to analyze how MemGM exceeds the retrieval-augmented model. Then, we show two examples over all the methods to reveal the characteristics of different methods. For the comparison between MemGM and EditRetrieve, we analyze the good/bad cases where MemGM-H outperforms/underperforms EditRetrieve and investigate the reasons. 4 10 8 0 2 4 6 8 10 12 14 16 Good Cases Relevance(Misled by Word Overlap) Relevance(Topic Matching) Informativeness(Meaningful Words) 6 3 6 0 2 4 6 8 10 12 14 16 Bad Cases Relevance(Topic Matching) Quality(Advanced Words) Informativeness(Long Sentence) Good Cases (MemGM outperforms EditRetrieve) Bad Cases (MemGM underperforms EditRetrieve) Figure 3: The reasons that MemGM outperforms/underperforms EditRetrieve on human annotated cases. We collect the good/bad cases from human annotation results by this criterion: If more than four annotators marked the MemGM-H’s Quality score higher/lower than EditRetrieve’s by 2 points, this sample is the good/bad case. From all annotated samples, we obtain 22 good cases and 15 bad cases, and summarize the reasons in Figure 3. There are 3 reasons for the good cases where MemGM outperforms EditRetrieve, shown in the left side of Figure 3. Firstly, we observe a phenomenon from EditRetrieve’s results that the EditRetrieve’s response has the word overlap with its query but is not related to the query at semantic level. The reason is that retrieval systems are highly reliant on the word matching, so they may retrieve fake results with high lexical similarity but indeed low relevance. Therefore, that phenomenon is due to “misled by word overlap” and it leads to EditRetrieve’s irrelevant results on 4 cases where MemGM performs well. It indicates the retrieval quality limits the performance of retrievalaugmented models. Secondly, EditRetrieve’s results mismatch the topics of given queries in 10 cases, where MemGM can generate relevant responses. 
Thirdly, MemGM outperforms EditRetrieve in 8 cases due to containing more meaningful words in MemGM’s responses, where meaningful words means the notional words carrying the specific topic information. To summarize the good cases, the major advantage of MemGM is the high relevance with queries compared with EditRetrieve. The performance on Sim-A and Sim-M (Table 1) verifies that MemGM exceeds EditRetrieve on relevance. Note that the retrieval-augmented models are sensitive to the quality of retrieved results; if retrieved query-response pairs are irrelevant to the query, utilizing such information would lead to topic drift and the generation of irrelevant responses. There are 3 reasons for MemGM underperforming EditRetrieve (Figure 3 right). Firstly, MemGM also suffers from irrelevant responses due to mismatching the topic of queries. However, in terms of the relevance, the number of the MemGM’s bad cases (6 cases) is much fewer than its good cases’ (14 cases). Secondly, 3 cases of EditRetrieve are of higher quality owing to the suitable use of advanced and low-frequency words. Thirdly, 6 responses from EditRetrieve are longer and carry more information. Two examples in Table 5 show the model performance. In the first case, EditRetrieve is misled by “ribs”; it treats “ribs of human” as “pork ribs” and responds an utterance related to “food” topic. MemGMs understand the topic and generate informative responses. In the second case, Seq2Seq, CVAE and MMI only respond by simply repeating the query. EditRetrieve can bring in a new word ”rest” related to the query. MemGMs capture the “cold” from the given query and respond utterances related to both “cold” and “health”. 6 Conclusion In this paper, we propose a novel memoryaugmented generative model for response generation. It leverages groups of query-response pairs to augment generative models instead of the individ3824 Query 多亏我是个瘦子,伤心时我可以数数排骨 (Thanks to being a thin man, I can count my ribs during my sad time.) Seq2Seq 我也是这么想的。(I do think so.) MMI 我也曾伤心过(I have experienced the sad time) CVAE 哈哈,我也是这样的。(Ha-ha, me to.) EditRetrieve 我也想吃。。。(I want to eat it too...) MemGM-S 伤心的时候会想起谁 (Who do you remember during sad time.) MemGM-H 我可以说我是个胖子吗? (Can I say I am a fat man?) Query 今天出来好冷,冻脸啊!大家要注意身体 (It’s cold today when leaving, freezing my face.Attention to health!) Seq2Seq 注意身体哦,注意身体 (Attention to health. attention to health.) MMI 注意身体。。。。(Attention to health....) CVAE 注意身体哦,注意身体 (Attention to health! attention to health.) EditRetrieve 注意身体,好好休息。 (Attention to health, have a good rest.) MemGM-S 好冷啊。注意保暖。 (It’s too cold, pay attention to keep warm.) MemGM-H 好冷啊!注意身体! (It’s too cold, attention to health.) Table 5: The case study among all competing methods. ual retrieval results. It clusters the training corpus into multiple groups, extracts and memorizes common characteristics of each group for generation. Compared with retrieval-augmented models, our model makes improvements on the quality, relevance and informativeness of response generation. 7 Acknowledgments We would like to acknowledge Zhourong Chen and Xingjian Shi for their constructive advices and the anonymous reviewers for their insightful comments. Research on this article was supported by Hong Kong Research Grants Council under grants 16202118 and 16212516. References Dzmitry Bahdanau, KyungHyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. ICLR, 2015. Deng Cai, Yan Wang, Victoria Bi, Zhaopeng Tu, Xiaojiang Liu, Wai Lam, and Shuming Shi. 
Skeleton-toresponse: Dialogue generation guided by retrieval memory. arXiv preprint arXiv:1809.05296, 2018. Kelvin Guu, Tatsunori B Hashimoto, Yonatan Oren, and Percy Liang. Generating sentences by editing prototypes. Transactions of the Association of Computational Linguistics, 6:437–450, 2018. Zongcheng Ji, Zhengdong Lu, and Hang Li. An information retrieval approach to short text conversation. arXiv preprint arXiv:1408.6988, 2014. Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. Diederik P Kingma and Max Welling. Auto-encoding variational bayes. stat, 1050:10, 2014. Hung Le, Truyen Tran, Thin Nguyen, and Svetha Venkatesh. Variational memory encoder-decoder. In Advances in Neural Information Processing Systems, pages 1515–1525, 2018. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. A diversity-promoting objective function for neural conversation models. In NAACLHLT, pages 110–119, 2016. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. A diversity-promoting objective function for neural conversation models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 110–119, 2016. Feng-Lin Li, Minghui Qiu, Haiqing Chen, Xiongwei Wang, Xing Gao, Jun Huang, Juwei Ren, Zhongzhou Zhao, Weipeng Zhao, Lei Wang, et al. Alime assist: an intelligent assistant for creating an innovative e-commerce experience. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, pages 2495–2498. ACM, 2017. Chia-Wei Liu, Ryan Lowe, Iulian V Serban, Michael Noseworthy, Laurent Charlin, and Joelle Pineau. How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. arXiv preprint arXiv:1603.08023, 2016. Andrea Madotto, Chien-Sheng Wu, and Pascale Fung. Mem2seq: Effectively incorporating knowledge bases into end-to-end task-oriented dialog systems. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1468–1478, 2018. Lili Mou, Yiping Song, Rui Yan, Ge Li, Lu Zhang, and Zhi Jin. Sequence to backward and forward sequences: A content-introducing approach to generative short-text conversation. arXiv preprint arXiv:1607.00970, 2016. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Association for Computational Linguistics, 2002. Prasanna Parthasarathi and Joelle Pineau. Extending neural generative conversational model using external knowledge sources. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 690–695, 2018. 3825 Alan Ritter, Colin Cherry, and William B Dolan. Datadriven response generation in social media. In EMNLP, pages 583–593, 2011. Iulian V Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. Building endto-end dialogue systems using generative hierarchical neural network models. In AAAI, pages 3776– 3783, 2016. Lifeng Shang, Zhengdong Lu, and Hang Li. Neural responding machine for short-text conversation. In ACL-IJCNLP, pages 1577–1586, 2015. Yiping Song, Cheng-Te Li, Jian-Yun Nie, Ming Zhang, Dongyan Zhao, and Rui Yan. 
An ensemble of retrieval-based and generation-based humancomputer conversation systems. In Proceedings of the 27th International Joint Conference on Artificial Intelligence, pages 4382–4388. AAAI Press, 2018. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In NIPS, pages 3104–3112, 2014. Zhiliang Tian, Rui Yan, Lili Mou, Yiping Song, Yansong Feng, and Dongyan Zhao. How to make context more useful? an empirical study on contextaware neural conversational models. Annual Meeting of the Association for Computational Linguistics, 2:231–236, 2017. Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. arXiv preprint arXiv:1410.3916, 2014. Yu Wu, Furu Wei, Shaohan Huang, Zhoujun Li, and Ming Zhou. Response generation by context-aware prototype editing. arXiv preprint arXiv:1806.07042, 2018. Chien-Sheng Wu, Richard Socher, and Caiming Xiong. Global-to-local memory pointer networks for task-oriented dialogue. arXiv preprint arXiv:1901.04713, 2019. Rui Yan, Yiping Song, and Hua Wu. Learning to respond with deep neural networks for retrieval-based human-computer conversation system. In SIGIR, pages 55–64, 2016. Lili Yao, Yaoyuan Zhang, Yansong Feng, Dongyan Zhao, and Rui Yan. Towards implicit contentintroducing for generative short-text conversation systems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2190–2199, 2017. Tom Young, Erik Cambria, Iti Chaturvedi, Hao Zhou, Subham Biswas, and Minlie Huang. Augmenting end-to-end dialogue systems with commonsense knowledge. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018. Tiancheng Zhao, Ran Zhao, and Maxine Eskenazi. Learning discourse-level diversity for neural dialog models using conditional variational autoencoders. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 654–664, 2017. Xiangyang Zhou, Daxiang Dong, Hua Wu, Shiqi Zhao, Dianhai Yu, Hao Tian, Xuan Liu, and Rui Yan. Multi-view response selection for human-computer conversation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 372–381, 2016. Yimeng Zhuang, Xianliang Wang, Han Zhang, Jinghui Xie, and Xuan Zhu. An ensemble approach to conversation generation. In National CCF Conference on Natural Language Processing and Chinese Computing, pages 51–62. Springer, 2017.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3826–3835 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3826 Are Training Samples Correlated? Learning to Generate Dialogue Responses with Multiple References Lisong Qiu1,2, Juntao Li1,2, Wei Bi3, Dongyan Zhao1,2, Rui Yan1,2∗ 1Center for Data Science, Peking University, Beijing, China 2Institute of Computer Science and Technology, Peking University, Beijing, China 3Tencent AI Lab, Shenzhen, China {qiuls,lijuntao,zhaody,ruiyan}@pku.edu.cn [email protected] Abstract Due to its potential applications, open-domain dialogue generation has become popular and achieved remarkable progress in recent years, but sometimes suffers from generic responses. Previous models are generally trained based on 1-to-1 mapping from an input query to its response, which actually ignores the nature of 1-to-n mapping in dialogue that there may exist multiple valid responses corresponding to the same query. In this paper, we propose to utilize the multiple references by considering the correlation of different valid responses and modeling the 1-to-n mapping with a novel two-step generation architecture. The first generation phase extracts the common features of different responses which, combined with distinctive features obtained in the second phase, can generate multiple diverse and appropriate responses. Experimental results show that our proposed model can effectively improve the quality of response and outperform existing neural dialogue models on both automatic and human evaluations. 1 Introduction In recent years, open-domain dialogue generation has become a research hotspot in Natural Language Processing due to its broad application prospect, including chatbots, virtual personal assistants, etc. Though plenty of systems have been proposed to improve the quality of generated responses from various aspects such as topic (Xing et al., 2017), persona modeling (Zhang et al., 2018b) and emotion controlling (Zhou et al., 2018b), most of these recent approaches are primarily built upon the sequence-to-sequence architecture (Cho et al., 2014; Shang et al., 2015) which suffers from the “safe” response problem (Li et al., 2016a; Sato et al., 2017). This can be ascribed to modeling the response generation process as 1to-1 mapping, which ignores the nature of 1-to∗Corresponding author: Rui Yan ([email protected]) Figure 1: An illustration of the two-step generation architecture. Different from the conventional methods (shown in green color) which model each response from scratch every time, our method first builds a common feature of multiple responses and models each response based on it afterward. n mapping of dialogue that multiple possible responses can correspond to the same query. To deal with the generic response problem, various methods have been proposed, including diversity-promoting objective function (Li et al., 2016a), enhanced beam search (Shao et al., 2016), latent dialogue mechanism (Zhou et al., 2017, 2018a), Variational Autoencoders (VAEs) based models (Zhao et al., 2017; Serban et al., 2017), etc. However, these methods still view multiple responses as independent ones and fail to model multiple responses jointly. Recently, Zhang et al. (2018a) introduce a maximum likelihood strategy that given an input query, the most likely response is approximated rather than all possible responses, which is further implemented by Rajendran et al. 
(2018) with reinforcement learning for task-oriented dialogue. Although capable of generating the most likely response, these methods fail to model other possible responses and ignore the correlation of different responses. In this paper, we propose a novel response generation model for open-domain conversation, which learns to generate multiple diverse responses with multiple references by considering 3827 the correlation of different responses. Our motivation lies in two aspects: 1) multiple responses for a query are likely correlated, which can facilitate building the dialogue system. 2) it is easier to model each response based on other responses than from scratch every time. As shown in Figure 1, given an input query, different responses may share some common features e.g. positive attitudes or something else, but vary in discourses or expressions which we refer to as distinct features. Accordingly, the system can benefit from modeling these features respectively rather than learning each query-response mapping from scratch. Inspired by this idea, we propose a two-step dialogue generation architecture as follows. We jointly view the multiple possible responses to the same query as a response bag. In the first generation phase, the common feature of different valid responses is extracted, serving as a base from which each specific response in the bag is further approximated. The system then, in the second generation phase, learns to model the distinctive feature of each individual response which, combined with the common feature, can generate multiple diverse responses simultaneously. Experimental results show that our method can outperform existing competitive neural models under both automatic and human evaluation metrics, which demonstrates the effectiveness of the overall approach. We also provide ablation analyses to validate each component of our model. To summarize, our contributions are threefold: • We propose to model multiple responses to a query jointly by considering the correlations of responses with multi-reference learning. • We consider the common and distinctive features of the response bag and propose a novel two-step dialogue generation architecture. • Experiments show that the proposed method can generate multiple diverse responses and outperform existing competitive models on both automatic and human evaluations. 2 Related Work Along with the flourishing development of neural networks, the sequence-to-sequence framework has been widely used for conversation response generation (Shang et al., 2015; Sordoni et al., 2015) where the mapping from a query x to a reply y is learned with the negative log likelihood. However, these models suffer from the “safe” response problem. To address this problem, various methods have been proposed. Li et al. (2016a) propose a diversity-promoting objective function to encourage diverse responses during decoding. Zhou et al. (2017, 2018a) introduce a responding mechanism between the encoder and decoder to generate various responses. Xing et al. (2017) incorporate topic information to generate informative responses. However, these models suffer from the deterministic structure when generating multiple diverse responses. Besides, during the training of these models, response utterances are only used in the loss function and ignored when forward computing, which can confuse the model for pursuing multiple objectives simultaneously. 
A few works explore to change the deterministic structure of sequence-to-sequence models by introducing stochastic latent variables. VAE is one of the most popular methods (Bowman et al., 2016; Zhao et al., 2017; Serban et al., 2017; Cao and Clark, 2017), where the discourse-level diversity is modeled by a Gaussian distribution. However, it is observed that in the CVAE with a fixed Gaussian prior, the learned conditional posteriors tend to collapse to a single mode, resulting in a relatively simple scope (Wang et al., 2017). To tackle this, WAE (Gu et al., 2018) which adopts a Gaussian mixture prior network with Wasserstein distance and VAD (Du et al., 2018) which sequentially introduces a series of latent variables to condition each word in the response sequence are proposed. Although these models overcome the deterministic structure of sequence-to-sequence model, they still ignore the correlation of multiple valid responses and each case is trained separately. To consider the multiple responses jointly, the maximum likelihood strategy is explored. Zhang et al. (2018a) propose the maximum generated likelihood criteria which model a query with its multiple responses as a bag of instances and proposes to optimize the model towards the most likely answer rather than all possible responses. Similarly, Rajendran et al. (2018) propose to reward the dialogue system if any valid answer is produced in the reinforcement learning phase. Though considering multiple responses jointly, the maximum likelihood strategy fails to utilize all the references during training with some cases ig3828 Figure 2: The overall architecture of our proposed dialogue system where the two generation steps and testing process are illustrated. Given an input query x, the model aims to approximate the multiple responses in a bag {y} simultaneously with the continuous common and distinctive features, i.e., the latent variables c and z obtained from the two generation phases respectively. nored. In our approach, we consider multiple responses jointly and model each specific response separately by a two-step generation architecture. 3 Approach In this paper, we propose a novel response generation model for short-text conversation, which models multiple valid responses for a given query jointly. We posit that a dialogue system can benefit from multi-reference learning by considering the correlation of multiple responses. Figure 2 demonstrates the whole architecture of our model. We now describe the details as follows. 3.1 Problem Formulation and Model Overview Training samples {(x, {y})i}i=N i=1 consist of each query x and the set of its valid responses {y}, where N denotes the number of training samples. For a dialogue generation model, it aims to map from the input query x to the output response y ∈{y}. To achieve this, different from conventional methods which view the multiple responses as independent ones, we propose to consider the correlation of multiple responses with a novel twostep generation architecture, where the response bag {y} and each response y ∈{y} are modeled by two separate features which are obtained in each generation phase respectively. Specifically, we assume a variable c ∈Rn representing the common feature of different responses and an unobserved latent variable z ∈Z corresponding to the distinct feature for each y in the bag. 
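Concretely, the multi-reference samples (x, {y}) amount to grouping raw query-response pairs by query; the toy sketch below (our own illustration with made-up utterances) shows the layout assumed throughout this section.

```python
from collections import defaultdict

def build_response_bags(pairs):
    """Group raw (query, response) pairs into multi-reference samples (x, {y})."""
    bags = defaultdict(list)
    for query, response in pairs:
        bags[query].append(response)
    return list(bags.items())

pairs = [("How's the weather?", "It's fine today."),
         ("How's the weather?", "Sunny, let's play basketball."),
         ("Any plan for tonight?", "Maybe a movie.")]
print(build_response_bags(pairs))
```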
The common feature c is generated in the first stage given x, and the distinctive feature z is sampled from the latent space Z in the second stage given the query x and the common feature c. The final responses are then generated conditioned on both the common feature c and the distinctive feature z simultaneously.

3.2 Common Feature of the Response Bag

In the first generation step, we aim to map from the input query x to the common feature c of the response bag {y}. Inspired by multi-instance learning (Zhou, 2004), we start from the simple intuition that it is much easier for the model to fit multiple instances from their mid-point than from a random start-point, as illustrated in Figure 1. To obtain this, we model the common feature of the response bag as the mid-point of the embeddings of the multiple responses. In practice, we first encode the input x with a bidirectional gated recurrent unit (GRU) (Cho et al., 2014) to obtain an input representation $h_x$. Then, the common feature c is computed by a mapping network, implemented as a feed-forward neural network whose trainable parameters are denoted as $\theta$. The feature c is then fed into the response decoder to obtain the intermediate response $y_c$, which is expected to approximate all valid responses. Mathematically, the objective function is defined as:

$$\mathcal{L}_{avg} = \frac{1}{|\{y\}|} \sum_{y \in \{y\}} \log p_{\psi}(y \mid c) \quad (1)$$

where $|\{y\}|$ is the cardinality of the response bag {y} and $p_{\psi}$ represents the response decoder.

Figure 3: The sentence embedding function of the discriminator in the first generation phase.

Besides, to measure how well the intermediate response $y_c$ approximates the mid-point response, we set up an individual discriminator and drive the mapping function to produce better results. As to the discriminator, we first project each utterance to an embedding space of fixed dimensionality via convolutional neural networks (CNNs) with different kernels, following the process shown in Figure 3. Then, the cosine similarity of the query and response embeddings is computed, denoted as $D_{\theta'}(x, y)$, where $\theta'$ represents the trainable parameters of the discriminator. For the response bag {y}, the average response embedding is used to compute the matching score. The objective for the intermediate response $y_c$ is then to minimize the difference between $D_{\theta'}(x, y_c)$ and $D_{\theta'}(x, \{y\})$:

$$\mathcal{L}_{disc} = \mathbb{E}_{x,\{y\},y_c}\left[ D_{\theta'}(x, y_c) - D_{\theta'}(x, \{y\}) \right] \quad (2)$$

where $y_c$ denotes the utterance produced by the decoder conditioned on the variable c. To overcome the discrete and non-differentiable sampling, which breaks gradient propagation from the discriminator, we adopt a "soft" continuous approximation (Hu et al., 2017):

$$\hat{y}_{c,t} \sim \mathrm{softmax}(o_t / \tau) \quad (3)$$

where $o_t$ is the logit vector fed to the softmax function at time-step t, and the temperature $\tau$ is annealed towards 0 as training proceeds to obtain increasingly peaked distributions. The whole loss for the step-one generation is then

$$\mathcal{L}_{first} = \mathcal{L}_{avg} + \mathcal{L}_{disc} \quad (4)$$

which is optimized as a minimax game with adversarial training (Goodfellow et al., 2014).

3.3 Response Specific Generation

The second generation phase aims to model each specific response in a response bag individually. In practice, we adopt the CVAE (Sohn et al., 2015; Yan et al., 2015) architecture, with two prominent modifications. Firstly, rather than modeling each response with the latent variable z from scratch, our model approximates each response based on the bag representation c, with only the distinctive feature of each specific response remaining to be captured. Secondly, the prior common feature c can provide extra information for the sampling network, which is expected to reduce the latent search space. Specifically, similar to the CVAE architecture, the overall objective of our model in the second generation phase is:

$$\mathcal{L}_{cvae} = \mathbb{E}_{q_{\phi}(z|x,y,c)\, p_{\theta}(c|x)}\left[ \log p_{\psi}(y \mid c, z) \right] - D\left[ q_{\phi}(z \mid x, y, c) \,\|\, p_{\varphi}(z \mid x, c) \right] \quad (5)$$

where $q_{\phi}$ represents the recognition network and $p_{\varphi}$ is the prior network, with $\phi$ and $\varphi$ as the trainable parameters; $D(\cdot\|\cdot)$ is the regularization term which measures the distance between the two distributions. In practice, the recognition network is implemented with a feed-forward network:

$$\begin{bmatrix} \mu \\ \log \sigma^2 \end{bmatrix} = W_q \begin{bmatrix} h_x \\ h_y \\ c \end{bmatrix} + b_q \quad (6)$$

where $h_x$ and $h_y$ are the utterance representations of the query and the response obtained by the GRU, respectively, and the latent variable $z \sim \mathcal{N}(\mu, \sigma^2 I)$. For the prior network, we consider two kinds of implementations. One is the vanilla CVAE model (Zhao et al., 2017), where the prior $p_{\varphi}(z \mid x, c)$ is modeled by another feed-forward network conditioned on the representations $h_x$ and c as follows,

$$\begin{bmatrix} \mu' \\ \log \sigma'^2 \end{bmatrix} = W_p \begin{bmatrix} h_x \\ c \end{bmatrix} + b_p \quad (7)$$

and the distance $D(\cdot\|\cdot)$ here is measured by the KL divergence. For the other, we adopt the WAE model (Gu et al., 2018), in which the prior $p_{\varphi}(z \mid x, c)$ is modeled by a mixture of Gaussian distributions $\mathrm{GMM}(\pi_k, \mu'_k, \sigma'^2_k I)_{k=1}^{K}$, where K is the number of Gaussian components and $\pi_k$ is the mixture coefficient of the k-th component of the GMM module, computed as:

$$\pi_k = \frac{\exp(e_k)}{\sum_{i=1}^{K} \exp(e_i)} \quad (8)$$
To overcome the vanishing latent variable problem (Wang et al., 2017) of CVAE, we adopt the KL annealing strategy (Bowman et al., 2016), where the weight of the KL term is gradually increased during training. The other technique employed is the MBOW loss which is able to sharpen the distribution of latent variable z for each specific response and alleviate the vanishing problem at the same time. During testing, diverse responses can be obtained by the two generation phases described above, where the distinctive latent variable z corresponding to each specific response is sampled from the prior probability network. This process is illustrated in Figure 2. Capable of capturing the common feature of the response bag, the variable c is obtained from the mapping network and no intermediate utterance is required, which facilitates reducing the complexity of decoding. 4 Experimental Setup 4.1 Dataset Focusing on open-domain dialogue, we perform experiments on a large-scale single-turn conversation dataset Weibo (Shang et al., 2015), where each input post is generally associated with multiple response utterances2. Concretely, the Weibo dataset consists of short-text online chit-chat dialogues in Chinese, which is crawled from Sina Weibo 3. Totally, there are 4,423,160 queryresponse pairs for training set and 10000 pairs for the validation and testing, where there are around 200k unique query in the training set and each query used in testing correlates with four responses respectively. For preprocessing, we follow the conventional settings (Shang et al., 2015). 4.2 Baselines We compare our model with representative dialogue generation approaches as listed below: 1https://nlp.stanford.edu/projects/glove/ 2More such multi-reference data is widely available, e.g. social media like Twitter. But we adopt Weibo in this work since it is large and publicly available. 3https://www.weibo.com/ 3831 Method Multi-BLEU EMBEDDING Intra-Dist Inter-Dist BLEU-1 BLEU-2 G A E Dist-1 Dist-2 Dist-1 Dist-2 S2S 21.49 9.498 0.567 0.677 0.415 0.311 0.447 0.027 0.127 S2S+DB 20.20 9.445 0.561 0.682 0.422 0.324 0.457 0.028 0.130 MMS 21.40 9.398 0.569 0.691 0.427 0.561 0.697 0.033 0.158 CVAE 22.71 8.923 0.601 0.730 0.452 0.628 0.801 0.035 0.179 CVAE+BOW 23.12 8.420 0.605 0.741 0.456 0.687 0.873 0.038 0.194 WAE 24.02 9.281 0.611 0.754 0.460 0.734 0.885 0.044 0.196 Ours-First 23.68 9.240 0.619 0.762 0.471 0.725 0.891 0.045 0.199 Ours-Disc 24.22 9.101 0.617 0.754 0.465 0.670 0.863 0.036 0.184 Ours-MBOW 23.88 9.582 0.622 0.778 0.477 0.681 0.877 0.040 0.190 Ours 24.04 9.362 0.625 0.771 0.480 0.699 0.876 0.042 0.190 Ours+GMP 24.20 9.417 0.618 0.769 0.482 0.728 0.889 0.044 0.198 Table 1: Automatic evaluation results of different models where the best results are bold. The G, A and E of Embedding represent Greedy, Average, Extreme embedding-based metrics, repsectively. Method Rela. Divt. Red. Overall Gold 3.90 4.22 3.79 3.97 S2S 3.10 2.77 3.24 3.07 CVAE 2.98 3.12 3.10 3.07 Ours 3.22 3.19 3.23 3.21 Table 2: Human evaluation results of different models. Rela., Divt. and Red. represent Relevance, Diversity and Readability, respectively. The Kappa score among different human evaluators is 0.4412, which indicates moderate human agreements. S2S: the vanilla sequence-to-sequence model with attention mechanism (Bahdanau et al., 2014) where standard beam search is applied in testing to generate multiple different responses. 
S2S+DB: the vanilla sequence-to-sequence model with the modified diversity-promoting beam search method (Li et al., 2016b) where a fixed diversity rate 0.5 is used. MMS: the modified multiple responding mechanisms enhanced dialogue model proposed by Zhou et al. (2018a) which introduces responding mechanism embeddings (Zhou et al., 2017) for diverse response generation. CVAE: the vanilla CVAE model (Zhao et al., 2017) with and without BOW (bag-of-word) loss (CVAE+BOW and CVAE). WAE: the conditional Wasserstein autoencoder model for dialogue generation (Gu et al., 2018) which models the distribution of data by training a GAN within the latent variable space. Ours: we explore our model Ours and conduct various ablation studies: the model with only the second stage generation (Ours-First), the model without the discriminator (Ours-Disc) and multireference BOW loss (Ours-MBOW), and the model with GMM prior networks (Ours+GMP). 4.3 Evaluation Metrics To comprehensively evaluate the quality of generated response utterances, we adopt both automatic and human evaluation metrics: BLEU: In dialogue generation, BLEU is widely used in previous studies (Yao et al., 2017; Shang et al., 2018). Since multiple valid responses exist in this paper, we adopt multi-reference BLEU where the evaluated utterance is compared to provided multiple references simultaneously. Distinctness: To distinguish safe and commonplace responses, the distinctness score (Li et al., 2016a) is designed to measure word-level diversity by counting the ratio of distinctive [1,2]-grams. In our experiments, we adopt both Intra-Dist: the distinctness scores of multiple responses for a given query and Inter-Dist: the distinctness scores of generated responses of the whole testing set. Embedding Similarity: Embedding-based metrics compute the cosine similarity between the sentence embedding of a ground-truth response and that of the generated one. There are various ways to obtain the sentence-level embedding from the constituent word embeddings. In our experiments, we apply three most commonly used strategies: Greedy matches each word of the reference with the most similar word in the evaluated sentence; Average uses the average of word embed3832 Input 火山喷发瞬间的一些壮观景象。 再过十分钟就进入win8时代,我是系统升级控。 Query These are some magnificent sights at the moment of the volcanic eruption. There remain ten minutes before we entering the era of win8. I am a geek of system updating. Gold 大自然才是人类的最终boss。 问个白痴问题必须正版才能升级吧? Nature is the final boss of human. May I ask an idiot problem. Does the update require a license? 真帅,12月份的时候就能亲眼看到了,好开心啊。 不是给平板电脑用的系统吗? So cool! I am so happy to see it by myself in December. Isn’t this system for PAD? 被惊艳震撼到了。 已经用了一个多月了,不过还是不喜欢8 I am deeply surprised and shocked. I have used it for a month but I still don’t like it 8 震撼了,小小人类仰视造物主的强大。 好久木用电脑了,想念。 Shocked! The imperceptible humanity looks up to the power of the creator. Having not used the computer for a long time, I miss it. CVAE 大半夜的不光是白天。 这是要用手机吗? It’s midnight, not only daytime. Do you want to use the phone? 一天一天就能看到了。 我是升级了升级版了。 We can see it day after day. I have updated to the upgrade. 天地之间的风景有如此之美。 我还以为是我的电脑。 How could there exist such amazing sights. I thought it was my computer. 火山喷发瞬间的萤火虫。 升级版的机器人。 The glowworm at the moment of volcanic eruption. The upgraded robot. Ours 好美,这是哪里呀? 这是什么软件啊,求解。 So amazing! Where is this? I am wondering what software it is. 好壮观啊一定要保存下来。 我觉得微软的ui还不错。 It’s so magnificent that it should be preserved. I think the ui of Microsoft is not bad. 
大白天的不能看到。 现在的产品已经不是新产品了。 It can’t be seen during the day. The current product is not the new. 如果有机会亲眼所见过。 这个是什么应用啊。 If you have chance to see it yourself. What application is this. 如此这般这般淼小。 我觉得这样的界面更像windows8。 It is so so imperceptible. I think interface like this looks more like windows8. Table 3: Case study for the generated responses from the testing set of Weibo, where the Chinese utterances are translated into English for the sake of readability. For each input query, we show four responses generated by each method and an additional intermediate utterance (marked with underline) for our model. dings; and Extreme takes the most extreme value among all words for each dimension of word embeddings in a sentence. Since multiple references exist, for each utterance to be evaluated, we compute its score with the most similar reference. Human Evaluation with Case Analysis: As automatic evaluation metrics lose sight of the overall quality of a response (Tao et al., 2018), we also adopt human evaluation on 100 random samples to assess the generation quality with three independent aspects considered: relevance (whether the reply is relevant to the query), diversity (whether the reply narrates with diverse words) and readability (whether the utterance is grammatically formed). Each property is assessed with a score from 1 (worst) to 5 (best) by three annotators. The evaluation is conducted in a blind process with the utterance belonging unknown to the reviewers. 4.4 Implementation Details All models are trained with the following hyperparameters: both encoder and decoder are set to one layer with GRU (Cho et al., 2014) cells, where the hidden state size of GRU is 256; the utterance length is limited to 50; the vocabulary size is 50,000 and the word embedding dimension is 256; the word embeddings are shared by the encoder and decoder; all trainable parameters are initialized from a uniform distribution [-0.08, 0.08]; we employ the Adam (Kingma and Ba, 2014) for optimization with a mini-batch size 128 and initialized learning rate 0.001; the gradient clipping strategy is utilized to avoid gradient explosion, where the gradient clipping value is set to be 5. For the latent variable, we adopt dimensional size 256 and the component number of the mixture Gaussian for prior networks in WAE is set to 5. As to the discriminator, we set the initialized learning rate as 0.0002 and use 128 different kernels for each kernel size in {2, 3, 4}. The size of the response bag is limited to 10 where the instances inside are randomly sampled for each mini-batch. All the models are implemented with Pytorch 0.4.1 4. 5 Results and Analysis 5.1 Comparison against Baselines Table 1 shows our main experimental results, with baselines shown in the top and our models at the bottom. The results show that our model (Ours) outperforms competitive baselines on various evaluation metrics. The Seq2seq based models (S2S, S2S-DB and MMS) tend to generate 4https://pytorch.org 3833 fluent utterances and can share some overlapped words with the references, as the high BLEU-2 scores show. However, the distinctness scores illustrate that these models fail to generate multiple diverse responses in spite of the diversitypromoting objective and responding mechanisms used. We attribute this to that these models fail to consider multiple references for the same query, which may confuse the models and lead to a commonplace utterance. 
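For reference, the intra- and inter-distinctness scores discussed above can be computed roughly as follows. This is only a sketch: the helper names are illustrative, and the exact tokenization and normalization behind the reported Dist-1/Dist-2 numbers may differ.

```python
from itertools import chain

def distinct_n(tokenized_responses, n):
    """Ratio of unique n-grams to total n-grams over a list of token lists."""
    ngrams = list(chain.from_iterable(
        [tuple(resp[i:i + n]) for i in range(len(resp) - n + 1)]
        for resp in tokenized_responses))
    if not ngrams:
        return 0.0
    return len(set(ngrams)) / len(ngrams)

def intra_dist(responses_per_query, n):
    """Average Dist-n over the multiple responses generated for each query."""
    scores = [distinct_n(bag, n) for bag in responses_per_query]
    return sum(scores) / len(scores)

def inter_dist(responses_per_query, n):
    """Dist-n over all generated responses of the whole testing set."""
    return distinct_n(list(chain.from_iterable(responses_per_query)), n)
```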
As to the CVAE and WAE models, with the latent variable to control the discourse-level diversity, diverse responses can be obtained. Compared against these previous methods, our model can achieve the best or second best performances on different automatic evaluation metrics where the improvements are most consistent on BLEU-1 and embedding-based metrics, which demonstrates the overall effectiveness of our proposed architecture. In order to better study the quality of generated responses, we also report the human evaluation results in Table 2. As results show, although there remains a huge gap between existing methods and human performance (the Gold), our model gains promising promotions over previous methods on generating appropriate responses with diverse expressions. With both obvious superiority (readability for S2S and diversity for CVAE) and inferiority (diversity for S2S and relevance for CVAE), the baselines show limited overall performances, in contrast to which our method can output more diverse utterances while maintaining the relevance to the input query and achieve a high overall score. 5.2 Ablation Study To better understand the effectiveness of each component in our model, we further conduct the ablation studies with results shown at the bottom of Table 1. Above all, to validate the effectiveness of the common feature, we remove the first generation stage and get the Ours-First model. As the results of BLEU and embedding-based metrics show, the system can benefit from the common feature for better relevance to the query. Moreover, pairwise comparisons Ours-Disc vs. Ours and Ours-MBOW vs. Ours validate the effects of the discriminator and modified multireference bag-of-word loss (MBOW). As results show, the discriminator facilitates extracting the common feature and yields more relevant responses to the input query afterward. The MBOW Figure 4: The statistics of distances between the input query/intermediate utterance and gold references/generated responses, where the distance is measured by the cosine similarity of sentence embeddings. loss, similar to the effects of BOW loss in the CVAE, can lead to a more unique latent variable for each response and improve the final distinctness scores of generated utterances. In the experiments, we also observed the KL vanishing problem when training our model and we overcame it with the KL weight annealing strategy and the MBOW loss described above. 5.3 Case Study and Discussion Table 3 illustrates two examples of generated replies to the input query got from the testing set. Comparing the CVAE and Ours, we can find that although the CVAE model can generate diverse utterances, its responses tend to be irrelevant to the query and sometimes not grammatically formed, e.g. the words “glowworm” and “robot” in the sentences. In contrast, responses generated by our model show better quality, achieving both high relevance and diversity. This demonstrates the ability of the two-step generation architecture. For better insight into the procedure, we present the intermediately generated utterances which show that the feature extracted in the first stage can focus on some common and key aspects of the query and its possible responses, such as the “amazing” and “software”. With the distinctive features sampled in the second generation phase, the model further revises the response and outputs multiple responses with diverse contents and expressions. 
Recap that the common feature is expected to capture the correlations of different responses and serve as the base of a response bag from which different responses are further generated, as shown 3834 in Figure 1. To investigate the actual performances achieved by our model, we compute the distance between the input query/intermediate utterance and gold references/generated responses and present the results in Figure 4. As shown, intermediate utterances obtained in the first generation phase tend to approximate multiple responses with similar distances at the same time. Comparing the generated responses and the references, we find that generated responses show both high relevant and irrelevant ratios, as the values near 0.00 and 1.00 show. This actually agrees well with our observation that the model may sometimes rely heavily on or ignore the prior common feature information. From a further comparison between the input query and the mid, we also observe that the intermediate utterance is more similar to final responses than the input query, which correlates well with our original intention shown in Figure 1. 6 Conclusion and future work In this paper, we tackle the one-to-many queryresponse mapping problem in open-domain conversation and propose a novel two-step generation architecture with the correlation of multiple valid responses considered. Jointly viewing the multiple responses as a response bag, the model extracts the common and distinct features of different responses in two generation phases respectively to output multiple diverse responses. Experimental results illustrate the superior performance of the proposed model in generating diverse and appropriate responses compared to previous representative approaches. However, the modeling of the common and distinct features of responses in our method is currently implicit and coarse-grained. Directions of future work may be pursuing betterdefined features and easier training strategies. 7 Acknowledgments We would like to thank the anonymous reviewers for their constructive comments. This work was supported by the National Key Research and Development Program of China (No. 2017YFC0804001), the National Science Foundation of China (NSFC No. 61672058; NSFC No. 61876196). Rui Yan was sponsored by CCFTencent Open Research Fund and Alibaba Innovative Research (AIR) Fund. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Samuel R Bowman, Luke Vilnis, Oriol Vinyals, Andrew Dai, Rafal Jozefowicz, and Samy Bengio. 2016. Generating sentences from a continuous space. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 10–21. Kris Cao and Stephen Clark. 2017. Latent variable dialogue models and their diversity. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 182–187. Kyunghyun Cho, Bart Van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078. Jiachen Du, Wenjie Li, Yulan He, Ruifeng Xu, Lidong Bing, and Xuan Wang. 2018. Variational autoregressive decoder for neural response generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3154–3163. 
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Advances in neural information processing systems, pages 2672–2680. Xiaodong Gu, Kyunghyun Cho, Jungwoo Ha, and Sunghun Kim. 2018. Dialogwae: Multimodal response generation with conditional wasserstein auto-encoder. arXiv preprint arXiv:1805.12352. Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P Xing. 2017. Toward controlled generation of text. arXiv preprint arXiv:1703.00955. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Matt J Kusner and Jos´e Miguel Hern´andez-Lobato. 2016. Gans for sequences of discrete elements with the gumbel-softmax distribution. arXiv preprint arXiv:1611.04051. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016a. A diversity-promoting objective function for neural conversation models. In Proceedings of NAACL-HLT, pages 110–119. Jiwei Li, Will Monroe, and Dan Jurafsky. 2016b. A simple, fast diverse decoding algorithm for neural generation. arXiv preprint arXiv:1611.08562. 3835 Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543. Janarthanan Rajendran, Jatin Ganhotra, Satinder Singh, and Lazaros Polymenakos. 2018. Learning endto-end goal-oriented dialog with multiple answers. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3834–3843. Shoetsu Sato, Naoki Yoshinaga, Masashi Toyoda, and Masaru Kitsuregawa. 2017. Modeling situations in neural chat bots. In Proceedings of ACL 2017, Student Research Workshop, pages 120–127. Iulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron C Courville, and Yoshua Bengio. 2017. A hierarchical latent variable encoder-decoder model for generating dialogues. In AAAI, pages 3295–3301. Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural responding machine for short-text conversation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), volume 1, pages 1577–1586. Mingyue Shang, Zhenxin Fu, Nanyun Peng, Yansong Feng, Dongyan Zhao, and Rui Yan. 2018. Learning to converse with noisy data: Generation with calibration. In IJCAI, pages 4338–4344. Louis Shao, Stephan Gouws, Denny Britz, Anna Goldie, Brian Strope, and Ray Kurzweil. 2016. Generating long and diverse responses with neural conversation models. openreview. Kihyuk Sohn, Honglak Lee, and Xinchen Yan. 2015. Learning structured output representation using deep conditional generative models. In Advances in neural information processing systems, pages 3483– 3491. Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. 2015. A neural network approach to context-sensitive generation of conversational responses. arXiv preprint arXiv:1506.06714. Chongyang Tao, Lili Mou, Dongyan Zhao, and Rui Yan. 2018. Ruber: An unsupervised method for automatic evaluation of open-domain dialog systems. In Thirty-Second AAAI Conference on Artificial Intelligence. Liwei Wang, Alexander Schwing, and Svetlana Lazebnik. 2017. 
Diverse and accurate image description using a variational auto-encoder with an additive gaussian encoding space. In Advances in Neural Information Processing Systems, pages 5756–5766. Chen Xing, Wei Wu, Yu Wu, Jie Liu, Yalou Huang, Ming Zhou, and Wei-Ying Ma. 2017. Topic aware neural response generation. In AAAI, volume 17, pages 3351–3357. Xinchen Yan, Jimei Yang, Kihyuk Sohn, and Honglak Lee. 2015. Attribute2image: Conditional image generation from visual attributes. arXiv preprint arXiv:1512.00570. Lili Yao, Yaoyuan Zhang, Yansong Feng, Dongyan Zhao, and Rui Yan. 2017. Towards implicit contentintroducing for generative short-text conversation systems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2190–2199. Hainan Zhang, Yanyan Lan, Jiafeng Guo, Jun Xu, and Xueqi Cheng. 2018a. Tailored sequence to sequence models to different conversation scenarios. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1479–1488. Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018b. Personalizing dialogue agents: I have a dog, do you have pets too? arXiv preprint arXiv:1801.07243. Junbo Zhao, Yoon Kim, Kelly Zhang, Alexander Rush, and Yann LeCun. 2018. Adversarially regularized autoencoders. In International Conference on Machine Learning, pages 5897–5906. Tiancheng Zhao, Ran Zhao, and Maxine Eskenazi. 2017. Learning discourse-level diversity for neural dialog models using conditional variational autoencoders. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 654–664. Ganbin Zhou, Ping Luo, Rongyu Cao, Fen Lin, Bo Chen, and Qing He. 2017. Mechanism-aware neural machine for dialogue response generation. In AAAI, pages 3400–3407. Ganbin Zhou, Ping Luo, Yijun Xiao, Fen Lin, Bo Chen, and Qing He. 2018a. Elastic responding machine for dialog generation with dynamically mechanism selecting. In AAAI Conference on Artificial Intelligence, AAAI. Hao Zhou, Minlie Huang, Tianyang Zhang, Xiaoyan Zhu, and Bing Liu. 2018b. Emotional chatting machine: Emotional conversation generation with internal and external memory. In Thirty-Second AAAI Conference on Artificial Intelligence. Zhi-Hua Zhou. 2004. Multi-instance learning: A survey. Department of Computer Science & Technology, Nanjing University, Tech. Rep.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3836–3845 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3836 Pretraining Methods for Dialog Context Representation Learning Shikib Mehri∗, Evgeniia Razumovskaia∗, Tiancheng Zhao and Maxine Eskenazi Language Technologies Institute, Carnegie Mellon University {amehri,erazumov,tianchez,max+}@cs.cmu.edu Abstract This paper examines various unsupervised pretraining objectives for learning dialog context representations. Two novel methods of pretraining dialog context encoders are proposed, and a total of four methods are examined. Each pretraining objective is fine-tuned and evaluated on a set of downstream dialog tasks using the MultiWoz dataset and strong performance improvement is observed. Further evaluation shows that our pretraining objectives result in not only better performance, but also better convergence, models that are less data hungry and have better domain generalizability. 1 Introduction Learning meaningful representations of multi-turn dialog contexts is the cornerstone of dialog systems. In order to generate an appropriate response, a system must be able to aggregate information over multiple turns, such as estimating a belief state over user goals (Williams et al., 2013) and resolving anaphora co–references (Mitkov, 2014). In the past, significant effort has gone into developing better neural dialog architectures to improve context modeling given the same in-domain training data (Dhingra et al., 2017; Zhou et al., 2016). Recent advances in pretraining on massive amounts of text data have led to state-of-theart results on a range of natural language processing (NLP) tasks (Peters et al., 2018; Radford et al., 2018; Devlin et al., 2018) including natural language inference, question answering and text classification. These promising results suggest a new direction for improving context modeling by creating general purpose natural language representations that are useful for many different downstream tasks. ∗* Equal contribution. Yet pretraining methods are still in their infancy. We do not yet fully understand their properties. For example, many pretraining methods are variants of language modeling (Howard and Ruder, 2018; Radford et al., 2018; Devlin et al., 2018), e.g. predicting the previous word, next word or the masked word, given the sentence context. This approach treats natural language as a simple stream of word tokens. It relies on a complex model to discover high-level dependencies, through the use of massive corpora and expensive computation. Recently the BERT model (Devlin et al., 2018) achieved state-of-the-art performance on several NLP benchmarks. It introduces a sentence-pair level pretraining objective, i.e. predicting whether two sentences should come after one another. This is a step towards having pretraining objectives that explicitly consider and leverage discourse-level relationships. However, it is still unclear whether language modeling is the most effective method of pretrained language representation, especially for tasks that need to exploit multi-turn dependencies, e.g. dialog context modeling. Thornbury and Slade (2006) underline several discourse-level features which distinguish dialog from other types of text. Dialog must be coherent across utterance and a sequence of turns should achieve a communicative purpose. 
Further, dialog is interactive in nature, with feedback and back-channelling between speakers, and turntaking. These unique features of dialog suggest that modelling dialog contexts requires pretraining methods specifically designed for dialog. Building on this prior research, the goal of this paper is to study various methods of pretraining discourse-level language representations, i.e. modeling the relationship amongst multiple utterances. This paper takes a first step in the creation of a systematic analysis framework of pretraining methods for dialog systems. Concretely, 3837 we pretrain a hierarchical dialog encoder (Serban et al., 2016) with four different unsupervised pretraining objectives. Two of the objectives, nextutterance generation (Vinyals and Le, 2015) and retrieval (Lowe et al., 2016), have been explored in previous work. The other two pretraining objectives, masked-utterance retrieval and inconsistency identification, are novel. The pretrained dialog encoder is then evaluated on several downstream tasks that probe the quality of the learned context representation by following the typical pretrain & fine-tune procedure. Pretraining and downstream evaluation use the MultiWoz dialog dataset (Budzianowski et al., 2018), which contains over 10,000 dialogs spanning 6 different domains. The downstream tasks include next-utterance generation (NUG), nextutterance retrieval (NUR), dialog act prediction (DAP), and belief state prediction (BSP). The pretraining objectives are assessed under four different hypotheses: (1) that pretraining will improve downstream tasks with fine-tuning on the entire available data, (2) that pretraining will result in better convergence, (3) that pretraining will perform strongly with limited data and (4) that pretraining facilitates domain generalizability. The results here show that pretraining achieves significant performance gains with respect to these hypotheses. Furthermore, the novel objectives achieve performance that is on-par with or better than the pre-existing methods. The contributions of this paper are: (1) a study of four different pretraining objectives for dialog context representation, including two novel objectives. (2) a comprehensive analysis of the effects of pretraining on dialog context representations, assessed on four different downstream tasks. 2 Related Work This work is closely related to research in auxiliary multi-task learning and transfer learning with pretraining for NLP systems. Training with Auxiliary Tasks Incorporating a useful auxiliary loss function to complement the primary objective has been shown to improve the performance of deep neural network models, including, but not limited to, error detection (Rei and Yannakoudakis, 2017), crosslingual speech tagging (Plank et al., 2016), domain independent sentiment classification (Yu and Jiang, 2016), latent variable inference for dialog generation (Zhao et al., 2017) and opinion extraction (Ding et al., 2017). Some auxiliary loss functions are designed to improve performance on a specific task. For instance, Yu and Jiang (2016) pretrained a model for sentiment classification with the auxiliary task of identifying whether a negative or positive word occurred in the sentence. In some cases, auxiliary loss is created to encourage a model’s general representational power. Trinh et al. (2018) found that a model can capture far longer dependencies when pretrained with a suitable auxiliary task. 
This paper falls in line with the second goal by creating learning objectives that improve a representation to capture general-purpose information. Transfer Learning with Pretraining The second line of related research concerns the creation of transferable language representation via pretraining. The basic procedure is typically to first pretrain a powerful neural encoder on massive text data with unsupervised objectives. The second step is to fine-tune this pretrained model on a specific downstream task using a much smaller in-domain dataset (Howard and Ruder, 2018). Recently, several papers that use this approach have achieved significant results. ELMo (Peters et al., 2018) trained a two-way language model with Bidirectional Long Short-Term Memory Networks (biLSTM) (Huang et al., 2015) to predict both the next and previous word. OpenAI’s GPT created a unidirectional language model using transformer networks (Radford et al., 2018) and BERT was trained with two simultaneous objectives: the masked language model and next sentence prediction (Devlin et al., 2018). Each of the models has demonstrated state-of-the-art results on the GLUE benchmark (Wang et al., 2018). The GPT model has also been adapted to improve the performance of end-to-end dialog models. In the 2nd ConvAI challenge (Dinan et al., 2019), the best models on both human and automated evaluations were generative transformers (Wolf et al., 2019), which were initialized with the weights of the GPT model and fine-tuned on in-domain dialog data. These models, which leveraged largescale pretraining, outperformed the systems which only used in-domain data. There has been little work on pretraining methods that learn to extract discourse level information from the input text. Next sentence predic3838 tion loss in BERT (Devlin et al., 2018) is a step in this direction. While these pretraining methods excel at modelling sequential text, they do not explicitly consider the unique discourse-level features of dialog. We therefore take the first steps in the study of pretraining objectives that extract better discourse-level representations of dialog contexts. 3 Pretraining Objectives This section discusses the unsupervised pretraining objectives, including two novel approaches aimed at capturing better representations of dialog context. When considering a specific pretraining method, both the pretraining objective and the model architecture must facilitate the learning of strong and general representations. We define a strong representation as one that captures the discourse-level information within the entire dialog history as well as utterance-level information in the utterances that constitute that history. By our definition, a representation is sufficiently general when it allows the model to perform better on a variety of downstream tasks. The next section describes the pretraining objectives within the context of the strength and generality of the learned representations. For clarity of discussion, the following notation is used: an arbitrary T-turn dialog segment is represented by a list of utterances c = [u1, ...uT ], where ui is an utterance. Further, we denote the set of all observed dialog responses in the data by R = {r1, ...rM}. The pretraining objectives, discussed below, are next-utterance retrieval (NUR), next-utterance generation (NUG), masked-utterance retrieval (MUR), and inconsistency identification (InI). 
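All four objectives are built on the hierarchical dialog encoder mentioned in the introduction, in which an utterance-level biLSTM (f_u) feeds a context-level biLSTM (f_c). The PyTorch sketch below illustrates that shared backbone; the max-pooling over tokens, the embedding size, and the unbatched single-dialog interface are simplifying assumptions rather than details taken from the paper.

```python
import torch
import torch.nn as nn

class HierarchicalDialogEncoder(nn.Module):
    """Sketch: utterance-level biLSTM (f_u) followed by a context-level biLSTM (f_c)."""
    def __init__(self, vocab_size, emb_dim=300, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.utt_rnn = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.ctx_rnn = nn.LSTM(2 * hidden, hidden, batch_first=True, bidirectional=True)

    def forward(self, dialog):
        # dialog: LongTensor of shape [T, max_len], one row per utterance u_1..u_T
        emb = self.embed(dialog)                           # [T, max_len, emb_dim]
        utt_out, _ = self.utt_rnn(emb)                     # [T, max_len, 2*hidden]
        utt_vecs, _ = utt_out.max(dim=1)                   # pool tokens -> one vector per utterance
        ctx_out, _ = self.ctx_rnn(utt_vecs.unsqueeze(0))   # [1, T, 2*hidden]
        return ctx_out.squeeze(0)                          # h_1..h_T, consumed by the objectives below
```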
3.1 Next-Utterance Retrieval NUR has been extensively explored both as an independent task (Lowe et al., 2015, 2016) and as an auxiliary loss in a multi-tasking setup (Wolf et al., 2019). Given a dialog context, the aim of NUR is to select the correct next utterance from a set of k candidate responses. NUR can be thought of as being analogous to language modelling, except that the utterances, rather than the words, are the indivisible atomic units. Language modelling pretraining has produced strong representations of language (Radford et al., 2018; Peters et al., 2018), thereby motivating the choice of NUR as a pretraining objective. For this task we use a hierarchical encoder to produce a representation of the dialog context by first running each utterance independently through a Bidirectional Long-short Term Memory Network (biLSTM) and then using the resulting utterance representations to produce a representation of the entire dialog context. We use a single biLSTM to encode candidate responses. Given [u1, ...uT−1], the task of NUR is to select the correct next utterance uT from R. Note that for large dialog corpora, R is usually very large and it is more computationally feasible to sample a subset of R and as such we retrieve K negative samples for each training example, according to some distribution pn(r), e.g. uniform distribution (Mikolov et al., 2013). Concretely, we minimize the cross entropy loss of the next utterance by: ˆui = fu(ui) i ∈[1, T −1] (1) [h1, ...hT−1] = fc(ˆu1, ...ˆuT−1) (2) rgt = fr(uT ) (3) rj = fr(rj) rj ∼pn(r) (4) αgt = hT−1T rgt (5) αj = hT−1T rj (6) where fu, fc and fr are three distinct biLSTM models that are to be trained. The final loss function is: L = −log p(uT |u1, ...uT−1) (7) = −log exp(αgt) exp(αgt) + PK j=1 exp(αj) ! 3.2 Next-Utterance Generation NUG is the task of generating the next utterance conditioned on the past dialog context. Sequenceto-sequence models (Sutskever et al., 2014; Bahdanau et al., 2015) have been used for pretraining (Dai and Le, 2015; McCann et al., 2017), and have been shown to learn representations that are useful for downstream tasks (Adi et al., 2016; Belinkov et al., 2017). The hierarchical recurrent encoder-decoder architecture (Serban et al., 2016) was used during NUG pretraining. Although the decoder is used in pretraining, only the hierarchical context encoder is transferred to the downstream tasks. Similarly to NUR, the optimization goal of NUG is to maximize the log-likelihood of the next utterance given 3839 the previous utterances. However, it differs in that it factors the conditional distribution to word-level in an auto-regressive manner. Specifically, let the word tokens in uT be [w1, ...wN]. The dialog context is encoded as in Eq 8 with an utterance and a context biLSTM. Then the loss function to be minimized is shown in Eq 9: L = −log p(uT |u1, ...uT−1) (8) = − N X k log p(wk|w<k, hT−1) (9) 3.3 Masked-Utterance Retrieval MUR is similar to NUR: the input contains a dialog context and a set of K candidate responses. The objective is to select the correct response. The difference between the two is twofold. First, one of the utterances in the dialog context has been replaced by a randomly chosen utterance. Secondly, rather than use the final context representation to select the response that should immediately follow, the goal here is to use the representation of the replacement utterance to retrieve the correct utterance. 
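Both retrieval-style objectives reduce to the same scoring pattern: take a single context state as the query (h_{T-1} for NUR, h_t for MUR), dot-product it against the encoded gold utterance and K sampled negatives, and apply a cross-entropy loss with the gold candidate as the target. The minimal sketch below corresponds to Eqs. 5-7 (the MUR loss in Section 3.3 is analogous); batching is omitted and the function name is illustrative.

```python
import torch
import torch.nn.functional as F

def candidate_retrieval_loss(query_state, gold_vec, negative_vecs):
    """query_state:   [d] context state (h_{T-1} for NUR, h_t for MUR)
    gold_vec:         [d] f_r encoding of the correct utterance
    negative_vecs:    [K, d] f_r encodings of K samples drawn from p_n(r)"""
    candidates = torch.cat([gold_vec.unsqueeze(0), negative_vecs], dim=0)  # gold at index 0
    scores = candidates @ query_state                                      # the alpha values
    target = torch.zeros(1, dtype=torch.long)
    return F.cross_entropy(scores.unsqueeze(0), target)                    # -log p(gold | context)
```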
The replacement index t is randomly sampled from the dialog segment: t ∼Uniform[1, T] (10) Then ut is randomly replaced by a replacement utterance q that is sampled from the negative distribution pn(r) defined in NUR. Finally, the goal is to minimize the negative log-likelihood of the original ut given the context hidden state at timestamp t, i.e. −log p(ugt|u1, ...q, ...uT ), where ugt is the original utterance at index t. ˆui = fu(ui) i ∈[1, T] (11) [h1, ...hT] = fc(ˆu1, ...ˆuT) (12) rgt = fr(ugt) (13) rj = fr(rj) rj ∼pn(r) (14) αgt = htT rgt (15) αj = htT rj (16) The final loss function is: L = −log p(ut|u1, ...q, ...uT ) (17) = −log exp(αgt) exp(αgt) + PK j=1 exp(αj) ! MUR is analogous to the MLM objective of Devlin et al. (2018), which forces model to keep a distributional contextual representation of each input token. By masking entire utterances, instead of input tokens, MUR learns to produce strong representations of each utterance. 3.4 Inconsistency Identification InI is the task of finding inconsistent utterances within a dialog history. Given a dialog context with one utterance replaced randomly, just like MUR, InI finds the inconsistent utterance. The replacement procedure is the same as the one described for MUR, where a uniform random index t is selected in the dialog context and ut is replaced by a negative sample q. While MUR strives to create a model that finds the original utterance, given the replacement index t, InI aims to train a model that can identify the replacement position t. Specifically, this is done via: ˆui = fu(ui) i ∈[1, T] (18) [h1, ...hT] = fc(ˆu1, ...ˆuT) (19) αi = hTT hi i ∈[1, T] (20) Finally, the loss function is to minimize the cross entropy of the replaced index: L = −log p(t|u1, ...q, ...uT ) (21) = −log exp(αt) PT j=1 exp(αi) ! This pretraining objective aims to explicitly model the coherence of the dialog, which encourages both local representations of each individual utterance and a global representation of the dialog context. We believe that this will improve the generality of the pretrained representations. 4 Downstream Tasks This section describes the downstream tasks chosen to test the strength and generality of the representations produced by the various pretraining objectives. The downstream evaluation is carried out on a lexicalized version of the MultiWoz dataset (Budzianowski et al., 2018). MultiWoz contains multi-domain conversations between a Wizard-ofOz and a human. There are 8422 dialogs for training, 1000 for validation and 1000 for testing. 3840 4.1 Belief State Prediction Given a dialog context, the task is to predict a 1784-dimensional belief state vector. Belief state prediction (BSP) is a multi-class classification task, highly dependant on strong dialog context representations. The belief state vector represents the values of 27 entities, all of which can be inferred from the dialog context. To obtain the 1784dimensional label, the entity values are encoded as a one-hot encoded vector and concatenated. The entities are shown in Appendix ??. Performance is measured using the F-1 score for entities with nonempty values. This approach is analogous to the one used in the evaluation of Dialog State Tracking Challenge 2 (Henderson et al., 2014). This task measures the ability of a system to maintain a complete and accurate state representation of the dialog context. With a 1784dimensional output, the hidden representation for this task must be sufficiently general. 
Therefore, any pretrained representations that lack generality will struggle on belief state prediction. 4.2 Dialog Act Prediction Dialog act prediction (DAP), much like belief state prediction, is a multi-label task aimed at producing a 32-dimensional dialog act vector for the system utterances. The set of dialog acts for a system utterance describes the actions that may be taken by the system. This might include: informing the user about an attraction, requesting information about a hotel query, or informing them about specific trains. There are often multiple actions taken in a single utterance, and thus this is a multi-label task. To evaluate performance on dialog act prediction, we use the F-1 score. 4.3 Next-Utterance Generation NUG is the task of producing the next utterance conditioned on the dialog history. We evaluate the ability of our models to generate system utterances using BLEU-4 (Papineni et al., 2002). This task requires both a strong global context representation to initialize the decoder’s hidden state and strong local utterance representations. 4.4 Next-Utterance Retrieval Given a dialog context, NUR selects the correct next utterance from a set of k candidate responses. Though this task was not originally part of the MultiWoz dataset, we construct the necessary data for this task by randomly sampling negative examples. This task is underlined by Lowe et al. (2016)’s suggestion that using NUR for evaluation is extremely indicative of performance and is one of the best forms of evaluation. Hits@1 (H@1) is used to evaluate our retrieval models. The latter is equivalent to accuracy. Although some of these pretraining models had a response encoder, which would have been useful to transfer to this task, to ensure a fair comparison of all of the methods, we only transfer the weights of the context encoder. 5 Experiments and Results This section presents the experiments and results aimed at capturing the capabilities and properties of the above pretraining objectives by evaluating on a variety of downstream tasks. All unsupervised pretraining objectives are trained on the full MultiWoz dataset (Budzianowski et al., 2018). Data usage for downstream fine-tuning differs, depending on the property being measured. 5.1 Experimental Setup Each model was trained for 15 epochs, with the validation performance computed at each epoch. The model achieving the highest validation set performance was used for the results on the test data. The hyperparameters and experimental settings are shown in the Appendix ??. The source code will be open-sourced when this paper is released. In the experiments, the performance on each downstream task was measured for each pretraining objective. Combinations where the pretraining objective is the same as the downstream task were excluded. The pretraining and finetuning is carried out on the same dataset. This evaluates the pretraining objectives as a means of extracting additional information from the same data, in contrast to evaluating their ability to benefit from additional data. Though pretraining on external data may prove to be effective, identifying a suitable pretraining dataset is challenging and this approach more directly evaluates the pretraining objectives. 
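For reference, the retrieval and multi-label metrics described in Section 4 can be computed along the following lines. This is a sketch: the paper's exact averaging, for instance restricting the BSP F-1 to entities with non-empty values, may differ.

```python
import numpy as np

def hits_at_1(candidate_scores, gold_index=0):
    """Fraction of examples where the gold candidate receives the highest score.
    candidate_scores: [num_examples, num_candidates]; gold assumed at a fixed index."""
    return float(np.mean(np.argmax(candidate_scores, axis=1) == gold_index))

def micro_f1(pred, gold):
    """Micro-averaged F-1 over binary label matrices (dialog acts, belief-state slots).
    pred, gold: {0,1} arrays of shape [num_examples, num_labels]."""
    tp = np.sum(pred * gold)
    precision = tp / max(np.sum(pred), 1)
    recall = tp / max(np.sum(gold), 1)
    return 0.0 if tp == 0 else 2 * precision * recall / (precision + recall)
```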
5.2 Performance on Full Data To first examine whether the pretraining objectives facilitate improved performance on downstream tasks a baseline model was trained for each down3841 BSP DAP NUR NUG F-1 F-1 H@1 BLEU None 18.48 40.33 63.72 14.21 NUR 17.80 43.25 – 15.39 NUG 17.96 42.31 67.34 – MUR 16.76 44.87 62.38 15.27 InI 16.61 44.84 62.62 15.52 Table 1: Results of evaluating the chosen pretraining objectives, preceded by the baseline, on the four downstream tasks. This evaluation used all of the training data for the downstream tasks as described in Section 5.2. stream task, using the entire set of MultiWoz data. The first row of Table 1 shows the performance of randomly initialized models for each downstream task. To evaluate the full capabilities of the pretraining objectives above, the pretrained models were used to initialize the models for the downstream tasks. Results are shown on Table 1. This experimental setup speaks to the strength and the generality of the pretrained representations. Using unsupervised pretraining, the models produce dialog representations that are strong enough to improve downstream tasks. The learned representations demonstrate generality because the multiple downstream tasks benefit from the same pretraining. Rather than learning representations that are useful for just the pretraining objective, or for a single downstream task, the learned representations are general and beneficial for multiple tasks. For the DAP and NUG downstream tasks, the pretrained models consistently outperformed the baseline. InI has the highest BLEU score for NUG. This may be a consequence of the importance of both global context representations and local utterance representations in sequence generation models. Both InI and MUR score much higher than the baseline and the other methods for DAP, which may be due to the fact that these two approaches are trained to learn a representation of each utterance rather than just an overall context representation. NUR has significant gains when pretraining with NUG, possibly because the information that must be captured to generate the next utterance is similar to the information needed to retrieve the next utterance. Unlike the other downstream tasks, BSP did not benefit from pretraining. A potential justification of this result is that due to the difficulty of the task, the model needs to resort to word-level pattern matching. The generality of the pretrained representations precludes this. 5.3 Convergence Analysis This experimental setup measures the impact of pretraining on the convergence of the downstream training. Sufficiently general pretraining objectives should learn to extract useful representations of the dialog context. Thus when fine-tuning on a given downstream task, the model should be able to use the representations it has already learned rather than having to learn to extract relevant features from scratch. The performance on all downstream tasks with the different pretraining objectives is evaluated at every epoch. The results are presented on Figure 1. These figures show faster convergence across all downstream tasks with significant improvement over a random initialization baseline. The results show that performance on the initial epochs is considerably better with pretraining than without. In most cases, performance evens out during training, thus attaining results that are comparable to the pretraining methods on the full dataset. 
It is important to note that performance of the models after just a single epoch of training is significantly higher on all downstream tasks when the encoder has been pretrained. This underlines the usefulness of the features learned in pretraining. The convergence of BSP shown in Figure 1 is very interesting. Though the baseline ultimately outperforms all other methods, the pretrained models attain their highest performance in the early epochs. This suggests that the representations learned in pretraining are indeed useful for this task despite the fact that they do not show improvement over the baseline. 5.4 Performance on Limited Data Sufficiently strong and general pretrained representations, should continue to succeed in downstream evaluation even when fine-tuned on significantly less data. The performance on downstream tasks is evaluated with various amounts of finetuning data (1%, 2%, 5%, 10% and 50%). The effect of the training data size for each downstream task is also evaluated. The performance of NUR with different amounts of training data is shown on Figure 2. With 5% of the fine-tuning data, the NUG pretrained model outperforms the baseline that used 10%. With 10% 3842 Figure 1: The performance of (from left to right) BSP, DAP, NUR, NUG across epochs with different pretraining objectives. For the BLEU-4 score in NUG, the results are noisy due to the metric being the BLEU score, however the general trend is still apparent. Figure 2: NUR Hits@1 at different training set sizes. The blue horizontal line is the baseline performance with 50% of the data. The red horizontal line is the baseline performance with 10% of the data. of the fine-tuning data, this model outperforms the baseline that used 50% of the data. Table 2 shows all of the results with 1% of the fine-tuning data, while Table 3 shows the results with 10% of the fine-tuning data. More results may be found in the Appendix ??. BSP DAP NUR NUG F-1 F-1 H@1 BLEU None 4.65 16.07 12.28 6.82 NUR 6.44 14.48 – 11.29 NUG 7.63 17.41 28.08 – MUR 5.89 17.19 23.37 10.47 InI 6.18 12.20 21.84 11.10 Table 2: Performance using 1% of the data; the rows correspond to the pretraining objectives and the columns correspond to the downstream tasks. The results shown here strongly highlight the effectiveness of pretraining. With a small fraction BSP DAP NUR NUG F-1 F-1 H@1 BLEU None 5.73 18.44 34.88 9.19 NUR 7.30 20.84 – 14.04 NUG 9.62 22.11 45.05 – MUR 7.08 22.24 39.38 11.63 InI 7.30 20.73 35.26 13.23 Table 3: Results with 10% of the data; the rows correspond to the pretraining objectives and the columns correspond to the downstream tasks. of the data, unsupervised pretraining shows competitive performance on downstream tasks. When the amount of data is very limited, the best results were obtained by models pretrained with NUG. This may be indicative of the generality of NUG pretraining. Since the generation task is difficult, it is likely that the pretrained model learns to capture the most general context representation that it can. This makes the representations especially suitable for low resource conditions since NUG pretrained representations are general enough to adapt to different tasks given even very small amounts of data, 5.5 Domain Generalizability Sufficiently general pretrained representations should facilitate domain generalizability on the downstream tasks, just as pretraining should encourage the downstream models to use domain agnostic representations and identify domain agnostic relationships in the data. 
This experimental setup is designed to mimic the scenario of adding a new domain as the downstream task. It assumes that there are large quantities of unlabeled data for unsupervised pretraining in all domains but that there is a limited set of labeled data for the downstream tasks. More specif3843 BSP DAP NUR NUG F-1 F-1 H@1 BLEU None 4.07 15.22 13.62 7.80 NUR 19.64 17.88 – 9.97 NUG 17.11 20.53 21.57 – MUR 15.84 17.45 21.06 9.81 InI 14.61 15.56 19.80 10.87 Table 4: Results of evaluating pretrained objectives on their capacity to generalize to the restaurant domain using only 50 in-domain samples and 2000 outof-domain samples during training. The evaluation is carried out only on the in-domain test samples. ically, for each downstream task there are 1000 labeled out-of-domain examples (2% of the dataset) and only 50 labeled in-domain examples (0.1% of the dataset). The performance of the downstream models is computed only on the in-domain test samples, thereby evaluating the ability of our models to learn the downstream task on the limited in-domain data. The results on Table 4 show that pretraining produces more general representations and facilitates domain generalizability. 6 Discussion The results with different experimental setups demonstrate the effectiveness of the pretraining objectives. Pretraining improves performance, leads to faster convergence, works well in lowdata scenarios and facilitates domain generalizability. We now consider the respective strengths of the different pretraining objectives. NUR and NUG are complementary tasks. Over all of the results, we can see that pretraining with either NUG or NUR, gives strong results when fine-tuning on the other one. This property, which has also been observed by Wolf et al. (2019), is a consequence of the similarity of the two tasks. Both for retrieval and generation, context encoding must contain all of the information that is necessary to produce the next utterance. NUG learns representations that are very general. We see that NUG, especially in low data experiments, effectively transfers to many downstream tasks. This speaks to the generality of its representations. To auto-regressively generate the next utterance, the context encoder in NUG must capture a strong and expressive representation of the dialog context. This representation is all that the decoder uses to generate its response at word level so it must contain all of the relevant infor< 3 ≥3 & < 7 ≥7 None 11.02 14.17 15.30 NUR 13.95 15.08 15.88 MUR 12.21 15.36 16.10 InI 11.52 15.40 16.63 Table 5: Results on the downstream task of NUG, with different dialog context lengths (< 3 utterances, 3-7 utterances, and > 7 utterances. mation. Despite the similarity of NUG and NUR, generation is a more difficult task, due to the potential output space of the model. As such, the representations learned by NUG are more general and expressive. The representative capabilities of the encoder in a generation model are also demonstrated by the work of Adi et al. (2016). InI and MUR learn strong local representations of each utterance. The two novel pretraining objectives, InI and MUR, consistently show strong improvement for the downstream NUG task. Both of these objectives learn local representations of each utterance in the dialog context since both of their respective loss functions use the representation of each utterance instead of just the final hidden state. 
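To make this concrete, the InI objective from Section 3.4 can be sketched as below; its loss scores every per-utterance state, which is why the encoder is pushed to keep useful local representations (shapes simplified, batching omitted, and the MUR variant instead scores candidate responses against h_t).

```python
import torch
import torch.nn.functional as F

def ini_loss(context_states, replaced_index):
    """context_states: [T, d] per-utterance states h_1..h_T from the context encoder;
    replaced_index: position t whose utterance was swapped for a negative sample."""
    scores = context_states @ context_states[-1]       # alpha_i = h_T . h_i for i = 1..T
    target = torch.tensor([replaced_index])
    return F.cross_entropy(scores.unsqueeze(0), target)
```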
In an effort to better understand the properties of the different objectives, Table 5 shows performance on the NUG task for different dialog context lengths. Generating a response to a longer dialog context requires a strong local representation of each individual utterance. A model that does not capture strong representations of each utterance will likely perform poorly on longer contexts. For example, for a dialog in which the user requests a restaurant recommendation, in order to generate the system utterance that recommends a restaurant, the model must consider all of the past utterances in order to effectively generate the recommendation. If the local representations of each utterance are not strong, it would be difficult to generate the system output. The results in Table 5 demonstrate that both InI and MUR strongly outperform other methods on long contexts, suggesting that these methods are effective for capturing strong representations of each utterance. Both MUR and InI perform poorly on shorter contexts. This further demonstrates that fine-tuned NUG models learn to rely on strong utterance representations, and therefore struggle when there are few utterances. Using the same dataset for pretraining and 3844 finetuning. The pretraining objectives demonstrate large improvements over directly training for the downstream task. No additional data is used for pretraining, which suggests that the proposed objective allow the model to extract stronger and more general context representations from the same data. The reduced data experiments show that pretraining on a larger corpora (i.e., the full data), results in strong performance on smaller task-specific datasets (i.e., the reduced data). As such, it is likely that pretraining on larger external data will result in further performance gains, however, it is challenging to identify a sufficient corpus. 7 Conclusion and Future Work This paper proposes several methods of unsupervised pretraining for learning strong and general dialog context representations, and demonstrates their effectiveness in improving performance on downstream tasks with limited fine-tuning data as well as out-of-domain data. It proposes two novel pretraining objectives: masked-utterance retrieval and inconsistency identification which better capture both the utterance-level and context-level information. Evaluation of the learned representations on four downstream dialog tasks shows strong performance improvement over randomly initialized baselines. In this paper, unsupervised pretraining has been shown to learn effective representations of dialog context, making this an important research direction for future dialog systems. These results open three future research directions. First, the models proposed here should be pretrained on larger external dialog datasets. Second, it would be interesting to test the representations learned using unsupervised pretraining on less-related downstream tasks such as sentiment analysis. Finally, the addition of word-level pretraining methods to improve the dialog context representations should be explored. References Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. 2016. Fine-grained analysis of sentence embeddings using auxiliary prediction tasks. arXiv preprint arXiv:1608.04207. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473. Yonatan Belinkov, Nadir Durrani, Fahim Dalvi, Hassan Sajjad, and James Glass. 
2017. What do neural machine translation models learn about morphology? In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 861–872. Paweł Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, I˜nigo Casanueva, Stefan Ultes, Osman Ramadan, and Milica Gasic. 2018. Multiwoz-a largescale multi-domain wizard-of-oz dataset for taskoriented dialogue modelling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5016–5026. Andrew M Dai and Quoc V Le. 2015. Semi-supervised sequence learning. In Advances in neural information processing systems, pages 3079–3087. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Bhuwan Dhingra, Hanxiao Liu, Zhilin Yang, William Cohen, and Ruslan Salakhutdinov. 2017. Gatedattention readers for text comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1832–1846. Emily Dinan, Varvara Logacheva, Valentin Malykh, Alexander Miller, Kurt Shuster, Jack Urbanek, Douwe Kiela, Arthur Szlam, Iulian Serban, Ryan Lowe, et al. 2019. The second conversational intelligence challenge (convai2). arXiv preprint arXiv:1902.00098. Ying Ding, Jianfei Yu, and Jing Jiang. 2017. Recurrent neural networks with auxiliary labels for crossdomain opinion target extraction. In Thirty-First AAAI Conference on Artificial Intelligence. Matthew Henderson, Blaise Thomson, and Jason D Williams. 2014. The second dialog state tracking challenge. In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), pages 263–272. Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 328–339. Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional lstm-crf models for sequence tagging. arXiv preprint arXiv:1508.01991. Ryan Lowe, Nissan Pow, Iulian V Serban, and Joelle Pineau. 2015. The ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems. In 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, page 285. 3845 Ryan Lowe, Iulian V Serban, Mike Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. On the evaluation of dialogue systems with next utterance classification. Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. 2017. Learned in translation: Contextualized word vectors. In Advances in Neural Information Processing Systems, pages 6294–6305. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119. Ruslan Mitkov. 2014. Anaphora resolution. Routledge. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311–318. Association for Computational Linguistics. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. 
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pages 2227–2237. Barbara Plank, Anders Søgaard, and Yoav Goldberg. 2016. Multilingual part-of-speech tagging with bidirectional long short-term memory models and auxiliary loss. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 412–418. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. URL https://s3us-west-2. amazonaws. com/openai-assets/researchcovers/languageunsupervised/language understanding paper. pdf. Marek Rei and Helen Yannakoudakis. 2017. Auxiliary objectives for neural error detection models. In Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications, pages 33–43. Iulian Vlad Serban, Alessandro Sordoni, Yoshua Bengio, Aaron C Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using generative hierarchical neural network models. In AAAI, volume 16, pages 3776–3784. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104–3112. Scott Thornbury and Diana Slade. 2006. Conversation: From description to pedagogy. Cambridge University Press. Trieu Trinh, Andrew Dai, Thang Luong, and Quoc Le. 2018. Learning longer-term dependencies in rnns with auxiliary losses. In International Conference on Machine Learning, pages 4972–4981. Oriol Vinyals and Quoc Le. 2015. A neural conversational model. arXiv preprint arXiv:1506.05869. Alex Wang, Amapreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461. Jason Williams, Antoine Raux, Deepak Ramachandran, and Alan Black. 2013. The dialog state tracking challenge. In Proceedings of the SIGDIAL 2013 Conference, pages 404–413. Thomas Wolf, Victor Sanh, Julien Chaumond, and Clement Delangue. 2019. Transfertransfo: A transfer learning approach for neural network based conversational agents. arXiv preprint arXiv:1901.08149. Jianfei Yu and Jing Jiang. 2016. Learning sentence embeddings with auxiliary tasks for cross-domain sentiment classification. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 236–246. Tiancheng Zhao, Ran Zhao, and Maxine Eskenazi. 2017. Learning discourse-level diversity for neural dialog models using conditional variational autoencoders. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 654–664. Xiangyang Zhou, Daxiang Dong, Hua Wu, Shiqi Zhao, Dianhai Yu, Hao Tian, Xuan Liu, and Rui Yan. 2016. Multi-view response selection for human-computer conversation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 372–381.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3846–3856 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3846 A Large-Scale Corpus for Conversation Disentanglement Jonathan K. Kummerfeld1∗ Sai R. Gouravajhala1 Joseph J. Peper1 Vignesh Athreya1 Chulaka Gunasekara2 Jatin Ganhotra2 Siva Sankalp Patel2 Lazaros Polymenakos2 Walter S. Lasecki1 Computer Science & Engineering1 T.J. Watson Research Center2 University of Michigan IBM Research AI Abstract Disentangling conversations mixed together in a single stream of messages is a difficult task, made harder by the lack of large manually annotated datasets. We created a new dataset of 77,563 messages manually annotated with reply-structure graphs that both disentangle conversations and define internal conversation structure. Our dataset is 16 times larger than all previously released datasets combined, the first to include adjudication of annotation disagreements, and the first to include context. We use our data to re-examine prior work, in particular, finding that 80% of conversations in a widely used dialogue corpus are either missing messages or contain extra messages. Our manually-annotated data presents an opportunity to develop robust data-driven methods for conversation disentanglement, which will help advance dialogue research. 1 Introduction When a group of people communicate in a common channel there are often multiple conversations occurring concurrently. Often there is no explicit structure identifying conversations or their structure, such as in Internet Relay Chat (IRC), Google Hangout, and comment sections on websites. Even when structure is provided it often has limited depth, such as threads in Slack, which provide one layer of branching. In all of these cases, conversations are entangled: all messages appear together, with no indication of separate conversations. Automatic disentanglement could be used to provide more interpretable results when searching over chat logs, and to help users understand what is happening when they join a channel. Over a decade of research has considered conversation disentanglement (Shen et al., 2006), but using datasets that are either small (2,500 messages, Elsner and Charniak, 2008) or not released (Adams and Martell, 2008). ∗[email protected] We introduce a conversation disentanglement dataset of 77,563 messages of IRC manually annotated with reply-to relations between messages.1 Our data is sampled from a technical support channel at 173 points in time between 2004 and 2018, providing a diverse set of speakers and topics, while remaining in a single domain. Our data is the first to include context, which differentiates messages that start a conversation from messages that are responding to an earlier point in time. We are also the first to adjudicate disagreements in disentanglement annotations, producing higher quality development and test sets. We also developed a simple model that is more effective than prior work, and showed that having diverse data makes it perform better and more consistently. We also analyze prior disentanglement work. In particular, a recent approach from Lowe et al. (2015, 2017). By applying disentanglement to an enormous log of IRC messages, they developed a resource that has been widely used (over 315 citations), indicating the value of disentanglement in dialogue research. However, they lacked annotated data to evaluate the conversations produced by their method. 
We find that 20% of the conversations are completely right or a prefix of a true conversation; 58% are missing messages, 3% contain messages from other conversations, and 19% have both issues. As a result, systems trained on the data will not be learning from accurate humanhuman dialogues. 2 Task Definition We consider a shared channel in which a group of people are communicating by sending messages that are visible to everyone. We label this data with a graph in which messages are nodes and edges indicate that one message is a response to another. Each connected component is a conversation. 1 https://jkk.name/irc-disentanglement 3847 [03:05] <delire> hehe yes. does Kubuntu have ’KPackage’? === delire found that to be an excellent interface to the apt suite in another distribution. === E-bola [...@...] has joined #ubuntu [03:06] <BurgerMann> does anyone know a consoleprog that scales jpegs fast and efficient?.. this digital camera age kills me when I have to scale photos :s [03:06] <Seveas> delire, yes [03:06] <Seveas> BurgerMann, convert [03:06] <Seveas> part of imagemagick === E-bola [...@...] has left #ubuntu [] [03:06] <delire> BurgerMann: ImageMagick [03:06] <Seveas> BurgerMann, i used that to convert 100’s of photos in one command [03:06] <BurgerMann> Oh... I’ll have a look.. thx =) Figure 1: #Ubuntu IRC log sample, earliest message first. Curved lines are our graph annotations of reply structure, which define two conversations shown with blue solid edges and green dashed edges. Figure 1 shows an example of two entangled conversations and their graph structure. It includes a message that receives multiple responses, when multiple people independently help BurgerMann, and the inverse, when the last message responds to multiple messages. We also see two of the users, delire and Seveas, simultaneously participating in two conversations. This multi-conversation participation is common. The example also shows two aspects of IRC we will refer to later. Directed messages, an informal practice in which a participant is named in the message. These cues are useful for understanding the discussion, but only around 48% of messages have them. System messages, which indicate actions like users entering the channel. These all start with ===, but not all messages starting with === are system messages, as shown by the second message in Figure 1. 3 Related Work IRC Disentanglement Data: The most significant work on conversation disentanglement is a line of papers developing data and models for the #Linux IRC channel (Elsner and Charniak, 2008; Elsner and Schudy, 2009; Elsner and Charniak, 2010, 2011). Until now, their dataset was the only publicly available set of messages with annotated conversations (partially re-annotated by Mehri and Carenini (2017) with reply-structure graphs), and has been used for training and evaluation in subsequent work (Wang and Oard, 2009; Mehri and Carenini, 2017; Jiang et al., 2018). We are aware of three other IRC disentanglement datasets. First, Adams and Martell (2008) studied disentanglement and topic identification, but did not release their data. Second, Riou et al. (2015) annotated conversations and discourse relations in the #Ubuntu-fr channel (French Ubuntu support). Third, Lowe et al. (2015, 2017) heuristically extracted conversations from the #Ubuntu channel.2 Their work opened up a new research opportunity by providing 930,000 disentangled conversations, and has already been the basis of many papers (315 citations), particularly on developing dialogue agents. 
This is far beyond the size of resources previously collected, even with crowdsourcing (Lasecki et al., 2013). Using our data we provide the first empirical evaluation of their method. Other Disentanglement Data: IRC is not the only form of synchronous group conversation online. Other platforms with similar communication formats have been studied in settings such as classes (Wang et al., 2008; Dulceanu, 2016), support communities (Mayfield et al., 2012), and customer service (Du et al., 2017). Unfortunately, only one of these resources (Dulceanu, 2016) is available, possibly due to privacy concerns. Another stream of research has used userprovided structure to get conversation labels (Shen et al., 2006; Domeniconi et al., 2016) and replyto relations (Wang and Ros´e, 2010; Wang et al., 2011a; Aumayr et al., 2011; Balali et al., 2013, 2014; Chen et al., 2017a). By removing these labels and mixing conversations they create a disentanglement problem. While convenient, this risks introducing a bias, as people write differently when explicit structure is defined, and only a few papers have released data (Abbott et al., 2016; Zhang et al., 2017; Louis and Cohen, 2015). Models: Elsner and Charniak (2008) explored various message-pair feature sets and linear classifiers, combined with local and global inference methods. Their system is the only publicly released statistical model for disentanglement of chat conversation, but most of the other work cited above applied similar models. We evaluate their model on both our data and our re-annotated version of their data. Recent work has applied neural networks (Mehri and Carenini, 2017; Jiang et al., 2 This channel was first proposed as a useful data source by Uthus and Aha (2013a,b,c), who identified messages relevant to the Unity desktop environment, and whether questions can be answered by the channel bot alone. 3848 Data Authors Anno. Available? Dataset Messages Parts Part Length / part Context / msg Yes This work Pilot 1,250 9 100–332 msg 19-48 0-100 1-5 47,500 95 500 msg 33-95 1000 1 Train ———— 1,000 10 100 msg 20-43 1000 3+a 18,963 48 1 hr 22-142 1000 1 Dev 2,500 10 250 msg 76-167 1000 2+a Test 5,000 10 500 msg 79-221 1000 3+a Channel 2 2,600 1 5 hr 387 0 2+a Elsner and Charniak (2008) 2,500 1 5 hr 379 0 1-6 Mehri and Carenini (2017) 530 1 1½ hr 54 0 3 Request Riou et al. (2015) 1,429 2 12 / 60 hr 21/70 0 2/1 Dulceanu (2016) 843 3 ½–1½ hr 8-9 n/a 1 No Shen et al. (2006) 1,645 16 35–381 msg 6-68 n/a 1 Adams and Martell (2008) 19,925 38 67–831 msg ? 0 3 Wang et al. (2008) 337 28 2–70 msg ? n/a 1-2 Mayfield et al. (2012) ? 45 1 hr 3-7 n/a 1 Guo et al. (2017) 1,500 1 48 hr 5 n/a 2 Table 1: Annotated disentanglement dataset comparison. Our data is much larger than prior work, one of the only released sets, and the only one with context and adjudication. ‘+a’ indicates there was an adjudication step to resolve disagreements. ‘?’ indicates the value is not in the paper and the authors no longer have access to the data. 2018), with slight gains in performance. Graph Structure: Within a conversation, we define a graph of reply-to relations. Almost all prior work with annotated graph structures has been for threaded web forums (Schuth et al., 2007; Kim et al., 2010; Wang et al., 2011b), which do not exhibit the disentanglement problem we explore. Studies that do consider graphs for disentanglement have used small datasets (Dulceanu, 2016; Mehri and Carenini, 2017) that are not always released (Wang et al., 2008; Guo et al., 2017). 
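As a concrete illustration of the reply-structure representation used throughout this paper, the sketch below groups messages into conversations by treating annotated reply-to links as undirected edges and taking connected components, as in the task definition of Section 2. The link encoding (a list of (message, antecedent) index pairs, with self-links marking conversation starts) is an assumption for illustration, not the released data format.

```python
from collections import defaultdict

def find(parent, x):
    # Path-compressing find: follow parents until the root representative.
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def conversations_from_links(num_messages, links):
    """Group messages into conversations (connected components).

    `links` is a list of (reply, antecedent) index pairs; a message that
    starts a conversation is linked to itself, mirroring the annotation scheme.
    """
    parent = list(range(num_messages))
    for child, antecedent in links:
        ra, rb = find(parent, child), find(parent, antecedent)
        if ra != rb:
            parent[ra] = rb  # union the two components
    groups = defaultdict(list)
    for msg in range(num_messages):
        groups[find(parent, msg)].append(msg)
    return list(groups.values())

# Toy example: messages 2 and 4 answer conversation A (started by 0),
# messages 3 and 5 answer conversation B (started by 1).
links = [(0, 0), (1, 1), (2, 0), (3, 1), (4, 2), (5, 3)]
print(conversations_from_links(6, links))  # -> [[0, 2, 4], [1, 3, 5]]
```

Because a single message may reply to several antecedents, as in Figure 1, a link can merge what would otherwise be two components into one conversation.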
4 Data We introduce a manually annotated dataset of 77,563 messages: 74,963 from the #Ubuntu IRC channel,3 and 2,600 messages from the #Linux IRC channel.4 Annotating the #Linux data enables comparison with Elsner and Charniak (2008), while the #Ubuntu channel has over 34 million messages, making it an interesting largescale resource for dialogue research. It also allows us to evaluate Lowe et al. (2015, 2017)’s widely used heuristically disentangled conversations. When choosing samples we had to strike a balance between the number of samples and the size 3 https://irclogs.ubuntu.com/ 4 From Elsner and Charniak (2008), including the 100 messages they did not annotate. of each one. We sampled the training set in three ways: (1) 95 uniform length samples, (2) 10 smaller samples to check annotator agreement, and (3) 48 time spans of one hour that are diverse in terms of the number of messages, the number of participants, and what percentage of messages are directed. For additional details of the data selection process, see the supplementary material. 4.1 Dataset Comparison Table 1 presents properties of our data and prior work on disentanglement in real-time chat. Availability: Only one other dataset, annotated twice, has been publicly released, and two others were shared when we contacted the authors. Scale: Our dataset is 31 times larger than almost any other dataset, the exception being one that was not released. As well as being larger, our data is also based on many different points in time. This is crucial because a single sample presents a biased view of the task. Having multiple samples also means our training and evaluation sets are from different points in time, preventing overfitting to specific users or topics of conversation. Context: We are the first to consider the fact that IRC data is sampled from a continuous stream and the context prior to the sample is important. In prior work, a message with no antecedent could 3849 either be the start of a conversation or a response to a message that occurs prior to the sample. Adjudication: Our labeling method is similar to prior work, but we are the first to perform adjudication of annotations. While some cases were ambiguous, often one option was clearly incorrect. By performing adjudication we can reduce these errors, creating high quality sets. 4.2 Methodology Guidelines: We developed annotation guidelines through three rounds of pilot annotations in which annotators labeled a set of messages and discussed all disagreements. We instructed annotators to link each message to the one or more messages it is a response to. If a message started a new conversation it was linked to itself. We also described a series of subtle cases, using one to three examples to tease out differences. These included when a question is repeated, when a user responds multiple times, interjections, etc. For our full guidelines, see the supplementary material. All annotations were performed using SLATE (Kummerfeld, 2019), a custom-built tool with features designed specifically for this task.5 Adjudication: Table 1 shows the number of annotators for each subset of our data. For the development, test, out-of-domain data, and a small set of the training data, we labeled each sample multiple times and then resolved all disagreements in an adjudication step. During adjudication, there was no indication of who had given which annotation, and there was the option to choose a different annotation entirely. 
In order to maximize the volume annotated, we did not perform adjudication for most of the training data. Also, the 18,924 training message set initially only had 100 messages of context per sample, and we later added another 900 lines and checked every message that was not a reply to see if it was a response to something in the additional context. Annotators: The annotators were all fluent English speakers with a background in computer science (necessary to understand the technical content): a postdoc, a master’s student, and three CS undergraduates. All adjudication was performed by the postdoc, who is a native English speaker. Time: Annotations took between 7 and 11 seconds per message depending on the complexity of the discussion, and adjudication took 5 seconds 5https://jkk.name/slate [21:29] <MOUD> that reminds me... how can I use CTRL+C/V on terminal? [21:29] <MonkeyDust> MOUD ctrl ins pasts [21:29] <nacc> MOUD: it depends on your terminal application, in gnome-terminal ... -> [21:30] <MOUD> -.[17:35] <Moae> i have to remove LCDproc ... [17:38] <Madsy> Moae: sudo make uninstall && make clean? :-) [17:39] <Madsy> Open the makefile and see what the targets are. -> [17:40] <Madsy> Moae: Don’t message people in private please. It’s ... [17:42] <Moae> Madsy: sorry [17:42] <Moae> Madsy where i have to launch the command? Figure 2: Examples of annotation ambiguity. Top: The message from MOUD could be a response to either nacc or MonkeyDust. Bottom: The message from Madsy could be part of this conversation or a separate exchange between the same users. per message. Overall, we spent approximately 240 hours on annotation and 15 hours on adjudication. 4.3 Annotation Quality Our annotations define two levels of structure: (1) links between pairs of messages, and (2) sets of messages, where each set is one conversation. Annotators label (1), from which (2) can be inferred. Table 2 presents inter-annotator agreement measures for both cases. These are measured in the standard manner, by comparing the labels from different annotators on the same data. We also include measurements for annotations in prior work. Figure 2 shows ambiguous examples from our data to provide some intuition for the source of disagreements. In both examples the disagreement involves one link, but the conversation structure in the second case is substantially changed. Some disagreements in our data are mistakes, where one annotation is clearly incorrect, and some are ambiguous cases, such as these. In Channel Two, we also see mistakes and ambiguous cases, including a particularly long discussion about a user’s financial difficulties that could be divided in multiple ways (also noted by Elsner and Charniak (2008)). Graphs: We measure agreement on the graph structure annotation using Cohen (1960)’s κ. This measure of inter-rater reliability corrects for chance agreement, accounting for the class imbalance between linked and not-linked pairs. Values are in the good agreement range proposed by Altman (1990), and slightly higher than for Mehri and Carenini (2017)’s annotations. Results are not shown for Elsner and Charniak (2008) because they did not annotate graphs. 3850 Conversations: We consider three metrics:6 (1) Variation of Information (VI, Meila, 2007). A measure of information gained or lost when going from one clustering to another. It is the sum of conditional entropies H(Y |X) + H(X|Y ), where X and Y are clusterings of the same set of items. 
We consider a scaled version, using the bound for n items that VI(X; Y ) ≤log(n), and present 1−VI so that larger values are better. (2) One-to-One Overlap (1-1, Elsner and Charniak, 2008). Percentage overlap when conversations from two annotations are optimally paired up using the max-flow algorithm. We follow Mehri and Carenini (2017) and keep system messages. (3) Exact Match F1. Calculated using the number of perfectly matching conversations, excluding conversations with only one message (mostly system messages). This is an extremely challenging metric. We include it because it is easy to understand and it directly measures a desired value (perfectly extracted conversations). Our scores are higher in 4 cases and lower in 5. Interestingly, while κ was higher for us than Mehri and Carenini (2017), our scores for conversations are lower. This is possible because a single link can merge two conversations, meaning a single disagreement in links can cause a major difference in conversations. This may reflect the fact that our annotation guide was developed for the Ubuntu channel, which differs in conversation style from the Channel Two data. Manually comparing the annotations, there was no clear differences in the types of disagreements. Agreement is lower on the Channel Two data, particularly on its test set. From this we conclude that there is substantial variation in the difficulty of conversation disentanglement across datasets.7 5 Evaluating Disentanglement Quality In this section, we propose new simple disentanglement models that perform better than prior methods, and re-examine prior work. The models we consider are: Previous: Each message is linked to the most recent non-system message before it. 6 Metrics such as Cohen’s κ and Krippendorff’s α are not applicable to conversations because there is no clear mapping from one set of conversations to another. 7 Riou et al. (2015) also observe this, noting that their French IRC data is less entangled than Elsner’s, making it possible to achieve an agreement level of 0.95. Graph Conversation Data κ VI 1-1 F1 Train (subset) 0.71 94.2 85.0 52.5 Dev 0.72 94.0 83.8 42.9 Test 0.74 95.0 83.8 49.5 Channel Two 0.72 90.4 75.9 28.2 Subparts of Channel Two Pilot This work 0.68 90.9 82.4 43.5 Elsner (2008) 94.2 90.0 40.7 Dev This work 0.74 92.2 81.7 27.5 Mehri This work 0.73 86.2 71.9 22.2 Mehri (2017) 0.67 91.3 80.7 38.7 Test This work 0.73 84.3 66.5 23.8 Elsner (2008) 80.8 62.4 20.6 Table 2: Inter-annotator agreement for graphs (κ) and conversations (1-1, VI, F1). Our annotations are comparable to prior work, and κ is in the good agreement range proposed by Altman (1990). We also adjudicated all disagreements to improve quality. Lowe et al. (2017): A heuristic based on time differences and identifying directed messages. Elsner and Charniak (2008): A linear pairwise scoring model in which each message is linked to the highest scoring previous message, or none if all scores are below zero. Linear: Our linear ranking model that scores potential antecedents using a feature-based model based on properties such as time, directedness, word overlap, and context. Feedforward (FF): Our feedforward model with the same features as the linear model, plus a sentence embedding calculated using an average of vectors from GloVe (Pennington et al., 2014). Union: Run 10 FF models trained with different random seeds and combine their output by keeping all edges predicted. Vote: Run 10 FF models and combine output by keeping the edges they all agree on. 
Link messages with no agreed antecedent to themselves. Intersect: Conversations that 10 FF models agree on, and other messages as singleton conversations.

For Channel Two we also compare to Wang and Oard (2009) and Mehri and Carenini (2017), but their code was unavailable, preventing evaluation on our data. We exclude Jiang et al. (2018) as they substantially modified the dataset. For details of models, including hyperparameters tuned on the development set, see the supplementary material.

Table 3: Graph results on the Ubuntu test set. * indicates a significant difference at the 0.01 level compared to Linear.

System        P      R      F
Previous      35.7*  34.4*  35.0*
Linear        64.7   62.3   63.5
Feedforward   73.7*  71.0*  72.3*
x10 union     64.3   79.7*  71.2*
x10 vote      74.9*  72.2*  73.5*

Table 4: Conversation results on the Ubuntu test set. Our new model is substantially better than prior work. Significance is not measured as we are unaware of methods for set structured data.

System         VI    1-1   P     R     F
Previous       66.1  27.6  0.0   0.0   0.0
Linear         88.9  69.5  19.3  24.9  21.8
Feedforward    91.3  75.6  34.6  38.0  36.2
x10 union      86.2  62.5  40.4  28.5  33.4
x10 vote       91.5  76.0  36.3  39.7  38.0
x10 intersect  69.3  26.6  67.0  21.1  32.1
Lowe (2017)    80.6  53.7  10.8  7.6   8.9
Elsner (2008)  82.1  51.4  12.1  21.5  15.5

Table 5: Performance with different training conditions on the Ubuntu test set. For Graph-F, * indicates a significant difference at the 0.01 level compared to Standard. Results are averages over 10 runs, varying the data and random seeds. The standard deviation is shown in parentheses.

Training Condition   Graph-F      Conv-F
Standard             72.3 (0.4)   36.2 (1.7)
No context           72.3 (0.2)   37.6 (1.6)
1k random msg        63.0* (0.4)  21.0 (2.3)
2x 500 msg samples   61.4* (1.8)  20.4 (3.2)

5.1 Results

Graphs: Table 3 presents precision, recall, and F-score over links. Our models perform much better than the baseline. As we would expect, vote has higher precision, while union has higher recall. Vote has higher recall than a single feedforward model because it identifies more of the self-link cases (its default when there is no agreement).

Conversations: Table 4 presents results on the metrics defined in Section 4.3. There are three regions of performance. First, the baseline has consistently low scores since it forms a single conversation containing all messages. Second, Elsner and Charniak (2008) and Lowe et al. (2017) perform similarly, with one doing better on VI and the other on 1-1, though Elsner and Charniak (2008) do consistently better across the exact conversation extraction metrics. Third, our methods do best, with x10 vote best in all cases except precision, where the intersect approach is much better.

Dataset Variations: Table 5 shows results for the feedforward model with several modifications to the training set, designed to test corpus design decisions. Removing context does not substantially impact results. Decreasing the data size to match Elsner and Charniak (2008)'s training set leads to worse results, both if the sentences are from diverse contexts (3rd row), and if they are from just two contexts (bottom row). We also see a substantial increase in the standard deviation when only two samples are used, indicating that performance is not robust when the data is not widely sampled.

5.2 Channel Two Results

For channel Two, we consider two annotations of the same underlying text: ours and Elsner and Charniak (2008)'s. To compare with prior work, we use the metrics defined by Shen et al.
(2006, Shen) and Elsner and Charniak (2008, Loc).8 We do not use these for our data as they have been superseded by more rigorously studied metrics (VI for Shen) or make strong assumptions about the data (Loc). We do not evaluate on graphs because Elsner and Charniak (2008)’s annotations do not include them. This also prevents us from training our method on their data. Model Comparison: For Elsner’s annotations (top section of Table 6), their approach remains the most effective with just Channel Two data. However, training on our Ubuntu data, treating Channel Two as an out-of-domain sample, yields substantially higher performance on two metrics and comparable performance on the third. On our annotations (bottom section), we see the same trend. In both cases, the heuristic from Lowe et al. (2015, 2017) performs poorly. We suspect our model trained only on Channel Two data is overfitting, 8 Loc is a Rand index that only counts messages less than 3 apart. Shen calculates the F-score for each gold-system conversation pair, finds the max for each gold conversation, and averages weighted by the size of the gold conversation (this allows a predicted conversation to match to zero, one, or multiple gold conversations). Following Wang and Oard (2009) and Mehri and Carenini (2017), we include system messages in evaluation. We also checked our metric implementations by removing system messages and calculating results for Elsner and Charniak (2008)’s output. 3852 Test Train System 1-1 Loc Shen Elsner Ch 2 (Elsner) Elsner (2008) 53.1 81.9 55.1 Ch 2 (Elsner) Wang (2009) 47.0 75.1 52.8 Ch 2 (Ours) Elsner (2008) 51.1 78.0 53.9 Ch 2 (Ours) Feedforward 52.1 77.8 53.8 Multiple Mehri (2017) 55.2 78.6 56.6 n/a Lowe (2017) 45.1 73.8 51.8 Ubuntu Feedforward 57.5 82.0 60.5 Ours Ch 2 (Elsner) Elsner (2008) 54.0 81.2 56.3 Ch 2 (Ours) Elsner (2008) 59.7 80.8 63.0 Ch 2 (Ours) Feedforward 57.7 80.3 59.8 n/a Lowe (2017) 43.4 67.9 50.7 Ubuntu Feedforward 62.8 84.3 66.6 Table 6: Results for different annotations of Channel Two. The best result is bold, and the best result with only Channel Two data is underlined. as the graph F-score on the training data is 94, whereas on the Ubuntu data it is 80. Data Comparison: Comparing the same models in the top and bottom section, scores are consistently higher for our annotations, except for the Lowe et al. (2015, 2017) heuristic. Comparing the annotations, we find that their annotators identified between 250 and 328 conversations (mean 281), while we identify 257. Beyond this difference it is hard to identify consistent variations in the annotations. Another difference is the nature of the evaluation. On Elsner’s data, evaluation is performed by measuring relative to each annotators labels and averaging the scores. On our data, we adjudicated the annotations, providing a single gold standard. Evaluating our ChannelTwo-trained Feedforward model on our two preadjudication annotations and averaging scores, the results are lower by 3.1, 1.8, and 4.3 on 1-1, Loc and Shen respectively. This suggests that our adjudication process removes annotator mistakes that introduce noise into the evaluation. 5.3 Evaluating Lowe et al. (2015, 2017) The previous section showed that only 10.8% of the conversations extracted by the heuristic in Lowe et al. (2015, 2017) are correct (P in Table 4). We focus on precision because the primary use of their method has been to extract conversations to train and test dialogue systems, which will be impacted by errors in the conversations. 
Recall errors (measuring missed conversations) are not as serious a problem because the Ubuntu chat logs are so large that even with low recall a large number of conversations will still be extracted. Additional Metrics: First, we must check this is Missed [02:06] <TheBuntu> in virtualbox... win7 in VM... i have an ntfs partition.. How do i access that partition in VM ? [02:06] <L1nuxRules> share it with the vm [02:08] <L1nuxRules> anywy this is ubuntu so windows &> /duv/null [02:09] <L1nuxRules> dev* Extra [02:11] <L1nuxRules> it shouldnt unless theres depency issues [02:11] <TheBuntu> L1nuxRules: how do i share with the vm... i dont see VM in share Missed [02:12] <L1nuxRules> buntu if its virtuasl box click on setttings > shared folders Missed [02:13] <TheBuntu> ok Figure 3: An example conversation extracted by the heuristic from Lowe et al. (2015, 2017) with the messages it misses and the one it incorrectly includes. not an artifact of our test set. On our development set, P, R, and F are slightly higher (11.6, 8.1 and 9.5), but VI and 1-1 are slightly lower (80.0 and 51.7). We can also measure performance as the distribution of scores over all of the samples we annotated. The average precision was 10, and varied from 0 to 50, with 19% of cases at 0 and 95% below 23. To avoid the possibility that we made a mistake running their code, we also considered evaluating their released conversations. On the data that overlapped with our annotations, the precision was 9%. These results indicate that the test set performance is not an aberration: the heuristic’s results are consistently low, with only about 10% of output conversations completely right. Error Types: Figure 3 shows an example heuristic output with several types of errors. The initial question was missed, as was the final resolution, and in the middle there is a message from a separate conversation. 67% of conversations were a subset of a true conversation (ie., only missed messages), and 3% were a superset of a true conversation (ie., only had extra messages). The subset cases were missing 1-187 messages (missing 56% of the conversation on average) and the superset cases had 1-3 extra messages (an extra 31% of the conversation on average). The first message is particularly important because it is usually the question being resolved. In 47% of cases the first message is not the true start of a conversation. It is important to note that the dialogue task the conversations were intended for only uses a prefix of each conversation. For this purpose, missing the end of a conversation is not a problem. In 9% of cases, the conversation is a true prefix of a gold conversation. Combined with the exact match cases, that means 20% of the conversations are accurate as used in the next utterance selection task. A further 9% of cases are a continuous 3853 Figure 4: Time between consecutive messages in conversations. Jumps are at points when the scale shifts as indicated on the x-axis. The circled upper right point is the sum over all larger values, indicating that messages weeks apart are often in the same conversation. chunk of a conversation, but missing one or more messages at the start. Long Distance Links: One issue we observed is that conversations often spanned days. We manually inspected a random sample: 20 conversations 12 to 24 hours long, and 20 longer than 24 hours. 
All of the longer conversations and 17 of the shorter ones were clearly incorrect.9 This issue is not measured in the analysis above because our samples do not span days (they are 5.5 hours long on average when including context). The original work notes this issue, but claims that it is rare. We measured the time between consecutive messages in conversations and plot the frequency of each value in Figure 4.10 The figure indicates that the conversations often extend over days, or even more than a month apart (note the point in the topright corner). In contrast, our annotations rarely contain links beyond an hour, and the output of our model rarely contains links longer than 2 hours. Causes: To investigate possible reasons for these issues, we measured several properties of our data to test assumptions in the heuristic. First, the heuristic assumes if all directed messages from a user are in one conversation, all undirected messages from the user are in the same conversation. 9 The exceptions were two cases where a user thanked another user for their help the previous day, and one case where a user asked if another user ended up resolving their question. 10 In 68,002 conversations there was a negative time difference because a message was out of order. To resolve this, we sorted the messages in each conversation by timestamp. Model Test Train MRR R@1 R@5 DE Lowe Lowe 0.75 0.61 0.94 Ours 0.63 0.45 0.90 Ours Lowe 0.72 0.57 0.93 Ours 0.76 0.63 0.94 ESIM Lowe Lowe 0.82 0.72 0.97 Ours 0.69 0.53 0.92 Ours Lowe 0.78 0.67 0.95 Ours 0.83 0.74 0.97 Table 7: Next utterance prediction results, with various models and training data variations. The decrease in performance when training on one set and testing on the other suggests they differ in content. We find this is true 52.2% of the time. Second, it assumes that it is rare for two people to respond to an initial question. In our data, of the messages that start a conversation and receive a response, 37.7% receive multiple responses. Third, that a directed message can start a conversation, which we find in 6.8% of cases. Fourth, that the first response to a question is within 3 minutes, which we find is true in 94.8% of conversations. Overall, these assumptions have mixed support from our data, which may be why the heuristic produces so few accurate conversations. Dialogue Modeling: Most of the work building on Lowe et al. (2017) uses the conversations to train and evaluate dialogue systems. To see the impact on downstream work, we constructed a next utterance selection task as described in their work, disentangling the entire #Ubuntu logs with our feedforward model. We tried two dialogue models: a dual-encoder (Lowe et al., 2017), and Enhanced Long Short-Term Memory (Chen et al., 2017b). For full details of the task and model hyperparameters, see the supplementary material. Table 7 show results when varying the training and test datasets. Training and testing on the same dataset leads to higher performance than training on one and testing on the other. This is true even though the heuristic data contains nine times as many training conversations. This is evidence that our conversations are fundamentally different despite being derived from the same resource and filtered in the same way. This indicates that our changes lead to quantitatively different downstream models. Fortunately, the relative performance of the two models remains consistent across the two datasets. 
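To make the next-utterance-selection numbers in Table 7 concrete, the sketch below computes MRR and Recall@k from model scores over candidate responses. The input convention (the ground-truth response is the candidate at index 0 of each scored list) is an assumption for illustration, not a claim about any particular model or data release.

```python
def ranking_metrics(score_lists, ks=(1, 5)):
    """Compute MRR and Recall@k for next-utterance selection.

    `score_lists` holds one list of model scores per test example; by
    convention here the ground-truth response is the candidate at index 0.
    """
    mrr, hits = 0.0, {k: 0 for k in ks}
    for scores in score_lists:
        # Rank of the true candidate: 1 + number of candidates scored higher.
        rank = 1 + sum(s > scores[0] for s in scores[1:])
        mrr += 1.0 / rank
        for k in ks:
            hits[k] += rank <= k
    n = len(score_lists)
    return mrr / n, {k: hits[k] / n for k in ks}

# Toy check with two examples of 10 candidates each.
example = [[0.9, 0.1, 0.3, 0.2, 0.5, 0.0, 0.1, 0.2, 0.3, 0.4],   # true rank 1
           [0.4, 0.7, 0.6, 0.2, 0.3, 0.0, 0.1, 0.2, 0.3, 0.1]]   # true rank 3
print(ranking_metrics(example))  # -> (0.666..., {1: 0.5, 5: 1.0})
```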
3854 5.4 Re-Examining Disentanglement Research Using our data we also investigate other assumptions made in prior work. The scale of our data provides a more robust test of these ideas. Number of samples: Table 1 shows that all prior work with available data has considered a small number of samples. In Table 5, we saw that training on less diverse data samples led to models that performed worse and with higher variance. We can also investigate this by looking at performance on the different samples in our test set. The difficulty of samples varies considerably, with the F-score of our model varying from 11 to 40 and annotator agreement scores before adjudication varying from 0.65 to 0.78. The model performance and agreement levels are also strongly correlated, with a Spearman’s rank correlation of 0.77. This demonstrates the importance of evaluating on data from more than one point in time to get a robust estimate of performance. How far apart consecutive messages in a conversation are: Elsner and Charniak (2008) and Mehri and Carenini (2017) use a limit of 129 seconds, Jiang et al. (2018) limit to within 1 hour, Guo et al. (2017) limit to within 8 messages, and we limit to within 100 messages. Figure 4 shows the distribution of time differences in our conversations. 94.9% are within 2 minutes, and almost all are within an hour. 88.3% are 8 messages or less apart, and 99.4% are 100 or less apart. This suggests that the lower limits in prior work are too low. However, in Channel Two, 98% of messages are within 2 minutes, suggesting this property is channel and sample dependent. Concurrent conversations: Adams and Martell (2008) forced annotators to label at most 3 conversations, while Jiang et al. (2018) remove conversations to ensure there are no more than 10 at once. We find there are 3 or fewer 46.4% of the time and 10 or fewer 97.3% of the time (where time is in terms of messages, not minutes, and we ignore system messages), Presumably the annotators in Adams and Martell (2008) would have proposed changes if the 3 conversation limit was problematic, suggesting that their data is less entangled than ours. Conversation and message length: Adams and Martell (2008) annotate blocks of 200 messages. If such a limit applied to our data, 13.7% of conversations would not finish before the cutoff point. This suggests that their conversations are typically shorter, which is consistent with the previous conclusion that their conversations are less entangled. Jiang et al. (2018) remove conversations with fewer than 10 messages, describing them as outliers, and remove messages shorter than 5 words, arguing that they were not part of real conversations. Not counting conversations with only system messages, 83.4% of our conversations have fewer than 10 messages, 40.8% of which have multiple authors. 88.5% of messages with less than 5 words are in conversations with more than one author. These values suggest that these messages and conversations are real and not outliers. Overall: This analysis indicates that working from a small number of samples can lead to major bias in system design for disentanglement. There is substantial variation across channels, and across time within a single channel. 6 Conclusion Conversation disentanglement has been understudied because of a lack of public, annotated datasets. We introduce a new corpus that is larger and more diverse than any prior corpus, and the first to include context and adjudicated annotations. 
Using our data, we perform the first empirical analysis of Lowe et al. (2015, 2017)’s widely used data, finding that only 20% of the conversations their method produces are true prefixes of conversations. The models we develop have already enabled new directions in dialogue research, providing disentangled conversations for DSTC 7 track 1 (Gunasekara et al., 2019; Yoshino et al., 2018) and will be used in DSTC 8. We also show that diversity is particularly important for the development of robust models. This work fills a key gap that has limited research, providing a new opportunity for understanding synchronous multiparty conversation online. Acknowledgements We would like to thank Jacob Andreas, Greg Durrett, Will Radford, Ryan Lowe, and Glen Pink for helpful feedback on earlier drafts of this paper and the anonymous reviewers for their helpful suggestions. This material is based in part on work supported by IBM as part of the Sapphire Project at the University of Michigan. Any opinions, findings, conclusions or recommendations expressed above do not necessarily reflect the views of IBM. 3855 References Rob Abbott, Brian Ecker, Pranav Anand, and Marilyn Walker. 2016. Internet Argument Corpus 2.0: An SQL schema for Dialogic Social Media and the Corpora to go with it. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016). Page H. Adams and Craig H. Martell. 2008. Topic Detection and Extraction in Chat. In 2008 IEEE International Conference on Semantic Computing. Douglas G Altman. 1990. Practical statistics for medical research. CRC press. Erik Aumayr, Jeffrey Chan, and Conor Hayes. 2011. Reconstruction of threaded conversations in online discussion forums. In International AAAI Conference on Web and Social Media. Ali Balali, Hesham Faili, and Masoud Asadpour. 2014. A Supervised Approach to Predict the Hierarchical Structure of Conversation Threads for Comments. The Scientific World Journal. Ali Balali, Hesham Faili, Masoud Asadpour, and Mostafa Dehghani. 2013. A Supervised Approach for Reconstructing Thread Structure in Comments on Blogs and Online News Agencies. Computacion y Sistemas, 17(2):207–217. Jun Chen, Chaokun Wang, Heran Lin, Weiping Wang, Zhipeng Cai, and Jianmin Wang. 2017a. Learning the Structures of Online Asynchronous Conversations, volume 10177 of Lecture Notes in Computer Science. Springer. Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2017b. Enhanced LSTM for natural language inference. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1. Jacob Cohen. 1960. A Coefficient of Agreement for Nominal Scales. Educational and Psychological Measurement, 20(1):37–46. Giacomo Domeniconi, Konstantinos Semertzidis, Vanessa Lopez, Elizabeth M. Daly, Spyros Kotoulas, and Gianluca Moro. 2016. A Novel Method for Unsupervised and Supervised Conversational Message Thread Detection. In Proceedings of the 5th International Conference on Data Management Technologies and Applications - Volume 1: DATA,. Wenchao Du, Pascal Poupart, and Wei Xu. 2017. Discovering Conversational Dependencies between Messages in Dialogs. In Proceedings of the ThirtyFirst AAAI Conference on Artificial Intelligence. Andrei Dulceanu. 2016. Recovering implicit thread structure in chat conversations. Revista Romana de Interactiune Om-Calculator, 9:217–232. Micha Elsner and Eugene Charniak. 2008. You Talking to Me? A Corpus and Algorithm for Conversation Disentanglement. 
In Proceedings of ACL-08: HLT. Micha Elsner and Eugene Charniak. 2010. Disentangling Chat. Computational Linguistics, 36(3):389– 409. Micha Elsner and Eugene Charniak. 2011. Disentangling chat with local coherence models. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. Micha Elsner and Warren Schudy. 2009. Bounding and comparing methods for correlation clustering beyond ilp. In Proceedings of the Workshop on Integer Linear Programming for Natural Language Processing. Chulaka Gunasekara, Jonathan K. Kummerfeld, Lazaros Polymenakos, , and Walter S. Lasecki. 2019. Dstc7 task 1: Noetic end-to-end response selection. In 7th Edition of the Dialog System Technology Challenges at AAAI 2019. Gaoyang Guo, Chaokun Wang, Jun Chen, and Pengcheng Ge. 2017. Who Is Answering to Whom? Finding ”Reply-To” Relations in Group Chats with Long Short-Term Memory Networks. In International Conference on Emerging Databases (EDB’17). Jyun-Yu Jiang, Francine Chen, Yan-Ying Chen, and Wei Wang. 2018. Learning to disentangle interleaved conversational threads with a siamese hierarchical network and similarity ranking. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers). Su Nam Kim, Li Wang, and Timothy Baldwin. 2010. Tagging and Linking Web Forum Posts. In Proceedings of the Fourteenth Conference on Computational Natural Language Learning. Jonathan K. Kummerfeld. 2019. Slate: A superlightweight annotation tool for experts. In Proceedings of ACL 2019, System Demonstrations. Walter S. Lasecki, Ece Kamar, and Dan Bohus. 2013. Conversations in the crowd: Collecting data for taskoriented dialog learning. In Proceedings of the Human Computation Workshop on Scaling Speech, Language Understanding and Dialogue through Crowdsourcing. Annie Louis and Shay B. Cohen. 2015. Conversation Trees: A Grammar Model for Topic Structure in Forums. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. 3856 Ryan Lowe, Nissan Pow, Iulian Serban, and Joelle Pineau. 2015. The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured MultiTurn Dialogue Systems. In Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue. Ryan Lowe, Nissan Pow, Iulian Vlad Serban, Laurent Charlin, Chia-Wei Liu, and Joelle Pineau. 2017. Training End-to-End Dialogue Systems with the Ubuntu Dialogue Corpus. Dialogue & Discourse, 8(1). Elijah Mayfield, David Adamson, and Carolyn Penstein Ros´e. 2012. Hierarchical Conversation Structure Prediction in Multi-Party Chat. In Proceedings of the 13th Annual Meeting of the Special Interest Group on Discourse and Dialogue. Shikib Mehri and Giuseppe Carenini. 2017. Chat disentanglement: Identifying semantic reply relationships with random forests and recurrent neural networks. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Marina Meila. 2007. Comparing clusterings–an information based distance. Journal of Multivariate Analysis, 98(5):873–895. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, Doha, Qatar. Association for Computational Linguistics. Matthieu Riou, Soufian Salim, and Nicolas Hernandez. 2015. 
Using discursive information to disentangle French language chat. In NLP4CMC 2nd Workshop on Natural Language Processing for ComputerMediated Communication / Social Media at GSCL Conference. Anna Schuth, Maarten Marx, and Maarten de Rijke. 2007. Extracting the discussion structure in comments on news-articles. In Proceedings of the 9th annual ACM international workshop on Web information and data management. Dou Shen, Qiang Yang, Jian-Tao Sun, and Zheng Chen. 2006. Thread Detection in Dynamic Text Message Streams. In Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. David Uthus and David Aha. 2013a. Detecting BotAnswerable Questions in Ubuntu Chat. In Proceedings of the Sixth International Joint Conference on Natural Language Processing. David Uthus and David Aha. 2013b. Extending word highlighting in multiparticipant chat. In Florida Artificial Intelligence Research Society Conference. David C. Uthus and David W. Aha. 2013c. The Ubuntu Chat Corpus for Multiparticipant Chat Analysis. In Analyzing Microtext: Papers from the 2013 AAAI Spring Symposium. Hongning Wang, Chi Wang, ChengXiang Zhai, and Jiawei Han. 2011a. Learning Online Discussion Structures by Conditional Random Fields. In Proceedings of the 34th International ACM SIGIR Conference on Research and Development in Information Retrieval. Li Wang, Marco Lui, Su Nam Kim, Joakim Nivre, and Timothy Baldwin. 2011b. Predicting thread discourse structure over technical web forums. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 13–25. Lidan Wang and Douglas W. Oard. 2009. Contextbased Message Expansion for Disentanglement of Interleaved Text Conversations. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Yi-Chia Wang, Mahesh Joshi, William Cohen, and Carolyn Ros´e. 2008. Recovering Implicit Thread Structure in Newsgroup Style Conversations. In Proceedings of the International Conference on Weblogs and Social Media. Yi-Chia Wang and Carolyn P. Ros´e. 2010. Making Conversational Structure Explicit: Identification of Initiation-response Pairs within Online Discussions. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Koichiro Yoshino, Chiori Hori, Julien Perez, Luis Fernando D’Haro, Lazaros Polymenakos, Chulaka Gunasekara, Walter S. Lasecki, Jonathan K. Kummerfeld, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan, Xiang Gao, Huda Alamari, Tim K. Marks, Devi Parikh, and Dhruv Batra. 2018. Dialog system technology challenge 7. In NeurIPS Workshop: The 2nd Conversational AI: ”Today’s Practice and Tomorrow’s Potential”. Amy Zhang, Bryan Culbertson, and Praveen Paritosh. 2017. Characterizing Online Discussion Using Coarse Discourse Sequences. In 11th AAAI International Conference on Web and Social Media (ICWSM).
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3857–3867 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3857 Self-Supervised Dialogue Learning Jiawei Wu and Xin Wang and William Yang Wang Department of Computer Science University of California, Santa Barbara Santa Barbara, CA 93106 USA {jiawei wu,xwang,william}@cs.ucsb.edu Abstract The sequential order of utterances is often meaningful in coherent dialogues, and the order changes of utterances could lead to lowquality and incoherent conversations. We consider the order information as a crucial supervised signal for dialogue learning, which, however, has been neglected by many previous dialogue systems. Therefore, in this paper, we introduce a self-supervised learning task, inconsistent order detection, to explicitly capture the flow of conversation in dialogues. Given a sampled utterance pair triple, the task is to predict whether it is ordered or misordered. Then we propose a samplingbased self-supervised network SSN to perform the prediction with sampled triple references from previous dialogue history. Furthermore, we design a joint learning framework where SSN can guide the dialogue systems towards more coherent and relevant dialogue learning through adversarial training. We demonstrate that the proposed methods can be applied to both open-domain and taskoriented dialogue scenarios, and achieve the new state-of-the-art performance on the OpenSubtitiles and Movie-Ticket Booking datasets. 1 Introduction In recent years, dialogue systems have achieved fruitful results with neural conversation models in both open-domain generation (Ritter et al., 2011; Sordoni et al., 2015b; Li et al., 2016b, 2017; Xu et al., 2017; Zhang et al., 2018b) and task-oriented completion (Wen et al., 2015, 2017; Williams et al., 2017; Bordes et al., 2017; Su et al., 2018). These methods empower lots of real-world dialogue applications such as Google Home and Apple Siri. However, the utterance generation from dialogue systems still faces some critical challenges, including utterance blandness and incoherence (Gao et al., 2018). They are mainly caused by the objective function of the dialogue systems that prefer utterances with unconditionally high probability (Li et al., 2016a). We argue that in a meaningful and coherent dialogue, the change of utterance order will lead to a low-quality dialogue. However, most existing neural-based dialogue systems either encode the full dialogue history (Li et al., 2017; Xu et al., 2017) or only the current utterance (Liu and Lane, 2018). None of them explicitly models the sequential order and studies its criticality to the dialogue learning problem. In this paper, we explore the sequential order within the dialogue as the self-supervised signal to guide meaningful and coherent dialogue learning. We introduce a self-supervised learning task, inconsistent order detection, to explicitly capture the order signal of the dialogue. The task is defined as, given a target utterance pair triple, the model is required to predict whether the triple is correctly ordered or not. For instance, the utterance pair triple ⟨(Q1, A1), (Q4, A4), (Q2, A2)⟩is misordered. The key to solving this task is to model the utterance order based on the dialogue context effectively. 
But when directly encoding the full dialogue history along the temporal order, the model actually only focuses on the ending utterances, and earlier information is largely discarded (Li et al., 2017). Thus, we propose a sampling-based selfsupervised network (SSN) to account for the forgetfulness problem and solve the inconsistent order detection task. In order to accurately predict if a target utterance triple is ordered or not, we randomly sample utterance triples from the dialogue history as the reference to incorporate the dialogue context. Since for the same target utterance triple, the sampled triple references are different at different iterations during training. It essentially approximates the full dialogue history without suf3858 fering from the forgetfulness issue. To further utilize SSN in real dialogue learning, we propose to jointly learn SSN and the dialogue model via alternative training, where the output probability of SSN is treated as the order signal to evaluate the generated utterance. Moreover, the proposed approach can be applied to both open-domain and task-oriented dialogue learning, which indicates that SSN is a general and scalable approach for dialogue learning. Empirical results on two widely-used benchmark datasets, OpenSubtitles and Movie-Ticket Booking, show that our self-supervised network consistently improves the state-of-the-art (SOTA) neural-based dialogue training methods. In summary, our main contributions are three-fold: • We introduce the task of inconsistent order detection, and propose a self-supervised learning network SSN to solve this task and explicitly model the crucial order information in dialogue. • We propose a general framework to jointly learn SSN and the dialogue models, where the sequential order in dialogues can be explicitly used to guide the utterance generation. • Our method advances the existing state-ofthe-art dialogue systems in both open-domain and task-oriented scenarios. 2 Related Work Dialogue Learning Dialogue systems can be roughly classified into open-domain and taskoriented scenarios. In recent years, neural-based conversation models have shown great power in building dialogue systems (Ritter et al., 2011; Sordoni et al., 2015b; Vinyals and Le, 2015; Serban et al., 2016; Luan et al., 2016). However, the utterances generated by neural-based dialogue systems still suffer from blandness and incoherence (Gao et al., 2018). To address these problems, Li et al. (2016a) propose a mutual information objective to infer the utterance generation. Serban et al. (2017) and Zhang et al. (2018a) further apply the latent variable models to generate more specific responses. Similar to some language generation tasks (Lamb et al., 2016; Yu et al., 2017), Generative adversarial networks (GAN) (Goodfellow et al., 2014) have also been adapted to learn a better objective function for the dialogue (Li et al., 2017; Xu et al., 2017; Liu and Lane, 2018; Su et al., 2018). The discriminator in GAN is often used to evaluate the generated utterances and guide dialogue learning. However, these methods mainly focus on the surface information of generated utterances to guide the dialogue learning, and fail to consider the utterance connection within the dialogue history. In this paper, we focus on the sequential information of the dialogue and show that the unique sequential order in a meaningful and coherent dialogue contains more useful semantic information for dialogue learning. 
Self-Supervised Learning: Self-supervised learning, which aims to train a network on an auxiliary task where the ground truth is obtained automatically, has been successfully applied in computer vision. Many self-supervised tasks have been introduced that use non-visual but intrinsically correlated features to guide visual feature learning (Doersch et al., 2015; Wang and Gupta, 2015; Pathak et al., 2016). In natural language processing, predicting nearby words (Mikolov et al., 2013b,a) is a self-supervised task for learning word embeddings. Language modeling is another line of self-supervision, where a model learns to predict the next word given the previous sequence (Bengio et al., 2003; Dai and Le, 2015; Peters et al., 2018). Recently, Devlin et al. (2019) proposed two further self-supervised tasks, masked language modeling and next sentence prediction, to learn sentence embeddings. Lample and Conneau (2019) and Liu et al. (2019) extend these two tasks to multi-lingual and multi-task paradigms, and Wang et al. (2019) consider them at the sentence level for extractive summarization. Our work is the first to consider the sequential order as a self-supervised signal in dialogue, and we propose the self-supervised task of inconsistent order detection towards more coherent and relevant dialogue learning.

3 Methods

In this section, we systematically describe how to utilize the internal sequential order of utterances as self-supervision for dialogue learning. In Section 3.1, we first introduce the task of inconsistent order detection, where the model needs to predict whether a sampled triple of the dialogue is correctly ordered or not. We then present an effective sampling-based approach, the self-supervised network (SSN), to learn to capture the important order signal and solve this task (see Section 3.2). In the end, we show in Section 3.3 how SSN can contribute to both open-domain and task-oriented dialogue learning by modeling inconsistent order detection.

[Figure 1: The overview of our self-supervised network (SSN) for inconsistent order detection. Given a target triple containing the current utterance pair (Qt, At) to be predicted, (a) we first sample triple references from the previous dialogue history {(Q1, A1), · · · , (Qt−1, At−1)} in each iteration; the references can be ordered or misordered. (b) Each triple is transformed into a triple embedding via the utterance pair encoder and the order reasoning layer; the concatenation of the triple embeddings is fed into a multi-layer perceptron, which outputs the probability based on the current sampling.]

3.1 Inconsistent Order Detection

Dialogue systems aim to converse with humans in a meaningful and coherent way (Gao et al., 2018). The sequential order in dialogue data is thus an important signal for building a good dialogue system. Existing neural-based dialogue systems only consider this signal in a weak and implicit way, using hierarchical encoders to model the dialogue history (Sordoni et al., 2015a; Serban et al., 2016; Li et al., 2017; Serban et al., 2017; Xing et al., 2018).
However, we argue that these methods are mainly designed to model the overall semantic context information of the dialogue history but not good at modeling intermediate sequential order. Especially, the order signal is becoming weak as the number of dialogue turns increases. Thus, we propose the task of inconsistent order detection to force building models to capture this signal as self-supervision explicitly. Given a dialogue till the turn t, we can formulate it as {(Q1, A1), (Q2, A2), · · · , (Qt, At)}, where (Qt, At) is a pair of human-machine utterances. Then we can sample multiple triples of this dialogue as utterance pair triples using the following strategies: • Ordered triple sampling: We sample a triple following the dialogue sequential order as ⟨(Qi, Ai), (Qj, Aj), (Qk, Ak)⟩, where i < j < k ≤t. • Misordered triple sampling: The three utterance pairs are sampled in a triple as ⟨(Qi, Ai), (Qk, Ak), (Qj, Aj)⟩, where i < j < k ≤t. Note that when the current dialogue length t <= 2, it is not enough to get a rational sampling for utterance pair triples. Thus, we add three extra shared padding utterance pairs (Q−2, A−2), (Q−1, A−1) and (Q0, A0) ahead of all the dialogue data before sampling1. Based on above triple sampling strategies, we define the task of inconsistent order detection as: given a dialogue history {(Q1, A1), (Q2, A2), · · · , (Qt, At)} and the target utterance pair (Qt, At) for evaluation, the model needs to predict whether the sampled triple T containing (Qt, At) is ordered or not. For instance, ⟨(Q1, A1), (Q2, A2), (Qt, At)⟩is ordered (output: 0), while ⟨(Q1, A1), (Qt, At), (Q2, A2)⟩ is misordered (output: 1). 1Specifically, e.g., for the added padding utterance Q−2, it is represented as a sequence of one same padding word {w (Q−2) 1 , w (Q−2) 2 , · · · , w (Q−2) N }, where N is the roundedup averaged length of utterances in the dataset. 3860 3.2 Self-Supervised Network SSN We plan to build the model to solve the inconsistent order detection task, and explicitly capture the sequential order in dialogue. The overview of our approach is shown in Figure 1. At each dialogue turn t, given a target triple containing the current utterance pair, we first sample triple references from the previous dialogue history to capture more semantic context in dialogue. The target triple and triple references are then transformed into embeddings using an utterance pair encoder and an order reasoning layer. Finally, the concatenation of embeddings is used for the final prediction. We then describe the SSN in detail as follows. 3.2.1 Triple Reference Sampling Given the task definition in Section 3.1, the model needs to predict whether there is inconsistent order in the target triple containing the current utterance pair (Qt, At). It is intuitive that if we can get more previous dialogue history, we may make a better prediction for inconsistent order. One trivial way is to encode the full previous dialogue history using a hierarchical network and make the prediction. However, Li et al. (2017) suggests that this structure actually focuses more on the final two preceding utterances instead of the whole history. The sequential order signal is very weak in this condition. We also report some similar results in Section 4.1. Therefore, we propose a sampling-based approach to model the utterance order based on the dialogue context effectively. 
For each sampling operation, we sample two triple references T ′ and T ′′ from the previous dialogue history {(Q1, A1), (Q2, A2), · · · , (Qt−1, At−1)} following the sampling strategies in Section 3.1. In general, we explore the following three combinations of reference sampling strategies for T ′ and T ′′: • T ′ and T ′′ are sampled ordered references. • T ′ and T ′′ are sampled misordered ones. • T ′ is ordered while T ′′ is misordered. Note that in our experiments, we choose one certain combination and keep using it for sampling the triple references for all the target triples. 3.2.2 Objective Function Given the target triple embedding T and the triple reference embedding T ′ and T ′′, we use SSN to calculate the probability p(T|T ′, T ′′) = SSN(T, T ′, T ′′). We use the Binary Cross Entropy loss to train the model: L = −E(y log p(T|T ′, T ′′)), (1) where y is the ground-truth label. Considering that for the same target triple T, the triple references are sampled m times to approximate the full dialogue history. Then we can rewrite the loss function as L = −E( 1 m m X i=1 y log(p(i)(T|T (i)′, T (i)′′))), (2) where T (i)′, T (i)′′ are the triple references of i-th sampling. This is essentially a Monte Carlo estimation and the model would effectively incorporate the dialogue context and capture the order information, avoiding from directly encoding the full dialogue history and the forgetfulness issue. 3.2.3 Network Structure In this section, we demonstrate how SSN embeds both the target triple T and triple reference T ′ and T ′′ to generate p(T|T ′, T ′′) in each sampling. Utterance Pair Encoder First, given a utterance pair (Qt, At), we concatenate the Qt and At as one sequence. The sequence is then fed into a bidirectional long short-term memory network (LSTM) (Hochreiter and Schmidhuber, 1997), and the utterance pair embedding Ut is the concatenation of the final two states of the bi-LSTM: Ut = " ←− h1 −−→ hNt # , (3) where Nt is the length of the concatenated utterance sequence. Order Reasoning Layer After obtaining the utterance pair embeddings (Ui, Uj, Uk) of a sampled triple T =< (Qi, Ai), (Qj, Aj), (Qk, Ak) >, we need to reason and predict whether there is inconsistent order or not. To simplify our model, we use a 3-step reasoning bi-LSTM with the maxpooling layer to perform the order reasoning: T = " max-pooling(←− h1, ←− h2, ←− h3) max-pooling(−→ h1, −→ h2, −→ h3) # , (4) where the input of each time step in bi-LSTM is one utterance pairs embedding, and T is the final embedding of the given triple. 3861 Given the target triple embedding T and the triple reference embedding T′ and T′′, the concatenation of these three embeddings is fed into a multi-layer perceptron, returning the probability p(T|T ′, T ′′) of the triple is ordered (approaching 0) or misordered (approaching 1). 3.3 Self-Supervised Network for Dialogue In this section, we explain how the SSN can be applied to the current dialogue system in both open-domain and task-oriented scenarios. Suppose we have a dialogue system the the history {(Q1, A1), · · · , (Qt−1, At−1)}, at turn t, the system generate the utterance At based on the Qt. We can sample a misordered target triple T containing (Qt, At). Following the assumption that the sequential order in a meaningful and coherent dialogue should be unique, the SSN will be easy to detect the inconsistent order in T if the generated At is good. Otherwise, the At may be of low quality. 
Therefore, we take a two-step sampling approach to evaluate the generated utterance At using SSN. First, a misordered target triple T containing (Qt, At) is sampled. Then we further sample triple references T ′ and T ′′ as in Section 3.2.1 and how easily the misorder in the sampled T can be detected is measured as ET ′,T ′′(p(T|T ′, T ′′). Based on the generated utterance At, we can sample multiple misordered T, and we set the following expectation to measure the probability that At is a good generated utterance: p∗ SSN = Emisordered T ET ′,T ′′(p(T|T ′, T ′′)). (5) In this way, we can view human-generated utterances as good ones, and machine-generated utterances as bad ones. Then we can use the adversarial training methods (Goodfellow et al., 2014; Li et al., 2017; Xu et al., 2017; Su et al., 2018) to train the dialogue system, where SSN can give clear order-based signal to guide the generator G in the system. The framework of using SSN with the two-step sampling in real dialogue systems are shown in Figure 2. The objective function then can be formulated as: min θG max θSSNEreal[log p∗ SSN (x)] + Egen[log(1 −p∗ SSN (G(.)))], (6) where θG and θSSN are the parameters of the generator G and SSN in the dialogue Dialogue System Self-Supervised Network Dialogue History Generated Utterance Misordered Target Triple Signal Sampling Triple References Sampling Figure 2: The general framework for dialogue learning with self-supervised network. systems separately. The x stands for real human-generated utterances, which G(.) represents machine-generated ones. The G and SSN are alternately updated during training. We further describe the details in open-domain and taskoriented scenarios separately. 3.3.1 Open-Domain Dialogue Learning The open-domain dialogue task is, given a dialogue history consisting of a sequence of dialogue utterances {(Q1, A1), . . . , (Qt−1, At−1)}, and current Qt, the model needs to generate a response utterance At. We consider the adversarial training (Li et al., 2017; Xu et al., 2017) for dialogue generation systems. Following the previous approach (Vinyals and Le, 2015; Serban et al., 2016; Luan et al., 2016; Li et al., 2017), we use the SEQ2SEQ model for response generation as the generator G. The SEQ2SEQ first transforms the dialogue history into an embedding using an encoder recurrent network. Conditioned on the history embedding, another decoder recurrent network then computes the probability of tokens at each generation step of the response using a softmax function. As for the discriminator D, in previous methods, the discriminator directly takes the response utterance At with or without the full dialogue history, and predicts whether it is human-generated (output: 1) or machine-generated (output: 0). The probability of being human-generated is set as the reward to update the G using the REINFORCE algorithm (Williams, 1992). As for our SSN, the reward R is set as R = p∗ SSN . 3.3.2 Task-Oriented Dialogue Learning The task-oriented dialogue, usually formulated as a reinforcement learning problem, aims to build a 3862 dialogue agent to interact with real users and learn the policy to complete the slot-filling task (Jurafsky and Martin, 2014). While the real-user interaction is expensive and time-consuming, in this scenario, the dialogue systems are often trained with user simulators (Schatzmann et al., 2006; Li et al., 2016c). However, due to the complexity of real conversations and biases in the design of user simulators, the quality of simulated utterances is unstable. 
Su et al. (2018) propose an adversarial learning approach to differentiate simulated experience from real experience. Following the similar assumption that real-user interactions should be meaningful and coherent, we implement our SSN instead of the conventional discriminator D to select high-quality stimulated utterances in the task-oriented dialogue systems. In this scenario, the generator G is the world model which produces simulated user experience, and the SSN focuses on scoring the simulated user experience Qt during the training process. Thus, instead of sampling and encoding utterance pairs (Qt, At), here we only use the user utterance Qt in SSN. We keep other parts of the SSN remain the same as in Section 3.2. Because the world model G is updated using the multi-task learning without the reward from the SSN, the objective function of the SSN in Equation 6 can be rewritten as the following during the mini-batch training: 1 b b X i=1 [log p∗ SSN (x(i)) + log(1 −p∗ SSN (G(.)(i)))], (7) where b represents the batch size. 4 Experiments 4.1 Intrinsic Evaluation Before we deploy the self-supervised network into real dialogue systems, we first test the model architectures for reliability. We randomly choose 40K balanced ordered and misordered utterance pair triples from the OpenSubtitles (Tiedemann, 2009) dataset, and train the SSN to solve this 2class classification. We sample another 1K balanced triples for testing. We also consider a baseline model, where the target triple is encoded by SSN, and the previous dialogue history is encoded by a hierarchical LSTM. The concatenation of two embeddings is used for the final prediction. Because our SSN is a sampling-based apReference Strategy of SSN Average Accuracy All history by hierarchical LSTM .694 (.006) w/o Refers .670 (.011) 2*Ordered Refers .740 (.031) 2*misordered Refers .744 (.029) 1*Ordered + 1*misordered Refers .856 (.017) Table 1: The intrinsic evaluation results. The numbers in brackets stand for deviation. Refers: Reference Triples. proach, we report the average prediction accuracy of 5 runs on the 2-class classification as shown in Table 1. From the results, we can observe that: (1) The conventional hierarchical LSTM is not suitable for this task, and this baseline only shows a marginal improvement compared with the strategy that only considers target triple without any history. The results also match previous findings (Li et al., 2017), where they suggest that only the last two proceeding utterances in the hierarchical network are semantically significant. (2) As for our SSN, it is safe to tell that reference triples can be a tremendous supplement to the inconsistent order detection. It is not surprising because by adding reference triples, the SSN will know more information of semantic context within the dialogue. Especially when having both ordered and misordered references, the SSN has the highest classification accuracy. This also shows that the sampling strategy, 1*Ordered + 1*misordered references, is the most reliable structure for real dialogue systems. Thus, for the rest of the experiments, we directly use the SSN with one ordered and one misordered references strategy to achieve the best performance. 4.2 Open-Domain Dialogue Learning Dataset Following the previous studies (Vinyals and Le, 2015; Li et al., 2017; Xu et al., 2017), we choose the widely-used OpenSubtitles (Tiedemann, 2009) dataset to evaluate different methods. 
The OpenSubtitles dataset contains movie scripts organized by characters, where we follow Li et al. (2016b) to retain subtitles containing 5-50 words. Baselines We consider the following two popular adversarial methods for dialogue learning as the baselines: • REGS (Li et al., 2017): The discriminator D takes the full dialogue history by a hierarchi3863 Separated G/D D-REGS D-AEL D-SSN G-REGS .094 .087 .041 G-AEL .146 .128 .093 G-SSN .203 .185 .162 Table 2: The cross evaluation of adversarial success rate on different generators and discriminators. Please refer to Section 4.2 Adversarial Evaluation for explanations. Model distinct-1 distinct-2 REGS 0.0217 0.0695 AEL 0.0311 0.0948 SSN 0.0393 0.1126 Table 3: The automatic evaluation of generated utterances on distinct-1 and distinct-2 metrics. Please refer to Section 4.2 Automatic Evaluation for explanations. cal LSTM, and the Monte Carlo search is implemented to obtain rewards for every generation step to update the generator G. • AEL (Xu et al., 2017): The discriminator D only encodes the currently generated utterance by a CNN model and the generator G is optimized using an approximate embedding layer. Implementation Details We follow the most of parameters in Li et al. (2017); Xu et al. (2017) to make a fair comparison. For the generator model G, we adopt the same SEQ2SEQ model (Sutskever et al., 2014) with an attention mechanism (Bahdanau et al., 2015; Luong et al., 2015) for our approach and baselines. We approximate the dialogue history for G using the concatenation of two preceding utterances following the Li et al. (2017). To train the generator G, we use the REINFORCE algorithm (Williams, 1992) to maximize the expected reward of generated utterances. We also implement the Monte Carlo search to give rewards for each generation step. To accelerate the sampling process, we use multiple GPUs to parallelize and distribute the jobs. As for the SSN, it first gets pre-trained using sampled data from OpenSubtitiles, and then iteratively updated during the min-max adversarial training process. The dimension of the utterance embeddings is 128. The hidden size is 256 for utterance encoding bi-LSTM and 1024 for triple reasoning bi-LSTM. The MLP has a single hidden layer of size 512. Win REGS AEL SSN Single-turn Percentage .095 .192 .713 Multi-turn Percentage .025 .171 .804 Table 4: The human evaluation of generated utterances in three methods. The result here is statistically significant with p < 0.01 according to sign test. Please refer to Section 4.2 Human Evaluation for explanations. Adversarial Evaluation Here we use adversarial success rate (AdverSuc), which is the fraction of instances where a G is capable of fooling the D, to evaluate different methods. Higher values of AdverSuc for a dialogue system usually lead to a better response generator. After training three (G, D) using REGS, AEL and SSN, we sample 4K dialogue history and use three trained generators to generate response utterances. These machine-generated utterances are then fed into three trained discriminators to see if they are indistinguishable from human-generated ones. The cross evaluation of AdverSuc is shown in Table 2. From the results, we can observe that: (1) Our trained generator achieve higher AdverSuc in three discriminators, which shows that the generator in our approach can generate more humanlike utterance responses. (2) The generators of the other two methods have a noticeable drop in AdverSuc when evaluating on our SSN-based discriminator. 
This demonstrates that our selfsupervised policy for discriminating utterances is successful. (3) The REGS method with full dialogue history encoded performs worse than the AEL that only considers the current utterances. We think this indicates that without explicitly stating the guiding signal, both the generator and the discriminator can be lost about figuring out a good objective function during the training process even when encoding the full history. Automatic Evaluation For automatic evaluations, we use the two commonly accepted metrics distinct-1 and distinct-2. The distinct-1 and distinct-2, proposed by Li et al. (2016a), are two ways to measure the degree of diversity by calculating the number of distinct unigrams and bigrams in the generated response utterances. The evaluation results are reported in Table 3. The results show that based on the distinct-1 and distinct-2 metrics, the generator trained in our approach can generate relatively more diverse responses. The results are attractive considering that 3864 Agent Planning Steps Epoch 100 Epoch 200 Epoch 300 Succ Reward Turns Succ Reward Turns Succ Reward Turns D3Q 5 .7467 43.59 14.03 .6800 34.64 15.92 .7200 40.85 13.11 D3Q-SSN .7600 45.71 13.52 .7400 42.93 14.80 .7633 46.16 15.24 D3Q (fixed θD) .6800 33.86 17.48 .7000 36.57 16.85 .6933 35.67 17.06 D3Q-SSN (fixed θSSN ) .6633 32.04 16.21 .7133 36.71 17.74 .7067 36.03 12.91 D3Q 10 .6333 28.99 16.01 .7000 37.24 15.52 .6667 33.09 15.83 D3Q-SSN .7800 48.71 15.84 .8733 56.15 19.57 .8067 50.29 16.48 D3Q (fixed θD) .7133 36.36 20.48 .8400 54.87 20.48 .7400 42.89 13.81 D3Q-SSN (fixed θSSN ) .7367 42.30 14.79 .8300 52.92 18.16 .7933 48.05 13.73 Table 5: The experimental results of different dialogue agents at training epoch = {100, 200, 300}. Each number is averaged over 3 runs, and each run tested on 50 dialogues. The D3Q-SSN denotes the D3Q agent where our proposed SSN replaces the discriminator. The “fixed θD/θSSN ” indicates the discriminator/SSN is pre-trained and fixed during the training process. Succ: Success Rate. Reward: Average Reward. Turns: Average Turns. we do not explicitly use a diversity-guided objective function during the training process. We think the reason is that the diverse utterances are easier to reserve the order information. In previous methods, the discriminator D only gives good or bad signals to response generator G, and the G has to figure out what is an acceptable response by itself. As for our SSN, it explicitly forces the G to generate responses that will have unique orders in dialogue, which leads to more diverse utterances. Human Evaluation For human evaluation, we follow protocols in Li et al. (2016a) and employing crowd-sourced judges from the Amazon Mechanical Turk to evaluate a random sample of 1000 unique generated utterances from three generators in the OpenSubtitles test dataset. We present both the input dialogue history and the generated responses to 5 judges and ask them to decide which one of the three results is the be.ts Ties are not permitted. We consider both single-turn and multiturn for the evaluation. The results are shown in Table 4. Evidently, the generator trained in our method shows a significant improvement in the quality of generated sentences. The gain is even higher in the multi-turn setting than the single-turn setting. This is because when only considering the single-turn dialogue, the information encoded in three methods will be similar. 
4.3 Task-Oriented Dialogue Learning Dataset Following the previous work (Peng et al., 2018; Su et al., 2018), we use the same Movie-Ticket Booking dataset collected from Amazon Mechanical Turk for evaluation. The dataset is manually labeled based on a schema defined by domain experts consisting of 11 intents and 16 slots in the full domain setting. In total, the dataset has 280 annotated dialogues with an average length of approximately 11 turns. In this scenario, the goal of dialogue systems is to help the user complete the tasks through the conversation. Baselines We compare our SSN-based discriminator within the state-of-the-art task-oriented dialogue policy learning approach, Discriminative Deep Dyna-Q (D3Q) (Su et al., 2018). At each turn, the D3Q agent takes S planning steps interacting with the simulator and store stimulated user experiences based on the scoring of the discriminator. The stimulated user experiences are generated by the world model, which can be viewed as the generator G in our case. We replace the conventional discriminator D of D3Q with our SSN. Implementation Details For a fair comparison, we remain most of the parameters in the D3Q algorithm the same as in Su et al. (2018). In the self-supervised network, the dimension of the utterance embeddings is 80. The hidden size is 128 for utterance encoding bi-LSTM and 512 for triple reasoning bi-LSTM. The MLP has a single hidden layer of size 128. We use the simulator2 as in Li et al. (2016c) to generate user utterances, and the threshold interval is set to a range between 0.45 and 0.55. Results The experimental results of different agents at training epoch are shown in Table 5. From the results, we can observe that: (1) The D3Q-SSN outperform the D3Q in the most of cases, which shows that our SSN-based discriminator can improve the ability to recognize 2https://github.com/MiuLab/TC-Bot 3865 the high-quality stimulated user experiences. (2) When the planning step increases in D3Q, the performance shows an apparent drop. This is because the discriminator D in the original D3Q agent keeps lots of low-quality stimulated user experiences, which significantly degrade the performance of the D3Q agent. As for our SSN, we can see some performance improvement even when using 10-step planning. This substantially means that our SSN has a better ability to select the good simulated user experiences, especially in the multi-turn dialogue cases. 5 Conclusion In this paper, we introduce a self-supervised task, inconsistent order detection, to explicitly capture the order signal of the dialogue. While previous methods suffer from forgetfulness problem when modeling dialogue history, we further propose a sampling-based self-supervised network SSN, to approximately encoding the dialogue history and highlight the order signal. We also show how our SSN can contribute to real dialogue learning. Empirically, our method advances the previous state-of-the-art dialogue systems in both opendomain and task-oriented scenarios. Theoretically, we believe this self-supervision can be generalized to other types of temporal order in different NLP tasks. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the 3rd International Conference on Learning Representations (ICLR). Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic language model. Journal of machine learning research, 3(Feb):1137–1155. 
Antoine Bordes, Y-Lan Boureau, and Jason Weston. 2017. Learning end-to-end goal-oriented dialog. In Proceedings of the 5th International Conference on Learning Representations (ICLR). Andrew M Dai and Quoc V Le. 2015. Semi-supervised sequence learning. In Proceedings of the 29th Conference on neural information processing systems (NeurIPS), pages 3079–3087. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT). Carl Doersch, Abhinav Gupta, and Alexei A Efros. 2015. Unsupervised visual representation learning by context prediction. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 1422–1430. Jianfeng Gao, Michel Galley, and Lihong Li. 2018. Neural approaches to conversational ai. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL), pages 2–7. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Proceedings of the 28th Conference on Neural Information Processing Systems (NeurIPS), pages 2672–2680. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Dan Jurafsky and James H Martin. 2014. Speech and language processing. Pearson Education UK. Alex M Lamb, Anirudh Goyal Alias Parth Goyal, Ying Zhang, Saizheng Zhang, Aaron C Courville, and Yoshua Bengio. 2016. Professor forcing: A new algorithm for training recurrent networks. In Proceedings of the 30th conference on Neural Information Processing Systems (NeurIPS), pages 4601–4609. Guillaume Lample and Alexis Conneau. 2019. Crosslingual language model pretraining. arXiv preprint arXiv:1901.07291. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016a. A diversity-promoting objective function for neural conversation models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 110–119. Jiwei Li, Will Monroe, Alan Ritter, Dan Jurafsky, Michel Galley, and Jianfeng Gao. 2016b. Deep reinforcement learning for dialogue generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1192–1202. Jiwei Li, Will Monroe, Tianlin Shi, S˙ebastien Jean, Alan Ritter, and Dan Jurafsky. 2017. Adversarial learning for neural dialogue generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2157–2169. Xiujun Li, Zachary C Lipton, Bhuwan Dhingra, Lihong Li, Jianfeng Gao, and Yun-Nung Chen. 2016c. A user simulator for task-completion dialogues. arXiv preprint arXiv:1612.05688. 3866 Bing Liu and Ian Lane. 2018. Adversarial learning of task-oriented neural dialog models. In Proceedings of the 19th Annual SIGdial Meeting on Discourse and Dialogue (SIGDIAL), pages 350–359. Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. 2019. Multi-task deep neural networks for natural language understanding. arXiv preprint arXiv:1901.11504. Yi Luan, Yangfeng Ji, and Mari Ostendorf. 2016. Lstm based conversation models. arXiv preprint arXiv:1603.09457. Thang Luong, Hieu Pham, and Christopher D Manning. 2015. 
Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1412–1421. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Proceedings of the 27th Conference on Neural Information Processing Systems (NeurIPS), pages 3111–3119. Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, and Alexei A Efros. 2016. Context encoders: Feature learning by inpainting. In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), pages 2536–2544. Baolin Peng, Xiujun Li, Jianfeng Gao, Jingjing Liu, and Kam-Fai Wong. 2018. Deep dyna-q: Integrating planning for task-completion dialogue policy learning. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL), pages 2182–2192. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 2227–2237. Alan Ritter, Colin Cherry, and William B Dolan. 2011. Data-driven response generation in social media. In Proceedings of the 2011 conference on empirical methods in natural language processing (EMNLP), pages 583–593. Jost Schatzmann, Karl Weilhammer, Matt Stuttle, and Steve Young. 2006. A survey of statistical user simulation techniques for reinforcement-learning of dialogue management strategies. The knowledge engineering review, 21(2):97–126. Iulian V Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using generative hierarchical neural network models. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence (AAAI). Iulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2017. A hierarchical latent variable encoder-decoder model for generating dialogues. In Proceedings of the 31st AAAI Conference on Artificial Intelligence (AAAI). Alessandro Sordoni, Yoshua Bengio, Hossein Vahabi, Christina Lioma, Jakob Grue Simonsen, and JianYun Nie. 2015a. A hierarchical recurrent encoderdecoder for generative context-aware query suggestion. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management (CIKM), pages 553–562. ACM. Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. 2015b. A neural network approach to context-sensitive generation of conversational responses. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACLHLT), pages 196–205. Shang-Yu Su, Xiujun Li, Jianfeng Gao, Jingjing Liu, and Yun-Nung Chen. 2018. Discriminative deep dyna-q: Robust planning for dialogue policy learning. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3813–3823. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. 
Sequence to sequence learning with neural networks. In Proceedings of the 28th Conference on Neural Information Processing Systems (NeurIPS), pages 3104–3112. J¨org Tiedemann. 2009. News from opus-a collection of multilingual parallel corpora with tools and interfaces. In Proceedings of the 2nd Recent advances in natural language processing (RANLP), volume 5, pages 237–248. Oriol Vinyals and Quoc Le. 2015. A neural conversational model. ICML Deep Learning Workshop. Hong Wang, Xin Wang, Wenhan Xiong, Mo Yu, Xiaoxiao Guo, Shiyu Chang, and William Yang Wang. 2019. Self-supervised learning for contextualized extractive summarization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL). Xiaolong Wang and Abhinav Gupta. 2015. Unsupervised learning of visual representations using videos. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 2794–2802. 3867 Tsung-Hsien Wen, Milica Gasic, Nikola Mrkˇsi´c, PeiHao Su, David Vandyke, and Steve Young. 2015. Semantically conditioned lstm-based natural language generation for spoken dialogue systems. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1711–1721. Tsung-Hsien Wen, David Vandyke, Nikola Mrkˇsi´c, Milica Gasic, Lina M Rojas Barahona, Pei-Hao Su, Stefan Ultes, and Steve Young. 2017. A networkbased end-to-end trainable task-oriented dialogue system. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics (EACL), pages 438–449. Jason D Williams, Kavosh Asadi, and Geoffrey Zweig. 2017. Hybrid code networks: practical and efficient end-to-end dialog control with supervised and reinforcement learning. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL), pages 665–677. Ronald J Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229–256. Chen Xing, Yu Wu, Wei Wu, Yalou Huang, and Ming Zhou. 2018. Hierarchical recurrent attention network for response generation. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence (AAAI). Zhen Xu, Bingquan Liu, Baoxun Wang, SUN Chengjie, Xiaolong Wang, Zhuoran Wang, and Chao Qi. 2017. Neural response generation via gan with an approximate embedding layer. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 617–626. Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. 2017. Seqgan: Sequence generative adversarial nets with policy gradient. In Proceedings of the 31st AAAI Conference on Artificial Intelligence (AAAI). Ruqing Zhang, Jiafeng Guo, Yixing Fan, Yanyan Lan, Jun Xu, and Xueqi Cheng. 2018a. Learning to control the specificity in neural response generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL), volume 1, pages 1108–1117. Yizhe Zhang, Michel Galley, Jianfeng Gao, Zhe Gan, Xiujun Li, Chris Brockett, and Bill Dolan. 2018b. Generating informative and diverse conversational responses via adversarial information maximization. In Proceedings of the 32nd Conference on Neural Information Processing Systems (NeuIPS), pages 1815–1825.
2019
375
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3868–3877 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3868 Are we there yet? Encoder-decoder neural networks as cognitive models of English past tense inflection Maria Corkery [email protected] Yevgen Matusevych [email protected] Sharon Goldwater [email protected] School of Informatics University of Edinburgh Abstract The cognitive mechanisms needed to account for the English past tense have long been a subject of debate in linguistics and cognitive science. Neural network models were proposed early on, but were shown to have clear flaws. Recently, however, Kirov and Cotterell (2018) showed that modern encoder-decoder (ED) models overcome many of these flaws. They also presented evidence that ED models demonstrate humanlike performance in a nonce-word task. Here, we look more closely at the behaviour of their model in this task. We find that (1) the model exhibits instability across multiple simulations in terms of its correlation with human data, and (2) even when results are aggregated across simulations (treating each simulation as an individual human participant), the fit to the human data is not strong—worse than an older rule-based model. These findings hold up through several alternative training regimes and evaluation measures. Although other neural architectures might do better, we conclude that there is still insufficient evidence to claim that neural nets are a good cognitive model for this task. 1 Introduction For over 30 years, the English past tense has served as both inspiration and testbed for models of language acquisition and processing (Rumelhart and McClelland, 1986; Pinker and Prince, 1988; Marcus, 1995; Plunkett and Juola, 1999; Pinker and Ullman, 2002; Albright and Hayes, 2003; Seidenberg and Plaut, 2014; Kirov and Cotterell, 2018; Blything et al., 2018, etc.). One of the most wellknown debates centres on whether the apparently rule-governed regular past tense is indeed represented cognitively using explicit rules. Rumelhart and McClelland (1986) famously argued against this hypothesis, presenting a neural network model intended to capture both regular and irregular verbs with no explicit rules. However, Pinker and Prince (1988) presented a scathing rebuttal, pointing out both theoretical and empirical failures of the model. In their alternative (dual-route) view, the regular past tense is categorical and captured via explicit rules, while irregular past tenses are memorized and can (occasionally) generalize via gradient analogical processes (Pinker and Prince, 1988; Prasada and Pinker, 1993). Their arguments were so influential that although neural networks gained considerable traction in cognitive science more generally (Bechtel and Abrahamsen, 1991; McCloskey, 1991; Elman et al., 1996), many linguists dismissed the whole approach.1 With the recent success of deep learning in NLP, however, there has been renewed interest in exploring the extent to which neural networks capture human behaviour in psycholinguistic tasks (e.g., Linzen and Leonard, 2018; Linzen, 2019). In particular, Kirov and Cotterell (2018; henceforth K&C) revisited the past tense debate and showed that modern sequence-based encoder-decoder (ED) models overcome many of the criticisms levelled at Rumelhart and McClelland’s model. 
Specifically, these models permit variable-length input and output that represent sequential ordering; can reach near-perfect accuracy on both regular and irregular verbs seen in training; and (using multi-task learning) can effectively generalize phonological rules across different inflections. These primary claims are undoubtedly correct (and indeed, we replicate the accuracy results below). However, we take issue with another part of K&C’s work, in which they claim that their ED model also effectively models human behaviour in a nonce-word experiment (i.e., wug test, described below). We explore the model’s behaviour on this 1Though see Seidenberg and Plaut (2014), who argue that some of the core ideas, such as the focus on statistical learning, have nevertheless permeated the study of language. 3869 task in detail, and conclude that its ability to model humans is considerably weaker than K&C suggest. In particular, we begin by showing that multiple simulations of the same model (with different random initializations) result in very different correlations with the human data. To ensure that this instability is not just due to the evaluation measure, we introduce an alternative measure, but still find unstable results. We then consider whether treating individual simulations as individual participants (rather than as a model of the average participant) captures the human data better. This aggregate model does show some high-level similarities to the human participants: both model and humans tend to produce irregulars more frequently for nonce words that are similar to many real irregular verbs. However, the model is still poor at capturing fine-grained distinctions at the level of individual verbs. We conclude that, although deep learning approaches overcome many of the problems of earlier neural network models, there is still insufficient evidence to claim that they are good models of human morphological processing. 2 Background 2.1 Nonce word experimental data Like K&C, we use data from two experiments run by Albright and Hayes (2003; henceforth A&H). In Experiment 1, using a dialogue-based prompt, A&H presented participants auditorily with nonce “verbs” that are phonotactically legal in English (e.g., spling, dize), and prompted participants to produce past tense forms of these verbs, resulting in a data set of production probabilities of various past tense forms. In Experiment 2, participants first produced each past tense form (as in Experiment 1) and were then asked to rate the acceptability of either two or three possible past tense forms for that verb—one regular, and one or two potential irregulars. For example, for scride /skr"aId/, participants rated scrided /skr"aId@d/ (regular), scrode /skr"oUd/ and scrid /skr"Id/ (irregular). This gives a data set of past tense form ratings. Most of A&H’s own analyses rely on the ratings data, but the ED model is a model of production, so we follow K&C and use the data from Experiment 1. The data is coded using the same set of suggested forms that were rated in Experiment 2: for each nonce word, A&H counted how many participants produced the regular form, the irregular form (or each of the two forms, if there are two), and “other” (any other past tense form that was not among those rated in Experiment 2). The counts are normalized to compute production probabilities for each output form. The nonce words used by A&H were carefully chosen according to several criteria. 
First, they are phonologically “bland”: i.e., not unusual-sounding as English words (as confirmed by a pre-test with participants). Second, as explained in the following section, they fall into several categories designed to test A&H’s hypothesis that (contra Prasada and Pinker, 1993), both regular and irregular past tense forms exhibit gradient (and not categorical) effects. 2.2 A&H’s model and islands of reliability To explain the categories of nonce words (which we will refer to in our analyses), we briefly describe A&H’s theory of past tense formation, which they implement as a computational model. The model postulates that speakers maintain a set of explicit structured rules that capture inflectional changes at different levels of generality. For example, a speaker might have rules such as: • /∅/ →/@d/ if verb matches [X {/d/, /t/} ] based on, e.g., want, need, start. • /i/ →/E/ if verb matches [X {/r/, /l/} /d/] based on, e.g., read, lead, breed. where X represents arbitrary phonological material and is the location of the changing material. Each rule is given a confidence score based on its precision and statistical strength (the number of cases to which it could potentially apply). When a nonce word is presented, several rules may apply (e.g., the two rules above for gleed), and the goodness of each possible past tense is determined by the confidence score of the corresponding rule. Crucially, A&H’s model can learn multiple rules that all produce regular past tense forms, but with phonological contexts of different specificity, hence different confidence scores. Therefore, some nonce words may reside in so-called “islands of reliability” (IOR) for regular verbs: that is, there is an applicable regular rule that has a very high confidence score. Meanwhile other nonce words might also be considered regular, but with lower confidence. Thus, the model predicts gradient effects even for regular inflection. It also predicts gradient effects for irregular inflection, since there can be IORs for irregular rules as well. To test these predictions, A&H chose four types of nonce words: those residing in an IOR for regu3870 lars, for both regulars and irregulars, for irregulars only, or for neither. They also included several nonce verbs similar to burn–burnt, spell–spelt, and some that might potentially elicit single-form analogies. Their results (discussed further in Section 4) showed that the different IOR categories were indeed treated differently by participants. 2.3 Evaluating models To go beyond coarse-grained analysis based on the IOR categories, both A&H and K&C evaluate their models by correlating model output with the human data at the level of individual past tense forms. Correlations are computed between the human data (either production probabilities or ratings) and the model scores for each form. The regulars and irregulars are treated separately. That is, the irregular correlation value is computed by considering the average human production probability (or rating) for each suggested irregular past tense, and comparing these with the model scores for those same forms. The correlation for regulars is computed analogously. Regulars and irregulars are treated separately because the scores for regulars are nearly always larger, so if all forms were considered at once, a baseline that simply assigned (say) 1 to regulars and 0 to irregulars would already achieve a high correlation with humans. 
We initially follow K&C in computing the Spearman (rank) correlation against the production probabilities, and later also examine Pearson (linear) correlations and ratings data. 3 Methods 3.1 Model and hyperparameters We adopt the encoder-decoder architecture used by K&C, as well as their implementation framework and hyperparameters. Encoder-decoder models are a type of recurrent neural network (RNN) introduced for machine translation (Sutskever et al., 2014) but also often used for other sequence-tosequence transductions, such as morphological inflection and lemmatization (Kann and Sch¨utze, 2016; Bergmanis and Goldwater, 2018). The encoder is an RNN that reads in the input sequence (here, a sequence of characters representing the phonemes in the present tense verb form) and creates a fixed-size vector representation of it. The decoder is another RNN that takes this vector as input and decodes it sequentially, outputting one symbol at each timestep (here, the phonemes of the past tense form). The ED model with attention (Bahdanau et al., 2015) is implemented in OpenNMT (Klein et al., 2017).2 It has two bidirectional LSTM encoder layers and two LSTM decoder layers, 300dimensional character embeddings in the encoder, and 100-dimensional hidden layers in the encoder and decoder. The Adadelta optimizer (Zeiler, 2012) is used for training, with the default beam size of 12 for decoding. The batch size is 20, and dropout is applied between layers with a probability of 0.3. Except where otherwise noted below, all models were trained for 100 epochs. 3.2 Training data To compare our results to both A&H and K&C, we use their corresponding training sets, both based on data from CELEX (Baayen et al., 1995). A&H’s training data contains all verbs listed in CELEX with a lemma frequency of 10 or more (4253 verbs, 218 of which are irregular). We use A&H’s American English IPA phonemic transcriptions, to match the nonce word experiment (which was carried out with American English speakers), and also follow them in using the nonce words as the unseen test set rather than creating dev/test splits from the CELEX data. As argued by A&H, adult English speakers will have been exposed to all of the real verbs many times and would be able to correctly produce the past tense of all of them. Adults’ generalization to nonce words is therefore predicated on their knowledge of this entire training set (including, crucially, all of the irregular forms). For our second training set, we obtained the data from K&C, which is a subset of A&H’s: it contains 4039 verbs, 168 of which are irregular—that is, 50 real irregular verbs are missing. Examples of verbs that are missing from the K&C data include do–did and use–used. K&C also randomly divided their data into training, development, and test sets, but we weren’t able to obtain these splits, so (since we are using the nonce words for test data) we simply use all 4039 verbs as training data. We include results using the K&C’s data mainly to allow closer (though still not exact) comparison with their work, but we feel that A&H’s training data, which includes all the irregulars, more accurately reflects adult linguistic exposure. It has been argued that morphological generalization in humans is governed by type frequencies 2In early tests, we also tried the Nematus toolkit with hyperparameters following (Kann and Sch¨utze, 2016; Bergmanis and Goldwater, 2018); the pattern of results was similar. 
3871 Rank nold /n"oUld/ Probability 1 nolded /n"oUld@d/ 0.9869 2 nelt /n"Elt/ 0.0120 3 neelded /n"i:ld@d/ 0.0004 4 nelded /n"Eld@d/ 0.0004 5 neld /n"Eld/ 0.0001 Rank murn /m"@rn/ Probability 1 murned /m"@rnd/ 0.8636 2 murnt /m"@rnt/ 0.1363 3 murn /m"@rn/ <0.0001 4 murnaid /m"@rneId/ <0.0001 5 murnoo /m"@rnu:/ <0.0001 Table 1: Top 5 outputs from two sample beams, for the nonce words nold and murn. Past tenses suggested by A&H are bolded. For nold, one suggested past tense form, nold /n"oUld/, is missing from the top 5. rather than token frequencies (Bybee and Thompson, 1997; Pierrehumbert, 2001). Modelling evidence, including from A&H, also supports the idea that token frequencies are ignored or severely downweighted (i.e., effectively using log frequencies: O’Donnell, 2015; Goldwater et al., 2006). We therefore follow A&H and K&C in training our models on the list of distinct word types, with each type occurring once in the training data. 3.3 Evaluation We report three different evaluation measures. First, we compute training set accuracy: the percentage of verbs in the training data for which the model’s top-ranked output is the correct past tense form. This is largely a sanity check and test of convergence: a fully-trained model of adult performance should have near-perfect training set accuracy. Next, as described in Section 2.3, we report Spearman’s rank correlation (ρ) of the model’s probabilities for the various nonce past tense forms with the human production probabilities. The probability for each suggested past tense form was obtained by forcing the model to output that form (e.g., providing scride as input and forcing it to output scrid). This made it possible to get probabilities for forms that did not occur in the beam (the list of most likely forms output by the model). Finally, we introduce a third measure, motivated further in Section 4.1, complete recall@5: CR@5 = 1 n × n X i=1 [Si ⊆Bi] (1) where n is the total number of nonce verbs, Si Data all regular irregular K&C 99.79 (0.05) 99.92 (0.04) 96.90 (1.06) A&H 99.51 (0.04) 99.86 (0.07) 92.98 (1.18) Table 2: Mean training set accuracy (in %, with standard deviations in brackets), averaged over 10 runs for each training set with different random seeds. Oracle accuracy is 99.85% on the K&C data and 99.55% on the A&H data, due to homophones and forms with multiple past tenses. In order to do better on irregulars, the model would have to get more of the regulars wrong. is the set of A&H’s suggested past tense forms for verb i, Bi is the set of the top five verbs in the model’s beam for i, and [Si ⊆Bi] = 1 if all verbs from Si appear in Bi, and 0 otherwise. For example, a model which only processed the two verbs in Table 1 would have a CR@5 of 0.5, since the beam includes all suggested past tenses for murn (murned, murnt), but not for nold (nolded, nold, neld).3 4 Experiments 4.1 Experiment 1: Model variability Our first experiment aims to replicate K&C’s results showing that (a) the model is able to produce the past tense forms of training verbs with nearperfect accuracy, and (b) its correlation with human data on the nonce verb test set is higher than that of A&H’s model. In K&C’s paper, these results were based on a single trained model. Here we trained 20 models (10 on each training set) initialized with different random seeds. Accuracy Table 2 lists the mean and standard deviation of training set accuracy for each of the two training sets. 
It is not possible to get 100% accuracy because the training sets contain some homophones with different past tenses (e.g., write–wrote and right–righted), and some verbs which have two possible past tenses (e.g., spring–sprung and spring– sprang). Nevertheless, the models get very close to the best possible accuracy, confirming K&C’s finding that they learn both regular and irregular past tenses of previously seen words within 100 epochs. Example convergence plots are shown in 3Not all of A&H’s suggested forms were actually produced by participants, but all of them seem plausible and we felt that a good model should rank them higher than most other potential past tenses, i.e., they should be included within a small beam size. Indeed, in cases where they are not (e.g., nold in Table 1) we do typically see much less plausible forms (such as neelded) included in the beam. 3872 0 20 60 100 0 40 80 Epochs Accuracy (%) A&H training data Reg All Irreg 0 20 60 100 Epochs K&C training data Figure 1: Accuracy values on the training set during training for one model per training set. 0.2 0.3 0.4 0.5 A&H K&C Training dataset Correlation with regulars 0.1 0.2 0.3 0.4 A&H K&C Training dataset Correlation with irregulars A&H rules K&C neural our model Figure 2: Spearman correlation coefficients between model scores and human production probabilities, using the A&H and K&C training data. Values reported by K&C and A&H are shown in addition to those of our models. Horizontal jitter is added for readability. Figure 1, illustrating that the models learn regular verbs very quickly, and irregular verbs more slowly, but both are learned well after 60–80 epochs. Correlation Despite having consistently high accuracy on real words, Figure 2 shows that models with different random initializations vary considerably in their correlation with human speakers’ production probabilities on nonce words, from 0.15 to 0.56 for regulars, and from 0.23 to 0.41 for irregulars. K&C’s reported results are at the high end of what we obtained, suggesting that they are likely not representative. On the other hand, we were concerned that the variability in the correlation measure might be due to an artefact: the vast majority of the beams returned by the model assign very high probability (> 98%) to the top item and little mass to anything else (as in the first example in Table 1).4 Since the 4The skewedness of the beams is likely because of the training/testing scenario, where the model is effectively asked to do different tasks: at training time, it is trained to produce one correct past tense, while at test time, it’s expected to produce a probability distribution over potential nonce past tenses. We could surely produce better matches to the human probability distributions by training directly to do so, but that wouldn’t 0.30 0.40 Complete recall@5 Training dataset A&H K&C Figure 3: Complete recall@5 for 20 models with different random seeds (10 with each training dataset). Horizontal jitter is added for readability. Number of models 0 10 Figure 4: The number of models (of the 10 trained on the A&H dataset) which agree on the second-place past tense form. The X-axis shows 281 different past tense forms (for 59 nonce words in the present tense), and the Y-axis shows, for each form, how many times a model places it in the second position in the beam. 
correlation measure is computed across different nonce forms, tiny changes in the beam probabilities of one nonce verb could change the ranking of (say) its regular past with respect to the regular past of another nonce word, even if the relative ranking of forms within each nonce’s beam stayed the same. CR@5 and second best forms The above observation motivated the CR@5 measure (Section 3.3). Rather than measuring the relative probabilities of past forms across different verbs, CR@5 considers the relative rankings of different past forms for each verb. However, CR@5 also yielded unstable results: 39–47% on A&H’s data, and 29–44% on K&C’s data, as shown in Figure 3. As a final exploration of the models’ instability across different simulations, we looked at how often the models agree with each other on the verb occupying the first and the second position in the beam. While there is very high agreement on the most likely form (top of the beam) across the simulations—usually a regular past tense—very few forms in the second position are the same across simulations (see Figure 4). make sense as a cognitive model, since human learners are exposed only to correct past tenses, not to distributions. 3873 % of human responses 0 100 IOR both IOR Irreg. IOR neither IOR Reg. burnt single-form analogy other irreg 2 irreg 1 reg % of model output 0 100 bˈaɪz dˈaɪz drˈaɪs flˈɪd͡ ʒ frˈoʊ ɡˈeɪr ɡlˈɪp rˈaɪf stˈɪn stˈɪp blˈɪɡ t͡ ʃˈeɪk drˈɪt flˈiːp ɡlˈiːd ɡlˈɪt kwˈiːd plˈɪm skrˈaɪd splˈɪŋ tˈiːp ɡˈuːd nˈʌŋ pˈæŋk prˈiːk rˈæsk ʃˈɪlk tˈärk tˈʌŋk trˈɪsk nˈoʊld blˈeɪf brˈɛd͡ ʒ t͡ ʃˈuːl dˈeɪp ɡˈɛz nˈeɪs spˈæk stˈaɪr tˈɛʃ wˈɪs ɡrˈɛl mˈərn ʃˈərn skˈɔɪl skˈɛl skwˈɪl snˈɛl kˈɪv lˈʌm pˈʌm ʃˈiː zˈeɪ flˈɛt ɡrˈaɪnt rˈaɪnt ʃˈaɪnt t͡ ʃˈaɪnd Figure 5: Percentage of regular, irregular, and “other” responses produced by humans (top) and the model (bottom). Each of the six blocks corresponds to a different category of nonce words (see Section 2.2). Summary To recap, we find similar training set accuracy to what K&C reported, but the correlation scores between the model and the human data are generally lower, and the model exhibits unstable behaviour across different simulations. However, the unstable behaviour can potentially be accounted for, if each simulation is interpreted as an individual participant rather than as a model of the average behaviour of all participants. In that case, we should aggregate results from multiple simulations in order to compare them to the human results, since production probabilities from A&H’s experiment were obtained by aggregating data over multiple participants. The next experiment examines this alternative interpretation. 4.2 Experiment 2: Aggregate model To simulate A&H’s production experiment with each simulation as one participant, we trained 50 individual models on the A&H training data5 using the same architecture and hyperparameters as before. We then sampled 100 past tense forms for each verb from each model’s output probability distribution. Each of the 5000 output forms (100 each from 50 simulated participants) was categorized either as (a) the verb’s regular past tense form, (b–c) the first or second irregular past tense form suggested by A&H, or (d) any other possible form. For the aggregate model, the correlation measure is the only evaluation that makes sense. For regulars, correlation with the human production proba5In the absence of clear differences between the model’s performance on A&H’s vs. K&C’s data in Experiment 1, we only use the more complete A&H dataset henceforth. 
bilities was higher than in the previous experiment (0.45 vs. an average of 0.28 in Experiment 1), but for irregulars it was lower (0.19 vs. 0.22 in Experiment 1). The differences between the humans and aggregate model are clear from Figure 5, which shows the distribution of various past tense forms for both model and humans. For example, in only one case did the humans produce an irregular more frequently than the regular (no-change past chind for present chind), whereas there are several cases where the aggregated model does so. Moreover, for the word chind itself, the model prefers chound rather than chind. In the previous experiment, we saw that individual models often rank implausible past tenses higher than plausible ones. However, we see here that on aggregate nearly all the model’s proposed past tenses are those suggested by A&H. Apparently, the unstable beam rankings wash out the implausible forms, i.e., the plausible forms on average occur nearer the top of the beam than any particular implausible form. In fact, the model actually produces fewer “other” forms than the humans. We also looked at the model’s average production of regular and suggested irregular forms for each of the six categories in Figure 5. The results, shown in Figure 6, indicate that the model does capture the main trends seen in humans across these categories, but overall it is more likely to produce irregular forms. Together with the low overall correlation to human results and obvious differences at the fine-grained level, these results suggest that there are serious weaknesses in the ED model, even when results are aggregated across simulations. 3874 0.7 1 Regular production prob humans our model 0 0.3 Irregular production prob IOR reg IOR both IOR irreg IOR neither burnt single form analogy Figure 6: Mean production probabilities for regulars (top) and A&H’s suggested irregulars (bottom) in each of A&H’s categories of nonce words, for humans and for the aggregated ED model. 5 Further analyses 5.1 Is the model overfitting? We began by assuming that models should be trained at least until they achieve perfect performance on the training set, but perhaps 100 epochs is too much, and the model is just overfitting. Training for less time might produce less skewed beam probabilities, more stable beam rankings, and perhaps better correlations with the human data. To investigate this possibility, we took the 10 models originally trained on the A&H dataset and computed the correlation with human data for regulars and irregulars after every 10 epochs of training. The highest correlation is achieved after only 10 epochs (0.47 for regulars and 0.50 for irregulars) and the beam probabilities are indeed less skewed: the average probability of the top ranked output is 0.92 after 10 epochs, vs. 0.97 after 100 epochs. However, the models average only 6.5% accuracy on the real irregular words after 10 epochs, so it is difficult to argue that these are good models of human behaviour.6 It seems that the ED model displays a fundamental tension between correctly modelling humans on real words and nonce words. 5.2 Rating data and correlations We have so far evaluated all models against human production data. 
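Both the production-probability and the rating evaluations discussed in this section come down to correlating two paired score vectors. As a minimal illustration (the numbers below are invented, not the data used in the paper), the Spearman and Pearson coefficients can be computed with standard routines:

```python
# Minimal sketch of the correlation evaluation: pair each candidate past
# tense form with a human probability and a model score, then correlate.
# All values here are made up for illustration only.
from scipy.stats import spearmanr, pearsonr

human_production_prob = [0.80, 0.10, 0.65, 0.30, 0.05]  # one value per form
model_score = [0.95, 0.02, 0.70, 0.20, 0.10]

rho, _ = spearmanr(human_production_prob, model_score)
r, _ = pearsonr(human_production_prob, model_score)
print(f"Spearman rho = {rho:.2f}, Pearson r = {r:.2f}")
```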
However, the A&H model outputs unnormalized scores, so arguably it makes more 6Early exposure to more irregulars could help in principle, so we also tried training the models on token or log token frequencies rather than type frequencies, but the resulting models’ correlations with production probabilities were no higher than models trained on type frequencies (the same for log tokens, and lower for tokens). Data Cor. Verbs A&H Individ. Agg. Production ρ reg. .35 .32 (.12) .45 irreg. .36 .31 (.05) .19 r reg. .62 .16 (.09) .30 irreg. .14 .16 (.03) .17 Rating ρ reg. .55 .32 (.09) .43 irreg. .57 .39 (.08) .31 r reg. .71 .34 (.07) .40 irreg. .48 .35 (.06) .40 Table 3: Correlations (using Spearman’s ρ and Pearson’s r) between the models’ output probabilities vs. human production probabilities and rating data. The data for the individual model is an average over 10 simulations (standard deviation shown in brackets). Highest correlation in each line is shown in bold. sense as a model of ratings. A&H also originally evaluated it using Pearson correlation. For completeness we report in Table 3 the correlations for all models on both ratings data and production data, using both Spearman and Pearson coefficients. We find that the A&H model does score better against ratings data, although surprisingly the ED models do too. More importantly, though, the A&H model fits the human data best on 6 out of 8 measures. 5.3 What is the model learning? To examine the representations acquired by the model, we extract vectors from the encoder’s hidden state. As the encoder is a bidirectional LSTM, we concatenate the two states at the last time step (after training on the A&H data). Figure 7a shows a t-SNE visualization of hidden state vectors for both real and nonce verbs in one of our simulations. The model clearly groups the verbs into small clusters, and Figures 7b–c show that this clustering is based on the verbs’ trailing phonemes, including some structure withing the clusters: e.g., strip /str"Ip/, grip /gr"Ip/, and trip /tr"Ip/ are next to each other in Figure 7b, and so are clip /kl"Ip/, flip /fl"Ip/, and glip /gl"Ip/. It is not so clear, however, how the model decides on whether to produce a regular or an irregular form for nonce verbs. We do see some evidence in Figure 7c that nonce verbs similar to regular English verbs yield a regular form (note the regular neighbours of nung /n"2N/), and the same holds for irregulars (note the irregular forms around spling /spl"IN/, for which the model produced splung). However, the model also produces an irregular form (stup /st"2p/) for stip /st"Ip/, which is clearly surrounded by regular En3875 (b) (c) -100 -50 0 50 100 -100 -50 0 50 100 English reg. English irreg. Nonce reg. Nonce irreg. (a) All verbs. strˈɪp tˈaɪp dˈɪp sˈɪp tˈɪp rˈɪp ɡrˈɪp drˈɪp ɪkwˈɪp skˈɪp trˈɪp flˈɪp t͡ʃˈɪp klˈɪp ʃˈɪp nˈɪp aʊtstrˈɪp zˈɪp snˈɪp kwˈɪp ɡrˈaɪp ɡlˈɪp stˈɪp 7 8 9 10 40 41 42 43 (b) Zooming in on /st"Ip/. brˈɪŋ hˈæŋ rˈɪŋ sˈɪŋ swˈɪŋ sprˈɪŋ klˈɪŋ flˈɪŋ stˈɪŋ slˈɪŋ strˈɪŋ oʊvərhˈæŋ bəlˈɔːŋ lˈɔːŋ bˈæŋ proʊlˈɔːŋ klˈæŋ θrˈɔːŋ hərˈæŋ wˈɪŋ twˈæŋ rˈɔːŋ pˈɪŋ bˈʌŋ splˈɪŋ nˈʌŋ 46 48 50 52 54 42 44 46 48 50 52 (c) Zooming in on /n"2N/. Figure 7: A t-SNE plot of encoder state vectors for regular and irregular verb forms. (a) shows an an overview of all (real and nonce) verbs, and (b) and (c) zoom in on the boxed areas in (a). 
d t ə ɪ r s n l k p eɪ m əræ ɛ boʊ aɪ iː f ʌ ä z v ɡ uː w d͡ ʒ ʃ t͡ ʃ h ɔː aʊ j ŋ ʊ ɔɪ θ ðʒ -10 0 10 20 -20 -10 0 10 20 a a a coronal stop voiced voiceless Figure 8: PCA plot of character-level (phoneme) vectors extracted from the decoder’s hidden state. The phonemes are coloured based on the three different regular past-tense suffixes they would be followed by. glish verbs in Figure 7b. We also tested whether the clustering by trailing phonemes is simply an artefact, by training another model on data where we reversed the order of the input phonemes in all cases (e.g., /w"IS/– /w"ISt/ [wish–wished] becomes /SI"w/–/tSI"w/). This time, verbs were grouped based on their leading phonemes—that is, the endings of the original verbs—suggesting that the model finds the regularities in the data regardless of the order of phonemes. Finally, we investigated the model’s phoneme representations, expecting a clustering corresponding to the three types of phonemes that trigger different endings in regular past tense forms: /-Id/ after coronal stops /t/ and /d/, /-d/ after voiced consonants and vowels, and /-t/ after voiceless consonants. We extract character-level vectors from the decoder hidden state, apply PCA (which worked better than t-SNE in this case) and visualize the resulting vectors. Figure 8 shows that the expected pattern has emerged (except for /h/ in the ‘voiced’ cluster, but this phoneme never appears at the end of English words). 6 General discussion and conclusions Our results confirm that, unlike earlier neural net models, the ED model has no trouble learning the past tense forms of verbs it is trained on. We found, however, that its behaviour on nonce verbs does not correlate with human experimental data as well as K&C’s results implied, and indeed not as well as that of A&H’s much earlier rule-based model. One issue in particular seems to be overproduction of irregulars, which the model consistently prefers to regulars for four verbs (7% of considered nonce verbs), while humans nearly always prefer the regular form. This was an issue with earlier neural net models as well (Plunkett and Juola, 1999). On the other hand, when the model 3876 outputs something other than the regular form, its choices are plausible. This was not true for earlier models: Plunkett and Juola’s model often chose the wrong regular suffix (with incorrect voicing in the final phoneme), and Rumelhart and McClelland’s (1986) model failed to produce regular endings for nonce verbs (Prasada and Pinker, 1993; Marcus, 1998). Here, we see from both our model’s output and its internal representations that it has correctly identified the necessary voicing distinctions and that nonce words trigger similar representations and behaviour to real words. In future, a stricter test might use nonce words that are intentionally less similar to real words (e.g., the example from Prasada and Pinker (1993): to out-Gorbachev). It is also worth pointing out that the ED model, unlike A&H’s model and many earlier connectionist models, is fed raw phonemes (rather than the phonemes’ distinctive features) as input. Although it learns some of the relevant features anyway, it would be interesting to see whether its behaviour becomes more human-like if the correct features are provided in the input. Although our paper has revealed a number of weaknesses of the ED model, we do agree with K&C that neural network-based cognitive models of inflection deserve re-evaluation in light of recent technical advances. 
There are many other potential architectures and modelling decisions that could be explored, as well as other behavioural data such as developmental patterns (Blything et al., 2018; Ambridge, 2010) and inflection in other languages (e.g., Clahsen et al., 1992; Ernestus and Baayen, 2004). As noted by Seidenberg and Plaut (2014), models’ failures as well as successes can be informative, and we hope that our detailed exploration of the ED model’s behaviour will inspire future developments in these models, both for cognitive modelling and NLP. Acknowledgements This work was supported in part by a James S McDonnell Foundation Scholar Award (220020374). References Adam Albright and Bruce Hayes. 2003. Rules vs. analogy in English past tenses: a computational/experimental study. Cognition, 90:119–161. Ben Ambridge. 2010. Children’s judgments of regular and irregular novel past-tense forms: New data on the English past-tense debate. Developmental Psychology, 46:1497–1504. R. Harald Baayen, Richard Piepenbrock, and Leon Gulikers. 1995. CELEX2 LDC96L14. Web Download. Linguistic Data Consortium, Philadelphia, PA. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA. William Bechtel and Adele Abrahamsen. 1991. Connectionism and the mind: An introduction to parallel processing in networks. Basil Blackwell, Oxford, England. Toms Bergmanis and Sharon Goldwater. 2018. Context sensitive neural lemmatization with Lematus. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1391–1400. Association for Computational Linguistics. Ryan P. Blything, Ben Ambridge, and Elena V.M. Lieven. 2018. Children’s acquisition of the english past-tense: Evidence for a single-route account from novel verb production data. Cognitive Science, 42:621–639. Joan Bybee and Sandra Thompson. 1997. Three frequency effects in syntax. In Proceedings of the 23rd Annual Meeting of the Berkeley Linguistics Society, pages 378–388. Berkeley Linguistics Society, Berkeley, CA. Harald Clahsen, Monika Rothweiler, Andreas Woest, and Gary F. Marcus. 1992. Regular and irregular inflection in the acquisition of German noun plurals. Cognition, 45:225–255. Jeffrey Elman, Elizabeth Bates, Mark H. Johnson, Anette Karmiloff-Smith, Domenico Parisi, and Kim Plunkett. 1996. Rethinking innateness: A connectionist perspective on development. MIT Press, Cambridge, MA. Mirjam Ernestus and R. Harald Baayen. 2004. Analogical effects in regular past tense production in Dutch. Linguistics, 42:873–903. Sharon Goldwater, Thomas L. Griffiths, and Mark Johnson. 2006. Interpolating between types and tokens by estimating power-law generators. In Advances in NIPS-18, pages 459–466. Curran Associates, Inc., Red Hook, NY. Katharina Kann and Hinrich Sch¨utze. 2016. MED: The LMU system for the SIGMORPHON 2016 shared task on morphological reinflection. In Proceedings of the 14th Annual SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 62–70. Association for Computational Linguistics, Stroudsburg, PA. 3877 Christo Kirov and Ryan Cotterell. 2018. Recurrent neural networks in linguistic theory: Revisiting Pinker and Prince (1988) and the past tense debate. Transactions of the Association for Computational Linguistics, 6:651–665. 
Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander Rush. 2017. OpenNMT: Opensource toolkit for neural machine translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, System Demonstrations, pages 67–72. Association for Computational Linguistics. Tal Linzen. 2019. What can linguistics and deep learning contribute to each other? Response to Pater. Language, 95:99–108. Tal Linzen and Brian Leonard. 2018. Distinct patterns of syntactic agreement errors in recurrent networks and humans. In Proceedings of the 40th Annual Conference of the Cognitive Science Society, pages 692– 697. Cognitive Science Society, Austin, TX. Gary F. Marcus. 1995. The acquisition of the English past tense in children and multilayered connectionist networks. Cognition, 56:271–279. Gary F. Marcus. 1998. Can connectionism save constructivism? Cognition, 66:153–182. Michael McCloskey. 1991. Networks and theories: The place of connectionism in cognitive science. Psychological Science, 2:387–395. Timothy J. O’Donnell. 2015. Productivity and reuse in language: A theory of linguistic computation and storage. MIT Press, Cambridge, MA. Janet Pierrehumbert. 2001. Stochastic phonology. Glot International, 5:195–207. Steven Pinker and Alan Prince. 1988. On language and connectionism: Analysis of a parallel distributed processing model of language acquisition. Cognition, 28:73–193. Steven Pinker and Michael T. Ullman. 2002. The past and future of the past tense. Trends in Cognitive Sciences, 6:456–463. Kim Plunkett and Patrick Juola. 1999. A connectionist model of English past tense and plural morphology. Cognitive Science, 23:463–490. Sandeep Prasada and Steven Pinker. 1993. Generalisation of regular and irregular morphological patterns. Language and Cognitive Processes, 8:1–56. David E. Rumelhart and James L. McClelland. 1986. On learning the past tenses of English verbs. In James L. McClelland, David E. Rumelhart, and the PDP Research Group, editors, Parallel distributed processing: Explorations in the microstructure of cognition, chapter 18, pages 216–271. MIT Press, Cambridge, MA. Mark S. Seidenberg and David C. Plaut. 2014. Quasiregularity and its discontents: The legacy of the past tense debate. Cognitive Science, 38:1190– 1228. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, pages 3104–3112. Curran Associates, Inc., Red Hook, NY. Matthew D. Zeiler. 2012. ADADELTA: An adaptive learning rate method. Computing Research Repository, arXiv:1212.5701.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3878–3887 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3878 A Spreading Activation Framework for Tracking Conceptual Complexity of Texts Ioana Hulpus¸1, Sanja ˇStajner2 and Heiner Stuckenschmidt1 1Data and Web Science Group, University of Mannheim, Germany 2 Symanto Research, N¨urnberg, Germany {ioana,heiner}@informatik.uni-mannheim.de [email protected] Abstract We propose an unsupervised approach for assessing conceptual complexity of texts, based on spreading activation. Using DBpedia knowledge graph as a proxy to long-term memory, mentioned concepts become activated and trigger further activation as the text is sequentially traversed. Drawing inspiration from psycholinguistic theories of reading comprehension, we model memory processes such as semantic priming, sentence wrap-up, and forgetting. We show that our models capture various aspects of conceptual text complexity and significantly outperform current state of the art. 1 Introduction Reading comprehension has long been linked to processes over semantic memory, such as semantic priming through spreading activation (Anderson, 1981; Collins and Loftus, 1975; Neely, 1991; Gulan and Valerjev, 2010). While psycholinguistic literature abounds in research and demonstration of such processes (Just and Carpenter, 1980; Kutas and Hillyard, 1984; Carroll and Slowiaczek, 1986), there is a gap in understanding if they can be modeled in an automated way for capturing the cognitive load required by texts. At the same time, the recent advances in the publication of encyclopedic knowledge graphs provide an unprecedented opportunity for modeling human knowledge at scale. We focus on conceptual complexity which, as opposed to lexical and syntactic complexity (Vajjala and Meurers, 2014; Ambati et al., 2016), has received very little attention so far. Conceptual complexity accounts for the background knowledge necessary to understand mentioned concepts as well as the implicit connections that the reader has to access between the mentioned concepts in order to fully understand a text. It plays an important role in making texts accessible to children, non-native speakers, as well as people with low literacy levels or intellectual disabilities (Arf´e et al., 2017). Apart from being one of the main factors for understanding the story, conceptual complexity also influences the readers’ interest in the text: readers who lack relevant background knowledge have difficulties in understanding conceptually complex texts (Arf´e et al., 2017; Benjamin, 2012), while high-knowledge readers need some obstacles (more conceptual complexity) to maintain their interest (Arf´e et al., 2017; Benjamin, 2012; Kalyuga et al., 2003). Therefore, correctly estimating conceptual complexity of a text, and offering a reader a text of an appropriate cognitive load, is of utmost importance for: (1) ensuring correct understanding of a text; (2) maintaining the readers’ interest; and (3) promoting deeper-level processing and enhancing the readers knowledge. In this paper, we are building on top of the psycholinguistic findings that words are recognized faster if preceded by words related in meaning (semantic priming) (Gulan and Valerjev, 2010), and we adopt spreading activation theory as one of the main theories that tries to explain how priming occurs. 
Specifically, we introduce a framework that considers sequential text reading and models two simultaneous processes: (i) a spreading activation process that runs over long-term memory (approximated by the knowledge graph), activates concepts and transfers them to working memory, and (ii) a process that tracks concepts and their activation in working memory and subjects them to forgetting. We use the activation values of concepts in working memory at different points in the text in order to assess the amount of priming triggered by the text. Our hypothesis is that the higher these activation values (more priming), the lower the conceptual complexity. 3879 We validate our framework through extensive experiments, and show that the models we propose on top of it outperform state-of-the-art measures that aim to predict conceptual complexity. 2 Related Work In spite of its real-world importance, automatic assessment of conceptual complexity of texts has not received much attention so far. A few approaches have been proposed, but most of them are either not freely available, or have not been tested on large corpora (see (Benjamin, 2012) for the extensive list of approaches and their shortcomings). DeLite (vor der Br¨uck et al., 2008) software and Coh-Metrix (Graesser et al., 2004), for example, do not have any features related to conceptual clarity, which would measure ambiguity, vagueness, and abstractness of a concept, or the level of necessary background knowledge. From this perspective, the work of ˇStajner and Hulpus¸ (2018) is the only work that attempts to automatically measure conceptual complexity of texts. They propose a supervised method using a set of graph-based features over DBpedia knowledge graph. In our experiments, we use these features as state-of-the-art for comparison with our approach. In the cognitive science domain, the work most related to ours is in the direction of capturing knowledge in cognitive architectures (Lieto et al., 2018). Salvucci (2014) proposes the use of DBpedia as a source of declarative knowledge to be integrated with the ACT-R cognitive architecture (Anderson and Lebiere, 1998). They implement a very basic spreading activation model for scoring facts in the knowledge base for answering natural language, factual questions such as “What is the population of Philadelphia?”. Several other approaches have been proposed for extending ACTR with knowledge and reasoning (Ball et al., 2004; Oltramari and Lebiere, 2012), but none of them aim to assess the complexity of texts. With respect to spreading activation, it has long been adopted as a methodology for information retrieval (Crestani, 1997), used for document summarization (Nastase, 2008), document similarity (Syed et al., 2008), as well as cross-domain recommendation (Heitmann and Hayes, 2016), among others. Nevertheless, there is no prior attempt to apply spreading activation to the recently developed encyclopedic knowledge graphs with the purpose of modeling reading comprehension. This paper fills in this gap and shows that pairing spreading activation with other working memory processes (such as forgetting) can result in models that accurately assess conceptual complexity of a document. 3 Framework for Unsupervised Assessment of Conceptual Complexity Our framework tracks the activation of concepts in working memory during reading processes. 
We consider an encyclopedic knowledge graph, DBpedia1, as a proxy to long-term memory over which spreading activation processes run and bring concepts into the working memory. Text is processed sequentially, and each mention of a DBpedia concept triggers a tide of spreading activation over the DBpedia knowledge graph. Once brought into working memory, the activated concepts are subject to a forgetting process which decays their activation as the text is being read. At the same time, concepts in working memory accumulate more activation as they are repeated, or as related concepts are mentioned. We track the cumulative activation (CA) of the mentioned concepts at different points in time: at encounter (AE), at the end of sentences (AEoS) and at the end of paragraphs (AEoP). We use these values to estimate the conceptual complexity of texts, under the overarching hypothesis that a higher activation of text concepts in working memory indicates more accessible texts. 3.1 Spreading Activation over DBpedia For the spreading activation (SA) process, we exploit the graph structure of DBpedia. Each DBpedia concept is a node in the knowledge graph (KG). Each triple <s, p, o> (short from <subject, predicate, object>) whose subject and object are DBpedia concepts, becomes a typed relation (or typed edge), that we denote with s p −→o. This way, the knowledge base is represented as a graph KG = (V, E, T, τ), where V is the set of concepts, E is the set of directed relations between the concepts and τ : E →T assigns a type in T to each edge in E. We denote by ρ(x) ⊂E the set of all relations of node x ∈V , and by nr(x) ∈V the neighbour of x through relation r ∈E. We denote by A(p)(c) the amount of activation node c has after pulse p, by A(p) out(c) the amount of activation node c outputs at pulse 1http://dbpedia.org 3880 p and A(p) in (c) the amount of activation that flows into node c at pulse p. The core idea common to all SA models in literature is that concepts become active and fire, spreading their activation to their neighbors in KG, who in turn fire and activate their neighbors and so on, until preset termination conditions are met. Therefore, the SA process consists of multiple iterations called pulses. In our model, a SA process is triggered whenever a concept is mentioned in the text (the seed concept), by setting its activation to 1.0, and that of all other nodes in V to 0.0. Formally, the initial conditions are A(0)(seed) = 1.0 and A(0)(i) = 0.0, ∀i ∈V, i ̸= seed. Then at pulse 1, the seed fires and the SA process starts. Formally, a SA model must define three functions: the output function, the input function and an activation function (Berthold et al., 2009; Crestani, 1997). In the following, we describe how we define these functions in order to study conceptual complexity of text. The output function defines how much activation is output by a concept at pulse p + 1, given its activation at current pulse p. To define this function, we use a distance decay parameter α, which decays the activation going out of each node exponentially with respect to p. Furthermore, our output function limits the concepts that fire to those concepts whose activation surpasses a given firing threshold β for the first time. Hence, α and β control the number of activated concepts and the intensity of their activation, providing potential for personalization according to memory capacity of the target audience. A(p+1) out (c) = α · fβ(A(p)(c)); (1) where fβ(x) = x if x ≥β; 0 otherwise. 
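For concreteness, a minimal sketch of the output function in Equation 1; the function and variable names are ours, not from a released implementation:

```python
# Output function (Equation 1): a node emits activation only once its
# current activation reaches the firing threshold beta, and the emitted
# amount is scaled down by the decay factor alpha.
def output_activation(current_activation: float, alpha: float, beta: float) -> float:
    fired = current_activation if current_activation >= beta else 0.0
    return alpha * fired

# e.g. the seed concept (activation 1.0) with alpha = 0.5, beta = 0.005
print(output_activation(1.0, alpha=0.5, beta=0.005))  # -> 0.5
```

Because the activation of a node at pulse p already contains contributions scaled by α in earlier pulses, repeated application of this update yields the exponential decay with pulse depth described above.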
The input function aggregates the amount of activation that flows into a node (called target node) given the activations flowing out of their neighbours (called source nodes). Drawing inspiration from spreading activation theory in cognitive science (Collins and Quillian, 1969; Collins and Loftus, 1975), we define accessibility of a target concept given a source concept based on how strong is the semantic relation between them, as well as by how familiar the target concept is to the reader. We define the strength of the semantic relation between two nodes as its exclusivity, introduced by Hulpus¸ et al (2015) and proven by Zhu and Iglesias (2017) to be particularly effective for computing semantic relatedness. Regarding the user’s familiarity with the target concept, in absence of user data we approximate it by the popularity of the target concept computed as the normalized node degree as pop(c) = log(D(c)) log(|V |−1), where D(c) denotes the number of neighbors of concept c. Formally, given the relation s p −→o, the accessibility scores of its endpoints s and o are computed as shown in Formula 2. acc(o, s p −→o) = excl(s p −→o) · pop(o) acc(s, s p −→o) = excl(s p −→o) · pop(s) (2) Consequently, although the edges of the KG are directed, activation can flow in both directions over the same edge. For example, given the relation Accordion isA −→Musical instrument, the mention of Accordion will activate the concept Musical instrument, and vice-versa. We can therefore generalize our notation, so that given a concept c and one of its relations (incoming or outgoing), r, c’s accessibility over the relation r is defined as accr(c) = excl(r) · pop(c). To make sure that the total amount of activation received from a concept by its neighbours, equals the amount of activation it outputs, we normalize the accessibility value as in Formula 3. accr(c) = accr(c) P r′∈ρ(c) accr′(nr′ ◦nr(c)); (3) Finally, the input function is defined in Formula 4. A(p+1) in (c) = X r∈ρ(c) A(p+1) out (nr(c)) · accr(c) (4) The activation function is the function that computes the activation of a concept after the pulse p + 1, given its activation at time p and its incoming activation at p + 1. Formally, A(p+1)(c) = A(p)(c) + A(p+1) in (c) (5) In order to avoid cycles in which concepts keep activating each other, we constrain the process so that a concept can only fire in the first pulse after its activation overpasses β. 3881 After firing, concepts become burned and although during the future pulses they can receive activation, they cannot fire again. When there are no more unburnt concepts with an activation higher than β in the graph, the SA process finishes. The activations resulted after this process are the activations that the nodes have after the last pulse, denoted in the following by SA(·). 3.2 The Working Memory Model At the beginning of a text, the working memory (WM) is considered empty. As the text is being read, the concepts activated through SA are brought into WM with an activation computed as a function of their SA activation φ(SA(c)). The WM keeps track of all activated concepts and aggregates the activation that they achieve from different SA processes. Furthermore, a forgetting process takes place on top of WM, that is triggered at every word. We therefore use words as time unit in our WM model. The forgetting process decays the activations of all concepts in WM with a preset decay factor at every encountered word (γw), and additionally at every end of sentence (γs) and at every end of paragraph (γp). 
Therefore, given the words at indices i and j (i < j), in paragraphs p_i and p_j (p_i ≤ p_j) and sentences s_i and s_j (s_i ≤ s_j) respectively, we denote the decay that occurs in the interval of time between the two words as γ_{i,j} and compute it as in Equation 6.

$\gamma_{i,j} = \gamma_w^{\,j-i} \cdot \gamma_s^{\,s_j-s_i} \cdot \gamma_p^{\,p_j-p_i}$ (6)

We define the cumulative activation CA(i)(c) of a concept c as its activation in the WM at the time of reading word i. It is defined recursively: it consists of the cumulative activation that the concept had at time i − 1, which has been subject to forgetting, together with the activation φ(SA(i)(c)) that it receives from the SA process that takes place at time i (see Equation 7).

$CA^{(i)}(c) = \gamma_{i-1,i}\,CA^{(i-1)}(c) + \phi(SA^{(i)}(c)) = \sum_{k=0}^{i} \gamma_{k,i}\,\phi(SA^{(k)}(c))$ (7)

We illustrate this process with an example (Table 1), which shows, after the given text has been linked to DBpedia, the seed concepts corresponding to each mention and the set of text concepts activated by the seed concepts. Figure 1 shows the evolution of the concepts' activation in WM, e.g., the concept db:Shelf (storage) becomes active when it is mentioned, with an activation of 1.0. We compute the activations in Figure 1 by defining the function φ as a constant function in which all concepts that become active in the SA process receive an activation of 1 in WM. We denote this function as φ1. In this example, we use values 0.85 and 0.7 for word (γw) and sentence decay (γs), respectively. The forgetting process is also illustrated: unless reactivated, the CA scores decrease with every token, and the decrease is stronger after each sentence. The figure also shows how the concepts' CAs get adjusted every time they are activated by mentioned concepts. For example, at the time "instruments" is mentioned, the concepts db:Musical instrument and db:Accordion increase their existing CAs, and db:Band (rock and pop) becomes active in WM.

3.3 Estimating Conceptual Text Complexity One of the hypotheses that we want to test is that our framework can capture the forward priming phenomenon. We therefore hypothesize that in simpler texts, target concepts already exist in WM before they are explicitly mentioned in the text. In other words, the higher CA(c) is at the encounter of concept c, the easier it is to comprehend the concept c and connect it to its context. Considering that c_i is the concept encountered in the text at time i, its activation at encounter (AE) is CA(i−1)(c_i), hence its CA at the time of the word that precedes it.

$AE(c_i) = CA^{(i-1)}(c_i)$ (8)

Furthermore, the psycholinguistic theory of backward semantic priming states that concepts can receive activation from concepts mentioned afterwards, in a way explaining their previous occurrence. To account for this, concepts keep accumulating CA in WM after they are mentioned. Moreover, in the psycholinguistic literature the end of a sentence has been shown to trigger a wrap-up process (Just and Carpenter, 1980), in which the information of the sentence is reviewed. Based on these insights, we hypothesize that in simpler texts, the concepts exhibit a higher CA at the end of the sentences / paragraphs they occur in than in more conceptually complex texts.
Formally, given a sentence s, and denoting 3882 Mention Seed Concept Activated text concepts shelves db:Shelf (storage) db:Shelf (storage) accordions db:Accordion db:Accordion, db:Musical instrument instruments db:Musical instrument db:Musical instrument, db:Accordion, db:Band (rock and pop) pictures db:Image db:Image Irish db:Irish people db:Irish people, db:The Pogues band db:Band (rock and pop) db:Band (rock and pop), db:Musical instrument, db:The Pogues The Pogues db:The Pogues db:The Pogues, db:Accordion, db:Irish people, db:Musical instrument, db:Band (rock and pop) wall db:Wall db:Wall Table 1: Example of text linked to DBpedia, together with the text concepts activated through spreading activation. (Text: The 2 shelves hold a selection of accordions and other instruments for sale. Pictures of the Irish band The Pogues hang on the wall.). db: stands for the DBpedia namespace http://dbpedia.org/resource/ 0 0.5 1 1.5 2 2.5 http://dbpedia.org/page/Shelf_(storage) http://dbpedia.org/page/Accordion http://dbpedia.org/page/Musical_instrument http://dbpedia.org/page/Image http://dbpedia.org/page/Irish_people http://dbpedia.org/page/Band_(rock_and_pop) http://dbpedia.org/page/The_Pogues http://dbpedia.org/page/Wall Figure 1: The change of CA in WM for the concepts in Table 1 as the text is sequentially traversed. the index of its last word as eos(s), we can define the sentence wrap-up activation (AEoS) of any concept c that is mentioned in s as in Formula 9. The paragraph wrap-up activation(AEoP) is defined similarly. AEoSs(c) = CA(eos(s))(c); AEoPp(c) = CA(eop(p))(c); (9) Therefore, each concept mention in the text produces three CA scores: activation at encounter (AE), activation at the end of the sentence it occurs in (AEoS), and activation at the end of the paragraph it occurs in (AEoP). Table 2 presents the scores of the defined CAs for the example in Table 1. Scores for AE are seen in Figure 1 on the word just before the target mention. Scores for AEoS are seen on the last word of the corresponding sentence, and scores for AEoP are seen at the end of the text. For assessing the conceptual complexity of a given document D that has been linked to the knowledge base KG, resulting in m concept mentions, we propose to compute the activations of the mentioned concepts and take the inverse of their average as in Equation 10. con comp(D) = m Pm i=1 activation(ci) (10) where ci is the concept that mention i is linked to, and activation(ci) is a placeholder for any linear combination of the AE(ci), AEoS(ci) and AEoP(ci). 4 Experiments 4.1 Dataset As ground truth, we use Newsela corpus which provides English news text on five complexity levels, the original story, and four manually simplified versions, gradually simplified by trained human editors under high quality control (Xu et al., 2015). As the target audience are children and second language learners, and texts are intended to maintain readers’ interest, texts are not only simplified at a linguistic level but also at a cognitive level. We report our experiments on 200 randomly sampled original texts from the English Newsela corpus, and for each of them, their four corresponding simplifications resulting in 1000 documents. All texts have been linked to DBpedia using KanDis (Hulpus¸ et al., 2015). 
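Before turning to the model settings, a schematic sketch of how the working-memory bookkeeping of Section 3 and the document score of Equation 10 fit together. The sketch assumes a φ1-style transfer, only word- and sentence-level decay, and the AEoS score, mirroring the worked example in Table 1; the input format and helper names are ours, not the actual system:

```python
# Schematic sketch of Equations 6-10 with phi_1 transfer and AEoS scores only.
# Each token is a pair (mentioned_concept_or_None, concepts_activated_by_SA).
from collections import defaultdict

def conceptual_complexity(sentences, gamma_w=0.85, gamma_s=0.7):
    wm = defaultdict(float)   # cumulative activation CA(c) in working memory
    aeos = []                 # one AEoS value per concept mention

    for sentence in sentences:
        mentioned_here = []
        for mentioned, activated in sentence:
            for c in wm:                 # forgetting: word-level decay gamma_w
                wm[c] *= gamma_w
            for c in activated:          # phi_1: each SA-activated concept gains 1.0
                wm[c] += 1.0
            if mentioned is not None:
                mentioned_here.append(mentioned)
        aeos.extend(wm[c] for c in mentioned_here)   # sentence wrap-up (AEoS)
        for c in wm:                     # extra decay at the sentence boundary
            wm[c] *= gamma_s
    # Equation 10: inverse of the average activation over all mentions
    return len(aeos) / sum(aeos) if aeos else 0.0

# toy usage with made-up concepts and activation sets
doc = [[("Accordion", {"Accordion", "Musical_instrument"})],
       [("Musical_instrument", {"Musical_instrument", "Accordion", "Band"})]]
print(conceptual_complexity(doc))
```

The full framework additionally applies a paragraph-level decay γp, supports the φA transfer function, and can combine AE, AEoS and AEoP in the activation term of Equation 10.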
3883 Mention shelves accordions instruments pictures Irish band The Pogues wall Concept db:Shelf (storage) db:Accordion db:Musical instrument db:Image db:Irish people db:Band (rock and pop) db:The Pogues db:Wall AE 0 0 0.72 0 0 0.26 1.57 0 AEoS 0.20 1.17 1.17 0.20 0.84 0.98 1.21 1 AEoP 0.02 0.66 1.04 0.20 0.84 0.98 1.21 1 Table 2: AE, AEoS and AEoP scores for the mentions from the example in Table 1. 4.2 SA Model Settings Settings of the output function. To explore our output function, we study how α (the graph decay) and β (the firing threshold) influence the performance of the models. We implemented models for α taking values from the set {0.25, 0.5, 0.75} and β taking values from the set {0.0025, 0.005, 0.0075, 0.01}, excluding the α = 0.75 and β = 0.0025 combination because the activated sub-graph becomes computationally too expensive. Settings of the input function. To explore our input function, we implemented four accessibility computation schemes that we name according to the exclusivity and popularity factors (ExclPop) being used or not: (No-No): acc(o, s p→o) = acc(s, s p→o) = 1.0; (Yes-No): acc(o, s p→o) = acc(s, s p→o) = excl(s p→o); (No-Yes): acc(o, s p→ o) = pop(o) and acc(s, s p→o) = pop(s); (Yes-Yes): following Equation 2. We transmit the intuition behind the input/output-function settings by reporting the average number of activated KG nodes per SA process over a sample of 1579 SA processes (triggered by 100 texts: 20 titles on all 5 levels). The results, shown in Table 3, indicate that exclusivity dramatically reduces the number of activated concepts. This is because exclusivity gives preference to less common relations, directing the activation in the graph to the few concepts that are strongly related to the seed. At the same time, the use of popularity in the absence of exclusivity has the opposite effect because popularity gives preference to the nodes with high degrees. When both exclusivity and popularity are used, only the high degree concepts that have very specific relations to the seed are activated. With respect to the output function parameters, as expected, more concepts are activated as α decreases and as β increases. Output function Input function settings (Excl-Pop) β α Yes-Yes Yes-No No-Yes No-No 0.0025 0.5 1,245 4,448 135,381 115,935 0.0025 0.25 1,106 2,977 95,142 75,491 0.005 0.75 1,003 3,428 108,086 90,082 0.005 0.5 1,002 2,576 85,895 68,318 0.005 0.25 979 2,190 51,190 34,921 0.0075 0.75 935 2,424 79,858 65,182 0.0075 0.5 917 2,111 60,840 45,175 0.0075 0.25 911 1,893 30,561 19,652 0.01 0.75 903 2,016 61,280 47,664 0.01 0.5 897 1,807 45,065 31,839 0.01 0.25 897 1,535 20,864 13,916 Table 3: Number of activated nodes in different SA settings. 4.3 WM Settings We experimented with multiple definitions for the φ function, and report values for two definitions, φA and φ1 as shown below: φA(SA(c)) = ( SA(c) if SA(c) < 1.0 pop(c) if SA(c) = 1.0 φ1(SA(c)) = 1 if SA(c) > 0.0 φA uses the activations computed in the SA process, except for the seed concept where it uses its popularity score. This ensures that concepts mentioned in text become active in WM according to their popularity. φA is therefore sensitive to the actual SA scores, and to the popularity of mentioned concepts. On the contrary, φ1 is only sensitive to changes in the set of activated concepts. 
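Written out, the two transfer functions amount to the following; pop(c) is the normalised-degree popularity from Section 3.1, and the naming is ours:

```python
# phi_A: pass the SA activation through, except for the seed concept
# (the node initialised to exactly 1.0), which is replaced by its popularity.
def phi_A(sa_activation: float, popularity: float) -> float:
    return popularity if sa_activation == 1.0 else sa_activation

# phi_1: any concept reached by spreading activation enters working memory
# with a fixed activation of 1.0, regardless of how much activation it received.
def phi_1(sa_activation: float) -> float:
    return 1.0 if sa_activation > 0.0 else 0.0
```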
We investigated six parameter combinations for the values of the token, sentence and paragraph decay factors (< γw, γs, γp >): no forgetting: <1, 1, 1>; no paragraph transfer: <1, 1, 0> - there is no forgetting within a paragraph, but complete forgetting takes place between paragraphs; no sentence transfer : <1, 0, 0> - there is no forgetting within a sentence, but complete forgetting takes place between sentences; weak decay: <0.995, 0.9, 0.8> - the CA of a concept drops by one order of magnitude every 6 paragraphs of original texts (assuming 3884 20 words per sentence, and 4 sentences per paragraph) and every 8 paragraphs for simple texts (assuming 12 words per sentence, 4 sentences per paragraph)2; medium decay: <0.85, 0.7, 0.5> - the CA decays one order of magnitude every 2 original sentences, and every 3 simple sentences respectively; strong decay: <0.75, 0.5, 0.25> - the CA decays one order of magnitude with every original sentence, and with every 2 simple sentences. Given the described SA and WM models, we implemented a total of 264 models in our framework. 4.4 State-of-the-art Measures We compare our system to the graph-based metrics proposed by ˇStajner and Hulpus¸ (2018), as well as the baseline (ent/men) that computes the number of unique concepts per mention. For completeness, we briefly describe these features: PageRank represents the average of the PageRank scores computed over the knowledge graph for all the mentioned concepts in the text; PairDistSent and PairDistPar compute the average shortest path distance over the knowledge graph between all pairs of concepts mentioned in a sentence or paragraph respectively, averaged over all sentences or paragraphs respectively; PairSemRelSent and PairSemRelPar are similar to the previous two measures, but instead of shortest path distances, they compute the exclusivity-based semantic relatedness (Hulpus¸ et al., 2015); DensitySent and DensityPar compute the average density of the subgraphs extracted such that they connect all the pairs of concepts mentioned in the sentences or paragraphs respectively, by paths of at most 4 hops; ConnCompSent and ConnCompPar are computed using the same subgraphs as those extracted in the previous measures, but comput2The numbers of 20 words per normal sentence and 12 words per simple sentence are taken from published statistics on the dataset we use (Xu et al., 2015). ing the number of connected components averaged over sentences or paragraphs respectively. All state-of-the-art features were computed over the same knowledge-graph (DBpedia) and using the same entity linker, KanDis (Hulpus¸ et al., 2015). Therefore, there are no biases stemming from the choice of those two in the comparison of our models with the state of the art. 4.5 Tasks and Evaluation Metrics For each of our models we calculated 4 scores, by plugging into Equation 10 the values for AE, AEoS, AEoP, and All (the sum of previous three). Based on these scores, we test our models on two tasks: (i) Ranking five versions of the same news story according to their conceptual complexity; (ii) Identifying the conceptually simpler of two versions of the same news story. In the ranking task, we compare our models’ ranking of the five versions to the ground truth ranking, by computing their Kendall’s Taub (Kendall, 1948), which calculates the difference between the number of concordant and discordant pairs, out of the number of all pairs, while also handling ties. 
Generally, Kendall Tau values range between −1 and 1, with 1 being obtained when there is perfect agreement between the compared rankings, −1 when one ranking is the reverse of the other, and 0 when the two compared rankings are independent. Hence for a random ranking we would expect Kendall Tau-b results close to 0. In the second task, we calculate accuracy as the percentage of text pairs in which the simpler version was predicted as less conceptually complex by our models. In this task, the scores can range from 0 to 1, with a value of 0.5 for random picks. 5 Results and Discussion We present our results starting with the WM settings because variations in these settings lead to the highest variations in the results. 5.1 Impact of WM Settings Table 4 presents the average Kendall Tau-b scores for the six WM decay settings, four types of activation scores and two φ functions. The first conclusion that stands out from this table is that there is a certain sweet spot when the WM decay is strong or medium, in which AEoS performs substantially better than all other scores 3885 WM decay φ1 φA AE AEoS AEoP All AE AEoS AEoP All strong decay -.15 .74 .34 .58 -.06 .76 .40 .64 medium decay -.08 .77 .36 .60 .03 .79 .41 .66 weak decay -.08 -.11 -.15 -.12 .16 .19 .10 .17 no forgetting -.08 -.11 -.08 -.09 .16 .15 .19 .17 no paragraph transfer .01 -.19 -.01 -.06 .19 .12 .20 .17 no sentence transfer -.44 -.46 NA NA -.28 -.05 NA NA Table 4: Kendall Tau-b scores averaged over the 200 titles for all models with corresponding reading decay. φ α β exclusivity-popularity y-y y-n n-y n-n φA any any .80 .80 .79 .79 φ1 0.50 0.0025 .81 .74 .70 .72 0.25 0.0025 .81 .75 .72 .74 0.75 0.005 .82 .76 .72 .73 0.50 0.005 .82 .76 .74 .75 0.25 0.005 .82 .76 .75 .76 0.75 0.0075 .82 .77 .75 .76 0.50 0.0075 .82 .77 .76 .76 0.25 0.0075 .82 .78 .76 .76 0.75 0.01 .82 .77 .76 .76 0.50 0.01 .82 .78 .77 .76 0.25 0.01 .82 .79 .77 .77 Table 5: Kendall Tau-b scores of the AEoS measures computed with WM medium decay setting averaged over all 200 titles. in all other settings. If the WM decay is either too strong (no sentence transfer) or too weak (no forgetting, weak decay and no paragraph transfer), all models perform poorly. The second finding that is revealed by this table is that AE achieves very poor results across all WM settings. On the one hand, this indicates that our experiments are not able to confirm the forward semantic priming hypothesis. On the other hand, given the good results of AEoS, our experiments confirm the backwards priming hypothesis and sentence wrap-up. 5.2 Impact of the SA Settings Table 5 shows the influence of the graph settings parameters in the ranking task. We focus on the best performing settings from Table 4, which measures AEoS using WM medium decay. Input function. Among all the SA settings, the definition of accessibility has the most influence. Our results show that the use of both exclusivity and popularity leads to AEoS scores that best correlate with our ground truth complexity levels. Output function. The choice of α and β parameters makes no noticeable difference for φA, while it makes a statistically significant difference3 for φ1. In the latter case, the best results are 3Statistically significant difference refers to a 0.001 level obtained when α = 0.25 and β = 0.01, which corresponds to the setting which activates the smallest DBpedia subgraph (Table 3). 
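As a side note on the evaluation itself, the per-title Kendall Tau-b scores reported in Tables 4 and 5 and the pairwise accuracies of the next subsection can be computed with standard routines; a sketch with invented predicted complexity scores for the five versions of one title:

```python
# Sketch of the two evaluations in Section 4.5 for a single title,
# using made-up predicted scores (level 0 = original, 4 = simplest).
from itertools import combinations
from scipy.stats import kendalltau

levels = [0, 1, 2, 3, 4]
predicted_complexity = [0.91, 0.74, 0.70, 0.55, 0.32]   # invented scores

# Ranking task: Kendall Tau-b between simplicity level and predicted
# simplicity (complexity negated so both increase with simplicity).
tau, _ = kendalltau(levels, [-s for s in predicted_complexity])
print(f"Kendall Tau-b = {tau:.2f}")

# Pairwise task: how often the simpler version of a pair gets the lower score.
pairs = list(combinations(range(len(levels)), 2))
correct = sum(predicted_complexity[i] > predicted_complexity[j] for i, j in pairs)
print(f"Pairwise accuracy = {correct / len(pairs):.2f}")
```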
A somehow unexpected finding that has a great impact on SA parameter selection is that the bigger the activated DBpedia subgraph, the worse the results. This indicates that allowing the activation to spread through more of KG, might result in more noise. Consequently, controlling the flow of activation through relation and concept relevance scoring dramatically reduces the activated network, while improving the results. 5.3 Results on Pairwise Text Comparison The pairwise comparison task provides insight on the models’ ability to discriminate between two versions of the same news story. The results of the models with a medium WM decay and with the combination of α and β at the opposite sides of the proposed spectrum are shown in Table 6 for both tasks, together with the results of the state of the art and the baseline (ent/men). The first observation is that our models distinguish almost perfectly between very complex and very simple versions of the same text (0−4, 1−4, 0−3). Also, generally they significantly outperform the baseline and state-of-the-art measures. However, our models perform close to random on distinguishing between the two most complex versions of the same title (0−1), the only setting in which they are outperformed by some stateof-the-art features and the baseline. Manual inspection indicates that the simplification that takes place between the two levels mostly involves sentence /paragraph splitting (syntactical simplification) which, as a side effect can have the decrease in the number of connected components, favouring ConnCompPar and ConnCompSent measures. The results of the best model using φ1 surpass the results of the best model using φA, particularly for the close level pairs (1−2, 2−3 and 3−4), which are generally harder to distinguish (paired ttest at 0.001 level of significance). This indicates that the fact that a concept is activated by SA is more relevant than the actual amount of activation, particularly for capturing subtle differences in texts. of significance using paired t-test, whenever mentioned. 3886 Model Level pairs Type α β Exc. Pop. 
0-1 0-2 0-3 0-4 1-2 1-3 1-4 2-3 2-4 3-4 Kendall Tau-b φA any any yes yes .64 .88 .97 1 .88 .97 .99 .87 .95 .80 .80 φ1 0.5 0.0025 no no .54 .83 .94 .95 .86 .93 .96 .83 .92 .82 .72 φ1 0.25 0.01 no no .58 .84 .95 .99 .85 .94 .98 .86 .95 .86 .77 φ1 0.5 0.0025 no yes .52 .83 .94 .95 .86 .92 .96 .83 .91 .82 .70 φ1 0.25 0.01 no yes .58 .86 .95 .99 .87 .94 .98 .87 .93 .87 .77 φ1 0.5 0.0025 yes no .54 .85 .94 .97 .87 .95 .98 .85 .91 .82 .74 φ1 0.25 0.01 yes no .58 .88 .96 1 .86 .96 .99 .88 .95 .85 .79 φ1 0.5 0.0025 yes yes .59 .89 .97 1 .90 .97 .99 .89 .97 .87 .81 φ1 0.25 0.01 yes yes .61 .89 .98 1 .92 .97 .99 .90 .97 .88 .82 (ˇStajner and Hulpus¸, 2018) ent/men .67 .76 .83 .82 .71 .80 .79 .71 .72 .54 .47 PageRank .50 .53 .57 .57 .62 .58 .55 .53 .55 .57 .12 PairDistSent .58 .64 .65 .67 .58 .66 .64 .63 .59 .50 .23 PairSemRelSent .56 .62 .68 .77 .56 .69 .75 .63 .71 .55 .24 DensitySent .61 .69 .68 .72 .60 .63 .66 .51 .56 .58 .25 ConnCompSent .67 .71 .83 .83 .60 .72 .74 .68 .73 .56 .41 PairDistPar .58 .70 .77 .84 .60 .76 .80 .67 .78 .60 .42 PairSemRelPar .60 .74 .87 .88 .70 .83 .88 .77 .83 .71 .56 DensityPar .59 .64 .57 .64 .57 .56 .62 .56 .62 .56 .19 ConnCompPar .69 .64 .74 .76 .61 .66 .62 .65 .62 .52 .22 SeedDegree .53 .51 .59 .55 .58 .55 .50 .53 .54 .58 .12 Table 6: Accuracies of the pairwise comparison task, and the Kendall Tau-b correlations for the AEoS scores of our models for medium WM decay, and for the state-of-the-art measures. Level 0 is the original text, while level 4 is the simplest version. Any signifies that the reported results were the same for all parameter choices. 6 Conclusion We introduced a framework for tracking the conceptual complexity of texts during sequential reading, by mimicking human memory processes such as forward and backward semantic priming through spreading activation, sentence wrap-up and forgetting, and implemented a series of unsupervised models within it. Our results confirmed the hypothesis that texts are simpler when the concepts therein are highly active at the end of their corresponding sentences. From the SA perspective, we showed that measures that account for relevance of relations and nodes make a significant impact, and that targeted search in the close proximity of the seeds performs best. Finally, our models strongly outperform the state-of-the-art measures in automatic assessment of conceptual complexity. References Bharat Ram Ambati, Siva Reddy, and Mark Steedman. 2016. Assessing relative sentence complexity using an incremental ccg parser. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1051–1057. John Robert Anderson and Christian Lebiere. 1998. The atomic components of thought. Lawrence Erlbaum Associates. Jonathan Anderson. 1981. Analysing the readability of English and non-English texts in the classroom with Lix. In Proceedings of the Annual Meeting of the Australian Reading Association. Barbara Arf´e, Lucia Mason, and Inmaculada Fajardo. 2017. Simplifying informational text structure for struggling readers. Reading and Writing. Jerry Ball, Stuart Rodgers, and Kevin Gluck. 2004. Integrating act-r and cyc in a large-scale model of language comprehension for use in intelligent agents. In AAAI Workshop, pages 19–25. Rebekah George Benjamin. 2012. Reconstructing Readability: Recent Developments and Recommendations in the Analysis of Text Difficulty. Educational Psychology Review, 24(1):63–88. 
Michael R Berthold, Ulrik Brandes, Tobias K¨otter, Martin Mader, Uwe Nagel, and Kilian Thiel. 2009. Pure spreading activation is pointless. In The 17th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems-GIS’09, pages 1915–1918. Tim vor der Br¨uck, Sven Hartrumpf, and Hermann Helbig. 2008. A readability checker with supervised learning using deep indicators. Informatica, 32(4):429–435. Patrick Carroll and Maria L. Slowiaczek. 1986. Constraints on semantic priming in reading: A fixation time analysis. Memory & Cognition, 14(6):509– 522. Allan M. Collins and Elizabeth F. Loftus. 1975. A spreading activation theory of semantic processing. Psychological Review, 82:407–428. Allan M. Collins and M. Ross Quillian. 1969. Retrieval time from semantic memory. Journal of Verbal Learning and Verbal Behavior, 8(2):240 – 247. 3887 Fabio Crestani. 1997. Application of Spreading Activation Techniques in Information Retrieval. Artificial Intelligence Review, pages 453–482. Arthur C. Graesser, Danielle S. McNamara, Max M. Louwerse, and Zhiqiang Cai. 2004. Coh-Metrix: Analysis of text on cohesion and language. Behavior Research Methods, Instruments, & Computers, 36(2):193–202. Tanja Gulan and Pavle Valerjev. 2010. Semantic and related types of priming as a context in word recognition. Review of psychology, 17(1):53–58. Benjamin Heitmann and Conor Hayes. 2016. Semstim: Exploiting knowledge graphs for cross-domain recommendation. In 2016 IEEE 16th International Conference on Data Mining Workshops (ICDMW), pages 999–1006. Ioana Hulpus¸, Narumol Prangnawarat, and Conor Hayes. 2015. Path-based semantic relatedness on linked data and its use to word and entity disambiguation. In The Semantic Web - ISWC 2015, pages 442–457, Cham. Springer International Publishing. Marcel Adam Just and Patricia A. Carpenter. 1980. A theory of reading:from eye fixations to comprehension. Psychological Review, 87(4). Slava Kalyuga, Paul Ayres, Paul Chandler, and John Sweller. 2003. The expertise reversal effect. Journal of Educational Psychology, 38:23–31. Maurice G. Kendall. 1948. Rank correlation methods. Griffin, London. Marta Kutas and Steven A. Hillyard. 1984. Brain potentials during reading reflect word expectancy and semantic association. Nature, 307(161). Antonio Lieto, Christian Lebiere, and Alessandro Oltramari. 2018. The knowledge level in cognitive architectures: Current limitations and possible developments. Cognitive Systems Research, 48:39 – 55. Cognitive Architectures for Artificial Minds. Vivi Nastase. 2008. Topic-driven multi-document summarization with encyclopedic knowledge and spreading activation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP ’08, pages 763–772, Stroudsburg, PA, USA. Association for Computational Linguistics. James H. Neely. 1991. Semantic priming effects in visual word recognition: A selective review of current findings and theories. In D. Besner and G. W. Humphreys, editors, Basic processes in reading: Visual word recognition, pages 265–335. Lawrence Erlbaum Associates, Hillsdale. Alessandro Oltramari and Christian Lebiere. 2012. Pursuing artificial general intelligence by leveraging the knowledge capabilities of act-r. In Artificial General Intelligence, pages 199–208, Berlin, Heidelberg. Springer Berlin Heidelberg. Dario D. Salvucci. 2014. Endowing a cognitive architecture with world knowledge. In Proceedings of the Annual Meeting of the Cognitive Science Society, volume 36. Sanja ˇStajner and Ioana Hulpus¸. 2018. 
Automatic assessment of conceptual text complexity using knowledge graphs. In Proceedings of the 27th International Conference on Computational Linguistics, pages 318–330. Association for Computational Linguistics. Zareen Syed, Tim Finin, and Anupam Joshi. 2008. Wikipedia as an ontology for describing documents. In Proceedings of the Second International Conference on Weblogs and Social Media. AAAI Press. Sowmya Vajjala and Detmar Meurers. 2014. Assessing the relative reading level of sentence pairs for text simplification. In Proceedings of the EACL 2014, pages 288–297. Wei Xu, Chris Callison-Burch, and Courtney Napoles. 2015. Problems in Current Text Simplification Research: New Data Can Help. Transactions of the Associaton for Computational Linguistics, 3:283–297. Ganggao Zhu and Carlos. A. Iglesias. 2017. Computing semantic similarity of concepts in knowledge graphs. IEEE Transactions on Knowledge and Data Engineering, 29(1):72–85.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3888–3898 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3888 End-to-End Sequential Metaphor Identification Inspired by Linguistic Theories Rui Mao1, Chenghua Lin2, and Frank Guerin1 1Department of Computing Science, University of Aberdeen, AB24 3UE, UK 1{r03rm16, f.guerin}@abdn.ac.uk 2Department of Computer Science, University of Sheffield, S1 4DP, UK [email protected] Abstract End-to-end training with Deep Neural Networks (DNN) is a currently popular method for metaphor identification. However, standard sequence tagging models do not explicitly take advantage of linguistic theories of metaphor identification. We experiment with two DNN models which are inspired by two human metaphor identification procedures. By testing on three public datasets, we find that our models achieve state-of-the-art performance in end-to-end metaphor identification. 1 Introduction Metaphoric expressions are common in everyday language, attracting attention from both linguists and psycho-linguists (Wilks, 1975; Glucksberg, 2003; Group, 2007; Holyoak and Stamenkovi´c, 2018). Computationally, metaphor identification is a task that detects metaphors in texts. Traditional approaches, such as phrase-level metaphor identification, detect metaphors with word pairs (Tsvetkov et al., 2014; Shutova et al., 2016; Rei et al., 2017), where a target word whose metaphoricity is to be identified is given in advance. However, such target words are not highlighted in real world text data; a newer approach is sequential metaphor identification, where the metaphoricity of a target word is identified without knowing its position in a sentence. Therefore, it is more readily applied to support Natural Language Processing tasks. The most recent approaches (Wu et al., 2018; Gao et al., 2018) treat this as a sequence tagging task: the classified labels are only conditioned on BiLSTM (Graves and Schmidhuber, 2005) hidden states of target words. This approach is not tailormade for metaphors; it is the same procedure to that used in other sequence tagging tasks, such as Part-of-Speech (PoS) tagging (Plank et al., 2016) and Named Entity Recognition (NER) (Lample et al., 2016). However, we have available linguistic theories of metaphor identification, which have not yet been exploited with Deep Neural Network (DNN) models. We hypothesise that by exploiting linguistic theories of metaphor identification in the design of a DNN architecture, the model performance can be further improved. Linguistic theories suggest that a metaphor is identified by noticing a semantic contrast between a target word and its context. This is the basis of Selectional Preference Violation (SPV) (Wilks, 1975, 1978). E.g., in the sentence my car drinks gasoline (Wilks, 1978), ‘drinks’ is identified as metaphoric, because ‘drinks’ is unusual in the context of ‘car’ and ‘gasoline’; a car cannot drink, nor is gasoline drinkable. Formally, a label is predicted, conditioned on a target word and its context. An alternative approach by Group (2007) and Steen et al. (2010) is the Metaphor Identification Procedure (MIP): a metaphor is identified if the literal meaning of a word contrasts with the meaning that word takes in this context. E.g., in my car drinks gasoline, the contextual meaning of ‘drink’ is ‘consuming too much’, which contrasts with its literal meaning of ‘taking a liquid into the mouth’1. 
Formally, a label is predicted, conditioned on literal and contextual meanings. Fundamentally, the two models are similar, as both MIP and SPV analyse the relations between metaphors and their contexts, but with different procedures. We propose two end-to-end metaphor identification models2, detecting metaphors based on MIP and SPV, respectively. The experimental re1https://en.oxforddictionaries.com/ definition/drink 2Our code is available at: https://github.com/RuiMao1988/ Sequential-Metaphor-Identification 3889 sults show that both of our models perform better than the state-of-the-art baseline (Gao et al., 2018) across three benchmark datasets. In particular, our MIP based model with a simple DNN architecture, outperforms the baseline with an average of 2.2% improvement in F1 score, whereas the SPV based model with a novel multi-head contextual attention mechanism achieves an even higher gain of 2.5% against the baseline. The contribution of our work can be summarized as follows: (1) To the best of our knowledge, we are the first to explore using linguistic theories (MIP and SPV) to directly inform the design of Deep Neural Networks (DNN) for endto-end sequential metaphor identification; (2) Our first DNN model is based on MIP, which encapsulates the idea that a metaphor is classified by the contrast between its contextual and literal meanings. The second model is inspired by SPV, in which we propose a novel window-based contextual attentive method, allowing the model to attend to important fragments of BiLSTM hidden states and hence better capture the context of text; (3) We conducted extensive experiments on three public datasets for end-to-end metaphor identification, where both of our models outperform the state-of-the-art DNN models. 2 Related Work Metaphor identification is a linguistic metaphor processing task that identifies metaphors in textual data, which is different from conceptual metaphor processing that maps concepts between source and target domains (Shutova, 2016), based on Conceptual Metaphor Theory (Lakoff and Johnson, 1980). In linguistic metaphor processing a metaphor is identified when the contextual meaning of a word contrasts with its literal meaning (summarised as MIP by Group (2007) and Steen et al. (2010)). Many metaphor dataset annotations were guided by MIP, e.g., VU Amsterdam Metaphor Corpus (Steen et al., 2010), and a verbal metaphor dataset, formed by Mohammad et al. (2016). Another hypothesis for linguistic metaphor identification, SPV, was proposed by Wilks (1975, 1978) who argued that a metaphoric word could violate selectional preferences of an agent. E.g., ‘drinks’ violates selectional preferences of the agent of ‘car’ in the sentence, my car drinks gasoline. Ortony (1979) further claimed that metaphoric words, phrases and sentences are contextually anomalous. There are also other relevant theories, e.g., semantic constraints (Katz, 1964) and expectations (Schank, 1975). However, Wilks and Fass (1992) found that these theories are mostly very similar. In terms of computational metaphor identification, feature-engineering has been widely discussed (Leong et al., 2018). 
Unigrams, imageability, concreteness, abstractness, word embedding and semantic classes are features, commonly employed by supervised machine learning (Turney et al., 2011; Assaf et al., 2013; Tsvetkov et al., 2014; Klebanov et al., 2016), deep learning (Rei et al., 2017; Gutierrez et al., 2017; Bizzoni and Ghanimifard, 2018) and unsupervised leaning (Shutova et al., 2016; Mao et al., 2018) approaches. Recently, metaphor identification has been treated as a sequence tagging task. Wu et al. (2018) proposed a model based on word2vec (Mikolov et al., 2013), PoS tags and word clusters, which were encoded by a Convolutional Neural Network (CNN) and BiLSTM. The encoded information was directly fed into a softmax classifier. This model performed best on the NAACL2018 Metaphor Shared Task (Leong et al., 2018) with an ensemble learning strategy. Gao et al. (2018) proposed a model that concatenated GloVe (Pennington et al., 2014) and ELMo (Peters et al., 2018) representations which were then encoded by BiLSTM. Hidden states of the BiLSTM were classified by a softmax classifier. These sequential metaphor identification models classify labels, conditioned on encoder hidden states. However, we expect that explicit modelling of interactions between either contextual and literal meanings (MIP) or target words and their contexts (SPV) may further boost performance. 3 Methodology Here we detail our two models, inspired by MIP and SPV respectively, and systematically compare the differences between them. 3.1 MIP based model Our first model (Figure 1) is built upon MIP: a metaphor is classified by the contrast between a word’s contextual and literal meanings. To facilitate the classifier in making this comparison we concatenate the contextual meaning representation with the literal meaning representation. 3890 g1 g2 g3 g4 g5 e1 e2 e3 e4 e5 ℎ" ℎ# ℎ$ ℎ% ℎ& ℎ" ℎ# ℎ$ ℎ% ℎ& GloVe Embedding & ELMo BiLSTM Linear + Softmax w1 L w2 L w3 M w4 L w5 L Comparison Embedding Figure 1: RNN HG model framework based on MIP. ⊕ denotes concatenating tensors along the last dimension. RNN HG (Recurrent Neural Network HiddenGloVe) Humans infer the contextual meanings of a word conditioned on its context. We use BiLSTM hidden states as our contextual meaning representations, where the hidden state of a word is encoded by its forward and backward contexts and itself (Graves and Schmidhuber, 2005). Pretrained GloVe (Pennington et al., 2014) is considered as our literal meaning representation, as words have been embedded with their most common senses (trained on Web crawled data3). The most common senses are likely literal, as literals occur more than metaphors in typical corpora (Cameron, 2003; Martin, 2006; Steen et al., 2010; Shutova, 2016). The comparison of literal and contextual can be seen at the top of Figure 1, comparison stage; the GloVe embedding (literal) from below joins the hidden state from the BiLSTM (contextual). The probability of a label prediction (ˆy) for a target word at position t is conditioned on contextual and literal meaning representations of the target word p(ˆyt|ht, gt) = σ(w⊤[ht; gt] + b) (1) where σ is softmax function. h is a BiLSTM hidden state. g is GloVe embedding. w is trained parameters. b is bias. [; ] denotes concatenating tensors along the last dimension. Similar to Gao et al. (2018), we use GloVe and ELMo (Embeddings from Language Models) as input features for the BiLSTM. 
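To make Equation 1 concrete, the following is a minimal PyTorch sketch of the MIP-based prediction layer: the BiLSTM hidden state (contextual meaning) is concatenated with the GloVe vector (literal meaning) and scored by a single linear layer with a softmax. The 2 x 256 hidden size and the binary label set follow the setup given in Section 4.3; the class name, argument names and log-softmax output are illustrative choices, not the authors' released code.

```python
import torch
import torch.nn as nn

class MIPHead(nn.Module):
    """Sketch of the Equation-1 classifier: concatenate the BiLSTM hidden
    state h_t (contextual meaning) with the GloVe vector g_t (literal
    meaning) and score the pair with one linear + softmax layer."""

    def __init__(self, hidden_dim=2 * 256, glove_dim=300, n_labels=2):
        super().__init__()
        self.linear = nn.Linear(hidden_dim + glove_dim, n_labels)

    def forward(self, h, g):
        # h: (batch, seq_len, hidden_dim) BiLSTM states over [GloVe; ELMo] inputs
        # g: (batch, seq_len, glove_dim)  GloVe embeddings of the same tokens
        features = torch.cat([h, g], dim=-1)   # [h_t; g_t]
        return torch.log_softmax(self.linear(features), dim=-1)
```

Log-probabilities are returned so that the same head can feed the class-weighted negative log-likelihood loss described in Section 4.3.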
The recommended way of using 3Note that our results are likely to improve if the pretrained GloVe is trained on a cleaner set of purely literal data. (a) (b) ℎ" ℎ" 𝑣$ " 𝑣$ % ℎ& ℎ' ℎ( 𝑐"" 𝑐%" 𝑐(" 𝑨𝒕𝒕𝒕,𝟑𝑨𝒕𝒕𝒕,𝟑𝑨𝒕𝒕𝒕,𝟑𝑨𝒕𝒕𝒕,𝟑𝑨𝒕𝒕𝒕,𝟑 ℎ% 𝑣$ % 𝑣$ ( 𝑣$ " 𝑐'" 𝑐& " 𝑐%" 𝑐(" 𝑨𝒕𝒕𝒕.𝟑𝑨𝒕𝒕𝒕.𝟑𝑨𝒕𝒕𝒕.𝟑𝑨𝒕𝒕𝒕.𝟑𝑨𝒕𝒕𝒕.𝟑 𝑣$ ( 𝑐"" 𝑐& " 𝑐'" ℎ% ℎ( ℎ' ℎ& ℎ" ℎ" 𝑣$ % 𝑣$ / 𝑨𝒕𝒕𝒕,𝒏 ℎ' 𝑐"/ 𝑐"/ 𝑨𝒕𝒕𝒕.𝒏 ℎ% ℎ( ℎ& 𝑐& / g1 g2 g3 g4 g5 e1 e2 e3 e4 e5 ℎ' ℎ& ℎ% ℎ( GloVe Embedding & ELMo BiLSTM Linear + Softmax w1 L w2 L w3 M w4 L w5 L 𝑐%/ 𝑐(/ 𝑐'/ 𝑐%/ 𝑐(/ 𝑐'/ 𝑐& / 𝑣$ / 𝑣$ % … … Figure 2: (a) RNN MHCA model framework based on SPV. Attt±n denotes attention mechanisms on a window of n context words. The blue and orange nodes and lines denote examples of computing −→c 3 3 by a query of −→h 3 and its context v3 0 (padding zero vectors), −→h 1, −→h 2, and computing ←−c 3 3 by a query of ←−h 3 and its context ←−h 4, ←−h 5, and v1 0, respectively. (b) Attentive context representations with a window size of 3. Solid lines are queries. Dashed lines are their contexts (keys and values). ELMo is to concatenate ELMo (e) with GloVe (g), e.g., [gt; et] (Peters et al., 2018). Thus, the BiLSTM hidden state ht is ht = fBiLSTM([gt; et], −→ h t−1, ←− h t+1). (2) 3.2 SPV based model The intuition behind SPV is that metaphoricity is identified by detecting the incongruity between a target word and its context. RNN MHCA (Recurrent Neural Network Multi-Head Contextual Attention) Our second model (Figure 2) compares a target word representation ht with its context ct. This is achieved by concatenating these two representations (see top of Figure 2). Target word representation ht is a BiLSTM hidden state. Context is composed of left-side (−→c n t ) and right-side (←−c n t ) attentive context representations, where n is a window size of 3891 context words. We adopt a multi-head contextual attention (MHCA) mechanism to compute cn t . The BiLSTM hidden state matrix (H, where h ∈H) is split into equivalent pieces H = [H1; H2; ...; HM; ...; HN] (3) −−→ headM t−n = n X i=1 σ( −→ h M t ⊤−→ h M t−i) −→ h M t−i (4) −→ c n t = [ −−→ headM t−n|M = 1, 2, ..., N] (5) ←−− headM t+n = n X i=1 σ( ←− h M t ⊤←− h M t+i) ←− h M t+i (6) ←− c n t = [ ←−− headM t+n|M = 1, 2, ..., N] (7) cn t = [ −→ c n t ; ←− c n t ] (8) where N is the number of heads. Irrelevant context hidden states, hj /∈[ht±1, ht±n], are masked out. We apply a window size of n context words, as hj only encodes words that are out of the window. In computing a context representation, hj may bring in noise, and it may miss important context information, provided by the close context words, while the distant context information could be memorized by hi ∈[ht±1, ht±n]. Noticeably, MHCA is similar to dot-product attention (Luong et al., 2015), if N = 1. Using N > 1 heads would attend to different parts of hidden states of context words and recall previous important context information that is forgotten at the current point. Unlike multi-head self-attention (Vaswani et al., 2017) that encodes a target word by its context, MHCA computes the context representation by attending to a target word. The query of MHCA is a hidden state of a target word, while the key and value are hidden states of its context. We do not employ training parameters, non-linear operations or positional encoding in MHCA, because performance is better (compared with MHA in Figure 4) when we model context (via attention) and the target word (via BiLSTM) in the same space (see § 3.3). 
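As a concrete illustration of Equations (3)-(8), the sketch below implements the parameter-free, windowed multi-head contextual attention over the directional LSTM states, assuming PyTorch tensors. The function names are illustrative, and boundary positions are handled by masking invalid neighbours instead of the explicit zero-vector padding shown in Figure 2; it is a simplified rendering of the same idea, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def mhca_one_side(h, n_heads, window, left=True):
    """One direction of multi-head contextual attention (Equations 3-7).
    h: (batch, seq_len, dim) forward or backward LSTM states.  No learned
    parameters: the target hidden state itself is the query."""
    B, T, D = h.shape
    heads = h.view(B, T, n_heads, D // n_heads)           # split H into N pieces (Eq. 3)
    ctx, mask = [], []
    for i in range(1, window + 1):                        # in-window neighbours only
        shifted = torch.zeros_like(heads)
        valid = torch.zeros(B, T, dtype=torch.bool, device=h.device)
        if i < T:
            if left:                                      # neighbour at t - i (Eq. 4)
                shifted[:, i:] = heads[:, :T - i]
                valid[:, i:] = True
            else:                                         # neighbour at t + i (Eq. 6)
                shifted[:, :T - i] = heads[:, i:]
                valid[:, :T - i] = True
        ctx.append(shifted)
        mask.append(valid)
    ctx = torch.stack(ctx, dim=2)                         # (B, T, window, N, D/N)
    mask = torch.stack(mask, dim=2)                       # (B, T, window)
    scores = (heads.unsqueeze(2) * ctx).sum(-1)           # per-head dot products
    scores = scores.masked_fill(~mask.unsqueeze(-1), float("-inf"))
    weights = torch.nan_to_num(F.softmax(scores, dim=2))  # fully padded rows -> zeros
    attended = (weights.unsqueeze(-1) * ctx).sum(dim=2)   # weighted sum of neighbours
    return attended.reshape(B, T, D)                      # re-concatenate heads (Eq. 5/7)

def mhca(h_fwd, h_bwd, n_heads=16, window=3):
    """Eq. 8: attentive context c_t^n = [left context ; right context]."""
    return torch.cat([mhca_one_side(h_fwd, n_heads, window, left=True),
                      mhca_one_side(h_bwd, n_heads, window, left=False)], dim=-1)
```

With n_heads = 16 and window = 3 (5 for TroFi, per Section 4.3), the output has the same dimensionality as the bidirectional hidden state and is concatenated with h_t for the classifier in Equation 9.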
Besides, extra position encoding is unnecessary in our model, as input sentences have been encoded along with a time sequence by BiLSTM. The probability of a label prediction, given by RNN MHCA is p(ˆyt|ht, cn t ) = σ(w⊤[ht; cn t ] + b) (9) where a label prediction is conditioned on a hidden state of a target word (ht) and its attentive context representation (cn t ). The input feature of word t is also [gt; et]. So, ht is given by Equation 2. Embedding space C C D G D G C G Encoding space RNN_HG (MIP) RNN_MHCA (SPV) 𝑒𝑛𝑐 𝑫 [𝑪; 𝑮] 𝑐𝑙𝑎𝑠𝑠𝑖𝑓𝑖𝑒𝑟 1 𝑒𝑚𝑏 D A W 𝑎𝑡𝑡 𝑒𝑛𝑐 𝑪 𝑎𝑡𝑡 𝑮 𝑒𝑛𝑐 𝑒𝑛𝑐 𝑫 𝑐𝑙𝑎𝑠𝑠𝑖𝑓𝑖𝑒𝑟 2 Figure 3: A comparison between RNN HG and RNN MHCA. C is car. D is drinks. G is gasoline. A is animal. W is water. emb is GloVe embedding. enc is BiLSTM encoding. att is an attention mechanism. In embedding space, the lighter part of a node is ELMo embedding, while the darker part is GloVe embedding. 3.3 Model comparison Figure 3 gives an overview of the two models and how they process the example of ‘drinks’ in the sequence car drinks gasoline. We use different coloured nodes to indicate that words are distant from each other in vector space. E.g., red ‘drinks’ (D) is distant from blue ‘car’ (C) and ‘gasoline’ (G), because they are from non-literally related domains (Shutova et al., 2016; Mao et al., 2018). Note that there is no external knowledge base for domain knowledge. ‘Drinks’ (D) is distant because of the statistics of the corpus; it occurs in contexts relating to humans and other animals consuming liquids such as water. Our MIP based RNN HG model is on the left. In the leftmost part of the figure, we have the literal embedding of ‘drinks’ (D), which is embedded by words in the domains of ‘animal’ (A) and ‘water’ (W). To the right of this, the green ‘drinks’ (←→ D ) captures the meaning of ‘drinks’ in context via BiLSTM encoding; it is encoded by ‘car’ (C), ‘gasoline’ (G) and itself (D). These two different vectors for ‘drinks’ are concatenated. Classifier 1 (RNN HG) learns to recognise if the two vectors represent similar meanings (indicating literal) or different meanings (indicating metaphor), which is p(ˆyt|ht, gt) in Equation 1. In the case illustrated, the meaning of ‘drinks’ (green ←→ D ) from the encoding is very different from its word embedding meaning (red D). The right part of Figure 3 is our SPV based RNN MHCA model. Blue ‘car’ (C) and ‘gasoline’ (G) are encoded by themselves from left to right 3892 and right to left, respectively. Purple ‘car’ (−→ C ) and ‘gasoline’ (←− G) are still closer to each other than green ‘drinks’ (←→ D ) in encoding space, because the green ‘drinks’ (←→ D ) has a component of literal meaning from red ‘drinks’ (D). Our attention mechanism does not employ non-linear transformations. Thus, the attentive context ([−→ C ; ←− G]) does not significantly change its colour from the context word encoding (−→ C and ←− G). Classifier 2 (RNN MHCA) learns to recognise the contrast between green ‘drinks’ (←→ D ) and its purple context ([−→ C ; ←− G]), which is p(ˆyt|ht, cn t ) in Equation 9. In RNN MHCA, we use the BiLSTM green ‘drinks’ (←→ D ) as the target word representation, rather than the word embedding red ‘drinks’ (D). This is necessary because it will be concatenated with the purple attentive context representation, in encoding space; we found that performance is better when both meanings are in the encoding space. 
On the other hand, the RNN HG does concatenate vectors from two different spaces; this works because they are representations of the same word, rather than word versus context. In Figure 3, it appears that both models use the same BiLSTM encoded green ‘drinks’ (←→ D ), however the two models have different objective functions (Equation 1 and 9), therefore the two classifiers backpropagate different errors to the BiLSTM during training. The result is that the two models are actually receiving different hidden states (different green ‘drinks’ (←→ D ) vectors). 4 Experiment 4.1 Dataset We adopt three widely used metaphor datasets. Relevant statistics can be viewed in Table 1. VUA4 (Steen et al., 2010) VU Amsterdam Metaphor Corpus (VUA) is the largest publicly available metaphor dataset. Every word in the corpus is labeled, guided by MIP. Each sequence contains several metaphors, ranging from 0 to 28. The corpus was used by the NAACL-2018 Metaphor Shared Task. Similar to the task that has all PoS and verb tracks, we also examine our methods on VUA ALL POS and VUA VERB tracks. MOH-X5 (Mohammad et al., 2016) Its sam4http://ota.ahds.ac.uk/headers/2541. xml 5http://saifmohammad.com/WebPages/ metaphor.html Dataset # Tgt token % M # Seq Avg # seq len Avg # M/S VUA all 205,425 11.6 10,567 19.4 3.4 VUA trn 116,622 11.2 6,323 18.4 3.3 VUA dev 38,628 11.6 1,550 24.9 4.0 VUA tst 50,175 12.4 2,694 18.6 3.4 VERB tst 5,873 30.0 2,694 18.6 1.5 MOH-X 647 48.7 647 8.0 1.0 TroFi 3,737 43.5 3,737 28.3 1.0 Table 1: Dataset statistics. NB: # Tgt token is the number of target tokens whose metaphoricity is to be identified. % M is the percentage of metaphoric tokens among target tokens. # Seq is the number of sequences. Avg # seq len is the average of the number of sequence lengths. Avg # M/S is the average number of metaphors per metaphorical sentence. ple sentences are from WordNet (Fellbaum, 1998). Only a single target verb in each sentence is annotated. The average length of sentences is the shortest of our three datasets. TroFi6 (Birke and Sarkar, 2006) The dataset consists of sentences from the 1987-89 Wall Street Journal Corpus Release 1 (Charniak et al., 2000). The average length of sentences is the longest of our datasets. Each sentence has a single annotated target verb. 4.2 Baselines CNN+RNNensmb (Wu et al., 2018) This is the best model at the NAACL-2018 Metaphor Shared Task, which encodes three concatenated input features (word2vec, PoS tags, and word2vec clusters) with CNN and BiLSTM. The label prediction is conditioned on BiLSTM hidden states p(ˆyt|ht) with a weighted softmax classifier. The performance is further boosted by ensemble learning. RNN ELMo (Gao et al., 2018) This is a model that uses GloVe and ELMo as features for sequential metaphor identification. GloVe and ELMo are concatenated and encoded by BiLSTM, classified by a softmax classifier, which is also conditioned on BiLSTM hidden states p(ˆyt|ht). RNN ELMo is the strongest baseline to our knowledge. RNN BERT (Devlin et al., 2018) We introduce feature-based BERT (cased, large) as a baseline, as it has shown strong performance on the NER task, which is also a sequence tagging task. We use the same framework as RNN ELMo. 
The inputs are the concatenation of the hidden states of the last four BERT layers, which was recommended 6http://natlang.cs.sfu.ca/software/ trofi.html 3893 Model VUA ALL POS VUA VERB MOH-X (10-fold) TroFi (10-fold) P R F1 Acc P R F1 Acc P R F1 Acc P R F1 Acc CNN+RNNensmb 60.8 70.0 65.1 60.0 76.3 67.2 RNN ELMo 71.6 73.6 72.6 93.1 68.2 71.3 69.7 81.4 79.1 73.5 75.6 77.2 70.7 71.6 71.1 74.6 RNN BERT 71.5 71.9 71.7 92.9 66.7 71.5 69.0 80.7 75.1 81.8 78.2 78.1 70.3 67.1 68.7 73.4 RNN HG ours 71.8 76.3 74.0* 93.6 69.3 72.3 70.8* 82.1 79.7 79.8 79.8* 79.7 67.4 77.8 72.2* 74.9 RNN MHCA ours 73.0 75.7 74.3* 93.8 66.3 75.2 70.5* 81.8 77.5 83.1 80.0* 79.8 68.6 76.8 72.4* 75.2 Table 2: Model performance. * denotes p < 0.01 on a two-tailed t-test, against the best baseline with an underline. by Devlin et al. (2018). Hyperparameters are finetuned on each dataset. 4.3 Setup The inputs are 300 dimension pre-trained GloVe7 embeddings, concatenated with 1024 dimension pre-trained ELMo (Peters et al., 2018). We adopt a batch size of 2, 2 × 256 dimension hidden state BiLSTM, SGD optimiser, and weighted cross entropy loss L = − X i wyiyi log(ˆyi) (10) where yi is a ground truth label for a word at position i. ˆyi is its prediction. The weight wyi = 1, if yi is literal, otherwise wyi = 2, which is in line with Wu et al. (2018). In RNN MHCA, the window size (n) is 3 on VUA and MOH-X, while n is 5 on TroFi. The number of attention heads (N) is 16, which is in line with Vaswani et al. (2017). Training, development and testing sets of VUA ALL POS are built in line with the NAACL-2018 Metaphor Shared Task (see Table 1). Since the examined models predict labels for all words in a sentence, the outputs have covered the target verbs in VUA VERB. So, we simply evaluate on the verb track without training models separately. As annotations of MOH-X and TroFi datasets only cover target verbs, we consider the remaining words as literal for training, but only evaluate on the target words. We adopt 10-fold cross validation on MOH-X and TroFi datasets, since the sizes of these two datasets are small. Our hyperparameters are tuned on each dataset. 5 Results F1 score is the main measurement of model performance. Metaphors are positive labels. The accuracy is measured by the number of correct target token predictions over the total number of target tokens. For the VUA ALL POS dataset, we 7http://nlp.stanford.edu/data/glove. 840B.300d.zip consider all tokens as the target tokens. For the VUA VERB, MOH-X and TroFi, we consider target verbs as target tokens. As shown in Table 2, our two proposed models are consistently the top two for F1 on the four evaluation tasks, where the improvements against the third best model (F1 with an underline) are statistically significant (two-tailed t-test, p < 0.01). RNN MHCA achieves state-of-the-art performance in VUA ALL POS (F1=74.3%), MOHX (F1=80.0%) and TroFi (F1=72.4%). RNN HG performs slightly worse than RNN MHCA. However, it exceeds RNN MHCA by 0.3% on the VUA VERB track (F1=70.8%). Compared with RNN ELMo, the biggest improvements of RNN HG and RNN MHCA appear in MOH-X dataset, gaining 4.2% and 4.4%, respectively. Our models also outperform RNN BERT by at least 1.6% in MOH-X. In contrast with VUA ALL POS that has an average of 3.4 metaphors (see Table 1) per metaphoric sentence, each metaphoric sentence in MOH-X contains a single metaphor. 
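A minimal sketch of the class-weighted loss in Equation 10 above, assuming PyTorch and log-probabilities from a log-softmax classifier such as the Equation-1 head sketched earlier; the function name and the summed reduction are illustrative choices.

```python
import torch
import torch.nn.functional as F

def weighted_metaphor_loss(log_probs, labels, metaphor_weight=2.0):
    """Class-weighted negative log-likelihood matching Equation 10: literal
    tokens (label 0) get weight 1, metaphoric tokens (label 1) get weight 2.
    log_probs: (num_tokens, 2) log-softmax outputs; labels: (num_tokens,)."""
    class_weight = torch.tensor([1.0, metaphor_weight], device=log_probs.device)
    return F.nll_loss(log_probs, labels, weight=class_weight, reduction="sum")
```

Passing the per-class weight vector [1, 2] reproduces the per-token weighting w_yi used during training.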
We observed that in MOH-X most non-target words are literal, so that a metaphor can be better identified by RNN MHCA via modelling the contrast between the metaphor and its context in a single-metaphor sentence. Furthermore, the average length of MOH-X sentences is the shortest, therefore the context of a target word will be cleaner. MOHX source sentences are from WordNet sample sentences, where the language is straightforward because the writer designed it to illustrate the meaning of a word, e.g., Don’t abuse the system. Similarly, the straightforward contexts also help RNN HG to infer contextual meanings of words. The anomalies that MIP and SPV are designed to detect are very clear in MOH-X, so that our models improve the most against RNN ELMo. VUA in contrast is more complex (see examples in VUA Breakdown Analysis and Error Analysis below). In TroFi the improvements of RNN HG and RNN MHCA against RNN ELMo are small 3894 68 70 72 74 76 78 80 Window 1 Window 2 Window 3 Window 4 Window 5 Window 6 RNN_ELMo F1 Score MOH-X MHCA MOH-X MHA MOH-X DPA VUA ALL POS MHCA VUA ALL POS MHA VUA ALL POS DPA TroFi MHCA TroFi MHA TroFi DPA VUA VERB MHCA VUA VERB MHA VUA VERB DPA Figure 4: RNN MHCA performance with different windows and attention mechanisms. MHCA is multihead (16 heads) context attention. MHA is multi-head (16 heads) attention (Vaswani et al., 2017). DPA is dotproduct attention (Luong et al., 2015). (1.1% and 1.3%). We have observed that many of the non-target words in TroFi are metaphoric (but not labeled), as the sample sentences are from financial news, where word play is common (e.g., VUA news contains the largest percentage of metaphors in Table 4). Our system considers TroFi non-target words as literal without knowing their ground truth labels during training. Additionally, the average length of sequences of TroFi is the longest among the datasets, at 28.3 tokens. Although RNN MHCA slightly outperforms RNN HG, the difference is small. This is because modelling the contrast between contextual and literal meanings of metaphors in MIP is theoretically similar to modelling in SPV (see §1). Variations of RNN HG An alternative way of encapsulating contextual and literal meanings in RNN HG is taking the sum of ht and gt (ht + gt) instead of their concatenations ([ht; gt]) in Equation 1. Such an idea is inspired by residual connection (He et al., 2016). In this approach, we take 2 × 150 dimension BiLSTM hidden states so that ht and gt are aligned in dimensionality. However, such an approach yields 73.7%, 70.0%, 78.9% and 71.8% F1 scores on VUA ALL POS, VUA VERB, MOH-X and TroFi datasets, which is worse than the concatenation approach (RNN HG) in Table 2. This is because the concatenation approach highlights the contrast between GloVe and BiLSTM hidden states of metaphors. Variations of RNN MHCA We examined the impact of different window sizes and attention mechanisms of RNN MHCA. All these baselines are fine-tuned on each dataset. Given a window size of 1, bi-directional hidden states of a target Model Feature P R F1 Acc. RNN BERT Bl 69.1 72.0 70.5 93.0 RNN HG Bl+G 70.3 74.6 72.4 93.4 E+G 71.0 76.1 73.5 93.7 RNN MHCA Bl+G 70.5 72.3 71.4 93.2 E+G 71.3 75.5 73.4 93.6 Table 3: Model performance on VUA ALL POS development set. Bl is BERT large. E is ELMo. G is GloVe. are concatenated with the left to right hidden state of its left-side word and right to left hidden state of its right-side word ([−→h t; ←−h t; −→h t−1; ←−h t+1]). 
The context2vec model (Melamud et al., 2016) used −→h t−1 and ←−h t+1 as their context representations, with Multilayer Perceptron tuning. As shown in Figure 4, setting a window size of 3 surpasses other sizes on 3 out of 4 datasets. The attentive context representation with a window size larger than 1 can better represent a context than the hidden states of adjacent words (window = 1). The average length of TroFi sequences is the longest, so that a larger window size, e.g., window = 5, performs better. Given a window size of 3, MHCA outperforms the multi-head attention (Vaswani et al., 2017) which employs training parameters and non-linear operations. This shows that modelling the contrast between a target word and its context in the same space performs better than that in different spaces. MHCA exceeds the dot-prodcut attention (Luong et al., 2015) which demonstrates the utility of multi-heads that attend to different fragments of hidden states. We also examined variations, e.g., an infinite window size and a different number of heads, but the performances did not improve. Variations of Feature Selection We examine the concatenation of hidden states of the last four BERT large model layers (Bl) instead of ELMo on RNN HG and RNN MHCA. Our models with the combination of BERT and GloVe (Bl+G) perform better than the BERT baseline model (RNN BERT) with Bl on VUA ALL POS development set by at least 2.9% in terms of F1 score (see Table 3). However, the performance, given by Bl+G, is not further improved, compared with the combination of ELMo and GloVe (E+G) on each of our models. VUA Breakdown Analysis We report the model performance on different types of articles and words based on VUA ALL POS test set. We analyse all four genres and four types of open class words (verbs, adjectives, nouns and adverbs), 3895 Type Train Dev Test All %T %M %T %M %T %M %T %M News 21.8 14.9 23.8 15.5 24.6 15.2 22.9 15.1 Acad. 36.4 11.2 37.3 11.6 27.1 17.3 34.3 12.4 Fict. 23.4 10.7 23.5 10.6 21.9 9.2 23.0 10.4 Conv. 18.3 7.4 15.4 7.2 26.4 7.6 19.8 7.4 Verb 17.9 18.1 18.5 18.7 19.7 19.1 18.5 18.5 Noun 17.6 13.6 17.8 13.5 17.1 15.0 17.5 13.9 Adj 8.3 11.5 8.3 10.7 7.9 13.6 8.2 11.9 Adv 6.0 6.0 5.8 6.9 6.8 7.2 6.1 6.5 Table 4: VUA Statistics on genres and POS. % T denotes the percentage of the category tokens among the total VUA tokens. % M denotes the percentage of the category metaphors among the category tokens. which is in line with Leong et al. (2018). The verbal statistics in Table 5 are different from VUA VERB in Table 2, as they are different tracks in the Metaphor Shared Task. Not all verbs in VUA ALL POS are included in VUA VERB. In Table 5, metaphor identification achieves better performance on academic articles across all the models and genres, where RNN MHCA yields the highest F1 (79.8%). Intuitively, metaphor identification is easier as the style of English is more formal. E.g., (using underlines for metaphors) This mixture, heated by recession and high unemployment, inevitably generates a high level of crime. (VUA ID: as6-fragment01-30). Identifying metaphors in conversation is the hardest for our baselines, probably due to its fragmented language. E.g., Drawing, oh well! (VUA ID: kbpfragment09-4105). However, RNN HG achieves large improvements against RNN ELMo (3.8%) and RNN BERT (3.4%) on conversation. The improvements of our models against RNN ELMo on news are larger than in TroFi, although source sentences of both datasets are from news. 
It supports our arguments that the noise of treating non-target words as literals in TroFi negatively impact our models’ ability to learn the difference between literals and metaphors. In contrast, all words in VUA news are annotated, so that the advantages of our models are more obvious. In PoS breakdown analysis, verb metaphors are better identified than others, as verbal metaphors take the largest part among all PoS. RNN HG achieves the biggest improvement (4.1%) in adverbs against RNN ELMo, whereas RNN BERT also presents strong performance. In adjectives, CNN+RNNensmb surpasses the second best RNN HG by 2.9%. The use of word embedding clusters, PoS tags and ensemble learning may conModel P R F1 Acc Acad. CNN+RNNensmb 72.5 74.6 73.5 RNN ELMo 78.2 80.2 79.2 92.8 RNN BERT 76.7 76.0 76.4 91.9 RNN HG ours 76.5 83.0 79.6 92.7 RNN MHCA ours 79.6 80.0 79.8 93.0 Conv. CNN+RNNensmb 45.3 71.1 55.3 RNN ELMo 64.9 63.1 64.0 94.6 RNN BERT 64.7 64.2 64.4 94.6 RNN HG ours 63.6 72.5 67.8 94.8 RNN MHCA ours 64.0 71.1 67.4 94.8 Fict. CNN+RNNensmb 48.3 69.2 56.9 RNN ELMo 61.4 69.1 65.1 93.1 RNN BERT 66.5 68.6 67.5 93.9 RNN HG ours 61.8 74.5 67.5 93.4 RNN MHCA ours 64.8 70.9 67.7 93.8 News CNN+RNNensmb 66.4 64.7 65.5 RNN ELMo 72.7 71.2 71.9 91.6 RNN BERT 71.2 72.5 71.8 91.4 RNN HG ours 71.6 76.8 74.1 91.9 RNN MHCA ours 74.8 75.3 75.0 92.4 VERB CNN+RNNensmb 67.4 RNN ELMo 68.1 71.9 69.9 RNN BERT 67.1 72.1 69.5 87.9 RNN HG ours 66.4 75.5 70.7 88.0 RNN MHCA ours 66.0 76.0 70.7 87.9 ADJ CNN+RNNensmb 65.1 RNN ELMo 56.1 60.6 58.3 RNN BERT 58.1 51.6 54.7 88.3 RNN HG ours 59.2 65.6 62.2 89.1 RNN MHCA ours 61.4 61.7 61.6 89.5 NOUN CNN+RNNensmb 62.9 RNN ELMo 59.9 60.8 60.4 RNN BERT 63.3 56.8 59.9 88.6 RNN HG ours 60.3 66.8 63.4 88.4 RNN MHCA ours 69.1 58.2 63.2 89.8 ADV CNN+RNNensmb 58.8 RNN ELMo 67.2 53.7 59.7 94.8 RNN BERT 64.8 61.1 62.9 94.8 RNN HG ours 61.0 66.8 63.8 94.5 RNN MHCA ours 66.1 60.7 63.2 94.9 Table 5: Model performance on different types of texts and words in VUA ALL POS. tribute to identifying adjective metaphors. Error Analysis By comparing our two models, 96.3% of predictions are the same in the VUA ALL POS testing set. For these same predictions, precision, recall, F1 and accuracy are 80.2%, 77.2%, 78.7% and 95.3%, respectively, which is better than each of our models on the full dataset. False negatives are common in sentences with multiple metaphors, e.g., Or: ‘When Cupid shot his dart He shot it at your heart.’ (VUA ID: a5e-fragment06-187), where 10 out of 12 words have true labels as metaphor. However, our models only classify ‘heart’ as metaphoric in this sentence. Ambiguous contexts are also challenging, e.g., I’m gonna play with that and see what (VUA ID: kbd-fragment21-8037), where the referent of 3896 ‘that’ is not in the context, so that ‘play with’ are also false negatives. For the samples where our models predict different labels, the main errors of RNN HG are false negatives, while the main errors of RNN MHCA are false positives. This is likely due to the fact that some conventional metaphors frequently appear in typical corpora, so that GloVe embeddings of metaphors are not distinct from their contextual meaning encodings. Metaphors may be misclassified as literal by RNN HG. On the other hand, RNN MHCA may flag the clash between literals and their contexts, if there are many metaphors in the contexts, so that literal target words may be misclassified as metaphoric. 
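As a concrete illustration of the feature-selection variant above, in which ELMo is replaced by the concatenated hidden states of the last four BERT-large layers (the Bl feature), the following sketch extracts such features. It assumes the HuggingFace transformers library, which postdates the paper's original implementation, and it omits the pooling of word pieces back to whole words that would be needed before combining the features with GloVe.

```python
import torch
from transformers import BertModel, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-large-cased")
bert = BertModel.from_pretrained("bert-large-cased", output_hidden_states=True)
bert.eval()

def last_four_layer_features(sentence):
    """Feature-based BERT input ('Bl'): concatenate the hidden states of the
    last four encoder layers for every word piece of the sentence."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        out = bert(**enc)
    # out.hidden_states holds the embedding layer plus all 24 encoder layers,
    # each of shape (1, num_word_pieces, 1024)
    return torch.cat(out.hidden_states[-4:], dim=-1)      # (1, num_word_pieces, 4096)
```

The resulting 4096-dimensional vectors play the role of the 1024-dimensional ELMo features at the input of the BiLSTM.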
6 Conclusion We proposed two metaphor identification models based on Metaphor Identification Procedure (Group, 2007; Steen et al., 2010) and Selectional Preference Violation (Wilks, 1975, 1978). Our models achieve state-of-the-art performance on three public datasets. The performances of the two models are close in terms of F1 score, as their linguistic fundamentals, MIP and SPV are similar in principle. The breakdown analysis of VUA demonstrates that the improvements of our models derive from the problematic instances for our baselines, e.g., conversation articles and adverb metaphors. In future work, we will explore ensemble learning. Our error analysis demonstrates that when the predictions of our two models are the same, the prediction is more accurate with high precision, suggesting the idea of combining them. Another interesting direction is to explore combining different semantic similarity measures (Lin et al., 2015) for our task. Acknowledgments We thank anonymous reviewers for their comments, which will further influence our next work. We also appreciate Sujie Guo for providing GPU resources. This work is supported by the award made by the UK Engineering and Physical Sciences Research Council (Grant number: EP/P011829/1). References Dan Assaf, Yair Neuman, Yohai Cohen, Shlomo Argamon, Newton Howard, Mark Last, Ophir Frieder, and Moshe Koppel. 2013. Why ”dark thoughts” aren’t really dark: A novel algorithm for metaphor identification. In Computational Intelligence, Cognitive Algorithms, Mind, and Brain (CCMB), 2013 IEEE Symposium on, pages 60–65. IEEE. Julia Birke and Anoop Sarkar. 2006. A clustering approach for nearly unsupervised recognition of nonliteral language. In 11th Conference of the European Chapter of the Association for Computational Linguistics. Yuri Bizzoni and Mehdi Ghanimifard. 2018. Bigrams and BiLSTMs two neural networks for sequential metaphor detection. In Proceedings of the Workshop on Figurative Language Processing, pages 91–101. Lynne Cameron. 2003. Metaphor in educational discourse. A&C Black. Eugene Charniak, Don Blaheta, Niyu Ge, Keith Hall, John Hale, and Mark Johnson. 2000. Bllip 1987-89 WSJ corpus release 1. Linguistic Data Consortium, Philadelphia, 36. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Christiane Fellbaum. 1998. WordNet: An Electronic Lexical Database. Bradford Books. Ge Gao, Eunsol Choi, Yejin Choi, and Luke Zettlemoyer. 2018. Neural metaphor detection in context. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Sam Glucksberg. 2003. The psycholinguistics of metaphor. Trends in cognitive sciences, 7(2):92–96. Alex Graves and J¨urgen Schmidhuber. 2005. Framewise phoneme classification with bidirectional LSTM and other neural network architectures. Neural Networks, 18(5-6):602–610. Pragglejaz Group. 2007. MIP: A method for identifying metaphorically used words in discourse. Metaphor and symbol, 22(1):1–39. E Dario Gutierrez, Guillermo Cecchi, Cheryl Corcoran, and Philip Corlett. 2017. Using automated metaphor identification to aid in detection and prediction of first-episode schizophrenia. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2923–2930. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. 
In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770– 778. 3897 Keith J Holyoak and Duˇsan Stamenkovi´c. 2018. Metaphor comprehension: A critical review of theories and evidence. Psychological bulletin, 144(6):641. Jerrold J Katz. 1964. Analyticity and contradiction in natural language. In The Structure of Language: Readings in the Philosophy of Language. Prentice Hall. Beata Beigman Klebanov, Chee Wee Leong, E Dario Gutierrez, Ekaterina Shutova, and Michael Flor. 2016. Semantic classifications for detection of verb metaphors. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, volume 2, pages 101–106. George Lakoff and Mark Johnson. 1980. Metaphors we live by. University of Chicago press. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 260–270. Chee Wee Ben Leong, Beata Beigman Klebanov, and Ekaterina Shutova. 2018. A report on the 2018 VUA metaphor detection shared task. In Proceedings of the Workshop on Figurative Language Processing, pages 56–66. Chenghua Lin, Dong Liu, Wei Pang, and Zhe Wang. 2015. Sherlock: A semi-automatic framework for quiz generation using a hybrid semantic similarity measure. Cognitive computation, 7(6):667–679. Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412–1421. Rui Mao, Chenghua Lin, and Frank Guerin. 2018. Word embedding and WordNet based metaphor identification and interpretation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, volume 1, pages 1–10. James H Martin. 2006. A corpus-based analysis of context effects on metaphor comprehension. Technical Report CU-CS-738-94, Boulder: University of Colorado: Computer Science Department. Oren Melamud, Jacob Goldberger, and Ido Dagan. 2016. context2vec: Learning generic context embedding with bidirectional LSTM. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 51–61. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119. Saif Mohammad, Ekaterina Shutova, and Peter Turney. 2016. Metaphor as a medium for emotion: An empirical study. In Proceedings of the Fifth Joint Conference on Lexical and Computational Semantics, pages 23–33. Andrew Ortony. 1979. Some psycholinguistic aspects of metaphor. Center for the Study of Reading Technical Report; no. 112. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing, pages 1532–1543. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, volume 1, pages 2227–2237. Barbara Plank, Anders Søgaard, and Yoav Goldberg. 2016. 
Multilingual part-of-speech tagging with bidirectional long short-term memory models and auxiliary loss. In The 54th Annual Meeting of the Association for Computational Linguistics, page 412. Marek Rei, Luana Bulat, Douwe Kiela, and Ekaterina Shutova. 2017. Grasping the finer point: A supervised similarity network for metaphor detection. Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1537–1546. Roger C Schank. 1975. The structure of episodes in memory. In Representation and understanding, pages 237–272. Elsevier. Ekaterina Shutova. 2016. Design and evaluation of metaphor processing systems. Computational Linguistics. Ekaterina Shutova, Douwe Kiela, and Jean Maillard. 2016. Black holes and white rabbits: Metaphor identification with visual features. Proceedings of the 15th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 160–170. Gerard J Steen, Aletta G Dorst, J Berenike Herrmann, Anna Kaal, Tina Krennmayr, and Trijntje Pasma. 2010. A method for linguistic metaphor identification: From MIP to MIPVU, volume 14. John Benjamins Publishing. Yulia Tsvetkov, Leonid Boytsov, Anatole Gershman, Eric Nyberg, and Chris Dyer. 2014. Metaphor detection with cross-lingual model transfer. Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 248–258. 3898 Peter D Turney, Yair Neuman, Dan Assaf, and Yohai Cohen. 2011. Literal and metaphorical sense identification through concrete and abstract context. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 680– 690. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Yorick Wilks. 1975. A preferential, pattern-seeking, semantics for natural language inference. Artificial intelligence, 6(1):53–74. Yorick Wilks. 1978. Making preferences more active. Artificial intelligence, 11(3):197–223. Yorick Wilks and Dann Fass. 1992. The preference semantics family. Computers & Mathematics with Applications, 23(2-5):205–221. Chuhan Wu, Fangzhao Wu, Yubo Chen, Sixing Wu, Zhigang Yuan, and Yongfeng Huang. 2018. Neural metaphor detecting with CNN-LSTM model. In Proceedings of the Workshop on Figurative Language Processing.
2019
378
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3899–3908 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 3899 Diachronic Sense Modeling with Deep Contextualized Word Embeddings: An Ecological View Renfen Hu 1,2,♠ Shen Li 3,♣ Shichen Liang 1,2,♠ ♠{irishu, shichen}@mail.bnu.edu.cn ♣[email protected] 1 Institute of Chinese Information Processing, Beijing Normal University 2 UltraPower-BNU Joint Laboratory for Artificial Intelligence, Beijing Normal University 3 DeeplyCurious.ai Abstract Diachronic word embeddings have been widely used in detecting temporal changes. However, existing methods face the meaning conflation deficiency by representing a word as a single vector at each time period. To address this issue, this paper proposes a sense representation and tracking framework based on deep contextualized embeddings, aiming at answering not only what and when, but also how the word meaning changes. The experiments show that our framework is effective in representing fine-grained word senses, and it brings a significant improvement in word change detection task. Furthermore, we model the word change from an ecological viewpoint, and sketch two interesting sense behaviors in the process of language evolution, i.e. sense competition and sense cooperation. 1 Introduction The meanings of words continuously change over time, reflecting complicated processes in language and society (Kutuzov et al., 2018). With the rapid development of language representation learning, word embeddings have been widely introduced into diachronic linguistic studies. By training and comparing word embeddings of different time epochs, one can capture the semantic drift of words (Kim et al., 2014), learn diachronic analogies between terms (Szymanski, 2017), as well as discover the statistical laws of semantic change (Hamilton et al., 2016). Furthermore, this kind of method has gained fruitful results in broader social science studies, e.g. tracing armed conflicts (Kutuzov et al., 2017), gender and ethnic stereotypes (Garg et al., 2018) and social attitudes (Jaidka et al., 2018). It is well known that word meaning can be represented with a range of senses. However, existing methods only assign one embedding to a 1840 1860 1880 1900 1920 1940 1960 1980 2000 0.0 0.2 0.4 0.6 0.8 1.0 sense_1_adjective: Foolish, stupid, or unimpressive. sense_2_adjective: (of a person) homosexual (used especially of a man) sense_3_noun: A homosexual, especially a man. sense_4_adjective: Light-hearted and carefree. Figure 1: The evolvement of four senses for word gay. Two important phenomena: (1) competition between sense 2 and sense 4; (2) cooperation between sense 2 and sense 3. word for a time period, thus they face challenges in representing senses and tracking the change of them. Given the word embeddings, one can tell the coarse-grained change of the word from one time to another, e.g. the word gay’s nearest neighbors in the vector space move from cheerful and flaunting to homosexual and lesbian. But these word representations are not able to show which sense has changed, which sense is stable, and how they may interact with each other. Recently, an increasing boom on large-scale pre-trained language models e.g. ELMo and BERT have attracted considerable attention in the field of NLP (Peters et al., 2018; Devlin et al., 2018). These models can ideally capture complex characteristics of word use, and how they vary across linguistic contexts, i.e. 
a word with different contexts can yield different representations. Inspired by the above works, this paper proposes to use deep contextualized embeddings to represent and track word senses. Figure 1 shows that our method can trace the fine-grained senses of a word in a smooth process, i.e. change does not 3900 happen at a time point, but continuously throughout the process. We further model the evolvement from an ecological viewpoint, and propose that senses can compete and cooperate just like groups of organisms. The contribution of this paper is as following: • We construct an efficient sense representation method using the pre-training language model BERT and data from Oxford dictionary. This method can precisely learn and identify fine-grained senses, and achieves a high accuracy of 93.8% in a sense identification task. • Based on the sense representation, we detect in depth the trend of word senses in 200 years of texts. In evaluation, our method brings a significant improvement on word meaning change task. • Interestingly, we further model the word change from an ecological viewpoint, and introduce two important sense behaviors in language evolution, i.e. sense competition and sense cooperation. The remaining part of this paper is organized as following. After introducing the related work in Section 2, we will describe our sense representation model and how to track senses in 200 years in Section 3. In Section 4, we analyze the sense behaviors from an ecological viewpoint, and sketch two interesting phenomena: sense competition and cooperation. At last, we draw conclusions and propose future work in Section 5. 2 Related Work 2.1 Diachronic Word Embeddings Neural word embeddings have been widely used in diachronic linguistic studies. The basic idea is to train word embeddings on different time-sliced corpora and then compare them over time. Kim et al. (2014) firstly use neural embeddings to capture the change of word meaning. Their method initializes the vectors with the data of the previous year. Kulkarni et al. (2015) and Hamilton et al. (2016) train the embeddings independently and then use a mapping method to align them for comparison. Bamler and Mandt (2017) propose to use dynamic word embeddings trained jointly over all times periods. Instead of modeling lexical change via time series, Rosenfeld and Erk (2018) represent time as a continuous variable and model a word’s usage as a function of time. Yin et al. (2018) propose global anchor method for detecting linguistic shifts and domain adaptation. However, the above methods could only assign one neural embedding to a word at each time period, which cannot model the change of the word senses. To address this problem, we propose to conduct a sense-level diachronic study with deep contextualized word embeddings, and detect in depth not only what and when, but also how the word meaning changes. 2.2 Diachronic Sense Modeling Existing works on sense modeling mainly exploit topic modeling and clustering methods. Lau et al. (2012) and Cook et al. (2014) propose to detect novel senses by comparing a reference corpus and a focus corpus with topic modeling. Wijaya and Yeniterzi (2011) firstly try to track word senses with K-means clustering and the Topic-Over-Time algorithm. Mitra et al. (2014) identify the sense birth, death, join and split based on clustering of a co-occurrence graph. 
Frermann and Lapata (2016) present a dynamic Bayesian model to track the prevalence of senses, and further model language change as a smooth, gradual process. Tang et al. (2016) attempted to cluster the contexts to find senses, and to classify the senses into different change types. Tahmasebi and Risse (2017) exploit curvature clustering algorithm to induce word senses and track the change of them. Although these studies have made great progress in novel sense detection and diachronic sense tracking, they may have two disadvantages in sense modeling: (1) It is arbitrary and difficult to select the number k of the clusters or topics, and there are few works explaining the reason of the setting. (2) The “senses” induced from clusters or topics require huge amount of human analysis to interpret or additional mappings to an external sense inventory. Thus, the discussion is usually limited to a few cases. 2.3 Learning Sense and Contextual Embeddings Pilehvar and Collier (2016); Camacho-Collados and Pilehvar (2018) address the meaning conflation deficiency of existing methods representing a word as a single vector, as it may have negative impacts on accurate semantic modeling. For example, rat and screen are pulled towards each 3901 other in the vector space for their similarities to two different senses of mouse. To solve this problem, there are a line of works making extensions of the Skip-gram model to learn sense-specific embeddings (Neelakantan et al., 2014; Liu et al., 2015; Qiu et al., 2016; Lee and Chen, 2017). In addition, knowledge bases e.g. Wordnet are introduced into representation (Chen et al., 2014, 2015; Faruqui et al., 2014; Johansson and Pina, 2015; Rothe and Sch¨utze, 2015). Recently, it has attracted considerable attention by constructing unsupervised contextual representations with language models. Melamud et al. (2016) represent the context of a target word with the output embedding of a multi-layer perceptron built on top of a Bi-LSTM language model. Peters et al. (2018) show that their language model ELMo can implicitly disambiguate word meaning with their contexts. Devlin et al. (2018) propose bidirectional encoder representations from Transformers (BERT). It is fine-tuned with just one additional output layer, and achieves state-of-the-art results for a wide range of tasks. In this study, we propose to learn sense representations following Devlin et al. (2018)’s work since it can yield deep and effective contextual representations on both sentence and token level. 3 The Framework 3.1 Sense Representation In this paper, we build fine-grained sense representations with deep contextualized word embeddings, i.e. represent each sense as a distinguished sense embedding. We directly adopt the finegrained senses defined by lexicographers. Comparing with existing diachronic sense studies, our method does not rely on human interpretations or mappings to dictionary definitions. For a sense sj of word wi, we can obtain its example sentences {Sentwisj 1 , Sentwisj 2 , ..., Sentwisj n } from a dictionary. After feeding them into a pre-trained language model, wi’s token representations {ewisj 1 , ewisj 2 , ..., ewisj n } can be retrieved from the final hidden layer of the model. The sense embedding ewisj of sj is computed by taking the average of {ewisj 1 , ewisj 2 , ..., ewisj n }. In the experiments, we choose the Oxford English dictionary since it has (1) a comprehensive record of word senses in different times and (2) a sufficient number of example sentences for each sense. 
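A minimal sketch of this sense-representation step is given below, assuming the HuggingFace transformers implementation of the uncased BERT-base model; the naive sub-word matching of the target word inside each example sentence and the helper name are illustrative, not taken from the paper.

```python
import torch
from transformers import BertModel, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

def sense_embedding(target_word, example_sentences):
    """Average the final-layer BERT vectors of the target word over its
    dictionary example sentences, giving one 768-d vector per sense."""
    token_vecs = []
    for sent in example_sentences:
        enc = tokenizer(sent, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**enc).last_hidden_state[0]    # ([CLS] + pieces + [SEP], 768)
        pieces = tokenizer.tokenize(sent)
        target = tokenizer.tokenize(target_word)
        # naive sub-word match of the target word; +1 skips the [CLS] token
        for i in range(len(pieces) - len(target) + 1):
            if pieces[i:i + len(target)] == target:
                token_vecs.append(hidden[i + 1:i + 1 + len(target)].mean(dim=0))
                break
    return torch.stack(token_vecs).mean(dim=0)
```

Words that the tokenizer splits into several pieces are first averaged over their pieces, then over the example sentences of the sense.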
To select the target words for diachronic study, we firstly extract word frequency information from COHA, a genre balanced corpus containing English texts from 1810 to 20091. Only words that appear at least 10 times a year for over 50 consecutive years are retained. After lemmatization, we totally retrieve 4881 words, including 15836 senses in Oxford dictionary. The sense definitions and example sentences are then extracted from the online version of Oxford dictionary2. We feed at most 10 sentences for each sense to the pre-trained BERT model (Devlin et al., 2018) as the inputs. We use the uncased BertBase model that has 12 layers, 768 hidden units, 12 heads and 110M parameters. The language model is trained on BookCorpus (800M words) (Zhu et al., 2015) and English Wikipedia (2,500M words) with Masked LM and Next Sentence Prediction tasks. With deep bidirectional architecture, BERT yields powerful language representations on both sentence and token level. After feeding the sentences containing a target word with a specific sense, its token representations can be generated from the hidden layers of the pre-trained model. We only keep the token representations of the final hidden layer of the Transformer. After obtaining the token embeddings of the target word for the specific sense, we can represent the sense as a 768-dimentional embedding by averaging the token embeddings. 3.2 Sense Identification After obtaining the sense representations of the target words, we can easily identify the sense of a word in a sentence with its contextual embedding. Given a new sentence Sentk that contains a target word wi with m senses, we can feed it into BERT to get wi’s contextual embedding ewi k , and compute the cosine similarities between the token embedding ewi k and the word sense embeddings {ewis1, ewis2, ..., ewism}. The sense sˆj that has the highest similarity score is selected as the belonging sense. sˆj = arg max sj ewisj · ewi k ∥ewisj∥2 ∥ewi k ∥2 (1) 1https://corpus.byu.edu/coha/ 2https://en.oxforddictionaries.com/ 3902 Sentences with the target word Most similar sense 1. You’ll be satisfied with less food, which means you’ll consume fewer calories each time you sit down to eat. v. Have as a consequence or result. 2. Anna wanted to know exactly what he meant, but she did not ask. v. Intend to convey or refer to; signify. 3. The mean score for this question is 55.0 for those who did not receive bills from physicians and labs. n. calculated as a mean; average. 4. They were n’t necessarily fighting or being mean to each other constantly. a. Unkind, spiteful, or unfair. 5. Do not bring thine eye to their small, mean, and plodding lives... a. poor in quality and appearance; shabby. 6. This man is a mean motor scooter on the mound. a. Very skillful or effective; excellent. 7. I left for work before the kid crawled out of bed. v. Move forward on the hands and knees. 8. This beta search site crawls the web for product-related information, including data from the product maker, magazine articles. v. systematically visit a number of web pages in order to create an index of data. Table 1: Sense identification for word mean and crawl. The model performs well in detecting dated sense (sent5), infrequent sense (sent6), and new sense (sent8). Table 1 gives several sentences that contain polysemous words mean or crawl. With our method, the senses can be precisely captured, even when the word is used in a dated sense e.g. poor in quality3, an infrequent sense e.g. 
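The identification step in Equation 1 then reduces to a cosine nearest-neighbour search over the word's sense embeddings; a minimal sketch with illustrative names follows.

```python
import torch
import torch.nn.functional as F

def identify_sense(token_vec, sense_vecs):
    """Equation 1: return the sense whose embedding is most cosine-similar to
    the token's contextual embedding, together with the similarity score.
    token_vec: (768,) tensor; sense_vecs: dict of sense label -> (768,) tensor."""
    labels = list(sense_vecs)
    sims = torch.stack([F.cosine_similarity(token_vec, sense_vecs[s], dim=0)
                        for s in labels])
    best = int(torch.argmax(sims))
    return labels[best], float(sims[best])
```

Restricting sense_vecs to the senses whose part of speech matches the tagged token gives the Baseline + POS variant evaluated in Section 3.4.1, and the returned similarity score can serve as the confidence measure discussed there.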
skillful and excellent, or a new sense, as in sentence 8. This shows that our method, based on contextual embeddings and the Oxford dictionary, is able to capture word senses of different periods and frequencies effectively.

3.3 Sense Tracking

To track how senses evolve, we use the 200 years of texts in the COHA corpus. After preprocessing and POS tagging, we feed the sentences to BERT and retrieve the token embedding whenever the lemmatized token (we use the NLTK WordNet Lemmatizer) is one of the 4,881 target words. Using the sense representations built via the above method, we can easily tag the sense of each token. Tang et al. (2016) suggest that a time series of word status data can be decomposed into a trend component and random noise. We follow this idea to model the time series of sense status. Given a word w_i that has senses {s_1, s_2, ..., s_m}, the diachronic status of sense s_j is represented by

T(s_j) = \{P_{t_1}^{s_j}, P_{t_2}^{s_j}, \ldots, P_{t_y}^{s_j}\},    (2)

where P_t^{s_j} is defined as

P_t^{s_j} = \frac{N_t^{s_j}}{\sum_{k=1}^{m} N_t^{s_k}},    (3)

and N_t^{s_j} is the number of tokens identified as sense s_j at time t. Following Brockwell et al. (2002), T(s_j) can be decomposed as

T(s_j) = Tr(s_j) + Noise(s_j),    (4)

where Tr(s_j) is the trend and Noise(s_j) is random noise. We fit a quartic polynomial curve to account for the fluctuation; the noise Noise(s_j) is thus stripped away, and we obtain the trend Tr(s_j) for further analysis. We set the time interval ∆t = 10 years since it gives robust curve-fitting performance. With this method, we can clearly monitor the status of each individual sense, whether it is growing, declining, or unchanged.

Figure 2: The evolution of the word please.

Figure 2 shows the fitting result for please, a word that has received little attention in previous diachronic studies. It can be seen that sense 2, which is used to express indignation at something considered unreasonable, is declining; sense 1 and sense 3, the verb uses, are relatively stable; and sense 4, used in polite requests or questions, has been growing consistently.

3.4 Evaluation of the Framework

To evaluate the sense representation and tracking methods, we conduct experiments on two tasks: (1) a synchronic sense identification task, and (2) a diachronic word meaning change task.

3.4.1 Word Sense Identification

To test the sense representation, we construct a dataset by randomly selecting another 2,000 sentences from the Oxford dictionary that have not been used in building the sense representations. Each test sentence contains at least one polysemous target word. Given the senses of the target word as candidates, the model needs to select the correct sense for the word in the sentence. Since part of speech (POS) is a useful feature for word sense disambiguation, we first POS-tag the sentences with NLTK. In the test, if the POS information is used, the model limits the candidates to the senses with the same POS; otherwise the model considers all senses of the word as candidates. The test results are shown in Table 2, where we can see that POS information does improve the accuracy. We further analyze the 124 bad cases of the Baseline + POS system; some examples are shown in Table 3. First, we find that in some cases the model predictions are not real mistakes: (1) the model prediction seems to be the better option in 16 cases, e.g. sentence 1 in Table 3; (2) given the context, the model prediction and the answer can both be reasonable, which happens in 3 cases, e.g. sentence 2.
Second, for the remaining 105 real bad cases, the mistakes are mostly due to the following reasons: (1) the model prediction is a sense highly similar to the answer, or there is a meaning overlap between the two senses, e.g. sentence 3 in Table 3; (2) the model does not get a precise contextualized embedding from BERT because the text is short and cannot provide sufficient information, e.g. sentence 4. It should be noted that in this case the model also has low confidence, with the highest cosine similarity being only 0.25. We also find that the similarity scores, which indicate the model's confidence, correlate strongly with accuracy: on the 902 cases with similarities ≥ 0.8, the model accuracy increases to 98%, and on the 44 cases with similarities ≥ 0.9, the accuracy is 100%. The experiment shows that deep contextualized embeddings are very effective for representing word senses. With very little data (10 or fewer sentences per sense), the method yields reliable and precise sense representations, and with a very simple similarity measurement it achieves high accuracy on the sense identification task. We believe it can serve as a good basis for diachronic sense studies.

Table 2: Results of the word sense identification task.
System          Accuracy
Baseline        92.3%
Baseline + POS  93.8%

3.4.2 Word Meaning Change

For evaluation on the diachronic side, we conduct experiments on the word meaning change task with the human rating dataset proposed by Gulordava and Baroni (2011). The test set consists of 100 words taken from different frequency ranges. Five annotators were asked to label the change of each word from the 1960s to the 1990s on a 4-point scale (0: no change; 1: almost no change; 2: somewhat changed; 3: changed significantly). The inter-annotator agreement is 0.51 (pairwise Pearson correlation, p < 0.01). We follow Frermann and Lapata (2016) and quantify word change via the novelty score defined by Cook et al. (2014). Given a word w_i with m senses, the novelty score of a sense is calculated as

N(s_j) = \frac{p_f(s_j) + \alpha}{p_r(s_j) + \alpha},    (5)

where s_j is one of the senses, p_f(s_j) is the proportion of usages of s_j in the focus corpus, p_r(s_j) is the proportion of usages of s_j in the reference corpus, and α is a small parameter to avoid division by zero. The word-change score of w_i is then

C(w_i) = \max\{N(s_1), N(s_2), \ldots, N(s_m)\}.    (6)

In the test, we select the 1960s data from COHA as the reference corpus and the 1990s data as the focus corpus, and α is set to 0.01. After computing the novelty score for each word, we measure the correlation between the novelty scores and the average human ratings. As shown in Table 4, the Pearson correlation of our method is 0.52 (p < 0.01) and the Spearman's ρ rank correlation is 0.428 (p < 0.01), a significant improvement over the existing studies. This result further demonstrates the effectiveness of our sense modeling method built on deep contextualized embeddings.

Table 3: Examples of bad cases in the word sense identification task.
Sentence -> Answer | Prediction | Similarity
1. Again, you'd expect that the most "important" words in a document, in terms of identifying what it's about, would be the ones most individually freighted with meaning. -> Transport (goods) in bulk by truck, train, ship, or aircraft. | Be laden or burdened with. | 0.89
2. He said a car had just managed to squeeze past the people carrier, and he had tried to do the same but in vain. -> Barely; by a little. | Very recently; in the immediate past. | 0.76
3. The move to establish the Pratas marine sanctuary must not be separated from the international movement to protect marine areas. -> Divide into constituent or distinct elements. | Cause to move or be apart. | 0.68
4. He paused significantly. -> In a way that has a particular meaning. | In a sufficiently great or important way. | 0.25

Table 4: Results of the word change task.
System            Corpus               Pearson  Spearman
Gulordava (2011)  Google Bigram        0.386    -
Frermann (2016)   COHA, DTE, CLMET3.0  0.377    -
Our method        COHA                 0.52     0.428

4 An Ecological View

Ecologists are interested in the dynamics of species populations over time (Odum and Barrett, 1971), while linguists focus on language change. The two systems may share some commonalities; for example, Nadas (1985) applied the Turing formula (Good, 1953), which studies the population frequencies of species, to word probabilities. In this study, after tracking the prevalence of word senses over 200 years, we find that senses can compete and cooperate much like ecological organisms. Of course, these behaviors are ultimately driven by the people who use the language, learn it, and transmit it to others (Haugen, 1971, 2001).

4.1 Sense Competition

A word is like an ecological population, and its different senses are its subgroups. "Competition" exists between the senses: they do not compete for sunlight or food, but for dominance of the word. We can observe the semantic and grammatical change of words from the perspective of this competition. Intuitively, word meaning changes gradually, and a significant change may take place when one dominant sense hands over to another, usually corresponding to a semantic shift (Kulkarni et al., 2015). When the new dominant sense has different grammatical features, e.g. a different part of speech, we observe a grammatical change. Thus, the competition between senses for dominance may result in both semantic and grammatical changes. Figure 1 shows an example of semantic change for the word gay: the adjective sense homosexual grew quickly in the 20th century and finally displaced light-hearted as the most dominant sense at the end of the 1990s. Figure 2 illustrates both grammatical and semantic changes of the word please, which is more and more frequently used as an adverb (in polite requests or questions) rather than as a verb. Interestingly, the competition is not a monotonic process. As shown in Figure 3a, the magnetic recording material sense of tape grew strongly during 1920-1980 but has declined since 1990, because this material has become dated in daily life; the dominant sense then reverted to the material for fastening things. In order to capture the trend of language evolution, we track the senses of 3,220 polysemous words with time interval ∆t = 10. The tracking is based on the polynomial curve-fitting results. If the dominant sense changes from one to another, we count it as a word change; if the new dominant sense has a different part of speech from the old one, we count it as a grammatical change, otherwise a semantic change. Among the 3,220 words, 70.12% have a stable dominant sense, whereas 29.88% undergo a change of dominant sense at least once, resulting in 1,064 detected changes, of which 69.26% are semantic changes and 30.73% are grammatical changes.
It indicates that the language system is mostly stable, and semantic change occupies 5It should be noted that the “semantic change” denoted here refers to a change of the semantic meaning, while the “grammatical change” may involve both changes of the POS and semantic meaning, e.g. the dominant sense of please changes from sense 1 verb (cause to feel happy and satisfied) to sense 4 adverb (polite requests or questions). 3905 (a) tape (b) alien Figure 3: Examples of sense competition and cooperation. (a) tape: competition between sense 1 and sense 3; (b) alien: cooperation between sense 1 and sense 5, sense 2 and sense 3. 0 10 20 30 40 50 60 70 80 90 100 1820s 1830s 1840s 1850s 1860s 1870s 1880s 1890s 1900s 1910s 1920s 1930s 1940s 1950s 1960s 1970s 1980s 1990s 2000s gramma1cal seman1c Figure 4: The counts of grammatical and semantic changes from 1820s to 2000s. a larger proportion. From a diachronic perspective, Figure 4 shows that the counts of detected word changes are similarly distributed across the decades, while in 1990s and 2000s, senses are more active in competition. 4.2 Sense Cooperation In addition to competition for selfish benefit, a group of organisms can also work together for common or mutual benefit in the evolution. Hamilton (1964) proposes that cooperation helps in transmitting underlying genes to future generations either for direct fitness (increasing personal reproductive successes) or for indirect fitness (increasing the reproductive successes of genetically similar relatives). In this study, we also find that similar senses are prone to cooperate to survive and compete with others. Figure 1 gives us an intuitive example for word gay. The adjective sense homosexual has a relative: a noun sense of homosexual man. These two senses are not only very related in meaning, but also have highly consistent growth curve. In the competition, they cooperate to overtake sense 2 (light-hearted and carefree). Based on the above analysis, we attempt to detect the cooperating senses automatically. We hypothesize that the cooperating senses should satisfy two conditions. Firstly, these senses should be similar or related in meaning. Secondly, they should grow or degrade in a similar trend. Starting from this hypothesis, we model the meaning similarity r with their sense embeddings, and the trend similarity c with Pearson correlation coefficient. In the case of gay, sense 2 and 3 are identified as relative senses which are cooperating in the competition because they have a high r = 0.9565 and c = 0.8995. With the thresholds setting r ≥0.6 and c ≥0.6, we detect 490 pairs of relative senses that cooperate and also win in the competition against other senses, accounting for 31.67% of the changes. Table 5 lists the 10 words that has the highest mean value of r and c. It can be seen that the relative senses are highly similar in meaning or usages, and can be considered as a sense family. We illustrate the cooperation between the senses and its role in language evolvement with an example word alien. As shown in Figure 3b, alien was 3906 word old dominant sense new dominant sense relative sense r c lot (1890s) n. A person’s luck, situation, or destiny in life. pron. A large number or amount; a great deal. d. A great deal; much. 0.98 0.91 decline (1940s) v. Politely refuse (an invitation or offer) n. A gradual and continuous loss of strength, numbers, quality, or value. v. (typically of something regarded as good) become smaller, fewer, or less; decrease. 0.99 0.88 alien (2000s) a. 
Unfamiliar and disturbing or distasteful. a. Supposedly from another world; extraterrestrial. n. A hypothetical or fictional being from another world. 0.96 0.91 fancy (1940s) n. A superficial or transient feeling of liking or attraction. a. Elaborate in structure or decoration. a. (of a drawing, painting, or sculpture) created from the imagination rather than from life. 0.94 0.92 review (1960s) v. Write a critical appraisal of (a book, play, film, etc.) for publication in a newspaper or magazine. n. A formal assessment of something with the intention of instituting change if necessary. v. Assess (something) formally with the intention of instituting change if necessary. 0.98 0.88 gay (1990s) a. Light-hearted and carefree. a. (of a person) homosexual (used especially of a man) n. A homosexual, especially a man. 0.96 0.90 desert (1940s) v. Abandon (a person, cause, or organization) in a way considered disloyal or treacherous. n. A waterless, desolate area of land with little or no vegetation, typically one covered with sand. a. Like a desert. 0.96 0.90 exercise (1970s) v. Use or apply (a faculty, right, or process) n. Activity requiring physical effort, carried out to sustain or improve health and fitness. v. Engage in physical activity to sustain or improve health and fitness. 0.98 0.88 abroad (1910s) d. In different directions; over a wide area. d. In or to a foreign country or countries. n. Foreign countries considered collectively. 0.94 0.91 hit (1910s) v. Reach (a particular level, point, or figure) v. Bring one’s hand or a tool or weapon into contact with (someone or something) quickly and forcefully. n. An instance of striking or being struck. 0.99 0.86 Table 5: Examples of the cooperating senses that win in the competition. mainly used as an adjective of unfamiliar meaning until the beginning of 20th century. After that, there are two groups of cooperation captured: • With the increasing global communication at the end of the 19th century, sense 1 and sense 5 constituted a powerful family, in which one sense represents the noun meaning (foreigner), and the other one denotes the adjective (belonging to a foreign country). • Since 1950s, with the exploration in the space, alien is used to refer to extraterrestrial and hypothetical beings from another world, i.e. the sense 2 and sense 3 which form a new sense family. They finally achieve the dominance of the word meaning via their cooperation. It should be noted that just like groups of organisms, the cooperation does not only exist in growing senses, but also in stable and degrading senses. In addition, the competition can also take place between two relative senses, e.g. the dominant sense of word heavily changed from with a lot of force or effort; with weight to a more abstract meaning to a great degree; in large amounts in 1920s. 5 Conclusion and Future Work This paper proposes a sense representation and tracking framework based on deep contextualized embeddings. With our method, we can find out not only what and when, but also how the word meaning changes from a fine-grained sense level. The experiment shows that our framework is effective in representing word senses and detecting word change. Furthermore, we model the word change from an ecological viewpoint, and sketch two interesting sense behaviors in language evolution, i.e. sense competition and sense cooperation. Overall, our study sheds some light on diachronic language study with deep contextualized embeddings. 
The sense modeling data we built may serve as a basis for further and deeper analysis of linguistic regularities, as well as an important reference of sense granularities for lexicographers6. In addition to tracking the language evolvement in the history, we believe it is promising future work to use deep contextual embeddings in pre6We release the sense modeling data and a visualization tool at https://github.com/iris2hu/diachronic-sense-modeling. 3907 dicting the future change or trend, as well as detecting novel senses that are not included in existing dictionaries. Acknowledgments The authors would like to thank Zhe Zhao for his helpful comments and discussions, Kristina Gulordava for sharing the dataset of word meaning change, and the anonymous reviewers for their feedback and suggestions. This work is supported by the Fundamental Research Funds for the Central Universities, National Social Science Fund of China (No. 18CYY029) and China Postdoctoral Science Foundation funded project (No. 2018M630095). References Robert Bamler and Stephan Mandt. 2017. Dynamic word embeddings. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 380–389. JMLR. org. Peter J Brockwell, Richard A Davis, and Matthew V Calder. 2002. Introduction to time series and forecasting, volume 2. Springer. Jose Camacho-Collados and Mohammad Taher Pilehvar. 2018. From word to sense embeddings: A survey on vector representations of meaning. Journal of Artificial Intelligence Research, 63:743–788. Tao Chen, Ruifeng Xu, Yulan He, and Xuan Wang. 2015. Improving distributed representation of word sense via wordnet gloss composition and context clustering. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), volume 2, pages 15–20. Xinxiong Chen, Zhiyuan Liu, and Maosong Sun. 2014. A unified model for word sense representation and disambiguation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1025–1035. Paul Cook, Jey Han Lau, Diana McCarthy, and Timothy Baldwin. 2014. Novel word-sense identification. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 1624–1635. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Manaal Faruqui, Jesse Dodge, Sujay K Jauhar, Chris Dyer, Eduard Hovy, and Noah A Smith. 2014. Retrofitting word vectors to semantic lexicons. arXiv preprint arXiv:1411.4166. Lea Frermann and Mirella Lapata. 2016. A bayesian model of diachronic meaning change. Transactions of the Association for Computational Linguistics, 4:31–45. Nikhil Garg, Londa Schiebinger, Dan Jurafsky, and James Zou. 2018. Word embeddings quantify 100 years of gender and ethnic stereotypes. Proceedings of the National Academy of Sciences, 115(16):E3635–E3644. Irving J Good. 1953. The population frequencies of species and the estimation of population parameters. Biometrika, 40(3-4):237–264. Kristina Gulordava and Marco Baroni. 2011. A distributional similarity approach to the detection of semantic change in the google books ngram corpus. In Proceedings of the GEMS 2011 workshop on geometrical models of natural language semantics, pages 67–71. William D Hamilton. 1964. The genetical evolution of social behaviour. ii. 
Journal of theoretical biology, 7(1):17–52. William L Hamilton, Jure Leskovec, and Dan Jurafsky. 2016. Diachronic word embeddings reveal statistical laws of semantic change. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1489–1501. Einar Haugen. 1971. The ecology of language. Linguistic Reporter. Einar Haugen. 2001. The ecology of language. The ecolinguistics reader: Language, ecology and environment, pages 57–66. Kokil Jaidka, Niyati Chhaya, and Lyle Ungar. 2018. Diachronic degradation of language models: Insights from social media. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 195–200. Richard Johansson and Luis Nieto Pina. 2015. Embedding a semantic network in a word space. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1428–1433. Yoon Kim, Yi-I Chiu, Kentaro Hanaki, Darshan Hegde, and Slav Petrov. 2014. Temporal analysis of language through neural language models. In Proceedings of the ACL 2014 Workshop on Language Technologies and Computational Social Science, pages 61–65. 3908 Vivek Kulkarni, Rami Al-Rfou, Bryan Perozzi, and Steven Skiena. 2015. Statistically significant detection of linguistic change. In Proceedings of the 24th International Conference on World Wide Web, pages 625–635. International World Wide Web Conferences Steering Committee. Andrei Kutuzov, Erik Velldal, and Lilja Øvrelid. 2017. Tracing armed conflicts with diachronic word embedding models. Association for Computational Linguistics. Andrey Kutuzov, Lilja Øvrelid, Terrence Szymanski, and Erik Velldal. 2018. Diachronic word embeddings and semantic shifts: a survey. arXiv preprint arXiv:1806.03537. Jey Han Lau, Paul Cook, Diana McCarthy, David Newman, and Timothy Baldwin. 2012. Word sense induction for novel sense detection. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 591–601. Association for Computational Linguistics. Guang-He Lee and Yun-Nung Chen. 2017. Muse: Modularizing unsupervised sense embeddings. arXiv preprint arXiv:1704.04601. Yang Liu, Zhiyuan Liu, Tat-Seng Chua, and Maosong Sun. 2015. Topical word embeddings. In TwentyNinth AAAI Conference on Artificial Intelligence. Oren Melamud, Jacob Goldberger, and Ido Dagan. 2016. context2vec: Learning generic context embedding with bidirectional lstm. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 51–61. Sunny Mitra, Ritwik Mitra, Martin Riedl, Chris Biemann, Animesh Mukherjee, and Pawan Goyal. 2014. That’s sick dude!: Automatic identification of word sense change across different timescales. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1020–1029. Arthur Nadas. 1985. On turing’s formula for word probabilities. IEEE Transactions on Acoustics, Speech, and Signal Processing, 33(6):1414–1416. Arvind Neelakantan, Jeevan Shankar, Alexandre Passos, and Andrew McCallum. 2014. Efficient nonparametric estimation of multiple embeddings per word in vector space. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1059–1069. Eugene Pleasants Odum and Gary W Barrett. 1971. Fundamentals of ecology, volume 3. Saunders Philadelphia. 
Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pages 2227–2237. Mohammad Taher Pilehvar and Nigel Collier. 2016. De-conflated semantic representations. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1680–1690. Lin Qiu, Kewei Tu, and Yong Yu. 2016. Contextdependent sense embedding. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 183–191. Alex Rosenfeld and Katrin Erk. 2018. Deep neural models of semantic shift. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pages 474–484. Sascha Rothe and Hinrich Sch¨utze. 2015. Autoextend: Extending word embeddings to embeddings for synsets and lexemes. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), volume 1, pages 1793–1803. Terrence Szymanski. 2017. Temporal word analogies: Identifying lexical replacement with diachronic word embeddings. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 448–453. Nina Tahmasebi and Thomas Risse. 2017. Finding individual word sense changes and their delay in appearance. In RANLP, pages 741–749. Xuri Tang, Weiguang Qu, and Xiaohe Chen. 2016. Semantic change computation: A successive approach. World Wide Web, 19(3):375–415. Derry Tanti Wijaya and Reyyan Yeniterzi. 2011. Understanding semantic change of words over centuries. In Proceedings of the 2011 international workshop on DETecting and Exploiting Cultural diversiTy on the social web, pages 35–40. ACM. Zi Yin, Vin Sachidananda, and Balaji Prabhakar. 2018. The global anchor method for quantifying linguistic shifts and domain adaptation. In Advances in Neural Information Processing Systems, pages 9434–9445. Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In arXiv preprint arXiv:1506.06724.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 390–401 Florence, Italy, July 28 - August 2, 2019. c⃝2019 Association for Computational Linguistics 390 What You Say and How You Say It Matters: Predicting Financial Risk Using Verbal and Vocal Cues Yu Qin School of Information Renmin University of China [email protected] Yi Yang ∗ HKUST Business School Hong Kong University of Science and Technology [email protected] Abstract Predicting financial risk is an essential task in financial market. Prior research has shown that textual information in a firm’s financial statement can be used to predict its stock’s risk level. Nowadays, firm CEOs communicate information not only verbally through press releases and financial reports, but also nonverbally through investor meetings and earnings conference calls. There are anecdotal evidences that CEO’s vocal features, such as emotions and voice tones, can reveal the firm’s performance. However, how vocal features can be used to predict risk levels, and to what extent, is still unknown. To fill the gap, we obtain earnings call audio recordings and textual transcripts for S&P 500 companies in recent years. We propose a multimodal deep regression model (MDRM) that jointly model CEO’s verbal (from text) and vocal (from audio) information in a conference call. Empirical results show that our model that jointly considers verbal and vocal features achieves significant and substantial prediction error reduction. We also discuss several interesting findings and the implications to financial markets. The processed earnings conference calls data (text and audio) are released for readers who are interested in reproducing the results or designing trading strategy. 1 Introduction Predicting financial risks of publicly traded companies is of great interest to capital market participants. In finance, stock price volatility, which is the standard deviation of a stock’s returns over a period of time, is often used as a measure of financial risks. Unlike directly predicting stock prices, it is uncontroversial in the field of economics that one can predict a stock’s volatility level using publicly available information (Bernard et al., 2007). Based on this assumption, a burgeoning body of ∗Corresponding author. research, both in finance and computational linguistics, has studied predicting stock volatility using various textual sources, including company disclosed reports (Kogan et al., 2009), public news articles (Tetlock, 2007), company earnings call transcripts (Wang and Hua, 2014), and social media (Ding et al., 2015). Thanks to technological advances, massive amounts of unstructured multimedia data, such as investor conference audio records and CEO public speech videos, have been archived and can be accessed by institutional and individual investors. Everything CEOs (or other executives) say will be closely examined and analyzed by investors. There are anecdotal evidences that CEO’s nonverbal features, such as emotions and voice tones, can also be used to reveal firm’s performance. For example, it has been reported that hedge fund companies hire ex-CIA agents trained in reading nonverbal cues to assess public statements by managers 1. 
While prior research in speech communication has reported that the vocal cues have the power to strengthen or weaken the verbal message, and vocal cues can reflect speaker’s affective states or emotion, little research has studied the interplay of verbal cues (language) and nonverbal cues (voice) and their impact on the financial markets. To fill the gap, we choose a novel multimodal learning setting of company earnings conference call. Earnings conference calls are the periodic conference calls company executives hold with outside investors and analysts to discuss financial results and answer questions raised by analysts. There are three reasons that we choose earnings conference calls as our research setting. First, almost all of the calls are webcast live, and they are later archived on company investor relation (IR) websites or third-party databases. Therefore, both audio and text modalities are available so that we 1MarketWatch website. From CIA to BIA: Spotting execs who bend the truth. Accessed: 2019-06-02 391 can align vocal cues with verbal cues in multimodal learning, and examine the interplay of both modalities and their impact on the financial markets. Secondly, company earnings announcements are one of biggest stock-moving events. If company reports an earning that does not meet analyst expectation or the CEO fails to address critical questions during the conference call, it often causes significant stock price moves, i.e. high volatility. Lastly, the audio recording and textual transcripts of company earnings conference calls are publicly accessible so interested readers can reproduce the results. In our work, we propose a stock volatility prediction pipeline using company earnings conference call audio and text data. We construct a unique dataset containing conference call audio and text data of S&P 500 companies in recent years. We then align each sentence in the call transcript with the corresponding audio recording clip. For the multimodal learning, we propose a Multimodal Deep Regression Model (MDRM). The MDRM model utilizes BiLSTM layer to extract context-dependent unimodal features, and subsequently fuses unimodal features together using another layer of BiLSTM to extract multimodal inter-dependencies for the regression task. We empirically demonstrates that MDRM models outperform other benchmark methods significantly and substantially. More importantly, the empirical results confirm that audio modality (vocal cues) help to improve volatility prediction accuracy and may reveal the fact that market participants listen to not only what CEOs say but also how CEOs say it. Our contributions can be summarized in two folds. First, we are among the first to study the impact of both verbal and vocal features on financial markets, specifically, stock volatility. Secondly, we empirically show that multimodal learning with audio and text can indeed reduce prediction error, compared to previous work that relies on text only. The interesting finding that vocal cues play a role in stock volatility is worth further exploring. In the next section, we briefly provide institutional background on earnings conference call and its impact on financial markets. In Section 3, we outline related work in financial text regression and multimodal learning. We then present our earnings conference call dataset and how data is processed in Section 4. In section 5, we introduce our multimodal learning framework that fuses verbal and vocal features in a deep model. 
Experiments results are presented in Section 6. Our experiment results show several interesting findings, which we discuss in Section 7. Finally, we conclude this paper in Section 8. 2 Earnings Conference Call and Post Earnings Announcement Drift (PEAD) Earnings calls are quarterly conference calls company executives hold with outside investors and analysts to discuss firm overall performance. An earnings call consists of two sections: an introduction section and a question-and-answer section. During the introduction section, executives such as CEOs and CFOs read forward-looking statements and provide their information and interpretation of their firms performance during the quarter. During the question-and-answer section, analysts have the opportunity to request managers to clarify information and solicit additional information that the management team does not disclose in the introduction section. The National Investor Relations Institute reports that 92% of companies conduct earnings calls. Institutional and individual investors listen to the earnings call and spot the tones of executives that portend good or bad news for the company. Company earnings conference call can often result in significant stock price moves. For example, Facebook’s stock price dropped over 20% during its nightmare earnings call (second quarter 2018) when the executives said the company expected a revenue growth slowdown in the years ahead. In finance and accounting research, Post Earnings Announcement Drift (PEAD) is a well documented phenomenon that a stock’s abnormal returns drift in the direction of an earnings surprise for several weeks following an earnings announcement (Ball and Brown, 1968; Bernard and Thomas, 1989). Moreover, the finance and accounting literature has shown that the stock price moves are largely due to the market reaction to the earnings announcement. The move is most significant during the earnings conference call when the executives start to take analysts questions. In our work, we focus on using executive’s verbal and nonverbal cues in conference calls to predict stock price volatility for days following the calls. 392 3 Related Work Our work is closely related with the following two lines of research: financial risk prediction with multimedia data: It is a received wisdom in economics and finance that one can predict a stock’s risk using historical information (Bernard et al., 2007). Various work has studied the problem of financial risk prediction using firm financial reports. A pioneer work (Kogan et al., 2009) shows that simple bagof-words features in firm annual report (Form 10Ks) combined with historical volatility can simply outperform statistical models that is built upon historical volatility only. Other work (Tsai and Wang, 2014; Nopp and Hanbury, 2015; Rekabsaz et al., 2017; Theil et al., 2018; Wang and Hua, 2014) also proposes different document representation methods to predict stock price volatility. To the best of our knowledge, none of existing NLP research on stock volatility prediction considers the usage of vocal features from audio data, especially the interplay between vocal and verbal features. In finance research, only two studies (Mayew and Venkatachalam, 2012; Hobson et al., 2012) have examined the executive voice in earnings calls. However, they extract CEO’s affective state from a blackbox third-party audio processing software, the validity of which has been seriously questioned (Lacerda, 2012). 
multimodal learning: Despite our financial domain, our approach is relevant to multimodal learning using text and audio. Recent studies on speech communication have shown that a speaker’s acoustic features, such as voice pitch, amplitude, and intensity, are highly correlated with the speaker’s emotion (Bachorowski, 1999), deception or trustworthiness(Sporer and Schwandt, 2006; Belin et al., 2017), anxiety (Laukka et al., 2008) and confidence or doubt (Jiang and Pell, 2017). Recently, multimodal learning has drawn attentions for different applications, such as sentiment analysis (Zadeh et al., 2016b,a; Poria et al., 2017; Luo et al., 2018), image caption generation (You et al., 2016), suicide risk detection (Scherer et al., 2016), crime drama understanding (Frermann et al., 2018) and human trafficking detection (Tong et al., 2017). To the best of our knowledge, this work presents the first multimodal deep learning model using text and audio features for a financial markets application. 4 Earnings Conference Calls Dataset In this section, we present dataset details. 4.1 Data Acquisition Conference call transcripts have been extensively studied in prior research. However, there is no existing conference call audio dataset. Therefore, we set up our S&P 500 Earnings Conference Calls dataset by acquiring audio records and text transcripts from the following two sources. Earnings Call Transcripts. The earnings call transcripts are obtained from the website Seeking Alpha2. The transcripts are well labeled, including the name of speaker (executives and analysts) and speech content. Earnings Call Audio. Given each transcript, we download corresponding audio recording from the website EarningsCast3. The downloaded audio data does not provide any segmentation or labeling for speakers. 4.2 Data Processing It is too coarse to extract audio features at the conference call transcript level, and it is also too difficult to segment audio recordings at word level. Therefore, we analyze each conference call at sentence level. That is, we want to represent a conference call as a sequence of sentences with corresponding audio clips. Since conference call normally lasts for about one hour, determining, for each sentence of the transcript, the time interval (in the audio file) containing the spoken text of the sentence is quite challenging. To tackle this challenge, we propose an Iterative Forced Alignment (IFA) algorithm to align each sentence of the transcript with the audio clip containing the spoken text of the sentence. Due to space limit, we present the details of IFA in Appendix. Furthermore, to avoid interference among different speakers, we select only the sentenece made by the most spoken executive (usually the CEO). After the forced alignment step, for each sentence in the conference call transcript, we obtain the sentence text as well as its corresponding audio clip4. 2https://seekingalpha.com/ 3https://earningscast.com/ 4It is worth noting that some third-party data provider companies provide human-annotated transcript text and audio recording alignment. In that case, text-audio forced alignment step may not be necessary. 393 Textual Features We use pre-trained word embeddings and calculate the arithmetic mean of word vector in each sentence as the sentence representation. We choose the embedding GloVe-300 (Pennington et al., 2014) pre-trained on Wikipedia and Gigaword 55. Therefore, each sentence is represented as a 300-dimension vector. 
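As a concrete illustration of the sentence-level text features just described, a minimal sketch of averaging GloVe-300 vectors is given below; the whitespace tokenizer and the plain-text GloVe file format are assumptions, since the paper does not spell them out.

```python
import numpy as np

def load_glove(path, dim=300):
    """Load GloVe vectors from the plain-text format: word v1 ... v300."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            if len(parts) == dim + 1:
                vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

def sentence_vector(sentence, glove, dim=300):
    """Arithmetic mean of the GloVe vectors of the in-vocabulary tokens,
    giving one 300-dimensional vector per sentence."""
    tokens = sentence.lower().split()
    vecs = [glove[t] for t in tokens if t in glove]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim, dtype=np.float32)
```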
Audio Features We use Praat (Boersma and Van Heuven, 2001) to extract vocal features, such as pitch, intensity, jitter, HNR(Harmonic to Noise Ratio) and etc, from audio recordings. A total of 27 vocal features are extracted by Praat. In summary, for each sentence in an earnings conference call, we generate a 300-dimension text vector and a 27-dimension audio vector to represent verbal and vocal features separately. Data Statistics We build our dataset by acquiring all S&P 500 companies’ quarterly earnings conference calls in 2017. We choose S&P 500 constituent firms as the target for volatility prediction for reasons of importance and tractability. Firms in the S&P 500 index encompass roughly three-quarters of the total U.S. market capitalization. A total of 2,243 earnings conference calls are downloaded from Seeking Alpha and EarningsCast. We discard conference calls which text-audio alignment is not done properly, using the abovementioned data processing method. The final dataset consists of 576 conference calls, with a total number of 88,829 sentences. It can be seen that we discard a large proportion of raw data because the audio-text alignment is very noisy and is prone to errors. We release our processed earnings conference calls dataset6 (text and audio) for readers who are interested in reproducing the results. 5 Model We formalize the problem as a supervised machine learning task. The input data is a company’s earnings conference call verbal (textual) features and corresponding vocal (audio) features; This is mapped to a numerical variable which is the company’s stock price volatility following the conference call. Prior research (Kogan et al., 2009; Rekabsaz et al., 2017) uses only shallow machine learning model (such as logistic regression) and bag-of5https://nlp.stanford.edu/projects/glove/ 6Our dataset is available at https://github.com/ GeminiLn/EarningsCall_Dataset word features to represent financial documents. In other words, the relation and dependencies among the sentences are largely ignored. However, every sentence in a conference call is spoken at a distinct time and in a particular order. Therefore, it is better to treat a conference call as a sequence of sentences. To this end, like other sequence classification problems, we choose to use a recurrent neural network to capture the sentences relation and dependency. When multimodal verbal and vocal features are available, it is also important to capture the dependency between different modalities, as the vocal cues either affirm or discredit the verbal message. For example, if a CEO says “we are confident about the future product sales” with a voice that is different from the CEO’s base vocal cues, such as increased pitch or pauses, we may infer that the CEO is not as confident as he claims. In fact, existing research (Jiang and Pell, 2017) in speech communication has shown that voice (vocal cues) plays a critical role in verbal communication. If we ignore the voice patterns that are accompanied with the verbal language, we may misinterpret the CEO’s statement. Especially in financial markets where CEO’s word and voice are closely examined by professional analysts and investors, it is plausible that market reacts to both verbal and vocal signals. Therefore, we present a deep model to capture context-dependent unimodal features and fuse multimodal features for the regression task. 
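Before describing the model, a brief illustration of the vocal feature extraction mentioned under Audio Features above: the sketch below pulls a handful of Praat-style features (mean pitch, pitch variability, intensity, number of pulses, jitter, HNR) from one aligned audio clip. It uses the parselmouth Python bindings to Praat, which is an assumption — the paper drives Praat directly and does not list its exact 27 features.

```python
import numpy as np
import parselmouth                     # Python bindings to Praat (an assumption)
from parselmouth.praat import call

def vocal_features(wav_path):
    """Extract a few Praat-style vocal features for one audio clip."""
    snd = parselmouth.Sound(wav_path)

    pitch = snd.to_pitch()
    f0 = pitch.selected_array["frequency"]
    f0 = f0[f0 > 0]                    # keep voiced frames only

    intensity = snd.to_intensity()

    pulses = call(snd, "To PointProcess (periodic, cc)", 75, 600)
    jitter_local = call(pulses, "Get jitter (local)", 0, 0, 0.0001, 0.02, 1.3)

    harmonicity = snd.to_harmonicity()
    mean_hnr = call(harmonicity, "Get mean", 0, 0)

    return np.array([
        f0.mean() if f0.size else 0.0,                  # mean pitch (Hz)
        f0.std() if f0.size else 0.0,                   # pitch standard deviation
        float(intensity.values.mean()),                 # average intensity (dB)
        float(call(pulses, "Get number of points")),    # number of pulses
        float(jitter_local),                            # local jitter
        float(mean_hnr),                                # mean harmonics-to-noise ratio
    ])
```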
The high-level idea behind the design is to use a contextual BiLSTM to extract context-dependent unimodal features separately for each sentence, and then use another BiLSTM to fuse the modalities and extract the inter-dependencies between them. The details of our model are described below.

5.1 Notations

We first introduce our notation. Let M be the total number of conference call transcripts, and let the longest transcript have N sentences. We denote by X_j the j-th conference call, where 1 ≤ j ≤ M. In our multimodal setting, X_j = [T_j; A_j]. T_j is an N × d_t matrix that represents the sentence embeddings of the call transcript, where N is the number of sentences in a document (documents with fewer than N sentences are zero-padded to length N for consistency) and d_t is the dimension of the word embeddings. A_j is an N × d_a matrix that represents the vocal features extracted from the earnings call audio, where d_a is the dimension of the audio features. y_j and ŷ_j denote the true and predicted stock volatility corresponding to the j-th conference call.

5.2 Multimodal Deep Regression Model

Our multimodal deep regression model (MDRM) includes two components. The first component is a contextual BiLSTM that extracts unimodal features for either the text or the audio modality; it captures the relationships and dependencies within each unimodal input. In the second component, the extracted multimodal (text and audio) features are combined and fed into a BiLSTM with a fully-connected layer, which extracts the inter-dependencies between the text and audio modalities.

5.2.1 Extracting Unimodal Features with Contextual BiLSTM

The contextual LSTM was proposed by Poria et al. (2017) to analyze emotion in video using text, speech and video frames. It connects dense layers and a softmax output to each LSTM unit; in implementations, this architecture is often referred to as a time-distributed dense layer. This structure preserves the latent time sequence in the data while making sentiment classifications at the utterance level. In our contextual LSTM, we choose BiLSTM as the underlying LSTM architecture because of its strong performance in past work (Poria et al., 2017). BiLSTM is the bidirectional LSTM (Hochreiter and Schmidhuber, 1997), an extension of the recurrent neural network (RNN). Specifically, the LSTM is designed to retain key information from time-series data while overcoming the tendency of traditional RNNs to lose information over long sequences; BiLSTM extends the LSTM by considering not only the forward but also the backward information flow, and this bidirectional transmission significantly improves the model's predictive power. The contextual BiLSTM is constructed by the following formulas (forward direction only):

f_j = \sigma_g(W_f x_j + U_f h_{j-1} + b_f)
i_j = \sigma_g(W_i x_j + U_i h_{j-1} + b_i)
o_j = \sigma_g(W_o x_j + U_o h_{j-1} + b_o)
c_j = f_j \circ c_{j-1} + i_j \circ \sigma_c(W_c x_j + U_c h_{j-1} + b_c)
h_j = o_j \circ \sigma_h(c_j)
Z_j = \mathrm{ReLU}(W_z h_j + b_z)

In the above formulas, x_j denotes the j-th input feature vector, i.e., the textual or audio features of the j-th sentence. f_j, i_j, and o_j are the standard forget, input and output gates. The W, U and b terms are trainable parameters, and all of the quantities above are used to generate the hidden state h_j and cell state c_j. Z_j in the last formula is the output of the time-distributed dense layer connected to the j-th LSTM unit.
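Since the model is built in Keras (Section 6.2), a compact sketch of the two-stage architecture is given below: stage 1 mirrors the h_j / Z_j equations above with a masked BiLSTM plus a time-distributed ReLU dense layer per modality, and stage 2 (described in Section 5.2.2) fuses the two feature sequences with another BiLSTM and a two-layer network. The hidden sizes and the exact placement of dropout are assumptions, as the paper does not report them.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Dimensions from the paper; the hidden sizes below are assumptions.
N, D_TEXT, D_AUDIO = 520, 300, 27
HIDDEN, DENSE = 100, 100

def contextual_bilstm(input_dim, dropout_rate, name):
    """Stage 1: per-modality BiLSTM with a time-distributed dense (ReLU) layer,
    mirroring the h_j / Z_j formulas; zero-padded sentences are masked out."""
    inputs = keras.Input(shape=(N, input_dim), name=f"{name}_input")
    x = layers.Masking(mask_value=0.0)(inputs)
    x = layers.Bidirectional(layers.LSTM(HIDDEN, return_sequences=True))(x)
    x = layers.Dropout(dropout_rate)(x)          # 0.8 text / 0.5 audio, per Sec. 6.2
    z = layers.TimeDistributed(layers.Dense(DENSE, activation="relu"))(x)
    return inputs, z

text_in, text_z = contextual_bilstm(D_TEXT, 0.8, "text")
audio_in, audio_z = contextual_bilstm(D_AUDIO, 0.5, "audio")

# Stage 2: concatenate the unimodal features sentence by sentence, fuse them with
# a BiLSTM, and regress the volatility with a two-layer fully-connected network.
merged = layers.Concatenate(axis=-1)([text_z, audio_z])
fused = layers.Bidirectional(layers.LSTM(HIDDEN))(merged)
hidden = layers.Dense(DENSE, activation="relu")(fused)
volatility = layers.Dense(1, activation="linear")(hidden)

mdrm = keras.Model(inputs=[text_in, audio_in], outputs=volatility)
mdrm.compile(optimizer="sgd", loss="mse")        # SGD and MSE loss, per Sec. 6.2
```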
Compared with Poria et al. (2017), we remove the softmax output on each LSTM unit, since our regression is applied at the document level instead of the utterance level. The dense-layer outputs form a new time-sequence feature that is used in the next stage.

5.2.2 Hierarchical Fusion of Unimodal Features

The hierarchical fusion of unimodal features is carried out by our multimodal deep regression model; Figure 1 illustrates the overall process, which consists of two stages.

Figure 1: The proposed Multimodal Deep Regression Model (MDRM). The inputs to the model are a company's conference call audio file and the corresponding transcript; each conference call consists of N sentences. The output variable is a numerical value, i.e., the company's stock price volatility following the conference call.

Stage 1. The inputs T and A are represented by the matrices on the left of Figure 1. Matrix T is 520 × 300 and matrix A is 520 × 27, where 520 is the document length and 300 and 27 are the dimensions of the textual and audio features, respectively. The matrices are fed into the contextual BiLSTMs through a mask layer that screens out the effect of zero-padding. As described in Section 5.2.1, the contextual BiLSTM extracts unimodal features for each matrix separately while keeping the original chronological order. After extraction, the unimodal features are still organized at the sentence level, so they can be concatenated horizontally into the merged features shown in the middle of Figure 1.

Stage 2. The merged features are then fed into a BiLSTM connected to a two-layer neural network. Specifically, we deliberately avoid reusing the full architecture of Poria et al. (2017) here. Unlike video emotion classification, the regression problem in our study is document-level, which means that we do not make a prediction for each utterance. A contextual BiLSTM is therefore not suitable for stage 2, since the features have already been extracted at a high level. Instead, stage 2 uses a BiLSTM connected to a two-layer neural network to complete the regression. The effectiveness of this concise structure is demonstrated in the experiment results section.

6 Experiment Setup

The stock volatility prediction problem is formulated following Kogan et al. (2009). The volatility is defined as:

v_{[t-\tau, t]} = \ln\left(\sqrt{\frac{\sum_{i=0}^{\tau}(r_{t-i} - \bar{r})^2}{\tau}}\right)    (1)

where r_t is the return on day t and \bar{r} is the mean return over the period from day t − τ to day t. The return is defined as r_t = \frac{P_t}{P_{t-1}} - 1, where P_t is the closing price on day t. We choose different values of τ, namely 3, 7, 15 and 30 calendar days, to evaluate the short-term and long-term effectiveness of volatility prediction. We obtain daily stock prices for 2017 (dividend-adjusted) from the CRSP database. We report performance using the Mean Squared Error (MSE) between the predicted and true volatility:

MSE = \frac{1}{M'} \sum_{i=1}^{M'} \left(f(X'_i) - y'_i\right)^2    (2)

where M' is the size of the test set and y'_i is the true volatility associated with test example X'_i.

6.1 Baselines

We consider several stock volatility prediction baselines, as described below.

Past Volatility. It is often reported in prior research that past volatility is a strong predictor of future volatility.
Thus we consider using the volatility over the τ days preceding the conference call to predict the τ-day volatility following the call. We call this baseline v_past.

tf-idf bag-of-words. This baseline follows Kogan et al. (2009); the feature value is the classic tf-idf score. Term frequency (tf) is calculated as

TF = \frac{n_{i,j}}{\sum_k n_{k,j}},

and inverse document frequency (idf) is calculated as

IDF = \log\left(\frac{|d|}{1 + df(t)}\right),

where n_{i,j} is the frequency of term t_i in document d_j, \sum_k n_{k,j} is the total count of all terms appearing in document d_j, |d| is the total number of documents, and df(t) is the number of documents that contain term t_i.

word embeddings. Each transcript is represented as a weighted average of word embeddings. In our experiment, we use pre-trained GloVe-300 word embeddings. This document representation has been shown to be a simple yet effective method (Arora et al., 2017), and the baseline helps us evaluate the effectiveness of the proposed deep model. We also experimented with the pre-trained GloVe-50 and GloVe-100 embeddings but found GloVe-300 to perform best among them; we therefore use GloVe-300 as the input word embeddings throughout our experiments.

For the two baselines above, tf-idf bag-of-words and word embeddings, given the conference call transcript representations, we apply Support Vector Regression (SVR) (Drucker et al., 1997) with a Radial Basis Function (RBF) kernel to predict the stock volatility y_i, following previous studies (Kogan et al., 2009; Rekabsaz et al., 2017; Tsai and Wang, 2014).

We also consider two multimodal learning baselines that fuse both audio and textual features.

simple fusion. This baseline uses a simple shallow model to fuse the different modalities: the audio and text features are fed jointly into an SVR as input. Using this baseline, we can compare the effectiveness of a deep multimodal model with a shallow multimodal model.

bc-LSTM. This is a state-of-the-art multimodal learning model proposed by Poria et al. (2017), who present a bidirectional contextual LSTM (bc-LSTM) framework for fusing multimodal features including audio, video and text. We replicate their deep model as a direct baseline.

For our multimodal deep regression model (MDRM), we evaluate three different scenarios: text only, audio only, and text+audio (both modalities available).

6.2 Training Setup

Our deep model is built and trained with Keras (https://keras.io/). We apply backpropagation with stochastic gradient descent during training, and we choose the mean squared error as the loss function. We use a linear activation for the final regression layer and the ReLU activation function for the remaining layers. During the experiments, we found that training with the audio data is more prone to overfitting, so we add dropout to the model: in the first stage, we set the dropout rate to 0.5 for the audio contextual BiLSTM and 0.8 for the text contextual BiLSTM; in the second stage, we use no dropout layer. For model evaluation, randomly splitting the dataset into training/validation/test sets is not reasonable, since we should not use later conference calls to predict earlier stock volatilities. We therefore use the chronologically first 80% of the data for training and the remaining 20% for testing.

7 Experiment Results and Discussion

Predicting stock volatility is a rather challenging task given the noisiness of stock markets. Following prior research, we report volatility numbers to three decimal places. The main experiment results are shown in Table 1.
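Before turning to the results, a short sketch of how the prediction target in Equation (1) and the v_past baseline can be computed from a daily price series; the pandas-based price format is an assumption.

```python
import numpy as np
import pandas as pd

def log_volatility(prices: pd.Series, tau: int) -> float:
    """Equation (1): log of the standard deviation of daily returns over a tau-day window.
    `prices` holds dividend-adjusted closing prices, one per trading day, oldest first."""
    returns = prices.pct_change().dropna()       # r_t = P_t / P_{t-1} - 1
    window = returns.iloc[-(tau + 1):]           # r_{t-tau}, ..., r_t
    return float(np.log(np.sqrt(((window - window.mean()) ** 2).sum() / tau)))

def v_past_baseline(prices_before: pd.Series, prices_after: pd.Series, tau: int):
    """The v_past baseline: use the tau-day volatility before the call as the
    prediction of the tau-day volatility after the call; also return the squared error."""
    prediction = log_volatility(prices_before, tau)
    target = log_volatility(prices_after, tau)
    return prediction, (prediction - target) ** 2
```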
Table 1: MSE of different models on stock volatility prediction τ days following the conference call. The asterisks denote statistical significance compared to the MDRM (text only) results under a one-tailed t-test (*** for p ≤ 0.001 and ** for p ≤ 0.01).

                                            τ=3       τ=7       τ=15     τ=30
v_past                                      2.986     0.826     0.420    0.231
tf-idf bag-of-words                         1.695     0.498     0.342    0.249
word embeddings                             1.667     0.549     0.345    0.275
simple fusion                               1.722     0.501     0.307    0.233
bc-LSTM (text+audio) (Poria et al., 2017)   1.418     0.436     0.304    0.219
MDRM (text only)                            1.431     0.439     0.309    0.219
MDRM (audio only)                           1.412     0.440     0.315    0.224
MDRM (text+audio)                           1.371***  0.420***  0.300**  0.217

We now discuss the experiment results and several interesting findings, as well as their implications for the stock markets.

Multimodal Deep Regression Model is Effective. The results show that our multimodal deep regression model (MDRM) outperforms all baselines. Using both text and audio data, the model has prediction errors of 1.371, 0.420, 0.300 and 0.217 for the 3, 7, 15 and 30 days following the conference call, respectively. Compared with using past volatility only, the improvement is as substantial as 54.1% for the 3-day prediction. The improvements over the other baseline methods are 19.1% (tf-idf bag-of-words), 17.8% (word embeddings) and 20.4% (simple fusion) for the 3-day prediction. Compared with the state-of-the-art baseline bc-LSTM (Poria et al., 2017), MDRM also achieves a 3.3% error reduction for the 3-day prediction. It is worth emphasizing the substantial improvement over the simple fusion model: as argued in our design motivation, verbal and vocal features should be modeled jointly, since vocal cues either affirm or discredit the verbal message in public communication, and our deep regression model captures the interplay of both modalities in a way that a simple feature fusion model cannot.

Both modalities are helpful. We can also conclude from the results that multimodal features are more helpful than unimodal features (either text or audio) alone. When we predict the stock volatility 3 days after the conference call, the multimodal model (1.371) outperforms the unimodal models (1.431) by 4.2%. As shown in Table 1, MDRM (text+audio) significantly outperforms MDRM (text only) and MDRM (audio only) for 3-day, 7-day and 15-day stock volatility prediction. The improvement is not statistically significant for the 30-day prediction, and we discuss the possible reasons later. In addition to reducing the prediction error, fusing both modalities mitigates a potential overfitting problem. We find that training a deep LSTM network with audio data only results in overfitting very quickly; in our experiments, the audio-only deep network shows a trend of overfitting within 10 epochs. Therefore, the result that audio-only MDRM performs better than text-only MDRM (1.412 vs. 1.431) needs careful interpretation, as we have to stop the audio-only model training early to prevent overfitting. Using both audio and text features, the model usually converges within 20 epochs without overfitting.

Some Individual Vocal Features are Important. We also design another experiment to investigate the importance of different vocal features, examining whether leaving out individual vocal features affects the prediction results. Following prior research (Jiang and Pell, 2017), we select five representative vocal features: mean pitch, standard deviation of pitch, mean intensity, number of pulses and mean HNR (Harmonic-to-Noise Ratio).
Our experiments show that without the mean pitch feature, the MSE of our model increases by 0.7%. Leaving out the standard deviation of pitch raises MSE by 0.65%, and leaving out mean intensity or the number of pulses increases MSE by 0.63% and 0.56%, respectively. MSE does not change when mean HNR is left out. This finding is consistent with prior research in speech communication showing that pitch and intensity are important features for detecting a speaker's confidence and doubt.

Short-term Volatility Prediction is Hard. Our prediction results consistently show that short-term volatility prediction error is much greater than long-term prediction error. For example, the 3-day prediction MSE of MDRM is 1.371, while the 30-day MSE is 0.217, and the gain of MDRM over the past-volatility baseline vpast diminishes from 54% (τ = 3) to 6% (τ = 30). In other words, short-term volatility prediction is much more difficult than long-term prediction. This phenomenon has been extensively documented in the finance and accounting literature as post-earnings-announcement drift (PEAD): research (Ball and Brown, 1968; Bernard and Thomas, 1989) has shown that the stock price moves more significantly (is more volatile) over a short period (several trading days) following the conference call than over a longer period (weeks to months). Even though the absolute MSE is higher in the short term, the 54% improvement over the past-volatility baseline is still encouraging, because any information that helps form realistic estimates of volatility can be invaluable to capital market participants.

Marginal Gain over Simple Models is Diminishing in the Long Term. Our results also consistently show that complex deep models such as bc-LSTM (Poria et al., 2017) and our proposed deep regression model outperform shallow models (such as SVR) by a large margin in short-term prediction (τ = 3 or 7). However, the margin becomes smaller for relatively long-term prediction (τ = 15 or 30). For example, compared with the tf-idf bag-of-words model at τ = 3, MDRM reduces prediction error by 19.1% (1.371 vs. 1.695), whereas at τ = 30 the error reduction is 12.8% (0.217 vs. 0.249). This is also reflected in the fact that at τ = 30 the MSE of the past-volatility baseline is as small as 0.231, which is better than the tf-idf bag-of-words model and only slightly worse than MDRM. In other words, the benefit of a complex deep model is smaller for long-term than for short-term volatility prediction. This phenomenon can be explained by the efficient-market hypothesis (EMH), a theory in financial economics stating that stock prices react only to new information, so prices cannot be predicted from historical information alone. As we target a longer time horizon, the predictive power of information from previous conference calls therefore becomes less significant.
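For clarity, the percentage gains quoted in this section are relative MSE reductions computed from Table 1 (our reading of the numbers). For example, the 54.1% gain of MDRM (text+audio) over vpast at τ = 3 follows from

$\frac{\mathrm{MSE}_{vpast} - \mathrm{MSE}_{\mathrm{MDRM}}}{\mathrm{MSE}_{vpast}} = \frac{2.986 - 1.371}{2.986} \approx 54.1\%$,

and analogously $(1.695 - 1.371)/1.695 \approx 19.1\%$ for the gain over tf-idf bag-of-words.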
7.1 Case Study: AMD Conference Call, First Quarter 2017

We conduct a case study to further investigate the validity of multimodal learning for stock volatility prediction. The case study is based on the earnings conference call of AMD (Advanced Micro Devices, Inc.) for the first quarter of 2017, and we use it to explain qualitatively why multimodal features are more helpful than unimodal text features. May 1, 2017 was a bad day for AMD investors: after the company's earnings conference call, the stock price dropped by 16.1% in the post-market session, and the stock remained very volatile for the next few days. We analyze the conference call transcript together with the corresponding audio recording of the company's Chief Executive Officer (CEO), Dr. Lisa T. Su. Figure 2 illustrates the inconsistencies between the CEO's verbal cues and her vocal cues.

[Figure 2: The change of mean pitch around specific sentences. The x-axis is the sentence index relative to the target sentence (sentence 0 corresponds to the Case 1 and Case 2 sentences described in the text); the y-axis is mean pitch.]

We observe a significant increase in mean pitch while the CEO is saying "Overall, from a performance standpoint, the product and the customer engagements are going as we would expect" (Case 1). While the language is positive, the mean pitch of the CEO's voice rises 20% above her average mean pitch (203.39 Hz) and above the mean pitch of nearby sentences. According to prior acoustic research (Jiang and Pell, 2017), a high mean pitch may correlate with a speaker not being confident about what he or she is saying. A similar inconsistency occurs when the CEO says "We have more memory bandwidth" (Case 2). After the earnings conference call, it turned out that AMD's revenue actually missed the analyst expectation by $0.38M. Thus, the positive words in the CEO's language were not as credible as they sounded. Using unimodal text data only, we may miss this inconsistency between verbal and vocal cues; the multimodal learning model, in contrast, can capture the interdependency between multimodal features and better predict market reactions to earnings conference calls.

8 Conclusion

Predicting the financial risk of publicly traded companies is an essential task in financial markets. In this work, we have demonstrated that a CEO's language and voice in company earnings conference calls can be used to predict the company's financial risk level, as measured by stock price volatility in the days following the conference call. We propose a BiLSTM-based multimodal deep regression model that extracts and fuses multimodal features from text transcripts and audio recordings. Although our work is an application in the financial domain, we hope our multimodal learning model can also be useful in other areas (such as social media and customer service) where multimodal data are available.

Acknowledgments

This work was supported by the Theme-based Research Scheme (No. T31-604/18-N) from the Research Grants Council in Hong Kong, and the National Natural Science Foundation of China (Grant Nos. 71771212 and U1711262). We thank the anonymous reviewers for helpful comments. Any opinions, findings, conclusions, or recommendations expressed here are those of the authors and do not necessarily reflect the view of the sponsors.

References

Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2017. A simple but tough-to-beat baseline for sentence embeddings. In Proceedings of ICLR.

Jo-Anne Bachorowski. 1999. Vocal expression and perception of emotion. Current Directions in Psychological Science, 8(2):53–57.

Ray Ball and Philip Brown. 1968. An empirical evaluation of accounting income numbers. Journal of Accounting Research, pages 159–178.

Pascal Belin, Bibi Boehme, and Phil McAleer. 2017. The sound of trustworthiness: Acoustic-based modulation of perceived voice personality. PLoS ONE, 12(10):e0185651.
Victor L. Bernard and Jacob K. Thomas. 1989. Post-earnings-announcement drift: Delayed price response or risk premium? Journal of Accounting Research, 27:1–36.

Paul Boersma and Vincent van Heuven. 2001. Speak and unspeak with Praat. Glot International, 5(9/10):341–347.

Xiao Ding, Yue Zhang, Ting Liu, and Junwen Duan. 2015. Deep learning for event-driven stock prediction. In Proceedings of IJCAI, pages 2327–2333.

Harris Drucker, Christopher J. C. Burges, Linda Kaufman, Alex J. Smola, and Vladimir Vapnik. 1997. Support vector regression machines. In Proceedings of NIPS, pages 155–161.

Bernard Dumas, Alexander Kurshev, and Raman Uppal. 2007. Equilibrium portfolio strategies in the presence of sentiment risk and excess volatility. Working Paper 13401, National Bureau of Economic Research.

Lea Frermann, Shay B. Cohen, and Mirella Lapata. 2018. Whodunnit? Crime drama as a case for natural language understanding. Transactions of the Association for Computational Linguistics, 6:1–15.

Jessen L. Hobson, William J. Mayew, and Mohan Venkatachalam. 2012. Analyzing speech to detect financial misreporting. Journal of Accounting Research, 50(2):349–392.

Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780.

Xiaoming Jiang and Marc D. Pell. 2017. The sound of confidence and doubt. Speech Communication, 88:106–126.

Shimon Kogan, Dimitry Levin, Bryan R. Routledge, Jacob S. Sagi, and Noah A. Smith. 2009. Predicting risk from financial reports with regression. In Proceedings of NAACL, pages 272–280.

Francisco Lacerda. 2012. Money talks: The power of voice. A critical review of Mayew and Venkatachalam's "The power of voice: Managerial affective states and future firm performance". PERILUS, pages 1–10.

Petri Laukka, Clas Linnman, Fredrik Åhs, Anna Pissiota, Örjan Frans, Vanda Faria, Åsa Michelgård, Lieuwe Appel, Mats Fredrikson, and Tomas Furmark. 2008. In a nervous voice: Acoustic analysis and perception of anxiety in social phobics' speech. Journal of Nonverbal Behavior, 32(4):195.

Ziqian Luo, Hua Xu, and Feiyang Chen. 2018. Utterance-based audio sentiment analysis learned by a parallel combination of CNN and LSTM. arXiv preprint arXiv:1811.08065.

William J. Mayew and Mohan Venkatachalam. 2012. The power of voice: Managerial affective states and future firm performance. The Journal of Finance, 67(1):1–43.

Pedro J. Moreno, Chris Joerg, Jean-Manuel Van Thong, and Oren Glickman. 1998. A recursive algorithm for the forced alignment of very long audio segments. In Proceedings of ICSLP.

Clemens Nopp and Allan Hanbury. 2015. Detecting risks in the banking system by sentiment analysis. In Proceedings of EMNLP, pages 591–600, Lisbon, Portugal.

Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of EMNLP, pages 1532–1543.

Soujanya Poria, Erik Cambria, Devamanyu Hazarika, Navonil Majumder, Amir Zadeh, and Louis-Philippe Morency. 2017. Context-dependent sentiment analysis in user-generated videos. In Proceedings of ACL, volume 1, pages 873–883.

Navid Rekabsaz, Mihai Lupu, Artem Baklanov, Allan Hanbury, Alexander Duer, and Linda Anderson. 2017. Volatility prediction using financial disclosures sentiments with word embedding-based IR models. In Proceedings of ACL, pages 1712–1721.

Stefan Scherer, Gale M. Lucas, Jonathan Gratch, Albert Skip Rizzo, and Louis-Philippe Morency. 2016. Self-reported symptoms of depression and PTSD are associated with reduced vowel space in screening interviews. IEEE Transactions on Affective Computing, 7(1):59–73.
Siegfried Ludwig Sporer and Barbara Schwandt. 2006. Paraverbal indicators of deception: A meta-analytic synthesis. Applied Cognitive Psychology, 20(4):421–446.

Paul C. Tetlock. 2007. Giving content to investor sentiment: The role of media in the stock market. Journal of Finance, 62(3):1139–1168.

Christoph Kilian Theil, Sanja Stajner, and Heiner Stuckenschmidt. 2018. Word embeddings-based uncertainty detection in financial disclosures. In Proceedings of the First Workshop on Economics and Natural Language Processing, pages 32–37.

Edmund Tong, Amir Zadeh, Cara Jones, and Louis-Philippe Morency. 2017. Combating human trafficking with multimodal deep models. In Proceedings of ACL, pages 1547–1556.

Ming-Feng Tsai and Chuan-Ju Wang. 2014. Financial keyword expansion via continuous word vector representations. In Proceedings of EMNLP, pages 1453–1458.

William Yang Wang and Zhenhao Hua. 2014. A semiparametric Gaussian copula regression model for predicting financial risks from earnings calls. In Proceedings of ACL, volume 1, pages 1155–1165.

Quanzeng You, Hailin Jin, Zhaowen Wang, Chen Fang, and Jiebo Luo. 2016. Image captioning with semantic attention. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4651–4659.

Amir Zadeh, Rowan Zellers, Eli Pincus, and Louis-Philippe Morency. 2016a. MOSI: Multimodal corpus of sentiment intensity and subjectivity analysis in online opinion videos. arXiv preprint arXiv:1606.06259.

Amir Zadeh, Rowan Zellers, Eli Pincus, and Louis-Philippe Morency. 2016b. Multimodal sentiment intensity analysis in videos: Facial gestures and verbal messages. IEEE Intelligent Systems, 31(6):82–88.

A Appendices

In this appendix, we present details of our text and audio forced alignment method. Given an audio file containing speech and the corresponding transcript, forced alignment is the process of determining, for each fragment of the transcript, the time interval in the audio file that contains the spoken text. In our setting, we need to match each speaker's speech with the corresponding spoken text in an earnings conference call. However, an earnings conference call normally lasts about one hour or longer, so aligning audio clips with the corresponding text is quite challenging. To this end, we propose an Iterative Forced Alignment (IFA) algorithm to improve the alignment results on our dataset. The IFA method is inspired by work in spoken language processing (Moreno et al., 1998). We implement IFA in Python on top of a standard forced alignment tool, using Aeneas (https://github.com/readbeyond/aeneas) as the underlying aligner. Algorithm 1 shows the overall procedure.

Algorithm 1 Iterative Forced Alignment
 1: function Alignment(a_i, t_i, s_i)
 2:   if Length(a_i) = 0 then
 3:     return True
 4:   end if
 5:   if Length(a_i) != 0 then
 6:     result ← Aeneas(a_i, t_i)
 7:     speaker ← LastSpeaker(s_i)
 8:     slice_{a,t} ← LastParagraph(a_i, t_i)
 9:     s_i ← CutLastSpeaker(s_i)
10:     a_i, t_i ← CutLastParagraph(a_i, t_i)
11:     Save slice_{a,t} as files
12:     return False
13:   end if
14: end function
15: function IterativeSegmentation
16:   for i = 0 → M do        ▷ M is the number of calls
17:     a_i, t_i ← Audio_i, Transcript_i
18:     s_i ← SpeechSequence_i
19:     while result != True do
20:       result ← Alignment(a_i, t_i, s_i)
21:     end while
22:   end for
23: end function

During our experiments, we find that forced alignment performs well at the beginning and end of the whole document.
In the middle parts, the alignment result may be affected by words with short syllables, fast switching between speakers, or omissions in the text record. We therefore adopt an iterative segmentation strategy: instead of aligning the whole document and then segmenting it according to the alignment result, IFA segments only the last paragraph at each step, since the last paragraph is the most likely to be aligned precisely. After segmenting the last paragraph, IFA restarts the forced alignment on the remaining audio and text, produces a new alignment result, and again segments the last paragraph, until the document is fully processed. We randomly select 200 earnings conference calls to test the effectiveness of IFA. As shown in Table 2, adopting IFA improves segmentation accuracy and reduces errors significantly.

              Match                 Not Match
              Begin   End   Total   Begin   End   Total
Iterative      63      60    123     37      40      77
One-Time       33      22     55     67      78     145

Table 2: Comparison of iterative segmentation (IFA) and one-time segmentation on 200 randomly selected earnings conference calls.

To obtain correctly segmented earnings conference calls automatically, we run both IFA and one-time segmentation on the remaining data and select the correctly segmented calls by comparing the results of the two methods: if the difference between the two segmentation results for a document is small, we mark that document as correctly segmented. By adopting IFA on our dataset, we address the problem of segmenting long, noisy audio in an effective way. Since there is no widely recognized practical method for this problem, our approach may be useful to researchers interested in processing and analyzing long audio, not only in financial materials analysis but also in other areas such as social media analysis and emotion recognition.
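To make the iterative procedure concrete, the following is a minimal Python sketch of the IFA loop in Algorithm 1. The forced_align, last_paragraph, and save_slice helpers are hypothetical placeholders standing in for the underlying aligner (Aeneas in our setup); they are not actual Aeneas API calls.

```python
# Hedged sketch of the Iterative Forced Alignment (IFA) loop.
# The helper functions are hypothetical placeholders, not real Aeneas calls.
from typing import List, Tuple

def forced_align(audio: bytes, transcript: List[str]) -> list:
    """Placeholder: run a forced aligner and return per-paragraph timings."""
    raise NotImplementedError

def last_paragraph(audio: bytes, transcript: List[str], alignment: list
                   ) -> Tuple[bytes, str, bytes, List[str]]:
    """Placeholder: split off the last paragraph (audio slice + text) and
    return it together with the remaining audio and transcript."""
    raise NotImplementedError

def save_slice(audio_slice: bytes, text_slice: str, speaker: str) -> None:
    """Placeholder: write the aligned slice to disk, tagged with its speaker."""
    raise NotImplementedError

def iterative_forced_alignment(audio: bytes, transcript: List[str],
                               speakers: List[str]) -> None:
    """Repeatedly align, peel off the last paragraph, and realign the rest."""
    while transcript:                # stop when everything is segmented
        alignment = forced_align(audio, transcript)
        speaker = speakers.pop()     # speaker of the last paragraph
        audio_slice, text_slice, audio, transcript = last_paragraph(
            audio, transcript, alignment)
        save_slice(audio_slice, text_slice, speaker)
```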